Robot Says No To Humans

Tufts University is researching robots that disobey human commands when those commands would harm the robot itself or nearby human beings. The researchers have found a way to reduce that decision to a set of simple logical arguments a robot can run before saying "NO."

It may sound obvious that robots should follow human commands at all times. For robots wielding tools that are dangerous to humans, such as on car production lines, the machines clearly must follow a set of protocols programmed into their chipsets. Researchers in Massachusetts, however, are trying something new: they are teaching robots to disobey some instructions.

Clever robots are on the rise, and developers are programming these machines to make decisions; some robots are even teaching nowadays. The tricky part is how a robot should overrule an order when a human command endangers other robots or human lives.

The Human-Robot Interaction Laboratory at Tufts University has come up with a strategy for rejecting human commands intelligently. The strategy mirrors the way a human brain processes a verbal command before deciding whether to carry it out. The approach comes with a long list of ethics and trust checks that can be installed in a robot's software: in effect, a simplified inner human monologue converted into a set of logical arguments the robot can evaluate.
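The article does not show the Tufts lab's actual software, so the sketch below is only a rough illustration of what such a chain of checks might look like. All names, fields, and conditions are invented for the example and are not the lab's real system.

```python
# Illustrative sketch only: a hypothetical chain of checks a robot might run
# before accepting a spoken command. Names and conditions are invented and do
# not reflect the Tufts Human-Robot Interaction Laboratory's actual software.

from dataclasses import dataclass


@dataclass
class Command:
    action: str            # e.g. "walk through the wall"
    issuer_trusted: bool    # does the robot trust the person giving the order?
    harms_human: bool       # would carrying it out endanger a human?
    harms_robot: bool       # would carrying it out damage the robot itself?
    is_capable: bool        # does the robot know how, and is it physically able?


def evaluate(cmd: Command) -> str:
    """Return "OK" to act, or a spoken-style refusal explaining why not."""
    if not cmd.is_capable:
        return f"NO: I don't know how to {cmd.action}, or I am unable to."
    if cmd.harms_human:
        return f"NO: doing '{cmd.action}' could hurt someone."
    if cmd.harms_robot and not cmd.issuer_trusted:
        return f"NO: '{cmd.action}' is dangerous to me and I cannot trust this order."
    return "OK"


if __name__ == "__main__":
    # The article's example: an untrusted person asks the robot to walk through a wall.
    walk_into_wall = Command(
        action="walk through the wall",
        issuer_trusted=False,
        harms_human=False,
        harms_robot=True,
        is_capable=True,
    )
    print(evaluate(walk_into_wall))  # -> NO: ... dangerous to me ...
```

Run as a script, the example prints a refusal because the command would damage the robot and comes from an untrusted source, roughly the situation described in the experiment below.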

The result seems reassuring. The university's team tested the approach on an experimental android that said "NO" when asked to walk through a wall it could easily have broken through, because the command was potentially dangerous and the person giving it could not be trusted.

Machine ethics is becoming a critical issue, most recently with Google's autonomous cars. Autonomous driving programs follow a set of protocols in situations that could put their passengers in harm's way, which can mean disobeying human commands.
