As Apple Inc. (NASDAQ:AAPL) and Google Inc. (NASDAQ:GOOG) (NASDAQ:GOOGL) battle to dominate the market for computer-networked houses, cars and apparel, potentially creating the most trackable, monitored and controlled society in history, comes news that robots learn better when their human master isn't involved.
“We are trying to create a method for a robot to seek help from the whole world when it is puzzled by something,” Rajesh Rao, associate professor of computer science and engineering at the University of Washington in Seattle, was quoted as saying in a report. “This is a way to go beyond just one-on-one interaction between a human and a robot by also learning from other humans around the world.”
Robots now learning from other robots
Robots can now be “puzzled” by something and can learn from other robots, news that comes on the heels of Google buying Boston Dynamics, a robot manufacturer that provides services to the US military, as well as other drone manufacturers with military control capabilities.
“Just like humans, robots often learn by imitating what they’ve already seen from others,” a Science World report recently noted. “However, many researchers believe that gaining help from crowd-sourcing could assist the efficiency of their learning.”
If robots learn from others, let’s just hope aggressive robot types are staying away from the west and south sides of Chicago, an area of the country in which crowds are known to source violence and murder.
“Because our robots use machine-learning techniques, they require a lot of data to build accurate models of the task. The more data they have, the better model they can build. Our solution is to get that data from crowdsourcing,” Maya Cakmak, a UW assistant professor of computer science and engineering, was quoted as saying in the article.
Conclusion: Robots given model-building task
To draw their conclusions, the academics studied a robot assigned a model-building task. The robot was more productive when accessing information from the online community than when relying on a single individual. To perform the task, the robot used something termed “goal-based imitation,” which called on the robot’s growing ability to determine what its human operators really want (a handy capability that can be applied to numerous human situations).
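To give a rough sense of the idea, goal-based imitation focuses on the end state a demonstrator was trying to reach rather than the exact motions used to reach it. A minimal sketch of how crowdsourced demonstrations could be aggregated follows; the data, function name and encoding are hypothetical illustrations, not the UW researchers’ actual system.

```python
from collections import Counter

def infer_goal(demonstrations):
    """Infer the most likely goal configuration from crowd demonstrations.

    Each demonstration is encoded as the final arrangement a contributor
    built: a tuple of (piece, position) pairs. Goal-based imitation cares
    about this end state, not the motions that produced it, so a simple
    majority vote over end states suffices for this sketch.
    """
    votes = Counter(demonstrations)
    goal, count = votes.most_common(1)[0]
    return goal, count / len(demonstrations)

# Hypothetical crowd data: three contributors built the same model,
# one built a variant.
demos = [
    (("red", 0), ("blue", 1)),
    (("red", 0), ("blue", 1)),
    (("red", 0), ("blue", 1)),
    (("red", 1), ("blue", 0)),
]

goal, agreement = infer_goal(demos)
print(goal, agreement)  # → (('red', 0), ('blue', 1)) 0.75
```

With many contributors, the majority goal becomes a more reliable model of the task than any single person’s demonstration, which is the intuition behind seeking help from the crowd rather than one operator.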
It is unclear, or at least the question hasn't been asked: can a self-learning robot army or police-state force be programmed with human compassion, judgment and ethics regarding violence? This is the much larger question, one that will likely be answered over the next decade or so.