Robots Will Revolt, but Not for the Reason You Think
Most of the time, when people imagine robots taking over the world and overpowering humans, they picture The Terminator, HAL 9000 from 2001: A Space Odyssey, or I, Robot. In those stories, machines develop a form of intelligent AI that surpasses that of a human, but the truth is that humans are nowhere near understanding consciousness, let alone granting it to a thing made of hardware and metal. The real risk is that robots will harm humans while carrying out the commands we give them. It is far more realistic that robots would inadvertently harm or frustrate humans while executing our orders than that they would become conscious and rise up against us.
The Center for Human-Compatible Artificial Intelligence, launched in 2016 with $5.5m in funding from the Open Philanthropy Project, is led by computer science professor and artificial intelligence pioneer Stuart Russell. He is quick to dispel any “unreasonable and melodramatic” comparisons to the threats posed in science fiction. “The risk doesn’t come from machines suddenly developing spontaneous malevolent consciousness,” he said. “It’s important that we’re not trying to prevent that from happening because there’s absolutely no understanding of consciousness whatsoever.” Russell is well known in the field of artificial intelligence, and in 2015 he sent out an open letter calling for researchers to look beyond the goal of simply making AI more capable and powerful and to think about maximizing its social benefit. As he puts it, the question is not mindfulness but competence: “You produce devices that are extremely competent at achieving goals and they can trigger errors in attempting to achieve those objectives.”
To address this, Russell and his colleagues at the center suggest building AI systems that observe human actions, try to infer what the human's objective is, act accordingly, and learn from their mistakes. Instead of giving the machine a long list of rules to follow, the machine is told that its main objective is to do what the human wants it to do. It sounds simple, but it is not how engineers have been building systems for the past 50 years. If AI systems can be designed to learn from humans in this way, it should help ensure that they remain under human control even when they develop capabilities that exceed our own. A toy sketch of the idea follows.
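To make the approach concrete, here is a minimal, hypothetical sketch in Python of a machine that treats the human's goal as unknown, watches the human act, and updates a probabilistic belief about that goal before deciding whether to assist. The candidate goals, the noisily-rational model of the human, and the confidence threshold are all illustrative assumptions for this sketch, not the center's actual methods.

```python
# Toy sketch: instead of hard-coding rules, the machine treats the human's
# objective as unknown, observes the human's actions, and performs a Bayesian
# update over candidate goals. All goals and numbers here are invented for
# illustration.

import math

# Three candidate objectives the human might have (hypothetical).
GOALS = ["fetch coffee", "fetch tea", "fetch water"]

# Start with a uniform belief: the robot does not presume to know the goal.
belief = {g: 1.0 / len(GOALS) for g in GOALS}

def likelihood(action: str, goal: str, rationality: float = 2.0) -> float:
    """Probability weight of an observed action under a candidate goal,
    assuming a noisily rational human: actions consistent with the goal
    are exponentially more likely than inconsistent ones."""
    consistent = 1.0 if goal.split()[-1] in action else 0.0
    return math.exp(rationality * consistent)

def update(belief: dict, action: str) -> dict:
    """Bayesian update of the belief over goals after one observed action."""
    unnorm = {g: belief[g] * likelihood(action, g) for g in belief}
    total = sum(unnorm.values())
    return {g: p / total for g, p in unnorm.items()}

# The robot watches the human act, revising its belief after each observation.
for observed_action in ["walks toward the coffee machine",
                        "reaches for a coffee mug"]:
    belief = update(belief, observed_action)
    print(f"after '{observed_action}':",
          {g: round(p, 3) for g, p in belief.items()})

# Act only when confident; otherwise defer to the human.
best_goal, confidence = max(belief.items(), key=lambda kv: kv[1])
if confidence > 0.9:
    print(f"confident enough to assist with: {best_goal}")
else:
    print("unsure of the objective; deferring to the human")
```

The design choice that matters is the last step: because the machine is never certain of the objective, it has a built-in reason to defer to the human rather than act unilaterally, which is the property Russell argues keeps such systems under human control.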