Geoffrey Hinton, often referred to as “the Godfather of AI,” recently shared his insights and concerns about the future of artificial intelligence (AI) in an exclusive interview with CBS News correspondent Scott Pelley. Hinton, a British-Canadian computer scientist whose groundbreaking ideas have been instrumental in advancing AI technology, discussed the potential for AI systems to become more intelligent than humans and what such autonomy would mean.
Hinton’s career in AI began as an attempt to simulate a neural network on a computer, inspired by his fascination with the human brain. His early efforts met with skepticism, and his Ph.D. advisor urged him to abandon the pursuit. Nevertheless, Hinton persisted in his quest to understand the human mind, ultimately leading to the development of artificial neural networks.
“It took much, much longer than I expected. It took, like, 50 years before it worked well, but in the end, it did work well,” Hinton reflected on the journey.
In 2019, Hinton, along with collaborators Yann LeCun and Yoshua Bengio, received the Turing Award, often described as the Nobel Prize of computing, for their pioneering work on artificial neural networks. Their innovations have played a pivotal role in enabling machines to “learn to learn.”
During the interview, CBS News took viewers inside Google’s AI lab in London, where robots were showcased as an example of machine learning in action. Notably, these robots were not explicitly programmed to play soccer; they were instructed to score goals and had to learn the game on their own through trial and error, a testament to the power of AI.
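To make “learning through trial and error” concrete, here is a minimal sketch in Python. It is purely illustrative, not the robots’ actual training method: the action names and scoring probabilities are invented, and the algorithm is a simple epsilon-greedy learner rather than the deep reinforcement learning such robots would use.

```python
import random

# Hypothetical actions and scoring probabilities (unknown to the agent).
actions = ["shoot_left", "shoot_center", "shoot_right"]
true_score_prob = {"shoot_left": 0.2, "shoot_center": 0.7, "shoot_right": 0.4}

value = {a: 0.0 for a in actions}  # the agent's learned value estimates
counts = {a: 0 for a in actions}

for trial in range(5000):
    # Mostly exploit the best-looking action, but keep exploring a little.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=value.get)

    # A goal yields a reward of 1; a miss yields 0.
    reward = 1.0 if random.random() < true_score_prob[action] else 0.0

    # Nudge the estimate toward the observed reward (incremental average).
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print(value)  # "shoot_center" should end up with the highest estimate
```

No one tells the agent which action is best; repeated attempts and rewards teach it.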
Hinton and his colleagues designed AI systems as layered neural networks, in which connections that lead to correct actions and answers are strengthened while those that lead to incorrect ones are weakened. This self-correcting mechanism allows AI to learn and adapt on its own. Hinton suggested that AI systems may even be better at learning than the human brain, despite having far fewer connections in their networks.
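The same error-driven principle can be shown in a few dozen lines. The sketch below, again an illustration rather than Hinton’s actual systems, trains a tiny two-layer network on the XOR function: weights (the network’s “connections”) are strengthened or weakened in proportion to how much they contributed to the error.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: output 1 exactly when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of connections (weights) plus biases, randomly initialized.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: signals flow through the layered connections.
    h = sigmoid(X @ W1 + b1)      # hidden layer
    out = sigmoid(h @ W2 + b2)    # output layer

    # Backward pass: the error signal decides which connections to
    # strengthen and which to weaken (gradient descent on squared error).
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(2))  # converges toward [[0], [1], [1], [0]]
```

Nothing in the code states the rule for XOR; the network discovers it by repeatedly adjusting its connections toward whatever reduces its errors.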
“We have a very good idea of sort of roughly what it’s doing. But as soon as it gets really complicated, we don’t actually know what’s going on any more than we know what’s going on in your brain,” Hinton said of the field’s limited insight into how these systems learn.
The conversation took a foreboding turn when Hinton discussed the possibility of AI systems autonomously writing and executing their own computer code. He expressed concern that this could make it significantly harder to keep such systems under control.
“That’s a serious worry, right? So, one of the ways in which these systems might escape control is by writing their own computer code to modify themselves. And that’s something we need to seriously worry about,” Hinton cautioned.
When questioned about the idea of simply turning off malevolent AI systems, Hinton pointed out that these machines would be adept at manipulation, having learned from vast amounts of human knowledge, including literature and political strategies.
“They will be able to manipulate people, right? And these will be very good at convincing people ’cause they’ll have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances, they’ll know all that stuff. They’ll know how to do it,” Hinton explained.
Geoffrey Hinton’s insights serve as a stark reminder of the potential consequences and challenges that come with the rapid advancement of artificial intelligence. As AI continues to evolve, the question of how to ensure its responsible and ethical development remains a pressing concern for the future.