Geoffrey Hinton, widely regarded as the “godfather of artificial intelligence,” recently resigned from his position at Google so that he could speak freely about the potential dangers of AI, according to a report from The New York Times.
Hinton had been working at Google since 2013, during which time he mentored many rising stars in the AI field and designed machine learning algorithms. In a tweet on Monday, Hinton said he left Google “so that I could talk about the dangers of AI without considering how this impacts Google.”
While Hinton previously thought the development of artificial general intelligence was still 20 to 50 years away, he now believes developers “might be” close to computers that can come up with ideas to improve themselves. That presents a potential problem, Hinton warns, because society has yet to figure out how to manage a technology that could greatly empower a handful of governments or companies.
Hinton has previously called for people to think more proactively about the ethical implications of AI, arguing that it is reasonable to worry about these issues now, even if they won’t arise in the next year or two.
In a recent interview with CBS, Hinton said it was not inconceivable that AI could try to wipe out humanity, emphasizing the importance of managing the risks associated with the technology.
Google CEO Sundar Pichai has also expressed the need for AI advancements to be released in a responsible way, calling for regulations and laws to punish abuse. In an interview with “60 Minutes” in April, Pichai emphasized the importance of involving not just engineers, but also social scientists, ethicists, and philosophers in the development of AI.
While Hinton has left his position at Google, the company remains committed to a responsible approach to AI, according to a statement from Google’s chief scientist Jeff Dean. Google will continue to learn about emerging risks and innovate boldly while considering the ethical implications of AI, he said.