Wednesday, 3 May 2023

Google AI pioneer says he quit to speak freely about technology's 'dangers'

Artificial intelligence pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto, December 4, 2017. REUTERS/Mark Blinch


A pioneer of artificial intelligence said he quit Google (GOOGL.O) to speak freely about the technology's dangers, after realising computers could become smarter than people far sooner than he and other experts had expected.


"I left so that I could talk about the dangers of AI without considering how this impacts Google," Geoffrey Hinton wrote on Twitter.




In an interview with the New York Times, Hinton said he was worried about AI's capacity to produce convincing false images and text, creating a world where people will "not be able to know what is true anymore".


"It is hard to see how you can prevent the bad actors from using it for bad things," he said.


He warned that the technology could quickly displace workers and become a greater danger as it learns new behaviours.


“The idea that this stuff could actually get smarter than people — a few people believed that,” he told the New York Times. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”


In his tweet, Hinton said Google itself had "acted very responsibly" and denied that he had quit so that he could criticise his former employer.


Google, part of Alphabet Inc., did not immediately reply to a request for comment from Reuters.


The Times quoted Google’s chief scientist, Jeff Dean, as saying in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”


Since Microsoft-backed (MSFT.O) startup OpenAI released ChatGPT in November, the growing number of "generative AI" applications that can create text or images has provoked concern over the future regulation of the technology.


"That so many experts are speaking up about their concerns regarding the safety of AI, with some computer scientists going as far as regretting some of their work, should alarm policymakers," said Dr Carissa Veliz, an associate professor in philosophy at the University of Oxford's Institute for Ethics in AI. "The time to regulate AI is now."


"I console myself with the normal excuse: If I hadn't done it, somebody else would have," Hinton told the New York Times, which was first to report his decision.


Jeff Dean, chief scientist at Google, said in a statement provided to CNN that Hinton "has made foundational breakthroughs in AI," and expressed appreciation for Hinton's "decade of contributions at Google."


"We remain committed to a responsible approach to AI," Dean said in a statement provided to CNN. "We're continually learning to understand emerging risks while also innovating boldly."


Hinton's decision to step back from the company and speak out on the technology comes as a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.


The wave of attention around ChatGPT late last year helped set off a renewed arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies.


In March, some prominent figures in tech signed a letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing "profound risks to society and humanity." The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.


Even before stepping aside from Google, Hinton had spoken publicly about AI's potential to do harm as well as good.


"I believe that the rapid progress of AI is going to transform society in ways we do not fully understand and not all of the effects are going to be good," Hinton said in a 2021 commencement address at the Indian Institute of Technology Bombay in Mumbai. He noted how AI will boost healthcare while also creating opportunities for lethal autonomous weapons. "I find this prospect much more immediate and much more terrifying than the prospect of robots taking over, which I think is a very long way off."


Hinton isn't the first Google employee to raise a red flag on AI. In July, the company fired an engineer who claimed an unreleased AI system had become sentient, saying he violated employment and data security policies. Many in the AI community pushed back strongly on the engineer's assertion.
