
Unregulated AI can lead to humanity's extinction, world tech leaders warn



The viral success of OpenAI's ChatGPT has fueled an artificial intelligence arms race in the tech sector

Technology leaders, including the CEOs of OpenAI and Google DeepMind, have warned that AI could lead to the extinction of humanity.

According to dozens of AI industry leaders, academics, and even some celebrities, preventing a possible AI-driven extinction event should now be a top global priority.

A statement published by the Center for AI Safety said: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."



The statement underlines a variety of worries regarding the ultimate risks posed by unrestrained artificial intelligence, CNN reported.

According to AI experts, society is still a long way from achieving the level of artificial general intelligence that is the stuff of science fiction; today's state-of-the-art chatbots only "replicate patterns based on training data they have been fed" and do not think for themselves, the report said.

Leading names in the AI industry who signed the statement include OpenAI CEO Sam Altman; Geoffrey Hinton, widely known as the godfather of AI; top executives and researchers from Google DeepMind and Anthropic; and Microsoft CTO Kevin Scott, among others.

 


Meanwhile, the "flood of hype and investment into the AI business" has prompted proposals for regulation at the outset of the AI age, before any significant problems occur.

The statement comes in response to the viral success of OpenAI's ChatGPT, which has fueled an artificial intelligence arms race in the tech sector.

As a result, numerous lawmakers, advocacy organisations, and tech industry insiders have expressed concern about the possibility that a new generation of AI-powered chatbots may "spread false information and eliminate jobs."



Hinton, whose groundbreaking work helped create today's AI systems, previously said that he decided to leave his role at Google and "blow the whistle" on the technology after "suddenly" realising "that these things are getting smarter than us."

The statement, originally put forth by David Krueger, an AI professor at the University of Cambridge, does not preclude society from addressing other types of AI risk, such as algorithmic bias or false information, said Dan Hendrycks, director of the Center for AI Safety, in a tweet on Tuesday.


"Society may address several threats concurrently; it's not "either/or," he tweeted.

"From a risk management standpoint, it would be dangerous to disregard them, just as it would be reckless to exclusively prioritise present damages." he added.

 
