Elon Musk and Tech Execs Call for Pause on AI Development

More than 2,600 tech leaders and researchers have signed an open letter calling for a temporary halt to further AI development, citing “profound risks to society and humanity.” Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a host of AI CEOs, CTOs, and academics were among the signatories of the letter, which was published on March 22 by the US think tank Future of Life Institute (FOLI).

The institute urged all AI labs to “immediately pause” the training of AI systems more powerful than GPT-4 for at least six months, warning that “AI systems with human-competitive intelligence can pose profound risks to society and humanity.”

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening,” the letter stated.

GPT-4, released on March 14, is the latest version of the model powering OpenAI‘s AI chatbot. It has passed some of the most difficult US high school and law exams, scoring in the 90th percentile, and is believed to be ten times more advanced than the initial version of ChatGPT. According to FOLI, the AI industry is locked in an “out-of-control race” to develop increasingly powerful AI that “no one – not even their creators – can understand, predict, or reliably control.”

Chief among the letter’s fears were that AI systems could flood information channels with “propaganda and untruth” and “automate away” jobs. FOLI took these concerns a step further, suggesting that such companies’ entrepreneurial ambitions could pose an existential threat: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”

“Such decisions must not be delegated to unelected tech leaders,” the letter continued. The institute also echoed OpenAI CEO Sam Altman’s recent statement that an independent review should be conducted before training future AI systems. In a blog post published on February 24, Altman emphasized the importance of preparing for artificial general intelligence (AGI) and artificial superintelligence (ASI).

Nevertheless, not all AI experts have rushed to sign the petition. In a March 29 Twitter reply to Gary Marcus, author of Rebooting.AI, SingularityNET CEO Ben Goertzel argued that large language models (LLMs) will not become AGIs, a field that has seen little progress to date. He suggested instead that research and development be slowed for technologies such as bioweapons and nuclear weapons.

Beyond large language models such as ChatGPT, AI-powered deepfake technology has been used to create convincing image, audio, and video forgeries, and AI-generated artwork has raised concerns about possible copyright infringement in certain cases. Galaxy Digital CEO Mike Novogratz recently told investors he was surprised by the level of regulatory scrutiny directed at cryptocurrency while so little attention is paid to artificial intelligence.

“When I think about AI, it amazes me that we’re talking so much about crypto regulation and nothing about AI regulation; I mean, I think the government has it absolutely backwards,” he said during a March 28 shareholders call. Should a pause fail to materialize, FOLI argued, governments must intervene: “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it wrote.
