An artificial intelligence scientist is issuing a stark warning about the future of the fast-expanding technology. In a recent editorial for TIME magazine, AI pioneer Eliezer Yudkowsky argues that the existing methods and structures used to build AI are putting mankind at grave risk.
Yudkowsky cites an open letter from the Future of Life Institute calling for a moratorium on AI development. He says he declined to sign the letter because he believes it does not go far enough.
“Many researchers steeped in these issues, including myself, believe that building a superhumanly smart AI under anything remotely resembling the current circumstances will result in the death of literally everyone on Earth.”
Yudkowsky stresses that he is not describing a remote or merely statistically notable possibility that AI could wipe out humanity. Rather, he argues it is the obvious default outcome.
“Under current conditions, I expect that every single member of the human species and all biological life on Earth will die shortly after someone develops an overly powerful AI.”
According to Yudkowsky, there is widespread misinformation about how self-aware and powerful emerging AI systems truly are. He recommends that engineers pause immediately and work out how to install safeguards that protect both the technology and life on Earth.
Over 1,000 AI specialists, academics, advocates, and tech entrepreneurs, including Elon Musk, signed an open letter calling for a halt to AI development. The power, promise, and potential perils of artificial intelligence have drawn explosive attention in recent months, owing to the rise of AI-powered image-generation tools like Midjourney and the rapid adoption of OpenAI's large language model ChatGPT.
Following a $10 billion investment in OpenAI, Microsoft has led the charge in expanding ChatGPT's use cases, integrating it with Bing and Office 365.