AI News

Elon Musk and Tech Execs Call for Pause on AI Development

  • by Sofiya
  • 2023-03-30

More than 2,600 IT professionals and researchers have signed an open letter calling for a temporary halt to further AI development, citing “deep hazards to society and mankind.” Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a slew of AI CEOs, CTOs, and academics were among those who signed the letter, which was released on March 22 by the US think tank Future of Life Institute (FOLI).

The institute urged all AI firms to “immediately cease” training AI systems more powerful than GPT-4 for at least six months, citing fears that “human-competitive intelligence can pose serious hazards to society and mankind.”

“Advanced artificial intelligence (AI) could signify a fundamental transformation in the history of life on Earth, and it should be planned for and handled with appropriate care and resources. Sadly, this degree of planning and administration is not taking place,” the institute noted.

GPT-4 is the most recent generation of OpenAI‘s AI-powered chatbot, released on March 14. To date, it has scored in the 90th percentile on some of the most difficult high school and law exams in the United States, and it is thought to be ten times more advanced than the initial ChatGPT version. The AI industry is in an “out-of-control race” to produce increasingly powerful AI that “no one — not even their inventors — can understand, anticipate, or reliably control,” according to FOLI.

Among the top fears were whether robots could potentially flood information channels with “propaganda and misinformation,” and whether machines would “automate away” all employment possibilities. FOLI took these fears a step further, claiming that these AI businesses’ entrepreneurial endeavors may result in an existential threat: “Shall we construct alien minds that could one day outnumber, outwit, obsolete, and replace us?”

“Such decisions should not be outsourced to unelected technology leaders,” the letter continued. The institute also echoed OpenAI CEO Sam Altman’s recent suggestion that an independent evaluation should be conducted before training future AI systems. In a blog post published on February 24, Altman emphasized the importance of preparing for artificial general intelligence (AGI) and artificial superintelligence (ASI).

Nevertheless, not all AI experts have rushed to sign the petition. In a March 29 Twitter reply to Gary Marcus, author of Rebooting.AI, SingularityNET CEO Ben Goertzel argued that large language models (LLMs) will not become AGIs, a field that has seen little advancement to date. Instead, he suggested slowing research and development on things like bioweapons and nuclear weapons.

In addition to large language models such as ChatGPT, AI-powered deepfake technology has been used to create convincing image, audio, and video forgeries, as well as AI-generated artwork, raising concerns about whether it may breach copyright law in certain situations. Galaxy Digital CEO Mike Novogratz recently told investors he was surprised by the level of regulatory scrutiny directed at cryptocurrency while so little attention is paid to artificial intelligence.

“When I think about AI, it amazes me that we’re talking so much about crypto regulation and nothing about AI regulation; I mean, I think the government has it absolutely backwards,” he said during a March 28 shareholders call. FOLI has warned that governments should intervene if a halt to AI development is not implemented immediately: “If such a stop cannot be enforced promptly, governments should step in and institute a moratorium,” it wrote.

Disclaimer: The information provided is not trading advice. Bitcoinworld.co.in holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.

Tags: administration, AI development, AI systems, chatbot, deep hazards, Elon Musk, Future of Life Institute, GPT-4, IT professionals, mankind, open letter, planning, society, Tech Execs, temporary halt


Copyright © 2026 BitcoinWorld | Powered by BitcoinWorld