OpenAI And Microsoft Join Forces To Prevent Cyberattacks From Malicious Actors


OpenAI, the developer of the artificial intelligence (AI) chatbot ChatGPT, has worked with its top investor, Microsoft, to disrupt five state-affiliated malicious actors attempting cyberattacks.

According to a report released on Wednesday, Microsoft has monitored hacking groups linked to Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments, which it says have been exploring using AI large language models (LLMs) in their hacking strategies.

LLMs are trained on vast text data sets to generate human-like responses.

OpenAI reported that two of the five groups were associated with China: Charcoal Typhoon and Salmon Typhoon.

The remaining attacks were linked to Iran through Crimson Sandstorm, to North Korea through Emerald Sleet, and to Russia through Forest Blizzard.

According to OpenAI, the groups tried to employ GPT-4 for researching companies and cybersecurity tools, debugging code, generating scripts, conducting phishing campaigns, translating technical papers, evading malware detection, and studying satellite communication and radar technology.

The accounts were deactivated upon detection.


The company revealed the discovery while implementing a blanket ban on state-backed hacking groups' use of its AI products.

While OpenAI prevented these particular incidents, it acknowledged the difficulty of stopping every malicious use of its products.

Following a surge of AI-generated deepfakes and scams after the launch of ChatGPT, policymakers stepped up scrutiny of generative AI developers. 

In June 2023, OpenAI announced a $1 million cybersecurity grant program to enhance and measure the impact of AI-driven cybersecurity technologies.

Despite OpenAI’s efforts in cybersecurity and implementing safeguards to prevent ChatGPT from generating harmful or inappropriate responses, hackers have found methods to bypass these measures and manipulate the chatbot to produce such content.

More than 200 entities, including OpenAI, Microsoft, Anthropic and Google, recently collaborated with the Biden Administration to establish the AI Safety Institute and the United States AI Safety Institute Consortium (AISIC). 


Both bodies grew out of President Joe Biden's executive order on AI safety, signed in late October 2023, which aims to promote the safe development of AI, combat AI-generated deepfakes and address cybersecurity issues.

