AI Fears Grow as Germany, France, Sweden, and Canada Voice Concerns

OpenAI’s ChatGPT release opened a Pandora’s box of powerful artificial intelligence, making some nations nervous. Even as they consider new laws, it is uncertain whether the box can be closed again.

After Italy banned ChatGPT on Sunday, the Canadian privacy commissioner joined Germany, France, and Sweden in probing the popular chatbot on Tuesday.

“A.I. technology and its effects on privacy is a priority for my office,” Canadian Privacy Commissioner Philippe Dufresne said, adding that keeping pace with fast-moving technological change is among his priorities as Commissioner.

Italy’s ban followed OpenAI’s March 20 disclosure of a bug that exposed some users’ payment details and conversation histories. OpenAI temporarily took ChatGPT offline to fix the issue.

“We do not need a ban on AI applications, but rather ways to ensure values like democracy and transparency,” a German Ministry of the Interior official told Handelsblatt on Monday.

But can software and A.I. realistically be blocked at all, given virtual private networks (VPNs)? A VPN creates an encrypted connection between a device and a remote server, providing safe and private internet access. Because traffic exits through that remote server, the user’s real IP address is hidden and they appear to be browsing from the server’s location, which also makes country-level blocks easy to sidestep.

“An A.I. ban may not be realistic because there are already many A.I. models in use and more are being developed,” Jake Maymar, vice president of Innovations at the A.I. consulting firm the Glimpse Group, told Decrypt. He added that a true A.I. ban would require banning computers and cloud technologies, which is impractical.
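To make the circumvention point concrete, here is a minimal, illustrative Python sketch (not from the article): the same request, routed through a proxy or VPN exit, appears to originate from the exit server’s IP address rather than the user’s. It assumes the public echo service api.ipify.org, and the proxy address shown is a hypothetical placeholder.

```python
# Illustrative sketch: why IP-based blocks are easy to sidestep.
# A request sent through a proxy/VPN exit appears to come from the exit
# server's address, not the user's. The proxy URL below is a placeholder.
import urllib.request


def apparent_ip(proxy_url=None):
    """Return the public IP address the remote service sees for this request."""
    handlers = []
    if proxy_url:
        # Route the request through the given HTTP(S) proxy (a stand-in for a VPN exit).
        handlers.append(urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url}))
    opener = urllib.request.build_opener(*handlers)
    # api.ipify.org simply echoes back the caller's public IP as plain text.
    with opener.open("https://api.ipify.org") as resp:
        return resp.read().decode()


if __name__ == "__main__":
    print("Direct connection IP:", apparent_ip())
    # print("Via proxy/VPN exit:", apparent_ip("http://203.0.113.10:8080"))  # hypothetical exit
```

In practice a VPN tunnels all of a device’s traffic rather than a single request, but the effect on the apparent IP, and therefore on IP-based blocking, is the same.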

Italy’s move to block ChatGPT comes amid increased concern about artificial intelligence’s impact on privacy, data security, and potential abuse. Last month, the Center for A.I. and Digital Policy, an AI think tank, filed a formal complaint with the U.S. Federal Trade Commission accusing OpenAI of deceptive and unfair practices, shortly after an open letter signed by several high-profile tech figures called for a slowdown in AI development.

In an April 5 blog post on AI safety, OpenAI pledged long-term safety research and collaboration with the broader community. The company said it wants to improve factual accuracy, reduce “hallucinations,” protect user privacy, and explore age-verification options to safeguard minors. “We also recognize that, like any technology, these tools come with real risks—so we work to ensure safety is built into our system at all levels,” the company wrote.

Critics dismissed OpenAI’s statement as public-relations window dressing that ignored A.I.’s existential risks. And while some warn about ChatGPT itself, others argue the real problem is how society uses the technology.

“What this moment does provide is a chance to consider what sort of society we want to be—what rules we want to apply to everyone equally, AI powered or not—and what kind of economic rules serve society best,” USC Viterbi Associate Professor of Computer Science Barath Raghavan told Decrypt. “The best policy responses will not target specific technological mechanisms of today’s A.I. (which will quickly be outdated) but behaviors and rules we’d like to see apply universally.”

