
Meta Implements AI Content Labeling By May In Response To Deepfake Concerns

  • Facebook and Instagram’s parent firm Meta said it will begin labeling artificial intelligence (AI)-generated content in May.
  • Meta made the decision in response to criticism from its Oversight Board over how manipulated material is handled and the growing prevalence of deepfake technology.
  • In addition to addressing concerns about misinformation ahead of crucial elections throughout the world, the labeling project seeks to provide transparency and context.

Meta, the parent company of Facebook and Instagram, announced on Friday that it will start labeling AI-generated content in May to allay concerns among users and governments about deepfakes.

The social media giant also said that, in order to respect the right to free speech, it will no longer remove manipulated images and audio that do not otherwise violate its policies. Instead, the company will rely on labeling and context, including a “Made with AI” label.

Facebook, Threads, and Instagram posts are all subject to the policy.

Meta has announced that it will begin labeling more audio, video, and image content as artificial intelligence (AI)-generated, acknowledging that its current policy is “too narrow.”

Although the company did not elaborate on its detection method, labels will be applied either when users disclose that they used AI tools or when Meta detects industry-standard AI image indicators in the content.


Meta Responds To Oversight Board Criticism

The changes were prompted by criticism from Meta’s Oversight Board, an independent panel that reviews the company’s content moderation decisions.

Citing the rapid advances in AI and the ease with which media can be manipulated into highly convincing deepfakes, the board asked Meta in February to urgently reassess its approach to manipulated media.

The board issued its warning in the middle of a major election year, both locally and globally, amid concerns about the rising misuse of AI-powered tools to spread disinformation on social platforms.

Monika Bickert, Vice President of Content Policy, said:

“We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say.”

In response to the oversight board’s recommendations, Meta opted to shift its approach towards manipulated media. 

Rather than outright removal, the company will now rely on labeling and contextualization to provide transparency regarding the origin and authenticity of content. 

According to Meta, this reflects its commitment to freedom of expression while safeguarding users from the harm posed by misleading or deceptive media.

Implementation of AI Content Labeling

The labeling project from Meta will be implemented in two stages, the first of which will start in May 2024. 

AI-generated content—which includes a variety of media formats like audio, video, and images—will be recognized and categorized appropriately during this stage. 

In order to warn users of the possibility of manipulation, content that is judged to have a high risk of deceiving the public will also be labeled more prominently.

As part of Meta’s second rollout phase, set to go live in July, the company will stop removing altered media solely on the basis of its previous policy.

Under the new rules, AI-manipulated content will remain accessible on the platform unless it violates other Community Standards, such as those prohibiting hate speech or election interference.

This approach reflects Meta’s commitment to striking a balance between preserving the integrity of its platforms and allowing freedom of expression.


Bickert added:

“The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling.”

Content that violates other guidelines, such as those on bullying, harassment, and election interference, will still be removed regardless of whether it was created with AI tools.

These new labeling approaches are connected to an agreement reached in February by major technology and AI companies to crack down on manipulated content intended to mislead voters.

Meta, Google, and OpenAI have already agreed on a common watermarking standard for images generated by AI applications.
