
Campbell Brown, once Meta’s news chief, warns AI is repeating social media’s worst mistakes

  • by Keshav Aggarwal
  • 2026-05-14
Campbell Brown, founder of Forum AI, in a modern office setting, looking directly at the camera with a serious expression.

Campbell Brown has spent her career chasing accurate information, first as a renowned TV journalist, then as Facebook’s first and only dedicated news chief. Now, watching AI reshape how people consume information, she sees history threatening to repeat itself. This time, she’s not waiting for someone else to fix it.

Her company, Forum AI — which she discussed recently with Bitcoin World’s Tim Fernholz at a StrictlyVC evening in San Francisco — evaluates how foundation models perform on what she calls “high-stakes” topics: geopolitics, mental health, finance, hiring. These are subjects where “there are no clear yes-or-no answers, where it’s murky and nuanced and complex.” The idea is to find the world’s foremost experts, have them architect benchmarks, then train AI judges to evaluate models at scale.

From Facebook to fixing AI

Brown traces the origin of Forum AI, founded 17 months ago in New York, to a specific moment. “I was at Meta when ChatGPT was first released publicly,” she recalled, “and I remember really shortly after realizing this is going to be the funnel through which all information flows. And it’s not very good.” The implications for her own children made the moment feel almost existential. “My kids are going to be really dumb if we don’t figure out how to fix this,” she recalled thinking.

What frustrated her most was that accuracy didn’t seem to be anyone’s priority. Foundation model companies, she said, are “extremely focused on coding and math,” whereas news and information are harder. But harder, she argued, doesn’t mean optional.

What Forum AI found when it tested the leading models

When Forum AI began evaluating the leading models, the findings weren’t encouraging. She cited Gemini pulling from Chinese Communist Party websites “for stories that have nothing to do with China,” and noted a left-leaning political bias across nearly all models. Subtler failures abound too, she said, including missing context, missing perspectives, and straw-manning arguments without acknowledgment. “There’s a long way to go,” she said. “But I also think that there are some very easy fixes that would vastly improve the outcomes.”

For Forum AI’s geopolitics work, Brown has recruited Niall Ferguson, Fareed Zakaria, former Secretary of State Antony Blinken, former House Speaker Kevin McCarthy, and Anne Neuberger, who led cybersecurity policy in the Biden administration. The goal is to get AI judges to roughly 90% consensus with those human experts, a threshold she says Forum AI has been able to reach.
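The 90% consensus target is, at its core, an agreement rate between an AI judge's verdicts and human experts' verdicts on the same benchmark items. As a rough illustration only (the labels, data, and scoring scheme below are assumptions for the sketch, not Forum AI's actual methodology):

```python
# Hypothetical sketch of judge-vs-expert consensus scoring.
# The verdict categories and the 90% threshold are illustrative
# assumptions, not Forum AI's actual benchmark design.

def agreement_rate(expert_labels, judge_labels):
    """Fraction of benchmark items where the AI judge matches the expert."""
    if len(expert_labels) != len(judge_labels):
        raise ValueError("label lists must be the same length")
    matches = sum(e == j for e, j in zip(expert_labels, judge_labels))
    return matches / len(expert_labels)

# Each item is a verdict on one model answer, e.g. "accurate",
# "missing_context", or "biased" (made-up categories for illustration).
experts = ["accurate", "biased", "accurate", "missing_context", "accurate"]
judges  = ["accurate", "biased", "accurate", "accurate", "accurate"]

rate = agreement_rate(experts, judges)
print(f"judge-expert consensus: {rate:.0%}")  # 4 of 5 items agree -> 80%
meets_target = rate >= 0.90  # the article's stated ~90% threshold
```

In practice, a scheme like this would run over thousands of expert-annotated items per domain, with the AI judge retrained or re-prompted until its verdicts track the expert panel closely enough to be used at scale.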

The lesson from social media that AI is ignoring

Brown spent years at Facebook watching what happens when a platform optimizes for the wrong thing. “We failed at a lot of the things we tried,” she told Fernholz. The fact-checking program she built no longer exists. The lesson, even if social media has turned a blind eye to it, is that optimizing for engagement has been lousy for society and left many less informed. Her hope is that AI can break that cycle. “Right now it could go either way,” she said; companies could give users what they want, or they could “give people what’s real and what’s honest and what’s truthful.”

Why enterprise demand might be the unlikely ally

She acknowledged the idealistic version of that — AI optimizing for truth — might sound naive. But she thinks enterprise may be the unlikely ally here. Businesses using AI for credit decisions, lending, insurance, and hiring care about liability, and “they’re going to want you to optimize for getting it right.” That enterprise demand is also what Forum AI is betting its business on, though turning compliance interest into consistent revenue remains a challenge, particularly given that much of the current market is still satisfied with checkbox audits and standardized benchmarks that Brown considers inadequate.

The compliance landscape, she said, is “a joke.” When New York City passed the first hiring bias law requiring AI audits, the state comptroller found that more than half had violations that went undetected. Real evaluation, she said, requires domain expertise to work through not just known scenarios but edge cases that “can get you into trouble that people don’t think about.” And that work takes time. “Smart generalists aren’t going to cut it.”

The disconnect between Silicon Valley hype and user reality

Brown — whose company last fall raised $3 million led by Lerer Hippeau — is uniquely positioned to describe the disconnect between the AI industry’s self-image and the reality for most users. “You hear from the leaders of the big tech companies, ‘This technology is going to change the world,’ ‘it’s going to put you out of work,’ ‘it’s going to cure cancer,'” she said. “But then to a normal person who’s just using a chatbot to ask basic questions, they’re still getting a lot of slop and wrong answers.”

Trust in AI sits at extraordinarily low levels, and she thinks that skepticism is, in many cases, justified. “The conversation is sort of happening in Silicon Valley around one thing, and a totally different conversation is happening among consumers.”

Conclusion

Campbell Brown’s trajectory from TV news to Meta to founding Forum AI reflects a growing concern that AI, left unchecked, could amplify the same misinformation dynamics that plagued social media. Her approach — using expert-designed benchmarks to hold foundation models accountable — offers a potential path forward, but it depends on whether the industry and regulators are willing to prioritize accuracy over engagement. For now, the gap between what AI companies promise and what users experience remains wide, and Brown is betting that enterprise demand for liability-proof AI will close it.

FAQs

Q1: What is Forum AI?
Forum AI is a startup founded by Campbell Brown that evaluates foundation models on high-stakes topics like geopolitics, mental health, finance, and hiring. It uses expert-designed benchmarks and AI judges to assess accuracy and bias at scale.

Q2: Why does Campbell Brown think AI accuracy is important?
Brown argues that AI chatbots are becoming the primary funnel for information, and if they provide inaccurate or biased answers, it could leave people less informed — repeating the mistakes of social media platforms that optimized for engagement over truth.

Q3: How does Forum AI evaluate AI models?
Forum AI recruits leading domain experts to architect benchmarks, then trains AI judges to evaluate models against those benchmarks. The goal is to reach roughly 90% consensus between AI judges and human experts.

Disclaimer: The information provided is not trading advice. Bitcoinworld.co.in holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.

Tags:

AI accuracy, AI Regulation, Campbell Brown, Forum AI, Foundation Models
