• An AI godfather has had it with the doomsdayers.
  • Meta’s Yann LeCun thinks tech bosses’ bleak comments on AI risks could do more harm than good.
  • The naysaying is actually about keeping control of AI in the hands of a few, he said.

AI godfather Yann LeCun wants us to forget some of the more far-fetched doomsday scenarios.

He sees a different, real threat on the horizon: the rise of power-hungry one-percenters who rob everyone else of AI’s riches.

Over the weekend, Meta’s chief AI scientist accused some of the most prominent founders in AI of “fear-mongering” and “massive corporate lobbying” to serve their own interests.

He named OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei in a lengthy weekend post on X.

“Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment,” LeCun wrote, referring to these founders’ role in shaping regulatory conversations about AI safety. “They are the ones who are attempting to perform a regulatory capture of the AI industry.”

He added that if these efforts succeed, the outcome would be a “catastrophe” because “a small number of companies will control AI.”

That’s significant since, as almost everyone who matters in tech agrees, AI is the biggest development in technology since the microchip or the internet.

Altman, Hassabis, and Amodei did not immediately respond to Insider’s request for comment.

LeCun’s comments came in response to a post on X from physicist Max Tegmark, who suggested that LeCun wasn’t taking the AI doomsday arguments seriously enough.

“Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can’t be refuted with snark and corporate lobbying alone,” Tegmark wrote, referring to the UK’s upcoming global AI safety summit.

Yann, I’d love to hear you make arguments rather than acronyms. Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can’t be refuted with snark and corporate lobbying alone. https://t.co/Zv1rvOA3Zz

— Max Tegmark (@tegmark) October 29, 2023

LeCun says founder fretting is just lobbying

Since the launch of ChatGPT, AI’s power players have become major public figures.

But, LeCun said, founders such as Altman and Hassabis have spent a lot of time drumming up fear about the very technology they’re selling.

In March, more than 1,000 tech leaders, including Elon Musk, Altman, Hassabis, and Amodei, signed a letter calling for a minimum six-month pause on AI development.

The letter cited “profound risks to society and humanity” posed by hypothetical AI systems. Tegmark, one of the letter’s signatories, has described AI development as “a suicide race.”

LeCun and others say these kinds of headline-grabbing warnings are just about cementing power and skating over the real, imminent risks of AI.

Those risks include worker exploitation and data theft that generates profit for “a handful of entities,” according to the Distributed AI Research Institute (DAIR).

The focus on hypothetical dangers also diverts attention away from the boring-but-important question of how AI development actually takes shape.

LeCun has described how people are “hyperventilating about AI risk” because they have fallen for what he describes as the myth of the “hard take-off.” This is the idea that “the minute you turn on a super-intelligent system, humanity is doomed.”

But imminent doom is unlikely, he argues, because every new technology in fact goes through an orderly development process before wider release.

Every new technology is developed and deployed the same way:

You make a prototype, try it at a small scale, make limited deployment, fix the problems, make it safer, and then deploy it more widely.

At that point, governments regulate it and establish safety standards.


— Yann LeCun (@ylecun) April 2, 2023

So the area to focus on is, in fact, how AI is being developed right now. And for LeCun, the real danger is that the development of AI gets locked into private, for-profit entities who never release their findings, while AI’s open-source community gets obliterated.

His consequent worry is that regulators let it happen because they’re distracted by killer robot arguments.

Leaders like LeCun have championed open-source developers, because their work on tools that rival, say, OpenAI’s ChatGPT brings a new level of transparency to AI development.

LeCun’s employer, Meta, made its own large language model that competes with GPT, Llama 2, (somewhat) open source. The idea is that the broader tech community can look under the hood of the model. No other big tech company has done a similar open-source release, though OpenAI is rumored to be thinking about it.

For LeCun, keeping AI development closed is a real reason for alarm.

“The alternative, which will inevitably happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people’s entire digital diet,” he wrote.

“What does that mean for democracy? What does that mean for cultural diversity?”
