Anxiety over artificial intelligence is running high. Workers fear for their jobs. Some experts fear for the human race.
Companies, the fearful say, are racing to make machines smarter without first ensuring that they can be kept tethered to human values. A safety researcher recently left OpenAI, a leading AI developer, declaring himself “pretty terrified” at the pace of development.
The AI industry is racing to develop “artificial general intelligence” – AI that can think and learn the way people do, perform tasks without being programmed to perform them and match or even outdo humans in creativity, flexibility and abstract reasoning. It’s a concept that’s at the center of more than one controversy.
Experts are divided on whether AGI will ever be reached. Those who think it will are divided on how fast. And there’s deep division on whether achieving it would be a good thing. The researcher who recently left OpenAI said, “When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?”
Speaking in defense of AGI, author Reid Hoffman asks us to take a time-out from streaming the Terminator movies and look at the bright side. A venture capitalist and Silicon Valley heavyweight, Hoffman co-founded LinkedIn and has served on the boards of more than one AI company.
In a just-published book, Superagency: What Could Possibly Go Right with Our AI Future, Hoffman and co-author Greg Beato argue that the potential benefits far outweigh the risks, which can be mitigated through “iterative development” and democratization. By that they mean releasing AI innovations incrementally to a wide range of users, encouraging acceptance and allowing flaws to be discovered and corrected.
Hoffman describes himself as a “techno-humanist.” He stands neither with the Silicon Valley “solutionists,” who see AI as the answer to every problem and favor gung-ho, no-holds-barred development, nor with the “problemists,” who will only accept a technology if it’s proven to pose zero risks and favor heavy regulation or even bans.
Between the two, Hoffman worries more about the problemists. American farmers who’ve been through the GMO wars will appreciate his critique of the “precautionary principle.” In his discussion of attitudes toward technology generally, he cites examples from a variety of fields, agriculture included.
Hoffman doesn’t say “no regulation, ever.” He asks, though, that we appreciate that innovation is itself a form of regulation – whereas strict adherence to the precautionary principle can stifle innovations that would improve a technology’s safety.
To illustrate the innovation-as-regulation notion, he cites the early, unregulated days of the automobile, when auto makers introduced – for competitive reasons – many safety features we take for granted. For example, countless wrists, arms and jaws were broken by people trying to crank-start cars until 1911, when Charles Kettering invented the electric starter.
The next year it was available on Cadillacs, helping establish that brand’s reputation for luxury. It eventually became standard equipment.
Even as the authors respond to AGI’s critics, they keep returning to all the wonderful things the technology will make possible. They see improvement in people’s lives in fields ranging from manufacturing to agriculture, from health care to education.
“What if every child on the planet suddenly has access to an AI tutor that is as smart as Leonardo da Vinci and as empathetic as Big Bird?”
Superagency is a well-informed, thought-provoking book. I’m especially intrigued by the authors’ theory that the key to acceptance is getting the technology into the hands of a large number and wide variety of people.
Using AI, something I’ve started to do fairly recently, has certainly changed my attitude. AI tools like Gemini and Perplexity are helping me greatly in my study of the Italian language. My opinion of AI has gone from neutral to somewhat positive.
The reason I’m not even more positive lies in the question Hoffman and Beato fail to answer: Just how serious is the risk of a Terminator scenario? And if the risk isn’t negligible, what’s the best way to meet it? Even accepting the innovation-is-regulation premise, you have to wonder if innovation alone could keep this risk at bay.
I suspect Hoffman might have a convincing answer. I wish he’d shared it with us.
Yuval Noah Harari, author of the book “Sapiens,” spoke for many in one of its marketing blurbs:
“Superagency is a fascinating and insightful book, providing humanity with a bright vision for the age of AI. I disagree with some of its main arguments, but I nevertheless hope they are right. Read it and judge for yourself.”
Former longtime Wall Street Journal Asia correspondent and editor Urban Lehner is editor emeritus of DTN/The Progressive Farmer.
This article, originally published on April 22 by the latter news organization and now republished by Asia Times with permission, is © Copyright 2025 DTN/The Progressive Farmer. All rights reserved. Follow Urban Lehner on X @urbanize