Superagency: What Could Possibly Go Right With Our AI Future
By Reid Hoffman and Greg Beato
Author’s Equity, 2025

If you’re on the fence about whether to be excited or anxious about AI, Reid Hoffman and Greg Beato’s new book Superagency offers a refreshingly pragmatic way to think about it, grounded in historical precedent.

The book is framed as a “techno-humanist compass” to help guide us toward a world where AI benefits as many people as possible. Its focusing question centers on maintaining and increasing human agency: “Can we continue to maintain control of our lives, and successfully plot our own destinies?”

From agency to superagency

What is human agency? It’s your capacity to make choices, act independently, and direct your life — from deciding what to buy, where to live, who to marry, etc. Sure, external circumstances influence choices, but our sense of agency — the feeling that choices matter — is a big part of what makes life feel meaningful. And AI can empower us beyond individual agency.

Hoffman (I’ll refer to him as the author, since he seems the driving force) argues that developed and deployed mindfully, AI can lead us to a state of superagency: “what happens when a critical mass of individuals, personally empowered by AI, begin to operate at levels that compound through society.”

That is, the benefits don’t just accrue to people using AI directly, but to everyone. Think for example of someone using AI to clarify and streamline their writing, as I’m doing now. I stand to benefit from publishing a better post, but so do my readers. Such effects compound.

Four principles for superagency

How do we achieve superagency? The book lays out four principles as a blueprint for architecting social and technical approaches to AI that expand human agency:

  1. The key to creating broadly beneficial outcomes is designing for human agency.
  2. When systems are designed for agency, sharing data and knowledge can empower people and societies (rather than control them).
  3. Innovation and safety are synergistic — rather than opposing — forces.
  4. The collective use of AI will have compounding effects, as previous technologies have.

Hoffman argues for cautious regulation of AI. Rather than adopting the precautionary principle (assuming the tech is dangerous by default, and constraining it), he wants regulation informed by on-the-ground experimentation. He cites numerous historical examples to illustrate how this has worked with previous technologies, including:

  • Writing
  • The printing press
  • The steam engine
  • The spinning jenny and automated looms
  • The telephone
  • The automobile
  • Computers

All were contested at first — often for valid reasons. But ultimately, they deepened our agency and humanity:

Every new technology we’ve invented—from language, to books, to the mobile phone—has defined, redefined, deepened, and expanded what it means to be human.

They evolved — and created broad societal benefits — through iterative development, which led to regulations informed by real-world data rather than blanket constraints based on theory.

What makes AI different is its capacity to sidestep human decision-making. Hoffman wants an “AI that works for us and with us.” We must quickly figure out how to achieve this, which entails a broad conversation about our stance toward the new technology.

A broad social conversation

He sees four “key constituencies” influencing this discourse:

  • Doomers: believe that in a worst-case scenario, misaligned agentic superintelligences might destroy humanity.
  • Gloomers: believe that the Doomer position distracts us from the real near-term risks: job losses, disinformation, systemic biases, etc.
  • Zoomers: believe the gains and innovation stemming from AI will greatly exceed its risks.
  • Bloomers: believe AI can increase human progress, but recognize the technology must be developed and deployed mindfully.

All four entail different approaches to regulatory oversight, ranging from outright prohibition to limited constraints.

Which feels more reasonable to you? I’m with Hoffman here; he sympathizes with the Bloomers:

While they’re not unconditionally opposed to government regulation, they believe that the fastest and most effective way to develop safer, more equitable, and more useful AI tools is to make them accessible to a diverse range of users with different values and intentions.

This entails giving the organizations developing AI enough leeway to experiment with different approaches. Hoffman proposes benchmarking as a first step to testing technologies before drafting new laws. He also acknowledges code itself as a sort of law, baking constraints into the technology’s architecture.

Why this is hard, especially now

Hoffman acknowledges that the fundamental issue is trust, which is, unfortunately, at a low ebb. Many of us don’t really trust organizations and governments to do the right thing with our data. But it’s a tradeoff: we get value in exchange for privacy. Rather than an all-or-nothing stance, the book explores middle grounds such as a “private commons” of quasi-public infrastructure.

We can consider GPS as a model for a data-intensive technology developed in close partnership between industry and government and carefully deployed to broad public benefit. AI can be a sort of “informational GPS,” helping us make sense of complex issues without dictating our direction.

While there are risks and downsides — most of which we can’t predict — there are also unforeseeable upsides, especially judging by historical precedents. For example, today we can travel the same route the Donner party followed, without hardship. Conversely, Hoffman asks us to imagine an England where the Luddites had their way: such a society would soon be left behind while its neighbors enjoyed the advantages of modernity, including workers’ rights.

New technologies aren’t bad by default. And in any case, we can’t stuff the toothpaste back into the tube. Our best option is to take a page from the past and regulate them pragmatically as we learn about their capabilities and limitations.

Bottom line

The book’s ethos is summed up in a section from the last chapter, which resonates with me:

Advances in technology are often presented as challenges to our humanity. We submit that the opposite is true: technology is a time-tested key to human flourishing. Absent technology, our numbers would be far smaller, our lives half as long, our passions less diverse and less developed, our agency not much greater than that of other animals. Empowered by technology, we humans escaped the eternal present of mere subsistence. Then we learned to cure diseases, invented new ways to express and memorialize our humanity, enabled individual rights, and made it possible to extend our reach beyond the planet.

To accomplish any of these things, we had to envision what could possibly go right.

That is, we developed mindful approaches to adopting new technologies that a) didn’t strangle them in their cribs, while b) not upending society as a whole. That is where we find ourselves now. It’s a tricky balancing act, but we’ve pulled it off before. This is part of what I mean by “architecting intelligence”: we put conditions in place for reasonable constraints to emerge as we learn what technologies can (and cannot) do well.

AI can be for cognitive work what the steam engine was for manual labor: a transformational and ultimately beneficial technology. Given AI’s potential upsides for human flourishing, economic development, and national security, its development and deployment are too important to be left solely to pessimists. Superagency clearly articulates the issues at stake and outlines a practical approach: mindful iterative deployment.

Buy it on Amazon