The Coming Wave by Mustafa Suleyman and Michael Bhaskar – Nonfiction Review
The best policy-focused AI book out there — grounded, urgent, and accessible to non-insiders — though, two-plus years after publication, it’s starting to show its age.
My Rating: ★★★★☆ (4/5 stars: Great)
- Category: AI, Policy, Technology
- Published: 2023
- Runtime: 12 hours
I read this a year ago and liked it. Coming back for a reread — now that I’ve gotten through most of the recent wave of AI books — I like it slightly less. I’ve been trying to figure out whether that’s because these ideas have become common knowledge, or because I’ve genuinely outgrown the book (I work in AI, so I’m in this space constantly). Probably both. Either way, it still earns its rating, because it’s a valuable, policy-focused perspective on an industry we all need to pay attention to.
The core argument is straightforward but worth sitting with: AI is a general-purpose technology, which means it can be applied almost anywhere — for good outcomes and catastrophic ones. That’s not the interesting part. The interesting part is the second trend Suleyman layers on top of it: trust in government is at historic lows at the exact moment that technology companies are gaining unprecedented power. Which means we have a very powerful, very adaptable technology arriving into an environment with no reliable mechanism to govern it. The proposed solution — international councils to monitor development and use — sounds a little like a policy white paper. But the diagnosis of the problem is a simple, useful framing in a space with lots of pundits and perspectives.
Exploring how this situation will evolve, Suleyman describes a containment dilemma: governments trying to prevent the worst AI outcomes will be tempted to deploy AI surveillance to do it. The cure and the disease start to look similar, and there’s a potential for them to spiral. That’s the tension the book doesn’t fully resolve, but at least names clearly.
A note on scope: Suleyman’s view on AI safety differs sharply from the “if anyone builds it, everyone dies” camp. He isn’t dismissing existential risk — he just thinks near-term harms like deepfakes, bioweapons, and the erosion of state authority are more pressing than ASI scenarios. In a space with many voices, his is a level-headed one.
The book runs about an hour or two longer than it needs to. The middle sections on related technologies — quantum computing, robotics, nanotech — feel more like due diligence than essential reading. You can skim those without losing the thread, though unfortunately there’s no one section you can just skip.
Put It To Work
- General-purpose technologies require general-purpose governance: Suleyman’s framework for why AI is hard to contain applies to any technology a company is deploying. If a tool can be used for nearly anything, it needs governance that’s equally adaptable, not a one-time policy.
- A power vacuum is the real risk: The book’s most useful insight for executives isn’t about AI specifically; it’s about what happens when institutional trust collapses at the same time that private power concentrates. That dynamic is already visible in how AI companies are making decisions that used to belong to governments. Worth having a perspective on.
- Containment is possible, but not automatic: Suleyman argues that nuclear non-proliferation is less of a success story than we assume. I was surprised to learn there have been more near-misses than the public generally knows about. The implication for AI policy: we shouldn’t assume existing governance frameworks will transfer.
The Audiobook Experience
★★★☆☆
Author-narrated by Suleyman. The narration is solid but not remarkable.
Medium multitasking potential: the narrative sections move easily, but the chapters where he lists characteristics, policy recommendations, or categories of risk demand attention, or you’ll lose the thread. No major navigation issues.
Audio or print? Either works; this isn’t a book where the narration elevates the material.
Read It or Skip It?
Read it if: you want the clearest policy-level framing of AI risk — what the dangers are, why they’re hard to govern, and what realistic mitigation might look like.
Skip it if: you follow AI closely and have already internalized the general-purpose technology argument; the ideas will feel familiar and the pacing will frustrate you.
Related: The Optimist by Keach Hagey for the human story behind OpenAI’s rise.
Listen Now
I only recommend audiobooks and resources I’ve personally experienced. This post contains affiliate links — if you purchase through them, I earn a small commission at no extra cost to you.