Supremacy by Parmy Olson – Nonfiction Book Review
An award-winning but ultimately disappointing AI industry overview. While offering engaging parallel coverage of OpenAI and DeepMind, it presents strong opinions without sufficient evidence. Skip this—read The Optimist, Empire of AI, or The Coming Wave instead.
Just finished: Supremacy: AI, ChatGPT, and the Race That Will Change the World
Author: Parmy Olson
Narrator: Lisa Flanagan
Category: Technology, Business, AI
Publication Year: 2024
Runtime: 9 hours and 25 minutes
My Rating: ★★☆☆☆ (2/5 stars: Disappointing)
The Overview
Journalist Parmy Olson traces the parallel evolution of OpenAI and DeepMind from startup idealism through Google's acquisition of DeepMind and Microsoft's deepening partnership with OpenAI. The thesis: Sam Altman (OpenAI) and Demis Hassabis (DeepMind) began with noble intentions to build AI for humanity's benefit, but corporate entanglement with Microsoft and Google corrupted these ideals, transforming both companies into profit-focused entities that prioritize growth over safety. The book covers key technical developments (transformers, GPT), personality clashes (Elon Musk's involvement with, and exits from, both companies), and the 2022 ChatGPT launch that intensified the competitive race. Written in an engaging, accessible style that prioritizes narrative flow over technical depth.
Target Reader: General audiences seeking an accessible introduction to the AI industry race
The Deep Dive
My Take
I’m going out on a limb here: even though this book received broad praise, I don’t recommend it. For the record, I work in the AI industry, and over the past few months I’ve read several AI books in search of the best one.
While I agree with reviewers who say this is a gripping story, the book glosses over important details. Olson’s thesis—that corporate influence corrupted noble AI ideals—may be correct, but she presents it with conviction rather than documentation. Having just read The Optimist (a detailed, even-keeled Altman biography), I caught multiple oversimplifications in Supremacy. Although I appreciate that Olson offers a valuable, cautious perspective on AI, the book risks misleading readers unfamiliar with the industry, who could easily mistake opinion for fact.
Take the coverage of Sam Altman’s firing and rehiring as OpenAI CEO. Olson notes in a few sentences that the board’s concerns included Altman’s exploration of other ventures, but she provides no depth for assessing how central those concerns were to the board’s decision. Other detailed accounts of the episode (like The Optimist) don’t emphasize these factors, suggesting they may be less central than Olson implies—but without more detail, readers can’t evaluate this for themselves.
The book’s treatment of AI safety versus AI ethics is another example. AI safety concerns whether AI superintelligence could destroy humanity; AI ethics addresses current harms like bias, worker exploitation, and environmental impact. There is much debate in the industry about how to prioritize these concerns, yet Olson covers this in mere sentences—an oversimplification that any AI expert would find inadequate.
One genuinely valuable contribution: Supremacy provides the best parallel coverage of OpenAI and DeepMind’s simultaneous evolution that I’ve encountered. The account of how Google researchers created the transformer architecture (detailed in the 2017 paper “Attention Is All You Need”), which OpenAI then expanded into the GPT concept, offers useful industry context. Elon Musk’s pattern—investing in DeepMind, growing frustrated with his lack of control, promising investment in OpenAI, again becoming frustrated, then pursuing his own AI efforts through Tesla and xAI—is also illuminating.
Timing matters: when it was published in September 2024, this may have been the best accessible AI overview available. Since then, several AI books offering more rigorous, evidence-based analysis have appeared.
Why This Book Matters Now
Understanding AI development—both its potential and its dangers—matters for everyone, not just tech industry insiders. As AI tools rapidly integrate into workplaces, schools, and daily life, informed public discourse about both AI safety (existential risks from superintelligence) and AI ethics (current harms from bias and exploitation) has never been more critical.
That said, skip Supremacy and jump to the Similar Reads section for recommendations of the latest and best AI books.
The Audiobook Experience
Narrator Performance
Voice Quality: Good, engaging narration
Ease of Comprehension: Clear delivery and easy to follow
Overall Narrator Rating: 3/5 stars (Good)
Audio Production Quality
Chapter Organization: Chapters not labeled by title, only numbers
Length: 9 hours 25 minutes is a typical nonfiction runtime.
Audiobook vs. Print Recommendation
Audio works well for this narrative-driven book, though print would serve equally well.
Is This For You?
I don’t recommend this for most people since better intros to the AI industry are available.
The exception: If you specifically want the best side-by-side coverage of OpenAI and DeepMind’s parallel innovation—and you have enough AI industry knowledge to critically assess the author’s opinions rather than accepting them as fact—this is the only book I’ve found that covers both companies’ evolution in depth.
For everyone else: Skip this and choose a more recent or topical book from the Similar Reads instead.
Similar Reads
- The Optimist by Keach Hagey – Best neutral, detailed introduction to OpenAI and ChatGPT’s development.
- Empire of AI by Karen Hao – Another option for an AI industry intro. Like Supremacy, it’s critical of the AI industry, but it’s better researched. It focuses on AI ethics (bias, worker exploitation, environmental impact). Very long and in-depth.
- The Coming Wave by Mustafa Suleyman – AI industry concerns from a higher-level, policy perspective
- The Thinking Machine by Stephen Witt – Less directly about ChatGPT and the AI industry. This history of Nvidia and how it enabled AI’s takeoff offers excellent strategic leadership insights.
- Co-Intelligence by Ethan Mollick – Beginner’s guide focused on how individuals can interact with AI
- If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares – AI safety and superintelligence risks from long-time industry experts
The Bottom Line
Despite winning the 2024 FT Business Book of the Year Award, this book presents strong opinions about AI’s corporate corruption without sufficient evidence. The investigative journalism approach creates engaging, accessible reading, but that accessibility comes at the cost of rigor. Published in September 2024, it may have been the best broad AI overview available at the time—but superior alternatives now exist. Skip this and read The Optimist, Empire of AI, or The Coming Wave for more substantive AI industry analysis.
Where to Listen
Quick note: This review includes affiliate links to help support Lark’s Edition. If you purchase through these links, I may earn a small commission at no extra cost to you. I only recommend audiobooks and resources I’ve personally experienced and believe add value.
- Audible – Get your first audiobook free with Audible’s trial
- Libro.fm – Support independent bookstores
- Print version from Amazon
- Kindle version from Amazon
For Your Team or Book Club
- How do you think powerful technologies like AI should be governed?
- How should we balance AI safety concerns (existential risks) versus AI ethics concerns (current harms from bias and exploitation)?
- Given the rapid pace of AI development, what sources can we rely on for staying informed about the industry’s evolution?
- Do you think corporate partnerships with Microsoft and Google genuinely corrupted OpenAI and DeepMind’s original missions?