AI & The Future of Life, The Universe & Everything: A Reading List

By Insight Platforms

There are lots of books, essays and articles about AI and its potential impact on everything: work, the economy, nation states, global security, even the future of the solar system.

If you read all these, it’s easy to get a little freaked out about the future. But knowledge is power and, hopefully, something of an antidote to AI anxiety.

If you want a hyper-distilled version of my takeaways from these, join this webinar at the July 2025 Demo Days virtual event (or watch the replay if you’re reading after the event).

Depending on what you read, we’re heading for either the collapse of civilisation or the dawn of a digital utopia. Totalitarian surveillance and human irrelevance; or a future of post-scarcity abundance and cognitive liberation.

It’s a hell of a range of outcomes.

Books

Nexus – Yuval Noah Harari

This is very readable but it’s nothing like Sapiens. It’s a bit darker, and more philosophical. But still grounded in history.

The big idea is that information networks are not really oriented towards truth; they are much more about power. And in the era of AI, that power could be held by a very small number of people (or machines).

Superintelligence – Nick Bostrom

This one’s about 10 years old now. The first half of it is accessible; the second half drifts off into arcane philosophy. The big idea is that recursively self-improving AI – the kind that programs better versions of itself – poses huge risks of runaway behaviour if you give it the wrong objectives. Especially if it’s connected to the real world via advanced robotics. A bit like the Sorcerer’s Apprentice.

This book gave birth to the paperclip problem: imagine giving a superintelligent AI the objective of maximising paperclip production. It eventually transforms everything into paperclips: every kind of matter, all of us, all of Earth, all of the solar system. This is what’s meant by the alignment challenge.

The Coming Wave – Mustafa Suleyman

This one covers both ends of the spectrum. It’s a bit long – like most of these books – but it does a good job of weaving together the narratives of these exponential technologies: AI, synthetic biology, autonomous systems.

He explores plenty of the upsides – especially in healthcare, agriculture and climate change (AI plus solar-powered robots = an inexhaustible labour supply) – but he also maps out the challenges for our governance models. To his credit, this is not just a shoulder shrug: Suleyman suggests ten ways to help address the possible downside risks to employment, social cohesion and global peace.

Co-Intelligence – Ethan Mollick

This one is much more grounded in near-term uses for AI, and a lot less scary than some of the other books on this list. If you don’t already follow Ethan Mollick on LinkedIn, you should: he is a great source of inspiration for pragmatic AI use cases.

His main point in this book is that we’re entering a golden age of human-AI partnership — if we embrace it.

He’s very glass half full, and pitches AI not as an existential threat but as a co-pilot: a multiplier for human creativity and productivity. The opportunity is to augment knowledge workers, democratise expertise, and create value much faster. Here’s hoping.

Deep Utopia – Nick Bostrom

Like Superintelligence, the first half of this is great – the second half is turgid.

This is another big thought experiment about what could happen if everything goes well with the development of advanced superintelligence.

Imagine a world in which AI and robotics – powered by sunlight – take care of everything. It’s a world of infinite abundance, and we quickly move to a ‘post-work’ existence. But beyond that, it’s a ‘post-instrumental’ world – our infinitely capable robot helpers take care even of the stuff we might want to do for leisure.

Then we meld with AI to become a hybrid digital life form. And the drugs in this new existence are like nothing anyone’s ever experienced. Something to look forward to then.

The Exponential Age – Azeem Azhar

I think in the UK this one is just called Exponential. This is another largely optimistic take on the potential for technology to improve our lives.

Azhar brings in a bunch of useful comparisons from the history of technology, and some rules of thumb that show how exponential growth and recursive improvements drive change.

His main thesis is that all this stuff can dramatically improve lives by reducing scarcity, tackling disease, solving political problems … but society, governments and regulatory frameworks are on the back foot because they’re not able to adapt quickly enough. There’s going to be a crunch.

The Singularity Is Nearer – Ray Kurzweil

Kurzweil is a veteran of tech boosterism, although to be fair a lot of the predictions he made through the 80s and 90s are coming to fruition in a time scale that many others doubted.

This is a follow-up to his 2005 book The Singularity Is Near. The most annoying thing about this one is that he can barely make it through a page without referring back to something he said in the last book.

His big idea is that by the 2030s, humans will merge with AI, live indefinitely, and access intelligence at planetary scale. Immortality and infinite intelligence beckon – but not like Swift’s Struldbrugs, condemned to age forever. The miracles of synthetic biology and AI-powered nanobot healthcare will have sorted all that out.

Machine, Platform, Crowd – Andrew McAfee & Erik Brynjolfsson

A bit more near-term and down-to-earth, this one: it’s all about the power of AI and advanced technologies to re-order corporate power structures and turbo-charge creativity and innovation.

Algorithms outperform managers, crowds beat experts, open platforms decentralise innovation. This means it’s easier for more people to do more stuff. Everything gets democratised. We can all make great things and improve the world. Hurray.

Life 3.0 – Max Tegmark

Another one that’s coming up on ten years old, but still very relevant. Tegmark is a professor at MIT and President of the Future of Life Institute – one of the think tanks most anxious about catastrophic AI outcomes.

A bit like Superintelligence, it is interplanetary in scope: once AIs start to build smarter AIs and we connect them to the real world through robotics, they’ll begin to build ever better capabilities for harnessing energy, driving forward science, and exploring the universe.

The problem – just as in the paperclips thought experiment – is that we are a blind Prometheus: we don’t see that the fire we play with could burn down everything, so we’re not building adequate safeguards for the future. Cheerful stuff.

Articles & Essays

The first two of these are relentlessly positive. They will also give you a window into the minds of the fabulously wealthy techno-utopians who sit at the top of multi-billion-dollar AI corporations.

The Gentle Singularity, Sam Altman (CEO, OpenAI)

This article lays out Sam’s vision for the next decade or so. AI will transform humanity. Sure, a few people might have to lose their jobs and, you know, that could hurt a little bit. But we’ll all be fine because of the abundance.

“The rate of new wonders being achieved will be immense.”

Machines of Loving Grace, Dario Amodei (CEO, Anthropic)

What would a world with powerful AI look like if everything goes right?

This long-form essay from Amodei is much more considered and well-reasoned than Sam’s blog post. It covers five key areas of AI’s impact in a hypothetical future where things go well: biology and health; neuroscience; economic development and poverty; peace and governance; work and meaning.

The jury is still out on whether Anthropic’s safety orientation is real or a marketing gimmick; but if you give him the benefit of the doubt, there is lots here to feel optimistic about.

AI 2027, various AI academics, researchers and ethicists

This is a long-form exploration – with its own website – of the potential near-term effects of the dramatic improvements in AI model capabilities that we’ve seen. The recursive self-improvement mentioned earlier begins to reach escape velocity in 2026; AI permeates all aspects of the economy, governmental institutions and scientific research; and it re-invents multiple versions of itself to the point where humans can no longer ‘peer inside’ at all.

At this point there’s a ‘choose your own adventure’ fork in the road: as a reader, you can choose to slow down AI development or continue on the current path.

Before you jump right in and decide which path to choose, I strongly suggest reading another of Dario Amodei’s essays, The Urgency of Interpretability. TL;DR: even the people building these advanced LLMs don’t understand how they arrive at their outputs, and that is a very dangerous place to be. Oh well.

Final Thoughts

Are you a bit freaked out? Sorry. I think it’s inevitable when properly engaging with the implications of AI.

But hopefully there’s some medicine for you in this webinar. Join us live at the July 2025 Demo Days, or watch the recording if you can’t make it.
