Introduction

Welcome back to Laboratory. This week we unpack Nature’s feature series “The future of AI” (14 November 2025), which brings together six voices at the sharp end of artificial intelligence: Mustafa Suleyman at Microsoft AI, Pushmeet Kohli at Google DeepMind, Timnit Gebru at DAIR, Jared Kaplan at Anthropic, Anima Anandkumar at Caltech, and Amandeep Gill at the United Nations. Instead of a single narrative about progress, the series offers a prism: six different vantage points on what AI is doing to science, work, security, and power.

In this briefing, we chart:

  • The crossroads AI has reached as models, data centers, and workflows become infrastructure
  • What each of the six voices (Suleyman, Kohli, Gebru, Kaplan, Anandkumar, and Gill) brings to the debate
  • Three tension lines that will shape the next decade
  • What it all implies in practice


Executive Summary

Artificial intelligence is no longer a speculative technology. Trillions of dollars in capital and infrastructure are flowing into models that already touch hundreds of millions of people. Yet the question that quietly sits behind every product launch and policy memo is deceptively simple: what is all of this actually for?

Nature recently brought together six influential voices who sit at very different points in the AI ecosystem: Mustafa Suleyman at Microsoft AI, Pushmeet Kohli at Google DeepMind, Timnit Gebru at DAIR, Jared Kaplan at Anthropic, Anima Anandkumar at Caltech, and Amandeep Gill at the United Nations. Read together, their perspectives reveal less a single narrative and more a set of tension lines that will shape the next decade: ambition versus restraint, openness versus consolidation, productivity versus precarity, innovation versus inequality.

The emerging picture is one of AI as a general purpose capability that will be deeply embedded in scientific discovery, knowledge work, public services, and security architectures. But this future is not prewritten. Decisions about who controls the infrastructure, how risks are governed, which communities are listened to, and what is shared openly will heavily influence whether AI becomes a tool for genuine global flourishing or simply amplifies existing power imbalances.


1. A crossroads for AI

The six interviewees agree on one core point: AI is no longer confined to the lab. It is moving into a phase where underlying models, data centers, and integration into workflows form a new layer of digital infrastructure.

At this crossroads, three questions dominate:

  • Who controls the underlying models, data centers, and platforms?
  • How are the risks governed, and by whom?
  • Will the gains be broadly shared, or concentrated in a few hands?

The six perspectives in the Nature series can be read as different, sometimes conflicting, answers to these questions.


2. Mustafa Suleyman: Copilots, platforms, and concentrated power

As chief executive of Microsoft AI, Mustafa Suleyman embodies the new industrial phase of AI: large scale, deeply integrated products such as Copilot that sit across the operating system, productivity suite, and cloud.

His view highlights three structural shifts:

Suleyman’s optimism about the potential of AI to augment human capability coexists with concern about misuse and misalignment. Yet his answers make clear that, in practice, platform incentives and market share are inseparable from debates about safety and responsibility.


3. Pushmeet Kohli: AI as a new instrument for science

Pushmeet Kohli, leading AI for science at Google DeepMind, represents a different frontier: AI as a scientific instrument. The success of AlphaFold in protein structure prediction is a preview of what happens when machine learning is tuned not just for text or images, but for physical and biological systems.

Several themes stand out:

Kohli’s perspective underscores a crucial point: the most transformative AI use cases may unfold in domains that are invisible to everyday users, reshaping drug discovery, materials science, and climate modeling long before the public sees a dramatic consumer app.
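To make the “new instrument” point concrete, here is a minimal sketch, not drawn from the Nature piece, of how a researcher might retrieve an AlphaFold-predicted structure from the public AlphaFold Protein Structure Database rather than running the model themselves. The endpoint and response field names are assumptions about the EBI-hosted API and should be checked against its current documentation.

    # Illustrative sketch: fetch an AlphaFold-predicted structure for a UniProt entry.
    # Endpoint and field names are assumptions about the EBI-hosted AlphaFold DB API.
    import requests

    uniprot_id = "P69905"  # example accession: human haemoglobin subunit alpha
    url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}"

    response = requests.get(url, timeout=30)
    response.raise_for_status()
    records = response.json()  # the service returns a list of prediction records

    for record in records:
        # Use .get() so the script degrades gracefully if field names change.
        print(record.get("entryId"), record.get("pdbUrl"))

The point is less the specific call than the workflow it implies: structure prediction becomes a lookup in a shared scientific resource rather than a bespoke experiment.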


4. Timnit Gebru: Power, justice, and who gets a say

Where Suleyman and Kohli focus on capability and opportunity, Timnit Gebru starts from power and inequality. As head of the Distributed AI Research Institute, her work foregrounds those who usually sit at the receiving end of technologies rather than at the design table.

Key fault lines she emphasizes include:

From this angle, the “future of AI” is less about speculative superintelligence and more about who gets harmed or empowered next year, in specific contexts like migration control, social services, and labor platforms.


5. Jared Kaplan: Frontier models, labor markets, and safety

As co-founder and chief science officer at Anthropic, Jared Kaplan stands at the center of the frontier model race. His perspective highlights the dual nature of systems like Claude: they are both powerful general purpose tools and potential sources of systemic risk.

On the opportunity side, Kaplan expects:

On the risk side, he focuses on:

He is relatively supportive of stronger regulation, particularly in areas like safety standards, red teaming, and incident reporting, but also wary of frameworks that freeze market structure in favor of incumbents. This mirrors a broader industry tension: how to regulate without entrenching.


6. Anima Anandkumar: Open research, academia, and the next generation

Anima Anandkumar, based at Caltech with a track record at Nvidia and Amazon, occupies a bridge position between industry and academia. Her focus is on ensuring that AI remains a scientific field, not just a product pipeline.

She stresses several levers:

Her view points toward a hybrid ecosystem in which public institutions, open source communities, and private labs each have a distinct, complementary role, rather than academia becoming a mere talent funnel for a few firms.


7. Amandeep Gill: Global rules for a global technology

As the UN’s special envoy for digital and emerging technologies, Amandeep Gill approaches AI not as a product or research topic, but as an issue of international security and governance. His background in non-proliferation shapes his thinking.

Several parallels and contrasts stand out:

In this framing, AI is part of a broader struggle to update multilateral institutions for a world where intangible, rapidly evolving technologies shape everything from trade to warfare.


8. Three tension lines that will shape the next decade

Reading these six perspectives together, three major tension lines emerge. They are not binary choices, but axes along which policy and strategy will move.

  1. Acceleration vs deliberation

    • Companies like Microsoft and Anthropic are under pressure to ship products, grow user bases, and monetize infrastructure investments.

    • Researchers like Gebru and diplomats like Gill push for slower, more deliberate deployment in high stakes domains, with stronger guardrails and more community input.

  2. Centralization vs pluralism

    • Frontier models and cloud platforms are inherently capital intensive, which pushes toward concentration of power in a handful of firms and governments.

    • Open research, academic participation, and capacity building for low-income countries are needed to keep the ecosystem diverse and contestable.

  3. Productivity vs justice

    • Many visions foreground gains in efficiency, creativity, and scientific discovery.

    • Others highlight that without explicit redistribution, worker protections, and legal safeguards, those gains can translate into higher profits for a few and greater precarity for many.

How these tensions are negotiated will do more to determine the “future of AI” than any individual product launch.


9. Conclusion

If there is a common thread across these six voices, it is that passivity is not an option. AI is not a neutral wave that society must simply surf. It is a collection of design decisions, business models, research agendas, and legal frameworks that can be nudged in better or worse directions.

Several practical implications follow:

  • Decisions about who controls infrastructure, how risks are governed, and what is shared openly deserve deliberate public scrutiny rather than settlement by default market dynamics.
  • Keeping the ecosystem diverse requires sustained investment in open research, academic participation, and capacity building beyond a handful of firms and governments.
  • Productivity gains need to be paired with worker protections, legal safeguards, and mechanisms for sharing the benefits, or they risk deepening precarity.
  • Multilateral institutions need updating so that global rules can keep pace with a technology that shapes everything from trade to security.

The Nature series does not offer a single answer to what AI is “for”. Instead it maps the contours of an argument that will run through the next decade: whether this technology ends up reinforcing old hierarchies, or whether it becomes part of a serious project to expand human capabilities and reduce global inequalities. That choice will not be made by algorithms. It will be made by us.


For the full details: Future of AI


Thanks for reading Artificial Intelligence Monaco! If you liked this post, please consider subscribing to support my work.
