Everyone is asking what AI can do.
The real question is what it does to us.

Trillions of dollars are being invested in artificial intelligence on the promise that it will revolutionize the future of work. And yet that promise seems increasingly uncertain. Headlines can't agree on whether AI will take all of our jobs or whether it is all hype, a financial "bubble" ready to burst at any moment. Workers are anxious, executives are spending money they can't justify, and there is little clarity about what's actually happening.

The reason for this confusion is that most contemporary discussions of AI-driven work miss the forest for the trees, focusing on AI's technical particulars instead of the social consequences of its ongoing disruption of our relationship to work.

Our identities, teams, organizations, and economies are built on the premise that humans work. When that premise no longer holds, those systems will collapse unless they are fundamentally redesigned for the new paradigm. Discussions of the future of work that focus on benchmarking LLMs against white-collar workers or chess professionals hide impending societal disruption beneath layers of statistical jargon. Worse, comparing LLMs to human experts implies that AI is not (and might never be) "good enough" to form the backbone of the global workforce. That is not true.

Having worked at the cutting edge of technology development for over a decade, my assessment is that LLMs are already capable of replacing the vast majority of workers. Quests for superintelligence will continue, and improvements to accuracy and efficiency will remain important areas of investment. But given the scope of the encroaching disruption, incremental technical advancement will matter far less to our collective future than social and economic policy. And that policy work is not being pursued with the same urgency.

Although my position is not yet mainstream, it is informed by established expertise. As a Senior Researcher at Google and Ethnographic Lead for its DORA research team, I led the research that produced the world's first validated model of the conditions that determine success in AI-driven work. The findings showed that AI users' outcomes were determined by social dynamics within their organizations: communication clarity, prioritization of end-user needs, healthy process norms. They were not determined by which foundation model the organization used, how much training workers received, or how well they structured their prompts.

In the course of my career, I have gathered hundreds of hours of data from the people at the forefront of this paradigm shift: software developers, who are being laid off in the hundreds of thousands annually. What I've found is that their concerns about AI-driven work are rarely technical. Conversations don't stay on hallucinations or accuracy for long. They quickly turn to what they will do when AI takes their jobs, how much fulfillment they draw from their work, how their expertise is being devalued, and how their sense of identity and community is eroding.

Software developers have held one of the most privileged positions in the global economy for a generation. If these workers — technically sophisticated, highly compensated, well educated — are already experiencing this level of disruption to their professional identities, nobody is exempt. What I have learned from studying software developers is not a niche or isolated concern. It is a preview.

The most urgent questions of our moment are not about AI's capabilities, but about our own: How will people react when human labor is obsolete? Will people collaborate when the will of one person can be executed by a horde of agents? Will we form shared identities if we don't need to work toward shared goals? Will an abundance of artificial labor usher in a utopian post-scarcity society, or upend the foundations of our world?

The answers are not predetermined. A positive future is well within reach. But it is not automatic or inevitable.

Achieving that future requires honest, informed leadership, and the public conversation is not providing it. Influencers build audiences on fear. Vendors promise transformation that serves their margins. Academics with real expertise study the questions but never commit to a position. Consultants say whatever their partners pay them to say.

Expertise that stays in a journal is irresponsible right now. And advice without expertise is dangerous.

In the absence of credible people willing to stake their authority on speaking honestly, existential decisions for humanity are being made by people with something to sell or nothing to lose. The moment demands experts who say what we actually think.

If this resonates, I'd like to hear from you.