AI Didn't Break the Academy. It Revealed What Was Already Broken.
Inside Higher Ed convened five voices to predict what AI means for higher ed in 2026. Read together, the responses orbit a question none of them quite name: whether the academy still knows what it is for.
Artificial intelligence is creating an existential crisis in academia, calling into question the financial and philosophical value of education in a world where formerly niche skills and knowledge are becoming increasingly accessible. That's why Inside Higher Ed recently invited five voices to predict what 2026 holds for higher ed. The roundtable features a futurist, a business officer, a learning scientist, an EDUCAUSE researcher, and an ed-tech CEO. Their answers track different concerns, from the AI bubble to scaling and ROI, faculty disillusionment, partnerships, and system fragmentation. Read as a chorus, the predictions orbit a question none of them quite name: whether the academy still knows what it is for.
On Bryan Alexander's bubble dependence. Alexander frames academia's path as contingent on whether the AI bubble pops. If it does, internal and external pressure to deploy AI slackens. If it doesn't, academic AI efforts continue to expand.
Pandora's box is already open. AI is deeply embedded in enterprise workflows, university partnerships, and the daily working life of millions of professionals. Once a technology reaches that level of integration, public opinion stops being especially consequential to its trajectory. Meta is the obvious example: its products were too integrated into daily life to be abandoned even amid backlash on the scale of Cambridge Analytica. AI has not yet generated that level of directed backlash. The pressure on higher ed to make AI work is unlikely to be relieved by either a market correction or a shift in public mood.
In this light, the question isn't really whether the bubble pops. The question is what the academy offers when its educational monopoly is broken — and that break is already underway, regardless of what the markets do.
On Lindsay Wayt's pace problem. Wayt describes the pace of change as the biggest challenge institutions face, and it exacerbates everything else. I agree.
And the observation generalizes well past universities. I write this as someone whose job depends on staying current on AI, and I cannot do that job without the support of AI itself. The pace is genuinely impossible without AI assistants handling research, triage, and summarization in the workflow.
The institutions Wayt describes are not piloting AI purely because they want to. They are piloting because they cannot afford not to. The uncomfortable fact is that even when AI shows no measurable return, you cannot simply opt out without losing relevance. That is the position higher ed finds itself in, and it is the position every organization built around knowledge work is in.
On Rebecca Quintana's disillusionment. Quintana is right that disillusionment is coming, and right that the moment invites broader conversations about the purposes of education. Reading between the lines, I think the question she is gesturing at is sharper than what she names directly.
She writes that students are using AI "in ways that do not support their learning and growth." The premise hidden in that sentence is that learning and growth are presently the explicit goals of the system. They may be the values the academy advertises, but the American university has been a credentialing pipeline for the labor market for generations. I got my first computer science degree because it was a job pipeline, not because I was so taken with Python.
AI is not degrading the academy's purpose. It is revealing that the purpose was already eroded. The liberal-arts ideal did not survive the conversion of the university into a job-training program with prestige pricing. AI is forcing Quintana's question because the trade — pay tuition, get credential, get hired — is breaking on the employer side. Critical engagement matters more now precisely because the credential is worth less. And I suspect this is the implication she is really driving at, even if it would be uncouth to say so outright in an IHE article.
On Mark McCormack's partnerships. McCormack argues that ed-tech vendors and academic communities need to build sustained connections grounded in shared governance. I disagree, mildly, from experience.
As a private-sector AI researcher who has kept a foot in the academic world for years, I have spent considerable effort looking for ways to help academics with AI work. The academics I talk to often cannot articulate what they want from the partnership, or what they bring to it. Even when a private-sector researcher walks in and asks plainly — how can we help each other — the most common response is a shrug, followed in some cases by a separate lament that Silicon Valley is ruining academia.
That is not a personal failing of the academics involved. It is a structural issue. For the first time in modern history, industrial AI research is ahead of academic AI research. Frontier models are being built behind locked doors at Anthropic, OpenAI, and Google. Academia used to chart courses before they were commercially proven, and that head start was the value it offered industry. Now industry is doing the charting and the building.
The partnership McCormack describes has to rest on a value proposition academia has not figured out. Academia is being asked, by this moment, to articulate what it is actually for and what it actually offers — not just to Silicon Valley, but to society broadly. Until that work happens, the partnerships will be transactional, if they are established at all.
On Joe Abraham's fragmentation pitch. Abraham predicts that institutions will use AI to address the fragmentation of their administrative systems — advising, enrollment, financial aid, billing, the LMS — through agentic orchestration and workflow automation. It is the most agreeable take in the roundtable, mostly because it points at something obvious: university administration has been a quiet disaster for as long as universities have existed. That is a real and persistent problem, and to his credit, Abraham is actually working on solving it.
But notice what gets conceded along the way. In a piece convened to address AI's existential challenge to the academy, the most concrete answer on offer is workflow automation. If "we will fix the billing systems" is the strongest pitch for AI's contribution to the academy in 2026, the existential question has already been answered without anyone needing to ask it.
My own prediction. As technical and informational skills commoditize, what gains value is precisely what doesn't: perspective, taste, the capacity to see what is missing from an AI's output.
My doctoral dissertation focused on Critical Computing, a discipline within the larger umbrella of Human-Computer Interaction focused on applying critical scholarship and humanistic approaches to technology design. For most of my career, "computing" was the part that carried weight and "critical" was the part employers tolerated. Since AI proliferated, the inversion has happened. The "computing" is increasingly accessible to anyone with a Claude account. The "critical" is what gives me a continued edge — the ability to challenge AI output, to see what is not being said, to argue why one response is better than another when both are technically correct.
The disciplines that gain in 2026 and beyond will be the ones that train people to make value judgments on subjective material — the humanities, the social sciences, art history, writing, design. The work of saying "this is a good response and this is a bad one" on questions that have no provable answer is precisely what AI cannot do for you.
That is not a retreat into the past. It is an acceleration into a labor economy where unique perspective is a privileged form of value, as I discussed previously. The academy's path forward is not optimizing operations or scaling pilots. It is recovering an old purpose: training people to think, judge, and challenge, including by challenging AI itself.
Academia bemoans an existential crisis. It may actually be receiving a liberatory opportunity.