We Cut for AI. Now We're Paying for It.
A new report from Forrester, a research and advisory firm, finds that 55% of employers regret AI-driven layoffs. As a people scientist, I'm not surprised - and I want to explain exactly why this was always going to happen.
55%
of employers regret AI-driven layoffs
~50%
of AI-attributed cuts predicted to reverse
50K+
jobs cut in AI-justified reductions in 2025
Let me be clear: the companies now expressing regret didn't fail because AI is bad. They failed because they treated a technology adoption question as a headcount math problem - and skipped the hard organizational science in between.
Forrester's "Predictions 2026: The Future of Work" report landed with a thud in the CHRO community last autumn. Fifty-five percent of employers admit they regret layoffs made in the name of AI. Around half of those cuts are expected to quietly reverse - roles returning as offshore positions, contractor arrangements, or simply rehires at lower salaries, sometimes tasked with supervising the very systems meant to replace them. This is not a technology story. This is an organizational failure story.
I've spent my career studying how humans and systems interact inside organizations. What Forrester documented is something we in people science have a name for: premature automation bias - the tendency to act on a technology's projected future potential rather than its demonstrated current capability.
"Most AI-justified layoffs were based on expected future potential, with only a tiny fraction tied to proven existing capabilities."
The six failure modes I keep seeing
Across industries and functions, the regret clusters around a recognizable set of organizational mistakes:
1. Betting on potential, not performance. Executives modeled headcount against AI's projected capabilities two years out, then cut today. Harvard Business Review confirmed this pattern across more than 1,000 executives surveyed. The gap between the roadmap and the reality did the damage.
2. Underestimating tacit knowledge. AI agents perform well on narrow, single-step tasks. They collapse on multi-step, context-dependent, relationship-driven work - precisely the work that experienced mid-level employees do invisibly every day. Companies discovered this only after the people were gone.
3. Hollowing out the talent pipeline. Junior and mid-level roles were cut first because they seemed most automatable. What firms didn't model was that those roles are also the pipeline - the career pathway through which senior capability is grown. Economists and organizational theorists have a term for what followed: a talent doom cycle.
4. Ignoring customer experience fragility. Contact centers and customer-facing functions leaned hard into AI agents, then saw quality collapse when systems encountered emotionally complex or edge-case situations. The resulting churn and reputational damage routinely exceeded the salary savings from the layoffs.
5. Weak ROI within 12 months. Forrester notes that relatively few firms see clear operating-profit gains within a year of AI-justified restructuring. Many AI initiatives are now being postponed or cancelled - analysts predict a significant share of agentic AI projects will be cancelled by 2027 due to cost, unclear business value, and governance complexity.
6. Cultural and trust damage that compounds. Employees and candidates see AI-justified mass cuts as opportunistic. That perception erodes psychological safety, degrades engagement among survivors, and makes every subsequent change initiative harder to execute. The organizational cost is real and long-lasting.
HR itself is not immune
Forrester explicitly warns that HR departments are among the functions most vulnerable to AI-rationalized downsizing - the assumption being that talent management tools can maintain service levels with far fewer people. As a people scientist, I find this particularly troubling. The very moment organizations need more human capacity to manage AI governance, workforce reskilling, and ethical oversight, many are eliminating the function best positioned to do that work.
There is also a pattern I'd call "AI-washing" - using AI as the public rationale for reductions that were, in reality, driven by financial pressure or strategic pivots that would have occurred regardless. This is now drawing media, academic, and employee scrutiny. It corrodes trust in leadership in ways that generic restructuring announcements historically have not, because it introduces a layer of perceived dishonesty about the cause.
What the reversal actually looks like
The "quiet reversal" Forrester describes is not triumphant. It is messy. Roles return as offshore positions at lower wages. Former employees are re-engaged as contractors. Institutional knowledge that took years to build has to be painstakingly reconstructed. Some of it cannot be recovered at all.
The companies adapting most effectively are moving toward hybrid operating models: AI handling repetitive, low-risk, high-volume tasks; humans handling exceptions, complex judgment calls, and relationship-intensive work. Hiring shifts toward people who can orchestrate AI tools - not because those humans are cheaper, but because the work genuinely requires a combination that neither party can supply alone.
What I'm telling leadership teams today
First, distinguish between task automation and role elimination. Most jobs are portfolios of tasks. AI may competently handle 30–40% of a role's tasks, but that rarely means the role itself is redundant - it often means the role changes. Design for that change rather than eliminating the person.
Second, protect your pipeline. The junior roles that seem most automatable are also the roles that produce your future senior talent. Model the long-term cost of pipeline hollowing before treating entry-level headcount as a cost-cutting lever.
Third, invest in the humans who will manage AI. Governance, ethics, reskilling, and change management capacity cannot be automated. Cutting HR and people operations to fund AI adoption is precisely backwards.
The 55% regret number is not a verdict against AI. It is a verdict against the organizational shortcuts taken in its name. AI is genuinely powerful. But it is not a replacement for the organizational science, the change design, and the people investment that successful transformation has always required.
Data cited: Forrester "Predictions 2026: The Future of Work"; Harvard Business Review executive survey (2026); Gartner agentic AI projections. This article represents the author's professional analysis and perspective.

