
A growing body of research suggests that current enterprise AI usage isn't actually improving organizational productivity. Not meaningfully. Not in any way that shows up on the bottom line.
Depending on where you sit, that lands differently. If you're a CIO who's spent 18 months standing up a GenAI program, it stings. If you're a CFO still waiting for ROI, it's vindication. And if you work anywhere in the physical economy — energy, infrastructure, housing, healthcare, cities — it should worry you, because what's happening in software companies right now is about to happen to you, only worse.
The technology is ready. The operating models aren't. That gap is becoming the most expensive problem on every leadership team's desk, and most teams don't know they have it yet.
The Numbers
Roughly 9 in 10 firms report no measurable firm-level impact from AI on productivity or employment over the past three years. — NBER Working Paper No. 34836 (Yotzov et al., Feb 2026), survey of ~6,000 CEOs, CFOs, and executives across the US, UK, Germany, and Australia
77% of employees using AI tools say those tools have added to their workload. — Upwork Research Institute, "From Burnout to Balance," July 2024
Employees at a ~200-person US tech company observed over eight months — finding: AI doesn't reduce work, it intensifies it. — Ranganathan & Ye, UC Berkeley Haas, Harvard Business Review, Feb 2026

What the data actually says
Three recent studies, all pointing at the same pattern.
The NBER study (Yotzov, Barrero, Bloom, and colleagues; Working Paper 34836, February 2026) surveyed nearly 6,000 CFOs, CEOs, and senior executives across the US, UK, Germany, and Australia. Around 89% reported no measurable productivity impact, and more than 90% reported no measurable employment impact at their firm over the last three years. Two-thirds of those same executives use AI tools personally, but for only about 1.5 hours a week. One in four doesn't use them at all. So adoption is real, but the impact is somewhere between invisible and imaginary.
Upwork's Research Institute ran a different kind of survey in 2024: 2,500 people across the US, UK, Australia, and Canada, split between C-suite leaders, full-time employees, and freelancers. Among employees using AI, 77% said these tools have actually decreased their productivity and added to their workload. They pointed to three culprits: reviewing AI outputs (39%), learning the tools (23%), and being asked to produce more because AI supposedly made them faster (21%). In the same study, 96% of executives expected AI to boost productivity. That gap — between what leadership thinks is happening and what employees experience — is the real story.
And then there's the study that made everyone stop. Aruna Ranganathan and Xingqi Maggie Ye at UC Berkeley Haas spent eight months inside a ~200-person US tech company, observing people at work and running 40-plus interviews across engineering, product, design, research, and operations. They published their findings in Harvard Business Review in February 2026.
The title does most of the work:
"AI doesn't reduce work. It intensifies it."
— Ranganathan & Ye, Harvard Business Review, Feb 2026
What they found inside that company: people worked faster, took on more, and stretched their days longer — mostly voluntarily. Tasks expanded. The line between work and not-work blurred. Multitasking got worse. People were doing more without feeling any less busy. The researchers warn that what looks like productivity is often just intensity, and intensity isn't sustainable. Over time it leads to cognitive fatigue, worse decisions, and burnout.
That's the paradox. At the individual task level, AI clearly makes people more capable. At the company level, none of that capability is translating into measurable output.
Why capability isn't translating into output
Matthew Fitzpatrick, CEO of Invisible Technologies, has put this well in recent commentary: companies haven't redesigned the workflows, incentives, or job structures that AI was supposed to improve. The model sits on top of a broken process. Everyone produces more. Nothing actually moves through the system any faster. And leadership is left wondering what happened to the ROI.
Most enterprise AI programs aren't really technology programs. They're operating-model programs that have been mislabelled, budgeted, and governed as technology programs. When the operating model doesn't change, technology just makes the dysfunction faster.
A faster version of the same chokepoint isn't productivity. It's a louder chokepoint.
The Labrynth Lens
I keep coming back to the same metaphor: renovating a 1970s house with smart bulbs.
You can put every light on an app and ask the thermostat to tell you a joke. But the plumbing is still galvanized steel, the wiring is still aluminum, the foundation is still cracked. You've upgraded the experience without fixing the house. That's what most enterprise AI rollouts look like right now — and the problem is much bigger in regulated industries, because the 1970s architecture wasn't an accident there. It was a design choice.
Why it gets worse before it gets better
This is the part of the conversation that isn't happening loudly enough.
If the productivity paradox looks this bad inside modern software companies — flat hierarchies, engineering cultures, generous AI budgets, and almost no regulatory drag — think about what it looks like outside that bubble.
Think about permitting, where getting a data center, a housing development, or a transmission line approved can take 18 months and a dozen agencies.
Think about licensing, where one compliance slip can kill a deal.
Think about healthcare operations, where AI gets layered onto an EHR workflow that was itself layered onto a paper chart.
In these environments, the process isn't just inefficient. It was engineered to be cautious. It was built so that no single person at any step could be blamed for letting something unsafe or non-compliant through. That design has real value. It also has a cost, and the cost is speed.
Putting a chatbot on top of a 1970s permitting process doesn't fix the process. It just helps everyone get stuck faster.
TOOL ROLLOUT — an AI wrapper on a legacy workflow
Enterprise license. Prompt library. Reviewers now reviewing AI outputs on top of their original work. Everyone producing more. Nothing shipping faster. Audit trails getting weaker. Regulators staying skeptical.
OPERATING-MODEL REDESIGN — an AI-native, outcomes-based workflow
Decisions, evidence, and precedent treated as first-class objects. Reasoning is transparent and auditable. Regulators can verify what happened and why. Approval cycles go from months to days. Capacity compounds.
This is why AI in regulated industries has to be treated as an operating-model question, not a procurement question. There is no version of "buy more tools" that gets you to productivity here. The workflow has to be rebuilt around what AI can actually do — and, just as importantly, around what regulators can actually verify.
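To make "decisions, evidence, and precedent treated as first-class objects" a little more concrete, here is a minimal sketch in Python. It is illustrative only: the record shapes, field names, and hash-chained audit trail are assumptions about what an auditable decision object could look like, not a description of Labrynth or any other platform.

```python
# Illustrative sketch only: one way to treat a decision, its evidence, and its
# precedent as first-class, auditable objects. All names here are hypothetical.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class Evidence:
    source: str    # e.g. a statute section, inspection report, or filing
    excerpt: str   # the text the decision actually relied on


@dataclass
class Precedent:
    case_id: str   # a prior approval or ruling this decision follows
    summary: str


@dataclass
class PermitDecision:
    application_id: str
    outcome: str        # "approved", "denied", "needs_info"
    reasoning: str      # plain-language explanation a regulator can read
    evidence: list[Evidence] = field(default_factory=list)
    precedent: list[Precedent] = field(default_factory=list)
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def audit_record(self, prev_hash: str = "") -> dict:
        """Serialize the decision and chain it to the previous record so the
        whole trail can be verified end to end."""
        payload = json.dumps(asdict(self), sort_keys=True)
        digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        return {"record": asdict(self), "prev_hash": prev_hash, "hash": digest}


# Every approval carries its own evidence, precedent, and verifiable history.
decision = PermitDecision(
    application_id="APP-1042",
    outcome="approved",
    reasoning="Setback variance consistent with zoning code s.4.2 and ruling Z-2019-88.",
    evidence=[Evidence("Zoning code s.4.2", "Minimum setback may be reduced where...")],
    precedent=[Precedent("Z-2019-88", "Comparable variance granted for adjacent parcel.")],
)
print(decision.audit_record()["hash"][:16])
```

The point of the sketch is the design choice, not the code: when the decision, its evidence, and its precedent are objects in their own right, the audit trail becomes something a regulator can verify mechanically rather than reconstruct from email threads.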
From tool to operating model
The companies pulling ahead right now aren't the ones with the most seats, the most tokens, or the biggest prompt library. They're the ones doing the less glamorous work — pulling the process apart, asking which steps exist because of evidence and which exist because of inherited caution, and putting the workflow back together around outcomes.
A few things tend to become clear when you do that work.
First, transparency stops being a feature and becomes the license to operate. In regulated industries, a black-box answer is worse than no answer at all. If the system can't show its reasoning, cite its precedent, or surface its evidence, no regulator can defend a decision that relies on it. Legal-grade transparency isn't a nice-to-have in this part of the economy. It's the condition of use.
Second, outcomes beat features. "AI-powered" is a marketing line. "Approvals in days, not months" is an outcome. Only the second one is worth paying for. The teams that are winning are measuring end-to-end cycle time, not seat utilization or daily active users.
Third, redesigning beats retrofitting, and it isn't close. AI-native systems built from scratch outperform AI layers bolted onto legacy stacks by orders of magnitude. Retrofitting is the 1970s house with smart bulbs — and in 2026, it's not the cheaper path anymore. It's the more expensive one, because it disguises the real work that still needs to happen.
Fourth, the finance case and the operations case have to be built in the same room. When procurement is modelling ROI on one floor and operations is redesigning the workflow on another, even well-deployed AI stalls. That alignment is half the battle.
Operating-model change, not tool rollout
None of this is an argument against AI. It's an argument against buying AI instead of redesigning around it.
The technology is genuinely good now. It's ready. What isn't ready is the organizational architecture it's being dropped into — particularly in the parts of the economy where regulation, safety, and public trust shape the workflow.
The organizations that will pull ahead from here are the ones doing the boring work right now: finding where the real chokepoints are, being honest with themselves about which steps exist for good reason and which are just inherited caution, and rebuilding the operating model so AI isn't bolted on — it's built in.
Everyone else is looking at another three years that feel a lot like the last three. More seats, more tools, more intensity, same output.
Execution is the only real moat left. In regulated industries, the operating model is the execution.
This is why we built Labrynth. The operating model in regulated industries isn't broken because people aren't working hard enough. It's broken because the map is missing. Every permit, every approval, every compliance path sits inside a tangle of statutes, agency workflows, and institutional memory that no single person holds end-to-end. Labrynth maps that ontology. We turn the maze into a traversable graph and give teams the shortest defensible path from application to approval — the thread that leads out. That's the name. That's the platform. That's the work.
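As a toy illustration of the "traversable graph" framing, and nothing more than that, here is what the idea looks like when a permitting process is represented as a weighted graph and the fastest route is computed with a standard shortest-path algorithm. The agencies, steps, and durations below are invented for the example; this is not Labrynth's actual model.

```python
# Toy example: a permitting process as a weighted graph, where edge weights
# are expected days per step. All nodes and durations are hypothetical.
import heapq

process = {
    "application": [("zoning_review", 30), ("environmental_screen", 45)],
    "zoning_review": [("public_hearing", 60), ("staff_approval", 20)],
    "environmental_screen": [("staff_approval", 25)],
    "public_hearing": [("final_approval", 15)],
    "staff_approval": [("final_approval", 10)],
    "final_approval": [],
}

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: the fastest route from start to goal."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        days, node, path = heapq.heappop(queue)
        if node == goal:
            return days, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph[node]:
            if nxt not in seen:
                heapq.heappush(queue, (days + cost, nxt, path + [nxt]))
    return None

print(shortest_path(process, "application", "final_approval"))
# -> (60, ['application', 'zoning_review', 'staff_approval', 'final_approval'])
```

The real problem has far more nodes, statutory constraints on which edges can legally be traversed, and evidence attached to every hop. But the framing is the same: once the maze is a graph, the shortest defensible path becomes a computation rather than institutional memory.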
Labrynth.ai — Turning regulatory friction into economic flow. Labrynth is an outcomes-based, transparent AI platform built to eliminate bottlenecks across permitting, licensing, and compliance — compressing approval cycles from months to days for cities, developers, energy projects, and infrastructure firms. Built AI-native. Auditable by design.
Stuart Lacey — Founder & CEO of Labrynth.ai. Serial entrepreneur with 13+ patents, RegTech pioneer (previously Trunomi), TEDx speaker on data rights, and a two-decade member of YPO. Based in Bermuda, working with governments, developers, and investors across North America, Europe, and Australia to reframe regulation as a bridge — not a wall — to growth.
Ranganathan, A. & Ye, X. M. (2026, February 9). AI Doesn't Reduce Work — It Intensifies It. Harvard Business Review.
Yotzov, I., Barrero, J. M., Bloom, N., Bunn, P., Davis, S. J., Foster, K. M., Jalca, A., Meyer, B. H., Mizen, P., Navarrete, M. A., Smietanka, P., Thwaites, G., & Wang, B. Z. (2026, February). Firm Data on AI. NBER Working Paper No. 34836.
Upwork Research Institute. (2024, July 23). From Burnout to Balance: AI-Enhanced Work Models for the Future.
Upwork Inc. press release (2024, July 23). Upwork Study Finds Employee Workloads Rising Despite Increased C-Suite Investment in Artificial Intelligence.
UC Berkeley Haas Newsroom (2026). AI promised to free up workers' time. UC Berkeley Haas researchers found the opposite.
Fitzpatrick, M. — CEO of Invisible Technologies, commentary on enterprise AI adoption and operating-model redesign.