Humans in the loop
– 7 min read
What’s your company’s AIQ? Featuring J.P. Gownder, Forrester VP and principal analyst
- Across hundreds of AI deployments, J.P. identified that sophisticated organizations with massive budgets fail spectacularly while others succeed — and the difference has nothing to do with the technology itself.
- Only 26% of organizations trained workers on prompt engineering in 2025, a mere one-percentage-point increase from 2024.
- Forrester’s research reveals that social learning is at least twice as important as formal training, but most organizations are approaching AI adoption with outdated playbooks from twenty years ago.
- What’s happening with AI and jobs isn’t what it seems — executives are making announcements that create a darker, cascading effect most leaders won’t see until it’s too late.
- After two decades of guiding enterprise transformations, J.P. has developed a framework, AIQ, that addresses the missing human side of the equation.
While enterprises race to deploy AI agents and pour billions into the latest models, a pattern is emerging. Two companies in the same industry buy the same AI tools from the same vendors, roll them out to thousands of employees within weeks of each other, and 12 months later get radically different results. One reports 30% productivity gains. The other gets tool abandonment, hallucinations in customer communications, and quiet failures no one talks about but everyone feels.
Same technology. Opposite outcomes.
Our guest on the latest episode of Humans of AI, J.P. Gownder, VP and principal analyst at Forrester Research, saw this pattern forming over a decade ago. He discovered it wasn’t about AI at all. He reveals the invisible factor most organizations don’t even realize they’re missing — and shares a framework that predicted these failures years before they happened.
We believe his insights align with the core principles of the Agentic Compact — WRITER’s framework for responsible AI — that workforce enablement, not just technological sophistication, drives successful AI deployment.
The invisible factor destroying expensive AI investments
Across hundreds of deployments, J.P. kept seeing the same pattern. Sophisticated organizations. Massive budgets. Best-in-class technology.
Spectacular failures.
But what separated the successes from the catastrophic failures wasn't technology. It was something invisible at the human level: four critical elements that determine whether AI gets adopted or abandoned.
Forrester built a framework to measure it: AIQ — Artificial Intelligence Quotient — similar, in our view, to WRITER’s AI readiness framework that helps organizations assess their preparedness for AI adoption.
“It is often the case that we find organizations who are missing one or all of those elements,” J.P. explains.
But which elements? And why do even Fortune 100 companies miss them?
What J.P. uncovered traces back to a specific moment when AI first hit the market. Organizations made a mistake then. They’re making it now. And it’s costing them money.
The one-percentage-point crisis
There’s a critical training gap in the enterprise AI adoption landscape.
“In 2024, 25% of organizational leaders that we surveyed said that they were actively enabling and training their workers on prompt engineering,” J.P. explains. “Surely when the 2025 numbers came around, that was going to go up. And it did — by one percentage point. 26%.”
After a full year of explosive AI adoption and billion-dollar investments, just 26% of organizations are training workers in one of the most basic AI skills. Our 2025 enterprise AI adoption report, a survey of 1,600 knowledge workers, confirms this troubling trend.
“My surprise is that people are throwing money almost at AI,” J.P. observes. “They’re not understanding that this human factor is underlying all of it.”
Why? The answer isn’t what most executives expect — and it explains why sophisticated organizations keep repeating the same mistake.
What’s twice as important as formal training
Most organizations approach AI training the same way they’ve approached every enterprise software rollout. J.P. discovered this is almost exactly backwards.
There’s something that works better. J.P. calls it social learning, and his research shows it’s at least twice as important as formal learning — a principle echoed in how successful organizations empower AI champions.
“That video of someone at your own organization using the tool is more valuable than an hour of generic online training,” J.P. states.
But why? What makes five minutes of watching a colleague solve a real problem more powerful than comprehensive vendor documentation?
The answer challenges everything most executives believe about training and adoption.
The weekly confession revealing what’s happening with AI and jobs
J.P. regularly gets calls from Fortune 500 leaders with the same confession.
“Every week I talk to a client who will say, ‘J.P., our C-level said that we’re laying off 20% of the workforce and we’re going to replace them with AI,’” he reveals.
His response is always the same: “Oh, okay, so you have a mature AI application ready to step in and do that job?”
“Oh no,” they say. “Well, we haven’t started.”
What’s happening with AI and jobs isn’t what it seems. And it’s darker than most people realize.
J.P. has tracked predictions about AI and employment for over a decade. What he’s discovered — and what he predicts for 2026 — contradicts almost everything you’re hearing. The gap between perception and reality is accelerating, and the damage extends far beyond the people being laid off.
The cascading effect is already destroying AI adoption from the inside. Most leaders won’t see it until it’s too late.
The most dangerous thing about AI has nothing to do with the technology
There’s a reason organizations keep applying old playbooks to AI.
“If I hit the $80 button on the ATM, I know I’m going to get $80. It’s very deterministic,” J.P. explains. “With generative AI, we don’t know exactly what we’re going to get.”
But probabilistic outputs aren’t the real danger. There’s something more insidious happening at the psychological level — something about how AI delivers information that makes it uniquely dangerous.
When a colleague is uncertain, you hear it in their voice. But AI? It does something different. Something that warps decision-making in ways most people don’t recognize until it’s too late.
What J.P. reveals about this psychological trap connects directly to what WRITER’s Agentic Compact calls continuous observability and the human mandate. The governance structures most organizations need aren’t what you’d expect.
The framework that separates success from catastrophic failure
After two decades of guiding organizations through technological transformation, J.P. developed Forrester’s AIQ framework — a holistic approach that addresses the invisible human factors most executives miss entirely.
This isn’t about a silver bullet or a single transformative insight. It’s about understanding how multiple critical elements work together — and how missing even one can sink your entire AI strategy.
What does this framework reveal about successful implementation? The organizational changes, governance structures, and cultural shifts aren’t what conventional wisdom suggests. And the companies that got this right didn’t just avoid failure — they created sustainable competitive advantages.
Those two companies from the beginning? One understood how these elements interconnect. The other spent millions on tools and got nothing.
The difference comes down to factors most executives completely overlook. And it’s costing you right now.
To hear J.P. break down all four AIQ elements and discover why the social learning approach outperforms traditional training by 2X, listen to the episode on Apple Podcasts, Spotify, or watch on YouTube.
For leaders ready to build AI strategies that work, the workforce enablement principles J.P. discusses are detailed in the Agentic Compact. Download The Agentic Compact: A new social contract for human-agent collaboration in the enterprise to get the complete framework for responsible AI deployment.
And to get a deeper understanding of why Forrester projects a 333% ROI on WRITER’s approach to human-centered enterprise AI, check out our upcoming webinar.