
6 min read

From AGI to AMI: Dan Bikel, WRITER’s head of AI, on why “manageable intelligence” is the real revolution

Alaura Weaver   |  November 12, 2025

"manageable intelligence" is the real revolution

While the world chased AGI, an AI pioneer who had built it all, from Google Assistant to Llama 2, made a contrarian bet. Dan Bikel wagered that the future of enterprise AI wouldn’t be won by the biggest model, but by the most accountable one. He left the race for scale to solve a much harder problem: building AI that an enterprise can actually control. His journey provides real-world validation for the principles now codified in The Agentic Compact, WRITER’s framework for responsible AI. According to Dan, WRITER’s head of AI, the industry’s obsession with scale has created a massive transparency paradox, leaving businesses with powerful tools they can’t fully understand or control.

Summarized by Writer

  • According to Dan, the randomness that makes consumer AI feel “human” is a critical liability that breaks enterprise trust and makes true transparency impossible.
  • Dan argues that the most important question isn’t “What can this model do?” but “What does this business need?” — a reversal that puts clear, transparent objectives at the core of AI strategy.
  • He reveals that the foundation of trustworthy AI lies in a decades-old lesson — if a system can’t tell the difference between “May” the person and “May” the month, it can’t be trusted with anything important.
  • This philosophy is the foundation of Article II: Foundational transparency from The Agentic Compact, which demands clarity into an agent’s design, data, and objectives — not just its outputs.

On the latest episode of Humans of AI, Dan explains that our biggest risk is deploying powerful AI systems that can’t account for their own thinking. His journey — from debating etymology at the dinner table to building some of the world’s most advanced semantic parsers — is a masterclass in why the future of AI belongs not to the biggest model, but to the most transparent and auditable system.

Why you must reverse the core question of AI

For 30 years, Dan had a front-row seat to the AI revolution, watching as models grew exponentially more powerful. But inside the world’s biggest labs, he witnessed a fundamental disconnect. The focus was always on the technology’s potential, not its practical application.

“I had to make this mental shift,” Dan explains. “My job was to go out, see what the business problems were, and then, work my way back to what is the research that we should be doing to support those business problems… It was like a complete reversal.”

This reversal is the first principle of foundational transparency. An AI system can’t be transparent if its core objectives are not clearly defined and aligned with a specific business need. Without this clarity of purpose, even the most powerful model is just a black box operating without context. In the episode, Dan shares what it took to make this mental shift and why it’s the most critical first step any enterprise must take before deploying AI.

How AI’s “human” randomness breaks enterprise trust

When it comes to enterprise AI, Dan points to a fundamental misunderstanding he often sees business leaders make. They experience the unpredictability that makes consumer AI feel creative and engaging, and assume that same unpredictability is an inherent, unavoidable liability for structured business processes.

But as Dan explains, the reality is more nuanced and — importantly — more controllable. He clarifies that the core “model itself is deterministic,” and the human-like variation comes from a layer of randomness that is intentionally added on top. “We add randomness on top to make it sound more human,” he notes.

From his perspective, this isn’t a bug to be fixed. It’s a dial to be managed. The key for any enterprise is to have a platform that allows knowledge workers to turn that dial. For creative and strategic work, a touch of randomness in an AI agent can augment human ingenuity. But for programmatic workflows, that dial can be turned down to demand absolute consistency and reliability. This insight reframes the conversation from a problem of unpredictability to an opportunity for control.
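In practice, that dial is usually the sampling temperature. Below is a minimal sketch, assuming raw next-token scores (logits) and standard softmax sampling; it illustrates the general mechanism Dan describes, not WRITER’s implementation:

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float, rng=None) -> int:
    """Pick the next token from a model's raw scores (logits).

    temperature == 0 -> greedy decoding: fully deterministic, the same
    input always yields the same token.
    temperature > 0  -> softmax sampling: higher values flatten the
    distribution, adding the "human-sounding" variation on top of an
    otherwise deterministic model.
    """
    rng = rng or np.random.default_rng()
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = logits / temperature
    scaled -= scaled.max()  # numerical stability before exponentiating
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([2.0, 1.5, 0.3])                 # toy scores for 3 tokens
print(sample_next_token(logits, temperature=0.0))  # always prints 0
print(sample_next_token(logits, temperature=0.9))  # varies run to run
```

Turning the dial to zero is what makes a programmatic workflow repeatable; nudging it up is what makes a brainstorming agent feel generative.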

What a simple word reveals about the origin of AI trust

Long before the era of massive LLMs, Dan was working on a seemingly simple problem. He was teaching a computer to tell the difference between “May” the person and “May” the month. That project taught him the single most important lesson of his career — precision is the foundation of trust.

“It’s finding the boundaries of named entities — person, organization, location…” he recalls. This isn’t just a technical detail — it’s the essence of Article II: Foundational transparency. If your AI can’t tell “Apple the company” from “apple the fruit,” you can’t use it to run your business.
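The distinction is concrete. As an illustration only (this uses the open-source spaCy library and its small English model, not the systems Dan built, and exact labels depend on the model), named entity recognition marks both the boundaries and the types that let software tell the two senses apart:

```python
# Illustration with off-the-shelf spaCy NER, not Dan's original system.
# Setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("May Smith joined Apple in May to study the apple harvest.")

for ent in doc.ents:
    # Each entity carries character-span boundaries plus a type label,
    # e.g. PERSON for "May Smith", ORG for "Apple", DATE for "May".
    print(ent.text, ent.start_char, ent.end_char, ent.label_)
```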

If an agent’s foundational understanding of your data is flawed, its entire decision-making architecture sits on sand. In the episode, Dan tells the story of this early project and how it became the blueprint for building auditable, reliable systems, proving that transparency starts not with the model’s size, but with the integrity of its data.

Why AI can’t and shouldn’t explain itself

In the quest for transparency, the most tempting solution is also the most dangerous — asking the AI to explain its own reasoning. “It’s tempting to use a large language model to explain itself,” Dan warns. “It’ll give a plausible answer — but not necessarily the truth.”

This is the transparency paradox. The more complex and powerful a model is, the less capable it is of providing a truthful, auditable account of its own internal state. And if the model can’t be trusted to report on itself, stability has to be guaranteed from outside it, through versioning and change control. As Dan states, “No sane businessperson would want to deploy an LLM that changes without their consent.”

The vision is to build systems that are inherently explainable. It’s about creating auditable logs, ensuring observability, and maintaining a clear chain of custody for every action an agent takes. It’s a shift from trusting the model’s narrative to trusting the system’s architecture.
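To make that concrete, here’s a hypothetical, minimal sketch (every name in it is illustrative, not a WRITER API): an append-only audit log in which each agent action is chained to the hash of the previous entry, so the chain of custody can be verified after the fact:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log; each entry embeds the previous entry's hash,
    so any after-the-fact tampering breaks verification."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "genesis"

    def record(self, agent: str, action: str, payload: dict) -> dict:
        entry = {
            "ts": time.time(),
            "agent": agent,
            "action": action,
            "payload": payload,
            "prev_hash": self._prev_hash,  # chain to the prior entry
        }
        entry["hash"] = self._digest(entry)
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash and link; False means tampering."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or e["hash"] != self._digest(body):
                return False
            prev = e["hash"]
        return True

    @staticmethod
    def _digest(body: dict) -> str:
        return hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()

log = AuditLog()
log.record("support-agent", "draft_refund_email", {"ticket": 4821})
log.record("support-agent", "send_for_approval", {"reviewer": "j.doe"})
assert log.verify()  # the chain of custody checks out end to end
```

The point isn’t this particular data structure; it’s that the evidence of what an agent did lives outside the model, where it can be inspected.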

Turning transparency into a strategic advantage

Dan’s journey reveals that the next wave of AI innovation won’t be about raw power, but about provable trust. The principles of foundational transparency are more than just a compliance checklist. For any organization that wants to move from AI experiments to enterprise-scale systems, they’re a strategic imperative.

To hear more of Dan’s story and dive deeper into his framework for building trustworthy AI, listen to the full episode of Humans of AI on Apple Podcasts, Spotify, or watch on YouTube.

And for leaders ready to turn transparency into a competitive advantage, the principles Dan discusses are detailed in The Agentic Compact. Download The Agentic Compact: A new social contract for human-agent collaboration in the enterprise to get the complete framework.
