The hybrid AI-human workforce: When everyone becomes a manager
Matan-Paul Shetrit on the skills gap no one's preparing for
- The real bottleneck to enterprise AI isn’t the technology—it’s scaling it responsibly
- Regulated industries like healthcare and financial services are pushing harder for AI transformation than expected
- Deploying 10,000 AI agents into a 1,000-person company creates an 11,000-employee organization overnight, breaking all existing processes
- Everyone will become a manager of humans, agents, or hybrid teams—yet most have never managed before
- We’re shifting from being limited by our capacity to do work to being limited by our capacity to manage
- AI is teaching us to become critical thinkers and editors rather than writers—a skillset no one is learning today
- The key distinction: moving from observability (what happened) to supervision (should it have happened)
- Success won’t come from the most sophisticated AI, but from organizations that treat transformation as an ongoing process
- Humans and machines working together is an organizational challenge, not just a technical one
Picture this: You’re 21 years old, fresh into your first real job at Israel’s Ministry of Finance. They sit you down in a corner office — no one else has access. For the next three years, you’ll work on restructuring the entire social security system. Healthcare, old age benefits, child benefits. If word leaks, careers end. You’re the fall guy if things go wrong.
This is where Matan-Paul Shetrit, now Head of Product Design at WRITER, learned a lesson that most Silicon Valley product leaders miss entirely: even when systems are broken, there’s a method to that madness. There’s a reason things were designed the way they were. Success comes from respecting why things work the way they do, even as you reimagine them.
In the latest episode of Humans of AI, Matan shares insights from his unusual career journey and reveals a profound workplace shift that’s already happening: as AI agents become collaborators, every employee is becoming a manager — not just of people, but of hybrid teams of humans and AI agents working side by side.
While the industry obsesses over AGI, the real challenge is right in front of us: training an entire workforce to manage when most have never managed before.
The surprising truth about enterprise AI adoption
When Matan joined Writer, he expected resistance from traditional industries. The narrative around enterprise companies, especially regulated ones, is that they’re “dying dinosaurs”—slow to change, risk-averse, stuck in their ways.
He was wrong.
“I was blown away by the appetite for change these companies have,” Matan reveals. “We sit with them on a daily basis and they understand that they have to change, they want to change, and they’re pushing for change.”
Healthcare companies. Financial services. The heavily regulated industries everyone assumes would be last to adopt AI? They’re some of the most eager.
But there’s a critical difference between wanting change and successfully implementing it. And that’s where most AI initiatives fail.
Why transformation is the actual product
We’ve seen this pattern before. Twenty years ago, hyperscalers told enterprises: “Here’s cloud. It’s great. Good luck.” The technology was revolutionary, but organizations didn’t know what to do with it. Adoption was slow, painful, and often unsuccessful.
With AI, the challenge is exponentially harder.
“The pace the industry is moving means your assumptions and decisions could be out of date very, very quickly,” Matan explains. “You might sign up for a service that does X, but very quickly it evolves to do something slightly different or completely different because the technology unlocked completely new capabilities.”
The technology isn’t static. It evolves while you’re implementing it.
This is why Matan insists: The technology isn’t the product. The transformation is the product.
If you don’t handhold organizations through this transformation — understanding their processes, their constraints, their culture — you’re going to fail. No matter how sophisticated your AI is.
The management revolution no one’s talking about
Here’s where the conversation gets uncomfortable.
If you’re a 1,000-person company and you deploy 10,000 AI agents, you’re no longer a 1,000-person company. You’re an 11,000-employee organization overnight.
“That’s not the same beast,” Matan warns. “Anyone who’s scaled a startup knows that growing from 50 to 100 to 500 to 1,000—all of your processes break.”
But here’s the deeper challenge: AI agents aren’t just automatons following deterministic rules. They’re more like junior interns—capable of creativity, prone to mistakes, with sparks of randomness.
“This idea that agents are machines is actually wrong,” Matan argues. “I’m not saying they’re human, but they misbehave. They have sparks of creativity, sparks of randomness.”
This means you can’t just deploy them and forget about them. You have to manage them.
And there’s the problem: Most people have never managed before.
The skills gap that should scare you more than AGI
“This is to me, in some ways, more scary than the AGI discussion,” Matan admits. “My daughter will grow up in a world where people manage agents. But between someone who’s a toddler and me, there’s a skill gap. What does it mean for me as an employee to manage when I’ve never managed before?”
The future of work isn’t just about AI replacing tasks. It’s about everyone becoming a manager—of humans, agents, or hybrid teams.
That shift changes everything:
- Your constraint is no longer your capacity to do work
- It’s your capacity to manage those doing the work
- Managing people and managing agents may require completely different skillsets
And nobody is teaching these skills yet.
From writers to editors: The AI literacy revolution
Matan sees a parallel to previous technological shifts. When he was in school, he couldn’t use a calculator for AP math or a spell-checker for essays. Those tools were considered crutches.
Now they’re standard.
Generative AI is a bigger leap, but the principle is the same: AI is teaching us to be critical thinkers and editors, not just writers.
“If you take everything AI says as gospel, there’s a problem,” Matan emphasizes. “These machines, by definition, make mistakes. We’re essentially turning into editors rather than writers. And yes, that is a skillset. It’s a skillset that no one is learning today.”
This editorial mindset requires:
- Critical thinking about AI outputs
- Judgment about what’s good enough versus what needs refinement
- Accountability for what goes out under your name
Because here’s the uncomfortable truth: AI doesn’t reduce accountability.
“Whatever output the AI brings out, it still comes out under your name,” Matan states firmly. “If you publish garbage, that’s the quality of what you do. It’s not the AI’s fault. That’s on you as the person who stands behind the product you bring out there.”
Supervision vs. observability: A critical distinction
Most organizations focus on observability: What happened? What did the agent do? What was the output?
But according to Matan, that’s not enough.
“Observability tells you what happened. What we should care about is supervision, which asks: should the thing that happened have happened? And if so, why?”
This distinction matters because:
- Observability is backward-looking
- Supervision drives iteration and refinement
- Supervision helps you understand whether the agent’s behavior aligns with your goals
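To make the distinction concrete, here’s a minimal sketch in Python. The event shape, the policy rule, and all the names are hypothetical illustrations, not Writer’s API: observability merely records the action, while supervision evaluates it against a policy and produces a signal you can act on.

```python
from dataclasses import dataclass

@dataclass
class AgentEvent:
    # One record of something an agent did.
    agent_id: str
    action: str       # e.g. "issued_refund"
    amount: float

def log_event(event: AgentEvent, log: list) -> None:
    # Observability: record what happened, nothing more.
    log.append(event)

def supervise(event: AgentEvent) -> tuple[bool, str]:
    # Supervision: decide whether it *should* have happened.
    # Hypothetical policy: refunds over $500 need human approval.
    if event.action == "issued_refund" and event.amount > 500:
        return False, "refund exceeds the agent's approval limit"
    return True, "within policy"

log: list[AgentEvent] = []
event = AgentEvent(agent_id="support-42", action="issued_refund", amount=750.0)

log_event(event, log)                # answers "what happened?"
approved, reason = supervise(event)  # answers "should it have happened?"
if not approved:
    print(f"Escalate to a human reviewer: {reason}")
```

The supervision signal is what closes the loop: an escalation like this one is exactly the feedback that drives the iteration and refinement observability alone can’t.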
The problem is we don’t have good benchmarks yet. We’re comparing AI agents to computers (deterministic, perfect) when we should be comparing them to humans (creative, imperfect, sometimes wrong).
The 80% accuracy paradox
Matan shares a telling story: Writer showed a demo to a client with an 80% accuracy rate for an internal support agent use case. The client’s first reaction? “Why is it only 80%?”
Matan’s response: “Can you tell me what your accuracy rate is for your human agents?”
They couldn’t. They’d never measured it.
“Let’s call your customer support center 10 times with the same questions and see what we get,” Matan suggested. “What do you think you’ll get?”
The client admitted: “Probably less than 80%.”
We hold AI to standards we’ve never applied to humans. And until we start measuring human performance, we can’t set fair benchmarks for AI.
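A fair benchmark needs only two things: both sides answer the same questions, and both are graded by the same rule. Here’s a toy sketch of that idea; the questions, answers, and exact-match grader are all hypothetical stand-ins for whatever rubric or reviewer a real evaluation would use.

```python
def accuracy(answers: list[str], expected: list[str]) -> float:
    # One grading rule, shared across agent and human transcripts.
    correct = sum(1 for got, want in zip(answers, expected) if got == want)
    return correct / len(expected)

# Hypothetical transcripts: the same three questions, answered once by
# the AI agent and once by calls to the human support line.
expected      = ["reset link", "refund in 5 days", "escalate to tier 2"]
agent_answers = ["reset link", "refund in 5 days", "call us back"]
human_answers = ["reset link", "refund in 7 days", "call us back"]

print(f"Agent accuracy: {accuracy(agent_answers, expected):.0%}")   # 67%
print(f"Human baseline: {accuracy(human_answers, expected):.0%}")   # 33%
```

Until the human baseline exists, “why is it only 80%?” is an unanswerable question.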
The real bottleneck: Scaling AI responsibly
The industry talks constantly about building agents. Frameworks, tools, capabilities. But building them isn’t the bottleneck.
Scaling them responsibly is.
“The real bottleneck in AI isn’t how we build agents, it’s how we scale them,” Matan argues. “How do we do this in a way that the CIO or CISO doesn’t wake up in a cold sweat at night?”
Managing a fleet of organic and synthetic agents—humans and machines working together—requires:
- New organizational structures
- New measurement frameworks
- New supervision mechanisms
- New accountability models
It’s a completely different skillset, both individually and organizationally.
Why businesses run on exceptions, not happy paths
Here’s a dose of reality most AI vendors don’t want to acknowledge:
“Most of the demos you see on social media are horseshit,” Matan says bluntly. “They don’t scale in enterprise use cases.”
Why? Because demos show the happy path. The perfectly formatted request. The ideal scenario.
But businesses don’t run on happy paths. They run on handling exceptions.
- The customer with the unusual request
- The process that breaks at 2 AM
- The edge case no one anticipated
That’s where the real work happens. That’s where AI has to prove itself.
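This is why production agent deployments pair automation with an explicit escape hatch: anything outside the happy path routes to a person rather than failing silently. A minimal sketch of that pattern, with a hypothetical keyword router standing in for the confidence scores or policy checks a real system would use:

```python
def handle_request(request: str) -> str:
    # Happy path: requests the agent is known to handle well.
    known_intents = {"reset password", "check order status"}
    if request in known_intents:
        return f"agent handled: {request}"
    # Exception path: escalate instead of guessing.
    return f"escalated to human: {request}"

print(handle_request("reset password"))                    # happy path
print(handle_request("refund an order sent to the wrong country"))  # exception
```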
And that’s why accountability can’t disappear just because you’re using AI. The output still goes out under your organization’s name. The responsibility is still yours.
The partnership mindset: Navigating new terrain together
Success with enterprise AI requires humility from both sides.
From AI companies: Understanding that this isn’t a silver bullet. The technology will evolve. Customer needs will change. You need to adapt together.
From enterprises: Patience and willingness to partner. You’re navigating new terrain. You need to push each other to test the limits of what can and cannot be done.
“We are navigating through a new terrain and that requires really pushing each other and testing each other to the limit,” Matan emphasizes.
This partnership mindset comes from Matan’s early experience restructuring Israel’s social security system. He learned that even if you think a system is broken, there’s a reason it was designed that way. You need to respect the existing structure while reimagining it.
The same principle applies to AI transformation. You can’t just bulldoze existing processes. You need to understand them, respect them, and thoughtfully redesign them.
The path forward: Three key principles
Based on Matan’s journey from government policy to enterprise AI, three principles emerge for successfully navigating the hybrid human-agent workforce:
1. Focus on transformation, not just technology
The most sophisticated AI is worthless if you can’t successfully integrate it into your organization. Invest in change management. Understand your processes. Prepare your people.
2. Build supervision capabilities, not just observability
Don’t just track what your agents do. Develop frameworks to determine whether they should have done it. Build feedback loops for continuous improvement.
3. Accept that everyone becomes a manager
The future isn’t about AI replacing workers. It’s about workers managing AI. Start teaching supervision skills, critical thinking, and editorial judgment now. This is the literacy challenge of our time.
The uncomfortable questions we need to answer
As this conversation makes clear, the transformation ahead isn’t just about deploying agents. It’s about fundamentally rethinking how work gets done.
We need to grapple with hard questions:
- How do we measure AI performance fairly compared to human performance?
- What management skills do people need to supervise AI agents effectively?
- How do we maintain accountability when AI is in the mix?
- How do we handle the exceptions, not just the happy paths?
- What does it mean to scale responsibly when your “headcount” can 10x overnight?
The organizations that get this right won’t be the ones with the most sophisticated AI. They’ll be the ones who understand that transformation is the product, that supervision matters more than observability, and that humans and machines working together is an organizational challenge, not just a technical one.
Listen to the full episode of Humans of AI featuring Matan-Paul Shetrit
Learn more about WRITER and how we’re helping enterprises scale AI agents responsibly at writer.com