Humans in the loop
The security paradox: Key insights on the human-agent workforce from WRITER’s CISO, Eric Freeman
There’s a new workforce inside your enterprise. Thousands of AI agents are being deployed across organizations everywhere, and according to Eric Freeman, WRITER’s chief information security officer (CISO), most of those organizations are exceptionally unprepared.
- According to Eric Freeman, security in the age of AI is no longer a cost center but a velocity multiplier that protects productivity.
- Eric argues that fundamental security principles, like least-privilege access, must now be extended to AI agents as new members of the workforce.
- He proposes a proactive defense model that uses autonomous AI agents to continuously attack and test internal systems for weaknesses.
- Eric envisions a cellular AI architecture, composed of specialized agents, as more resilient and secure than a single monolithic system.
- He believes that failing to treat digital agents as real employees is a failure of imagination and a critical security risk.
On the latest episode of Humans of AI, Eric explains that our biggest security risk is a failure of imagination — we don’t treat our digital employees as real. And where does a perspective like that come from? Well, not from a traditional tech background. His career started with a “questionable” decision and a restaurant computer, and it’s that line-cook’s outsider view that’s now shaping how WRITER approaches security from the inside out — in our company and in our products.
In this conversation, Eric lays out a new vision for enterprise security in the age of autonomous AI. Here are four key insights for every leader navigating this transformation.
Security isn’t insurance — it’s a velocity multiplier
For decades, the security industry has operated on a flawed premise. As Eric explains, it’s a business built on fear.
“The entire security industry is based on fear tactics because it’s what has been sold,” he says. “CISOs and security leaders have long struggled to make a business justification to get the right tools, investment in technology to mitigate risk. And they’ve used fear as a way of doing it.”
Agentic AI flips this model on its head. When an AI agent can perform the work of multiple employees, securing that agent is no longer a cost center. It’s the protection of a massive productivity multiplier.
The conversation shifts from “How much will a breach cost us?” to “How much velocity and profitability are we losing by not securing our AI workforce?” Properly secured agents can operate at full speed, turning your security posture into a direct driver of business growth.
The principles haven’t changed, but the players have
Eric’s journey into security began unconventionally — he taught himself to hack his employer’s computer while working as a line cook. But it was the lessons learned from the kitchen, not the hack, that truly formed his security philosophy.
“Let’s say you’re in a restaurant,” he says. “You need to put out 17 different orders of eggs during brunch. Three are over easy, five are scrambled… How do you do all of this coordination and chaos at once under such extreme pressure?”
This experience taught him that security, like a restaurant, is a system built on culture. Every person — from the dishwasher to the head chef — impacts the final product. The same is true in an enterprise. This principle is central to The Agentic Compact — we must treat AI agents as new, privileged members of the workforce that are subject to the same principles of role-based access control and least privilege as any human employee. The fundamentals of security remain, but they must now apply to a new class of digital collaborators.
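The least-privilege principle Eric applies to agents can be made concrete with a small sketch. This is an illustrative, deny-by-default role check, not WRITER’s actual implementation; the role names, permission strings, and `check_access` helper are all hypothetical:

```python
from dataclasses import dataclass

# Illustrative least-privilege check for AI agents.
# Roles and permissions here are invented examples.
@dataclass(frozen=True)
class AgentRole:
    name: str
    permissions: frozenset

ROLES = {
    "support-summarizer": AgentRole("support-summarizer",
                                    frozenset({"tickets:read"})),
    "deploy-bot": AgentRole("deploy-bot",
                            frozenset({"repo:read", "ci:trigger"})),
}

def check_access(role_name: str, action: str) -> bool:
    """Deny by default: an agent may only do what its role explicitly grants."""
    role = ROLES.get(role_name)
    return role is not None and action in role.permissions

# A ticket-summarizing agent can read tickets but not trigger deployments.
assert check_access("support-summarizer", "tickets:read")
assert not check_access("support-summarizer", "ci:trigger")
```

The point of the deny-by-default design is that an agent, like a human employee, starts with nothing and accumulates only the access its role requires.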
At scale, your best defender is an attacker
How do you secure thousands of AI agents, each with unique permissions, all learning and adapting in real time? Eric’s answer is radical — you build AI agents that attack other AI agents.
At WRITER, Eric’s team is pioneering a concept he calls “purple teaming” at machine scale by building autonomous agents with the singular purpose of constantly testing the company’s defenses from the inside.
“We’ve been building out agents to do functional tasks… trying to identify what we call as reachability,” Eric explains. “Meaning, if a vulnerability or some type of poor practice is seen within a code base, if I’m an attacker, is this reachable?… We’re trying to build our agents internally here to have specific sub-agents that do specific tasks that are both defensive and offensive in their nature.”
Instead of waiting for an external threat, this approach uses an internal, AI-driven team to probe for weaknesses, test for exploits, and harden systems 24/7. It’s a security model that operates at the same speed and scale as the AI workforce it protects.
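The “reachability” question Eric describes can be sketched as a graph search: given a known flaw somewhere in a codebase, can an attacker-facing entry point actually reach it? The call graph and function names below are invented for illustration, and a real system would analyze code rather than a hand-written dictionary:

```python
from collections import deque

# Toy call graph: each function maps to the functions it calls.
# "legacy_deserializer" stands in for a known-vulnerable sink.
CALL_GRAPH = {
    "http_handler": ["parse_input", "render_page"],
    "parse_input": ["legacy_deserializer"],
    "render_page": [],
    "admin_cli": ["rotate_keys"],
    "rotate_keys": [],
}

def is_reachable(entry: str, target: str, graph: dict) -> bool:
    """Breadth-first search from an entry point toward the vulnerable function."""
    seen, queue = {entry}, deque([entry])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for callee in graph.get(node, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# The deserializer flaw is reachable from the public HTTP handler, so it
# should be prioritized; the same flaw is unreachable from the admin CLI.
assert is_reachable("http_handler", "legacy_deserializer", CALL_GRAPH)
assert not is_reachable("admin_cli", "legacy_deserializer", CALL_GRAPH)
```

This is what makes reachability useful for triage: a vulnerability that no attacker-controlled path can reach is far less urgent than one sitting behind a public endpoint.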
The future of AI architecture is cellular, not monolithic
When asked how agents fit into an organization, Eric offers a powerful metaphor that aligns directly with the vision of The Agentic Compact.
“It’s no different than how DNA is wired together and each molecule has a different component and function that builds out this entire body,” he says. “If companies have a DNA that’s made up of their people, their processes, their tools, and technology… that’s exactly what agents are going to be able to provide.”
This cellular model has profound architectural implications. Instead of relying on one massive, monolithic AI system that could be compromised, the future lies in deploying thousands of smaller, specialized agents.
Each agent has a limited scope and a hyper-specific task, reducing the risk of hallucination and creating an “antifragile” system that gets stronger under stress. It’s a vision of a distributed, resilient, and inherently more secure human-agent workforce.
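A minimal sketch of this cellular design: many narrow agents behind a router, instead of one monolith that does everything. The agent functions and task names are invented examples, and real agents would be model-backed rather than simple functions:

```python
from typing import Callable

# Two narrow "agents", each with exactly one job.
def summarize(text: str) -> str:
    return text[:40] + "..." if len(text) > 40 else text

def classify_sentiment(text: str) -> str:
    return "positive" if "great" in text.lower() else "neutral"

# The router only dispatches registered tasks, so a compromised or
# hallucinating agent has a limited blast radius.
AGENTS: dict[str, Callable[[str], str]] = {
    "summarize": summarize,
    "sentiment": classify_sentiment,
}

def route(task: str, payload: str) -> str:
    agent = AGENTS.get(task)
    if agent is None:
        raise ValueError(f"no agent registered for task: {task}")
    return agent(payload)

assert route("sentiment", "This launch went great") == "positive"
```

Because each cell is small and replaceable, failure or compromise of one agent stays contained rather than taking down the whole system.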
Turning unseen risks into strategic advantages
The human-agent workforce introduces risks that many leaders don’t yet see. There’s the opportunity cost of a slow AI workforce, the catastrophic potential of a single misconfiguration at scale, and the vulnerability of a defense that can’t keep up. Eric’s philosophy provides a path forward, showing how a proactive, culturally driven security model can illuminate these blind spots.
To hear more of Eric’s incredible stories and dive deeper into these insights, listen to the full episode of Humans of AI on Apple Podcasts, Spotify, or watch on YouTube.
And for leaders looking to turn the unseen risks of this new era into a competitive edge, the principles Eric discusses are essential. They’re detailed in The Agentic Compact, a framework designed to provide that strategic advantage.
Download The Agentic Compact: A New Social Contract for Human-Agent Collaboration in the Enterprise to get the complete framework.