Inside Writer

Writer achieves ISO trust triad, setting new standard for security, privacy, and responsible AI in the enterprise

Ryan Maple, Head of Information Security and Compliance   |  May 28, 2025

As a leader in enterprise AI, Writer is committed to setting the standard for safe, accountable, and production-ready AI. Since 2020, we’ve built both our models and platform in-house, creating an end-to-end system designed to power agents that transform work — with security, safety, and control built into every layer.

Today, we’re taking the next step in our commitment to building trustworthy AI for the enterprise. We’ve achieved ISO/IEC 27001, 27701, and 42001 certifications, making Writer one of the first model developers to meet this full slate of global standards for privacy, security, and responsible AI. We’ve also completed our annual SOC 2 (Type II) audit (including HIPAA/HITECH), continuing our commitment to industry-specific security and compliance requirements.

The full ISO trust triad

The ISO/IEC 27001, 27701, and 42001 certifications are internationally recognized standards for secure, responsible, and ethical AI practices. Achieving them isn’t just a checkbox; it reflects a deep commitment to reducing risk and supporting our customers’ compliance needs. Here’s what each certification means and why it matters.

  • Information security – ISO/IEC 27001: Certifies Writer’s information security management system, validating that we protect our customers’ data with enterprise-grade security infrastructure and end-to-end, managed platform deployment.
  • Privacy management – ISO/IEC 27701: An extension of ISO/IEC 27001 that demonstrates our commitment to stringent privacy practices aligned with global regulations and frameworks such as GDPR, HIPAA, CCPA, PCI DSS, and the Data Privacy Framework.
  • AI management – ISO/IEC 42001: One of the first global standards for responsible and safe AI, certifying that Writer has adequate governance, risk, and accountability processes in place to develop and deploy AI systems.

Raising the bar for enterprise-grade AI 

The sheer speed of innovation in AI is unlike anything we’ve witnessed before. As AI advances faster than ever, the imperative to uphold best practices for developing safe and responsible AI becomes both more urgent and more complex.

At Writer, our relentless focus on building enterprise-grade AI reflects a commitment to trust that extends beyond adhering to today’s global standards. By pairing a deep understanding of those standards with close collaboration with our enterprise customers, we’ve developed our own framework for building and deploying AI that’s reliable, transparent, and controllable at every layer: from the underlying language models, to the data used to train and augment them, to the application layer that the end user experiences.

Here’s how our framework for enterprise-grade AI shows up across our platform.

AI governance and oversight tools for supervising agents

As enterprises deploy more AI agents, building trust in how those agents behave is crucial. Writer’s AI HQ includes a control panel for supervising agents with session logs and data-level controls. IT teams can build, test, and manage agents in one interface, run execution paths in a controlled environment, and use the state explorer to inspect and debug issues. We also provide explainability tools, such as output citations and chain-of-thought insights, and role-based permissions to control access to specific agents and data.

Transparent model development, optimized for enterprise use

Most model developers offer shockingly little visibility into their pre- and post-training methodologies, even though factors like the types of data used to train a model and how the model is maintained after training directly affect its performance and reliability. Writer gives customers insight into how Palmyra models are trained, including how synthetic data is used, and our models are never trained on customers’ data. In post-training, we never quantize our models, so the behavior you validate today is the behavior you’ll see tomorrow. We also maintain a track record of minimal downtime and share deprecation policies and roadmaps so our customers don’t experience disruptions. And to help customers compare models on cost, speed, efficiency, scalability, and reliability, we offer technical reports, benchmarking evaluations, and performance metrics.

Secure platform infrastructure and deployment

Platform infrastructure and the way services are deployed directly shape how an AI system performs in production. Writer’s full-stack approach is purpose-built for greater control over how services work together: our end-to-end platform provides fully integrated LLMs, RAG, orchestration, and UI, eliminating the risk of stitching together third-party platforms. For added peace of mind, platform deployment is fully managed by Writer, including GPU scaling, 24/7 monitoring, penetration testing, vulnerability management, and more.

New Trust Center for Writer customers

Writer has launched a new public-facing Trust Center for its customers’ security and legal teams. It provides centralized access to our security documentation, compliance certifications, and model transparency artifacts such as technical reports, bias audits, and benchmark results. Information and documentation about our compliance certifications, including ISO/IEC 27001, 27701, and 42001 and SOC 2, can be found at trustcenter.writer.com.

You can read more about Writer’s approach to trust at writer.com/trust.

More resources

Model Context Protocol (MCP) security: Considerations for safer agent-tool interoperability
Thought leadership – 14 min read
Muayad Ali, Director of Engineering