WRITER’s approach to global AI regulation
Last Updated: August 1, 2025
Over the past several years, governments and regulators around the world have increasingly focused their attention on artificial intelligence. From the EU to the U.S. and beyond, there has been, and will continue to be, regulation in this space, and for good reason. The pace of innovation around AI and the novelty of its current and future applications are unprecedented. Not surprisingly, this has caught the attention of lawmakers, who have inherited the difficult task of fostering AI innovation while promoting transparency, fairness, safety, security, and the protection of individuals’ rights in the context of AI.
Given the influx of regulation, we wanted to take the opportunity to describe the measures WRITER has taken and will continue to take to remain compliant with our obligations under global AI laws and regulations, beginning with the EU AI Act. Consistent with our commitment to responsible AI governance, WRITER has signed on to the EU’s Code of Practice for General-Purpose AI Models. Read more about our decision to do so here.
The EU AI Act
In 2024, the EU passed the world’s first comprehensive law regulating the use of artificial intelligence: Regulation (EU) 2024/1689, otherwise known as the AI Act. At a high level, the AI Act governs the development and use of artificial intelligence while also seeking to promote innovation in the AI space. Its provisions come into effect in stages based on how the AI Act classifies risk.
The AI Act sets out specific requirements for different types of AI systems according to their risk, distinguishing between AI systems that are prohibited, high risk, or that present transparency risks (“limited risk” AI systems), with different rules applicable to each category. It also sets out rules for providers of general-purpose AI models (“GPAI models”). These rules cover a variety of areas including data governance, risk management, quality management, human oversight, and transparency.
Where does WRITER fit under the AI Act’s risk classification framework?
WRITER’s AI Act compliance journey began by identifying the AI system and models that we make available to you and determining their risk classification under the AI Act. Our AI system (WRITER’s platform, which can be used to build and deploy AI agents) falls under the AI Act’s rules for limited risk AI systems, and our generative models fall under the AI Act’s rules for GPAI models. Neither our platform nor the models we make available are designed or permitted to be used for any of the prohibited practices set out in Article 5 of the AI Act.
We then analyzed whether we should be classified as a provider (i.e. a developer) or a deployer (i.e. a user) of AI systems and models. In the context of providing services to our customers, we are naturally a provider of our AI system and models given that we own, design, and build our platform and our Palmyra models, the family of large language models that help power our platform.
What are WRITER’s obligations under the AI Act?
Because we offer a limited risk AI system, Article 50 of the AI Act applies. It requires providers of limited risk AI systems to fulfill certain transparency obligations when providing AI systems intended for human interaction and/or that generate artificial content. In doing so, Article 50 provides a map for what we will need to have in place by August 2026, when it comes into effect for limited risk AI systems. We believe it should be self-evident to those using our generative AI platform that any output it provides has been artificially generated, and that our platform acts as an AI assistant in a myriad of ways, whether through the out-of-the-box agents we make available or through agents built on our platform. Even so, we are paying close attention to any further guidance from regulators and will evolve our practices as needed to make sure we meet our transparency obligations.
We next looked to Chapter V of the AI Act to assess whether our GPAI models present systemic risk. We determined that none of them do: the total computational power used in training each model is less than 10²⁵ floating-point operations (FLOPs), and the European Commission has not designated any of our GPAI models as having systemic risk. These are the determining factors under the AI Act.
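For readers who want to see how this threshold works in practice, below is a minimal, illustrative sketch of the compute check. The 10²⁵ FLOPs figure comes from Article 51 of the AI Act itself; the 6 × parameters × tokens estimate of training compute is a common industry heuristic rather than a method prescribed by the Act, and the model figures in the example are hypothetical.

```python
# Illustrative only: Article 51(2) of the EU AI Act presumes a GPAI model has
# high-impact capabilities (and therefore systemic risk) when the cumulative
# compute used for its training exceeds 10**25 FLOPs. The 6 * params * tokens
# rule of thumb is a common heuristic for dense transformer training compute,
# not a method prescribed by the Act. All model figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2) threshold


def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * num_parameters * num_training_tokens


def presumed_systemic_risk(training_flops: float) -> bool:
    """True if estimated training compute exceeds the Article 51(2) threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical example: a 70B-parameter model trained on 2 trillion tokens.
flops = estimate_training_flops(70e9, 2e12)  # ~8.4e23 FLOPs, below the threshold
print(f"Estimated training compute: {flops:.2e} FLOPs")
print(f"Presumed systemic risk: {presumed_systemic_risk(flops)}")
```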
Because we provide non-systemic-risk GPAI models, Chapter V, which comes into effect on August 2, 2025, will require us to maintain certain technical documentation, integration information about our models and their capabilities, copyright compliance policies, and training summaries, all of which are intended to promote transparency and allow the providers of AI systems that integrate our models to better understand the AI being made available to them.
In July 2025, the European AI Office published the final GPAI model Code of Practice, which was developed after an extensive process involving input from hundreds of expert stakeholders across general-purpose AI model providers, industry organizations, academia, and civil society. The Code, which is voluntary, is designed to help industry comply with the AI Act’s legal obligations on safety, transparency, and copyright of GPAI models.
Consistent with our commitment to responsible AI governance, WRITER is proud to confirm that we have signed on to the voluntary Code of Practice, underscoring our continued focus on transparency and compliance with the AI Act.
What steps is WRITER taking to comply with the AI Act?
First, in line with our obligations under the Code of Practice, we are committed to providing regulatory bodies and our customers with the information they need to understand and trust our GPAI models. This includes, but is not limited to:
- Providing clear and comprehensive documentation of our GPAI model capabilities, limitations, and intended uses;
- Providing summary information about the content used to train our GPAI models; and
- Documenting a copyright policy that describes the measures we take to comply with EU copyright law.
Second, WRITER has achieved ISO 42001 certification. ISO 42001 is an international standard for implementing artificial intelligence management systems. Certifying under ISO 42001 demonstrates the measures we have taken to implement governance, risk, and accountability processes when developing and deploying AI systems, and highlights our commitment to building, developing, and providing our AI-powered platform and features safely and responsibly.
Third, our Acceptable Use Policy specifies overarching principles that apply to customers who use our services, prohibits use of our services for, or to facilitate, any prohibited AI practices under the AI Act, and sets out the additional steps our customers must take if they intend to integrate our services into any AI systems that are designated as high-risk AI systems under the AI Act.
Finally, we recognize that compliance remains an ongoing exercise, and expect further guidance to be published by the European Commission and national AI regulators. We look forward to engaging with this guidance as and when it is published.
AI regulation in the United States
We remain particularly focused on how regulation has developed and will continue to develop in the United States, both at the state level, where a number of AI laws and regulations have already been passed, and at the federal level, whether through executive orders and other administrative guidance or as part of a comprehensive federal AI bill. As with the EU, we look forward to getting more clarity on AI regulation in the United States so that we can continue to innovate and do so in a safe and responsible manner.
A global approach to compliance
While approaches to regulation may differ – across the EU, the United States, and other countries around the world – we are taking a global, unified approach to compliance so that we can continue to give you the ability to build and use AI productively, safely, and responsibly in line with our Acceptable Use Policy. Above all else, our approach remains grounded in trust. This means building, evaluating, and monitoring our models to protect against unlawful bias, misuse, and harm, and helping you generate highly accurate, safe, and reliable outputs. This means providing you with transparency and control around the data you share with us: we do not train our models on your data, inputs, or outputs. This means protecting and securing your data and complying with global privacy and data protection laws already in place. And this means remaining committed to complying with our obligations under global AI laws as they evolve, and likewise evolving our approach as needed.