Writer’s approach to global AI regulation
Last Updated: December 4, 2024
Over the past year, governments and regulators around the world have increasingly focused their attention on artificial intelligence. From the EU to the U.S. and beyond, regulation in this space has arrived and will continue to arrive, and for good reason. The pace of innovation around AI and the novelty of its current and future applications are unprecedented. Not surprisingly, this has caught the attention of lawmakers, who have inherited the difficult task of balancing the need to foster AI innovation with the need to promote transparency, fairness, safety, security, and the protection of individuals’ rights.
Given the influx of regulation, we wanted to take the opportunity to describe how Writer has been navigating global AI laws and regulations, with a particular focus on the EU AI Act.
The EU AI Act
This past year, the EU passed the world’s first comprehensive law regulating the use of artificial intelligence – Regulation 2024/1689, otherwise known as the AI Act. At a high level, the AI Act governs the development and use of artificial intelligence while also seeking to promote innovation in the AI space. Its provisions come into effect in stages, on a timeline tied to how the AI Act classifies risk.
The AI Act sets out specific requirements for different types of AI systems according to their risk, distinguishing between AI systems that are prohibited, that are high-risk, or that present transparency risks (“limited-risk” AI systems), with different rules applicable to each category. It also sets out rules for providers of general-purpose AI models (“GPAI models”). These rules cover a variety of areas, including data governance, risk management, quality management, human oversight, and transparency.
Where does Writer fit under the AI Act’s risk classification framework?
Writer’s AI Act compliance journey began by identifying the AI system and models we make available to you and determining their risk classification under the AI Act. Our AI system (Writer’s platform, which can be used to build and deploy AI applications) falls under the AI Act’s rules for limited-risk AI systems, and our generative models fall under its rules for GPAI models. Neither our platform nor the models we make available are designed or permitted to be used for any of the prohibited practices or high-risk purposes set out in Article 5 and Article 6 of the AI Act.
We then analyzed whether Writer should be classified as a provider (i.e., a developer) or a deployer (i.e., a user) of AI systems and models. In the context of providing services to our customers, we are naturally a provider of our AI system and models, given that we own, design, and build Writer’s platform and our Palmyra models, the family of large language models that help power our platform.
What are Writer’s obligations under the AI Act?
Because we offer a limited-risk AI system, Article 50 of the AI Act applies. It requires providers of limited-risk AI systems to fulfill certain transparency obligations when providing AI systems intended for human interaction and/or that generate artificial content. In doing so, Article 50 provides a map for what we will need to have in place by August 2026, when it comes into effect for limited-risk AI systems. We believe it should be self-evident to those using our generative AI platform that any output it provides has been artificially generated, and that our platform acts as an AI assistant in a myriad of ways, whether through our out-of-the-box applications or through AI applications built using AI Studio. Even so, we are paying close attention to any further guidance from regulators and will evolve our practices as needed to make sure we meet our transparency obligations.
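To make the machine-readable side of these obligations concrete, here is a minimal, purely illustrative sketch. It is not Writer’s implementation, and every name in it (GeneratedContent, disclose, the provenance fields) is hypothetical; it simply shows how generated text could carry an explicit AI-generated label that downstream systems can detect.

```python
# Illustrative only: not Writer's implementation. All names here are
# hypothetical, sketching how generated text could be paired with a
# machine-readable "AI-generated" disclosure of the kind Article 50
# contemplates.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class GeneratedContent:
    text: str
    ai_generated: bool = True        # machine-readable disclosure flag
    model: str = "example-model"     # hypothetical model identifier
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def disclose(output: GeneratedContent) -> dict:
    """Attach the AI-generated disclosure to the content itself."""
    return {
        "content": output.text,
        "provenance": {
            "ai_generated": output.ai_generated,
            "model": output.model,
            "generated_at": output.generated_at,
        },
    }


print(disclose(GeneratedContent(text="Draft produced by an AI assistant.")))
```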
We next looked to Chapter V of the AI Act to assess whether our GPAI models present systemic risk. We determined that none of them do: the cumulative compute used to train each model is less than 10²⁵ floating-point operations (FLOPs), and the European Commission has not designated any of our GPAI models as having systemic risk. These are the two determining factors under the Act.
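For a sense of scale, the threshold can be checked against the common 6ND rule of thumb for estimating training compute (roughly six floating-point operations per parameter per training token). The sketch below is illustrative only; the parameter and token counts are hypothetical, not Writer’s actual figures.

```python
# Illustrative only: the 6 * N * D estimate is a widely used rule of
# thumb for training compute, and the figures below are hypothetical.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # see Article 51(2), Regulation 2024/1689


def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate cumulative training compute with the 6ND rule of thumb."""
    return 6 * n_parameters * n_tokens


# Hypothetical example: a 70B-parameter model trained on 2T tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # 8.40e+23
print("Presumed to have systemic risk?", flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```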
As a provider of GPAI models without systemic risk, we will be required under Chapter V, which comes into effect in August 2025, to maintain certain technical documentation, integration information about our models and their capabilities, copyright compliance policies, and training summaries, all of which are intended to promote transparency and allow the providers of AI systems that integrate our models to better understand the AI being made available to them. While we already make some of this documentation available, we look forward to further guidance from the EU’s AI Office, specifically its publication of a Code of Practice for GPAI models, a first draft of which was released on November 14, 2024, with a final version planned for the spring of 2025.
What steps is Writer taking to comply?
Although the Act will not begin to apply to us until August 2025, and while we await further clarity from EU regulators, we have already taken several steps to comply with our obligations.
In addition to assessing where we fit into the AI Act’s risk classification framework, we are pursuing certification under ISO/IEC 42001, an international standard for artificial intelligence management systems that will help us continue to build, develop, and make AI available safely and responsibly.
We are likewise updating our technical documentation, including technical reports that describe how we build and train our models, protect against bias, and mitigate security and other risks that might arise through the use of AI.
Finally, we will continue to work with third-party evaluations like HELM (Holistic Evaluation of Language Models), maintained by Stanford’s Center for Research on Foundation Models (CRFM), to independently test and evaluate our models, and to promote transparency through our participation in CRFM’s Foundation Model Transparency Index.
AI Regulation in the U.S.
We remain particularly focused on how regulation has developed and will continue to develop in the United States, in particular at the state level, where a number of laws have already been passed in states like Utah, Colorado, New York, and California.
At the federal level, more than one hundred pieces of legislation aimed at regulating AI have been introduced over the past few years. While we do not anticipate federal AI legislation passing anytime soon, we look forward to greater clarity around AI regulation in the United States so that we can continue to innovate, and to do so in a safe and responsible manner.
A global approach to compliance
While approaches to regulation may differ, we plan to take a global, unified approach to compliance so that we can continue to give you the ability to build and use AI productively, safely, and responsibly, all in line with our Acceptable Use Policy. Above all else, our approach remains grounded in trust. This means building, evaluating, and monitoring our models to protect against misuse or harm, and helping you generate highly accurate, safe, and reliable outputs. This means providing you with transparency and control over the data you share with us: we do not train our models on your data, inputs, or outputs. This means protecting and securing your data and complying with the global privacy and data protection laws already in place. And this means remaining committed to complying with our obligations under global AI laws as they evolve, and evolving our approach as needed.