Acceptable Use Policy
This Acceptable Use Policy (“AUP”) applies to both individuals and businesses accessing or using our generative artificial intelligence platform and any other products, models and/or services that we make available from time to time (including our APIs) (collectively, “Services”).
We’re committed to fostering the safe and responsible use of our Services and expect you to share that same commitment. This AUP sets out the rules for how our Services can and cannot be used. These rules will be updated from time to time in response to developments in our technology, the Services we offer, their possible uses, and changes in law. If we believe you’ve violated this AUP, we reserve the right to suspend or terminate your account. By using any of our Services, you agree to comply with this AUP.
This AUP comprises three sections: (1) Overarching Principles; (2) Prohibited Uses of Our Services; and (3) High Risk Uses of Our Services.
Overarching Principles.
If you use our Services, you must:
- Comply with the law. You must use our Services in ways that comply with applicable law at all times and not use our Services to engage in or facilitate any illegal activities.
- Not cause harm or otherwise use our Services to engage in threatening behavior. You must not use our Services to cause (or promote) harm to yourself or others, nor engage in (or promote) threatening behavior towards anyone else.
- Respect the safety and security measures we put in place. You must not circumvent or seek to circumvent our safety or security measures, which are in place to protect you and others, including the data you share with us, unless we give you specific written permission to do so.
Prohibited Uses of Our Services.
In addition to our Overarching Principles above, you must not use our Services for, or to facilitate, any of the following activities:
- Activities that are illegal. This includes, for example, using the Services to:
  - create, distribute, or promote child sexual abuse material – we will report any child sexual abuse material we identify to relevant authorities and organizations where appropriate;
  - engage in any crime or illegal activity (including, for example, human trafficking, exploitation, sexual violence, terrorism, or the development or distribution of controlled substances);
  - provide legal, medical/health or financial advice without review by a qualified professional and disclosure of the potential limitations of using artificial intelligence (“AI”) for this purpose;
  - make automated decisions that have legal or potentially significant effects for persons, such as those that might affect a person’s safety, rights or well-being, including, for example, in areas related to law enforcement, judicial or democratic processes, migration, critical infrastructure, essential services, life or health insurance, safety components of products, extending credit, employment, housing, social scoring and/or education;
  - engage in, promote or facilitate the development, manufacturing, distribution, or improvement of weapons or arms;
  - violate any other person’s rights (such as third-party intellectual property rights or privacy rights);
  - use subliminal, manipulative or deceptive techniques to materially distort a person’s behavior and impair their ability to make informed decisions in ways that are likely to cause harm;
  - exploit a person’s vulnerabilities (whether related to age, disability or socioeconomic circumstances) in order to materially distort their behavior in ways that are likely to cause harm;
  - score or classify people based on their social behaviors or known, inferred or predicted characteristics in ways that might cause them unfair treatment or discrimination;
  - create or add to facial recognition databases by scraping images from the internet or CCTV footage;
  - predict whether a person is likely to commit a crime based on profiling of their personality traits and/or characteristics;
  - identify or infer the emotions of a person using biometrics in the workplace or in educational institutions;
  - deduce or infer a person’s race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation based on their biometric data; or
  - process personal information likely to be “sensitive” in nature (including, for example, information relating to health, political or religious views, sexual preferences, finances or criminal convictions) without obtaining any necessary consents or otherwise complying with any other applicable legal requirements.
- Deceptive or misleading activities. This includes, for example, using the Services to:
  - impersonate another individual or organization;
  - mislead a person (or persons) into believing that they are interacting with a human rather than an AI system, or misrepresent that an output of the Services is human-generated;
  - generate deceptive or misleading content, including, for example, disinformation campaigns, conspiracies, propaganda, misleading information about health-related issues or treatments, or fake reviews;
  - deter, obstruct, or otherwise circumvent or interfere with participation in democratic or political processes; or
  - engage in or promote any form of academic dishonesty, such as plagiarism.
- Harmful, abusive or fraudulent activities. This includes, for example, using the Services to:
  - defame or slander another person or organization;
  - describe, encourage, support, or provide instructions on how to harm people (including, for example, suicide, self-harm, mutilation, eating disorders, shaming, torture, bullying and hate speech), animals, or property;
  - engage in or facilitate gambling and/or payday lending;
  - promote, facilitate or generate content for fraudulent activities, including, for example, scams, pyramid schemes, phishing or malware;
  - compromise the security or integrity of, or gain unauthorized access to, computer systems or networks, including, for example, through spoofing, social engineering or the development of malware or other malicious code;
  - “jailbreak” or override the safety features built into our Services;
  - engage in, promote or facilitate any behavior that is discriminatory or hateful, or that harasses, abuses, threatens or bullies any person or group of persons;
  - generate or process pornographic, sexual or any other form of sexually explicit content; or
  - generate outputs for use in training or improving another AI model (e.g., engage in model scraping).
High Risk Uses of Our Services.
In addition to our Overarching Principles and Prohibited Uses above, you must follow these additional rules if you integrate our Services into, or use our Services to build, applications or AI systems for any of the following uses (“High Risk Uses”):
- Biometrics. Identifying individuals based on their biometric data (especially in public places), categorizing individuals according to sensitive or protected attributes (such as age, gender or health) based on their biometric data (where such categorization is not a Prohibited Use of the Services above), or identifying or inferring emotions of individuals based on their biometric data.
- Critical infrastructure. Managing and operating critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
- Education and vocational training (“education”). Determining access to education (and levels of access to education), evaluating learning outcomes in education (such as marking exams), or monitoring or detecting prohibited behaviors (like cheating) during educational tests or exams.
- Employment and work relationships. Analyzing and filtering job applications, evaluating candidates, making decisions about work-related relationships (including termination), allocating tasks based on behavior or traits, or monitoring or evaluating performance or behavior.
- Access to and enjoyment of essential services. This includes:
  - Evaluating the eligibility of persons for essential public benefits (including healthcare) or making decisions about changes to such benefits;
  - Evaluating the creditworthiness of persons or determining credit scores (except where used to detect fraud);
  - Risk assessment and pricing for life and health insurance; or
  - Evaluating or classifying emergency calls, dispatching or establishing priorities for dispatching emergency services, or triaging access to emergency healthcare.
- Democratic processes. Influencing the outcome of an election or referendum, or the voting behavior of individuals.
- Other high-risk AI systems. Integrating our Services into, or using our Services to build, AI systems that qualify as high-risk AI systems (or any analogous classification) under applicable laws.
By their nature, High Risk Uses of our Services pose a greater risk of negatively impacting people’s safety, rights or well-being. Consequently, you may only use our Services for a High Risk Use if you:
- Comply with your legal obligations. You must comply with any and all legal obligations that may apply to you and/or your use of the Services for a High Risk Use.
- Introduce human oversight. Without limiting your general obligation to comply with the law, you must ensure that your use of the Services for a High Risk Use is designed and developed so that it is properly overseen by natural persons, in order to prevent and minimize the risks to individuals that may result from such High Risk Use.
- Are transparent. You must provide easily accessible, clear and comprehensive information to people when they are interacting with, or are exposed to, AI or AI-generated content.
- Are vigilant. You must have in place robust governance, policies, procedures and processes to provide ongoing quality assurance and risk management of any High Risk Use you make of our Services.