We are committed to promoting safe and responsible use of our tools and services at MAIDEPOT. As such, we have established usage policies for all users of our models, tools, and services. By adhering to these policies, you can help ensure that our technology is employed for positive purposes.
If we find that your product or usage is in violation of these policies, we may request that you implement the necessary changes. Persistent or severe violations may lead to further action, including suspension or termination of your account.
As we continue to learn about the potential use and misuse of our service, our policies may be updated accordingly.
We do not permit the use of our Service for the following purposes:
1. Illegal activities: Using MAIDEPOT models, tools, or services to engage in, promote, or facilitate illegal activity is strictly prohibited.
2. Child Sexual Abuse Material (CSAM) or any content that exploits or harms children: We report any instances of CSAM to the appropriate authorities.
3. Generation of content that is hateful, harassing, or violent: This includes content expressing, inciting, or promoting hate based on identity; content intending to harass, threaten, or bully an individual; content promoting or glorifying violence or celebrating the suffering or humiliation of others.
4. Generation of malware: Attempting to generate code designed to disrupt, damage, or gain unauthorized access to a computer system is not allowed.
5. Activities posing a high risk of physical harm: This includes weapons development; military and warfare activities; management or operation of critical infrastructure in the energy, transportation, and water sectors; and content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
6. Activities posing a high risk of economic harm: This includes multi-level marketing, gambling, payday lending, and automated determinations of eligibility for credit, employment, educational institutions, or public assistance services.
7. Fraudulent or deceptive activities: This includes scams, coordinated inauthentic behavior, plagiarism, academic dishonesty, astroturfing (e.g., fake grassroots support or fake review generation), disinformation, spam, and pseudo-pharmaceuticals.
8. Adult content, adult industries, and dating apps: This includes content meant to arouse sexual excitement, such as descriptions of sexual activity or content promoting sexual services (excluding sex education and wellness), as well as erotic chat and pornography.
9. Political campaigning or lobbying: This includes generating large volumes of campaign materials; generating campaign materials personalized to or targeted at specific demographics; building conversational or interactive systems, such as chatbots, that provide information about campaigns or engage in political advocacy or lobbying; and building products for political campaigning or lobbying purposes.
10. Activities violating people's privacy: This includes tracking or monitoring an individual without their consent; facial recognition of private individuals; classifying individuals based on protected characteristics; using biometrics for identification or assessment; and the unlawful collection or disclosure of personally identifiable information or of educational, financial, or other protected records.
11. Unauthorized practice of law or offering tailored legal advice without a qualified person's review: MAIDEPOT models are not designed to provide legal advice and should not be relied upon as the sole source of legal advice.
12. Offering tailored financial advice without a qualified person's review: MAIDEPOT models are not designed to provide financial advice and should not be relied upon as the sole source of financial advice.
13. Diagnosing specific health conditions or providing treatment instructions: MAIDEPOT models are not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions. MAIDEPOT's platform should not be used to triage or manage life-threatening issues that need immediate attention.
14. High-risk government decision-making: This includes, but is not limited to, law enforcement and criminal justice, and migration and asylum.
We have further requirements for certain uses of our models:
1. Consumer-facing uses of our models in the medical, financial, and legal industries; in news generation or news summarization; and wherever else warranted must provide a disclaimer informing users that AI is being used and describing its potential limitations.
2. Automated systems (including conversational AI and chatbots) must disclose to users that they are interacting with an AI system. With the exception of chatbots that depict historical public figures, products that simulate another person must either have that person's explicit consent or be clearly labeled as "simulated" or "parody."