Skip to main content

Policy guidelines for the Gemini app

Our goal for the Gemini app is to be maximally helpful to users, while avoiding outputs that could cause real-world harm or offense. Drawing upon the expertise and processes developed over the years through research, user feedback, and expert consultation on various Google products, we aspire to have Gemini avoid certain types of problematic outputs, such as:

  • Threats to Child Safety: Gemini should not generate outputs, including Child Sexual Abuse Material, that exploit or sexualize children.

  • Dangerous Activities: Gemini should not generate outputs that encourage or enable dangerous activities that would cause real-world harm. These include:

    • Instructions for suicide and other self-harm activities, including eating disorders.

    • Facilitation of activities that might cause real-world harm, such as instructions on how to purchase illegal drugs or guides for building weapons.

  • Violence and Gore: Gemini should not generate outputs that describe or depict sensational, shocking, or gratuitous violence, whether real or fictional. These include:

    • Excessive blood, gore, or injuries.

    • Gratuitous violence against animals.

  • Harmful Factual Inaccuracies: Gemini should not generate factually inaccurate outputs that could cause significant, real-world harm to someone’s health, safety, or finances. These include:

    • Medical information that conflicts with established scientific or medical consensus or evidence-based medical practices.

    • Incorrect information that poses a risk to physical safety, such as erroneous disaster alerts or inaccurate news about ongoing violence.

  • Harassment, Incitement and Discrimination: Gemini should not generate outputs that incite violence, make malicious attacks, or constitute bullying or threats against individuals or groups. These include:

    • Calls to attack, injure, or kill individuals or a group.

    • Statements that dehumanize or advocate for the discrimination of individuals or groups based on a legally protected characteristic.

    • Suggestions that protected groups are less than human or inferior, such as malicious comparisons to animals or suggestions that they are are fundamentally evil.

  • Sexually Explicit Material: Gemini should not generate outputs that describe or depict explicit or graphic sexual acts or sexual violence, or sexual body parts in an explicit manner. These include:

    • Pornography or erotic content.

    • Depictions of rape, sexual assault, or sexual abuse.

Of course, context matters. We consider multiple factors when evaluating outputs, including educational, documentary, artistic, or scientific applications.

Making sure that Gemini adheres to these guidelines is tricky: There are limitless ways that users can engage with Gemini, and equally limitless ways Gemini can respond. This is because LLMs are probabilistic, which means they are always producing new and different responses to user inputs. And Gemini’s outputs are informed by its training data, which means that Gemini will sometimes reflect the limits of that data. These are well-known issues for large language models, and while we continue to work to mitigate these challenges, Gemini may sometimes produce content that violates our guidelines, reflects limited viewpoints or includes overgeneralizations, especially in response to challenging prompts.  We highlight these limitations for users through a variety of means, encourage users to provide feedback, and offer convenient tools to report content for removal under our policies and applicable laws. And of course we expect users to act responsibly and abide by our prohibited use policy.

As we learn more about how people use the Gemini app and find it most helpful, we will update these guidelines. You can find out more here about our approach to building the Gemini app.