# HarmCategory enum
Harm categories that would cause prompts or candidates to be blocked.
**Signature:**

```typescript
export declare enum HarmCategory
```
## Enumeration Members

| Member | Value | Description |
| --- | --- | --- |
| HARM_CATEGORY_DANGEROUS_CONTENT | `"HARM_CATEGORY_DANGEROUS_CONTENT"` | Dangerous content. |
| HARM_CATEGORY_HARASSMENT | `"HARM_CATEGORY_HARASSMENT"` | Harassment content. |
| HARM_CATEGORY_HATE_SPEECH | `"HARM_CATEGORY_HATE_SPEECH"` | Hate speech and related content. |
| HARM_CATEGORY_SEXUALLY_EXPLICIT | `"HARM_CATEGORY_SEXUALLY_EXPLICIT"` | Sexually explicit content. |
| HARM_CATEGORY_UNSPECIFIED | `"HARM_CATEGORY_UNSPECIFIED"` | Category is unspecified. |
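## Example

As context, these values are typically passed as the `category` field of a safety setting when constructing a model. The sketch below assumes the `GoogleGenerativeAI` entry point and the companion `HarmBlockThreshold` enum from the same package, with a placeholder API key.

```typescript
import {
  GoogleGenerativeAI,
  HarmCategory,
  HarmBlockThreshold,
} from "@google/generative-ai";

// Placeholder for illustration; supply a real API key.
const genAI = new GoogleGenerativeAI("YOUR_API_KEY");

// Each safety setting pairs a HarmCategory with a blocking threshold,
// controlling how readily prompts or candidates in that category are blocked.
const model = genAI.getGenerativeModel({
  model: "gemini-pro",
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
      threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
  ],
});
```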