GenerateAnswerRequest.SafetySettings Property
Optional. A list of unique SafetySetting instances for blocking unsafe content. These settings are enforced on the request Contents and on GenerateAnswerResponse.candidate. There should be no more than one setting per SafetyCategory type; the API blocks any content or response that fails to meet the thresholds set by these settings. This list overrides the default setting for each SafetyCategory it covers; if the list contains no SafetySetting for a given SafetyCategory, the API uses the default safety setting for that category. The harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, and HARM_CATEGORY_HARASSMENT are supported. Refer to the safety settings guide for detailed information on the available settings, and to the safety guidance to learn how to incorporate safety considerations in your AI applications.
Namespace: GenerativeAI.Types
Assembly: GenerativeAI (in GenerativeAI.dll) Version: 2.0.2+aa51399cad6d90cc71158d589a6268608b3c1893
public List<SafetySetting>? SafetySettings { get; set; }
Property Value
List&lt;SafetySetting&gt;
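A minimal sketch of populating this property, assuming the library exposes `HarmCategory` and `HarmBlockThreshold` enums and `Category`/`Threshold` members on `SafetySetting` (only `SafetySettings` and `SafetySetting` are confirmed by this page; the other names mirror the underlying Gemini API and may differ in your version of the library):

```csharp
using System.Collections.Generic;
using GenerativeAI.Types;

var request = new GenerateAnswerRequest
{
    SafetySettings = new List<SafetySetting>
    {
        // At most one setting per SafetyCategory; categories not listed
        // here keep their default safety setting.
        new SafetySetting
        {
            Category = HarmCategory.HARM_CATEGORY_HARASSMENT,
            Threshold = HarmBlockThreshold.BLOCK_ONLY_HIGH
        },
        new SafetySetting
        {
            Category = HarmCategory.HARM_CATEGORY_HATE_SPEECH,
            Threshold = HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE
        }
    }
};
```

Because the property is nullable (`List<SafetySetting>?`), leaving it unset means the API applies its default safety settings to every supported category.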