Firebase.AI.SafetySetting
A type used to specify a threshold for harmful content, beyond which the model will return a fallback response instead of generated content.
Summary
Public types
HarmBlockMethod
```c#
Firebase::AI::SafetySetting::HarmBlockMethod
```
The method of computing whether the threshold has been exceeded.
| Property | Description |
|---|---|
| `Probability` | Use only the probability score. |
| `Severity` | Use both probability and severity scores. |
HarmBlockThreshold
```c#
Firebase::AI::SafetySetting::HarmBlockThreshold
```
Block at and beyond a specified threshold.
| Property | Description |
|---|---|
| `LowAndAbove` | Content with negligible harm is allowed. |
| `MediumAndAbove` | Content with negligible to low harm is allowed. |
| `None` | All content is allowed regardless of harm. |
| `Off` | All content is allowed regardless of harm, and metadata will not be included in the response. |
| `OnlyHigh` | Content with negligible to medium harm is allowed. |
Public functions
SafetySetting
```c#
Firebase::AI::SafetySetting::SafetySetting(
    HarmCategory category,
    HarmBlockThreshold threshold,
    HarmBlockMethod? method
)
```
Initializes a new safety setting with the given category and threshold.
Details

| Parameter | Description |
|---|---|
| `category` | The category this safety setting should be applied to. |
| `threshold` | The threshold describing what content should be blocked. |
| `method` | The method of computing whether the threshold has been exceeded; if not specified, the default method is `Severity` for most models. This parameter is unused in the GoogleAI backend. |
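As a minimal sketch, a setting that blocks content rated medium harm or above might be constructed like this. `HarmCategory.Harassment` is an assumed value of the companion `HarmCategory` enum, which is documented on its own page, not here:

```c#
using Firebase.AI;

// Sketch only: block content rated medium harm or above, judged by the
// probability score alone. HarmCategory.Harassment is an assumed enum
// value; check the HarmCategory reference for the full list.
var harassmentSetting = new SafetySetting(
    HarmCategory.Harassment,
    SafetySetting.HarmBlockThreshold.MediumAndAbove,
    SafetySetting.HarmBlockMethod.Probability  // optional; defaults to Severity for most models
);
```

The optional `method` argument can be omitted entirely, in which case the backend's default method applies (and it is ignored altogether on the GoogleAI backend).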
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-05-20 UTC.