You can use safety settings to adjust the likelihood of getting responses that may be considered harmful. By default, safety settings block content with a medium or high probability of being unsafe across all dimensions.
Safety settings for Gemini models
Learn more about safety settings in the Google Cloud documentation.
You configure SafetySettings during initialization of the model. Here are some basic examples.
Here's how to set one safety setting:
// ...
let model = vertex.generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: [
    SafetySetting(harmCategory: .harassment, threshold: .blockOnlyHigh)
  ]
)
// ...
You can also set more than one safety setting:
// ...
let harassmentSafety = SafetySetting(harmCategory: .harassment, threshold: .blockOnlyHigh)
let hateSpeechSafety = SafetySetting(harmCategory: .hateSpeech, threshold: .blockMediumAndAbove)
let model = vertex.generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: [harassmentSafety, hateSpeechSafety]
)
// ...
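Safety settings configured at initialization apply to every request made with that model instance. As a minimal usage sketch (the prompt text here is illustrative, not part of the original example):
// Call from an async context; the safety settings above apply to this request.
let response = try await model.generateContent("Write a short, friendly product review.")
print(response.text ?? "No text in response")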
Safety settings for Imagen models
Learn about all the supported safety settings and their available values for Imagen models.
// Initialize the Vertex AI service
let vertex = VertexAI.vertexAI()
// Initialize with an Imagen 3 model that supports your use case
let model = vertex.imagenModel(
  modelName: "IMAGEN_MODEL_NAME",
  // Configure image generation safety settings for the model
  safetySettings: ImagenSafetySettings(
    safetyFilterLevel: .blockLowAndAbove,
    personFilterLevel: .allowAdult
  )
)
// ...
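The configured filters then apply to each image request made with this model instance. A minimal usage sketch, assuming the SDK's generateImages(prompt:) call (the prompt text is illustrative):
// Call from an async context; the safety filters above apply to this request.
let response = try await model.generateImages(prompt: "A watercolor painting of a lighthouse at sunset")
if let image = response.images.first {
  // image.data contains the raw bytes of the generated image.
  print("Generated image with \(image.data.count) bytes")
}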
Other options to control content generation
- Learn more about prompt design so that you can influence the model to generate output specific to your needs.
- Configure model parameters to control how the model generates a response. For Gemini models, these parameters include max output tokens, temperature, topK, and topP (see the GenerationConfig sketch after this list). For Imagen models, they include aspect ratio, person generation, and watermarking.
- Set system instructions to steer the behavior of the model. This feature acts like a "preamble" that you add before the model is exposed to any further instructions from the end user (see the system-instruction sketch after this list).
- Pass a response schema along with the prompt to specify the output schema you want. This feature is most commonly used when generating JSON output, but it can also be used for classification tasks, for example when you want the model to use specific labels or tags (see the response-schema sketch after this list).
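As a minimal sketch of configuring Gemini model parameters, assuming the SDK's GenerationConfig type (the parameter values are illustrative, not recommendations):
import FirebaseVertexAI

let vertex = VertexAI.vertexAI()

// Configure generation parameters; every field is optional.
let config = GenerationConfig(
  temperature: 0.9,      // higher values produce more varied output
  topP: 0.95,
  topK: 40,
  maxOutputTokens: 200   // cap the length of the generated response
)

let model = vertex.generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  generationConfig: config
)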
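A minimal sketch of setting a system instruction at model initialization (the instruction text is illustrative):
// ... (same initialization as above)
// The system instruction is applied to every request made with this model instance.
let model = vertex.generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  systemInstruction: ModelContent(role: "system", parts: "You are a friendly travel assistant. Answer concisely.")
)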
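And a minimal sketch of requesting structured output with a response schema, assuming the SDK's Schema builder API (Schema.object, .string(), .enumeration(values:)); the schema itself is a hypothetical classification example:
// ... (same initialization as above)

// Hypothetical schema: classify a review into fixed labels plus a short summary.
let reviewSchema = Schema.object(
  properties: [
    "sentiment": Schema.enumeration(values: ["positive", "neutral", "negative"]),
    "summary": Schema.string()
  ]
)

let model = vertex.generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  generationConfig: GenerationConfig(
    responseMIMEType: "application/json",
    responseSchema: reviewSchema
  )
)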