FirebaseVertexAI Framework Reference
ResponseModality
@available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
public struct ResponseModality : EncodableProtoEnum, Sendable
Represents the different types, or modalities, of data that a model can produce as output.
To configure the desired output modalities for model requests, set the responseModalities parameter when initializing a GenerationConfig. See the multimodal responses documentation for more details.
Important: Support for each response modality, or combination of modalities, depends on the model.
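As a sketch, configuring output modalities might look like the following (the model name is illustrative, and `generativeModel(modelName:generationConfig:)` is assumed from the FirebaseVertexAI SDK rather than stated on this page):

```swift
import FirebaseVertexAI

// Request both text and image output in model responses.
// Support for this combination depends on the model chosen.
let config = GenerationConfig(
  responseModalities: [.text, .image]
)

// Model name below is illustrative only.
let model = VertexAI.vertexAI().generativeModel(
  modelName: "gemini-2.0-flash-exp",
  generationConfig: config
)
```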
text
Specifies that the model should generate textual content.
Use this modality when you need the model to produce written language, such as answers to
questions, summaries, creative writing, code snippets, or structured data formats like JSON.
Declaration
Swift
public static let text: ResponseModality
image
Public Experimental: Specifies that the model should generate image data.
Use this modality when you want the model to create visual content based on the provided input
or prompts. The response might contain one or more generated images. See the image
generation
documentation for more details.
Warning
Image generation using Gemini 2.0 Flash is a Public Experimental feature, which
means that it is not subject to any SLA or deprecation policy and could change in
backwards-incompatible ways.
Declaration
Swift
public static let image: ResponseModality
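A minimal sketch of retrieving generated images from a response, assuming a model configured with the image modality; `InlineDataPart` and its `data`/`mimeType` properties are assumptions from the broader SDK, not defined on this page:

```swift
// Send a prompt and scan the returned parts for inline image data.
// (Assumes `model` was created with responseModalities including .image.)
let response = try await model.generateContent("Draw a lighthouse at dusk")
for part in response.candidates.first?.content.parts ?? [] {
  if let imagePart = part as? InlineDataPart {
    // imagePart.data holds the raw image bytes (e.g. PNG).
    print("Received \(imagePart.mimeType), \(imagePart.data.count) bytes")
  }
}
```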
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-04-21 UTC.