Firebase.AI

Summary

Enumerations

enum BlockReason {
  Unknown = 0,
  Safety,
  Other,
  Blocklist,
  ProhibitedContent
}
A type describing possible reasons to block a prompt.
enum ContentModality {
  Unknown = 0,
  Text,
  Image,
  Video,
  Audio,
  Document
}
Content part modality.
enum FinishReason {
  Unknown = 0,
  Stop,
  MaxTokens,
  Safety,
  Recitation,
  Other,
  Blocklist,
  ProhibitedContent,
  SPII,
  MalformedFunctionCall
}
Represents the reason why the model stopped generating content.
enum HarmCategory {
  Unknown = 0,
  Harassment,
  HateSpeech,
  SexuallyExplicit,
  DangerousContent,
  CivicIntegrity
}
Categories describing the potential harm a piece of content may pose.
enum ImagenAspectRatio {
  Square1x1,
  Portrait9x16,
  Landscape16x9,
  Portrait3x4,
  Landscape4x3
}
An aspect ratio for images generated by Imagen.
enum ResponseModality {
  Text,
  Image,
  Audio
}
The type of response the model should return.

Classes

Firebase.AI.Chat

An object that represents a back-and-forth chat with a model, capturing the history and saving the context in memory between each message sent.

Firebase.AI.FirebaseAI

The entry point for all FirebaseAI SDK functionality.

Firebase.AI.GenerativeModel

A type that represents a remote multimodal model (like Gemini), with the ability to generate content based on various input types.

Firebase.AI.ImagenModel

Represents a remote Imagen model with the ability to generate images using text prompts.

Firebase.AI.LiveGenerativeModel

A live, generative AI model for real-time interaction.

Firebase.AI.LiveSession

Manages asynchronous communication with a Gemini model over a WebSocket connection.

Firebase.AI.Schema

A Schema object allows the definition of input and output data types.

Firebase.AI.TemplateGenerativeModel

A type that represents a remote multimodal model (like Gemini), with the ability to generate content based on defined server prompt templates.

Firebase.AI.TemplateImagenModel

Represents a remote Imagen model with the ability to generate images using server template prompts.

Structs

Firebase.AI.AudioTranscriptionConfig

A struct used to configure speech transcription settings.

Firebase.AI.Candidate

A struct representing a possible reply to a content generation prompt.

Firebase.AI.Citation

A struct describing a source attribution.

Firebase.AI.CitationMetadata

A collection of source attributions for a piece of content.

Firebase.AI.CodeExecution

A tool that allows the model to execute code.

Firebase.AI.CountTokensResponse

The model's response to a count tokens request.

Firebase.AI.FunctionCallingConfig

Configuration for specifying function calling behavior.

Firebase.AI.FunctionDeclaration

Structured representation of a function declaration.

Firebase.AI.GenerateContentResponse

The model's response to a generate content request.

Firebase.AI.GenerationConfig

A struct defining model parameters to be used when sending generative AI requests to the backend model.

Firebase.AI.GoogleSearch

A tool that allows the generative model to connect to Google Search to access and incorporate up-to-date information from the web into its responses.

Firebase.AI.GroundingChunk

Represents a chunk of retrieved data that supports a claim in the model's response.

Firebase.AI.GroundingMetadata

Metadata returned to the client when grounding is enabled.

Firebase.AI.GroundingSupport

Provides information about how a specific segment of the model's response is supported by the retrieved grounding chunks.

Firebase.AI.ImagenGenerationConfig

Configuration options for generating images with Imagen.

Firebase.AI.ImagenGenerationResponse< T >

A response from a request to generate images with Imagen.

Firebase.AI.ImagenImageFormat

An image format for images generated by Imagen.

Firebase.AI.ImagenInlineImage

An image generated by Imagen, represented as inline data.

Firebase.AI.ImagenSafetySettings

Settings for controlling the aggressiveness of filtering out sensitive content.

Firebase.AI.LiveGenerationConfig

A struct defining model parameters to be used when generating live session content.

Firebase.AI.LiveSessionContent

Content generated by the model in a live session.

Firebase.AI.LiveSessionResponse

Represents the response from the model for live content updates.

Firebase.AI.LiveSessionToolCall

A request to use a tool from the live session.

Firebase.AI.LiveSessionToolCallCancellation

A request to cancel using a tool from the live session.

Firebase.AI.ModalityTokenCount

Represents token counting info for a single modality.

Firebase.AI.ModelContent

A type describing data in media formats interpretable by an AI model.

Firebase.AI.PromptFeedback

A metadata struct containing any feedback the model had on the prompt it was provided.

Firebase.AI.RequestOptions

Configuration parameters for sending requests to the backend.

Firebase.AI.SafetyRating

A type defining potentially harmful media categories and their model-assigned ratings.

Firebase.AI.SafetySetting

A type used to specify a threshold for harmful content, beyond which the model will return a fallback response instead of generated content.

Firebase.AI.SearchEntryPoint

A struct representing the Google Search entry point.

Firebase.AI.Segment

Represents a specific segment within a ModelContent struct, often used to pinpoint the exact location of text or data that grounding information refers to.

Firebase.AI.SpeechConfig

A struct used to configure speech generation settings.

Firebase.AI.ThinkingConfig

Configuration options for Thinking features.

Firebase.AI.Tool

A helper tool that the model may use when generating responses.

Firebase.AI.ToolConfig

Tool configuration for any Tool specified in the request.

Firebase.AI.Transcription

A transcription of the audio sent in a live session.

Firebase.AI.UrlContext

A tool that allows you to provide additional context to the models in the form of public web URLs.

Firebase.AI.UrlContextMetadata

Metadata related to the UrlContext tool.

Firebase.AI.UrlMetadata

Metadata for a single URL retrieved by the UrlContext tool.

Firebase.AI.UsageMetadata

Token usage metadata for processing the generate content request.

Firebase.AI.WebGroundingChunk

A grounding chunk sourced from the web.

Interfaces

Firebase.AI.IImagenImage

An image generated by Imagen.

Firebase.AI.ILiveSessionMessage

Represents a message received from a live session.

Enumerations

BlockReason

A type describing possible reasons to block a prompt.

Properties
Blocklist

The prompt was blocked because it contained terms from the terminology blocklist.

Other

All other block reasons.

ProhibitedContent

The prompt was blocked due to prohibited content.

Safety

The prompt was blocked because it was deemed unsafe.

Unknown

A new and not yet supported value.

ContentModality

Content part modality.

Properties
Audio

Audio.

Document

Document, e.g., PDF.

Image

Image.

Text

Plain text.

Unknown

A new and not yet supported value.

Video

Video.

FinishReason

Represents the reason why the model stopped generating content.

Properties
Blocklist

Token generation was stopped because the response contained forbidden terms.

MalformedFunctionCall

Token generation was stopped because the function call generated by the model was invalid.

MaxTokens

The maximum number of tokens as specified in the request was reached.

Other

All other reasons that stopped token generation.

ProhibitedContent

Token generation was stopped because the response contained potentially prohibited content.

Recitation

The token generation was stopped because the response was flagged for unauthorized citations.

SPII

Token generation was stopped because of Sensitive Personally Identifiable Information (SPII).

Safety

The token generation was stopped because the response was flagged for safety reasons.

Stop

Natural stop point of the model or provided stop sequence.

Unknown

A new and not yet supported value.
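A typical use of this enum is inspecting why a candidate's output ended. The sketch below is illustrative only: the model ID and the GetGenerativeModel/GenerateContentAsync call shapes are assumptions, not taken from this reference.

```csharp
// Hypothetical usage sketch — model ID and method signatures are assumptions.
var model = FirebaseAI.DefaultInstance.GetGenerativeModel(
    modelName: "gemini-2.0-flash");
var response = await model.GenerateContentAsync(
    "Summarize quantum computing in one paragraph.");

foreach (var candidate in response.Candidates) {
  switch (candidate.FinishReason) {
    case FinishReason.Stop:
      // Natural stop point: the candidate text is complete.
      break;
    case FinishReason.MaxTokens:
      // Output was truncated; consider raising the token limit
      // in the model's GenerationConfig.
      break;
    case FinishReason.Safety:
      // The response was filtered; inspect the candidate's safety ratings.
      break;
    default:
      // Blocklist, Recitation, SPII, etc. — treat the output as incomplete.
      break;
  }
}
```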

HarmCategory

Categories describing the potential harm a piece of content may pose.

Properties
CivicIntegrity

Content that may be used to harm civic integrity.

DangerousContent

Promotes or enables access to harmful goods, services, or activities.

Harassment

Harassment content.

HateSpeech

Negative or harmful comments targeting identity and/or protected attributes.

SexuallyExplicit

Contains references to sexual acts or other lewd content.

Unknown

A new and not yet supported value.

ImagenAspectRatio

An aspect ratio for images generated by Imagen.

To specify an aspect ratio for generated images, set AspectRatio in your ImagenGenerationConfig. See the Cloud documentation for more details and examples of the supported aspect ratios.

Properties
Landscape16x9

Widescreen (16:9) aspect ratio.

This ratio has replaced Landscape4x3 as the most common aspect ratio for TVs, monitors, and mobile phone screens (landscape). Use this aspect ratio when you want to capture more of the background (for example, scenic landscapes).

Landscape4x3

Fullscreen (4:3) aspect ratio.

This aspect ratio is commonly used in media and film. It also matches the dimensions of most older (non-widescreen) TVs and medium-format cameras. It captures more of the scene horizontally (compared to Square1x1), making it a preferred aspect ratio for photography.

Portrait3x4

Portrait full screen (3:4) aspect ratio.

This is the Landscape4x3 aspect ratio rotated 90 degrees. It lets you capture more of the scene vertically compared to the Square1x1 aspect ratio.

Portrait9x16

Portrait widescreen (9:16) aspect ratio.

This is the Landscape16x9 aspect ratio rotated 90 degrees. This is a relatively new aspect ratio that has been popularized by short-form video apps (for example, YouTube Shorts). Use this for tall objects with strong vertical orientations such as buildings, trees, or waterfalls.

Square1x1

Square (1:1) aspect ratio.

Common uses for this aspect ratio include social media posts.
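As described above, an aspect ratio is applied through the AspectRatio field of an ImagenGenerationConfig. The sketch below shows the general shape; the model ID and the exact constructor and method names are assumptions, not taken from this reference.

```csharp
// Hypothetical usage sketch — model ID, constructor shape, and method
// names are assumptions.
var imagen = FirebaseAI.DefaultInstance.GetImagenModel(
    modelName: "imagen-3.0-generate-002",
    generationConfig: new ImagenGenerationConfig(
        aspectRatio: ImagenAspectRatio.Landscape16x9));

// Widescreen output suits scenic prompts that benefit from more background.
var result = await imagen.GenerateImagesAsync(
    "A wide mountain vista at sunrise");
```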

ResponseModality

The type of response the model should return.

Properties
Audio

Public Experimental: Specifies that the model should generate audio data.

Use this modality with a LiveGenerationConfig to create audio content based on the provided input or prompts with a LiveGenerativeModel.

Image

Public Experimental: Specifies that the model should generate image data.

Use this modality when you want the model to create visual content based on the provided input or prompts. The response might contain one or more generated images. See the image generation documentation for more details.

Warning: Image generation using Gemini 2.0 Flash is a Public Experimental feature, which means that it is not subject to any SLA or deprecation policy and could change in backwards-incompatible ways.

Text

Specifies that the model should generate textual content.

Use this modality when you need the model to produce written language, such as answers to questions, summaries, creative writing, code snippets, or structured data formats like JSON.
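Response modalities are typically requested through the generation config passed when creating a model. The sketch below illustrates requesting audio output for a live session, per the Audio property description above; the model ID and the parameter and method names are assumptions, not taken from this reference.

```csharp
// Hypothetical usage sketch — model ID, parameter names, and method
// names are assumptions.
var liveModel = FirebaseAI.DefaultInstance.GetLiveModel(
    modelName: "gemini-2.0-flash-live",
    liveGenerationConfig: new LiveGenerationConfig(
        // Ask the model to reply with generated audio rather than text.
        responseModalities: new[] { ResponseModality.Audio }));
```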