Gemini API using Firebase AI Logic
plat_ios plat_android plat_web plat_flutter plat_unity
Build AI-powered mobile and web apps and features with the Gemini and Imagen models using Firebase AI Logic
Firebase AI Logic gives you access to the latest generative AI models from Google: the Gemini models and the Imagen models.
If you need to call the Gemini API or Imagen API directly from your mobile or web app (rather than server-side), you can use the Firebase AI Logic client SDKs. These client SDKs are built specifically for mobile and web apps, offering security options against unauthorized clients as well as integrations with other Firebase services.
These client SDKs are available in Swift for Apple platforms, Kotlin & Java for Android, JavaScript for web, Dart for Flutter, and Unity.

Note: Firebase AI Logic and its client SDKs were formerly called "Vertex AI in Firebase". In May 2025, the services were renamed and repackaged into Firebase AI Logic to better reflect the expanded services and features, such as support for the Gemini Developer API.

With these client SDKs, you can add AI personalization to your apps, build AI chat experiences, create AI-powered optimizations and automation, and much more!
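As a concrete illustration, a web app might initialize the SDK and send a first prompt roughly like this. This is a minimal sketch that assumes the `firebase/ai` Web entry point and placeholder project values; option shapes and model names can differ by SDK version:

```javascript
// Sketch: calling a Gemini model directly from a web app via the
// Firebase AI Logic client SDK. Config values below are placeholders.
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// Use your project's actual config from the Firebase console.
const app = initializeApp({ apiKey: "...", projectId: "my-project" });

// Choose the Gemini Developer API backend (VertexAIBackend is the alternative).
const ai = getAI(app, { backend: new GoogleAIBackend() });
const model = getGenerativeModel(ai, { model: "gemini-2.0-flash" });

const result = await model.generateContent("Write a haiku about mobile apps.");
console.log(result.response.text());
```

The request goes through the Firebase AI Logic proxy service rather than hitting the model provider directly, which is what keeps the API key off the client.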
Get started
Need more flexibility or server-side integration?
Genkit is Firebase's open-source framework for sophisticated server-side AI development, with broad access to models from Google, OpenAI, Anthropic, and more. It includes more advanced AI features and dedicated local tooling.
Key capabilities

| Capability | Description |
|---|---|
| Multimodal and natural language input | The Gemini models are multimodal, so prompts sent to the Gemini API can include text, images, PDFs, video, and audio. Some Gemini models can also generate multimodal output. Both the Gemini and Imagen models can be prompted with natural language input. |
| Growing suite of capabilities | With the SDKs, you can call the Gemini API or Imagen API directly from your mobile or web app to build AI chat experiences, generate images, use tools (like function calling and grounding with Google Search), stream multimodal input and output (including audio), and more. |
| Security and abuse prevention for production apps | Use Firebase App Check to help protect the APIs that access the Gemini and Imagen models from abuse by unauthorized clients. Firebase AI Logic also applies per-user rate limits by default, and these limits are fully configurable. |
| Robust infrastructure | Take advantage of scalable infrastructure built for mobile and web apps, like managing files with Cloud Storage for Firebase, managing structured data with Firebase database offerings (like Cloud Firestore), and dynamically setting runtime configurations with Firebase Remote Config. |
How does it work?
Firebase AI Logic provides client SDKs, a proxy service, and other features that let you access Google's generative AI models to build AI features in your mobile and web apps.
Support for Google models and "Gemini API" providers
We support all the latest Gemini models and Imagen 3 models, and you choose your preferred "Gemini API" provider to access them.
We support both the Gemini Developer API and the Vertex AI Gemini API. Learn about the differences between using the two API providers.
If you choose to use the Gemini Developer API, you can take advantage of its free tier to get up and running fast.
Mobile & web client SDKs
You send requests to the models directly from your mobile or web app using our Firebase AI Logic client SDKs, available in Swift for Apple platforms, Kotlin & Java for Android, JavaScript for web, Dart for Flutter, and Unity.
If you have both Gemini API providers set up in your Firebase project, you can switch between them just by enabling the other API and changing a few lines of initialization code.
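In the Web SDK, the provider switch comes down to which backend you pass at initialization. A sketch, assuming the `firebase/ai` module and a placeholder project config:

```javascript
// Sketch: the only initialization difference between the two providers
// is the backend argument. Config values are placeholders.
import { initializeApp } from "firebase/app";
import { getAI, GoogleAIBackend, VertexAIBackend } from "firebase/ai";

const app = initializeApp({ apiKey: "...", projectId: "my-project" });

// Gemini Developer API:
const ai = getAI(app, { backend: new GoogleAIBackend() });

// Vertex AI Gemini API (specify a Google Cloud location):
// const ai = getAI(app, { backend: new VertexAIBackend("us-central1") });
```

Model creation and prompting code stays the same either way, which is what makes switching providers a few-line change.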
Additionally, our client SDK for Web offers experimental access to hybrid and on-device inference for web apps running in Chrome on desktop. This configuration allows your app to use the on-device model when it's available, but fall back seamlessly to the cloud-hosted model when needed.
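The fallback behavior itself can be sketched generically. This is not the SDK's actual API; `onDeviceModel` and `cloudModel` are hypothetical stand-ins that only illustrate the prefer-on-device, fall-back-to-cloud pattern:

```javascript
// Hypothetical sketch of hybrid inference: try the on-device model first,
// fall back to the cloud-hosted model when the local one is unavailable.
async function generateWithFallback(onDeviceModel, cloudModel, prompt) {
  if (onDeviceModel && (await onDeviceModel.isAvailable())) {
    return onDeviceModel.generate(prompt); // run locally when possible
  }
  return cloudModel.generate(prompt); // otherwise use the hosted model
}

// Stub models demonstrating the two paths:
const onDevice = {
  isAvailable: async () => false, // e.g. model not downloaded yet
  generate: async (p) => `local: ${p}`,
};
const cloud = { generate: async (p) => `cloud: ${p}` };

generateWithFallback(onDevice, cloud, "hi").then((r) => console.log(r)); // → "cloud: hi"
```

In the real SDK this selection is handled for you; the sketch just shows why callers see one uniform interface regardless of where inference runs.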
Proxy service
Our proxy service acts as a gateway between the client and your chosen Gemini API provider (and Google's models). It provides services and integrations that are important for mobile and web apps. For example, you can set up Firebase App Check to help protect your chosen API provider and your backend resources from abuse by unauthorized clients.
This is particularly critical if you choose to use the Gemini Developer API, because our proxy service and the App Check integration ensure that your Gemini API key stays on the server and is not embedded in your app's codebase.
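Enabling App Check in a web app is a one-time initialization step. A sketch, assuming the standard `firebase/app-check` module and a placeholder reCAPTCHA v3 site key:

```javascript
// Sketch: registering Firebase App Check so requests to the Firebase AI Logic
// proxy carry an attestation token. The site key below is a placeholder.
import { initializeApp } from "firebase/app";
import { initializeAppCheck, ReCaptchaV3Provider } from "firebase/app-check";

const app = initializeApp({ apiKey: "...", projectId: "my-project" });

initializeAppCheck(app, {
  provider: new ReCaptchaV3Provider("your-recaptcha-v3-site-key"),
  // Keep tokens fresh in the background so model calls aren't interrupted.
  isTokenAutoRefreshEnabled: true,
});
```

Once App Check is enforced, the proxy can reject traffic from clients that can't prove they are your genuine app, which is what keeps the server-held API key from being useful to abusers.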
Implementation path

1. Set up your Firebase project and connect your app to Firebase: use the guided workflow on the Firebase AI Logic page of the Firebase console to set up your project (including enabling the required APIs for your chosen Gemini API provider), register your app with your Firebase project, and add your Firebase configuration to your app.
2. Install the SDK and initialize: install the Firebase AI Logic SDK that's specific to your app's platform, then initialize the service and create a model instance in your app.
3. Send prompt requests to the Gemini and Imagen models: use the SDKs to send text-only or multimodal prompts to a Gemini model to generate text and code, structured output (like JSON), and images. Alternatively, prompt an Imagen model to generate images. Build richer experiences with multi-turn chat, bidirectional streaming of text and audio, and function calling.
4. Prepare for production: implement important integrations for mobile and web apps, like protecting the API from abuse with Firebase App Check and using Firebase Remote Config to update parameters in your code remotely (like the model name).
Next steps
Get started with accessing a model from your mobile or web app
Go to the Getting Started guide
Learn more about the supported models
Learn about the models available for various use cases, as well as their quotas and pricing.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated (UTC): 2025-08-19.