Use a custom TensorFlow Lite model with Flutter
If your app uses custom TensorFlow Lite models, you can use Firebase ML to deploy them. By deploying models with Firebase, you can reduce your app's initial download size and update your app's ML models without releasing a new version of your app. And, with Remote Config and A/B Testing, you can dynamically serve different models to different sets of users.
TensorFlow Lite models
TensorFlow Lite models are ML models that are optimized to run on mobile devices. To get a TensorFlow Lite model:

- Use a pre-built model, such as one of the official TensorFlow Lite models.
- Convert a TensorFlow model, Keras model, or concrete function to TensorFlow Lite.
Note that in the absence of a maintained TensorFlow Lite library for Dart, you will need to integrate with the native TensorFlow Lite library for your platforms. That integration is not documented here.
Before you begin
1. Install and initialize the Firebase SDKs for Flutter if you haven't already done so.
2. From the root directory of your Flutter project, run the following command to install the ML model downloader plugin:
flutter pub add firebase_ml_model_downloader
3. Rebuild your project:
flutter run
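Before the downloader plugin can be used, the Firebase app itself must be initialized. Below is a minimal sketch of that initialization, assuming the standard FlutterFire setup in which the flutterfire configure tool has generated a firebase_options.dart file exposing DefaultFirebaseOptions; the MyApp widget is a placeholder for your real widget tree:

import 'package:firebase_core/firebase_core.dart';
import 'package:flutter/widgets.dart';

// Assumed to be generated by `flutterfire configure`.
import 'firebase_options.dart';

Future<void> main() async {
  // Required before making asynchronous Firebase calls from main().
  WidgetsFlutterBinding.ensureInitialized();

  // Initialize the default Firebase app for the current platform.
  await Firebase.initializeApp(
    options: DefaultFirebaseOptions.currentPlatform,
  );

  runApp(const MyApp());
}

// Placeholder root widget; replace with your app's actual UI.
class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) => const SizedBox.shrink();
}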
1. Deploy your model
Deploy your custom TensorFlow models using either the Firebase console or the Firebase Admin Python and Node.js SDKs. See Deploy and manage custom models.

After you add a custom model to your Firebase project, you can reference the model in your apps using the name you specified. At any time, you can deploy a new TensorFlow Lite model and download the new model onto users' devices by calling getModel() (see below).
2. Download the model to the device and initialize a TensorFlow Lite interpreter
To use your TensorFlow Lite model in your app, first use the model downloader to download the latest version of the model to the device. Then, instantiate a TensorFlow Lite interpreter with the model.

To start the model download, call the model downloader's getModel() method, specifying the name you assigned the model when you uploaded it, whether you want to always download the latest model, and the conditions under which you want to allow downloading.
You can choose from three download behaviors:

Download type | Description
--- | ---
localModel | Get the local model from the device. If there is no local model available, this behaves like latestModel. Use this download type if you are not interested in checking for model updates. For example, you're using Remote Config to retrieve model names and you always upload models under new names (recommended); see the sketch after the code example below.
localModelUpdateInBackground | Get the local model from the device and start updating the model in the background. If there is no local model available, this behaves like latestModel.
latestModel | Get the latest model. If the local model is the latest version, this returns the local model. Otherwise, it downloads the latest model. This behavior blocks until the latest version is downloaded (not recommended). Use this behavior only if you explicitly need the latest version.
You should disable model-related functionality, for example by greying out or hiding part of your UI, until you confirm the model has been downloaded.
import 'package:firebase_ml_model_downloader/firebase_ml_model_downloader.dart';

FirebaseModelDownloader.instance
.getModel(
"yourModelName",
FirebaseModelDownloadType.localModel,
FirebaseModelDownloadConditions(
iosAllowsCellularAccess: true,
iosAllowsBackgroundDownloading: false,
androidChargingRequired: false,
androidWifiRequired: false,
androidDeviceIdleRequired: false,
)
)
.then((customModel) {
// Download complete. Depending on your app, you could enable the ML
// feature, or switch from the local model to the remote model, etc.
// The CustomModel object contains the local path of the model file,
// which you can use to instantiate a TensorFlow Lite interpreter.
final localModelPath = customModel.file;
// ...
});
Many apps start the download task in their initialization code, but you can do so at any point before you need to use the model.
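As a concrete illustration of the recommended localModel pattern from the table above, the following sketch reads the model name from Remote Config before starting the download. The parameter key custom_model_name and the fallback name yourModelName are assumptions made for this example, not part of the Firebase API:

import 'package:firebase_ml_model_downloader/firebase_ml_model_downloader.dart';
import 'package:firebase_remote_config/firebase_remote_config.dart';

Future<CustomModel> fetchConfiguredModel() async {
  final remoteConfig = FirebaseRemoteConfig.instance;

  // Fall back to a known model name when Remote Config has no value yet.
  await remoteConfig.setDefaults(const {'custom_model_name': 'yourModelName'});
  await remoteConfig.fetchAndActivate();

  final modelName = remoteConfig.getString('custom_model_name');

  // localModel returns immediately when a copy is already on the device;
  // uploading updated models under new names lets users pick up changes.
  return FirebaseModelDownloader.instance.getModel(
    modelName,
    FirebaseModelDownloadType.localModel,
    FirebaseModelDownloadConditions(),
  );
}

Because each new model version is uploaded under a new name, the cheap localModel lookup never serves a stale model to users who have fetched the updated name.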
3. Perform inference on input data

Now that you have your model file on the device, you can use it with a TensorFlow Lite interpreter to perform inference. In the absence of a maintained TensorFlow Lite library for Dart, you will need to integrate with the native TensorFlow Lite libraries for iOS and Android.
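One common way to reach the native interpreters from Dart is the community-maintained tflite_flutter plugin. The following is a minimal sketch, assuming tflite_flutter is declared in your pubspec; the 1x4 input and 1x2 output shapes are placeholders that you must replace with your model's actual tensor shapes:

import 'package:firebase_ml_model_downloader/firebase_ml_model_downloader.dart';
import 'package:tflite_flutter/tflite_flutter.dart';

Future<List<double>> runInference(List<double> features) async {
  final customModel = await FirebaseModelDownloader.instance.getModel(
    'yourModelName',
    FirebaseModelDownloadType.localModel,
  );

  // Create a native interpreter directly from the downloaded model file.
  final interpreter = Interpreter.fromFile(customModel.file);

  // Placeholder tensor shapes: a 1x4 float input and a 1x2 float output.
  final input = [features];
  final output = [List<double>.filled(2, 0.0)];

  interpreter.run(input, output);
  interpreter.close();

  return output[0];
}

Because the plugin binds the same native TensorFlow Lite libraries on iOS and Android, a single Dart code path can cover both platforms; check the plugin's documentation for the tensor types your model requires.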
Appendix: Model security
Regardless of how you make your own TensorFlow Lite models available to Firebase ML, Firebase ML stores them in the standard serialized protobuf format in local storage.

In theory, this means that anybody can copy your model. In practice, however, most models are so application-specific and obfuscated by optimizations that the risk is similar to that of competitors disassembling and reusing your code. Nevertheless, you should be aware of this risk before you use a custom model in your app.