Migrate from the legacy custom model API
Version 22.0.2 of the firebase-ml-model-interpreter library introduces a new getLatestModelFile() method, which gets the location on the device of custom models. You can use this method to directly instantiate a TensorFlow Lite Interpreter object, which you can use instead of the FirebaseModelInterpreter wrapper.
Going forward, this is the preferred approach. Because the TensorFlow Lite interpreter version is no longer coupled with the Firebase library version, you have more flexibility to upgrade to new versions of TensorFlow Lite when you want, or to more easily use custom TensorFlow Lite builds.
This page shows how you can migrate from using FirebaseModelInterpreter to the TensorFlow Lite Interpreter.
1. Update project dependencies
Update your project's dependencies to include version 22.0.2 of the firebase-ml-model-interpreter library (or later) and the tensorflow-lite library:
Before
implementation("com.google.firebase:firebase-ml-model-interpreter:22.0.1")
After
implementation("com.google.firebase:firebase-ml-model-interpreter:22.0.2")
implementation("org.tensorflow:tensorflow-lite:2.0.0")
2. Create a TensorFlow Lite interpreter instead of a FirebaseModelInterpreter
Instead of creating a FirebaseModelInterpreter, get the model's location on the device with getLatestModelFile() and use it to create a TensorFlow Lite Interpreter.
Before
Kotlin
val remoteModel = FirebaseCustomRemoteModel.Builder("your_model").build()
val options = FirebaseModelInterpreterOptions.Builder(remoteModel).build()
val interpreter = FirebaseModelInterpreter.getInstance(options)
Java
FirebaseCustomRemoteModel remoteModel =
        new FirebaseCustomRemoteModel.Builder("your_model").build();
FirebaseModelInterpreterOptions options =
        new FirebaseModelInterpreterOptions.Builder(remoteModel).build();
FirebaseModelInterpreter interpreter = FirebaseModelInterpreter.getInstance(options);
After
Kotlin
val remoteModel = FirebaseCustomRemoteModel.Builder("your_model").build()
FirebaseModelManager.getInstance().getLatestModelFile(remoteModel)
    .addOnCompleteListener { task ->
        val modelFile = task.getResult()
        if (modelFile != null) {
            // Instantiate an org.tensorflow.lite.Interpreter object.
            interpreter = Interpreter(modelFile)
        }
    }
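The Kotlin snippet above assigns to interpreter, so it assumes a property declared earlier in the class, for example:

// Assumed declaration elsewhere in the class; set once the model file is available.
private var interpreter: Interpreter? = null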
Java
FirebaseCustomRemoteModel remoteModel =
        new FirebaseCustomRemoteModel.Builder("your_model").build();
FirebaseModelManager.getInstance().getLatestModelFile(remoteModel)
    .addOnCompleteListener(new OnCompleteListener<File>() {
        @Override
        public void onComplete(@NonNull Task<File> task) {
            File modelFile = task.getResult();
            if (modelFile != null) {
                // Instantiate an org.tensorflow.lite.Interpreter object.
                Interpreter interpreter = new Interpreter(modelFile);
            }
        }
    });
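Note that getLatestModelFile() only returns a usable File once the model has been downloaded to the device. If your app hasn't already triggered a download, you can request one through the model manager first; a minimal Kotlin sketch, with an illustrative download condition:

val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi() // illustrative condition; adjust for your app
    .build()
FirebaseModelManager.getInstance().download(remoteModel, conditions)
    .addOnSuccessListener {
        // The model is now on the device, so getLatestModelFile() can return its File.
    }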
3. Update input and output preparation code
With FirebaseModelInterpreter, you specify the model's input and output shapes by passing a FirebaseModelInputOutputOptions object to the interpreter when you run it.
For the TensorFlow Lite interpreter, you instead allocate ByteBuffer objects sized to fit your model's input and output.
For example, if your model has an input shape of [1 224 224 3] float values and an output shape of [1 1000] float values, make these changes:
Before
Kotlin
val inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 224, 224, 3))
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 1000))
    .build()

val input = ByteBuffer.allocateDirect(224*224*3*4).order(ByteOrder.nativeOrder())
// Then populate with input data.

val inputs = FirebaseModelInputs.Builder()
    .add(input)
    .build()

interpreter.run(inputs, inputOutputOptions)
    .addOnSuccessListener { outputs ->
        // ...
    }
    .addOnFailureListener {
        // Task failed with an exception.
        // ...
    }
Java
FirebaseModelInputOutputOptions inputOutputOptions =
        new FirebaseModelInputOutputOptions.Builder()
            .setInputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 224, 224, 3})
            .setOutputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 1000})
            .build();

float[][][][] input = new float[1][224][224][3];
// Then populate with input data.

FirebaseModelInputs inputs = new FirebaseModelInputs.Builder()
        .add(input)
        .build();

interpreter.run(inputs, inputOutputOptions)
    .addOnSuccessListener(
        new OnSuccessListener<FirebaseModelOutputs>() {
            @Override
            public void onSuccess(FirebaseModelOutputs result) {
                // ...
            }
        })
    .addOnFailureListener(
        new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // Task failed with an exception
                // ...
            }
        });
After
Kotlin
val inBufferSize = 1 * 224 * 224 * 3 * java.lang.Float.SIZE / java.lang.Byte.SIZE
val inputBuffer = ByteBuffer.allocateDirect(inBufferSize).order(ByteOrder.nativeOrder())
// Then populate with input data.
val outBufferSize = 1 * 1000 * java.lang.Float.SIZE / java.lang.Byte.SIZE
val outputBuffer = ByteBuffer.allocateDirect(outBufferSize).order(ByteOrder.nativeOrder())
interpreter.run(inputBuffer, outputBuffer)
Java
int inBufferSize = 1 * 224 * 224 * 3 * java.lang.Float.SIZE / java.lang.Byte.SIZE;
ByteBuffer inputBuffer =
        ByteBuffer.allocateDirect(inBufferSize).order(ByteOrder.nativeOrder());
// Then populate with input data.

int outBufferSize = 1 * 1000 * java.lang.Float.SIZE / java.lang.Byte.SIZE;
ByteBuffer outputBuffer =
        ByteBuffer.allocateDirect(outBufferSize).order(ByteOrder.nativeOrder());

interpreter.run(inputBuffer, outputBuffer);
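How you populate inputBuffer depends on your model. As a sketch only, for the [1 224 224 3] float image model above you might write a 224×224 Bitmap's pixels as RGB floats in Kotlin; the bitmap variable and the [0, 1] normalization are assumptions about your input pipeline:

// Sketch: write a 224x224 Bitmap into inputBuffer as floats normalized to [0, 1].
val pixels = IntArray(224 * 224)
bitmap.getPixels(pixels, 0, 224, 0, 0, 224, 224)
for (pixel in pixels) {
    inputBuffer.putFloat(((pixel shr 16) and 0xFF) / 255.0f) // R
    inputBuffer.putFloat(((pixel shr 8) and 0xFF) / 255.0f)  // G
    inputBuffer.putFloat((pixel and 0xFF) / 255.0f)          // B
}
inputBuffer.rewind()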
4. Update output handling code
Finally, instead of getting the model's output with the FirebaseModelOutputs object's getOutput() method, convert the ByteBuffer output to whatever structure is convenient for your use case.
For example, if you're doing classification, you might make changes like the following:
Before
Kotlin
val output = result.getOutput(0)
val probabilities = output[0]
try {
    val reader = BufferedReader(InputStreamReader(assets.open("custom_labels.txt")))
    for (probability in probabilities) {
        val label: String = reader.readLine()
        println("$label: $probability")
    }
} catch (e: IOException) {
    // File not found?
}
Java
float[][] output = result.getOutput(0);
float[] probabilities = output[0];
try {
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(getAssets().open("custom_labels.txt")));
    for (float probability : probabilities) {
        String label = reader.readLine();
        Log.i(TAG, String.format("%s: %1.4f", label, probability));
    }
} catch (IOException e) {
    // File not found?
}
After
Kotlin
modelOutput.rewind()
val probabilities = modelOutput.asFloatBuffer()
try {
    val reader = BufferedReader(
            InputStreamReader(assets.open("custom_labels.txt")))
    for (i in 0 until probabilities.capacity()) {
        val label: String = reader.readLine()
        val probability = probabilities.get(i)
        println("$label: $probability")
    }
} catch (e: IOException) {
    // File not found?
}
Java
modelOutput.rewind();
FloatBuffer probabilities = modelOutput.asFloatBuffer();
try {
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(getAssets().open("custom_labels.txt")));
    for (int i = 0; i < probabilities.capacity(); i++) {
        String label = reader.readLine();
        float probability = probabilities.get(i);
        Log.i(TAG, String.format("%s: %1.4f", label, probability));
    }
} catch (IOException e) {
    // File not found?
}
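If you only need the top predictions rather than a full printout, you can pair each label with its probability and sort. A minimal Kotlin sketch, assuming the label file has one line per output value:

// Sketch: pair labels with probabilities and keep the three highest.
val labels = assets.open("custom_labels.txt").bufferedReader().readLines()
val topK = labels.indices
    .map { i -> labels[i] to probabilities.get(i) }
    .sortedByDescending { it.second }
    .take(3)
for ((label, probability) in topK) {
    println("$label: $probability")
}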