If you want to recognize the contents of an image, one option is to use ML Kit's on-device image labeling API or on-device object detection API. The models these APIs use are built for general-purpose use and are trained to recognize the concepts most commonly found in photos.

If you need a more specialized image labeling or object detection model, one that covers a narrower domain of concepts in more detail (for example, a model to distinguish between species of flowers or types of food), you can use Firebase ML and AutoML Vision Edge to train a model with your own images and categories. The custom model is trained in Google Cloud and, once it's ready, it runs entirely on the device.

Note: Firebase ML's AutoML Vision Edge features are deprecated. Consider using Vertex AI (https://cloud.google.com/vertex-ai/docs/beginner/beginners-guide) to automatically train ML models, which you can either export as TensorFlow Lite models for on-device use or deploy for cloud-based inference.
Key capabilities

Train models based on your data
Automatically train custom image labeling and object detection models to recognize the labels you care about, using your training data.

Built-in model hosting
Host your models with Firebase and load them at run time. By hosting the model on Firebase, you can make sure users have the latest model without releasing a new app version.
You can also bundle the model with your app so that it's available immediately on install.
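If you host the model in Firebase, one way to fetch it at run time on Android is through the Firebase ML model downloader SDK. The following is a minimal sketch, not a definitive implementation; the model name "flower_classifier" is a hypothetical placeholder for whatever name you publish the model under in the Firebase console.

```kotlin
import com.google.firebase.ml.modeldownloader.CustomModelDownloadConditions
import com.google.firebase.ml.modeldownloader.DownloadType
import com.google.firebase.ml.modeldownloader.FirebaseModelDownloader

// Only download the model over Wi-Fi; adjust the conditions for your app.
val conditions = CustomModelDownloadConditions.Builder()
    .requireWifi()
    .build()

FirebaseModelDownloader.getInstance()
    // "flower_classifier" is a hypothetical name; use the name you gave the hosted model.
    .getModel("flower_classifier", DownloadType.LOCAL_MODEL_UPDATE_IN_BACKGROUND, conditions)
    .addOnSuccessListener { customModel ->
        // customModel.file points at the downloaded TensorFlow Lite model on local storage.
        val modelFile = customModel.file
        if (modelFile != null) {
            // Pass modelFile to a TensorFlow Lite Interpreter or an ML Kit custom labeler.
        }
    }
    .addOnFailureListener { e ->
        // Fall back to a model bundled with the app, or retry later.
    }
```

With LOCAL_MODEL_UPDATE_IN_BACKGROUND, a locally cached copy is returned right away when one exists, and newer versions are fetched in the background, which keeps users on the latest hosted model without blocking the UI.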
Running AutoML models in the cloud: this page only covers generating mobile-optimized models intended to run on the device. For models with many thousands of labels, or when significantly higher accuracy is required, you might instead want to run a server-optimized model in the cloud by calling the Cloud AutoML Vision APIs directly (see https://cloud.google.com/vision/automl/docs/predict). Note that, unlike running AutoML Vision Edge models on device, running a cloud-based AutoML model is billed per invocation.

Implementation path
Assemble training data
Put together a dataset of examples of each label you want your model to recognize.
Train a new model
In the Google Cloud console, import your training data and use it to train a new model.
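As one hedged illustration of the import step: Cloud AutoML Vision datasets are commonly described by a CSV file in a Cloud Storage bucket, where each row pairs an image URI with its label and, optionally, a TRAIN/VALIDATION/TEST split. The bucket and file names below are hypothetical, and the exact format accepted is the one defined by the import flow in the Google Cloud console.

```csv
TRAIN,gs://my-flowers-bucket/images/daisy_001.jpg,daisy
TRAIN,gs://my-flowers-bucket/images/rose_002.jpg,rose
VALIDATION,gs://my-flowers-bucket/images/rose_014.jpg,rose
TEST,gs://my-flowers-bucket/images/tulip_203.jpg,tulip
```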
Use the model in your app
Bundle the model with your app or download it from Firebase when it's needed. Then use the model to label images on the device.
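For instance, on Android a bundled model can be used with ML Kit's custom image labeling API roughly as sketched below. This assumes the trained TensorFlow Lite model was exported with metadata and added to the app as assets/model.tflite; the asset path, confidence threshold, and log tag are illustrative, not prescriptive.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

fun labelImage(bitmap: Bitmap) {
    // Model bundled with the app under assets/model.tflite (illustrative path).
    val localModel = LocalModel.Builder()
        .setAssetFilePath("model.tflite")
        .build()

    val options = CustomImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.7f)  // drop low-confidence labels
        .setMaxResultCount(3)
        .build()

    val labeler = ImageLabeling.getClient(options)
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)

    labeler.process(image)
        .addOnSuccessListener { labels ->
            for (label in labels) {
                Log.d("AutoML", "${label.text}: ${label.confidence}")
            }
        }
        .addOnFailureListener { e ->
            Log.e("AutoML", "Labeling failed", e)
        }
}
```

A model downloaded from Firebase at run time can be wired in the same way by building the LocalModel from the downloaded file's absolute path instead of an asset path.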
Pricing and limits

To train custom models with AutoML Vision Edge, you must be on the pay-as-you-go (Blaze) plan.
[null,null,["Última actualización: 2025-08-04 (UTC)"],[],[],null,["AutoML Vision Edge \nplat_ios plat_android \nCreate custom image classification models from your own training data with AutoML Vision Edge.\n\nIf you want to recognize contents of an image, one option is to use ML Kit's\n[on-device image labeling API](https://developers.google.com/ml-kit/vision/image-labeling)\nor [on-device object detection API](https://developers.google.com/ml-kit/vision/object-detection).\nThe models used by these APIs are built for general-purpose use, and are trained\nto recognize the most commonly-found concepts in photos.\n\nIf you need a more specialized image labeling or object detection model, covering a narrower domain\nof concepts in more detail---for example, a model to distinguish between\nspecies of flowers or types of food---you can use Firebase ML and AutoML\nVision Edge to train a model with your own images and categories. The custom\nmodel is trained in Google Cloud, and once the model is ready, it's used fully\non the device.\n| Firebase ML's AutoML Vision Edge features are deprecated. Consider using [Vertex AI](https://cloud.google.com/vertex-ai/docs/beginner/beginners-guide) to automatically train ML models, which you can either [export as TensorFlow\n| Lite models](https://cloud.google.com/vertex-ai/docs/export/export-edge-model) for on-device use or [deploy for cloud-based\n| inference](https://cloud.google.com/vertex-ai/docs/predictions/overview).\n\n[Get started with image labeling](/docs/ml/ios/train-image-labeler)\n[Get started with object detection](/docs/ml/android/train-object-detector)\n\nKey capabilities\n\n|---------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| Train models based on your data | Automatically train custom image labeling and object detection models to recognize the labels you care about, using your training data. |\n| Built-in model hosting | Host your models with Firebase, and load them at run time. By hosting the model on Firebase, you can make sure users have the latest model without releasing a new app version. And, of course, you can also bundle the model with your app, so it's immediately available on install. |\n\n| **Running AutoML models in the cloud**\n|\n| These pages only discuss generating mobile-optimized models intended to run\n| on the device. However, for models with many thousands of labels or when\n| significantly higher accuracy is required, you might want to run a\n| server-optimized model in the cloud instead, which you can do by calling the\n| Cloud AutoML Vision APIs directly. See\n| [Making an\n| online prediction](https://cloud.google.com/vision/automl/docs/predict).\n|\n| Note that unlike running AutoML Vision Edge models on device, running a\n| cloud-based AutoML model is billed per invocation.\n\nImplementation path\n\n|---|---------------------------|----------------------------------------------------------------------------------------------------------------------------------|\n| | Assemble training data | Put together a dataset of examples of each label you want your model to recognize. |\n| | Train a new model | In the Google Cloud console, import your training data and use it to train a new model. 
|\n| | Use the model in your app | Bundle the model with your app or download it from Firebase when it's needed. Then, use the model to label images on the device. |\n\nPricing \\& Limits\n\nTo train custom models with AutoML Vision Edge, you must be on the pay-as-you-go\n(Blaze) plan.\n| **Important:** You can no longer train models with AutoML Vision Edge while on the Spark plan. If you previously trained models while on the Spark plan, your training data and trained models are still accessible from the Firebase console in read-only mode. If you want to keep this data download it before March 1, 2021.\n\n| Datasets | Billed according to [Cloud Storage rates](https://cloud.google.com/storage/pricing) |\n| Images per dataset | 1,000,000 |\n| Training hours | No per-model limit |\n|--------------------|-------------------------------------------------------------------------------------|\n\nNext steps\n\n- Learn how to [train an image labeling model](/docs/ml/train-image-labeler).\n- Learn how to [train an object detection model](/docs/ml/train-object-detector)."]]