If you need a more specialized image labeling or object detection model, covering a narrower domain of concepts in more detail (for example, a model to distinguish between species of flowers or types of food), you can use Firebase ML and AutoML Vision Edge to train a model with your own images and categories. The custom model is trained in Google Cloud, and once the model is ready, it's used fully on the device.
Key capabilities

Train models based on your data
Automatically train custom image labeling and object detection models to recognize the labels you care about, using your own training data.
Built-in model hosting
Host your models with Firebase and load them at run time. By hosting the model on Firebase, you can make sure your users have the latest model version without releasing a new app version.

And, of course, you can also bundle the model with your app, so it's immediately available on install.
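On Android, fetching a hosted model at run time can be done with the Firebase ML model downloader. A minimal Kotlin sketch follows; the model name `flower_classifier` is a placeholder for whatever name you published the model under in the Firebase console.

```kotlin
import com.google.firebase.ml.modeldownloader.CustomModelDownloadConditions
import com.google.firebase.ml.modeldownloader.DownloadType
import com.google.firebase.ml.modeldownloader.FirebaseModelDownloader

// Download over Wi-Fi only; other conditions (e.g. requireCharging) are available.
val conditions = CustomModelDownloadConditions.Builder()
    .requireWifi()
    .build()

FirebaseModelDownloader.getInstance()
    // "flower_classifier" is a placeholder for your hosted model's name.
    .getModel("flower_classifier", DownloadType.LOCAL_MODEL_UPDATE_IN_BACKGROUND, conditions)
    .addOnSuccessListener { model ->
        // The downloaded TensorFlow Lite file; hand it to your interpreter or labeler.
        val modelFile = model.file
    }
    .addOnFailureListener { e ->
        // No cached copy and the download failed; fall back to a bundled model, if any.
    }
```

With `LOCAL_MODEL_UPDATE_IN_BACKGROUND`, a previously cached copy is returned immediately while any newer version downloads in the background, which is what makes the "latest model without a new app release" flow work.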
Implementation path

Assemble training data
Put together a dataset of examples of each label you want your model to recognize.
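One common way to describe a classification dataset for import is a CSV where each row holds an image's `gs://` URI and its label. As a sketch, assuming your images sit locally in one folder per label and are mirrored to a Cloud Storage bucket with the same layout (both the layout and the bucket name are illustrative assumptions):

```kotlin
import java.io.File

// Sketch: emit an import CSV from a local mirror of the bucket.
// Assumes a layout of trainingData/<label>/<image> uploaded to gs://<bucket>/trainingData/.
fun buildImportCsv(localRoot: File, bucket: String, out: File) {
    val imageExtensions = setOf("jpg", "jpeg", "png")
    out.printWriter().use { csv ->
        localRoot.listFiles { f -> f.isDirectory }.orEmpty().forEach { labelDir ->
            labelDir.listFiles { f -> f.extension.lowercase() in imageExtensions }
                .orEmpty()
                .forEach { img ->
                    // One row per example: image URI in Cloud Storage, then its label.
                    csv.println(
                        "gs://$bucket/${localRoot.name}/${labelDir.name}/${img.name},${labelDir.name}"
                    )
                }
        }
    }
}
```

The resulting CSV is then uploaded to the bucket and pointed at during dataset import in the console.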
Train a new model
In the Google Cloud console, import your training data and use it to train a new model.
Use the model in your app
Bundle the model with your app, or download it from Firebase when it's needed. Then, use the model to label images on the device.
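A bundled model can be run through ML Kit's custom image labeling API. A minimal Kotlin sketch, assuming the trained model ships in the app's `assets/` folder as `model.tflite` (the path is an assumption) and that `bitmap` holds the image to label:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

fun labelImage(bitmap: Bitmap) {
    // Model file bundled in the app's assets/ folder (path is an assumption).
    val localModel = LocalModel.Builder()
        .setAssetFilePath("model.tflite")
        .build()

    val options = CustomImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.5f) // drop low-confidence labels
        .setMaxResultCount(5)
        .build()

    ImageLabeling.getClient(options)
        .process(InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0))
        .addOnSuccessListener { labels ->
            for (label in labels) {
                Log.d("Labeler", "${label.text}: ${label.confidence}")
            }
        }
        .addOnFailureListener { e -> Log.e("Labeler", "Labeling failed", e) }
}
```

To use a Firebase-hosted model instead of a bundled one, the same options builder accepts a remote model source in place of the `LocalModel`.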
Pricing & Limits

To train custom models with AutoML Vision Edge, you must be on the pay-as-you-go (Blaze) plan.
AutoML Vision Edge
plat_ios plat_android

Create custom image classification models from your own training data with AutoML Vision Edge.

If you want to recognize the contents of an image, one option is to use ML Kit's [on-device image labeling API](https://developers.google.com/ml-kit/vision/image-labeling) or [on-device object detection API](https://developers.google.com/ml-kit/vision/object-detection). The models used by these APIs are built for general-purpose use, and are trained to recognize the most commonly found concepts in photos.

Note: Firebase ML's AutoML Vision Edge features are deprecated. Consider using [Vertex AI](https://cloud.google.com/vertex-ai/docs/beginner/beginners-guide) to automatically train ML models, which you can either [export as TensorFlow Lite models](https://cloud.google.com/vertex-ai/docs/export/export-edge-model) for on-device use or [deploy for cloud-based inference](https://cloud.google.com/vertex-ai/docs/predictions/overview).

[Get started with image labeling](/docs/ml/ios/train-image-labeler)
[Get started with object detection](/docs/ml/android/train-object-detector)

Running AutoML models in the cloud

These pages only discuss generating mobile-optimized models intended to run on the device. However, for models with many thousands of labels, or when significantly higher accuracy is required, you might want to run a server-optimized model in the cloud instead, which you can do by calling the Cloud AutoML Vision APIs directly. See [Making an online prediction](https://cloud.google.com/vision/automl/docs/predict). Note that unlike running AutoML Vision Edge models on device, running a cloud-based AutoML model is billed per invocation.

Important: You can no longer train models with AutoML Vision Edge while on the Spark plan. If you previously trained models while on the Spark plan, your training data and trained models are still accessible from the Firebase console in read-only mode. If you want to keep this data, download it before March 1, 2021.

| Datasets | Billed according to [Cloud Storage rates](https://cloud.google.com/storage/pricing) |
| Images per dataset | 1,000,000 |
| Training hours | No per-model limit |

Next steps

- Learn how to [train an image labeling model](/docs/ml/train-image-labeler).
- Learn how to [train an object detection model](/docs/ml/train-object-detector).

Last updated 2025-07-25 UTC.