Etichetta le immagini con Firebase ML sulle piattaforme Apple
You can use Firebase ML to label objects recognized in an image. See the overview for information about this API's features.
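Before you begin

If you have not already added Firebase to your app, do so by following the steps in the getting started guide.

1. Use Swift Package Manager to install and manage Firebase dependencies:
   - In Xcode, with your app project open, navigate to File > Add Packages.
   - When prompted, add the Firebase Apple platforms SDK repository: https://github.com/firebase/firebase-ios-sdk.git
   - Choose the Firebase ML library.
   - Add the -ObjC flag to the Other Linker Flags section of your target's build settings.
   - When finished, Xcode will automatically begin resolving and downloading your dependencies in the background.
2. Next, perform some in-app setup. In your app, import Firebase:

Swift

```swift
import FirebaseMLModelDownloader
```

Objective-C

```objective-c
@import FirebaseMLModelDownloader;
```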
If you haven't already enabled Cloud-based APIs for your project, do so now:

1. Open the Firebase ML APIs page in the Firebase console.
2. If you haven't already upgraded your project to the pay-as-you-go Blaze pricing plan, click Upgrade to do so. (You'll be prompted to upgrade only if your project isn't on the Blaze pricing plan.) Only projects on the Blaze pricing plan can use Cloud-based APIs.
3. If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.
Now you are ready to label images.
1. Prepare the input image
Create a VisionImage object using a UIImage or a CMSampleBufferRef.

To use a UIImage:

1. If necessary, rotate the image so that its imageOrientation property is .up.
2. Create a VisionImage object using the correctly-rotated UIImage. Do not specify any rotation metadata; the default value, .topLeft, must be used.
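Swift

```swift
let image = VisionImage(image: uiImage)
```

Objective-C

```objective-c
FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
```

To use a CMSampleBufferRef:

1. Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer.

To get the image orientation:

Swift

```swift
func imageOrientation(
    deviceOrientation: UIDeviceOrientation,
    cameraPosition: AVCaptureDevice.Position
) -> VisionDetectorImageOrientation {
    switch deviceOrientation {
    case .portrait:
        return cameraPosition == .front ? .leftTop : .rightTop
    case .landscapeLeft:
        return cameraPosition == .front ? .bottomLeft : .topLeft
    case .portraitUpsideDown:
        return cameraPosition == .front ? .rightBottom : .leftBottom
    case .landscapeRight:
        return cameraPosition == .front ? .topRight : .bottomRight
    case .faceDown, .faceUp, .unknown:
        return .leftTop
    }
}
```

Objective-C

```objective-c
- (FIRVisionDetectorImageOrientation)
    imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                           cameraPosition:(AVCaptureDevicePosition)cameraPosition {
  switch (deviceOrientation) {
    case UIDeviceOrientationPortrait:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationLeftTop;
      } else {
        return FIRVisionDetectorImageOrientationRightTop;
      }
    case UIDeviceOrientationLandscapeLeft:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationBottomLeft;
      } else {
        return FIRVisionDetectorImageOrientationTopLeft;
      }
    case UIDeviceOrientationPortraitUpsideDown:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationRightBottom;
      } else {
        return FIRVisionDetectorImageOrientationLeftBottom;
      }
    case UIDeviceOrientationLandscapeRight:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationTopRight;
      } else {
        return FIRVisionDetectorImageOrientationBottomRight;
      }
    default:
      return FIRVisionDetectorImageOrientationTopLeft;
  }
}
```

Then, create the metadata object:

Swift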
```swift
let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
let metadata = VisionImageMetadata()
metadata.orientation = imageOrientation(
    deviceOrientation: UIDevice.current.orientation,
    cameraPosition: cameraPosition
)
```
Objective-C
```objective-c
FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
AVCaptureDevicePosition cameraPosition =
    AVCaptureDevicePositionBack;  // Set to the capture device you used.
metadata.orientation =
    [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                 cameraPosition:cameraPosition];
```
2. Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata:
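Swift

```swift
let image = VisionImage(buffer: sampleBuffer)
image.metadata = metadata
```

Objective-C

```objective-c
FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
image.metadata = metadata;
```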
2. Configure and run the image labeler
To label objects in an image, pass the VisionImage object to the VisionImageLabeler's processImage() method.

First, get an instance of VisionImageLabeler:
Swift
```swift
let labeler = Vision.vision().cloudImageLabeler()

// Or, to set the minimum confidence required:
// let options = VisionCloudImageLabelerOptions()
// options.confidenceThreshold = 0.7
// let labeler = Vision.vision().cloudImageLabeler(options: options)
```
Objective-C
```objective-c
FIRVisionImageLabeler *labeler = [[FIRVision vision] cloudImageLabeler];

// Or, to set the minimum confidence required:
// FIRVisionCloudImageLabelerOptions *options =
//     [[FIRVisionCloudImageLabelerOptions alloc] init];
// options.confidenceThreshold = 0.7;
// FIRVisionImageLabeler *labeler =
//     [[FIRVision vision] cloudImageLabelerWithOptions:options];
```
Then, pass the image to the processImage() method:
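Swift

```swift
labeler.process(image) { labels, error in
    guard error == nil, let labels = labels else { return }

    // Task succeeded.
    // ...
}
```

Objective-C

```objective-c
[labeler processImage:image
           completion:^(NSArray<FIRVisionImageLabel *> *_Nullable labels,
                        NSError *_Nullable error) {
  if (error != nil) { return; }

  // Task succeeded.
  // ...
}];
```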
3. Get information about labeled objects

If image labeling succeeds, an array of VisionImageLabel objects is passed to the completion handler. From each object, you can get information about a feature recognized in the image.
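For example:

Swift

```swift
for label in labels {
    let labelText = label.text
    let entityId = label.entityID
    let confidence = label.confidence
}
```

Objective-C

```objective-c
for (FIRVisionImageLabel *label in labels) {
  NSString *labelText = label.text;
  NSString *entityId = label.entityID;
  NSNumber *confidence = label.confidence;
}
```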
[null,null,["Ultimo aggiornamento 2025-08-29 UTC."],[],[],null,["| This page describes an old version of labeling objects recognized in an image using the\n| deprecated Firebase ML Vision SDK. As an alternative, you may\n| [call\n| Cloud Vision APIs using Firebase Auth and Callable Functions](/docs/ml/ios/label-images) to allow only users logged\n| into your app to access the API.\n\nYou can use Firebase ML to label objects recognized in an image. See the\n[overview](/docs/ml/label-images) for information about this API's\nfeatures.\n| Use of the Cloud Vision APIs is subject to the [Google Cloud Platform License\n| Agreement](https://cloud.google.com/terms/) and [Service\n| Specific Terms](https://cloud.google.com/terms/service-terms), and billed accordingly. For billing information, see the [Pricing](https://cloud.google.com/vision/pricing) page.\n| **Looking for on-device image labeling?** Try the [standalone ML Kit library](https://developers.google.com/ml-kit/vision/image-labeling).\n\n\u003cbr /\u003e\n\nBefore you begin\n\nIf you have not already added Firebase to your app, do so by following the steps in the [getting started guide](/docs/ios/setup).\n1. Use Swift Package Manager to install and manage Firebase dependencies.\n| Visit [our installation guide](/docs/ios/installation-methods) to learn about the different ways you can add Firebase SDKs to your Apple project, including importing frameworks directly and using CocoaPods.\n1. In Xcode, with your app project open, navigate to **File \\\u003e Add Packages**.\n2. When prompted, add the Firebase Apple platforms SDK repository: \n\n```text\n https://github.com/firebase/firebase-ios-sdk.git\n```\n| **Note:** New projects should use the default (latest) SDK version, but you can choose an older version if needed.\n3. Choose the Firebase ML library.\n4. Add the `-ObjC` flag to the *Other Linker Flags* section of your target's build settings.\n5. When finished, Xcode will automatically begin resolving and downloading your dependencies in the background.\n2. Next, perform some in-app setup:\n1. In your app, import Firebase:\n\n Swift \n\n ```swift\n import FirebaseMLModelDownloader\n ```\n\n Objective-C \n\n ```objective-c\n @import FirebaseMLModelDownloader;\n ```\n3. If you haven't already enabled Cloud-based APIs for your project, do so\n now:\n\n 1. Open the [Firebase ML\n APIs page](//console.firebase.google.com/project/_/ml/apis) in the Firebase console.\n 2. If you haven't already upgraded your project to the\n [pay-as-you-go Blaze pricing plan](/pricing), click **Upgrade** to do so. (You'll be\n prompted to upgrade only if your project isn't on the\n Blaze pricing plan.)\n\n Only projects on the Blaze pricing plan can use\n Cloud-based APIs.\n 3. If Cloud-based APIs aren't already enabled, click **Enable Cloud-based APIs**.\n\n | Before you deploy to production an app that uses a Cloud API, you should take some additional steps to [prevent and mitigate the\n | effect of unauthorized API access](./secure-api-key).\n\nNow you are ready to label images.\n\n1. Prepare the input image\n\nCreate a [`VisionImage`](/docs/reference/swift/firebasemlvision/api/reference/Classes/VisionImage) object using a `UIImage` or a\n`CMSampleBufferRef`.\n\nTo use a `UIImage`:\n\n1. If necessary, rotate the image so that its `imageOrientation` property is `.up`.\n2. Create a `VisionImage` object using the correctly-rotated `UIImage`. Do not specify any rotation metadata---the default value, `.topLeft`, must be used. 
\n\n Swift \n\n ```swift\n let image = VisionImage(image: uiImage)\n ```\n\n Objective-C \n\n ```objective-c\n FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];\n ```\n\nTo use a `CMSampleBufferRef`:\n\n1. Create a [`VisionImageMetadata`](/docs/reference/swift/firebasemlvision/api/reference/Classes/VisionImageMetadata) object that specifies the\n orientation of the image data contained in the\n `CMSampleBufferRef` buffer.\n\n To get the image orientation: \n\n Swift \n\n ```swift\n func imageOrientation(\n deviceOrientation: UIDeviceOrientation,\n cameraPosition: AVCaptureDevice.Position\n ) -\u003e VisionDetectorImageOrientation {\n switch deviceOrientation {\n case .portrait:\n return cameraPosition == .front ? .leftTop : .rightTop\n case .landscapeLeft:\n return cameraPosition == .front ? .bottomLeft : .topLeft\n case .portraitUpsideDown:\n return cameraPosition == .front ? .rightBottom : .leftBottom\n case .landscapeRight:\n return cameraPosition == .front ? .topRight : .bottomRight\n case .faceDown, .faceUp, .unknown:\n return .leftTop\n }\n }\n ```\n\n Objective-C \n\n ```objective-c\n - (FIRVisionDetectorImageOrientation)\n imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation\n cameraPosition:(AVCaptureDevicePosition)cameraPosition {\n switch (deviceOrientation) {\n case UIDeviceOrientationPortrait:\n if (cameraPosition == AVCaptureDevicePositionFront) {\n return FIRVisionDetectorImageOrientationLeftTop;\n } else {\n return FIRVisionDetectorImageOrientationRightTop;\n }\n case UIDeviceOrientationLandscapeLeft:\n if (cameraPosition == AVCaptureDevicePositionFront) {\n return FIRVisionDetectorImageOrientationBottomLeft;\n } else {\n return FIRVisionDetectorImageOrientationTopLeft;\n }\n case UIDeviceOrientationPortraitUpsideDown:\n if (cameraPosition == AVCaptureDevicePositionFront) {\n return FIRVisionDetectorImageOrientationRightBottom;\n } else {\n return FIRVisionDetectorImageOrientationLeftBottom;\n }\n case UIDeviceOrientationLandscapeRight:\n if (cameraPosition == AVCaptureDevicePositionFront) {\n return FIRVisionDetectorImageOrientationTopRight;\n } else {\n return FIRVisionDetectorImageOrientationBottomRight;\n }\n default:\n return FIRVisionDetectorImageOrientationTopLeft;\n }\n }\n ```\n\n Then, create the metadata object: \n\n Swift \n\n ```swift\n let cameraPosition = AVCaptureDevice.Position.back // Set to the capture device you used.\n let metadata = VisionImageMetadata()\n metadata.orientation = imageOrientation(\n deviceOrientation: UIDevice.current.orientation,\n cameraPosition: cameraPosition\n )\n ```\n\n Objective-C \n\n ```objective-c\n FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];\n AVCaptureDevicePosition cameraPosition =\n AVCaptureDevicePositionBack; // Set to the capture device you used.\n metadata.orientation =\n [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation\n cameraPosition:cameraPosition];\n ```\n2. Create a `VisionImage` object using the `CMSampleBufferRef` object and the rotation metadata: \n\n Swift \n\n ```swift\n let image = VisionImage(buffer: sampleBuffer)\n image.metadata = metadata\n ```\n\n Objective-C \n\n ```objective-c\n FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];\n image.metadata = metadata;\n ```\n\n2. Configure and run the image labeler To label objects in an image, pass the `VisionImage` object to the `VisionImageLabeler`'s `processImage()` method.\n\n\u003cbr /\u003e\n\n1. 
First, get an instance of `VisionImageLabeler`:\n\n Swift \n\n let labeler = Vision.vision().cloudImageLabeler()\n\n // Or, to set the minimum confidence required:\n // let options = VisionCloudImageLabelerOptions()\n // options.confidenceThreshold = 0.7\n // let labeler = Vision.vision().cloudImageLabeler(options: options)\n\n Objective-C \n\n FIRVisionImageLabeler *labeler = [[FIRVision vision] cloudImageLabeler];\n\n // Or, to set the minimum confidence required:\n // FIRVisionCloudImageLabelerOptions *options =\n // [[FIRVisionCloudImageLabelerOptions alloc] init];\n // options.confidenceThreshold = 0.7;\n // FIRVisionImageLabeler *labeler =\n // [[FIRVision vision] cloudImageLabelerWithOptions:options];\n\n2. Then, pass the image to the `processImage()` method:\n\n Swift \n\n labeler.process(image) { labels, error in\n guard error == nil, let labels = labels else { return }\n\n // Task succeeded.\n // ...\n }\n\n Objective-C \n\n [labeler processImage:image\n completion:^(NSArray\u003cFIRVisionImageLabel *\u003e *_Nullable labels,\n NSError *_Nullable error) {\n if (error != nil) { return; }\n\n // Task succeeded.\n // ...\n }];\n\n3. Get information about labeled objects If image labeling succeeds, an array of `VisionImageLabel` objects will be passed to the completion handler. From each object, you can get information about a feature recognized in the image.\n\n\u003cbr /\u003e\n\nFor example: \n\nSwift \n\n for label in labels {\n let labelText = label.text\n let entityId = label.entityID\n let confidence = label.confidence\n }\n\nObjective-C \n\n for (FIRVisionImageLabel *label in labels) {\n NSString *labelText = label.text;\n NSString *entityId = label.entityID;\n NSNumber *confidence = label.confidence;\n }\n\nNext steps\n\n- Before you deploy to production an app that uses a Cloud API, you should take some additional steps to [prevent and mitigate the\n effect of unauthorized API access](./secure-api-key)."]]