If you have not already upgraded your project to the Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project isn't on the Blaze plan.)
Only Blaze-level projects can use Cloud-based APIs.
If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.
Configure the landmark detector
By default, the Cloud detector uses the stable version of the model and returns up to 10 results. If you want to change either of these settings, specify them with a VisionCloudDetectorOptions object as in the following example:
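Swift
let options = VisionCloudDetectorOptions()
options.modelType = .latest
options.maxResults = 20

Objective-C
FIRVisionCloudDetectorOptions *options =
    [[FIRVisionCloudDetectorOptions alloc] init];
options.modelType = FIRVisionCloudModelTypeLatest;
options.maxResults = 20;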
In the next step, pass the VisionCloudDetectorOptions object when you create the Cloud detector object.
Run the landmark detector
To recognize landmarks in an image, pass the image as a UIImage or a CMSampleBufferRef to the VisionCloudLandmarkDetector's detect(in:) method:
Get an instance of VisionCloudLandmarkDetector:
Swift
lazy var vision = Vision.vision()

let cloudDetector = vision.cloudLandmarkDetector(options: options)
// Or, to use the default settings:
// let cloudDetector = vision.cloudLandmarkDetector()
Objective-C
FIRVision *vision = [FIRVision vision];
FIRVisionCloudLandmarkDetector *landmarkDetector = [vision cloudLandmarkDetector];
// Or, to change the default settings:
// FIRVisionCloudLandmarkDetector *landmarkDetector =
//     [vision cloudLandmarkDetectorWithOptions:options];
Create a VisionImage object using a UIImage or a CMSampleBufferRef.
To use a UIImage:
If necessary, rotate the image so that its imageOrientation property is .up.
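One way to do this is to redraw the image into a new bitmap; the rendered copy always has an imageOrientation of .up. This is a minimal sketch, not part of the original guide, assuming UIKit's UIGraphicsImageRenderer; the helper name is illustrative:

Swift
import UIKit

// Sketch: redraw the image so the returned copy has imageOrientation == .up.
// The function name is illustrative and not part of the Firebase SDK.
func orientationNormalizedImage(_ image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }
    let format = UIGraphicsImageRendererFormat.default()
    format.scale = image.scale
    let renderer = UIGraphicsImageRenderer(size: image.size, format: format)
    return renderer.image { _ in
        // draw(in:) applies the original orientation, so the output is upright.
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}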
Create a VisionImage object using the correctly-rotated UIImage. Do not specify any rotation metadata; the default value, .topLeft, must be used.
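Swift
let image = VisionImage(image: uiImage)

Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];

To use a CMSampleBufferRef:
Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer.
To get the image orientation:

Swift
func imageOrientation(
  deviceOrientation: UIDeviceOrientation,
  cameraPosition: AVCaptureDevice.Position
) -> VisionDetectorImageOrientation {
  switch deviceOrientation {
  case .portrait:
    return cameraPosition == .front ? .leftTop : .rightTop
  case .landscapeLeft:
    return cameraPosition == .front ? .bottomLeft : .topLeft
  case .portraitUpsideDown:
    return cameraPosition == .front ? .rightBottom : .leftBottom
  case .landscapeRight:
    return cameraPosition == .front ? .topRight : .bottomRight
  case .faceDown, .faceUp, .unknown:
    return .leftTop
  }
}

Objective-C
- (FIRVisionDetectorImageOrientation)
    imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                           cameraPosition:(AVCaptureDevicePosition)cameraPosition {
  switch (deviceOrientation) {
    case UIDeviceOrientationPortrait:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationLeftTop;
      } else {
        return FIRVisionDetectorImageOrientationRightTop;
      }
    case UIDeviceOrientationLandscapeLeft:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationBottomLeft;
      } else {
        return FIRVisionDetectorImageOrientationTopLeft;
      }
    case UIDeviceOrientationPortraitUpsideDown:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationRightBottom;
      } else {
        return FIRVisionDetectorImageOrientationLeftBottom;
      }
    case UIDeviceOrientationLandscapeRight:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationTopRight;
      } else {
        return FIRVisionDetectorImageOrientationBottomRight;
      }
    default:
      return FIRVisionDetectorImageOrientationTopLeft;
  }
}

Then, create the metadata object:

Swift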
let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
let metadata = VisionImageMetadata()
metadata.orientation = imageOrientation(
  deviceOrientation: UIDevice.current.orientation,
  cameraPosition: cameraPosition
)
Objective-C
FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
AVCaptureDevicePosition cameraPosition =
    AVCaptureDevicePositionBack;  // Set to the capture device you used.
metadata.orientation =
    [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                 cameraPosition:cameraPosition];
Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata:
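Swift
let image = VisionImage(buffer: sampleBuffer)
image.metadata = metadata

Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
image.metadata = metadata;

Then, pass the image to the detect(in:) method:

Swift
cloudDetector.detect(in: image) { landmarks, error in
  guard error == nil, let landmarks = landmarks, !landmarks.isEmpty else {
    // ...
    return
  }

  // Recognized landmarks
  // ...
}

Objective-C
[landmarkDetector detectInImage:image
                     completion:^(NSArray<FIRVisionCloudLandmark *> *landmarks,
                                  NSError *error) {
  if (error != nil) {
    return;
  } else if (landmarks != nil) {
    // Got landmarks
  }
}];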
Get information about the recognized landmarks
If landmark recognition succeeds, an array of VisionCloudLandmark objects will be passed to the completion handler. From each object, you can get information about a landmark recognized in the image.
For example:
Swift
for landmark in landmarks {
  let landmarkDesc = landmark.landmark
  let boundingPoly = landmark.frame
  let entityId = landmark.entityId

  // A landmark can have multiple locations: for example, the location the image
  // was taken, and the location of the landmark depicted.
  for location in landmark.locations {
    let latitude = location.latitude
    let longitude = location.longitude
  }

  let confidence = landmark.confidence
}
Objective-C
for (FIRVisionCloudLandmark *landmark in landmarks) {
  NSString *landmarkDesc = landmark.landmark;
  CGRect frame = landmark.frame;
  NSString *entityId = landmark.entityId;

  // A landmark can have multiple locations: for example, the location the image
  // was taken, and the location of the landmark depicted.
  for (FIRVisionLatitudeLongitude *location in landmark.locations) {
    double latitude = [location.latitude doubleValue];
    double longitude = [location.longitude doubleValue];
  }

  float confidence = [landmark.confidence floatValue];
}
Next steps
Before you deploy to production an app that uses a Cloud API, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.