Recognize Landmarks with ML Kit on iOS

You can use ML Kit to recognize well-known landmarks in an image.

Before you begin

  1. If you have not already added Firebase to your app, do so by following the steps in the getting started guide.
  2. Include the ML Kit libraries in your Podfile:
    pod 'Firebase/MLVision', '6.25.0'
    
    After you install or update your project's Pods, be sure to open your Xcode project using its .xcworkspace.
  3. In your app, import Firebase:

    Swift

    import Firebase

    Objective-C

    @import Firebase;
  4. If you have not already enabled Cloud-based APIs for your project, do so now:

    1. Open the ML Kit APIs page of the Firebase console.
    2. If you have not already upgraded your project to the Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project is not on the Blaze plan.)

      Only Blaze-level projects can use Cloud-based APIs.

    3. If Cloud-based APIs are not already enabled, click Enable Cloud-based APIs.

Configure the landmark detector

By default, the Cloud detector uses the stable version of the model and returns up to 10 results. If you want to change either of these settings, specify them with a VisionCloudDetectorOptions object, as in the following example:

Swift

let options = VisionCloudDetectorOptions()
options.modelType = .latest
options.maxResults = 20

Objective-C

FIRVisionCloudDetectorOptions *options =
    [[FIRVisionCloudDetectorOptions alloc] init];
options.modelType = FIRVisionCloudModelTypeLatest;
options.maxResults = 20;

In the next step, pass the VisionCloudDetectorOptions object when you create the Cloud detector object.

Run the landmark detector

To recognize landmarks in an image, pass the image as a UIImage or a CMSampleBufferRef to the VisionCloudLandmarkDetector's detect(in:) method:

  1. Get an instance of VisionCloudLandmarkDetector:

    Swift

    lazy var vision = Vision.vision()
    
    let cloudDetector = vision.cloudLandmarkDetector(options: options)
    // Or, to use the default settings:
    // let cloudDetector = vision.cloudLandmarkDetector()
    

    Objective-C

    FIRVision *vision = [FIRVision vision];
    FIRVisionCloudLandmarkDetector *landmarkDetector = [vision cloudLandmarkDetector];
    // Or, to change the default settings:
    // FIRVisionCloudLandmarkDetector *landmarkDetector =
    //     [vision cloudLandmarkDetectorWithOptions:options];
    
  2. Create a VisionImage object using a UIImage or a CMSampleBufferRef.

    To use a UIImage:

    1. If necessary, rotate the image so that its imageOrientation property is .up (see the sketch after the code samples below).
    2. Create a VisionImage object using the correctly rotated UIImage. Do not specify any rotation metadata; the default value, .topLeft, must be used.

      Swift

      let image = VisionImage(image: uiImage)

      Objective-C

      FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
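
      If the source image might not already be upright, one way to normalize it is to redraw it into a new graphics context, as in the following sketch. The helper below is an illustration only and is not part of the ML Kit API; the name normalizedImage(_:) is an assumption.

      Swift

      import UIKit

      /// Returns a copy of the image whose imageOrientation is .up.
      /// (Hypothetical helper; not part of ML Kit.)
      func normalizedImage(_ image: UIImage) -> UIImage {
          // Nothing to do if the image is already upright.
          guard image.imageOrientation != .up else { return image }
          // Redrawing bakes the orientation into the pixel data and
          // resets imageOrientation to .up.
          UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
          defer { UIGraphicsEndImageContext() }
          image.draw(in: CGRect(origin: .zero, size: image.size))
          return UIGraphicsGetImageFromCurrentImageContext() ?? image
      }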

    To use a CMSampleBufferRef:

    1. Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer.

      To get the image orientation:

      Swift

      func imageOrientation(
          deviceOrientation: UIDeviceOrientation,
          cameraPosition: AVCaptureDevice.Position
          ) -> VisionDetectorImageOrientation {
          switch deviceOrientation {
          case .portrait:
              return cameraPosition == .front ? .leftTop : .rightTop
          case .landscapeLeft:
              return cameraPosition == .front ? .bottomLeft : .topLeft
          case .portraitUpsideDown:
              return cameraPosition == .front ? .rightBottom : .leftBottom
          case .landscapeRight:
              return cameraPosition == .front ? .topRight : .bottomRight
          case .faceDown, .faceUp, .unknown:
              return .leftTop
          }
      }

      Objective-C

      - (FIRVisionDetectorImageOrientation)
          imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                                 cameraPosition:(AVCaptureDevicePosition)cameraPosition {
        switch (deviceOrientation) {
          case UIDeviceOrientationPortrait:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationLeftTop;
            } else {
              return FIRVisionDetectorImageOrientationRightTop;
            }
          case UIDeviceOrientationLandscapeLeft:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationBottomLeft;
            } else {
              return FIRVisionDetectorImageOrientationTopLeft;
            }
          case UIDeviceOrientationPortraitUpsideDown:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationRightBottom;
            } else {
              return FIRVisionDetectorImageOrientationLeftBottom;
            }
          case UIDeviceOrientationLandscapeRight:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationTopRight;
            } else {
              return FIRVisionDetectorImageOrientationBottomRight;
            }
          default:
            return FIRVisionDetectorImageOrientationTopLeft;
        }
      }

      Then, create the metadata object:

      Swift

      let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
      let metadata = VisionImageMetadata()
      metadata.orientation = imageOrientation(
          deviceOrientation: UIDevice.current.orientation,
          cameraPosition: cameraPosition
      )

      Objective-C

      FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
      AVCaptureDevicePosition cameraPosition =
          AVCaptureDevicePositionBack;  // Set to the capture device you used.
      metadata.orientation =
          [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                       cameraPosition:cameraPosition];
    2. Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata:

      Swift

      let image = VisionImage(buffer: sampleBuffer)
      image.metadata = metadata

      Objective-C

      FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
      image.metadata = metadata;
  3. Then, pass the image to the detect(in:) method:

    Swift

    cloudDetector.detect(in: image) { landmarks, error in
      guard error == nil, let landmarks = landmarks, !landmarks.isEmpty else {
        // ...
        return
      }
    
      // Recognized landmarks
      // ...
    }
    

    Objective-C

    [landmarkDetector detectInImage:image
                         completion:^(NSArray<FIRVisionCloudLandmark *> *landmarks,
                                      NSError *error) {
      if (error != nil) {
        return;
      } else if (landmarks != nil) {
        // Got landmarks
      }
    }];
    

Get information about the recognized landmarks

If landmark recognition succeeds, an array of VisionCloudLandmark objects is passed to the completion handler. From each object, you can get information about a landmark recognized in the image.

For example:

Swift

for landmark in landmarks {
  let landmarkDesc = landmark.landmark
  let boundingPoly = landmark.frame
  let entityId = landmark.entityId

  // A landmark can have multiple locations: for example, the location the image
  // was taken, and the location of the landmark depicted.
  for location in landmark.locations {
    let latitude = location.latitude
    let longitude = location.longitude
  }

  let confidence = landmark.confidence
}

Objective-C

for (FIRVisionCloudLandmark *landmark in landmarks) {
   NSString *landmarkDesc = landmark.landmark;
   CGRect frame = landmark.frame;
   NSString *entityId = landmark.entityId;

   // A landmark can have multiple locations: for example, the location the image
   // was taken, and the location of the landmark depicted.
   for (FIRVisionLatitudeLongitude *location in landmark.locations) {
     double latitude = [location.latitude doubleValue];
     double longitude = [location.longitude doubleValue];
   }

   float confidence = [landmark.confidence floatValue];
}
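
For instance, here is a minimal sketch that picks the most confident result and formats a one-line summary for display. The summary(for:) helper is hypothetical and not part of the ML Kit API; it only uses the landmark and confidence properties shown above:

Swift

func summary(for landmarks: [VisionCloudLandmark]) -> String? {
  // Pick the landmark with the highest confidence score.
  guard let best = landmarks.max(by: {
    ($0.confidence?.floatValue ?? 0) < ($1.confidence?.floatValue ?? 0)
  }) else { return nil }
  let name = best.landmark ?? "Unknown landmark"
  let confidence = best.confidence?.floatValue ?? 0
  return String(format: "%@ (%.0f%% confidence)", name, confidence * 100)
}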

Next steps