Recognize Landmarks with Firebase ML on iOS

You can use Firebase ML to recognize well-known landmarks in an image.

Before you begin

    If you have not already added Firebase to your app, do so by following the steps in the getting started guide.

    Use Swift Package Manager to install and manage Firebase dependencies. (If you declare dependencies in a Package.swift manifest instead, see the sketch after these steps.)

    1. In Xcode, with your app project open, navigate to File > Add Packages.
    2. When prompted, add the Firebase Apple platforms SDK repository:

         https://github.com/firebase/firebase-ios-sdk.git

    3. Choose the Firebase ML library.
    4. Add the -ObjC flag to the Other Linker Flags section of your target's build settings.
    5. When finished, Xcode will automatically begin resolving and downloading your dependencies in the background.
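
    If you declare dependencies in a Package.swift manifest rather than through the Xcode UI, the equivalent declaration might look like the following sketch. The package name, platform, and version requirement are illustrative assumptions; pin whatever your project actually needs.

    Swift

    // Package.swift -- a minimal sketch of pulling in the Firebase SDK via
    // Swift Package Manager. Names, platform, and versions are illustrative.
    import PackageDescription

    let package = Package(
        name: "MyApp",                      // assumed package name
        platforms: [.iOS(.v13)],            // assumed deployment target
        dependencies: [
            .package(url: "https://github.com/firebase/firebase-ios-sdk.git",
                     from: "10.0.0"),       // assumed version floor
        ],
        targets: [
            .target(
                name: "MyApp",
                dependencies: [
                    // Firebase ML product from the Firebase package.
                    .product(name: "FirebaseMLModelDownloader",
                             package: "firebase-ios-sdk"),
                ]
            ),
        ]
    )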

    Next, perform some in-app setup:

    1. In your app, import Firebase:

      Swift

      import FirebaseMLModelDownloader

      Objective-C

      @import FirebaseMLModelDownloader;
    2. If you have not already enabled Cloud-based APIs for your project, do so now:

    1. Open the Firebase ML APIs page of the Firebase console.
    2. If you have not already upgraded your project to the Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project isn't on the Blaze plan.)

      Only Blaze-level projects can use Cloud-based APIs.

    3. If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.
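
    Finally, make sure Firebase is configured at app launch. The getting started guide covers this; as a reminder, a minimal sketch looks like the following (your app delegate's name may differ):

    Swift

    import FirebaseCore
    import UIKit

    @main
    class AppDelegate: UIResponder, UIApplicationDelegate {
      func application(_ application: UIApplication,
                       didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Configure the default Firebase app before using any Firebase ML APIs.
        FirebaseApp.configure()
        return true
      }
    }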

Configure the landmark detector

By default, the Cloud detector uses the stable version of the model and returns up to 10 results. If you want to change either of these settings, specify them with a VisionCloudDetectorOptions object as in the following example:

Swift

let options = VisionCloudDetectorOptions()
options.modelType = .latest
options.maxResults = 20

Objective-C

FIRVisionCloudDetectorOptions *options =
    [[FIRVisionCloudDetectorOptions alloc] init];
options.modelType = FIRVisionCloudModelTypeLatest;
options.maxResults = 20;

In the next step, pass the VisionCloudDetectorOptions object when you create the Cloud detector object.

Run the landmark detector

To recognize landmarks in an image, pass the image as a UIImage or a CMSampleBufferRef to the VisionCloudLandmarkDetector's detect(in:) method:

  1. Get an instance of VisionCloudLandmarkDetector:

    Swift

    lazy var vision = Vision.vision()
    
    let cloudDetector = vision.cloudLandmarkDetector(options: options)
    // Or, to use the default settings:
    // let cloudDetector = vision.cloudLandmarkDetector()

    Objective-C

    FIRVision *vision = [FIRVision vision];
    FIRVisionCloudLandmarkDetector *landmarkDetector = [vision cloudLandmarkDetector];
    // Or, to change the default settings:
    // FIRVisionCloudLandmarkDetector *landmarkDetector =
    //     [vision cloudLandmarkDetectorWithOptions:options];
  2. Create a VisionImage object using a UIImage or a CMSampleBufferRef. For example, to use a UIImage:

    Swift

    let visionImage = VisionImage(image: uiImage)

    Objective-C

    FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
  3. Then, pass the image to the detect(in:) method:

    Swift

    cloudDetector.detect(in: visionImage) { landmarks, error in
      guard error == nil, let landmarks = landmarks, !landmarks.isEmpty else {
        // ...
        return
      }
    
      // Recognized landmarks
      // ...
    }

    Objective-C

    [landmarkDetector detectInImage:image
                         completion:^(NSArray<FIRVisionCloudLandmark *> *landmarks,
                                      NSError *error) {
      if (error != nil) {
        return;
      } else if (landmarks != nil) {
        // Got landmarks
      }
    }];

Get information about the recognized landmarks

If landmark recognition succeeds, an array of VisionCloudLandmark objects will be passed to the completion handler. From each object, you can get information about a landmark recognized in the image.

For example:

Swift

for landmark in landmarks {
  let landmarkDesc = landmark.landmark
  let boundingPoly = landmark.frame
  let entityId = landmark.entityId

  // A landmark can have multiple locations: for example, the location the image
  // was taken, and the location of the landmark depicted.
  for location in landmark.locations {
    let latitude = location.latitude
    let longitude = location.longitude
  }

  let confidence = landmark.confidence
}

Objective-C

for (FIRVisionCloudLandmark *landmark in landmarks) {
  NSString *landmarkDesc = landmark.landmark;
  CGRect frame = landmark.frame;
  NSString *entityId = landmark.entityId;

  // A landmark can have multiple locations: for example, the location the image
  // was taken, and the location of the landmark depicted.
  for (FIRVisionLatitudeLongitude *location in landmark.locations) {
    double latitude = [location.latitude doubleValue];
    double longitude = [location.longitude doubleValue];
  }

  float confidence = [landmark.confidence floatValue];
}
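
For example, to surface a single best guess, you might pick the result with the highest confidence score. A minimal sketch (the optional handling reflects that these properties are nullable):

Swift

// A sketch: choose the landmark with the highest confidence score.
// `landmarks` is the array delivered to the detect(in:) completion handler.
let best = landmarks.max { a, b in
  (a.confidence?.doubleValue ?? 0) < (b.confidence?.doubleValue ?? 0)
}
if let best = best, let name = best.landmark {
  print("Most likely landmark: \(name)")
}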
