If you haven't already upgraded your project to the pay-as-you-go Blaze pricing plan, click Upgrade to do so. (You'll be prompted to upgrade only if your project isn't on the Blaze pricing plan.)
Only projects on the Blaze pricing plan can use Cloud-based APIs.
If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.
Configure the landmark detector
By default, the Cloud detector uses the stable version of the model and returns up to 10 results. If you want to change either of these settings, specify them with a VisionCloudDetectorOptions object, as in the following example:
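Swift

let options = VisionCloudDetectorOptions()
options.modelType = .latest
options.maxResults = 20

Objective-C

FIRVisionCloudDetectorOptions *options =
    [[FIRVisionCloudDetectorOptions alloc] init];
options.modelType = FIRVisionCloudModelTypeLatest;
options.maxResults = 20;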
In the next step, pass the VisionCloudDetectorOptions object when you create the Cloud detector object.
Run the landmark detector
To recognize landmarks in an image, pass the image as a UIImage or a CMSampleBufferRef to the VisionCloudLandmarkDetector's detect(in:) method. First, get an instance of VisionCloudLandmarkDetector:
Swift

lazy var vision = Vision.vision()

let cloudDetector = vision.cloudLandmarkDetector(options: options)
// Or, to use the default settings:
// let cloudDetector = vision.cloudLandmarkDetector()
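Then, pass the image to the detect(in:) method. A minimal sketch of the call, assuming visionImage is a VisionImage created from the UIImage or CMSampleBufferRef mentioned above:

Swift

// visionImage is assumed to be a VisionImage wrapping your UIImage or CMSampleBufferRef.
cloudDetector.detect(in: visionImage) { landmarks, error in
  guard error == nil, let landmarks = landmarks, !landmarks.isEmpty else {
    // An error occurred or no landmarks were recognized.
    return
  }

  // Recognized landmarks.
  // ...
}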
If landmark recognition succeeds, an array of VisionCloudLandmark objects is passed to the completion handler. From each object, you can get information about a landmark recognized in the image.
For example:
Swift
for landmark in landmarks {
  let landmarkDesc = landmark.landmark
  let boundingPoly = landmark.frame
  let entityId = landmark.entityId

  // A landmark can have multiple locations: for example, the location the image
  // was taken, and the location of the landmark depicted.
  for location in landmark.locations {
    let latitude = location.latitude
    let longitude = location.longitude
  }

  let confidence = landmark.confidence
}
Objective-C
for (FIRVisionCloudLandmark *landmark in landmarks) {
  NSString *landmarkDesc = landmark.landmark;
  CGRect frame = landmark.frame;
  NSString *entityId = landmark.entityId;

  // A landmark can have multiple locations: for example, the location the image
  // was taken, and the location of the landmark depicted.
  for (FIRVisionLatitudeLongitude *location in landmark.locations) {
    double latitude = [location.latitude doubleValue];
    double longitude = [location.longitude doubleValue];
  }

  float confidence = [landmark.confidence floatValue];
}