You can use ML Kit to recognize well-known landmarks in an image.
Before you begin
- If you haven't already added Firebase to your app, do so by following the steps in the getting started guide.
- Include the ML Kit libraries in your Podfile:
  pod 'Firebase/MLVision', '6.25.0'
  After you install or update your project's Pods, be sure to open your Xcode project using its .xcworkspace.
- In your app, import Firebase (a minimal initialization sketch follows this list):
Swift
import Firebase
Objective-C
@import Firebase;
- If you haven't already enabled Cloud-based APIs for your project, do so now:
  - Open the ML Kit APIs page of the Firebase console.
  - If you haven't already upgraded your project to the Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project isn't on the Blaze plan.) Only Blaze-level projects can use Cloud-based APIs.
  - If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.
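Firebase must also be initialized once at app launch before any ML Kit API is called. A minimal sketch of that setup, assuming the standard app delegate and a GoogleService-Info.plist already added to the project:
Swift
import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions:
                         [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Reads the configuration from GoogleService-Info.plist.
        FirebaseApp.configure()
        return true
    }
}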
Configure the landmark detector
By default, the Cloud detector uses the stable version of the model and returns up to 10 results. If you want to change either of these settings, specify them with a VisionCloudDetectorOptions object as in the following example:
Swift
let options = VisionCloudDetectorOptions()
options.modelType = .latest
options.maxResults = 20
Objective-C
FIRVisionCloudDetectorOptions *options =
    [[FIRVisionCloudDetectorOptions alloc] init];
options.modelType = FIRVisionCloudModelTypeLatest;
options.maxResults = 20;
In the next step, pass the VisionCloudDetectorOptions object when you create the Cloud detector object.
Run the landmark detector
To recognize landmarks in an image, pass the image as a UIImage or a CMSampleBufferRef to the VisionCloudLandmarkDetector's detect(in:) method:
- Get an instance of VisionCloudLandmarkDetector:
  Swift
lazy var vision = Vision.vision()

let cloudDetector = vision.cloudLandmarkDetector(options: options)
// Or, to use the default settings:
// let cloudDetector = vision.cloudLandmarkDetector()
Objective-C
FIRVision *vision = [FIRVision vision];
FIRVisionCloudLandmarkDetector *landmarkDetector = [vision cloudLandmarkDetector];
// Or, to change the default settings:
// FIRVisionCloudLandmarkDetector *landmarkDetector =
//     [vision cloudLandmarkDetectorWithOptions:options];
- Create a VisionImage object using a UIImage or a CMSampleBufferRef.
  To use a UIImage:
  - If necessary, rotate the image so that its imageOrientation property is .up. (One way to do this is sketched after the code below.)
  - Create a VisionImage object using the correctly-rotated UIImage. Do not specify any rotation metadata; the default value, .topLeft, must be used.
  Swift
let image = VisionImage(image: uiImage)
Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
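The rotation step above is left to you. One common approach, sketched below, is to redraw the image so the orientation is baked into the pixel data; normalizedToUpOrientation is a hypothetical helper, not part of ML Kit:
Swift
import UIKit

func normalizedToUpOrientation(_ image: UIImage) -> UIImage {
    // Already upright: nothing to do.
    guard image.imageOrientation != .up else { return image }
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        // draw(in:) applies the orientation transform, so the
        // resulting image's imageOrientation is .up.
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}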
  To use a CMSampleBufferRef:
  - Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer. To get the image orientation:
Swift
func imageOrientation(
    deviceOrientation: UIDeviceOrientation,
    cameraPosition: AVCaptureDevice.Position
) -> VisionDetectorImageOrientation {
    switch deviceOrientation {
    case .portrait:
        return cameraPosition == .front ? .leftTop : .rightTop
    case .landscapeLeft:
        return cameraPosition == .front ? .bottomLeft : .topLeft
    case .portraitUpsideDown:
        return cameraPosition == .front ? .rightBottom : .leftBottom
    case .landscapeRight:
        return cameraPosition == .front ? .topRight : .bottomRight
    case .faceDown, .faceUp, .unknown:
        return .leftTop
    }
}
Objective-C
- (FIRVisionDetectorImageOrientation)
    imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                           cameraPosition:(AVCaptureDevicePosition)cameraPosition {
  switch (deviceOrientation) {
    case UIDeviceOrientationPortrait:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationLeftTop;
      } else {
        return FIRVisionDetectorImageOrientationRightTop;
      }
    case UIDeviceOrientationLandscapeLeft:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationBottomLeft;
      } else {
        return FIRVisionDetectorImageOrientationTopLeft;
      }
    case UIDeviceOrientationPortraitUpsideDown:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationRightBottom;
      } else {
        return FIRVisionDetectorImageOrientationLeftBottom;
      }
    case UIDeviceOrientationLandscapeRight:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationTopRight;
      } else {
        return FIRVisionDetectorImageOrientationBottomRight;
      }
    default:
      return FIRVisionDetectorImageOrientationTopLeft;
  }
}
Then, create the metadata object:
Swift
let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
let metadata = VisionImageMetadata()
metadata.orientation = imageOrientation(
    deviceOrientation: UIDevice.current.orientation,
    cameraPosition: cameraPosition
)
Objective-C
FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
AVCaptureDevicePosition cameraPosition =
    AVCaptureDevicePositionBack;  // Set to the capture device you used.
metadata.orientation =
    [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                 cameraPosition:cameraPosition];
  - Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata (a sketch tying these steps together for a live camera feed follows this list):
  Swift
let image = VisionImage(buffer: sampleBuffer)
image.metadata = metadata
Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
image.metadata = metadata;
- Then, pass the image to the detect(in:) method:
  Swift
cloudDetector.detect(in: visionImage) { landmarks, error in
    guard error == nil, let landmarks = landmarks, !landmarks.isEmpty else {
        // ...
        return
    }

    // Recognized landmarks
    // ...
}
Objective-C
[landmarkDetector detectInImage:image
                     completion:^(NSArray<FIRVisionCloudLandmark *> *landmarks,
                                  NSError *error) {
  if (error != nil) {
    return;
  } else if (landmarks != nil) {
    // Got landmarks
  }
}];
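For a live camera feed, the steps above come together in the capture delegate. A minimal sketch, assuming an AVCaptureSession already delivers frames to this hypothetical VideoCaptureDelegate and reusing the imageOrientation(deviceOrientation:cameraPosition:) helper defined earlier:
Swift
import AVFoundation
import UIKit
import Firebase

class VideoCaptureDelegate: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private lazy var vision = Vision.vision()
    private lazy var cloudDetector = vision.cloudLandmarkDetector()

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Describe the frame's orientation so the detector can interpret it.
        let metadata = VisionImageMetadata()
        metadata.orientation = imageOrientation(
            deviceOrientation: UIDevice.current.orientation,
            cameraPosition: .back  // Set to the capture device you used.
        )

        let image = VisionImage(buffer: sampleBuffer)
        image.metadata = metadata

        // In practice you would throttle these calls: each one is a
        // billed Cloud API request.
        cloudDetector.detect(in: image) { landmarks, error in
            guard error == nil, let landmarks = landmarks, !landmarks.isEmpty else { return }
            print("Recognized \(landmarks.count) landmark(s)")  // See the next section.
        }
    }
}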
Get information about the recognized landmarks
If landmark recognition succeeds, an array of VisionCloudLandmark objects is passed to the completion handler. From each object, you can get information about a landmark recognized in the image.
For example:
Swift
for landmark in landmarks {
    let landmarkDesc = landmark.landmark
    let boundingPoly = landmark.frame
    let entityId = landmark.entityId

    // A landmark can have multiple locations: for example, the location the image
    // was taken, and the location of the landmark depicted.
    for location in landmark.locations {
        let latitude = location.latitude
        let longitude = location.longitude
    }

    let confidence = landmark.confidence
}
Objective-C
for (FIRVisionCloudLandmark *landmark in landmarks) {
  NSString *landmarkDesc = landmark.landmark;
  CGRect frame = landmark.frame;
  NSString *entityId = landmark.entityId;

  // A landmark can have multiple locations: for example, the location the image
  // was taken, and the location of the landmark depicted.
  for (FIRVisionLatitudeLongitude *location in landmark.locations) {
    double latitude = [location.latitude doubleValue];
    double longitude = [location.longitude doubleValue];
  }

  float confidence = [landmark.confidence floatValue];
}
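If you only need a quick, human-readable dump of these fields, something like the following sketch works; summary(of:) is a hypothetical helper, and the nullable fields fall back to placeholder values:
Swift
import Firebase

func summary(of landmarks: [VisionCloudLandmark]) -> String {
    return landmarks.map { landmark in
        // Both fields are optional on VisionCloudLandmark.
        let name = landmark.landmark ?? "Unknown landmark"
        let confidence = landmark.confidence?.floatValue ?? 0
        return String(format: "%@ (confidence: %.2f)", name, confidence)
    }.joined(separator: "\n")
}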
Next steps
- Before you deploy an app that uses a Cloud API to production, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.