You can use ML Kit to recognize text in images. ML Kit has both a general-purpose API suitable for recognizing text in images, such as the text of a street sign, and an API optimized for recognizing the text of documents. The general-purpose API has both on-device and cloud-based models. Document text recognition is available only as a cloud-based model. See the overview for a comparison of the cloud and on-device models.
Before you begin
- If you have not already added Firebase to your app, do so by following the steps in the getting started guide.
- Include the ML Kit libraries in your Podfile:

  pod 'Firebase/MLVision', '6.25.0'
  # If using an on-device API:
  pod 'Firebase/MLVisionTextModel', '6.25.0'

  After you install or update your project's Pods, be sure to open your Xcode project using its .xcworkspace.
- In your app, import Firebase:
Swift
import Firebase
Objective-C
@import Firebase;
- If you want to use the Cloud-based model, and you have not already enabled the Cloud-based APIs for your project, do so now:
  - Open the ML Kit APIs page of the Firebase console.
  - If you have not already upgraded your project to the Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project is not on the Blaze plan.) Only Blaze-level projects can use Cloud-based APIs.
  - If Cloud-based APIs are not already enabled, click Enable Cloud-based APIs.

  If you want to use only the on-device model, you can skip this step.
Now you are ready to start recognizing text in images.
Input image guidelines
- For ML Kit to accurately recognize text, input images must contain text that is represented by sufficient pixel data. Ideally, for Latin text, each character should be at least 16x16 pixels. For Chinese, Japanese, and Korean text (supported only by the cloud-based APIs), each character should be 24x24 pixels. For all languages, there is generally no accuracy benefit for characters to be larger than 24x24 pixels.

  So, for example, a 640x480 image might work well to scan a business card that occupies the full width of the image. To scan a document printed on letter-sized paper, a 720x1280 pixel image might be required.
- Poor image focus can hurt text recognition accuracy. If you are not getting acceptable results, try asking the user to recapture the image.
- If you are recognizing text in a real-time application, you might also want to consider the overall dimensions of the input images. Smaller images can be processed faster, so to reduce latency, capture images at lower resolutions (keeping in mind the accuracy requirements above) and ensure that the text occupies as much of the image as possible. Also see Tips to improve real-time performance.
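The sizing guidelines above can be turned into a quick back-of-the-envelope check. The helper below is illustrative only (the function name is ours, not part of the ML Kit API); it estimates the minimum image width needed for a line of characters to meet the per-character pixel minimums.

```swift
// Rough sizing helper based on the guidelines above: each Latin character
// should span at least 16x16 px, and each Chinese, Japanese, or Korean
// character at least 24x24 px. Illustrative; not part of the ML Kit API.
func minimumWidth(forCharacters count: Int, isCJK: Bool) -> Int {
    let perCharacter = isCJK ? 24 : 16
    return count * perCharacter
}
```

For instance, a line of 40 Latin characters spanning the frame needs at least 640 px of width, consistent with the 640x480 business-card example above.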
Recognize text in images
To recognize text in an image using either the on-device or cloud-based model, run the text recognizer as described below.
1. Run the text recognizer
Pass the image as a `UIImage` or a `CMSampleBufferRef` to `VisionTextRecognizer`'s `process(_:completion:)` method:

- Get an instance of `VisionTextRecognizer` by calling either `onDeviceTextRecognizer` or `cloudTextRecognizer`:

Swift
To use the on-device model:
let vision = Vision.vision()
let textRecognizer = vision.onDeviceTextRecognizer()
To use the cloud model:
let vision = Vision.vision()
let textRecognizer = vision.cloudTextRecognizer()

// Or, to provide language hints to assist with language detection:
// See https://cloud.google.com/vision/docs/languages for supported languages
let options = VisionCloudTextRecognizerOptions()
options.languageHints = ["en", "hi"]
let textRecognizer = vision.cloudTextRecognizer(options: options)
Objective-C
To use the on-device model:
FIRVision *vision = [FIRVision vision];
FIRVisionTextRecognizer *textRecognizer = [vision onDeviceTextRecognizer];
To use the cloud model:
FIRVision *vision = [FIRVision vision];
FIRVisionTextRecognizer *textRecognizer = [vision cloudTextRecognizer];

// Or, to provide language hints to assist with language detection:
// See https://cloud.google.com/vision/docs/languages for supported languages
FIRVisionCloudTextRecognizerOptions *options = [[FIRVisionCloudTextRecognizerOptions alloc] init];
options.languageHints = @[@"en", @"hi"];
FIRVisionTextRecognizer *textRecognizer = [vision cloudTextRecognizerWithOptions:options];
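An app that supports both models can put the choice behind a small factory. A minimal sketch using only the factory methods shown above (the helper name and the `preferCloud` flag are ours, not part of the ML Kit API):

```swift
// Illustrative helper: pick a recognizer at runtime, for example based on
// network availability or a user setting supplied by the caller.
func makeTextRecognizer(preferCloud: Bool) -> VisionTextRecognizer {
    let vision = Vision.vision()
    return preferCloud ? vision.cloudTextRecognizer()
                       : vision.onDeviceTextRecognizer()
}
```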
- Create a `VisionImage` object using a `UIImage` or a `CMSampleBufferRef`.

  To use a `UIImage`:

  - If necessary, rotate the image so that its `imageOrientation` property is `.up`.
  - Create a `VisionImage` object using the correctly rotated `UIImage`. Do not specify any rotation metadata; the default value, `.topLeft`, must be used.

Swift
let image = VisionImage(image: uiImage)
Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
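The rotation step above can be done by redrawing the image. A minimal UIKit sketch, assuming the image has a drawable backing (the helper name is ours, not part of the ML Kit API):

```swift
import UIKit

// Redraws the image so that its imageOrientation becomes .up, as required
// before creating a VisionImage without rotation metadata. UIImage.draw(in:)
// applies the orientation transform, so the rendered copy is upright.
func normalizedToUp(_ image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}
```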
To use a `CMSampleBufferRef`:

- Create a `VisionImageMetadata` object that specifies the orientation of the image data contained in the `CMSampleBufferRef` buffer.

  To get the image orientation:

Swift
func imageOrientation(
    deviceOrientation: UIDeviceOrientation,
    cameraPosition: AVCaptureDevice.Position
) -> VisionDetectorImageOrientation {
    switch deviceOrientation {
    case .portrait:
        return cameraPosition == .front ? .leftTop : .rightTop
    case .landscapeLeft:
        return cameraPosition == .front ? .bottomLeft : .topLeft
    case .portraitUpsideDown:
        return cameraPosition == .front ? .rightBottom : .leftBottom
    case .landscapeRight:
        return cameraPosition == .front ? .topRight : .bottomRight
    case .faceDown, .faceUp, .unknown:
        return .leftTop
    }
}
Objective-C
- (FIRVisionDetectorImageOrientation)
    imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                           cameraPosition:(AVCaptureDevicePosition)cameraPosition {
  switch (deviceOrientation) {
    case UIDeviceOrientationPortrait:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationLeftTop;
      } else {
        return FIRVisionDetectorImageOrientationRightTop;
      }
    case UIDeviceOrientationLandscapeLeft:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationBottomLeft;
      } else {
        return FIRVisionDetectorImageOrientationTopLeft;
      }
    case UIDeviceOrientationPortraitUpsideDown:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationRightBottom;
      } else {
        return FIRVisionDetectorImageOrientationLeftBottom;
      }
    case UIDeviceOrientationLandscapeRight:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationTopRight;
      } else {
        return FIRVisionDetectorImageOrientationBottomRight;
      }
    default:
      return FIRVisionDetectorImageOrientationTopLeft;
  }
}
Then, create the metadata object:
Swift
let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
let metadata = VisionImageMetadata()
metadata.orientation = imageOrientation(
    deviceOrientation: UIDevice.current.orientation,
    cameraPosition: cameraPosition
)
Objective-C
FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
AVCaptureDevicePosition cameraPosition = AVCaptureDevicePositionBack;  // Set to the capture device you used.
metadata.orientation =
    [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                 cameraPosition:cameraPosition];
- Create a `VisionImage` object using the `CMSampleBufferRef` object and the rotation metadata:

Swift
let image = VisionImage(buffer: sampleBuffer)
image.metadata = metadata
Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
image.metadata = metadata;
- Then, pass the image to the `process(_:completion:)` method:

Swift
textRecognizer.process(visionImage) { result, error in
    guard error == nil, let result = result else {
        // ...
        return
    }

    // Recognized text
}
Objective-C
[textRecognizer processImage:image
                  completion:^(FIRVisionText *_Nullable result, NSError *_Nullable error) {
    if (error != nil || result == nil) {
        // ...
        return;
    }

    // Recognized text
}];
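If your app uses Swift concurrency, the completion-handler call above can be wrapped in async/await. A sketch assuming only the `process(_:completion:)` signature shown in this guide (the wrapper name and error fallback are ours):

```swift
import Foundation

// Illustrative async wrapper around the completion-handler API above.
func recognizeText(in image: VisionImage,
                   using recognizer: VisionTextRecognizer) async throws -> VisionText {
    try await withCheckedThrowingContinuation { continuation in
        recognizer.process(image) { result, error in
            if let error = error {
                continuation.resume(throwing: error)
            } else if let result = result {
                continuation.resume(returning: result)
            } else {
                // Neither a result nor an error: surface a generic failure.
                continuation.resume(throwing: NSError(domain: "TextRecognition",
                                                      code: -1, userInfo: nil))
            }
        }
    }
}
```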
2. Extract text from blocks of recognized text
If the text recognition operation succeeds, it will return a `VisionText` object. A `VisionText` object contains the full text recognized in the image and zero or more `VisionTextBlock` objects. Each `VisionTextBlock` represents a rectangular block of text, which contains zero or more `VisionTextLine` objects. Each `VisionTextLine` object contains zero or more `VisionTextElement` objects, which represent words and word-like entities (dates, numbers, and so on). For each `VisionTextBlock`, `VisionTextLine`, and `VisionTextElement` object, you can get the text recognized in the region and the bounding coordinates of the region. For example:

Swift
let resultText = result.text
for block in result.blocks {
    let blockText = block.text
    let blockConfidence = block.confidence
    let blockLanguages = block.recognizedLanguages
    let blockCornerPoints = block.cornerPoints
    let blockFrame = block.frame
    for line in block.lines {
        let lineText = line.text
        let lineConfidence = line.confidence
        let lineLanguages = line.recognizedLanguages
        let lineCornerPoints = line.cornerPoints
        let lineFrame = line.frame
        for element in line.elements {
            let elementText = element.text
            let elementConfidence = element.confidence
            let elementLanguages = element.recognizedLanguages
            let elementCornerPoints = element.cornerPoints
            let elementFrame = element.frame
        }
    }
}
Objective-C
NSString *resultText = result.text;
for (FIRVisionTextBlock *block in result.blocks) {
  NSString *blockText = block.text;
  NSNumber *blockConfidence = block.confidence;
  NSArray<FIRVisionTextRecognizedLanguage *> *blockLanguages = block.recognizedLanguages;
  NSArray<NSValue *> *blockCornerPoints = block.cornerPoints;
  CGRect blockFrame = block.frame;
  for (FIRVisionTextLine *line in block.lines) {
    NSString *lineText = line.text;
    NSNumber *lineConfidence = line.confidence;
    NSArray<FIRVisionTextRecognizedLanguage *> *lineLanguages = line.recognizedLanguages;
    NSArray<NSValue *> *lineCornerPoints = line.cornerPoints;
    CGRect lineFrame = line.frame;
    for (FIRVisionTextElement *element in line.elements) {
      NSString *elementText = element.text;
      NSNumber *elementConfidence = element.confidence;
      NSArray<FIRVisionTextRecognizedLanguage *> *elementLanguages = element.recognizedLanguages;
      NSArray<NSValue *> *elementCornerPoints = element.cornerPoints;
      CGRect elementFrame = element.frame;
    }
  }
}
Tips to improve real-time performance
If you want to use the on-device model to recognize text in a real-time application, follow these guidelines to achieve the best frame rates:
- Throttle calls to the text recognizer. If a new video frame becomes available while the text recognizer is running, drop the frame.
- If you are using the output of the text recognizer to overlay graphics on the input image, first get the result from ML Kit, then render the image and the overlay in a single step. By doing so, you render to the display surface only once for each input frame. See the previewOverlayView and FIRDetectionOverlayView classes in the showcase sample app for an example.
- Consider capturing images at a lower resolution. However, also keep in mind this API's image dimension requirements.
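The frame-dropping advice above can be sketched with a tiny gate object. Illustrative names, not part of the ML Kit API; a real capture pipeline would also need to synchronize access from its dispatch queue:

```swift
// A minimal frame-dropping gate: skip new video frames while a recognition
// request is still in flight.
final class FrameGate {
    private var busy = false

    /// Returns true if the caller may process this frame; the caller must
    /// call finish() when recognition completes.
    func tryBegin() -> Bool {
        guard !busy else { return false }
        busy = true
        return true
    }

    func finish() { busy = false }
}
```

In the capture callback, call `tryBegin()` before invoking the recognizer, drop the frame if it returns false, and call `finish()` in the `process(_:completion:)` completion handler.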
Next steps
- Before you deploy to production an app that uses a Cloud API, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.
Recognize text in images of documents
To recognize the text of a document, configure and run the cloud-based document text recognizer as described below.
The document text recognition API, described below, provides an interface that is intended to be more convenient for working with images of documents. However, if you prefer the interface provided by the sparse text API, you can use it instead to scan documents by configuring the cloud text recognizer to use the dense text model.
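Configuring the sparse-text recognizer to use the dense text model, as described above, might look like the sketch below. The `modelType` option is assumed from the `VisionCloudTextRecognizerOptions` API; verify the name against the SDK reference for your version.

```swift
// Sketch: keep the sparse-text interface (VisionText result hierarchy)
// but select the dense model intended for documents.
let options = VisionCloudTextRecognizerOptions()
options.modelType = .dense  // assumed option name
let textRecognizer = Vision.vision().cloudTextRecognizer(options: options)
```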
To use the document text recognition API:
1. Run the text recognizer
Pass the image as a `UIImage` or a `CMSampleBufferRef` to `VisionDocumentTextRecognizer`'s `process(_:completion:)` method:

- Get an instance of `VisionDocumentTextRecognizer` by calling `cloudDocumentTextRecognizer`:

Swift
let vision = Vision.vision()
let textRecognizer = vision.cloudDocumentTextRecognizer()

// Or, to provide language hints to assist with language detection:
// See https://cloud.google.com/vision/docs/languages for supported languages
let options = VisionCloudDocumentTextRecognizerOptions()
options.languageHints = ["en", "hi"]
let textRecognizer = vision.cloudDocumentTextRecognizer(options: options)
Objective-C
FIRVision *vision = [FIRVision vision];
FIRVisionDocumentTextRecognizer *textRecognizer = [vision cloudDocumentTextRecognizer];

// Or, to provide language hints to assist with language detection:
// See https://cloud.google.com/vision/docs/languages for supported languages
FIRVisionCloudDocumentTextRecognizerOptions *options = [[FIRVisionCloudDocumentTextRecognizerOptions alloc] init];
options.languageHints = @[@"en", @"hi"];
FIRVisionDocumentTextRecognizer *textRecognizer = [vision cloudDocumentTextRecognizerWithOptions:options];
- Create a `VisionImage` object using a `UIImage` or a `CMSampleBufferRef`.

  To use a `UIImage`:

  - If necessary, rotate the image so that its `imageOrientation` property is `.up`.
  - Create a `VisionImage` object using the correctly rotated `UIImage`. Do not specify any rotation metadata; the default value, `.topLeft`, must be used.

Swift
let image = VisionImage(image: uiImage)
Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
To use a `CMSampleBufferRef`:

- Create a `VisionImageMetadata` object that specifies the orientation of the image data contained in the `CMSampleBufferRef` buffer.

  To get the image orientation:

Swift
func imageOrientation(
    deviceOrientation: UIDeviceOrientation,
    cameraPosition: AVCaptureDevice.Position
) -> VisionDetectorImageOrientation {
    switch deviceOrientation {
    case .portrait:
        return cameraPosition == .front ? .leftTop : .rightTop
    case .landscapeLeft:
        return cameraPosition == .front ? .bottomLeft : .topLeft
    case .portraitUpsideDown:
        return cameraPosition == .front ? .rightBottom : .leftBottom
    case .landscapeRight:
        return cameraPosition == .front ? .topRight : .bottomRight
    case .faceDown, .faceUp, .unknown:
        return .leftTop
    }
}
Objective-C
- (FIRVisionDetectorImageOrientation)
    imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                           cameraPosition:(AVCaptureDevicePosition)cameraPosition {
  switch (deviceOrientation) {
    case UIDeviceOrientationPortrait:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationLeftTop;
      } else {
        return FIRVisionDetectorImageOrientationRightTop;
      }
    case UIDeviceOrientationLandscapeLeft:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationBottomLeft;
      } else {
        return FIRVisionDetectorImageOrientationTopLeft;
      }
    case UIDeviceOrientationPortraitUpsideDown:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationRightBottom;
      } else {
        return FIRVisionDetectorImageOrientationLeftBottom;
      }
    case UIDeviceOrientationLandscapeRight:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationTopRight;
      } else {
        return FIRVisionDetectorImageOrientationBottomRight;
      }
    default:
      return FIRVisionDetectorImageOrientationTopLeft;
  }
}
Then, create the metadata object:
Swift
let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
let metadata = VisionImageMetadata()
metadata.orientation = imageOrientation(
    deviceOrientation: UIDevice.current.orientation,
    cameraPosition: cameraPosition
)
Objective-C
FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
AVCaptureDevicePosition cameraPosition = AVCaptureDevicePositionBack;  // Set to the capture device you used.
metadata.orientation =
    [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                 cameraPosition:cameraPosition];
- Create a `VisionImage` object using the `CMSampleBufferRef` object and the rotation metadata:

Swift
let image = VisionImage(buffer: sampleBuffer)
image.metadata = metadata
Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
image.metadata = metadata;
- Then, pass the image to the `process(_:completion:)` method:

Swift
textRecognizer.process(visionImage) { result, error in
    guard error == nil, let result = result else {
        // ...
        return
    }

    // Recognized text
}
Objective-C
[textRecognizer processImage:image
                  completion:^(FIRVisionDocumentText *_Nullable result, NSError *_Nullable error) {
    if (error != nil || result == nil) {
        // ...
        return;
    }

    // Recognized text
}];
2. Extract text from blocks of recognized text
If the text recognition operation succeeds, it will return a `VisionDocumentText` object. A `VisionDocumentText` object contains the full text recognized in the image and a hierarchy of objects that reflect the structure of the recognized document.

For each `VisionDocumentTextBlock`, `VisionDocumentTextParagraph`, `VisionDocumentTextWord`, and `VisionDocumentTextSymbol` object, you can get the text recognized in the region and the bounding coordinates of the region.

For example:
Swift
let resultText = result.text
for block in result.blocks {
    let blockText = block.text
    let blockConfidence = block.confidence
    let blockRecognizedLanguages = block.recognizedLanguages
    let blockBreak = block.recognizedBreak
    let blockCornerPoints = block.cornerPoints
    let blockFrame = block.frame
    for paragraph in block.paragraphs {
        let paragraphText = paragraph.text
        let paragraphConfidence = paragraph.confidence
        let paragraphRecognizedLanguages = paragraph.recognizedLanguages
        let paragraphBreak = paragraph.recognizedBreak
        let paragraphCornerPoints = paragraph.cornerPoints
        let paragraphFrame = paragraph.frame
        for word in paragraph.words {
            let wordText = word.text
            let wordConfidence = word.confidence
            let wordRecognizedLanguages = word.recognizedLanguages
            let wordBreak = word.recognizedBreak
            let wordCornerPoints = word.cornerPoints
            let wordFrame = word.frame
            for symbol in word.symbols {
                let symbolText = symbol.text
                let symbolConfidence = symbol.confidence
                let symbolRecognizedLanguages = symbol.recognizedLanguages
                let symbolBreak = symbol.recognizedBreak
                let symbolCornerPoints = symbol.cornerPoints
                let symbolFrame = symbol.frame
            }
        }
    }
}
Objective-C
NSString *resultText = result.text;
for (FIRVisionDocumentTextBlock *block in result.blocks) {
  NSString *blockText = block.text;
  NSNumber *blockConfidence = block.confidence;
  NSArray<FIRVisionTextRecognizedLanguage *> *blockRecognizedLanguages = block.recognizedLanguages;
  FIRVisionTextRecognizedBreak *blockBreak = block.recognizedBreak;
  CGRect blockFrame = block.frame;
  for (FIRVisionDocumentTextParagraph *paragraph in block.paragraphs) {
    NSString *paragraphText = paragraph.text;
    NSNumber *paragraphConfidence = paragraph.confidence;
    NSArray<FIRVisionTextRecognizedLanguage *> *paragraphRecognizedLanguages = paragraph.recognizedLanguages;
    FIRVisionTextRecognizedBreak *paragraphBreak = paragraph.recognizedBreak;
    CGRect paragraphFrame = paragraph.frame;
    for (FIRVisionDocumentTextWord *word in paragraph.words) {
      NSString *wordText = word.text;
      NSNumber *wordConfidence = word.confidence;
      NSArray<FIRVisionTextRecognizedLanguage *> *wordRecognizedLanguages = word.recognizedLanguages;
      FIRVisionTextRecognizedBreak *wordBreak = word.recognizedBreak;
      CGRect wordFrame = word.frame;
      for (FIRVisionDocumentTextSymbol *symbol in word.symbols) {
        NSString *symbolText = symbol.text;
        NSNumber *symbolConfidence = symbol.confidence;
        NSArray<FIRVisionTextRecognizedLanguage *> *symbolRecognizedLanguages = symbol.recognizedLanguages;
        FIRVisionTextRecognizedBreak *symbolBreak = symbol.recognizedBreak;
        CGRect symbolFrame = symbol.frame;
      }
    }
  }
}
Next steps
- Before you deploy to production an app that uses a Cloud API, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.