You can use ML Kit to recognize text in images. ML Kit has both a general-purpose API suitable for recognizing text in images, such as the text of a street sign, and an API optimized for recognizing the text of documents. The general-purpose API has both on-device and cloud-based models. Document text recognition is available only as a cloud-based model. See the overview for a comparison of the cloud and on-device models.
Before you begin
- If you haven't already, add Firebase to your Android project.
- Add the dependencies for the ML Kit Android libraries to your module (app-level) Gradle file (usually `app/build.gradle`):

```groovy
apply plugin: 'com.android.application'
apply plugin: 'com.google.gms.google-services'

dependencies {
  // ...
  implementation 'com.google.firebase:firebase-ml-vision:24.0.3'
}
```
- Optional but recommended: if you use the on-device API, configure your app to automatically download the ML model to the device after your app is installed from the Play Store.

To do so, add the following declaration to your app's `AndroidManifest.xml` file:

```xml
<application ...>
  ...
  <meta-data
      android:name="com.google.firebase.ml.vision.DEPENDENCIES"
      android:value="ocr" />
  <!-- To use multiple models: android:value="ocr,model2,model3" -->
</application>
```
If you do not enable install-time model downloads, the model is downloaded the first time you run the on-device detector. Requests you make before the download has completed produce no results.

If you want to use the cloud-based model, and you have not already enabled the cloud-based APIs for your project, do so now:
- Open the ML Kit APIs page of the Firebase console.

  If you have not already upgraded your project to the Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project isn't on the Blaze plan.)

  Only Blaze-level projects can use cloud-based APIs.

- If cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.

If you want to use only the on-device model, you can skip this step.
Now you are ready to start recognizing text in images.
Input image guidelines
For ML Kit to accurately recognize text, input images must contain text that is represented by sufficient pixel data. Ideally, for Latin text, each character should be at least 16x16 pixels. For Chinese, Japanese, and Korean text (supported only by the cloud-based APIs), each character should be 24x24 pixels. For all languages, there is generally no accuracy benefit for characters to be larger than 24x24 pixels.

So, for example, a 640x480 image might work well to scan a business card that occupies the full width of the image. To scan a document printed on letter-sized paper, a 720x1280 pixel image might be required.

Poor image focus can hurt text recognition accuracy. If you aren't getting acceptable results, try asking the user to recapture the image.

If you are recognizing text in a real-time application, you might also want to consider the overall dimensions of the input images. Smaller images can be processed faster, so to reduce latency, capture images at lower resolutions (keeping in mind the accuracy requirements above) and ensure that the text occupies as much of the image as possible. Also see Tips to improve real-time performance.
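As a minimal sketch of the guidance above (not part of the ML Kit API), you might downscale a captured `Bitmap` before recognition so it stays close to the recommended resolutions; the `maxLongSide` default of 1280 px is an assumption based on the letter-sized-document figure mentioned earlier:

```kotlin
import android.graphics.Bitmap

// Downscale a bitmap so its longer side does not exceed maxLongSide pixels,
// trading a little resolution for lower recognition latency.
fun downscaleForRecognition(src: Bitmap, maxLongSide: Int = 1280): Bitmap {
    val longSide = maxOf(src.width, src.height)
    if (longSide <= maxLongSide) return src  // already small enough
    val scale = maxLongSide.toFloat() / longSide
    return Bitmap.createScaledBitmap(
        src,
        (src.width * scale).toInt(),
        (src.height * scale).toInt(),
        true  // filter for smoother downscaling
    )
}
```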
Recognize text in images
To recognize text in an image using either an on-device or cloud-based model, run the text recognizer as described below.
1. Run the text recognizer
To recognize text in an image, create a `FirebaseVisionImage` object from either a `Bitmap`, `media.Image`, `ByteBuffer`, byte array, or a file on the device. Then, pass the `FirebaseVisionImage` object to the `FirebaseVisionTextRecognizer`'s `processImage` method.

Create a `FirebaseVisionImage` object from your image.

- To create a `FirebaseVisionImage` object from a `media.Image` object, such as when capturing an image from a device's camera, pass the `media.Image` object and the image's rotation to `FirebaseVisionImage.fromMediaImage()`.

If you use the CameraX library, the `OnImageCapturedListener` and `ImageAnalysis.Analyzer` classes calculate the rotation value for you, so you just need to convert the rotation to one of ML Kit's `ROTATION_` constants before calling `FirebaseVisionImage.fromMediaImage()`:

Java
```java
private class YourAnalyzer implements ImageAnalysis.Analyzer {

    private int degreesToFirebaseRotation(int degrees) {
        switch (degrees) {
            case 0:
                return FirebaseVisionImageMetadata.ROTATION_0;
            case 90:
                return FirebaseVisionImageMetadata.ROTATION_90;
            case 180:
                return FirebaseVisionImageMetadata.ROTATION_180;
            case 270:
                return FirebaseVisionImageMetadata.ROTATION_270;
            default:
                throw new IllegalArgumentException(
                        "Rotation must be 0, 90, 180, or 270.");
        }
    }

    @Override
    public void analyze(ImageProxy imageProxy, int degrees) {
        if (imageProxy == null || imageProxy.getImage() == null) {
            return;
        }
        Image mediaImage = imageProxy.getImage();
        int rotation = degreesToFirebaseRotation(degrees);
        FirebaseVisionImage image =
                FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
        // Pass image to an ML Kit Vision API
        // ...
    }
}
```

Kotlin+KTX

```kotlin
private class YourImageAnalyzer : ImageAnalysis.Analyzer {

    private fun degreesToFirebaseRotation(degrees: Int): Int = when (degrees) {
        0 -> FirebaseVisionImageMetadata.ROTATION_0
        90 -> FirebaseVisionImageMetadata.ROTATION_90
        180 -> FirebaseVisionImageMetadata.ROTATION_180
        270 -> FirebaseVisionImageMetadata.ROTATION_270
        else -> throw Exception("Rotation must be 0, 90, 180, or 270.")
    }

    override fun analyze(imageProxy: ImageProxy?, degrees: Int) {
        val mediaImage = imageProxy?.image
        val imageRotation = degreesToFirebaseRotation(degrees)
        if (mediaImage != null) {
            val image = FirebaseVisionImage.fromMediaImage(mediaImage, imageRotation)
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}
```
If you don't use a camera library that gives you the image's rotation, you can calculate it from the device's rotation and the orientation of the camera sensor in the device:
Java
```java
private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 90);
    ORIENTATIONS.append(Surface.ROTATION_90, 0);
    ORIENTATIONS.append(Surface.ROTATION_180, 270);
    ORIENTATIONS.append(Surface.ROTATION_270, 180);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, Context context)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // On most devices, the sensor orientation is 90 degrees, but for some
    // devices it is 270 degrees. For devices with a sensor orientation of
    // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
    CameraManager cameraManager = (CameraManager) context.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);
    rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360;

    // Return the corresponding FirebaseVisionImageMetadata rotation value.
    int result;
    switch (rotationCompensation) {
        case 0:
            result = FirebaseVisionImageMetadata.ROTATION_0;
            break;
        case 90:
            result = FirebaseVisionImageMetadata.ROTATION_90;
            break;
        case 180:
            result = FirebaseVisionImageMetadata.ROTATION_180;
            break;
        case 270:
            result = FirebaseVisionImageMetadata.ROTATION_270;
            break;
        default:
            result = FirebaseVisionImageMetadata.ROTATION_0;
            Log.e(TAG, "Bad rotation value: " + rotationCompensation);
    }
    return result;
}
```

Kotlin+KTX

```kotlin
private val ORIENTATIONS = SparseIntArray()

init {
    ORIENTATIONS.append(Surface.ROTATION_0, 90)
    ORIENTATIONS.append(Surface.ROTATION_90, 0)
    ORIENTATIONS.append(Surface.ROTATION_180, 270)
    ORIENTATIONS.append(Surface.ROTATION_270, 180)
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, context: Context): Int {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // On most devices, the sensor orientation is 90 degrees, but for some
    // devices it is 270 degrees. For devices with a sensor orientation of
    // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
    val cameraManager = context.getSystemService(CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION)!!
    rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360

    // Return the corresponding FirebaseVisionImageMetadata rotation value.
    val result: Int
    when (rotationCompensation) {
        0 -> result = FirebaseVisionImageMetadata.ROTATION_0
        90 -> result = FirebaseVisionImageMetadata.ROTATION_90
        180 -> result = FirebaseVisionImageMetadata.ROTATION_180
        270 -> result = FirebaseVisionImageMetadata.ROTATION_270
        else -> {
            result = FirebaseVisionImageMetadata.ROTATION_0
            Log.e(TAG, "Bad rotation value: $rotationCompensation")
        }
    }
    return result
}
```
Then, pass the `media.Image` object and the rotation value to `FirebaseVisionImage.fromMediaImage()`:

Java
```java
FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
```
Kotlin+KTX
```kotlin
val image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
```
- To create a `FirebaseVisionImage` object from a file URI, pass the app context and file URI to `FirebaseVisionImage.fromFilePath()`. This is useful when you use an `ACTION_GET_CONTENT` intent to prompt the user to select an image from their gallery app (one way to obtain such a URI is sketched after this list).

Java
```java
FirebaseVisionImage image;
try {
    image = FirebaseVisionImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}
```

Kotlin+KTX

```kotlin
val image: FirebaseVisionImage
try {
    image = FirebaseVisionImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}
```
- To create a `FirebaseVisionImage` object from a `ByteBuffer` or a byte array, first calculate the image rotation as described above for `media.Image` input.

Then, create a `FirebaseVisionImageMetadata` object that contains the image's height, width, color encoding format, and rotation:

Java
```java
FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
        .setWidth(480)   // 480x360 is typically sufficient for
        .setHeight(360)  // image recognition
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setRotation(rotation)
        .build();
```

Kotlin+KTX

```kotlin
val metadata = FirebaseVisionImageMetadata.Builder()
        .setWidth(480)   // 480x360 is typically sufficient for
        .setHeight(360)  // image recognition
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setRotation(rotation)
        .build()
```
Use the buffer or array, and the metadata object, to create a `FirebaseVisionImage` object:

Java
```java
FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
// Or:
FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(byteArray, metadata);
```

Kotlin+KTX

```kotlin
val image = FirebaseVisionImage.fromByteBuffer(buffer, metadata)
// Or:
val image = FirebaseVisionImage.fromByteArray(byteArray, metadata)
```
- To create a `FirebaseVisionImage` object from a `Bitmap` object:

Java
```java
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
```

Kotlin+KTX

```kotlin
val image = FirebaseVisionImage.fromBitmap(bitmap)
```
The image represented by the `Bitmap` object must be upright, with no additional rotation required.
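As referenced in the file-URI option above, here is a minimal sketch of obtaining an image URI with an `ACTION_GET_CONTENT` intent; the request code `REQUEST_PICK_IMAGE` and the helper name are illustrative, not part of any Firebase API:

```kotlin
import android.app.Activity
import android.content.Intent

// Arbitrary request code used only by this sketch.
private const val REQUEST_PICK_IMAGE = 1001

// Launch the system picker so the user can choose an image from their gallery app.
fun pickImage(activity: Activity) {
    val intent = Intent(Intent.ACTION_GET_CONTENT).apply { type = "image/*" }
    activity.startActivityForResult(intent, REQUEST_PICK_IMAGE)
}

// In the calling Activity, the returned URI can then be passed to
// FirebaseVisionImage.fromFilePath():
//
// override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
//     super.onActivityResult(requestCode, resultCode, data)
//     if (requestCode == REQUEST_PICK_IMAGE && resultCode == Activity.RESULT_OK) {
//         val uri = data?.data ?: return
//         val image = FirebaseVisionImage.fromFilePath(this, uri)
//         // ...
//     }
// }
```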
Get an instance of `FirebaseVisionTextRecognizer`.

To use the on-device model:

Java
```java
FirebaseVisionTextRecognizer detector = FirebaseVision.getInstance()
        .getOnDeviceTextRecognizer();
```

Kotlin+KTX

```kotlin
val detector = FirebaseVision.getInstance()
        .onDeviceTextRecognizer
```
To use the cloud-based model:

Java
```java
FirebaseVisionTextRecognizer detector = FirebaseVision.getInstance()
        .getCloudTextRecognizer();
// Or, to change the default settings:
// FirebaseVisionTextRecognizer detector = FirebaseVision.getInstance()
//         .getCloudTextRecognizer(options);

// Or, to provide language hints to assist with language detection:
// See https://cloud.google.com/vision/docs/languages for supported languages
FirebaseVisionCloudTextRecognizerOptions options =
        new FirebaseVisionCloudTextRecognizerOptions.Builder()
                .setLanguageHints(Arrays.asList("en", "hi"))
                .build();
```

Kotlin+KTX

```kotlin
val detector = FirebaseVision.getInstance().cloudTextRecognizer
// Or, to change the default settings:
// val detector = FirebaseVision.getInstance().getCloudTextRecognizer(options)

// Or, to provide language hints to assist with language detection:
// See https://cloud.google.com/vision/docs/languages for supported languages
val options = FirebaseVisionCloudTextRecognizerOptions.Builder()
        .setLanguageHints(listOf("en", "hi"))
        .build()
```
Finally, pass the image to the `processImage` method:

Java
```java
Task<FirebaseVisionText> result =
        detector.processImage(image)
                .addOnSuccessListener(new OnSuccessListener<FirebaseVisionText>() {
                    @Override
                    public void onSuccess(FirebaseVisionText firebaseVisionText) {
                        // Task completed successfully
                        // ...
                    }
                })
                .addOnFailureListener(new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        // Task failed with an exception
                        // ...
                    }
                });
```

Kotlin+KTX

```kotlin
val result = detector.processImage(image)
        .addOnSuccessListener { firebaseVisionText ->
            // Task completed successfully
            // ...
        }
        .addOnFailureListener { e ->
            // Task failed with an exception
            // ...
        }
```
2. Extract text from blocks of recognized text
If the text recognition operation succeeds, a `FirebaseVisionText` object is passed to the success listener. A `FirebaseVisionText` object contains the full text recognized in the image and zero or more `TextBlock` objects. Each `TextBlock` represents a rectangular block of text, which contains zero or more `Line` objects. Each `Line` object contains zero or more `Element` objects, which represent words and word-like entities (dates, numbers, and so on).

For each `TextBlock`, `Line`, and `Element` object, you can get the text recognized in the region and the bounding coordinates of the region.
For example:
Java
```java
String resultText = result.getText();
for (FirebaseVisionText.TextBlock block : result.getTextBlocks()) {
    String blockText = block.getText();
    Float blockConfidence = block.getConfidence();
    List<RecognizedLanguage> blockLanguages = block.getRecognizedLanguages();
    Point[] blockCornerPoints = block.getCornerPoints();
    Rect blockFrame = block.getBoundingBox();
    for (FirebaseVisionText.Line line : block.getLines()) {
        String lineText = line.getText();
        Float lineConfidence = line.getConfidence();
        List<RecognizedLanguage> lineLanguages = line.getRecognizedLanguages();
        Point[] lineCornerPoints = line.getCornerPoints();
        Rect lineFrame = line.getBoundingBox();
        for (FirebaseVisionText.Element element : line.getElements()) {
            String elementText = element.getText();
            Float elementConfidence = element.getConfidence();
            List<RecognizedLanguage> elementLanguages = element.getRecognizedLanguages();
            Point[] elementCornerPoints = element.getCornerPoints();
            Rect elementFrame = element.getBoundingBox();
        }
    }
}
```

Kotlin+KTX

```kotlin
val resultText = result.text
for (block in result.textBlocks) {
    val blockText = block.text
    val blockConfidence = block.confidence
    val blockLanguages = block.recognizedLanguages
    val blockCornerPoints = block.cornerPoints
    val blockFrame = block.boundingBox
    for (line in block.lines) {
        val lineText = line.text
        val lineConfidence = line.confidence
        val lineLanguages = line.recognizedLanguages
        val lineCornerPoints = line.cornerPoints
        val lineFrame = line.boundingBox
        for (element in line.elements) {
            val elementText = element.text
            val elementConfidence = element.confidence
            val elementLanguages = element.recognizedLanguages
            val elementCornerPoints = element.cornerPoints
            val elementFrame = element.boundingBox
        }
    }
}
```
Tips to improve real-time performance
If you want to use the on-device model to recognize text in a real-time application, follow these guidelines to achieve the best frame rates:

- Throttle calls to the text recognizer. If a new video frame becomes available while the text recognizer is running, drop the frame (see the sketch after this list).
- If you are using the output of the text recognizer to overlay graphics on the input image, first get the result from ML Kit, then render the image and the overlay in a single step. By doing so, you render to the display surface only once for each input frame.
- If you use the Camera2 API, capture images in `ImageFormat.YUV_420_888` format.

  If you use the older Camera API, capture images in `ImageFormat.NV21` format.

- Consider capturing images at a lower resolution. However, also keep in mind this API's image dimension requirements.
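A minimal sketch of the frame-dropping guideline above, reusing the CameraX analyzer pattern shown earlier; the class name and the `isBusy` flag are illustrative, not part of the ML Kit API:

```kotlin
import java.util.concurrent.atomic.AtomicBoolean
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.common.FirebaseVisionImageMetadata

private class ThrottledTextAnalyzer : ImageAnalysis.Analyzer {
    // True while a recognition task is in flight; frames arriving meanwhile are dropped.
    private val isBusy = AtomicBoolean(false)

    private fun degreesToFirebaseRotation(degrees: Int): Int = when (degrees) {
        0 -> FirebaseVisionImageMetadata.ROTATION_0
        90 -> FirebaseVisionImageMetadata.ROTATION_90
        180 -> FirebaseVisionImageMetadata.ROTATION_180
        270 -> FirebaseVisionImageMetadata.ROTATION_270
        else -> throw IllegalArgumentException("Rotation must be 0, 90, 180, or 270.")
    }

    override fun analyze(imageProxy: ImageProxy?, degrees: Int) {
        val mediaImage = imageProxy?.image ?: return
        // Drop this frame if the recognizer is still processing the previous one.
        if (!isBusy.compareAndSet(false, true)) return

        val image = FirebaseVisionImage.fromMediaImage(
            mediaImage, degreesToFirebaseRotation(degrees))
        FirebaseVision.getInstance().onDeviceTextRecognizer
            .processImage(image)
            .addOnCompleteListener { isBusy.set(false) }  // ready for the next frame
    }
}
```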
Next steps

- Before you deploy an app that uses a Cloud API to production, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.
Recognize text in images of documents
To recognize the text of a document, configure and run the cloud-based document text recognizer as described below.
The document text recognition API, described below, provides an interface that is intended to be more convenient for working with images of documents. However, if you prefer the interface provided by the `FirebaseVisionTextRecognizer` API, you can use it instead to scan documents by configuring the cloud text recognizer to use the dense text model (a sketch of this configuration follows).
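A minimal sketch of that alternative, assuming the `setModelType()` option and `DENSE_MODEL` constant of `FirebaseVisionCloudTextRecognizerOptions`:

```kotlin
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.text.FirebaseVisionCloudTextRecognizerOptions

// Configure the cloud text recognizer to use the dense text model, which is
// better suited to documents, while keeping the FirebaseVisionTextRecognizer interface.
val options = FirebaseVisionCloudTextRecognizerOptions.Builder()
    .setModelType(FirebaseVisionCloudTextRecognizerOptions.DENSE_MODEL)
    .build()
val detector = FirebaseVision.getInstance().getCloudTextRecognizer(options)
```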
To use the document text recognition API:
1. Run the text recognizer
To recognize text in an image, create a `FirebaseVisionImage` object from either a `Bitmap`, `media.Image`, `ByteBuffer`, byte array, or a file on the device. Then, pass the `FirebaseVisionImage` object to the `FirebaseVisionDocumentTextRecognizer`'s `processImage` method.

Create a `FirebaseVisionImage` object from your image.

- To create a `FirebaseVisionImage` object from a `media.Image` object, such as when capturing an image from a device's camera, pass the `media.Image` object and the image's rotation to `FirebaseVisionImage.fromMediaImage()`.

If you use the CameraX library, the `OnImageCapturedListener` and `ImageAnalysis.Analyzer` classes calculate the rotation value for you, so you just need to convert the rotation to one of ML Kit's `ROTATION_` constants before calling `FirebaseVisionImage.fromMediaImage()`:

Java
```java
private class YourAnalyzer implements ImageAnalysis.Analyzer {

    private int degreesToFirebaseRotation(int degrees) {
        switch (degrees) {
            case 0:
                return FirebaseVisionImageMetadata.ROTATION_0;
            case 90:
                return FirebaseVisionImageMetadata.ROTATION_90;
            case 180:
                return FirebaseVisionImageMetadata.ROTATION_180;
            case 270:
                return FirebaseVisionImageMetadata.ROTATION_270;
            default:
                throw new IllegalArgumentException(
                        "Rotation must be 0, 90, 180, or 270.");
        }
    }

    @Override
    public void analyze(ImageProxy imageProxy, int degrees) {
        if (imageProxy == null || imageProxy.getImage() == null) {
            return;
        }
        Image mediaImage = imageProxy.getImage();
        int rotation = degreesToFirebaseRotation(degrees);
        FirebaseVisionImage image =
                FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
        // Pass image to an ML Kit Vision API
        // ...
    }
}
```

Kotlin+KTX

```kotlin
private class YourImageAnalyzer : ImageAnalysis.Analyzer {

    private fun degreesToFirebaseRotation(degrees: Int): Int = when (degrees) {
        0 -> FirebaseVisionImageMetadata.ROTATION_0
        90 -> FirebaseVisionImageMetadata.ROTATION_90
        180 -> FirebaseVisionImageMetadata.ROTATION_180
        270 -> FirebaseVisionImageMetadata.ROTATION_270
        else -> throw Exception("Rotation must be 0, 90, 180, or 270.")
    }

    override fun analyze(imageProxy: ImageProxy?, degrees: Int) {
        val mediaImage = imageProxy?.image
        val imageRotation = degreesToFirebaseRotation(degrees)
        if (mediaImage != null) {
            val image = FirebaseVisionImage.fromMediaImage(mediaImage, imageRotation)
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}
```
If you don't use a camera library that gives you the image's rotation, you can calculate it from the device's rotation and the orientation of the camera sensor in the device:
Java
```java
private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 90);
    ORIENTATIONS.append(Surface.ROTATION_90, 0);
    ORIENTATIONS.append(Surface.ROTATION_180, 270);
    ORIENTATIONS.append(Surface.ROTATION_270, 180);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, Context context)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // On most devices, the sensor orientation is 90 degrees, but for some
    // devices it is 270 degrees. For devices with a sensor orientation of
    // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
    CameraManager cameraManager = (CameraManager) context.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);
    rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360;

    // Return the corresponding FirebaseVisionImageMetadata rotation value.
    int result;
    switch (rotationCompensation) {
        case 0:
            result = FirebaseVisionImageMetadata.ROTATION_0;
            break;
        case 90:
            result = FirebaseVisionImageMetadata.ROTATION_90;
            break;
        case 180:
            result = FirebaseVisionImageMetadata.ROTATION_180;
            break;
        case 270:
            result = FirebaseVisionImageMetadata.ROTATION_270;
            break;
        default:
            result = FirebaseVisionImageMetadata.ROTATION_0;
            Log.e(TAG, "Bad rotation value: " + rotationCompensation);
    }
    return result;
}
```

Kotlin+KTX

```kotlin
private val ORIENTATIONS = SparseIntArray()

init {
    ORIENTATIONS.append(Surface.ROTATION_0, 90)
    ORIENTATIONS.append(Surface.ROTATION_90, 0)
    ORIENTATIONS.append(Surface.ROTATION_180, 270)
    ORIENTATIONS.append(Surface.ROTATION_270, 180)
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, context: Context): Int {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // On most devices, the sensor orientation is 90 degrees, but for some
    // devices it is 270 degrees. For devices with a sensor orientation of
    // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
    val cameraManager = context.getSystemService(CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION)!!
    rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360

    // Return the corresponding FirebaseVisionImageMetadata rotation value.
    val result: Int
    when (rotationCompensation) {
        0 -> result = FirebaseVisionImageMetadata.ROTATION_0
        90 -> result = FirebaseVisionImageMetadata.ROTATION_90
        180 -> result = FirebaseVisionImageMetadata.ROTATION_180
        270 -> result = FirebaseVisionImageMetadata.ROTATION_270
        else -> {
            result = FirebaseVisionImageMetadata.ROTATION_0
            Log.e(TAG, "Bad rotation value: $rotationCompensation")
        }
    }
    return result
}
```
Then, pass the `media.Image` object and the rotation value to `FirebaseVisionImage.fromMediaImage()`:

Java
```java
FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
```
Kotlin+KTX
```kotlin
val image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
```
- To create a `FirebaseVisionImage` object from a file URI, pass the app context and file URI to `FirebaseVisionImage.fromFilePath()`. This is useful when you use an `ACTION_GET_CONTENT` intent to prompt the user to select an image from their gallery app.

Java
```java
FirebaseVisionImage image;
try {
    image = FirebaseVisionImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}
```

Kotlin+KTX

```kotlin
val image: FirebaseVisionImage
try {
    image = FirebaseVisionImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}
```
- To create a `FirebaseVisionImage` object from a `ByteBuffer` or a byte array, first calculate the image rotation as described above for `media.Image` input.

Then, create a `FirebaseVisionImageMetadata` object that contains the image's height, width, color encoding format, and rotation:

Java
```java
FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
        .setWidth(480)   // 480x360 is typically sufficient for
        .setHeight(360)  // image recognition
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setRotation(rotation)
        .build();
```

Kotlin+KTX

```kotlin
val metadata = FirebaseVisionImageMetadata.Builder()
        .setWidth(480)   // 480x360 is typically sufficient for
        .setHeight(360)  // image recognition
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setRotation(rotation)
        .build()
```
Use the buffer or array, and the metadata object, to create a `FirebaseVisionImage` object:

Java
```java
FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
// Or:
FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(byteArray, metadata);
```

Kotlin+KTX

```kotlin
val image = FirebaseVisionImage.fromByteBuffer(buffer, metadata)
// Or:
val image = FirebaseVisionImage.fromByteArray(byteArray, metadata)
```
- To create a `FirebaseVisionImage` object from a `Bitmap` object:

Java
```java
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
```

Kotlin+KTX

```kotlin
val image = FirebaseVisionImage.fromBitmap(bitmap)
```
The image represented by the `Bitmap` object must be upright, with no additional rotation required.
Get an instance of `FirebaseVisionDocumentTextRecognizer`:

Java
```java
FirebaseVisionDocumentTextRecognizer detector = FirebaseVision.getInstance()
        .getCloudDocumentTextRecognizer();

// Or, to provide language hints to assist with language detection:
// See https://cloud.google.com/vision/docs/languages for supported languages
FirebaseVisionCloudDocumentRecognizerOptions options =
        new FirebaseVisionCloudDocumentRecognizerOptions.Builder()
                .setLanguageHints(Arrays.asList("en", "hi"))
                .build();
FirebaseVisionDocumentTextRecognizer detector = FirebaseVision.getInstance()
        .getCloudDocumentTextRecognizer(options);
```

Kotlin+KTX

```kotlin
val detector = FirebaseVision.getInstance()
        .cloudDocumentTextRecognizer

// Or, to provide language hints to assist with language detection:
// See https://cloud.google.com/vision/docs/languages for supported languages
val options = FirebaseVisionCloudDocumentRecognizerOptions.Builder()
        .setLanguageHints(listOf("en", "hi"))
        .build()
val detector = FirebaseVision.getInstance()
        .getCloudDocumentTextRecognizer(options)
```
Finally, pass the image to the `processImage` method:

Java
```java
detector.processImage(myImage)
        .addOnSuccessListener(new OnSuccessListener<FirebaseVisionDocumentText>() {
            @Override
            public void onSuccess(FirebaseVisionDocumentText result) {
                // Task completed successfully
                // ...
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // Task failed with an exception
                // ...
            }
        });
```

Kotlin+KTX

```kotlin
detector.processImage(myImage)
        .addOnSuccessListener { firebaseVisionDocumentText ->
            // Task completed successfully
            // ...
        }
        .addOnFailureListener { e ->
            // Task failed with an exception
            // ...
        }
```
2. Extract text from blocks of recognized text
If the text recognition operation succeeds, it returns a `FirebaseVisionDocumentText` object. A `FirebaseVisionDocumentText` object contains the full text recognized in the image and a hierarchy of objects that reflect the structure of the recognized document:

- `FirebaseVisionDocumentText.Block`
- `FirebaseVisionDocumentText.Paragraph`
- `FirebaseVisionDocumentText.Word`
- `FirebaseVisionDocumentText.Symbol`

For each `Block`, `Paragraph`, `Word`, and `Symbol` object, you can get the text recognized in the region and the bounding coordinates of the region.
For example:
Java
```java
String resultText = result.getText();
for (FirebaseVisionDocumentText.Block block : result.getBlocks()) {
    String blockText = block.getText();
    Float blockConfidence = block.getConfidence();
    List<RecognizedLanguage> blockRecognizedLanguages = block.getRecognizedLanguages();
    Rect blockFrame = block.getBoundingBox();
    for (FirebaseVisionDocumentText.Paragraph paragraph : block.getParagraphs()) {
        String paragraphText = paragraph.getText();
        Float paragraphConfidence = paragraph.getConfidence();
        List<RecognizedLanguage> paragraphRecognizedLanguages = paragraph.getRecognizedLanguages();
        Rect paragraphFrame = paragraph.getBoundingBox();
        for (FirebaseVisionDocumentText.Word word : paragraph.getWords()) {
            String wordText = word.getText();
            Float wordConfidence = word.getConfidence();
            List<RecognizedLanguage> wordRecognizedLanguages = word.getRecognizedLanguages();
            Rect wordFrame = word.getBoundingBox();
            for (FirebaseVisionDocumentText.Symbol symbol : word.getSymbols()) {
                String symbolText = symbol.getText();
                Float symbolConfidence = symbol.getConfidence();
                List<RecognizedLanguage> symbolRecognizedLanguages = symbol.getRecognizedLanguages();
                Rect symbolFrame = symbol.getBoundingBox();
            }
        }
    }
}
```

Kotlin+KTX

```kotlin
val resultText = result.text
for (block in result.blocks) {
    val blockText = block.text
    val blockConfidence = block.confidence
    val blockRecognizedLanguages = block.recognizedLanguages
    val blockFrame = block.boundingBox
    for (paragraph in block.paragraphs) {
        val paragraphText = paragraph.text
        val paragraphConfidence = paragraph.confidence
        val paragraphRecognizedLanguages = paragraph.recognizedLanguages
        val paragraphFrame = paragraph.boundingBox
        for (word in paragraph.words) {
            val wordText = word.text
            val wordConfidence = word.confidence
            val wordRecognizedLanguages = word.recognizedLanguages
            val wordFrame = word.boundingBox
            for (symbol in word.symbols) {
                val symbolText = symbol.text
                val symbolConfidence = symbol.confidence
                val symbolRecognizedLanguages = symbol.recognizedLanguages
                val symbolFrame = symbol.boundingBox
            }
        }
    }
}
```
Next steps

- Before you deploy an app that uses a Cloud API to production, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.