Recognize Text in Images with ML Kit on Android

You can use ML Kit to recognize text in images. ML Kit has both a general-purpose API suitable for recognizing text in images, such as the text of a street sign, and an API optimized for recognizing the text of documents. The general-purpose API has both on-device and cloud-based models. Document text recognition is available only as a cloud-based model. See the overview for a comparison of the cloud-based and on-device models.

For an example of this API in use, see the ML Kit quickstart sample on GitHub, or try the codelab.

Before you begin

  1. If you haven't already, add Firebase to your app by following the steps in the getting started guide.
  2. Add the ML Kit dependencies to your app-level build.gradle file:
    dependencies {
      // ...
    
      implementation 'com.google.firebase:firebase-ml-vision:19.0.2'
    }
    
  3. Optional but recommended: If you use the on-device API, configure your app to automatically download the ML model to the device after your app is installed from the Play Store.

    To do so, add the following declaration to your app's AndroidManifest.xml file:

    <application ...>
      ...
      <meta-data
          android:name="com.google.firebase.ml.vision.DEPENDENCIES"
          android:value="ocr" />
      <!-- To use multiple models: android:value="ocr,model2,model3" -->
    </application>
    
    If you don't enable install-time model downloads, the model will be downloaded the first time you run the on-device detector. Requests you make before the download has completed will produce no results.
  4. If you want to use the cloud-based model, and you have not already enabled the Cloud-based APIs for your project, do so now:

    1. Open the ML Kit APIs page of the Firebase console.
    2. If you have not already upgraded your project to the Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project isn't on the Blaze plan.)

      Only Blaze-level projects can use Cloud-based APIs.

    3. If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.

    If you want to use only the on-device model, you can skip this step.

Now you are ready to start recognizing text in images.

Input image guidelines

  • For ML Kit to accurately recognize text, input images must contain text that is represented by sufficient pixel data. Ideally, for Latin text, each character should be at least 16x16 pixels. For Chinese, Japanese, and Korean text (supported only by the cloud-based APIs), each character should be 24x24 pixels. For all languages, there is generally no accuracy benefit for characters to be larger than 24x24 pixels.

    So, for example, a 640x480 image might work well to scan a business card that occupies the full width of the image. To scan a document printed on letter-sized paper, a 720x1280 pixel image might be required.

  • Poor image focus can hurt text recognition accuracy. If you aren't getting acceptable results, try asking the user to recapture the image.

  • If you are recognizing text in a real-time application, you might also want to consider the overall dimensions of the input images. Smaller images can be processed faster, so to reduce latency, capture images at lower resolutions (keeping in mind the accuracy requirements above) and make sure that the text occupies as much of the image as possible. Also see Tips to improve real-time performance; a minimal downscaling sketch follows this list.
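
The downscaling sketch below is not part of the original guide. It assumes you already have a full-resolution Bitmap named source, and it shrinks the longer edge to roughly 1280 px before recognition, which keeps a letter-sized page within the guideline above while cutting processing time.

Kotlin

    // Minimal sketch, assuming `source` is an existing full-resolution Bitmap.
    val maxEdge = 1280
    val scale = maxEdge.toFloat() / maxOf(source.width, source.height)
    val scaled = if (scale < 1f) {
        // Shrink proportionally; the `true` flag enables bilinear filtering.
        Bitmap.createScaledBitmap(
                source,
                (source.width * scale).toInt(),
                (source.height * scale).toInt(),
                true)
    } else {
        source // already small enough
    }
    val image = FirebaseVisionImage.fromBitmap(scaled)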


Recognize text in images

To recognize text in an image using either an on-device or cloud-based model, run the text recognizer as described below.

1. Run the text recognizer

To recognize text in an image, create a FirebaseVisionImage object from a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionTextRecognizer's processImage method.

  1. Create a FirebaseVisionImage object from your image.

    • To create a FirebaseVisionImage object from a Bitmap object:

      Java
      Android

      FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

      Kotlin
      Android

      val image = FirebaseVisionImage.fromBitmap(bitmap)
      The image represented by the Bitmap object must be upright, with no additional rotation required.
    • To create a FirebaseVisionImage object from a media.Image object, such as when capturing an image from a device's camera, first determine the angle the image must be rotated to compensate for both the device's rotation and the orientation of the camera sensor in the device:

      Java
      Android

      private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
      static {
          ORIENTATIONS.append(Surface.ROTATION_0, 90);
          ORIENTATIONS.append(Surface.ROTATION_90, 0);
          ORIENTATIONS.append(Surface.ROTATION_180, 270);
          ORIENTATIONS.append(Surface.ROTATION_270, 180);
      }
      
      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      private int getRotationCompensation(String cameraId, Activity activity, Context context)
              throws CameraAccessException {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
          int rotationCompensation = ORIENTATIONS.get(deviceRotation);
      
          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          CameraManager cameraManager = (CameraManager) context.getSystemService(CAMERA_SERVICE);
          int sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION);
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360;
      
          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          int result;
          switch (rotationCompensation) {
              case 0:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  break;
              case 90:
                  result = FirebaseVisionImageMetadata.ROTATION_90;
                  break;
              case 180:
                  result = FirebaseVisionImageMetadata.ROTATION_180;
                  break;
              case 270:
                  result = FirebaseVisionImageMetadata.ROTATION_270;
                  break;
              default:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  Log.e(TAG, "Bad rotation value: " + rotationCompensation);
          }
          return result;
      }

      Kotlin
      Android

      private val ORIENTATIONS = SparseIntArray()
      
      init {
          ORIENTATIONS.append(Surface.ROTATION_0, 90)
          ORIENTATIONS.append(Surface.ROTATION_90, 0)
          ORIENTATIONS.append(Surface.ROTATION_180, 270)
          ORIENTATIONS.append(Surface.ROTATION_270, 180)
      }
      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      @Throws(CameraAccessException::class)
      private fun getRotationCompensation(cameraId: String, activity: Activity, context: Context): Int {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          val deviceRotation = activity.windowManager.defaultDisplay.rotation
          var rotationCompensation = ORIENTATIONS.get(deviceRotation)
      
          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          val cameraManager = context.getSystemService(CAMERA_SERVICE) as CameraManager
          val sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION)!!
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360
      
          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          val result: Int
          when (rotationCompensation) {
              0 -> result = FirebaseVisionImageMetadata.ROTATION_0
              90 -> result = FirebaseVisionImageMetadata.ROTATION_90
              180 -> result = FirebaseVisionImageMetadata.ROTATION_180
              270 -> result = FirebaseVisionImageMetadata.ROTATION_270
              else -> {
                  result = FirebaseVisionImageMetadata.ROTATION_0
                  Log.e(TAG, "Bad rotation value: $rotationCompensation")
              }
          }
          return result
      }

      Then, pass the media.Image object and the rotation value to FirebaseVisionImage.fromMediaImage() (a short end-to-end wiring sketch appears after this list):

      Java
      Android

      FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);

      Kotlin
      Android

      val image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
    • To create a FirebaseVisionImage object from a ByteBuffer or a byte array, first calculate the image rotation as described above for media.Image input.

      Then, create a FirebaseVisionImageMetadata object that contains the image's height, width, color encoding format, and rotation:

      Java
      Android

      FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
              .setWidth(480)   // 480x360 is typically sufficient for
              .setHeight(360)  // image recognition
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build();

      Kotlin
      Android

      val metadata = FirebaseVisionImageMetadata.Builder()
              .setWidth(480)   // 480x360 is typically sufficient for
              .setHeight(360)  // image recognition
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build()

      Use the buffer or array, together with the metadata object, to create a FirebaseVisionImage object:

      Java
      Android

      FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
      // Or: FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(byteArray, metadata);

      Kotlin
      Android

      val image = FirebaseVisionImage.fromByteBuffer(buffer, metadata)
      // Or: val image = FirebaseVisionImage.fromByteArray(byteArray, metadata)
    • To create a FirebaseVisionImage object from a file, pass the app context and file URI to FirebaseVisionImage.fromFilePath():

      Java
      Android

      FirebaseVisionImage image;
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri);
      } catch (IOException e) {
          e.printStackTrace();
      }

      Kotlin
      Android

      val image: FirebaseVisionImage
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri)
      } catch (e: IOException) {
          e.printStackTrace()
      }
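
    The wiring sketch below, referenced from the media.Image bullet above, is not part of the original guide. It shows one hypothetical way to feed camera2 frames to the rotation helper shown earlier; imageReader, cameraId, activity, and backgroundHandler are assumed to come from your existing camera setup.

    Kotlin

    val imageReader = ImageReader.newInstance(480, 360, ImageFormat.YUV_420_888, 2)
    imageReader.setOnImageAvailableListener({ reader ->
        // Take the most recent frame; skip the callback if none is available.
        val mediaImage = reader.acquireLatestImage() ?: return@setOnImageAvailableListener
        val rotation = getRotationCompensation(cameraId, activity, activity.applicationContext)
        val visionImage = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
        // ... pass visionImage to the recognizer; close mediaImage once ML Kit
        // no longer needs the frame (for example, in a completion listener).
    }, backgroundHandler)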

  2. Get an instance of FirebaseVisionTextRecognizer.

    To use the on-device model:

    Java
    Android

    FirebaseVisionTextRecognizer detector = FirebaseVision.getInstance()
            .getOnDeviceTextRecognizer();

    Kotlin
    Android

    val detector = FirebaseVision.getInstance()
            .onDeviceTextRecognizer

    To use the cloud-based model:

    Java
    Android

    FirebaseVisionTextRecognizer detector = FirebaseVision.getInstance()
            .getCloudTextRecognizer();
    // Or, to change the default settings:
    //   FirebaseVisionTextRecognizer detector = FirebaseVision.getInstance()
    //          .getCloudTextRecognizer(options);
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    FirebaseVisionCloudTextRecognizerOptions options = new FirebaseVisionCloudTextRecognizerOptions.Builder()
            .setLanguageHints(Arrays.asList("en", "hi"))
            .build();
    

    Kotlin
    Android

    val detector = FirebaseVision.getInstance().cloudTextRecognizer
    // Or, to change the default settings:
    // val detector = FirebaseVision.getInstance().getCloudTextRecognizer(options)
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    val options = FirebaseVisionCloudTextRecognizerOptions.Builder()
            .setLanguageHints(Arrays.asList("en", "hi"))
            .build()
    
  3. Finally, pass the image to the processImage method:

    Java
    Android

    Task<FirebaseVisionText> result =
            detector.processImage(image)
                    .addOnSuccessListener(new OnSuccessListener<FirebaseVisionText>() {
                        @Override
                        public void onSuccess(FirebaseVisionText firebaseVisionText) {
                            // Task completed successfully
                            // ...
                        }
                    })
                    .addOnFailureListener(
                            new OnFailureListener() {
                                @Override
                                public void onFailure(@NonNull Exception e) {
                                    // Task failed with an exception
                                    // ...
                                }
                            });

    Kotlin
    Android

    val result = detector.processImage(image)
            .addOnSuccessListener { firebaseVisionText ->
                // Task completed successfully
                // ...
            }
            .addOnFailureListener {
                // Task failed with an exception
                // ...
            }

2. Extract text from blocks of recognized text

If the text recognition operation succeeds, a FirebaseVisionText object is passed to the success listener. A FirebaseVisionText object contains the full text recognized in the image and zero or more TextBlock objects.

Each TextBlock represents a rectangular block of text, which contains zero or more Line objects. Each Line object contains zero or more Element objects, which represent words and word-like entities (dates, numbers, and so on).

For each TextBlock, Line, and Element object, you can get the text recognized in the region and the bounding coordinates of the region.

For example:

Java
Android

String resultText = result.getText();
for (FirebaseVisionText.TextBlock block: result.getTextBlocks()) {
    String blockText = block.getText();
    Float blockConfidence = block.getConfidence();
    List<RecognizedLanguage> blockLanguages = block.getRecognizedLanguages();
    Point[] blockCornerPoints = block.getCornerPoints();
    Rect blockFrame = block.getBoundingBox();
    for (FirebaseVisionText.Line line: block.getLines()) {
        String lineText = line.getText();
        Float lineConfidence = line.getConfidence();
        List<RecognizedLanguage> lineLanguages = line.getRecognizedLanguages();
        Point[] lineCornerPoints = line.getCornerPoints();
        Rect lineFrame = line.getBoundingBox();
        for (FirebaseVisionText.Element element: line.getElements()) {
            String elementText = element.getText();
            Float elementConfidence = element.getConfidence();
            List<RecognizedLanguage> elementLanguages = element.getRecognizedLanguages();
            Point[] elementCornerPoints = element.getCornerPoints();
            Rect elementFrame = element.getBoundingBox();
        }
    }
}

Kotlin
Android

val resultText = result.text
for (block in result.textBlocks) {
    val blockText = block.text
    val blockConfidence = block.confidence
    val blockLanguages = block.recognizedLanguages
    val blockCornerPoints = block.cornerPoints
    val blockFrame = block.boundingBox
    for (line in block.lines) {
        val lineText = line.text
        val lineConfidence = line.confidence
        val lineLanguages = line.recognizedLanguages
        val lineCornerPoints = line.cornerPoints
        val lineFrame = line.boundingBox
        for (element in line.elements) {
            val elementText = element.text
            val elementConfidence = element.confidence
            val elementLanguages = element.recognizedLanguages
            val elementCornerPoints = element.cornerPoints
            val elementFrame = element.boundingBox
        }
    }
}

Tips to improve real-time performance

If you want to use the on-device model to recognize text in a real-time application, follow these guidelines to achieve the best frame rates:

  • Throttle calls to the text recognizer. If a new video frame becomes available while the text recognizer is running, drop the frame (see the throttling sketch after this list).
  • If you are using the output of the text recognizer to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. By doing so, you render to the display surface only once for each input frame. See the CameraSourcePreview and GraphicOverlay classes in the quickstart sample app for an example.
  • If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format.

    If you use the older Camera API, capture images in ImageFormat.NV21 format.

  • Consider capturing images at a lower resolution. However, also keep in mind this API's image dimension requirements.
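
The throttling sketch below is not part of the original guide; it illustrates the first tip above. It assumes a detector field holding a FirebaseVisionTextRecognizer and a hypothetical onFrame callback that receives NV21 frames plus prebuilt metadata; java.util.concurrent.atomic.AtomicBoolean provides the drop flag.

Kotlin

    // Process at most one frame at a time; frames arriving mid-recognition are dropped.
    private val isProcessing = AtomicBoolean(false)

    fun onFrame(frameData: ByteArray, metadata: FirebaseVisionImageMetadata) {
        if (!isProcessing.compareAndSet(false, true)) return  // recognizer busy: drop frame
        val image = FirebaseVisionImage.fromByteArray(frameData, metadata)
        detector.processImage(image)
                .addOnCompleteListener { isProcessing.set(false) }  // accept frames again
                .addOnSuccessListener { visionText ->
                    // Render the camera frame and the text overlay in a single pass here.
                }
    }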

Recognize text in images of documents

To recognize the text of a document, configure and run the cloud-based document text recognizer as described below.

The document text recognition API, described below, provides an interface that is intended to be more convenient for working with images of documents. However, if you prefer the interface provided by the FirebaseVisionTextRecognizer API, you can use it to scan documents instead by configuring the cloud text recognizer to use the dense text model, as sketched below.
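
If you take that route, the following sketch (not from the guide) shows the idea; it assumes the cloud recognizer options expose a setModelType builder method with a DENSE_MODEL constant for dense, document-style text.

Kotlin

    val denseOptions = FirebaseVisionCloudTextRecognizerOptions.Builder()
            .setModelType(FirebaseVisionCloudTextRecognizerOptions.DENSE_MODEL)  // dense text model
            .build()
    // The result is an ordinary FirebaseVisionTextRecognizer, so the processImage
    // flow shown earlier applies unchanged.
    val denseTextRecognizer = FirebaseVision.getInstance()
            .getCloudTextRecognizer(denseOptions)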

To use the document text recognition API:

1. Run the text recognizer

To recognize text in an image, create a FirebaseVisionImage object from a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionDocumentTextRecognizer's processImage method.

  1. Create a FirebaseVisionImage object from your image.

    • To create a FirebaseVisionImage object from a Bitmap object:

      Java
      Android

      FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

      Kotlin
      Android

      val image = FirebaseVisionImage.fromBitmap(bitmap)
      The image represented by the Bitmap object must be upright, with no additional rotation required.
    • To create a FirebaseVisionImage object from a media.Image object, such as when capturing an image from a device's camera, first determine the angle the image must be rotated to compensate for both the device's rotation and the orientation of the camera sensor in the device:

      Java
      Android

      private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
      static {
          ORIENTATIONS.append(Surface.ROTATION_0, 90);
          ORIENTATIONS.append(Surface.ROTATION_90, 0);
          ORIENTATIONS.append(Surface.ROTATION_180, 270);
          ORIENTATIONS.append(Surface.ROTATION_270, 180);
      }
      
      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      private int getRotationCompensation(String cameraId, Activity activity, Context context)
              throws CameraAccessException {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
          int rotationCompensation = ORIENTATIONS.get(deviceRotation);
      
          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          CameraManager cameraManager = (CameraManager) context.getSystemService(CAMERA_SERVICE);
          int sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION);
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360;
      
          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          int result;
          switch (rotationCompensation) {
              case 0:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  break;
              case 90:
                  result = FirebaseVisionImageMetadata.ROTATION_90;
                  break;
              case 180:
                  result = FirebaseVisionImageMetadata.ROTATION_180;
                  break;
              case 270:
                  result = FirebaseVisionImageMetadata.ROTATION_270;
                  break;
              default:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  Log.e(TAG, "Bad rotation value: " + rotationCompensation);
          }
          return result;
      }

      Kotlin
      Android

      private val ORIENTATIONS = SparseIntArray()
      
      init {
          ORIENTATIONS.append(Surface.ROTATION_0, 90)
          ORIENTATIONS.append(Surface.ROTATION_90, 0)
          ORIENTATIONS.append(Surface.ROTATION_180, 270)
          ORIENTATIONS.append(Surface.ROTATION_270, 180)
      }
      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      @Throws(CameraAccessException::class)
      private fun getRotationCompensation(cameraId: String, activity: Activity, context: Context): Int {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          val deviceRotation = activity.windowManager.defaultDisplay.rotation
          var rotationCompensation = ORIENTATIONS.get(deviceRotation)
      
          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          val cameraManager = context.getSystemService(CAMERA_SERVICE) as CameraManager
          val sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION)!!
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360
      
          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          val result: Int
          when (rotationCompensation) {
              0 -> result = FirebaseVisionImageMetadata.ROTATION_0
              90 -> result = FirebaseVisionImageMetadata.ROTATION_90
              180 -> result = FirebaseVisionImageMetadata.ROTATION_180
              270 -> result = FirebaseVisionImageMetadata.ROTATION_270
              else -> {
                  result = FirebaseVisionImageMetadata.ROTATION_0
                  Log.e(TAG, "Bad rotation value: $rotationCompensation")
              }
          }
          return result
      }

      Then, pass the media.Image object and the rotation value to FirebaseVisionImage.fromMediaImage():

      Java
      Android

      FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);

      Kotlin
      Android

      val image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
    • To create a FirebaseVisionImage object from a ByteBuffer or a byte array, first calculate the image rotation as described above for media.Image input.

      Then, create a FirebaseVisionImageMetadata object that contains the image's height, width, color encoding format, and rotation:

      Java
      Android

      FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
              .setWidth(480)   // 480x360 is typically sufficient for
              .setHeight(360)  // image recognition
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build();

      Kotlin
      Android

      val metadata = FirebaseVisionImageMetadata.Builder()
              .setWidth(480)   // 480x360 is typically sufficient for
              .setHeight(360)  // image recognition
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build()

      Use the buffer or array, together with the metadata object, to create a FirebaseVisionImage object:

      Java
      Android

      FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
      // Or: FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(byteArray, metadata);

      Kotlin
      Android

      val image = FirebaseVisionImage.fromByteBuffer(buffer, metadata)
      // Or: val image = FirebaseVisionImage.fromByteArray(byteArray, metadata)
    • To create a FirebaseVisionImage object from a file, pass the app context and file URI to FirebaseVisionImage.fromFilePath():

      Java
      Android

      FirebaseVisionImage image;
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri);
      } catch (IOException e) {
          e.printStackTrace();
      }

      Kotlin
      Android

      val image: FirebaseVisionImage
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri)
      } catch (e: IOException) {
          e.printStackTrace()
      }

  2. Get an instance of FirebaseVisionDocumentTextRecognizer:

    Java
    Android

    FirebaseVisionDocumentTextRecognizer detector = FirebaseVision.getInstance()
            .getCloudDocumentTextRecognizer();
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    FirebaseVisionCloudDocumentRecognizerOptions options =
            new FirebaseVisionCloudDocumentRecognizerOptions.Builder()
                    .setLanguageHints(Arrays.asList("en", "hi"))
                    .build();
    // FirebaseVisionDocumentTextRecognizer detector = FirebaseVision.getInstance()
    //         .getCloudDocumentTextRecognizer(options);

    Kotlin
    Android

    val detector = FirebaseVision.getInstance()
            .cloudDocumentTextRecognizer
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    val options = FirebaseVisionCloudDocumentRecognizerOptions.Builder()
            .setLanguageHints(Arrays.asList("en", "hi"))
            .build()
    // val detector = FirebaseVision.getInstance()
    //         .getCloudDocumentTextRecognizer(options)

  3. Finally, pass the image to the processImage method:

    Java
    Android

    detector.processImage(myImage)
            .addOnSuccessListener(new OnSuccessListener<FirebaseVisionDocumentText>() {
                @Override
                public void onSuccess(FirebaseVisionDocumentText result) {
                    // Task completed successfully
                    // ...
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(@NonNull Exception e) {
                    // Task failed with an exception
                    // ...
                }
            });

    Kotlin
    Android

    detector.processImage(myImage)
            .addOnSuccessListener {
                // Task completed successfully
                // ...
            }
            .addOnFailureListener {
                // Task failed with an exception
                // ...
            }

2. Extract text from blocks of recognized text

If the text recognition operation succeeds, it returns a FirebaseVisionDocumentText object. A FirebaseVisionDocumentText object contains the full text recognized in the image and a hierarchy of objects that reflect the structure of the recognized document (blocks, paragraphs, words, and symbols).

For each Block, Paragraph, Word, and Symbol object, you can get the text recognized in the region and the bounding coordinates of the region.

For example:

Java
Android

String resultText = result.getText();
for (FirebaseVisionDocumentText.Block block: result.getBlocks()) {
    String blockText = block.getText();
    Float blockConfidence = block.getConfidence();
    List<RecognizedLanguage> blockRecognizedLanguages = block.getRecognizedLanguages();
    Rect blockFrame = block.getBoundingBox();
    for (FirebaseVisionDocumentText.Paragraph paragraph: block.getParagraphs()) {
        String paragraphText = paragraph.getText();
        Float paragraphConfidence = paragraph.getConfidence();
        List<RecognizedLanguage> paragraphRecognizedLanguages = paragraph.getRecognizedLanguages();
        Rect paragraphFrame = paragraph.getBoundingBox();
        for (FirebaseVisionDocumentText.Word word: paragraph.getWords()) {
            String wordText = word.getText();
            Float wordConfidence = word.getConfidence();
            List<RecognizedLanguage> wordRecognizedLanguages = word.getRecognizedLanguages();
            Rect wordFrame = word.getBoundingBox();
            for (FirebaseVisionDocumentText.Symbol symbol: word.getSymbols()) {
                String symbolText = symbol.getText();
                Float symbolConfidence = symbol.getConfidence();
                List<RecognizedLanguage> symbolRecognizedLanguages = symbol.getRecognizedLanguages();
                Rect symbolFrame = symbol.getBoundingBox();
            }
        }
    }
}

Kotlin
Android

val resultText = result.text
for (block in result.blocks) {
    val blockText = block.text
    val blockConfidence = block.confidence
    val blockRecognizedLanguages = block.recognizedLanguages
    val blockFrame = block.boundingBox
    for (paragraph in block.paragraphs) {
        val paragraphText = paragraph.text
        val paragraphConfidence = paragraph.confidence
        val paragraphRecognizedLanguages = paragraph.recognizedLanguages
        val paragraphFrame = paragraph.boundingBox
        for (word in paragraph.words) {
            val wordText = word.text
            val wordConfidence = word.confidence
            val wordRecognizedLanguages = word.recognizedLanguages
            val wordFrame = word.boundingBox
            for (symbol in word.symbols) {
                val symbolText = symbol.text
                val symbolConfidence = symbol.confidence
                val symbolRecognizedLanguages = symbol.recognizedLanguages
                val symbolFrame = symbol.boundingBox
            }
        }
    }
}

Next steps

Before you deploy an app that uses a Cloud API to production, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.
