FirebaseVisionTextRecognizer
public class FirebaseVisionTextRecognizer extends Object implements Closeable
Text recognizer for performing optical character recognition (OCR) on an input image.
A text recognizer is created via getOnDeviceTextRecognizer() or getCloudTextRecognizer(). See the code example below.
To use the on-device text recognizer:
FirebaseVisionTextRecognizer textRecognizer =
FirebaseVision.getInstance().getOnDeviceTextRecognizer();
Or use the cloud text recognizer:
FirebaseVisionTextRecognizer textRecognizer =
FirebaseVision.getInstance().getCloudTextRecognizer();
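If the cloud recognizer needs to be configured, for example with language hints, an options object can be passed. The sketch below assumes the FirebaseVisionCloudTextRecognizerOptions builder and the getCloudTextRecognizer(options) overload provided by the firebase-ml-vision library:
// Optional configuration sketch: assumes FirebaseVisionCloudTextRecognizerOptions and
// the getCloudTextRecognizer(options) overload from the firebase-ml-vision library.
// Requires java.util.Arrays; adjust the language hints to your use case.
FirebaseVisionCloudTextRecognizerOptions options =
    new FirebaseVisionCloudTextRecognizerOptions.Builder()
        .setLanguageHints(Arrays.asList("en", "hi"))
        .build();
FirebaseVisionTextRecognizer cloudTextRecognizer =
    FirebaseVision.getInstance().getCloudTextRecognizer(options);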
To perform OCR on an image, you first need to create an instance of FirebaseVisionImage from a ByteBuffer, Bitmap, etc. See the FirebaseVisionImage documentation for more details. For example, the code below creates a FirebaseVisionImage from a Bitmap.
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
Then the code below can detect text in the supplied FirebaseVisionImage.
Task<FirebaseVisionText> task = textRecognizer.processImage(image);
task.addOnSuccessListener(...).addOnFailureListener(...);
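As an illustrative sketch of what the elided listeners might contain, the success listener receives a FirebaseVisionText whose getText() returns the full recognized string (the "OCR" log tag is arbitrary):
// Illustrative sketch of the elided listeners above. FirebaseVisionText.getText()
// returns the full recognized string; the failure listener receives the Exception.
task.addOnSuccessListener(firebaseVisionText -> {
    String recognizedText = firebaseVisionText.getText();
    Log.d("OCR", "Recognized text: " + recognizedText);
}).addOnFailureListener(e -> {
    Log.e("OCR", "Text recognition failed", e);
});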
Constant Summary
int | CLOUD | Indicates that the recognizer is using a cloud model.
int | ON_DEVICE | Indicates that the recognizer is using an on-device model.
Inherited Method Summary
From class java.lang.Object
Object | clone()
boolean | equals(Object arg0)
void | finalize()
final Class<?> | getClass()
int | hashCode()
final void | notify()
final void | notifyAll()
String | toString()
final void | wait(long arg0, int arg1)
final void | wait(long arg0)
final void | wait()
From interface java.io.Closeable
From interface java.lang.AutoCloseable
Constants
public static final int
CLOUD
Indicates that the recognizer is using a cloud model.
Constant Value: 2
public static final int
ON_DEVICE
Indicates that the recognizer is using an on-device model.
Constant Value: 1
Public Methods
public void close ()
Closes the text detector and releases its model resources.
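Because the recognizer implements Closeable, a common pattern is to close it once recognition is no longer needed, for example when the hosting Activity is torn down. This is only a sketch and assumes close() declares IOException, as the Closeable contract suggests:
// Sketch: release the recognizer's model resources when they are no longer needed.
// Assumes close() declares IOException, per the java.io.Closeable contract.
@Override
protected void onDestroy() {
    super.onDestroy();
    try {
        textRecognizer.close();
    } catch (IOException e) {
        Log.w("OCR", "Failed to close text recognizer", e);
    }
}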
public int getRecognizerType ()
Gets the recognizer type, either ON_DEVICE or CLOUD.
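The returned value can be compared against the CLOUD and ON_DEVICE constants defined on this class, for example:
// Check which kind of model backs this recognizer.
int type = textRecognizer.getRecognizerType();
if (type == FirebaseVisionTextRecognizer.CLOUD) {
    Log.d("OCR", "Using the cloud model");
} else if (type == FirebaseVisionTextRecognizer.ON_DEVICE) {
    Log.d("OCR", "Using the on-device model");
}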
public Task<FirebaseVisionText> processImage (FirebaseVisionImage image)
Detects FirebaseVisionText from a FirebaseVisionImage. The OCR is performed asynchronously.
For best efficiency, create the FirebaseVisionImage object using one of the following:
- fromMediaImage(Image, int) with a YUV_420_888 formatted image from android.hardware.camera2.
- fromByteArray(byte[], FirebaseVisionImageMetadata) with an NV21 formatted image from Camera (deprecated).
- fromByteBuffer(ByteBuffer, FirebaseVisionImageMetadata) if you need to pre-process the image, e.g. allocate a direct ByteBuffer and write processed pixels into it.
All other FirebaseVisionImage factory methods will work as well, but possibly slightly slower.
Returns
- A Task for FirebaseVisionText.
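For example, here is a sketch of feeding a YUV_420_888 frame from the camera2 API to the recognizer; mediaImage is assumed to be an android.media.Image, and the fixed ROTATION_0 value stands in for whatever rotation the device orientation actually requires:
// Sketch: recognize text in a YUV_420_888 frame from android.hardware.camera2.
// `mediaImage` and the ROTATION_0 value are assumptions for illustration only.
FirebaseVisionImage frame =
    FirebaseVisionImage.fromMediaImage(mediaImage, FirebaseVisionImageMetadata.ROTATION_0);
textRecognizer.processImage(frame)
    .addOnSuccessListener(result -> {
        for (FirebaseVisionText.TextBlock block : result.getTextBlocks()) {
            Log.d("OCR", "Block: " + block.getText());
        }
    })
    .addOnFailureListener(e -> Log.e("OCR", "OCR failed", e));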