ML Kit for Firebase
Use machine learning in your apps to solve real-world problems.
ML Kit is a mobile SDK that brings Google's machine learning expertise to Android and iOS apps in a powerful yet easy-to-use package. Whether you're new to or experienced in machine learning, you can implement the functionality you need in just a few lines of code. There's no need for deep knowledge of neural networks or model optimization to get started. On the other hand, if you're an experienced ML developer, ML Kit provides convenient APIs that help you use your custom TensorFlow Lite models in your mobile apps.
Note: This is a beta release of ML Kit for Firebase. This API might be changed in backward-incompatible ways and is not subject to any SLA or deprecation policy.
Key capabilities
Production-ready for common use cases
ML Kit comes with a set of ready-to-use APIs for common mobile use cases: recognizing text, detecting faces, identifying landmarks, scanning barcodes, labeling images, and identifying the language of text. Simply pass data to the ML Kit library and it gives you the information you need.
On-device or in the cloud
ML Kit's selection of APIs run on-device or in the cloud. The on-device APIs can process your data quickly and work even when there's no network connection. The cloud-based APIs, on the other hand, leverage the power of Google Cloud's machine learning technology to give you an even higher level of accuracy.
Deploy custom models
If ML Kit's APIs don't cover your use cases, you can always bring your own existing TensorFlow Lite models. Just upload your model to Firebase, and we'll take care of hosting and serving it to your app. ML Kit acts as an API layer to your custom model, making it simpler to run and use.
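As a rough sketch of how this looks in an Android app using the legacy ML Kit for Firebase custom-model API (the model name "my_model" is a placeholder for whatever name you gave the model when uploading it to Firebase):

```kotlin
import com.google.firebase.ml.common.modeldownload.FirebaseModelDownloadConditions
import com.google.firebase.ml.common.modeldownload.FirebaseModelManager
import com.google.firebase.ml.custom.FirebaseCustomRemoteModel
import com.google.firebase.ml.custom.FirebaseModelInterpreter
import com.google.firebase.ml.custom.FirebaseModelInterpreterOptions

// Sketch: fetch a TensorFlow Lite model hosted on Firebase and build an
// interpreter for it. "my_model" is a placeholder for the name you gave
// the model in the Firebase console.
fun loadHostedModel() {
    val remoteModel = FirebaseCustomRemoteModel.Builder("my_model").build()
    val conditions = FirebaseModelDownloadConditions.Builder()
        .requireWifi() // only download the model over Wi-Fi
        .build()
    FirebaseModelManager.getInstance().download(remoteModel, conditions)
        .addOnSuccessListener {
            // Model is available locally; build an interpreter for inference.
            val options = FirebaseModelInterpreterOptions.Builder(remoteModel).build()
            val interpreter = FirebaseModelInterpreter.getInstance(options)
            // interpreter?.run(...) with FirebaseModelInputs/Outputs goes here.
        }
}
```

Because Firebase hosts the model, you can also push updated model versions to users without shipping a new app release.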
How does it work?
ML Kit makes it easy to apply ML techniques in your apps by bringing Google's ML technologies, such as the Google Cloud Vision API, TensorFlow Lite, and the Android Neural Networks API, together in a single SDK. Whether you need the power of cloud-based processing, the real-time capabilities of mobile-optimized on-device models, or the flexibility of custom TensorFlow Lite models, ML Kit makes it possible with just a few lines of code.
What features are available on-device or in the cloud?
ML Kit's features include text recognition, face detection, barcode scanning, image labeling, object detection and tracking, landmark recognition, language identification, translation, Smart Reply, AutoML model inference, and custom model inference. Some features run on-device, some in the cloud, and some in both places; see each feature's documentation for details.
Note: Use of ML Kit to access Cloud ML functionality is subject to the Google Cloud Platform License Agreement and Service Specific Terms, and billed accordingly. For billing information, see the Firebase Pricing page.
Implementation path
Integrate the SDK
Quickly include the SDK using Gradle or CocoaPods.
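On Android, the Gradle route is a one-line dependency in the app-level build file. A minimal sketch, using the legacy firebase-ml-vision artifact; the version number is illustrative, so check the Firebase release notes for the current one:

```groovy
// app/build.gradle — add the ML Kit for Firebase vision dependency.
// The version shown is an assumption; use the latest published release.
dependencies {
    implementation 'com.google.firebase:firebase-ml-vision:24.0.3'
}
```

On iOS, the equivalent is adding the corresponding Firebase ML pod to your Podfile and running `pod install`.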
Prepare input data
For example, if you're using a vision feature, capture an image from the camera and generate the necessary metadata, such as image rotation, or prompt the user to select a photo from their gallery.
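A minimal Kotlin sketch of this step, assuming the legacy firebase-ml-vision API and a camera frame delivered as an `android.media.Image` (the orientation-to-rotation mapping is simplified here):

```kotlin
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.common.FirebaseVisionImageMetadata

// Sketch: wrap a camera frame for ML Kit, attaching its rotation so
// detectors see the image upright.
fun frameToVisionImage(
    mediaImage: android.media.Image,
    rotationDegrees: Int
): FirebaseVisionImage {
    val rotation = when (rotationDegrees) {
        90 -> FirebaseVisionImageMetadata.ROTATION_90
        180 -> FirebaseVisionImageMetadata.ROTATION_180
        270 -> FirebaseVisionImageMetadata.ROTATION_270
        else -> FirebaseVisionImageMetadata.ROTATION_0
    }
    return FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
}
```

For a gallery photo you would instead build the image with `FirebaseVisionImage.fromBitmap(...)` or from the photo's URI.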
Apply the ML model to your data
By applying the ML model to your data, you generate insights such as the emotional state of detected faces or the objects and concepts recognized in the image, depending on the feature you used. Use these insights to power features in your app like photo embellishment, automatic metadata generation, or whatever else you can imagine.
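Sketched in Kotlin with the on-device text recognizer from the legacy firebase-ml-vision API (`showCaption` is a hypothetical app function standing in for whatever your feature does with the result):

```kotlin
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Sketch: run the on-device text recognizer and feed the insight
// (the recognized text) into an app feature.
fun recognizeText(image: FirebaseVisionImage) {
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer
    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // result.text is the full recognized string; result.textBlocks
            // gives per-block text with bounding boxes.
            showCaption(result.text) // hypothetical app function
        }
        .addOnFailureListener { e ->
            // Handle processing errors (e.g., model not yet available).
        }
}
```

The other detectors follow the same asynchronous pattern: obtain a detector from `FirebaseVision.getInstance()`, pass it a `FirebaseVisionImage`, and handle the result in a success listener.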
Next steps
- Explore the ready-to-use APIs: text recognition, face detection, barcode scanning, image labeling, object detection and tracking, landmark recognition, Smart Reply, translation, and language identification.
- Train your own image labeling model with AutoML Vision Edge.
- Learn about using mobile-optimized custom models in your app.

Last updated 2025-08-04 (UTC).