Face Detection
[iOS](/docs/ml-kit/ios/detect-faces)
[Android](/docs/ml-kit/android/detect-faces)
With ML Kit's face detection API, you can detect faces in an image, identify key facial features, and get the contours of detected faces.
With face detection, you can get the information you need to perform tasks like embellishing selfies and portraits, or generating avatars from a user's photo. Because ML Kit can perform face detection in real time, you can use it in applications like video chat or games that respond to the player's expressions.
If you're a Flutter developer, you might be interested in [FlutterFire](https://github.com/FirebaseExtended/flutterfire/tree/master/packages/firebase_ml_vision), which includes a plugin for Firebase's ML Vision APIs.

Note: This is a beta release of ML Kit for Firebase. This API might be changed in backward-incompatible ways and is not subject to any SLA or deprecation policy.
Key capabilities

| Capability | Description |
|---|---|
| Recognize and locate facial features | Get the coordinates of the eyes, ears, cheeks, nose, and mouth of every face detected. |
| Get the contours of facial features | Get the contours of detected faces and of their eyes, eyebrows, lips, and nose. |
| Recognize facial expressions | Determine whether a person is smiling or has their eyes closed. |
| Track faces across video frames | Get an identifier for each individual person's face that is detected. The identifier is consistent across invocations, so you can, for example, perform image manipulation on a particular person in a video stream. |
| Process video frames in real time | Face detection is performed on the device, and is fast enough to be used in real-time applications, such as video manipulation. |
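On Android, these capabilities correspond to options on the face detector. The sketch below is a minimal illustration only: it assumes the beta Firebase ML Vision classes in `com.google.firebase.ml.vision`, and the Android guide linked above remains the authoritative reference for setup and usage.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions

// Sketch: configure the detector for the capabilities listed above,
// then run it on a bitmap.
fun detectFaces(bitmap: Bitmap) {
    val options = FirebaseVisionFaceDetectorOptions.Builder()
        .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
        .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)             // eyes, ears, cheeks, nose, mouth
        .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS) // smiling / eyes-open probabilities
        .enableTracking()                                                              // stable IDs across video frames
        // .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)             // feature outlines (see Example 2)
        .build()

    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val detector = FirebaseVision.getInstance().getVisionFaceDetector(options)

    detector.detectInImage(image)
        .addOnSuccessListener { faces ->
            // faces is a list of detected faces; each element carries the
            // bounding box, landmarks, probabilities, and tracking ID
            // shown in the example results below.
        }
        .addOnFailureListener { e ->
            // Handle detection errors (for example, an unreadable image).
        }
}
```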
Example results

Example 1

For each face detected:

| Face 1 of 3 | |
|---|---|
| **Bounding polygon** | (884.880004882812, 149.546676635742), (1030.77197265625, 149.546676635742), (1030.77197265625, 329.660278320312), (884.880004882812, 329.660278320312) |
| **Angles of rotation** | Y: -14.054030418395996, Z: -55.007488250732422 |
| **Tracking ID** | 2 |
| **Facial landmarks** | Left eye: (945.869323730469, 211.867126464844); right eye: (971.579467773438, 247.257247924805); bottom of mouth: (907.756591796875, 259.714477539062); etc. |
| **Feature probabilities** | Smiling: 0.88979166746139526; left eye open: 0.98635888937860727; right eye open: 0.99258323386311531 |
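Each row of the table above corresponds to an accessor on a detected face object. A brief sketch of reading those values, under the same beta Firebase ML Vision assumptions as the previous example:

```kotlin
import com.google.firebase.ml.vision.face.FirebaseVisionFace
import com.google.firebase.ml.vision.face.FirebaseVisionFaceLandmark

// Sketch: read the bounding box, rotation angles, tracking ID, landmark
// positions, and classification probabilities of one detected face.
fun describeFace(face: FirebaseVisionFace) {
    val bounds = face.boundingBox    // android.graphics.Rect around the face
    val rotY = face.headEulerAngleY  // head rotation to the right, in degrees
    val rotZ = face.headEulerAngleZ  // head tilt sideways, in degrees

    // Only meaningful if enableTracking() was set when building the detector.
    if (face.trackingId != FirebaseVisionFace.INVALID_ID) {
        val id = face.trackingId
    }

    // Landmarks are null when they were not detected or landmark mode is off.
    val leftEye = face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EYE)?.position
    val mouthBottom = face.getLandmark(FirebaseVisionFaceLandmark.MOUTH_BOTTOM)?.position

    // Probabilities are UNCOMPUTED_PROBABILITY when classification is off.
    if (face.smilingProbability != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
        val smiling = face.smilingProbability
    }
    val leftEyeOpen = face.leftEyeOpenProbability
    val rightEyeOpen = face.rightEyeOpenProbability
}
```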
[null,null,["Terakhir diperbarui pada 2025-08-08 UTC."],[],[],null,["Face Detection \nplat_ios plat_android \n\nWith ML Kit's face detection API, you can detect faces in an image, identify\nkey facial features, and get the contours of detected faces.\n\nWith face detection, you can get the information you need to perform tasks like\nembellishing selfies and portraits, or generating avatars from a user's photo.\nBecause ML Kit can perform face detection in real time, you can use it in\napplications like video chat or games that respond to the player's expressions.\n\n[iOS](/docs/ml-kit/ios/detect-faces)\n[Android](/docs/ml-kit/android/detect-faces)\n\nIf you're a Flutter developer, you might be interested in\n[FlutterFire](https://github.com/FirebaseExtended/flutterfire/tree/master/packages/firebase_ml_vision),\nwhich includes a plugin for Firebase's ML Vision APIs.\n| This is a beta release of ML Kit for Firebase. This API might be changed in backward-incompatible ways and is not subject to any SLA or deprecation policy.\n\nKey capabilities\n\n|--------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| Recognize and locate facial features | Get the coordinates of the eyes, ears, cheeks, nose, and mouth of every face detected. |\n| Get the contours of facial features | Get the contours of detected faces and their eyes, eyebrows, lips, and nose. |\n| Recognize facial expressions | Determine whether a person is smiling or has their eyes closed. |\n| Track faces across video frames | Get an identifier for each individual person's face that is detected. This identifier is consistent across invocations, so you can, for example, perform image manipulation on a particular person in a video stream. |\n| Process video frames in real time | Face detection is performed on the device, and is fast enough to be used in real-time applications, such as video manipulation. |\n\nExample results\n\nExample 1\n\nFor each face detected:\n\n| Face 1 of 3 ||\n|---------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| **Bounding polygon** | (884.880004882812, 149.546676635742), (1030.77197265625, 149.546676635742), (1030.77197265625, 329.660278320312), (884.880004882812, 329.660278320312) |\n| **Angles of rotation** | Y: -14.054030418395996, Z: -55.007488250732422 |\n| **Tracking ID** | 2 |\n| **Facial landmarks** | |---------------------|--------------------------------------| | **Left eye** | (945.869323730469, 211.867126464844) | | **Right eye** | (971.579467773438, 247.257247924805) | | **Bottom of mouth** | (907.756591796875, 259.714477539062) | ... etc. |\n| **Feature probabilities** | |--------------------|---------------------| | **Smiling** | 0.88979166746139526 | | **Left eye open** | 0.98635888937860727 | | **Right eye open** | 0.99258323386311531 | |\n\nExample 2 (face contour detection)\n\nWhen you have face contour detection enabled, you also get a list of points\nfor each facial feature that was detected. These points represent the shape of\nthe feature. 
[Face contour diagram](/static/docs/ml-kit/images/examples/face_contours.svg)

| Facial feature contours | |
|---|---|
| **Nose bridge** | (505.149811, 221.201797), (506.987122, 313.285919) |
| **Left eye** | (404.642029, 232.854431), (408.527283, 231.366623), (413.565796, 229.427856), (421.378296, 226.967682), (432.598755, 225.434143), (442.953064, 226.089508), (453.899811, 228.594818), (461.516418, 232.650467), (465.069580, 235.600845), (462.170410, 236.316147), (456.233643, 236.891602), (446.363922, 237.966888), (435.698914, 238.149323), (424.320740, 237.235168), (416.037720, 236.012115), (409.983459, 234.870300) |
| **Top of upper lip** | (421.662048, 354.520813), (428.103882, 349.694061), (440.847595, 348.048737), (456.549988, 346.295532), (480.526489, 346.089294), (503.375702, 349.470459), (525.624634, 347.352783), (547.371155, 349.091980), (560.082031, 351.693268), (570.226685, 354.210175), (575.305420, 359.257751) |
| (etc.) | |
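Contour point lists can be read from the same detected face objects once contour detection is enabled (`ALL_CONTOURS`). A brief sketch, again assuming the beta Firebase ML Vision classes rather than quoting the platform guides:

```kotlin
import com.google.firebase.ml.vision.face.FirebaseVisionFace
import com.google.firebase.ml.vision.face.FirebaseVisionFaceContour

// Sketch: read contour point lists for a detected face. Requires the
// detector to have been built with ALL_CONTOURS enabled.
fun describeContours(face: FirebaseVisionFace) {
    // Each contour is an ordered list of points, like the coordinate
    // lists in the table above.
    val faceOutline = face.getContour(FirebaseVisionFaceContour.FACE).points
    val noseBridge = face.getContour(FirebaseVisionFaceContour.NOSE_BRIDGE).points
    val leftEye = face.getContour(FirebaseVisionFaceContour.LEFT_EYE).points
    val upperLipTop = face.getContour(FirebaseVisionFaceContour.UPPER_LIP_TOP).points

    for (point in leftEye) {
        // point.x and point.y are coordinates in the input image.
    }
}
```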