ML Kit Tutorial: Face detection with ML Kit API

The ML Kit face detection API can be used to detect faces in an image and identify key facial features.

Let's first briefly review what we can do with this API:

  1. Recognize and locate facial features like eyes, ears, cheeks, nose, and mouth of every face detected.

  2. Recognize facial expressions, for example to determine whether a person is smiling or has their eyes closed.

  3. Track faces across video frames and get an identifier for each individual face that is detected. This identifier is consistent across invocations, so you can, for example, perform image manipulation on a particular person in a video stream.

  4. Process video frames in real time.

This API can be used to build features like embellishing selfies and portraits, or creating avatars from photos.

In this lesson, we are going to learn how to use this ML Kit API to detect faces and identify facial features. This tutorial does not require any prior knowledge of or experience with machine learning, but you should be familiar with Android Studio and its directory structure. If not, you may refer to the Android Studio Project Overview.

Before we start, have a look at what we are going to build in the end:

1. Follow steps 1 to 3 of ML Kit Tutorial: How to recognize and extract text in images. In step 3, add all the necessary ML Kit dependencies:

// ML Kit dependencies
implementation 'com.google.firebase:firebase-core:16.0.1'
implementation 'com.google.firebase:firebase-ml-common:16.1.2'
implementation 'com.google.firebase:firebase-ml-vision:17.0.0'
implementation 'com.google.firebase:firebase-ml-vision-image-label-model:15.0.0'
implementation 'com.google.firebase:firebase-ml-model-interpreter:16.2.0'
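If you have not already done so while following the linked tutorial, make sure the Google Services Gradle plugin is applied; the relevant lines look roughly like this (the plugin version shown is an assumption and may differ in your project):

// In the project-level build.gradle, under buildscript dependencies
classpath 'com.google.gms:google-services:4.0.1'

// At the bottom of the app-level build.gradle
apply plugin: 'com.google.gms.google-services'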

2. Add the following permissions to your AndroidManifest.xml file:

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />

3. In the values folder, define the resource files styles.xml, strings.xml, dimens.xml and colors.xml.
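For example, colors.xml typically follows the shape below; the entries here are placeholders, and the actual names and values must match what this project's styles and layouts reference:

<resources>
    <color name="colorPrimary">#3F51B5</color>
    <color name="colorPrimaryDark">#303F9F</color>
    <color name="colorAccent">#FF4081</color>
</resources>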

4. In the menu folder, place the camera_button_menu.xml file.
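Its contents are roughly of the following shape (a hypothetical sketch; the item id and title are assumptions, and the icon is created in the next step):

<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item
        android:id="@+id/switch_camera"
        android:icon="@drawable/ic_switch_camera_white_48dp"
        android:title="Switch camera" />
</menu>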

5. Select the drawable folder, right-click and go to New -> Vector Asset, click on Clip Art and search for the switch camera resource. Select the file, change the color to white, name it ic_switch_camera_white_48dp.xml and click Finish. Then generate a green version of the file as ic_switch_camera_green_48dp.xml.

6. In the main Java package, put GraphicOverlay.java for rendering custom graphics on top of the camera preview, CameraSourcePreview.java for previewing the camera image on the screen, and CameraSource.java, which defines methods for managing the camera and allowing UI updates on top of it.
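Note that CAMERA is a dangerous permission on Android 6.0 and above, so CameraSource can only open the camera after a runtime grant. A minimal sketch of the check inside MainActivity (using androidx imports; swap in the support library equivalents if your project has not migrated):

import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

// Inside MainActivity:
private static final int PERMISSION_REQUESTS = 1;

private boolean cameraPermissionGranted() {
    return ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            == PackageManager.PERMISSION_GRANTED;
}

// Call from onCreate() before starting the camera source
private void requestCameraPermissionIfNeeded() {
    if (!cameraPermissionGranted()) {
        ActivityCompat.requestPermissions(
                this, new String[] { Manifest.permission.CAMERA }, PERMISSION_REQUESTS);
    }
}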

7. In the layout folder, define toggle_style.xml and activity_main.xml.
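activity_main.xml nests the overlay inside the preview so that graphics are drawn on top of the camera feed. A hypothetical sketch follows; the package prefix com.example.mlkit is an assumption and must match where you placed the two custom views:

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <com.example.mlkit.CameraSourcePreview
        android:id="@+id/preview"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <com.example.mlkit.GraphicOverlay
            android:id="@+id/overlay"
            android:layout_width="match_parent"
            android:layout_height="match_parent" />
    </com.example.mlkit.CameraSourcePreview>
</FrameLayout>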

8. Now for the action part:

Before you supply an image to the face detector, you may want to change the detector's default settings. This is done with a FirebaseVisionFaceDetectorOptions object. The following settings can be altered:

Detection mode: FAST_MODE (default) | ACCURATE_MODE
Favor speed or accuracy when detecting faces.

Detect landmarks: NO_LANDMARKS (default) | ALL_LANDMARKS
Whether or not to attempt to identify facial "landmarks": eyes, ears, nose, cheeks, mouth.

Classify faces: NO_CLASSIFICATIONS (default) | ALL_CLASSIFICATIONS
Whether or not to classify faces into categories such as "smiling" and "eyes open".

Minimum face size: float (default: 0.1f)
The minimum size, relative to the image, of faces to detect.

Enable face tracking: false (default) | true
Whether or not to assign faces an ID, which can be used to track faces across images.

For example:

  FirebaseVisionFaceDetectorOptions options =
          new FirebaseVisionFaceDetectorOptions.Builder()
                  .setModeType(FirebaseVisionFaceDetectorOptions.ACCURATE_MODE)
                  .setLandmarkType(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
                  .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                  .setMinFaceSize(0.15f)
                  .setTrackingEnabled(true)
                  .build();
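A note on the trade-off: ACCURATE_MODE, ALL_LANDMARKS and ALL_CLASSIFICATIONS each add per-frame latency, so for a live camera preview it is usually best to enable only the options your feature actually needs.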
  

You can get an instance of FirebaseVisionFaceDetector as follows:

  FirebaseVisionFaceDetector detector = FirebaseVision.getInstance()
        .getVisionFaceDetector(options);
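The detector holds resources internally, so release it once you are done with it (for example, in your activity's onDestroy):

  try {
      detector.close();
  } catch (IOException e) {
      // Nothing useful to do here; log the error if needed
  }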
  

Then you need a FirebaseVisionImage to pass to the detector.
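As a sketch, assuming you already have a bitmap or a camera frame at hand, you can create one like this:

  // From a Bitmap (for example, a photo picked from the gallery):
  FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

  // Or from a live camera frame, where mediaImage is an android.media.Image
  // and rotation is one of FirebaseVisionImageMetadata.ROTATION_0/90/180/270:
  FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);

Now pass the image to the detectInImage method as follows: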

  Task<List<FirebaseVisionFace>> result =
          detector.detectInImage(image)
                  .addOnSuccessListener(
                          new OnSuccessListener<List<FirebaseVisionFace>>() {
                              @Override
                              public void onSuccess(List<FirebaseVisionFace> faces) {
                                  // Task completed successfully
                                  // ...
                              }
                          })
                  .addOnFailureListener(
                          new OnFailureListener() {
                              @Override
                              public void onFailure(@NonNull Exception e) {
                                  // Task failed with an exception
                                  // ...
                              }
                          });

These tasks are defined in FaceDetectionProcessor.java. You can include this file directly in the main Java package for a quick setup.

Getting information about detected faces

If the face detection operation succeeds, the detector returns a list of FirebaseVisionFace objects. Each FirebaseVisionFace object represents a face that was detected in the image. For each face, you can get its bounding coordinates in the input image, as well as any other information you configured the face detector to find. For example:

 for (FirebaseVisionFace face : faces) {
    Rect bounds = face.getBoundingBox();
    float rotY = face.getHeadEulerAngleY();  // Head is rotated to the right rotY degrees
    float rotZ = face.getHeadEulerAngleZ();  // Head is tilted sideways rotZ degrees
    // If landmark detection was enabled (mouth, ears, eyes, cheeks, and
    // nose available):
    FirebaseVisionFaceLandmark leftEar = face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EAR);
    if (leftEar != null) {
        FirebaseVisionPoint leftEarPos = leftEar.getPosition();
    }
    // If classification was enabled:
    if (face.getSmilingProbability() != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
        float smileProb = face.getSmilingProbability();
    }
    if (face.getRightEyeOpenProbability() != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
        float rightEyeOpenProb = face.getRightEyeOpenProbability();
    }
    // If face tracking was enabled:
    if (face.getTrackingId() != FirebaseVisionFace.INVALID_ID) {
        int id = face.getTrackingId();
    }
}
 

These tasks are defined in FaceGraphic.java. Include this file in the main Java package for a quick setup.
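To give an idea of what FaceGraphic.java does with this data, here is a minimal, hypothetical sketch that draws a detected face's bounding box and smile probability on a Canvas. This is not the project's actual FaceGraphic.java, which also translates coordinates between the camera image and the overlay view:

import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import com.google.firebase.ml.vision.face.FirebaseVisionFace;

// Hypothetical sketch only; the real FaceGraphic extends GraphicOverlay.Graphic
public class SimpleFaceGraphic {
    private final Paint paint = new Paint();

    public SimpleFaceGraphic() {
        paint.setColor(Color.WHITE);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(4.0f);
        paint.setTextSize(36.0f);
    }

    public void draw(Canvas canvas, FirebaseVisionFace face) {
        // Draw the bounding box around the face
        Rect bounds = face.getBoundingBox();
        canvas.drawRect(bounds, paint);

        // Label the box with the smile probability, if it was computed
        if (face.getSmilingProbability() != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
            canvas.drawText(
                    String.format("smile: %.2f", face.getSmilingProbability()),
                    bounds.left, bounds.top - 8, paint);
        }
    }
}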

9. Now we can use FaceGraphic.java and FaceDetectionProcessor.java in MainActivity.java. The full and final code of MainActivity.java is in the project linked below.

10. Now run the project. The app should work exactly as shown in the video above.

For a quick setup, you may download the project directly from here, or refer to this repo for all the source code.

And that's it! You have just learned how to use the ML Kit face detection API to detect faces and identify key facial features. This is the second tutorial in the ML Kit tutorial series. If you have any issues while running the project or setting it up, just leave a comment below.






Author:

Ratul Doley
Expertise in ReactJS, NodeJS, modern JS+CSS, PHP, and Java. Professional Android and iOS app developer and designer.

Updated April 02, 2020