Workflows

Overview

A Workflow is a pre-defined set of rules and parameters used to automatically collect subject data for determining liveness. A Workflow must be selected using the Select Workflow method before facial data collection and analysis can begin. Workflows are separated into two categories: Passive Liveness and Active Liveness. Passive Liveness workflows use on-device image quality checks to position the subject, then collect liveness-related data passively. Active Liveness workflows ask the subject to complete a simple activity while in view of the camera. Active Liveness workflows are currently provided as a preview only, showing how we are developing our liveness capabilities and how they may work in the future; they are not recommended for production use at this time. All applications must use a Passive Liveness workflow to be compatible with the Knomi Face Liveness service. Active Liveness workflows can be used in addition to Passive Liveness to supplement its results, at the cost of additional back-end processing and an increased packet size.

Passive Liveness Workflows

Passive Liveness workflows perform a series of checks to ensure that the subject is positioned correctly, then capture liveness-related data. By default, users will be asked to hold their phone vertically during the capture in order to help improve the facial image quality. Once the phone is held vertically, on-screen feedback directs the user to position themselves correctly in the frame. The liveness data is captured after the user has remained correctly positioned for a short period of time.

Please see the tables below for an overview of the Charlie, Foxtrot, and Delta workflows.

Table 1 Knomi S Workflow Summary:

Feature                        Charlie2     Charlie4     Charlie6
Supports High Accuracy Model   Yes          Yes          Yes
Device Camera Used             Front        Front        Front
Workflow Focus                 Usability    Security     Higher Security
Lighting Check                 Minimal      Minimal      Minimal
Face Position Check            Yes          Yes          Yes
Mouth Check                    No           No           No
Neutral Expression Check       No           No           No
Eye Check                      Yes          Yes          Yes
Tinted Glasses Check           Yes          Yes          Yes
Heavy Frame Check              No           No           No
Unnatural Skin Check           No           No           No

Table 2 Knomi S Workflow Summary cont.:

Feature                        Foxtrot2     Foxtrot4     Foxtrot6
Supports High Accuracy Model   Yes          Yes          Yes
Device Camera Used             Front        Front        Front
Workflow Focus                 Usability    Security     Higher Security
Lighting Check                 Minimal      Minimal      Minimal
Face Position Check            Yes          Yes          Yes
Mouth Check                    No           No           No
Neutral Expression Check       No           No           No
Eye Check                      Yes          Yes          Yes
Tinted Glasses Check           Yes          Yes          Yes
Heavy Frame Check              No           No           No
Unnatural Skin Check           No           No           No

Table 3 Knomi S Workflow Summary cont.:

Feature                        Delta2       Delta4       Delta6
Supports High Accuracy Model   Yes          Yes          Yes
Device Camera Used             Back         Back         Back
Workflow Focus                 Usability    Security     Higher Security
Lighting Check                 Minimal      Minimal      Minimal
Face Position Check            Yes          Yes          Yes
Mouth Check                    No           No           No
Neutral Expression Check       Yes          Yes          Yes
Eye Check                      Yes          Yes          Yes
Tinted Glasses Check           Yes          Yes          Yes
Heavy Frame Check              No           No           No
Unnatural Skin Check           No           No           No

Supports High Accuracy Model - Whether or not the Workflow supports the High Accuracy Model provided with the SDK.

The High Accuracy Model is an option that can be utilized through the setStaticProperty API. This Model performs higher accuracy facial analysis at the expense of a larger on-device footprint, additional memory, and additional CPU time.

Device Camera Used - The camera in a device used for data collection in a Workflow.

This is either the front or back camera. The front camera refers to the camera on the same side as the device screen, commonly called a “selfie” camera. The back camera is the camera located on the rear of the device and is generally used in Workflows that involve capturing other people.

Workflow Focus - The general goal of the Workflow.

Different Workflows provide different mixes of usability and security. Generally, focusing on usability means a lower level of security so that real subjects are detected as live more often. Conversely, achieving higher security by placing more restrictions on the subject will catch more spoof attacks but reduce usability.

Lighting Check - The Workflow checks lighting conditions as a part of subject quality control.

As of Knomi D Client SDK 2.4.4, Knomi can perform only minimal or no lighting checks. Minimal lighting checks require the subject to be under lighting conditions that do not heavily obscure the subject. When present in a Workflow, lighting checks can cause the INSUFFICIENT LIGHTING and LIGHT TOO BRIGHT feedback codes to be returned via callback.

Face Position Check - The Workflow checks the subject’s position within collected images for centering and distance.

As of Knomi D Client SDK 2.4.4, the subject must be positioned correctly in all Workflows. The area in which a subject’s face should be present is obtainable via the GetROI (region of interest) function in the API. Display of the region of interest is left up to the application integrating the Knomi D Client SDK. When present in a Workflow, face position checks can cause the FACE TOO FAR, FACE TOO CLOSE, FACE ON LEFT, FACE ON RIGHT, FACE TOO HIGH, and FACE TOO LOW feedback codes to be returned via callback.
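The mapping from face position to feedback codes can be pictured with a small sketch. This is an illustration only, not the SDK implementation: the function name, the box representation, and the width thresholds are assumptions, and the code names mirror the feedback codes listed above.

```python
# Illustrative sketch (not SDK internals): comparing a detected face box
# against the region of interest (ROI) to derive position feedback codes.
def face_position_feedback(face, roi, min_w, max_w):
    """Return feedback-code names for a face box relative to an ROI.

    face and roi are (x, y, width, height) tuples in pixel coordinates;
    min_w/max_w bound the acceptable face width (a proxy for distance).
    """
    fx, fy, fw, fh = face
    rx, ry, rw, rh = roi
    codes = []
    # Distance checks: a small face is far away, a large face is close.
    if fw < min_w:
        codes.append("FACE_TOO_FAR")
    elif fw > max_w:
        codes.append("FACE_TOO_CLOSE")
    # Centering checks: the face box must lie inside the ROI.
    if fx < rx:
        codes.append("FACE_ON_LEFT")
    elif fx + fw > rx + rw:
        codes.append("FACE_ON_RIGHT")
    if fy < ry:
        codes.append("FACE_TOO_HIGH")
    elif fy + fh > ry + rh:
        codes.append("FACE_TOO_LOW")
    return codes
```

An empty result corresponds to a correctly positioned subject; the application would clear its on-screen guidance in that case.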

Mouth Check - The Workflow checks that the subject’s mouth is not obscured.

When present in a Workflow, the mouth check can cause the MOUTH OBSCURED feedback code to be returned via callback.

Neutral Expression Check - The Workflow checks to ensure that the subject has a neutral expression.

A neutral expression does not allow the subject to smile widely or have their mouth open. When present in a Workflow, the neutral expression check can cause the SMILE PRESENT feedback code to be returned via callback.

Eye Check - The Workflow checks that the subject’s eyes are both visible and open.

When present in a Workflow, the eye checks can cause the LEFT EYE CLOSED, LEFT EYE OBSTRUCTED, RIGHT EYE CLOSED, and RIGHT EYE OBSTRUCTED feedback codes to be returned via callback.

Tinted Glasses Check - The Workflow checks if the subject is wearing tinted glasses or sunglasses.

If tinted glasses or sunglasses are detected on the subject, the analysis being run will perform poorly or not at all. When present in a Workflow, the tinted glasses check can cause the DARK GLASSES feedback code to be returned via callback.

Heavy Frame Check - The Workflow checks if the subject is wearing glasses with thick frames.

If heavy frames are detected on the subject, the analysis being run will perform poorly or not at all. When present in a Workflow, the heavy frames check can cause the HEAVY FRAMES feedback code to be returned via callback.

Unnatural Skin Check - The Workflow checks the subject’s skin for unnatural colorations.

When present in a Workflow, the unnatural skin check can cause the UNNATURAL LIGHTING COLOR feedback code to be returned via callback.

Changing Passive Workflow Settings

Workflow Setting Overrides

A small subset of Workflow properties can be overridden by an optional parameter in the Select Workflow method. The set of override-able properties is limited to those that will not have an adverse effect on the liveness analysis. It is NOT recommended that any settings be overridden except in extreme cases; Workflows have been thoroughly tested and optimized for data collection and usability.

The following settings can be overridden:

  • Evaluation Time Before Collection: The minimum amount of time (in seconds) that a face must be present in consecutive frames before Knomi will enter the final collection phase. This can be combined with the Compliance Time Before Collection; both conditions must be satisfied before final collection will occur.

  • Compliance Time Before Collection: The minimum amount of time (in seconds) that a face must be positioned correctly, with no feedback-code issues, before Knomi will enter the final collection phase. This can be combined with the Evaluation Time Before Collection; both conditions must be satisfied before final collection will occur.
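The way the two timers combine can be sketched as a simple conjunction. This is an illustration of the described behavior, not SDK code; the function and parameter names are assumptions.

```python
# Sketch (assumption, not SDK internals): final collection begins only
# when BOTH timers have been satisfied - the face has been present for
# at least the evaluation time, AND correctly positioned with no
# feedback issues for at least the compliance time.
def ready_for_collection(face_present_secs, compliant_secs,
                         evaluation_time, compliance_time):
    return (face_present_secs >= evaluation_time
            and compliant_secs >= compliance_time)
```

With a compliance time of 0.0 the gate reduces to the evaluation timer alone, which matches the example override values in Listing 1.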

The override JSON (Listing 1) has the following form:

Listing 1 Example Override JSON
{
       "workflow": {
               "compliance_time_before_collection": 0.0,
               "evaluation_time_before_collection": 1.5,
       }
}
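When the override JSON is assembled programmatically, it is easy to emit invalid JSON (for example, a trailing comma). A small sketch, assuming only the key names from Listing 1 (the helper itself is illustrative, not part of the SDK):

```python
import json

# Sketch: building the Listing 1 override JSON programmatically before
# passing it to the Select Workflow method. Using json.dumps guarantees
# well-formed output with no trailing commas.
def build_workflow_overrides(compliance_time=0.0, evaluation_time=1.5):
    overrides = {
        "workflow": {
            "compliance_time_before_collection": compliance_time,
            "evaluation_time_before_collection": evaluation_time,
        }
    }
    return json.dumps(overrides, indent=4)
```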

Changing Eye Separation for Capture

The eye separation required for capture can be modified by changing the EYE_SEPARATION value via the Set Property function. Setting this property has several effects on the capture and on back-end processing. Exercise caution when setting the eye separation capture requirement.

On-Client Effects

The eye separation property can be set to require anywhere between 60 and 130 pixels between the eyes. Setting the eye separation changes when the Face Too Far feedback is returned. A lower eye separation allows the user to be further away from the device; this results in a smaller face in the image and less accurate analysis of specific facial features. A higher eye separation requires the user to be closer to the device; this results in a larger face, but may introduce a “fish-eye” effect from being too close to the lens.

On-Server Effects

Any changes to the eye separation on the client should be accompanied by a change in the required eye separation for any back-end services. Knomi S Face SDK uses a different face finding method from the Knomi Face Analyzer and Knomi Face Liveness services. As a result of these differences, any change to the eye separation on the client should be accompanied by changing the server profile eye separation to a value 10 pixels less than the client.
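The client/server pairing rule above can be captured in a few lines. This sketch is illustrative only; the function name is an assumption, while the 60–130 pixel range and the 10-pixel server offset come from this section.

```python
# Sketch of the eye separation pairing rule: the server profile should
# use a value 10 pixels less than the client, and the client value must
# stay within the supported 60-130 px range.
CLIENT_MIN_PX, CLIENT_MAX_PX = 60, 130
SERVER_OFFSET_PX = 10

def eye_separation_pair(client_px):
    """Return (client, server) eye separation values in pixels."""
    if not CLIENT_MIN_PX <= client_px <= CLIENT_MAX_PX:
        raise ValueError(
            f"client eye separation must be {CLIENT_MIN_PX}-{CLIENT_MAX_PX} px")
    return client_px, client_px - SERVER_OFFSET_PX
```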

When using a non-standard eye separation, it is recommended to use the checkLiveness endpoint on the Face Liveness service and the analyzeImage endpoint on the Face Analyzer service. The checkLiveness endpoint will not perform any specific face quality checks and will safely ignore the modified eye separation on the client. The Face Analyzer’s analyzeImage endpoint can be used with a modified profile to perform facial quality checking against the modified eye separation value.

Disabling the Device Position Check

By default, workflows require the user to hold the phone vertically to improve image quality. This requirement can be disabled to improve usability, but will cause a degradation in image quality.

Temporarily Disabling Autocapture

Autocapture can be temporarily disabled or re-enabled by changing the ENABLE_AUTOCAPTURE value via the Set Property function. When this property is enabled, autocapture ends and proceeds to the next phase once the capture conditions are met. When the conditions are met while this property is disabled, autocapture continues as if the conditions were not met, but the user will still receive feedback messages.
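The gating behavior can be sketched as a two-output decision. This is an illustration of the described semantics, not SDK internals; the function name and return shape are assumptions.

```python
# Illustrative gate for the ENABLE_AUTOCAPTURE behavior: when disabled,
# meeting the capture conditions does not advance to the next phase,
# but feedback messages continue to be delivered either way.
def autocapture_step(conditions_met, autocapture_enabled):
    """Return (proceed_to_next_phase, emit_feedback)."""
    proceed = conditions_met and autocapture_enabled
    return proceed, True  # feedback is always delivered
```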

Active Liveness Workflows

Active Liveness workflows are currently provided as a preview only, showing how we are developing our liveness and how this may work in the future. It is not recommended for production use at this time.

Active Liveness workflows require the user to perform a small action with their head while in frame. Unlike Passive Liveness workflows, which require the user to position the phone and then themselves, these workflows begin data collection immediately without any checks. Active Liveness workflows do not perform any on-device facial analysis and therefore do not return any capture feedback codes. Once started, a user has three seconds to complete the task specified by the workflow.

There are currently four Active Liveness workflows available:

  • Active_up - The subject needs to look upwards.

  • Active_down - The subject needs to look downwards.

  • Active_left - The subject needs to look left.

  • Active_right - The subject needs to look right.

Note: These workflows require more on-device memory and produce a larger server package. The Passive Liveness workflow settings detailed above do not apply to Active Liveness workflows.