Workflows

Overview

Workflows are pre-defined sets of rules and parameters that define how a liveness transaction was collected and how it should be analyzed. Workflows are separated into two categories: Passive Liveness and Active Liveness.

  • Passive Liveness workflows collect one or more images of a subject without requiring any action on their part, after which liveness can be determined.

  • Active Liveness workflows ask the subject to complete a simple activity while in view of the camera during image collection. Active Liveness workflows are currently provided as a preview only, showing how we are developing our liveness detection and how it may work in the future. They are not recommended for production use at this time.

All applications must use a Passive Liveness workflow to be compatible with the Knomi Face Liveness service. Active Liveness workflows can be used in addition to Passive Liveness to supplement the Passive Liveness results, at the cost of additional back-end processing and an increased package size.

Workflow Selection

If you are not using the Android or iOS Knomi S Face SDK to select your workflow, you must specify the workflow name in the request. This is done by adding a “workflow” field inside the workflow_data object of the “video” section and setting it to the workflow name. For example:

Example of using workflow name - request form
{
    "video": {
        "meta_data": {
            ...
        },
        "workflow_data": {
            "workflow": "charlie2",
            "frames": []
        }
    }
}

Passive Workflows

Each passive workflow has 3 variations, indicated by its numeric suffix, offering different trade-offs between security and usability:

  • 2 - High usability

  • 4 - Balanced usability/security

  • 6 - High security

It is recommended to start with the 4 variation and evaluate whether it meets your needs before trying 2 or 6.

The passive workflows are separated into the following categories, each of which requires a specific number of images:

  • charlie#
    • The charlie# set of workflows provide passive liveness for images captured from a mobile phone.

    • Available workflows: charlie2, charlie4, charlie6

    • Required images: 3 (using Knomi S Face SDK) or 1 (without SDK)

  • delta#
    • The delta# set of workflows provide passive liveness for images captured from a mobile phone’s back camera.

    • Available workflows: delta2, delta4, delta6

    • Required images: 3 (using Knomi S Face SDK) or 1 (without SDK)

  • hotel#
    • The hotel# set of workflows provide passive liveness for images captured from a web camera.

    • Available workflows: hotel2, hotel4, hotel6

    • Required images: 1

  • foxtrot#
    • The foxtrot# set of workflows provide passive liveness for images captured from a mobile phone. These workflows always use 1 image, unlike charlie#, which can use 3 or 1.

    • Available workflows: foxtrot2, foxtrot4, foxtrot6

    • Required images: 1
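
Selecting one of these passive workflows works the same way as in the earlier example: only the workflow name in the request changes. For example, a sketch of a web camera transaction using the hotel4 workflow (frame contents omitted):

Example of selecting a web camera workflow - request form
{
    "video": {
        "meta_data": {
            ...
        },
        "workflow_data": {
            "workflow": "hotel4",
            "frames": []
        }
    }
}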

Active Workflows

Active Liveness workflows are currently provided as a preview only, showing how we are developing our liveness detection and how it may work in the future. They are not recommended for production use at this time.

Active Liveness workflows require the user to perform a small action with their head while in frame. The requirement is that the user’s head is turned in the specified direction in at least one of the images included in the transaction. If you are creating your own transaction (and not using Aware client SDKs), you must correctly set the frame tags so that the active liveness information can be interpreted. See the sample JSON requests that are included in the installation package.

There are currently four Active Liveness workflows available:

  • active_up - The subject needs to turn their head upwards.

  • active_down - The subject needs to turn their head downwards.

  • active_left - The subject needs to turn their head left.

  • active_right - The subject needs to turn their head right.
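
Active workflow names are specified in the request in the same way as passive workflow names, assuming the same “workflow” field is used. The sketch below shows only the workflow selection; the frame entries and their tags, which carry the head turn information, are omitted here (see the sample JSON requests in the installation package for the required tag format):

Example of selecting an active workflow - request form
{
    "video": {
        "meta_data": {
            ...
        },
        "workflow_data": {
            "workflow": "active_left",
            "frames": []
        }
    }
}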

Workflow Override

The Knomi Face Liveness Server also provides the ability to override the default parameters of a workflow, providing flexibility to accommodate different use cases. Currently, the following parameters can be overridden (examples are shown after the list):

  • security_level
    • This parameter controls the balance of usability versus security. The 2, 4, and 6 workflow variations set this value to 70, 77, and 82, respectively. Valid values are 0-100, with higher values providing higher security and lower values providing higher usability.

  • face_detection_granularity
    • This parameter can be set between 0.0 and 1.0 (including 1.0) with a default value of 0.20. Larger values can provide better accuracy but increase analysis time; smaller values can improve the speed at which faces are detected but may reduce accuracy. In general, this parameter should be left at its default value unless you have been explicitly directed to change it.

  • face_detection_min_size
    • This parameter determines the smallest face that can be found in the image during face detection. It is the minimum ratio of face size to the image’s smallest dimension: if the image’s smallest dimension is its width, it is the ratio of the face’s width to the image’s width; if the image’s smallest dimension is its height, it is the ratio of the face’s height to the image’s height. The face_detection_min_size can be set between 0.0 and 1.0 (including 1.0) with a default value of 0.30. Setting this value too small may cause false positive faces to be detected. This value must be smaller than the face_detection_max_size setting. For example, a value of 0.10 means that a face must be at least 10% of the image’s smallest dimension to be detected.

  • face_detection_max_size
    • This parameter determines the largest face that can be found in the image during face detection. It is the maximum ratio of face size to the image’s smallest dimension, measured in the same way as face_detection_min_size. The face_detection_max_size can be set between 0.0 and 1.0 (including 1.0) with a default value of 1.0. This value must be larger than the face_detection_min_size setting. For example, a value of 0.70 means that a face must be less than 70% of the image’s smallest dimension to be detected.

Example of overriding the security level
{
    "video": {
        "meta_data": {
            ...
        },
        "workflow_data": {
            "workflow": "charlie2",
            "frames": [],
            "security_level": 80
        }
    }
}
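
The face detection parameters can be overridden in the same way. The sketch below assumes that face_detection_granularity, face_detection_min_size, and face_detection_max_size are placed inside workflow_data alongside the workflow name, just as security_level is in the example above; the values shown are taken from the parameter descriptions:

Example of overriding the face detection parameters
{
    "video": {
        "meta_data": {
            ...
        },
        "workflow_data": {
            "workflow": "charlie4",
            "frames": [],
            "face_detection_granularity": 0.20,
            "face_detection_min_size": 0.10,
            "face_detection_max_size": 0.70
        }
    }
}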