Detailed Integration

The Face Liveness SDK integrates like any custom UIView. Apps must size the component to fill the device screen, and the device must be held in portrait orientation.

Detailed integration steps are provided below:

Set up the Face Liveness SDK

  • Add framework canvas to a view controller:
    • Add ViewController to Storyboard

    • Connect the ViewController to its backing Swift class by setting the view controller’s custom class to that Swift class.

    • Set view controller’s storyboard ID.

  • Add UIView to this ViewController
    • Position this UIView with the following constraints:
      • Set equal width to superview

      • Set aspect ratio to 3:4

      • Align bottom to safe area

      • Set leading space to safe area to 0

  • In the identity inspector, set:
    • class: “FaceLiveness”

    • module: “FaceLivenessFramework”
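The storyboard steps above can also be expressed in code. The sketch below is a rough equivalent, assuming FaceLiveness exposes a standard frame-based UIView initializer (the class and module names come from the Identity inspector settings above; the initializer is an assumption):

```swift
import UIKit
import FaceLivenessFramework

class LivenessViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // Create the FaceLiveness view (frame-based initializer assumed).
        let livenessView = FaceLiveness(frame: .zero)
        livenessView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(livenessView)

        let safeArea = view.safeAreaLayoutGuide
        NSLayoutConstraint.activate([
            // Equal width to superview
            livenessView.widthAnchor.constraint(equalTo: view.widthAnchor),
            // 3:4 aspect ratio (width : height), i.e. height = width * 4/3
            livenessView.heightAnchor.constraint(
                equalTo: livenessView.widthAnchor, multiplier: 4.0 / 3.0),
            // Align bottom to safe area
            livenessView.bottomAnchor.constraint(equalTo: safeArea.bottomAnchor),
            // Leading space to safe area = 0
            livenessView.leadingAnchor.constraint(equalTo: safeArea.leadingAnchor),
        ])
    }
}
```

Note that a 3:4 width-to-height ratio translates to a multiplier of 4/3 on the width anchor when constraining the height.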

Messaging/feedback UI

Set up your own app-level display objects:

  • Set up callbacks to retrieve workflow state, facial autocapture feedback, and device positioning information.
    • See the viewDidLoad and viewDidAppear methods in the demo that ships with this SDK; they demonstrate how to set callbacks and run the component.

  • Add UI objects:
    • Examples:
      • Add label, “statusLabel” to display workflow state.

      • Add label, “feedbackLabel” to display facial autocapture feedback.

      • Add button, “startStopButton” to control stopping and starting the workflow.

      • Add devicePosition indicator, “devicePositionControl” to display the device position. A sample custom control, DevicePositionControl, ships with the SDK.

  • Add callback handlers:
    • See the didReceiveDevicePosition, didReceiveFeedback, and didReceiveWorkflowState methods in the demo that ships with this SDK; they demonstrate various callback handlers.
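Put together, a minimal sketch of the handlers might look like the following. The outlet and handler names come from the examples above; the parameter types, the String payloads, the statusText helper, and the way DevicePositionControl is updated are assumptions for illustration:

```swift
import UIKit
import FaceLivenessFramework

class LivenessViewController: UIViewController {

    // Outlet names match the example UI objects above.
    @IBOutlet weak var statusLabel: UILabel!
    @IBOutlet weak var feedbackLabel: UILabel!
    @IBOutlet weak var devicePositionControl: DevicePositionControl!

    // Hypothetical helper that formats a workflow state for display.
    func statusText(for state: String) -> String {
        return "Status: \(state)"
    }

    // Handler names mirror the demo app; String payloads are an assumption.
    func didReceiveWorkflowState(_ state: String) {
        statusLabel.text = statusText(for: state)
    }

    func didReceiveFeedback(_ feedback: String) {
        feedbackLabel.text = feedback   // e.g. facial autocapture hints
    }

    func didReceiveDevicePosition(_ inPosition: Bool) {
        // DevicePositionControl ships with the SDK; the Bool payload and
        // the update mechanism shown here are assumed.
        devicePositionControl.isHighlighted = inPosition
    }
}
```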

Simulated Camera Device

Set up your own simulated camera device (only applicable to Simulator builds):

  • You must be using a build of the framework that supports simulators.

  • Open your project or target property settings (Info tab in Project Settings, or Info.plist file)

  • If you want to use a standalone image:
    • Add your image to your app bundle.

    • Add a string property called “SimulatorCameraImage” that points to that same image (ex. face.jpg).

    • The only image types currently supported are JPEG and PNG.

  • If you want to use a solid color:
    • Add a string property called “SimulatorCameraColor” and set it to a hex value color (ex. #00FF00).

    • Add a string property called “SimulatorCameraColorWidth” and set it to an integer value for the simulated image width (ex. 480).

    • Add a string property called “SimulatorCameraColorHeight” and set it to an integer value for the simulated image height (ex. 640).

  • If both an image and a solid color are specified in the property settings, the image will be used first.

  • If there is a problem loading the image, the SDK will then attempt to use the color.

  • If both fail, you will receive invalid camera objects when running code that accesses camera hardware.
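For reference, the entries described above might look like this in a raw Info.plist. The key names come from the list above; the values are the example values:

```xml
<!-- Simulated camera: standalone image (must be in the app bundle) -->
<key>SimulatorCameraImage</key>
<string>face.jpg</string>

<!-- Simulated camera: solid color (used if the image cannot be loaded) -->
<key>SimulatorCameraColor</key>
<string>#00FF00</string>
<key>SimulatorCameraColorWidth</key>
<string>480</string>
<key>SimulatorCameraColorHeight</key>
<string>640</string>
```

Only one of the two mechanisms is needed; when both are present, the image takes precedence as described above.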

Accelerometer Support

At this time, there is no support for accelerometer position checking in the simulator version of FaceLiveness; instead, the simulated camera image is always considered “in position”.