React Native Integration¶
Overview¶
The Face Capture SDK comes with a React Native interface for integration. This chapter will outline the requirements for React Native integration, how to operate the included developer demo, and which parts of the demo source code correspond to the integration tasks outlined in the Application Design chapter.
Integration Requirements¶
The Face Capture SDK requires Camera permissions.
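On Android, the camera permission must also be granted at runtime; on iOS, the system prompts automatically when the camera is first used, provided NSCameraUsageDescription is present in Info.plist. The following is a minimal sketch of the Android runtime request using React Native's built-in PermissionsAndroid module; the wrapper function name is illustrative only.

import { PermissionsAndroid, Platform } from 'react-native';

// Request the camera permission at runtime on Android.
// iOS prompts automatically when the camera is first used,
// provided NSCameraUsageDescription is set in Info.plist.
async function ensureCameraPermission() {
  if (Platform.OS !== 'android') {
    return true;
  }
  const result = await PermissionsAndroid.request(
    PermissionsAndroid.PERMISSIONS.CAMERA
  );
  return result === PermissionsAndroid.RESULTS.GRANTED;
}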
Demo Operation¶
The Face Capture SDK comes with a small developer demo that is intended to show how various options work. The demo allows developers to rapidly test various combinations of settings to determine what they need to adjust to meet their own application’s requirements. The demo is not intended to resemble a finished product and provides the user with many options that should not be accessible to an end-user in a production environment. Source code for the demo is provided within the SDK installer package.
Due to an issue with the “react-native” npm package used in the demo, you may encounter an error reporting that “node_modules/react-native/scripts/xcode/with-environment.sh” cannot be found. This applies only when building for iOS and can be fixed by creating a blank file at that location and giving it executable permissions.
Home Screen¶
Settings Screen¶
Capture Screen¶
Result Screen¶
Demo Code¶
This section describes the Face Capture API and how the demo uses it to implement an application.
Create a Face Capture Object¶
currentFaceCapture = new FaceCapture();
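The sketch below shows one way the object might be created and held for later use. The import path shown is a placeholder; use the package name under which the SDK's React Native module is installed.

import { FaceCapture } from 'face-capture-sdk'; // placeholder package name

let currentFaceCapture = null;

function initializeFaceCapture() {
  // Create the FaceCapture object and keep it for the rest of the session.
  currentFaceCapture = new FaceCapture();
  return currentFaceCapture;
}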
Create a Workflow Object¶
currentWorkflow = await currentFaceCapture.createWorkflow(settingWorkflow);
Adjust Workflow Settings¶
await currentWorkflow.setStringProperty(WorkflowProperty.USERNAME.value, settingUsername);
await currentWorkflow.setDoubleProperty(WorkflowProperty.CAPTURE_TIMEOUT.value, settingCaptureTimeout);
await currentWorkflow.setStringProperty(WorkflowProperty.CAPTURE_PROFILE.value, settingCaptureProfile);
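The sketch below combines workflow creation with the property calls above into a single helper. WorkflowProperty and the SDK calls are taken from the snippets in this section; the settings object, its field names, and the idea of passing them in from the demo's Settings screen are illustrative assumptions.

async function configureWorkflow(faceCapture, settings) {
  // Create the workflow by name (settings.workflowName is a placeholder).
  const workflow = await faceCapture.createWorkflow(settings.workflowName);

  // Apply the values gathered from the application's settings.
  await workflow.setStringProperty(WorkflowProperty.USERNAME.value, settings.username);
  await workflow.setDoubleProperty(WorkflowProperty.CAPTURE_TIMEOUT.value, settings.captureTimeout);
  await workflow.setStringProperty(WorkflowProperty.CAPTURE_PROFILE.value, settings.captureProfile);

  return workflow;
}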
Select a Camera¶
currentCameraList = await currentFaceCapture.getCameraList(settingCameraPosition);
currentCamera = currentCameraList[0];
await currentCamera.setOrientation(settingCameraOrientation);
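A hedged sketch wrapping the calls above with basic error handling is shown below; the camera position and orientation arguments are whatever values the SDK accepts for those settings and are passed through unchanged.

async function selectCamera(faceCapture, cameraPosition, cameraOrientation) {
  const cameras = await faceCapture.getCameraList(cameraPosition);
  if (cameras.length === 0) {
    throw new Error('No camera available for the requested position');
  }
  // The demo uses the first camera returned for the requested position.
  const camera = cameras[0];
  await camera.setOrientation(cameraOrientation);
  return camera;
}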
Begin a Capture Session¶
await currentFaceCapture.startCaptureSession(currentWorkflow, currentCamera);
Stop a Capture Session¶
await currentFaceCapture.stopCaptureSession();
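In a React Native component, starting and stopping the session pair naturally with the component lifecycle. The sketch below assumes the FaceCapture, workflow, and camera objects have already been prepared as shown above; it illustrates the pairing rather than prescribing a particular screen structure.

import React, { useEffect } from 'react';

function CaptureScreen({ faceCapture, workflow, camera }) {
  useEffect(() => {
    // Start the capture session when the screen mounts.
    faceCapture.startCaptureSession(workflow, camera);

    // Stop it again when the screen unmounts.
    return () => {
      faceCapture.stopCaptureSession();
    };
  }, [faceCapture, workflow, camera]);

  // Rendering of the camera preview and overlay is omitted here.
  return null;
}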
Get the Capture Region¶
currentCaptureRegion = await currentFaceCapture.captureSessionGetCaptureRegion();
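The capture region can be used, for example, to position an overlay in the app's UI. The sketch below assumes the returned region exposes left, top, width, and height fields; the actual field names should be confirmed against the SDK.

async function getOverlayStyle(faceCapture) {
  const region = await faceCapture.captureSessionGetCaptureRegion();
  // Field names below are assumptions for illustration only.
  return {
    position: 'absolute',
    left: region.left,
    top: region.top,
    width: region.width,
    height: region.height,
  };
}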
Get the Current Capture Session State¶
currentCaptureState = await currentFaceCapture.getCaptureSessionState();
Get the Capture State’s Image¶
currentCaptureSessionFrame = await currentCaptureState.getFrame();
Get the Capture State’s Feedback¶
currentCaptureSessionFeedback = await currentCaptureState.getFeedback();
Get the Capture State’s Status¶
currentCaptureSessionStatus = await currentCaptureState.getStatus();
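The frame, feedback, and status calls above all operate on the state object returned by getCaptureSessionState, so a typical pattern is to poll the session while a capture is in progress. The sketch below illustrates such a loop; the terminal status names and the polling interval are placeholder assumptions.

async function pollCaptureSession(faceCapture, onUpdate) {
  for (;;) {
    const state = await faceCapture.getCaptureSessionState();

    const frame = await state.getFrame();       // latest preview image
    const feedback = await state.getFeedback(); // guidance for the user
    const status = await state.getStatus();     // overall session status

    onUpdate({ frame, feedback, status });

    // 'COMPLETED' and 'TIMEDOUT' are placeholder status names.
    if (status === 'COMPLETED' || status === 'TIMEDOUT') {
      return status;
    }

    // Placeholder polling interval of 100 ms.
    await new Promise((resolve) => setTimeout(resolve, 100));
  }
}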
Get the Server Package¶
currentCaptureServerPackage = await currentFaceCapture.getServerPackage(currentWorkflow, settingPackageType);
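After a successful capture, the server package is retrieved and submitted to the server for analysis. The sketch below uses React Native's built-in fetch; the endpoint URL, the request body format, and the treatment of the package as an opaque value are all placeholders.

async function submitServerPackage(faceCapture, workflow, packageType) {
  const serverPackage = await faceCapture.getServerPackage(workflow, packageType);

  // Placeholder endpoint and request body format.
  const response = await fetch('https://example.com/face-capture/analyze', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ package: serverPackage }),
  });
  return response.json();
}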
Get the Encrypted Server Package¶
currentCaptureServerPackage = await currentFaceCapture.getEncryptedServerPackage(currentEncryptionType, currentEncryptionKey, currentWorkflow, settingPackageType);
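An application might choose between the encrypted and plain variants depending on whether an encryption key has been provisioned. The sketch below only illustrates that branching; the encryption object, its fields, and how the type and key are provisioned are assumptions.

async function buildServerPackage(faceCapture, workflow, packageType, encryption) {
  // 'encryption' is a placeholder object holding the configured type and key.
  if (encryption && encryption.key) {
    return faceCapture.getEncryptedServerPackage(
      encryption.type,
      encryption.key,
      workflow,
      packageType
    );
  }
  return faceCapture.getServerPackage(workflow, packageType);
}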
Enable Autocapture¶
await currentFaceCapture.captureSessionEnableAutocapture(true);
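Autocapture can be toggled at runtime, for example from a switch on the demo's Settings screen. The sketch below forwards the value of a standard React Native Switch to the call above; the component and its wiring are illustrative.

import React, { useState } from 'react';
import { Switch } from 'react-native';

function AutocaptureToggle({ faceCapture }) {
  const [enabled, setEnabled] = useState(true);

  const onToggle = async (value) => {
    setEnabled(value);
    await faceCapture.captureSessionEnableAutocapture(value);
  };

  return <Switch value={enabled} onValueChange={onToggle} />;
}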