Reference doc: https://docs.google.com/document/d/1epzt-02hj8_K9VHPbkr7xvOr-A1F8tJIijq6W1Uqelg/edit?usp=sharing

Make fully custom UI screens using our component library for standard elements (e.g., buttons, modals) that guide users through a linear flow.

Requirements:
(1) Display an input example image with clear gesture instructions.
(2) Prompt camera/video activation with permission handling.
(3) Provide a live preview with real-time overlays for pose matching, and auto-capture 3-5 frames upon detection.
(4) Process via configurable backends and show success/failure feedback with retries.

The flow starts video recording on activation and outputs the verified user image (JPEG/PNG) plus the full capture video (MP4, <10 MB) with metadata (e.g., timestamp, confidence). The plugin must plug into an Ionic app. Develop it as a federated Flutter plugin for Android/iOS compatibility, with a runtime config API (e.g., enableBackends(['amazon', 'azure', 'facetec'])) to toggle Amazon Rekognition Face Liveness, Azure Face Liveness, or the FaceTec 3D Liveness SDK; illustrative sketches of such an API follow this brief. Include error handling for connectivity and lighting issues, accessibility support (e.g., WCAG-compliant prompts), and automatic data deletion after processing.

Description: Create a reusable Flutter plugin that provides a customizable UI flow guiding users through a hand+face gesture challenge based on an input example image. The plugin will capture video automatically during the process, perform liveness checks via configurable backends (Amazon Rekognition, Azure Face Liveness, or FaceTec), and output the verified user image and capture video. The UI must leverage the client's component library for standardization, with backend toggling via a runtime configuration API. This enables secure, low-friction biometric verification in mobile apps without full-body pose analysis.

Success Metrics: 95%+ liveness pass rate in diverse testing; <5 s average flow completion; seamless toggling between backends without UI regressions.
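As a starting point for discussion, here is a minimal Dart sketch of what the runtime configuration and output types could look like. Every name here (LivenessBackend, LivenessConfig, LivenessResult, the enableBackends helper) is hypothetical and invented for illustration; only the string keys 'amazon', 'azure', 'facetec' and the output constraints (JPEG/PNG still, MP4 under 10 MB, timestamp/confidence metadata, 3-5 auto-captured frames) come from the brief itself.

```dart
import 'dart:typed_data';

/// Backend identifiers mirroring the brief's example call
/// enableBackends(['amazon', 'azure', 'facetec']). Hypothetical names.
enum LivenessBackend { amazon, azure, facetec }

/// Runtime configuration handed to the plugin before the flow starts.
class LivenessConfig {
  LivenessConfig({
    required this.enabledBackends,
    this.maxRetries = 3,
    this.maxVideoBytes = 10 * 1024 * 1024, // capture video capped under 10 MB
    this.autoCaptureFrames = 4,            // 3-5 frames on pose match
  });

  final List<LivenessBackend> enabledBackends;
  final int maxRetries;
  final int maxVideoBytes;
  final int autoCaptureFrames;
}

/// Output of a completed flow: the verified still, the full capture
/// video, and metadata (timestamp, backend confidence), per the brief.
class LivenessResult {
  const LivenessResult({
    required this.userImage,    // JPEG/PNG bytes
    required this.captureVideo, // MP4 bytes
    required this.timestamp,
    required this.confidence,
    required this.backend,
  });

  final Uint8List userImage;
  final Uint8List captureVideo;
  final DateTime timestamp;
  final double confidence;
  final LivenessBackend backend;
}

/// String-keyed toggle matching the signature shown in the brief.
List<LivenessBackend> enableBackends(List<String> names) => names
    .map((n) => LivenessBackend.values.firstWhere((b) => b.name == n))
    .toList();
```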
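Because the brief asks for a federated plugin, the cross-platform API would normally live in a shared platform-interface package that the Android and iOS packages implement. The sketch below follows the standard pattern from the real plugin_platform_interface package (PlatformInterface and verifyToken are its actual API); the class and method names are assumptions, and it reuses the LivenessConfig/LivenessResult types sketched above.

```dart
import 'package:plugin_platform_interface/plugin_platform_interface.dart';

/// Shared platform interface for the federated plugin: the Android and
/// iOS packages each ship a subclass and assign it to `instance` from
/// their registration hook.
abstract class LivenessFlowPlatform extends PlatformInterface {
  LivenessFlowPlatform() : super(token: _token);

  static final Object _token = Object();
  static late LivenessFlowPlatform _instance;

  static LivenessFlowPlatform get instance => _instance;

  /// verifyToken guards against implementations that bypass the
  /// platform-interface contract (e.g., via `implements`).
  static set instance(LivenessFlowPlatform value) {
    PlatformInterface.verifyToken(value, _token);
    _instance = value;
  }

  /// Runs the full capture-and-verify flow with the given config.
  Future<LivenessResult> startFlow(LivenessConfig config) {
    throw UnimplementedError('startFlow() has not been implemented.');
  }
}
```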
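Finally, a possible call site showing the retry behavior the requirements describe. runVerification is a hypothetical host-app function, not part of the spec; it assumes recoverable failures (connectivity, lighting) surface as exceptions from startFlow.

```dart
/// Illustrative call site: run the flow and retry on recoverable
/// failures up to maxRetries, per requirement (4).
Future<LivenessResult?> runVerification() async {
  final config = LivenessConfig(
    enabledBackends: enableBackends(['amazon', 'facetec']),
  );
  for (var attempt = 0; attempt <= config.maxRetries; attempt++) {
    try {
      // Starts recording on camera activation, shows the live preview
      // with pose overlays, auto-captures frames, and verifies via a
      // configured backend.
      return await LivenessFlowPlatform.instance.startFlow(config);
    } on Exception {
      // Connectivity/lighting failure: show a WCAG-compliant retry
      // prompt in the UI, then loop for another attempt.
    }
  }
  return null; // Retries exhausted: caller shows the failure screen.
}
```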