Child-Face Lock App Development

Client: AI | Published: 26.10.2025

# Project Brief — Child-Face Lock App (Android + iOS)

**Goal:** A cross-platform parental app that detects a registered child’s face and immediately prevents that child from using a parent’s phone. When the child is detected, the device is locked or switched to a restricted safe-screen; parents can override quickly via PIN/biometrics.

---

# Key Requirements (MVP)

**Functional**

* Register one or more child face profiles (on-device).
* Continuously or periodically check the front camera for a registered child while the phone is in use.
* If a registered child is detected → trigger a lock / safe-screen that prevents normal phone use.
* Parental override: PIN or device biometric (Face ID/Touch ID, Android biometrics).
* Audit log of detection events (time, optional photo thumbnail, kept locally).
* Basic parental dashboard: enable/disable, manage child profiles, view logs.

**Non-functional**

* Face recognition must run on-device (privacy + latency).
* Minimal battery and CPU impact.
* Robust against false positives/negatives.
* Secure storage of profiles and PIN (encrypted, Keystore/Secure Enclave).
* Respect platform rules and user consent; offer clear permission flows.

---

# Important Platform Constraints (must-read)

* **iOS:** Third-party apps cannot lock the entire device programmatically. Apple only allows device-level restrictions via **Guided Access** (user-enabled) or via **MDM/profiles** for supervised devices. The app must use allowed alternatives: show a full-screen child-safe mode inside the app, guide parents to enable Guided Access, or pair with an MDM solution for stricter control.
* **Android:** More flexibility. Use **Device Admin / Device Owner (Kiosk/Lock Task)** modes or an overlay + Accessibility services to restrict use. Note: Device Owner requires provisioning (works for single-purpose devices or with user consent during setup).

(Dev team: design flows accounting for these platform differences. iOS will need user education and/or a Guided Access flow.)
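The core detection requirement above (match a registered child, then lock) can be sketched platform-neutrally. This is an illustrative Python sketch, not the shipping pipeline: `MATCH_THRESHOLD` and `CONFIRM_FRAMES` are assumed values to be tuned against the real model, and the multi-frame confirmation is the false-positive mitigation described later in this brief.

```python
import math
from collections import deque

MATCH_THRESHOLD = 0.6   # assumed value; tune per model on a validation set
CONFIRM_FRAMES = 3      # require N consecutive matching frames before locking


def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


class ChildDetector:
    """Decides when to trigger the lock/safe-screen.

    A single frame never locks the device: only CONFIRM_FRAMES consecutive
    matches do, which suppresses one-off false positives.
    """

    def __init__(self, child_embeddings):
        self.child_embeddings = child_embeddings  # stored encrypted on-device
        self.recent = deque(maxlen=CONFIRM_FRAMES)

    def process_frame(self, embedding):
        matched = any(
            cosine_similarity(embedding, ref) > MATCH_THRESHOLD
            for ref in self.child_embeddings
        )
        self.recent.append(matched)
        # Lock only when the last CONFIRM_FRAMES frames all matched.
        return len(self.recent) == CONFIRM_FRAMES and all(self.recent)
```

In the real app, `process_frame` would be fed embeddings produced by the on-device TFLite model during the periodic checks, rather than raw vectors as here.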
---

# Suggested Architecture & Tech Stack

**Cross-platform UI**

* **Flutter** (recommended) or **React Native** (alternative)

**On-device ML / Face Detection**

* **TensorFlow Lite** (custom child-face recognition model) — offline, fast.
* **ML Kit** (the former Firebase ML Kit) for face-detection helpers (pose/landmarks), if acceptable.
* **Core ML** (iOS) and **ML Kit/TFLite** (Android) for optimized pipelines.

**Backend / Cloud (optional)**

* **Firebase** for user auth, remote config, and optional cloud sync (but allow a purely local mode).
* Use the cloud **only** if parents want multi-device sync — otherwise keep profiles local for privacy.

**Storage & Security**

* **Android Keystore / iOS Secure Enclave** for PIN & credentials.
* Encrypted local DB (SQLCipher or encrypted Hive/SQLite).
* Store face embeddings, not raw images. If storing images, encrypt them and explain this to users.

**Permissions**

* Camera (foreground) — explain the use case clearly.
* Biometric authentication (for overrides).
* Accessibility / Usage Access / Device Admin (Android) — explain why each is necessary.
* Local notifications (for parent alerts).

**CI / Testing**

* Firebase Test Lab (Android), TestFlight (iOS), unit + integration tests, device battery profiling.

---

# Detection Flow (high-level)

1. Parent registers a child → capture multiple front-camera photos in a guided flow (varied lighting/angles).
2. App computes face embeddings and stores them encrypted on-device.
3. When the phone is active or unlocked, the app performs periodic/triggered checks (not continuous video, to save power).
4. If face-embedding similarity > threshold → trigger the lock/safe-screen.
5. The lock event is logged; the parent receives an optional alert.
6. Parent overrides via PIN/biometric → session resumes.

---

# UX / UI Notes (for dev + design)

* Parent-facing screens must be minimal and explicit: setup wizard, permissions, test scan, enable-Guided-Access instructions (iOS).
* Child-facing lock screen should be friendly and non-scary (cute message + animation), and must not reveal any sensitive info.
* Provide a clear error/false-detect flow and a “did we get it wrong?” correction UI to improve embeddings.
* Include an “emergency unlock” flow in case of a false lock (quick biometric fallback).

---

# Privacy & Compliance

* Default: **on-device processing only**. Explicit opt-in required for any cloud sync.
* Collect minimal data. Store embeddings instead of images where possible.
* Provide a clear privacy policy explaining what is stored, where, and for how long. Allow users to delete profiles.
* Consider GDPR/COPPA implications (the app involves children) — consult legal before launch.

---

# Risks & Mitigations

* **False positives/negatives:** Provide a retraining UI, an adjustable detection threshold, and multi-frame confirmation before locking.
* **Battery drain:** Use event-driven detection (e.g., detect only when the phone is unlocked or the front camera is in use) — avoid constant camera streaming.
* **Bypassing the app:** Android: use Device Owner / Kiosk mode where feasible. iOS: require Guided Access or MDM for strict blocking.
* **Privacy backlash:** Default to local-only processing; transparent consent flow.

---

# MVP Roadmap & Sprints (suggested)

**Sprint 0 — Planning (1 week)**

* Finalize requirements, UX flows, permissions, legal check.

**Sprint 1 — Prototyping (2 weeks)**

* Flutter app skeleton, camera + onboarding UI, face-capture flow.

**Sprint 2 — On-device Detection MVP (3 weeks)**

* Integrate TFLite/ML Kit for face detection, create embeddings, build a simple local matching pipeline.

**Sprint 3 — Lock/Overlay Implementation (3 weeks)**

* Android: implement lock-task mode (kiosk) and overlay.
* iOS: implement the full-screen safe-screen + Guided Access instructions.

**Sprint 4 — Security + Storage (2 weeks)**

* Implement encrypted storage, biometric override, logs.

**Sprint 5 — QA & Beta (2–3 weeks)**

* Tests, small-family pilot, battery/perf tuning.
**Sprint 6 — Launch Prep (2 weeks)**

* App Store / Play Store packaging, privacy policy, marketing screenshots.

*(Adjust timelines to team size; the above assumes a small cross-functional team.)*

---

# Deliverables for Developers

* Repository with modules: camera, ML, matching, lock/overlay, settings, logs.
* On-device model conversion (TFLite) + sample embeddings.
* Test suite + profiling reports (battery/CPU).
* Documentation: permissions rationale, an iOS Guided Access setup guide for parents, and a security summary.

---

# Team & Roles

* 1 Mobile Engineer (Flutter/React Native)
* 1 Native Android Engineer (lock task, device admin)
* 1 iOS Engineer (Guided Access flows, Core ML integration)
* 1 ML Engineer (model training, TFLite optimization)
* 1 UX/UI Designer (setup flows, friendly lock screen)
* 1 QA/DevOps (CI, TestFlight/Firebase Test Lab)

---

# Quick Ask to Developer Team

* Confirm acceptance of the platform constraints (iOS Guided Access vs MDM).
* Decide offline-only vs optional sync.
* Choose the cross-platform framework (Flutter recommended).
* Estimate sprint allocation based on your dev capacity.

---

If you want, I can now:

* produce a 1-page technical spec PDF for your devs, or
* generate the onboarding UI wireframes (Flutter-ready), or
* write the README + GitHub repo scaffold.
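As a concrete starting point for the Sprint 4 security work, the parental-PIN override check can be sketched platform-neutrally. This is an illustrative Python sketch under stated assumptions: on-device, the derived hash and salt would live behind Android Keystore / the Secure Enclave rather than in app storage, and the PBKDF2 iteration count is an assumed figure to be benchmarked on target devices.

```python
import hashlib
import hmac
import secrets

PBKDF2_ITERATIONS = 200_000  # assumed; benchmark on target devices


def hash_pin(pin: str) -> tuple[bytes, bytes]:
    """Derive a slow PBKDF2-SHA256 hash of the parental PIN with a random salt.

    Store only (salt, digest) — never the PIN itself.
    """
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, PBKDF2_ITERATIONS)
    return salt, digest


def verify_pin(pin: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive and compare in constant time to avoid timing side channels."""
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, PBKDF2_ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

In practice this check sits behind the biometric fallback: `verify_pin` is only reached when Face ID/Touch ID or Android biometrics are unavailable or declined.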