Face Detection & 3D Try-On

Client: AI | Published: 10.11.2025

I have an idea for an end-to-end face-analysis solution with true-to-life 3D try-on at its core. The workflow I picture:

• A user points the camera; a backend service detects the face in real time, measures key landmarks to determine face shape and dominant skin tone, then immediately streams a realistic 3D overlay (glasses, makeup, jewellery: think virtual fitting room).
• The heavy lifting happens in an API you will build; the mobile front ends on iOS and Android simply call it and render the result.

Accuracy matters, but the breakthrough experience hinges on smooth, believable 3D try-on, so every design choice should prioritise that. Whether you prefer OpenCV, MediaPipe, TensorFlow, ARKit/ARCore, or your own blend of frameworks, I'm open, as long as the final deliverables include:

1. A documented REST/GraphQL API that receives a camera frame (or short clip) and returns:
   – Face bounding box and landmarks
   – Calculated face-shape classification
   – Suggested dominant skin-tone values
   – The 3D mesh / transformation data the client app needs for live overlay
2. Sample iOS and Android projects demonstrating the API call and real-time rendering.
3. A short deployment guide so I can spin up the service on my own GPU instance or cloud provider.
4. Unit tests or a test harness proving detection precision and runtime performance under typical mobile network conditions.

If this brief excites you, tell me a bit about similar visual-AI or AR tasks you've shipped and the toolchain you prefer. I'm happy to iterate quickly, review milestones, and release payments against each verified feature.
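To make the API deliverable concrete, here is a minimal sketch of what a per-frame JSON response from the detection endpoint could look like. Every field name, value, and the `mesh_url` path below are illustrative assumptions for discussion, not part of the brief or any existing API:

```python
import json
from dataclasses import dataclass, asdict
from typing import List

# Hypothetical response schema for the detection endpoint.
# All field names are illustrative assumptions, not a fixed contract.

@dataclass
class Landmark:
    name: str   # e.g. "left_eye_outer"
    x: float    # normalised [0, 1] image coordinates
    y: float

@dataclass
class DetectionResponse:
    bounding_box: List[float]   # [x, y, width, height], normalised
    landmarks: List[Landmark]
    face_shape: str             # e.g. "oval", "round", "square"
    skin_tone_rgb: List[int]    # dominant skin tone as 8-bit RGB
    mesh_url: str               # where the client fetches 3D mesh/transform data

def to_json(resp: DetectionResponse) -> str:
    """Serialise the response body for the REST endpoint."""
    return json.dumps(asdict(resp))

# Example payload a client might receive for one frame
example = DetectionResponse(
    bounding_box=[0.31, 0.22, 0.38, 0.41],
    landmarks=[Landmark("left_eye_outer", 0.36, 0.34)],
    face_shape="oval",
    skin_tone_rgb=[224, 189, 166],
    mesh_url="/meshes/session-123/frame-0001",
)
payload = json.loads(to_json(example))
```

A schema like this keeps the mobile clients thin, as the brief intends: iOS and Android only parse the payload and drive the AR overlay, while all detection and classification stays server-side.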