Dual-Platform Vision AI App

Client: AI | Published: 18.11.2025

I’m building a new mobile application that must run smoothly on both iOS and Android. The user experience is straightforward: open the app, take a photo, and instantly see on-screen feedback that highlights detected objects, faces, and even hand gestures.

To achieve this I’m relying on existing computer-vision models (Core ML, TensorFlow Lite, MediaPipe, or similar), so I need someone comfortable plugging them into a polished, secure app: wiring up the camera module for single-shot photo capture, streaming the image through the chosen on-device models, and presenting clear overlays in a clean, intuitive UI. Everything has to process quickly on the device, respect user privacy, and keep data encrypted at rest and in transit whenever it must leave the phone.

Deliverables

• A single codebase (native, Flutter, or React Native; whichever you prefer and can justify) that compiles for iOS and Android
• Photo-capture screen, analysis pipeline, and results screen with object, facial, and gesture feedback overlays
• Integration of at least one production-ready model for each recognition type, optimised for mobile performance
• Brief setup instructions and commented code so I can extend or swap models later

I’ll test by running the app on real devices, snapping photos in varied lighting, and confirming that objects, faces, and gestures are detected accurately and that no image leaves the device without explicit consent.

If this sounds like your wheelhouse, tell me which mobile framework and vision libraries you’d lean on and why.
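To make "extend or swap models later" concrete, here is a minimal sketch of the kind of pluggable analysis pipeline I have in mind. This is an illustration, not a spec: the `VisionModel` interface, the `Detection` record, and the confidence threshold are my own assumptions, and in the real app each backend (Core ML, TensorFlow Lite, MediaPipe) would sit behind its own adapter implementing the same interface.

```python
from dataclasses import dataclass
from typing import List, Protocol, Tuple


@dataclass
class Detection:
    """One result to draw as an overlay on the results screen."""
    label: str                            # e.g. "face", "cup", "thumbs_up"
    confidence: float                     # 0.0 to 1.0
    box: Tuple[float, float, float, float]  # (x, y, w, h), normalized to [0, 1]


class VisionModel(Protocol):
    """Any backend (Core ML, TFLite, MediaPipe, ...) is wrapped behind this."""
    def detect(self, image_bytes: bytes) -> List[Detection]: ...


class AnalysisPipeline:
    """Runs every registered model on one captured photo, entirely on-device."""

    def __init__(self) -> None:
        self._models: List[VisionModel] = []

    def register(self, model: VisionModel) -> None:
        # Swapping a model later means unregistering one adapter
        # and registering another; the rest of the app is untouched.
        self._models.append(model)

    def analyze(self, image_bytes: bytes,
                min_confidence: float = 0.5) -> List[Detection]:
        # Collect detections from all models, dropping low-confidence noise
        # before anything reaches the overlay layer.
        results: List[Detection] = []
        for model in self._models:
            results.extend(d for d in model.detect(image_bytes)
                           if d.confidence >= min_confidence)
        return results
```

A stub model returning canned detections is enough to exercise the pipeline in tests, which is roughly how I would expect the object, face, and gesture adapters to be verified before wiring in real model files.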