I’m building an AI-driven web or mobile system that can snap a meal photo, recognise the dish and every visible ingredient, then instantly return a full nutrition report (calories plus a complete macro- and micronutrient profile) and close each session with actionable dietary recommendations.

Core needs
• Vision model that reliably performs both dish-level and ingredient-level recognition, even on mixed plates or in low-light conditions.
• Nutrition engine that maps recognised items to a verified food database and produces calorie counts, macro/micronutrient breakdowns and tailored advice.
• Clean API or lightweight SDK so the recognition and analysis modules plug into existing iOS/Android or web front ends.
• Admin interface for updating food entries and tracking model accuracy over time.

Acceptance criteria
1. Top-1 dish recognition ≥ 90% on a held-out test set.
2. Ingredient detection F1 ≥ 0.85 on annotated images.
3. Nutrition output within 5% of USDA or EFSA reference values.
4. End-to-end inference time ≤ 2 s on a mid-range smartphone.

In your project proposal, please outline your model pipeline, dataset strategy, tech stack (e.g. PyTorch, TensorFlow, ONNX, Core ML, TensorFlow Lite), and a realistic timeline with milestones for model training, API integration and final test deployment. I’m ready to move fast and will give quick feedback on each milestone.
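To make the nutrition-engine requirement concrete, here is a minimal sketch of the mapping step: recognised ingredients with estimated portion sizes are looked up in a per-100 g nutrient table and aggregated into a meal report. The ingredient names, gram estimates and `NUTRITION_DB` contents are illustrative stand-ins, not a real USDA/EFSA export; a production system would load the verified database instead.

```python
from dataclasses import dataclass

@dataclass
class NutrientProfile:
    kcal: float       # all values per 100 g
    protein_g: float
    carbs_g: float
    fat_g: float

# Illustrative values; in production this would come from the verified database.
NUTRITION_DB = {
    "chicken breast": NutrientProfile(165, 31.0, 0.0, 3.6),
    "white rice":     NutrientProfile(130, 2.7, 28.0, 0.3),
    "broccoli":       NutrientProfile(34, 2.8, 6.6, 0.4),
}

def meal_report(detections):
    """detections: list of (ingredient_name, estimated_grams) pairs."""
    totals = {"kcal": 0.0, "protein_g": 0.0, "carbs_g": 0.0, "fat_g": 0.0}
    for name, grams in detections:
        profile = NUTRITION_DB.get(name)
        if profile is None:
            # Unknown item: skip here; a real engine should flag it for review
            # rather than silently guess.
            continue
        scale = grams / 100.0
        totals["kcal"] += profile.kcal * scale
        totals["protein_g"] += profile.protein_g * scale
        totals["carbs_g"] += profile.carbs_g * scale
        totals["fat_g"] += profile.fat_g * scale
    return totals

report = meal_report([("chicken breast", 150), ("white rice", 200), ("broccoli", 100)])
```

The portion-size estimate is the weak link here: the 5% tolerance in criterion 3 is only achievable if gram estimates from the vision model are themselves reasonably accurate, so the proposal should say how portions are estimated (reference objects, depth, or user confirmation).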
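The acceptance criteria above can be scored with a small evaluation harness; this sketch shows top-1 dish accuracy and micro-averaged ingredient F1, with all labels illustrative:

```python
def top1_accuracy(y_true, y_pred):
    """Fraction of images where the top predicted dish matches the label."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def micro_f1(true_sets, pred_sets):
    """Micro-averaged F1 over per-image sets of ingredient labels."""
    tp = fp = fn = 0
    for truth, pred in zip(true_sets, pred_sets):
        tp += len(truth & pred)   # ingredients correctly detected
        fp += len(pred - truth)   # spurious detections
        fn += len(truth - pred)   # missed ingredients
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative mini test set
acc = top1_accuracy(["ramen", "salad", "curry", "pizza"],
                    ["ramen", "salad", "curry", "pasta"])
f1 = micro_f1([{"noodles", "egg"}, {"lettuce"}],
              [{"noodles"}, {"lettuce", "tomato"}])
```

Running both metrics on the held-out set at each milestone gives a direct pass/fail against criteria 1 and 2 (≥ 0.90 accuracy, ≥ 0.85 F1) and a way to track accuracy drift from the admin interface.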