I’m looking to build a functional prototype of an AI-driven examination portal that focuses on lecture-based learning data. The goal is to prove the core concept end-to-end so I can later scale it into a full SaaS product.

What I need you to develop:

• Lecture-to-Question Generator
Ingest a recorded lecture (audio or video), segment it by topic, and automatically create a mixed question set. Multiple-choice is welcome, but the must-have is auto-generated fill-in-the-blank items.

• Smart Evaluation
On submission, the system instantly checks fill-in-the-blank answers using NLP similarity scoring or keyword matching, then stores the graded attempt.

• Adaptive Testing Logic
Serve the next question set based on the student’s current performance level so each learner gets a personalized path.

• Analytics Dashboard
For every completed test, show:
– Accuracy analysis per topic
– Speed analysis per question and for the overall test
– Weak-area identification with visual charts (pie, bar, or line)
The admin view should also let me compare multiple tests on one screen.

• Lightweight AI Proctoring
Flag obvious cheating by monitoring the webcam feed or screen-focus changes during the attempt. A simple rules-based or model-based proof of concept is enough for this phase.

User flow to cover:

1. Student watches a lecture; the system logs watch duration, skips, and drop-off points.
2. The platform proposes a topic-wise test generated from that lecture.
3. Student takes the test and submits.
4. Immediate analytics appear; clicking any wrong question reveals a step-by-step solution and deep-links back to the exact lecture segment.
5. Admin reviews combined analytics across all learners.
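To make the Smart Evaluation requirement concrete: fill-in-the-blank grading could start with plain string similarity before swapping in a real NLP embedding model. Below is a minimal sketch; the function name, the 0.8 threshold, and the use of Python's stdlib `difflib` are my illustrative assumptions, not requirements from this brief.

```python
from difflib import SequenceMatcher


def grade_blank(student_answer: str, accepted_answers: list[str],
                threshold: float = 0.8) -> tuple[bool, float]:
    """Score a fill-in-the-blank response against one or more accepted answers.

    Uses difflib string similarity as a cheap stand-in for NLP similarity
    scoring; `threshold` is an assumed tuning parameter. Returns whether the
    answer passes and the best similarity score found.
    """
    normalized = student_answer.strip().lower()
    best = 0.0
    for accepted in accepted_answers:
        score = SequenceMatcher(None, normalized, accepted.strip().lower()).ratio()
        best = max(best, score)
    return best >= threshold, best
```

A later milestone could replace the `SequenceMatcher` call with sentence-embedding cosine similarity while keeping the same pass/fail interface, so the stored graded attempts stay comparable across model upgrades.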
Deliverables:

• Source code (Python, Node, or a comparable modern stack)
• Clear setup instructions and sample data
• Brief technical documentation explaining the AI models, evaluation logic, and future extension points

A modular, well-commented prototype that runs locally or on a small cloud instance will meet my needs for this milestone.
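For the Adaptive Testing Logic item above, even a rules-based difficulty ladder would prove the concept: step the tier up after a strong set, down after a weak one. The sketch below is illustrative only; the tiered question pool, the 0.8/0.5 thresholds, and the function names are my assumptions.

```python
import random

# Hypothetical question pool keyed by difficulty tier (1 = easiest).
# In the real prototype these would be IDs of auto-generated questions.
QUESTION_POOL = {
    1: ["q1a", "q1b", "q1c"],
    2: ["q2a", "q2b", "q2c"],
    3: ["q3a", "q3b", "q3c"],
}


def next_difficulty(current: int, accuracy: float) -> int:
    """Move the difficulty tier up or down based on last-set accuracy.

    The 0.8 (promote) and 0.5 (demote) cut-offs are assumed tuning values.
    """
    if accuracy >= 0.8:
        return min(current + 1, max(QUESTION_POOL))
    if accuracy < 0.5:
        return max(current - 1, min(QUESTION_POOL))
    return current


def next_question_set(current: int, accuracy: float, k: int = 2) -> list[str]:
    """Pick k questions from the tier the student should attempt next."""
    tier = next_difficulty(current, accuracy)
    return random.sample(QUESTION_POOL[tier], k)
```

A fuller version might replace the ladder with an IRT- or bandit-based selector, but the interface (performance in, question set out) can stay the same.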