Full-Stack AI Exam Platform

Client: AI | Published: 19.01.2026

I am building a cloud-hosted platform dedicated to competitive examinations and need the entire stack delivered, from database schema to responsive UI. The core feature is an AI-powered written-answer evaluator: large language models must read a candidate's response, compare it to a rubric that I will supply, assign a score, and store both the marks and the model's reasoning for audit.

Beyond scoring, the system should include:

• Role-based user management (students, admins, moderators)
• Personal dashboards that surface recent tests, performance trends, and detailed answer feedback
• A secure admin panel for uploading new exams, editing rubrics, and downloading results in CSV/Excel
• Site-wide analytics tracking log-ins, test attempts, and evaluation frequency
• End-to-end encryption of sensitive data, OWASP-aligned security, and clear API documentation

Please containerise the services (Docker or similar), set up CI/CD, and deploy to a scalable environment such as AWS, GCP, or Azure. I will rely on you for the initial infrastructure as well as handover documentation so future maintenance is straightforward.

Tech choices are flexible—React or Vue on the front end; Node.js, Django, or similar on the back end—but the LLM integration must be cleanly abstracted so that models like GPT-4 or an open-source alternative can be swapped without code rewrites.

Performance testing and unit tests are part of the acceptance criteria. The first release should be production-ready and thoroughly documented so additional exam types (including future government-exam modules) can be added with minimal friction.