I’m racing against the clock in a classification-focused Kaggle competition and need an experienced AI engineer to turn my prepared data into a winning submission. The dataset is my own (no reliance on the platform’s default files), and I have already completed data cleaning, feature extraction, and normalization, so you can dive straight into modelling. Your task is to build, tune, and ensemble models that outperform the current leaderboard benchmark, then package everything into a fully reproducible training notebook and a submission-ready inference script. Python with scikit-learn, XGBoost, LightGBM, CatBoost, PyTorch, or TensorFlow is welcome; pick the stack you believe will squeeze out the highest score.

Deliverables
• Commented training notebook(s) demonstrating the data pipeline, model selection, tuning, and validation
• Stand-alone inference notebook/script ready for Kaggle submission
• README summarising architecture choices, hyperparameters, CV strategy, and the achieved public LB score
• Brief hand-over session or document so I can iterate confidently before the deadline

The work is accepted once the private leaderboard score beats the median baseline and remains consistent with the public score you showcase. Speed is critical, so please share your timeline and any questions as soon as possible.
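To give a sense of the workflow I expect in the training notebook, here is a minimal sketch of the kind of pipeline I have in mind: stratified K-fold cross-validation with LightGBM producing out-of-fold predictions that can later feed an ensemble. The file paths, the `target` column name, the AUC metric, and all hyperparameters below are placeholders I picked for illustration; they are not details of my dataset, and you are free to structure things differently.

```python
# Minimal sketch: stratified 5-fold CV with LightGBM, producing out-of-fold
# predictions for ensembling/stacking. Paths, column names, the metric, and
# hyperparameters are placeholders, not details of the actual dataset.
import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

train = pd.read_csv("train.csv")    # placeholder path
test = pd.read_csv("test.csv")      # placeholder path
X = train.drop(columns=["target"])  # "target" is an assumed label column
y = train["target"]

oof = np.zeros(len(train))          # out-of-fold predictions, reusable for stacking
test_preds = np.zeros(len(test))    # averaged across folds
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

for fold, (tr_idx, va_idx) in enumerate(skf.split(X, y)):
    model = lgb.LGBMClassifier(
        n_estimators=2000,
        learning_rate=0.05,
        random_state=42,
    )
    model.fit(
        X.iloc[tr_idx], y.iloc[tr_idx],
        eval_set=[(X.iloc[va_idx], y.iloc[va_idx])],
        callbacks=[lgb.early_stopping(100, verbose=False)],
    )
    oof[va_idx] = model.predict_proba(X.iloc[va_idx])[:, 1]
    test_preds += model.predict_proba(test)[:, 1] / skf.n_splits

print("CV AUC:", roc_auc_score(y, oof))  # assumes a binary target and an AUC metric
```

The out-of-fold array is what I would expect to see reused when blending or stacking multiple model families, and the fold-averaged test predictions are what the inference script should reproduce for the submission file.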