I am in the middle of my MSc thesis in machine learning, and the central hurdle is implementing and fine-tuning both convolutional neural networks (CNNs) and recurrent neural networks (RNNs) on a proprietary dataset of bank accounting ratios. I already have a draft data pipeline and preliminary feature engineering in place, but I need an experienced collaborator who can turn these models into publishable-quality work and guide me in framing the statistical narrative behind them.

Beyond the deep-learning core, the study compares the neural results against classic ensemble methods (XGBoost and bagging) and a baseline logistic-regression model. A solid command of statistics is therefore essential to validate model assumptions, check for multicollinearity, and report confidence intervals correctly.

Key deliverables:
• Clean, modular Python (or preferred language) code for the CNN and RNN architectures, including training scripts, hyperparameter search, and reproducibility notes.
• Comparative runs with well-documented XGBoost and bagging implementations, plus a rigorously evaluated logistic-regression benchmark.
• A clear statistical analysis explaining performance differences, significance tests, and potential limitations, suitable for the thesis methodology chapter.
• Step-by-step write-up support: comments in the code, LaTeX-ready tables and figures, and brief Zoom walkthroughs so I can defend each decision confidently.

Tools currently in use: TensorFlow/Keras, scikit-learn, pandas, NumPy, JupyterLab. If you prefer PyTorch for certain parts, I'm flexible as long as the reasoning is sound and the results are reproducible.

The timeline is tight (ideally a first working CNN model within two weeks), so proven experience delivering similar academic projects is a must. Please share one example of a previous thesis or peer-reviewed paper you helped shape, especially if it involved financial or tabular data and neural networks.
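To make the multicollinearity deliverable concrete, here is the kind of check I have in mind: a variance-inflation-factor (VIF) screen over the accounting-ratio columns. This is only a minimal sketch using NumPy on synthetic, illustrative data, not code from my pipeline; the usual "VIF > 10" cutoff is a rule of thumb, and perfectly collinear columns would need to be handled separately (they make the denominator zero).

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (n_samples x n_features).

    VIF_j = 1 / (1 - R_j^2), where R_j^2 is the coefficient of determination
    from regressing column j on all remaining columns (with an intercept)
    by ordinary least squares. Assumes no column is perfectly collinear.
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        # Design matrix: intercept plus every column except j.
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        ss_res = resid @ resid
        ss_tot = ((y - y.mean()) ** 2).sum()
        r2 = 1.0 - ss_res / ss_tot
        out[j] = 1.0 / (1.0 - r2)
    return out

# Illustrative synthetic data: two nearly collinear ratios and one independent one.
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = x0 + 0.1 * rng.normal(size=500)   # almost a copy of x0
x2 = rng.normal(size=500)              # unrelated feature
vifs = vif(np.column_stack([x0, x1, x2]))
```

On the real dataset this would run on the ratio columns before fitting the logistic-regression benchmark; if you prefer an off-the-shelf version, statsmodels ships an equivalent `variance_inflation_factor`, and either is fine with me.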
Let’s create something that not only satisfies the grading rubric but is conference-submission ready.