I’m building a diffusion-based pipeline focused on image generation, and I want the results to be crisp, large-format visuals rather than the low-res outputs many models settle for.

Your task is two-fold: first, curate or assemble a fit-for-purpose dataset (I don’t yet have one); second, train a state-of-the-art z-image diffusion model that consistently produces high-resolution renders. You’ll have freedom in tool choice (PyTorch, TensorFlow, DreamBooth, LoRA, or any other modern techniques are fine) as long as the final model can be reproduced from the training scripts you deliver.

I expect the usual artefacts: a cleaned dataset (with clear licensing notes), training code, model checkpoints, and a concise README outlining hyper-parameters, compute used, and instructions for running inference locally.

Acceptance will be based on:
• the ability to generate 2m (or larger) images without noticeable up-sampling artefacts
• a sample set of at least 50 generations that demonstrates the model’s fidelity to the curated dataset’s style and subject matter
• deterministic reproduction when seeds are fixed

If this scope excites you and you’ve got prior diffusion experience to back it up, I’m ready to get started right away.
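To make the determinism criterion concrete, here is a minimal sketch of the seed-fixing pattern I have in mind. It uses only Python's stdlib `random` as a stand-in; in the actual training and inference scripts you would seed every RNG in play (e.g. `random.seed(...)`, `numpy.random.seed(...)`, `torch.manual_seed(...)`, plus `torch.use_deterministic_algorithms(True)` if you go with PyTorch). The `seeded_sample` helper and its parameters are illustrative, not part of any deliverable.

```python
import random


def seeded_sample(seed: int, n: int = 5) -> list:
    """Draw n pseudo-random floats from an RNG fixed to `seed`.

    In a real pipeline this is where you would also seed numpy and
    torch so that data shuffling, noise sampling, and augmentation
    all replay identically across runs.
    """
    rng = random.Random(seed)  # isolated RNG; global state untouched
    return [rng.random() for _ in range(n)]


# Two runs with the same seed must reproduce byte-identical draws.
run_a = seeded_sample(42)
run_b = seeded_sample(42)
assert run_a == run_b

# A different seed should diverge, confirming the seed actually matters.
run_c = seeded_sample(43)
assert run_a != run_c
```

The acceptance check would follow the same shape: fix a seed, generate an image twice, and require pixel-identical outputs.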