Databricks YAML Pipeline Batch Updates

Client: AI | Published: 11.03.2026

I have 10 Databricks pipeline YAML files and 9 associated Python notebooks that all need the same set of edits. Every change is straightforward but must be applied with absolute consistency.

YAML adjustments required
• Insert a new task at the very top of each tasks section. I will supply the exact naming convention so you can drop it in without guesswork.
• Add a dependency from the existing main task to this newly inserted task.
• Modify the log-handling task so its failure condition points to the new logic, and change the hard-coded error description to a dynamic value.

Notebook adjustments required
• Before any exception is raised, add one line that captures the error message and stores it for downstream logging.

Reference files showing the pattern for each edit will be in the repository; simply mirror what you see, commit, and push. Git history must stay clean (one commit per file group is fine), and unit tests and pipeline validations should still pass after your changes.

Acceptance criteria
1. All 10 YAML files build and deploy in Databricks without warnings.
2. All 9 notebooks run end-to-end; captured error messages appear in the designated storage location.
3. New tasks and dependencies adhere exactly to the naming standards I provide.
4. No lines outside the scope above are changed.

You'll need solid Databricks workflow experience, comfort with YAML syntax, and enough Python familiarity to tweak notebook code quickly. The work is repetitive but detail-oriented, so accuracy matters more than speed.
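To illustrate the shape of the YAML edits, here is a minimal sketch in the Databricks jobs YAML style. All task keys, notebook paths, and the job name are placeholders I invented for illustration; the real naming convention and failure-condition details will come from the reference files.

```yaml
resources:
  jobs:
    example_pipeline:          # placeholder job name
      tasks:
        # New task inserted at the very top of the tasks section.
        - task_key: init_error_capture          # placeholder name
          notebook_task:
            notebook_path: /Workspace/pipelines/init_error_capture
        # Existing main task now depends on the new task.
        - task_key: main_task
          depends_on:
            - task_key: init_error_capture      # added dependency
          notebook_task:
            notebook_path: /Workspace/pipelines/main
```

The key point is that the new task appears first in the list and the main task gains a `depends_on` entry referencing it; everything else in each file stays untouched.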
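The notebook edit can be sketched as follows. This is a hypothetical pattern, not the repository's actual code: the `captured_errors` list stands in for whatever storage location the reference files designate, and the function and message are placeholders.

```python
# Stand-in for the designated storage location (e.g. a log sink or task value).
captured_errors: list[str] = []


def capture_and_raise(exc: Exception) -> None:
    """Store the error message for downstream logging, then raise."""
    captured_errors.append(str(exc))  # the one added line per raise site
    raise exc


def load_table(path: str) -> None:
    """Placeholder notebook function showing the edit pattern."""
    if not path:
        capture_and_raise(ValueError("load_table failed: empty input path"))
```

The edit is mechanical: immediately before each existing `raise`, the error message is captured for logging, so the exception's behavior is unchanged while its text becomes available downstream.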