Azure Data Engineer – Azure ADF, PySpark, Databricks (Ongoing Work)

Client: AI | Published: 23.02.2026
Budget: $8

We are looking for an experienced Azure Data Engineer to support and enhance our existing data platform on an ongoing basis.

You should be strong in:
- Azure Data Factory (ADF) for building and maintaining ETL/ELT pipelines
- Azure Databricks and PySpark for large-scale data processing
- Python for data engineering utilities, automation, and integration
- Delta Lake/Lakehouse concepts, performance optimization, and troubleshooting
- Working with SQL-based data sources, data warehousing, and BI integrations

Responsibilities:
- Design, build, and optimize data pipelines in Azure ADF and Databricks
- Develop and maintain PySpark and Python jobs for batch and near real-time workloads
- Implement best practices for data quality, observability, and monitoring
- Collaborate with our internal team, follow existing standards, and document your work
- Support and improve existing pipelines, diagnose issues, and propose scalable solutions

Nice to have:
- Experience with Azure Synapse or similar MPP data warehouses
- CI/CD for data pipelines (Git, Azure DevOps, etc.)
- Basic understanding of data modeling and BI/reporting needs

Engagement details:
- Remote, long-term engagement
- 20–30 hrs/week
- Some overlap with 6 PM – 2 AM PKT working hours preferred
- Budget: 2–8 USD/hour, depending on experience and fit

When you apply, please answer briefly:
1. Relevant Azure ADF / Databricks projects you've done (1–2 examples).
2. Your hourly rate within 2–8 USD/hour.
3. Your typical availability per week and your time zone.
4. A short note on how you approach debugging and optimizing slow pipelines.