Palantir-Python Data Engineer (Pune)

Customer: AI | Published: 21.01.2026

I work with large manufacturing datasets inside Palantir Foundry and need practical help each day to keep the data flowing smoothly. The job is entirely hands-on: you and I will sit together in Kharadi, Hadapsar, or any nearby part of Pune for roughly one to two hours (we can fine-tune the schedule once we see the workload) and tackle daily data-engineering tasks.

Your core focus will be designing, building, and refining Foundry pipelines and the related Python components that ingest raw plant information, transform it for downstream analytics, and push clean, trusted tables back into Foundry. This involves writing well-structured Python code, mapping source systems, troubleshooting schema issues, and performance-tuning the workflows so overnight runs stay within SLA. Typical tasks include:

• Creating new code repositories or extending existing ones inside the Foundry environment
• Converting manual SQL transforms into reusable Python/Foundry framework logic (see the transform sketch below)
• Unit-testing and version-controlling every change for easy rollback (see the test sketch below)
• Validating results on sample and full-scale data before promoting to production

Acceptance is straightforward: each pipeline must complete without errors on the agreed dataset sizes, and the resulting tables must reconcile with a control sample we define together (see the reconciliation sketch below).

If you are already in Pune and comfortable diving into Foundry notebooks, Code Workbooks, PySpark, or the Foundry REST APIs, let’s get started this week.
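
To give a concrete picture of the pipeline work, here is a minimal sketch of a Foundry Python transform that replaces a manual SQL aggregation. The dataset paths, column names, and aggregation rules are hypothetical placeholders; the real repositories, schemas, and business logic would be whatever we agree on together.

```python
# clean_readings.py (hypothetical repository file)
from pyspark.sql import DataFrame, functions as F
from transforms.api import transform_df, Input, Output


def clean_readings(raw: DataFrame) -> DataFrame:
    """Pure PySpark logic, kept separate from the Foundry wiring so it can be unit-tested."""
    return (
        raw
        .filter(F.col("reading_value").isNotNull())           # drop sensor rows with no value
        .withColumn("reading_date", F.to_date("reading_ts"))  # daily grain for downstream analytics
        .groupBy("plant_id", "line_id", "reading_date")
        .agg(
            F.avg("reading_value").alias("avg_reading"),
            F.count("*").alias("sample_count"),
        )
    )


# Foundry wiring: the dataset paths below are hypothetical placeholders.
@transform_df(
    Output("/Manufacturing/clean/sensor_readings_daily"),
    raw=Input("/Manufacturing/raw/sensor_readings"),
)
def compute(raw: DataFrame) -> DataFrame:
    return clean_readings(raw)
```

Keeping the PySpark logic in a plain function (clean_readings) separate from the @transform_df wiring is what makes the testing and reconciliation sketches below straightforward.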
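
For the unit-testing and rollback point, a small pytest sketch, assuming the clean_readings function from the previous sketch lives in a module named clean_readings.py (a hypothetical name) and that a local Spark session can be created for the test run:

```python
# test_clean_readings.py (hypothetical test module in the same repository)
import pytest
from pyspark.sql import SparkSession

from clean_readings import clean_readings


@pytest.fixture(scope="session")
def spark():
    # Small local Spark session so the logic can be exercised outside Foundry.
    return SparkSession.builder.master("local[1]").appName("clean-readings-tests").getOrCreate()


def test_null_readings_are_dropped(spark):
    raw = spark.createDataFrame(
        [
            ("P1", "L1", "2026-01-20 08:00:00", 10.0),
            ("P1", "L1", "2026-01-20 09:00:00", None),
        ],
        ["plant_id", "line_id", "reading_ts", "reading_value"],
    )
    result = clean_readings(raw)
    row = result.collect()[0]
    assert row["sample_count"] == 1   # the null reading was filtered out
    assert row["avg_reading"] == 10.0
```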
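
For the acceptance criterion, a hedged sketch of how reconciliation against a control sample could be expressed; the key columns, the measure, and the 0.1% tolerance are placeholders we would define together:

```python
from pyspark.sql import DataFrame, functions as F


def reconciles(output_df: DataFrame, control_df: DataFrame,
               keys: list, measure: str, tolerance: float = 0.001) -> bool:
    """True when the pipeline output matches the control sample on the chosen keys and measure."""
    out = output_df.groupBy(*keys).agg(F.sum(measure).alias("out_total"))
    ctl = control_df.groupBy(*keys).agg(F.sum(measure).alias("ctl_total"))
    joined = out.join(ctl, on=keys, how="full_outer")
    # A key present on only one side, or a relative difference above the tolerance, is a mismatch.
    mismatches = joined.filter(
        F.col("out_total").isNull()
        | F.col("ctl_total").isNull()
        | (F.abs(F.col("out_total") - F.col("ctl_total"))
           > tolerance * F.abs(F.col("ctl_total")))
    )
    return mismatches.count() == 0
```

Called as, for example, reconciles(daily, control, ["plant_id", "line_id", "reading_date"], "avg_reading"), it returns False on any missing key or out-of-tolerance total, which gives a clear pass/fail signal before a pipeline is promoted to production.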