Data Engineer (PySpark + Palantir Foundry) @ Crestt

Crestt · Remote, Poland
Job description

Hi! We are looking for experienced Data Engineers to join a strategic project in the healthcare sector for one of the world's leading life science companies. You will be working on data migration and ETL/ELT process development in Palantir Foundry.

  • Rate: 180–250 PLN/hour (B2B)
  • Start date: August 2025
  • Location: 100% remote
  • Engagement: full-time
  • Own equipment required

Technical & organizational notes:

  • Work is fully remote and asynchronous: you choose your working hours and location
  • Candidates must use their own hardware (PC and/or laptop)
  • Sprint cycles last 3 weeks, with planning and review handled by the Product Owner
  • Interviews will be scheduled next week; apply now to secure your spot on this impactful, high-profile project!

Requirements:

  • Advanced knowledge of PySpark and Python
  • Hands-on experience with Palantir Foundry (data modeling, data flows, ETL/ELT)
  • Familiarity with Data Lake concepts and tools such as Azure DevOps, Jira, and Confluence
  • Experience working in Scrum-based environments
  • Ability to work independently and consult effectively with project teams and stakeholders
Responsibilities:

  • Design and implement data flows and Data Lake structures using PySpark in Palantir Foundry
  • Develop and maintain documentation in Confluence
  • Estimate and deliver tasks assigned via ticketing systems (e.g., Azure DevOps, Jira)
  • Participate in Scrum ceremonies and collaborate with the Product Owner during sprint reviews
  • Provide consultations and support to team members and end users on implemented solutions
  • Build ETL pipelines in accordance with Merck's architectural standards

Requirements: PySpark, ETL, Data Lake, Confluence, Jira, Python, Data modeling, Azure DevOps, Scrum, Palantir Foundry, ELT

Additionally: Remote work.
