Talent.com
Data Engineer with Databricks & PySpark @ Capgemini Polska Sp. z o.o.

Capgemini Polska Sp. z o.o., Warsaw, Poland
6 days ago
Job Description

Join our dynamic Insights & Data team of over 400 professionals delivering cutting-edge, data-driven solutions. We specialize in Cloud & Big Data engineering, building scalable architectures across AWS, Azure, and GCP. You'll be part of a team that manages the full Software Development Life Cycle (SDLC) using modern frameworks, agile methodologies, and DevOps best practices.

Must Have
  • You have hands-on experience in data engineering and are comfortable working independently on moderately complex tasks.
  • You’ve worked with Databricks and PySpark in real-world projects.
  • You’re strong in Python for data transformation and automation.
  • You’ve used at least one cloud platform (AWS, Azure, or GCP) in a production environment.
  • You communicate clearly and confidently in English.

Nice to Have

  • Solid SQL skills and understanding of data modeling.
  • Exposure to CI/CD pipelines, Terraform, or other DevOps tools.
  • Familiarity with streaming technologies (e.g., Kafka, Spark Streaming).
  • Knowledge of cloud data storage solutions (e.g., Data Lake, Snowflake, Synapse).
  • Relevant certifications (e.g., Databricks Certified Data Engineer Associate).

Responsibilities

  • Develop and maintain data processing pipelines using Databricks and PySpark.
  • Collaborate with senior engineers and architects to implement scalable data solutions.
  • Work with cloud-native tools to ingest, transform, and store large datasets.
  • Ensure data quality, consistency, and security in cloud environments.
  • Participate in code reviews and contribute to continuous improvement initiatives.

Requirements: Databricks, PySpark, SQL, Snowflake.

Additionally: Training budget, Private healthcare, International projects, In-house trainings, Free parking.


Data Engineer • Warsaw, Poland