YOUR TASKS
- Design and implement Azure-based data solutions for large-scale and unstructured datasets.
- Develop and optimize data pipelines using Azure Data Factory, Databricks, or Snowflake.
- Collaborate with solution architects to define and implement best practices in data engineering.
- Ensure data quality, scalability, and security across all Azure-based solutions.
YOUR PROFILE
- Strong experience in data engineering, including working with Azure.
- Strong Python skills for data processing and automation.
- Hands-on experience with at least one of the following: Databricks, Snowflake, or Microsoft Fabric.
- Strong communication skills and very good English language skills.
Nice to have
- Strong SQL skills and experience with database optimization.
- Knowledge of DevOps practices, CI/CD pipelines, and Infrastructure as Code (IaC) tools (Terraform, Bicep).
- Familiarity with key Azure services: Data Lake, Event Hub, Data Factory, Synapse Analytics, Azure Functions.
- Exposure to containerization (Docker, Kubernetes) and cloud security best practices.
- Experience in real-time data processing and streaming technologies such as Kafka or Spark Streaming.
- Hands-on experience with PySpark.
- Certifications such as DP-900, DP-203, AZ-204, or AZ-400.
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: IT Services and IT Consulting
Data Engineer (Krakow / Wroclaw / Warsaw, Poland)