Data Engineer Hadoop @ Antal

Antal – Kraków, Poland
Job description

Hadoop Data Engineer (GCP, Spark, Scala) – Kraków / Hybrid

We are looking for an experienced Hadoop Data Engineer to join a global data platform project built in the Google Cloud Platform (GCP) environment. This is a great opportunity to work with distributed systems, cloud-native data solutions, and a modern tech stack. The position is based in Kraków (hybrid model – 2 days per week in the office).

Work model:

  • Hybrid – 2 days per week from the Kraków office (rest remotely)
  • Opportunity to join an international team and contribute to global-scale projects

Must-have qualifications:

  • Minimum 5 years of experience as a Data Engineer / Big Data Engineer
  • Hands-on expertise in Hadoop, Hive, HDFS, Apache Spark, Scala, SQL
  • Solid experience with GCP and services like BigQuery, Dataflow, DataProc, Pub/Sub, Composer (Airflow)
  • Experience with CI/CD processes and DevOps tools: Jenkins, GitHub, Ansible
  • Strong data architecture and data engineering skills in large-scale environments
  • Experience working in enterprise environments and with external stakeholders
  • Familiarity with Agile methodologies such as Scrum or Kanban
  • Ability to debug and analyze application-level logic and performance

Nice to have:

  • Google Cloud certification (e.g., Professional Data Engineer)
  • Experience with Tableau, Cloud DataPrep, or Ansible
  • Knowledge of cloud design patterns and modern data architectures
Responsibilities:

  • Design and build large-scale, distributed data processing pipelines using Hadoop, Spark, and GCP
  • Develop and maintain ETL/ELT workflows using Apache Hive, Apache Airflow (Cloud Composer), Dataflow, and DataProc
  • Work with structured and semi-structured data using BigQuery, PostgreSQL, and Cloud Storage
  • Manage and optimize HDFS-based environments and integrate them with GCP components
  • Participate in cloud data migrations and real-time data processing projects
  • Automate deployment, testing, and monitoring pipelines (CI/CD using Jenkins, GitHub, Ansible)
  • Collaborate with architects, analysts, and product teams in an Agile / Scrum setup
  • Troubleshoot and debug complex data logic at the code and architecture level
  • Contribute to cloud architecture patterns and data modeling decisions

Requirements: Big Data, Hadoop, Hive, HDFS, Apache Spark, Scala, SQL, GCP, BigQuery, Pub/Sub, Airflow, DevOps, Jenkins, GitHub, Ansible, Tableau, Google Cloud
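For illustration only, below is a minimal sketch of the kind of Spark / Scala batch job described in the responsibilities above: it reads a partitioned Hive table, aggregates it, and writes Parquet output to Cloud Storage for downstream loading (for example into BigQuery). The job, table, column, and bucket names are hypothetical placeholders, not details of the actual project.

  // Minimal sketch only; table, bucket, and column names are hypothetical.
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions._

  object DailyEventAggregation {
    def main(args: Array[String]): Unit = {
      val runDate = args(0) // e.g. "2024-01-31"

      val spark = SparkSession.builder()
        .appName("daily-event-aggregation")
        .enableHiveSupport() // use the existing Hive metastore for source tables
        .getOrCreate()

      // Read one partition of a Hive table stored on HDFS / Cloud Storage
      val events = spark.table("analytics.events")
        .filter(col("event_date") === runDate)

      // Aggregate events per user and event type
      val daily = events
        .groupBy(col("user_id"), col("event_type"))
        .agg(count("*").as("event_count"))

      // Write Parquet output to Cloud Storage for downstream loading
      daily.write
        .mode("overwrite")
        .parquet(s"gs://example-bucket/daily_events/event_date=$runDate")

      spark.stop()
    }
  }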
