Location: Kraków, Poland (Hybrid – 2 days per week in office)
Employment type: Full-time, B2B contract
Rate: 190–200 PLN per hour
Industry: Financial Services
At Antal, we connect top tech talent with exceptional career opportunities. For our client – a global financial institution and technology leader – we are currently looking for an Ingestion Data Engineer to join an innovative Environmental, Social & Governance (ESG) data initiative within the Data & Analytics Office.
The engineering team builds and maintains large-scale data ingestion and processing pipelines that power the bank’s ESG analytics platforms. You will work in a multidisciplinary environment, collaborating with data analysts, architects, and engineers to design robust, scalable, and secure data solutions using Apache Spark (Scala) and the Google Cloud Platform.
Tech Stack
- Hadoop, Hive, HDFS, Apache Spark, Scala (see the ingestion sketch below)
- SQL and distributed data processing
- Airflow and Jenkins for workflow orchestration and CI/CD
- GCP services (BigQuery, Dataflow, Dataproc, Cloud Storage, Composer)
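To give a concrete sense of the day-to-day work, here is a minimal sketch of the kind of Spark (Scala) ingestion job this stack implies. It assumes a Hadoop cluster with a Hive metastore; the HDFS path, key column, and table name are hypothetical placeholders, not the client's actual schema:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

// Minimal ESG-style ingestion job: land raw CSV files from HDFS into a
// Hive-managed Parquet table. All names below are illustrative.
object EsgIngestionJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("esg-ingestion")
      .enableHiveSupport() // Hive metastore access on the Hadoop cluster
      .getOrCreate()

    // Read raw files from a hypothetical HDFS landing zone.
    val raw = spark.read
      .option("header", "true")
      .csv("hdfs:///data/esg/landing/ratings/*.csv")

    // Basic quality gate: drop records missing the (assumed) key column.
    val cleaned = raw.na.drop(Seq("entity_id"))

    // Append into a Hive table for downstream analytics.
    cleaned.write
      .mode(SaveMode.Append)
      .format("parquet")
      .saveAsTable("esg.ratings_raw")

    spark.stop()
  }
}
```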
Your Responsibilities
- Design, develop, and optimize data ingestion and processing pipelines using Spark (Scala) and Hadoop ecosystem tools.
- Orchestrate and automate workflows using Airflow and Jenkins within a CI/CD environment.
- Migrate and process data using Google Cloud services such as BigQuery, Dataflow, Dataproc, and Composer (see the sketch after this list).
- Collaborate with cross-functional teams to translate business logic into scalable data solutions.
- Ensure data quality, reliability, and performance across distributed environments.
- Contribute to architecture design and continuous improvement initiatives within the data platform.
- Support debugging, testing, and deployment activities as part of the DevOps lifecycle.

Requirements
Data engineering, Hadoop, Hive, HDFS, Spark, Scala, SQL, Airflow, Jenkins, GCP (BigQuery, Dataflow, Dataproc, Cloud Storage, Composer), Git, GitHub, DevOps, Ansible, Jira
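For the GCP migration responsibility above, a typical step is pushing a curated Hive table into BigQuery. A hedged sketch using the spark-bigquery connector as shipped with Dataproc; the dataset, table, and staging-bucket names are hypothetical:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative Hive-to-BigQuery copy, e.g. one step of a Dataproc job
// orchestrated from Airflow/Composer. All names are placeholders.
object HiveToBigQuery {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-to-bigquery")
      .enableHiveSupport()
      .getOrCreate()

    // Curated table produced by the ingestion pipeline (hypothetical).
    val curated = spark.table("esg.ratings_raw")

    // The connector stages data in Cloud Storage before loading it into
    // BigQuery, hence temporaryGcsBucket (bucket name is a placeholder).
    curated.write
      .format("bigquery")
      .option("temporaryGcsBucket", "example-esg-staging")
      .mode("append")
      .save("esg_analytics.ratings")

    spark.stop()
  }
}
```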