Job Title: Data Engineer – PySpark
Work Mode: Hybrid (3 days a week from the client's office in Kraków)
Project Duration: Long term (B2B contract)
About the Role:
We are looking for an experienced Data Engineer with strong PySpark and Python skills to join a dynamic team. The ideal candidate will have hands-on experience building large-scale data analytics solutions in a big data or cloud environment.
Key Responsibilities:
- Develop applications using PySpark RDDs, DataFrames, and Datasets (not just Spark SQL).
- Work in an Agile environment.
- Work with Airflow, Databricks, and Azure.
Required Experience & Skills:
- Proven experience as a Data Engineer, familiar with ETL, DQ, DM, and reject/recycling concepts.
- 3 years of PySpark coding experience, building applications with RDDs, DataFrames, and Datasets.
- Experience developing Spark applications for processing large datasets.
- Strong Python skills; candidates must demonstrate advanced coding ability.
- Experience with CI/CD pipelines and Agile methodologies is a plus.
- Participation in PySpark coding hackathons or similar events is a bonus.
Additional Skills :
Location: Kraków County, Lesser Poland Voivodeship, Poland