Job Opportunity: Data Engineer (Python, PySpark, SQL)
Location: Cracow, hybrid model (6 office days per month)
Type of contract: B2B
English: B2+
Role Overview:
We are looking for an experienced Data Engineer (Python, PySpark, SQL) to join an international technology team working on large-scale data platforms. The role focuses on building and optimizing data pipelines, developing ETL processes, and ensuring high-quality, efficient code delivery.
This is a hands-on position with responsibility for both development and technical decision-making, including defining best practices, reviewing code, and ensuring solutions meet enterprise standards for security, scalability, and performance.
Responsibilities:
- Design, build, and maintain data pipelines and ETL processes using Python and PySpark.
- Develop CI/CD pipelines to automate deployments and improve data integration processes.
- Define best practices for data engineering and ensure adherence within the team.
- Review code and conduct testing to ensure quality, performance, and scalability.
- Collaborate with architects, business analysts, and stakeholders to design data-driven solutions.
- Resolve technical issues and provide guidance on architecture and stack decisions.
Requirements:
- Strong experience in Python, PySpark, and SQL (including query optimization).
- Hands-on expertise with Oracle PL/SQL and Unix shell scripting.
- Familiarity with CI/CD tools such as Jenkins, Ansible, and GitHub.
- Experience in error handling, troubleshooting, and performance tuning of data solutions.
- Knowledge of Agile and DevOps practices, with the ability to work in cross-functional teams.
- Strong analytical and communication skills, with the ability to interact with both business and IT stakeholders.
What we offer:
- B2B contract.
- Private medical care, life insurance, and access to a Multisport card.
- Stable, long-term projects with full-time engagement.
If you are an experienced Data Engineer who enjoys building scalable data pipelines and shaping technical standards, we would love to hear from you.