Your tasks
- Performance tuning
- Build and maintain ETL pipelines
- Test and document existing pipelines
- Implement tools and processes for data-related projects
- Promote development standards
Your skills
- Openness to working in a hybrid model (2 days from the office per week)
- Openness to visiting the client's office in Cracow once every two months (for 3 days)
- At least 4 years of experience working on data engineering topics
- Strong Python, PySpark & SQL skills
- Good understanding of data warehousing concepts
- Experience with the GCP data stack (BigQuery, Dataproc, Composer)
- Experience and expertise in data integration and data management with high data volumes
- Experience working in an agile continuous integration / DevOps paradigm and toolset (Git, GitHub, Jenkins, Sonar, Nexus, Jira)
- Experience with different database structures, including Postgres, SQL, and Hive
Nice to have
- Experience in working with big data: Spark, Hadoop, Hive
- Orchestration: Control-M, Airflow
- Scala
- Scripting: Bash, Python
We offer you
- Working in a highly experienced and dedicated team
- Extra benefits package that can be tailored to your personal needs (private medical coverage, sport & recreation package, lunch subsidy, life insurance, etc.)
- Contract of employment or B2B contract
- Online training and certifications suited to your career path
- Social events
- Access to an e-learning platform
- Ergonomic and functional working space