Sollers Consulting is more than a consultancy and software integrator. Founded in 2000, we have made it our mission to transform the financial & insurance industries by helping them adapt to new technologies.
The power of collaboration and the limitless potential of Sollers people are at the root of our success. We strive to be the best at what we do, both in the eyes of our team and our customers. We put people at the heart of every project.
Join us and make Sollers driven by… you!
About Data Competency at Sollers:
Versatility is the keyword when it comes to what we do in IT. Data Competency is one of our initiatives to support the digital transformation of the financial sector, enabling our customers to become truly data-driven companies. Our mission is to build the capabilities needed to address the financial sector's data-related needs.
As a Data Engineer, you will be responsible for building and optimizing our clients' data pipelines and data flows. You will work on various data initiatives and ensure optimal data delivery. You are the ideal candidate if you have experience in data warehousing and data wrangling. If you enjoy building data systems from scratch and modifying existing ones, you will have the opportunity to do so at Sollers.
About the role. You will:
Build scalable data processing pipelines.
Identify potential improvements to current data processing solutions.
Research and advise on the use of appropriate tools and technologies.
Recommend enhancements to the existing data architecture.
Monitor performance and optimize data processing flows.
Collaborate with Business Analysts, Subject Matter Experts, and Tech Leads to ensure business requirements are fully reflected in both the design and development phases.
Address crucial aspects such as data privacy, security, regulatory compliance, data integrity, and availability.
Work directly with our clients as an active member of an agile project team.
About the skills and tools. You will use:
Our partners: Snowflake, Databricks
Cloud providers: Azure, AWS
ETL / ELT data pipelines using batch and stream processing
DWH, data lakes, data lakehouses
BI & predictive analytics; AI / ML
RDBMS, CDC, APIs, data connectors, files (structured, semi-structured and unstructured)
SQL, Python
DataOps
Data architecture, data modeling, design patterns
About the requirements. You need:
At least 3 years of experience in data engineering.
Experience with ETL / ELT processes and building data processing pipelines.
Experience working with various data sources (RDBMS, CDC, APIs, data connectors, structured, semi-structured and unstructured files).
Experience with various data transformation and processing tools (e.g. Apache Airflow, Apache NiFi, Kafka, Fivetran).
Knowledge of data modeling concepts.
Proficiency in SQL and good knowledge of at least one programming language (Python, Java, Scala, R).
Good command of the English language, at least at B2 / C1 level.
Effective communication skills and the ability to work well in a team.
Availability to work full time.
Eligibility to work in the European Union.
About the wishes. Nice-to-haves:
Previous work experience in the insurance industry.
Practical experience with Snowflake or Databricks.
Practical experience with DWH data modeling.
Familiarity with one or more Cloud data stacks (AWS, Azure, GCP).
Experience with pipeline orchestration tools (e.g. Airflow).
Familiarity with containerization, CI / CD, and version control.
Fluency in French or German.
About our promises. We can offer:
Data Engineer • Lublin, Poland