Big Data Engineer (Scala, Spark) - Remote @ Link Group

Link Group · Remote, Poland
Job description

At Link Group, we specialize in building tech teams for Fortune 500 companies and some of the world's most exciting startups. Our mission is to connect talented professionals with opportunities that align with their skills, interests, and career aspirations.

We are currently looking for a Big Data Engineer with expertise in Scala and Spark to join our team and contribute to innovative, large-scale data processing projects.

About the Project

The project focuses on building a high-performance data platform for the finance and stock exchange industry, processing massive datasets to enable real-time analytics, reporting, and decision-making. You will work with modern big data tools and frameworks to ensure scalability and efficiency.

Tech Stack

  • Scala
  • Apache Spark
  • Hadoop ecosystem (HDFS, Hive, HBase)
  • Kafka
  • SQL/NoSQL databases
  • Cloud platforms (AWS, Azure, GCP)
  • Docker, Kubernetes
  • Agile development methodologies

What We Offer

  • Tailored opportunities to match your professional interests and goals.
  • A leadership role in a dynamic and collaborative work environment.
  • Access to exciting and diverse projects for global clients.
  • Competitive compensation aligned with your expertise.
  • Ongoing opportunities for professional growth and development.
Apply today and join us at Link Group to make an impact!

Must-Have Qualifications

  • 3+ years of experience in big data engineering.
  • Proficiency in Scala and experience with Apache Spark.
  • Strong understanding of distributed data processing and frameworks like Hadoop.
  • Experience with message brokers like Kafka.
  • Hands-on experience with SQL/NoSQL databases.
  • Familiarity with version control tools like Git.
  • Solid understanding of data architecture and ETL processes.
  • Good command of English.
Nice to Have

  • Experience with cloud platforms such as AWS, Azure, or GCP.
  • Knowledge of containerization tools like Docker and orchestration with Kubernetes.
  • Familiarity with CI/CD pipelines for big data workflows.
  • Experience in optimizing Spark jobs for large-scale data processing.
  • Academic background in Computer Science, Data Engineering, or a related field.
Responsibilities

  • Design, implement, and optimize big data processing pipelines using Scala and Spark.
  • Work with massive datasets, ensuring scalability and performance.
  • Collaborate with data scientists and analysts to develop data solutions.
  • Monitor and troubleshoot data workflows to ensure reliability and efficiency.
  • Integrate with data sources and ensure proper data ingestion and transformation.
  • Research emerging big data technologies to improve the architecture.

Requirements: Big Data, Scala, Spark, Hadoop, Kafka, SQL, NoSQL, Git, ETL, cloud platforms (AWS, Azure, GCP), Docker, Kubernetes, CI/CD pipelines
Tools: Agile, Scrum
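As a rough illustration of the pipeline work described above, here is a toy sketch in plain Scala. The `Trade` schema and field names are hypothetical, and plain collections stand in for Spark: a real job would express the same filter/groupBy/aggregate shape on a distributed Dataset or DataFrame.

```scala
// Toy ETL sketch using plain Scala collections (hypothetical Trade schema).
// In a real Spark job the same cleanse -> group -> aggregate shape would
// run on a distributed Dataset rather than an in-memory Seq.
case class Trade(symbol: String, price: Double, qty: Long)

object PipelineSketch {
  // Transform step: drop bad records, then aggregate notional value per symbol.
  def notionalBySymbol(trades: Seq[Trade]): Map[String, Double] =
    trades
      .filter(t => t.price > 0 && t.qty > 0)                          // cleanse invalid rows
      .groupBy(_.symbol)                                              // group (a shuffle in Spark)
      .map { case (sym, ts) => sym -> ts.map(t => t.price * t.qty).sum } // aggregate

  def main(args: Array[String]): Unit = {
    val trades = Seq(Trade("AAPL", 10.0, 5), Trade("AAPL", 20.0, 1), Trade("MSFT", 0.0, 3))
    println(notionalBySymbol(trades)) // Map(AAPL -> 70.0)
  }
}
```

The same logic maps almost line-for-line onto Spark's Dataset API (`filter`, `groupByKey`, aggregation), which is why collection-style sketches like this are a common way to reason about a pipeline before distributing it.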
