MLOps Engineer in Samsung Ads Project

Samsung R&D Institute Poland, Warszawa, mazowieckie, Polska
Job description

Workplace: Warszawa

Technologies we use

Expected

  • Terraform
  • Docker
  • Kubernetes
  • TensorFlow
  • PyTorch
  • Redis
  • Prometheus
  • Grafana
  • Python
  • Kafka
  • Spark
  • Flink
  • Unix
  • Linux

Optional

  • Protobuf
  • FlatBuffers
  • TensorRT
  • Seldon
  • ONNX
  • Cap’n Proto

Operating system

  • Linux

About the project

Samsung Ads is an advanced advertising ecosystem spanning hundreds of millions of smart devices across TV, mobile, desktop, and beyond. The project we are recruiting for focuses on enabling brands to connect with Samsung TV audiences by building the world’s smartest advertising platform. We use machine learning algorithms in advertising campaigns to enhance targeting, personalization, and optimization. The goal is to deliver the right message to the right audience at the right time, resulting in higher engagement and conversion rates.

Audience building is a crucial aspect of effective marketing, especially in today's digital landscape, where targeting specific groups of people is essential for success.

During project onboarding you will get to know our products and services so that you can identify the ideal customer persona, considering factors such as demographics, psychographics, and purchasing power.

Being part of an international company such as Samsung, you will work on challenging projects with stakeholders and teams located around the globe.

You will dive deep into the Samsung Advertising Galaxy, working in exciting domains such as bidding, pacing, and performance-based advertising, as well as recommendations and churn prediction / prevention.

As an MLOps engineer on the Samsung Ads team, you will have access to unique Samsung proprietary data to address existing product challenges and build end-to-end solutions with real-world impact. You will also work with talented engineers and top-notch machine learning researchers on exciting projects and state-of-the-art technologies.

In short, you will be responsible for designing, setting up, and administering the infrastructure for deploying, monitoring, and maintaining ML models.

Technologies in use

  • Python
  • Golang
  • REST
  • Spark
  • Snowflake / Snowpark
  • GitHub Actions
  • ArgoCD
  • Airflow
  • Kubernetes
  • Grafana / Prometheus
  • Terraform
  • TensorFlow
  • PyTorch
  • Hadoop
  • Aerospike / Redis

Your responsibilities

  • Design and develop highly scalable machine learning infrastructure to support high throughput and low latency.
  • Serve ML models to downstream applications, ensuring that they are accessible, scalable, and secure.
  • Manage model versions and ensure that the correct version is served to clients. Implement a rollback mechanism in case of issues with the current model version (a minimal sketch of this idea follows this list).
  • Implement monitoring and observability tools to track the performance, health, and usage of the platform and its components. Monitor the performance of the deployed models, addressing issues such as concept drift, data drift, and model degradation over time. Identify and resolve issues promptly, ensuring that the system remains stable and responsive.
  • Develop, test, deploy, and maintain data and model training pipelines to support our ML products.
  • Integrate the serving infrastructure with other systems, such as data pipelines, monitoring tools, and alerting systems. Ensure seamless communication and coordination among these systems.
  • Constantly review and optimize the ML serving system. Strive to improve efficiency, reliability, and speed, looking for opportunities to simplify and automate tasks while maintaining high standards of quality.
  • Research the latest machine learning serving technologies (e.g., model compilers, GPU deployment, and inference as a service), and keep up-to-date with industry trends and developments.
  • Experiment with new scalable machine learning serving architectures tailored to our environment and create quick prototypes / proof-of-concepts.
  • Streamline model deployment, unit testing, integration testing, stress testing and shadow testing.
  • Enhance the online A / B testing framework.
  • Work with ML engineers to deploy and serve production-grade, state-of-the-art machine learning models at scale.
  • Depending on your skills and experience, you will have a chance to technically lead people.
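
To make the model-versioning and rollback responsibility above concrete, here is a minimal, self-contained Python sketch. It is an illustration only, assuming an in-memory registry; names such as `ModelRegistry` and the stand-in predict functions are hypothetical and not part of the project's actual serving stack.

```python
"""Minimal sketch of model version management with rollback.

Hypothetical illustration only; a real serving stack (Seldon, Triton, etc.)
handles versioning and rollback differently and at far larger scale.
"""
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ModelRegistry:
    """Tracks registered model versions and which one currently serves traffic."""
    # Maps a version tag to a predict callable (stands in for a loaded model).
    versions: Dict[str, Callable[[list], float]] = field(default_factory=dict)
    # History of promoted versions; the last entry is the one serving traffic.
    history: List[str] = field(default_factory=list)

    def register(self, tag: str, predict_fn: Callable[[list], float]) -> None:
        self.versions[tag] = predict_fn

    def promote(self, tag: str) -> None:
        """Make `tag` the version served to clients."""
        if tag not in self.versions:
            raise KeyError(f"unknown model version: {tag}")
        self.history.append(tag)

    def rollback(self) -> str:
        """Revert to the previously promoted version and return its tag."""
        if len(self.history) < 2:
            raise RuntimeError("no previous version to roll back to")
        self.history.pop()
        return self.history[-1]

    def predict(self, features: list) -> float:
        """Serve a prediction with whichever version is currently active."""
        return self.versions[self.history[-1]](features)


if __name__ == "__main__":
    registry = ModelRegistry()
    registry.register("v1", lambda x: 0.1 * sum(x))  # stand-in for a real model
    registry.register("v2", lambda x: 0.2 * sum(x))
    registry.promote("v1")
    registry.promote("v2")
    print("v2 prediction:", registry.predict([1, 2, 3]))
    registry.rollback()                              # v2 misbehaves -> back to v1
    print("after rollback:", registry.predict([1, 2, 3]))
```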

Our requirements

  • Degree in Computer Science or related fields.
  • At least 2 years of proven industry experience in microservices.
  • Experience with Infrastructure as Code (Terraform), cloud solutions, and orchestration tools (AWS, e.g., SageMaker, Airflow MWAA, Step Functions / Lambda, EC2, EMR).
  • Familiarity with CI / CD (e.g., GitHub Actions, ArgoCD), ETL, big data tools, mainstream ML frameworks (e.g., MapReduce, Spark, Flink, Kafka, Unix / Linux with shell, Docker, Kubernetes, TensorFlow, PyTorch, etc.), and communication protocols (gRPC, HTTP/2).
  • Experience working with real-time monitoring / alerting components (e.g., Prometheus / Grafana / AWS QuickSight); see the monitoring sketch after this list.
  • Experience in Python and Go (preferable).
  • Experience with distributed cache systems, e.g., Redis / Aerospike.
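
As a small illustration of the real-time monitoring mentioned above, the following Python sketch exposes prediction metrics with the prometheus_client library. The metric names and the dummy `predict` function are assumptions made for the example; the project's actual dashboards and alerting live in Prometheus / Grafana.

```python
"""Minimal sketch: exposing inference metrics to Prometheus.

Assumes the `prometheus_client` package is installed; metric names and the
dummy predict() are illustrative, not the project's real instrumentation.
"""
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("predictions_total", "Number of predictions served")
LATENCY = Histogram("prediction_latency_seconds", "Prediction latency in seconds")


def predict(features):
    """Stand-in for real model inference."""
    time.sleep(random.uniform(0.001, 0.01))
    return sum(features)


@LATENCY.time()           # records how long each call takes
def serve(features):
    PREDICTIONS.inc()      # counts served predictions
    return predict(features)


if __name__ == "__main__":
    start_http_server(8000)  # metrics scraped by Prometheus at :8000/metrics
    while True:
        serve([random.random() for _ in range(8)])
```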

Optional

  • At least 3 years of industry experience in low-latency, high-throughput distributed microservices and integration (e.g., WS / REST).
  • Extensive experience with system architecture design for machine learning.
  • Knowledge of testing frameworks for online A / B testing, canary, and blue-green deployments (see the A / B bucketing sketch after this list).
  • Knowledge of ML serving technologies such as Seldon, Triton, ONNX, ONCL, TensorRT.
  • Experience with the advertising industry, recommendation systems or real-time bidding (RTB) ecosystem.
  • Knowledge of other OOP languages.
  • Knowledge of SQL scripting.
  • Knowledge of serialization protocols (Protobuf, FlatBuffers, Cap’n Proto).
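
For a flavor of the online A / B testing work, here is a minimal sketch of deterministic traffic bucketing by hashing a user ID. The experiment name and the 10% treatment share are made up for the example; a production framework would add exposure logging, guardrails, and statistical analysis.

```python
"""Minimal sketch: deterministic A / B bucketing by hashing a user ID.

Experiment name, variants, and the 90/10 split are illustrative only.
"""
import hashlib


def bucket(user_id: str, experiment: str, treatment_share: float = 0.1) -> str:
    """Assign a user to 'control' or 'treatment', stably across requests."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex chars to a uniform value in [0, 1) and compare to the split.
    ratio = int(digest[:8], 16) / 2**32
    return "treatment" if ratio < treatment_share else "control"


if __name__ == "__main__":
    # The same user always lands in the same bucket for a given experiment.
    for uid in ("user-1", "user-2", "user-3"):
        print(uid, bucket(uid, "new_ranking_model"))
```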

This is how we work on a project

  • integration tests
  • performance tests
  • testing environments
  • unit tests

What we offer

  • Friendly atmosphere focused on teamwork
  • Wide range of training courses and strong support in developing algorithmic skills
  • Opportunity to work on multiple projects
  • Working with the latest technologies on the market
  • Monthly integration budget
  • Possibility to attend local and foreign conferences
  • Flexible working hours
  • PC workstation / Laptop + 2 external monitors
  • OS: Windows or Linux
  • Private medical care (possibility to add family members for free)
  • Multisport card
  • Life insurance
  • Lunch card
  • Variety of discounts (Samsung products, theaters, restaurants)
  • Unlimited free access to Copernicus Science Center for you and your friends
  • Possibility to test new Samsung products
  • Office in Warsaw Spire / Quattro Business Park
  • Very attractive relocation package

Benefits

  • sharing the costs of sports activities
  • private medical care
  • sharing the costs of foreign language classes
  • life insurance
  • corporate products and services at discounted prices
  • integration events
  • dental care
  • no dress code
  • leisure zone
  • pre-paid cards
  • baby layette
  • charity initiatives
  • unlimited free access to Copernicus Science Center
  • mentoring program
  • psychological support
  • possibility to test new Samsung products

Samsung R&D Institute Poland

If you share our faith in the power of technology to change reality, work with passion, are curious about the world, and still want to learn, this is the place for you - we know what kind of working conditions foster your development. We are looking for people who can turn bold visions of the future into projects and products that will serve millions of people around the world.
