Data Infrastructure Engineer ID35383

AgileEngine, Poznań, Poland
18 hours ago

Job Description

AgileEngine is one of the Inc. 5000 fastest-growing companies in the US and a top-3 ranked dev shop according to Clutch. We create award-winning custom software solutions that help companies across 15+ industries change the lives of millions.

If you like a challenging environment where you’re working with the best and are encouraged to learn and experiment every day, there’s no better place, guaranteed! :)

WHAT YOU WILL DO

  • Architect, build, and maintain modern and robust real-time and batch data analytics pipelines;
  • Develop and maintain declarative data models and transformations;
  • Implement data ingestion integrations for streaming and traditional sources such as Postgres, Kafka, and DynamoDB;
  • Deploy and configure BI tooling for data analysis;
  • Work closely with product, finance, legal, and compliance teams to build dashboards and reports to support business operations, regulatory obligations, and customer needs;
  • Establish, communicate, and enforce data governance policies;
  • Document and share best practices with regard to schema management, data integrity, availability, and security;
  • Protect and limit access to sensitive data by implementing a secure permissioning model and establishing data masking and tokenization processes;
  • Identify and communicate data platform needs, including additional tooling and staffing;
  • Work with cross-functional teams to define requirements, plan projects, and execute on the plan.

MUST HAVES

  • 5+ years of engineering and data analytics experience;
  • Strong SQL and Python/Scala skills for complex data analysis;
  • Hands-on experience building automation tooling and pipelines using Python, Scala, Go, or TypeScript;
  • Experience with modern data pipeline and warehouse tools (e.g., Snowflake, Databricks, Spark, AWS Glue);
  • Proficiency with declarative data modeling and transformation tools (e.g., dbt, SQLMesh);
  • Familiarity with real-time data streaming (e.g., Kafka, Spark);
  • Experience configuring and maintaining data orchestration platforms (e.g., Airflow);
  • Background working with cloud-based data lakes and secure data practices;
  • Ability to work autonomously and drive projects end-to-end;
  • Strong bias for simplicity, speed, and avoiding overengineering;
  • Upper-intermediate English level.

NICE TO HAVES

  • Experience with infrastructure-as-code tools (e.g., Terraform);
  • Familiarity with container orchestration (e.g., Kubernetes);
  • Prior experience managing external data vendors;
  • Exposure to Web3 / Crypto data systems;
  • Background working cross-functionally with compliance, legal, and finance teams;
  • Experience driving company-wide data governance or permissioning frameworks.

THE BENEFITS OF JOINING US

  • Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps.
  • Competitive compensation: We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities.
  • A selection of exciting projects: Join projects with modern solutions development and top-tier clients that include Fortune 500 enterprises and leading product brands.
  • Flextime: Tailor your schedule for an optimal work-life balance by having the option of working from home or going to the office, whatever makes you the happiest and most productive.

Your application doesn't end here! To unlock the next steps, check your email and complete your registration on our Applicant Site. Incomplete registration will result in the termination of your application process.
