Aleja Pokoju 18D Krakow, Poland
Interested?
Contact our recruiter Volha for more details
Key Responsibilities
- Participate in building, testing, and maintaining efficient data pipelines to support machine learning workflows. This includes data ingestion, preprocessing, transformation, and ensuring data quality and consistency across environments
- Contribute to the creation of intelligent agents capable of autonomous decision-making. Assist in integrating these agents into larger systems or applications, ensuring robust performance and scalability
- Work on extracurricular or research-oriented tasks involving the training, fine-tuning, and evaluation of machine learning models using standard frameworks. Analyze model performance and iterate based on insights and metrics
- Enhance team collaboration through participation in Agile development processes and understanding professional teamwork dynamics
Qualifications
- Proficiency in Python syntax and collections, pip, and Object-Oriented Programming (OOP) basics
- Good understanding of basic linear algebra (vector and matrix operations), derivatives, and function plots
- Knowledge of ML fundamentals: overfitting, gradient descent, classification, regression, metrics, and examples of ML models
- Basic familiarity with Pandas for data manipulation and analysis
- Knowledge of SQL basics and database schema design
- Experience training models using PyTorch or other frameworks; understanding of NLP and prompt engineering fundamentals would be a plus
- Experience developing web services and working with Bash and Linux is desirable
- English proficiency at B2 level or higher
- Capacity to study for 6 hours daily
About us
At Vention, we assemble senior-level, dedicated teams of developers to help fast-growing startups and innovative enterprises drive impact and achieve their goals. We've delivered solutions across multiple domains, including FinTech, PropTech, AdTech, HealthTech, e-commerce, and more.
Our Data team works with clients to create data platforms from scratch or modify and update existing platforms. The tech stack depends on the project, but we mainly use Spark (along with Scala, Python, or Java) – as well as Apache Kafka, Apache Cassandra, Apache Hadoop, Apache Parquet, and AWS.
Internal knowledge transfer activities are conducted within the Data Engineering Family (which includes data practice and data competency) – a space for all of our specialists to share their experiences, learn new skills, host meetups, mentor others, and more.
Benefits
- Enjoy personalized learning with intimate group sizes of 3-15, or opt for a one-on-one experience
- Our dynamic curriculum offers a mix of hands-on practice and essential theory, tailored for groups or adjusted to fit individual needs
- Give yourself at least three months to dive deep into the material in a group, or choose an individual internship length that aligns with international standards
- Discover the industry inside out: this internship provides insights into the IT world, giving you a leg up in your future career
- Receive guidance and support from an experienced mentor throughout your internship journey
- Beyond learning, there's a chance for employment: successful interns might land a full-time job with us after the program
- Dive into real-world projects: get hands-on experience with genuine IT challenges and see firsthand the solutions in action
Engineer your success