Required Skills
- 3-5 years of experience in data engineering, including designing, optimizing, and maintaining data processing pipelines;
- Hands-on experience with AWS Glue, Redshift, and batch processing;
- Proficiency in SQL and Python, and experience with Spark (Scala or PySpark);
- Strong understanding of ETL design principles, data warehousing best practices, and database modeling techniques.
Responsibilities
- Review existing RDS and DynamoDB structures, identifying inconsistencies;
- Assess ETL feasibility using AWS Glue with ingestion into Redshift (see the sketch after this list);
- Conduct performance and scalability assessments of batch processing;
- Work with the Data Architect to establish data modeling best practices.
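For illustration, here is a minimal sketch of the kind of Glue-to-Redshift ingestion job described above, written in Python (PySpark). All database, table, connection, and bucket names are hypothetical placeholders, not project specifics:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard AWS Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog
# (e.g. crawled from RDS or DynamoDB); names are hypothetical.
source = glue_context.create_dynamic_frame.from_catalog(
    database="payments_db",
    table_name="transactions",
)

# Batch-load the frame into Redshift through a pre-configured Glue
# connection; Redshift staging runs through the S3 temp directory.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=source,
    catalog_connection="redshift-conn",
    connection_options={"dbtable": "public.transactions", "database": "analytics"},
    redshift_tmp_dir="s3://example-temp-bucket/glue/",
)

job.commit()
```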
We Offer
- A friendly international team;
- Comfortable work schedule;
- Zero bureaucracy;
- Pleasant working atmosphere;
- Interesting projects and challenging tasks;
- Opportunities for self-realization and stable professional career growth;
- Optional partial compensation for English language courses.
About the Project
The payment company faces several critical data challenges that impact both day-to-day operations and long-term scalability. From production overload to security compliance, addressing these issues is essential for ensuring stable growth, efficient reporting, and a modern data platform capable of supporting future enhancements.