EPAM Systems
Mexico City, Mexico
Posted on Nov 19, 2025
Responsibilities
- Design and deliver data warehouse architectures and establish reliable ETL/ELT workflows to maintain data quality and accessibility
- Build and refine data pipelines leveraging AWS technologies such as Glue, Redshift, Athena, DynamoDB, and Amazon RDS
- Develop and maintain automation scripts for pipeline operations, including parameterized Glue scripts that generate Parquet files
- Oversee the transition of data assets and workflows to the cloud, enabling the retirement of multi-platform BI solutions
- Facilitate smooth data integration across a variety of platforms, sources, and formats to drive the adoption of a Data Mesh model
- Work closely with engineers, analysts, and administrators to capture business requirements and deliver tailored data solutions
- Collaborate with business stakeholders to clarify data needs, estimate project scope, and construct pipelines for Data Lake ingestion
- Contribute to the creation of Redshift data models that power BI visualizations in Tableau Cloud or Server
- Strengthen the BI environment while enabling the development of advanced ML, AI, and deep learning solutions
- Prepare and maintain documentation for database structures, ETL/ELT processes, security models, and architectural diagrams
- Uphold rigorous data governance practices to guarantee data reliability, protection, and accessibility
- Construct Glue data pipelines that ingest data from APIs, transform it, and load it into cloud storage
- Write parameterized Glue scripts that convert JSON source data into Parquet files and automate their storage in the cloud
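To illustrate the extract-transform-load shape these responsibilities describe, here is a minimal, library-free sketch. All record and field names (`order_id`, `amount`) are hypothetical; in the actual role this logic would typically live in an AWS Glue (PySpark) job that writes Parquet to an S3 data-lake prefix rather than returning JSON.

```python
import json


def extract(raw_json: str) -> list[dict]:
    """Extract: parse records pulled from a source API (simulated here)."""
    return json.loads(raw_json)


def transform(records: list[dict]) -> list[dict]:
    """Transform: normalize field names and drop incomplete rows."""
    cleaned = []
    for record in records:
        if record.get("order_id") is None:
            continue  # basic data-quality rule: skip rows missing a key
        cleaned.append({
            "order_id": record["order_id"],
            "amount_usd": round(float(record.get("amount", 0.0)), 2),
        })
    return cleaned


def load(records: list[dict]) -> str:
    """Load: serialize the curated records; a Glue job would instead
    write Parquet partitions to an S3 location."""
    return json.dumps(records)


raw = '[{"order_id": 1, "amount": "19.991"}, {"amount": "5"}]'
print(load(transform(extract(raw))))  # → [{"order_id": 1, "amount_usd": 19.99}]
```

The three stages are deliberately separate functions so each can be unit-tested and parameterized independently, which mirrors how Glue job arguments drive reusable pipeline scripts.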
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- At least 5 years of hands-on experience in data engineering
- Minimum one year of experience leading and managing technical teams
- Demonstrated expertise with AWS data services, including S3, Glue, Redshift, Lambda, DynamoDB, Athena, and RDS
- In-depth knowledge of ETL/ELT processes, data pipeline design, and contemporary data warehousing principles
- Proficiency in programming languages such as Python, SQL, or Scala
- Experience building, deploying, and tuning data models in Redshift or comparable platforms
- Strong analytical and troubleshooting abilities with a keen eye for detail
- Excellent interpersonal skills and a proven ability to work effectively with both technical and business teams
- English proficiency at B2+ level or higher, both written and spoken
Nice to have
- Experience with data visualization platforms, particularly Tableau Server or Tableau Cloud
- Background in implementing Data Mesh architectures
- Understanding of machine learning, deep learning, AI, and IoT technologies
- Familiarity with big data tools like Apache Spark and Hadoop
- Knowledge of JSON schema design and managing cloud storage
- Hands-on experience with AWS Lambda and Amazon EC2
We offer
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Healthcare benefits
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling, and certification courses
- Unlimited access to the LinkedIn Learning library and 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn