Data Software Engineering
EPAM Systems
Software Engineering
Argentina · Amp. Gabriel Hernández, Ciudad de México, CDMX, Mexico · Remote
Posted on Nov 19, 2025
Responsibilities
- Lead and mentor the Data Software Engineering team, fostering a culture of continuous learning and growth within the unit
- Collaborate with cross-functional teams to deliver high-quality data solutions aligned with project objectives and timelines
- Establish and maintain robust Data Software Engineering processes and practices that emphasize automation, innovation, and efficiency
- Ensure the delivery and maintenance of efficient, scalable data solutions built on AWS services
- Oversee the optimization of data processing and workflow management with Apache Airflow and Apache Spark
- Continuously evaluate industry trends and best practices to refine and implement effective Data Software Engineering strategies
- Provide on-call support for data pipelines and data marts
- Guide the team in designing and deploying REST APIs for seamless data integration and communication
- Work directly with clients to understand their requirements and deliver tailored, effective solutions
- Shape the organization and structure of the team to ensure timely, efficient project delivery
- Collaborate with stakeholders, demonstrating excellent communication and leadership skills
Requirements
- A minimum of 5 years of experience as a Data Software Engineer working on large-scale projects and complex data infrastructures
- At least 1 year of proven leadership experience managing and motivating a team of Data Software Engineers
- Proficiency in Amazon Web Services, including the design and implementation of scalable, effective data solutions
- Strong expertise in Apache Airflow and Apache Spark for efficient data processing and workflow management
- Hands-on experience with at least one CI/CD tool, preferably Jenkins, for the efficient delivery of data pipelines
- Proficiency in Python and SQL for building and maintaining data pipelines and ETL processes
- Experience with Databricks and PySpark for data processing and analysis
- Familiarity with REST APIs for seamless data integration and communication
- Strong analytical skills supporting effective problem-solving and decision-making in complex environments
- Effective client-facing skills, with clear communication and collaboration to achieve project objectives
- Exceptional organizational skills supporting the efficient delivery of projects
- Upper-Intermediate English proficiency for clear collaboration, presentation, and discussion with the team and stakeholders
Nice to have
- Experience with Amazon Redshift for data warehousing and analysis
We offer
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Healthcare benefits
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling and certification courses
- Unlimited access to the LinkedIn Learning library and 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn