Data Software Engineering
EPAM Systems
Software Engineering
Portugal · Remote
Posted on Nov 19, 2025
Responsibilities
- Build, optimize, and maintain ETL pipelines using Hadoop ecosystem tools (HDFS, Hive, Spark)
- Collaborate with cross-functional teams to design and implement efficient data workflows
- Perform data modeling, quality checks, and system performance tuning
- Contribute to modernization projects, including cloud and Databricks integration
- Leverage cloud services to enhance data infrastructure scalability and reliability
- Ensure compliance with best practices for data governance and security
Requirements
- 3+ years of experience in software engineering with a focus on data processing
- Proficiency in programming languages such as Java, Scala, or Python
- Expertise in data processing frameworks like Spark
- Hands-on experience with the Hadoop ecosystem, including HDFS, Hive, Impala, Oozie, and Airflow
- Experience with AWS services such as S3, Athena, EMR, Redshift, Glue, or Lambda
- Familiarity with data and AI platforms such as Databricks
- Strong understanding of ETL pipeline development and optimization
- Experience in data modeling, quality assurance, and system performance tuning
- Effective collaboration and problem-solving skills
- B2+ English level
We offer
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Healthcare benefits
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling and certification courses
- Unlimited access to the LinkedIn Learning library with 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn