Senior Data Software Engineer
EPAM Systems
Senior Data Software Engineer Description
We are seeking a highly skilled Senior Data Software Engineer to join our team and spearhead efforts in creating and optimizing data integration and processing pipelines that power Enterprise Data Products. This position focuses on a transformative project to transition from Azure Synapse Data Warehouse to a modern Databricks-based architecture, incorporating Dremio for semantic interfaces and implementing domain-driven data modeling.
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.
Responsibilities
- Develop efficient, high-performance data pipelines within the Databricks Medallion architecture using Delta Live Tables (DLT)
- Integrate and implement custom Data Quality components based on provided designs
- Conduct data modeling and apply domain-driven design to enhance the enterprise's new data platform and products
- Build performant ETL pipelines to transform and process large datasets
- Create and maintain semantic models in Dremio with detailed guidance
- Optimize data systems to ensure scalability and reliability in a production environment
- Collaborate with cross-functional teams to rationalize existing Power BI reports and align them with rebuilt data products on the new platform
- Troubleshoot, debug, and enhance the functionality of data integration processes
- Ensure proper documentation of workflows, processes, and data transformations
- Actively contribute to the improvement of data engineering practices and tooling across the organization
Requirements
- 3+ years of experience in software engineering or data engineering roles
- Proficiency in Azure Databricks, including DLT and Unity Catalog
- Competency in PySpark, or hands-on Spark experience combined with strong Python fundamentals
- Background in Azure Synapse Analytics
- Understanding of designing and building data pipelines using modern cloud architecture
- Skills in data modeling and domain-driven design principles
- Familiarity with Dremio or similar semantic layer tools
- Demonstrated experience building performant ETL pipelines for large-scale data systems
- Capability to work collaboratively within a multi-disciplinary team
- Strong problem-solving skills and an ability to deliver high-quality solutions under tight deadlines
- Excellent command of written and spoken English (B2+ level)
Nice to have
- Proficiency in Dremio, including creating and optimizing semantic interfaces
- Experience with Power BI report rationalization and aligning reports to updated data products
- Knowledge of custom Data Quality frameworks or similar data validation tools
We offer
- Connectivity Bonus (15,000 ARS paid with the salary receipt at the end of each month as a non-wage concept)
- Medicina Prepaga (covers the employee and their immediate family)
- Paternity Leave (two days in addition to those established by law, for a total of 4 days)
- Discounts card
- English Training (English lessons, twice per week)
- Training Program (Access to multiple customized training plans according to the needs of each role within the company)
- Marriage bonus (the company doubles the statutory allowance offered by ANSES)
- Referral Program (a referral bonus is paid when an employee's referral joins the company)
- External Agreements and Discounts
- Vacations: 14 calendar days a year
By applying to this role, you agree that your personal data may be used as set out in EPAM's Privacy Notice and Policy.