Data Engineer
Experience: 5 to 8 Years
Overview
We are seeking experienced Data Engineers to join our Hyderabad-based team to build robust data pipelines and ETL processes on Teradata. The role is central to ensuring data quality, accessibility, and performance in our enterprise data warehouse. The ideal candidate brings expertise in Teradata tools and scripting to support analytics and business intelligence initiatives in a collaborative, high-impact environment.
Key Responsibilities
- Design, develop, and optimize ETL processes using Teradata ETL tools to extract, transform, and load large-scale data.
- Write and tune complex Teradata SQL queries for data extraction, manipulation, and reporting.
- Utilize BTEQ (Basic Teradata Query) scripts to automate batch jobs and data processing workflows.
- Perform data modeling, including creating and maintaining semantic layers to improve data accessibility (preferred skill).
- Develop and maintain Unix shell scripts for basic automation of data workflows and job scheduling.
- Collaborate with data analysts, scientists, and stakeholders to understand requirements and deliver reliable data solutions.
- Conduct data validation, profiling, and quality checks to ensure accuracy and integrity.
- Troubleshoot performance issues in Teradata environments and implement optimizations.
- Integrate data from various sources into the Teradata warehouse, adhering to data governance standards.
- Document processes, scripts, and architectures to support team knowledge sharing and maintenance.
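The BTEQ automation and shell-scripting responsibilities above can be sketched together in one small script. This is an illustration only: the TDPID `tdprod`, user `etl_user`, and the `dw.sales_fact` / `stg.sales_stage` tables are placeholder names, not references to any real environment.

```shell
#!/bin/sh
# Minimal sketch of a nightly batch job: a Unix wrapper that generates
# and (in a real environment) runs a BTEQ script. Placeholder names only.

LOGFILE="/tmp/daily_load_$(date +%Y%m%d).log"

# Write the BTEQ script. The .LOGON line would carry a real TDPID and
# credentials (typically read from a secured file, never hard-coded).
cat > /tmp/daily_load.bteq <<'EOF'
.LOGON tdprod/etl_user,placeholder_password;
.SET ERROROUT STDOUT;

-- Load yesterday's staged rows into the target table.
INSERT INTO dw.sales_fact
SELECT * FROM stg.sales_stage
WHERE load_date = CURRENT_DATE - 1;

.IF ERRORCODE <> 0 THEN .QUIT 8;
.LOGOFF;
.QUIT 0;
EOF

# In a real environment the wrapper would then execute:
#   bteq < /tmp/daily_load.bteq > "$LOGFILE" 2>&1
# Here we only confirm the script was generated.
grep -c '.LOGON' /tmp/daily_load.bteq
```

A scheduler such as cron would call this wrapper, and the `.IF ERRORCODE` check lets the job signal failure back to the scheduler via the exit code.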
Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5 to 8 years of experience in data engineering, with a focus on Teradata-based ETL and data warehousing.
- Proficiency in Teradata load utilities (e.g., FastLoad, MultiLoad, Teradata Parallel Transporter) for building scalable data pipelines.
- Strong expertise in BTEQ for scripting and executing Teradata jobs.
- Advanced knowledge of Teradata SQL, including query optimization and performance tuning.
- Knowledge of semantic layer concepts and implementation is a plus.
- Basic proficiency in Unix scripting for automation and file handling.
- Experience with data integration tools and a basic understanding of cloud data platforms are advantageous.
- Solid analytical skills with the ability to handle complex datasets and solve data-related challenges.
- Familiarity with Agile methodologies and version control systems (e.g., Git).
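The "basic Unix scripting for automation and file handling" listed above might look like the following sketch; all paths and file names are hypothetical.

```shell
#!/bin/sh
# Illustrative sketch: archive processed extract files into a dated folder.
# Paths below are placeholders, not a real directory layout.

ARCHIVE_DIR=/tmp/etl_archive
DEST="$ARCHIVE_DIR/$(date +%Y%m%d)"

# Create a sample extract so the script is self-contained.
touch /tmp/extract_sample.csv

# Move processed extracts into today's archive folder.
mkdir -p "$DEST"
mv /tmp/extract_sample.csv "$DEST/"
ls "$DEST"
```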
Candidate Profile
The successful candidate must:
- Have excellent verbal and written communication skills to articulate technical concepts to non-technical stakeholders.
- Be a proactive problem-solver with a keen eye for detail and data accuracy.
- Demonstrate a collaborative spirit and ability to thrive in a team-oriented environment.
- Possess intellectual curiosity and a drive to explore new data technologies.
- Be adaptable to a hybrid work model in Hyderabad, with flexibility for occasional extended hours during project peaks.
Join us in Hyderabad to engineer the future of data-driven decision-making. If this role aligns with your expertise, apply today.