
Senior ETL Consultant

About the Job
What We’re Looking For
We are hiring a seasoned expert who walks in, understands our ecosystem, identifies gaps, and drives measurable outcomes—without needing to be told what to do next.
This is a hands-on, high-ownership engagement. You will be embedded into our data team to assess the current state of our data pipelines, identify and fix inefficiencies, and build a more reliable, scalable, and insight-ready data foundation. The right candidate combines deep technical expertise with business acumen and a strong bias toward delivering working solutions over endless discussion.
Key Responsibilities
1. Current State Assessment & Pipeline Audit
• Independently explore, understand, and document the existing data infrastructure, pipeline architecture, and data flows without requiring hand-holding.
• Identify bottlenecks, redundancies, anti-patterns, and reliability risks across ETL workflows.
• Deliver a clear, prioritized assessment report within the first two weeks, outlining findings and a recommended action plan.
2. ETL Pipeline Design & Engineering
• Redesign and rebuild fragile or inefficient pipelines using industry best practices.
• Develop robust, scalable ETL workflows using AWS Glue and other relevant tools.
• Handle data ingestion from diverse sources including relational databases, APIs, flat files, and logs.
• Write and optimize complex SQL queries to support analytics and reporting requirements.
• Set up or improve data warehouse and data mart structures aligned with business reporting needs.
3. Monitoring, Reliability & Documentation
• Implement or enhance monitoring frameworks and automated alerting for pipeline failures and data anomalies.
• Ensure pipelines are auditable, observable, and maintainable by the internal team post-engagement.
• Produce clear documentation covering data flows, transformation logic, dependencies, and operational runbooks—so nothing leaves with you.
4. Business Intelligence & Analytics Enablement
• Translate business processes and KPIs into structured, well-modeled data assets.
• Build or improve dashboards and BI reports in Power BI or equivalent tools.
• Proactively identify where data gaps or inconsistencies may be causing incorrect reporting or decision-making, and fix them.
• Define and standardize metric definitions to ensure consistency across dashboards and teams.
5. AI / ML Exploration & POCs
• Identify practical AI/ML use cases relevant to the business and independently prototype solutions.
• Build working proof-of-concepts for areas such as Natural Language to SQL, pipeline anomaly detection, or BI automation.
• Deliver POCs that are demo-ready and clearly documented, not just theoretical.
Non-Negotiable Expectations
🔍
Proactive Discovery
You should be able to explore our systems, documentation, and data independently. We expect questions when clarification is genuinely needed—not as a substitute for self-directed investigation.
🚀
Bias Toward Delivery
We value working deliverables over long assessments or wait-and-see postures. You should be delivering early, iterating often, and demonstrating progress every week.
📌
Business-Aware Thinking
You understand that data engineering exists to serve business outcomes. Your decisions on data modeling, pipeline design, and reporting should always connect back to business impact.
📣
Proactive Communication
Raise blockers early. Share what you’ve found. Suggest improvements before being asked. Do not wait for a task list—build one yourself based on what you discover.
📝
Leave It Better Than You Found It
Every system you touch should be more reliable, better documented, and easier for the internal team to maintain after you leave.
🤝
Collaborative Independence
Work closely with internal stakeholders and the product team to understand context—but do not depend on them for technical decisions. Own those yourself.
Required Skills & Experience
Core Technical Skills
• 8+ years of hands-on experience in data engineering and ETL development
• Expert-level proficiency in ETL pipeline design, development, and optimization
• Hands-on experience with AWS Glue (mandatory)
• Advanced SQL: complex queries, performance tuning, window functions, indexing strategies
• Data warehouse and data mart design: dimensional modeling, star/snowflake schemas
• Data ingestion from heterogeneous sources: APIs, RDBMS, flat files, streaming, logs
• Monitoring and alerting setup for data pipelines (failure detection, SLA monitoring)
• Power BI or equivalent BI tool for dashboard and report development
Architecture & Engineering Mindset
• Strong data modeling fundamentals and ability to independently design scalable models
• Experience auditing and refactoring existing pipelines in production environments
• Familiarity with cloud data platforms (AWS preferred): S3, Redshift, RDS, Lambda
• Comfortable working in environments where documentation may be sparse or incomplete
AI / ML Exposure (Desirable)
• Exposure to ML pipelines, feature stores, or data preparation for model training
• Hands-on experience or experimentation with NLP-to-SQL, LLM-powered analytics, or similar AI-driven BI tools
• Ability to rapidly prototype and validate AI/ML use cases independently
Soft Skills That Matter for This Role
• Self-directed: comfortable starting from ambiguity and structuring your own workplan
• Strong communicator: can explain technical findings clearly to non-technical stakeholders
• Problem-first thinking: approaches data issues from a business impact lens, not just a technical one
• High ownership: treats the engagement outcomes as your own, not just billable hours
Candidates who require constant direction need not apply. We are looking for someone who turns ambiguity into action.
About the company
Industry
IT Services and IT Consul...
Company Size
11-50 Employees
Headquarters
New Delhi, Delhi
