Mumbai
Full-Time
Executive: 10 to 30 years
Posted on Jul 12, 2022

Not Accepting Applications

About the Job

Skills

At least 10 years of experience working with, and a strong understanding of, one or more ETL/Big Data tools

At least 5 years of experience working with Big Data technologies

4+ years of experience in Java/Spark

At least 5 years of experience in data warehousing

Strong understanding of, and hands-on experience with, the Big Data stack (HDFS, Sqoop, Hive, Java, etc.)

Big Data solution design and architecture

Design, sizing, and implementation of Big Data platforms

Deep understanding of the Cloudera and/or Hortonworks stack (Spark, installation and configuration, Navigator, Oozie, Ranger, etc.)

Experience extracting data from feeds into a data lake using Kafka and other open-source components

Understanding of, and experience with, data ingestion patterns and building pipelines

Experience in configuring Azure or AWS components and managing data flows.

Knowledge of Google Cloud Platform a plus.

Experience working on production-grade projects with terabyte- to petabyte-scale data sets

Build a scalable, reliable, operable, and performant big data platform for both streaming and batch analytics

Design and implement data aggregation, cleansing, and transformation layers

Skills required: Apache NiFi, Kafka Connect (Confluent), Sqoop, Kylo, Spark, Druid, Presto, RESTful services, Lambda/Kappa architectures, Airflow

3+ years of hands-on experience designing and operating large data platforms

Experience in big data ingestion, transformation, and stream/batch processing technologies using Apache NiFi, Apache Kafka, Kafka Connect (Confluent), Sqoop, Spark, Storm, Hive, etc.

Experience designing and building streaming data platforms on Lambda and Kappa architectures

Working experience with at least one NoSQL or OLAP data store, such as Druid, Cassandra, Elasticsearch, or Pinot

Experience with at least one data warehousing tool, such as Redshift, BigQuery, or Azure SQL Data Warehouse

Exposure to other data ingestion, data lake, and querying frameworks such as Marmaray, Kylo, Drill, and Presto

Experience designing and consuming microservices

Exposure to security and governance tools such as Apache Ranger and Apache Atlas

Experience with performance benchmarking

Any contributions to open-source projects are a plus

Basic Expectations

Must have excellent written and verbal communication skills to communicate with various stakeholders within the organization.

Demonstrates excellent process, documentation, team participation, and problem-solving skills

Must have self-learning skills to complete assigned tasks by deadline

Requirement gathering and understanding; analyze and convert functional requirements into concrete technical tasks

Primary skills: Java/Spark, Hive, HDFS, Kafka, Impala, HBase, Hadoop MapReduce, Core Java, Linux shell scripting, RESTful APIs

Other required skills: experience with Agile/Scrum methodology is required.

Experience with SCMs like Git, and tools like JIRA

Experience with RDBMS and NoSQL databases

Strong grounding in data structures, algorithms, etc.

Well versed in the SDLC, with exposure to its various phases

Strong hands-on working experience with multiple Big Data and cloud technologies

Data warehouse exposure is good to have

Responsible for systems analysis: design, coding, and unit testing

About the company

CLIQHR is a dynamic global recruiting agency focused on creative, product, sales, events, marketing, BFSI, and technology services. CLIQHR Recruitment Services is a part of Geetha Technology Solutions (P) Ltd, established in 2012 and headquartered in Chennai. CLIQHR is an executive search firm managed by a team of professionals. We conduct searches for top, senior, and middle-level professionals.

Industry

Staffing and Recruiting

Company Size

11-50 Employees

Headquarters

Hyderabad, Remote