Currently accepting applicants
CLIQHR Recruitment Services

Big Data Engineer

Date Posted: 18th Jun 2022
6 applicant(s)
Senior (7 to 10 years)
Pune, Mumbai, Bangalore, Chennai
Rs. 8,00,000 to Rs. 20,00,000 (PA)
  1. Specific skills - Big Data (Hadoop / Spark / Scala / Java / Hive / Kafka)
  2. Locations - Pune / Mumbai / Bangalore / Chennai

Experience - 8 to 12 years


Total 8-10 years of working experience


- 8-10 years of experience with big data tools such as Spark, Kafka, and Hadoop

- Design and deliver consumer-centric, high-performance systems. You will deal with huge volumes of data arriving through batch and streaming platforms, and will be responsible for building and delivering data pipelines that process, transform, integrate, and enrich data to meet various business demands
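A transform-and-enrich pipeline stage like the one described above can be sketched conceptually as follows. This is an illustrative toy in plain Python; a production pipeline would use Spark or a similar engine, and the record and lookup shapes here are assumptions.

```python
def enrich_records(records, customer_lookup):
    """Transform raw events and enrich them with customer attributes."""
    enriched = []
    for rec in records:
        cust = customer_lookup.get(rec["customer_id"], {})
        enriched.append({
            "customer_id": rec["customer_id"],
            "amount_cents": int(round(rec["amount"] * 100)),  # transform step
            "segment": cust.get("segment", "unknown"),        # enrich step
        })
    return enriched

events = [{"customer_id": 1, "amount": 10.5}, {"customer_id": 2, "amount": 3.0}]
lookup = {1: {"segment": "premium"}}
print(enrich_records(events, lookup))
```

The same per-record logic maps directly onto a Spark `map`/`join` over a much larger dataset.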

- Mentor the team on infrastructure, networking, data migration, monitoring, and troubleshooting

- Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps practices, etc.

- Design, build, test, and deploy streaming pipelines that process data in real time and at scale
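The core of such a streaming stage, a tumbling-window aggregation, can be sketched as below. This is illustrative only; at scale this would run on Spark Structured Streaming, Flink, or Kafka Streams, and the `(timestamp, key)` event shape is an assumption.

```python
from collections import defaultdict

def windowed_counts(events, window_secs):
    """Count events per key within fixed (tumbling) time windows."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs  # assign event to its window
        counts[(window_start, key)] += 1
    return dict(counts)

stream = [(0, "click"), (3, "click"), (7, "view"), (11, "click")]
print(windowed_counts(stream, 10))
# {(0, 'click'): 2, (0, 'view'): 1, (10, 'click'): 1}
```

A real engine adds the hard parts this sketch omits: out-of-order events, watermarks, and incremental state.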

- Experience with stream-processing systems such as Storm, Spark Streaming, and Flink

- Experience with object-oriented/functional programming languages: Scala, Java, etc.

- Develop software systems using test-driven development and CI/CD practices

- Partner with other engineers and team members to develop software that meets business needs

- Follow Agile methodology for software development and technical documentation

- Good to have banking/finance domain knowledge

- Strong written and oral communication, presentation and interpersonal skills.

- Exceptional analytical, conceptual, and problem-solving abilities

- Able to prioritize and execute tasks in a high-pressure environment

- Experience working in a team-oriented, collaborative environment

Job Responsibilities:

- 8-10 years of hands-on coding experience

- Proficient in Java, with good knowledge of its ecosystem

- Experience writing Spark code in Scala

- Experience with big data tools such as Sqoop, Hive, Pig, and Hue

- Solid understanding of object-oriented programming and HDFS concepts

- Familiar with various design and architectural patterns

- Experience with big data tools: Hadoop, Spark, Kafka, Flink, Hive, Sqoop, etc.

- Experience with relational SQL and NoSQL databases such as MySQL, PostgreSQL, MongoDB, and Cassandra

- Experience with data pipeline tools like Airflow, etc.

- Experience with cloud services: AWS (EC2, S3, EMR, RDS, Redshift) and GCP (BigQuery)

- Experience with stream-processing systems: Storm, Spark Streaming, Flink, etc.

- Experience with object-oriented/functional scripting languages: Python, Java, Scala, etc.

- Expertise in designing and developing platform components such as caching, messaging, event processing, automation, transformation, and tooling frameworks