Pune, Mumbai, Bangalore, Chennai
Full-Time
Senior: 7 to 10 years
8L - 20L (Per Year)
Posted on Jun 18 2022

Not Accepting Applications

About the Job

Skills

  1. Specific skills: Big Data (Hadoop / Spark / Scala / Java / Hive / Kafka)
  2. Locations: Pune / Mumbai / Bangalore / Chennai

Experience: 8 to 12 years

 

Total 8-10 years of working experience

Experience/Needs:

- 8-10 years of experience with big data tools such as Spark, Kafka, and Hadoop

- Design and deliver consumer-centric, high-performance systems. You will work with huge volumes of data arriving through batch and streaming platforms, and will be responsible for building and delivering data pipelines that process, transform, integrate, and enrich data to meet various business demands

- Mentor the team on infrastructure, networking, data migration, monitoring, and troubleshooting

- Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps practices, etc.

- Design, build, test and deploy streaming pipelines for data processing in real time and at scale

- Experience with stream-processing systems such as Storm, Spark Streaming, and Flink

- Experience with object-oriented/functional languages: Scala, Java, etc.

- Develop software systems using test-driven development and CI/CD practices

- Partner with other engineers and team members to develop software that meets business needs

- Follow Agile methodology for software development and technical documentation

- Good to have: banking/finance domain knowledge

- Strong written and oral communication, presentation and interpersonal skills.

- Exceptional analytical, conceptual, and problem-solving abilities

- Able to prioritize and execute tasks in a high-pressure environment

- Experience working in a team-oriented, collaborative environment

Job Responsibilities:

- 8-10 years of hands-on coding experience

- Proficient in Java, with good knowledge of its ecosystem

- Experience writing Spark code in Scala

- Experience with big data tools like Sqoop, Hive, Pig, and Hue

- Solid understanding of object-oriented programming and HDFS concepts

- Familiar with various design and architectural patterns

- Experience with big data tools: Hadoop, Spark, Kafka, Flink, Hive, Sqoop, etc.

- Experience with relational SQL and NoSQL databases such as MySQL, PostgreSQL, MongoDB, and Cassandra

- Experience with data pipeline tools like Airflow, etc.

- Experience with cloud data services: AWS EC2, S3, EMR, RDS, Redshift; Google BigQuery

- Experience with stream-processing systems: Storm, Spark Streaming, Flink, etc.

- Experience with object-oriented/functional scripting languages: Python, Java, Scala, etc.

- Expertise in designing/developing platform components such as caching, messaging, event processing, automation, transformation, and tooling frameworks

About the company

CLIQHR is a dynamic global recruiting agency focused on creative, product, sales, events, marketing, BFSI, and technology services. CLIQHR Recruitment Services is a part of Geetha Technology Solutions (P) Ltd, established in 2012 and headquartered in Chennai. CLIQHR is an executive search firm managed by a team of professionals. We conduct searches for top, senior, and middle-level profession ...

Industry

Staffing and Recruiting

Company Size

11-50 Employees

Headquarters

Hyderabad, Remote