
AWS Developer

Bangalore
Full-Time
Junior: 1 to 3 years
1L - 10.4L (Per Year)
Posted on Sep 15 2022

Not Accepting Applications

About the Job

Skills

Education and/or Work Experience Requirements:

Education:

· Bachelor's degree in Computer Science or related technical field, or equivalent practical experience.

· 2-3 years of industry experience in software development, data engineering, business intelligence, data science, or related field with experience in manipulating, processing, and extracting value from datasets.

Key Requirements (Work Experience):

· Master's degree in Computer Science or a related field.

· Understanding of Big Data technologies and solutions (Spark, Hadoop, Hive, MapReduce) and multiple scripting and programming languages (YAML, Python).

· Understanding of cloud technologies (AWS, Azure) in the big data and data warehousing space.

· Demonstrable track record of dealing well with ambiguity, prioritizing needs, and delivering results in a dynamic environment.

· Excellent verbal and written communication skills with the ability to effectively advocate technical solutions to research scientists, engineering teams and business audiences.

Mandatory/Preferred Technical Skills:

· Apache Spark (MUST); Apache Hadoop, Kafka – good to have (see the PySpark sketch after this list)

· AWS Glue, AWS Athena, AWS Lambda (all 3 MUST)

· Databases – SQL and NoSQL

· Cloud platforms – AWS (MUST)

· Object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.

· Data warehousing solutions – AWS Redshift (good to have)

· ETL tools – cloud-native tools on AWS

· Knowledge of ODI (good to have)

· Distributed Computing – Apache Hadoop

· Knowledge of Algorithms and Data Structures

· CI/CD pipeline – Jira, Confluence, Bamboo, Bitbucket, Artifactory (good to have)
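
The core of the stack above is Apache Spark running against data in AWS. Purely as an illustration (not part of the role description; the bucket, paths, and column names below are hypothetical), a minimal PySpark sketch of the kind of ETL job this position involves might read raw CSV from S3, clean it, and write partitioned Parquet back:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Hypothetical source path; assumes S3 access is already configured for Spark.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Simple cleaning: parse the date column and drop rows with missing amounts.
cleaned = (
    raw.withColumn("order_date", F.to_date("order_date"))
       .filter(F.col("amount").isNotNull())
)

# Write curated data back to S3 as Parquet, partitioned by date.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)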

Essential Duties and Responsibilities:

· Create and maintain optimal data pipeline architecture and assemble large, complex data sets that meet functional / non-functional business requirements

· Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL (on both on-premises and cloud platforms, including AWS and Azure)

· Understand customer requirements and create functional/technical specification documents clearly outlining the data architecture, data pipelines, and ETL logic to be implemented, as necessary

· Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics

· Work with the client to resolve data-related technical issues and support their data infrastructure needs.

· Creation and maintenance of optimum data pipeline architecture for ingestion and processing of data

· Creation of the necessary infrastructure for ETL jobs from a wide range of data sources (see the Lambda/Glue sketch after this list)

· Work in sync with internal and external team members such as data architects, data scientists, and data analysts to handle technical issues

· Collect data requirements, maintain metadata about the data, and monitor data security and governance with modern security controls
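
To illustrate the cloud-native ETL infrastructure referred to above, here is a minimal sketch (all names are assumed, not taken from the posting) of an AWS Lambda handler that starts an AWS Glue job whenever a new raw file lands in S3:

import boto3

glue = boto3.client("glue")

def handler(event, context):
    # S3 put event: extract the bucket and key of the newly arrived object.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Start the (hypothetical) Glue ETL job, passing the new object as a job argument.
    response = glue.start_job_run(
        JobName="curate-orders",  # assumed Glue job name
        Arguments={"--source_path": f"s3://{bucket}/{key}"},
    )
    return {"JobRunId": response["JobRunId"]}

In practice the Lambda would be wired to the bucket's ObjectCreated notifications and given an IAM role allowed to call glue:StartJobRun.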

About the company

Sankalp HR Services is a team of professionals rendering end-to-end consulting services that form the basic building blocks required for success in an organization's endeavors.

Industry

IT Services and IT Consulting

Company Size

11-50 Employees

Headquarters

Bangalore