big data developer (java, hadoop) in philadelphia

job details

location
philadelphia, pennsylvania
job category
Information Technology
job type
contract
salary
US$ 80 per hour
reference number
632323

job description

job summary:

We have an excellent opportunity for a Big Data Developer (Java, Hadoop) with one of our Philadelphia clients. The developer will design, develop, and implement web-based Java applications to support business requirements.

 
location: Philadelphia, Pennsylvania
job type: Contract
salary: $80 - 90 per hour
work hours: 9 to 5
education: Bachelors
 
responsibilities:
  • Follows approved life cycle methodologies, creates design documents, and performs program coding and testing.
  • Resolves technical issues through debugging, research, and investigation. Relies on experience and judgment to plan and accomplish goals.
  • Performs a variety of tasks. A degree of creativity and latitude is required. Typically reports to a supervisor or manager.
  • Codes software applications to adhere to designs supporting internal business requirements or external customers.
  • Standardizes the quality assurance procedure for software. Oversees testing and develops fixes.
  • Contributes to the design and development of high-quality software for large-scale Java/Spring Batch/Hadoop distributed systems, loading and processing disparate data sets using appropriate technologies including, but not limited to, Hive, Pig, MapReduce, HBase, Spark, Storm, and Kafka.
 
qualifications:
  • Requires a bachelor's degree in the area of specialty and 8+ years of experience in the field or a related area.
  • Expert knowledge of standard concepts, practices, and procedures within a particular field.
  • 8+ years of Java experience.
 
skills:
  • Strong communication skills.
  • Experience with HBase, Kafka, and Spark.
  • Expert in Hive SQL and ANSI SQL, with strong hands-on experience in data analysis using SQL.
  • Ability to write simple to complex SQL, and to comprehend and support data questions and analysis using existing complex queries.
  • Familiarity with DevOps tools (Puppet, Chef, Python).
  • Expert knowledge of Big Data concepts and common components including YARN, Queues, Hive, Beeline, AtScale, Datameer, Kafka, and HDF.


Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.
