Senior Data Engineer
A leading E-Health organisation based in the Sydney CBD is working to lessen the strain on the health industry across Australia. It does this by providing digital and non-digital solutions and products, including call centres, websites, web applications and mobile applications, that act as the first point of contact for information within the health industry.
The Senior Data Engineer develops, maintains, tests and evaluates big data solutions. The role involves delivery leadership, design responsibility, and construction of a large-scale data processing system to enhance the quality of the project.
The system will implement a combined data management and data processing infrastructure to improve data quality based on big data principles, with a focus on collecting, parsing, managing, analysing and visualising large data sets to turn information into insights across multiple platforms. These changes will drive cost and operational efficiencies while providing the Australian health sector with a reliable source of information to underpin the exchange of health information.
The organisation runs a full continuous-delivery operation with a high degree of automation across all stages of the Software Delivery Lifecycle (SDLC). To be successful in this role, you will need:
- Strong ability to design highly scalable distributed systems using a range of open-source tools.
- Understanding of how to apply technologies to solve big data problems and to develop innovative big data solutions.
- Experience in engineering (commercial or open source) software platforms and large-scale data infrastructures.
- Experience in software engineering, software lifecycle, object-oriented design, coding and testing patterns.
- Expert knowledge of both NoSQL and relational (RDBMS) databases, such as DynamoDB, MongoDB and Redis.
- Strong knowledge of, and experience with, statistics and ideally other advanced mathematics.
- 5+ years’ experience in Java and exposure to at least one other programming or scripting language such as C++, PHP, Ruby, Python and/or R.
- 5+ years’ experience in ETL implementations.
- 5+ years’ background in database design and data modelling.
- 2+ years with AWS DynamoDB, Lambda, API Gateway, and IAM policies and security.
- 2+ years AWS EMR (Hadoop: HDFS, MapReduce, Hive, HBase, Spark).
- 2+ years with Impala, Oozie, Falcon, Mahout and Sqoop.
- 2+ years’ agile project experience, preferably using the Scrum and Kanban methods.
- Hands-on experience with build automation, continuous integration, version control and related technologies (Git, Bamboo, Jenkins or similar).
- Experience with streaming and processing technologies (AWS Kinesis or Kafka), plus AWS Data Pipeline, Apache NiFi, AWS Elasticsearch and AWS ElastiCache.
- Evidence of contribution to, and participation in, open-source and commercial software communities, meet-ups and groups.
Excellent salary on offer.
Please hit the apply button below or contact Ben Cary on 02 8235 3353 for further information and a confidential conversation.