Data Engineer - multiple positions

Job Title: Data Engineer - multiple positions
Contract Type: Contract
Location: Melbourne CBD, Victoria
Salary: Negotiable
Reference: 3291_1563518036
Contact Name: Mailangi Styles
Contact Email:
Job Published: July 19, 2019 16:33

Job Description

Ayan Infotech urgently requires multiple Data Engineers for an initial 6-month contract in Melbourne. Our client is a global leader in IT consultancy services and is engaged in a number of Big Data and Cloud initiatives for one of its key banking and finance accounts.

The Data Engineer will expand and optimise our client's data and data pipeline architecture, as well as optimise data flow and collection for cross-functional teams.

Responsibilities will include:

  • Build robust, efficient and reliable data pipelines across diverse data sources to ingest and process data into data platforms (Hadoop, AWS or GCP).

  • Design and develop real-time streaming and batch processing pipeline solutions.

  • Assemble large, complex data sets that meet functional / non-functional business requirements.

  • Work with stakeholders including the Product Owner and data analyst teams to assist with data-related technical issues and support their data infrastructure needs.

  • Collaborate with Architects to define the architecture and technology selection.

Skills and experience required:

  • Proven working experience as a Big Data Engineer (2+ years), preferably building data lake solutions by ingesting and processing data from various source systems.

  • Experience with multiple Big Data technologies and concepts, such as HDFS, NiFi, Kafka, Hive, Spark, Spark Streaming, HBase, EMR and GCP.

  • Development experience in one or more of Java, Scala, Python and Bash.

  • In-depth understanding of Data Management practices and Database technologies.

  • Ability to work in a team within a diverse, fast-paced Agile environment.

  • Ability to apply DevOps, Continuous Integration and Continuous Delivery principles to build automated pipelines for deployment and production assurance on the data platform.

  • Knowledge of building self-contained applications using Docker and OpenShift.

  • Willingness to share knowledge with immediate peers and to build communities and connections that promote better technical practices across the organisation.

  • Experience implementing test cases and test automation.

  • Experience building frameworks for an enterprise data lake is highly desirable.

An attractive rate is on offer, relevant to experience.

If interested, click the 'APPLY NOW' button or email your CV.

Contact - 02 9411 8794