


Big DATA ENGINEER – work with PETABYTE Data Assets & gain DATA SCIENCE/AI expertise – SANDTON/JOHANNESBURG - R750K - R850K

Contract Type: Permanent
Location: Sandton Johannesburg
Industry: IT
Salary: R750K - R850K/annum
Contact Name: Gary Silbermann
Contact Email: gary@acuityconsultants.co.za
Job Published: May 22, 2019 11:36

Job Description

An excellent opportunity for a Big DATA ENGINEER to work within a PetaByte data environment and at the forefront of Data Science, Machine Learning and Artificial Intelligence – if you’re a Data Engineer with experience in Hadoop and MapReduce, keep reading…

Based in SANDTON / JOHANNESBURG this BIG DATA ENGINEER position offers a salary of R750K – R850K/annum.

THE COMPANY:
An international SOFTWARE ENGINEERING & DATA SCIENCE COMPANY headquartered in Canada (Vancouver) & with offices in SANDTON / JOHANNESBURG. With a 10-year track record, they are regarded as a leader in AI & Machine Learning technologies and provide a SaaS AI Platform enabling highly accurate prediction of consumer behaviour across numerous sectors and industries. Through Artificial Intelligence the company predicts customer behaviour using algorithmic development, AI & ML, and prides itself on helping its clients out-predict their competition in order to maximise efficiency and customer satisfaction. As an AI company their capabilities rival the likes of IBM, Google, Amazon, Facebook & Apple - this is your opportunity to develop the SaaS products that enable this level of technical genius.

THE ROLE:
As a Big Data Engineer you’ll work with a team of extraordinary engineers to deliver the company’s automated consumer behaviour prediction platform. Their results are many times better than traditional statistical or ML methods. They are automating and commoditising cutting-edge AI results directly from a client data lake, and your job will be to help their platform handle data at the massive scale required. Data sizes can reach the PetaByte range, and processing is expected to be rapid.

RESPONSIBILITIES:
Selecting and integrating any Big Data tools and frameworks required to provide the requested capabilities.
Implementing ETL processes (see the sketch after this list).
As part of ETL, analysing and understanding the data well enough to integrate it into their API.
Proposing, designing and implementing Big Data architecture, including infrastructure.
Monitoring performance and advising on any necessary infrastructure changes.
Defining data retention policies.
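
For illustration only, here is a minimal PySpark sketch of the kind of ETL step described above: extract raw events from a data lake, apply basic cleansing, and load the result into a Hive table. The path, table name and column names (event_id, event_time) are assumptions for the example, not details of this role.

# Minimal PySpark ETL sketch; paths, table and column names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("customer-events-etl")     # hypothetical job name
    .enableHiveSupport()
    .getOrCreate()
)

# Extract: read raw events from the data lake (placeholder path).
raw = spark.read.parquet("s3a://client-data-lake/raw/events/")

# Transform: deduplicate and conform timestamps before downstream use.
events = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_time"))
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_ts").isNotNull())
)

# Load: write partitioned output into Hive for downstream consumers.
(events.write
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("analytics.customer_events"))
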

REQUIRED SKILLS:
Experience with Hadoop v2 and MapReduce.
Proficiency with the management of Hadoop clusters and accompanying services, including Hive, Spark, Kafka, Sqoop and Oozie.
Proficiency with Presto.
Experience with NoSQL databases; Cassandra preferred.
Experience building stream-processing systems using solutions such as Storm or Spark Streaming (see the sketch after this list).
Experience with integration of data from multiple data sources.
Knowledge of various ETL techniques.
Experience in creating Lambda Architecture, along with knowledge of its advantages and drawbacks.
Experience with Cloudera/MapR/Hortonworks.
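
As an illustration of the stream-processing skill above, here is a minimal Spark Structured Streaming sketch that reads JSON events from a Kafka topic and lands them in the data lake. The broker address, topic name, schema and paths are all assumed for the example, and the Spark Kafka connector package would need to be on the cluster's classpath.

# Minimal Spark Structured Streaming sketch; all names here are illustrative.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Hypothetical schema for the JSON payload carried in each Kafka message value.
schema = StructType([
    StructField("customer_id", StringType()),
    StructField("action", StringType()),
    StructField("value", DoubleType()),
])

# Read the stream from Kafka; records arrive as binary key/value pairs.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")
    .option("subscribe", "events")
    .load()
)

# Parse the JSON value and keep only the typed fields.
parsed = (
    stream.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("e"))
          .select("e.*")
)

# Write micro-batches to Parquet in the data lake; checkpointing makes the
# query restartable after failure.
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "s3a://client-data-lake/speed/events/")
    .option("checkpointLocation", "s3a://client-data-lake/checkpoints/events/")
    .start()
)
query.awaitTermination()
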

If you qualify for this role, please email your CV directly to:
Gary Silbermann
gary@acuityconsultants.co.za 
021 801 5001

If you have not had a response to your application within 14 days please consider your application to be unsuccessful.