
BIG DATA ENGINEER – work with PetaByte-scale data sets in AI CONSUMER DATA SCIENCE, SANDTON, R750K-R800K

Contract Type: Permanent
Location: Sandton
Industry: IT
Salary: R750K - R800K/annum
Contact Name: Gary Silbermann
Contact Email: gary@acuityconsultants.co.za
Job Published: January 25, 2019 10:36

Job Description

Seriously challenging and interesting opportunity for a BIG DATA ENGINEER to join a DATA SOLUTIONS, ARTIFICIAL INTELLIGENCE & CONSUMER INSIGHT company and to work on platforms handling data sizes reaching PetaByte ranges.

Based in SANDTON, this BIG DATA ENGINEER role offers a salary of R750K – R800K/annum.

THE COMPANY:
This is an industry leader delivering Consumer Behaviour Predictions, AI and Machine Learning to the Retail and Financial Services sectors.
A fast-growing South African Big Data Solutions & Predictive Analytics company that provides some of the most technically advanced AI Data Platforms and Products.
The data sets managed by the company reach PetaByte sizes.

THE ROLE:
As BIG DATA ENGINEER you will work with a team of extraordinary engineers to deliver automated consumer-behaviour prediction platforms to companies, with results many times better than traditional statistical or ML methods. The team is automating and commoditising cutting-edge AI results directly from client data lakes, and your job will be to enable the platform to handle data at the massive scale needed. As mentioned, data sizes reach PetaByte ranges, and the challenge is to process them rapidly.
This will be achieved by:
Selecting and integrating the Big Data tools and frameworks required to provide the needed capabilities.
Implementing ETL processes; part of this is analysing and understanding the data well enough to integrate it into the API (a minimal ETL sketch follows this list).
Proposing, designing and implementing the Big Data architecture, including infrastructure.
Monitoring performance and advising on any necessary infrastructure changes.
Defining data retention policies.
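
To give a feel for the ETL responsibility, below is a minimal, illustrative batch ETL sketch in PySpark, a plausible choice given the Hadoop/Spark/Hive stack listed under REQUIRED SKILLS. The paths, table names and columns are hypothetical assumptions for illustration only, not details of the advertised role.

```python
# Minimal, illustrative PySpark ETL sketch.
# All paths, table names and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("consumer-events-etl")   # hypothetical job name
    .enableHiveSupport()              # assumes a Hive metastore is available
    .getOrCreate()
)

# Extract: raw events previously landed in the data lake (hypothetical location).
raw = spark.read.parquet("hdfs:///data/lake/raw/consumer_events/")

# Transform: basic cleaning plus a daily aggregate per customer.
daily = (
    raw.filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("customer_id", "event_date")
       .agg(F.count("*").alias("events"),
            F.sum("amount").alias("total_amount"))
)

# Load: write a partitioned Hive table that downstream models can query.
(daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("analytics.daily_consumer_activity"))
```

In practice a job like this would be scheduled and parameterised (for example via Oozie, which appears in the skills list below), but the extract-transform-load shape stays the same.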

REQUIRED SKILLS:
3 years' experience with Hadoop v2 and MapReduce.
Proficiency with the management of a Hadoop cluster and accompanying services, including Hive, Spark, Kafka, Sqoop and Oozie.
Proficiency with Presto.
3 years' experience with NoSQL databases; Cassandra preferred.
Experience with building stream-processing systems, using solutions such as Storm or Spark Streaming (see the sketch after this list).
Experience with integration of data from multiple data sources.
Knowledge of various ETL techniques.
3 years' experience creating Lambda Architectures, along with knowledge of their advantages and drawbacks.
Experience with Cloudera/MapR/Hortonworks.
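
As a rough illustration of the stream-processing requirement above, here is a hedged sketch using Spark Structured Streaming reading from Kafka, both of which appear in the skills list. The broker address, topic name and event schema are assumptions made up for the example, and the Kafka source additionally requires the spark-sql-kafka package on the cluster.

```python
# Illustrative stream-processing sketch: Spark Structured Streaming over Kafka.
# Broker, topic and schema are hypothetical; the sink is a console placeholder.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("consumer-events-stream").getOrCreate()

schema = StructType([
    StructField("customer_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

# Read JSON events from a Kafka topic (names are assumptions).
events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "consumer-events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Per-minute spend per customer, with a watermark to bound state for late data.
per_minute = (
    events.withWatermark("event_ts", "10 minutes")
          .groupBy(F.window("event_ts", "1 minute"), "customer_id")
          .agg(F.sum("amount").alias("spend"))
)

query = (
    per_minute.writeStream
              .outputMode("update")
              .format("console")
              .start()
)
query.awaitTermination()
```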

If you qualify for this role, please email your CV directly to:
Gary Silbermann
gary@acuityconsultants.co.za 
021 801 5001

If you have not had a response to your application within 14 days please consider your application to be unsuccessful.