Why Should You Apply For The Job
- Fast growing company, fun, dynamic, and flexible working environment.
- A caring and supportive environment for career growth and development.
- Attractive remuneration.
Job Responsibilities
● Design, build and maintain efficient, extensible and scalable data pipelines to capture, store and process high volume data from both external and internal data sources.
● Build and maintain the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using SQL and big data technologies.
● Deploy and manage processes and systems that monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
● Identify, design and implement internal process improvements such as automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability and others.
● Perform data cleansing, checks on data quality and data validation for raw and processed data.
● Perform basic monitoring and management of databases and data lakes.
● Maintain a detailed inventory of all data assets within the organisation.
● Work with data and analytics experts to simplify access to real-time data for internal stakeholders.
Job Requirements
● Bachelor’s Degree or equivalent experience in Computer Science, Engineering, IT or any STEM-related field.
● Expertise in fundamentals of data structures, algorithm design, problem-solving and complexity analysis.
● Advanced working knowledge of data definition and data manipulation languages such as SQL.
● Advanced working knowledge of relational databases such as MySQL, Postgres, etc.
● Experience with non-relational databases such as MongoDB or Neo4j is an advantage.
● Experience in programming languages such as Python.
● Experience in data extraction from public domain data sources.
● Experience building, operating and optimising distributed, large-scale data storage and analysis solutions using Amazon Web Services (e.g. S3, EC2, RDS, Athena, EMR, Glue, IAM, Security Groups, etc.).
● Experience with data pipeline and workflow management tools such as Airflow.
● Experience preparing data for data science and machine learning.
● Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code fundamentals, with working experience in CI/CD development and deployment tools such as Git, Docker and Kubernetes.
● Experience with big data technologies (e.g. Spark, Kafka, Hive) is an advantage.
● Proven track record of processing unstructured and/or complex semi-structured data streams and repositories.
About The Company
Our client is a future-forward lifestyle and design property company with an in-house software engineering and data science team that uses digital platforms and artificial intelligence to create seamless experiences for its customers and to build solutions that power its strategic decisions.