Job Description
Job Summary
You will lead Ahamove's Data Engineering team, responsible for building and scaling our terabyte-scale data warehouse. Our systems handle nearly 200,000 daily orders across both real-time streaming and batch processing pipelines. Your mission is to ensure the reliability, scalability, and performance of our data infrastructure, which powers:
Real-time dashboards for operational visibility
Machine Learning services to power intelligent decision-making
Robust query experiences for internal teams and external stakeholders
Main Duties:
Build, maintain, and optimize in-house data infrastructure, including databases, the data warehouse, the orchestration system, and streaming and batch pipelines.
Work with multiple cloud platforms such as GCP, AWS, and Databricks.
Work with large, complex datasets spanning multiple databases across departments.
Create benchmarks, alerts, and audit logs for the data platform to ensure the stability and scalability of both the data and the system.
Lead the Data Engineering team and take responsibility for the tech stack and technology costs of the data platforms.
Communicate with stakeholders, including Product Owners, Software Engineers, Business Users, Data Analysts, and Machine Learning Engineers, to solve data-related problems.
Benefits:
Physical Wellbeing: General Insurance, Medical Check-up, Accident Insurance, Healthcare Insurance
Emotional Wellbeing: Company Trip, Year-End Party, Aha Hour Activities, Special Day Gifts, Aha Club (Badminton, Soccer)
Financial Wellbeing: Grab/Be For Work (Tech/Lead Level), Workplace Relocation, 13th Month Salary, PP Appreciate, Annual Leave Remain
Requirements
Bachelor's degree in Computer Science, Software Engineering, or Information Systems.
A specialization in data science or a higher degree is a big plus.
At least 6 years of experience in a data engineering role, building data platforms and pipelines for analytics.
Strong proficiency in Python; knowledge of another programming language (Java, Scala, Go, JavaScript, TypeScript, R) is a big plus.
Strong proficiency in SQL across DBMSs.
Strong proficiency with cloud services such as GCP or AWS.
Experience with Hadoop, Spark, and Databricks is a big plus.
Strong proficiency with various OLTP and OLAP databases and data warehouses: MongoDB, PostgreSQL, BigQuery, ClickHouse, MotherDuck, etc.
Strong proficiency with streaming platforms and streaming concepts such as Redpanda, Kafka, RabbitMQ, CDC, or other open-source alternatives.
Experience with data pipeline and workflow management tools: Airflow, dbt, Airbyte, etc.
Knowledge of data visualization tools like Metabase, PowerBI, Looker Studio, etc.
Willingness to explore emerging open-source tools and technologies and apply them where appropriate.
Strong proficiency with source version control tools such as GitLab and GitHub.
Experience with Kubernetes or DevOps skills (Linux, Networking) is a big plus.
Other Information
Python
PostgreSQL
Big Data
Java
JavaScript
Linux
MongoDB
Networking
DBMS
OLAP
MS SQL
Hadoop
Github
TypeScript
RabbitMQ
Apache Spark
Scala
DevOps
Golang
Apache Kafka
AWS
Data Warehouse
Kubernetes
Gitlab
MS Power BI
OLTP
GCP
R
Apache Airflow
Looker Studio
ClickHouse
Databricks
Google BigQuery
Data Visualization
Metabase
DBT
General Information
How to Apply
Candidates apply online by clicking the Apply button below:
Application deadline: 27/12/2025