Design, develop, and manage scalable data pipelines and ETL processes using AWS Glue, Lambda, and MWAA (Amazon Managed Workflows for Apache Airflow).
Build and optimize data lakes and data warehouses using Amazon S3, Redshift, Apache Iceberg, and DynamoDB.
Integrate and monitor data workflows via SNS, CloudWatch, and Datadog.
Design and implement data modeling and storage strategies in MySQL, Oracle, and PostgreSQL.
Architect and implement big data platforms with a focus on performance, scalability, and maintainability.
Collaborate with cross-functional teams to analyze business requirements and translate them into data solutions.
Support ongoing operations and resolve data issues spanning application logic and cloud infrastructure.
3+ years of experience in data engineering or big data architecture roles.
Strong hands-on expertise in AWS data services:
Glue, Lambda, Redshift, S3, DynamoDB, SNS, MWAA, and CloudWatch.
Deep understanding of data lake / data warehouse architecture and data modeling.
Experience with MySQL, Oracle, PostgreSQL.
Familiarity with Apache Iceberg or similar data lake table formats.
Proficiency in monitoring/logging tools such as Datadog.
Nice to have:
Experience in Big Data architecture analysis and design.
Proven experience in cloud and resource optimization (cost, performance, scalability).
Ability to solve complex, multi-layered issues involving both application logic and cloud infrastructure.
Familiarity with Digital Marketing, AdTech, or Customer Data Platforms.