Design and develop data pipelines to support enterprise data solutions, ensuring seamless integration across systems.
Work with a variety of data sources, optimizing ETL/ELT processes for efficient data processing and storage.
Architect data solutions that meet the analytics and business intelligence needs of the organization.
Implement best practices for data governance, quality, and security, including data privacy and compliance regulations.
Collaborate with internal and external teams to understand business requirements and deliver actionable data solutions.
Ensure efficient data retrieval, optimizing performance for real-time and batch data processing environments.
Partner with business stakeholders, data analysts, and data scientists to provide insights and drive data-driven decisions.
4+ years of experience as a Data Engineer, with a focus on building and maintaining data solutions.
Expertise in SQL and data modeling (Kimball, Inmon, Data Vault).
Hands-on experience with Cloud Data Warehousing solutions (preferably Azure).
Proficiency in Python, Scala, or Java for data processing and pipeline development.
Experience with data orchestration and transformation tools such as Apache Airflow or dbt.
Strong knowledge of relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB).
Familiarity with AI or machine learning projects is preferred.
Excellent English communication skills, both written and verbal, for effective client collaboration.
Experience with Azure Databricks is a plus.