Key job responsibilities
• Design and implement end-to-end data pipelines (ETL) to ensure efficient data collection, cleansing, transformation, and storage, supporting both real-time and offline analytics needs.
• Develop automated data monitoring tools and interactive dashboards to enhance business teams’ insights into core metrics (e.g., user behavior, AI model performance).
• Collaborate with cross-functional teams (e.g., Product, Operations, Tech) to align data logic, integrate multi-source data (e.g., user behavior, transaction logs, AI outputs), and build a unified data layer.
• Establish data standardization and governance policies to ensure consistency, accuracy, and compliance.
• Provide structured data inputs for AI model training and inference (e.g., LLM applications, recommendation systems), optimizing feature engineering workflows.
• Explore innovative AI-data integration use cases (e.g., embedding AI-generated insights into BI tools).
• Provide technical guidance and best practices on data architecture and BI solutions
Basic qualifications
- 4+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience programming with at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, Ruby