In this role, you will:
- Implement and operationalize modern self-serve data capabilities on Google Cloud to ingest, transform, and distribute data for a variety of big data apps
- Enable secure data pipelines to ensure data protection in transit and at rest
- Automate data governance capabilities to ensure proper data observability throughout the data flows
- Leverage AI/Agentic frameworks to automate data management, governance, and data consumption capabilities
- Create repeatable processes to instantiate data pipelines that fuel analytics products and business decisions
- Work with principal engineers, product managers, and data engineers to roadmap, plan, and deliver key data capabilities based on priority
- Create the future of data: design and implement processes using the full GCP toolset to shape how data is built and consumed
Required Qualifications:
- 5+ years of Database Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education
- 5+ years of hands-on experience with Hadoop and Google Cloud data solutions, including creating and supporting Spark-based processing and Kafka streaming in a highly collaborative team
- 3+ years of experience with data lakehouse architecture and design, including hands-on experience with Python, PySpark, Apache Kafka, Airflow, and SQL
- 2+ years working with NoSQL databases such as columnar databases, graph databases, document databases, and key-value (KV) stores, and their associated data formats
- Public cloud certifications such as GCP Professional Data Engineer, Azure Data Engineer, or AWS Certified Data Analytics - Specialty
Desired Qualifications:
- Proven skills in data migration from on-prem to a cloud-native environment
- Proven experience with Hadoop ecosystem capabilities such as Hive, HDFS, Parquet, Iceberg, and Delta tables
- Deep understanding of data warehouse, data cloud architecture, building data pipelines, and orchestration
- Design and implementation of highly scalable and modular data pipelines with built-in data controls for automating data governance
- Familiarity with GenAI frameworks such as LangChain and LangGraph for developing agent-based data capabilities
- DevOps and CI/CD deployments, including Git, Jenkins, Docker, and Kubernetes
- Web-based UI development using React and Node.js is a plus
Job Expectations:
- Ability to work on-site in one of the listed locations in a hybrid environment
Will fill out with recruiter
5 Sep 2025
Wells Fargo Recruitment and Hiring Requirements:
Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.