What you’ll be doing:
Help define the field of ML/AI security architecture.
Research, define, design, advise, develop, review, and implement architecture solutions meeting internal and external security requirements and standards.
Collaborate across the company to guide the direction of designing secure AI and ML products, working with hardware, software, research, IT, and product teams.
Perform architectural modeling, validation, and definition; track relevant standards bodies; and develop infrastructure enabling trusted platforms using hardware security methods.
Perform Product Cybersecurity assessments on projects across multiple NVIDIA product lines, and complete independent reviews of AI- and ML-specific project work packages.
Develop new attacks and defenses for ML/AI enabled applications.
Support the development of the Product Cybersecurity Training strategy and deliver cybersecurity trainings to increase awareness and understanding of security requirements, tools, processes, and technical standards for NVIDIA ML/AI systems.
What we need to see:
MS or PhD in Electrical Engineering, Computer Science, Computer Engineering, Artificial Intelligence, Data Science, Mathematics, Statistics, or equivalent experience.
8+ years of relevant work experience.
First-hand experience with Machine Learning, Deep Learning, or Artificial Intelligence.
Familiarity with current attacks on ML models, including adversarial examples, training data extraction, model extraction, and data poisoning.
Background with attacks on and attack surface of LLM-powered systems, including direct and indirect prompt injection, guardrail evasion, and tool abuse.
Experience using modern Deep Learning software architectures and frameworks such as JAX or PyTorch.
Experience with security development lifecycle processes and tools.
Programming and debugging fundamentals in languages such as Python and C/C++.
Strong communication skills and a real passion for working as part of a team are essential.
Ways to stand out from the crowd:
Use of AI in vulnerability research or other offensive security domains.
Experience analyzing AI-generated code for security issues.
Demonstrated experience in MLOps or Deep Learning-related infrastructure.
Understanding of data science, statistical analysis, and visualization.
Background in AI Trust principles and familiarity with applying ethical and safety perspectives to AI implementations.
You will also be eligible for equity.