What you'll be doing:
Innovating and developing new AI systems technologies for efficient inference
Designing, implementing, and optimizing kernels for high impact AI workloads
Designing and implementing extensible abstractions for LLM serving engines
Building efficient just-in-time domain specific compilers and runtimes
Collaborating closely with other engineers at NVIDIA across deep learning frameworks, libraries, kernels, and GPU arch teams
Contributing to open source communities like FlashInfer, vLLM, and SGLang
What we need to see:
Master's degree in Computer Science, Electrical Engineering, or a related field (or equivalent experience); PhD preferred
6+ years of academic or industry experience in ML/DL systems development preferred
Strong experience developing or using deep learning frameworks (e.g., PyTorch, JAX, TensorFlow, ONNX), and ideally inference engines and runtimes such as vLLM, SGLang, and MLC
Strong Python and C/C++ programming skills
Ways to stand out from the crowd:
Background in domain-specific compiler and library solutions for LLM inference and training (e.g., FlashInfer, FlashAttention)
Expertise in inference engines like vLLM and SGLang
Expertise in machine learning compilers (e.g., Apache TVM, MLIR)
Strong experience in GPU kernel development and performance optimizations (especially using CUDA C/C++, cuTile, Triton, or similar)
Open source project ownership or contributions
You will also be eligible for equity.