What you’ll be doing:
Run multi‑node training/inference jobs on large GPU clusters to assess performance, validate usability, improve products, and create developer education.
Design benchmark suites that spotlight NVIDIA hardware, networking, and software stacks.
Profile deep‑learning workloads, identify bottlenecks, and deliver optimization guidance.
Produce concise tutorials, scripts, and whitepapers for customers and tech press.
Analyze competitive solutions and craft data‑driven product positioning.
Present live demos at GTC, CES, SIGGRAPH, and other global conferences.
What we need to see:
Passion for AI infrastructure and performance optimization.
3+ years in software development, tech marketing, evangelism, or similar roles.
BS/MS in CS, CE, EE, or related field (or equivalent experience).
Strong Python and C++ skills for AI and HPC work.
Hands‑on multi‑node experience with Slurm, Kubernetes, or cloud service provider (CSP) clusters.
Solid grasp of DL architectures, PyTorch, and distributed training methods.
Understanding of CPU/GPU architecture plus CUDA, cuDNN, TensorRT‑LLM, Triton, and NCCL.
Excellent written and verbal communication for technical and executive audiences.
Ways to stand out from the crowd:
Hands‑on experience setting up and tuning HPC clusters with Slurm, Kubernetes, or other schedulers.
Public technical blogs, talks, forum activity, or notable open‑source projects, as well as prior work with customers or the technical press on AI performance topics.
Exceptional communication skills that simplify complex technology for diverse audiences.
Familiarity with modern LLM architectures, with the ability to write PyTorch code and occasional custom GPU kernels.
Expertise in InfiniBand, NVLink, RoCE, RDMA, and collective‑comm libraries.
You will also be eligible for equity and benefits.