Campus Pride Jobs

Job Information

NVIDIA Senior Software Engineer, Deep Learning Inference in Ramat Gan, Israel

NVIDIA has been at the forefront of the deep learning revolution, pioneering innovations that have transformed the entire field. As the leading provider of GPUs and AI computing platforms, NVIDIA has empowered researchers and engineers worldwide to accelerate breakthroughs in artificial intelligence.

We seek a versatile Senior Software Engineer who is passionate about performance optimization and generative AI. Our team builds software solutions that enable efficient inference on the latest and greatest generative AI models. We tackle problems on all levels of the stack—from server-level request batching to GPU kernel fusion—and collaborate with teams across diverse disciplines to push NVIDIA's hardware to its full potential.

What you’ll be doing:

  • Collaborate with research teams to onboard new LLMs and VLMs into NVIDIA's open-source AI runtimes

  • Optimize inference workloads using sophisticated profiling and simulation tools

  • Build SOLID, extensible inference software systems and refine robust APIs

  • Implement and debug low-level GPU code to harness the latest hardware features

  • Own end-to-end inference acceleration features and work with teams around the world to deliver production-grade products

What we need to see:

  • B.Sc., M.Sc. or equivalent experience in Computer Science or Computer Engineering

  • 5+ years of relevant hands-on software engineering experience

  • Profound knowledge of software design principles

  • Strong proficiency in at least one systems language and one scripting language

  • Strong grasp of machine learning concepts

  • People person with excellent communication skills who enjoys collaboration and teamwork

Ways to stand out from the crowd:

  • Familiarity with NVIDIA's DL software stack, e.g. Triton Inference Server (https://github.com/triton-inference-server/server), TensorRT-LLM (https://github.com/NVIDIA/TensorRT-LLM), and Model Optimizer (https://github.com/NVIDIA/TensorRT-Model-Optimizer)

  • Proven track record of performance modeling, profiling, debugging, and development in a performance-critical setting with NVIDIA's accelerators

  • Familiarity with LLM quantization, fine-tuning, and caching algorithms

  • Proficiency in GPU kernel programming (CUDA or OpenCL)

  • Prior experience working on a large software project with 50+ contributors

NVIDIA is widely considered one of the world’s most desirable employers in the technology field. We have some of the most forward-thinking and hardworking people working for us. If you're creative and autonomous, we want to hear from you! We are committed to fostering a diverse work environment and are proud to be an equal-opportunity employer. We highly value diversity in our current and future employees. We do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.