Principal Software Engineer - Kubernetes

Neural Magic

Software Engineering
Somerville, MA, USA
Posted on Thursday, May 30, 2024

About Neural Magic

Based in Somerville, Massachusetts, Neural Magic is a Series A startup backed by leading investors including Andreessen Horowitz, NEA, Pillar, VMware, Verizon Ventures, Comcast Ventures, and Amdocs. At Neural Magic, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise on the planet. Neural Magic accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As a leading developer and maintainer of the vLLM project and an inventor of state-of-the-art techniques for model quantization and sparsification, Neural Magic provides a stable platform for enterprises to build, optimize, and scale LLM deployments.

Our Mission

Neural Magic is on a mission to bring the power of open-source LLMs and vLLM to every enterprise on the planet.

Your Role

As a Principal Software Engineer with expertise in Kubernetes, you will be at the center of designing, productizing, and working with customers on Kubernetes-based deployment architectures for vLLM.

Join us in shaping the future of AI!


  • Use your experience with Kubernetes and cloud-native deployments to build reference architectures for vLLM, taking into account the unique considerations of deploying LLMs, including:
    • Autoscaling an LLM (driven by non-standard K8s metrics)
    • Managing very large model artifacts and long startup times
    • Monitoring and logging integrations
  • Lead customer engagements that focus on building deployments of vLLM in customer environments
  • Lead developer marketing associated with vLLM + Kubernetes, including driving blogs and videos explaining the concepts, designs, and considerations
  • Represent Neural Magic and the vLLM Project in Kubernetes open-source community discussions and contribute to Kubernetes repos (including Kubernetes Serving Working Group and KServe Working Group)
  • Contribute to the vLLM Project any features needed to make cloud deployments and K8s integration seamless
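
To give a flavor of the autoscaling consideration above, the sketch below shows a HorizontalPodAutoscaler scaling a hypothetical vLLM Deployment on a per-pod queue-depth metric instead of CPU. The deployment name, metric name, and thresholds are assumptions for illustration only; the metric would need to be exposed to the Kubernetes custom metrics API (for example via a Prometheus adapter).

```yaml
# Sketch only: scales a hypothetical "vllm-server" Deployment on a
# per-pod queue-depth metric rather than CPU utilization. The metric
# name "vllm_num_requests_waiting" is an assumption and must be made
# available through the custom metrics API (e.g., prometheus-adapter).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: vllm-server-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vllm-server
  minReplicas: 1
  maxReplicas: 8
  metrics:
    - type: Pods
      pods:
        metric:
          name: vllm_num_requests_waiting
        target:
          type: AverageValue
          averageValue: "10"
  behavior:
    scaleUp:
      # LLM pods pull very large model artifacts and start slowly,
      # so a stabilization window helps avoid scaling thrash.
      stabilizationWindowSeconds: 120
```

The `behavior` stanza reflects the second consideration in the list: because startup times are long, conservative scale-up policies matter more for LLM serving than for typical stateless services.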