Lead Engineer, Inference Platform

💰 $137k-$270k
🇺🇸 United States - Remote
💻 Software Development
🟣 Senior

Job description

MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. We enable organizations of all sizes to easily build, scale, and run modern applications by helping them modernize legacy workloads, embrace innovation, and unleash AI. Our industry-leading developer data platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available in more than 115 regions across AWS, Google Cloud, and Microsoft Azure. Atlas allows customers to build and run applications anywhere—on premises or across cloud providers. With offices worldwide and over 175,000 new developers signing up to use MongoDB every month, it’s no wonder that leading organizations, like Samsung and Toyota, trust MongoDB to build next-generation, AI-powered applications.

About the Role

We’re looking for a Lead Engineer, Inference Platform to join our team building the inference platform for embedding models that power semantic search, retrieval, and AI-native features across MongoDB Atlas.

This role is part of the broader Search and AI Platform team and involves close collaboration with AI engineers and researchers from our Voyage.ai acquisition, who are developing industry-leading embedding models. Together, we’re building the infrastructure that enables real-time, high-scale, and low-latency inference — all deeply integrated into Atlas and optimized for developer experience.

As a Lead Engineer, Inference Platform, you’ll be hands-on with design and implementation, while working with engineers across experience levels to build a robust, scalable system. The focus is on latency, availability, observability, and scalability in a multi-tenant, cloud-native environment. You will also be responsible for guiding the technical direction of the team, mentoring junior engineers, and ensuring the delivery of high-quality, impactful features.

We are looking to speak with candidates based in Palo Alto for our hybrid working model.

What You’ll Do

  • Partner with Search Platform and Voyage.ai AI engineers and researchers to productionize state-of-the-art embedding models and rerankers, supporting both batch and real-time inference
  • Lead key projects around performance optimization, GPU utilization, autoscaling, and observability for the inference platform
  • Design and build components of a multi-tenant inference service that integrates with Atlas Vector Search, driving capabilities for semantic search and hybrid retrieval
  • Contribute to platform features like model versioning, safe deployment pipelines, latency-aware routing, and model health monitoring
  • Collaborate with peers across ML, infra, and product teams to define architectural patterns and operational practices that support high availability and low latency at scale
  • Guide decisions on model serving architecture, using tools such as vLLM and ONNX Runtime along with container orchestration on Kubernetes
  • Provide technical leadership and mentorship to junior engineers, fostering a culture of technical excellence and continuous improvement within the team

Who You Are

  • 8+ years of engineering experience in backend systems, ML infrastructure, or scalable platform development, and the ability to provide technical leadership and guidance to a team of engineers
  • Expertise in serving embedding models in production environments
  • Strong systems skills in languages like Go, Rust, C++, or Python, and experience profiling and optimizing performance
  • Comfortable working on cloud-native distributed systems, with a focus on latency, availability, and observability
  • Familiarity with inference runtimes and vector search systems (e.g., Faiss, HNSW, ScaNN)
  • Proven ability to collaborate across disciplines and experience levels, from ML researchers to junior engineers
  • Experience with high-scale SaaS infrastructure, particularly in multi-tenant environments
  • 1+ years of experience serving as tech lead for a large-scale ML inference or training platform software project

Nice to Have

  • Prior experience working with model teams on inference-optimized architectures
  • Background in hybrid retrieval, prompt-based pipelines, or retrieval-augmented generation (RAG)
  • Contributions to relevant open-source ML serving infrastructure
  • 1+ years of experience in managing a technical team focused on ML inference or training infrastructure

Why Join Us

  • Be part of shaping the future of AI-native developer experiences on the world’s most popular developer data platform
  • Collaborate with ML experts from Voyage.ai to bring cutting-edge research into production at scale
  • Solve hard problems in real-time inference, model serving, and semantic retrieval — in a system used by thousands of customers worldwide
  • Work in a culture that values mentorship, autonomy, and strong technical craft
  • Competitive compensation, equity, and career growth in a hands-on technical leadership role

To drive the personal growth and business impact of our employees, we’re committed to developing a supportive and enriching culture for everyone. From employee affinity groups to fertility assistance and a generous parental leave policy, we value our employees’ wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it’s like to work at MongoDB, and help us make an impact on the world!

MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter.

MongoDB, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type and makes all hiring decisions without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

REQ ID: 3263228668

MongoDB’s base salary range for this role is posted below. Compensation at the time of offer is unique to each candidate and based on a variety of factors such as skill set, experience, qualifications, and work location. Salary is one part of MongoDB’s total compensation and benefits package. Other benefits for eligible employees may include: equity, participation in the employee stock purchase program, flexible paid time off, 20 weeks fully-paid gender-neutral parental leave, fertility and adoption assistance, 401(k) plan, mental health counseling, access to transgender-inclusive health insurance coverage, and health benefits offerings. Please note, the base salary range listed below and the benefits in this paragraph are only applicable to U.S.-based candidates.

MongoDB’s base salary range for this role in the U.S. is:

$137,000—$270,000 USD
