Voice AI Evaluation Lead

at Deepgram
  • Remote - Worldwide

  • QA

  • Senior

Job description

Company Overview

Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS), and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram’s voice-native foundational models, accessed through APIs or as self-managed software, due to our unmatched accuracy, latency, and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed, and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.

Opportunity

Deepgram is looking for a Voice AI Evaluation Lead to take ownership of how we benchmark and evaluate the performance of our voice AI models. This role is pivotal to the integrity and impact of our AI offerings. You’ll be building robust benchmarking pipelines, producing clear and actionable model cards, and partnering cross-functionally with research, product, QA, marketing, and data labeling to shape how our models are measured, released, and improved. If you love designing evaluations that matter, aligning metrics with product goals, and translating data into insight, this role is for you.

What You’ll Do

  • Build and maintain scalable benchmarking pipelines for model evaluations across STT, TTS, and voice agent use cases (a minimal sketch of one such evaluation step follows this list).

  • Run regular evaluations of production and pre-release models on curated, real-world datasets.

  • Partner with Research, Data, and Engineering teams to develop new evaluation methodologies and integrate them into our development cycle.

  • Design, define, and refine evaluation metrics that reflect product experience, quality, and performance goals.

  • Author comprehensive model cards and internal reports outlining model strengths, weaknesses, and recommended use cases.

  • Work closely with Data Labeling Ops to source, annotate, and prepare evaluation datasets.

  • Collaborate with QA Engineers to integrate model tests into CI/CD and release workflows.

  • Support Marketing and Product with credible, data-backed comparisons to competitors.

  • Track market developments and maintain awareness of competitive benchmarks.

  • Support GTM teams with benchmarking best practices for prospects and customers.
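The posting does not prescribe specific tooling, but to make the benchmarking work above concrete, here is a minimal sketch of one STT evaluation step: scoring model transcripts against human references with word error rate (WER). The JSONL manifest layout and the `transcribe` placeholder are illustrative assumptions, not Deepgram internals.

```python
import json

from jiwer import wer  # open-source WER implementation commonly used for STT evals


def transcribe(audio_path: str) -> str:
    """Placeholder for a call to the model under evaluation (assumption)."""
    raise NotImplementedError


def run_wer_eval(manifest_path: str) -> float:
    """Average WER over a JSONL manifest of {"audio": ..., "reference": ...} rows."""
    scores = []
    with open(manifest_path) as f:
        for line in f:
            example = json.loads(line)
            hypothesis = transcribe(example["audio"])
            scores.append(wer(example["reference"], hypothesis))
    return sum(scores) / len(scores)
```

In a production pipeline a step like this would typically run per dataset and per model version, with results logged so regressions between releases are visible.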

You’ll Love This Role If You

  • Enjoy translating model outputs into human insights that guide product strategy.

  • Are motivated by precision, fairness, and transparency in evaluation.

  • Have a data-minded approach to experimentation and thrive on uncovering what’s working—and what’s not.

  • Take pride in designing clean, repeatable benchmarks that bring clarity to complex systems.

  • Get satisfaction from cross-functional collaboration, working with researchers, product teams, and engineers alike.

  • Want to shape how we define quality and success in speech AI.

  • Are excited by the idea of being a key voice in deciding when, and how, we release new models into the world.

It’s Important To Us That You Have

  • Experience designing, executing, and iterating on evaluation pipelines for ML models.

  • Proficiency in Python and data analysis libraries.

  • Ability to develop automated evaluation systems—whether scripting analysis workflows or integrating with broader ML pipelines.

  • Comfort working with large-scale datasets and crafting meaningful performance metrics and visualizations (see the sketch after this list).

  • Experience using LLMs or internal tooling to accelerate analysis, QA, or pipeline prototyping.

  • Strong communication skills—especially when translating raw data into structured insights, documentation, or dashboards.

  • Proven success working cross-functionally with research, engineering, QA, and product teams.
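Purely as an illustration of the analysis skills listed above, here is a short sketch of summarizing per-example scores into a per-dataset, per-model report with pandas; the dataset names, model labels, and scores are made up.

```python
import pandas as pd

# Hypothetical per-example results collected from an evaluation run.
results = pd.DataFrame(
    {
        "dataset": ["phone_calls", "phone_calls", "meetings", "meetings"],
        "model": ["v2", "v3", "v2", "v3"],
        "wer": [0.12, 0.09, 0.18, 0.15],
    }
)

# Mean WER per dataset and model; the pivot makes regressions between
# model versions easy to spot at a glance.
report = results.pivot_table(index="dataset", columns="model", values="wer", aggfunc="mean")
print(report.round(3))
```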

It Would Be Great If You Had

  • Prior experience evaluating speech-related models, especially STT or TTS systems.

  • Familiarity with model documentation formats (e.g., model cards, eval reports, dashboards).

  • Understanding of competitive benchmarking and landscape analysis for voice AI products.

  • Experience contributing to or owning internal evaluation infrastructure—whether integrating with existing systems or proposing new ones.

  • A background in startup environments, applied research, or AI product deployment.

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC, and NVIDIA, Deepgram has raised over $85 million in total funding. If you’re looking to work on cutting-edge technology and make a significant impact in the AI industry, we’d love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.

We are happy to provide accommodations for applicants who need them.
