AI/LLM Evaluation & Alignment Software Engineer

πŸ’° $135k-$160k
πŸ‡ΊπŸ‡Έ United States - Remote
πŸ’» Software DevelopmentπŸ”΅ Mid-level

Job description

At LeoTech, we are passionate about building software that solves real-world problems in the Public Safety sector. Our software has been used to help the fight against continuing criminal enterprises, drug trafficking organizations, identifying financial fraud, disrupting sex and human trafficking rings and focusing on mental health matters to name a few.

Role

  • This is a remote, WFH role.
  • As an AI/LLM Evaluation & Alignment Engineer on our Data Science team, you will play a critical role in ensuring that our Large Language Model (LLM) and Agentic AI solutions are accurate, safe, and aligned with the unique requirements of public safety and law enforcement workflows. You will design and implement evaluation frameworks, guardrails, and bias-mitigation strategies that give our customers confidence in the reliability and ethical use of our AI systems. This is an individual contributor (IC) role that combines hands-on technical engineering with a focus on responsible AI deployment. You will work closely with AI engineers, product managers, and DevOps teams to establish standards for evaluation, design test harnesses for generative models, and operationalize quality assurance processes across our AI stack.

Core Responsibilities

  • Build and maintain evaluation frameworks for LLMs and generative AI systems tailored to public safety and intelligence use cases.
  • Design guardrails and alignment strategies to minimize bias, toxicity, hallucinations, and other ethical risks in production workflows.
  • Partner with AI engineers and data scientists to define online and offline evaluation metrics (e.g., model drifts, data drifts, factual accuracy, consistency, safety, interpretability).
  • Implement continuous evaluation pipelines for AI models, integrated into CI/CD and production monitoring systems.
  • Collaborate with stakeholders to stress test models against edge cases, adversarial prompts, and sensitive data scenarios.
  • Research and integrate third-party evaluation frameworks and solutions; adapt them to our regulated, high-stakes environment.
  • Work with product and customer-facing teams to ensure explainability, transparency, and auditability of AI outputs.
  • Provide technical leadership in responsible AI practices, influencing standards across the organization.
  • Contribute to DevOps/MLOps workflows for deployment, monitoring, and scaling of AI evaluation and guardrail systems (experience with Kubernetes is a plus).
  • Document best practices and findings, and share knowledge across teams to foster a culture of responsible AI innovation.

What We Value

  • Bachelor’s or Master’s in Computer Science, Artificial Intelligence, Data Science, or related field.
  • 3–5+ years of hands-on experience in ML/AI engineering, with at least 2 years working directly on LLM evaluation, QA, or safety.
  • Strong familiarity with evaluation techniques for generative AI: human-in-the-loop evaluation, automated metrics, adversarial testing, red-teaming.
  • Experience with bias detection, fairness approaches, and responsible AI design.
  • Knowledge of LLM observability, monitoring, and guardrail frameworks e.g Langfuse, Langsmith
  • Proficiency with Python and modern AI/ML/LLM/Agentic AI libraries (LangGraph, Strands Agents, Pydantic AI, LangChain, HuggingFace, PyTorch, LlamaIndex).
  • Experience integrating evaluations into DevOps/MLOps pipelines, preferably with Kubernetes, Terraform, ArgoCD, or GitHub Actions.
  • Understanding of cloud AI platforms (AWS, Azure) and deployment best practices.
  • Strong problem-solving skills, with the ability to design practical evaluation systems for real-world, high-stakes scenarios.
  • Excellent communication skills to translate technical risks and evaluation results into insights for both technical and non-technical stakeholders.

Technologies We Use

  • Cloud & Infrastructure: AWS (Bedrock, SageMaker, Lambda), Azure AI, Kubernetes (EKS), Terraform, ArgoCD.
  • LLMs & Evaluation: HuggingFace, OpenAI API, Anthropic, LangChain, LlamaIndex, Ragas, DeepEval, OpenAI Evals.
  • Observability & Guardrails: Langfuse, GuardrailsAI.
  • Backend & Data: Python (primary), ElasticSearch, Kafka, Airflow.
  • DevOps & Automation: GitHub Actions, CodePipeline.

What You Can Expect

  • Work from home opportunity
  • Enjoy great team camaraderie.
  • Thrive on the fast pace and challenging problems to solve.
  • Modern technologies and tools.
  • Continuous learning environment.
  • Opportunity to communicate and work with people of all technical levels in a team environment.
  • Grow as you are given feedback and incorporate it into your work.
  • Be part of a self-managing team that enjoys support and direction when required.
  • 3 weeks of paid vacation – out the gate!!
  • Competitive Salary.
  • Generous medical, dental, and vision plans.
  • Sick, and paid holidays are offered.

$135,000 - $160,000 a year

Please note the national salary range listed in the job posting reflects the new hire salary range across levels and U.S. locations that would be applicable to the position. The final salary will be commensurate with the candidate’s accepted hiring level and work location. Also, this range represents base salary only and does not include equity, or benefits if applicable.

LeoTech is an equal opportunity employer and does not discriminate on the basis of any legally protected status.

Share this job:
Please let LEO Technologies know you found this job on Remote First Jobs πŸ™

Benefits of using Remote First Jobs

Discover Hidden Jobs

Unique jobs you won't find on other job boards.

Advanced Filters

Filter by category, benefits, seniority, and more.

Priority Job Alerts

Get timely alerts for new job openings every day.

Manage Your Job Hunt

Save jobs you like and keep a simple list of your applications.

Search remote, work from home, 100% online jobs

We help you connect with top remote-first companies.

Search jobs

Hiring remote talent? Post a job

Frequently Asked Questions

What makes Remote First Jobs different from other job boards?

Unlike other job boards that only show jobs from companies that pay to post, we actively scan over 20,000 companies to find remote positions. This means you get access to thousands more jobs, including ones from companies that don't typically post on traditional job boards. Our platform is dedicated to fully remote positions, focusing on companies that have adopted remote work as their standard practice.

How often are new jobs added?

New jobs are constantly being added as our system checks company websites every day. We process thousands of jobs daily to ensure you have access to the most up-to-date remote job listings. Our algorithms scan over 20,000 different sources daily, adding jobs to the board the moment they appear.

Can I trust the job listings on Remote First Jobs?

Yes! We verify all job listings and companies to ensure they're legitimate. Our system automatically filters out spam, junk, and fake jobs to ensure you only see real remote opportunities.

Can I suggest companies to be added to your search?

Yes! We're always looking to expand our listings and appreciate suggestions from our community. If you know of companies offering remote positions that should be included in our search, please let us know. We actively work to increase our coverage of remote job opportunities.

How do I apply for jobs?

When you find a job you're interested in, simply click the 'Apply Now' button on the job listing. This will take you directly to the company's application page. We kindly ask you to mention that you found the position through Remote First Jobs when applying, as it helps us grow and improve our service πŸ™

Apply