Job Description
This is Engineering at Lattice
Lattice’s Engineering team is continuously improving both our product and our craft. We build maintainable, performant systems using modern technologies, and we collaborate closely with product and design to deliver exceptional user experiences.
Our AI Engineering team is building the systems that power how AI works across Lattice. We’ve laid the foundations: traces are flowing and evals are running. We’re now focused on defining how our AI products are measured, improved, and trusted at scale. This is a high-ownership role where you’ll help shape evaluation methodology, agent architecture, and the core systems that determine how AI performs in production.
What You Will Do
Evaluation Infrastructure
- Design and ship a robust, end-to-end AI evaluation framework, covering offline evals, production tracing, and human-in-the-loop feedback loops, connected across all of Lattice’s AI use cases.
- Define and instrument the metrics that actually matter: agent task completion, hallucination rates, response quality, user engagement, and downstream business outcomes.
- Build and maintain evaluation datasets, test harnesses, and automated scoring pipelines to catch regressions before they ship.
- Identify and surface the drivers of agent quality improvement, giving the team clear signals on where to invest.
Agent Architecture & Infrastructure
- Architect and implement reusable agent infrastructure: multi-turn conversation workflows, recommendation services, LLM DAGs, and standardized agent topology patterns using LangGraph.
- Build and scale RAG pipelines and retrieval infrastructure, including vector store management and retrieval quality optimization.
- Make principled build vs. buy decisions across LLM providers, agent frameworks, and evaluation tooling, balancing capability, cost, latency, and vendor risk.
- Contribute to production AI systems with a strong focus on reliability, observability, and performance, not just prototypes.
Technical Leadership & Collaboration
- Own projects end-to-end: scope them, drive them to completion, and bring in the right people at the right time.
- Partner with engineering leads and managers to inform technical direction on agent quality and evaluation strategy; you’ll be expected to hold intelligent, substantive conversations about methodology, not just implementation.
- Raise the AI engineering bar across the broader team through code review, documentation, and thoughtful technical debate.
What You Will Bring to the Table
Experience
- 5+ years of professional software engineering experience with significant time spent on production AI/ML systems.
- Deep hands-on experience with LLM-based systems: prompt engineering, RAG pipelines, agent orchestration, evaluation metrics, and model fine-tuning.
- Proven ability to work with data and apply statistics, especially in experiment design and analysis.
- Proven ability to build and operate agentic AI systems in production: multi-step workflows, multi-agent topologies, and the failure modes that come with them.
- Strong command of AI evaluation: you’ve built eval frameworks before, you know the difference between a good eval and a vanity metric, and you have opinions about it.
- Production-grade Python engineering: clean, maintainable, testable code.
Technical Skills
- LangGraph or comparable agent orchestration frameworks. You’ve built real agent workflows with it, not just tutorials.
- LangSmith or comparable LLM observability tooling for tracing, evaluation, and debugging.
- You read AI papers and blogs regularly and are a trusted source on AI trends.
- Vector databases (Pinecone or similar) and retrieval system design.
- AWS ecosystem or other cloud infrastructure (e.g., GCP). Comfortable with Lambdas, queues, and cloud-native architecture.
- Familiarity with TypeScript is a plus. Our full-stack engineers use it and cross-pollination is valuable.
Ways of Working
- Clear eyes: you see problems as they are, not as you’d like them to be. You surface hard truths early and address them directly.
- Ship, shipmate, self: you prioritize the product and your teammates. Low ego, high ownership.
- You’re as comfortable in ambiguity as you are in well-defined problems: early foundations mean you’ll encounter both.
- Strong technical communication: you can debate evaluation methodology with an AI lead and explain it clearly to an EM in the same afternoon.
Nice to Have
- Experience with RLHF, LoRA, or other model adaptation techniques.
- Background in traditional ML (supervised/unsupervised learning, neural networks) and the judgment to know when an LLM is overkill.
- Experience with MLOps tooling: MLflow, DataDog, CI/CD pipelines for model deployment.
- Published work, conference talks, or open-source contributions in AI/ML.
- Experience in HR tech, people analytics, or other domains where data quality and trust are critical.
The estimated annual cash salary for this role is CAD $123,750 - CAD $165,000. This position is also eligible for incentive stock options, subject to the terms of Lattice’s applicable plans.
Benefits: The Company offers the following benefits for this position, subject to applicable eligibility requirements: Medical insurance; Dental insurance; Life, AD&D, and Disability Insurance; Natural Disaster Support Program; Wellness Apps; Paid Parental Leave, Paid Time off inclusive of holidays and sick time; Working Remotely Stipend; One time WFH Office Set-Up Stipend; Retirement Plan; Financial Planning; and Learning & Development Budget.
Note on Pay Transparency:
Lattice provides an estimate of the compensation for roles that may be hired as required by regulations. Compensation may vary based on (a) location, as Lattice factors in specific location when benchmarking compensation for most roles; (b) individual candidate skills and qualifications; and (c) individual candidate experience. Additionally, Lattice leverages current market data to determine compensation, so posted compensation figures are subject to change as new market data becomes available. The salary, other compensation, and benefits information is accurate as of the date of this posting. Lattice reserves the right to modify this information at any time, subject to applicable law.
Lattice welcomes and encourages applications from people with disabilities. Accommodations are available on request for candidates taking part in all aspects of the recruitment and selection process.
About Lattice
Lattice is on a mission to build cultures where employees and their companies thrive. In an age where employees have more choices than ever before, businesses that put employees first are winning 🏅– and Lattice is building the tools to empower those people-centric companies.
Lattice is a people success platform that offers performance reviews, employee engagement surveys, real-time feedback, weekly check-ins, goal setting, and career planning in a way that allows companies to focus on employee development, growth, and engagement – yielding stronger employee retention, performance, and impact to the bottom line 📈. Since launching in 2016, we have grown to more than 5,000 customers globally, including brands like Loom, Robinhood, and Gusto.
Lattice is committed to equal treatment and opportunity in all aspects of recruitment, selection, and employment without regard to gender, race, religion, national origin, ethnicity, disability, gender identity/expression, sexual orientation, veteran or military status, or any other category protected under the law. Lattice is an equal opportunity employer; committed to a community of inclusion, and an environment free from discrimination, harassment, and retaliation.
By clicking the “Submit Application” button below, you consent to Lattice processing your personal information for the purpose of assessing your candidacy for this position in accordance with Lattice’s Job Applicant Privacy Policy.