AI QA Engineer

Job description

We are seeking an AI QA Engineer specializing in LLM/NLP model quality assurance to ensure our language models and NLP applications meet the highest standards of accuracy, reliability, and safety. In this role, you will develop rigorous testing strategies for our AI models, including large language models, and lead efforts to detect issues such as factual errors, bias, and instability in model outputs. You will work closely with data scientists and engineers to integrate testing into the model development lifecycle, from early prototyping to post-deployment monitoring. This position is ideal for someone with a strong quality assurance background and a passion for AI who can bridge the gap between traditional software QA and the unique challenges of evaluating AI systems (chatbots, NLP APIs, etc.) in the context of our Ukrainian LLM project and other products.

About us:

Kyivstar.Tech is a Ukrainian hybrid IT company and a resident of Diia.City.

We are a subsidiary of Kyivstar, one of Ukraine’s largest telecom operators.

Our mission is to change lives in Ukraine and around the world by creating technological solutions and products that unleash the potential of businesses and meet users’ needs.

More than 500 KS.Tech specialists work daily in various areas: mobile and web solutions, as well as design, development, support, and technical maintenance of high-performance systems and services.

We believe in innovations that truly bring quality changes and constantly challenge conventional approaches and solutions. Each of us is an adherent of entrepreneurial culture, which allows us never to stop, to evolve, and to create something new.

Responsibilities:

•Develop and execute comprehensive AI model evaluation strategies to assess the performance of our NLP and LLM systems. Define testing methodologies that cover correctness (e.g., accuracy of responses, compliance with requirements), consistency, and fairness of model outputs.

•Analyze benchmarking datasets, identify gaps, and develop the first SOTA benchmarking framework for the Ukrainian language.

•Analyze training datasets and collaborate with data engineers on improving processing pipelines. Implement a training-data testing framework.

•Implement both automated and manual testing for applications powered by large language models. This includes creating automation scripts or test harnesses that can systematically query models with test cases (prompts/questions) and verify responses, as well as performing hands-on review of outputs for subjective evaluation.
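
A harness of this kind can be sketched in a few lines of Python. This is an illustration only, not a prescribed implementation: `query_model` is a hypothetical stand-in for whatever client actually calls the model under test, and the keyword check is just one simple verification strategy among many.

```python
def query_model(prompt: str) -> str:
    # Placeholder: replace with a real call to the model under test
    # (an HTTP API, a local model, etc.).
    raise NotImplementedError

def keyword_check(response: str, required: list) -> bool:
    """Pass if every required keyword appears in the response (case-insensitive)."""
    lower = response.lower()
    return all(kw.lower() in lower for kw in required)

# Each test case pairs a prompt with a simple, automatically verifiable expectation.
TEST_CASES = [
    {"prompt": "Яка столиця України?", "required": ["Київ"]},
    {"prompt": "Скільки областей в Україні?", "required": ["24"]},
]

def run_suite(model=query_model):
    """Query the model with every test case and record pass/fail per prompt."""
    return [
        {"prompt": case["prompt"],
         "passed": keyword_check(model(case["prompt"]), case["required"])}
        for case in TEST_CASES
    ]
```

In practice, simple keyword checks are complemented by human review for outputs where correctness is subjective.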

•Build and curate high-quality test datasets for model evaluation. Manage a repository of test inputs (e.g., sample user queries, edge-case scenarios, conversational dialogues) along with expected or reference outputs when applicable. Ensure these datasets are diverse, balanced, and representative of real-world use cases, including Ukrainian language content and culturally relevant scenarios.

•Develop pipelines for synthetic data generation and adversarial example creation to challenge the model’s robustness. Use techniques such as paraphrasing, noise injection, or adversarial prompting to produce test cases that can reveal model weaknesses.
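
As a minimal sketch of such a pipeline (the specific perturbations, and the appended Ukrainian instruction, are invented examples), noise injection and simple prompt variations can be generated with a few lines of standard-library Python:

```python
import random

def inject_noise(text: str, rate: float = 0.05, seed: int = 0) -> str:
    """Randomly swap adjacent characters to simulate typos."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def make_adversarial_variants(prompt: str) -> list:
    """Produce simple perturbed variants of a prompt for robustness testing."""
    return [
        inject_noise(prompt),                       # typo noise
        prompt.upper(),                             # casing change
        prompt + " Відповідай лише одним словом.",  # appended instruction
    ]
```

A robust model should give substantially the same answer across such variants; divergence flags a weakness worth investigating.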

•Design and maintain testing frameworks to detect hallucinations, biases, and other failure modes in LLM outputs.

•Define and track key AI performance metrics. Monitor metrics like factual accuracy, coherence/fluency, relevancy to prompt, response diversity, latency of response, and user satisfaction ratings if available. Establish baseline metrics for each new model version and ensure subsequent iterations meet or exceed these benchmarks.
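
A baseline comparison of this kind can be as simple as the following sketch (metric names and values are illustrative; it assumes higher is better for every metric):

```python
def check_regressions(baseline: dict, candidate: dict, tolerance: float = 0.01) -> list:
    """Return names of metrics where the candidate model falls more than
    `tolerance` below the baseline. Assumes higher is better for all metrics."""
    return [name for name, base_value in baseline.items()
            if candidate.get(name, 0.0) < base_value - tolerance]

# Illustrative metric snapshots for two model versions.
baseline = {"factual_accuracy": 0.91, "coherence": 0.88, "relevancy": 0.93}
candidate = {"factual_accuracy": 0.92, "coherence": 0.84, "relevancy": 0.93}
```

Here the candidate would be flagged for a coherence regression despite its accuracy gain.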

•Work closely with the AI development team to integrate QA in the development process. Collaborate with data scientists to test models at early stages (e.g., evaluating prototypes before full deployment), and with ML engineers to include automated tests in CI/CD pipelines for model updates.

•Debug and analyze AI model failures. When tests uncover issues (e.g., a model consistently gives incorrect information in a certain domain or shows a bias), investigate and identify root causes by analyzing model outputs and underlying data. Provide clear, detailed reports on issues with steps to reproduce and potential causes.

•Provide feedback and recommendations for model improvement. Work with prompt engineers or NLP scientists to refine prompts and instructions that guide the model towards better performance.

•Implement continuous monitoring in production to catch regressions or new issues. Set up mechanisms to regularly evaluate live model outputs (via sampling or user feedback analysis) and alert the team if any quality metrics degrade over time (indicative of model drift or unforeseen use cases).

•Maintain comprehensive test documentation and reports. Document test plans, test case suites, and summarize the results of evaluations for each model version (including graphs/metrics and qualitative findings). Communicate findings to both technical teams and stakeholders in a clear, actionable manner.

Required Qualifications:

QA Experience:

•3+ years in a Quality Assurance or Testing role, with at least part of that time focused on testing AI, ML, or complex data-driven systems, plus 2+ years in data analysis.

•Strong foundation in QA methodologies, test planning, and test case design.

•Experience writing test plans and handling bug tracking for software projects.

AI/ML Knowledge:

•Familiarity with machine learning concepts and specific challenges of testing AI models.

•Experience with AI/ML testing frameworks and LLM evaluation methodologies – for example, knowledge of how to measure model accuracy on benchmarks, how to perform A/B testing on model versions, or using frameworks like Hugging Face’s evaluation tools or custom Python-based testing.

NLP Domain Skills:

•Solid understanding of Natural Language Processing tasks and common failure modes of language models.

•Awareness of issues like model hallucination (making up facts), bias in AI (and methods to test for bias), and the importance of context in language understanding.

•Ideally, hands-on experience testing chatbots, virtual assistants, or language generation systems.

Programming & Tools:

•Proficiency in Python for developing test automation and evaluation scripts.

•Familiarity with testing frameworks (PyTest, unittest) and libraries commonly used in ML/NLP (pandas, numpy for data handling; possibly Hugging Face transformers for model interfacing).

•Experience with tools for dataset handling and annotation; ability to write simple scripts to manipulate and evaluate text data.

Data Management:

•Experience creating and managing test datasets, including annotation and labeling processes.

•Comfortable with basic data engineering to gather logs or outputs from models and analyze them.

•Knowledge of using version control for test scripts and maintaining a repository of test cases.

Analytical Skills:

•Strong problem-solving and debugging skills applied specifically to AI outputs.

•Ability to notice patterns in model errors and analytically determine what they have in common.

•Capacity to interpret model evaluation metrics and translate them into actionable improvements.

Communication:

•Excellent written and verbal communication skills.

•Able to clearly document bugs, write detailed QA reports, and discuss issues with developers and researchers.

•Fluent Ukrainian is a must, as our LLM is oriented towards Ukrainian – you should be able to evaluate outputs in Ukrainian for correctness and nuance.

Attention to Detail:

•A keen eye for spotting subtle errors or oddities in AI behavior.

•Patience and thoroughness in performing manual testing when needed, and creativity in thinking of edge cases or tricky scenarios to test the model’s limits.

Preferred Qualifications:

AI Testing Tools:

•Experience with specialized tools or frameworks for AI testing, such as model evaluation harnesses, adversarial testing platforms, or crowdsourced evaluation methods.

•Familiarity with techniques like prompt engineering and how prompt changes affect model output quality.

Statistical Analysis:

•Ability to perform statistical analyses on model performance results (significance testing for A/B comparisons, etc.) to determine if changes are improvements.
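
For instance (one of several possible approaches), a two-proportion z-test on the pass rates of two model versions needs only the standard library:

```python
import math

def two_proportion_z(passes_a: int, n_a: int, passes_b: int, n_b: int) -> float:
    """z-statistic for comparing pass rates of model version A vs. B.
    |z| above ~1.96 suggests a significant difference at the 5% level."""
    p_a, p_b = passes_a / n_a, passes_b / n_b
    p_pool = (passes_a + passes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

For example, 90/100 vs. 80/100 passes gives z ≈ 1.98, a borderline-significant improvement, so a larger test set might be warranted before concluding anything.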

•Understanding of experiment design in AI (e.g., proper control groups for new model versions).

Continuous Integration:

•Experience integrating tests into CI/CD pipelines for ML – for example, automatically evaluating a model on a validation set every time it’s updated, and blocking deployment if it fails certain criteria.
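
As a minimal sketch of such a gate (metric names and thresholds are invented), the pass/fail decision reduces to a threshold check whose result maps to the CI process exit code:

```python
def gate_deployment(metrics: dict, thresholds: dict) -> bool:
    """Return True (allow deployment) only if every metric meets its threshold.
    Assumes higher is better for all metrics."""
    failures = [name for name, minimum in thresholds.items()
                if metrics.get(name, 0.0) < minimum]
    for name in failures:
        print(f"GATE FAIL: {name}={metrics.get(name, 0.0)} < {thresholds[name]}")
    return not failures
```

A CI job would run the evaluation, feed the resulting metrics into a gate like this, and fail the pipeline when it returns False.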

•Familiarity with ML model versioning and deployment workflows.

Security & Compliance Testing:

•Knowledge of testing AI models for security and compliance issues – e.g. prompt injection attacks on LLMs, data privacy in outputs, or ensuring no disallowed content is generated according to usage policies.

UX Perspective:

•Some experience or understanding of user experience as it relates to AI products.

•Being able to anticipate how end-users might interact with the AI (for instance, phrasing questions in unexpected ways) and ensuring the model handles such interactions gracefully.

Testing Certifications:

•Certifications or formal training in Quality Assurance, software testing (such as ISTQB), or AI/ML are a plus, demonstrating a commitment to the discipline.

What we offer:

•Office or remote — it’s up to you. You can work from anywhere, and we will arrange your workplace.

•Remote onboarding.

•Performance bonuses for everyone (annual or quarterly — depends on the role).

•We train employees: with the opportunity to learn through the company’s library, internal resources, and programs from partners.

•Health and life insurance.

•Wellbeing program and corporate psychologist.

•Reimbursement of expenses for Kyivstar mobile communication.
