AI Test Automation Engineer

Posting date: 14.01.2026
Employer: OnTarget Labs
Salary: not specified
City: Almaty
Required experience: 3 to 6 years

OnTarget Labs is a leading international software product development company.
We create the next generation of world-class product lines.
We are looking for an AI Test Automation Engineer to join our innovative product team as a full-time, fully remote member.
We offer plenty of opportunities for professional growth, as well as business trips abroad.
Join our friendly team of IT professionals now!

About the role
We’re looking for a backend test automation engineer to help test systems that include AI-powered features. The work is focused on backend services, APIs, data flows, and integrations with third-party AI services.
Part of this role is building automated AI evaluations into our pipelines so we can continuously validate AI behavior as models, prompts, and data change over time.
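For illustration, a CI-level AI eval can be as small as a parametrized pytest check that asserts stable, deterministic properties of a model-backed endpoint's response. The sketch below is hypothetical and not part of this vacancy: the SUMMARIZER_URL endpoint, the request/response fields, and the sample cases are all assumptions.

```python
# Minimal sketch of an AI eval that could run as a CI quality gate.
# Hypothetical: the endpoint, payload schema, and cases are illustrative only.
import os

import httpx
import pytest

SERVICE_URL = os.getenv("SUMMARIZER_URL", "http://localhost:8000/summarize")

CASES = [
    # (input text, facts the summary should preserve)
    ("The invoice of 1200 EUR is due on 2024-05-01.", ["1200", "eur"]),
    ("The meeting moved from Tuesday to Thursday at 10:00.", ["thursday"]),
]

@pytest.mark.parametrize("text,expected_facts", CASES)
def test_summary_preserves_key_facts(text, expected_facts):
    """Deterministic assertions layered on top of a non-deterministic model."""
    response = httpx.post(SERVICE_URL, json={"text": text}, timeout=30)
    assert response.status_code == 200
    summary = response.json()["summary"].lower()
    missing = [fact for fact in expected_facts if fact not in summary]
    # Fail the pipeline run if the model drops key facts, regardless of wording.
    assert not missing, f"summary is missing key facts: {missing}"
```

Running checks like this on every pipeline run is one way to catch regressions as models, prompts, and data change over time.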

What you’ll do

  • Write and maintain automated tests for backend services and APIs
  • Test AI-assisted features and services, including validating outputs and behavior
  • Add AI evals into CI/CD pipelines to track AI quality over time
  • Test async and event-driven workflows
  • Validate data pipelines that mix structured data with text or documents
  • Cover non-functional requirements like performance, scalability, and reliability
  • Work closely with backend engineers and product to agree on what “good” looks like
  • Contribute to CI checks, quality gates, and test reporting

What we’re looking for

  • Solid backend test automation experience (3+ years)
  • Strong experience with Python and JavaScript/TypeScript
  • Hands-on experience with test frameworks (pytest, unittest, or similar)
  • Experience testing APIs using tools like Postman, Requests/HTTPX, or similar
  • Comfortable working with Docker
  • Working knowledge of SQL for data validation
  • Experience testing backend systems that include AI / ML / LLM components
  • Experience testing non-functional aspects such as performance and scalability
  • Experience working in a cloud environment (AWS, GCP, Azure, etc.)

Nice to have

  • Experience building or maintaining AI evals
  • Familiarity with LLM evaluation tools (DeepEval, Ragas, LangSmith, or similar)
  • Experience dealing with non-deterministic systems (flaky results, hallucinations)
  • Contract testing experience (Pact or similar)
  • Some exposure to prompt engineering or RAG-based systems
  • Familiarity with observability tools (CloudWatch, Kibana, Grafana, etc.)
  • Basic security testing knowledge (OWASP, API security)
  • Experience generating or working with synthetic test data
  • Fluent English

We offer

  • Competitive salary, to be determined based on the interview results
  • Full-time remote work