Senior Applied Scientist (Agentic Systems Validation)
TraceLink
Company overview:
TraceLink’s software solutions and Opus Platform help the pharmaceutical industry digitize their supply chain and enable greater compliance, visibility, and decision making. It reduces disruption to the supply of medicines to patients who need them, anywhere in the world.
Founded in 2009 with the simple mission of protecting patients, today Tracelink has 8 offices, over 800 employees and more than 1300 customers in over 60 countries around the world. Our expanding product suite continues to protect patients and now also enhances multi-enterprise collaboration through innovative new applications such as MINT.
Tracelink is recognized as an industry leader by Gartner and IDC, and for having a great company culture by Comparably.
Senior Applied Scientist (Agentic Systems Validation)
Location: Pune, India
About the Role
We’re hiring a Senior Applied Scientist (Agentic Systems Validation) to lead validation, quality engineering, and production reliability for agentic AI systems, including LLM-powered features, tool-using agents, and automated workflows.
This role focuses on evaluating and operationalizing agentic AI systems in production, not training foundation models. You’ll partner closely with ML engineers and platform teams to ensure agentic experiences are reliable, safe, observable, and scalable.
What You’ll Do
Own quality and validation strategy for LLM- and agent-based features.
-
Design end-to-end tests for complex agentic workflows:
multi-agent coordination
long-running workflows and state transitions
tool/API execution, retries, and failure recovery
memory, context, and grounding behavior
Build test harnesses, golden datasets, and regression suites for AI behavior consistency.
Handle LLM non-determinism using deterministic testing techniques (mocking, replay, stubbing).
Define AI quality, safety, and reliability metrics; contribute to dashboards and alerts.
Monitor production behavior, investigate incidents, and drive regression prevention.
Integrate AI validation into CI/CD pipelines with strong quality gates.
Mentor engineers and drive org-wide standards for agentic AI quality.
AI Safety & Responsible AI
Test guardrails against prompt injection, jailbreaks, and unsafe tool execution.
Validate safe handling of sensitive data and high-risk scenarios.
What We’re Looking For
5+ years in SDET, QA automation, applied AI validation, or software engineering with deep testing ownership.
Hands-on experience testing LLM-powered or AI-driven systems.
Strong understanding of AI testing challenges (non-determinism, hallucinations, drift).
Strong coding skills in Python, Java, or TypeScript/JavaScript.
Experience with API/integration testing, CI/CD, and automation frameworks.
Familiarity with observability tools and debugging distributed systems.
Nice to Have
Experience with RAG systems, vector databases, or agent frameworks.
AI security testing experience (prompt injection, jailbreak prevention).
Experience building AI evaluation or monitoring dashboards.
Education
Master’s degree in Computer Science, Engineering, Data Science, or equivalent experience.
Please see the Tracelink Privacy Policy for more information on how Tracelink processes your personal information during the recruitment process and, if applicable based on your location, how you can exercise your privacy rights. If you have questions about this privacy notice or need to contact us in connection with your personal data, including any requests to exercise your legal rights referred to at the end of this notice, please contact Candidate-Privacy@tracelink.com.