Sr. Quality Assurance / SDET Engineer, GenAI/AI Platform
About Us:
We love going to work and think you should too. Our team is dedicated to trust, customer obsession, agility, and striving to be better everyday. These values serve as the foundation of our culture, guiding our actions and driving us towards excellence. We foster a culture of performance and recognition, allowing us to transform growth as we enable our employees to do the best work of their careers.
This position is located in Pune. You'll be working in a major tech center of Pune, India. Across the globe, our Centers of Energy serve as hubs where we accelerate productivity and collaboration, inspire creativity, and cultivate a culture of connection and celebration. Our teams coordinate their time in Centers of Energy to reflect how they work best.
To learn more about life at LogicMonitor, check out our Careers Page.
What You'll Do:
LogicMonitor® is the AI-first hybrid observability platform powering the next generation of digital infrastructure. LogicMonitor delivers complete visibility and actionable intelligence across on-premises, cloud, and edge environments. By anticipating issues before they strike, optimizing resources in real time, and enabling faster, smarter decisions, LogicMonitor helps IT and business leaders protect margins, accelerate innovation, and deliver exceptional digital experiences without compromise.
Our customers love LogicMonitor's ability to bring cloud and traditional IT together into one view, as seen in minimal churn rates, expansion business, and exciting new customer references. In fact, LogicMonitor has received the highest Net Promoter Score of any IT Infrastructure Management provider. LogicMonitor also boasts high employee satisfaction. We have been certified as a Great Place To Work®, and named one of BuiltIn's Best Places to Work for the seventh year in a row!
LogicMonitor is looking for a highly skilled Senior QA/SDET with 4 to 5 years of experience to build and scale automated test frameworks for Generative AI features across our observability platform. In this role, you will ensure the quality, reliability, safety, and performance of LLM-based workflows, including AI assistants, Retrieval-Augmented Generation (RAG) pipelines, AI-generated incident summaries, auto-remediation agents, tool calling, and AI-driven insights. You will work closely with engineering, product, and applied AI teams to validate AI experiences in production.Here's a closer look at this key role:
-
1. Test Strategy for GenAI Features
- Define end-to-end test strategies for GenAI-driven product features, including:
- AI assistant chat flows (multi-turn conversations)
- AI-generated RCA summaries and incident timelines
- RAG-based responses using knowledge base, tickets, and observability signals
- Agent execution flows (tool calling, action orchestration)
- Establish quality standards for AI output across:
- Factuality, relevance, completeness, groundedness
- Hallucination risk mitigation
- User trust and explainability
- Build scalable automation test frameworks for API and UI experiences.
- Automate validation of:
- AI endpoints (REST / GraphQL)
- Orchestration workflows
- Streaming behaviors
- Structured response schemas (JSON, Pydantic models, etc.)
- Develop regression test packs that run in CI/CD pipelines and validate:
- Deterministic system behavior around non-deterministic model outputs
- Create and maintain LLM evaluation test suites, including:
- Golden datasets (prompt → expected response patterns)
- Rubric-based scoring (LLM judge + deterministic validation checks)
- Failure taxonomy (hallucinations, irrelevant retrieval, refusal bugs, etc.)
- Build automated pipelines for:
- Drift testing
- Prompt regression testing
- Retrieval quality regression testing
- Design and implement performance and load tests for:
- High concurrency chat experiences
- Streaming response latency
- Tool execution latency
- RAG query throughput
- Ensure AI systems consistently meet SLOs and performance targets.
- Validate AI system robustness against:
- Prompt injection attacks
- System prompt leakage
- Cross-tenant data access risks
- Unsafe tool execution
- PII and sensitive data exposure
- Build guardrail validation tests for:
- Safe refusal behavior
- Policy compliance
- Approval flows for state-mutating agent actions
- Collaborate with engineering teams to enhance AI observability:
- Tracing across agents and tool calls
- Prompt and tool execution logging
- Retrieval traceability logs
- Model output diffing
- Use monitoring and telemetry to detect regressions and report actionable issues.
- 4 to 5 years of experience as an SDET / QA Automation Engineer
- Strong programming skills in Python or Java
- Strong hands-on automation experience with:
- API testing frameworks (PyTest, Requests, RestAssured, etc.)
- UI testing frameworks (Playwright / Cypress / Selenium)
- CI/CD automation (GitHub Actions / Jenkins)
- Strong understanding of:
- Microservices and distributed systems testing
- Asynchronous workflows and queues
- Cloud-native architecture and reliability testing
- Proven ability to build test strategies for:
- Functional, regression, integration, contract, and performance testing
- Experience testing LLM-based systems, such as:
- AI assistants (multi-turn chat)
- RAG pipelines
- Agentic workflows (tool calling, orchestration)
- Strong understanding of common GenAI failure patterns:
- Hallucinations
- Prompt injection
- Retrieval failures
- Toxicity and unsafe responses
- Ability to create evaluation datasets and rubrics for AI correctness
Click here to read our International Applicant Privacy Notice.
LogicMonitor is an Equal Opportunity EmployerAt LogicMonitor, we believe that innovation thrives when every voice is heard and each individual is empowered to bring their unique perspective. We’re committed to creating a workplace where diversity is celebrated, and all employees feel inspired and supported to contribute their best.
For us, equal opportunity means fostering a truly inclusive culture where everyone has the chance to grow and succeed. We don’t just open doors; we invite you to step through and be part of something bigger. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Notice Regarding Use of AI in Hiring
We use artificial intelligence tools to assist with reviewing job applications, such as matching skills and experience to job requirements. These tools support, but do not replace, human review. All hiring decisions are made by our recruiting and hiring teams. You may opt out of AI processing at any time, and your application will still be reviewed. To opt out, please contact us at opt.out@logicmonitor.com.
By submitting your application, you acknowledge this notice.
Our goal is to ensure an accessible and inclusive experience for every candidate.
If you need a reasonable accommodation during the application or interview process under applicable local law, please submit a request via this Accommodation Request Form.
Know your rights: workplace discrimination is illegal. Please click here to review LogicMonitor’s U.S. Pay Transparency Nondiscrimination Provision.