
tests: Add Claude agents for API testing#934

Open
matejnesuta wants to merge 2 commits into guacsec:main from matejnesuta:api-agent

Conversation

matejnesuta (Contributor) commented Feb 25, 2026

Summary by Sourcery

Add Claude agent configurations to support automated generation, review, orchestration, and coverage analysis for Playwright-based API tests in Trustify UI.

New Features:

  • Introduce an API test reviewer agent configuration to enforce project standards, run linting, and score Playwright API tests.
  • Introduce an API test generator agent configuration to create Playwright API tests from the OpenAPI spec while respecting existing test structure.
  • Introduce an API test orchestrator agent configuration to coordinate iterative test generation and review workflows with quality gates.
  • Introduce an API coverage analyzer agent configuration to compare OpenAPI definitions against existing tests and highlight coverage gaps.

codecov bot commented Feb 25, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 64.14%. Comparing base (08dab12) to head (5b1feea).

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #934      +/-   ##
==========================================
- Coverage   65.07%   64.14%   -0.93%     
==========================================
  Files         195      195              
  Lines        3341     3341              
  Branches      753      753              
==========================================
- Hits         2174     2143      -31     
- Misses        868      908      +40     
+ Partials      299      290       -9     

☔ View full report in Codecov by Sentry.

sourcery-ai bot (Contributor) commented Feb 26, 2026

Reviewer's Guide

Adds four Claude agent configuration files that define an automated workflow for generating, reviewing, orchestrating, and analyzing Playwright API tests for Trustify UI, including detailed responsibilities, standards, and processes for each agent.

Sequence diagram for orchestrated API test generation and analysis

```mermaid
sequenceDiagram
  actor Developer
  participant Orchestrator as api-test-orchestrator
  participant Analyzer as api-coverage-analyzer
  participant Generator as api-test-generator
  participant Reviewer as api-test-reviewer
  participant Spec as OpenAPI_trustd_yaml
  participant Tests as Api_test_files

  Developer ->> Orchestrator: Define testing goal

  Orchestrator ->> Analyzer: Analyze API coverage
  Analyzer ->> Spec: Read OpenAPI spec
  Analyzer ->> Tests: Read existing API tests
  Analyzer -->> Orchestrator: Coverage report + prioritized gaps

  Orchestrator ->> Generator: Generate tests for top priority gaps
  Generator ->> Spec: Read OpenAPI spec
  Generator ->> Tests: Read existing tests for context
  Generator -->> Developer: Draft Playwright API tests

  Developer ->> Reviewer: Submit draft tests for review
  Reviewer ->> Spec: Cross check against OpenAPI spec
  Reviewer ->> Tests: Compare with existing patterns
  Reviewer -->> Developer: Review comments and improvements

  Developer ->> Tests: Integrate approved tests into suite
```

File-Level Changes

Change | Details | Files
Introduce an API test review agent with strict standards and workflow for Playwright API integration tests.
  • Define responsibilities for reviewing Playwright API tests including structure, reusability, linting, and verdict generation.
  • Specify eight detailed review standards covering structure, query params (URLSearchParams), assertions, error handling, code reusability, code quality, bugfix test conventions, and test independence.
  • Describe a step-by-step review workflow including running the linter, checking for code duplication across files, scoring tests, determining verdicts, and formatting reports for orchestrator and users.
.claude/agents/api-test-reviewer.md
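As a rough sketch of the URLSearchParams standard the reviewer enforces (build query strings with URLSearchParams rather than manual concatenation), the pattern might look like this. The helper name, endpoint path, and parameter names are illustrative, not taken from the Trustify API or the agent files:

```typescript
// Hedged sketch of the URLSearchParams query-param standard.
// The path and parameter names below are hypothetical examples.
function buildQuery(path: string, query: Record<string, string>): string {
  // URLSearchParams handles encoding and joining, avoiding hand-built strings
  const params = new URLSearchParams(query);
  return `${path}?${params.toString()}`;
}

// Example: buildQuery("/api/v2/advisory", { limit: "10", sort: "severity" })
// yields "/api/v2/advisory?limit=10&sort=severity".
```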
Introduce an API test generator agent that creates Playwright API tests from the OpenAPI spec while respecting existing code patterns.
  • Define generator responsibilities including parsing the OpenAPI spec, generating tests, reusing datasets, running tests, and iterating on reviewer feedback.
  • Document core test patterns for GET/POST requests, query parameter handling with URLSearchParams, negative tests, grouping with describe, and TypeScript/code-quality expectations.
  • Specify a generation workflow that reads OpenAPI, detects existing tests, preserves file structure, reuses datasets, runs tests, reports results, and applies reviewer feedback without modifying existing tests.
  • Add detailed guidance for bugfix/regression tests, batch/domain-based generation, parameter boundary testing, and different input modes.
.claude/agents/api-test-generator.md
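To illustrate the "parameter boundary testing" guidance, a generator might derive candidate values from an OpenAPI-style numeric schema as below. The helper and the schema shape are assumptions for illustration, not code from the PR:

```typescript
// Hypothetical sketch: derive boundary test values for a numeric query
// parameter from an OpenAPI-like schema (minimum/maximum constraints).
interface NumericSchema {
  minimum?: number;
  maximum?: number;
}

function boundaryValues(s: NumericSchema): number[] {
  const vals: number[] = [];
  if (s.minimum !== undefined) vals.push(s.minimum - 1, s.minimum); // just below / at lower bound
  if (s.maximum !== undefined) vals.push(s.maximum, s.maximum + 1); // at / just above upper bound
  return vals;
}

// boundaryValues({ minimum: 0, maximum: 100 }) returns [-1, 0, 100, 101]:
// two values that should be rejected and two that should be accepted.
```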
Introduce an API test orchestrator agent that coordinates generator and reviewer agents through up to three iterations.
  • Define orchestrator mission and state model for tracking iterations, endpoints, status, and history.
  • Describe the generation-review loop including launching generator and reviewer agents, parsing review output (verdict, score, linter status, issues), and decision logic for iterating or stopping.
  • Specify feedback formatting for subsequent iterations, comprehensive final reporting (including histories, outstanding issues, and next steps), and behavior in bulk-generation scenarios.
  • Outline error handling strategies for generator/reviewer failures, test execution failures, and linter failures, including how they affect orchestration flow.
.claude/agents/api-test-orchestrator.md
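The orchestrator's iteration/quality-gate logic described above could be sketched as follows. The type names and score field are assumptions; only the three-iteration cap is taken from the PR description:

```typescript
// Sketch of the orchestrator's generate-review loop decision:
// keep iterating until the reviewer approves or the budget is spent.
type Verdict = "APPROVED" | "NEEDS_WORK" | "REJECTED";

interface OrchestratorState {
  iteration: number;
  maxIterations: number; // the PR describes up to three iterations
  endpoint: string;
  history: { verdict: Verdict; score: number }[];
}

function shouldIterate(state: OrchestratorState, verdict: Verdict): boolean {
  // Stop on approval, or once the iteration budget is exhausted.
  return verdict !== "APPROVED" && state.iteration < state.maxIterations;
}
```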
Introduce an API coverage analyzer agent that compares OpenAPI spec to existing tests to find coverage gaps.
  • Define responsibilities for analyzing endpoint coverage, parameter coverage, edge cases, and negative testing.
  • Describe a workflow for parsing the OpenAPI spec, scanning existing API tests, calculating per-endpoint coverage metrics, and prioritizing gaps by severity.
  • Provide a detailed coverage report format with summary statistics, per-endpoint analysis, prioritized recommendations, and suggested follow-up commands.
  • Document multiple analysis modes (full, endpoint-specific, summary, gap list) and success criteria for coverage analysis.
.claude/agents/api-coverage-analyzer.md
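The analyzer's core comparison, spec operations minus operations exercised by existing tests, reduces to a set difference. This minimal sketch assumes operations are keyed as "METHOD path" strings, which is an illustrative choice, not necessarily the agent's format:

```typescript
// Hypothetical sketch of the coverage-gap computation: endpoints defined in
// the OpenAPI spec that no existing test exercises are reported as gaps.
function coverageGaps(specOps: string[], testedOps: string[]): string[] {
  const tested = new Set(testedOps);
  return specOps.filter((op) => !tested.has(op));
}

// Example: given spec operations ["GET /advisory", "POST /advisory", "GET /sbom"]
// and tests covering only "GET /advisory", the gaps are the other two.
```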

Possibly linked issues

  • #claude: Issue asks to evaluate/decide on Playwright test agents; PR implements Claude-based API test agents and orchestration accordingly.


sourcery-ai bot left a comment


Hey - I've left some high-level feedback:

  • There’s a lot of duplicated guidance between the generator, reviewer, orchestrator, and coverage analyzer (e.g., URLSearchParams usage, Jira bugfix format, file locations); consider extracting shared standards into a single short reference section that each agent links to so future updates don’t get out of sync.
  • Some of the embedded code examples (e.g., const fs = require('fs');, specific npm commands, and relative paths) assume particular module systems and project scripts—please align these snippets with the actual conventions and scripts used in the existing e2e tests so the agents don’t suggest patterns that won’t compile or run in this repo.

