03 // Proprietary Platform

Your entire test
operation.
One platform.

AQA is a full test management and execution platform — connecting your test repositories, managing execution environments, recording results, and applying AI to turn failures into filed tickets. Built and used in production by AQAnetics.

▶ REC
Every failed test recorded
AI
Failure analysis on every test
Jira
Failures grouped & tickets filed automatically

Platform in use
Dashboard
Command center
All Executions
Full run history
Reports
Trend analysis
Execution Detail
Per-test breakdown
AI Analysis
Failure intelligence
AQA Dashboard
Pinned profiles, live status, scheduled runs

The dashboard surfaces what matters right now — which profiles are executing, what's scheduled next, today's pass/fail ratio, and one-click access to your most-used test profiles.

Pinned profiles with instant execute button
Live execution status with real-time progress
Next scheduled runs at a glance
Daily statistics chart per environment
All Executions
Every run, fully searchable and filterable

The complete execution history — date, environment, profile, duration, pass/fail/skip counts — stored persistently and filterable by any dimension. Nothing gets lost.

Filter by date range, environment, bundle, profile, branch
Pass / fail / skip counts visible per run
Full history across months of production use
Drill into any run for full test-level detail
Reports
Quality trends across months of execution

Daily pie charts and a historical bar chart spanning your entire execution history. See at a glance whether quality is improving, degrading, or stable — and share reports with stakeholders who don't need to touch the platform.

Daily pass/fail breakdown per profile or bundle
Historical trend chart grouped by day
Exportable for non-technical stakeholders
Filter by date range and execution type
Execution Detail
Every test, every phase, every retry

Drill into any run and see each test's full lifecycle — setup, execution, teardown — with per-phase status, duration, retry count, and tag filters. In the run shown, distributed execution across 4 executors accelerated completion by 269%.

Before/test/after method phase breakdown
Retry tracking per test
Tag-based filtering (API, UI, CE, WMS, etc.)
Distributed execution with acceleration metrics
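The acceleration figure can be read as time saved relative to the distributed run. A minimal sketch, assuming the metric is (sequential − distributed) / distributed — the durations below are illustrative, not real AQA data:

```python
# Sketch: how a "269% acceleration" figure can arise from distributing
# a suite across 4 executors. Times are hypothetical examples.

def acceleration_pct(sequential_min: float, distributed_min: float) -> float:
    """Time saved, expressed relative to the distributed duration."""
    return (sequential_min - distributed_min) / distributed_min * 100

# e.g. a 59-minute sequential suite finishing in 16 minutes on 4 executors:
print(round(acceleration_pct(59, 16)))  # → 269
```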
AI Analysis
Stack traces explained in plain English

Every failed test gets an AI analysis panel — not just the error, but the cause, what the screenshot shows at the moment of failure, and a step-by-step reconstruction of exactly what the test did before it broke.

Actual error identified and explained in plain English
Root cause analysis — not just what failed, but why
Screenshot context at the moment of failure
Step-by-step test execution narrative for non-technical review
Video recording embedded alongside the analysis

How it works

Connect once.
Run everything from here.

01
Connect your repo

AQA parses your test repository, extracts individual tests, and displays them in the platform. No code changes required.

02
Build bundles & profiles

Group tests into Bundles. Add scheduling and CI/CD triggers to create Profiles. Mix and match for regression, smoke, or module-specific runs.

03
Execute

AQA spins up Docker containers with Selenium, distributes tests across executors, and runs them in parallel. Runs finish several times faster than sequential execution.
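The fan-out pattern in this step can be sketched with Python's standard thread pool standing in for AQA's Docker executors — test names and the work function here are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Hypothetical stand-in for a test that AQA would run inside a
# Selenium-equipped Docker container.
def run_test(name: str) -> tuple[str, str]:
    time.sleep(0.01)              # simulate test work
    return name, "passed"

tests = [f"test_{i}" for i in range(8)]

# Distribute the suite across 4 parallel "executors".
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_test, tests))

print(results["test_0"])  # → passed
```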

04
Review results

Full logs, video recording, and per-phase status for every test. AI analysis explains each failure in plain English with step-by-step context.

05
Tickets & reports

Failures are grouped by root cause and filed as Jira tickets automatically. Reports go to stakeholders. Everything is stored — runs, videos, trends.


Platform features

Everything in
one place.

AQA replaces a stack of disconnected tools — test management, execution infrastructure, reporting, and bug filing — with a single platform your whole team can use.

Test Parsing & Management

Connect your repository and AQA extracts every test individually. Browse, filter, and tag your full test library from the platform without touching code.

Bundles & Profiles

Group tests into Bundles for flexible reuse. Wrap Bundles in Profiles with scheduling and CI/CD configuration for fully automated execution pipelines.

Managed Execution Environments

AQA provisions Docker containers with Selenium automatically on execution. No infrastructure management. Distributed parallel execution cuts run times dramatically.

Video Recording & Full Logs

Every test execution is recorded. Full step-level logs alongside video — stored on S3, accessible from the platform, timestamped to the millisecond.

AI Failure Analysis

Failed tests get an AI-generated explanation: the actual error, the root cause, what the screenshot shows, and a plain-English narrative of every step the test took.

Automatic Jira Tickets

Failures are grouped by root cause and filed as structured Jira tickets with full context. No manual triage. Your backlog reflects real issues, not raw test counts.

AQA Trace — Own Reporting Backend

AQA Trace is our own test reporting server — built on Quarkus with PostgreSQL and WebSocket live updates, replacing third-party dependencies with a service we fully control.

Manual Test Case Management

Host manual test cases alongside automation — similar to TestRail but integrated. Non-technical team members can manage, review, and report from the same platform.

Automation Parity Checking

AQA compares written manual test steps to actual Selenium code using AI. When automation drifts from the spec, you know. No other platform has access to both sides simultaneously.


AI Analysis
From stack trace
to plain English.

When a test fails, AQA doesn't just surface the exception. The AI analysis panel explains what went wrong, reconstructs what the test was doing step by step, and describes what was visible on screen at the moment of failure. Non-technical team members can read, understand, and report bugs without touching a log file.

Actual error identified and translated from technical exception
Root cause reasoning — not just what, but why it failed
Screenshot context described at the moment of failure
Step-by-step reconstruction of the full test execution
Video synced to log timeline for full playback
AI Analysis
Actual Error: Element with text 'Save' not found after 10-second wait period (WaitTimeoutException)
Cause
The Save button was visible after page stabilization but never became interactive within the 10-second wait, so the test failed when it attempted the click.
Execution steps
01 Navigated to 'All business partners' page via sideMenuPage.goToPage()
02 Applied quick filter with business partner ID 00468505
03 Cleared the filter to reset search field before re-applying
04 Re-applied filter with business partner ID 00468505 successfully
05 Verified business partner existed in grid within 15-second timeout
06 Selected the business partner row (index 0) from the grid
07 Clicked 'New delivery customer' from toolbar under Customer group
08 Attempted to open Address tab — Save button not interactive ✗

AutoDoc — Unique to AQA

Automation that stays
in sync with the spec.

Most platforms either manage manual test cases or run automation. AQA does both — and because it has access to both the written manual steps and the actual Selenium code, it can compare them using AI.

When a developer updates automation code without updating the manual spec — or vice versa — AQA flags the divergence. You always know whether your automation is actually testing what it's supposed to test.

Manual test cases hosted alongside automation in one platform
AI compares written steps to Selenium code automatically
Divergence flagged when automation drifts from spec
No other platform has simultaneous access to both sides
Manual spec
Written in plain language

Test cases authored by QA engineers or business analysts in natural language — stored, versioned, and linked to their automation counterpart in AQA.

Selenium code
Parsed from your repository

AQA reads the actual test implementation — what the code does, step by step — and compares it to the written spec using AI to detect drift.

Result
Parity score per test case

Every test case gets a parity score. When code and spec diverge, AQA flags it before the divergence causes a missed defect or a false pass.
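A toy illustration of a parity score, using plain string similarity in place of AQA's AI comparison — the scoring method and step names below are assumptions for illustration, not how AQA computes it:

```python
from difflib import SequenceMatcher

def parity_score(manual_steps: list[str], code_steps: list[str]) -> float:
    """Similarity between the written spec and what the code does, 0..1."""
    spec = " ".join(manual_steps).lower()
    code = " ".join(code_steps).lower()
    return SequenceMatcher(None, spec, code).ratio()

manual = ["open login page", "enter credentials", "click save"]
code   = ["open login page", "enter credentials", "click submit"]

score = parity_score(manual, code)
print(score > 0.8)   # high similarity — but not 1.0, so drift is flagged
```

In practice a semantic comparison (as the text describes) would catch paraphrased steps that plain string matching misses; the threshold for flagging drift is a design choice.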

See AQA running
on your test suite.

We'll connect AQA to your repository and run your first execution. No setup required from your team — we handle the infrastructure.

Request a demo