What we scan, what we miss, and why we say so.
WCAGdesk is built on axe-core, the same engine inside Lighthouse, Chrome DevTools, and most enterprise accessibility platforms. Below is exactly what we do — and what no automated tool can do.
The engine
We render every target URL in headless Chromium (Playwright 1.50, Chromium 147), wait for DOM content and a brief network-idle window, then execute axe-core 4.11 with the wcag2a, wcag2aa, wcag21a, wcag21aa, and best-practice rule tags enabled.
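In code, that scan step looks roughly like the sketch below. It uses the @axe-core/playwright wrapper; the function name and the 5-second idle cap are illustrative assumptions, not our exact internals.

```typescript
// Minimal sketch of a single-page scan (illustrative names, not WCAGdesk internals).
import { chromium } from "playwright";
import { AxeBuilder } from "@axe-core/playwright";

async function scanPage(url: string) {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();

  // Wait for the DOM, then allow a brief network-idle window; the 5 s cap is an assumption.
  await page.goto(url, { waitUntil: "domcontentloaded" });
  await page.waitForLoadState("networkidle", { timeout: 5_000 }).catch(() => {});

  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa", "wcag21a", "wcag21aa", "best-practice"])
    .analyze();

  await browser.close();
  return results.violations;
}
```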
Each violation is normalized to: rule id, WCAG criterion (e.g. 1.4.3), impact level (critical · serious · moderate · minor), a CSS selector targeting the offending node, and the offending HTML snippet. We compute a weighted severity score per scan so regressions are visible at a glance.
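As a rough illustration, the normalized record and the score could look like this. The field names and weights below are assumptions for the example, not our published formula.

```typescript
// Illustrative shapes only; field names and weights are assumptions, not WCAGdesk's schema.
type Impact = "critical" | "serious" | "moderate" | "minor";

interface Finding {
  ruleId: string;     // e.g. "color-contrast"
  criterion: string;  // e.g. "1.4.3"
  impact: Impact;
  selector: string;   // CSS selector for the offending node
  html: string;       // offending HTML snippet
}

const WEIGHTS: Record<Impact, number> = { critical: 10, serious: 5, moderate: 2, minor: 1 };

// Weighted severity score for one scan: higher means worse, so regressions stand out.
function severityScore(findings: Finding[]): number {
  return findings.reduce((sum, f) => sum + WEIGHTS[f.impact], 0);
}
```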
What we catch
axe-core has roughly 90 rules covering WCAG 2.1 A and AA. Common issues it reliably detects:
- Missing alt text on images, buttons, and form inputs
- Insufficient color contrast for text and interactive elements
- Form fields without programmatic labels
- Heading hierarchy violations (h1 → h3 → h2)
- Missing or duplicate landmark regions
- ARIA misuse: invalid roles, missing required attributes
- Keyboard traps caused by tabindex misuse
- Page structure: html lang, title, and document outline
What we cannot catch
This is where overlay vendors mislead the market. Two numbers circulate, and both are correct. They answer different questions:
- ~30–40% of WCAG criteria can be tested by automation at all. This is the conservative figure that WebAIM, Pa11y, and most legal/compliance literature cite. It measures coverage of the WCAG checklist: out of ~50 success criteria for level AA, automated tools can fully evaluate roughly 17–20.
- ~57% of real issues by volume, per Deque's 2024 analysis of 13,000 pages and ~300,000 issues, were caught by axe-core. This measures how often the automatable criteria fire in practice, not how much of the standard they cover. Common issues like missing alt text, low contrast, and unlabeled inputs get detected reliably, which inflates the volume figure.
We cite the lower bound (30–40%) in our PDF reports and accessibility statements because it sets honest expectations for legal review. Either way, a meaningful residual needs human evaluation: roughly 43% of issues by volume (100% minus Deque's 57%) and 60–70% of the standard's criteria (the complement of the 30–40% coverage figure). That residual includes:
- Whether alt text is correct — we can detect that an image has alt; we cannot judge whether "image1.jpg" is a meaningful description.
- Keyboard interaction quality — we can spot trap candidates; we cannot verify that the focus order makes operational sense.
- Screen-reader experience — automated tools do not simulate VoiceOver, NVDA, or TalkBack reading flow.
- Cognitive accessibility — readability and predictability (WCAG guidelines 3.1 and 3.2) are largely human-judgment territory.
- Dynamic flows — multi-step checkouts, modal interactions, and authenticated areas are not traversed by our public-page crawler.
- Custom widgets — accessibility of bespoke components needs manual ARIA review.
For full WCAG conformance you still need an accessibility consultant or in-house expert. WCAGdesk is the continuous record-keeping layer underneath that work.
The crawler
Our paid plans crawl your shop on a schedule. We respect robots.txt and identify ourselves as WCAGdesk/0.1 (+https://wcagdesk.com/methodology). We start from /sitemap.xml when available and fall back to a same-origin breadth-first (BFS) crawl bounded by your plan's page cap (50 for Audit, 200 for Defense, 200 per site for Counsel).
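The fallback traversal is, in outline, a bounded breadth-first search. The sketch below captures its shape; the helper functions passed in are hypothetical stand-ins for the real sitemap, robots.txt, and link-extraction logic.

```typescript
// Simplified sketch of the same-origin BFS fallback; helpers are hypothetical stand-ins.
type UrlSource = (url: string) => Promise<string[]>;
type UrlPredicate = (url: string) => Promise<boolean>;

async function crawl(
  origin: string,
  pageCap: number,
  fetchSitemap: UrlSource,   // returns [] when /sitemap.xml is missing
  robotsAllows: UrlPredicate,
  extractLinks: UrlSource,
): Promise<string[]> {
  const seeds = await fetchSitemap(`${origin}/sitemap.xml`);
  const queue = seeds.length > 0 ? [...seeds] : [origin];
  const seen = new Set<string>();
  const pages: string[] = [];

  while (queue.length > 0 && pages.length < pageCap) {
    const url = queue.shift()!;
    if (seen.has(url) || !url.startsWith(origin)) continue; // same-origin only
    seen.add(url);
    if (!(await robotsAllows(url))) continue;               // honor robots.txt
    pages.push(url);
    queue.push(...(await extractLinks(url)));               // breadth-first expansion
  }
  return pages;
}
```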
We do not authenticate. We do not submit forms. We do not write to your site. This is deliberate: a deterministic, read-only crawl produces a record we can hand to a lawyer with a straight face.
The audit trail
Every scan run is stored: timestamp, page list, full violation set, severity score, and the axe-core engine version used. Defense and Counsel tiers retain history long enough to demonstrate continuous due diligence under EU enforcement timelines. The PDF export is a self-contained document signed by our system clock — not a screenshot of a dashboard.
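Conceptually, each stored run is a record along these lines. The field names are illustrative, not the actual schema; Finding refers to the normalized shape sketched earlier.

```typescript
// Illustrative record of one scan run; names are assumptions, not the actual schema.
interface ScanRun {
  scannedAt: string;       // ISO 8601 timestamp of the run
  axeVersion: string;      // axe-core engine version used, e.g. "4.11.0"
  pages: string[];         // every URL included in this run
  violations: Finding[];   // full violation set (see the Finding sketch above)
  severityScore: number;   // weighted score for at-a-glance regression checks
}
```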
The accessibility statement
Under EAA Article 4 and Member State implementations (BFSG in Germany, the EAA transposition in France, etc.), public-facing operators must publish an accessibility statement. The Defense and Counsel tiers generate this statement quarterly from the most recent scan, including: compliance status, last assessment date, known non-conformities, and contact for accessibility complaints. You can override the prefilled fields before publishing.
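The prefilled statement boils down to a handful of fields, roughly like the sketch below. Names and enum values are illustrative assumptions; every value can be overridden before you publish.

```typescript
// Illustrative statement fields; names and enum values are assumptions.
interface AccessibilityStatement {
  complianceStatus: "fully-compliant" | "partially-compliant" | "not-compliant";
  lastAssessmentDate: string;      // date of the most recent scan
  knownNonConformities: string[];  // open violations, summarized per WCAG criterion
  complaintsContact: string;       // contact channel for accessibility complaints
  publishedAt?: string;            // set when the operator publishes the statement
}
```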
What this is not
- It is not legal advice. We are not lawyers.
- It is not a guarantee of WCAG conformance.
- It is not an overlay or a widget. We do not modify your live site.
- It is not a replacement for manual audits or expert review.
It is a defensible, timestamped, automated audit trail — the artifact you wish you had when an Abmahnung (a German cease-and-desist letter) lands.