Your Automated Tools Miss 70% of What I Find

By Akhilesh Malani · 8 min read

Every few weeks, someone sends me an accessibility report generated by an automated tool and asks, "We're clean, right?" The report shows a handful of colour contrast issues, some missing alt text, maybe a few form labels. Everything else is green.

Then I open the same page with my screen reader.

Within five minutes, I've found issues the tool never flagged. A modal that traps my focus. A dropdown that announces nothing when I select an option. A form that submits silently with no confirmation — I have no idea if it worked or failed. These aren't edge cases. They're the everyday experience of someone who actually depends on assistive technology.

The Numbers Don't Lie

Multiple studies, including research from the UK Government Digital Service and accessibility organisations like Deque and WebAIM, consistently show that automated tools can only detect between 30% and 40% of WCAG conformance issues. That means 60% to 70% of real accessibility barriers go completely undetected.

Think about that for a moment. If you're relying solely on automated scans, you're catching less than half the problems. And the ones you're missing? They're usually the ones that matter most to real users.

What Automated Tools Are Good At

Let me be clear — I'm not saying automated tools are useless. They're excellent for catching certain categories of issues quickly and at scale:

  • Missing alt attributes on images (though they can't judge if the alt text is actually meaningful)
  • Colour contrast ratios that fall below WCAG thresholds
  • Missing form labels where an input has no associated label element
  • Duplicate IDs in the HTML
  • Missing document language declaration
  • Empty headings or buttons with no accessible name

These are important. Running axe, WAVE, or Lighthouse as part of your CI/CD pipeline is a good practice. I recommend it. But it's the starting point, not the finish line.
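If you want a concrete starting point, here is a minimal sketch of that kind of pipeline check, assuming Playwright with the @axe-core/playwright package; the URL and rule tags are placeholders rather than my exact setup:

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no detectable WCAG A/AA violations', async ({ page }) => {
  // Placeholder URL: point this at the page under test.
  await page.goto('https://example.com/');

  // Run axe-core against the rendered page, limited to WCAG 2.x A and AA rules.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa'])
    .analyze();

  // Fail the build if axe reports any violations it is able to detect.
  expect(results.violations).toEqual([]);
});
```

A green run here only covers the machine-checkable subset; everything in the next section still needs a human.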

What They Miss — and Why It Matters

1. Keyboard Navigation and Focus Management

An automated tool can check if an element is focusable. It cannot tell you whether the focus order makes sense. I've tested applications where pressing Tab takes me from the header straight to the footer, skipping the entire main content. I've encountered modals where focus escapes to the page behind, leaving me interacting with content I can't even see on screen.

No tool catches this. It requires someone to actually navigate the page, in sequence, and assess whether the experience is logical.
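To show what that focus handling involves, here is a rough sketch of a dialog that keeps focus inside itself and hands it back on close; the element IDs and the list of focusable selectors are assumptions for illustration:

```ts
// Placeholder IDs: a trigger button and a dialog container already in the markup.
const trigger = document.querySelector<HTMLButtonElement>('#open-settings')!;
const dialog = document.querySelector<HTMLElement>('#settings-dialog')!;
const FOCUSABLE = 'button, [href], input, select, textarea';

let previouslyFocused: HTMLElement | null = null;

function openDialog(): void {
  previouslyFocused = document.activeElement as HTMLElement | null;
  dialog.hidden = false;
  // Move focus into the dialog so keyboard and screen reader users follow it.
  dialog.querySelector<HTMLElement>(FOCUSABLE)?.focus();
}

function closeDialog(): void {
  dialog.hidden = true;
  // Hand focus back to the element that opened the dialog, not the top of the page.
  previouslyFocused?.focus();
}

trigger.addEventListener('click', openDialog);

dialog.addEventListener('keydown', (event) => {
  if (event.key === 'Escape') {
    closeDialog();
    return;
  }
  if (event.key !== 'Tab') return;

  // Keep Tab and Shift+Tab cycling inside the dialog instead of escaping
  // to the page behind it.
  const focusable = dialog.querySelectorAll<HTMLElement>(FOCUSABLE);
  if (focusable.length === 0) return;
  const first = focusable[0];
  const last = focusable[focusable.length - 1];

  if (event.shiftKey && document.activeElement === first) {
    event.preventDefault();
    last.focus();
  } else if (!event.shiftKey && document.activeElement === last) {
    event.preventDefault();
    first.focus();
  }
});
```

Even with code like this in place, whether the resulting focus order makes sense is still a judgement no scanner makes.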

2. Screen Reader Announcements

This is where the gap is widest. Automated tools check the DOM structure. They don't listen to what gets announced. I regularly find:

  • Buttons that announce as "clickable" with no label — I have no idea what they do (a minimal fix is sketched after this list)
  • Custom dropdowns that announce "blank" when I open them
  • Tables with data that reads as a flat list of disconnected text, because row and column headers are missing or incorrectly associated
  • Dynamic content that updates visually but announces nothing — I'm left wondering if the page has changed at all
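Taking the first of those as an example, the fix is often a single accessible name; the selector and label here are placeholders:

```ts
// Placeholder selector: an icon-only button with no visible text and no label.
// Many screen readers announce it as just "button" or "clickable", which tells me nothing.
const iconButton = document.querySelector<HTMLButtonElement>('button.search-toggle')!;

// One accessible name fixes the announcement without changing the visuals.
iconButton.setAttribute('aria-label', 'Search');

// A scanner can flag that the name was missing; it cannot tell you whether
// "Search" is the right word for what the button actually does.
```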

3. Meaningful Alt Text

A tool can tell you an image has alt text. It cannot tell you if that alt text is useful. I've seen alt text like "image1.jpg", "banner", "photo", or my personal favourite: "image of an image." An automated scan marks all of these as passing. For me, they're all failures.

4. Error Handling and Form Flows

Forms are where accessibility breaks down most often. An automated tool checks if labels exist. It doesn't test what happens when I submit a form with errors. Does the error message appear? Is it announced? Does focus move to the error so I know what to fix? Or does the page just sit there silently, leaving me to guess?

I've tested banking applications where I filled out an entire form, submitted it, and received no feedback at all. The errors were displayed visually in red text. My screen reader said nothing.
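For contrast, here is roughly what an accessible failure path can look like in code; the form, error container, and validation helper are assumptions for illustration, not any real bank's implementation:

```ts
// Placeholder form and error container; validateForm is an assumed helper.
const form = document.querySelector<HTMLFormElement>('#payment-form')!;
const errorSummary = document.querySelector<HTMLElement>('#error-summary')!;

function validateForm(target: HTMLFormElement): string[] {
  // Stand-in for real validation; returns one message per problem found.
  return target.checkValidity() ? [] : ['Check the highlighted fields and try again.'];
}

form.addEventListener('submit', (event) => {
  const errors = validateForm(form);
  if (errors.length === 0) return;

  event.preventDefault();

  // Show all errors in one place...
  errorSummary.innerHTML =
    '<h2>There is a problem</h2><ul>' +
    errors.map((message) => `<li>${message}</li>`).join('') +
    '</ul>';

  // ...then move focus to that summary so the failure is announced immediately,
  // instead of leaving the page silent while red text appears elsewhere.
  errorSummary.setAttribute('tabindex', '-1');
  errorSummary.focus();
});
```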

5. ARIA Live Regions

ARIA live regions (aria-live="polite" or aria-live="assertive") are meant to announce dynamic content changes to screen reader users. When used correctly, they're essential. When missing or misconfigured, I simply don't know that something has changed on the page.

Automated tools can check if a live region exists in the markup. They cannot test whether it actually announces at the right time, with the right content, and without interrupting my current task.
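As a sketch of the working case, the pattern is usually an empty region that exists in the initial markup and has its text swapped in later; the element id and message here are made up:

```ts
// Placeholder: assumes <div id="cart-status" aria-live="polite"></div> is present
// in the initial HTML, because live regions injected later are often not announced.
const status = document.querySelector<HTMLElement>('#cart-status')!;

function announce(message: string): void {
  // Swapping the text content of an existing polite live region is what triggers
  // the announcement; "polite" waits until the screen reader finishes speaking.
  status.textContent = message;
}

announce('Item added to basket. 3 items in basket.');
```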

6. Custom Components

Modern web applications are full of custom components: date pickers, autocomplete fields, drag-and-drop interfaces, accordions, tab panels. Each of these requires specific keyboard interaction patterns and ARIA roles to work correctly with screen readers.

Automated tools have no way to evaluate these. They can't press Arrow Down in a custom listbox to see if it moves to the next option. They can't check if Escape closes a modal and returns focus to the trigger. Only a human — preferably one who uses these patterns daily — can assess this.
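To give a sense of what that human is checking, here is a bare-bones sketch of the arrow-key handling a custom listbox needs, following the ARIA listbox pattern; the selectors, and the assumption that every option has an id, are mine for illustration:

```ts
// Placeholder selectors: role="listbox" on the container, role="option" on each
// choice, and an id on every option.
const listbox = document.querySelector<HTMLElement>('[role="listbox"]')!;
const options = Array.from(listbox.querySelectorAll<HTMLElement>('[role="option"]'));

let activeIndex = 0;

function setActive(index: number): void {
  if (options.length === 0) return;
  activeIndex = Math.max(0, Math.min(options.length - 1, index));

  // aria-activedescendant tells the screen reader which option is current
  // while keyboard focus stays on the listbox itself.
  listbox.setAttribute('aria-activedescendant', options[activeIndex].id);
  options.forEach((option, i) => {
    option.setAttribute('aria-selected', String(i === activeIndex));
  });
}

listbox.addEventListener('keydown', (event) => {
  if (event.key === 'ArrowDown') {
    event.preventDefault();
    setActive(activeIndex + 1);
  } else if (event.key === 'ArrowUp') {
    event.preventDefault();
    setActive(activeIndex - 1);
  }
});
```

Whether those announcements actually make sense in NVDA or VoiceOver is something only a session with the screen reader will tell you.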

The Real Cost of Relying Only on Automation

When organisations treat automated scans as a complete accessibility assessment, they end up with a false sense of compliance. The report shows a 95% pass rate. The reality is that a screen reader user still can't complete basic tasks on the site.

In my experience, across more than 250 digital products I've evaluated, I have never once seen an automated tool catch every critical issue. Not once. The tools that matter most — the ones that find the barriers real people face — are a screen reader, a keyboard, and someone who knows how to use them.

What I Recommend

Here's the approach I use with the organisations I work with:

  1. Automate what you can. Run axe or similar tools in your CI/CD pipeline. Catch the low-hanging fruit early and consistently.
  2. Test with a keyboard. Unplug your mouse and try to complete every task. If you can't, neither can I.
  3. Test with a screen reader. At minimum, test with NVDA (free) on Windows and VoiceOver on Mac/iOS. If you're targeting Android, add TalkBack.
  4. Include people with disabilities in your testing. Not as an afterthought, but as a regular part of your QA cycle. We catch things no one else does — because we live it.
  5. Don't stop at the report. An audit is a snapshot. Accessibility is ongoing. Build it into your process, not just your checklist.

The Bottom Line

Automated tools are a useful first pass. They are not an accessibility strategy. If your entire approach to accessibility is running a scan and fixing what it flags, you're leaving the majority of your users' barriers untouched.

The 70% that automated tools miss? That's not a number I made up. That's what I find, consistently, every time I sit down with a screen reader and a website that "passed" its automated audit.

If you want to know what your automated tools are missing, let's talk.