Our expert team at Squee conducts a thorough evaluation to pinpoint where your site can perform better.
Rather than simply counting how many issues appear on a page, our model tries to answer a more practical question:
Based on what we can detect, how many visitors might struggle to use this page each month?
To do that, we combine:
The result is a range, not a single exact number. Accessibility impact is rarely precise, so we avoid false certainty.
We use results from two established accessibility testing engines:
These provide issue counts for things such as:
Automated tools are very good at spotting patterns, but they cannot understand context or real user experience. That’s why we use their output as signals, not absolute truth.
We also analyse the structure of the page itself. For example, we count:
This gives us important denominators.
For example, one image missing a text alternative matters more on a page with four images than on a page with forty.
This ratio-based approach helps keep the model fair.
Some of the biggest usability barriers cannot be reliably confirmed by automated tools.
We therefore include manual checks for:
If these fail, they carry significant weight. If they pass, they contribute nothing. If they are unknown, we apply a middle value until they’re verified.
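As a rough sketch, the pass/fail/unknown logic above might look like this. The weight values here are placeholders for illustration, not Squee's actual figures:

```python
# Illustrative sketch of the manual-check weighting described above.
# fail_weight and unknown_weight are hypothetical placeholder values.

def manual_check_weight(status: str, fail_weight: float = 1.0,
                        unknown_weight: float = 0.5) -> float:
    """Return the severity contribution of a single manual check."""
    if status == "fail":
        return fail_weight      # failures carry significant weight
    if status == "pass":
        return 0.0              # passing checks contribute nothing
    return unknown_weight       # unverified checks get a middle value

print(manual_check_weight("fail"))     # 1.0
print(manual_check_weight("pass"))     # 0.0
print(manual_check_weight("unknown"))  # 0.5
```

The middle value for unknowns means an unverified page is never scored as if it were perfect, but also never as if everything had failed.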
Instead of adding every issue together, we group them into real-world barrier types. This prevents double-counting and keeps the estimate meaningful.
This group covers issues that make text harder to read.
Signals include:
Severity increases as issues become more widespread across the page.
Because readability barriers affect a broad cross-section of users in real-world settings, this group carries one of the larger baseline ranges.
This group reflects barriers affecting screen readers and users relying on clear descriptive content.
For these, we calculate severity proportionally:
Missing items ÷ total relevant elements
So impact increases as the issue becomes more common across the page.
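A minimal sketch of that proportional calculation, assuming severity is clamped to the range 0–1:

```python
def proportional_severity(missing: int, total: int) -> float:
    """Severity = missing items / total relevant elements, clamped to [0, 1]."""
    if total <= 0:
        return 0.0  # no relevant elements on the page, so no barrier
    return min(missing / total, 1.0)

# Two images without alt text weigh more on a page with 4 images
# than on a page with 40:
print(proportional_severity(2, 4))   # 0.5
print(proportional_severity(2, 40))  # 0.05
```

Dividing by the total is what keeps large pages from being penalised simply for having more content.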
This group focuses on how easy it is to understand and move around the page.
These issues tend to affect a narrower but important group of users, so they are weighted accordingly.
This group looks at whether users can complete tasks on the page, such as filling in forms or interacting with menus.
This group is informed by headline UK data about dexterity and motor-related access needs, but it is not tied to a single condition. It represents a broad range of users who may rely on keyboard navigation or experience difficulty using a mouse.
If keyboard access fails or required labels are missing, severity is high because those issues can prevent task completion entirely.
If the page contains video or audio, we treat this as needing review.
Automated tools cannot reliably confirm:
So media presence carries a moderate severity until manually verified.
To ground the model in reality, we use well-established UK headline figures, including:
These figures are not used to label visitors. They help define broad baseline ranges for each barrier group.
Each barrier group has a baseline range:
These ranges are then scaled by the calculated severity for the page.
If a group has no issues, its severity is zero and it contributes nothing.
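A hedged illustration of that scaling step, using a made-up baseline range of 3–8% of visitors (not a real figure from the model):

```python
def scaled_range(baseline_low: float, baseline_high: float,
                 severity: float) -> tuple:
    """Scale a barrier group's baseline range by its severity (0 to 1)."""
    return (baseline_low * severity, baseline_high * severity)

# Hypothetical baseline of 3-8% of visitors, page severity 0.5:
print(scaled_range(0.03, 0.08, 0.5))  # (0.015, 0.04)

# A group with no issues (severity 0) contributes nothing:
print(scaled_range(0.03, 0.08, 0.0))  # (0.0, 0.0)
```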
People can be affected by more than one barrier type.
Rather than adding percentages together, we use a probability-based union calculation. This estimates the likelihood that someone is affected by at least one barrier group, without counting them twice.
To prevent extreme edge cases from producing unrealistic figures, we apply a conservative cap:
The total estimated affected share will not exceed 45% of monthly visitors.
This keeps the output responsible and avoids exaggerated results.
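The union and cap steps above can be sketched as follows, assuming barrier groups are treated as independent. The 45% cap comes from the text; the example shares are invented:

```python
def combined_share(group_shares, cap: float = 0.45) -> float:
    """Probability that a visitor is affected by at least one barrier group
    (probability-based union), then capped at 45% of monthly visitors."""
    unaffected = 1.0
    for p in group_shares:
        unaffected *= (1.0 - p)       # chance of avoiding every group
    return min(1.0 - unaffected, cap)

# Three groups at 10%, 5%, and 8% combine to less than their 23% sum,
# because some people fall into more than one group:
print(round(combined_share([0.10, 0.05, 0.08]), 4))  # 0.2134

# Extreme inputs are held at the conservative cap:
print(combined_share([0.30, 0.30]))  # 0.45
```

The union avoids double-counting the same visitor across groups, and the cap keeps pathological pages from producing implausible totals.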
This score is:
It is not:
This model will continue to evolve. As we introduce more manual verification and refine severity weightings, accuracy will improve.
If you have feedback on the approach or suggestions on how we could refine the algorithm, we welcome it.