VPAT / ACR Review Guide

A practical guide to understanding VPATs/ACRs: what a VPAT is, which sections to expect, how to spot red flags, what to ask vendors after review, what a strong VPAT looks like, and how to assess accuracy quickly.

Quick mindset: Treat the VPAT as a structured claim that must be supported by scope, evaluation methods, and meaningful remarks.
What a VPAT gives you: a standardized view of accessibility support and limitations across a defined scope.
What to verify first: scope, report date, testing methods, AT/browsers, and the quality of “Remarks and Explanations”.

1) What is a VPAT?

A VPAT (Voluntary Product Accessibility Template) is a standardized format used to document how a product supports accessibility requirements. When completed as an ACR (Accessibility Conformance Report), it describes conformance against one or more standards (commonly Section 508 and WCAG) using the VPAT structure.

Important: A VPAT is not an audit report. An accurate VPAT usually depends on real testing (manual + AT + some automation).
Common risk: A VPAT written without testing becomes a guess—often “Supports” everywhere with vague remarks.

VPATs can apply to many product types: web apps, mobile apps, software, documents, hardware, and support documentation. The key is that the VPAT should match the actual features and user workflows the product provides.


2) Sections of a VPAT

A VPAT/ACR typically includes the following core sections (wording varies slightly depending on VPAT version and edition).

Cover / Report Info
  • Title: “Accessibility Conformance Report (ACR)”
  • Product name, version, and platform(s)
  • Report date
  • Vendor contact information
Product Description + Notes
  • Clear description of the product and its main workflows
  • Any limitations, exclusions, assumptions
  • Dependencies (browser, OS, assistive tech, integrations)
Evaluation Methods Used
  • Manual testing approach
  • Assistive technologies used (e.g., NVDA, JAWS, VoiceOver)
  • Browsers / OS / devices tested
  • Automated tools (optional, but should not be the only method)
Terms (the standard conformance levels used in the tables)
  • Supports
  • Partially Supports
  • Does Not Support
  • Not Applicable
  • Not Evaluated (permitted only for WCAG Level AAA criteria)
Conformance Tables
  • Tables for each applicable standard/guideline
  • Each row includes: Criterion + Conformance Level + Remarks/Explanations
  • Remarks should explain how the criterion is supported, or what fails and the user impact (see the sketch after this list)
Additional Tables (if applicable)
  • Software (desktop), support documentation, or hardware tables
  • Functional performance statements
  • Exceptions / alternate versions
Rule of thumb: If the “Evaluation Methods Used” section is weak or missing, confidence in the VPAT drops sharply.
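To make the conformance-table structure concrete, here is a minimal sketch (TypeScript; all names are illustrative, not part of any VPAT tooling) of one table row treated as data:

  // Hypothetical shape of one ACR conformance-table row.
  type ConformanceLevel =
    | "Supports"
    | "Partially Supports"
    | "Does Not Support"
    | "Not Applicable"
    | "Not Evaluated"; // permitted only for WCAG Level AAA criteria

  interface AcrRow {
    criterion: string;       // e.g. "2.1.1 Keyboard"
    level: ConformanceLevel; // the vendor's claim
    remarks: string;         // how it is supported, or what fails and the user impact
  }

  // A credible row pairs the claim with a specific, testable remark.
  const example: AcrRow = {
    criterion: "2.1.1 Keyboard",
    level: "Partially Supports",
    remarks: "Navigation and forms are keyboard operable; drag-and-drop reordering requires a mouse. Workaround: up/down buttons in list view.",
  };

Reading the tables as rows like this also makes the quality checks in section 6 mechanical.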

3) How to spot red flags in a VPAT

These red flags suggest the VPAT may be incomplete, outdated, or not based on thorough testing.

High-signal red flags
  • Blank rows or missing “Remarks and Explanations”
  • “All Supports” with minimal or generic remarks
  • “Partially Supports” but no user impact explanation
  • Wrong terminology (“Pass/Fail” instead of VPAT language)
  • Outdated report date (e.g., very old compared to product releases)
Common credibility gaps
  • Evaluation methods list only automation (no manual + no AT)
  • Assistive technologies not named (e.g., “screen readers were used”)
  • Browsers/devices not listed
  • Scope unclear (which features, which platforms, which workflows)
  • Too many “Not Applicable” rows for features the product clearly has
Fast test: Pick 4–6 high-impact criteria (keyboard, labels, name/role/value, focus visible, errors, reflow) and check whether the VPAT’s remarks match what you would expect in the real UI; a sample checklist follows below.
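A hedged sketch of that spot-check list as data (the WCAG 2.1 success criterion numbers are standard; the structure and check wording are illustrative):

  // High-impact WCAG 2.1 success criteria for fast VPAT spot checks.
  const spotChecks = [
    { sc: "2.1.1",  name: "Keyboard",               check: "Operate key workflows without a mouse" },
    { sc: "3.3.2",  name: "Labels or Instructions", check: "Every input has a clear, programmatic label" },
    { sc: "4.1.2",  name: "Name, Role, Value",      check: "Custom widgets expose name/role/state to AT" },
    { sc: "2.4.7",  name: "Focus Visible",          check: "A visible focus indicator at every stop" },
    { sc: "3.3.1",  name: "Error Identification",   check: "Errors are identified and described in text" },
    { sc: "1.4.10", name: "Reflow",                 check: "No horizontal scroll at 400% zoom / 320 CSS px width" },
  ];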

4) What do you ask a vendor after a VPAT review?

Use vendor questions to clarify scope, confirm testing credibility, and resolve inconsistencies.

Scope + Coverage
  • Which platforms are covered (web, iOS, Android, desktop)?
  • Which key workflows are included/excluded (login, forms, upload, approvals, reporting)?
  • Does the report cover support documentation and help content?
Testing Methods
  • Which assistive technologies were used (NVDA, JAWS, VoiceOver, TalkBack)? Versions?
  • Which browsers and OS versions were tested?
  • What manual test scripts or scenarios were used?
  • If automation was used, can you share tool outputs (high level)?
Inconsistencies / Red Flags
  • Why are there many “Supports” with no explanation?
  • Why is a requirement marked “Not Applicable” if the feature exists?
  • Why is the VPAT dated X months ago if the product has been updated since?
  • Who authored the VPAT (internal vs external) and what was their approach?
Remediation + Roadmap
  • For “Partially Supports/Does Not Support”, what is the remediation plan?
  • Target timelines for fixes and re-testing?
  • Do you have an accessibility statement, roadmap, or release notes for accessibility?
Good sign: The vendor’s answers are specific (AT versions, browsers, workflows tested) and they can explain limitations without dodging details.

5) What does a good VPAT look like?

Strong VPAT characteristics
  • Clear product description with included platforms and key workflows
  • Evaluation methods include manual testing + named AT + browsers/OS
  • Every row has meaningful “Remarks and Explanations”
  • “Partially Supports” explains what works vs what doesn’t + user impact
  • Uses standard VPAT terms (Supports / Partially Supports / Does Not Support / Not Applicable)
Credibility indicators
  • Report is recent and aligns with product release cadence
  • Known tough areas (keyboard, forms, focus, errors) have detailed remarks
  • Any exceptions or limitations are transparent (not hidden)
  • Vendor can answer follow-up questions without contradictions
Example remark style (good):
"Partially Supports — Keyboard access is available for all primary navigation and forms.
However, the drag-and-drop reordering in the approval workflow requires mouse input.
Workaround: use up/down buttons in the list view. Fix planned in Q3 release."

6) How do you assess the accuracy of a VPAT?

You can quickly assess accuracy by combining (A) a structured read-through, (B) basic consistency checks, and (C) a few fast “spot tests”.

Step 1 — Read for scope + credibility
  • Does the VPAT accurately describe the product and essential features?
  • Are any major workflows excluded (login, checkout, approvals, reporting)?
  • Are evaluation methods specific (AT, browsers, manual steps)?
Step 2 — Table quality checks (a scripted check follows this list)
  • Are there blank “Remarks and Explanations” cells?
  • Do “Supports” rows explain how it’s supported (not just “Supported”)?
  • Are “Not Applicable” rows truly not applicable?
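These checks are mechanical enough to script. A minimal sketch, assuming the conformance table has been exported to a CSV with hypothetical criterion, level, and remarks columns (real exports vary, and production use needs a proper CSV parser):

  // Flag weak rows in a VPAT table exported to "acr-table.csv" (hypothetical file).
  import { readFileSync } from "node:fs";

  const lines = readFileSync("acr-table.csv", "utf8").trim().split("\n").slice(1);

  for (const line of lines) {
    // Naive split; remarks containing commas need a real CSV parser.
    const [criterion, level, ...rest] = line.split(",");
    const remarks = rest.join(",").trim();

    if (!remarks) {
      console.log(`BLANK REMARKS: ${criterion} (${level})`);
    } else if (level === "Supports" && remarks.length < 20) {
      console.log(`GENERIC "Supports" REMARK: ${criterion}: "${remarks}"`);
    }
  }

The 20-character threshold is an arbitrary heuristic; tune it to your data.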
Step 3 — Quick spot tests (high value; a test sketch follows this list)
  • Keyboard: can you reach everything and operate key actions?
  • Focus: is focus visible and not lost/hidden?
  • Forms: do labels and errors work with screen readers?
  • Reflow: does content avoid horizontal scroll at high zoom?
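Part of these spot tests can be automated. A sketch using Playwright with @axe-core/playwright (both real, installable packages; the URL is a placeholder), keeping in mind that an axe scan covers only a subset of WCAG and never replaces manual and AT testing:

  import { test, expect } from "@playwright/test";
  import AxeBuilder from "@axe-core/playwright";

  test("quick accessibility spot check", async ({ page }) => {
    await page.goto("https://example.com/app"); // placeholder URL

    // Keyboard: move focus with Tab and confirm the focused element is visible.
    await page.keyboard.press("Tab");
    await expect(page.locator(":focus")).toBeVisible();

    // Automated scan: catches a subset of issues (missing labels, roles, contrast).
    const results = await new AxeBuilder({ page }).analyze();
    expect(results.violations).toEqual([]);
  });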
Step 4 — Validate against the VPAT (a cross-check sketch follows this list)
  • If your spot test finds an issue, does the VPAT mention it in the relevant rows?
  • If not, that’s a strong indicator the VPAT is inaccurate or incomplete.
  • Follow up with the vendor for clarification and updated evidence.
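A minimal sketch of this cross-check, reusing the illustrative row shape from section 2 (the matching logic is an assumption, not a rule):

  interface Finding { sc: string; note: string }  // e.g. { sc: "2.1.1", note: "modal traps focus" }
  interface Row { criterion: string; level: string; remarks: string }

  // Does the VPAT acknowledge the issue a spot test found?
  function vpatAcknowledges(finding: Finding, rows: Row[]): boolean {
    const row = rows.find((r) => r.criterion.startsWith(finding.sc));
    if (!row) return false;          // criterion missing entirely is itself a gap
    return row.level !== "Supports"; // a bare "Supports" contradicts the finding
  }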
Simple decision rule: A VPAT is “usable” when the scope matches your needs, evaluation methods are credible, and remarks clearly describe user impact for anything not fully supported.