

Module 2.1: When Do You Need Audio Description? A Decision Framework

Audio description helps people who are blind or have low vision understand visual content that isn't explained in the audio track. But figuring out when you need it can feel overwhelming when you're managing dozens of videos each month.

This article gives you a practical framework for deciding when audio description is required, recommended, or optional for your government videos and meetings. You'll learn how to apply the "close your eyes test," understand WCAG requirements in context, and make confident decisions based on your specific content and audience.

By the end of this article, you'll be able to evaluate any video and determine the right level of audio description support—without second-guessing yourself or over-complicating simple decisions.

The close your eyes test

Here's the simplest way to know if you need audio description: close your eyes and listen to your video. Can you understand what's happening? If you miss important information with your eyes closed, you need audio description.

This test puts you in the position of someone who relies on sound alone. When you try this with a city council meeting, you'll quickly notice if someone's presenting slides without reading them aloud, or if commissioners are nodding instead of saying "yes" on the record.

Try this exercise right now with your last meeting recording. Pick a five-minute segment where someone presented something—a staff report, budget slides, or site plan. Close your eyes and listen. What information did you miss? Those gaps are exactly what audio description fills.

The close your eyes test works because it's experiential rather than theoretical. You don't need to memorize WCAG criteria or debate whether something counts as "essential visual information." You simply experience what someone who can't see the screen experiences. The answer becomes obvious.

This test has limitations—some visual details matter more than others, and WCAG requirements go beyond what feels obviously necessary. But it's an excellent starting point that will catch most audio description needs in your content.

What WCAG actually requires

WCAG 2.1 Level AA requires audio description for prerecorded video through Success Criterion 1.2.5.

The actual requirement

Success Criterion 1.2.5 states: "Audio description is provided for all prerecorded video content in synchronized media."

In plain language, if your video includes meaningful visual information that isn't explained in the audio, you need audio description. This applies to videos published on your website, embedded in web pages, or linked from your digital properties.

Why this requirement exists

People who can't see the screen should have access to the same information as people who can. When a planning commissioner points at a map and says "this area here," someone listening needs to know which area. When a chart displays revenue trends, someone who can't see the chart needs those numbers explained verbally. Audio description ensures visual information gets conveyed through audio channels.

Your two options for compliance

You can provide a separate audio description track that plays during natural pauses, or create a version where visual information is spoken as part of the main audio. Both satisfy the requirement.

The separate track approach works well for produced content where you can't re-record the original. The integrated approach works better for live presentations where speakers can describe what they're showing as they show it.
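On the web, the separate-track option corresponds to the HTML5 `track` element with `kind="descriptions"`, which is a recognized WCAG technique. The sketch below is illustrative: the file names are placeholders, and native player support for description tracks is uneven, so test in the video player your site actually uses.

```html
<!-- Sketch: attaching a text description track to a published video.
     File names are placeholders; player support for
     kind="descriptions" varies, so verify in your own player. -->
<video controls>
  <source src="council-meeting.mp4" type="video/mp4">
  <track kind="captions" src="council-meeting-captions.vtt"
         srclang="en" label="English captions">
  <track kind="descriptions" src="council-meeting-descriptions.vtt"
         srclang="en" label="Audio descriptions">
</video>
```

Where player support is limited, publishing a separately narrated version of the video alongside the original satisfies the same requirement.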

Live vs. archived content

Live meetings don't require audio description as they happen. But archived recordings become prerecorded content and need audio description if they contain important visual information. You can't reasonably expect staff to add audio description to a live meeting in real time, but once you archive that meeting for public access, it's subject to WCAG 2.1 Level AA requirements.

Good, better, best: your audio description options

Think of audio description as a spectrum of accessibility, not just a pass/fail checklist. This framework helps you prioritize your efforts and grow your accessibility practices over time.

Good: Meeting minimum requirements

At the baseline level, you're focused on legal compliance and avoiding risk. This means adding audio description only when visual information is essential and not explained in dialogue.

Focus your efforts on:

  • Videos published or archived publicly (your highest-risk content)

  • Content with visual information critical to understanding (site plans, budget charts, safety demonstrations)

  • Using the close your eyes test to identify obvious gaps

  • Meeting WCAG 2.1 Level AA requirements for prerecorded videos

At this level, you're building the foundation. You're learning what audio description is, establishing basic processes, and ensuring your most important public-facing content meets legal requirements. This is where most organizations start, and it's perfectly acceptable as long as you're honest about what you're not covering yet.

Better: Expanding access

As your capacity grows, you can move beyond strict compliance to provide better service to your community.

Consider adding audio description for:

  • Videos where visual context enhances understanding, even if not strictly required by WCAG

  • Internal training videos and staff resources (serving employees with vision disabilities)

  • Archived meeting content even when minimal visual aids were shown

  • Presentations where speakers summarized slides verbally but didn't provide full details

At this level, you're thinking about user experience rather than just checking boxes. You recognize that someone listening to a budget presentation benefits from hearing specific numbers, not just "revenue increased." You understand that speaker identification matters even when it's possible to follow the discussion without it.

The difference between "good" and "better" is often subtle, but it's the difference between accessibility as legal compliance and accessibility as service quality.

Best: Accessibility by default

At the highest level, audio description becomes part of your standard operating procedure rather than a special effort.

This looks like:

  • Training all presenters to describe visual materials as they present them

  • Building audio description planning into your video production workflow from the start

  • Making all archived video content accessible, regardless of technical requirements

  • Creating a culture where visual information is always verbalized

Organizations at this level don't debate whether specific content needs audio description. They've internalized that accessible content serves everyone better, so they make it accessible by default. Presenters automatically say "I'm showing a pie chart with public safety at 42%, infrastructure at 28%..." because that's standard practice. Video producers script audio descriptions into initial drafts, not as afterthoughts.

This level requires cultural change, not just procedural change. Accessibility becomes part of how you do business rather than something you remember to do.

Government scenarios: when you need audio description

Scenario 1: Planning commission presentation with site maps

A developer presents their proposal for a new apartment complex. They show aerial photos, site plans, and architectural renderings while discussing setbacks, parking ratios, and building heights.

Decision: Audio description required.

The visual information is essential to understanding the proposal. Someone listening without seeing the images wouldn't know where the buildings are positioned, how the parking is arranged, or how the development relates to neighboring properties. The presenter needs to describe what they're showing, or you need to add audio description in post-production.

Scenario 2: Budget presentation with charts and graphs

The finance director presents the annual budget using slides that show revenue trends, expense breakdowns by department, and five-year projections.

Decision: Audio description required if charts aren't read aloud.

Financial data shown visually must be explained verbally. If the finance director says "as you can see here, public safety represents 42% of general fund expenditures" while pointing at a pie chart, that works. But if they show a complex trend line without explaining the specific data points, someone listening can't access that information. The presenter should narrate the data, or you need audio description.

Scenario 3: Public safety demonstration video

Your fire department creates a video showing proper smoke alarm installation. The video demonstrates mounting height, placement distance from corners, and testing procedures.

Decision: Audio description required.

The visual demonstration is the primary content. Someone who can't see the video won't learn where to mount the smoke alarm or how to test it properly. The narrator needs to describe each step clearly: "Mount the alarm on the ceiling at least 4 inches from the wall, or on the wall 4 to 12 inches below the ceiling."

Scenario 4: City council meeting with no visual aids

Your city council discusses a proposed noise ordinance. Council members debate the text of the ordinance verbally. No slides or documents are shown. The video shows people sitting at a dais talking.

Decision: Audio description not required.

All the meaningful content is in the spoken discussion. Someone listening without watching gets the same information as someone watching the video. You don't need to describe what people are wearing or that they're sitting at a table—those details don't affect understanding of the policy discussion.

Scenario 5: Mayor's video statement with text overlays

The mayor records a video statement about upcoming road construction. The video includes text overlays showing which streets will be closed and when.

Decision: Audio description required.

The text overlays contain critical information that's only available visually. The mayor needs to read the street names and closure dates aloud, or you need to add audio description that does this. Showing text on screen doesn't make it accessible to people who rely on screen readers or who are listening without watching.

Scenario 6: Parks department trail camera footage

Your parks department shares trail camera footage showing wildlife in the nature preserve. A brief narration explains the camera location and date, but doesn't describe the animals shown.

Decision: Audio description required if intended as educational content.

If it's posted purely as general-interest content, describing which animals appear and what they're doing still makes it accessible and is worth doing. If the video's purpose is to document wildlife presence, someone listening needs to know "a white-tailed deer enters from the left at 6:47 AM" rather than just hearing ambient forest sounds.

Scenario 7: Time-lapse construction project video

Public works shares a time-lapse video showing construction of a new water treatment facility over 18 months, set to music with title cards showing project milestones.

Decision: Audio description required.

The visual progression is the entire point. Audio description should explain major construction phases and read milestone text aloud: "Foundation work begins. Workers pour concrete footings. Month 3: Steel framework rises for the treatment tanks."

Scenario 8: Employee training on document formatting

IT creates a training video showing staff how to create accessible PDF documents through screen recording.

Decision: Audio description required (through narration).

Instructional videos need verbal explanation of on-screen actions. The instructor should narrate: "Click Document Properties in the File menu. Enter a title in the Title field. This helps screen reader users identify the document." The narration itself provides the required audio description.

Common questions

Do we need audio description for every meeting video?

Not necessarily. If commissioners discuss issues entirely through conversation, audio description isn't required. But if staff presents slides or maps with information not spoken aloud, you need audio description for those portions.

What about videos that are mostly talking heads?

If all the information is in what they say, you don't need audio description. You don't need to describe what people look like unless those details matter to the content.

How do we add audio description to our existing content?

MediaScribe Narrate is a cloud-based audio description service that uses AI to analyze your pre-recorded video, generate description scripts, and produce a narrated accessible version. It works entirely in the cloud — no additional hardware is required. You upload your video, review and edit the AI-generated descriptions, and download the finished accessible file.

Interested in trying MediaScribe Narrate? Contact our team.

What if we didn't add audio description when we first published?

You can add audio description anytime. Many organizations prioritize their most-viewed content first, then work through older videos over time.

Making audio description part of your workflow

The difference between organizations that successfully implement audio description and those that struggle often comes down to workflow integration. When audio description is an afterthought, it becomes a burden. When it's part of your standard process, it becomes manageable.

Build it in from the start

The easiest time to add audio description is during content creation, not after publication.

For live meetings, train your presenters on visual description basics. Give them simple guidelines: whenever you show something visual, describe what it shows. When you display a map, identify the area. When you show a chart, state the key numbers. When you reference a document, read the relevant section aloud.

A single-page reference sheet covers most scenarios: "When showing slides, describe key visual elements. When pointing at things, say what you're pointing at. When nodding or shaking your head, say 'yes' or 'no' out loud."

Integrate review into your publishing process

Before publishing meeting recordings, someone should review sections where visual materials were shown. This doesn't mean watching every minute—just checking where slides, maps, or diagrams appeared.

Build a simple checklist:

  • Were slides or documents shown? Did the presenter describe them?

  • Were maps or diagrams displayed? Did anyone explain what they showed?

  • Did people point at things without identifying them verbally?

  • Were votes taken with hand raises instead of verbal responses?

This review takes 10-15 minutes for a typical two-hour meeting and catches most audio description issues before publication.

Plan ahead for produced content

For videos you're producing rather than simply recording, audio description should be part of your initial planning, not something you discover you need after production is complete.

When scripting a public safety video, write descriptions of visual content into the narration from the beginning. When planning a department update video, identify visual elements that will need verbal explanation and script those descriptions into your draft.

This approach costs nothing extra—you're writing descriptions during the scripting phase when changes are easy, rather than trying to retrofit them after production is complete and changes are expensive.

Use tools that support your workflow

MediaScribe Narrate can handle the technical work of generating and inserting descriptions once you've identified what needs describing. The system analyzes your video, detects dialogue gaps, generates descriptions, and produces accessible video files—all through cloud-based processing that doesn't require additional hardware or specialized staff.

But the tool works best when you feed it good source content. If your presenter described visual materials during the live presentation, the AI needs to add less. If your produced video was scripted with descriptions from the start, you're using the tool to polish rather than to rescue poorly planned content.

Tools support good processes—they don't replace them.

When audio description helps everyone

Audio description benefits more people than you might expect. This is the curb cut effect in action—features designed for specific needs help broader audiences.

Mobile and multitasking audiences

People listening to meeting recordings while driving can't watch slides or read on-screen text. Staff members reviewing footage while taking notes benefit from verbal descriptions. Parents catching up on city council decisions while making dinner need to hear what's being shown, not just what's being said.

Non-native speakers and people with cognitive disabilities

Verbal descriptions provide redundancy that helps with comprehension. When a budget chart is both shown and described, someone processing English as a second language gets two chances to understand. Someone with certain cognitive processing differences may find verbal information easier to understand than complex charts.

Your future archives

Video files outlast the websites they're embedded in. The descriptions you add today make your historical archives more useful decades from now. Future researchers, journalists, and community members reviewing old footage will appreciate being able to understand meetings without needing access to materials that have since been lost.

Professional quality

Content with good audio description simply feels more professional. It demonstrates that your organization pays attention to details and cares about serving everyone. When a reporter or concerned citizen reviews your recordings, accessible content suggests competence and care.


How MediaScribe supports audio description

MediaScribe Narrate's audio description system automates much of the technical work involved in creating accessible video. The system analyzes your video content, identifies meaningful visual elements, detects natural dialogue gaps, and generates descriptions that fit seamlessly into available time.
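To make the gap-detection step concrete, here is a simplified sketch of the idea, not MediaScribe's actual implementation. It assumes you already have speech segments with timestamps (for example, from a transcript) and simply finds silences long enough to hold a spoken description. The function name and the two-second minimum are assumptions for the example.

```python
# Simplified sketch of dialogue-gap detection: given speech segments
# as (start, end) pairs in seconds, find silent spans long enough to
# hold a spoken description. Illustrative only.

def find_description_gaps(speech_segments, video_length, min_gap=2.0):
    """Return (start, end) silent spans of at least min_gap seconds."""
    gaps = []
    cursor = 0.0
    for start, end in sorted(speech_segments):
        if start - cursor >= min_gap:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    # Check for a usable gap after the last speech segment.
    if video_length - cursor >= min_gap:
        gaps.append((cursor, video_length))
    return gaps

# Speech at 0-5s and 9-20s in a 30-second clip leaves room for
# narration at 5-9s and 20-30s.
print(find_description_gaps([(0.0, 5.0), (9.0, 20.0)], 30.0))
# → [(5.0, 9.0), (20.0, 30.0)]
```

A real system also has to match each description's spoken length to the gap it lands in, which is where automated timing-conflict resolution comes in.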

You upload your video file and give your project a name. MediaScribe handles the processing—generating descriptions, synthesizing professional-quality narration, resolving timing conflicts, and producing accessible video files through cloud-based processing that doesn't require additional hardware. You can add tags to organize your projects through the Actions menu.

The system includes review and editing tools so you can refine AI-generated descriptions before final publication. This lets you maintain quality control while dramatically reducing the time required to produce accessible content.

MediaScribe supports WCAG 2.1 Success Criterion 1.2.5 by making audio description practical for organizations with limited staff and tight budgets. While the tool handles the technical complexity, your organization still makes the decisions about what content needs describing and ensures descriptions accurately represent your visual materials.

This short video walks you through the MediaScribe Narrate workflow—from uploading your first video to downloading accessible content with audio descriptions. If you're a Cablecast customer, you'll also see how the integration automates the entire process.

New to MediaScribe Narrate? Contact our team to discuss how audio description can fit into your accessibility workflow.


Summary: Key takeaways

  • Use the close your eyes test as your first evaluation tool—if you miss important information with your eyes closed, you need audio description

  • WCAG 2.1 Level AA requires audio description for prerecorded video when visual information is essential to understanding

  • Think in terms of "good, better, best" rather than just pass/fail—start with minimum requirements and expand as capacity allows

  • Build audio description into your content creation process rather than treating it as post-production cleanup

  • Train presenters to describe visual materials as they show them—this is often easier than adding descriptions later

  • Audio description benefits many audiences beyond people with vision disabilities, including mobile users, multitaskers, and non-native speakers

  • MediaScribe's AI-powered tools can automate much of the technical work, but organizational decisions about what needs describing still require human judgment