Museum audio guides are designed for a brain that sits still, processes information at a steady rate, tolerates unpredictable sensory input, and reads standard fonts without difficulty. That brain describes some visitors. It does not describe all of them.
Roughly one in five people is neurodiverse. Autism, ADHD, dyslexia, sensory processing differences: these aren't edge cases. In a museum that sees 200,000 visitors a year, 30,000 to 40,000 of them experience the world in ways that most audio guides simply don't account for. They get the same fixed-pace narration, the same wall of text, the same lack of warning before walking into a room with flashing video installations.
The museum sector has gotten better at physical accessibility. Ramps, elevators, and wheelchair-accessible display cases are standard now. But neurological accessibility is decades behind. Most institutions haven't started thinking about it, and the audio guide industry hasn't given them the tools to act on it even if they wanted to.
That's beginning to change. Not fast enough, but it's starting.
Where traditional audio guides fail
A conventional audio guide is a linear recording. Press play, listen for ninety seconds, move to the next stop. The pace is set by whoever wrote the script. The language complexity is fixed. The sensory environment is what it is.
For a visitor with ADHD, ninety seconds of unbroken narration about a painting's provenance can feel interminable. There's no way to skip to the part they care about. No way to get a shorter version. No interactive element to hold their attention. They either endure it or take the headphones off. Most take the headphones off.
For an autistic visitor, the problems are different but equally real. The guide provides no information about what the museum visit will be like before they arrive. No predictable structure they can review in advance. No warnings about sensory intensity (the next gallery has a loud video installation, a particular room tends to be crowded). No quiet space locations. The visit becomes an exercise in managing anxiety rather than enjoying art.
For someone with dyslexia, any on-screen text in the guide app uses whatever font the designer picked. If the guide has a transcript mode, it's in a standard typeface that makes letters swim and merge.
For visitors with sensory processing differences, there's no volume normalization, no way to reduce auditory complexity, and no integration with noise-canceling headphones they may already be wearing.
None of these failures are intentional. They're the result of designing for one type of visitor and never questioning that assumption.
What software-based guides make possible
The single biggest shift in audio guide technology over the past few years is the move from fixed recordings to software-driven, AI-generated delivery. This matters for neurodiversity more than almost any other accessibility category, because the limitations of traditional guides are all rooted in rigidity. And rigidity is exactly what software eliminates.
When your guide is generated in real time by software, pacing becomes a choice rather than a constraint. Content length becomes adjustable. Language complexity can shift. The order of stops can flex. Warnings and supplementary information can be inserted contextually. None of this requires producing multiple versions of the guide. It's the same system responding differently based on visitor preferences.
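To make that concrete, here is a minimal sketch of what preference-driven delivery can look like. Everything in it (the interface names, the fields, the prompt wording) is illustrative rather than Musa's actual API; the point is simply that one system can respond differently per visitor instead of producing multiple versions of the guide.

```typescript
// Hypothetical types: the shape of the idea, not an existing API.
interface VisitorPreferences {
  segmentSeconds: number;                            // target narration length per stop
  readingLevel: "simple" | "standard" | "detailed";  // language complexity
  sensoryWarnings: boolean;                          // flag intense sights and sounds ahead of a room
  linearTour: boolean;                               // fixed sequence vs. free exploration
}

interface StopRequest {
  stopId: string;
  prompt: string;            // instruction handed to the generation layer
  prependWarnings: boolean;
}

// One system, different behavior per visitor: the preferences shape the
// request, not a separate pre-produced edition of the guide.
function buildStopRequest(stopId: string, prefs: VisitorPreferences): StopRequest {
  const styleByLevel = {
    simple: "short sentences and common vocabulary",
    standard: "conversational language",
    detailed: "full curatorial detail",
  };
  return {
    stopId,
    prompt:
      `Narrate this stop in roughly ${prefs.segmentSeconds} seconds, ` +
      `using ${styleByLevel[prefs.readingLevel]}. ` +
      `End by inviting the visitor to ask for more if they want it.`,
    prependWarnings: prefs.sensoryWarnings,
  };
}
```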
This is not hypothetical. AI-powered audio guides already support variable pacing, skip-ahead, question-and-answer interaction, and non-linear navigation. These features were built for general usability, but they happen to address core neurodiversity needs.
The question is: how deliberately are we applying them?
ADHD: the attention contract
People with ADHD don't have a deficit of attention. They have difficulty directing it on demand toward things that aren't intrinsically engaging. A two-minute monologue about artistic technique might lose them in fifteen seconds. A thirty-second story about the scandal that got a painting banned might hold them for five minutes as they ask follow-up questions.
Traditional audio guides offer no way to work with this. The content is what it is, and it plays at the pace it plays.
An AI-powered guide can do several things differently. Shorter default segments (thirty to forty-five seconds rather than ninety) with clear invitations to go deeper if the visitor wants. The ability to interrupt and ask a question mid-narration, which redirects the experience toward whatever caught their interest. Non-linear navigation that lets visitors bounce between stops based on curiosity rather than a prescribed path. Interactive prompts that turn passive listening into a conversation.
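As a rough sketch of the "short first, deeper only on request" pattern and the mid-narration redirect (the names below are purely illustrative, not an existing implementation):

```typescript
// Illustrative layering: teaser by default, depth only when asked for.
type Depth = "teaser" | "story" | "deepDive";

interface Segment {
  depth: Depth;
  approxSeconds: number;   // teaser around 30-45 seconds, deeper layers longer
  text: string;
}

// Serve the shortest layer by default; deeper layers play only when the
// visitor explicitly asks, so sustained attention is never assumed.
function nextSegment(stop: Record<Depth, Segment>, requested?: Depth): Segment {
  return requested ? stop[requested] : stop.teaser;
}

// A question mid-narration pauses playback and redirects the experience
// toward whatever actually caught the visitor's interest.
function onVisitorQuestion(question: string, currentStopId: string) {
  return { pausePlayback: true, query: `${question} (asked at stop ${currentStopId})` };
}
```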
The underlying principle: give ADHD visitors control over their own attention. Don't demand sustained focus on content they didn't choose. Let them steer.
This isn't only good for ADHD visitors. Every visitor benefits from shorter segments and more interactivity. The research on museum fatigue has been saying this for decades: people stop absorbing information after about thirty minutes of passive listening. Designing for ADHD pushes you toward a better experience for everyone.
Autism: predictability and sensory safety
For many autistic visitors, the hardest part of a museum visit isn't the content. It's the uncertainty. What will the space look like? How loud will it be? How crowded? Where are the quiet areas? What's the expected sequence of events?
Social stories (short, structured previews of an experience) are a well-established tool for helping autistic people prepare for new situations. An audio guide could deliver one before the visit even begins: here's what the museum looks like, here's how the tour works, here's what to expect in each gallery, here are the quiet spaces if you need a break.
No traditional audio guide does this. But a software-based guide could. The information already exists. Museums know their floor plans, their crowd patterns, their noisy galleries. Packaging it into a pre-visit orientation for visitors who need predictability is a matter of building the feature, not generating new content.
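A hedged sketch of what that packaging could look like, assuming a simple data shape for what the museum already knows about each gallery (the field names here are invented for illustration):

```typescript
// Assumed data shape for facts the museum already holds about each gallery.
interface GalleryInfo {
  name: string;
  typicallyCrowded: boolean;
  loudElements: string[];        // e.g. ["video installation with sudden sound"]
  quietSpaceNearby?: string;
}

// Turn existing facts into a short, predictable pre-visit story.
function buildPreVisitStory(museumName: string, tour: GalleryInfo[]): string[] {
  const steps = [
    `You are visiting ${museumName}. The tour covers ${tour.length} galleries, in a fixed order.`,
  ];
  for (const gallery of tour) {
    let line = `In ${gallery.name}:`;
    if (gallery.loudElements.length > 0) line += ` expect ${gallery.loudElements.join(", ")}.`;
    if (gallery.typicallyCrowded) line += " This room is often busy.";
    if (gallery.quietSpaceNearby) line += ` A quiet space is nearby at ${gallery.quietSpaceNearby}.`;
    steps.push(line);
  }
  steps.push("You can pause the tour or take a break at any point.");
  return steps;
}
```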
During the visit itself, sensory warnings matter. A simple flag ("the next gallery contains a video installation with sudden loud sounds") costs nothing to implement but can be the difference between an autistic visitor continuing the tour or leaving the museum. Volume normalization, integration with the visitor's own noise-canceling headphones, and clear information about quieter alternative routes all fall into the same category: low effort, high impact.
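In software terms, that flag really is close to free. A minimal sketch, again with assumed field names rather than an existing schema:

```typescript
// Assumed per-stop field: one optional warning string.
interface StopSensoryInfo {
  stopId: string;
  warning?: string;   // e.g. "The next gallery has a video installation with sudden loud sounds."
}

// If the visitor has warnings enabled and the stop carries one, speak it first.
function narrationQueue(stop: StopSensoryInfo, narration: string, warningsEnabled: boolean): string[] {
  return warningsEnabled && stop.warning ? [stop.warning, narration] : [narration];
}
```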
Predictable structure helps too. An autistic visitor who can see the full tour sequence (how many stops, roughly how long each takes, where they are in the overall progression) experiences less anxiety than one who's navigating blind. Progress indicators and clear "you're at stop 5 of 12" cues are simple additions that provide real reassurance.
Dyslexia and visual processing
Audio guides are inherently better for dyslexic visitors than text panels. That's the good news. But most guide apps still include significant amounts of on-screen text: stop descriptions, navigation instructions, transcripts.
Dyslexia-friendly fonts (typefaces specifically designed to reduce letter confusion) exist, and many dyslexic readers find them noticeably easier to work with. OpenDyslexic and similar fonts weight the bottom of letters to prevent the visual rotation that makes standard typefaces difficult for dyslexic readers. Offering this as a toggle in a guide app is straightforward software work.
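For illustration, a toggle like this can be a few lines in a web-based guide app (the class name and storage key below are assumptions, not an existing implementation):

```typescript
// Assumed class name and storage key; OpenDyslexic loads like any other
// web font via an @font-face declaration.
const FONT_PREF_KEY = "guide.dyslexiaFont";

function setDyslexiaFont(enabled: boolean): void {
  // Toggling a single class lets CSS swap the typeface everywhere on-screen text appears.
  document.body.classList.toggle("dyslexia-font", enabled);
  localStorage.setItem(FONT_PREF_KEY, String(enabled));
}

// Restore the visitor's saved preference when the app starts.
function restoreFontPreference(): void {
  setDyslexiaFont(localStorage.getItem(FONT_PREF_KEY) === "true");
}
```

The matching CSS rule just sets the OpenDyslexic family on that class, typically with slightly more line height and letter spacing.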
Beyond fonts: text-to-speech with real-time highlighting (what Musa calls "karaoke mode," originally built for deaf and hard-of-hearing visitors) also helps dyslexic visitors follow along. Seeing words highlighted as they're spoken reinforces the connection between the visual and auditory processing channels.
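Under the hood, karaoke-style highlighting needs little more than word-level timestamps, which modern speech synthesis and transcription tooling can supply. A simplified sketch (illustrative types, not the production code):

```typescript
// Word-level timestamps from the synthesis or transcription step drive the highlight.
interface TimedWord {
  text: string;
  startSec: number;
  endSec: number;
}

// Index of the word to highlight at the current playback position,
// or -1 when playback sits between words.
function currentWordIndex(words: TimedWord[], playbackSec: number): number {
  return words.findIndex(w => playbackSec >= w.startSec && playbackSec < w.endSec);
}
```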
Simplified language options (shorter sentences, more common vocabulary, less jargon) serve dyslexic visitors while also helping anyone who isn't fluent in the guide's language. Again, what helps one group helps many.
Sensory processing: the environment problem
Some neurodiversity challenges can't be solved by the audio guide alone. A museum with a loud, echoing atrium is going to be overwhelming for visitors with sensory processing difficulties regardless of what's playing in their headphones.
But the guide can help manage the experience. Sensory maps (which galleries are typically quiet, which have intense visual or auditory elements, which tend to be crowded at different times of day) let visitors plan their route around their own tolerances. This information changes throughout the day and could be updated in real time for a software-based guide, though that level of integration doesn't exist yet.
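One plausible shape for that data, sketched here with invented field names to show how little structure a useful sensory map actually needs:

```typescript
// Invented fields; real-time crowd data isn't integrated yet, so crowding is a static rating.
type Level = "low" | "medium" | "high";

interface SensoryMapEntry {
  gallery: string;
  noise: Level;
  visualIntensity: Level;                                // flashing or fast-moving media
  typicalCrowding: { morning: Level; afternoon: Level };
  quietAlternativeRoute?: string;
}

// Let a visitor filter out rooms above their own noise tolerance.
function roomsToAvoid(map: SensoryMapEntry[], maxNoise: Level): string[] {
  const rank: Record<Level, number> = { low: 0, medium: 1, high: 2 };
  return map.filter(entry => rank[entry.noise] > rank[maxNoise]).map(entry => entry.gallery);
}
```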
Volume control with a wider range than standard audio players, automatic volume normalization so the jump from a whispered gallery introduction to a dramatic war-scene narration doesn't arrive as a blast of sound, and explicit pairing with noise-canceling headphones are all practical features. The visitor's headphones become a tool for managing the entire sensory environment, not just a delivery mechanism for narration.
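As one example of what volume normalization can mean in practice, here is a hedged sketch using the Web Audio API's built-in dynamics compressor to pull quiet and loud segments closer together in level (the parameter values are illustrative, not tuned for a real gallery mix):

```typescript
// Smooth loudness differences between narration segments before playback.
function attachLoudnessSmoothing(ctx: AudioContext, source: AudioNode): AudioNode {
  const compressor = ctx.createDynamicsCompressor();
  compressor.threshold.value = -30; // start reducing gain above this level (dB)
  compressor.ratio.value = 4;       // gentle compression rather than hard limiting
  compressor.attack.value = 0.01;   // react quickly to sudden loud passages
  compressor.release.value = 0.25;  // recover smoothly afterwards
  source.connect(compressor);
  compressor.connect(ctx.destination);
  return compressor;
}
```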
Being honest about where we are
Here's what we should acknowledge directly: the museum audio guide industry hasn't focused on neurodiversity. Not in a meaningful, systematic way. Including us.
At Musa, we've built a platform with significant software flexibility. Adjustable pacing, skip-ahead, conversational interaction, karaoke-mode transcripts, screen reader support. These features exist and they happen to serve some neurodiverse needs. But we haven't built dedicated neurodiversity features yet. No dyslexic font toggle. No sensory maps. No social stories. No ADHD-specific content mode.
We know these things are possible. The whole point of a software-based guide is that you can build whatever the situation demands. We're not constrained by hardware or fixed recordings. If a museum needs a dyslexic font option, we can build it. If sensory warnings need to be added to specific stops, the architecture supports it. The question is prioritization, and honestly, neurodiversity hasn't been at the top of the list yet.
That's changing. Accessibility has been a real focus: screen reader optimization, high contrast, device-level font sizing, the karaoke transcript system. The next layer includes neurodiversity-specific features. But I'd rather be straightforward about what exists today than sell something we haven't shipped.
The software flexibility is real. What's built on top of it is still catching up.
The curb cut effect
In the 1970s, cities started adding curb cuts (small ramps at sidewalk edges) for wheelchair users. Almost immediately, everyone started using them: people with strollers, delivery workers with carts, travelers with suitcases, joggers, cyclists. A feature designed for a specific accessibility need turned out to improve the experience for the entire population.
This same pattern applies to neurodiversity design in audio guides. Shorter content segments designed for ADHD visitors reduce museum fatigue for all visitors. Predictable structure designed for autistic visitors helps first-time museumgoers who feel overwhelmed. Simplified language options designed for dyslexic visitors serve international tourists whose English is limited. Sensory information designed for sensory processing differences helps parents planning visits with young children.
Every feature built for neurodiverse visitors makes the guide better for neurotypical visitors too. This isn't a niche accommodation. It's better design.
What museums can do now
You don't need to wait for perfect neurodiversity features to make progress.
Audit your existing guide for fixed-pace assumptions. If your guide only works at one speed with no ability to pause, skip, or adjust, that's the first thing to address. Any software-based guide should support this already.
Add sensory information to stop descriptions. If a gallery has loud elements, note it. If there's a quiet space nearby, mention it. This takes minutes per stop and helps visitors who need it.
Provide a pre-visit overview. What does the tour cover? How long does it take? What's the sequence? Even a simple "here's what to expect" screen helps visitors who need predictability.
Ask neurodiverse visitors directly. Partner with local autism organizations, ADHD support groups, and dyslexia charities. Invite them to test your guide and tell you what doesn't work. The feedback will be specific, useful, and probably surprising.
Choose a guide platform with software flexibility. If you're selecting or upgrading your audio guide, prioritize systems that can add features without re-recording content. The neurodiversity features you'll want in two years should be software updates, not new production runs.
What comes next
When it comes to neurological accessibility, the museum sector is where physical accessibility was thirty years ago. We know it matters. We know the numbers. We know the existing tools don't serve these visitors well. What's been missing is the technology to act without multiplying production costs and complexity.
Software-based, AI-powered audio guides change that equation. Not because they solve everything today, but because they make the cost of adding neurodiversity features marginal rather than prohibitive. A dyslexic font toggle is a software feature, not a new print run. A sensory warning system is a data field per stop, not a separate guide version. An ADHD-friendly content mode is a prompt configuration, not a re-recording.
The tools are getting there. The awareness is growing. The question for individual museums is whether to start incorporating neurodiversity into their audio guide thinking now or wait until it becomes an expectation.
If you're thinking about how to make your guide work better for all types of visitors, we'd be glad to talk through what's possible today and what's on the horizon.