Two Birds, One Stone
There are two ways to think about AI-enhanced audio guides.
The obvious case is that they're better. A traditional audio guide is someone talking at you. You press play, you listen, you move to the next thing. An AI-enhanced guide can actually answer questions. If you're curious about something specific, you can ask. That's a meaningful improvement.
But there's a second case that's less obvious and equally important.
ChatGPT now has 800 million weekly active users, up from 100 million in November 2023, 300 million in December 2024, and 400 million in February 2025. It's one of the fastest-growing services ever.
This matters because of how people use it. About half of all ChatGPT messages are people asking questions, using it as an advisor rather than a tool for task completion. Even by conservative estimates, ChatGPT already handles nearly 1 in 10 of the "search-like" queries that Google fields every day; at the high end, it's more than 1 in 8. That's remarkable for a product that launched less than three years ago.
The shift is significant enough that it's reshaping legal battles worth hundreds of billions of dollars.
Google is now using AI competition as an antitrust defense. In a blog post responding to the Justice Department's proposal to potentially break up Google, the company pointed to the blossoming market for AI and the evolution of search as reasons why the government's case is misguided.
Judge Mehta's 226-page ruling heavily emphasized the role of AI. "The emergence of GenAI changed the course of this case," he wrote. After ruling that Google had illegally monopolized online search, he also concluded that the specter of artificial intelligence would ensure the company faced new competition. Google didn't need to change much about its business, the judge ruled, in part because the looming threat of AI would help solve the problem.
Think about what that means: AI competition is now credible enough to use as evidence in federal court.
What happens in museums
Picture someone standing in front of a painting. They don't have time to read every placard. They want to understand what they're looking at. Increasingly, they'll just ask ChatGPT.
And ChatGPT will answer. But there are two problems with this.
First, it hallucinates. Not often, but enough. And it's not optimized against hallucination for your specific collection. It doesn't know the particular history of your acquisitions, your research, your discoveries.
Second, even when it's accurate, it's generic. It pulls from the general corpus of knowledge: the Wikipedia article, the common interpretation, the consensus view. It doesn't know your curatorial vision. It doesn't know the story you're trying to tell with how you've arranged things. It doesn't know why you put these two pieces next to each other.
Younger demographics are more likely to use ChatGPT, particularly for academic and creative tasks. The visitors of tomorrow are forming their information-seeking habits today. They're learning that when they have a question, they can just ask.
The defense
There's really only one way to address this: have your own AI that answers questions about your collection better than the generic one.
You can't block people from using ChatGPT. You can't ignore that they will use it. But you can give them something better: something with the same conversational capability, but accurate about your specific works and speaking in your interpretive voice.
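To make that concrete, here is a minimal sketch, in Python, of what "your own AI" could look like under the hood: pull the relevant records from your own collection data and wrap them, together with your curatorial voice, around the visitor's question before it ever reaches a general-purpose model. All of the names here (CollectionEntry, retrieve, build_prompt) are hypothetical, and the keyword retrieval is a stand-in for whatever search your system actually uses.

```python
# Hypothetical sketch: ground answers in the museum's own collection records
# and curatorial voice instead of the model's general corpus.
from dataclasses import dataclass


@dataclass
class CollectionEntry:
    title: str
    wall_text: str       # the placard / curatorial interpretation
    curator_notes: str   # acquisition history, research, arrangement rationale


def retrieve(question: str, entries: list[CollectionEntry], top_k: int = 2) -> list[CollectionEntry]:
    """Naive keyword-overlap retrieval; a real system would use embeddings or search."""
    q_terms = set(question.lower().split())
    scored = [
        (len(q_terms & set(f"{e.title} {e.wall_text} {e.curator_notes}".lower().split())), e)
        for e in entries
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [e for score, e in scored[:top_k] if score > 0]


def build_prompt(question: str, entries: list[CollectionEntry], voice: str) -> str:
    """Assemble a prompt carrying the institution's voice and only the retrieved
    collection facts, to be sent to whichever model powers the guide."""
    context = "\n\n".join(f"{e.title}\n{e.wall_text}\n{e.curator_notes}" for e in entries)
    return (
        f"You are this museum's guide. Speak in the following voice: {voice}\n"
        "Answer only from the collection records below; say so if they don't cover the question.\n\n"
        f"Collection records:\n{context}\n\n"
        f"Visitor question: {question}"
    )


# Usage sketch:
#   entries = [CollectionEntry("Example Title", "Placard text...", "Curator notes...")]
#   question = "Why are these two pieces hung next to each other?"
#   prompt = build_prompt(question, retrieve(question, entries), voice="warm, plain-spoken")
#   ...send `prompt` to the model behind your guide
```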
The same product that makes the experience better is also what protects your narrative from being replaced by a generic one. It's the same investment, the same technology, the same work. But it accomplishes both things.
Two birds. One stone.
