A museum IT lead showed me her Gantt chart for the handset-to-app migration. Six weeks. One line item called "content migration." Another called "cut over." Her vendor had told her it would be straightforward because the new platform "supports imports." She'd signed the contract in January and the go-live was April. By the end of our first meeting in March, we'd moved the go-live to July and added nine items to the plan.
The gap between how vendors describe a migration and what actually has to happen is the single largest source of pain in these projects. The platforms are fine. The mechanics are well understood. What's missing from most project plans is a realistic account of where the work actually lives.
This is a practical playbook for the IT lead, digital manager, or interpretation officer who's been handed the migration project and needs a timeline they can defend.
What a migration actually is
A handset-to-app migration is not a software swap. It's three overlapping projects:
- A content project (audit, extraction, mapping, regeneration, translation, QA)
- An integration project (ticketing, CMS, collection data, analytics, visitor identity)
- An operations project (front-of-house training, signage, decommissioning, accessibility parity)
Any one of these can kill the timeline if it's missed. In our experience, the content project is the most underestimated, the integration project is the most technically complex, and the operations project is the most likely to slip because everyone assumes it'll sort itself out.
A realistic schedule for a single-site mid-sized museum is 12 to 20 weeks from kickoff to live. Multi-site networks or institutions with complex collection systems typically run 6 to 9 months. Compress that timeline at your own risk — it's the single biggest predictor of a migration that goes live with visible defects.
Phase 1: content audit and extraction (weeks 1-3)
Before you can migrate anything, you need to know what exists. The audit asks: how many tours, how many stops per tour, how many languages, how many total audio assets, how much script text, which stops reference objects that are no longer on the floor, which have been edited and which haven't been touched since 2017.
The audit output is a spreadsheet or database of every content asset, its current state, and whether it should be ported forward, rewritten, or retired. In a 150-object permanent collection with 8 languages, that's typically 1,200 or more individual asset rows.
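The arithmetic behind that row count is worth making explicit, because it scales the whole content project. A minimal sketch, with hypothetical names and an illustrative status field:

```python
# Hypothetical audit-inventory sketch: one row per (stop, language) pair,
# each tagged with a migration decision. Names are illustrative.
from dataclasses import dataclass

@dataclass
class AuditScope:
    stops: int      # objects with a guide stop
    languages: int  # languages recorded per stop

    def asset_rows(self) -> int:
        # Each stop/language pair is one audio asset plus its script text.
        return self.stops * self.languages

scope = AuditScope(stops=150, languages=8)
print(scope.asset_rows())  # 150 stops x 8 languages = 1200 rows
```

Every row then carries a decision (port, rewrite, retire); the totals per decision become your regeneration and QA workload estimates.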
Extraction is where the friction starts. Getting your audio files, scripts, and translation pairs out of the legacy vendor's system ranges from trivial (a clean export in a well-documented format) to adversarial (support tickets, NDAs, per-file fees). Contractual audit: before you sign with the new platform, read the old contract for the clauses on content ownership, export rights, and any per-asset or per-hour retrieval fees. Some legacy vendors charge four-figure sums for full content export, and that line item belongs in the migration budget, not as a surprise in week six.
Common extraction formats:
- Audio files: MP3 or WAV are universal. WMA or vendor-proprietary formats need transcoding, which costs time.
- Scripts: ideally structured (XML, JSON, CSV) but often PDF or Word documents. PDF-only script recovery needs OCR plus manual cleanup.
- Translations: you want these as paired source-target text, not just the target recordings. Without the paired text, adding a 9th language to the new platform will be harder than it needs to be.
- Metadata: object IDs, gallery locations, tour sequences, stop numbers — the structural glue.
If the old vendor won't give you the structured content and will only ship audio files, treat that as a signal to regenerate from scratch in the new platform rather than trying to reverse-engineer the structure.
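The format list above can be turned into a first-pass triage over the export. A minimal sketch, assuming filename conventions that a real export may or may not follow (the `.aud` extension is a made-up stand-in for a vendor-proprietary format):

```python
# Extraction triage sketch: classify each exported file by the work
# it needs before import. Extensions and categories are illustrative.
from pathlib import PurePath

READY_AUDIO = {".mp3", ".wav"}       # universal, import as-is
TRANSCODE_AUDIO = {".wma", ".aud"}   # .aud is a hypothetical vendor format
STRUCTURED_SCRIPT = {".xml", ".json", ".csv"}

def triage(filename: str) -> str:
    ext = PurePath(filename).suffix.lower()
    if ext in READY_AUDIO or ext in STRUCTURED_SCRIPT:
        return "import"
    if ext in TRANSCODE_AUDIO:
        return "transcode"           # budget transcoding time
    if ext == ".pdf":
        return "ocr-and-cleanup"     # PDF-only scripts need OCR + manual QA
    return "review"                  # unknown format: inspect by hand

print(triage("stop_042_en.wma"))    # transcode
print(triage("highlights_fr.pdf"))  # ocr-and-cleanup
```

Run over the full export, the tally of "transcode" and "ocr-and-cleanup" rows is a direct input to the phase 2 schedule.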
Phase 2: content mapping and regeneration (weeks 3-7)
The new platform has a different structural model. Tours, stops, objects, themes, personas, tracks — every platform uses these words slightly differently, and the old system's categories rarely match one-to-one. Mapping is the work of deciding what becomes what.
A practical approach: build the map in the new platform's vocabulary first. "In the new system, a tour is a sequence of stops, and a stop can belong to multiple tours." Then work backward from that to the old system's content. If the old system had a 40-stop "Highlights" tour and a 120-stop "Permanent Collection" tour with overlap, the new model might be a single library of 150 stops with two curated sequences on top. The mapping step decides that explicitly, before the import runs.
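The target model in that example can be sketched directly. IDs, tour names, and counts below are illustrative, not any particular platform's schema:

```python
# Target-model sketch: one deduplicated stop library, with tours as
# curated sequences of stop IDs. A stop can belong to multiple tours.
library = {f"stop-{i:03d}": {"title": f"Object {i}"} for i in range(1, 151)}

tours = {
    "highlights": [f"stop-{i:03d}" for i in range(1, 41)],            # 40 stops
    "permanent-collection": [f"stop-{i:03d}" for i in range(1, 121)],  # 120 stops
}

# Overlap lives in the tour sequences, not in duplicated content.
shared = set(tours["highlights"]) & set(tours["permanent-collection"])
print(len(library), len(shared))  # 150 stops in the library, 40 shared
```

The point of deciding this before the import runs: the 40 overlapping stops exist once, so an edit or a new language lands in both tours automatically.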
Regeneration is the question: do we port the old audio or regenerate it? Our default recommendation for most museums: regenerate, especially if you're moving to an AI-based platform. Old recordings were written for a linear hardware format. Regenerated content can be shorter where visitors wanted shorter, deeper where they wanted deeper, and available in languages the original production never covered.
The exceptions are signature voice work (a recognizable narrator that's part of the museum's identity), licensed celebrity tracks, and specific oral-history recordings where the voice itself is the content. Keep those. Regenerate the rest.
Budget realistically. For a mid-sized museum, regeneration of a 150-stop permanent collection guide typically runs 3-5 weeks of curator time across content review, persona setting, and QA. Not vendor time — curator time.
Phase 3: integration (weeks 5-9, overlaps phase 2)
This is where technical migrations most often stall. The new app has to talk to things the old handsets never did: the ticketing system (to validate visitor entitlement), the CMS (to serve content), the analytics platform (to report engagement), sometimes the collection database (to pull object metadata), occasionally a donor management system (to identify members for premium content).
Each integration has its own rhythm. A modern REST API with decent documentation can be wired up in days. A SOAP-based ticketing integration with a vendor who answers support tickets every ten days can take six weeks. The best predictor of integration time is not the new platform's sophistication; it's the old system's API quality. We covered the integration side in audio guide museum system integrations if you want the category map.
A specific trap: visitor authentication. If the old system didn't validate tickets and the new one does, you've just introduced a new point of failure into the entry flow. Make sure the fallback (visitor without a ticket scan, staff override, offline mode) is designed deliberately, not as an afterthought.
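What "designed deliberately" means in practice is that every failure path returns an explicit, loggable outcome. A minimal sketch, with `check_ticket` standing in for whatever your real ticketing API call is:

```python
# Hypothetical entry-flow sketch: ticket validation with the fallback
# paths designed in, not bolted on. `check_ticket` is a placeholder
# for the real ticketing-system call, which can time out.
def check_ticket(code):
    if code is None:
        raise TimeoutError("ticketing system unreachable")
    return code.startswith("TKT-")  # stand-in validation rule

def admit(code, staff_override=False):
    if staff_override:
        return "admitted:override"       # front-of-house escape hatch
    try:
        ok = check_ticket(code)
    except TimeoutError:
        return "admitted:offline-grace"  # fail open, log for reconciliation
    return "admitted:validated" if ok else "denied"

print(admit("TKT-1234"))  # admitted:validated
print(admit(None))        # admitted:offline-grace
```

The design choice worth debating with the vendor is the offline branch: failing open keeps the entry queue moving when the ticketing system is down, at the cost of reconciling entitlements afterward.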
The other integration worth naming: analytics. You want continuity of visitor data across the migration — same KPIs, comparable benchmarks, a clean crossover so you can prove the migration improved outcomes. If the new platform exposes its analytics only through its own dashboard with no export, that's a medium- to long-term problem worth solving before you sign.
Phase 4: pilot in one gallery (weeks 8-12)
Don't go live across the museum on day one. Pilot in one gallery, with the new app live and the handsets still available, for 3-4 weeks.
The pilot measures the things that matter: completion rate (what percentage of visitors who start the tour finish it), language mix (does the distribution match what your visitor data predicted), question volume (if the platform supports it, how often visitors ask follow-ups), and qualitative reports from front-of-house staff. The pilot finds the problems that weren't visible in testing — signage that's in the wrong place, a QR code that's too small, a stop whose audio didn't export cleanly, a language pair that sounds strange to native speakers.
Fix what the pilot surfaces before the broader rollout. Don't skip this step to save two weeks. The museums that skip it tend to re-learn the lessons at full scale, in front of all their visitors, in the first fortnight after cutover.
Phase 5: phased rollout (weeks 12-16)
Extend from one gallery to the full floor, then run both systems in parallel for 4 to 6 weeks. Reception still leads with the app, but the handsets remain available for visitors who specifically ask. Track the crossover ratio. If app adoption climbs into the 60-70% range during the parallel run, the cutover is safe. If handset demand stays high, something structural is wrong — usually signage, usually fixable.
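The crossover ratio is simple to compute; the discipline is collecting both counts daily. A sketch with illustrative numbers:

```python
# Crossover-ratio sketch for the parallel-run phase: app sessions
# versus handset loans. Figures are illustrative, not benchmarks.
def adoption_rate(app_sessions: int, handset_loans: int) -> float:
    total = app_sessions + handset_loans
    return app_sessions / total if total else 0.0

rate = adoption_rate(app_sessions=340, handset_loans=160)
print(f"{rate:.0%}")  # 68% -> inside the 60-70% safe-cutover band
```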
Treat this phase as real operations work, not a victory lap. Front-of-house training is the concrete deliverable. Every staff member should have used the app themselves, on their own phone, for at least 15 minutes, before they're expected to talk about it to visitors. A staff member who hasn't tried the guide will not promote it convincingly.
Phase 6: decommission (weeks 16-20)
The cutover is a specific calendar date, not a drift. After that date, the handsets come off the counter, signage updates, and the fleet moves toward disposal. Old IT systems tied to the handset program get decommissioned cleanly — credentials revoked, integrations disabled, data archived to wherever your retention policy says it should live.
Disposal is a project on its own, covered in audio guide hardware end of life. The short version: WEEE-compliant recycling is a legal requirement in most jurisdictions, lithium batteries need to be handled as hazardous material, and the "send them to a refurbisher" path can return meaningful salvage value if the model is still supported. Plan it in week one, not week sixteen.
The economic frame that justifies the project
The migration has real cost. Content regeneration, integration work, staff training, signage, disposal. A realistic all-in range for a mid-sized museum is $25,000-$75,000 depending on scope.
What makes this justifiable is what the destination looks like, not what the source costs. Legacy handset programs are capex plus variable operating costs — fleet replacement cycles, battery costs, counter staffing, content re-recording, maintenance contracts. Modern platforms like Musa run on per-interaction or revenue-share pricing with no fleet capex. The migration cost is a one-time transition; what follows is an operating model where the guide only costs the museum when visitors actually engage. That changes the curve — the migration pays back not because the old system was expensive in year one, but because the new system stops being expensive in years three through ten.
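A back-of-envelope payback calculation makes that curve concrete. All figures below are illustrative assumptions, not quotes — substitute your own fleet and platform numbers:

```python
# Payback sketch: one-time migration cost against the annual saving
# from dropping fleet capex and handset operating costs.
# Every figure here is an illustrative assumption.
def payback_years(migration_cost: float,
                  legacy_annual: float,
                  new_annual: float) -> float:
    annual_saving = legacy_annual - new_annual
    return migration_cost / annual_saving if annual_saving > 0 else float("inf")

# e.g. $50k migration, $40k/yr legacy fleet + ops, $15k/yr usage-based platform
print(round(payback_years(50_000, 40_000, 15_000), 1))  # 2.0 years
```

On those assumed numbers the migration pays back in year two, and every year after that is the saving the old fleet never offered.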
If you're planning this project and want a realistic timeline for your specific configuration — content scope, integrations, accessibility requirements — we're happy to walk through it. The Gantt chart should be defensible before you commit to a go-live. The cost of the chart being wrong is borne by visitors in the weeks after cutover, and they don't forget.