This is incredible. Terry Garrett, a blind gamer, has beaten the Nintendo 64 classic “Ocarina of Time”. The man’s a gaming god.
How did he do it? First, Garrett relied heavily on the game’s soundscape to orient himself around its 3D space. He even used the venerable Zelda hookshot “as a form of echolocation,” listening for the difference between the weapon striking walls and whiffing through open air. He also relied heavily on software emulation—Garrett saves his game state every few seconds, then restores that state when experiments go awry.
Garrett’s achievement testifies to his perseverance and ingenuity; it took five years of occasional gameplay to finish the task. Few gamers have the patience to do that sort of repetitive, time-consuming work.
Nintendo also deserves credit—for putting such care into Ocarina’s soundscape. The game’s sound engine places each noise in its proper stereo location. Plus, key occurrences on-screen have discernible audio equivalents. For example, when Link chaperones Zelda through Ganondorf’s castle, Zelda’s feet make tiny, just-perceptible noises.
What if every game developer took low-vision accessibility more seriously? What if game studios put the same care into their sound engines that they put into graphics and physics? What if every game’s sound design made it possible for blind gamers to play—without resorting to trial and error?
Imagine, for example, if your avatar’s footsteps reverberated the way they do in real life. The sound would echo differently depending on your distance from the nearest wall, the texture of the floor, or the proximity of a deadly chasm. Just this one feature would allow a blind gamer to navigate virtual realms much like Daniel Kish explores the real world.
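To make the idea concrete, here is a minimal sketch of how an engine might derive an echo cue from the player’s surroundings. Everything here—the function name, the material table, the falloff formula—is invented for illustration; only the speed of sound and the round-trip delay are real physics.

```python
# Illustrative sketch: deriving a footstep-echo cue from the environment.
# All names and constants besides the speed of sound are hypothetical.

SPEED_OF_SOUND = 343.0  # meters per second, in air

# Hypothetical absorption factors: harder surfaces reflect more energy.
ABSORPTION = {"stone": 0.1, "wood": 0.3, "carpet": 0.8}

def footstep_echo(distance_to_wall: float, material: str) -> tuple[float, float]:
    """Return (delay_seconds, echo_volume) for a footstep's echo.

    The echo travels to the wall and back, so the delay is twice the
    distance divided by the speed of sound. Volume falls off with
    distance and with how absorbent the reflecting surface is.
    """
    delay = 2.0 * distance_to_wall / SPEED_OF_SOUND
    reflectivity = 1.0 - ABSORPTION[material]
    volume = reflectivity / (1.0 + distance_to_wall)  # simple falloff
    return delay, volume

# A nearby stone wall produces a short, loud echo; a distant carpeted
# wall produces a long, faint one--exactly the contrast a player would
# listen for.
near_wall = footstep_echo(2.0, "stone")
far_wall = footstep_echo(30.0, "carpet")
```

Even this crude model carries the information Garrett extracted with the hookshot: timing tells you distance, loudness tells you what you’re standing near.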
Games might even implement a “low-vision mode.” With this setting enabled, on-screen events would create constant, audible cues.
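Such a mode could be as simple as a table mapping on-screen events to extra audio cues, played only when the setting is enabled. The sketch below is purely hypothetical—the event names, cue filenames, and class are all invented to show the shape of the idea:

```python
# Illustrative sketch of a "low-vision mode": a mapping from on-screen
# game events to additional audio cues. All names are hypothetical.

LOW_VISION_CUES = {
    "enemy_spotted": "enemy_grumble.ogg",
    "door_opened": "door_creak.ogg",
    "item_nearby": "item_chime.ogg",
    "ledge_ahead": "wind_over_chasm.ogg",
}

class AudioCueDispatcher:
    def __init__(self, low_vision_mode: bool = False):
        self.low_vision_mode = low_vision_mode
        self.played = []  # stand-in for a real audio backend

    def on_event(self, event: str) -> None:
        """Queue an extra audible cue for an event when the mode is on."""
        if self.low_vision_mode and event in LOW_VISION_CUES:
            self.played.append(LOW_VISION_CUES[event])

dispatcher = AudioCueDispatcher(low_vision_mode=True)
dispatcher.on_event("enemy_spotted")
```

Sighted players with the mode off would hear nothing new; blind players would get a constant, legible stream of cues.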
Take the recent Batman: Arkham series as a theoretical example. How might these games sound if they were programmed with the sight-impaired gamer in mind? Each mob thug would grumble and yell incessantly; that way, the player could tell exactly where each foe stood, relative to Batman’s current position. Or, as the Batmobile motored through Gotham City, audio cues could distinguish open street intersections from adjacent buildings. That way, a gamer could hear exactly when to hit that e-brake. Finally, for less action-heavy sequences, Batman might speak his inner monologue out loud—describing the environment or the puzzle at hand in exhaustive detail.
If more game developers attended to such details, a standard “low-vision vocabulary” would solidify over time. These conventions would guide devs’ work and allow blind gamers to quickly grok new games. Game engines (e.g. Unreal, Unity) would incorporate these features, giving developers a head start on building blind-accessible titles. Design studios might even hire blind game developers to ensure that their games met the needs of the sight-impaired.
UPDATE: Reader Ian Hamilton responded via Twitter with a series of helpful thoughts. In particular, he notes that many fighting games (e.g. “Mortal Kombat X”) already include audio cues that make it easier for sight-impaired gamers to compete. Ian also linked to an interesting Game Developers Conference panel on “Reaching the Visually Impaired Gamer”.