Making internet graphics accessible through rich audio and touch

Welcome! This project is carried out by McGill University's Shared Reality Lab (SRL), in strategic partnership with Gateway Navigation CCC Ltd and the Canadian Council of the Blind (CCB). The project is funded by Innovation, Science and Economic Development Canada through the Assistive Technology Program. The motivation for this project is to improve access to internet graphics for people who are blind or visually impaired.


The Problem

On the internet, graphical material such as maps, photographs, and charts representing numerical information is clear and straightforward to those who can see it. For people with visual impairments, this is not the case. Graphical information is often rendered only as manually generated HTML alt-text, frequently abridged and lacking in richness. This is a better-than-nothing solution, but it remains woefully inadequate. Artificial intelligence (AI) technology can improve the situation, but existing solutions are non-interactive and provide a minimal summary at best, without conveying a cognitive understanding of the content, such as the points of interest within a map or the relationships between the elements of a schematic diagram. As a result, the essential information conveyed by the graphic frequently remains inaccessible.

Image: A woman sitting at a computer that is displaying a web page with six images. She has a cup of coffee to her left and a phone to her right.
Image: A visually impaired man wearing a sweater and headphones, sitting in front of a computer in a library, reading a braille book.

Our Approach

We use rich audio (sonification) together with the sense of touch (haptics) to provide a faster and more nuanced experience of graphics on the web. For example, spatial audio, where the user hears sound moving around them through their headphones, can quickly convey the spatial relationships between objects in a scene without requiring long descriptions. In addition, rather than passively listening to audio, the user can actively explore a photograph, either by pointing to different portions and hearing about their content and nuance, or by using a custom haptic device to literally feel aspects such as texture or regions. This permits interpretation of maps, drawings, diagrams, and photographs in which the visual experience is replaced with multimodal sensory feedback, rendered in a manner that helps overcome access barriers for users who are blind, deaf-blind, or visually impaired.
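
To give a flavour of the spatial audio idea, the sketch below uses the standard Web Audio API to play a short tone (an "earcon") from a 3D position, so that a headphone listener hears it coming from that direction. This is purely illustrative, not our actual rendering pipeline, and the helper name is invented for the example.

```typescript
// Minimal sketch, assuming a browser environment with Web Audio support.
const ctx = new AudioContext();

// Play a brief tone at (x, y, z) metres relative to the listener.
// HRTF panning gives convincing left/right and front/back cues on headphones.
function playEarconAt(x: number, y: number, z: number): void {
  const panner = new PannerNode(ctx, {
    panningModel: "HRTF",
    positionX: x, // positive x is to the listener's right
    positionY: y, // positive y is above the listener
    positionZ: z, // negative z is in front of the listener
  });
  const tone = new OscillatorNode(ctx, { frequency: 660 });
  tone.connect(panner).connect(ctx.destination);
  tone.start();
  tone.stop(ctx.currentTime + 0.2); // 200 ms blip
}

// For example, a point of interest on a map that lies ahead and to the left:
playEarconAt(-1, 0, -1);
```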

Try it out.


Engaging the Community

Collaborating with the community is key when creating accessible technology. Our team is partnering with Gateway Navigation CCC Ltd and the Canadian Council of the Blind (CCB), a consumer organization of Canadians who are blind, to ensure that our system is in line with the needs of the community. As part of our co-design approach, we are in regular contact with community members who help guide the development process, but there is always room for more voices. If you'd like to contribute to the project, we invite you to fill out our community survey.

Participate in our community survey.


Our Technology

Our project is designed to be as freely available as possible, as well as extensible, so that artists, technologists, or even companies can produce new experiences for specific graphical content that they know how to render. If someone has a special way of rendering cat photos, they do not have to reinvent the wheel: they can create a module that focuses on their specific audio and haptic rendering and plug it into our overall system.
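
Purely as an illustration of this plug-in idea, the sketch below shows one way such a module interface could be expressed. Every name in it is hypothetical; this is not our actual API, just a common shape for the pattern: a module declares what content it handles, and a registry dispatches graphics to every module that claims them.

```typescript
// Hypothetical types for a pluggable rendering system (illustrative only).
interface Graphic {
  url: string;
  tags: string[]; // e.g. labels from a classifier, such as "cat" or "map"
}

interface Rendering {
  kind: "audio" | "haptic";
  data: ArrayBuffer; // encoded audio clip or haptic actuation pattern
  description: string;
}

interface RendererModule {
  name: string;
  canHandle(graphic: Graphic): boolean;
  render(graphic: Graphic): Promise<Rendering[]>;
}

const modules: RendererModule[] = [];

function registerModule(m: RendererModule): void {
  modules.push(m);
}

// Dispatch a graphic to every module that claims it, gathering all renderings.
async function renderGraphic(graphic: Graphic): Promise<Rendering[]> {
  const results = await Promise.all(
    modules.filter((m) => m.canHandle(graphic)).map((m) => m.render(graphic))
  );
  return results.flat();
}

// A third-party "cat photo" module only needs to implement the interface:
registerModule({
  name: "cat-photo-sonifier",
  canHandle: (g) => g.tags.includes("cat"),
  render: async (g) => [
    { kind: "audio", data: new ArrayBuffer(0), description: `Sonified ${g.url}` },
  ],
});
```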

Learn more about how our system works.


Contact Us

For any information related to the project, you can contact us at atp@cim.mcgill.ca.