Welcome to the website of IMAGE (Internet Multimodal Access to Graphical Exploration). This project is carried out by McGill University's Shared Reality Lab (SRL), in strategic partnership with Gateway Navigation CCC Ltd and the Canadian Council of the Blind (CCB). The project is funded by Innovation, Science and Economic Development Canada through the Accessible Technology Program. The motivation for this project is to improve access to internet graphics for people who are blind or partially sighted.
On the internet, graphical material such as maps, photographs, and charts representing numerical information is clear and straightforward to those who can see it. For people who are blind or have low vision, this is not the case. Rendering of graphical information is often limited to manually generated HTML alt-text labels, which are frequently abridged and lacking in richness. This is a better-than-nothing solution, but it remains woefully inadequate. Artificial intelligence (AI) technology can improve the situation, but existing solutions are non-interactive and provide a minimal summary at best, without conveying a cognitive understanding of the content, such as points of interest within a map or the relationships between elements of a schematic diagram. As a result, the essential information conveyed by the graphic frequently remains inaccessible.
We use rich audio (sonification) together with the sense of touch (haptics) to provide a faster and more nuanced experience of graphics on the web. For example, with spatial audio, where the user hears sound moving around them through their headphones, the spatial relationships between objects in a scene can be conveyed quickly, without long descriptions. In addition, rather than offering only a passive listening experience, we let the user actively explore a photograph, either by pointing to different portions and hearing about their content, or by using a custom haptic device to literally feel aspects such as texture or regions. This permits interpretation of maps, drawings, diagrams, and photographs in which the visual experience is replaced with multimodal sensory feedback, rendered in a manner that helps overcome access barriers for users who are blind, deaf-blind, or partially sighted.
Collaborating with the community is key when creating accessible technology. Our team is partnering with Gateway Navigation CCC Ltd and the Canadian Council of the Blind (CCB), a consumer organization of Canadians who are blind, to ensure that our system is in line with the needs of the community. As part of our co-design approach, we are in regular contact with community members, who are helping guide the development process, but there is always room for more voices. If you'd like to contribute to the project, we invite you to fill out our community survey.
Our project is designed to be as freely available as possible, as well as extensible so that artists, technologists, or even companies can produce new experiences for specific graphical content that they know how to render. If someone has a special way of rendering cat photos, they do not have to reinvent the wheel, but can create a module that focuses on their specific audio and haptic rendering, and plug it into our overall system.
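To make the plug-in idea concrete, here is a minimal sketch of what such a module could look like. This is purely illustrative and not the actual IMAGE API: every name here (`RendererModule`, `GraphicContext`, `Rendering`, `dispatch`, the example URL) is an assumption invented for this sketch.

```typescript
// Hypothetical plug-in shape, NOT the real IMAGE interface: a module declares
// which graphics it can handle and how to render them, and the host system
// dispatches each graphic to a matching module.

interface GraphicContext {
  mimeType: string; // e.g. "image/jpeg"
  tags: string[];   // labels from upstream analysis, e.g. ["cat", "photo"]
}

interface Rendering {
  description: string; // text fallback
  audioUrl?: string;   // optional sonification track
  hapticsUrl?: string; // optional haptic track
}

interface RendererModule {
  name: string;
  canHandle(ctx: GraphicContext): boolean;
  render(ctx: GraphicContext): Rendering;
}

// A specialized cat-photo module plugs in without reimplementing the rest.
const catPhotoRenderer: RendererModule = {
  name: "cat-photo-renderer",
  canHandle: (ctx) => ctx.mimeType.startsWith("image/") && ctx.tags.includes("cat"),
  render: () => ({
    description: "A cat photo with custom audio and haptic rendering.",
    audioUrl: "https://example.org/renderings/cat.mp3", // placeholder URL
  }),
};

// The host tries each registered module and uses the first one that accepts.
function dispatch(
  modules: RendererModule[],
  ctx: GraphicContext
): Rendering | undefined {
  return modules.find((m) => m.canHandle(ctx))?.render(ctx);
}

const result = dispatch([catPhotoRenderer], {
  mimeType: "image/jpeg",
  tags: ["cat"],
});
console.log(result?.description);
```

The key design point this sketch illustrates is separation of concerns: the host system handles discovery and dispatch, while each module only supplies its own rendering logic for the content it knows best.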