A wide range of products, referred to variously as 'adaptive' or 'assistive' technologies, has been developed for the blind. These include traditional devices (e.g. long cane, magnifying glass, portable braille typewriter, hand-held video camera, talking calculator/clock/dictionary/measuring device, cassette recorder and Dictaphone, large-print books and raised-line drawings), as well as more recent technology associated with the computer (e.g. braille keycaps, braille embosser, braille display, screen reader, screen magnifier, speech synthesiser, text-to-speech software, scanner and OCR software, electronic travel aid, personal navigation assistant and laptop/portable computer).
Many assistive products may be bought or acquired (e.g. long cane, braille embosser, voice recognition software), while other facilities may be acquired as a free or paid-for service (e.g. tactile graphics production, bulk text-to-braille transcription).
The traditional aids are described in the companion document, General Resources and Assistance. In this document, a broad overview is provided of some of the more recent assistive technologies that are of greatest relevance to visually impaired students undertaking fieldwork. It should be noted, however, that most of these were developed with neither fieldwork nor geographical study in mind, and that many of them (e.g. talking calculator and screen-reader) may be useful throughout a programme of study, not just on fieldcourse.
There are several ways of classifying assistive technologies. A distinction between computer and non-computer aids has already been used above. An alternative approach, based on how assistive technology relates to the user's visual impairment, sees three kinds of technology:
Almost all sensory substitution technologies are now computer-related, though several (e.g. tactile graphics, braille displays, voice recorders and video cameras) have their origins in pre-digital technology. Most of these technologies involve substitution by sound and speech, but the first two described here involve the use of touch. A cheap laptop PC, scanner, optical character recognition (OCR) software, graphics software and sound card provide a firm base on which the visually impaired student can build a complete kit of sensory substitution facilities. Minor utilities, such as braille keycaps for PC keyboards, can also be extremely useful.
It is worth noting that although braille is perhaps the most widely known assistive technology, it is likely to be used by only a minority of VI students, and few partially sighted or recently blind students will be able to read braille. Nevertheless, for those who use it, braille can mean a significant improvement to their study experience. Braille can be used in several ways:
There are two computer-linked items of hardware available for displaying braille. The braille embosser is the equivalent of a printer and creates braille hard copy on suitable paper. Braille translation software is available to convert typed or scanned text into a format that can be output on an embosser. The braille display presents text from the computer on a row of 40 to 80 'characters' formed by groups of pins. (These devices connect to the PC through a serial or USB port.) Some braille displays have facilities to enable users to move around the computer screen, but such devices are still rather expensive, and are more likely to be encountered by visually impaired students at disability resource centres on campus than owned by the students themselves.
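The principle behind braille translation software can be sketched in a few lines. The example below is a minimal, hypothetical illustration of uncontracted (Grade 1) translation using the Unicode braille patterns block (U+2800 onwards, where dots 1-6 of a cell map to the low six bits of the code point); real translation software also handles contractions, numbers and punctuation, and drives an embosser or display rather than printing characters.

```python
# Minimal sketch of uncontracted (Grade 1) braille translation.
# Dots 1-6 of a braille cell map to bits 0-5 of the offset from
# the Unicode base code point U+2800.
BASE = 0x2800
DOTS = {
    'a': (1,), 'b': (1, 2), 'c': (1, 4), 'd': (1, 4, 5), 'e': (1, 5),
    'f': (1, 2, 4), 'g': (1, 2, 4, 5), 'h': (1, 2, 5), 'i': (2, 4),
    'j': (2, 4, 5), 'l': (1, 2, 3), 'o': (1, 3, 5),
}

def to_braille(text: str) -> str:
    """Translate lowercase letters to braille cells; spaces become blank cells."""
    cells = []
    for ch in text:
        if ch == ' ':
            cells.append(chr(BASE))  # empty cell
        else:
            offset = sum(1 << (dot - 1) for dot in DOTS[ch])
            cells.append(chr(BASE + offset))
    return ''.join(cells)

print(to_braille('hello'))  # → ⠓⠑⠇⠇⠕
```

The same cell-by-cell mapping underlies embosser output, where each cell is punched as raised dots rather than displayed as a glyph.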
While most assistive technologies have explored alternative sensory pathways for giving readers access to text, visually impaired students undertaking fieldwork are likely to need access to graphical information, including maps, diagrams, photographs, space images, landscape views, etc. Where the information is paper- or screen-based, tactile graphics can provide the necessary sensory substitution, involving the haptic sense of touch (Edman, 1992; Schiff & Foulke, 1982).
As with braille displays, it is unlikely that students will have their own facilities for producing tactile graphics. Fortunately, in the UK at least, tutors have access to the National Centre for Tactile Diagrams (NCTD) at the University of Hertfordshire (see the Resources document for details), which can produce a variety of tactile graphs and maps at subsidised prices. For specific applications of tactile graphics in geography, mapping and fieldwork, see the complementary document on Maps and Other Graphics.
Other forms of tactile display have been developed, including vibrotactile and electrotactile devices attached to the fingertip, tongue or other parts of the skin. However, despite considerable research, few of these are in routine use by the blind. (See Kaczmarek, 1991, for further details.)
Many libraries provide popular books in large-print and talking book formats for visually impaired readers. Talking books usually involve someone reading text into a tape recorder, and the reader uses a cassette player to listen to the book. Recently, however, talking books are also becoming available as digital speech files for playback on PCs equipped with a sound card. The problem from a blind or visually impaired student's point of view is that there are relatively few textbooks or field study guides available in this format. (See the Handouts document for applications.)
An increasing number of blind or partially sighted computer users use screen reading software to listen to textual material that appears on their computer screen. This software extracts text from the desktop environment (e.g. Windows), from application programs or from Web documents, so that it can be passed to a speech synthesiser device, text-to-speech software or a braille display (Iowa, 2000). Among the more popular commercial screen readers are Windots and JAWS (Job Access with Speech for Windows), both of which pass information to a braille display or speech synthesiser. A useful review of screen readers is available as a fact sheet from AbilityNet (http://www.abilitynet.org.uk/content/factsheets/Factsheets.htm), and an interesting survey has been carried out on the use of screen readers by people using Windows (Earl & Leventhal, 1999). Some visually impaired students might want to explore whether a self-voicing Web browser, such as PWWebspeak, is available to them, and whether it is preferable to a combination of standard browser and separate screen reader.
An important principle is that screen readers can only work effectively if there is suitable text available on screen to be read. Staff intending to create Web documents relating to fieldwork should therefore adopt design rules that maximise the amount of information that can be accessed by screen readers. (See the Web Design document for further details.) Microsoft has made available 'Active Accessibility' technology (described below), which enables software designers to build applications and documents that are relatively easy to link to screen-reading software.
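The principle that a screen reader can only voice what the markup exposes can be illustrated with a small sketch. The extractor below (a hypothetical illustration, not any particular screen reader's algorithm) linearises a Web page into speakable text using Python's standard `html.parser` module: an image with alt text contributes words, while an unlabelled image yields only an unhelpful placeholder.

```python
# Sketch of linearising HTML into the text a screen reader could voice.
from html.parser import HTMLParser

class LinearTextExtractor(HTMLParser):
    """Collect the readable text that could be passed to speech output."""
    def __init__(self):
        super().__init__()
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag == 'img':
            alt = dict(attrs).get('alt')
            # Only an image with alt text yields anything meaningful to speak.
            self.words.append(alt if alt else '[unlabelled image]')

    def handle_data(self, data):
        if data.strip():
            self.words.append(data.strip())

def readable_text(html: str) -> str:
    parser = LinearTextExtractor()
    parser.feed(html)
    return ' '.join(parser.words)

page = ('<h1>Field site</h1>'
        '<img src="scarp.jpg" alt="Limestone scarp">'
        '<img src="x.jpg">')
print(readable_text(page))
# → Field site Limestone scarp [unlabelled image]
```

This is why design rules such as supplying alt text for every informative image make such a difference to what a blind reader actually hears.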
Speech synthesis can be provided either by dedicated hardware (a standalone unit or a PC card), or by 'text-to-speech' software that takes the textual content of computer files or words generated by computer software and outputs this as artificial speech, usually through a PC sound card. Although DOS-based screen readers typically output text to a speech synthesiser, most Windows screen readers can also vocalise the textual contents of what is on screen through speech synthesiser software. (Note that the Lynx Web browser available for DOS can output to speech synthesisers.) A useful review of speech synthesis technology is available as a fact sheet from AbilityNet (http://www.abilitynet.org.uk/content/factsheets/Factsheets.htm).
In order to enable software developers to provide synthetic speech output of text from their programs, Microsoft has released a free Text-to-Speech utility for Windows. (For details see: http://www.microsoft.com/reader/download_tts.asp.)
Typing and mouse manipulation can be troublesome for blind and visually impaired students. An increasingly viable alternative is provided by voice recognition or dictation software, which now has accuracy rates in excess of 90%. Examples include ViaVoice from IBM (http://www-4.ibm.com/software/speech/uk/), Dragon and Voice Xpress. A useful review of voice recognition technology is available as a fact sheet from AbilityNet (http://www.abilitynet.org.uk/content/factsheets/pdfs/Voice%20Recognition%20Systems.pdf), which explains the difference between continuous speech and discrete speech recognition systems.
Recently, Microsoft has released a free Speech Recognition utility for software developers which allows them to capture speech and convert it to text. For details see: http://www.microsoft.com/speech/.
Data sonification or auralisation involves converting visual information into sounds. (For a general review of multi-sensory data representation, see Shepherd, 1995.) An innovative approach to this problem is the vOICe system, developed by Peter Meijer (Meijer, 2000), which is used to 'read' printed or screen images (e.g. photos, graphs). This combination of hardware and software converts pictorial images into sounds using two aural variables: pitch represents vertical position and loudness represents brightness, while time-after-click represents horizontal position. The flow of information in this system is illustrated in the following diagram:
This echoes earlier attempts at converting spatial data to sounds by geographers (e.g. Fisher, 1994). Another system currently under development (at UMIST) is Smartsight, which converts physical shapes captured through a hand-held video camera into melodies to which the blind person listens through earphones or speakers (Guardian, 2000b). Most sonic devices are meant for desktop use, and are therefore most relevant for use in preparatory study prior to fieldcourses, data analysis at field study venues and follow-up activities back at college. An important point about all sonification or auralisation technologies is that it takes considerable time for the visually impaired user to learn how to interpret the sounds effectively.
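The image-to-sound mapping used by systems of this kind can be sketched as follows. This is a hypothetical illustration of the principle, not Meijer's actual implementation: a greyscale image is scanned column by column, with column index setting the time after the click, row height setting pitch, and pixel brightness setting loudness; the frequency range and scan duration chosen here are arbitrary.

```python
# Sketch of a left-to-right image scan producing (time, pitch, loudness)
# sound events, in the spirit of the sonification mapping described above.

def sonify(image, scan_seconds=1.0, f_low=500.0, f_high=5000.0):
    """Return (time, frequency, loudness) triples for non-black pixels.

    `image` is a list of rows (top row first) of brightness values 0-1.
    """
    n_rows, n_cols = len(image), len(image[0])
    events = []
    for col in range(n_cols):
        t = scan_seconds * col / n_cols           # horizontal position → time
        for row in range(n_rows):
            brightness = image[row][col]
            if brightness > 0:
                # Top rows give high pitch, bottom rows low pitch.
                frac = 1.0 - row / (n_rows - 1)
                freq = f_low + frac * (f_high - f_low)
                events.append((t, freq, brightness))  # brightness → loudness
    return events

# A diagonal line is heard as a tone rising in pitch over the scan.
diagonal = [[0, 0, 1],
            [0, 1, 0],
            [1, 0, 0]]
for t, freq, loud in sonify(diagonal):
    print(f"t={t:.2f}s  {freq:.0f} Hz  loudness {loud}")
```

Listening to such output shows why a learning period is needed: the user must build up the skill of hearing a rising tone as a diagonal edge, a sustained chord as a vertical feature, and so on.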
Unlike most other sensory substitution systems, electronic travel aids (ETAs) are designed specifically for use outdoors, and have been the subject of considerable research and development over many years (Foulke, 1986). Some ETAs are based on similar principles to the information sonification devices, in that they attempt to sense relevant information from the environment and convert it into sound information that can be readily understood by the traveller. These devices are discussed further in the Mobility Aids document.
In Windows 95 and later versions of its graphical user interface (GUI) operating system, Microsoft has adopted an 'off-screen' model for accommodating disabled users, its policy being that it will not supply adaptations of its operating system or applications programs for disabled users. Instead, it makes available interface technology that enables third parties to extract relevant information that can be used to provide alternative sensory representations — e.g. extracting text to be passed to screen readers.
Since the mid-1990s, Microsoft has been developing Active Accessibility (MSAA), a software interface embedded in Windows application programs that enables them to pass important information to screen reader software or braille display systems used by blind users. The idea behind this technology is that the application program (operating system, word processor, spreadsheet, etc.) has MSAA facilities embedded within it, and the accessibility software used by the blind includes 'hooks' that communicate with this interface. (MSAA code for developers was released in 1997.) There was a major spat between the blind community and Microsoft in 1997 when version 4 of its Web browser, Internet Explorer, was released without MSAA, despite the fact that the previous version (IE3) had included it.
Page updated 14 December 2001
GDN pages maintained by Phil Gravestock
© Geography Discipline Network/authors, 2001
ISBN: 1 86174 115 4