ATR (Advanced Telecommunications Research Institute International)

ATR has no projects in progress or planned that are aimed directly at helping people with disabilities, but it is designing a virtual reality system to allow elderly people with limited mobility to "go" outside.

Canon

Since 1977 Canon has manufactured products of its own design for people with disabilities, in addition to offering locally customized versions of products developed by TeleSensory Corporation (see Fig. 5.1). It also provides seminars and training courses for users and special education teachers, some of which are partly funded by the Japanese government. The Corporate Welfare Division, which is in charge of all these activities, currently consists of about 20 staff members in four departments: planning, R&D, marketing and sales, and technical services. The division is now beginning a push for "universal design" to ensure the accessibility of all future company products. A height-adjustable photocopier is currently available, but it is just a first step, according to company representatives. Documents on this topic were, at the time of the JTEC visit, in press for planned June 1995 release and internal distribution; the panelists were told that these would be updated and reissued quarterly thereafter. Some of Canon's current products are listed below.

Fig. 5.1. History of Canon's welfare activities.

Canon/TeleSensory Optacon II System

Originally introduced in the early 1970s, this is a portable reading device for people who are blind that does not require knowledge of Braille. It consists of a tiny camera tethered to a tactile array of 5 x 20 pins covering an area approximately 1 inch long by 0.5 inches wide. As the user moves the camera over the surface on which the information is printed or displayed, the scanned image is converted into a vibrating tactile form that the user senses with a fingertip. The device can also effectively convey simple graphics (e.g., straight lines) as well as text. When reading kanji (Japanese pictographic characters), it is not always possible to display an entire character at once, in which case the user must scan the left and right halves separately.
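The camera-to-pin conversion can be pictured as a downsampling and thresholding step: each pin vibrates when the region of the image it covers is mostly dark ink. This is a minimal illustration, not Canon/TeleSensory's actual signal path; the array shape and threshold value are assumptions:

```python
def to_pin_pattern(patch, rows=20, cols=5, threshold=128):
    """Downsample a grayscale patch (list of lists, 0-255) to a
    rows x cols binary pin pattern: True = pin vibrates (dark ink)."""
    h, w = len(patch), len(patch[0])
    pattern = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # average the source pixels that map onto this pin
            y0, y1 = r * h // rows, max(r * h // rows + 1, (r + 1) * h // rows)
            x0, x1 = c * w // cols, max(c * w // cols + 1, (c + 1) * w // cols)
            pixels = [patch[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            avg = sum(pixels) / len(pixels)
            row.append(avg < threshold)  # dark region -> vibrating pin
        pattern.append(row)
    return pattern
```

Scanning the two halves of a wide kanji would simply mean calling this on the left and right halves of the captured patch in turn.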

Canon/TeleSensory PowerBraille 40 and BrailleMate 2 or 2+2

These products are Braille output and input devices, respectively, for personal computers and laptops. Both have been adapted by Canon for the Japanese market. In the PowerBraille 40, for example, special software has been developed to allow use of two or even three Braille cells to represent a single kanji character when required.

Canon/TeleSensory Aladdin

This product is a closed-circuit TV reading device for people with low vision. Material to be read is placed on a platform under a camera, which displays a highly magnified image on a screen positioned so that it is at eye level and easily accessible. Four simple switches and dials control all functionality: on/off, focus, magnification (4x - 20x), and display mode (normal black on white versus reverse video, high contrast versus gray scale). Aladdin costs approximately ¥150,000 (about $1,500) less than older units with similar capabilities.

Canon Communicator CC-7S/CC-7P

A portable augmentative communication device about the size of a postcard, the Canon Communicator is intended for people with speech impairments. Text is typed by the user on a QWERTY-style keyboard that responds to very light touch and never requires simultaneous depression of multiple keys; sequential key presses accomplish all functions. This design recognizes that potential users include people who have suffered traumatic head injuries, or who have diseases such as multiple sclerosis or cerebral palsy, in which motor control of the hands is adversely affected along with speech.
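The sequential-press design can be modeled as a latched-modifier state machine, similar in spirit to "sticky keys": a modifier press is remembered and applied to the next ordinary key, so no two keys are ever held down together. A hypothetical sketch, not the Communicator's actual firmware:

```python
class SequentialKeyboard:
    """Sticky-modifier input: chords like Shift+A are entered as two
    sequential presses (Shift, then A), so only one key is ever held."""
    MODIFIERS = {"shift", "ctrl"}

    def __init__(self):
        self.latched = set()
        self.output = []

    def press(self, key):
        if key in self.MODIFIERS:
            self.latched.add(key)       # latch until the next normal key
            return
        if "shift" in self.latched:
            key = key.upper()
        if "ctrl" in self.latched:
            key = "^" + key
        self.output.append(key)
        self.latched.clear()            # modifiers apply to one key only
```

Typing "shift" then "a" thus yields "A" with only single, sequential presses.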

The unit comes with a built-in memory to store frequently needed words and phrases. It can record and play back voice and generate printed output, as desired, the latter on a thin tape (a compromise selected to allow use of existing reliable and cheap technology). Optional accessories include an LCD display and a wheelchair bracket. A single large button/switch can be attached as a supplementary input device, to accommodate people with severe motor impairments, and the system can then be set to automatically and repeatedly cycle through all row/column key combinations (marked by small illuminated indicator LEDs at the edges of the unit) so the user can select what to say.
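The single-switch scanning mode described above amounts to a two-phase selection: the row indicator LEDs cycle until the first press locks a row, then the column indicators cycle until the second press selects the key. A minimal sketch of that logic (the grid and the step-count model of timing are illustrative assumptions):

```python
def row_column_scan(grid, first_press, second_press):
    """Simulate single-switch row/column scanning: the highlight cycles
    over rows, wrapping, until the first press locks a row; it then
    cycles over that row's columns until the second press selects a key.
    Presses are given as the number of scan steps elapsed (0-based)."""
    n_rows, n_cols = len(grid), len(grid[0])
    row = first_press % n_rows    # phase 1: row scanning
    col = second_press % n_cols   # phase 2: column scanning within the row
    return grid[row][col]
```

A real device advances the highlight on a timer; here the elapsed steps are passed in directly to keep the sketch deterministic.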

This device is not an adaptation of a TeleSensory product. See Figure 5.2.

Fig. 5.2. Canon CC-7S -- "Communication Aid for People with Speech Disorders."

As explained in the Canon site report (see Appendix C), Canon expects to lose money on the Welfare Group's activities, despite heavy government subsidies available to qualified purchasers of its products.

ETL (MITI's Electrotechnical Laboratory)

ETL has a group led by Dr. Fumiaki Tomita that is developing a system to support acquisition of visual information by the blind (Fig. 5.3). The prototype hardware and software have to date cost ¥8 million (about $80,000). The system includes a pair of video cameras for stereo image capture (bottom center of Fig. 5.3), a computer (upper left of Fig. 5.3), and a custom-built 16 x 16 pin (175 mm x 175 mm) display in which each of the touch-sensitive pins can be positioned by a stepper motor within a range of 0-6 mm above the base plate, in increments of 1 mm (center and right of Fig. 5.3). The software currently supports three kinds of tactile images.
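Driving such a display amounts to quantizing a normalized intensity or depth map into the pin height range of 0-6 mm in 1 mm steps. A minimal sketch under that assumption (the ETL software itself does far more, including stereo reconstruction):

```python
def depth_to_pin_heights(depth_map, levels=7, step_mm=1):
    """Quantize a normalized depth/intensity map (values 0.0-1.0) into
    integer pin heights of 0-6 mm in 1 mm steps, one value per pin,
    sized here for any grid (ETL's display is 16 x 16)."""
    return [
        [min(levels - 1, int(v * levels)) * step_mm for v in row]
        for row in depth_map
    ]
```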

Fig. 5.3. Components of ETL's visual information system.

In the future, Dr. Tomita's group intends to add voice output and to associate some system functionality with pin presses so that, for example, the user will be able to learn which object is located where by pressing the single pin that represents it in the multiobject image.

Fujitsu

The Personal Systems Laboratory, one of five research laboratories maintained by Fujitsu, is especially interested in medical applications. It is to be commended for not shying away from even the most speculative ideas for new I/O devices. One effort in this category involves attaching sensors to various parts of the hand; the user then merely tenses the appropriate muscles to control cursor movement on the screen. Initial experiments with two and three sensors have had some success: although subjects reported the effort required to be "very tiring," they were able to correctly, albeit slowly, select digits using two sensors and letters using three. These results have led the team to design a more ambitious, and as yet untested, bipedal robot walking chair in which the user tenses various posterior muscles to control motion. Although in their early stages, projects such as these have clear and fascinating implications for people with motor impairments.

Matsushita

Matsushita has an extremely strong commitment to research related to speech and hearing technology. It maintains the Panasonic Speech Technology Laboratory (STL) in Santa Barbara, California, and had at the time of the JTEC visit organized and sponsored two international symposia on the speech and hearing sciences in Osaka, the first in 1991 and the second in 1994. Research projects often have a ten-year time line, whereas the norm in Japan is five years. Two subareas where the company has achieved substantial and significant accomplishments are hearing aid technology and training profoundly deaf children to speak. In all cases, the emphasis is on affordable solutions. Drs. Yoshinori Yamada and Yoshiyuki Yoshizumi, along with other members of their teams, presented some of this work to the JTEC panel in a soundproof room located in their research laboratories.

Hearing aid technology

Investigation of techniques to improve hearing aids began about five years ago, and has been supported by MITI. About twenty researchers in Japan and twenty more at STL are involved. The algorithms developed process the input stream in four stages:

1. suppression of impulsive sounds (e.g., a door slamming) that may otherwise mask speech
2. three-channel compression
3. consonant enhancement
4. spectral shaping to enhance higher frequencies (work in progress)
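The staged structure can be illustrated with a toy sample-domain pipeline. The limits, thresholds, and ratios below are invented for illustration; a real hearing aid applies such processing per frequency band with far more care:

```python
def suppress_impulses(frame, limit=0.6):
    """Stage 1: clamp sudden high-amplitude samples (e.g., a door slam)
    that might otherwise mask speech."""
    return [max(-limit, min(limit, s)) for s in frame]

def compress(frame, threshold=0.2, ratio=3.0):
    """Stage 2 (one band of a three-channel compressor): reduce gain
    above a threshold so loud sounds do not overwhelm quiet speech."""
    out = []
    for s in frame:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def process(frame):
    """Chain the stages; consonant enhancement and spectral shaping
    (stages 3 and 4) would slot in as further functions here."""
    return compress(suppress_impulses(frame))
```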

All the electronics fit on a small card about 2 inches square, and the entire product is attractively designed and small enough (59 x 63 x 26 mm) and light enough (just 98 grams) to easily fit into any pocket. Plans are to include an EPROM in a future model to store individual user parameters.

In addition to hearing aids, devices of potential benefit to blind users, as well as to people with other types of disabilities, have been developed based on the company's speech input technology. Examples include a voice-activated VCR controller, marketed about four years ago, which failed commercially because consumers were unwilling to pay the extra $20-$30, and a prototype speaker-independent ticket vending machine, currently installed at the Japan Rail Shinagawa Station, which employs word-spotting algorithms to identify the user's destination.
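At its simplest, word spotting reduces to scanning an utterance for any known station name while ignoring surrounding filler. The toy sketch below operates on already-recognized text; the real system spots keywords in the audio stream itself, and the station names and fares here are invented:

```python
DESTINATIONS = {"shinagawa": 170, "tokyo": 160, "yokohama": 290}  # illustrative fares

def spot_destination(utterance):
    """Scan a recognized utterance for any known station name, ignoring
    filler words ("um, one ticket to Tokyo please")."""
    for word in utterance.lower().replace(",", " ").split():
        if word in DESTINATIONS:
            return word, DESTINATIONS[word]
    return None
```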

Computer-Integrated Speech Training Aid (CISTA)

Matsushita developed CISTA to help train profoundly deaf people to speak. Sensors monitor the subject's nose, neck, tongue, and airflow (Fig. 5.4). The tongue sensor contains 63 electrodes to support high-precision modeling of the contact pattern between the tongue and hard palate. A total of about 10 parameters (Fig. 5.5) are extracted as the subject attempts to speak into a microphone; on-screen graphical feedback then allows the subject to improve performance on future trials (Fig. 5.6).

Fig. 5.4. CISTA sensors worn by a student (Matsushita).

Fig. 5.5. Block Diagram of CISTA (Matsushita).

Fig. 5.6. Example of speech training program for Japanese syllables (Matsushita).

Many of the displays are designed especially to hold the interest of children (e.g., the height to which a basketball bounces indicates the strength of airflow, a turtle's shell lights up to show actual versus desired tongue position, etc.).

The first version of CISTA became commercially available in Japan about five years ago, and 108 schools for the deaf throughout the country have purchased over 200 units. Reports indicate that with sufficient training, subjects practicing on their own with the system improve their speaking abilities as much as, or even significantly more than, they would with a human teacher present. A special version of the system is available for teachers of the deaf, and an English-language version is being developed in collaboration with STL. A rather old (and cheap) IBM PC was used to run the demo, but panelists were told that newer models would be built on a more modern IBM laptop platform.

NEC

Osamu Iseki described some of the research underway at the Kansai laboratories, with emphasis on work related to barrier-free interfaces for the elderly and for people with disabilities. A particular focus was an auditory touchscreen display for blind users. The prototype system provides feedback as the cursor is moved across screen entities such as icons and windows, or hits borders (see Fig. 5.7). At present, only text or voice output is generated in response to user touch or action, but Braille and image pin enhancements are planned. In initial tests, blind subjects said they did not think the current version would be a good I/O device for daily use, because it was difficult for them to search the surface without system guidance.
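The feedback logic amounts to hit-testing the touch point against screen entities and edges. A hypothetical sketch of that dispatch (the layout and feedback strings are invented, not NEC's implementation):

```python
SCREEN = {
    "width": 640, "height": 480,
    "items": [  # (name, x, y, w, h) -- an illustrative layout
        ("Trash icon", 600, 420, 32, 32),
        ("Mail window", 100, 100, 300, 200),
    ],
}

def feedback_for(x, y, screen=SCREEN):
    """Return the auditory feedback for a touch at (x, y): speak the
    entity under the finger, or play a tone at the screen border."""
    if x <= 0 or y <= 0 or x >= screen["width"] - 1 or y >= screen["height"] - 1:
        return "border tone"
    for name, ix, iy, w, h in screen["items"]:
        if ix <= x < ix + w and iy <= y < iy + h:
            return f"speak: {name}"
    return "silence"
```

The subjects' complaint about unguided search is visible here: touching empty space yields nothing, so the user must sweep until a hit occurs.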

Fig. 5.7. Auditory touchscreen display (NEC).

This work is part of a recently begun joint effort by NEC, IBM/Japan, and Hitachi to develop an adaptive multimedia barrier-free interface for the elderly and for differently-abled users (NEC terminology). The five-year effort will run through 1998 and is being funded by MITI through NEDO (the New Energy and Industrial Technology Development Organization) at a level of ¥500 million in total. NEC, the managing company for the consortium, is responsible for providing nonvisual access to the GUI, while IBM/Japan is developing a graphics reader and Hitachi is working on spatial sound displays.

NTT

Nippon Telegraph and Telephone (NTT) has developed a system for evaluating hearing disabilities. The system measures the exact level of hearing, the differences between the left and right ears, and which sounds are difficult to hear. Company researchers are now trying to develop technology for a new generation of hearing aids that can amplify the human voice while ignoring other sounds.

Omron

Omron produces a special version of its banking ATM for people with disabilities, which is mounted lower than the regular version to accommodate people in wheelchairs and which also provides voice output. Omron's digital readout thermometers and blood pressure monitors, although not intended specifically for people with disabilities, are useful to those with low vision. The company now places great emphasis on research related to fuzzy logic, with products in this family (e.g., a module for color photocopiers that detects and defeats attempts to reproduce banknotes) currently accounting for approximately 25% of annual sales. One fuzzy logic project in progress aimed at helping people with disabilities is an image-understanding system intended to form the core of a navigation support system for the blind. Figure 5.8 shows a system diagram.

The prototype of Omron's navigational support system runs on a SPARC 20 with Datacell frame grabber and Verbex Speech Commander speech recognition system. Voice I/O is supported. In response to user queries such as:

"Where is the telephone?"

the system generates output like the following within a couple of seconds:

"Something like a telephone is around left on the desk."
"A chair is slightly left of the desk."
"Something greenish about the size of a clock is around the middle of the desk."

There are currently about a dozen items in the database, all of which might typically be found in an office environment. Just three parameters are recognized for each object: color, size, and position. Training for new objects is carried out by selecting parameter values from menus, and is claimed to be fast.
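With only three parameters per object, query answering reduces to a lookup plus template phrasing. A sketch with invented database entries modeled on the sample responses above (the real system derives these attributes from the camera image rather than from a hand-built table):

```python
OBJECTS = {  # illustrative entries: color, size, and position per object
    "telephone": {"color": "gray", "size": "medium", "position": "around left on the desk"},
    "chair": {"color": "black", "size": "large", "position": "slightly left of the desk"},
    "clock": {"color": "greenish", "size": "small", "position": "around the middle of the desk"},
}

def answer(query):
    """Match a 'Where is the X?' query against the object database and
    phrase the reply in the hedged style of the prototype's output."""
    for name, attrs in OBJECTS.items():
        if name in query.lower():
            return f"Something like a {name} is {attrs['position']}."
    return "I do not see that object."
```

The hedged phrasing ("something like a...") reflects that the fuzzy classifier reports a degree of match, not a certainty.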

Fig. 5.8. Image understanding for a navigation support system (M. Kawade, N. Tabata, Omron).

TRON

Since 1984, the director of TRON, Dr. Ken Sakamura of the University of Tokyo, and his team of about thirty researchers have been pursuing their vision of ubiquitous computing usable by everybody, including people with disabilities, for whom special "enable-ware" is to be designed and built into the various TRON subsystems as required. The model TK1 ergonomically designed input device, for example, incorporates a split and tilted kanji keyboard, together with a writing tablet and electronic stylus (Fig. 5.9); it is available from Personal Media Corporation of Tokyo. The TRON project is noteworthy in that it is, for Japan, a rare university effort, funded by a consortium of industrial partners (69 at the time of the JTEC visit) and having no government support.

Fig. 5.9. TRON TK1 input device (Personal Media Corp.).

Published: March 1996; WTEC Hyper-Librarian