Japanese scientists know what you are about to say
The Asahi Shimbun by TAKASHI MIYAZAWA/ Staff Writer May 11, 2017
The activated brain region differs according to the syllable when observing spatial-feature patterns. (COPYRIGHT (C) TOYOHASHI UNIVERSITY OF TECHNOLOGY)
The researchers have developed technology that recognizes syllables before they are spoken just by analyzing brain waves.
It signals that typing without speaking or using fingers may become a reality in the not-too-distant future.
The team reckons the technology could lead to the development of a science fiction-like “brain computer interface” that recognizes words in the mind.
“We’ve studied brain waves that we observed when recalling syllables, without the voice bit,” said Tsuneo Nitta, a member of the research team and emeritus professor of information processing at Toyohashi University of Technology in Toyohashi, Aichi Prefecture.
"The technology will lead to the development of devices that could help people who have difficulty speaking due to diseases or disabilities,” he added.
In the experiment, the group used 64 electrodes placed on each subject’s head to measure brain waves as the subjects pronounced Japanese syllables such as “A,” “I,” “BA” and “BE.”
This image shows the shift in brain activity (average for 10 syllables across eight subjects). (COPYRIGHT (C) TOYOHASHI UNIVERSITY OF TECHNOLOGY)
As a result, the research team found that the brain region activated 0.2 seconds before a syllable was pronounced varied depending on the syllable.
When a volunteer did not voice the syllables, recognition was difficult because the brain wave signals were weak, said the team, which included Nitta and Junsei Horikawa, a professor of auditory physiology at Toyohashi University of Technology, among other members.
Nitta created brain wave patterns for each syllable spoken by a subject after extracting the necessary elements from the brain wave data.
By comparing brain wave patterns, the research team succeeded in recognizing syllables.
Currently, the team can recognize the 10 spoken digits, from “zero” and “ichi” (one) through “kyu” (nine), by comparing brain wave patterns, with 90 percent accuracy.
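The article describes building a brain wave pattern for each syllable and then classifying new recordings by comparing them against those patterns. The team’s actual feature extraction and matching method is not detailed here, so the following is only a minimal sketch of one common approach under assumed conditions: each recording is reduced to a hypothetical fixed-length feature vector, templates are formed by averaging training vectors per syllable, and a new vector is assigned to the syllable whose template it correlates with most strongly.

```python
import numpy as np

def build_templates(features_by_syllable):
    """Average the training feature vectors for each syllable
    to form one template per syllable (assumed approach, not
    the team's published method)."""
    return {s: np.mean(vs, axis=0) for s, vs in features_by_syllable.items()}

def classify(templates, feature_vec):
    """Return the syllable whose template has the highest
    Pearson correlation with the observed feature vector."""
    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]
    return max(templates, key=lambda s: corr(templates[s], feature_vec))

# Toy demo with synthetic "brain wave feature" vectors
# standing in for real EEG-derived features.
rng = np.random.default_rng(0)
base = {"A": rng.normal(0, 1, 16), "I": rng.normal(0, 1, 16)}
train = {s: [v + rng.normal(0, 0.1, 16) for _ in range(5)]
         for s, v in base.items()}
templates = build_templates(train)
probe = base["A"] + rng.normal(0, 0.1, 16)  # noisy new recording of "A"
print(classify(templates, probe))
```

In practice, the person-to-person variability mentioned below would require per-user calibration data before such templates are usable, which is consistent with Nitta’s remark about letting a device learn individual characteristics.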
Technologies that convert sound into characters have already been put to practical use.
Wave patterns also vary widely from person to person, which could be a challenge for practical application of the technology.
“We expect to increase recognition accuracy by creating basic brain wave patterns and letting the device learn individual characteristics,” said Nitta.
“We are planning to develop a headset-like device and link it with smartphones.”
The study is expected to be presented at the 18th annual conference of the International Speech Communication Association (Interspeech 2017) to be held in Stockholm this August.