We propose to explore the utility of haptic feedback in settings where audition alone does not support good speech intelligibility. Integrating speech information from the visual sensory domain can improve speech intelligibility (Sumby and Pollack 1954). However, in many situations vision is unavailable, or visual attention must be redirected through another sensory channel before the interaction can become ‘face-to-face’; on an industrial shop floor, for example, two interlocutors may be unable to establish shared visual attention.
We have developed a small wearable transducer that provides vibrotactile stimulation corresponding to the amplitude envelope of a speech signal. In a series of simple experiments we ask: can the intelligibility of an environmentally degraded acoustic signal be improved by vibrotactile stimulation of the skin surface? Alternatively, can intelligibility be improved simply by aligning the listener’s attention with the talker’s signal? We predict that optimal enhancement requires both signal redundancy and attentional alignment.
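To make the transducer’s signal path concrete, the sketch below shows one standard way to derive a speech amplitude envelope and use it to drive a vibrotactile actuator: full-wave rectification followed by low-pass filtering, with the envelope amplitude-modulating a fixed-frequency carrier. The proposal does not specify the device’s actual method; the function name and all parameter values (4th-order filter, 16 Hz envelope cutoff, 250 Hz carrier) are illustrative assumptions, not the device’s settings.

```python
# Illustrative sketch only: one common mapping from speech to a
# vibrotactile drive signal. Filter order, cutoff, and carrier
# frequency are assumptions, not the settings of the device above.
import numpy as np
from scipy.signal import butter, filtfilt

def speech_to_vibrotactile(speech, fs, env_cutoff_hz=16.0, carrier_hz=250.0):
    """Map a speech waveform (1-D array at sample rate fs) to a drive signal.

    1. Full-wave rectify to expose amplitude fluctuations.
    2. Low-pass filter to keep the slow, syllable-rate envelope.
    3. Amplitude-modulate a fixed carrier; ~250 Hz is near the skin's
       peak vibrotactile sensitivity.
    """
    rectified = np.abs(speech)
    b, a = butter(4, env_cutoff_hz / (fs / 2), btype="low")
    envelope = filtfilt(b, a, rectified)
    envelope = np.clip(envelope, 0.0, None)
    envelope /= envelope.max() + 1e-12  # normalize to [0, 1]
    t = np.arange(len(speech)) / fs
    return envelope * np.sin(2 * np.pi * carrier_hz * t)
```

A fixed carrier modulated by the envelope keeps the tactile channel simple while preserving the temporal structure of the speech signal, which is precisely the redundancy the experiments are designed to test.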
This project aligns with the language sciences initiative by bridging the fields of human-computer interaction and speech science to enhance users’ linguistic communication in degraded sensory environments. By examining how additional sensory modalities can enrich the processing of linguistic content, we explore the capabilities of the communicating mind and body. In addition, our use of wearable haptic technology to augment human speech perception speaks to the role of evolving language in an information economy. The results of our study may provide tangible benefits to individuals with hearing loss, to groups who must accomplish cooperative tasks in acoustically challenging environments, and to human-robot interaction.
PI: Karon MacLean, Professor, Computer Science. Co-Investigators: David Marino, Research Assistant, Computer Science; Eric Vatikiotis-Bateson, Professor, Linguistics/Cognitive Systems.