Multimodal sensing

AppleInsider reports on a set of patents for multimodal sensing (i.e. using two or more sensing modalities at the same time). Multimodal sensing has been a hot research topic in human-computer interaction for several years, based on the knowledge that human perception and cognition are fundamentally multimodal. If we want computers to respond more efficiently to human communication, they will also have to use more than one modality in their sensing and communication. That said, I am not sure that everyone will be comfortable leaving the webcam on at all times so that computer vision techniques can be applied to everything happening in front of the screen (as the picture below depicts)…
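To make the idea concrete: one common way to combine modalities is "late fusion", where each sense is analyzed separately and the confidence scores are merged afterwards. The sketch below is a generic illustration of that pattern, not anything from the patents themselves; the labels, scores, and weights are all made up for the example.

```python
# Minimal sketch of late fusion: each modality (e.g. vision, audio)
# produces its own per-class confidence scores, and a weighted average
# combines them into a single decision. Weights are illustrative.

def fuse_scores(vision_scores, audio_scores, w_vision=0.6, w_audio=0.4):
    """Weighted average of per-class confidences from two modalities."""
    return {
        label: w_vision * vision_scores[label] + w_audio * audio_scores[label]
        for label in vision_scores
    }

# Hypothetical example: is the user addressing the computer?
vision = {"attending": 0.8, "not_attending": 0.2}  # e.g. face/gaze detection
audio = {"attending": 0.4, "not_attending": 0.6}   # e.g. voice activity

fused = fuse_scores(vision, audio)
decision = max(fused, key=fused.get)  # pick the highest fused score
```

Here the vision channel outweighs the ambiguous audio channel, so the fused decision is "attending"; with different weights or scores, either modality could tip the outcome, which is exactly the point of sensing with more than one.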

Published by

Alexander Refsum Jensenius is a music researcher and research musician living in Oslo, Norway.