"Snap, Map, Chat and Hyperlink" (Science Daily, January 9, 2009) provided my latest glimpse into the future. The article was about MOBVIS, a new use of technology that:
. . . ultimately works like a picture-driven search engine about things that just surround the user. The picture is your search query, and the system matches the picture and the features it contains with results in its database.

After reading this article, I scouted around for more information on MOBVIS. Much of the research has been done in Europe, so my first stop was the MOBVIS project at the Visual Cognitive Systems Laboratory, University of Ljubljana:
MOBVIS concentrates its research on the integration of multi-modal context awareness, vision based object recognition, and intelligent map technology, into an innovative form of an attentive interface, which enables perception and reasoning on a vast amount of data and in a continuously operating framework.

The MOBVIS project site maintained by the Institute of Digital Image Processing has demos of object awareness, visual localization, multimodal positioning, visual context awareness, multimodal context, augmented digital city maps, geo-services and incremental map updating, visual attention, and an attentive interface.
Basically, this system will allow you to take a picture with a special mobile device and then review visually hyperlinked data about your location. You could use this technology to orient yourself in a new city, to identify a prominent landmark, to view neighborhood information, to find a local business, to locate a hospital or to download a train schedule.
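The picture-as-query idea described above can be sketched in a few lines of illustrative Python. Everything here is invented for the example — the landmark names, the toy three-number "feature" vectors, and the links — and a real system like MOBVIS would use far richer image descriptors, but the matching logic is the same: compare the query picture's features against a database and return the hyperlinked information for the closest match.

```python
import math

# Hypothetical landmark database: each entry pairs a toy "visual
# feature" vector (a stand-in for real image descriptors) with
# hyperlinked information about that location. All values invented.
LANDMARKS = {
    "Town Hall": {"features": [0.9, 0.1, 0.3], "link": "https://example.org/town-hall"},
    "Train Station": {"features": [0.2, 0.8, 0.5], "link": "https://example.org/station"},
}

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_picture(query_features):
    """Return the best-matching landmark name and its hyperlink."""
    name, info = max(
        LANDMARKS.items(),
        key=lambda item: cosine_similarity(query_features, item[1]["features"]),
    )
    return name, info["link"]

# A query photo whose features resemble the Town Hall entry.
print(match_picture([0.85, 0.15, 0.25]))
```

Running this matches the query to "Town Hall" and returns its link — the same snap-then-lookup loop, minus the hard computer-vision parts.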
There is always a "gotcha" to any new technology. While this could be useful technology for anyone who is lost, visually impaired, wounded or just needs a cup of Joe, it could also open a door into your perceptions and reasoning processes for commercial exploitation. Think about it.