With sentient computing, each person and object has a defined space with associated characteristics - somewhat like a cursor moving across a screen. When spaces converge, events are triggered automatically according to pre-set rules and preferences. Professor Hopper calls this concept 'programming with spaces': a new 'do nothing' user interface that requires no conscious interaction from the user.
Sensors collect information about the real world
Rules state how this information should be interpreted
Computers apply the rules to make decisions
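The three steps above can be sketched in a few lines of Python. This is an illustrative sketch only: the entity names, coordinates and proximity rule are assumptions made for the example, not part of any real sentient-computing system.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """A sensor observation: which entity was seen, and where."""
    entity: str
    x: float
    y: float

def near(a: Reading, b: Reading, threshold: float = 2.0) -> bool:
    """Rule: two entities' spaces 'converge' when they are close enough.
    The 2-metre threshold is an arbitrary illustrative choice."""
    return (a.x - b.x) ** 2 + (a.y - b.y) ** 2 <= threshold ** 2

def decide(readings: list[Reading]) -> list[str]:
    """Apply the rules to the sensor data and emit actions to trigger."""
    actions = []
    people = [r for r in readings if r.entity.startswith("person:")]
    devices = [r for r in readings if r.entity.startswith("device:")]
    for p in people:
        for d in devices:
            if near(p, d):
                actions.append(f"configure {d.entity} for {p.entity}")
    return actions

readings = [
    Reading("person:alice", 0.0, 0.0),
    Reading("device:desktop-7", 1.0, 1.0),
    Reading("device:phone-3", 9.0, 9.0),
]
print(decide(readings))  # only the nearby desktop triggers an action
```

When Alice's space converges with the desktop's, the rule fires and the system configures the device for her; the distant phone is ignored. Real deployments would replace the toy coordinates with live location-sensor feeds.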
Examples of sentient computing include the follow-me personal desktop, which can be configured and displayed automatically on any computer, and telephones programmed with your personal short codes - both triggered simply by moving into the proximity of the device. Alternatively, videophones can be made to follow conference participants around a room by automatically selecting the relevant camera for the best view. In fact, once the infrastructure and middleware to handle the information are in place, the possibilities are endless: a drinks vending machine that knows how you like your coffee as you approach it, or TV and radio systems that select your favourite channels as you walk into the room.
"In my view there is no such thing as real artificial intelligence," says Professor Hopper. "Instead we will begin to harness and make increasing use of information taken from our environment and our personal preferences to enhance the way we interact with the world around us."