Wyman is a firm believer that it’s incredibly important for technology and museum staff to understand one another. Each world impacts the other. Finding common ground allows for more effective progress and advancement toward common goals.
Mr. Wyman began his presentation by showing a modified pay-phone that visitors could use to call up YouTube videos. The rotary dialing mechanism was assumed to be self-explanatory and left unexplained, but it turned out to bewilder younger visitors. Rather than treating this as a flaw, Wyman saw it as a positive addition to the exhibit: it started conversations.
“I am a huge fan of tangible object interfaces,” Wyman says. With that, Mr. Wyman initiated a brief discussion of fiducial markers: physical objects labeled with a unique code that can communicate information such as object position and orientation to multitouch tables and other electronics.
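The core trick behind many fiducial systems is that the marker's pattern both identifies the object and reveals how it is rotated. The sketch below is purely illustrative (real systems such as reacTIVision use far more robust patterns and detection); the 3×3 bit grid and the "smallest rotation wins" ID scheme are invented here to show the idea:

```python
# Illustrative sketch of decoding a fiducial marker from a small bit
# grid. The grid size, ID scheme, and rotation-normalization trick are
# hypothetical; production systems are considerably more robust.

def rotate(grid):
    """Rotate a square bit grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def grid_to_int(grid):
    """Flatten the grid into a single integer, row by row."""
    bits = [b for row in grid for b in row]
    return int("".join(str(b) for b in bits), 2)

def decode_marker(grid):
    """Return (marker_id, rotation): try all four rotations and take
    the one producing the smallest integer as the canonical ID; the
    number of turns needed tells us the marker's orientation."""
    candidates = []
    g = grid
    for turns in range(4):
        candidates.append((grid_to_int(g), turns))
        g = rotate(g)
    return min(candidates)
```

Because the ID is defined as the minimum over all rotations, the same physical marker decodes to the same ID no matter how it is placed on the table, while the rotation count recovers its orientation (asymmetric patterns assumed).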
From there, Mr. Wyman began to dig into several code-based technologies, starting from the lowly, traditional UPC bar code and moving into QR codes, which are much richer in information density.
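As a rough illustration of how little a traditional bar code carries: a UPC-A symbol encodes just twelve decimal digits, and the twelfth is a check digit computed from the other eleven, which is the only error detection the format offers. A minimal sketch of that calculation:

```python
# The UPC-A check digit: digits in odd positions (1st, 3rd, ...) are
# weighted by 3, even positions by 1, and the check digit brings the
# total to a multiple of 10. This simple checksum is the extent of a
# 1-D bar code's error handling -- one reason QR codes, with genuine
# error correction and kilobytes of capacity, are so much richer.

def upc_check_digit(first_eleven: str) -> int:
    """Compute the 12th (check) digit of a UPC-A code from the first 11."""
    digits = [int(d) for d in first_eleven]
    total = 3 * sum(digits[0::2]) + sum(digits[1::2])
    return (10 - total % 10) % 10
```

For example, `upc_check_digit("03600029145")` returns `2`, completing the well-known code 036000291452.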
“QR codes are two dimensional,” Wyman says. “They can be read either way. They offer a lot of potential for embedded data.”
QR codes already have a long history in marketing, mostly in Asia, and are increasingly appearing in domestic advertising as people come to recognize and appreciate their value.
Microsoft Tag is a similar technology that allows the information associated with a tag to change over time. Instead of being limited to the information embedded in the code itself, a Microsoft Tag triggers a server call, pulling the information down on request.
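The static-versus-dynamic distinction boils down to a layer of indirection. The sketch below is hypothetical (the registry, IDs, and URL are invented for illustration, and it stands in for a real server round-trip), but it captures why a dynamic code's content can be updated without reprinting anything:

```python
# Hypothetical sketch: a static code carries its payload directly,
# while a dynamic code (like a Microsoft Tag, or a QR code pointing at
# a redirect service) carries only a short ID that a server resolves.

# Server-side lookup table (stand-in for a real web service); editing
# a value here effectively "updates" every printed copy of that code.
TAG_REGISTRY = {
    "tag-0042": "https://example.org/exhibit/payphone",
}

def resolve_static(code_payload: str) -> str:
    """Static code: the scanned bytes ARE the content, fixed at print time."""
    return code_payload

def resolve_dynamic(tag_id: str) -> str:
    """Dynamic code: the scanned bytes are just a key; the content is
    fetched from the server at scan time, so it can change later."""
    return TAG_REGISTRY[tag_id]
```

Repointing `TAG_REGISTRY["tag-0042"]` at a new URL instantly changes what every existing printed tag resolves to.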
Ralph Das added that the technology exists to add this functionality to QR codes as well.
Steganography was Mr. Wyman’s next topic, addressing the work of the company Digimarc. Long used by spies, steganography embeds information into an existing image, audio file, or video. This is already in use in advertising.
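A classic textbook form of the technique is least-significant-bit (LSB) embedding; this is not Digimarc's proprietary watermarking, which is far more robust, but it shows the basic idea of hiding a message inside existing data:

```python
# Minimal LSB steganography sketch: hide a message in the lowest bit
# of each byte of a "cover" buffer, such as raw image pixel data.
# Changing only the lowest bit alters each pixel value by at most 1,
# which is imperceptible to the eye.

def hide(cover: bytes, message: bytes) -> bytes:
    """Embed message bits into the low bit of successive cover bytes."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit
    return bytes(out)

def reveal(stego: bytes, length: int) -> bytes:
    """Read back `length` hidden bytes from the low bits."""
    bits = [b & 1 for b in stego[: length * 8]]
    return bytes(
        int("".join(map(str, bits[i : i + 8])), 2)
        for i in range(0, len(bits), 8)
    )
```

Each byte of hidden message consumes eight cover bytes, so even a modest image can quietly carry a substantial payload.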
Omniar, a company offering vision based augmented reality, was also discussed. This technology uses object scanning to pull up information online via a vision based computing device (such as a camera phone).
3-D real-world objects can be identified, crudely but easily scanned, and then tied to online data. A picture of the artifact on your phone can supply you with expanded information about it.
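One simple (toy) way to match a photo against a catalogue of known artifacts is a perceptual "average hash." This is not Omniar's actual method, which is proprietary and far more sophisticated, but it illustrates how an image can be reduced to a compact fingerprint and matched by similarity:

```python
# Hypothetical sketch of image matching via an "average hash": reduce
# an image to a 64-bit fingerprint, then find the catalogue entry with
# the smallest Hamming distance. A stand-in for real computer-vision
# matching, which uses far richer features.

def average_hash(pixels):
    """64-bit hash of an 8x8 grayscale thumbnail (list of 64 ints,
    0-255): each bit records whether that pixel beats the mean."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def best_match(photo_hash, catalogue):
    """Return the (name, hash) entry closest to the photo's hash."""
    return min(catalogue, key=lambda item: hamming(photo_hash, item[1]))
```

Because small lighting or framing changes flip only a few bits, a near-duplicate photo still lands closest to the right catalogue entry.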
From there, the discussion moved on to robotics and the research on reading facial expressions and facial cues being done at MIT. Object recognition and visitor reaction can be used to monitor and tailor the museum experience based on a visitor’s current mood, in real time.
Mr. Wyman pointed out that Disney uses this technology to manage ride line lengths. A recent experiment in robotic interaction in weight control was also discussed. In the weight study, users saw a sixfold increase in weight loss when they took a robot home, as opposed to logging their weight loss online.
Slides from Bruce Wyman’s presentation can be seen here:
by Sanford Clark on March 15, 2011