NYT Interactive Presidential Debates

The New York Times recently published an interactive application for exploring the video and transcripts of the presidential and vice-presidential debates. Actual debate content aside, the application is quite a usable foray into the realm of multimedia (video + transcript) interfaces. Seen here is a screenshot of the application from the second presidential debate.

Overall the interface has a good “flow.” At the top you can search for keywords and see where they showed up in the transcript, with a comparison of each word’s usage among Obama, McCain, and the moderator. Below this are two timelines. The problem is that while they are all intuitive, they are in the wrong hierarchical order: the topmost timeline is the most “zoomed out,” but the next one down is the most “zoomed in.” They really need to be re-ordered so that the middle timeline is the bottommost, which would give a more intuitive layout from least detailed to most detailed. What IS really nice about all of the timelines, and what really helps navigation, is the textual information that pops up when hovering. There is also segmentation showing the parts of the video where each debater is speaking, and I found it really helpful to be able to click any of these segments and jump the video there. There is some navigational integration with the transcript too, which is interesting: you can click on a block of the transcript and that navigates you to that section of the video. But we’re still dealing with blocks of text rather than individual words being linked into the video.
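The Times hasn’t published how its player works, but the underlying idea is simple to picture: index each transcript block with a start time and seek the video when a block is clicked. Here’s a minimal sketch in TypeScript (the type and function names, and the HTMLVideoElement wiring, are my own assumptions, not the NYT code):

```typescript
// One block of transcript, indexed by when it starts in the video.
interface TranscriptBlock {
  speaker: string;      // "Obama", "McCain", or the moderator
  startSeconds: number; // offset into the debate video
  text: string;
}

// Hypothetical wiring: clicking a rendered transcript block seeks the video.
function linkTranscriptToVideo(
  video: HTMLVideoElement,
  blocks: TranscriptBlock[],
  container: HTMLElement
): void {
  for (const block of blocks) {
    const el = document.createElement("p");
    el.textContent = `${block.speaker}: ${block.text}`;
    el.addEventListener("click", () => {
      video.currentTime = block.startSeconds; // jump the video to this block
      void video.play();
    });
    container.appendChild(el);
  }
}
```

Linking individual words rather than blocks would require per-word timestamps in the transcript data, which is exactly the granularity the current interface lacks.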

The other fantastic aspect of this tool is that it provides some level of integrated fact-checking. The fact-checks are produced professionally by the Times and are presented as aligned with the different question segments. They’re difficult to follow, though, because they live in a tab that competes with the transcript itself, so you can’t see the context or anchor that the fact-checking refers to. For comparison’s sake it would be a lot more helpful to be able to see both the transcript and the fact-checking at the same time. The other problem with the presentation of the fact-checking is just that it’s really dense and hard to read through. Again, better contextualization with the video and the transcript would really help here.
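One way to get that side-by-side context (purely a sketch of what I’m suggesting, not how the Times built it, reusing the TranscriptBlock type from the sketch above): anchor each fact-check to the transcript block it refers to, so a renderer can show both together rather than in competing tabs.

```typescript
// A fact-check anchored to the transcript block it addresses.
interface FactCheck {
  blockIndex: number; // index into the TranscriptBlock[] from the sketch above
  verdict: "accurate" | "misleading" | "false" | "unverified";
  note: string;       // the Times' professionally produced analysis
}

// Pair each transcript block with the fact-checks anchored to it, so the UI
// can lay them out side by side instead of in separate tabs.
function alignFactChecks(
  blocks: TranscriptBlock[],
  checks: FactCheck[]
): Array<{ block: TranscriptBlock; checks: FactCheck[] }> {
  return blocks.map((block, i) => ({
    block,
    checks: checks.filter((c) => c.blockIndex === i),
  }));
}
```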

User Interface Software and Technology (UIST 2007) Conference

This week (Oct 7th – 10th) I’m in Newport, Rhode Island attending the User Interface Software and Technology (UIST) conference to present a couple of posters. It’s the third year I’ve been to UIST, and it’s usually a great venue to learn about creative and emerging interaction technologies. This year seems to be a bit different than years past: while there are still the projector / camera systems, pointing schemes, and Fitts’s law studies, there is a lot more work on information systems and interaction, topics that are quite interesting to me. I would characterize UIST as primarily an engineering conference; people build systems and prototypes and do small-scale user studies, but generalizing results is difficult because of the highly contextual and specific nature of much of the work.

The keynote on Oct 8 was given by David Woods from Ohio State University. According to David, there are essentially two views on designing for people: (1) you can compensate for human limits through design, or (2) you can amplify the adaptive capabilities and resilience of humans through design. In particular I appreciated the focus on developing interfaces that take into account the adaptiveness, learning, and resilience of humans. This is in keeping with the basic philosophy / focus of this particular meeting: requiring too much evaluation kills innovation, because ultimately humans can adapt to innovative interaction techniques over time.

There are a few really cool systems that have been presented here so far. I liked Merrie Morris’s paper on SearchTogether: An Interface for Collaborative Web Search because I can immediately see how it could be useful for pairs or groups of people collaborating on an information search task, e.g. vacation shopping. It provides support for awareness, division of labor (e.g. splitting search results, as sketched below), and persistence (e.g. saving query terms), as well as integrated chatting. It’s exciting to see this kind of software being developed, though somehow I feel it’s not just search that needs to be collaborative but rather the entire value-added information spectrum (organization, analysis, judging, and decision making). SearchTogether addresses organization and a bit of analysis, but the question now is how to judge that information and ultimately come to an actionable decision based on it.
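The “split search results” idea is easy to picture: deal the results out across the group so nobody reviews the same page twice. A toy TypeScript version of the idea (my own reconstruction, not Morris’s implementation):

```typescript
// Deal search results out round-robin so each collaborator
// reviews a disjoint slice of the result list.
function splitResults<T>(results: T[], collaborators: string[]): Map<string, T[]> {
  const assignments = new Map<string, T[]>();
  for (const c of collaborators) assignments.set(c, []);
  results.forEach((result, i) => {
    const who = collaborators[i % collaborators.length];
    assignments.get(who)!.push(result);
  });
  return assignments;
}

// e.g., splitResults(["url1", "url2", "url3"], ["Alice", "Bob"])
// => Alice: ["url1", "url3"], Bob: ["url2"]
```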

Another paper that I liked was Mira Dontcheva’s Relations, Cards, and Search Templates, because it points toward the semantic web. It makes a lot of sense to be able to construct personalized layouts and data sources for an information search task, and that’s just what they’ve done here. You can almost think of it as a mash-up system for specific search tasks, clearly useful for knowledge workers who have recurring, well-defined tasks. This also touches on the point David Woods raised about supporting adaptation by people.
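The way I read it, a search template is roughly a reusable bundle of data sources plus a card layout for one recurring task. A rough sketch of that reading (my own types and field names, not the paper’s implementation):

```typescript
// A "card" declares which fields of a gathered result to show.
interface Card {
  title: string;
  fields: string[]; // e.g., ["price", "address"]
}

// A search template bundles the data sources and layout for one task.
interface SearchTemplate {
  task: string;      // the recurring search task
  sources: string[]; // sites to pull structured data from
  cards: Card[];     // personalized layout for the gathered data
}

// Hypothetical example: a template a knowledge worker might build once and reuse.
const apartmentHunt: SearchTemplate = {
  task: "apartment hunting",
  sources: ["craigslist.org", "walkscore.com"],
  cards: [
    { title: "Listing", fields: ["price", "bedrooms", "address"] },
    { title: "Neighborhood", fields: ["walkScore", "transit"] },
  ],
};
```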

Finally, I really appreciated Björn Hartmann et al.’s paper, Programming by a Sample: Rapidly Creating Web Applications with d.mix. The prototype lets programmers grab elements of web pages and automatically generate code based on the web APIs behind those pages. This facilitates programming by example and could be really powerful for people learning an API, or for people who aren’t expert programmers but who want or need to build mashups over online information sources.
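Roughly, the trick is a site-to-service map: each samplable page element is associated with the API call that produces it, so grabbing the element yields starter code. A toy illustration of that mapping (the element kinds and wiring are hypothetical, though flickr.photos.getInfo and flickr.photos.search are real Flickr API methods):

```typescript
// A site-to-service map: which API call generates a given page element.
const siteToService: Record<string, (params: Record<string, string>) => string> = {
  // Hypothetical mapping: sampling a photo element emits the API call behind it.
  "flickr.photo": (p) =>
    `flickr.photos.getInfo(photo_id=${JSON.stringify(p.photoId)})`,
  "flickr.tagSearch": (p) =>
    `flickr.photos.search(tags=${JSON.stringify(p.tags)})`,
};

// "Sampling" an element looks up its service call and emits starter code.
function sampleElement(elementKind: string, params: Record<string, string>): string {
  const generator = siteToService[elementKind];
  if (!generator) throw new Error(`No service mapping for ${elementKind}`);
  return generator(params);
}

// e.g., sampleElement("flickr.photo", { photoId: "12345" })
// => 'flickr.photos.getInfo(photo_id="12345")'
```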