Category Archives: interfaces

Histrionic Visualization: The Rise of Theatrical Visual Presentation of Data

Earlier this year, while preparing for a workshop on Telling Stories with Data, I coined the term “Histrionic Visualization” to account for certain theatrical presentations of information visualizations that I had seen. I wanted to expand on this idea a bit here.

Perhaps the best way to define what I mean by histrionic visualization is to cite some examples. For instance, Al Gore explains climate change data and visualizations in his movie An Inconvenient Truth. Gore combines a linear (sometimes animated) slide deck together with his voice over (and occasional sound effects) to present his data to the audience.

Another example of this idea came about when CNN started using Perceptive Pixel’s touch screen technology to let on-air journalists manipulate data on the display using touch and gestures while broadcasting. This led to the likes of John King dynamically manipulating election visualizations while on air.

From these examples, hopefully it’s a bit more clear now what I’m talking about when I say “histrionic visualization”. These are embodied presentations of information where the physicality of the presentation itself becomes the defining factor. What new forms of tangible interaction or interfaces could enable further development here?

I think this idea could actually go a lot deeper than the examples I’ve seen too. It seems to me that acting out the presentation of visualizations is an area ripe for study. Does the physicality of the presentation help people learn or crystallize knowledge from the visualization? Are these presentations more engaging? How could you incorporate the audience into the interaction?

Moreover, could this become a new form of art, where talented storytellers weave data and visualization together with acting to engage an audience in the performance? How would the 2010 U.S. census look when presented on stage?

Fact Checking Source Contextualization

I ran across this round-up of some of the most prominent political fact-checking sites online, including the non-partisan FactCheck, Politifact, and Washington Post Fact Checker Blog, as well as the partisan counterparts Newsbusters and MediaMatters. One of my criticisms of such sites is that oftentimes the fact-checking is decontextualized from the original document, especially for multimedia such as video. The presentation is usually a block of text explaining the “fact” in question. But what’s missing is the context of the claim or statement within the original source document. A far more compelling information interface would present an annotated document so that segments of the document (or video) are precisely delineated and critiqued. This is something I worked into the Videolyzer for video and text, but more generally this type of thing needs to happen for all fact-checked texts online.
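To make the idea concrete, here is a minimal sketch of what anchoring a fact-check to a precise span of its source might look like. The field and function names are hypothetical illustrations, not taken from Videolyzer or any of the sites above.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FactCheckAnnotation:
    claim: str               # the statement being checked, quoted from the source
    verdict: str             # e.g. "accurate", "misleading", "false"
    explanation: str         # the fact-checker's write-up
    char_start: int          # offset of the claim in the transcript/article text
    char_end: int
    time_start: Optional[float] = None  # seconds, if the source is video
    time_end: Optional[float] = None

def annotations_in_view(annotations: List[FactCheckAnnotation],
                        view_start: int, view_end: int):
    """Return the annotations whose spans overlap the passage currently on
    screen, so critiques can be rendered next to the exact sentences they
    refer to rather than as a separate block of text."""
    return [a for a in annotations
            if a.char_start < view_end and a.char_end > view_start]
```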

Transparency in Game UIs

Games are a decent starting point for seeing how mechanical transparency is addressed in computer interfaces, since simulation games are often built around optimizing some state of the game (resource use, growth, or simply score) based on the decisions the player makes. Here I illustrate how games are approaching some of the facets of mechanical transparency I introduced before.

Specific Why of State. This is a precise explanation for a game element’s attributes including any relationships, the directionality of those relationships, and their valences.

Sim City 4 does this exceedingly well using a “?” tool which, when clicked on an element, exposes the attributes of that object relevant to game play. For instance, it will tell you the crime rate and freight trip length for industrial buildings. At any point in the game you get a snapshot of the status of an individual object. Another method used to expose state is hovering the mouse over an object. This reveals less information than the “?” click, but still shows you “the top three conditions that currently have the most dramatic impact on the desirability of the area” [Sim City 4 manual]. When the state of elements is spatially structured, as in a map, overlays are used to show the distribution of a variable across space. Graphs are used to make state transparent over time, so aggregations of individual elements’ state are shown as a time series. Sim City 4 manages to achieve a playable simulation in part because the information necessary for optimizing the simulation is transparent in the interface: through hovering popups, clickable popups, spatially layered attributes, and temporally graphed attributes.
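As a rough sketch of the click-versus-hover pattern, something like the following captures the distinction between a full state snapshot and a ranked “top conditions” summary. The class and method names are my own illustration, not anything from Sim City 4.

```python
class GameObject:
    """Illustrative game element whose state is exposed via click and hover."""
    def __init__(self, name, attributes, impacts):
        self.name = name
        self.attributes = attributes   # e.g. {"crime_rate": 0.4, "freight_trip_length": 12}
        self.impacts = impacts         # attribute -> signed impact on desirability

    def query(self):
        """Full snapshot of play-relevant state, as exposed by the '?' click."""
        return dict(self.attributes)

    def hover(self, n=3):
        """Only the top-n conditions with the largest impact, as exposed on hover."""
        ranked = sorted(self.impacts.items(), key=lambda kv: abs(kv[1]), reverse=True)
        return ranked[:n]

factory = GameObject("industrial building",
                     {"crime_rate": 0.4, "freight_trip_length": 12},
                     {"crime_rate": -0.3, "freight_trip_length": -0.1, "parks_nearby": 0.2})
print(factory.query())   # full state snapshot
print(factory.hover())   # most impactful conditions first
```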

Another game I looked at, Oil God, adds an additional twist to communicating state transparency by drawing an overlay network on top of the spatial layout to explicitly show relationships between game elements. Democracy 2 encodes two more variables into its graphical overlays: relationship valence (via red-green coloring) and directionality (via animation direction). Another, perhaps simpler, method for communicating specific state information is textual feedback. For instance, in The Garbage Game, where the premise is to keep as much refuse as possible out of landfills, I made a decision in the game to refill and reuse my plastic bottle and it told me: “We figure that a bottle will get refilled about three times on average, so we’ve reduced the volume of water bottle waste in your sorted recycling to 25 percent of the 17,677 tons that New Yorkers currently generate each year, or 4,420 tons.”
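That kind of specific textual feedback is easy to generate from the underlying model. Here is a small sketch of how the quoted message could be produced; the figures come from the game’s own text, but the function and its structure are my own illustration.

```python
def bottle_reuse_feedback(annual_tons=17677, refills=3):
    """Generate specific textual feedback for the bottle-reuse decision."""
    # If each bottle is refilled ~3 times, roughly 1 in 4 bottles is discarded,
    # so bottle waste falls to 25% of the original tonnage.
    fraction_remaining = 1 / (refills + 1)
    remaining = round(annual_tons * fraction_remaining)
    return (f"We've reduced the volume of water bottle waste to "
            f"{fraction_remaining:.0%} of the {annual_tons:,} tons currently "
            f"generated each year, or {remaining:,} tons.")

print(bottle_reuse_feedback())
# -> "...25% of the 17,677 tons..., or 4,419 tons." (the game rounds to 4,420)
```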

Specific Why of State Change. This relates to explicit descriptions of computational state and explanations of why a state has changed. What was the trigger, event, or decision that caused the state change? Was this trigger algorithmic or based on user input? This facet of mechanical transparency is lacking in many of the games I looked at. One game that did make some attempt to explain state change is Energyville. At the end of the game, a graph shows the economic, environmental, and security impacts of your decisions. Along the timeline of the graph there are icons for different events that have happened. Clicking on these expands them with additional textual information, for instance: “2014: wind power fails to deliver - resulting in a 20% increase in your Wind economic impact.” Since state change is a matter of an attribute changing over time, annotated graphs and timelines seem to be a natural interface metaphor for explaining it. Another candidate would be animation. For embedding indicators of state change within an interface itself, something like the afterglow effects in Phosphor might also work.
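One simple way to support this is to log every state change together with its trigger, so an annotated graph or timeline can answer “why did this change?” on demand. A minimal sketch, with names of my own invention (not Energyville’s):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StateChange:
    year: int
    variable: str        # e.g. "wind_economic_impact"
    delta: float         # e.g. -0.20 for a 20% drop
    trigger: str         # the event or player decision that caused the change
    source: str          # "event" (algorithmic) or "player" (user input)

log = [
    StateChange(2014, "wind_economic_impact", -0.20,
                "Wind power fails to deliver", "event"),
]

def explain(variable: str, year: int, log: List[StateChange]):
    """Return the changes to a variable up to a given year, ready to be drawn
    as icons along an annotated timeline."""
    return [c for c in log if c.variable == variable and c.year <= year]

print(explain("wind_economic_impact", 2015, log))
```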

General Why of State. This is a generic explanation for an element’s attribute or a relationship between attributes, which is not related to the specifics of any particular element.

General why of state involves explicating the existence and valence of a relationship rather than its actual mathematical description, which would fall under the specific why of state. Because the explanation is general rather than tied to any particular element, external information and sources can be used to buttress the existence and valence of relationships in the model. One tension this general information brings up is the granularity of its availability: is it embedded in the interface, or simply available as blocks of text elsewhere? What we see in many games is that the general why of state is offloaded to textual explanation on a separate information page, sometimes outside the interactive application itself. In Energyville, which has players managing the economic, environmental, and security impacts of energy decisions, each of these facets has explanations in plain text. For instance, the environmental facet, when clicked, lists out all the negative impacts on the environment and cites the names of some recent reports which were used to “inform the impact assessments.”
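If a designer wanted to embed the general why in the interface rather than offloading it to a separate page, one option is to attach a plain-language explanation and its supporting citations to each relationship in the model. A sketch, with entirely hypothetical names and placeholder citations:

```python
# Each (cause, effect) relationship carries an explanation, a valence, and
# the external sources that support it, so a tooltip can surface all three.
relationships = {
    ("coal_use", "environment"): {
        "valence": "negative",
        "explanation": "Burning more coal increases emissions, which lowers "
                       "the environmental score.",
        "sources": ["<report citation>"],  # placeholder for a real citation
    },
}

def tooltip(cause, effect):
    r = relationships[(cause, effect)]
    cites = "; ".join(r["sources"])
    return f"{r['explanation']} ({r['valence']} relationship; sources: {cites})"

print(tooltip("coal_use", "environment"))
```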

Sim City 4 and Democracy 2 follow a similar strategy of explaining in text, in general terms, what the relationships are between variables of interest. For instance, in Sim City 4, clicking a city opinion poll about land value tells you this: “Represents the average land value in your city. To raise land value place parks, schools, hospitals, and other amenities in your residential zones.” In Democracy 2, the textual information comes in the form of “Encyclopedia” articles, which explain and provide context for the variable currently under consideration. The advantage of a textual approach is that we have well-understood conventions for citing information in text. Also, it’s unclear whether an abstract relationship could be represented well in an image or in video. Such multimedia assets may, however, serve as an existence proof for some attribute of interest, if that attribute is not already obvious.

HCI’s Teachings on Transparency I

I’ve gone back to basics and have been reading through the HCI bible (Human-Computer Interaction, 3rd ed., Dix et al.) to get a better understanding of how transparency is conceived of in interactive systems. System transparency does get a treatment as an element of formal interface modeling. There are several key points that we can learn from and which tie into transparency as it concerns journalism and interactive media.

While the state of the system is central to the notion of system transparency, what we’re really interested in is an idealization of the system state. What’s important in a user-centric model is the representation of “state required to account for the future external behavior.” In the text Dix refers to this as the “effect” which I think is nasty terminology. I’m going to call it the “User-Relevant State” or URS.

They do arrive at a workable definition of transparency in terms of state: “[Transparency] would say that there is nothing in the state of the system that cannot be inferred from the display. If there are any modes, then these must have a visual indication; if there are any differences in behavior between the displayed shapes, then there must be some corresponding visual difference.” This gets at the central usability heuristic of observability and its relation to transparency. Observable states are visually shown in the interface; an invisible component of the URS does not uphold the principle of system transparency. One could argue that if the URS is not fully transparent, usability problems are likely to ensue, since the user does not have adequate feedback on the state of the system relevant to its usage.

Of course, not all of the URS may be observable in one view, both because of screen real-estate constraints and because the display could become unintelligible if too much were shown. Thus the URS can be progressively observed through interaction (e.g. clicking a marker or an object to reveal state, or displaying a layer over objects which explicates state). The usability of a system and its effectiveness may nonetheless increase as more of the URS (such as the data dimensions in a model) becomes visible in a single view. A key connection Dix makes is that when the user can observe the complete state of the system they can (in theory) predict what the system will do. This is precisely what many simulation games are all about: predicting how actions taken on the model will impact future states of the simulation. The question remains: does complete transparency make a game experience too easy? Isn’t there some satisfaction in figuring it out?
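The observability test implied by Dix’s definition is easy to state operationally: every user-relevant state variable should have some visual indication in the display. A minimal sketch of that check, with names of my own choosing (not from the textbook):

```python
def undisclosed_state(user_relevant_state, displayed):
    """Return the URS variables that have no corresponding visual indication.
    An empty result means the display satisfies the transparency condition."""
    return sorted(set(user_relevant_state) - set(displayed))

urs = {"mode", "selection", "land_value", "crime_rate"}
on_screen = {"mode", "selection", "land_value"}

hidden = undisclosed_state(urs, on_screen)
if hidden:
    print("Not fully transparent; hidden URS:", hidden)  # -> ['crime_rate']
```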

NYT Interactive Presidential Debates

The New York Times recently published an interactive application for exploring the video and transcripts from the presidential and vice-presidential debates. Actual debate content aside, the application is quite a usable foray into the realm of multimedia (video + transcript) interfaces. Seen here is a screen shot of the application from the 2nd presidential debate.

Overall the interface has a good “flow.” At the top is the ability to search for keywords and see where they showed up in the transcript, and you can compare how often the word was used by Obama, McCain, and the moderator. Below this are two timelines; the problem is that while they are intuitive on their own, they are in the wrong hierarchical order. The top-most timeline is the most “zoomed out,” but the next one down is the most “zoomed in.” Really they need to be re-ordered so that the middle timeline is the bottom-most, which would give a more intuitive layout from least detailed to most detailed. What IS really nice about all of the timelines, and what really helps navigation, is all of the textual information that pops up when hovering. There is also some segmentation showing the parts of the video where each of the debaters is speaking; I found it really helpful to be able to click any of these segments and jump the video to that point. There is some navigational integration with the transcript which is interesting too: you can click on a block of the transcript and that will navigate you to that section of the video. But we’re still dealing with blocks of text rather than individual words being linked into the video.
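As a sketch of how the keyword feature works under the hood, a speaker-segmented transcript with start times is enough both to place markers on a timeline and to compare usage across speakers. The data layout below is assumed for illustration, not the Times’ actual format.

```python
from collections import Counter

# Hypothetical speaker-segmented transcript with start times in seconds.
transcript = [
    {"speaker": "Obama",     "start": 12.5,  "text": "the economy needs help"},
    {"speaker": "McCain",    "start": 98.0,  "text": "on the economy I would act"},
    {"speaker": "Moderator", "start": 180.2, "text": "next question please"},
]

def keyword_markers(transcript, word):
    """Timeline marker positions and per-speaker counts for a keyword."""
    markers, counts = [], Counter()
    for seg in transcript:
        n = seg["text"].lower().split().count(word.lower())
        if n:
            markers.append(seg["start"])
            counts[seg["speaker"]] += n
    return markers, counts

print(keyword_markers(transcript, "economy"))
# -> ([12.5, 98.0], Counter({'Obama': 1, 'McCain': 1}))
```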

The other fantastic aspect of this tool is that it provides some level of integrated fact-checking. The fact-checking is produced professionally by the Times and is presented as aligned with the different question segments. It’s difficult to follow, though, because it lives in a tab which competes with the transcript itself, so you can’t see the context or anchor for what the fact-checking refers to. It would be a lot more helpful, for comparison’s sake, to be able to see both the transcript and the fact-checking at the same time. The other problem with the presentation of the fact-checking is just that it’s really dense and hard to read through. Again, better contextualization with the video and the transcript would really help here.

Video Transcription on Google

Yesterday Google announced that they were applying some of their speech transcription research to political videos on YouTube. The philosophy (pushing research into the market to see its value and how it’s used) is great. The implementation, however, is rather shallow. While searching for keywords within video may be valuable for some users, several other features (such as closed captioning) have been left out of the interface. Also, the feature has not been integrated into YouTube itself and only functions within the Google gadget, which makes it less likely to be seen and used by many people.

Speech recognition is a hard problem. In a recent test I did with the Sphinx-3 engine from CMU, I was lucky to get a 60% correct transcription for a YouTube video, and that was with cleanly spoken audio. Studies at the University of Toronto by Cosmin Munteanu suggest that a word error rate (WER) of 25% or better is needed for the benefits of a transcribed video to be realized. And there’s a LONG way to go until automatically transcribed video achieves that WER on arbitrary internet content. The problems with automatic transcription are manifold, but include (1) noisy audio, (2) different speakers with varying accents, (3) poor support for named entities, and (4) high errors in audio-to-transcript alignment.
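For reference, WER is just the word-level edit distance (substitutions, insertions, and deletions) divided by the number of words in the reference transcript. A small sketch of the standard computation (not code from any of the studies cited):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / len(ref)

print(word_error_rate("speech recognition is a hard problem",
                      "speech recognition is hard problem"))  # ~0.17
```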

It’s hard to evaluate the Google transcription effort, but I will mention that in several keyword searches I have done, the markers on the timeline are off by several seconds from where the words are actually spoken in the video. This speaks to difficulty #4 above. To my knowledge there is no research on how this type of misalignment error affects the interactive experience, so it should be interesting to see whether Google users find it annoying.

I’ve been developing a new technology which addresses the video transcription problem. Check out my post on it here.

Reuters’ Open Calais

Reuters released a new API called Calais, based on a semantic entity extraction engine they acquired from ClearForest last year. It can extract Entities, Organizations, Companies, Events, and relationships between these things. Read Write Web posted a good summary of the API’s capabilities here.

Thinking about how to integrate the capabilities of this API into news analysis and consumption could lead to some interesting new information interfaces.
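As one illustration of where this could go, extracted entities could be aggregated across articles to surface which people and organizations keep appearing together. The sketch below uses a hypothetical extract_entities helper as a stand-in for a call to the Calais service; I’m not reproducing the API’s actual request format here.

```python
from collections import Counter
from itertools import combinations

def extract_entities(article_text):
    """Placeholder for a real Calais call: returns hard-coded entities so the
    sketch runs. In practice the text would be sent to the API and the
    returned entity list parsed."""
    return ["Reuters", "ClearForest"]

def entity_co_occurrence(articles):
    """Count which entity pairs appear together across a set of articles,
    a simple building block for a news-exploration interface."""
    pairs = Counter()
    for text in articles:
        entities = sorted(set(extract_entities(text)))
        for a, b in combinations(entities, 2):
            pairs[(a, b)] += 1
    return pairs.most_common(10)

print(entity_co_occurrence(["sample article one", "sample article two"]))
```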

Hypertext 2007

So I’m here in (believe it or not) sunny Manchester, England, attending the Hypertext and Hypermedia conference. I gave my presentation on our study of Jumpcut.com, which was well received, albeit a bit rushed before lunch. I will admit that I’ve been surprised by this conference so far; it’s much smaller than I expected. There were only 40-50 people at my presentation, and I have word that total registrations are only at about 70, so it’s really more on the scale of a large workshop than a conference. The advantage is that I’m getting to meet and chat in depth with people more so than I do at some of the larger conferences I attend.

Whereas the papers presented yesterday were somewhat boring, I found the papers this morning really interesting. My favorite so far is a paper entitled Assembly Lines: Web Generators as Hypertext by Elizabeth Losh, a faculty member at UC Irvine who also visited and spoke at our (Georgia Tech’s) Living Games Worlds symposium last spring. Web generators are online apps that produce media based on some simple input or interaction from the user at the outset. Here’s an example from www.churchsigngenerator.com:

[Image: example church sign generated at churchsigngenerator.com]

There are tons of these things online and I’ve even seen some popping up as Facebook Apps (e.g. the Chuck Norris Fact Generator, or various quote generators). Web generators are perhaps the most simplistic form of remixing you could imagine. The programmer of the generator sets some creative context (e.g. church signs) that people can then fill in with meaning-forming details which get incorporated automatically. A lot of the interfaces for these things are as simple as a page refresh or a button which re-generates, but I’m more interested in what could be done with slightly more sophisticated interfaces, which are less form-based and more interactive, perhaps facilitating some kind of tweaking once the initial bit of media is generated. Anyway, neat stuff and an engaging presentation by Elizabeth.
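The basic pattern really is tiny: a fixed template plus a small user-supplied detail, composed automatically. A sketch using Pillow, where the blank panel, font, and coordinates all stand in for a real template image:

```python
from PIL import Image, ImageDraw, ImageFont

def generate_church_sign(message):
    """Compose a user-supplied message onto a fixed 'sign' template."""
    # Placeholder sign: a blank panel; a real generator would load the
    # church-sign template image here instead.
    sign = Image.new("RGB", (400, 200), "white")
    draw = ImageDraw.Draw(sign)
    font = ImageFont.load_default()
    # Place the user's message on the sign (coordinates are arbitrary).
    draw.text((40, 90), message.upper(), fill="black", font=font)
    return sign

generate_church_sign("FREE COFFEE, EVERLASTING LIFE").save("generated_sign.png")
```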

Here are some more fun generators:

Biblical Curse Generator

Pirate Name Generator

Web 2.0 App Generator

Timed Comments in Video

There’s a lot of interest from new video startups in making video a first-class Web 2.0 citizen by bringing tagging, commenting, and responses to videos at a sub-video level of granularity. While old skool video sites like YouTube, Revver, Metacafe, and Magnify let you add tags and comments to a video, the new breed of video services such as Viddler, YouTube Streams, and The Click support timed tags and comments on video. There has also been some recent academic interest in how people can interact with one another through commenting and chatting around video. This CHI 2007 paper from CMU is a good first step in understanding chatting behavior around videos.

Today I took a closer look at one of the newer attempts at highly granular commentable video, Viddler. Here’s a screenshot of their interface, which despite some great features also suffers from some usability issues. All in all the timeline is pretty good, though: you can clearly see where people have left comments, and an easy-to-understand “+” graphic lets the user open a menu to add a comment, tag, or video response. The downside is that there is no time extent associated with the comments or tags; they are simply added as point anchors. At the same time, this does simplify the interface by not requiring the user to select an in and out point; ideally, I think the point anchor should be extendable to cover a time period if the user so chooses.

One problem the interface suffers from is that comments sometimes obscure the video. Although they can be expanded and collapsed as necessary, this quickly becomes tedious. Furthermore, seeing all of the comments that have been added to the video (as a list) obscures almost half of the video. Perhaps the goal is to keep the comment track and the video mutually exclusive, since they distract from one another anyway? What is nice is the voting mechanism for comments, which determines which comment shows up during playback when there are several comments or responses at the same point in the video.
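Putting those two ideas together, a minimal model would give each comment a point anchor with an optional time extent, and use votes to pick which comment to surface at the current playback position. A sketch with field names of my own (not Viddler’s):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TimedComment:
    start: float                 # seconds into the video (point anchor)
    text: str
    votes: int = 0
    end: Optional[float] = None  # optional extent; None means a pure point anchor

def comment_to_show(comments: List[TimedComment], t: float) -> Optional[TimedComment]:
    """Of the comments active at playback time t, show the highest-voted one.
    Point anchors are treated as active for a short window after their start."""
    active = [c for c in comments
              if c.start <= t <= (c.end if c.end is not None else c.start + 1.0)]
    return max(active, key=lambda c: c.votes, default=None)

comments = [TimedComment(10.0, "nice cut here", votes=3),
            TimedComment(9.5, "what song is this?", votes=7, end=14.0)]
print(comment_to_show(comments, 10.2))  # -> the higher-voted, extent-based comment
```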
