Systems Papers at CHI – Some Data

Back in 2009 James Landay wrote a thoughtful piece on some of the challenges associated with publishing systems research at a venue like CHI (or UIST). He concluded that the incentive structure just isn’t there to support the greater degree of time and effort needed to build and evaluate systems, especially when compared to other types of research which require less time but still get you the line-item on the CV.

I wanted to try to back up some of this thinking with data, so I wrote a ScraperWiki script to go out and harvest a corpus of previous CHI proceedings (you can edit the script or access the data I collected here). I scraped all paper titles, authors, and abstracts going back to 1999 (the ACM DL changed its page format before then, which is why I didn't go back further). The dataset ended up being 2,498 papers over 14 years (1999-2012).

For the sake of the rest of the analysis I define “systems papers” as the subset of papers with an abstract that uses the word “system”. I know it’s not perfect (most likely some false positives in there), but it’s a reasonable proxy and I didn’t have time to go through all 2.5k papers by hand.
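As a rough sketch of that proxy, here's what the classification might look like in Python. The dict schema (`"title"`, `"abstract"` fields) is a hypothetical stand-in for the scraped dataset, not the actual ScraperWiki output, and matching the plural "systems" plus using a word boundary are my own choices — the proxy is still imperfect, as noted above.

```python
import re

# Word-boundary match avoids some obvious false positives like
# "systematic" or "ecosystem"; matching "systems" too is an assumption.
SYSTEM_RE = re.compile(r"\bsystems?\b", re.IGNORECASE)

def is_systems_paper(paper):
    """True if the paper's abstract uses the word 'system' (or 'systems')."""
    return bool(SYSTEM_RE.search(paper["abstract"]))

# Toy records standing in for the scraped corpus (hypothetical schema).
papers = [
    {"title": "A Toolkit ...", "abstract": "We present a system for sketching."},
    {"title": "A Study ...",   "abstract": "We systematically interviewed users."},
]

systems_papers = [p for p in papers if is_systems_paper(p)]
```

Only the first toy paper is classified as a systems paper here; "systematically" doesn't trip the word-boundary match, though plenty of subtler false positives would survive on real abstracts.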

One question we might ask is: do systems papers really require more effort than other papers at CHI? If they take too much effort, a rational researcher might choose to spend time on other types of contributions. In the following graph we can see that, in the last 5 years, systems papers have indeed averaged more authors per paper than other papers at CHI (the assumption being that more authors implies more overall work, though this of course doesn't always hold). There have also been years in the past when non-systems papers had more authors on average (e.g. 2001 or 2002). Overall, the number of authors for systems papers over the period (M=3.61, SD=0.37) is slightly higher than that for non-systems papers (M=3.43, SD=0.21), and the standard deviation is also a bit higher, indicating more variance in the number of authors of systems papers. The difference in means isn't statistically significant (p=.15). So there is some (weak) evidence that systems papers do have more authors on average.
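The per-year averaging behind numbers like these can be sketched in a few lines of stdlib Python. The `"year"`/`"authors"` fields are assumed schema (not the real dataset), and the toy data is illustrative only:

```python
from collections import defaultdict
from statistics import mean, stdev

def yearly_mean_authors(papers):
    """Average number of authors per paper, for each year in the corpus."""
    by_year = defaultdict(list)
    for p in papers:
        by_year[p["year"]].append(len(p["authors"]))
    return {year: mean(counts) for year, counts in sorted(by_year.items())}

# Toy records standing in for one group (e.g. the systems-paper subset).
papers = [
    {"year": 1999, "authors": ["a", "b", "c"]},
    {"year": 1999, "authors": ["a", "b", "c", "d", "e"]},
    {"year": 2000, "authors": ["a", "b", "c", "d"]},
]

yearly = yearly_mean_authors(papers)
m, sd = mean(yearly.values()), stdev(yearly.values())
```

Running this once per group (systems vs. non-systems) gives the M and SD of the yearly means; a significance test on the two groups' author counts would need something like `scipy.stats.ttest_ind`, which I've left out to keep this stdlib-only.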

Another question we might ask is: is the relative amount of systems work published at CHI declining? To see this we can look at the graph below, which shows the fraction of systems papers out of the total for each year. The average fraction of systems papers over the time period (1999-2012) is 0.36 (SD=0.07). There's a fair bit of variance, with a low in 2007 and a high in 2003. In the last couple of years the fraction of systems papers has been a tad below the mean, but still within one standard deviation. There's no correlation between fraction and year. From this I think we can conclude that there's no clear trend in the fraction of systems papers being published at CHI. Moreover, the absolute number of systems papers has gone from 15 in 1999 to 60 in 2012, indicating fair growth in this segment of CHI papers. (It would be really interesting to analyze abstracts from all papers, both accepted and rejected, to see if there is a bias.)
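The trend check described above — per-year fractions, then a correlation against year — can be sketched with the stdlib alone (Pearson's r is hand-rolled here to avoid dependencies; the `"year"`/`"abstract"` fields are assumed schema):

```python
from collections import defaultdict
from statistics import mean

def systems_fraction_by_year(papers):
    """Fraction of papers per year whose abstract mentions 'system'."""
    total, systems = defaultdict(int), defaultdict(int)
    for p in papers:
        total[p["year"]] += 1
        if "system" in p["abstract"].lower():
            systems[p["year"]] += 1
    return {year: systems[year] / total[year] for year in sorted(total)}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Toy corpus; real input would be the 2,498 scraped papers.
papers = [
    {"year": 1999, "abstract": "We present a system for drawing."},
    {"year": 1999, "abstract": "A survey of practitioners."},
    {"year": 2000, "abstract": "Our system supports collaboration."},
    {"year": 2000, "abstract": "Another system for annotation."},
]

fractions = systems_fraction_by_year(papers)
r = pearson(list(fractions.keys()), list(fractions.values()))
```

An r near zero on the real yearly fractions is what "no correlation between fraction and year" amounts to; the toy data above is too small to be meaningful and is there only to make the sketch runnable.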

While the cost of doing systems work in HCI may be higher (i.e. more co-authors needed), the fraction of systems work at CHI doesn't seem to have been substantially affected over the course of the last 14 years. But it's still easy to feel like all the action is happening in industry: new products are constantly hitting the market, and start-ups and entrepreneurship are heavily covered by the tech press. The reality is that systems publishing is trucking along and also growing, but, I think, over time it will represent a smaller and smaller fraction of the pie as prototyping becomes "mainstream" and knowledge of HCI continues to diffuse. That may be ok, as long as the research prototypes produced by the academy are sufficiently differentiated from what's available and possible in the market.

User Interface Software Technology (UIST 2007) Conference

This week (Oct 7th – 10th) I'm in Newport, Rhode Island attending the User Interface Software Technology (UIST) conference to present a couple of posters. It's the third year I've been to UIST, and it's usually a great venue to learn about creative and emerging interaction technologies. This year UIST seems to be a bit different than in years past. While there are still the projector / camera systems, pointing schemes, and Fitts' law studies, this year is bringing a lot more work on information systems and interaction; topics that are quite interesting to me. I would characterize UIST as primarily an engineering conference; people build systems and prototypes and do small-scale user studies, but generalization of results is difficult because of the highly contextual and specific nature of much of the work.

The keynote on Oct 8 was given by David Woods from Ohio State University. According to David there are essentially two views on designing for people: (1) you can compensate for human limits through design, or (2) you can amplify the adaptive capabilities and resilience of humans through design. In particular I appreciated the focus on the development of interfaces which take into account the adaptiveness, learning, and resilience of humans. This is in keeping with the basic philosophy / focus of this particular meeting: requiring too much evaluation kills innovation, because ultimately humans can adapt to innovative interaction techniques over time.

There are a few really cool systems that have been presented here so far. I liked Merrie Morris’ paper on SearchTogether: An Interface for Collaborative Web Search because I can immediately see how it could be useful for pairs or groups of people who are collaborating on some information search task e.g. vacation shopping. It provides support for awareness, division of labor (e.g. splitting search results), and persistence (e.g. saving query terms) as well as integrated chatting. It’s exciting to see this kind of software being developed though somehow I feel like it’s not just search that needs to be collaborative but rather the entire value-added information spectrum (organization, analysis, judging, and decision making). SearchTogether addresses organization and a bit of analysis, but the question now is how to judge that information and ultimately come to an actionable decision based on it.

Another paper that I liked was Mira Dontcheva's Relations, Cards, and Search Templates, because it's pointing toward the semantic web. It makes a lot of sense to be able to construct personalized layouts and data sources for an information search task, and that's just what they've done here. You can almost think of it like a mash-up system for specific search tasks; clearly useful for knowledge workers who have specific tasks. This touches on the point brought up by David Woods in terms of supporting adaptation by people.

Finally, I really appreciated Bjorn Hartmann et al.’s paper: Programming by a Sample: Rapidly Creating Web Applications with d.mix. The prototype lets programmers grab elements of web pages and generate code based on the web APIs for those pages automatically. This facilitates programming by example and could be really powerful for people learning an API or for people who aren’t expert programmers but who want / need to do mashups on online information sources.