Category Archives: transparency

51% Foreign: Algorithms and the Surveillance State

In New York City there’s a “geek squad” of analysts that gathers all kinds of data, from restaurant inspection grades and utility usage to neighborhood complaints, and uses it to predict how to improve the city. The idea behind the team is that with more and more data available about how the city is running—even if it’s messy, unstructured, and massive—the government can optimize its resources by keeping an eye out for what needs its attention most. It’s really about city surveillance, and of course acting on the intelligence produced by that surveillance.

One story about the success of the geek squad comes to us from Viktor Mayer-Schonberger and Kenneth Cukier in their book “Big Data”. They describe the issue of illegal real-estate conversions, which involves sub-dividing an apartment into smaller and smaller units so that it can accommodate many more people than it should. With the density of people in such close quarters, illegally converted units are more prone to accidents, like fire. So it’s in the city’s—and the public’s—best interest to make sure apartment buildings aren’t sub-divided like that. Unfortunately there aren’t very many inspectors to do the job. But by collecting and analyzing data about each apartment building the geek squad can predict which units are more likely to pose a danger, and thus determine where the limited number of inspectors should focus their attention. Seventy percent of inspections now lead to eviction orders from unsafe dwellings, up from 13% without using all that data—a clear improvement in helping inspectors focus on the most troubling cases.
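The book doesn't disclose the city's actual model, but the general pattern is easy to sketch. Below is a minimal, hypothetical risk-scoring sketch in Python; the feature names, weights, and building data are all invented for illustration, not drawn from the geek squad's real system.

```python
# Hypothetical sketch: rank buildings for inspection by a risk score.
# Feature names and weights are invented for illustration.

def risk_score(building, weights):
    """Combine a building's features into a single priority score."""
    return sum(weights[f] * building.get(f, 0.0) for f in weights)

weights = {
    "complaints_per_year": 0.5,
    "utility_anomaly": 0.3,
    "building_age_decades": 0.2,
}

buildings = [
    {"id": "A", "complaints_per_year": 12, "utility_anomaly": 1.0, "building_age_decades": 9},
    {"id": "B", "complaints_per_year": 2,  "utility_anomaly": 0.0, "building_age_decades": 3},
    {"id": "C", "complaints_per_year": 7,  "utility_anomaly": 0.4, "building_age_decades": 6},
]

# Send the limited pool of inspectors to the highest-scoring buildings first.
ranked = sorted(buildings, key=lambda b: risk_score(b, weights), reverse=True)
print([b["id"] for b in ranked])  # → ['A', 'C', 'B']
```

The point is not the particular weights but the workflow: a scarce resource (inspectors) is allocated by a model's ranking rather than by complaint order.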

Consider a different, albeit hypothetical, use of big data surveillance in society: detecting drunk drivers. Since there are already a variety of road cameras and other traffic sensors available on our roads, it’s not implausible to think that all of this data could feed into an algorithm that says, with some confidence, that a car is exhibiting signs of erratic, possibly drunk driving. Let’s say, similar to the fire-risk inspections, that this method also increases the efficiency of the police department in getting drunk drivers off the road—a win for public safety.

But there’s a different framing at work here. In the fire-risk inspections the city is targeting buildings, whereas in the drunk driving example it’s really targeting the drivers themselves. This shift in framing, targeting the individual as opposed to the inanimate, crosses the line into invasive, even creepy, civil surveillance.

So given the degree to which the recently exposed government surveillance programs target individual communications, it’s not as surprising that, according to Gallup, more Americans disapprove (53%) than approve (37%) of the federal government’s program to “compile telephone call logs and Internet communications.” This is despite the fact that such surveillance could in a very real way contribute to public safety, just as with the fire-risk or drunk driving inspections.

At the heart of the public’s psychological response is the fear and risk of surveillance uncovering personal communication, of violating our privacy. But this risk is not a foregone conclusion. There’s some uncertainty and probability around it, which makes it that much harder to understand the real risk. In the Prism program, the government surveillance program that targets internet communications like email, chats, and file transfers, the Washington Post describes how analysts use the system to “produce at least 51 percent confidence in a target’s ‘foreignness’”. This test of foreignness is tied to the idea that it’s okay (legally) to spy on foreign communications, but that it would breach FISA (the Foreign Intelligence Surveillance Act), as well as 4th amendment rights for the government to do the same to American citizens.

Platforms used by Prism, such as Google and Facebook, have denied that they give the government direct access to their servers. The New York Times reported that the system in place is more like having a locked mailbox where the platform can deposit specific data requested pursuant to a court order from the Foreign Intelligence Surveillance Court. But even if such requests are legally targeted at foreigners and have been faithfully vetted by the court, there’s still a chance that ancillary data on American citizens will be swept up by the government. “To collect on a suspected spy or foreign terrorist means, at minimum, that everyone in the suspect’s inbox or outbox is swept in,” as the Washington Post writes. And typically data is collected not just of direct contacts, but also contacts of contacts. This all means that there’s a greater risk that the government is indeed collecting data on many Americans’ personal communications.

Algorithms, and a bit of transparency on those algorithms, could go a long way to mitigating the uneasiness over domestic surveillance of personal communications that American citizens may be feeling. The basic idea is this: when collecting information on a legally identified foreign target, for every possible contact that might be swept up with the target’s data, an automated classification algorithm can be used to determine whether that contact is more likely to be “foreign” or “American”. Although the algorithm would have access to all the data, it would only output one bit of metadata for each contact: is the contact foreign or not? Only if the contact was deemed highly likely to be foreign would the details of that data be passed on to the NSA. In other words, the algorithm would automatically read your personal communications and then signal whether or not it was legal to report your data to intelligence agencies, much in the same way that Google’s algorithms monitor your email contents to determine which ads to show you without making those emails available for people at Google to read.
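To make the one-bit idea concrete, here is a minimal sketch of such a classifier. The signals, their weights, and even the use of 0.51 as a cutoff (echoing the "51 percent confidence" standard) are illustrative assumptions on my part, not the actual minimization procedure.

```python
# Sketch of the "one bit out" idea: the algorithm sees the full contact
# record but emits only a foreign/not-foreign flag. All signals and
# weights here are hypothetical.

def foreignness_score(contact):
    """Accumulate invented signals of 'foreignness'; a real system
    would use far richer features."""
    score = 0.0
    if contact.get("ip_country") != "US":
        score += 0.4
    if contact.get("language") != "en":
        score += 0.3
    if contact.get("phone_country_code") != "+1":
        score += 0.3
    return score

def classify(contact, threshold=0.51):
    """Return the single bit of metadata: True if likely foreign."""
    return foreignness_score(contact) >= threshold

# A contact with mostly US signals stays below the threshold, so the
# underlying communications would never be passed on.
contact = {"ip_country": "US", "language": "en", "phone_country_code": "+44"}
print(classify(contact))  # → False
```

Only the boolean leaves the box; the inputs that produced it do not.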

The FISA court implements a “minimization procedure” in order to curtail incidental data collection from people not covered in the order, though the exact process remains classified. Marc Ambinder suggests that, “the NSA automates the minimization procedures as much as it can” using a continuously updated score that assesses the likelihood that a contact is foreign.  Indeed, it seems at least plausible that the algorithm I suggest above could already be a part of the actual minimization procedure used by NSA.

The minimization process reduces the creepiness of unfettered government access to personal communications, but at the same time we still need to know how often such a procedure makes mistakes. In general there are two kinds of mistakes that such an algorithm could make, often referred to as false positives and false negatives. A false negative in this scenario would indicate that a foreign contact was categorized by the algorithm as an American. Obviously the NSA would like to avoid this type of mistake since it would lose the opportunity to snoop on a foreign terrorist. The other type of mistake, false positive, corresponds to the algorithm designating a contact as foreign even though in reality it’s American. The public would want to avoid this type of mistake because it’s an invasion of privacy and a violation of the 4th amendment. Both of these types of errors are shown in the conceptual diagram below, with the foreign target marked with an “x” at the center and ancillary targets shown as connected circles (orange is foreign, blue is American citizen).


It would be a shame to disregard such a potentially valuable tool simply because it might make mistakes from time to time. To make such a scheme work we first need to accept that the algorithm will indeed make mistakes. Luckily, such an algorithm can be tuned to make more or less of either of those mistakes. As false positives are tuned down false negatives will often increase, and vice versa. The advantage for the public would be that it could have a real debate with the government about what magnitude of mistakes is reasonable. How many Americans being labeled as foreigners and thus subject to unwarranted search and seizure is acceptable to us? None? Some? And what’s the trade-off in terms of how many would-be terrorists might slip through if we tuned the false positives down?
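The tuning trade-off is easy to demonstrate on synthetic data: sweeping the decision threshold up reduces false positives (Americans flagged as foreign) while increasing false negatives (foreign contacts missed). The scores and labels below are invented purely to show the mechanics.

```python
# Toy demonstration of the threshold trade-off on synthetic
# (score, truly_foreign) pairs.
contacts = [(0.9, True), (0.7, True), (0.6, False), (0.55, True),
            (0.5, False), (0.4, False), (0.35, True), (0.2, False)]

def error_counts(threshold):
    """Count both error types at a given decision threshold."""
    fp = sum(1 for s, foreign in contacts if s >= threshold and not foreign)
    fn = sum(1 for s, foreign in contacts if s < threshold and foreign)
    return fp, fn

for t in (0.3, 0.51, 0.65):
    fp, fn = error_counts(t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
# threshold=0.3:  false positives=3, false negatives=0
# threshold=0.51: false positives=1, false negatives=1
# threshold=0.65: false positives=0, false negatives=2
```

Those two numbers per threshold are exactly what a public debate would need: where on this curve has the government placed itself?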

To begin a debate like this the government just needs to tell us how many of each type of mistake its minimization procedure makes; just two numbers. In this case, minimal transparency of an algorithm could allow for a robust public debate without betraying any particular details or secrets about individuals. In other words, we don’t particularly need to know the gory details of how such an algorithm works. We simply need to know where the government has placed the fulcrum in the tradeoff between these different types of errors. And by implementing smartly transparent surveillance maybe we can even move more towards the world of the geek squad, where big data is still ballyhooed for furthering public safety.

Transparency in Game UIs

Games are a decent starting point for seeing how mechanical transparency is addressed in computer interfaces, since simulation games are often built around the concept of optimizing some state of the game (resource use, growth, or simply score) based on decisions the player makes. Here I illustrate how games are approaching some of the facets of mechanical transparency I introduced before.

Specific Why of State. This is a precise explanation for a game element’s attributes including any relationships, the directionality of those relationships, and their valences.

Sim City 4 does this exceedingly well by using a “?” tool which, when clicked on an element, exposes the state of the attributes of that object relevant to game play. For instance, it will tell you the crime rate and freight trip length for industrial buildings. At any point in the game you get a snapshot of the status of an individual object. Another method used to expose state is hovering the mouse over an object. This reveals less information than the “?” click, but still shows you “the top three conditions that currently have the most dramatic impact on the desirability of the area” [Sim City 4 manual]. When the state of elements is spatially structured, as in a map, overlays are used to show the distribution of a variable across space. Graphs provide transparency of state over time; aggregations of individual elements’ state are shown as a time series. Sim City 4 manages to achieve a playable simulation in part because the information necessary for optimizing the simulation is transparent in the interface: through hovering popups, clickable popups, spatially layered attributes, and temporally graphed attributes.

Another game I looked at, Oil God, adds a further twist to communicating state transparency by drawing a network overlay on top of the spatial layout to explicitly show relationships between game elements. Democracy 2 encodes two more variables into its graphical overlays: relationship valence (via red-green coloring) and directionality (via animation direction). Another, perhaps simpler, method for communicating specific state information is textual feedback. For instance, in The Garbage Game, where the premise is to keep as much refuse out of landfills as possible, I made a decision in the game to refill and reuse my plastic bottle and it told me: “We figure that a bottle will get refilled about three times on average, so we’ve reduced the volume of water bottle waste in your sorted recycling to 25 percent of the 17,677 tons that New Yorkers currently generate each year, or 4,420 tons.”

Specific Why of State Change. This relates to explicit descriptions of computational state and explanations of why a state has changed. What was the trigger, event, or decision that affected a state change? Was this trigger algorithmic or based on user input? This facet of mechanical transparency is lacking in many of the games I looked at. One game that did attempt an explanation for state change is Energyville. At the end of the game, a graph shows the economic, environmental, and security impacts of your decisions. Along the timeline of the graph there are icons of different events that have happened. Clicking on these expands them out with additional textual information. For instance, “2014: wind power fails to deliver. -resulting in a 20% increase in your Wind economic impact.” Since state change is a matter of an attribute changing over time, annotated graphs and timelines seem to be a natural interface metaphor for explaining it. Another candidate would be animation. For embedding markers of state change within an interface itself, something like the afterglow effects in Phosphor might also work.

General Why of State. This is a generic explanation for an element’s attribute or a relationship between attributes, which is not related to the specifics of any particular element.

General why of state involves explication of the existence and valence of a relationship rather than the actual mathematical description, which would be included in the specific why of state. Because the explanation is general in terms of being non-specific, external information and sources can be used to buttress the existence and valence of relationships in the model. One of the tensions that this general information brings up is the granularity of its availability. Is it embedded in the interface or simply available as blocks of text elsewhere? What we see in many games is that the general why of state is offloaded to textual explanation on a separate information page, sometimes also outside of the interactive application itself. In Energyville, which has players managing economic, environmental, and security impacts of energy decisions, each of these facets has explanations in plain text. For instance, the environmental facet, when clicked, lists out all the negative impacts on the environment and cites the names of some recent reports which were used to “inform the impact assessments.”

Sim City 4 and Democracy 2 follow a similar strategy of explaining in text generally what the relationships are between variables of interest. For instance, in Sim City 4, clicking a city opinion poll about land value tells you this: “Represents the average land value in your city. To raise land value place parks, schools, hospitals, and other amenities in your residential zones.” In Democracy 2, the textual information comes in the form of “Encyclopedia” articles, which explain and provide context for the variable currently under consideration. The advantage of a textual approach is that we have well understood conventions for citing information in text. Also, it’s unclear whether an abstract relationship could be represented well in an image or in video. Such multimedia assets may however serve as existence proof for some attribute of interest, if that attribute is not already obvious.

HCI’s Teachings on Transparency II

In this post I’ll continue trying to glean knowledge from the study of transparency of interactive systems in HCI, which I began in an earlier post.

Back in the mid-1990s there was a flurry of activity in HCI around understanding the explainability and transparency of interactive systems. Paul Dourish published extensively in the area and is known for his book, Where the Action Is: The Foundations of Embodied Interaction, which (among other things) connects ideas from ethnomethodology with those of technology and system transparency.

A key concept studied in relation to ethnomethodology is that of accountability, meaning “observable and reportable” or able to be made sense of in the context in which an action arises. It addresses not just the result or outcome of an action but also includes how the result was achieved. Dourish sums it up thus, “Put simply it says that because we know that people don’t just take things at face value but attempt to interrogate them for their meaning, we should provide some facilities so that they can do the same thing with interactive systems. Even more straightforwardly, it’s a good idea to build systems that tell you what they’re doing.”

An account then is something that provides accountability in a software interface. The goal of an account is to provide some explanation for how the sequence of actions up to a moment results in a system’s current configuration. Why did each action in the interface affect the state in the way that it did? This is extremely similar to the notion of the transparency of mechanics that I developed in a previous post. Too bad Dourish beat me by a decade or so.

In his paper, Accounting for System Behavior: Representation, Reflection and Resourceful Action, Dourish posits a compelling definition for an account: “Accounts are causally-connected representations of system action which systems offer as explications of their own activity. They are inherently partial and variable, selectively highlighting and hiding aspects of the inherent structure of the systems they represent.” The notion of partiality of accounts is troubling with respect to journalistic transparency since information exclusion entails a danger of bias. But journalistic transparency can be maintained even in partiality if decisions about inclusion / exclusion are explicated. Decisions about inclusion / exclusion can however also be made algorithmically, which confounds the problem for interactive systems. The classic example is in the (lack of) transparency of ranking algorithms used in online search engines.

Another connection that I see to journalistic notions of transparency is that accounts are context sensitive: more general statements of transparency are less context specific whereas less general statements embedded in the actual context of the running system are highly context specific. “The account that matters is one that is good enough for the needs and purposes at hand, in the circumstances in which it arises and for those who are involved in the activity,” writes Dourish in Where the Action Is. What are the needs of the user in some particular situation? A journalist writing interactive software would need to answer the question: “What states need to be observable?”.

Furthermore, in journalism, transparency happens at varying degrees and levels of granularity and is thought of in a practical light where, for instance, it would not make sense to be transparent about all of a reporter’s notes in a newspaper, since there are space constraints. Practicality, efficiency of communication, and usability of an interface can be subverted if everything must be transparent. What is the appropriate level of transparency, both mechanical and journalistic, for interactive games and info graphics?

Johnson and Johnson have also written about another important facet of transparency that is relevant here: the nature of the knowledge being made transparent, whether declarative or procedural, can have an impact on how that transparency is presented. Is it easily citable, or does a complex process need to be explicated? I think this manifests in journalistic transparency as the difference between transparency of reference and transparency of construction.

Notions of Transparency in Journalism

I’ve been trying to get a handle on how interactive software such as games can be made more transparent, and perhaps more trustworthy. As suggested in The Elements of Journalism, transparency signals a respect for the audience and reaffirms a journalist’s public interest motive, the key to gaining credibility. “The willingness of the journalist to be transparent about what he or she has done is at the heart of establishing that the journalist is concerned with the truth” (p. 92). I’ve begun the process of teasing apart understandings of transparency in journalism, which encompass a number of different notions including:

  • Decisions. Explaining how and why relevant editorial decisions are made. This includes explaining any inclusion or exclusion criteria for any controversial decisions as well as explaining why a decision to anonymize a source was made. Selection is at the heart of bias, so to be more transparent about bias, journalists should always support their decisions about selecting or excluding information.
  • Lack or Uncertainty of Knowledge. Being upfront about acknowledging what questions stories do not answer (or cannot answer). When information is uncertain or unavailable, what assumptions have been made which affect interpretation?
  • Production Process. Providing evidentiary support to a story. News providers can use the Internet to provide primary source material in the form of databases, documents, methodologies, or audio and video of interviews. This can also include information about the nature and quality of the source used for information gathering. For instance, was the source of information a press conference, interview, press release, or quote from another media institution? What is the context and circumstance under which that information was gathered? Why is that source qualified to comment on the issue at hand? If multiple sources were used, how were they selected? At a different level of granularity, process can also involve explaining to the audience how stories are developed, reported, edited, produced, and presented. In Ian’s terminology, highly granular process transparency corresponds to the transparency of reference, and less granular process transparency to the transparency of construction.
  • Labeling. Advertisement and opinion needs to be marked as such to avoid confusion by news consumers.
  • Correction. Admitting and correcting mistakes and errors in a timely fashion.

Transparency is modulated by features such as:

  • Granularity. What is the appropriate scale to address transparency? Sometimes it is addressed at the level of the whole newsroom in the form of an editor’s column or blog about how decisions are made. Other times it should be addressed with more specificity, at the level of providing links to primary source material as well as providing context about the information sources in a particular story.
  • Degree. Even in a highly granular instantiation of transparency, not every statement likely needs that much attention to detail. Perhaps there are culturally accepted chunks of information that don’t need explicit citation, or are so widely known as to be considered wasting space if they are included. No one should expect that an article contain a complete list of explanations regarding sourcing or newsgathering, as this would be overwhelming for a consumer and perhaps impossible in print or video where space and time are at a premium.

Ultimately though the granularity and degree of transparency need to be audience centric. “What does my audience need to know to evaluate this information for itself? This includes explaining as much as is practical about how the news organization got its information.” (The Elements of Journalism. 94)

When considering bloggers-as-journalists the concept of transparency shifts a bit. Whereas the predominant notion of the journalist as objective and impartial reporter prevails in mainstream media, bloggers participating in journalistic activity tend to be transparent about their bias and background as well as what they have at stake. Bloggers have the freedom to express transparency in motives as well as transparency in process. For instance, bloggers often link to documents, sources and supporting evidence to buttress their own authority whereas oftentimes press articles are written without links, as if for print.

In a future post I will mix and match this understanding of transparency in journalism with the notions of mechanical transparency and system transparency I’ve talked about before.

HCI’s Teachings on Transparency I

I’ve gone back to basics and have been reading through the HCI bible (Human Computer Interaction, 3rd Ed., Dix et al.) to get a better understanding of how transparency is conceived of in interactive systems. System transparency does get a treatment as an element of formal interface modeling. There are several key points that we can learn from and which tie into transparency as it concerns journalism and interactive media.

While the state of the system is central to the notion of system transparency, what we’re really interested in is an idealization of the system state. What’s important in a user-centric model is the representation of “state required to account for the future external behavior.” In the text Dix refers to this as the “effect” which I think is nasty terminology. I’m going to call it the “User-Relevant State” or URS.

They do arrive at a workable definition of state: “[Transparency] would say that there is nothing in the state of the system that cannot be inferred from the display. If there are any modes, then these must have a visual indication; if there are any differences in behavior between the displayed shapes, then there must be some corresponding visual difference.” This gets at the central usability heuristic of observability and its relation to transparency. Observable states are visually shown in the interface; an invisible component of the URS does not uphold the principle of system transparency. One could make an argument that if the URS is not fully transparent usability problems are likely to ensue since the user does not have adequate feedback on the state of the system relevant to its usage.

Of course, not all of the URS may be observable in one view, because of screen real-estate constraints and because the display could become unintelligible if too much were shown. Thus the URS can be progressively observed through interaction (e.g. clicking a marker or an object to reveal its state, or displaying a layer over objects which explicates state). The usability of a system and its effectiveness may however be increased if more of the URS (such as data dimensions in a model) is visible in the display in one view. A key connection Dix makes is that when the user can observe the complete state of the system they can (in theory) predict what the system will do. This is precisely what many simulation games are all about: predicting how actions taken on the model will impact future states of the simulation. The question remains: does complete transparency make a game experience too easy? Isn’t there some satisfaction in figuring it out?

The Transparency of Mechanics


In Ian’s prior post on transparency and games he mentions three types of transparency: transparency of influence, transparency of construction, and transparency of reference. Cutting across these facets I’d like to add the transparency of mechanics, which is particularly applicable to any consumer-facing journalistic software, of which games are one instance. To get a better understanding of (1) what the transparency of mechanics involves in journalistic software and (2) how mechanics are currently communicated in software, I analyzed a number of examples of serious games and info graphics including: SimCity 4, Democracy 2, Oil God, The Garbage Game, Energyville, Stop Disasters Game, The Chevron Energy Generator, Better to Buy or Rent, and’s 2008 what ifs. In this post I will mainly address definitions; in future posts I will consider how the model of the transparency of mechanics presented here has been and can be reified in interfaces.

What I mean by “mechanics” is essentially the internal and external state of elements, and the relationships between elements, of a computer program, including the values or attributes and categorizations of elements in the software with respect to their circumstances (e.g. time, place, etc.). A state within a game is the instantaneous value of all elements and relationships between elements. For example, in Sim City the state of the game at any one time-slice is the set of all values (e.g. low, med, high) of all attributes (e.g. pollution, education, fire protection, etc.) for all game objects (e.g. power plants, residential areas, etc.), including how those objects are interacting and influencing each other at that moment.
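As a concrete, much-simplified sketch, the state at one time-slice can be modeled as a snapshot of objects, their attributes, and the relationships between them. The object names, attributes, and valences below are invented, loosely following the Sim City example.

```python
# A toy model of "state": objects with attributes, plus directed,
# signed relationships between them. All names are invented.
from dataclasses import dataclass, field

@dataclass
class GameObject:
    name: str
    attributes: dict = field(default_factory=dict)  # e.g. {"pollution": "high"}

@dataclass
class Relationship:
    source: str     # influencing object
    target: str     # influenced object
    attribute: str  # which attribute of the target is affected
    valence: int    # +1 raises it, -1 lowers it

power_plant = GameObject("power plant", {"pollution": "high"})
homes = GameObject("residential area", {"land_value": "med"})
state = {
    "objects": [power_plant, homes],
    "relationships": [Relationship("power plant", "residential area", "land_value", -1)],
}

# The "specific why of state" is then a query: which relationships
# touch this attribute of this object, and with what valence?
why = [r for r in state["relationships"]
       if r.target == "residential area" and r.attribute == "land_value"]
print([(r.source, r.valence) for r in why])  # → [('power plant', -1)]
```

Note that the Sim City “?” tool and Democracy 2’s colored overlays can both be read as different renderings of exactly this kind of query.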

Transparency of mechanics can be broken out into different facets including:

  • State. What are the attributes and relationships of game elements?
    • The specific WHY of state: a precise explanation for an element’s attribute.
      • This gets at the notion of what the relationships are between elements and what their valence and effect on each other is. For instance, what attributes at the current time-slice contributed to the attribute of the object of interest?
    • The general WHY of state: a generic explanation for an element’s attribute
      • What are the general attributes which affect a given attribute of interest, i.e. what are the relationships and weights to other entities? How do you know the strength and directionality of those relationships?
  • Computation of State (How). How are changes of state computed? How does probability factor into the computation? What is the method of inference or equation governing state change?
  • Explanation for State Change (Why). What was the trigger, event, or decision that affected a state change?
  • Assumptions and Limitations of the Model. How is the model grounded and where does it fail to accurately portray the phenomenon of interest?
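To illustrate the “computation of state” and “explanation for state change” facets together, here is a toy update rule where the governing equation, the contributing terms, and the role of randomness are all explicit and reportable. The attributes, rates, and noise model are invented for the sake of the sketch.

```python
# Toy state update: the equation governing the change is explicit, so
# the same terms that compute the new state can also explain it.
# All rates are invented.
import random

def update_pollution(pollution, factories, parks, rng=random):
    """One time-slice of a toy simulation.

    Each factory adds 2 units of pollution, each park removes 1, and a
    random shock models uncertainty in the simulation.
    """
    shock = rng.uniform(-0.5, 0.5)
    new_value = max(pollution + 2 * factories - 1 * parks + shock, 0.0)
    # The "explanation for state change" is just the terms themselves:
    explanation = {"factories": 2 * factories, "parks": -parks, "shock": shock}
    return new_value, explanation

rng = random.Random(0)  # seeded for reproducibility
value, why = update_pollution(10.0, factories=3, parks=2, rng=rng)
print(round(value, 2), why["factories"], why["parks"])
```

Because the update returns its own term-by-term breakdown, an interface could surface “factories contributed +6, parks −2, random events the rest” directly, rather than leaving the player to reverse-engineer the model.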

Being fully transparent about all mechanics in a game may turn out to be a daunting and in fact unproductive enterprise, because of the granularity of transparency that would be needed to show the attributes and relationships between all game elements at all time-slices. Do users really need to know about every little state change? The answer is clearly no, but the job then falls to the journalist / programmer to decide which aspects of the model should be most saliently transparent in the final presentation. Another question to ponder is whether too much transparency in games can ruin the fun of them, and whether by explicating too much you undermine the medium’s ability to get people to comprehend models via interaction.

Usable Transparency

The NYT has recently been doing a lot of interactive pieces for the 2008 presidential election. One of these is an interactive chart presentation of different political polls done by different organizations. This isn’t quite game-y, though it could be if there were some additional features, like being able to compare one poll to another, or to try to predict a future poll based on current polls for points. Anyway, the important point here is that these visualizations are based on some simple polling data: things like the number of respondents and the percent in favor of each candidate. The Times is transparent about this data in two ways: (1) by providing a link explaining eligibility for polls to be included in the chart and (2) by providing a link to the raw database dump of the data. The eligibility link speaks to data quality issues that can arise in the collection of data, which can lead to invalid results or bias. The database dump link speaks to the ability to peer behind the graphic to the actual data used to produce it.

It’s useful to draw a distinction between data and information here: data being raw sensor readings or direct observations, and information being additional context and interpretation based on data. There’s a difference between what needs to be done in terms of transparency of data (which the Times did magnificently for the interactive polling piece) and transparency of information, because there is a layer of contextualization and interpretation that also needs to be explicated in order to be transparent about information. This touches on issues of individual and organizational biases, since interpretation itself is influenced by these outside sources. Moreover, interpretation can be encoded into the mathematical equations that produce information (derived values) from the raw data. Consider the mean of all polls for each candidate. This is a derived value, albeit one that most people understand readily, but it nonetheless takes an interpretive stance: that a mean of polling data collected under different circumstances is meaningful. As we move from simple means to more complexity, a data-driven model is really nothing more than a series of complex mathematical manipulations which interpret the data into a manageable form of information.
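A tiny example makes the data/information distinction concrete: the same raw poll numbers (invented here) yield different "information" depending on the interpretive choice of how to aggregate them.

```python
# Raw poll figures are data; the aggregates below are information,
# each embodying a different interpretive stance. Numbers are invented.
polls = [
    {"org": "Poll A", "respondents": 1000, "pct_candidate": 48.0},
    {"org": "Poll B", "respondents": 500,  "pct_candidate": 51.0},
    {"org": "Poll C", "respondents": 1500, "pct_candidate": 46.0},
]

# Interpretation 1: every poll is equally informative.
simple = sum(p["pct_candidate"] for p in polls) / len(polls)

# Interpretation 2: larger samples deserve more weight.
total_n = sum(p["respondents"] for p in polls)
weighted = sum(p["pct_candidate"] * p["respondents"] for p in polls) / total_n

print(round(simple, 2), round(weighted, 2))  # → 48.33 47.5
```

Neither number is "the data"; each is an interpretation, and transparency of information means disclosing which one was chosen and why.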

Here’s the crux: to be transparent about information (interpretation from data), journalists need a way to express interpretations or manipulations, mathematical though they may be, in a way that is easily understood. This has direct bearing on games for journalism, since the models through which games interpret the world will need to be explicated to consumers in the spirit of transparency. The problem, alas, is that math is impenetrable to many. Imagine the Times providing a third link for transparency, one which shows a nasty equation on top of which a simulation is built. This is important, because even though many people won’t take the time to understand it, those who do will be able to verify or understand the model. But what about the other people? They need Usable Transparency. I like to think that a simulation game like SimCity follows the principle of usable transparency: you don’t need to understand the simulation model to be able to make decisions in the game. The manual describes in prose what to do to alleviate trash problems, create more jobs, or reduce rush hour traffic jams. I think this is a useful paradigm that would serve journalists well in thinking about transparency as it relates to games. The collection of the data is important, check. The data itself is important, check. But the mathematical model which drives a simulation is important too. I would argue for a prose description of that model, itself footnoted with grounding equations.