Press "Enter" to skip to content

Situating Innovation

I was fortunate to be invited to give a talk at Boston University for their Digital Learning Initiative 2016 speaker series. Here is the blurb for the talk:

“Innovation,” as the word itself suggests, faces toward the future. In doing so, it often assumes an ahistorical stance, erasing social contexts and questions of justice, fairness, and power. In this upcoming presentation, Chris Gilliard argues that in the world of apps, databases, gadgets, and impenetrable algorithms, the word “innovation” should always be balanced by considerations of the history and values of technology, accompanied by the question “Innovation for whom?” He posits that without critique, a strong grasp of the history of a given technology, and the recognition that digital technologies (like any cultural objects) are intrinsically ideological, innovation is likely to maintain its built-in biases and continue to leave behind large portions of our communities. Focusing on “digital redlining,” Gilliard’s presentation touches on the active enforcement of technological race and class boundaries and assumptions through algorithms involved in decisions ranging from college admissions and IT information access policies to investment analysis, policing, and even an online fad like Pokémon Go.

Taking a cue from Rolin Moe and Jessamyn West, I thought I should post the talk on my blog. It’s also available on Periscope here, if you are interested.

Situating Innovation

On July 14, 2016, at a campaign rally in Virginia, Hillary Clinton, hoping to ride the wave of the popular game Pokémon Go, said this to her crowd of supporters:

“I don’t know who created Pokémon Go, but I’m trying to figure out how we get them to have Pokémon Go to the polls.”

Shortly after, it was announced that the Clinton staff would try to tap into the game’s popularity by using Pokémon Go stops and gyms as rallying points for registering voters:

“Campaign organizers for Hillary Clinton, like her Ohio organizing director Jennifer Friedmann, have started showing up at Pokéstops and gyms to convince Pokémon Go users to register to vote.”

The problem? Pokémon Go was created by Niantic, and a good chunk of Pokémon Go’s geographic data was based on Niantic’s earlier geolocation game, Ingress.

“…but because Ingress players tended to be younger, English-speaking men, and because Ingress’s portal criteria biased business districts and tourist areas, it is unsurprising that portals ended up in white-majority neighborhoods.”

Which led to headlines like this, in USA Today:

Is Pokémon Go Racist? How the App May be Redlining Communities of Color.

The result? A seemingly helpful voter registration tactic winds up being “unevenly distributed” away from minority communities because folks either don’t understand how the tech they are using works or don’t bother to investigate the underlying values and assumptions of the technology.

So at this point, you may be asking yourself: what the hell does this have to do with “innovation”? It might be helpful to go back a bit before we move forward.

In the United States, redlining began informally but was institutionalized in the National Housing Act of 1934. At the behest of the Federal Home Loan Bank Board, the Home Owners’ Loan Corporation (HOLC) created maps for America’s largest cities that color-coded the areas where loans would be differentially available. The difference among these areas was race. In Detroit, “redlining” was a practice that efficiently barred specific groups—African-Americans, Eastern Europeans, Arabs—from access to mortgages and other financial resources. We can still see landmarks such as the Birwood Wall, a six-foot-high, half-mile-long wall explicitly built to mark the boundary between white and black neighborhoods. Even though the evidence is clear, there is a general failure to acknowledge that redlining was a conscious policy, one that remained legal until the Fair Housing Act of 1968 and that continues to reappear in various guises.

Detroit’s Race Wall

Just within the past two weeks, ProPublica revealed that it’s possible for people advertising rental apartments to choose their audience based on a Facebook user’s “ethnic affinity” (Facebook’s code for race), meaning that one could place an ad for an apartment and dictate that no one who “ethnically identifies” as African American sees the ad, a pretty clear violation of the Fair Housing Act.

What does this have to do with digital tools, data analytics, algorithms, or innovation? A lot of my work focuses on the intersections of algorithmic filtering, broadband access, privacy, and surveillance and how choices made at these intersections often combine to wall off information and limit opportunities for students.

Just as we need to understand how the Birwood Wall limited financial opportunity, so also do we need to understand how the shape of current technologies controls the intellectual (and, ultimately, financial) opportunities of some college students. If we emphasize the consequences of differential access, of differences in privacy according to class, we see one facet of what’s often called the “digital divide”; if we ask how these consequences are produced, we are asking about digital redlining.

Digital redlining is a different thing: the creation and maintenance of technological policies, practices, and investment decisions that enforce class boundaries and discriminate against specific groups. The digital divide is a noun; it is the consequence of many forces. In contrast, digital redlining is a verb, the “doing” of difference, a “doing” whose consequences reinforce existing class structures. The digital divide is a lack; redlining is an act. In one era, redlining created differences in physical access to schools, libraries, and home ownership. Now, the task is to recognize how digital redlining is integrated into technologies, and especially education technologies, to produce the same kinds of discriminatory results.

Digital redlining becomes more obvious if we examine how community colleges are embedded in American class structures. For about 50 percent of U.S. undergraduates, higher education means enrollment in these institutions. These students are quite different from those at institutions like Boston University: 13% of them have been homeless, 17% are single parents, and 22% of those who attend full-time also work full-time. Many of them are poor, come from low-quality high schools, and have a class consciousness that leads them to view education as job training.

These students face powerful forces—foundation grants, state funding, and federal programs—that configure education as job training and service to corporate needs. These colleges sometimes rationalize this strategy by emphasizing community college as a means of escaping poverty, serving community needs, and avoiding student debt.

One of the most important things to realize about the concept of digital redlining is this: you are either attempting to design bias out or you are necessarily designing bias in. In other words, the process of making technology decisions MUST take into account how our inventions, decisions, and technologies affect diverse populations. So while there are (just as in the case of traditional redlining) conscious decisions about who gets what technology and what technology is “good enough” for certain populations, redlining also occurs when these decisions are made without regard for their effects on diverse populations. These decisions occur at educational institutions daily and at all levels: when instructors decide to embed videos in their Learning Management System or find some “cool” new ed-tech to use in the classroom without scrutinizing its privacy policy; when administrators decide which new technologies to incorporate without regard for how they handle student data privacy; and when Chief Information Officers set a school’s Acceptable Use Policy, asserting what kinds of tech will and will not be allowed and what counts as legitimate use of the network.

When I was trying to come up with a name for the talk, the name I came up with was “situating innovation,” and when I parse that term I think not only about figuring out what innovation is (how it’s defined), but where innovation takes place: how you find it on a map. Of course, the narrative spun around Silicon Valley situates it as the home of Innovation. But what does Innovation look like? Can we see it on a map? Conversely, can we see where it isn’t? For while Silicon Valley sells itself as the home of innovation, it also has a massive homeless population, exacerbated by the meteoric rise in housing and rent prices: one of the prices of innovation. So, sometimes Innovation looks like this: the Silicon Valley Triage Tool. (Yes, Silicon Valley has an algorithm that determines which homeless people get allocated resources.)

“The algorithm, known as the Silicon Valley Triage Tool, draws on millions of pieces of data to predict which homeless individuals will use the most public services like emergency rooms, hospitals, and jails. The researchers behind the project calculate that identifying and quickly housing 1,000 of these high-cost people could save more than $19 million a year—money that could be rolled into providing more housing for homeless people.”

One of the problems? “It gives higher scores to men because they tend to rack up higher public service costs.”

We can also see how these decisions play out with other tech giants. For instance, Amazon recently received a huge amount of criticism for the way its algorithm determined which communities could get same-day delivery.

[Image: map of Amazon same-day delivery areas in Boston]
Image credit: http://www.bloomberg.com/graphics/2016-amazon-same-day/

“The most striking gap in Amazon’s same-day service is in Boston, where three ZIP codes encompassing the primarily black neighborhood of Roxbury are excluded from same-day service, while the neighborhoods that surround it on all sides are eligible.”

Similarly, in March, the “ride-sharing” service Uber received its own accusations of redlining:

[Image: map of Uber wait times across neighborhoods]
Image credit: https://www.washingtonpost.com/news/wonk/wp/2016/03/10/uber-seems-to-offer-better-service-in-areas-with-more-white-people-that-raises-some-tough-questions/

“Census tracts with more people of color (including Black/African American, Asian, Hispanic-Black/African American, and Hispanic/Asian) have longer wait times. In other words, if you’re in a neighborhood where there are more people of color, you’ll wait longer for your uberX.”

We can also see it in maps of stingray surveillance in inner-city Baltimore.

[Image: map of stingray surveillance in Baltimore]
Image credit: http://www.citylab.com/crime/2016/10/racial-disparities-in-police-stingray-surveillance-mapped/502715/

So in each of these examples we can *see* innovation in techniques, but too often the demographics of most Silicon Valley companies make them unable to see the bias in their tech; in how resources are distributed; in who gets targeted; in who gets left out or left behind. “Innovation” comes from using statistical models delivered via technology to both justify and conceal existing bias.

Is there such a thing as innovating out or down? [Or is being innovated out the same as being disrupted?] Of course, if we can situate innovation, we can map it; we can see the fissures; we can see who is left out. We can perhaps see where innovating out is a conscious decision, and where it’s a result of not understanding the tech, not interrogating its origins, or not looking for the ways that historical bias is reinscribed into the tech.

In the case of “Pokémon Go to the polls,” certainly Secretary Clinton wasn’t advocating that voting outreach seek to further disenfranchise black and brown communities. Nor (likely, anyway) were Amazon and Uber thinking about how to work their services around not serving minority communities. Much more likely in those cases is that the companies’ secret-sauce algorithms sought to maximize profits, and the maps those algorithms delivered “just happened” to do that thing.

In the case of the stingrays and surveillance of minority communities, that’s very much a conscious choice, and perhaps something we can talk about later.

The stories that we are repeatedly told about innovation are situated in specific places, and those innovations are done by certain people. Those stories often tell us that “Innovations” (that’s Innovation with a big I) happen at Harvard, at MIT, at BU, in Silicon Valley. Those stories don’t tell us about the little “i” innovations going on at suburban community colleges or in inner-city schools. Lacking the capital or the ability to scale, many of those local innovations are never recognized. Instead, places like mine often see the results of big “I” innovation handed down to them, often with little thought given to the kinds of students at our institutions.

I’ve come up with my own formula:

Innovation minus capital = “hustling”; meaning, the story of innovation is as much about money and who gets to tell the story as it is about creating improvement and change.

This more than anything gets into the “why should I care?” portion of the talk. When you look at the stats:

Half of all college students are CC students

13% of CC students have dealt with homelessness at some point in their lives

33% of low- and moderate-income families have dial-up, mobile-only, or no personal access to the web

17% are single parents

Most of these stats don’t describe the students at a place like Boston University; however, we should consider that systems-level solutions, and any “innovations” required to make them, need to be sanity-checked by the people those systems allegedly help.

Thinking about redlining, and the trickle-down effects of innovation, we need to look at these effects both in terms of what happens individually and what happens institutionally. As mentioned earlier, tech design and policy decisions have concrete effects on individual users, who may have a different set of tech skills, different financial resources, and different needs. They may be the targets of surveillance. We might think of a student with a smartphone who is required to view data-gobbling videos in order to pass their class. This is a small-scale decision by a professor (although it may seem rather large-scale to that individual student). But in this case, the student has been “redlined” in the sense that their lack of money and the choices, conscious or otherwise, made by their professor create conditions where the student is walled off from success.

But we should also think about these things institutionally. My institution, like a lot of others, looks to R1s to see what they are doing, and we often follow suit. We are (for the most part) not a trend-setting, dare I say “Innovative,” place, and much of that is because we don’t have the money. We get the tech as it flows down the pipeline, often without consideration of how different populations are served (or not) by a particular tech. A look at Edsurge’s “Community College: Digital Innovations Next Frontier” article lists 10 innovations, most of which fall into two main categories: skills training and increased surveillance (under the guise of “engagement”).

When talking about tech, we often hear William Gibson’s quote “the future is here, it’s just not evenly distributed.”

But there’s another way to think about it: even with the same tech, equally distributed, “the future” or “innovation” means different things to different folks. Herbert Marcuse talks about this type of thing in One-Dimensional Man:

“If the worker and his boss enjoy the same tv program and visit the same resort places, if the typist is as attractively made up as the daughter of her employer, if the Negro owns a Cadillac, if they all read the same newspaper, then this assimilation indicates not the disappearance of classes, but the extent to which the needs and satisfactions that serve the preservation of the Establishment are shared by the underlying population.”

So, you may drive the same car as your boss, but that doesn’t mean you are in the same class. The flattening out of consumer goods markets often means most of us use the same technologies, but we take for granted that this puts us in the same class or that we use them the same way. Many of my students have the same (or better) version of the iPhone as I do. But it’s a mistake to think that their phones have the same data plan, that we have the same plan of action if our phones break, or even that we use our phones the same way.

Pew Research recently put out a report finding that 15 percent of Americans between the ages of 18 and 29, and 13 percent of Americans who make less than $30,000 per year (24 million people!), are smartphone-dependent, meaning they can access the internet only through their smartphones. So “small things” like removing the headphone jack can mean costly consequences for those populations.

Surveillance, tracking, predictive analytics: these mean something different to the Ivy League white kid whose cafeteria uses an iris scan than those same tracking technologies do to the Albanian community college kid whose prof has a dashboard in front of her predicting the likelihood of success in the class. These are both versions of innovation. We have to be careful that in looking forward, innovation doesn’t neglect history and the real-life conditions of those who are subjected to the innovations.

So I started the talk with a story about the election, and I’m going to end with a story about the election—and tacos.

In September, Donald Trump surrogate Marco Gutierrez, who founded “Latinos for Trump,” warned that if immigration wasn’t curbed, there would be “taco trucks on every corner.”

Of course he was widely ridiculed, but this declaration set into motion the “Guac the Vote” campaign, in which taco trucks doubled as voter registration sites. Certainly, deciding where those trucks were placed was a much different process than placing registration booths at Pokéstops.

[Image: taco truck hosting a voter registration booth in Houston]
Image credit: http://www.rawstory.com/2016/09/register-to-vote-get-a-taco-houston-taco-trucks-put-voter-registration-booths-on-every-corner/

The “algorithm” for placing these stops was not a black box. It was the conscious recognition of ethnic geographies and their political beliefs. The black box was not black at all; in fact, it openly revealed its aims and assumptions in ways that are too often foreign to the educational “black boxes” that Frank Pasquale describes. It was not politics-free. It was not free of bias, nor did it pretend to be.

[Image: Old West United Methodist Church in Boston, a Pokéstop, displaying a Black Lives Matter banner]

I want to end on this picture of Old West United Methodist Church in Boston. I think it’s an important image because it combines many of the things I’ve talked about: geography, technology, race, and social justice. So, while many consider it an imperative to innovate and look forward, it’s just as important to consider the ethical implications of that “forward thinking” and to make sure that no one gets left behind.


2 Comments

  1. Alan Levine, November 8, 2016

    Thanks for sharing your talk in both formats; for me, being able to read and re-read your words, like Audrey Watters’ talks, helps so much in understanding. The data, the maps, are hard to ignore. I must admit I had never heard of “stingray” before, but I recognized exactly the patterns in northwest Baltimore where I grew up. Impossible to ignore.

    Much appreciated, much to do.

    • Chris, November 8, 2016

      Thanks! I’m slowly easing into the public scholarship realm!

      The stingray tech is especially pernicious in how law enforcement is able to use them in almost complete secrecy. The ACLU is doing some really great work in bringing this to light: https://www.aclu.org/map/stingray-tracking-devices-whos-got-them

