
Hypervisible Exchanges

Shaming and Framing: Imagining students at an education conference

“Online Proctoring: All Day and All of the Night!”

“Live Online Proctoring Redefined!”

“Switch to the Gold Standard” (of student proctoring)

“Making Any Mode of Proctoring Possible.”

“Make Them Prove They Aren’t Cheating!” (okay, I made this one up)

These slogans and their accompanying images are the gatekeepers standing between me and the conference upstairs. If attendees use the escalator, they will be greeted at the top by the image of an all-seeing eye. If they choose the elevator, the doors close and form a gigantic hand, covered in mathematical formulas; presumably it’s the hand of a cheating student trying to pull a fast one on their professor. But quickly I learn that professors have nothing to fear if they only buy “Examity,” which promises “better test integrity,” all done in service of students who want to succeed the “right way.”


Choose your own (surveillance) adventure
View from inside elevator.

If the entry into the registration/vendor space of the conference is any indication, students are rampant cheaters who can only be stopped by the vigilant vendors, ever-ready with their surveillance tools. The emblems of these companies are equal parts creepy and unintentionally ironic: a lock and shield; what I think is supposed to be a webcam, though it more closely resembles a bathysphere or the helmet of a deep-sea diving suit; and a giant owl with a graduation cap. I imagine the designers thought a peephole or a gigantic pair of eyes bulging through a computer screen was too obvious (or perhaps those images are already taken).

Several essays in recent weeks offer important insights that are useful in thinking about this setting. Audrey Watters talks about how we confuse surveillance with care.

Joshua Eyler recently wrote an important essay about what seem like weekly appearances of pieces that deal in student shaming.

Jesse Stommel also addresses student shaming, and the implications for doing so. He writes: “We can’t get to a place of listening to students if they don’t show up to the conversation because we’ve already excluded their voice in advance by creating environments hostile to them and their work.”

These issues matter, and what I would add to them is the issue of student framing; in other words, how are students imagined when we attend a conference? Who do we think our students are? Who do vendors think our students are? What is their investment in getting us to imagine students in that way? Certainly there are other ways to assess students that don’t involve forcing them to be watched either by an algorithm or some unknown proctor on the other end.

Along with the privacy issues (some of which are discussed here and here), subjecting students to this kind of scrutiny casts every student in the role of a potential cheater. Not only is it invasive; pedagogically, it sets up the classroom as a space where every student is trying to put one over on the professor. This is hardly ideal pedagogy or a sound way to build trust in a classroom.

I’ve written before about the absence of student voice at conferences (here), and it’s clear that this space is not one that was designed with students in mind. But the question I have is this: what is the effect of attending a three-day conference where the looming image is one of the dishonest student? Is this where “Innovation” takes us—optimizing the spying capabilities of educational institutions? Silicon Valley would be proud.

Educon 2.9 and “Student Voice” or “Finding a Glimmer of Hope in a Time of Chaos”

As I arrived home from my weekend attending Educon 2.9, I ran into the remnants of the airport protest in my city, part of the nationwide response to the Executive Order banning Muslim immigration. One of the protest signs read “We already learned this lesson 6 million times.” Unfortunately, it seems that many of us did not learn that lesson well enough, and we will need as many voices as we can get to speak out against injustices. Many of our students are ready to offer the voices that we need.

“Student voice” is one of those terms that I hear all the time, but rarely do I see it put into action, and when I do, it’s not the voices of the kinds of students I teach. Rather, it’s students from Ivy League institutions or students with tremendous privilege. Let’s get this out of the way: I’m not saying we shouldn’t hear from these students; I am simply saying we shouldn’t only hear from these students, and these students shouldn’t be looked at as representative of “student voice” when (as of 2014) 42% of all college students are community college students, and these students are disproportionately low-income.

This weekend, thanks to contributions from Common Sense and my home institution, my colleagues and I traveled with four of our students (Adrian, Kelsey, Kevin, and Orion) to Educon 2.9 so our students could present the work they’ve done on digital redlining and student data privacy. The students kicked ass. Of course, you don’t have to take my word for it. You can watch their Virtually Connecting session here or their presentation along with Q&A here. I can only imagine the level of courage required for a first-year college student to hop on a plane, attend their first academic conference, and present their work in front of a group of strangers. If that doesn’t sound terrifying to you, you probably don’t remember what it’s like to be 18 and just starting out on your scholarly journey.

I can’t pretend to be surprised that the students did well; they put in the work, and they are among the brightest and most dedicated students I’ve ever had, whether I’m talking about my students at the community college, my students at a small liberal arts school, or my students at the two large state schools where I’ve taught. But, as I watched my students present, I couldn’t help but think about how often my students, the kind of institutions they attend, and their generation are maligned. Students at community colleges are too often seen as inferior attendees of inferior institutions. Here I’m thinking about the degree to which education technologies for community college students use surveillance, analytics, and tracking under the guise of “engagement” and “retention,” as well as my previous discussions of digital redlining and ideas of what kind of information access is good enough for what kinds of students. In addition, I dare you to check Google’s autocomplete for “millennials are…”

I make this last point about millennials because all of us attended Educon in the shadow of the fallout from the Executive Order. While this generation is getting hammered for being awful, the generations before them are the ones busy wrecking the planet. This weekend was filled with moments of pride and inspiration as I watched students take command of their intellectual path, interspersed with moments of disgust as I watched the bigotry and hatred of some of our leaders and the cowardice of others, and finally an infusion of strength as I watched people turn out to demand that those in power recognize our shared humanity, no matter our nationality or religion.

What do all these things have to do with each other? Everything I know about my students, I learned from talking to them, spending time with them, and reading their work: not from a dashboard, readout, or spreadsheet, and certainly not from the latest garbage hot take about how awful today’s young people are. In the age of DJT, I’ve read a lot of proselytizing about what we as teachers can do or need to do. Among the many things reaffirmed for me by my students this weekend: good teaching doesn’t scale. If professors, teachers, and instructors have any role in helping prepare our students to clean up the mess we’ve made and continue to move toward a more just world, we need to do a better job of finding out who they are and what motivations shape their world.



Situating Innovation

I was fortunate to be invited to give a talk at Boston University for their Digital Learning Initiative 2016 speaker series. Here is the blurb for the talk:

“Innovation,” as the word itself suggests, faces toward the future. In doing so, it often assumes an ahistorical stance, erasing social contexts and questions of justice, fairness, and power. In this upcoming presentation, Chris Gilliard argues that in the world of apps, databases, gadgets, and impenetrable algorithms, the word “innovation” should always be balanced by considerations of the history and values of technology, accompanied by the question “Innovation for whom?” He posits that without critique, a strong grasp of the history of a given technology, and the recognition that digital technologies (like any cultural objects) are intrinsically ideological, innovation is likely to maintain its built-in biases and continue to leave behind large portions of our communities. Focusing on “digital redlining,” Gilliard’s presentation touches on the active enforcement of technological race and class boundaries and assumptions through algorithms involved in decisions ranging from college admissions and IT information access policies to investment analysis, policing, and even an online fad like Pokémon Go.

Taking a cue from Rolin Moe and Jessamyn West, I thought I should post the talk on my blog. It’s also available on Periscope here, if you are interested.


Situating Innovation

On July 14, 2016, at a campaign rally in Virginia, Hillary Clinton, hoping to latch on to the wave of the popular game Pokemon Go, said this to her crowd of supporters:

“I don’t know who created Pokemon Go, but I’m trying to figure out how we get them to have Pokemon Go to the polls.”

Shortly after, it was announced that the Clinton staff would try to tap into the game’s phenomenon by using Pokemon Go stops and gyms as rallying points for registering voters:

“Campaign organizers for Hillary Clinton, like her Ohio organizing director Jennifer Friedmann, have started showing up at Pokéstops and gyms to convince Pokémon Go users to register to vote.”

The problem? Pokemon Go was created by Niantic, and a good chunk of PG’s geographic data was based on Niantic’s geolocation game, Ingress.

“…but because Ingress players tended to be younger, English-speaking men, and because Ingress’s portal criteria biased business districts and tourist areas, it is unsurprising that portals ended up in white-majority neighborhoods.”

Which led to headlines like this, in USA Today:

Is Pokemon Go Racist? How the App May be Redlining Communities of Color.

The result? A seemingly helpful voter registration initiative winds up being “unevenly distributed” away from minority communities because folks either don’t understand how the tech they are using works or don’t bother to investigate the underlying values and assumptions of the technology.

So at this point, you may be asking yourself: what the hell does this have to do with “innovation?” It might be helpful to go back a bit before we move forward.

In the United States, redlining began informally but was institutionalized in the National Housing Act of 1934. At the behest of the Federal Home Loan Bank Board, the Home Owners Loan Corporation (HOLC) created maps for America’s largest cities that color-coded the areas where loans would be differentially available. The difference among these areas was race. In Detroit, “redlining” was a practice that efficiently barred specific groups—African-Americans, Eastern Europeans, Arabs—from access to mortgages and other financial resources. We can still see landmarks such as the Birwood Wall, a six-foot-high, half-mile-long wall explicitly built to mark the boundary between white and black neighborhoods. Even though the evidence is clear, there is a general failure to acknowledge that redlining was a conscious policy that remained legal until the Fair Housing Act of 1968 and which continues to reappear in various guises.

Detroit’s Race Wall

Just within the past two weeks, ProPublica revealed that it’s possible for people advertising rental apartments to choose their audience based on a Facebook user’s “ethnic affinity” (Facebook’s code for race), meaning that one could place an ad for an apartment and dictate that no people who “ethnically identify” as African American see the ad, a pretty clear violation of the Fair Housing Act.

What does this have to do with digital tools, data analytics, algorithms, or innovation? A lot of my work focuses on the intersections of algorithmic filtering, broadband access, privacy, and surveillance and how choices made at these intersections often combine to wall off information and limit opportunities for students.

Just as we need to understand how the Birwood Wall limited financial opportunity, so also do we need to understand how the shape of current technologies control the intellectual (and, ultimately, financial) opportunities of some college students. If we emphasize the consequences of differential access, of differences in privacy according to class, we see one facet of what’s often called the “digital divide”; if we ask about how these consequences are produced, we are asking about digital redlining.

Digital redlining is a different thing: the creation and maintenance of technological policies, practices, and investment decisions that enforce class boundaries and discriminate against specific groups. The digital divide is a noun; it is the consequence of many forces. In contrast, digital redlining is a verb, the “doing” of difference, a “doing” whose consequences reinforce existing class structures. The digital divide is a lack; redlining is an act. In one era, redlining created differences in physical access to schools, libraries, and home ownership. Now, the task is to recognize how digital redlining is integrated into technologies, and especially education technologies, to produce the same kinds of discriminatory results.

Digital redlining becomes more obvious if we examine how community colleges are embedded in American class structures. For about 50 percent of U.S. undergraduates, higher education means enrollment in these institutions. These students are quite different from those of institutions like Boston University: 13% of them have been homeless, 17% are single parents, and 22% of those who attend full-time also work full-time. Many of them are poor, from low-quality high schools, and they have a class-consciousness that makes them view education as job training.

These students face powerful forces—foundation grants, state funding, and federal programs—that configure education as job training and service to corporate needs. These colleges sometimes rationalize this strategy by emphasizing community college as a means of escaping poverty, serving community needs, and avoiding student debt.

One of the most important things to realize about the concept of digital redlining is this: you are either attempting to design bias out or you are necessarily designing bias in. In other words, the process of making technology decisions MUST take into account how our inventions, decisions, and technologies affect diverse populations. So while there are (just as in the case of traditional redlining) conscious decisions about who gets what technology and what technology is “good enough” for certain populations, redlining also occurs when these decisions are made without regard for their effects on diverse populations. These decisions occur at educational institutions daily and at all levels: when instructors decide to embed videos in their Learning Management System or find some “cool” new ed-tech to use in the classroom without scrutinizing its privacy policy; when administrators decide which new technologies to incorporate without regard for how they handle student data privacy; when Chief Information Officers decide on the Acceptable Use Policy of a school and assert what kinds of tech will and will not be allowed or what the legitimate uses of the network are.

When I was trying to come up with a name for the talk, the name I came up with was “situating innovation,” and when I parse that term I think about not only figuring out what innovation is (how it’s defined), but where innovation takes place—how you find it on a map. Of course, the narrative that is spun around Silicon Valley situates it as the home of Innovation. But what does Innovation look like? Can we see it on a map? Conversely, can we see where it isn’t? For while Silicon Valley sells itself as the home of innovation, it also has a massive homeless population, exacerbated by the meteoric rise in housing and rent prices—one of the prices of innovation. So, sometimes Innovation looks like this: the Silicon Valley Triage Tool. (Yes, Silicon Valley has an algorithm that determines which homeless people get allocated resources):

“The algorithm, known as the Silicon Valley Triage Tool, draws on millions of pieces of data to predict which homeless individuals will use the most public services like emergency rooms, hospitals, and jails. The researchers behind the project calculate that identifying and quickly housing 1,000 of these high-cost people could save more than $19 million a year—money that could be rolled into providing more housing for homeless people.”

One of the problems? “It gives higher scores to men because they tend to rack up higher public service costs.”

We can also see how these decisions play out with other tech giants. For instance, Amazon recently received a huge amount of criticism for the way its algorithm determined which communities could get same-day delivery.


“The most striking gap in Amazon’s same-day service is in Boston, where three ZIP codes encompassing the primarily black neighborhood of Roxbury are excluded from same-day service, while the neighborhoods that surround it on all sides are eligible.”

Similarly, in March, “ride-sharing” service Uber also received strong accusations of redlining:


“Census tracts with more people of color (including Black/African American, Asian, Hispanic-Black/African American, and Hispanic/Asian) have longer wait times. In other words, if you’re in a neighborhood where there are more people of color, you’ll wait longer for your uberX.”

We can also see it in the maps of stingray surveillance of inner-city Baltimore.


So in each of these examples we can *see* innovation in techniques, but too often the demographics of most Silicon Valley companies make them unable to see the bias in their tech; in how resources are distributed; in who gets targeted; in who gets left out or left behind. “Innovation” comes from using statistical models delivered via technology to both justify and conceal existing bias.

Is there such a thing as innovating out/down? [or is being innovated out being disrupted?] Of course if we can situate innovation, we can map it, we can see the fissures, we can see who is left out. We can perhaps see where innovating out is a conscious decision, and we can see where it’s a result of not understanding the tech, not interrogating its origins, or not looking for the ways that historical bias is reinscribed into the tech.

In the case of “Pokemon Go to the polls,” Secretary Clinton certainly wasn’t advocating that voting outreach seek to further disenfranchise black and brown communities. Nor (likely, anyway) were Amazon and Uber thinking about how to work their services around not serving minority communities. Much more likely in those cases is that the companies’ secret-sauce algorithms sought to maximize profits, and the maps the algorithms delivered “just happened” to do that.

 In the case of the stingrays and surveillance of minority communities, that’s very much a conscious choice, and perhaps something we can talk about later.

The stories that we are repeatedly told about innovation are situated in specific places, and those innovations are done by certain people. Those stories often tell us that “Innovations” (that’s Innovation with a big I) happen at Harvard, at MIT, at BU, in Silicon Valley. Those stories don’t tell us about the little “i” innovations going on at suburban community colleges, or in inner city schools. Lacking the capital or the ability to scale, many of those local innovations are never recognized. Instead, places like mine often see the results of big “I” innovation handed down to them, often with little thought given to the kinds of students at our institutions.

I’ve come up with my own formula:

Innovation minus capital = “hustling,” meaning the story of innovation is as much about money and who gets to tell the story as it is about creating improvement and change.

This more than anything gets into the “why should I care?” portion of the talk. When you look at the stats:

Half of all college students are CC students

13% of CC students have dealt with homelessness at some point in their lives

33% of low & moderate income families have either dial up, mobile only, or no personal access to the web

17% are single parents

Most of these stats don’t describe the students at a place like Boston University; however, we should consider that systems-level solutions, and any “innovations” required to make them, need to be sanity-checked by the people allegedly helped by these systems.

Thinking about redlining and the trickle-down effects of innovation, we need to look at these effects both individually and institutionally. As mentioned earlier, tech design and policy decisions have concrete effects on individual users, who may have a different set of tech skills, different financial resources, and different needs. They may be the targets of surveillance. We might think of a student with a smartphone who is required to view data-gobbling videos in order to pass their class. This is a small-scale decision by a professor (although it may seem rather large-scale for that individual student). But in this case, the student has been “redlined” in the sense that their lack of money and the choices, conscious or otherwise, made by their professors create conditions where the student is walled off from success.

But we should also think about these things institutionally. My institution, like a lot of others, looks to R1s to see what they are doing, and we often follow suit. We are (for the most part) not a trend-setting, dare I say “Innovative,” place, and much of that is because we don’t have the money. We get the tech as it flows down the pipeline, often without consideration of how different populations are served (or not) by a particular tech. A look at Edsurge’s “Community College: Digital Innovations Next Frontier” article lists 10 innovations, most of which fall into two main categories: skills training and increased surveillance (under the guise of “engagement”).

When talking about tech, we often hear William Gibson’s quote “the future is here, it’s just not evenly distributed.”

But there’s another way to think about it: even with the same tech, equally distributed, “the future” or “innovation” means different things to different folks. Herbert Marcuse talks about this type of thing in One-Dimensional Man:

“If the worker and his boss enjoy the same tv program and visit the same resort places, if the typist is as attractively made up as the daughter of her employer, if the Negro owns a Cadillac, if they all read the same newspaper, then this assimilation indicates not the disappearance of classes, but the extent to which the needs and satisfactions that serve the preservation of the Establishment are shared by the underlying population.”

So, you may drive the same car as your boss, but that doesn’t mean you are in the same class. The flattening out of consumer goods markets often means most of us use the same technologies, but we too easily take for granted that this puts us in the same class or that we use them the same way. Many of my students have the same (or better) versions of the iPhone than I do. But it’s a mistake to think that their phones have the same data plan, that we have the same plan of action if our phones break, or even that we use our phones the same way.

Pew Research recently put out a report finding that 15 percent of Americans between the ages of 18 and 29, and 13 percent of Americans who make less than $30,000 per year (24 million people!), are smartphone-dependent, meaning they can only access the internet through their smartphones. So “small things” like removing the headphone port can mean costly decisions for those populations.

Surveillance, tracking, predictive analytics: these mean something different to the Ivy League white kid whose cafeteria uses an iris scan than those same tracking technologies do to the Albanian community college kid whose prof has a dashboard in front of her predicting the likelihood of success in the class. These are both versions of innovation. We have to be careful that in looking forward, innovation doesn’t neglect history and the real life conditions of those who are subjected to the innovations.

So I started the talk with a story about the election, and I’m going to end with a story about the election—and tacos.

In October, Donald Trump surrogate Marco Gutierrez, who founded “Latinos for Trump,” warned that if immigration wasn’t curbed, there would be “taco trucks on every corner.”

Of course he was widely ridiculed, but this declaration set into motion the “Guac the vote” campaign, where taco trucks doubled as voter registration sites. Certainly deciding where those trucks were placed was a much different process than placing registration booths at PokeStops.


The “algorithm” for placing these trucks was not a black box. It was the conscious recognition of ethnic geographies and their political beliefs. The black box was not black at all, and in fact openly revealed its aims and assumptions in ways that are too often foreign to the educational “black boxes” that Frank Pasquale describes. It was not politics-free. It was not free of bias, nor did it pretend to be.


I want to end on this picture of Old West United Methodist Church in Boston. I think it’s an important image because it combines many of the things I’ve talked about: geography, technology, race, and social justice. So, while many consider it an imperative to innovate and look forward, it’s just as important to consider the ethical implications of that “forward thinking” and to make sure that no one gets left behind.




Digital Redlining


“Digital Redlining, Access, and Privacy” is an essay that Hugh Culik and I did for Common Sense where we discuss digital redlining as “a set of education policies, investment decisions, and IT practices that actively create and maintain class boundaries through strictures that discriminate against specific groups. The digital divide is a noun; it is the consequence of many forces. In contrast, digital redlining is a verb, the ‘doing’ of difference, a ‘doing’ whose consequences reinforce existing class structures. In one era, redlining created differences in physical access to schools, libraries, and home ownership. Now, the task is to recognize how digital redlining is integrated into edtech to produce the same kinds of discriminatory results.”


Data Mining and Students

It seems like ages ago that Hugh Culik and I recorded a podcast with Les Howard about data mining and students, but it was only last February. Most of these issues remain, and in many cases they have only become more severe in terms of tracking and analyzing students with little regard for their privacy and often without their consent.


First Post


I’m finally coming to terms with the notion that I can’t use the name “hypervisible” and not have a public-facing space (of “my own”). So what goes in this surveilled space? My plan is to use it for a few purposes: as a space to work out some of the things I’m thinking and writing about; as a central space for the public scholarship I do at the intersections of pedagogy, data, surveillance, access, digital redlining, and privacy; and as a space to engage other people doing similar work.

I’ve been tremendously influenced and challenged by the work of several brilliant public scholars: Audrey Watters, Kate Bowles, Tressie McMillan Cottom, Bill Fitzgerald, Frank Pasquale, Paul Prinsloo, Cathy O’Neil, Jeffrey Alan Johnson, Autumm Caines . . . and ultimately I want this space to be a place where I might ruminate (along with whoever may be paying attention at the time) on the work that’s done by these folks and folks like them.
