

Power, Polarization, and Tech

This essay is my provocation for “Engagement in a Time of Polarization,” Davidson Now’s pop-up MOOC.


In Howard Zinn’s A People’s History of the United States, he writes about early colonists and how the rich were feeling the heat of poor white folks and poor black folks associating too closely with each other. The fear was that the poor, despite being different races, would unite against their wealthy overlords. Shortly after, the overlords began to pass laws that banned fraternization between the races. The message to poor whites was clear: “you are poor, but you are still far better than that poor black person over there, because you are white.”

Polarization is by design, for profit.

Certainly, for as long as there have been ways to segment individuals and groups, there has been polarization: the intentional pitting of those groups against each other so that they might focus on the ways they are different rather than the reasons they might unite. This doesn’t always need to be about race, as in the Zinn example. Polarization can and does occur according to class, gender and gender identity, geography, nationality…but when and where it occurs, it tends to be in service of the powerful and the status quo, not as some “natural” occurrence, but as a result of dedicated efforts to create it.

It’s hard not to talk about Trump when talking about polarization in the present day. In her essay “Finding Hope in a Loveless Place,” Tressie McMillan Cottom writes out loud the thing that we aren’t supposed to say about the election of Trump: “This was America and I knew it was because for me it always has been.”

For many, particularly black and brown women and men, and LGBTQ folks, polarization isn’t new: we’ve been here all along. The “digital” in digital polarization has made more visible what for so long could only be seen and understood if you believed the stories people have been telling since this country’s beginnings.

In a recent Wired essay, Zeynep Tufekci expertly detailed where we are and how we got here in the digital sense. We’ve been consistently fed the lie of the “marketplace of ideas” fetishized by the Silicon Valley bros who make the tech we use, and we gobbled up the narrative they were peddling. When we look at digital technology and platforms, it’s always instructive to remember that they exist to extract data. The longer you are on the platform, the more you produce and the more can be extracted from you. Polarization keys engagement, and engagement/attention are what keep us on platforms. As Tristan Harris, the former Google Design Ethicist and one of the earliest Silicon Valley figures to have the scales fall from his eyes, told NBC News: “What people don’t know about or see about Facebook is that polarization is built into the business model. Polarization is profitable.”

David Golumbia’s description of the scholarly concept of Cyberlibertarianism is useful here (emphasis mine):

In perhaps the most pointed form of cyberlibertarianism, computer expertise is seen as directly applicable to social questions.  In The Cultural Logic of Computation, I argue that computational practices are intrinsically hierarchical and shaped by identification with power. To the extent that algorithmic forms of reason and social organization can be said to have an inherent politics, these have long been understood as compatible with political formations on the Right rather than the Left.

So the answer to the cui bono of digital polarization is the wealthy, the powerful, the people with so much to gain from promoting systems that maintain the status quo, despite the language of freedom, democratization, and community that features so prominently when people like Facebook co-founder Mark Zuckerberg or Twitter co-founder and CEO Jack Dorsey talk about technology. Digital technology in general, and platforms like Facebook, YouTube, and Twitter specifically, exist to promote polarization and maintain the existing concentration of power.

To the extent that Silicon Valley is the seat of technological power, it’s useful to note that the very ground of what we now call Silicon Valley is built on a foundation of segregating black and white workers. Richard Rothstein’s The Color of Law describes auto workers in 1950s California:

So in 1953 the company (Ford) announced it would close its Richmond plant and reestablish operations in a larger facility fifty miles south in Milpitas, a suburb of San Jose, rural at the time. (Milpitas is a part of what we now call Silicon Valley.)

Because Milpitas had no apartments, and houses in the area were off limits to black workers—though their incomes and economic circumstances were like those of whites on the assembly line—African Americans at Ford had to choose between giving up their good industrial jobs, moving to apartments in a segregated neighborhood of San Jose, or enduring lengthy commutes between North Richmond and Milpitas. Frank Stevenson bought a van, recruited eight others to share the costs, and made the drive daily for the next twenty years until he retired. The trip took over an hour each way.

Quite literally, Silicon Valley is built on the ground of segregation.

Tech platforms are, to borrow a legal term, fruit of the poisonous tree. The segregated ground of Silicon Valley is both the literal and figurative foundation for the platforms we use, and the design of these platforms, well-aligned with their racist history, promotes notions of free speech and community that are designed to protect the folks in society who already benefit from the most protections. ProPublica’s exposé on how Facebook understands the notion of a protected class on their platform is telling:

In the wake of a terrorist attack in London earlier this month, a U.S. congressman wrote a Facebook post in which he called for the slaughter of “radicalized” Muslims. “Hunt them, identify them, and kill them,” declared U.S. Rep. Clay Higgins, a Louisiana Republican. “Kill them all. For the sake of all that is good and righteous. Kill them all.”

Higgins’ plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.

But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.

“All white people are racist. Start from this reference point, or you’ve already failed,” Delgado wrote. The post was removed and her Facebook account was disabled for seven days.

Wired Magazine’s investigation of Facebook’s formula for protecting speech shows how the platform privileges whiteness; Facebook’s logic dictates the following:

Protected category + Protected category = Protected category

Protected category + Unprotected category = Unprotected

To illustrate this, Facebook’s training materials present three groups—“white men,” “female drivers,” and “black children”—and ask which is protected from hate speech. The answer is “white men.” Why? Because “white” + “male” = protected class + protected class, and thus the resulting class of people is protected. Counterintuitively, because “black” (a protected class) modifies “children” (not a protected class), that group is unprotected.
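To make the logic concrete, here is a minimal sketch of the rule as described above. This is an illustration of the reported training-slide logic, not Facebook’s actual code; the category list and the function name are invented for the example.

```python
# Minimal sketch of the "protected + protected = protected" rule described above.
# The category set and function are hypothetical, for illustration only.
# Anything not in PROTECTED (age, occupation, etc.) counts as an unprotected modifier.

PROTECTED = {"white", "black", "male", "female", "muslim", "christian"}  # e.g., race, sex, religion

def group_is_protected(*attributes: str) -> bool:
    """A group is protected only if every attribute belongs to a protected category."""
    return all(attr in PROTECTED for attr in attributes)

print(group_is_protected("white", "male"))      # True  -> "white men" are protected
print(group_is_protected("female", "drivers"))  # False -> "female drivers" are not
print(group_is_protected("black", "children"))  # False -> "black children" are not
```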

If we had social media and rules for operating on platforms made by black women instead of bros, what might these platforms look like? What would the rules be for free speech and who gets protected? How would we experience online “community” differently than we do now? Would polarization be a bug instead of a feature? The historical disenfranchisement of black and brown women and men is compounded by these same folks still being walled off and locked out of tech institutions through hiring policy, toxic masculinity at the companies, and lack of access to venture capital. “Black women are the most educated and entrepreneurial group in the U.S., yet they receive less than 1% of VC (Venture Capital) funding.”

Buttressed by populations who don’t look much different from the ones Zinn discussed in his book, polarization persists, and benefits the people it has always benefited. The primary difference is that now people get snookered by the belief that technology is going to magically fix entrenched social and political problems as opposed to the earlier sucker’s game of believing their whiteness would save them.

Facebook’s decision to privilege the appearance of news organizations in the Newsfeed based on how users rank the organizations is just the latest example of this ruse. On its surface, this is “democracy” in its purest form. But as we’ve seen time and again (here, here, and here), not only are these upvote/downvote schemes easily gamed by “bad actors,” they ignore significant scholarship from journalism, law, sociology, and psychology in favor of a favorite of the Silicon Valley crew: the wisdom of the crowd. And this crowd is pervasively white, male, and infected with notions of “protected” and “unprotected” categories that not-so-surprisingly perpetuate their privilege. Unfettered technological “free speech” often results in the marginalized or less technically proficient being drowned out. Silicon Valley’s high-school notion of social construction banishes expertise in favor of the images they see in the mirror. Facebook started as “Facemash,” a kind of “hot or not” where college students could vote on whether or not they found their female classmates attractive. Billions of dollars and billions of users later, Zuck is still doing the same thing.

The U.S. Supreme Court’s Citizens United decision extended these categorical biases by ruling, in effect, that corporations are people and money = speech. Thus, more money = more speech, and we are all free to get as much money as we can to have as much free speech as we can. In the same way, we are all free to develop our own platforms, get VC funding, and give ourselves digital megaphones. We are also free to mobilize an army of bots and sock puppets to amplify our message. Not doing so is cast as a choice rather than a result of design.

To Anatole France’s famous line, “The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread,” I might offer an addendum:

Tech platforms, in their majestic equality, allow both rich and poor alike to marshal digital tools to drown out dissenting voices, suppress votes, and spread falsehoods.

In Cyber Racism, Jessie Daniels discusses the concepts of “seeking out” versus “recruitment” in how racism spreads on the web. Daniels points out that the term “recruitment” is misguided because it assumes that people are sucked into a vortex of racism rather than performing it out of their embedded white supremacy and the “white racial frame” that is the setting of life in the United States. Certainly we are “hailed” (in the Althusserian sense), but we are hailed not out of some individual, psychological matrix, but out of the dominant cultural matrix and into social roles.

Daniels goes further by discussing what she calls “cloaked sites”:

“cloaked sites” call into question the value of racial equality by challenging the epistemological basis of hard won political truths about the civil rights era, about the Holocaust, and even about the end of slavery. By that I mean the cultural values about race, racism and racial equality that many consider to be settled by the victories of the civil rights movement going back to the end of slavery, are, in fact, open for debate once again as white supremacy online offers an alternative way of presenting, publishing, and debating ideas that take issue with these cultural values.

What strikes me here are the similarities between the white supremacist “cloaked sites” that Daniels discusses (websites published by individuals or groups who conceal authorship or intention in order to deliberately disguise a hidden political agenda) and what have come to be known as “dark posts” on tech platforms: posts that are only seen by the individuals who are being targeted. These were notoriously a feature of the Trump campaign, and are a convenient way to sidestep the few transparency laws in place regarding political ads. No mere bystander, Facebook works hand in hand with political campaigns across the globe on these projects. Again, this is polarization by design, despite platforms’ consistent claims of fostering community. The entire goal of a “dark post” is to single out individuals with messages crafted specifically for them, meant to mobilize the latent racial/ideological elements of the culture.

When we talk about “polarization” it’s most assuredly a loaded term.  As my colleague sava saheli singh has said: “There’s race, there’s poverty, there’s digital literacy…maybe even start with stating it as intersectional polarization. Polarization affects different populations differently, and intersectional populations further differently.”

Like any abstraction, “Polarization” is fraught with meanings, but in this case, they are about class, poverty, race, gender, sexuality, technology, and power. These structures are filled with concrete instances from culture: content farms spewing out propaganda, police “heat maps” and the placement of cell site simulators in black communities, extractive platforms that benefit from the “engagement” of pitting one group against another, and the hundreds of other outrageous intrusions on our private and social lives that are first and foremost in the service of power. Digital Polarization is the technological mask for the age-old scheme of atomizing populations while making sure the powerful stay on top.

 


Shaming and Framing: Imagining students at an education conference

“Online Proctoring: All Day and All of the Night!”

“Live Online Proctoring Redefined!”

“Switch to the Gold Standard” (of student proctoring)

“Making Any Mode of Proctoring Possible.”

“Make Them Prove They Aren’t Cheating!” (okay, I made this one up)

These slogans and their accompanying images are the gatekeepers standing between me and the conference upstairs. If attendees use the escalator, they will be greeted at the top by the image of an all-seeing eye. If they choose the elevator, the doors close and form a gigantic hand, covered in mathematical formulas; presumably it’s the hand of a cheating student, trying to pull a fast one on their professor. But quickly I learn that professors have nothing to fear if they only buy “Examity,” which promises “better test integrity,” all done in service of students who want to succeed the “right way.”

 

Choose your own (surveillance) adventure
View from inside elevator.

If the entry into the registration/vendor space of the conference is any indication, students are rampant cheaters who can only be stopped by the vigilant vendors, ever-ready with their surveillance tools. The emblems of these companies are equal parts creepy and unintentionally ironic: a lock and shield; what I think is supposed to be a webcam, but which more closely resembles a bathysphere or the helmet of a deep-sea diving suit; and a giant owl with a graduation cap. I imagine designers thought a peephole or a gigantic pair of eyes bulging through a computer screen was too obvious (or perhaps those images are already taken).

Several essays in recent weeks offer important insights that are useful in thinking about this setting. Audrey Watters talks about how we confuse surveillance with care.

Joshua Eyler, in an important essay, recently wrote about the seemingly weekly appearance of pieces that deal in student shaming.

Jesse Stommel also addresses student shaming, and the implications for doing so. He writes: “We can’t get to a place of listening to students if they don’t show up to the conversation because we’ve already excluded their voice in advance by creating environments hostile to them and their work.”

These issues matter, and what I would add to them is the issue of student framing; in other words, how are students imagined when we attend a conference? Who do we think our students are? Who do vendors think our students are? What is their investment in getting us to imagine students in that way? Certainly there are other ways to assess students that don’t involve forcing them to be watched either by an algorithm or some unknown proctor on the other end.

Along with the privacy issues (some of which are discussed here and here), subjecting students to this kind of scrutiny casts every student in the role of a potential cheater. So it is not only invasive; in terms of pedagogy, it sets up the classroom as a space where every student is trying to put one over on the professor. This is hardly ideal pedagogy or a sound way to establish trust in a classroom.

I’ve written before about the absence of student voice at conferences (here), and it’s clear that this space is not one that was designed with students in mind. But the question I have is this: what is the effect of attending a three-day conference where the looming image is one of the dishonest student? Is this where “Innovation” takes us—optimizing the spying capabilities of educational institutions? Silicon Valley would be proud.

Educon 2.9 and “Student Voice” or “Finding a Glimmer of Hope in a Time of Chaos”

As I arrived home from my weekend attending Educon 2.9, I ran into the remnants of the airport protest in my city, part of the nation-wide response to the Executive Order banning Muslim immigration.  One of the protest signs read “We already learned this lesson 6 million times.” Unfortunately, it seems that many of us did not learn that lesson well enough, and we will need as many voices as we can get to speak out against injustices. Many of our students are ready to offer the voices that we need.

“Student voice” is one of those terms that I hear all the time, but rarely do I see it put into action, and when I do, it’s not the voices of the kinds of students I teach. Rather, it’s students from Ivy League institutions or students with tremendous privilege. Let’s get this out of the way: I’m not saying we shouldn’t hear from these students; I am simply saying we shouldn’t only hear from these students, nor should these students be looked at as representative of “student voice” when (as of 2014) 42% of all college students are community college students, and these students are disproportionately low-income.

This weekend, thanks to contributions from Common Sense and my home institution, my colleagues and I traveled to Educon 2.9 with four of our students: Adrian, Kelsey, Kevin, and Orion, so they could present the work they’ve done on digital redlining and student data privacy. The students kicked ass. Of course, you don’t have to take my word for it. You can watch their Virtually Connecting session here or their presentation along with Q&A here. I can only imagine the level of courage required for a first-year college student to hop on a plane, attend their first academic conference, and present their work in front of a group of strangers. If that doesn’t sound terrifying to you, you probably don’t remember what it’s like to be 18 and just starting out on your scholarly journey. I can’t pretend to be surprised that the students did well; they put in the work, and they are among the brightest and most dedicated students I’ve ever had, whether I’m talking about my students at the community college, my students at a small liberal arts school, or my students at the two large state schools where I’ve taught. But as I watched my students present, I couldn’t help but think about how often my students, the kind of institutions they attend, and their generation are maligned. Students at community colleges are too often seen as inferior attendees of inferior institutions (here I’m thinking about the degree to which education technologies for community college students use surveillance, analytics, and tracking under the guise of “engagement” and “retention,” as well as some of my previous discussions about digital redlining and ideas of what kind of information access is good enough for what kinds of students). In addition, I dare you to check Google’s autocomplete for “millennials are…”

I make this last point about millennials because all of us attended Educon in the shadow of the fallout from the Executive Order. While this generation is getting hammered for being awful, the generations before them are the ones busy wrecking the planet. This weekend was filled with moments of pride and inspiration as I watched students take command of their intellectual path, interspersed with moments of disgust as I watched the bigotry and hatred of some of our leaders and the cowardice of others, and finally an infusion of strength as I watched people turn out to demand that those in power recognize our shared humanity no matter what nationality or religion.

What do all these things have to do with each other? Everything I know about my students, I learned from talking to them, spending time with them, and reading their work: not from a dashboard, readout, or spreadsheet, or, for that matter, from the latest garbage hot take about how awful today’s young people are. In the age of DJT, I’ve read a lot of proselytizing about what we as teachers can do or need to do. Among the many things reaffirmed for me by my students this weekend: good teaching doesn’t scale. If professors, teachers, and instructors have any role in helping prepare our students to clean up the mess we’ve made and continue to move towards a more just world, we need to do a better job of finding out who they are and what motivations shape their world.

 


Situating Innovation

I was fortunate to be invited to give a talk at Boston University for their Digital Learning Initiative 2016 speaker series.  Here is the blurb for the talk:

“Innovation,” as the word itself suggests, faces toward the future. In doing so, it often assumes an ahistorical stance, erasing social contexts and questions of justice, fairness, and power. In this upcoming presentation, Chris Gilliard argues that in the world of apps, databases, gadgets, and impenetrable algorithms, the word “innovation” should always be balanced by considerations of the history and values of technology, accompanied by the question “Innovation for whom?” He posits that without critique, a strong grasp of the history of a given technology, and the recognition that digital technologies (like any cultural objects) are intrinsically ideological, innovation is likely to maintain its built-in biases and continue to leave behind large portions of our communities. Focusing on “digital redlining,” Gilliard’s presentation touches on the active enforcing of technological race and class boundaries and assumptions through algorithms involved in decisions ranging from college admissions and IT information access policies to investment analysis, policing, and even an online fad like Pokémon Go.

Taking a cue from Rolin Moe and Jessamyn West, I thought I should post the talk on my blog. It’s also available on Periscope here, if you are interested.

 


Situating Innovation

On July 14, 2016, at a campaign rally in Virginia, Hillary Clinton, hoping to latch on to the wave of the popular game Pokemon Go, said this to her crowd of supporters:

“I don’t know who created Pokemon Go, but I’m trying to figure out how we get them to have Pokemon Go to the polls.”

Shortly after, it was announced that the Clinton staff would try to tap into the game’s phenomenon by using Pokemon Go stops and gyms as rallying points for registering voters:

“Campaign organizers for Hillary Clinton, like her Ohio organizing director Jennifer Friedmann, have started showing up at Pokéstops and gyms to convince Pokémon Go users to register to vote.”

The problem? Pokemon Go was created by Niantic, and a good chunk of PG’s geographic data was based on Niantic’s geolocation game, Ingress.

“…but because Ingress players tended to be younger, English-speaking men, and because Ingress’s portal criteria biased business districts and tourist areas, it is unsurprising that portals ended up in white-majority neighborhoods.”

Which led to headlines like this in USA Today:

Is Pokemon Go Racist? How the App May be Redlining Communities of Color.

The result? A seemingly helpful voter registration tactic winds up being “unevenly distributed” away from minority communities, because folks either don’t understand how the tech they are using works or don’t bother to investigate the underlying values and assumptions of the technology.

So at this point, you may be asking yourself: what the hell does this have to do with “innovation”? It might be helpful to go back a bit before we move forward.

In the United States, redlining began informally but was institutionalized in the National Housing Act of 1934. At the behest of the Federal Home Loan Bank Board, the Home Owners’ Loan Corporation (HOLC) created maps for America’s largest cities that color-coded the areas where loans would be differentially available. The difference among these areas was race. In Detroit, “redlining” was a practice that efficiently barred specific groups—African-Americans, Eastern Europeans, Arabs—from access to mortgages and other financial resources. We can still see landmarks such as the Birwood Wall, a six-foot-high, half-mile-long wall explicitly built to mark the boundary between white and black neighborhoods. Even though the evidence is clear, there is a general failure to acknowledge that redlining was a conscious policy, one that remained legal until the Fair Housing Act of 1968 and that continues to reappear in various guises.

Detroit’s Race Wall

Just within the past two weeks, ProPublica revealed that it’s possible for people advertising rental apartments to choose their audience based on a Facebook user’s “ethnic affinity” (Facebook’s code for race), meaning that one could place an ad for an apartment and dictate that no people who “ethnically identify” as African American see the ad, a pretty clear violation of the Fair Housing Act.

What does this have to do with digital tools, data analytics, algorithms, or innovation? A lot of my work focuses on the intersections of algorithmic filtering, broadband access, privacy, and surveillance and how choices made at these intersections often combine to wall off information and limit opportunities for students.

Just as we need to understand how the Birwood Wall limited financial opportunity, so also do we need to understand how the shape of current technologies control the intellectual (and, ultimately, financial) opportunities of some college students. If we emphasize the consequences of differential access, of differences in privacy according to class, we see one facet of what’s often called the “digital divide”; if we ask about how these consequences are produced, we are asking about digital redlining.

Digital redlining is a different thing: the creation and maintenance of technological policies, practices, and investment decisions that enforce class boundaries and discriminate against specific groups. The digital divide is a noun; it is the consequence of many forces. In contrast, digital redlining is a verb, the “doing” of difference, a “doing” whose consequences reinforce existing class structures. The digital divide is a lack—redlining is an act. In one era, redlining created differences in physical access to schools, libraries, and home ownership. Now, the task is to recognize how digital redlining is integrated into technologies, and especially education technologies, to produce the same kinds of discriminatory results.

Digital redlining becomes more obvious if we examine how community colleges are embedded in American class structures. For about 50 percent of U.S. undergraduates, higher education means enrollment in these institutions. These students are quite different from those of institutions like Boston University: 13% of them have been homeless, 17% are single parents, and 22% of those who attend full-time also work full-time. Many of them are poor, from low-quality high schools, and they have a class-consciousness that makes them view education as job training.

These students face powerful forces—foundation grants, state funding, and federal programs—that configure education as job training and service to corporate needs. These colleges sometimes rationalize this strategy by emphasizing community college as a means of escaping poverty, serving community needs, and avoiding student debt.

One of the most important things to realize about the concept of digital redlining is this: you are either attempting to design bias out or you are necessarily designing bias in. In other words, the process of making technology decisions MUST take into account how our inventions, decisions, and technologies affect diverse populations. So while there are (just as in the case of traditional redlining) conscious decisions about who gets what technology and what technology is “good enough” for certain populations, redlining also occurs when these decisions are made without regard for their effects on diverse populations. These decisions occur at educational institutions daily and at all levels: when instructors decide to embed videos in their Learning Management System or find some “cool” new ed-tech to use in the classroom without scrutinizing its privacy policy; when administrators decide which new technologies to incorporate without regard for how they handle student data privacy; when Chief Information Officers decide on the Acceptable Use Policy of a school and assert what kinds of tech will and will not be allowed or what legitimate uses of the network are.

When I was trying to come up with a name for this talk, I landed on “situating innovation,” and when I parse that term I think not only about figuring out what innovation is (how it’s defined), but about where innovation takes place—how you find it on a map. Of course, the narrative that is spun around Silicon Valley situates it as the home of Innovation. But what does Innovation look like? Can we see it on a map? Conversely, can we see where it isn’t? For while Silicon Valley sells itself as the home of innovation, it also has a massive homeless population, exacerbated by the meteoric rise in housing and rent prices—one of the prices of innovation. So sometimes Innovation looks like this: the Silicon Valley Triage Tool. (Yes, Silicon Valley has an algorithm that determines which homeless people get allocated resources.)

“The algorithm, known as the Silicon Valley Triage Tool, draws on millions of pieces of data to predict which homeless individuals will use the most public services like emergency rooms, hospitals, and jails. The researchers behind the project calculate that identifying and quickly housing 1,000 of these high-cost people could save more than $19 million a year—money that could be rolled into providing more housing for homeless people.”

One of the problems? “It gives higher scores to men because they tend to rack up higher public service costs.”
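As a toy illustration of how this kind of cost-based prioritization can encode bias (this is not the actual Triage Tool; the features, weights, and example data below are invented for the sketch), consider a score that ranks people by predicted public-service cost: any feature correlated with higher historical costs, including gender, moves a person up or down the list.

```python
# Toy sketch (not the Silicon Valley Triage Tool): prioritizing housing by a
# cost-prediction score. Weights are invented; the point is that if men have
# historically generated higher recorded costs, a cost-trained score will
# systematically rank them higher for help.

def predicted_annual_cost(er_visits: int, jail_days: int, is_male: bool) -> float:
    # Hypothetical weights, in dollars; a real model would learn these from billing data.
    return 2500 * er_visits + 300 * jail_days + (4000 if is_male else 0)

people = [
    {"name": "A", "er_visits": 6, "jail_days": 10, "is_male": False},
    {"name": "B", "er_visits": 6, "jail_days": 10, "is_male": True},
]

# Prioritize whoever is predicted to cost the most.
ranked = sorted(
    people,
    key=lambda p: predicted_annual_cost(p["er_visits"], p["jail_days"], p["is_male"]),
    reverse=True,
)
print([p["name"] for p in ranked])  # ['B', 'A']: identical service histories, but B (male) ranks first
```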

We can also see how these decisions play out with other tech giants. For instance, Amazon recently received a huge amount of criticism for the way its algorithm determined which communities could get same-day delivery.

Image credit: http://www.bloomberg.com/graphics/2016-amazon-same-day/

“The most striking gap in Amazon’s same-day service is in Boston, where three ZIP codes encompassing the primarily black neighborhood of Roxbury are excluded from same-day service, while the neighborhoods that surround it on all sides are eligible.”

Similarly, in March, the “ride-sharing” service Uber also faced strong accusations of redlining:

Image credit: https://www.washingtonpost.com/news/wonk/wp/2016/03/10/uber-seems-to-offer-better-service-in-areas-with-more-white-people-that-raises-some-tough-questions/

“Census tracts with more people of color (including Black/African American, Asian, Hispanic-Black/African American, and Hispanic/Asian) have longer wait times. In other words, if you’re in a neighborhood where there are more people of color, you’ll wait longer for your uberX.”

We can also see it in the maps of stingray surveillance of inner-city Baltimore.

Image credit: http://www.citylab.com/crime/2016/10/racial-disparities-in-police-stingray-surveillance-mapped/502715/

So in each of these examples we can *see* innovation in techniques, but too often the demographics of most Silicon Valley companies make them unable to see the bias in their tech: in how resources are distributed, in who gets targeted, in who gets left out or left behind. “Innovation” comes from using statistical models delivered via technology to both justify and conceal existing bias.

Is there such a thing as innovating out/down? [or is being innovated out being disrupted?] Of course if we can situate innovation, we can map it, we can see the fissures, we can see who is left out. We can perhaps see where innovating out is a conscious decision, and we can see where it’s a result of not understanding the tech, not interrogating its origins, or not looking for the ways that historical bias is reinscribed into the tech.

In the case of “Pokemon Go to the polls,” certainly Secretary Clinton wasn’t advocating that voting outreach seek to further disenfranchise black and brown communities. Nor (likely, anyway) were Amazon and Uber thinking about how to work their services around not serving minority communities—much more likely in those cases is that the companies’ secret-sauce algorithms sought to maximize profits, and the maps that the algorithms delivered “just happened” to do that thing.

 In the case of the stingrays and surveillance of minority communities, that’s very much a conscious choice, and perhaps something we can talk about later.

The stories that we are repeatedly told about innovation are situated in specific places, and those innovations are done by certain people. Those stories often tell us that “Innovations” (that’s Innovation with a big I) happen at Harvard, at MIT, at BU, in Silicon Valley. Those stories don’t tell us about the little “i” innovations going on at suburban community colleges, or in inner city schools. Lacking the capital or the ability to scale, many of those local innovations are never recognized. Instead, places like mine often see the results of the big “I” innovation handed down to them, often with little thought given to the kinds of students at our places.

I’ve come up with my own formula:

Innovation minus capital = “hustling,” meaning the story of innovation is as much about money and who gets to tell the story as it is about creating improvement and change.

This more than anything gets into the “why should I care?” portion of the talk. When you look at the stats:

Half of all college students are CC students

13% of CC students have dealt with homelessness at some point in their lives

33% of low- and moderate-income families have either dial-up, mobile-only, or no personal access to the web

17% are single parents

Most of these stats don’t describe the students at a place like Boston University; however, we should consider that systems-level solutions – and any “innovations” required to make them work – need to be sanity-checked by the people those systems allegedly help.

Thinking about redlining and the trickle-down effects of innovation, we need to look at these effects both at the individual level and at the institutional level. As mentioned earlier, tech design and policy decisions have concrete effects on individual users, who may have a different set of tech skills, different financial resources, and different needs. They may be the targets of surveillance. We might think of a student with a smartphone who is required to view data-gobbling videos in order to pass their class. This is a small-scale decision by a professor (although it may seem rather large-scale for that individual student). But in this case, the student has been “redlined” in the sense that their lack of money and the choices, conscious or otherwise, made by their professors create conditions where the student is walled off from success.

But we should also think about these things institutionally. My institution, like a lot of others, looks to R1s to see what they are doing, and we often follow suit. We are (for the most part) not a trend-setting, dare I say “Innovative,” place–and much of that is because we don’t have the money. We get the tech as it flows down the pipeline, often without consideration of how different populations are served (or not) by a particular tech. Edsurge’s “Community College: Digital Innovations Next Frontier” article lists 10 innovations, most of which fall into two main categories: skills training and increased surveillance (under the guise of “engagement”).

When talking about tech, we often hear William Gibson’s quote “the future is here, it’s just not evenly distributed.”

But there’s another way to think about it: even with the same tech, equally distributed, “the future” or “innovation” means different things to different folks. Herbert Marcuse talks about this in One-Dimensional Man:

“If the worker and his boss enjoy the same tv program and visit the same resort places, if the typist is as attractively made up as the daughter of her employer, if the Negro owns a Cadillac, if they all read the same newspaper, then this assimilation indicates not the disappearance of classes, but the extent to which the needs and satisfactions that serve the preservation of the Establishment are shared by the underlying population.”

So, you may drive the same car as your boss, but that doesn’t mean you are in the same class. The flattening out of consumer goods markets often means most of us use the same technologies, but we take for granted that this puts us in the same class or that we use them the same way. Many of my students have the same version of the iPhone as I do, or a better one. But it’s a mistake to think that their phones have the same data plan, or that we have the same plan of action if our phones break, or even that we use our phones the same way.

Pew Research recently put out a report showing that 15 percent of Americans between the ages of 18 and 29 and 13 percent of Americans who make less than $30,000 per year (24 million people!) are smartphone-dependent, meaning they can only access the internet through their smartphones. So “small things” like removing the headphone port can mean costly decisions for those populations.

Surveillance, tracking, predictive analytics: these mean something different to the Ivy League white kid whose cafeteria uses an iris scan than those same tracking technologies do to the Albanian community college kid whose prof has a dashboard in front of her predicting the likelihood of success in the class. These are both versions of innovation. We have to be careful that in looking forward, innovation doesn’t neglect history and the real life conditions of those who are subjected to the innovations.

So I started the talk with a story about the election, and I’m going to end with a story about the election—and tacos.

In September, Donald Trump surrogate Marco Gutierrez, who founded “Latinos for Trump,” warned that if immigration wasn’t curbed, there would be “taco trucks on every corner.”

Of course he was widely ridiculed, but this declaration set into motion the “Guac the vote” campaign, where taco trucks doubled as voter registration sites. Certainly deciding where those trucks were placed was a much different process than placing registration booths at PokeStops.

Image credit: http://www.rawstory.com/2016/09/register-to-vote-get-a-taco-houston-taco-trucks-put-voter-registration-booths-on-every-corner/

The “algorithm” for placing these stops was not a black box. It was the conscious recognition of ethnic geographies and their political beliefs. The black box was not black at all, and in fact openly revealed its aims and assumptions in ways that are too often foreign to the educational “black boxes” that Frank Pasquale describes. It was not politics-free. It was not free of bias, nor did it pretend to be.

Image: Old West United Methodist Church, Boston

I want to end on this picture of Old West United Methodist Church in Boston. I think it’s an important image because it combines many of the things I’ve talked about: geography, technology, race, and social justice. So, while many consider it an imperative to innovate and look forward, it’s just as important to consider the ethical implications of that “forward thinking” and to make sure that no one gets left behind.

 

 


Digital Redlining


“Digital Redlining, Access, and Privacy” is an essay that Hugh Culik and I wrote for Common Sense, where we discuss digital redlining as “a set of education policies, investment decisions, and IT practices that actively create and maintain class boundaries through strictures that discriminate against specific groups. The digital divide is a noun; it is the consequence of many forces. In contrast, digital redlining is a verb, the “doing” of difference, a “doing” whose consequences reinforce existing class structures. In one era, redlining created differences in physical access to schools, libraries, and home ownership. Now, the task is to recognize how digital redlining is integrated into edtech to produce the same kinds of discriminatory results.”


Data Mining and Students

It seems like ages ago that Hugh Culik and I recorded a podcast with Les Howard about data mining and students, but it was only last February. Most of these issues remain, and in many cases the tracking and analysis of students has only become more severe, with little regard for their privacy and often without their consent.


First Post


I’m finally coming to terms with the notion that I can’t use the name “hypervisible” and not have a public-facing space (of “my own”). So what goes in this surveilled space? My plan is to use it for a few purposes: as a space to work out some of the things I’m thinking and writing about; as a central space for the public scholarship I do at the intersections of pedagogy, data, surveillance, access, digital redlining, and privacy; and as a space to engage other people doing similar work.

I’ve been tremendously influenced and challenged by the work of several brilliant public scholars: Audrey Watters, Kate Bowles, Tressie McMillan Cottom, Bill Fitzgerald, Frank Pasquale, Paul Prinsloo, Cathy O’Neil, Jeffrey Alan Johnson, Autumm Caines . . . and ultimately I want this space to be a place where I might ruminate (along with whoever may be paying attention at the time) on the work that’s done by these folks and folks like them.
