“Digital Redlining, Access, and Privacy” is an essay that Hugh Culik and I wrote for Common Sense, where we discuss digital redlining as “a set of education policies, investment decisions, and IT practices that actively create and maintain class boundaries through strictures that discriminate against specific groups. The digital divide is a noun; it is the consequence of many forces. In contrast, digital redlining is a verb, the ‘doing’ of difference, a ‘doing’ whose consequences reinforce existing class structures. In one era, redlining created differences in physical access to schools, libraries, and home ownership. Now, the task is to recognize how digital redlining is integrated into edtech to produce the same kinds of discriminatory results.”
One Comment
I can barely imagine the humiliation of having to walk to the front desk at the library to have the alarm on my computer reset. The story, however, provides a great perspective for considering all of the electronic actions that still happen behind the scenes. It ties a human experience to something that has been automated (and accepted). Instead of facing the librarian in high school, perhaps, like Neo, we will face Agent Smith in an interrogation, or be rejected for a job or a loan by a hidden algorithm that deemed a keyword search, for a research project back in high school, unacceptable.
This story brings to light our assumption that those creating the boundaries are “good actors”. We may in fact forget, or not even know, that our information has been restricted or tailored. A thing is not good just because its victim may not realize that the thing is bad. It is time now to ask whether our tendency to want to create and enforce boundaries is in fact rooted in good, or just a lazy dismissal of other issues we would rather not face. A machine, after all, can enforce boundaries. What happens when machines start creating them?