Technology can be racist and we should talk about that
The past year has been filled with examples of racist technologies. Yet how to fight this is hardly part of the societal debate in the Netherlands. This must change, Jill Toh and Naomi Appelman write. Making these racist technologies visible is the first step towards acknowledging that technology can indeed be racist.
A first example of a racist technology is the infamous child benefits scandal, in which the Dutch tax agency pushed hundreds of families into debt and worse. Having a dual nationality meant that the automated systems flagged these families as posing a ‘high risk of fraud’. This automated racist classification was amplified by both systemic and individual failures within the tax agency, with devastating consequences for the families involved.
Then there is the controversial use of online proctoring software by several Dutch universities during the pandemic, another emblematic example, as we wrote in Parool last week. Online proctoring software detects supposed cheating based on characteristics of the person and their surroundings, flagging possible fraud on the basis of elements such as movement in the workspace, background noise, disrupted connectivity and so forth. Sitting exams is already stressful enough; adding technology that produces racist, discriminatory and exclusionary effects makes it even more so.
Proctorio, a popular online proctoring tool, uses facial recognition, a technology that has been shown to be racist and sexist. The software discriminates against people with darker skin and excludes those whose home environments are not suitable. It is clear that the choice to adopt such technologies as pedagogical tools actively reinforces racism and exclusion in our education system.
Finally, our dating lives in this interconnected online world are, of course, not exempt from rampant racism and discrimination. This comes as no surprise, as what happens offline manifests itself in new ways online. Grindr, for example, until recently offered an “ethnicity filter”, which created a culture that emboldened users to be racist.
Technology itself is racist
The common element in these three divergent examples is that they involve racist technology. Often, this racism is not named as such but placed at a distance: the systems are referred to only as having racist outcomes or as amplifying racist patterns. It is important to recognise, however, that in these cases the technologies themselves are racist. We must make explicit the role technology plays in reproducing or exacerbating existing racist patterns in society.
Calling the technology itself racist makes clear that some technologies should not be used or created at all. Racism is not something that can be ‘programmed away’ by adapting or ‘fixing’ the technology. And, crucially, the developers of a technology need not have racist ‘intent’ for it to be racist. If we look at the examples of the child benefits scandal, Proctorio and Grindr, we can see how racism, by very different means, became part of the systems themselves.
The tax agency was under huge political pressure to combat fraud. Regardless of intent, dual nationality functioned as a proxy for race and ethnicity. As a result, immigrants and black and brown people were hit disproportionately hard, making the system itself racist. Similarly, Proctorio relies on flawed software that is unable to recognise black and brown faces because it was trained on an unrepresentative dataset. Finally, under the pretence of letting users pick a compatible dating partner, Grindr automated racism.
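To make the proxy mechanism concrete, here is a deliberately simplified, hypothetical sketch in Python of how a risk score can encode racism without ever mentioning race. All feature names, weights and thresholds are invented for illustration and do not describe the tax agency’s actual, non-public system:

```python
# Hypothetical, simplified sketch of proxy discrimination.
# Feature names, weights and threshold are invented for illustration;
# they do not describe the tax agency's actual system.

HIGH_RISK_THRESHOLD = 50

def risk_score(applicant: dict) -> int:
    """Toy fraud-risk score, in arbitrary points."""
    score = 0
    # 'Dual nationality' never mentions race, but in practice it correlates
    # strongly with ethnicity and migration background: a racial proxy.
    if applicant["dual_nationality"]:
        score += 60
    if applicant["income"] < 25_000:
        score += 20
    return score

# Two applicants identical in every respect relevant to fraud:
a = {"dual_nationality": False, "income": 24_000}
b = {"dual_nationality": True,  "income": 24_000}

print(risk_score(a) > HIGH_RISK_THRESHOLD)  # False: not flagged
print(risk_score(b) > HIGH_RISK_THRESHOLD)  # True: flagged as 'high risk of fraud'
```

No line of this toy code refers to race, yet two otherwise identical applicants are treated differently along what is, in practice, a racial line. That is what it means for the system itself to be racist, whatever its developers intended.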
Intended or unintended, responsibility lies with people
Making these racist technologies visible is the first step towards acknowledging that technology can indeed be racist. That said, understanding that technology can be racist should emphatically not absolve the people and organisations that create or use these technologies of any responsibility.
Importantly, these technologies do not exist in a vacuum: they are always part of a larger social structure, with people who actively choose to create, use and deploy them.
Using technologies that are inherently racist or have racist outcomes is always a political choice, one that individuals – in companies, in governments, in organisations – have actively made. They should therefore be held responsible, whether the racist outcomes of these technologies are intended or not.
Shirking responsibility, hiding behind the idea that technology is an indiscernible ‘black box’, or simply apologising without taking these racist and problematic uses seriously: these are not excuses or justifications that we as a society should put up with. The harmful outcomes and impact of racist technologies on individuals and communities are real and tangible, with far-reaching consequences.
The prevailing ideas that technology is neutral and that it can solve bias and racism – the pervasiveness of solutionism – continue to distract from and obscure the deeply rooted, very real problems of racism in society. Europe, in particular, tends to treat racism as invisible, a sort of “colour blindness”, or as an issue constructed in the U.S. Yet when we look at discourse and policies related to policing, refugees and migration, for instance, it is clear that Europe needs to reckon with its deep-rooted racist views and institutions. Racism is a social phenomenon, and technologies such as the systems used by the tax agency, the Proctorio software or the Grindr app are direct manifestations of it.
How to fight racist technology
The ubiquity of technology across different areas of our lives – work, school, access to public services, entertainment, dating – makes it all the more pressing to understand how these systems can reinforce existing injustices and perpetuate harm to people in our societies.
This can be tricky: the full uses of these technologies are often hidden and invisible, they can seem too difficult to understand for those without expertise, and the misconception persists that technology is neutral and has no intent. These challenges, coupled with structural and institutional racism in government, as well as within other parts of society, make the struggle more difficult, yet also more critical.
The most concrete steps we can take in fighting racist technology are to force this topic to the forefront of societal debate in the Netherlands and to support the organisations already fighting this fight. A great example is the Dutch organisation ControlAltDelete, which is suing the Dutch border police over the use of racist algorithms that select people for border checks based on ethnicity. Europe-wide organisations such as the Digital Freedom Fund and EDRi are likewise pushing for change: rethinking and decolonising our digital rights, and demanding that local and national authorities ban mass surveillance in public spaces.

We need to follow these stories and organisations closely, as they do the hard work of making explicit the role technology plays in reinforcing and amplifying racism in Dutch society. If we want to avoid a repetition of the child benefits scandal and address structural and institutional racism in the Netherlands, we need to face up to the role technology plays. And acknowledge that, yes, technology itself can be racist.