Digital Activism

The world is entering an age of accelerated “digital transformation”, marked by expanded access to, and reliance on, digital technologies. While the dominant narratives around these technologies focus on promises of exponential economic and social advancement, much less attention is paid to the scale and pace of these changes, the motivations and intentions of the creators behind these technologies, the decisions they make in their design processes, and the potential for negative and harmful consequences.

As queer, feminist activists using digital technologies in our political action, we have, firstly, a responsibility to ourselves and our communities to spend time understanding how these technologies are shaping, and will continue to shape, all our lives. Secondly, we have a responsibility to leverage that knowledge by playing an active role in pushing for the digital environment to be designed, developed and managed in just, equitable and positive ways, while supporting our community members in being conscientious users as they navigate this increasingly complex digital world.

Taking it back about a decade, in 2011 it seemed like social media would completely transform the world for the better. Beyond making instant communication across borders possible and accessible, and completely transforming the ways humans interacted, this was also the era when several major social movements began to take advantage of social media for organising, spreading their messages and recruiting participants. From the Arab Spring and Black Lives Matter to Hong Kong’s Umbrella Revolution and the #MeToo movement, all around the world it seemed like people were rising up, seizing the mic, speaking truth to power and really changing the world.

Despite critics dismissing these actions as “slacktivism” for their lack of direct, public action and the relative ease of performing them, a decade later people are still using social media in these ways. However, the optimism so typical of that time has, in recent years, dwindled in the face of a barrage of anti-social actions by bad-faith actors, government intervention and capitalist interests. The same social media that sustained these movements also made them vulnerable to surveillance, infiltration and co-optation, while sustaining the misogyny, racism, discrimination and abuse they opposed.

The list of harms is long: the Cambridge Analytica scandal, which revealed how Facebook users’ data was used to serve them targeted ads; the radicalisation of young people by ISIS and of young white men by the alt-right and incel culture, including mass shootings; and the fuelling of massacres such as that of the Rohingya people. Overall, despite the good these platforms are capable of, their profit-oriented objectives ultimately determined which actions they supported, which risks they corrected for and whom they protected.

And yet, with every new technology, product or idea coming out of the global ICT sector, from Bitcoin to NFTs to the metaverse and now artificial intelligence, the world is once again swept up in the same promises of overwhelmingly positive outcomes and better futures for all, without much discussion of what can go wrong, until it eventually does.

Focusing on AI: over the last few years the world’s major technology companies (Google, Microsoft, Amazon, Apple and others) have all invested heavily in researching and expanding AI with the goal of boosting profits, passing up opportunities to develop AI’s capabilities to address challenges such as poverty and climate change. According to an article in the MIT Technology Review, they have actually worsened these issues.

“The drive to automate tasks has cost jobs and led to the rise of tedious labor like data cleaning and content moderation. The push to create ever larger models has caused AI’s energy consumption to explode. Deep learning has also created a culture in which our data is constantly scraped, often without consent, to train products like facial recognition systems. And recommendation algorithms have exacerbated political polarization, while large language models have failed to clean up misinformation.” (Hao, 2021)

To underscore this issue, a recent TIME investigation revealed that OpenAI, the company behind ChatGPT, was paying Kenyan workers less than $2 per hour to review data, including content from some of the worst parts of the internet, in order to train the AI to detect toxic content. In other words, these people were paid peanuts to do some of the most emotionally and mentally taxing work, indicating that for all the promises of safety and limiting bias, the health of workers is still being sacrificed towards that goal.

It is issues like these that activists and collectives who occupy, shape and move through digital space must factor into their ethics. What are the foundations of the spaces we use to build avenues for connection, identity expression, education and mobilisation? These networks are homes, communities, oases and safe spaces where we forge ‘real’ relationships and organise to improve the realities of our people and secure better futures.

We rely on these platforms for so much, putting incredible amounts of faith in their billionaire and conglomerate owners to manage them in such a way that, even if they are not actively supportive of our causes, they are at least not outright antagonistic towards us, so that we can continue to use them to our benefit.

This faith is not only understandable but sometimes incredibly important. More and more, we find ourselves living lives completely colonised by digital technology. It is largely impossible, for example, to get a job today without an email address and internet access, even if you are getting it from the open Wi-Fi at a coffee shop.

The Covid-19 pandemic has also proven how much the world we currently live in runs on digital connectivity and falls apart without it. Furthermore, for some marginalised groups, such as people with chronic pain, illnesses and disabilities, the digital world is arguably ahead of the physical one in terms of widespread accessibility.

However, it is important that we learn from recent history that this faith is largely misplaced and this reliance dangerous, and that we continuously work at the micro level to raise awareness of these dangers and of how to avoid them.

At the macro level, researchers like Joy Buolamwini, Ruha Benjamin, Timnit Gebru and many others have been sounding the alarm about unregulated technology, discriminatory design and other dangers associated with the current trajectory of technological advancement.

In 2020, Timnit Gebru, former co-lead of Google’s ethical AI team, was fired from Google for raising the alarm about the dangers of large language models like the one behind ChatGPT, particularly the way they operate, which, as previously mentioned, is by studying large amounts of data (including toxic, harmful language and ideas) from the internet in order to output human-like speech. Furthermore, she pointed out that an incredibly small number of people, overwhelmingly white and from the same economic class, had so much power in defining the way artificial intelligence would develop and in making decisions that would impact the entire world. To directly address these concerns, she founded DAIR, the Distributed AI Research Institute, “an interdisciplinary and globally distributed AI research institute rooted in the belief that AI is not inevitable, its harms are preventable, and when its production and deployment include diverse perspectives and deliberate processes it can be beneficial.”
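
As a rough illustration of what “studying large amounts of data in order to output human-like speech” means, here is a toy bigram model in Python. It is nothing like a production LLM in scale or architecture, but it shows the same basic dynamic: the model learns only by counting patterns in its training text, so whatever that text contains, toxic material included, is all it can ever reproduce.

```python
# A toy bigram "language model" (invented example, nothing like a real
# LLM in scale or architecture): it learns purely by counting patterns
# in its training text, so it can only echo what that text contained.
from collections import Counter, defaultdict

training_text = (
    "the model learns from the data and the data shapes the model"
).split()

# Count, for each word, which words follow it in the training text.
following = defaultdict(Counter)
for word, next_word in zip(training_text, training_text[1:]):
    following[word][next_word] += 1

# "Generate" text by repeatedly emitting the most common follower.
word = "the"
output = [word]
for _ in range(6):
    if not following[word]:
        break
    word = following[word].most_common(1)[0][0]
    output.append(word)
print(" ".join(output))  # e.g. "the model learns from the model learns"
```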

Dr. Joy Buolamwini’s focus has been on discrimination and bias in facial recognition software, which has so far had difficulty recognising and accurately classifying the faces of people with darker skin tones, because the datasets it is trained on contain far more white faces than Black ones. This technology is used across the world in policing, education, border security and more, yet the majority of its developers do not seem to think it important that a significant part of the global population be recognised by it. She founded the Algorithmic Justice League to “lead a cultural movement towards equitable and accountable AI” and to “raise public awareness about the impacts of AI, equip advocates with resources to bolster campaigns, build the voice and choice of the most impacted communities, and galvanise researchers, policymakers, and industry practitioners to prevent AI harms.”
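
To make that concrete, here is a minimal Python sketch of the disaggregated evaluation approach behind audits like Buolamwini’s Gender Shades study. The numbers are invented for illustration, not real audit data; the point is that a single aggregate accuracy can look respectable while one subgroup is failed badly, which is exactly what breaking results out per group exposes.

```python
# Hypothetical audit counts (invented numbers, not real data): how many
# test faces per subgroup, and how many the system classified correctly.
# Note the test set itself is skewed, mirroring the skewed training data.
counts = {
    "lighter-skinned": {"correct": 460, "total": 500},
    "darker-skinned":  {"correct": 65,  "total": 100},
}

total_correct = sum(c["correct"] for c in counts.values())
total_faces = sum(c["total"] for c in counts.values())
print(f"overall accuracy: {total_correct / total_faces:.1%}")  # 87.5%

# Disaggregating by subgroup reveals what the average conceals.
for group, c in counts.items():
    print(f"{group}: {c['correct'] / c['total']:.1%}")  # 92.0% vs 65.0%
```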

Finally, Dr. Ruha Benjamin, in her book “Race After Technology: Abolitionist Tools for the New Jim Code”, provides a useful abolitionist framework for peering behind the veil that separates AI creators from the rest of us, so we can better understand how the outcomes their products produce are directly linked to the deliberate design choices they make, even when they intend to be neutral or positive. She highlights how these choices exacerbate existing racial hierarchies and social divisions.

And there are many others doing similar work, such as Sasha Costanza-Chock, a non-binary, trans femme researcher and designer “who works to support community-led processes that build shared power, dismantle the matrix of domination, and advance ecological survival”. Their most recent book, Design Justice: Community-Led Practices to Build the Worlds We Need, can be downloaded for free at design-justice.pubpub.org.

There are also organisations and collectives such as the Design Justice Network, Queer in AI and Black in AI, all working to amplify diverse voices in tech development and to support people and community power in addressing these issues.

For Caribbean digital activists in particular, we can look to the work of Haynes (2016), who cautions feminists not to drink the optimism Kool-Aid and highlights some of the limitations of digital activism, including the one this article focuses on: ‘dependence on commercial platforms whose very structures may be inimical to feminist principles’. She also encourages Caribbean feminists to pay attention to inequalities and to ask questions about access, capital, privilege and agenda setting.

The current dominant narratives around the digital transformation and the technological future of the world are, understandably, overwhelmingly positive. For the vast majority of humans, life on planet Earth is a demented game of Temple Run, where you spend all of your time trying to dodge the endless challenges life throws at you. Who wouldn’t be lured by the prospect of some invention, a deus ex machina if you will, coming along to free us from the rat race and finally usher us collectively into our halcyon days? While we might love our TikTok filters and look forward to a future where AI is used to develop cures for diseases and find solutions to climate change, history has unfortunately proven that we can neither trust nor expect the developers of these technologies to choose this route. So, for our sake and the sake of our communities, we must get involved however we can in shaping the digital futures we will most likely live in.