I delivered this brief provocation at Technology and its Discontents: Building Power for a New Paradigm, the launch event for the New School’s Digital Equity Laboratory:
The Milano School of Public & Urban Policy is proud to announce the launch of its new trans-disciplinary center, the Digital Equity Laboratory (DEL). To support collaboration and strategic intervention for greater digital equity locally and nationally, DEL is hosting its first all-day symposium to share innovative local models for advancing digital equity and to identify critical areas for intervention, including the online 2020 Census, Automated Decision-Making, Open Internet Rules, broadband ownership models, and alliance building.
I was honoured to be on the roster with inspiring speakers like FCC Commissioner Mignon Clyburn, Allied Media Projects Executive Director Jenny Lee, Detroit Community Technology Project Director Diana Nucera, and Digital Equity Laboratory co-directors Maya Wiley and Greta Byrum.
The March 20 event unfolded in the wake of the news that the data analytics firm Cambridge Analytica had used psychological data from 50 million Facebook users’ profiles without their consent to build technology that could be used to manipulate election campaigns and outcomes.
Some are calling this a data or security breach. I think the system is working exactly as intended — non-consentfully. From their very inception, platforms like Facebook have worked actively against freely given, reversible, informed, enthusiastic, and specific consent. For more on what actual consent should look like in our technologies, please see the Building Consentful Tech Zine.
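To make those criteria more concrete for people who build software, here’s a minimal sketch of how they might show up in application code. It’s purely illustrative, every name in it is hypothetical, and enthusiastic consent in particular is a property of people and relationships, not of data structures:

```typescript
// Illustrative only: one way the consent criteria above could become
// mechanics instead of fine print. All names here are hypothetical.
interface ConsentRecord {
  purpose: string;        // specific: one narrow purpose per record, no blanket grants
  noticeShown: string;    // informed: the plain-language text the person actually saw
  grantedAt: Date | null; // freely given: stays null until the person opts in themselves
  revokedAt: Date | null; // reversible: revocation is a first-class state, not a support ticket
}

function hasConsent(record: ConsentRecord): boolean {
  // The default is refusal: silence or a pre-checked box never counts as a yes.
  return record.grantedAt !== null && record.revokedAt === null;
}

function collectData<T>(record: ConsentRecord, read: () => T): T | null {
  // No consent, no collection.
  return hasConsent(record) ? read() : null;
}
```

The detail worth noticing is the default: the code path that gathers data refuses to run until a specific, revocable yes exists.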
Now, the talk, for which I was asked to pose provocative questions (in bold):
I come to this work by way of designing applications within communities affected by all of the issues we’ve been discussing today.
The tools we’re building in the civic and community technology space are really powerful and important. They leverage data in the service of liberation and justice. But they also pose risks whose full scope we may not know as we build them. And the risks, as we all know, are greatest for those with the least power and recourse.
So as members, organizers, technologists, and representatives of these communities, we need to be asking: **how can we protect people who could potentially be harmed by the very tools that we’re building to serve them?**
For the tools I’ve been working on and that I use, I’ve come to realize that this question hinges on consent. There’s a very dangerous and widely accepted idea that our data will be used without our consent whether we like it or not. People have become jaded to this because the only apparent alternative is avoiding most technology altogether, and we know that’s becoming less and less viable.
So with glazed eyes we click through novel-length terms and conditions without reading them, knowing that they’re intentionally designed for people to do just that. And we hope for the best: that we won’t be tracked or surveilled or doxxed or manipulated. Because we feel we don’t have a choice.
**How can we dismantle the idea that using technology that benefits us must necessarily expose us to harms whose extent and implications we don’t even know? And what needs to happen at an organizing level, at the city level, in startups, and in tech culture to shift this incredibly dangerous disregard of consent?**
After all, our data says as much about us as our DNA. My inspiring colleagues at the Our Data Bodies (ODB) Project talk about our data bodies, and the fact that harm to our data bodies is just as real as harm to our physical bodies. We know we should have control over what happens to our physical bodies. So shouldn’t we also be demanding control over what happens to our data bodies?
Framing this issue in terms of consent addresses what I find problematic about the mainstream privacy and security conversation. We’ve all heard that as users, we should be using secure communication tools, setting up two-factor authentication, and being careful where we share our information. And that we should put our trust in so-called security experts to prevent and respond to attacks and keep us safe.
But is it working? Do we actually feel safe?
It reminds me of the mainstream cultural discourse around sexual violence: that to prevent assault we shouldn’t wear revealing clothing or walk down dark alleys, and that we should put our trust in police and prisons, also to prevent and respond to attacks.
We’ve seen an encouraging shift in this kind of thinking, away from carceral approaches and the victim-blaming that underpins rape culture. I believe we need this with technology too. **What will it take to shift the mindset from just protecting ourselves to something broader: to protecting each other?**
**What will it take to move from a culture where we’re told not to wear short skirts and to have better passwords to a culture of consent, a culture of taking care of each other, both online and offline?**
This is all in the context of a growing mainstream conversation about sexual harassment and sexual violence, which has been a pivotal cultural moment. But this isn’t where the work started, and it cannot be where it ends. Through listening to migrant, Black, Indigenous, Muslim, trans, disabled, and poor women, 2-Spirit people, and gender non-conforming people, we know that this violence happens at multiple levels, and that it’s not just an interpersonal issue or a workplace issue.
They remind us that the taking and doing of things without consent — especially to those with less power by those with more power — permeates our culture and our institutions. So it’s very unsurprising that it permeates technology.
That’s why I’m so grateful to those in the anti-violence movement like friends at Consent Comes First for sharing powerful models to help us create a culture of consent — models that are as applicable to our data bodies as our physical bodies.
Just one of those models is the idea of pleasure and enthusiastic consent. We’ve learned that we don’t just have the right to say no; actual consent means that our yeses are enthusiastic. **What kinds of technologies do we need to build that folks aren’t just less scared of, but are actually eager to use because they’re so pleasurable, beautiful, respectful, and consensual?**
So the question I’ve been exploring through this work is: how do we make consent foundational to the applications, services, and data policies we’re creating? To the culture of tech communities? Essentially, how do we embed it into the DNA of what we’re making so that’s what replicates?*
My frequent collaborator Dann Toliver and I have come to call this work “consentful technology”: applications and services that have consent at their very core, that consider consent before a single line of code is written. We created the zine Building Consentful Tech; please be sure to pick up a copy!
The zine doesn’t provide any easy answers to these questions, but it does suggest some approaches. One of its key messages is that the answers can’t come from the tech sector alone. The approach needs to be intersectional, centered on those who could be most adversely impacted. That’s why moments like this, when we’re connecting across sectors, are so important. By asking questions together, we can see that we all hold pieces of the answer.
Answers to questions like: **how can municipal policy advisors and UI designers work together to ensure that people are fully informed about what data is being collected when they access public services online? And how can programmers and organizers ensure that technology development processes begin with the needs of those at greatest risk of harm from the application they’re working on?**
The last thing I’ll say is that this issue is a deeply personal one for me. Not just because I’m a woman in this world, but because I’m also a recent immigrant to the US, and I come from a family of migrants, many of whom have been subject to surveillance by various governments and threats from repressive forces.
Among technology practitioners, this kind of lived experience isn’t common.
We need to face the fact that the people who are building these tools rarely have lived experience of worst-case scenarios. If the most privileged and powerful people are the ones continuing to make the tools that have access to all our data, we’re in deep trouble.
Technology’s lack of diversity isn’t just shameful or poor corporate ethics; it’s dangerous.
**So what will it take for the composition of our teams, from the most entry-level roles to the highest positions, to shift so that our technologies become more consentful?**
This is why I’m so hopeful about and honoured to be working with the Detroit Community Technology Project. DCTP is made up of and led by Detroiters who have direct lived experience of the impacts of digital injustice. Right now, we’re co-designing ways to ensure that the free wireless service and technology education they’re providing to their neighbours are based in consent. We’re working on consentful mechanics: making sure we’re not collecting data that could violate people’s privacy. We’re working on consentful policies: creating plain-language terms and conditions and taking the time to explain them to folks. And at the crux of it is consent culture: showing people how and why to use technology in ways that protect each other.
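To give a flavour of what consentful mechanics can look like in code, here’s a rough sketch of data minimization for a community network portal. It’s a hypothetical illustration under assumed requirements, not DCTP’s actual system; all names and fields are invented:

```typescript
// Hypothetical sketch of data minimization for a community wireless portal.
// The guiding mechanic: data that is never collected can never be breached,
// subpoenaed, or sold.
interface SessionLog {
  connectedAt: Date;        // enough to understand usage patterns over time
  durationMinutes: number;  // enough to plan network capacity
  // Deliberately absent: MAC address, pages visited, location, device details.
}

class PortalStats {
  private sessions: SessionLog[] = [];

  record(connectedAt: Date, durationMinutes: number): void {
    this.sessions.push({ connectedAt, durationMinutes });
  }

  // Reporting happens on aggregates, so no individual neighbour is identifiable.
  averageSessionMinutes(): number {
    if (this.sessions.length === 0) return 0;
    const total = this.sessions.reduce((sum, s) => sum + s.durationMinutes, 0);
    return total / this.sessions.length;
  }
}
```

The design choice is in what the type leaves out: the privacy protection lives in the schema itself rather than in a policy promise layered on top.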
This is the kind of story we so need to be writing and telling in this moment. I know we can and must pierce through our various silos to collaborate in creating many more.
*For this metaphor, I thank Sophie Varlow and Nick Wood, the organizers of Commons Platform, who are working to build a community where folks work to solve their own problems rather than wait for tech saviours. As they say, they’re embedding consent into the DNA of that community. What a beautiful idea.