In 2016, Eva PenzeyMoog joined the tech industry as a UX designer and began to notice how people use technology as a tool for domestic violence and other forms of interpersonal harm. Her heightened focus led her to seek out support groups or think tanks working on how designers can address the issue, but she ultimately found that this type of advocacy was relatively unexplored, and she wanted to do more.
Eva continued highlighting the issue in her work, which included presenting a session called “Designing Against Domestic Violence” at the Information Architecture Conference in 2020. But during the pandemic, she had a chance to reflect on her strategy. If she wanted to scale this knowledge, she realized she needed to get out of talks and into writing. Her new book, Design for Safety, is a way to guide conversations and work in this emerging space.
First of all, love your book! If I were to hand someone resources on being effective in design today, it would be your book and Kat Holmes’ Mismatch. Both uncover how design has missed the mark on inclusion and help us understand dark patterns. Often, talking about dark patterns means confronting aspects of human psychology we’d rather not face. On that note: Is lack of safety the “darkest pattern” in design today?
Perhaps so, because it’s what no one wants to talk about. If it’s in the dark, it must be a dark pattern. I worked in domestic violence education and support for years before moving to tech, and it’s striking that many people only think about these issues and the reality of abuse when pushed to do so. Statistics suggest we all know someone affected by domestic violence. Even though it can be upsetting, it’s important to face this work head-on so we can keep people safe in digital spaces. We need to be thinking about this because our users are people, and if people hurt each other, then they can use our products to hurt each other. That should be something every product team works to prevent.
In design reviews and brainstorms, I’ve never heard someone ask: Could this product be used to harm someone else? It feels like a miss. What else are we forgetting to ask?
I think we just need to be aware of what we’re focusing on. As designers, we need to look at how we spend our brain power and prioritize new questions related to safety and inclusion. Name changes are a small but important example of how designers can begin thinking about what’s safe and inclusive for users. If someone goes through an important transition in their life and changes their name, how can designers make that process easy and secure while also ensuring the user has power over their own data? So I’m interested in designers asking about how someone could use a product to harm another person, but also how the product itself might harm a user just by the way it’s designed.
Totally get it. Equitable design means building products around how real people will actually use them. We may be harming our own creativity by not affording personhood to people or groups of people during the design process. If we’re more direct and ask if and how our work itself can cause harm, or if someone can leverage our work to harm another, that feels more like respecting personhood.
Yes. And through this work, I’ve done some safety audits for product teams. What I’ve found is that almost any product can be used for harm, even if only in a minor way, and we’re missing a huge question when we continue design work without asking about the potential for harm. For example, I went through a bootcamp, and no one talked about this stuff as part of the design process. Many places where design happens aren’t talking about this. I think conversations about safety and inclusion are often traded for comfort and convenience. That’s ingrained in us, and we bring it with us when we design.
I get it. It’s hard to be the person who says, “hey, isn’t this algorithm reinforcing racism?” or brings up how a product might be used in the context of domestic violence. In the book, I talk about how you can find a friend to practice these awkward conversations with and to back you up in meetings. I feel like people do have these thoughts but often don’t voice them because it can be uncomfortable, especially when everyone around them is excited about the cool new features.
Building on that, in the book you say that “we as designers can often assume that life is a level playing field.” Is this optimism or flawed design thinking?
In my view, this thinking is built on cultural norms, like the US ideal of pulling yourself up by your bootstraps through hard work. That thinking is flawed and plays into this false “level playing field” mentality. But it’s important not to hold shame or guilt about this type of thinking. I want to shout from the rooftops: It’s not your fault! You were never taught to think differently about these topics.
Right, and it’s also crucial that we don’t put the burden on historically excluded people to educate the majority. The majority has a responsibility to do that work.
Absolutely. That’s something I talk about in the book. You can’t expect a survivor of domestic violence to raise their hand and share their experience with their team at work. Maybe they will; lots of people are brave in that way. But we can’t expect those people to provide solutions. So we need to ask better questions, about all people and situations, to design better. The onus can’t be on the people with lived experience. Consult with them, design with them, but don’t put the burden on them.
One of the suggestions I love from your book is to have a buddy to back you up. It’s easier to advocate for something when two of you are doing it in unison. What other tactics can people use to advocate for safety in design?
To be brave. Maybe you can’t find that buddy in the beginning. But keep speaking up and keep being visible. If you keep going, you’re going to find your people, and it’s always a good thing to live your values.
Part of that is keeping these issues at the forefront of your mind. I would encourage designers to find books and get the whole team involved in reading them together. Start discussions about these ideas collectively instead of being the one person who’s fired up about safety and inclusion. Educating other people can be tough in the beginning, but it’s tougher to feel like you’re the only one who cares and that your team doesn’t understand why these questions are important.
So many topics out there are important. For example, at 8th Light, the software consultancy where I work, everyone is reading Living in Data by Jer Thorp. It’s about data and algorithms, and people are learning about it together and motivated together to do something — as opposed to just one person trying to get everyone else to care.
It’s true that you might come up against team members or stakeholders who say it doesn’t matter to the work you’re doing, but keep going. There are lots of people out there who care about equitable and safe design.
So you say that part of designing for safety means “realistically assessing how technological products will be used for harm. Prioritizing understanding the most vulnerable people: those experiencing domestic violence, people being stalked, sexual assault survivors, the elderly, and children.” One question really stuck out to me here: Could the dominance of one group lead to active abuse and trauma for another group?
Yes. For certain groups, having dominance in society is absolutely a form of harm and a form of violence because of the ways it plays out. The reality is that the group that has power tends to misuse that power. Have you read that recent novel The Power by Naomi Alderman?
Yes I have, and it was haunting!
One of the most powerful books I’ve read, and it syncs up quite well with ideas from my book. A dominant group having all the power is absolutely a form of violence, especially when thinking about the effects on vulnerable groups. I immediately think about these things in terms of technology and how the group with more power is going to misuse technology.
In cases of domestic violence and other forms of interpersonal violence, people sometimes get away with misusing technology to harm others because there aren’t necessarily laws against it. For example, men disproportionately stalk women online. Not always, of course, but it’s a common pattern. There are other ways to think about the implications of how we use technology, though. Parents consistently monitor children through cameras and other digital devices. Recently, I even caved and asked my partner, who’s a big cyclist, to share his location with me in case of an accident, despite our previous commitment to never use any kind of tracking with each other. But the reality is that context matters. In our context, there’s no abuse of power. But in many relationships, the impact of location sharing amounts to something more sinister.
Keeping tabs on a romantic partner or a child or an elderly person is often accepted as normal in our society because some groups have power, and that’s how we’re used to seeing and interacting with those groups. There are real concerns about what happens to you when you feel constantly watched and nothing is truly private. That has consequences we need to acknowledge.
I was particularly struck by the statement, “The benefits of convenience outweighed the feature’s safety and privacy concerns.” That mindset is quite common, and seeing it on paper was shocking. My husband is a smart home fanatic, and I can’t count the number of times I’ve gotten upset because he’s talked to me through a Ring doorbell or HomePod. It almost feels like we approach technology naively sometimes. People design and use technology but don’t often think about how its deeper features could be misused.
The problem isn’t necessarily that the device has been connected or automated; that can be great! But when abuse cases aren’t considered, these devices are just so incredibly ripe for harm. And personally, I don’t love the devices that don’t afford other fallbacks. For example, I go in and out of the house a lot when I garden, and I don’t want the door to automatically lock behind me like it would if we had a smart lock. I want the option of having the door closed but unlocked. But that’s a minor thing compared to the interpersonal safety issues. The point is to expand our thinking around design safety. When you think about safety, you’re ultimately going to make a better experience for all sorts of scenarios and users. A fallback that works for the survivor who’s been locked in or out of her home might also benefit me as the gardener who wants the door closed but not locked.
Ultimately, safety design has a lot in common with inclusive design. I think a lot about the classic example that automatic doors work for someone who doesn’t have use of their hands to grip a doorknob, but also benefit someone carrying a bunch of bags of groceries. All of these changes will most likely benefit multiple groups. But to get there, we have to prioritize the most vulnerable among our users.
You say that empathy is not a stand-in for representation, and I think that’s an unpleasant truth that people don’t like to face. Why is that?
Because you’ll get it wrong. In the book, I talk about how diversity isn’t a silver bullet without inclusion and equity. I’ve seen that designers are kind of taught that having enough empathy means you can represent someone. That’s certainly the way I was taught. But as a white person, I can’t use empathy as a compelling reason to say I can successfully design for a Black person’s needs. There is no standard for who Black people are and no standard for representing their needs and experiences. This concept gets distorted in design education, and it can be dangerous. It can be belittling or dehumanizing when a designer assumes they can meet someone’s needs because they claim to understand certain struggles. No matter how much we learn about others and how systems of marginalization operate, there will always be experiences we can’t understand if we’re not part of that group.
What I’m trying to say is it’s dangerous for us to think that empathy is a solid reason we can safely design for someone else. Why would we want to design for someone instead of with them? We’ll get it wrong if we think empathy is a stand-in for actual representation, because empathy isn’t the same as lived experience.
What is one thing you’d like to leave people with in closing?
I feel like we just covered some really dark stuff. Most of the things we talked about aren’t just a tech problem; they’re a society problem manifesting in tech, and that can feel really, really overwhelming. How can we ever possibly fix this? What can one person really do? So I want to leave people with this: You can absolutely make a difference on this stuff.
Especially if you start by finding allies on your team or at your company. And if you don’t have those people at your company, you can find them on Twitter. They exist elsewhere if you need solidarity in this struggle. I think one of the most important things for people who want to start designing for safety and inclusion is to find their people, because doing anything alone is isolating and so much more difficult. But don’t give up. History has shown us that paradigm shifts happen and that we can transform entire industries for the better.