<aside> 💡 Kimi Wenzel is a second-year PhD student in the HCII studying bias in language technology. Kimi is interested in how this bias is perceived, conceptualized, and reacted to, as well as how the harms created by such biases can be better addressed!

</aside>

kimi_wenzel.jpeg

🐈 Who are you and what do you do?

My name is Kimi Wenzel, and I’m a second-year PhD student in the Human-Computer Interaction Institute. My work is in HCI, and I like to bring a psychology or behavioral science lens to most of what I do. Right now I'm focusing on bias in language technology, and I'm trying to understand how people perceive bias, how they conceptualize it, and how they react to it. We had a large study, currently in the process of being published, which found that speech recognition errors can be interpreted as microaggressions. As a result, people of color may, without knowing it, have negative mental health and psychological reactions to such errors. This is part of a much larger discussion about ethics, diversity, and computing.


🐈 In what ways do we see bias in language technology?

There's bias in automated speech recognition. For example, Siri may understand someone who has a very distinct White, upper-middle-class, Midwestern accent but may not understand someone who is from a different area. This typically impacts people of color and people who are less affluent.

In the field of sociolinguistics, there is broad agreement that no language variety is actually “correct,” so there should not be a language hierarchy. However, there is a language hierarchy in technology, and we see this kind of bias in automated speech recognition like Siri, Alexa, Google Assistant, and anything else that interprets a voice. Another place bias appears in language technology is autocorrect and autocomplete. A basic example that many people will resonate with is having their name autocorrected. That's not likely to happen to someone named John Smith, but it is more likely to happen to people from other backgrounds. Bias in voice and text input is typically associated with someone’s accent or dialect, but there are other elements, such as cultural knowledge and vocabulary, where we also see bias in language technology.
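To make the kind of disparity described above concrete, here is a minimal, hypothetical sketch of how researchers commonly quantify bias in automated speech recognition: compute the word error rate (WER) of a system's transcripts for speakers from different groups and compare the averages. The transcripts and group labels below are made up for illustration and are not drawn from Kimi's study.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / number of reference words,
    computed with a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table: d[i][j] is the edit distance
    # between the first i reference words and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


# Hypothetical (reference transcript, ASR output) pairs per speaker group.
samples = {
    "group_a": [("set a timer for ten minutes", "set a timer for ten minutes")],
    "group_b": [("set a timer for ten minutes", "set a time for tin minutes")],
}

for group, pairs in samples.items():
    avg_wer = sum(word_error_rate(ref, hyp) for ref, hyp in pairs) / len(pairs)
    print(f"{group}: average WER = {avg_wer:.2f}")
```

A gap in average WER between groups like this is the kind of measurable disparity that the bias research described here starts from.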


🐈 How is harm defined when studying bias in language technology?

Right now I'm focusing on language technologies, but in my broader work I want to develop a new harm framework. In a lot of current research, the word “harm” is thrown around loosely, so in my own research I’m trying to understand the psychological impact of these harms. This builds on research on other technologies that has demonstrated discrepancies in accuracy rates across people of different races; however, that kind of research hasn't really been done for language technology. A lot of people are aware of the facial recognition examples in computer vision, in which systems fail to recognize people with darker skin tones, especially women. That is an example of an obvious harm, and those examples are really important contributions to research. However, not much work has gone into understanding what impact these biases actually have on the people who are subject to them.

It's easy to pinpoint the discrimination and say that it’s bad, but I want to know how it is actually affecting people. Right now I'm looking at it from the standpoint of psychological harm, and I'm drawing on theories from social psychology, specifically about social exclusion, microaggressions, and stereotype threat. All of these ideas are well established in social psychology, but they haven’t really been brought into HCI or computer science, so I'm trying to bridge that gap this year.


🐈 How can these negative impacts be addressed, and how can the affected people be supported?

We are running a study on that this year. We're going to hold a design workshop in which we invite people of color to discuss and share what they think are ideal solutions to these errors. There will be a brainstorming session and then an opportunity for participants to design and create some things. Afterward, we will test their ideas in a controlled experiment to see whether they actually reduce the psychological harm these errors inflict on people. We're using pre-validated measures such as anxiety, self-esteem, group self-esteem, self-consciousness, mood, and emotional affect to assess whether harm is reduced.
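As a rough illustration of how a controlled comparison like this might be analyzed (the condition names and scores below are hypothetical, not the study's actual design or data), one could compare a pre-validated measure, such as state anxiety, between a baseline condition and a condition that includes a participant-designed intervention:

```python
from scipy import stats

# Hypothetical post-task anxiety scores on a validated 1-5 scale,
# one value per participant in each condition.
baseline_errors = [3.8, 4.1, 3.5, 4.4, 3.9, 4.2]    # ASR errors, no intervention
with_intervention = [3.1, 2.9, 3.4, 3.0, 3.3, 2.8]  # ASR errors + participant-designed fix

# Independent-samples t-test: is mean anxiety lower with the intervention?
result = stats.ttest_ind(baseline_errors, with_intervention)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```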


🐈 What obstacles do you see to the effort of decreasing bias in these technologies?

My work draws on social psychology, which has allowed us to hypothesize that the psychological effects of racism and microaggressions that happen in human-human interactions may also happen in human-computer interactions. A complication that might arise, however, is not fully understanding the extent to which solutions and coping mechanisms from social psychology and clinical psychology can be applied to human-computer interactions. We have demonstrated that the harm people experience is actually quite similar. However, when it comes to recovery and coping strategies for microaggressions and racism, we still need to test whether those transfer to human-computer interactions as well.


🐈 What advice do you have for those working in industries where we see this bias?

I want to encourage people to be advocates for inclusive design. It’s important to really think about the impact that the products you create may have on users at every stage of the design process.