Meredith Broussard is a data journalist and academic whose research focuses on bias in artificial intelligence (AI). She has been at the forefront of raising awareness and sounding the alarm about rogue AI. Her previous book, Artificial Unintelligence (2018), coined the term “technochauvinism” to describe the blind belief that technological solutions are superior answers to our problems. She appeared in the Netflix documentary Coded Bias (2020), which explores how algorithms encode and propagate discrimination. Her new book is More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. Broussard is an associate professor at New York University’s Arthur L Carter Journalism Institute.
The message that bias may be embedded in our technological systems is not really new. Why do we need this book?
This book is about helping people understand the very real social harms that can be embedded in technology. We’ve had an explosion of wonderful journalism and scholarship on algorithmic bias and the harm that people have experienced, and I try to lift up that reporting and thinking. I also want people to know that we now have methods for measuring bias in algorithmic systems. They are not entirely unknowable black boxes: algorithmic auditing exists and can be done.
Why is the problem “more than a technical problem”? If algorithms can be racist and sexist because they’re trained on skewed data sets that don’t represent all people, isn’t the answer just more representative data?
A glitch suggests something temporary that can be easily fixed. I am arguing that racism, sexism, and ableism are systemic problems that are embedded in our technological systems because they are embedded in society. It would be great if the fix were more data. But more data will not fix our technological systems if the underlying problem is society. Take, for example, mortgage approval algorithms, which have been found to be 40-80% more likely to deny borrowers of color than their white counterparts. The reason is that the algorithms were trained on data about who had received mortgages in the past, and in the US there is a long history of lending discrimination. We cannot fix the algorithms by feeding in better data, because there is no better data.
You argue that we should be more demanding about the technology we allow into our lives and our society. Should we just reject any AI-based technology that encodes bias?
AI is in all of our technologies today. But we can demand that our technologies work well, for everyone, and we can make some deliberate decisions about whether or not to use them.
I am excited about the distinction, drawn in the European Union’s proposed AI Act, that divides uses of AI into high and low risk depending on context. A low-risk use of facial recognition might be using it to unlock your phone; the stakes are low, because you have a passcode if it doesn’t work. But facial recognition in surveillance would be a high-risk use that should be regulated or, better yet, not deployed at all, because it leads to wrongful arrests and is not very effective. It is not the end of the world not to use a computer for something. We cannot assume that a technological system is good simply because it exists.
There is enthusiasm for using AI to help diagnose disease. But racial bias is being baked in here too, including from unrepresentative data sets (for example, a skin cancer AI will probably work a lot better on lighter skin, because that is mostly what is in the training data). Should we try to set “acceptable thresholds” for bias in medical algorithms, as some have suggested?
I don’t think the world is ready for that conversation. We are still at the stage of needing to raise awareness about racism in medicine. We need to step back and fix some things about society before we start freezing them into algorithms. Once a racist decision is formalized in code, it becomes difficult to see or to eradicate.
You were diagnosed with breast cancer and underwent successful treatment. After your diagnosis, you experimented by running your own mammograms through an open-source cancer-detection AI and discovered that it did indeed detect your breast cancer. It worked! So, good news?
It was really cool to see the AI draw a red box around the area of the scan where my tumor was. But I learned from this experiment that diagnostic AI is a much blunter instrument than I had imagined, and that there are tricky trade-offs. For example, developers must make choices about accuracy rates: should there be more false positives or more false negatives? They lean towards the former, because it is considered worse to miss something, but that also means that if you get a false positive, you are put into a diagnostic pipeline that could mean weeks of panic and invasive testing. Many people envision a fancy AI future in which machines replace doctors. That future doesn’t sound tempting to me.
Is there any hope that we can improve our algorithms?
I am optimistic about the potential of algorithmic auditing: the process of examining an algorithm’s inputs, outputs, and code to assess it for bias. I’ve done some work in this area. The goal is to focus on algorithms as they are used in specific contexts and to address the concerns of all stakeholders, including members of affected communities.
AI chatbots are all the rage. But the technology is also riddled with bias. Guardrails added to OpenAI’s ChatGPT have proved easy to get around. Where did we go wrong?
Although more needs to be done, I appreciate the guardrails. They haven’t always existed in the past, so this is progress. But we also need to stop being surprised when AI blunders in highly predictable ways. The issues we are seeing with ChatGPT were anticipated and written about by AI ethics researchers, including Timnit Gebru [who was forced out of Google in late 2020]. We need to recognize that this technology is not magic. People put it together, it has problems, and it falls apart.
OpenAI co-founder Sam Altman recently promoted AI doctors as a way to solve the healthcare crisis. He seemed to suggest a two-tier healthcare system: one for the wealthy, where they enjoy consultations with human doctors, and one for the rest of us, where we see an AI. Is this the way things are going, and are you worried?
AI in medicine doesn’t work particularly well, so if a very rich person says, “Hey, you can have AI take care of your healthcare and we’ll keep the doctors for ourselves,” that sounds like a problem to me, and not something that is leading us to a better world. Also, these algorithms are available to everyone, so we should address the issues in them as well.