Opinion
Where we explore subjectivity in AI models and why you should care
I recently attended a conference, and a phrase on one of the slides caught my attention. The slide described an AI model being developed to replace a human decision, and claimed that the model was, quote, "objective" in contrast to the human decision. After thinking about it for a while, I found I vehemently disagreed with that statement: I think it tends to isolate us from the people we build these models for. This, in turn, limits the impact we can have.
In this opinion piece I want to explain where my disagreement with calling AI objective comes from, and why the focus on "objectivity" is a problem for AI researchers who want to have an impact in the real world. It reflects insights from research I recently conducted on why many AI models fail to reach effective deployment.
For you to understand my point, we must agree on what exactly we mean by objectivity. In this essay I use the following definition of objectivity:
Expressing or treating facts or conditions as they are perceived, without distortion by personal feelings, prejudices, or interpretations.
To me, this definition speaks to something I love about mathematics: within the scope of a mathematical system, we can reason objectively about what is true and how things work. This really appealed to me, as I found social interactions and feelings very challenging. I felt that if I tried hard enough I could understand a math problem, whereas the real world was much more intimidating.
Since machine learning and AI are built on mathematics (mainly linear algebra), it is tempting to extend this same objectivity to them. As a mathematical system, machine learning can indeed be considered objective: if I reduce the learning rate, we should be able to predict mathematically what the impact on the resulting model will be. However, as our ML models grow larger and more black-box, configuring them has increasingly become an art rather than a science. Intuitions about how to improve a model's performance are a powerful tool for the AI researcher. That sounds awfully close to "personal feelings, prejudices, or interpretations."
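To make the first half of that claim concrete, here is a minimal sketch in plain Python (the quadratic loss is made up for illustration) showing that, inside the mathematical system, the effect of changing the learning rate follows exactly from the update rule:

```python
# Gradient descent on a toy loss f(w) = (w - 3)^2, with gradient 2 * (w - 3).
# Inside this mathematical system the behavior is fully determined:
# halving the learning rate halves every step, no interpretation required.

def gradient(w):
    return 2.0 * (w - 3.0)

def descend(learning_rate, steps=5, w=0.0):
    trajectory = [round(w, 4)]
    for _ in range(steps):
        w -= learning_rate * gradient(w)  # the model's entire "behavior" is this rule
        trajectory.append(round(w, 4))
    return trajectory

print(descend(learning_rate=0.10))  # converges toward w = 3 in predictable steps
print(descend(learning_rate=0.05))  # smaller steps, provably the same destination
```

Contrast this with tuning a billion-parameter network, where nobody can derive the effect of a hyperparameter change from first principles and we fall back on intuition.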
But where subjectivity really comes into play is when the AI model interacts with the real world. A model can predict the probability that a patient has cancer, but how that prediction feeds into actual medical decisions and treatment involves a lot of feelings and interpretations. What will be the impact of the treatment on the patient? Is it worth it? What is the patient's mental state? Can they tolerate the treatment?
But subjectivity does not end with applying the model's output in the real world. In the way we build and configure a model, many decisions must be made that interact with reality:
- What data do we include in the model, and what do we leave out? Which patients do we consider atypical?
- What metric do we use to evaluate our model? How does this influence the model we end up creating? Which metric guides us toward a real-world solution, and is there a metric that does this at all? (The sketch after this list makes this concrete.)
- What is the actual problem our model is meant to solve? This will influence the decisions we make when configuring the AI model.
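To illustrate the metric question, here is a toy sketch in plain Python. The labels and models are entirely hypothetical, but they show how, on an imbalanced cancer-screening task, accuracy and recall crown different models; choosing between those metrics is a judgment about which real-world error matters more, not a mathematical fact:

```python
# Hypothetical screening data: 1 = cancer, 0 = healthy; only 2 of 10 patients have cancer.
y_true  = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
model_a = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # predicts "healthy" for everyone
model_b = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # catches both cancers, three false alarms

def accuracy(y, p):
    return sum(t == q for t, q in zip(y, p)) / len(y)

def recall(y, p):
    preds_on_sick = [q for t, q in zip(y, p) if t == 1]
    return sum(preds_on_sick) / len(preds_on_sick)

for name, preds in [("model A", model_a), ("model B", model_b)]:
    print(f"{name}: accuracy={accuracy(y_true, preds):.1f}, recall={recall(y_true, preds):.1f}")
# model A: accuracy=0.8, recall=0.0  <- "best" if we optimize accuracy
# model B: accuracy=0.7, recall=1.0  <- "best" if missing a cancer is unacceptable
```

Whether three healthy patients sent for follow-up testing are an acceptable price for catching every cancer is a question for doctors and patients, not for the loss function.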
So when the real world interacts with AI models, a fair amount of subjectivity is introduced, both in the technical decisions we make and in how the output of the model interacts with the real world.
In my experience, one of the key factors limiting the real-world deployment of AI models is the lack of close collaboration with stakeholders, whether they are doctors, employees, ethicists, legal experts, or consumers. This lack of cooperation is partly due to the isolationist tendencies I see in many AI researchers. They work on their models, absorb knowledge from the internet and from papers, and try to build the best AI model they can. But they focus on the technical side of the model and live in their mathematical bubble.
I feel that the conviction that AI models are objective reassures the AI researcher that this isolationism is okay: since the model is objective, it can simply be applied in the real world. But the real world is full of "feelings, prejudices, and interpretations", which means that an AI model that impacts the real world also interacts with those "feelings, prejudices, and interpretations". If we want to create a model that has real-world impact, we need to incorporate real-world subjectivity. And that requires building a strong community of stakeholders around your AI research, one that explores, exchanges, and debates all of these "feelings, prejudices, and interpretations." It requires us AI researchers to break out of our self-imposed mathematical shell.
Note: if you want to read more about how to conduct research in a more holistic and collaborative way, I highly recommend the work of Tineke Abma, for example this paper.