To give academics and others focused on AI their well-deserved (and long-awaited) time in the spotlight, TechCrunch is launching an interview series focused on notable women who have contributed to the AI revolution.
Anika Collier Navaroli is a senior fellow at the Tow Center for Digital Journalism at Columbia University and a Technology Public Voices Fellow with the OpEd Project, held in collaboration with the MacArthur Foundation.
She is known for her research and advocacy work within technology. Previously, she was a fellow focused on race and technology at the Stanford Center on Philanthropy and Civil Society. Before that, she worked in Trust and Safety at Twitch and Twitter. Navaroli is perhaps best known for her congressional testimony about Twitter, where she spoke about the ignored warnings of impending violence on social media that preceded what would become the January 6 attack on the Capitol.
Briefly, how did you get started in AI? What attracted you to the field?
About 20 years ago, I was working as a copy girl in my hometown newspaper's newsroom during the summer it went digital. At the time, I was a college student studying journalism. Social media sites like Facebook were sweeping my campus, and I became obsessed with trying to understand how laws built around print would evolve with emerging technologies. That curiosity led me to law school, where I migrated to Twitter, studied media law and policy, and watched the Arab Spring and Occupy Wall Street movements unfold. I put it all together and wrote my master's thesis about how new technology was transforming the way information flowed and how society exercised freedom of expression.
After graduating, I worked at a couple of law firms and then found my way to the Data & Society Research Institute, where I led the new think tank's research on what was then called “big data,” civil rights, and justice. My work there examined how early AI systems, such as facial recognition software, predictive policing tools, and criminal justice risk-assessment algorithms, replicated bias and created unintended consequences that impacted marginalized communities. I then went on to Color of Change, where I led the first civil rights audit of a tech company, developed the organization's playbook for tech accountability campaigns, and advocated for tech policy changes to governments and regulators. From there, I became a senior policy official on the Trust and Safety teams at Twitter and Twitch.
What work are you most proud of in the field of AI?
I am most proud of my work inside technology companies using policy to practically shift the balance of power and correct bias within culture- and knowledge-producing algorithmic systems. At Twitter, I ran a couple of campaigns to verify people who, surprisingly, had previously been excluded from the exclusive verification process, including Black women, people of color, and queer people. That included prominent AI scholars such as Safiya Noble, Alondra Nelson, Timnit Gebru, and Meredith Broussard. This was in 2020, when Twitter was still Twitter. Back then, verification meant that your name and content became part of Twitter's core algorithm, because tweets from verified accounts were injected into recommendations, search results, home timelines, and contributed to the creation of trends. So working to verify new people with different perspectives on AI fundamentally shifted whose voices were given authority as thought leaders and which new ideas were elevated into the public conversation during some really critical moments.
I am also very proud of the research I conducted at Stanford that became Black in Moderation. When I worked at tech companies, I noticed that no one was writing or talking about the experiences I was having every day as a Black person working in Trust and Safety. So when I left the industry and went back to academia, I decided to talk with Black tech workers and bring their stories to light. The research ended up being the first of its kind and has stimulated so many new and important conversations about the experiences of tech employees with marginalized identities.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
As a queer Black woman, navigating male-dominated spaces and spaces where I am different has been a part of my entire life's journey. Within technology and AI, I think the most challenging aspect has been what I call in my research “mandatory identity work.” I coined the term to describe the frequent situations in which employees with marginalized identities are treated as the voices and/or representatives of entire communities that share their identities.
Because of the high stakes involved in developing new technology like AI, that work can sometimes feel almost impossible to escape. I had to learn to set very specific boundaries for myself about which topics I was willing to engage with, and when.
What are some of the most pressing issues facing AI as it evolves?
According to investigative reporting, current generative AI models have devoured all the data available on the internet and will soon run out of new data to consume. That's why the world's largest AI companies are turning to synthetic data, or information generated by AI itself rather than by humans, to continue training their systems.
The idea sent me down a rabbit hole, so I recently wrote an op-ed arguing that this use of synthetic data as training data is one of the most pressing ethical issues facing new AI development. Generative AI systems have already shown that, based on their original training data, their outputs replicate biases and create false information. Training new systems on synthetic data would therefore mean constantly feeding biased and inaccurate outputs back into those systems as new training data. I have described how this could become a feedback loop to hell.
Since I wrote that piece, Mark Zuckerberg has bragged that Meta's updated Llama 3 chatbot was partially fed by synthetic data and was the “smartest” generative AI product on the market.
What are some of the issues that AI users should consider?
AI is a ubiquitous part of our lives today, from spell-checkers and social media to chatbots and image generators. In many ways, society has become the guinea pig for experiments with this new, unproven technology. But AI users shouldn't feel helpless.
I've been arguing that technology advocates should come together and organize AI users to demand a people's pause on AI. I believe the Writers Guild of America has shown that with organization, collective action, and patient resolve, people can come together to create meaningful boundaries for the use of AI technologies. I also believe that if we pause now to fix the mistakes of the past and create new ethical guidelines and regulation, AI doesn't have to become an existential threat to our futures.
What's the best way to build AI responsibly?
My experience working inside technology companies showed me how much it matters who is in the room writing policies, making arguments, and making decisions. My path also showed me that the skills I needed to succeed in the technology industry began in journalism school. Now I'm back at Columbia Journalism School, and I'm interested in training the next generation of people who will do the work of technology accountability and develop AI responsibly, both inside technology companies and as external watchdogs.
I believe journalism school gives people unique training in interrogating information, seeking the truth, considering multiple points of view, creating logical arguments, and distilling facts and reality from opinion and misinformation. I think that's a solid foundation for the people who will be responsible for writing the rules for what the next iterations of AI can and can't do. And I hope to create a more paved path for those who come next.
I also believe that, in addition to skilled Trust and Safety workers, the AI industry needs external regulation. In the US, I argue this should take the form of a new agency to regulate American technology companies, with the power to establish and enforce baseline safety and privacy standards. I'd also like to keep working to connect current and future regulators with former tech workers who can help those in power ask the right questions and create new, nuanced, practical solutions.