In our latest episode of Leading with Data, we had the privilege of speaking with Ravit Dotan, a renowned AI ethicist. Ravit Dotan's diverse background, including a PhD in philosophy from UC Berkeley and her leadership in AI ethics at Bria.ai, uniquely positions her to offer deep insights into responsible AI practices. Throughout our conversation, Ravit emphasized the importance of integrating responsible AI considerations from the beginning of product development. She shared practical strategies for startups, discussed the importance of ongoing ethical reviews, and highlighted the critical role of public participation in refining AI approaches. Her insights provide a roadmap for companies looking to navigate the complex landscape of responsible AI.
You can listen to this episode of Leading with Data on popular platforms like Spotify, Google Podcasts, and Apple Podcasts. Choose your favorite to enjoy the full conversation!
Key insights from our conversation with Ravit Dotan
- Responsible AI should be considered from the beginning of product development, not postponed until later stages.
- Participating in group exercises to discuss AI risks can raise awareness and lead to more responsible AI practices.
- Ethical reviews should be conducted at every stage of feature development to assess risks and benefits.
- Testing for bias is crucial, even if a characteristic like gender is not explicitly included as a feature in the AI model.
- The choice of AI platform can significantly affect the level of discrimination in the system, so it is important to test for bias and consider liability issues when selecting the foundation for your technology.
- Adapting to changes in business models or use cases may require changing the metrics used to measure bias, and companies must be prepared to embrace these changes.
- Public engagement and expert consultation can help companies refine their approach to responsible AI and address broader issues.
Join our upcoming Leading with Data sessions for in-depth discussions with AI and data science leaders!
Let's check out the details of our conversation with Ravit Dotan!
<h2 class="wp-block-heading" id="h-what-is-the-most-dystopian-scenario-you-can-imagine-with-ai">What is the most dystopian scenario you can imagine with AI?</h2>
As CEO of TechBetter, I have thought deeply about the potential dystopian outcomes of AI. The most worrying scenario for me is the proliferation of misinformation. Imagine a world where we can no longer trust anything we find online, where even scientific articles are riddled with AI-generated misinformation. This could erode our trust in science and reliable sources of information, leaving us in a state of perpetual uncertainty and skepticism.
<h2 class="wp-block-heading" id="h-how-did-you-transition-into-the-field-of-responsible-ai">How did you transition into the field of responsible AI?</h2>
My journey toward responsible AI began during my PhD at UC Berkeley, where I specialized in epistemology and philosophy of science. I became intrigued by the inherent values that shape science and noticed parallels in machine learning, which was often touted as objective and value-free. With my background in technology and my desire to achieve positive social impact, I decided to apply the lessons of philosophy to the burgeoning field of AI, with the goal of detecting and productively engaging the deep-rooted social and political values embedded in it.
<h2 class="wp-block-heading" id="h-what-does-responsible-ai-mean-to-you">What does responsible AI mean to you?</h2>
For me, responsible AI is not about the AI itself, but about the people behind it: those who create it, use it, buy it, invest in it, and insure it. It is about developing and implementing AI with a keen awareness of its social implications, minimizing risks and maximizing benefits. In a technology company, responsible AI is the result of responsible development processes that consider the broader social context.
<h2 class="wp-block-heading" id="h-when-should-startups-begin-to-consider-responsible-ai">When should startups start considering responsible AI?</h2>
Startups should think about responsible AI from the beginning. Delaying this consideration will only complicate things later. Addressing responsible AI from the start allows you to integrate these considerations into your business model, which can be crucial for gaining internal buy-in and ensuring engineers have the resources to address responsibility-related tasks.
<h2 class="wp-block-heading" id="h-how-can-startups-approach-responsible-ai">How can startups approach responsible AI?</h2>
Startups can start by identifying common risks using frameworks like NIST's AI RMF. They should consider how these risks could harm their target audience and their business, and prioritize accordingly. Participating in group exercises to discuss these risks can raise awareness and lead to a more responsible approach. It is also vital to tie these efforts to business impact to ensure continued commitment to responsible AI practices.
<h2 class="wp-block-heading" id="h-what-are-the-trade-offs-between-focusing-on-product-development-and-responsible-ai">What are the trade-offs between focusing on product development and responsible AI?</h2>
I don't see it as a trade-off. Addressing responsible AI can actually propel a company forward by allaying consumer and investor concerns. Having a plan for responsible AI can help the company adapt to the market and demonstrate to stakeholders that it is proactive in mitigating risks.
<h2 class="wp-block-heading" id="h-how-do-different-companies-approach-the-release-of-potentially-risky-ai-features">How do different companies approach launching potentially risky AI features?</h2>
Companies vary in their approach. Some, like OpenAI, release products and iterate quickly as they identify shortcomings. Others, like Google, may delay releases until they are more certain about the model's behavior. The best practice is to conduct an ethics review at each stage of feature development to weigh the risks and benefits and decide whether to proceed.
<h2 class="wp-block-heading" id="h-can-you-share-an-example-where-considering-responsible-ai-changed-a-product-or-feature">Can you share an example where considering responsible AI changed a product or feature?</h2>
A notable example is Amazon's scrapped AI recruiting tool. After discovering that the system was biased against women, despite gender not being an input feature, Amazon decided to abandon the project. This decision likely saved them from potential lawsuits and damage to their reputation. It underlines the importance of testing for bias and considering the broader implications of AI systems.
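The Amazon case points to a practical takeaway: bias can surface through proxy features even when the protected attribute is never an input, so it has to be audited after the fact. Below is a minimal sketch of such an audit using the "four-fifths rule" of thumb on selection rates; all data, names, and thresholds here are illustrative assumptions, not Amazon's actual process:

```python
# Hypothetical audit: the model never sees 'gender', but we log it
# separately and compare selection rates between groups afterwards.

def selection_rate(decisions):
    """Fraction of candidates the model selected (decision == 1)."""
    return sum(decisions) / len(decisions)

def disparate_impact(decisions_a, decisions_b):
    """Ratio of the lower selection rate to the higher one.
    A common rule of thumb flags ratios below 0.8 (the 'four-fifths rule')."""
    rate_a = selection_rate(decisions_a)
    rate_b = selection_rate(decisions_b)
    low, high = sorted([rate_a, rate_b])
    return low / high if high > 0 else 1.0

# Made-up model decisions, grouped by a protected attribute the
# model itself never received as a feature.
selected_men = [1, 1, 0, 1, 1, 0, 1, 1]    # 6 of 8 selected
selected_women = [1, 0, 0, 1, 0, 0, 0, 0]  # 2 of 8 selected

ratio = disparate_impact(selected_men, selected_women)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 -> 0.33
if ratio < 0.8:
    print("Warning: possible proxy bias; investigate before shipping.")
```

The point of the sketch is that the check needs nothing from inside the model: logged decisions plus separately collected demographics are enough to flag a problem worth investigating.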
<h2 class="wp-block-heading" id="h-how-should-companies-handle-the-evolving-nature-of-ai-and-the-metrics-used-to-measure-bias">How should companies handle the evolving nature of AI and the metrics used to measure bias?</h2>
Companies must be adaptable. If a primary metric for measuring bias becomes obsolete due to changes in the business model or use case, they should switch to a more relevant metric. It is a continuous journey of improvement, where companies must start with a representative metric, measure and improve it, and then iterate to address broader problems.
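To make "switching to a more relevant metric" concrete, here is a hedged sketch, with made-up predictions and labels, of two common bias metrics a company might move between: demographic parity, which needs only model outputs, and equal opportunity, which becomes usable once ground-truth outcomes are available. The two can disagree on the same data, which is why the choice matters:

```python
def demographic_parity_diff(preds_a, preds_b):
    """Gap in positive-prediction rates between two groups (outputs only)."""
    return abs(sum(preds_a) / len(preds_a) - sum(preds_b) / len(preds_b))

def equal_opportunity_diff(preds_a, labels_a, preds_b, labels_b):
    """Gap in true-positive rates, usable once real outcomes are known."""
    def tpr(preds, labels):
        hits = [p for p, y in zip(preds, labels) if y == 1]
        return sum(hits) / len(hits)
    return abs(tpr(preds_a, labels_a) - tpr(preds_b, labels_b))

# Made-up hiring decisions (1 = hire) and later-observed labels (1 = qualified).
preds_a, labels_a = [1, 1, 0, 1], [1, 1, 0, 0]
preds_b, labels_b = [1, 1, 0, 0], [1, 1, 0, 0]

print(demographic_parity_diff(preds_a, preds_b))                     # 0.25
print(equal_opportunity_diff(preds_a, labels_a, preds_b, labels_b))  # 0.0
```

On this toy data, demographic parity flags a gap while equal opportunity does not; changing the metric as the use case matures can change which systems look acceptable, which is exactly why the change must be managed deliberately.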
<h2 class="wp-block-heading" id="h-are-open-source-or-proprietary-tools-better-for-responsible-ai">Are open-source or proprietary tools better for responsible AI?</h2>
While I don't classify tools strictly as open source or proprietary in terms of responsible AI, it is crucial for companies to consider the AI platform they choose. Different platforms may have different levels of inherent discrimination, so it is essential to test for bias and consider liability issues when selecting the foundation for your technology.
<h2 class="wp-block-heading" id="h-what-advice-would-you-give-to-companies-facing-the-need-to-change-their-bias-measurement-metrics">What advice would you give to companies facing the need to change their bias measurement metrics?</h2>
Embrace the change. As in other fields, sometimes a change in metrics is inevitable. It's important to start somewhere, even if it's not perfect, and see it as a process of incremental improvement. Engaging with the public and experts through hackathons or red-teaming events can provide valuable insights and help refine the approach toward responsible AI.
Conclusion
Our insightful conversation with Ravit Dotan underscored the vital need for responsible AI practices in today's rapidly evolving technology landscape. By incorporating ethical considerations from the beginning, engaging in group exercises to understand AI risks, and adapting to changing metrics, companies can better manage the social implications of their technologies.
Ravit's perspectives, drawn from her extensive experience and philosophical training, emphasize the importance of ongoing ethical reviews and public participation. As AI continues to shape our future, insights from leaders like Ravit Dotan are invaluable in guiding companies to develop technologies that are not only innovative but also socially responsible and ethically sound.
For more engaging sessions on AI, data science, and GenAI, stay tuned to Leading with Data.
Check out our upcoming sessions here.