Before the U.S. Food and Drug Administration (FDA) approves a drug, the drug must demonstrate both safety and effectiveness. However, the FDA does not require that a drug's mechanism of action be understood for approval. This acceptance of unexplained results raises the question of whether the "black box" decision-making process of a safe and effective AI model must be fully explained to secure FDA approval.
This topic was one of many points of discussion addressed on Monday, December 4 at the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) conference on AI and health regulatory policy, which sparked a series of discussions and debates among faculty; regulators from the U.S., EU, and Nigeria; and industry experts on the regulation of AI in healthcare.
As machine learning continues to evolve rapidly, uncertainty remains over whether regulators can keep up while still reducing the likelihood of harm and ensuring their respective countries remain competitive in innovation. To promote an environment of frank and open discussion, attendance at the Jameel Clinic event was carefully curated to an audience of 100 attendees debating under the Chatham House Rule, which allowed speakers anonymity to voice controversial opinions and arguments without being identified as the source.
Rather than hosting an event to generate buzz about AI in health, the Jameel Clinic's goal was to create a space to keep regulators informed about the most cutting-edge advances in AI, while allowing faculty and industry experts to propose new or different regulatory frameworks for AI in healthcare, especially for use in clinical settings and in drug development.
AI's role in medicine is more relevant than ever, as the industry grapples with post-pandemic workforce shortages, rising costs ("It's not a wage issue, despite common belief," one speaker said), and high rates of burnout and resignations among health professionals. One speaker suggested that priorities for clinical AI deployment should focus more on operational tools than on diagnosing and treating patients.
One attendee noted a "clear lack of education among all constituents — not just developer communities and healthcare systems, but also patients and regulators." Since doctors are often the primary users of clinical AI tools, several of the doctors present asked regulators to consult them before taking action.
Data availability was a key issue for most of the AI researchers present. They lamented the lack of data needed for their AI tools to work effectively. Many faced barriers such as intellectual property restrictions blocking access, or simply a scarcity of large, high-quality datasets. "Developers can't spend billions creating data, but the FDA can," one speaker noted during the event. "There is uncertainty about pricing that could lead to underinvestment in AI." Speakers from the EU touted the development of a system that obliges governments to make health data available to AI researchers.
At the end of the day-long event, many attendees suggested extending the discussion and praised the selective curation and closed environment, which created a unique space conducive to open and productive conversations on AI regulation in healthcare. Once future follow-up events are confirmed, the Jameel Clinic will develop additional workshops of a similar nature to maintain momentum and keep regulators informed of the latest developments in the field.
"The north star of any regulatory system is safety," one attendee acknowledged. "Thinking starts from there and then works downward."