Being an oncological surgeon is my main job and passion. It allows me to interact with people and immerse myself in the healthcare system, not the fancy corporate healthcare, but just everyday medicine.
And, as an AI researcher, I'm noticing a growing disconnect between actual clinical practice and the prevailing goals of AI researchers and companies. This is, of course, just a personal opinion and not a criticism of current R&D processes, but it is a reflection based on some experience in both fields.
The disruptive potential of AI in consumer software and industry is now clear. However, we must recognize that AI in healthcare is a completely different animal: the degree of complexity, regulation, and risk is significantly higher than in most other applications. Furthermore, publicly available datasets are much scarcer than in many other domains due to privacy and accessibility limitations.
In short: significant blockers and a higher level of complexity.
I currently reside in Silicon Valley as a surgeon with technical expertise in AI, which has given me direct access to this vibrant “ecosystem.” Meetings and conferences about AI are the order of the day. However, it is difficult not to notice some facts:
- Doctors do not participate in AI events.
- Doctors do not even participate in AI events focused on healthcare.
- AI healthcare research is technically driven, with minimal feedback or collaboration from clinicians.
- Even among doctors, there is insufficient collaboration on data sharing and technical development.
First of all, enthusiasm for new technologies pushes us to try to apply them to every problem: “If the only tool you have is a hammer, you tend to see every problem as a nail,” in the words of Abraham Maslow. And I absolutely understand this trend. AI is our new Thor's hammer; why wouldn't we want to try it on anything even remotely appropriate?
However, this steers research and development toward solving “technical puzzles” without first answering the fundamental question: does this problem actually need solving? On one end, we find playful expressions of this tendency, such as the “That's what she said” joke identifier (a fun project, and I'm not criticizing it); on the other, examples where the forced implementation of complex deep learning workflows is expensive and unnecessary.
Second, typical “top-down” strategies are based on market analysis and market-share calculations. In short: “Let's find a big, profitable field in healthcare and fill it with AI.” As always, this may be a great short-term strategy, but the magic wears off after a while.
These approaches are rarely effective in healthcare. Doctors and surgeons often fall back on conventional practices when the benefits of a new solution are not evident. Planck's principle can safely be applied to medical innovation: “science advances one funeral at a time.” For this reason, a 5-10% increase in operational efficiency, while significant at scale, is rarely enough to drive adoption in the medical environment: we need a 2- to 10-fold improvement in areas relevant to daily clinical practice.
A practical approach would be to identify a real problem, evaluate the effectiveness of current solutions, and assess whether AI can be employed to develop better ones: the classic Mom Test.
Currently, most of the major advances in AI for healthcare come from research groups and technology companies. This origin explains why the focus falls more on the technical aspect than on the healthcare component.
To solve this problem, the direct involvement of doctors and surgeons will be essential.