Healthcare AI -- Where Are We?

We started Tau Ventures in 2019 with the fundamental premise that the inflection point of AI was near. Specifically, our thesis was predicated on the quadrant below, where "people" refers to people qualified enough to apply AI.

Now in 2023 we believe we are near the peak of inflated expectations. But we also fundamentally believe that the plateau of productivity will bring tectonic shifts. When it comes to healthcare, our view is grounded in three observations: (1) credibility is increasing ever faster, (2) there are specific ways to get things done, and (3) big challenges remain outstanding.


1) Credibility – Publications and patents around AI, especially in healthcare, were few and far between for decades. Consider also that the first FDA approval involving AI came in 1995, and approvals remained in the double digits for several years thereafter. But the pace has truly picked up, as the diagram below from Gesund.ai illustrates (disclaimer: this article’s co-author is the company’s CEO / founder). The bar for AI in healthcare is understandably evolving, but we see this increasing credibility as the linchpin for adoption, from both a regulatory and a cultural perspective, when it comes to the 4 Ps (payors, providers, pharma, patients).

The sheer size of the problem, and therefore the opportunity, is also growing exponentially. As Eric Topol pointed out in his Science article last month, a much larger wave, a tsunami, is in the making. Though many early medical AI applications were unimodal (e.g., radiology), “as artificial intelligence goes multimodal, medical applications multiply”.

Unsurprisingly, regulators are stepping in to ensure AI trustworthiness. We have already observed multiple regulatory tailwinds from the FDA, HHS, NIST, the White House and both chambers of the US Congress, in addition to state-level interventions. To build on Andrew Ng’s statement that “AI is the new electricity”, we expect AI, particularly in industries like healthcare, to be developed and used in compliance with (emerging) regulations and best practices, just like electricity.


2) How To Get It Done – We have written at length about what to build with AI in healthcare and how. We also believe good ideas are meant to be shared, and so we are quoting the diagram below from a recent post by a16z.

Our view at Tau is that tackling these massive areas requires understanding:

  • Market – It’s rarely the issue. Case in point: in 2022 the US spent $4.3T on healthcare, 18.3% of GDP, almost double the average of OECD countries.
  • Go To Market (GTM) – Usually the much bigger problem. Sales cycles into providers can run 9-18 months, into payors and pharma even longer, and patients rarely want to pay more out of pocket. This is why there is also a gap in VC funding at the GTM stage for healthcare, since it has historically been very tough.
  • Integration – The tech not being good enough is rarely the reason for failure when it comes to healthcare AI. Integrating with the provider’s workflow, understanding the motivations of the patient, aligning with the interests of the payor, and enabling pharma to be more agile are usually the key levers.


3) Challenges – All that said, there is much to do and therein lie many opportunities.

  • Comprehensiveness – How can you ensure that the AI has access to enough data, especially considering rare diseases / diagnoses and underserved populations?
  • Explainability – Why did the algorithm make a certain decision? Answering that is especially tricky in healthcare, considering AI is in many ways a black box.
  • Liability – Who is responsible for an AI’s mistake: the doctor applying the AI or the company that created it? How do we monitor AI for adverse events, data drift and data leakage (see the sketch after this list)?
  • Integration – How does it fit into existing workflows for providers? After all, you can have the coolest tech in the world, but if it’s not easy to adopt then it makes very little difference. This integration problem is unprecedented: how can you reconcile the compliance requirements (privacy, security and regulatory) with the technical necessities of MLOps while ensuring that all stakeholders, many of whom are non-coders, can effectively collaborate, develop, use and monitor AI? Enes discussed this at length in his January 2023 Stat Op-Ed, based on insights he gleaned from the FDA’s 510(k) database for AI-based medical devices.
  • Training – How do we train and retrain humans fast enough to adopt the new ways? How do we ensure that AI is trustworthy today and in the future, by adopting an expert-in-the-loop approach and enabling humans with appropriate tools?
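
To make the monitoring point concrete, here is a minimal sketch of one way a team might flag data drift on a single model input, using a two-sample Kolmogorov-Smirnov test from SciPy. The feature, threshold and numbers are illustrative assumptions on our part, not a clinical standard or anything specific to the companies mentioned above.

```python
# Minimal sketch: flag data drift on one model input feature with a
# two-sample Kolmogorov-Smirnov test. Feature choice, threshold and the
# simulated numbers below are illustrative assumptions, not a standard.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(reference: np.ndarray, recent: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Return True if the recent distribution differs significantly from the reference."""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < p_threshold

# Example: patient ages seen at training time vs. recent inference traffic
reference_ages = np.random.default_rng(0).normal(55, 12, 5_000)  # training-time distribution
recent_ages = np.random.default_rng(1).normal(62, 12, 1_000)     # shifted population in production

if drift_alert(reference_ages, recent_ages):
    print("Data drift detected: review and possible retraining may be warranted.")
```

In practice a deployed system would track many features, log the test results over time and route alerts to the expert-in-the-loop, but the core idea is this simple: compare what the model sees in production against what it was trained on.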


Originally published on “Data Driven Investor.” The primary authors of this article are Amit Garg and Enes Hoşgör. These are purposely short articles focused on practical insights (we call it gl;dr — good length; did read). See here for other such articles. If this article had useful insights for you, comment away and/or give a like on the article and on the Tau Ventures LinkedIn page, with due thanks for supporting our work. All opinions expressed here are from the author(s).

