A few thoughts on Google’s breast cancer screening paper published in Nature on New Year’s Day.

1. The article is about software first, advertising Google’s prowess second, medicine third

It was published in Nature, along with “Inverse transition of labyrinthine domain patterns in ferroelectric thin films”, “Spectroscopic confirmation of a mature galaxy cluster at a redshift of 2”, and “The past and future of global river ice”. There was one medical article in this issue, but it wasn’t Google’s.

Out of 31 authors, 21 are software engineers from Google, DeepMind, and Verily, and two are engineers from Northwestern University. The remaining eight authors are from the medical field.

This isn’t a knock against the paper, but it should put it in perspective as being medicine-adjacent rather than medical. I am therefore judging it as such.

2. Screening mammograms are an excellent problem for AI to solve

Without going into why we do breast cancer screening (see below), the way it is being done now is as good a setup for AI to tackle as any we have in medicine.

  1. The input is digital by default, unlike in pathology, another field commonly associated with AI, where most of the work is still being done with good old slides under a microscope.
  2. The problem area is constrained to a single imaging modality of a single organ.
  3. The required output is simple: refer for a biopsy, or don't. Contrast with another suggested use of AI, reading chest X-rays. Even if used for a limited purpose (such as detecting tuberculosis, as discussed recently here), there is an enormous number of possible incidental findings, from lung cancer to broken bones, that AI would need to be trained on if it is to have any hope of approximating human readers (a toy sketch of the contrast follows this list).
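To make that last contrast concrete, here is a toy sketch in Python. This is my framing, not the paper's: the function names, the threshold, and the list of findings are all hypothetical placeholders.

```python
from typing import Dict

def mammogram_decision(malignancy_score: float, threshold: float = 0.5) -> bool:
    """One output: refer for a biopsy, or don't."""
    return malignancy_score >= threshold

def chest_xray_report(finding_scores: Dict[str, float], threshold: float = 0.5) -> Dict[str, bool]:
    """Many outputs: one verdict per finding the model would have to be trained on."""
    return {finding: score >= threshold for finding, score in finding_scores.items()}

# Made-up scores, for illustration only:
print(mammogram_decision(0.73))  # True -> refer
print(chest_xray_report({"tuberculosis": 0.12, "lung cancer": 0.04, "rib fracture": 0.66}))
```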

3. But we shouldn’t be doing screening mammograms to begin with!

I agree! The evidence on screening mammograms in the general population is insufficient to support the practice. But their widespread use can’t change overnight — we know how rare medical reversals are. So we have at least two problems:

  1. Women are subjected to scan anxiety at best and unnecessary surgery at worst
  2. Countless highly and expensively trained professionals are tied up doing unnecessary (at best) work

Yes, the two are not on the same level of badness. But using AI to read screening mammograms can help with both.

4. The two ways in which AI actually helps

  1. AI performing most of the radiologists’ work decreases the opportunity cost of screening mammograms. Of course, this isn’t straightforward: most radiologists may decide to spend the newly liberated hours enjoying their millions earned by reading all those mammograms back in the day, but some will hopefully help with patient care.
  2. AI decreases both the cost and the variability of mammogram reads, thus decreasing the cost of any future multicenter randomized trials of screening mammograms in the general population. The decreased cost is both direct (AI being cheaper than humans) and indirect: more uniform reads across multiple centers decrease variance, and with it the sample size needed to detect a meaningful improvement in survival or in the incidence of metastatic breast cancer, the only two meaningful metrics in such a trial (a back-of-the-envelope sketch follows this list).
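Here is that back-of-the-envelope sketch of the second point: the standard two-proportion sample-size formula, with a variance-inflation factor standing in for between-center variability in human reads. None of these numbers come from the paper; the event rates and the inflation factor are made up purely to show the direction of the effect.

```python
from scipy.stats import norm

def n_per_arm(p_control, p_intervention, alpha=0.05, power=0.80, inflation=1.0):
    """Approximate sample size per arm for comparing two proportions
    (e.g. incidence of metastatic breast cancer), with an optional
    variance-inflation factor for between-center variability in reads."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
    z_beta = norm.ppf(power)
    variance = p_control * (1 - p_control) + p_intervention * (1 - p_intervention)
    effect = (p_control - p_intervention) ** 2
    return inflation * (z_alpha + z_beta) ** 2 * variance / effect

# Hypothetical event rates: 0.9% vs 0.7% metastatic disease over the trial period.
print(round(n_per_arm(0.009, 0.007, inflation=1.3)))  # variable human reads: ~40,000 per arm
print(round(n_per_arm(0.009, 0.007, inflation=1.0)))  # uniform AI reads: ~31,000 per arm
```

Same assumed effect, smaller trial: that is the whole indirect-cost argument in two function calls.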

5. So why are people all up in arms about this paper?

Because they are triggered by screening mammograms. I can't blame them (the whole situation is sad), but that shouldn't detract from the value of this paper in AI research. It's not practice-changing, since that would require a randomized trial, but because of the nature of the change it's proposing, the trial would be smaller, less costly, and more conclusive than its predecessors.
