Instead of giving AI a bad rap, teach it to whistle laryngitis.
AI-generated image. Bing using DALL-E. Prompt: "draw a picture of the sound of a cough". January 28, 2024.

In February 1891, Londoner Ernest Hart and the distinguished specialist Dr Felix Semon had an article published in the French journal La Médecine Nouvelle (New Medicine). Hart and Semon described an application for Thomas Edison's Perfected Phonograph that they had been exploring with Colonel George Gouraud, Edison's London agent.

The Phonograph had come out of Edison's original Menlo Park invention factory in New Jersey, and Gouraud had fitted out “Little Menlo”, his home in suburban South Norwood, to show off everything Edison sent his way. Every corner was brightened by electric light and cooled by electric fans. Visitors could have their shoes shined by electric revolving brushes and, if Gouraud was out, record a message for him on a Phonograph in the hallway, and put an American flag in a bottle to let him know there was a message waiting. The first voicemail. Gouraud was always up for another nail to hit with the Phonograph hammer.

Hart and Semon had recorded the voices of patients onto wax cylinders and captured the sound of serious chest problems for the first time. Until then, the only way a trainee doctor could learn how whooping cough sounded was to lean in to a live patient. The only way for a sufferer of whistling laryngitis to get a diagnosis was to perform for a doctor who had heard it before. Hart and Semon proposed that the cylinders be used for physician training and remote diagnosis.

"If a new technology extends one or more of our senses outside us ... what had been vague or opaque will become translucent."

The idea got no press coverage and fizzled out. Edison cylinders were sexy but still niche in 1891. Right now, all we can talk about is another technology that offers something radically, though not entirely, different.

"AI algorithms," according to this Diagnostics journal article, "can analyze medical images (e.g., X-rays, MRIs, ultrasounds, CT scans, and DXAs) and assist healthcare providers in identifying and diagnosing diseases more accurately and quickly." It's more than images of course; we now have bio-signals, vital signs (body temperature, pulse rate, respiration rate, and blood pressure), demographic information, medical history, laboratory test results, and, still, coughing and croaking. It all rests on technology that processes, preserves and shares data that documents medical conditions, to improve diagnosis. We may be training algorithms not medical students; we may be diagnosing not just from how a presenting patient sounds, but from hundreds of different signals, from inside and outside the body; but the intended outcome is the same.

Marshall McLuhan wrote in 1962 in The Gutenberg Galaxy: The Making of Typographic Man, "…[I]f a new technology extends one or more of our senses outside us into the social world, then new ratios among all of our senses will occur in that particular culture. It is comparable to what happens when a new note is added to a melody. And when the sense ratios alter in any culture then what had appeared lucid before may suddenly become opaque, and what had been vague or opaque will become translucent."

AI, like sound recording, extends our senses. In expert hands, it will reveal diagnoses for appropriate and timely treatment, which will translate into lives saved.

We don't always need to invent new problems just because we have new solutions. We don't have to teach AI to rap about cuddling with pets. Sometimes just going back to the same old nails with a new hammer is the best thing we can do.

#audio #medicaldiagnostics #soundrecording #ai #generativeai

