Even as AI systems grow more capable at handling the complex medical scenarios doctors face daily, the technology remains controversial in medical communities.

  • ourob@discuss.tchncs.de · 1 year ago

    “Looks right” in a human context means the answer that matches a person’s actual experience and intuition. “Looks right” in an LLM context means the sequence of words has appeared together often in the training data (as I understand it, anyway - I am not an expert).

    Doctors are most certainly not choosing treatment based on what words they’ve seen together.

    • Puzzle_Sluts_4Ever@lemmy.world · edited · 1 year ago

      What are experience and intuition other than past data and knowledge of likely outcomes?

      If my doctor is “using their gut” to decide what treatment I need and not basing it on data, I am gonna ask for another doctor. Also, if/when I die, they are completely screwed if there is any form of investigation.

      “Doctors are most certainly not choosing treatment based on what words they’ve seen together.”

      They literally (in the truest sense of the word) are. Words have meaning and are how we convey information related to collected data. Collected data is compared against past data and experience and slotted into likely diagnoses. This is probably a terrible example, but: stomach ache, fever, constipation, and nose bleeds? You might have stomach cancer. Let’s do a biopsy.
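      To make the “pattern matching” point concrete, here is a toy sketch in Python. The condition profiles reuse the made-up example above and are purely illustrative (not medical guidance); the only point is “compare collected findings against known patterns and rank the likely matches”:

```python
# Toy illustration of "compare collected findings against known patterns
# and rank the likely matches". Profiles are invented; not medical guidance.
from typing import Dict, List, Set, Tuple

CONDITION_PROFILES: Dict[str, Set[str]] = {
    "gastritis":      {"stomach ache", "nausea"},
    "stomach cancer": {"stomach ache", "fever", "constipation", "nose bleeds"},
    "flu":            {"fever", "fatigue", "headache"},
}

def rank_diagnoses(findings: Set[str]) -> List[Tuple[str, float]]:
    """Score each condition by the fraction of its profile the findings cover."""
    scores = [
        (condition, len(findings & profile) / len(profile))
        for condition, profile in CONDITION_PROFILES.items()
    ]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

print(rank_diagnoses({"stomach ache", "fever", "constipation", "nose bleeds"}))
# -> [('stomach cancer', 1.0), ('gastritis', 0.5), ('flu', 0.333...)]
```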

      Doctors read charts. That is how they know what collected data there is. There is a LOT of value in the doctor also speaking to the patient, but that speaks more to patients not sharing valuable information because “they are just a nurse” or whatever other stupidity.

      But, at the end of the day: it is data aggregation followed by pattern matching. Which… AI are actually incredibly good at. The weak link is data collection, and that is an issue in medicine as a whole. Just ask any woman or person of color who was ignored by their doctor until they got a white dude to talk for them.

      This is very much a situation where a bit of a philosophy background helps a lot. Because questions like “what is consciousness” and “what is thought” long predate even the idea of artificial intelligence. And as you start breaking down cognitive functions and considering what they actually mean…


      Just to be clear, I am not at all saying LLMs will take over for doctors. But as a preliminary diagnosis and a tool used throughout patient care? We have a ways to go, but this will be incredibly valuable. Likely in the same way that good doctors learn to listen to and even consult the nurses who spend the most time with those patients.

      Which is what we already see in computer programming and (allegedly) other fields like legal writing. You still need a trained professional to check the output. But reading through a dozen unit tests and utility functions or some boilerplate text is a lot faster than writing it. And then they can spend the majority of their “brain juice” on the more interesting/difficult problems.
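      For example, here is the kind of boilerplate an LLM might draft for a hypothetical little utility function. Nobody would call this interesting code, but skimming and checking it is much faster than typing it out yourself:

```python
# Hypothetical example: a small utility function plus the sort of boilerplate
# unit tests an LLM might generate for it, which a developer reviews rather than writes.
import unittest

def normalize_phone(raw: str) -> str:
    """Strip everything but digits from a phone number string."""
    return "".join(ch for ch in raw if ch.isdigit())

class TestNormalizePhone(unittest.TestCase):
    def test_strips_punctuation(self):
        self.assertEqual(normalize_phone("(555) 123-4567"), "5551234567")

    def test_empty_string(self):
        self.assertEqual(normalize_phone(""), "")

    def test_already_clean(self):
        self.assertEqual(normalize_phone("5551234567"), "5551234567")

if __name__ == "__main__":
    unittest.main()
```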

      Same here. Analyzing blood tests after a physical or the vast majority of stuff at an urgent care? Let the AI do the first pass and have the doctors check that the output makes sense. Which lets them focus on the emergencies and the “chronic pain” situations.
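      As a rough sketch of what that “first pass” could look like (the reference ranges and test names below are placeholders, not real clinical thresholds): flag anything out of range so the doctor reviews the flags instead of scanning every number.

```python
# Toy sketch of an automated "first pass" over routine lab results:
# flag values outside reference ranges for clinician review.
# Ranges and test names are placeholders, not real clinical thresholds.
REFERENCE_RANGES = {            # test name -> (low, high)
    "hemoglobin_g_dl": (12.0, 17.5),
    "wbc_k_per_ul":    (4.0, 11.0),
    "glucose_mg_dl":   (70.0, 140.0),
}

def first_pass(results: dict) -> list[str]:
    """Return human-readable flags for the clinician to confirm or override."""
    flags = []
    for test, value in results.items():
        low, high = REFERENCE_RANGES.get(test, (None, None))
        if low is None:
            flags.append(f"{test}: no reference range, needs manual review")
        elif not (low <= value <= high):
            flags.append(f"{test}: {value} outside [{low}, {high}]")
    return flags

print(first_pass({"hemoglobin_g_dl": 10.9, "wbc_k_per_ul": 6.2, "glucose_mg_dl": 155}))
# -> ['hemoglobin_g_dl: 10.9 outside [12.0, 17.5]', 'glucose_mg_dl: 155 outside [70.0, 140.0]']
```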