ChatGPT generates cancer treatment plans that are full of errors: study finds that ChatGPT provided false information when asked to design cancer treatment plans

Researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.

  • dual_sport_dork 🐧🗡️@lemmy.world · 7 points · edited · 1 year ago

    This is why, without some hitherto unknown or as-yet-undeveloped capability, these sorts of LLMs will never actually be useful for mission-critical work. The catch-22 is this: you can't trust the AI not to slip some potentially dangerous, show-stopping, or embarrassing error into its output. That isn't a problem if you're just, say, having it paint pictures, or maybe even helping you twiddle the CSS on your website. If there is a failure there, no one dies.

    But what if your application is critical to life or safety, like prescribing medical care, designing a building that won't fall down, or deciding which building a drone should bomb? Then you have to get a trained or accredited professional in the relevant field to check all of the AI's work. And how much effort does that entail? As it turns out, pretty much exactly as much as having that trained or accredited professional do the work in the first place.
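
    A minimal sketch of that review gate, if you want it spelled out in code (everything here is hypothetical, not any real clinical system): model output is quarantined behind an approval flag that only a human reviewer can flip, and nothing downstream runs until it is flipped.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Draft:
        """Model output plus a flag only a human reviewer may set."""
        content: str
        approved: bool = False

    def llm_draft(prompt: str) -> Draft:
        # Stand-in for a model call: whatever comes back is unverified.
        return Draft(content=f"<model output for: {prompt!r}>")

    def professional_review(draft: Draft, reviewer_signed_off: bool) -> Draft:
        # The expensive step: a credentialed human reads the entire draft.
        # The comment's point is that this costs about as much as authoring it.
        draft.approved = reviewer_signed_off
        return draft

    def act_on(draft: Draft) -> None:
        if not draft.approved:
            raise PermissionError("unreviewed model output must not be acted on")
        print("executing:", draft.content)

    plan = llm_draft("adjuvant treatment plan, stage II NSCLC")
    plan = professional_review(plan, reviewer_signed_off=False)  # reviewer found errors

    try:
        act_on(plan)
    except PermissionError as err:
        print("blocked:", err)  # the gate holds; the flawed plan never executes
    ```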