I must confess to getting a little sick of seeing the endless stream of articles about this (along with the season finale of Succession and the debt ceiling), but what do you folks think? Is this something we should all be worrying about, or is it overblown?

EDIT: have a look at this: https://beehaw.org/post/422907

  • TheRtRevKaiser@beehaw.orgM
    1 year ago

    Hey @Gaywallet, I was hoping I’d see you chime in given your background. I don’t have any particular expertise when it comes to this subject, so it’s somewhat reassuring to see your confidence that folks in the healthcare industry will be more careful than I assume. I work in an adjacent field and know there are a lot of folks doing really good work with ML in healthcare, and that most of those people are very cognizant of the risks. I still worry that there are a lot of spaces in healthcare and especially in areas like claims payment/processing where that care is not going to be taken and folks are going to be harmed.

    • Gaywallet (they/it)@beehaw.org

      Ahhh yeah, claims and payment processing is typically done by the government or by insurance companies, and that’s definitely a valid risk. They already automate as much as they can - they’re mostly concerned with making a profit or keeping costs down. That is absolutely a sector to be worried about.

      • TheRtRevKaiser@beehaw.orgM

        Yeah, I work for a company that builds and runs all kinds of healthcare-related systems for state and local governments. I work on a Title XIX (Medicaid) account, and while we are always looking for ways to increase access, budgets are very tight. One of my concerns is that payors in this space will look to AI as a way to cut costs, without enough understanding of or care for the potential risks, and that the lowest-bidder model most states are forced into will mean the vendors building and running these systems won’t put in the time or expertise needed to make sure those risks are accounted for.

        When it comes to private insurance, I don’t expect anything from them but absolute commitment to profit over every other concern, and I’m deeply worried about the ways they may use AI to automate to the detriment of patients, especially minorities. I absolutely don’t expect somebody like UHC to take the kind of care needed to mitigate those biases when applying AI to their processes and systems.

        • Gaywallet (they/it)@beehaw.org

          If it’s any consolation, most of these places already have systems in place that automate away most of the work - that article that broke recently about physicians spending an average of under one second reviewing appeals is an example of how they’re currently doing it. There are some protections in place, but I am also very pessimistic about this sector and, as you mentioned, about all sectors that operate under a lowest-bidder model.