Lemmy, I really would like to hear your opinions on this. I am bipolar. After almost a decade of being misdiagnosed and on medication that made my manic symptoms worse, I found stable employment with good insurance and have been able to find a good psychiatrist. I’ve been consistently medicated for the past 3 years, and this is the most stable I have been in my entire life.

The office has rolled out the use of an app called MYIO. My knee-jerk reaction was to not be happy about it, but I managed my emotions, took a breath, and vowed to give it a chance. After being sent the link to validate my account, the app would force-restart my phone at the last step of activation. (I have my phone locked down pretty tight, with lots of Google shit and data sharing disabled, so I’m thinking that might be the cause. My phone is also like 4-5 years old, so that could also be the cause.)

Luckily I was able to complete the steps on PC and activate that way. Once I was in the account there were standard forms to sign, like the HIPAA release. There was also a form there requesting I consent to the use of AI. Hell to the NO. That’s a no for me dawg.jpg.

I’m really emotional and not thinking rationally. I am hoping for the opinions of cooler heads.

If my doctor refuses to let me be a patient if I don’t consent to AI, what should I do? What would you do? Hold the line, even though it means losing a provider, or consent so I can keep a provider I have a rapport with, who knows me well enough to know when my meds need adjusting?

EDIT: This is the text of the AI agreement:

As part of their ongoing commitment to provide the best possible service, your provider has opted to use an artificial intelligence note-taking tool that assists in generating clinical documentation based on your sessions. This allows for more time and focus to be spent on our interactions instead of taking time to jot down notes or trying to remember all the important details. A temporary recording and transcript or summary of the conversation may be created and used to generate the clinical note for that session. Your provider then reviews the content of that note to ensure its accuracy and completeness. After the note has been created, the recording and transcript are automatically deleted.

This artificial intelligence tool prioritizes the privacy and confidentiality of your personal health information. Your session information is strictly used for the purpose of your ongoing medical care. Your information is subject to strict data privacy regulations and is always secured and encrypted. Stringent business associate agreements ensure data privacy and HIPAA compliance.

  • VampirePenguin@lemmy.world · 6 hours ago

    AI and the people pushing it are not trustworthy. They do not have your data security or your wellbeing at heart, even if your doctor does. LLMs are inherently bad at data security, and there is no way these companies can, in good faith, promise HIPAA compliance. Likely, the AI use will be on the part of the insurance company to find ways of denying your claims.

  • Royy@lemmy.world · 6 hours ago

    Hello. It is absolutely justified to be worried. Tell your doctor your concerns, and ask your doctor questions about the use of AI. If you want some help putting together questions for your doctor, lmk.

    I’m involved with the development / integration of AI. From the specific text of the AI agreement, it looks like these are the AI tools you’re consenting to:

    • Transcription tool: This is a speech-to-text tool. It can differentiate between speakers.

    • Transcript -> clinical documentation tool. This takes the text of the transcript, interprets it, and generates clinical documentation based on it.

    The agreement does not seem to cover taking the clinical documentation and attempting to suggest diagnoses or care steps.

    I am actually concerned by the “recording and transcript are automatically deleted” line. If your doctor misses something when reviewing the generated clinical documentation against the transcript, and is unsure about something in the future, they can’t go back and reference the original audio or generated transcript to verify accuracy.

    There are also concerns about how they are following HIPAA laws:

    What model / service are they using?

    Did they do their due diligence in deciding what service to use?

    Have they looked at other cases where companies said they don’t persist or sell your data, and then sold it anyway, or had a breach of data that shouldn’t have persisted in the first place?

    Do they anonymize personal information before they send it to whatever service they are using? Note that this is not possible for transcription models, as they cannot know what text to anonymize/censor until the text has been generated. That doesn’t mean there are no HIPAA-compliant options: transcription models can even be run locally, potentially on consumer-grade hardware, meaning the audio doesn’t have to be sent to a third party at all.
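
    For what it’s worth, “run locally” is not hypothetical. Here’s a minimal sketch of local speech-to-text using the open-source openai-whisper package (this is just an illustration, not the tool in your agreement, and plain Whisper doesn’t separate speakers; the audio file name is made up):

    ```python
    # Minimal sketch: local speech-to-text with the open-source Whisper model.
    # The model weights are downloaded once; after that, inference runs fully
    # offline, so the audio never has to leave the machine.
    import whisper

    model = whisper.load_model("base")              # small model, fine on consumer hardware
    result = model.transcribe("session_audio.wav")  # hypothetical local recording
    print(result["text"])                           # plain-text transcript, stored locally
    ```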

  • GrayBackgroundMusic@lemmy.zip · 6 hours ago

    > Your provider then reviews the content of that note to ensure its accuracy and completeness.

    You know they’re not gonna do that, in practice.

  • 𝕸𝖔𝖘𝖘@infosec.pub · 7 hours ago

    Show him the EULA for copilot (where it’s for entertainment purposes only), and tell him you’ll be going elsewhere and leaving an appropriate review.

  • cley_faye@lemmy.world · 10 hours ago

    It depends on many things. The hard line for me would be: is this running locally, on a server with the same IT management as my actual data, or on a third party’s servers? If the doctor either doesn’t know, or can’t give adequate proof that it isn’t running on some third party’s servers, then all the “prioritizes your privacy” talk isn’t worth shit.

    But that’s only the point where I give a hard no. The way it is used would also matter a lot. Is it used as a crutch for reference searching, or as a full self-driving decision-making process that will write me a prescription at the end? This part is the same whether it’s for medical advice or for anything else: if the user is skilled enough to evaluate/validate the output of the process faster than it would have taken them to do it manually, then there might be some value. Some usages fit this. Some don’t. Summarizing large documents you did not read is not safe, because you’d have to read the document to check the summary. Getting a reminder summary of a drug/sickness/whatever that you already know about could be OK.

    tl;dr: it has to run in a privacy-respecting context (no third parties), it has to be used as a crutch (no skipping work), and the user has to keep their brain active enough to steer the system instead of being dragged along by it. As things stand right now, I doubt there are a lot of doctors that would fit all three points, but in the future, maybe.

    • brygphilomena@lemmy.dbzer0.com · 6 hours ago

      We have a BAA and our vendor attests that they are HIPAA compliant. I don’t know what or where it runs. But BAA and they promise that it’s good for PHI.

      • cley_faye@lemmy.world · 4 hours ago

        Yeah, I stopped trusting service providers’ promises the moment they came into existence. “We’re compliant with XYZ” has as much value as “We promise to not snoop, see?”. And that’s not even considering security vulnerabilities. Certifications are merely the promise that at some point, someone maybe did something right (or maybe not), and paid to be able to say so (sometimes they don’t). Not very reassuring.

        My bar: data remains on controlled systems, and if it has to get out, it’s encrypted properly, either for cold storage or for specific recipients. Anything below that is believing random people saying random shit, and ignoring that every time there’s a data leak somewhere, people go “oops, our mistake, it won’t happen again, pinky swear”.

        And I know there’s already an incredible amount of sensitive, personal data on the loose. That’s no excuse to let this trend keep going.

  • leadore@lemmy.world · 16 hours ago

    I feel very strongly about this and I would change doctors. But of course it won’t be long before they all do this and we’ll have no alternative. The two biggest problems I see are:

    1. I saw a news story where a doctor who uses this said it saves her time because before seeing the patient she gets an AI summary of their chart, so she doesn’t have to “go through several tabs” to read the actual information. Oh great, let the statistical probability text generator hallucinate up some shit about what’s in a person’s chart, to save 10 seconds of tab-clicking to read the ACTUAL patient records! If they want a summary, there’s no reason a traditional report or summary screen couldn’t be programmed to pull data out of the most important fields and arrange them in the desired format (see the sketch at the end of this comment).

    2. THEN the doctor uses her damn phone to record your visit, everything you say, and that gets run through the AI which generates a visit summary and puts that into your medical records. So, god only knows what 3rd party private corporate vulture has access to your doctor/patient conversations and what they’ll do with them, and again, what hallucinated shit will get put into your medical records!

    So your doctor never reads your chart and never writes your chart! [Redacted] me now! Also, what happens after a few iterations of an AI summarizing records that an AI wrote?
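
    To make point 1 concrete, a “traditional report” is just a fixed template over named fields: fully deterministic, so nothing can be invented. This is purely an illustrative sketch; every field name here is made up, not from any real EHR:

    ```python
    # Minimal sketch of a deterministic chart summary: pull named fields from
    # a record and arrange them in a fixed format. Nothing is generated
    # statistically, so nothing can be hallucinated.
    def chart_summary(chart: dict) -> str:
        return "\n".join([
            f"Patient: {chart['name']} (DOB {chart['dob']})",
            f"Active meds: {', '.join(chart['medications']) or 'none'}",
            f"Allergies: {', '.join(chart['allergies']) or 'none recorded'}",
            f"Last visit: {chart['last_visit']}: {chart['last_note']}",
        ])

    # Hypothetical record, purely for illustration.
    print(chart_summary({
        "name": "Jane Doe", "dob": "1990-01-01",
        "medications": ["lamotrigine 200 mg"],
        "allergies": [],
        "last_visit": "2025-05-02",
        "last_note": "stable, no med changes",
    }))
    ```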

    • sem@piefed.blahaj.zone · 10 hours ago

      If you buy into the story that “someday they’ll all be using it” you are doing the AI boosters’ job for them. It is not a foregone conclusion, and there is no reason to accept that future.

      • leadore@lemmy.world · 6 hours ago

        I hope you’re right! The magical thinking and child-like trust in this tech by otherwise intelligent people is scary though.

    • Cellari@lemmy.world · 15 hours ago

      AI is really good at concepts, not logic. But even then, performance is going to be dependent on the data it was modelled with.

      You can ask for a specific symptom of pneumonia and it can answer. You can also ask for a summary of pneumonia, since someone has most likely written one already and the AI knows to use it because of conceptual relevance. But if you ask it to summarise a patient’s information, it will split that information into blocks it can summarise based on whatever summarisation patterns are in its model data. I can assure you it cannot ever have all the possibilities pretrained already.

      • leadore@lemmy.world · 5 hours ago

        My fear is that the models merge all kinds of patient record info together in the statistical model, so the “summaries” will just write the most likely next word in the phrase, and wrong information and incorrect diagnoses will be recorded into a person’s record, or important information will be omitted.

        I predict that people will be harmed or die because of missing or false information in patient records. But it will be difficult for the public to find out about it because of privacy issues and the unwillingness of institutions to acknowledge it.

        Drugs have to go through multiple stages of testing and trials before they’re allowed to be used on patients. But no one is doing any kind of testing on the effects of this at all, let alone controlled trial rollouts with review, before allowing general use.

  • Nibodhika@lemmy.world · 15 hours ago

    I know this might go against the flow here, but realistically, if they’re using the tools in the way they say they are (which you should 100% confirm with your doctor, while also letting them know about possible hallucinations), it’s not that bad. Speech-to-text is not prone to hallucinate; it can fail and detect the wrong words, but it shouldn’t invent content outright. After that, LLMs are good at summarizing things. Yes, they are prone to hallucinations, which is why having the doctor review the notes immediately after the session is important (and they said they do). So I don’t see this as such a big issue from the usability point of view.

    You might still have issues from a privacy point of view, and that’s a much more complex discussion to have with them about what kind of contract they have with the LLM company to ensure no HIPAA violations (from the LLM company’s point of view it’s just summarizing a text, so it might store it, and then the whole stack is suable). They need to understand that just because they haven’t kept a copy around doesn’t mean the other party hasn’t, and because they shared it without your agreement (you only agreed to AI note-taking, which can be done locally, see the sketch below, so sharing information with third parties is entirely on them), they would be liable. I’m not a lawyer, so you might want to double-check that, but I would be very surprised if that’s not the way it works; otherwise doctors could get away with a bunch of HIPAA violations by having you sign something that says they use a computer to store data and then storing things in a shared Google Drive.
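
    For reference, here’s a minimal sketch of what “done locally” could look like, assuming the open-source openai-whisper and llama-cpp-python packages and a locally stored model file (all file names are hypothetical, and this is obviously not MYIO’s actual stack). No audio or text leaves the machine:

    ```python
    # Minimal sketch: fully local transcription + draft-note generation.
    # Assumes openai-whisper and llama-cpp-python are installed and a GGUF
    # model file is stored locally; paths are hypothetical.
    import whisper
    from llama_cpp import Llama

    # 1. Speech-to-text, entirely on this machine.
    transcript = whisper.load_model("base").transcribe("session.wav")["text"]

    # 2. Summarize with a local LLM (any instruction-tuned GGUF model).
    llm = Llama(model_path="local-model.gguf")
    draft = llm(
        "Summarize the following session transcript as a draft clinical note:\n\n"
        + transcript,
        max_tokens=512,
    )["choices"][0]["text"]

    print(draft)  # still only a draft: the provider must review and correct it
    ```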

  • BlindFrog@lemmy.world · 15 hours ago

    No, but.

    One of my doctors has an assistant nurse (or whatever they’re called in the hierarchy) take notes just so the conversation can be more fluid. She always asks whether that’s okay with me.

    My other doctor types and reads out her notes with me towards the end of my visit to make sure she hasn’t missed anything, and she makes me feel heard and involved.

    No, I wouldn’t consent. Sending my PHI to a third party is unnecessary, and AI data centers are a net negative for the planet. I also wouldn’t trust that the AI service provider isn’t helping themselves to your data plus your doctor’s feedback for further training anyway. Thank god healthcare providers are required to ask before shunting your info off to some third party.

    But, if presented with this, I’d talk to my doctor about the extent to which third-party AI services are already being used in my own healthcare. If I can fully opt out, I’d stay. If I didn’t have a real choice to opt out, and if it were easy to find a new doctor that didn’t use AI services, ~I’d fuck off so fast, like bye felicia, I ain’t dealing with this palantir-esque bullshit just for getting an rx refill~

  • e0qdk@reddthat.com · 19 hours ago

    My medical provider started doing that when I last had a video conference with them, and I declined to allow the use of AI. They took no issue with that – didn’t even bring it up. It’s very unlikely that your provider will care that you declined either. I recommend saving your energy for other problems and dealing with this later in the unlikely event that they do actually make an issue of it.

  • stringere@sh.itjust.works · 22 hours ago

    No. Absolutely not. I cannot trust any current AI model with HIPAA compliance.

    Find another doctor. I just had to fire my therapist, because when I went in for this week’s appointment they were playing some jesus worship service and song. I told her that it was our last session because I no longer had trust in their offices, and added that I had no faith any progress would ever be made after being triggered while waiting to see my therapist. It could have been the receptionist’s choice in music or someone else from their office, but since they do not advertise as a faith-based therapy group, they should have left that shit at home or should expect more of the same from people like me.

    • BanMe@lemmy.world · 18 hours ago

      It’s worth researching a therapist’s credentials: some states allow “pastoral counseling degrees” and so on to be a path to “mental health therapist.” You want an LISW, a licensed social worker. I’m not saying there aren’t weirdos, or that your experience wouldn’t happen with a social worker… just that many folks don’t realize some therapists went to theology classes instead of psychology classes, which is a prime setup for problems.

      • stringere@sh.itjust.works · 9 hours ago

        I didn’t know about the theology-to-therapist route. My therapist herself never indicated their faith leanings, so credit due to them there. They have a master’s and are an LPC. As I mentioned before, it’s entirely possible she had nothing to do with (nor endorses) the music choice in the building, but tacit endorsement by not stopping it from happening is enough for me to leave.

        Maybe, just maybe, let’s not play music from the loudest hate group in the USA in the lobby of the therapist office.

      • Tollana1234567@lemmy.today · 16 hours ago (edited)

        Probably better to look for a licensed psychologist/psychiatrist, or someone with a PsyD. Don’t really want to take the risk when someone isn’t in the field.

  • slazer2au@lemmy.world · 1 day ago

    I would nope the fuck out and change doctors. A regurgitation machine prone to hallucinations has no place in medical care.

    • oneser@lemmy.zip · 1 day ago

      If this was for a GP, I would agree with this stance. But a good, fitting and competent mental health professional can be harder to find.

      • Zos_Kia@jlai.lu · 14 hours ago

        By god, they’re going to make OP change doctors just because they hate “le stochastic parrot”. And OP is probably in the US, which makes the whole thing even crueller.

        Literally a horde of teenagers playing with a bipolar’s head because they have big feelings about stuff.

        And all this for a fucking note taking app Jesus Christ. Yeah sure OP is probably risking their mental health in the process but who gives a shit about that when you have an occasion to proclaim that le AI bad.

        • Washedupcynic@lemmy.ca (OP) · 3 hours ago

          I am concerned about what is done with the data generated via the saved recording and transcription. Yes, I live in the USA. Our government is currently kidnapping people off the street and disappearing them for being brown. They are attempting to build databases identifying trans people. So yeah, I’m concerned that the third party my doctor is using, MYIO, could sell the data/transcripts, and before I know it I end up on a government list and disappeared because I am gay. Could theft of the data generated by the app lead to identity theft? MYIO says the videos aren’t stored long term and everything is encrypted; but companies lie, and the monetary penalties are just rolled into the cost of doing business. This isn’t a note-taking app, there are already plenty of transcribers on the market. This is something entirely different.

          I’ve already had my identity stolen and credit cards opened in my name.

          And no one is going to “MAKE” me change doctors. That’s something I decide for myself.

        • WhyJiffie@sh.itjust.works · 12 hours ago

          You seem to have no clue about the problem at hand. The lesser issue is that the AI transcriber could hallucinate. The worse problem, which is irreversible, is that the treatment session and every private detail that gets discussed is funneled to at-best-questionable companies who will do whatever they want with your private information. Once that has happened, you can’t just make them delete what they stored in the process; it is completely unverifiable what they do beyond offering the original service. Everything that was said in the session will not stay between the two of you.
          Accepting this unknowingly is very dangerous. Accepting it knowingly will alter what you say, and the results with it, like going to a therapist you know personally, which is not allowed for very good reasons.

          • Zos_Kia@jlai.lu · 11 hours ago

            You think therapists and doctors in general don’t use Docs or Notes services that are hosted or backed up in the cloud? You think having your medical data leaked to tech companies is new? Just because the note transcription app is AI doesn’t make it magically worse. In fact it makes the data harder to access, as you’d need to re-infer the whole enchilada if you want to mine it (as opposed to, say, Google Drive, which can just run a SQL query on your data and get it structured and ready to use).

            It’s nice that mental health is so inconsequential to you that you can balance it against privacy purity politics. It’s really cool for you that you’re in this position of privilege. It’s not cool to be pushing on someone with a clinical condition in a way that will probably leave them worse off, in a country with absolutely no mental health safety net. Just like antivax rhetoric, it’s coated in fake concern, but you’re playing a dangerous game with someone else’s life, and you’re cool with it because you’re insulated from the consequences.

            You guys really are a pure product of those amoral hyper-individualistic times.

      • applebusch@lemmy.blahaj.zone · 19 hours ago

        That’s the last fucking profession that should be using LLMs… People can gaslight themselves with chatbots without paying for a trusted professional to reinforce that bullshit.

        • oneser@lemmy.zip · 9 hours ago (edited)

          OP didn’t state this clearly, but I went and looked. The app is not for replacing consults, only billing etc., so I’d put it in the “annoying, but not world ending” category.

      • phoenixarise@lemmy.world · 20 hours ago (edited)

        I don’t believe that. They just don’t want to pay them what they’re worth. Machines don’t ask for days off or health insurance; that’s their rationale. I hope they go out of business.

    • originalucifer@moist.catsweat.com · 1 day ago

      you do know at some point the whole ‘hallucinations’ line is going to be as fresh as calling things ‘woke’, right?

      the ‘does this thing have ai in it’ is already a fucking blur as businesses link to each other via private and public APIs… healthcare is no different.

      these things are already in place in many places. if youre a part of any nation wide health services, youre already impacted.

      its like the fact that a huge % of our GDP is tied to like 10 companies… you cannot live your life in the modern united states without suffering products or services from those 10 companies, full stop. your life with ai will look the same.

      can you work hard to avoid shit and cry about it? yep. yep you can… but thats about it.

      • OwOarchist@pawb.social · 1 day ago (edited)

        > you do know at some point the whole ‘hallucinations’ line is going to be as fresh as calling things ‘woke’, right?

        The truth doesn’t care whether it’s “fresh” or not.

        As long as AI still hallucinates, it will be useful for entertainment purposes only and never for anything as serious as healthcare.

        > your life with ai will look the same.

        lol, tell that to every other business fad that has come and gone.

        The AI bubble will pop, the economy will crash, and in the long run, that will be a good thing.

        • THE_GR8_MIKE@lemmy.world · 1 day ago

          Dude must be some MBA crypto bro AI slop jock. His grammar isn’t good enough to be one of those idiot CEOs who just learned what artificial intelligence is. Maybe he’s a shareholder for one of those soul-less companies. Probably not that either though. Perhaps he’s just a terrible artist or programmer who uses AI slop for all of his works of shart. The possibilities really are endless these days.

          • originalucifer@moist.catsweat.com · 1 day ago

            im an ex corp drone whose value was replacing humans with automation.

            it sucks, it already exists, it will happen more. llms are already in these pipelines and theres nothing any of us can do to avoid it.

            im not saying its good. im not saying it should be. im saying, it exists right now cuz ive been a part of it.

                • phoenixarise@lemmy.world · 21 hours ago (edited)

                  Oh okay, so your only value is the pursuit of material bullshit and not the well being of human beings. Good luck getting AI to pay for your shitty wares when nobody makes money to afford them. 🤭

                  I have no idea what it’s like to be you, and I’m glad I don’t. Enjoy your cold empty heart! 🙂

      • Janx@piefed.social · 1 day ago (edited)

        It’s almost like the very businesses that creamed their pants about being able to replace workers and reap endless “blue ocean” profits exaggerated, lied, and forced AI into every. single. product. That’s not consumers’ fault…

        • originalucifer@moist.catsweat.com · 1 day ago

          i cant understand why people are oblivious to the multi-faced war-front that is AI.

          theres the shit you hear about and see every day (oh look copilot shit the bed! claude cant add! teehehee look at all the extra fingers!) and then theres the shit that is actually being implemented in process models all over the place in nearly every department. from inventory to healthcare analysis to customer service, this shit is in daily use now … and you cannot avoid it.

          ai is just an api call away and software companies suck.

      • kescusay@lemmy.world · 1 day ago

        Ummm, hallucinations are literally how LLMs work. Everything they generate is confabulation, though sometimes it’s useful confabulation.

        • timbuck2themoon@sh.itjust.works · 6 hours ago

          I think we should stop using their terms.

          Llms spout BULLSHIT half the time. They don’t hallucinate. They confidently state incorrect garbage.

      • slazer2au@lemmy.world · 1 day ago

        > you cannot live your life in the modern united states without suffering products or services from those 10 companies

        Well, it’s good that I don’t live there.

  • chicken@lemmy.dbzer0.com · 22 hours ago

    I would only be ok with an AI note taking app if the model is running on hardware the doctor physically has in their office because otherwise any privacy assurances don’t mean that much.

    • Hacksaw@lemmy.ca · 19 hours ago

      It takes more time to make sure the summary isn’t a hallucination than it takes to just write notes.

      Every waiter can do it live, without AI, for 10 drunk idiots at the same time. Every doctor I’ve ever known just takes notes as we go without ever slowing the interaction down.

      This tool cannot help and can only harm, using AI in medicine practically violates the Hippocratic oath doctors take.

      • chicken@lemmy.dbzer0.com · 18 hours ago

        Maybe there’s an argument to be made there, but to me the privacy issues are so much more of a thing to worry about: all of the source material being sent to the servers of some AI company, that company having the ability to put their thumb on the output, plus the risk of the data being misused or leaked. Those issues exist independently of how doctors use these tools and what risks those particular uses may have, which honestly I know nothing about, definitely not enough to argue that it is ok; it just isn’t something I would personally stress over as much as the other stuff.

  • I’m a therapist and I use SimplePractice for my practice. They recently added an AI note taker that is HIPAA compliant, and the consent form they suggest giving to clients sounds okay, but I read the actual privacy policy and the language used is way too vague for me to trust, so I don’t use it.

    In your position, I would:

    1. Ask if you have to sign that, or if you can opt out. Your specific provider may be open to just not enabling the AI note taker for your profile, and they may be able to remove that form from the app for you on their end. This may not be in their control, but if they’re a good person who cares about you, they’ll make an effort to get it done anyway.

    2. If not, ask for a link to the actual privacy policy and see if it sounds acceptable to you. Not the practice’s Privacy Practices, not the Patient Portal privacy policy, but the actual privacy policy for the AI note taker (whoever you ask might have to do some digging to actually find it)

    • sem@piefed.blahaj.zone · 10 hours ago

      I can share my experience here.

      I initially opted out. I did not want an LLM in charge of summarizing something as important as a medication consultation. I’ve seen the kinds of errors these models make, and the way parts of their training data can make their way into your file without you having any knowledge or agency.

      My provider then came back to me and said it was an error that the original form said you could opt out. Everyone had to sign it, it was HIPAA compliant, it was nothing to worry about, etc.

      I tried explaining my reasons, but they didn’t care. I said that if they couldn’t budge, I would have to change providers. They gave me 90 days of my prescription. My primary care physician agreed to continue my prescription as long as I needed them to. And not long after, I was able to find another psychiatric provider who did not require an AI release.

      Also, my primary care doctor asks if I will allow AI transcription at every office visit. I feel bad that by saying no, she has to do more work typing. But I feel that the harms are too great, the risks too much, to say yes. While I have the choice, I want humans to be end-to-end responsible for what words are in my medical file, not someone pressing “yes” to approve LLM output.