I am informed that a new product called #Mentalyc has entered the market. Its mission is to write psychotherapy notes for clinicians AND to gather a non-identifiable dataset for research into clinical best practices.
I have no firm opinion yet on Mentalyc, but it’s expensive ($39-$69 per month per clinician) and I’d personally need to know a lot more about what’s in that dataset and who is benefiting from it.
So I’m asking the community: what would acceptable ethical and practical criteria be for an AI that writes psychotherapy or medical notes?
Here are MY thoughts so far:
-
REQUIRED: The AI either: 1a) Invents NOTHING and takes 100% of the information in the note from the clinician, or 1b) Prompts the clinician for additional symptoms often present in the condition before writing the note, or 1c) Presents a very clear review page before finalizing the note that lets the clinician approve, delete, or modify anything the AI got creative with and was not explicitly told to include. (For example, in an experiment with Bard, a clinician found that Bard added sleep problems as an invented symptom in a SOAP note for a person with depression and anxiety. That is a non-bizarre addition that makes clinical sense and is quite likely, but it would still have to be approved as valid for the person in question. A rough sketch of such a review step appears after this list.)
-
OPTIONAL: The AI runs on MY computer and does NOT report anything back to the Internet. This will not be on everyone’s list, but I’ve seen too many BAA subcontractors playing loose with the definition of HIPAA (medical privacy), and there is more money to be made in data sales than in clinician subscriptions to an AI.
-
OPTIONAL: Inexpensive (There are several free AI tools emerging.)
-
OPTIONAL: Open Source
-
Inputting data to the AI to write the note must be less work than just writing the note personally. (Maybe a complex tablet-based clickable form? But then, a pretty high percentage of a note can be in clickable-form format anyway.)
-
The AI does NOT record the entire session and then write a note based upon what was said. (It might accept dictation of note directions, sort of like doctors dictate notes to transcribers today.)
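For the technical folks: here is a minimal sketch of what the review step in criterion 1c might look like. This is purely illustrative and takes the crudest possible approach, keyword matching against a hand-made term list; the term list, function name, and example text are all hypothetical, and a real tool would need proper clinical NLP rather than word overlap.

```python
# Minimal sketch of the "approve anything the AI invented" step (criterion 1c).
# Naively flags sentences in the draft note that contain clinical terms the
# clinician never entered. Everything here is a stand-in for illustration.

CLINICAL_TERMS = {
    "depression", "anxiety", "sleep", "insomnia", "appetite",
    "suicidal", "panic", "anhedonia", "fatigue", "concentration",
}

def flag_unsourced_sentences(draft_note: str, clinician_input: str) -> list[str]:
    """Return draft sentences containing clinical terms absent from the
    clinician's own input, so they can be approved, edited, or deleted."""
    input_words = set(clinician_input.lower().split())
    flagged = []
    for sentence in draft_note.split("."):
        words = set(sentence.lower().split())
        invented = (words & CLINICAL_TERMS) - input_words
        if invented:
            flagged.append(sentence.strip())
    return flagged

# Example mirroring the Bard experiment described above:
clinician_input = "depression anxiety low mood worry"
draft_note = "Client reports depression and anxiety. Client also reports sleep problems."
for s in flag_unsourced_sentences(draft_note, clinician_input):
    print("REVIEW:", s)  # -> REVIEW: Client also reports sleep problems
```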
I think I may be envisioning a checkbox-and-drop-down form, along with a space for the clinician to write a few keywords and phrases; the AI (on my laptop) would then take that input and write a note, possibly just a paragraph to go along with the already existing form in the official note. I think. It’s early days in my thinking. A minimal sketch of that pipeline follows.
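Here is a minimal sketch of that form-to-note pipeline, assuming a locally run model via the open-source llama-cpp-python library so nothing leaves the laptop. The form fields and the model file name are hypothetical, invented for illustration.

```python
# Minimal sketch: structured checkbox/drop-down values plus a few free-text
# phrases are assembled into a prompt for a model running locally (via the
# llama-cpp-python library), so no session data goes to the Internet.
from llama_cpp import Llama

form = {
    "presenting_problems": ["depression", "anxiety"],   # checkboxes
    "mood": "dysphoric",                                # drop-down
    "risk": "denies SI/HI",                             # drop-down
    "keywords": "job stress, improving sleep hygiene",  # free text
}

prompt = (
    "Write one paragraph of a psychotherapy progress note using ONLY the "
    "facts below. Do not add any symptom or detail not listed.\n"
    + "\n".join(f"{k}: {v}" for k, v in form.items())
    + "\nNote paragraph:"
)

llm = Llama(model_path="local-model.gguf")  # hypothetical local weights file
result = llm(prompt, max_tokens=200, temperature=0.2)
print(result["choices"][0]["text"].strip())
```

Note that the prompt instructs the model to use ONLY the listed facts (criterion 1a), but the output would still need the review step sketched above (criterion 1c) before going into the chart.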
I have this same discussion set up here: https://mastodon.clinicians-exchange.org/@admin/110153358784312024
You do not have to have a Mastodon account to read it, only to post. This should also get the attention of computer scientists, AI researchers, and other technical folks, as well as counseling professionals.