Bard and ChatGPT have been recognized as the powerful tools they are, but we’ve only scratched the surface. In another great article from Brian Roemmele’s Multiplex, How ChatGPT Type AI May Form A Revolution In Positive Mental Health, he discusses another “SuperPrompt” that reminded me of an original GPT sample: a packing list for Mars. In this case, however, he provides a SuperPrompt for creating a psychologist for a long Mars mission.
I tried out the prompt and it’s wonderful. I created for myself a fictional character with his own problems (I’m still not eager to share my darkest inner secrets with ChatGPT). In interacting with this virtual psychologist, it was clear that it could help my character elicit ideas for self-help. As far as I could tell from the short interaction, it was as good as a human.
There are a few things beyond the content that would be a challenge for an AI but are probably fixable:
- Interfacing with a psychologist through chat is weird. Language communicated through typing is a lot more filtered than through voice. I worry about spelling and capitalization. I edit my input.
- The responses come back way too quickly. I tried to slow down the response time or add a pause, but ChatGPT pushed back. This could be resolved through API tweaking.
- Maybe as a result of the fast responses, I get the sense that the AI is hurried in dealing with me. It feels like a “feel better already and get out of my office” attitude, even though there’s certainly no such intention.
However, above all of these is the knowledge that the responses have no grain of care seeded in them. There is no consciousness that is concerned about my well-being. I need to suspend disbelief and pretend that this is a person. An avatar, or perhaps a personal message from the AI psychologist’s designer, might help set the stage better.
In the final season of Westworld, Caleb (future Jesse Pinkman) interacts with an AI psychologist based on the voice and tone of his best friend. The slow, voice-based interaction is more convincing, but even he ends up ditching it because the machines are using it to manipulate him.
Generative AI-based interactions will definitely lead to a breakthrough in mental health, but they’re a medicine that might need to be administered differently. We might need to pair the AI with a human who reviews and comments on the interactions based on the analysis. Maybe the input from the patient will be through video, and the response can simulate chat with delays (e.g., “Eliza is typing…” messages appearing and disappearing as we await the response).
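The pacing idea above is easy to sketch. Here is a minimal, hypothetical example: `paced_reply` is an assumed helper name, the reply text is canned rather than fetched from any real model API, and the “typing” indicator is just printed to the terminal where a real chat UI would render it.

```python
import time

def paced_reply(text, words_per_minute=45, chunk_size=12):
    """Yield a reply a few words at a time, pausing between chunks to
    mimic a human typist. Hypothetical pacing layer: in a real system
    `text` would come back from the chat model's API."""
    words = text.split()
    # Seconds to "type" one chunk at the given human-like speed.
    seconds_per_chunk = chunk_size / (words_per_minute / 60.0)
    for i in range(0, len(words), chunk_size):
        # Show the indicator, wait, then clear it and emit the chunk.
        print("Eliza is typing…", end="\r", flush=True)
        time.sleep(seconds_per_chunk)
        print(" " * 20, end="\r", flush=True)
        yield " ".join(words[i:i + chunk_size])

# Usage (words_per_minute raised far above 45 so the demo runs quickly):
reply = "It sounds like the isolation is weighing on you. What part of the day feels hardest?"
for chunk in paced_reply(reply, words_per_minute=3000, chunk_size=6):
    print(chunk)
```

The point of the design is that the delay lives entirely on the client side, so it needs no cooperation from the model itself.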
Looking at the Mars scenario, there’s definitely enough time to email the chat back to Earth for a “customized” message from a human psychologist. Even if we go as far as the Voyager spacecraft, we could still get a message back within two days.
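The two-day figure checks out as back-of-envelope arithmetic. Assuming Voyager 1 is roughly 24 billion km away (an approximate mid-2020s figure), the round-trip light delay is:

```python
# Round-trip message delay to roughly Voyager 1's distance.
C_KM_PER_S = 299_792.458          # speed of light
voyager_km = 24e9                 # assumed distance, ~24 billion km

one_way_hours = voyager_km / C_KM_PER_S / 3600   # about 22 hours
round_trip_hours = 2 * one_way_hours             # about 44 hours

print(f"one way: {one_way_hours:.1f} h, round trip: {round_trip_hours:.1f} h")
```

Roughly 44 hours round trip, comfortably inside the two-day window.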
Beyond that, we might need our own Counselor Troi for long journeys to augment our interactions with Data. (Sorry to Trek out.)