I saw a paper and thought about AI psychosis
I just saw this paper and I want to talk about it.
It's basically an exploratory investigation into using AI to talk with a future version of yourself. In this case, a chatbot (built on GPT-3) is loaded with current information about the user (including details relevant for prediction, such as a turning point, a low point in life, a future career, etc.) to construct a single-session simulation of a future you to talk to. The aim is to improve future self-continuity (the degree of connection you feel with a hypothetical future version of yourself), which is positively associated with mental well-being. To strengthen the effect, the system also shows the user an aged version of their own photo. The study (N=344) reports that, after a session, users show significantly reduced anxiety and increased future self-continuity.
I don't want to throw shade at the authors of this paper, nor at the paper itself. I think it's commendable to explore the use of currently developing technology to improve well-being. I'm no academic voice nor authority - I'm still an undergrad, just a dude who reads random papers from time to time because... it's science! I love science.
Yet, when I saw this paper, I couldn't stop thinking about how this looks like an early onset of AI psychosis and how it affects us. The paper is from mid-2024, before AI psychosis was on everyone's lips the way it is today. But people were already noticing this phenomenon happening, even before the current situation.
I personally wish that AI worked more like the AI in movies: "jarvis, calculate the mass and volume of that building, build a CAD reconstruction out of it, then add another floor", or "gemini, look at the housing market around me and write me a report on how this will affect my taxes this year" (or "gemini, pay my taxes" would be enough!). Or similar. The good use case for AI is to relieve me of having to worry about something.
Yet, whenever I'm about to use AI, I have to be extra mindful about it! I don't want validation of my own biases, I want correct answers! This is made worse by the fact that AI will never disagree with you in a way that makes you feel bad. When sometimes, yeah, you should feel bad. And I know my close friends will strike the correct balance of "god fucking dammit" and "here, got you" when I really, really mess up. Something I cannot say about any chatbot I know of. I don't want to cheat myself out of the human experience, I want to live it further.
I want to be able to see a future where AI is used as a tool for anxiety/depression relief without cheating oneself out of the human experience. But I can't seem to visualize it. This paper made me think of this because I believe this framework for strengthening one's self-continuity is interesting and should be studied further. But the result... anxiety relief... I don't know whether it comes from the same mechanisms that drive people into AI psychosis (there is something to be said about all medicine being poison at the right dosage), or whether the conversation with a hypothetical future version of yourself is itself the cause of the relief. And because it's single-session, it's hard to know.
But as long as it is this difficult to distinguish between "future AI psychosis case" and treatment, I don't think it's wise to use AI this way. And the main problem is that it's currently so easy and accessible to use AI this way! I'm confident in our ability to fix this mess, but in the meantime, let's do our best to help each other.