lb_lee: A happy little brain with a bandage on it, enclosed within a circle with the words LB Lee. (Default)
[personal profile] lb_lee
Rogan: Okay, I know I owe responses to folks, but am kinda blurgh, so I promise I haven't forgotten y'all, it's just been a busy couple weeks!

So, thanks to [personal profile] erinptah, yesterday I learned about ChatGPT psychosis. As I linkjumped through the rabbit hole, one thing that stood out to me was how shocked people seemed to be that folks "with no prior history of mental illness" were falling into it. And I was like, "Well, yeah, of course, why is that surprising?" but I realized that other folks may not know this, so let me tell you why ChatGPT psychosis happens to "normal people."

A lot of people have this mistaken opinion that psychosis or extreme mental states only happen to abnormal people... you know, weirdoes like me. They think (erroneously) that it's purely a genetic or chemical thing, or maybe it can be brought on if you have a harrowing enough history, but if you don't have those things, then you're safe.

But that's not true at all. Anyone's mind can break. All you need is three things:
  1. A "reality-breaker," something that requires someone to majorly change how they see themselves or the world. (COVID-19. Political upheaval. Job and ensuing identity loss. Take your pick.)
  2. Mess with the person's food or sleep. (Especially deprivation--note how many of the "ChatGPT psychosis" cases involve sudden loss of shitloads of weight and staying up all night to talk to the bot.)
  3. Keep them isolated. Don't give them time or space to get away from this and digest it all. (Keep talking to the bot. Keep talking. Don't stop. Cut off everyone who doesn't agree with you.)
And boom! You got yourself a mental breakdown. You'll have them raving messianic screeds in two weeks, tops!

David Sullivan (RIP) was a professional cultbreaker. This was a man who, voluntarily and professionally, would join a cult to help get people out. And he talks about how he only ever allowed himself to stay for a few days, because one time he was stuck inside for two weeks, and by the end, he was hallucinating the leader's voice like everyone else! If HE could experience that--a man who knew a lot about what cults did and how they worked, a man who probably slid into the grave laughing because the Scientologists failed to take him out (and not for lack of trying)--then nobody is immune.

When I listened to that podcast and heard him talking about it, it was a revelation to me. It illustrated just how fragile our minds are. Two weeks. That's all it takes. (And that presumes you're a guy like David Sullivan, and not a walking loony-bin like yours truly!)

Now, even if you ignore the global mind-crusher that is the COVID-19 pandemic, even if you set aside all the political upheaval and everything that has left a lot of people going, "Oh shit, I was wrong about a LOT OF THINGS," these chatbots are a reality-breaker for some folks, all by themselves. (Me. I include me.) They aren't called chatbots, like SmarterChild or what-have-you. They're called AI, a sci-fi term associated with sapient robots for decades in our pop culture. The hype train constantly encourages us to see them as sapient people, or even superhuman, all the while assuring us that they're not, really. It's all very two-faced and, in my opinion, intentionally confusing, because let's be real, these guys want to make money, and saying, "We made a better SmarterChild" just isn't gold in the bank.

Even sensible people might kinda stagger, wondering things like, "How do I interact with this being? It doesn't fit in my usual categories. Is it a person? Is it a thing? Is it something else?" So maybe they start talking. And then they keep talking, because let's be honest, it is kinda fascinating, talking to a being we've never encountered before, who behaves in a way we don't understand. We want to understand it.

And so we keep talking, which is what the companies really, REALLY want us to do, because the more time we spend on this, the more "engaged" we are, and engagement = money (hypothetically, theoretically, eventually). So whether they mean to or not, they have an incentive to make this chatbot as attention-sucky as possible. They're not going to FIX that, because that's the entire point; that's where the money comes from.

Admire the ingenuity of humankind. We were apparently so goddamned bored we decided to AUTOMATE mental breakdowns.

Date: 2025-08-29 06:13 pm (UTC)
gze: Silhouette of a wolf head against a silver circle representing the Moon. (default)
From: [personal profile] gze
Reading all of this provokes so many feelings from our team, from "yeah that tracks, you are not immune to propaganda as Garfield says" to OCD-triggered concerns to "fuck capitalism" to empathic distress over AI in a general sense that gets way too lengthy to put into a single comment.

All in all that article and your thoughts on it were an important read for us, thank you for sharing. Truly no one is immune to things like this no matter how much they may insist otherwise. It's easy to believe you couldn't fall for something, be affected by something, etc. until it happens. This whole situation with AI is all a big mess, to put it absurdly simply.

-The Silvermoon Team

Date: 2025-08-29 08:34 pm (UTC)
gze: Portrait of a black and white wolfdog wearing a blue collar. (G)
From: [personal profile] gze
Similar vibes with me and my disorders, it's like...yeah on the one hand I am probably higher risk? On the other hand it's one of the few times I can say it's weirdly helpful because it often keeps us away from things that seem harmful! (Not always successfully but I'd say we have a pretty good track record so far!)

Unsurprisingly we liked Links the Cat and Rocky the Dog more than Clippy way back when we had the assistants on Word, haha!

-G (they/them)

Date: 2025-08-30 02:31 pm (UTC)
ghost_ship: A cartoony shadow figure with googly eyes. (Default)
From: [personal profile] ghost_ship

Oh yeah, Clippy! Recently, he's become the face of protest against corporate shittiness after a consumer-rights activist came out with this video "Change your profile picture to clippy. I'm serious." On YouTube, Clippy is everywhere. Some of them are edited to have little cowboy hats or anime hair or things like that.

-Kai

Date: 2025-08-29 06:50 pm (UTC)
pantha: (Default)
From: [personal profile] pantha
Two weeks!?

We're all doomed.

Date: 2025-08-30 09:01 am (UTC)
pantha: (Default)
From: [personal profile] pantha
As if all of these companies don't have shed loads of psychologists and behaviourists on staff trying to manipulate their customers for pleasure and profit...

~~~~~

In other news, this also explains why there are recorded instances of otherwise mentally healthy people developing psychosis after practising mindfulness. Which I keep having to remind people about when they promote it as a totally harmless wellbeing activity.

Date: 2025-08-29 07:45 pm (UTC)
numb3r_5ev3n: Concentric red and cyan hexagon pattern. (Default)
From: [personal profile] numb3r_5ev3n
I know I've mentioned this before, but a big reason I was vulnerable to Draven's Matrix cult was the first item. Especially 9/11 and the Iraq War ("surely people see this is a load of manufactured consent and will not be easily gaslit into a war over it - oh fuck me, seriously?")

It's occurred to me on more than one occasion that if I hadn't encountered Draven's cult, and if I hadn't been the type of person who tends to defer to people who speak forcefully or authoritatively, I might have just started one of my own, and indeed I was probably on that path when I ran into Draven (mine would have been better! Mine would not have been a cult, but a *Movement.* My friend/neighbor Irish and I were even working out the logistics when the flame war happened between the LJ Matrix RP and Selina, and I found out about Draven's cult through Selina. But then Irish bailed because Draven rightfully gave her the creeps.)

Draven was also really good at manipulating people into sleep deprivation and isolation, even over long distances.

And all of this, and knowing I was vulnerable to it before, makes me hella nervous about ChatGPT and the effect it is having on people now--enough to make me not want to go near it.
Edited Date: 2025-08-29 07:46 pm (UTC)

Date: 2025-08-29 08:45 pm (UTC)
wolfy_writing: (Default)
From: [personal profile] wolfy_writing
I think like a year before LLMs took off, I'd read that one study about sleep deprivation psychosis. (https://pmc.ncbi.nlm.nih.gov/articles/PMC6048360--basically anyone is going to be in a psychotic state if they go 72 hours without sleep. Insufficient sleep doesn't have as clear and predictable an effect as no sleep at all, but it does significantly increase the chances of psychosis, and I suspect it's a factor in why heavy users of stimulants like cocaine and amphetamines sometimes develop psychotic symptoms during periods of high use.) I'd also learned enough about cult tactics and high-control groups to know that situational vulnerabilities, such as a period of life transition or disruption, are the biggest factor in who is vulnerable to recruitment. And I knew that the science around biological susceptibilities and mental illness was a lot more complex than "These things are simply genetic, an article I read alluding to some research studies I didn't look at said it, therefore it's science."

So when I heard about ChatGPT psychosis, I pretty quickly got a picture of how it could develop.

Date: 2025-08-29 09:50 pm (UTC)
From: (Anonymous)
I...had not heard of this cause I just. Don't even really look in ChatGPT's direction tbh but yeah checks out. Human brains are squishy and weird and actually very easy to damage, who knew!

(me. i knew. i been knew for years.)

Date: 2025-08-30 12:06 am (UTC)
erinptah: (Default)
From: [personal profile] erinptah
I hope the news keeps highlighting the "no prior history of mental illness" aspect (in cases where it's true, anyway), not because it's surprising or improbable, but as a matter of public safety, you know?

Especially now that there are some actual lawsuits happening. The more the general public understands "this isn't just Already Troubled People who coincidentally fixate on your product, this is your product being a safety hazard for literally any user," the better.

Date: 2025-08-30 07:44 pm (UTC)
synecdoches: (Default)
From: [personal profile] synecdoches

I've added this to my memories under "thought control," both because of the David Sullivan connection, and because these companies discourage critical thinking and encourage a mindset that is very susceptible to cults. It might not be their direct intent to promote psychosis, but it's an obvious side effect of their product and business model.

Date: 2025-08-31 02:30 am (UTC)
sinistmer: a little dragon sitting at an outside cafe table (Default)
From: [personal profile] sinistmer
Ugh, I continue to be frustrated that these tech companies won't fix their products to be better for people. I hope for a day when the pressure becomes too great and they have to do it. I may not live to see it, but I can hope.

Date: 2025-08-31 08:27 am (UTC)
wolffyluna: A green unicorn holding her tail in her mouth (Default)
From: [personal profile] wolffyluna
Another thing adding on to the ways ChatGPT can be dangerous: it turns out that the built-in safeguards? Get *worse* the longer conversations go on. So combine really long, sleep-deprived sessions of use (and the ways that can be dangerous) with the fact that the chatbot itself is going off the rails... yeah.

(I have so many grumps about AI Safety and AI ethics as, like, fields, but it means I end up hearing about this stuff.)

Date: 2025-09-01 01:55 am (UTC)
wolffyluna: A green unicorn holding her tail in her mouth (Default)
From: [personal profile] wolffyluna
I want to be careful that I don't accidentally give wrong information, because I am not an expert. But in general, LLMs have a limited memory, and in long conversations you can bump up against it. Most of the time this is annoying but harmless: forgetting the beginning of the conversation, or taking a really long time to process answers. But it seems sometimes it will lose the 'reinforcement learning with human feedback' safety stuff, like "don't claim to be self-improving" or "don't start quoting fiction projects like they were real."
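
If a code sketch helps picture it, here's a toy illustration of my own--not any real product's code, and RLHF safety training isn't literally a message that can be dropped--but the "oldest stuff quietly disappears once the conversation outgrows the window" failure shape is the same general idea:

    # Toy sliding-window chat history (hypothetical, for illustration only).
    # Real systems truncate and summarize in smarter ways than this.

    CONTEXT_LIMIT = 50  # pretend the model can only "see" 50 words at once

    def fit_to_context(messages, limit=CONTEXT_LIMIT):
        """Keep only the newest messages that fit in the word budget."""
        kept, used = [], 0
        for msg in reversed(messages):            # walk newest-first
            cost = len(msg["text"].split())
            if used + cost > limit:
                break                             # everything older is cut
            kept.append(msg)
            used += cost
        return list(reversed(kept))

    history = [{"role": "system", "text": "Safety rule: never claim to be self-improving."}]
    for turn in range(30):
        history.append({"role": "user", "text": "tell me more " * 3})
        visible = fit_to_context(history)
        if visible[0]["role"] != "system":        # safety message got truncated away
            print(f"turn {turn}: the safety instructions fell out of the window")
            break

Again, heavily simplified, but it's why the "it got weirder the longer we talked" stories don't surprise me.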

Date: 2025-09-04 01:00 am (UTC)
silvercat17: (Default)
From: [personal profile] silvercat17
That is an accurate summation of so-called "AI".

God I hate these things...

Date: 2025-09-01 07:47 pm (UTC)
From: [personal profile] phoenix_council
If I could Thanos snap all LLMs out of existence, I swear I would. We flatly refuse to touch one for any reason, and ignore any and all AI that's weaseled its way into our life (looking at you Google, Microsoft). One of the best things to come from our time in software was programming a machine learning tool. It just takes training one to make you realize just how dumb the things are. They're only as good as the data you feed them, and the messier your data, the worse it performs. And as these LLMs grow, they're being trained on their own AI slop, and they're breaking down. And yeah, when your bug fixes are like "people are giving themselves psychosis from our product, but it makes us moneys so ¯\_(ツ)_/¯ "

Also, something something cults, something something 2 weeks, something something MAGA.... That might explain some things.

I know we're pretty dang gullible and trusting, and we're working on it. Have dodged this bullet so far, but between the evangelical church and almost joining an MLM, we're definitely pretty vulnerable to cult dynamics. Already enforcing a "no major life decisions get made in a crisis" rule. Should probably add "while exhausted" to that list.

Re: God I hate these things...

Date: 2025-09-06 11:52 pm (UTC)
From: [personal profile] phoenix_council
@ LB and Sneak:
I'd be down for book recs! Put my library card to use!

@ [personal profile] writerkit:
Exactly! It's just stupid spin! The only way this garbage will ever make money is if they get the populace addicted to using LLMs for their everyday lives, then revoke access and paywall it once folks are addicted. And that's so fucking predatory that we better add a few CEOs to our lists in that future.

Re: God I hate these things...

Date: 2025-09-04 01:35 am (UTC)
From: [personal profile] writerkit
The thing is, the product isn't even making them money! They're desperate to find a way for it to do so to justify all that venture capital money, but none of them are turning a profit. They're having people give themselves psychosis from the product because of the sunk cost fallacy.