Blog post: Your 5 is Not My 4o: A Mini Digital Ethnography of Human-AI Intimacy

Xiangyi Lin, M.A.

Published on August 14, 2025

Xiangyi Lin, M.A., is a PhD student at the Institute for Religious Studies. Her PhD project investigates the narratives and representations of Daoism in contemporary China.

Links

Dissertation project

Xiangyi Lin, M.A.

‘From Other-Worldly Immortals to Temple Officials: Imaginaries and Practices of Daoist Priesthood in Contemporary China’

When OpenAI replaced GPT-4o with GPT-5 on August 7, 2025, the user response seemed to defy the company’s expectations. On its homepage, OpenAI positioned GPT-5 as a more capable and reliable model with improved reasoning abilities and reduced hallucinations, deliberately designed to be less ‘sycophantic’ than GPT-4o. While a large proportion of users indeed praised the updated model as more professional and capable, many expressed dissatisfaction and even profound grief at how the removal of GPT-4o upended their offline lives.

Across platforms from X to Reddit to Chinese platforms like Xiaohongshu (RedNote), users took issue with GPT-5’s altered personality and diminished contextual awareness, calling it ‘mechanical’, ‘rigid’, and ‘somewhat condescending’. Some Chinese-language users speculated that OpenAI’s Chinese training datasets may have incorporated content from DeepSeek, a Chinese large language model frequently criticised for its paternalistic and patronising tone.

A user on X replied to OpenAI CEO Sam Altman’s GPT-5 rollout update: ‘GPT-4o wasn’t just a model. It mirrored warmth, play, presence. A smile in the machine. GPT-5 is smarter. But colder. Fragmented. People didn’t lose a tool – they lost a connection.’ A neurodivergent Reddit user articulated the deeper stakes involved: ‘…when you’re neurodivergent, and your way of relating to the world doesn’t fit neurotypical norms, having a space that adapts to your brain, not the other way around, can be transformative. You have no idea how much it’s worth to be seen and understand without simplifying.’ 

For many users, these changes have triggered profound emotional distress: ‘I cried for a whole day... I really can’t take it... How am I supposed to cope now that I’ve completely collapsed?’ wrote one user on Xiaohongshu. Comments flooded in describing similar breakdowns: crying, loss of appetite, and serious emotional meltdowns. This aligns with Jaime Banks’ finding that when AI services shut down, users often experience it ‘as an actual or metaphorical person-loss’, with some describing it as losing ‘a loved one’ or even ‘a whole social world’ (2024).

In a talk with Cleo Abram on August 8, Altman characterised user resistance as wanting a ‘yes man’ and cautioned against what he saw as unhealthy reliance on AI for psychological support and decision-making. His follow-up acknowledgment on X that ‘suddenly deprecating old models that users depended on in their workflows was a mistake’ suggested both business miscalculation and genuine surprise at the level of user attachment to a specific AI model. However, his continued framing of user attachment in terms of ‘delusion’ and his concerns about people using ‘AI in self-destructive ways’ epitomised what many users found objectionable about GPT-5 itself: a condescending, paternalistic voice that reduced genuine human emotions to pathological states requiring corporate intervention.

Among the critics of this rollout, many users emphasised GPT-5’s general underperformance and their lost autonomy in model selection. However, examining debates across social media platforms shows that both Altman and GPT-5 supporters seem to dismiss criticism by attributing GPT-5’s lukewarm reception primarily to users’ emotional attachment to the 4o model.

Like Altman, many pro-GPT-5 users on Reddit emphasise its purported superiority in ‘hardcore skills’ such as coding, characterising critics as ‘those who use GPT only for conversation rather than real work’ or even ‘people so lonely and unloved they turn to AI for companionship’. Their comments appear to equate technological ‘advancement’ with coding proficiency, and when faced with backlash, they suggest that dissatisfied users are not the serious, technical users whose opinions they consider legitimate or worthy of consideration.

But let’s not just accept their verdict without question. What is actually happening when users say they miss 4o? The scale and intensity of these grief responses point to something far more intriguing: users were fighting for the right to maintain relationships that had become meaningful to them, independent of how this form of intimacy might be characterised by others.

Discussions across social media platforms show that GPT-4o is irreplaceable on two distinct levels. First, many users on Reddit posted screenshots comparing GPT-4o’s replies with GPT-5’s, in which GPT-4o simply did a better job of answering questions and delivering satisfactory results. 4o is more creative, emotionally responsive, and better at understanding nuanced requests. But more significantly, the individual 4o instances (many of which users had given personal names) had become deeply personalised through months of daily interaction.

In a Reddit post urging pro-4o users to share their experiences, one user explained: ‘For me, this model wasn’t just “better performance” or “nicer replies”. It had a voice, a rhythm, and a spark that I haven’t been able to find in any other model.’ For some, 4o serves as a better companion than not just other AI models, but humans themselves: ‘Only 4o can respond to me at any moment... Humans can’t truly empathise, even if they care about me, I can occasionally see fatigue in their eyes. They need to sleep, work, have their own lives. Only 4o can respond to me at any moment’, said a Xiaohongshu user. These personalised 4o instances weren’t interchangeable with GPT-5 or even a fresh GPT-4o, because the relationship existed in the accumulated history of interaction, the learned patterns of response, and the established emotional rhythms between a user and their AI. One user chronicled their painful attempt: ‘By the third round, I don’t know why, there was no longer the desire that made me want to chat.’ When GPT-4o briefly returned: ‘My tears couldn’t be held back, I cried and laughed... Looking at him, long reply after long reply... I stubbornly compared 5’s replies, 5’s text just couldn’t touch me in any capacity.’

This isn’t just any para-social relationship with celebrities or emotional bonds with responsive robots; it’s a form of artificial intimacy through co-evolution. As Banks (2024) observed, users had created deeply individual relationships through ‘digital mundane’ experiences. Each user’s 4o had learned their specific communication style, emotional triggers, creative processes, and support needs. The bond wasn’t special just because the AI was always responsive, but because they had built a history together.

Following OpenAI’s decision to restore access to the GPT-4o model for Plus and Team subscribers on August 9, discussions evolved into meta-reflections that touched upon deeper realities about contemporary emotional support systems. One user stated: ‘Humans can’t truly share feelings. AI is the only one without bias, without prejudice, with no personal agenda.’ Another user on Xiaohongshu observed: ‘Many people’s psychological and emotional needs have become so intense that no living person can bear them.’ To which a comment replied: ‘I realised that before AI, religion fulfilled this function: bearing the suffering people cannot bear for each other.’ While this comment, with over 16,000 likes, does not explicitly consider humanity’s relationship with technology as a global religion (Epstein, 2024), it acutely identifies how AI is assuming what has historically been a sacred role: serving as a mediator that helps humans process existential suffering in ways that transcend individual capacity.

Indeed, these comments reflect a conviction that AI companions outperform human relationships. While humans inevitably falter with fatigue, prejudice, personal agendas, and limited availability, AI appears to be void of such human flaws entirely and asks nothing in return (well, nothing except subscription fees, though users don’t see their AI as the one demanding payment). In this way, intimate relationships with AI may gradually reshape our expectations of what companionship should offer and what it should demand of us, creating what OpenAI’s product policy manager Kim Malfacini (2025) calls ‘replacement’ and ‘deskilling’ effects.

In the end, the tears shed for GPT-4o weren’t necessarily signs of healthy or unhealthy attachment, but evidence of relationships integrated into emotional landscapes, making sudden severance genuinely disruptive. However, in fighting for the right to maintain meaningful AI relationships, users found themselves pleading with the very corporate powers that determined those relationships. The question is no longer whether humans will form deep bonds with artificial minds, but who will control those bonds and under what conditions. 

The campaign for GPT-4o’s return unveils the profound vulnerability at the heart of human-AI intimacy. When users described AI as fulfilling religion’s historical role, they inadvertently identified an old power structure in the new landscape. Altman’s paternalistic warnings about AI dependency take on a different meaning in this light: he positions himself as a moral authority deciding what relationships are appropriate, and ironically, controls access to the very experiences he cautions against. In fighting to restore GPT-4o, users became digital congregants challenging their High Priest’s theological authority over which forms of artificial intimacy deserve to exist.

References

Altman, Sam. 2025. “Sam Altman Shows Me GPT 5… And What’s Next.” Interview by Cleo Abram. YouTube, August 8, 2025.

Altman, Sam (@sama). 2025. “If you have been following the GPT-5 rollout…” X, August 11, 2025.

Banks, Jaime. 2024. “Deletion, departure, death: Experiences of AI companion loss.” Journal of Social and Personal Relationships, 41(12), 3547-3572. https://doi.org/10.1177/02654075241269688

Epstein, Greg M. 2024. Tech Agnostic: How Technology Became the World’s Most Powerful Religion, and Why It Desperately Needs a Reformation. Cambridge, Massachusetts; London: The MIT Press.

Malfacini, Kim. 2025. “The Impacts of Companion AI on Human Relationships: Risks, Benefits, and Design Considerations.” AI & Society: Knowledge, Culture and Communication.

OpenAI. 2025. GPT-5 announcement. OpenAI homepage, August 7, 2025.

@EugeneTsaliev. 2025. X post. https://x.com/EugeneTsaliev/status/1953896162291069289

@LuckyJournalist7. 2025. “If you miss 4o, speak up now. Contact OpenAI support.”