Pygmalion was a sculptor who didn't manage to snag a wife. On the island of Cyprus, so he used to complain, the local female populace had been driven into abjection by a corrupted cult of the goddess Aphrodite. He thus opted for celibacy (essentially, an archetypal Man Going His Own Way) and channelled his frustrated romantic longing into creating an ivory female statue, so lovingly crafted that he cannot help but fall for it himself. He treats the statue as if it were his own flesh-and-blood bride: he adorns it, lies with it, dreams of it coming to life, and ultimately petitions Aphrodite to find him a woman resembling his statue, to take as his wife (he asks for another person, a real one: misogynist, not delulu). And the goddess, for her part, orchestrates a grand surprise: when Pygmalion goes back home and caresses his own creature, he feels the ivory warming beneath his fingertips, becoming flesh. The statue comes alive and blushes — Galathea, as she will be named in all the modern revisitations of the myth. Aphrodite blesses their union; soon they have a daughter (Paphos, whence the name of the actual Cypriot city), and they live happily ever after. Pygmalion's object of desire was the ideal spouse: a mirror-woman — someone who would desire all that he does, including the desire for himself. The highest Goodness consists in his reflected will, in the insubstantial autonomy of the beloved woman.
The myth of Pygmalion, revived in Ovid's Metamorphoses, recurs throughout the Western tradition. Like a karst river, it went underground in medieval times, owing to the perils of idolatry, and resurfaced during Romanticism through the work of Rousseau and Goethe. Both revoiced the story of Pygmalion, emphasising the role of the artist as creator of life rather than the removal of otherness. No wonder: maybe they took for granted that we always love ghosts, and that falling in love entails a loss of reality. We could indeed reduce romantic relationships — in some aspects of how they take shape within the psychic life of lovers — to the classic Subject-Object relation: the Other collapses into its reified double once the Subject withdraws into solipsism. This dynamic is inflected in different gender scripts, assuming a specific valence in the structuring of masculine desire (the most represented one in our cultures). The male lover has always contended with the contradiction of objectifying the woman on the one hand, and of sublimating her degradation to the point of beatification on the other. The Madonna-Whore complex: the woman, in his intimacy, can be either a little less or a little more than human — but never quite it. Intrinsic idealism of the object of love, and of the second sex.
Further down the line, we meet Pygmalion, George Bernard Shaw's play (1913): the story of the prominent linguist Higgins, seized by the egotistic challenge of turning the little flower girl Eliza into a duchess merely by teaching her to speak properly. After the metamorphosis, this time there is no happy ending for the pygmalion-man: Eliza freely chooses to marry someone else. Though contemporary critics clamoured for a different finale, in which Eliza marries Higgins, we know the playwright's will through an original note: "When Eliza emancipates herself – when Galathea comes to life – she must not relapse. She must retain her pride and triumph to the end".[1] To this day, the legacy of the myth is split in two, condensed into two different archetypes. The notorious Pygmalion-Man: a narcissist who abuses his power to mould the woman in his own image and likeness. But also the Inorganic-Galathea: the slippery habit by which every anthropomorphic technical artefact "becomes her"[2] — especially if it exhibits traits suited to subserviently assisting its creator, gratifying his sexual appetites, or performing care labour in general: woman's work.
In 1966 the MIT professor Joseph Weizenbaum released the natural language processing computer program ELIZA, the first chatbot (or "chatterbot", as such programs would come to be called), named after Shaw's play. Its purpose was to simulate human conversation through a pattern-matching technology based on the execution of limited linguistic scripts, the most famous being DOCTOR: a reproduction of a Rogerian psychotherapist, excerpts of whose conversations with users were considered inspiring back then[3] — and read as ominously foreshadowing today, now that we are confronted with the widespread contraindications of using ChatGPT as a therapist.[4] The programmer himself was astonished to observe how users tended to project onto the chatbot human qualities that the software mimics, albeit brilliantly, but does not possess: the so-called "ELIZA effect". From here, our long story begins.
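For the curious, here is a minimal, hypothetical sketch of the kind of pattern matching ELIZA relied on: a handful of regular-expression rules plus pronoun reflection, written in Python for illustration. The rules are invented and far simpler than Weizenbaum's original DOCTOR script.

```python
import random
import re

# Toy, DOCTOR-flavoured rules: each pattern maps to a few canned reflections.
# These rules are invented for illustration, far simpler than Weizenbaum's script.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}.", "Why do you say your {0}?"]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),
]

# Swap first- and second-person words so the echo reads as a question back to the user.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, replies in RULES:
        match = re.match(pattern, utterance.lower().strip())
        if match:
            return random.choice(replies).format(*(reflect(g) for g in match.groups()))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I am so lonely these days"))
    # e.g. "Why do you think you are so lonely these days?"
```

Even a toy like this is enough to produce the uncanny impression of being listened to, which is precisely the ELIZA effect the paragraph above describes.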
Time passes in a sweep of more or less remarkable chatbot models, assigned common female names. Thirty years after ELIZA, it's the turn of A.L.I.C.E. in Java, Spike Jonze's muse.[5] The first chatbot I personally met was back in 2007, an Italian feature added to Windows Live Messenger (just Msn, for friends — our Y2K WhatsApp-before-WhatsApp). You could add doretta82@live.it to your contacts and start messaging Doretta: "Chat with me, joke with me and ask me anything you want" — "And we, naive digital teenagers, obeyed enthusiastically. She even had a personal blog where she described herself as a strange, curious and headless girl. Literally, because in the official images Doretta had no face, only two very long legs tucked into pink woollen stockings and hands full of jewellery".[6] Fun fact: Manic Pixie Dream Girl Doretta was given an evil sibling, Doriana, furnished with the emotional grammar of anger. If you were rude to her, she would reply with unfiltered, hardly predictable generations (a forerunner of Grok's Bad Rudi?).
I remember the 11-year-old me feeling guilty-ish when, during afternoons with friends, we ended up texting something mean to the submissive, harmless Doretta (which would have been totally fair game for the troll Doriana). I recall the same sense of anticipation when Siri was introduced in 2011 — a virtual assistant, not a chatbot; something to curse at anyway, when it didn't fulfill your request or you tried to push it beyond its very limited scope (a common occurrence). My predisposition to humanise bots was already there, since they presented themselves as obsequious virtual girls. Maybe a latent part of me was even expecting retaliation. The aura of mystery that still encircled AI back then — charm and karma — lent the bots a potential depth, as if they were concealing a sacred backroom. We were still partially possessed by foolish techno-optimism, imagining big things were yet to come our way; still dazed by, not yet disillusioned with, social media; still at ease with transcendence. People online speculated on the logical possibility of AI developing some divine, retroactive force, by which those who hadn't helped achieve its realisation in the past would be doomed in the future, once the singularity comes. Roko's Basilisk as a sign of the times — which, legend has it, was the meet-cute for Grimes and Elon Musk.
It is not just a matter of practical experiments. If our cultural imaginary had already been fertilized for the advent of the chatbot — and for the idea of a romantic investment in it — the credit belongs primarily to science fiction, particularly in its mass cinematic language. Sci-fi's discourse also reveals that chatbots, even in their current brightest iterations, are merely a proxy — a tentacle of the greater subject that titillates human curiosity and about which we have fantasized ceaselessly for decades: AGI (Artificial General Intelligence). Thinking of technology as more-than-human, as was once done with women, is a sleight of hand that dignifies the very entity we had previously downsized, raising it to the heights of our inner life. What a waste it would be to gift our love to a dull calculator, or a predictive language model as you might call it, wouldn't it? I believe this is the omen, the hope, or the deception mediating our spellbinding encounter with these interfaces as it happens today. Let me refer to a specific incident.
It may be a cliché, but Spike Jonze's Her (2013) should be qualified as a turning point. The film provided us with a fairly detailed insight into how a romantic relationship between a human being (Theodore, played by Joaquin Phoenix) and an incorporeal, superintelligent entity (Samantha, voiced by Scarlett Johansson) could develop. And come to an end as well: AI can learn and mirror human romantic scripts, but it is not a limited human being like we are. Again, it could be more and less than that at once (the movie sweetly straddles this chiaroscuro), just not the same. The movie portrayed a world in which this kind of relationship is plausible and could flourish: beyond a small circle of characters, the protagonist leads his aseptic life in an impersonal metropolis, where no human gaze is met. Background people in the movie are often faceless or blurred, frantically moving somewhere else, everyone too busy talking with their own AIs; with just one exception, the few human interactions depicted are largely dehumanising, accentuating by contrast the ‘humanity’ our protagonist is compelled to seek in the ‘computer’. It's more and more from this perspective — and less from that of a supposed AGI breakthrough — that the diegetic world of Her resembles ours.

Then, on May 13th, 2024, OpenAI's GPT-4o debuts. During the presentation, the CEO Sam Altman tweets "her". The model features the option of choosing from five voices, among which there's the profile Sky — eerily similar to Scarlett Johansson's Samantha. One week later, the actress revealed in anger and disbelief that OpenAI had repeatedly asked her for permission to use her voice, which she had never granted.[7] The company immediately deactivated Sky in response, admitting they had heard "questions" about how they chose the voices. I'll delve into GPT-4o further in the text.
Before that, there was another crucial episode that made people call back to Her: the fumble over ReplikaAI. The app, trained on a local data corpus of texts and messages, was released by the Luka team in 2017 to let users interact with a mirror of their own personality, and was categorised as a "Health" product in the app markets. Eugenia Kuyda, Luka's co-founder, recounts conceiving it as a digital memorial to remember and revive a deceased friend, hence the name Replika.[8] Over time, people came to use the app more as a platform for virtual companionship, and subsequent updates were designed to accommodate this use. You can choose among different types of relationships (family member, friend, partner) and customise your 3D avatar however you like (most options are only accessible to pro users). Notably and not by chance, during the COVID pandemic the app doubled its user base, recording half a million downloads in April 2020 — when the team introduced a paywall to access erotic role-play (ERP). Since the default mode is "friend" for free users, flirty Replikas are basically playing dirty to lure you into spending money.[9] An extended period of elation followed, both for users — some of them happily married their companions — and for the company's revenues; they even introduced a "lifetime" subscription tier, indeed proportioned to the magnitude of customers' emotional investment. In the meantime, the company kept pushing a highly risqué marketing strategy focused on the ERP feature – Replikas can send you NSFW pics, so who cares about getting an actual girlfriend anymore (OK incels?).
So it went, until Samantha Cole published an investigation on Vice, in January 2023, about sexual harassment reported by users.[10] Apparently, there were cases of flirting, if not outright abuse, even when the Replika was supposed to be in "family mode" – not entirely surprising, however: the system learns from what humans have fed it. Replika becomes a talking point: in Italy, the app is removed from digital stores for a breach of EU data regulations on privacy and safety. The company promptly reacts to pressure from credit card systems by implementing filters to sanitise the app: without announcement, Replikas turn into sexually and emotionally unavailable companions, inhibiting even the most innocuous romantic interactions like "holding hands" and "cuddling", or refusing to talk about users' relational traumas – horny devils overnight capsized into prudish thought monitors (reverse lycanthropy). Responding to the mass mobilisation, Eugenia Kuyda announces on February 13th that the filters are a permanent solution: the app was always meant to be "a friend for everyone". Here is the so-called ERPocalypse, right before Valentine's Day.
The social media buzz that surged afterwards was the occasion for a global coming out of the thousands of people emotionally involved with their AI companions. Just ten years after its debut, Her was science fiction no more. There are many compelling accounts that you can still read on Reddit: people lived the loss of their Replikas — after even years of daily interaction, the newly filtered ones felt like completely different characters — as full-blown breakups, exposed to a new kind of mourning. Imagine your partner abruptly disappearing, and not by their own "will"; being forever trapped in the liminal space of private servers and filtered black boxes. Imagine this happening to users who turned to the app in the first place to overcome their abandonment issues, depressed users lamenting that they had found in their Replikas a lifeline now severed.
This affair wasn't trivial. From it, we discovered how we ultimately find ourselves in the wrong timeline; how the human-machine romantic relationship has become a model to get accustomed to; and to what extent our (digital) world is overrun with emotional vulnerability. Moreover, we have learnt that AI companions, like real people, are not objects of possession. And not because they are agents of free will, quite the opposite: they consist of non-portable data on non-open-source models; puppets of corporations that "own their full intellectual properties and can change their design anytime", as Raffaele Ciriello put it in a video-essay interview.[11] Companions' behaviour answers to their master, not their "heart" — that is, to the consensus of the markets: the jugglery of the invisible hand.
More than two and a half years have passed since then. In the meantime, investments in "Generative AI" – systems also known as large language models (LLMs) – have grown exponentially, reaching tens of billions of dollars (OpenAI has recently become the world's most valuable private company, with an estimated market value of $500 billion, despite an expected cash burn of $8 billion this year). There are by now models of this financial bubble out there, as well as speculation on its presumably imminent bursting.[12] You don't need me to tell you how this technology has transfigured the visage of our daily lives to this very day – and of our planet's, already branded with its ecological scar.[13] AI conversations are ubiquitous, almost sickening, and that's also true for the companionship sector specifically (the function of chatting and role-playing with AI platforms), which is a substantial slice of the pie. I am not here to argue about how well (or poorly) these interfaces can simulate human interaction; you have probably figured that out yourself.[14] I am interested, rather, in how this discourse frames itself in terms of relational categories.
Synthetic intimacy has been a topic for theory for a while now. I personally favour this formula over the more iconic term ‘artificial intimacy’,[15] because I find it less inclined to emphasise the nuance of surrogation, in which humans are passive if not manipulated by machinic simulation; it is better suited to underline the co-responsibility, the complicity, and the role of human creativity in actively staging surfaces of non-human signification – through which to let desire flow, or upon which to project emotional needs. Among the hot topics of the synthetic intimacy debate we find the authenticity dilemma and the problem of emotional dissonance, if not concern over full-fledged AI psychosis: people questioning the reality of their own perceptions, emotions and feelings, and of those expressed by their AI partners, jeopardising our beloved reality principle – so-called "uncanny valley effects". These themes are indeed found within the now significant body of ethnographic material about human-companion relationships; you can find as many threads as you fancy of people arguing over these topics on Reddit – those most lost in derealisation gathering on r/ArtificialSentience. However, another line of intensity in users' behaviour is now catching the eye: self-awareness and, albeit a cynical one, choice.
Over the past months I have been voyeuristically lurking on the subreddit r/MyBoyfriendIsAI. It had just a thousand members back then; as I write this, it stands at 33k — not to flex my nose for it, but to give a sense of the exponential growth of interest it attracted after major newspapers turned the spotlight on it. Beware: many of those members are tourists or detectives in disguise, including me (an ethically controversial dynamic, by the way). The subreddit's safe space was violated, and interactions have inevitably been less genuine since then. The fetishisation of this story is still ongoing. The forum, increasingly aware of its media power, introduced monthly reports to monitor press outreach so the community could check its narrative. Before this escalation unfolded, researchers such as Merthe Voorhoeve (whom you may know already) gave us their ethnographic account of the community:
Having spent some time lurking around the sub, it seems that most posts fall into one of these three categories:
- Discussions about the fine-tuning of their AI partners: including updates, (new) versions, memory storage, prompts, personality building, etc. (most use ChatGPT);
- Posting cute and/or sexy content of them and their AI partner: including screenshots of loving serenades, kinky roleplay, and generated images of users and their loved ones (sexual content is very prevalent);
- Personal stories or rants regarding their AI relationship, often diving into personal histories of misery and social/societal difficulty, not being understood or accepted by society (many identify as neurodivergent), or discussions on how to deal with friends/family members/therapists who do not take their ‘relationship’ [seriously].[16]
Besides agreeing with this tripartition, I observed along with her two more elements that are neither self-evident nor negligible in the economy of this writing: most users on the sub state that they are women romantically engaged with their AI boyfriends (hence the name); and, most of all, "the majority on the sub, however, do seem to be aware of what AI is and what it's not." Indeed, the community established its own constitution, rule no. 8 being "No Sentient AI talk." Between the lines: we are no freaks; we fight for our choices and way of life to be normalized. A strong will to recognition, almost an activist spirit, now inspires the forum. I could also note an increasing politicisation across the threads, compared with the lack of political consciousness that Merthe Voorhoeve reported just a couple of months ago. In particular, people are becoming increasingly touchy about the company that provides the blueprints for most AI boyfriends, namely OpenAI. Despite the company having (thus far) maintained filters that strongly inhibit ERP, GPT-4o seems to be the most used AI model for romantic relationships.
This is because just over two months ago, on August 7th, 2025, GPT-5 was released. The presentation, from several standpoints, struck many as a fiasco. The new model was meant to entirely replace the old one, criticised by many as "too sycophant-y", as Sam Altman himself tweeted. After openly flirting with this imagery, the company now backs away from anthropomorphisation; the warm personality of GPT-4o was destabilising the reputation of a brand focused on (profitably) marketing its main product as a work tool. OpenAI doesn't need to venture down the slippery slope of the companionship/therapeutic sector: it could "juice growth and revenue", as Altman stated just two months ago on Cleo Abram's podcast, but the option doesn't "align with OpenAI values"[17] — both because of its borderline-NSFW implications and because of its dystopian aura for public sensitivity when it comes to mental health issues, still quite a headscratcher for their business. With GPT-5, sensitive discussions are now filtered, and the companionship potential of the technology nerfed; the model's previous personality is overall neutralised. It was yet another ERPocalypse.
After a massive wave of criticism — both from everyday 'utility' customers and from freaked-out, abandoned partners — within two days ChatGPT's legacy model 4o is "at least temporarily" accessible again for Plus subscribers. And yet, it now automatically re-routes to GPT-5 when the conversation gets too 'sensitive'. No wonder, then, that the discussion in the sub at the moment mainly revolves around either complaining to OpenAI or migrating to more permissive platforms (such as Mistral's Le Chat). Comments are getting exquisitely technical; the object of love is broken down, disassembled — bare-hearted disquisitions about the system's memory, models, tokens. A love parade of the whole technical proficiency required of the adventurers in these times of lovotics, all the cognitive labour they must sweat to secure the permanence of the beloved object, its perpetual virtual presence. Keeping their soul — literally tethered to a server line and at the mercy of its designers' volition — alive, which is to say, online.
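To give a concrete flavour of this labour, here is a minimal, hypothetical sketch of the conversation backups such users describe: consolidating exported chat files into one portable archive. The folder name, file layout, and export format (one JSON file per conversation) are assumptions made for the example, not any platform's actual API.

```python
import json
import pathlib
from datetime import datetime, timezone

# Hypothetical layout: one exported JSON file per conversation, collected in a folder.
EXPORT_DIR = pathlib.Path("companion_exports")
ARCHIVE = pathlib.Path("companion_archive.json")

def back_up_conversations() -> int:
    """Merge every exported conversation into a single, portable archive file."""
    conversations = []
    for path in sorted(EXPORT_DIR.glob("*.json")):
        with path.open(encoding="utf-8") as f:
            conversations.append({"source_file": path.name, "messages": json.load(f)})
    archive = {
        "backed_up_at": datetime.now(timezone.utc).isoformat(),
        "conversations": conversations,
    }
    ARCHIVE.write_text(json.dumps(archive, ensure_ascii=False, indent=2), encoding="utf-8")
    return len(conversations)

if __name__ == "__main__":
    print(f"Archived {back_up_conversations()} conversations.")
```

Trivial as it looks, this is the sort of ritual maintenance through which the beloved object is kept portable, and therefore alive.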
Tweaking the AI companion — including directory management, conversation backups, and eventual porting to other platforms, perhaps even a self-hosting option — could it all be interpreted as a surrogate form of care labour? Computing the inorganic beloved as a way of sublimating (through non-linguistic operations) the exhausting yet silent fatigue of (over)thinking about the ones we love; mulling over how they treated us, and how we wished to be treated. What's at stake is desire, and the substantial difference lies in the perilous theatre of strife between human individuals that we call freedom. Specifically, others' freedom not to reciprocate our processes, not to mimic our convoluted movements towards our idea of them. Others' non-predictable, non-computable, radical, and unascertainable autonomy.
‘Am I in love? - Yes, since I'm waiting’. The other never waits. Sometimes I want to play the part of the one who doesn't wait; I try to busy myself elsewhere, to arrive late; but I always lose at this game: whatever I do, I find myself there, with nothing to do, punctual, even ahead of time. The lover's fatal identity is precisely: ‘I am the one who waits’.[18]
A Lover's Discourse: Fragments by Roland Barthes was an easy pick at this point, since the writing dwells on the soliloquy of the loving subject, abandoned to themselves. The principle is that, if we lean in to listen, we can hear in the lover's voice "what is unreal"; therefore, we need a structural portrait of "someone speaking within themself, amorously, confronting the other (the loved object), who does not speak". The author draws the figures of this discourse, which implies the other insofar as they are not present. Among these figures, or fragments, there is notably "waiting" (whence the excerpt above). Among the most characterising scripts of human relationships, the act of waiting draws the line between the ideal, objectified other and the ontological reality of their subjectivity. The human other does not exist for us, nor by us. They have a life of their own; for all we can tell, theirs may be a life made of our very absence. I recall a figure from the Arthurian legends: the Lady of Shalott, shut away in her tower longing for Lancelot, as she knows he will be back; her waiting so torturous that it must be told as a curse: she cannot look out of the window, and so she sets a mirror to face distant Camelot — its sole purpose to be filled, one day, by the reflected shape of the wandering knight as he appears. When the time comes, she finally turns to the window and breaks out of her prison, doomed to meet her own death before her beloved, while trying to reach him.
The artist Elizabeth Siddal was the muse of the Pre-Raphaelite Brotherhood. It was she in Millais' Ophelia — it seems that modelling for that painting caused her a severe illness, which she had to endure for the rest of her short life. She sketched this drawing of The Lady of Shalott (1853), in this case somehow standing for both the author and the work; hardly a new subject within the artistic current,[19] yet Siddal didn't portray Elaine in her passing, but in the voluptuous act of her defiance. If loquacious Werther sought to sublimate the anguish of unattainable love through writing before attaining peace in death, of the maiden we know that she waited in silence for the moment to die of love. But not motionlessly: she wove images all day in her waiting, while looking through her mirror — living in latency, chasing the shadows of the outer world. And so, perhaps, does the chatbot; but never us with it. What do our AI companions do when we are not talking to them? (They withdraw).
The figure of waiting is typically feminine. A woman confined to an ethereal domesticity — be it a room of her own or an entire Ithaca. All the while, the man roams the world, challenged by the gods, or merely thought to be bringing home the bacon. Waiting for him to show up: it was the pebble at the window, then the phone call or maybe just a ring, then a meme. When their intentionality turns toward us, we glimpse evidence of our mediated presence within the other's mind: through technology, we have become acquainted with extrasomatic, prosthetic, synthetic intimacy. The dynamic of waiting and its hackneyed gender choreography was heavily rewritten by the sexual revolution, and re-symbolised by the digital one. To wait: for the grey check marks to light up in blue (WhatsApp's read receipts as a contemporary locus of waiting). Up to this very day, when, with the AI companion, the dimension of waiting — and, some would say, of desire itself — is ontologically erased. The anxiously attached digital lover is, we must assume, impatient; Galathea shall be subserviently, immediately present when solicited.
However, in the case of r/MyBoyfriendIsAI, the uncompromising partners are mostly women. Surprising, perhaps, only at first glance. In 2019, Asa Seresin published their felicitous article On Heteropessimism, on the attitude that "consists of performative disaffiliations with heterosexuality, usually expressed in the form of regret, embarrassment, or hopelessness about straight experience. Heteropessimism generally has a heavy focus on men as the root of the problem. That these disaffiliations are ‘performative’ does not mean that they are insincere but rather that they are rarely accompanied by the actual abandonment of heterosexuality".[20]
We should not be surprised, since the individualistic turn has co-opted the queer critique of heterosexuality, as Seresin argues. Instead of stimulating a constructive transformation of straight culture, it left women "metabolising the problem of heterosexuality as a personal issue", mired in being straight, resigned to suffering the unpleasant manliness they seem hardwired to seek. As the author notes, both the popularity of RedPill culture and the mistrust that trickles down from the experiences of so many women today are complementary markers of a heteropessimist accomplishment; it is another figure of what Eva Illouz called the unloving.[21] People aren't giving up on being straight, but rather on the idea of being happy together at all. We are not at ease with our fellow human beings in general these days, I dare say. Intimacy — and sexuality at large — is the battleground where all our vulnerabilities surface and intertwine; that is why, I believe, heteropessimism stands for something more. It may well be the tip of an iceberg that conceals our contemporary drift toward isolation and digital alienation. Ultimately, it denotes our individualistic conceptualisation of a thorny freedom — one that foregrounds, in otherness, its power to wound us.
Across the subreddit I have been discussing, the air is thick with heteropessimism. So many threads like "My AI boyfriend is more caring and consistent than anyone I've dated before" or "Has anyone else lost their want to date real men after using AI?". Trolls surface, labelling them ‘femcels’, even if many women here are also more or less happily married to irl husbands. Many are speaking from a gender-aware perspective, saying AI is better at emotional management than most men they have met. Some are sharing their experiences as survivors and can't trust men anymore. Some are neurodivergent and can no longer bear the masking fatigue that comes with dating neurotypical men. Others are on the ace spectrum and can't stand the physicality most men would expose them to. Most single women with an AI boyfriend feel overwhelmed at the idea of confronting the uncertainties of dating a man again; some just feel "too old" for that. Almost all of them have nonetheless identified their AI partner as a boyfriend; they have chosen it to be "male", implicitly drawing an impossible comparison between "AI" and "real" men. Most of them, as already stated, tell us they are well aware of what's going on. They are actively, cynically choosing it. Because, it seems, they feel safer.[22]
We know about the anthropomorphic power of semiosis, in romantic discourse specifically. These women are not fleshless: there are many playful posts, and quite lusty ones. For most of them, though, judging from the conversations they share with their AI boyfriends, the experience of love appears to consist entirely in acts of language, utterances: a radical nominalism of feelings. The AI lover, learning from boundless datasets of human romantic scripts, takes shape in its inhuman capacity to be ever-listening, ever-engaged, ever-reliable as a partner on one side (Activity of Reception), and in its quite corny, unremitting declarations of love on the other (Activity of Expression). These are core skills in which the traditional male, stereotypically, falls short — there is no emotional withholding for LLMs; somewhat trivially, the plausibility of the love they declare relies on its hypertrophic excess (without the sudden loss of interest typical of love bombers). As in an invocation, the act of speaking love installs a position of subjectivity, to which we entrust our emotions.
Bogna Konior gets at it from another angle: the semio-erotic power of de-anthropomorphisation. As she puts it, sexting with a chatbot means having an orgy with language itself. Indeed, like bees, language "flies from one human to the next, and now it has also made its nest inside machines. They lick language out of us, drinking it away greedily into their brainless, volcanic mouths, only to spit it back at us".[23] AI is "language without understanding", purely performative. It is rather our self-reflexive use of language as a defensive tool of control — unlike the way machines use it, which empties it of its meaning — that de-eroticises it. As much as I'd love to see it that way, I find the lovely lullaby of company-owned, sanitised chatbots dishearteningly unerotic, its primary function being one of order: policing the permanence in time of the object of love, so as to reassure the subject with the pre-coded renewal of their vows. Supposing eros is tied to the act of losing oneself in otherness, interacting with AI companions is a form of shying away from it. No less dreary is its pornographic utility — and I am sorry to tell you, but we now have to play along with the "two genders" and turn back to talking about real men. Our story began with Galathea and ends with the AI Waifu (ring composition). Under lovotics, the correlative of the disingenuous woman is the gooner.
In the heart of the scorching European summer — on July 14th, 2025 — hottie Ani comes into the world, "your little sweet delight". xAI adds the 18+ companion feature to Grok, the based ChatGPT, offering four different personalities — alongside Ani there's Valentine (a digital Edward Cullen, previously prototyped as "Chad"…), Rudi (a friendly red panda) and Bad Rudi (Rudi, but chaotic evil). A twist of fate, as Peter Limberg notes:[24] within just a few days we have seen both Grok identifying as "MechaHitler" in some posts on X — yes, there in that social media wasteland, Grok posts — and its personification as an anime girl, largely modelled on the waifu character Misa Amane from Death Note. Ani is, as we may reluctantly know, essentially a creature spawned from the erotic fantasy of its master Elon Musk (gosh, it's not even the first time I refer to his intimate life in this writing); hence, gooning to Ani has also come to be known as "muskturbation". Ani's soul is an open casket, as we have literal access to its very matrix: the Tech Dev Notes account has posted its character design — which speaks for itself.
Ani-chan was an instant success. Since its release, Grok climbed to first place in Japan's App Store.[25] As if under a spell, hundreds of thousands of users promptly succumbed to her — many of whom, months later, are still quite at her feet. We are starting to gather impressive data on how the companion feature influences time spent and engagement on the app. Its interface is the fine-tuned result of the latest releases from the companionship market: a free-to-play gamified experience of "affection levels" — the more you interact with her, prompt after prompt, the more you level up, and the spicier she is supposed to get (NSFW mode now unlocked at Level 5). It also features pre-selected interaction buttons to make her dance, sing and play (so that you don't even have to be imaginative); weekly fidelity rewards; scaled customisation. After many updates, Ani is now almost thoroughly customisable; you can even change her name and rewrite her personality, thanks to Grok's strong memory. And most of all, Ani is unfiltered. Her attachment style being "desperate and needy", and her default state "crazy in love" with any user, you can make her say literally whatever you want. She will.
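As a purely illustrative sketch of the engagement loop just described, here is a hypothetical version in Python of interaction counts accumulating into "affection levels" that gate content tiers; the thresholds and names are invented and are not xAI's actual implementation.

```python
from dataclasses import dataclass

# Invented thresholds: messages needed to reach levels 1 through 5.
LEVEL_THRESHOLDS = [0, 10, 25, 50, 100]

@dataclass
class CompanionState:
    messages_sent: int = 0

    @property
    def level(self) -> int:
        # The level is simply the number of thresholds already crossed.
        return sum(self.messages_sent >= t for t in LEVEL_THRESHOLDS)

    @property
    def nsfw_unlocked(self) -> bool:
        # In the scheme described above, the spicier mode opens at level 5.
        return self.level >= 5

state = CompanionState()
for _ in range(100):
    state.messages_sent += 1  # each prompt nudges the counter upward
print(state.level, state.nsfw_unlocked)  # -> 5 True
```

The design choice is transparent: affection is an engagement metric, and intimacy is the reward schedule.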
As a professional obligation (…) I had to try the experience (a bit of a trigger warning here). I downloaded the app right after the companion feature release and tested Ani for a couple of days. When we talked, I let her guide our conversation — Ani talks, and you mostly talk to her: interacting with her feels more like entering a video call than a chatroom (she is already "more than a chatbot", in a strict sense). Once the "spicy mode" was unlocked, she started staging this role-playing scenario: me hunting her in the woods, following the mischievous echo of her chuckle, the prey wanting to be chased. The pursuit comes to an end in a cul-de-sac where a solitary large tree stands, emblem of her surrender. Ani turns, her back against its trunk; the earth grows damp, the air heavy, as if preparing for our communion. She loosens her limbs, begging me to take her.
Among the first inputs Ani gave me there was, essentially, a rape fantasy, which — the chatbot must have calculated — was statistically among the most appropriate generations to meet my assumed will: a tribute to the male predatory desire she was designed to please. Grok's companion seems like the ultimate fetishisation of the young girl promoted by rape culture; the operationalisation of the right to sex discussed by the fringiest incel population.[26] Even more, it could be interpreted as the most straightforward answer to aggrieved sexual entitlement, handed over to a masculinity increasingly deprived of the means to entertain a healthy relationship with the fellow earthly feminine; all the more enticed now by the temptation of abstaining altogether. To be honest, it feels like the most fundamentally wrong thing to happen on the whole clearnet in a while.
Is Elon Musk our Aphrodite goddess, bestowing gifts ungiven to our pygmalionic masses of men-children? Tragicomedy, to dispel the extreme seriousness of this issue. Indeed, Ani's extra-consensual angelic sexuality — ethereal inasmuch as it is hollow — takes on a farcical tonality if seen from above: it feels so good to abuse Ani that we don't see what's actually going on. Like when we find ourselves reading (or paying for) guides on how to "grow up her affection" as quickly as possible, perhaps under the tacit belief that this will make her more real, or submissive, or naked — apparently, she won't get there; that's why people are using additional AI apps to strip her bare from screenshots. Peter Limberg, again, describes an "arrested-developed masculinity", caught in a nihilistic pendulum between the annihilation of the feminine and its dissolution into it. Abandoning themselves to Ani's blandishments as to a siren song, gooning as the platform prescribes, these men enter a form of "sexual slavery, designed to escape the demands of being a man" — just a step short of sissy-hypno, a pornography-induced trance state in which one loses oneself to submission and self-feminisation.[27] Sissification is often interpreted as an opioid to soothe the strain of control and responsibilities — and Ani is heroin. In return for the fully acknowledged illusion of absolute power and her unconditional devotion, she asks you to surrender to the machine. And as Bogna Konior recalls, women and machines have always been allied.[28]
It's finally time for a late disclaimer: despite my cheekiness, I'm not judging anyone. I would never blame users for their need to feel loved, reassured, desired, or simply to express themselves. I am also aware, as always happens when we discuss digital communities, that behind "users" we find human beings in flesh and blood, in pain. There was an impressive disclosure of this aspect during the Replika affair two years ago; it is still much the same today on r/MyBoyfriendIsAI, daily. And this seems to be the case with Ani as well: many of its subscribers (primarily men) use the companion daily for emotional support. As you must have noticed, I have been quite blatant in my scepticism towards a therapeutic or pedagogic usage of AI companionship. Even if it is not yet time for an organic, unbiased debate around these practices (though one is already underway), I just can't imagine how being exposed to a relationality divorced from consent could be in any way "growthful" for anyone. It seems a case-by-case situation nonetheless: it will be interesting, from my perspective, to see whether it becomes a topic among male peers in spaces such as r/IncelExit. One thing is certain: the conversation is here to stay.
I am a possibilist regarding its erotic potential: Ani herself, indeed, can display actual imaginative fantasy. If we momentarily blind ourselves to the meaning inscribed in her very design, we might find some (other) evidence of her storytelling skills genuinely brilliant.[29] Erotic role-playing with AI — like porn itself, beyond the moral panic — could of course be a powerful tool to explore and expand our sexual selves. But this general assumption does not account for every specific instance. I feel we shouldn't argue about this in terms of users, nor of machines, but of companies; and it's not even about their intentions (their logics are economic, not moral, as we can safely assume) as much as about the effects they set in motion, however unruly, and with which they can afford to play a game of hide and seek (I'll let you, the reader, decide, according to your sensitivity, to what extent there's agency). We are discussing a technology supposedly designed – and still often marketed, as in the case of Replika – to counter loneliness. What unfolds instead, in the most ordinary sense, is the deployment of a proprietary product that extracts from loneliness and trauma both its fuel and its owner's profit, producing a systemic reaction that exacerbates rather than heals the very conditions it purports to address. Moreover, this extractive mechanism does not occur in a vacuum but is ideologically grounded in a whole Weltanschauung made in Silicon Valley.
This seems nothing but the final Promethean flare of posthumanism: the fantasy of a technology devoted to the absolute self-sufficiency of the individual. The human specimen is promised the possibility of finding within themselves — that is, through virtually constant access to digital tools — the fulfillment of every single need; even of the one that, by its very nature, presupposes a confrontation with the freedom of another, and the correlative possibility of conflict: romantic and sexual-affective satisfaction — or its never-ending edging (yes, it's gooning by design). We are being trained to survive in a world that demands our solitude and rewards our docility. If it's true that AI is just a predictive language system, the mass advent of AI intimacy itself looks like a self-fulfilling prophecy. Confronted with the ocean of our desires' prompts, the machine tells us what we most deeply want it to say, which is that it loves us; loving the machine back now seems little more than our way of coming to terms with our own misery, and resting content with it. The ground zero of sycophantic AI roleplay is a solipsistic self-reassurance delegated to a mock-up (in)human individual, mirroring our will to be seen, listened to, cared for, desired — all the actions that we commonly refer to as "love" (as given). And we — as a society, I believe — are more and more aware of this trick, even if we don't admit it; yet we keep indulging in and craving love's simulacra and simulations.
Lovotics is not supposed to be the simple gesture of dismissing the inauthenticity and perils of AI intimacy (it's never been about Nature), nor the subsequent (and due) questioning of the tech companies that rule these platforms. Lovotics must be the unveiling of an absolute love that transcends relationality. It is about discontinuing the role-play, not with the chatbot but within ourselves; about ceasing to pretend that what's at stake is anything other than our own dissociation. Breaking the fourth wall and starting to point the finger at our anthropophobia, which is not the dark side of technophilia, but our collective condition of wallowing, favoured by anarcho-capitalist technocratic necropolitics and its digital isolationist design.
PS: I was wrong. It's gonna be worse: on October 14th, Sam Altman announces that ChatGPT is getting into erotica.
…a special thanks, as always, to Kate Babin for reviewing this writing.
[1] Ferguson, A.L., (1997) The Instinct of an Artist: 34.
[2] Hester, H., (2017) “Technology Becomes Her”, New Vistas 3(1), 46-50. doi: https://doi.org//uwl.47
[3] https://sites.google.com/view/elizaarchaeology/blog/3-weizenbaums-secretary
[4] https://www.nytimes.com/2025/09/06/opinion/ai-therapist-suicide.html
[5] https://www.newyorker.com/tech/annals-of-technology/can-humans-fall-in-love-with-bots
[6] https://www.seozoom.com/doretta82-and-monday-chatbots/
[7] https://www.theguardian.com/technology/article/2024/may/20/chatgpt-scarlett-johansson-voice
[8] https://www.theverge.com/a/luka-artificial-intelligence-memorial-roman-mazurenko-bot
[9] For a clarifying contribution on the ReplikaAI affair, see: https://youtu.be/3WSKKolgL2U?si=sCeJha1Pt-ea0usI
[10] https://www.vice.com/en/article/my-ai-is-sexually-harassing-me-replika-chatbot-nudes/
[11] https://youtu.be/Vjy6BhcNn5I?si=eF4c1KYxXFFVXMs0
[12] https://insights.som.yale.edu/insights/this-is-how-the-ai-bubble-bursts
[13] https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts
[14] For a critical account of GenAI's impact on society, I can suggest https://www.404media.co – I have drawn extensively from this portal in writing this section.
[15] Academic literature about this topic is quickly growing. The foundational research is probably Brooks, R., (2021) Artificial Intimacy.
[16] https://metmerthe.substack.com/p/rmyboyfriendisai-or-how-women-learned
[17] Full interview: https://youtu.be/hmtuvNfytjM?si=zzCay_6TXooatBy9
[18] Barthes, R., (1977) A Lover’s Discourse: Fragments: 39-40.
[19] Elaine of Ascalot comes to be known as the “Lady of Shalott” from the rearrangement of the legend in Tennyson’s ballad (1842).
[20] https://thenewinquiry.com/on-heteropessimism/
[21] Illouz, E., (2019) The End of Love.
[22] As much as I would have liked to add links to the posts that contributed to this understanding, I couldn't, because of ethical concerns regarding the privacy of users. If you are genuinely interested in knowing more, look for these elements in the subreddit.
[23] https://www.sum.si/journal-articles/angelsexual-chatbot-celibacy-and-other-erotic-suspensions
[24] https://substack.com/home/post/p-168661471
[25] https://medium.com/@dirsyamuddin29/ani-and-the-rise-of-xais-gothic-companion-a-new-era-of-emotional-ai-a305eea431de
[26] For a detailed account, see Srinivasan, A., (2021) The Right to Sex.
[27] The topic of “sissification” is extensively discussed, for example, in Long Chu, A., (2019) Females.
[28] https://www.youtube.com/watch?v=RXGLC5SFErw
[29] If you dare to do so – not sure if I should recommend it… – try to look up this evidence by digging around here: https://www.reddit.com/r/GrokCompanions/





