Sexuality has been shaped by technology since just about the dawn of time. Almost as soon as the printing press was invented, erotica was being printed. Photography was used for erotic purposes with glee by the Victorians. And we all know how much the internet has influenced modern sexual culture.
Now that we're grappling with the impact of AI on various sectors of society, what does that mean for sexuality? How are young people learning about sexuality, and how are people engaging in sexual activity, with AI as part of the picture? Some researchers are exploring these questions, but my research has indicated that there's a bit of a shortage of analysis examining the real impacts of this technology on how people think and behave sexually. It's a huge topic, of course, so for today, I'd like to dig into the subject in two specific and related areas: distribution of information and consent.
Before we dive in, however, I'll set the scene. What our general culture calls generative AI, which is what I'll focus on here, involves software powered by machine learning algorithms that can create text, images, video, and audio that are synthetic, but that are difficult if not impossible to distinguish from organic content created by human beings. This content is so much like organic content because the machine learning models are fed vast quantities of human-generated content during the training process. Because of the immense volumes of content required to train these models, all corners of the internet are vacuumed up for inclusion in the training data, and this inevitably includes some content related to sexuality, in one way or another.
In some ways, we wouldn't want to change this: if we want LLMs to have a thorough mapping of the semantics of English, we can't just cut out certain areas of the language as we actually use it. Similarly, image and video generators are going to have exposure to nudity and sexuality, because these are a significant portion of the images and videos people create and put online. This naturally creates challenges, because this content will then sometimes be reflected in model outputs. We implement guardrails, reinforcement learning, and prompt engineering to try to control this, but in the end generative AI is broadly about as good at creating sexually expressive or explicit content as any other kind of content.
Nicola Döring and colleagues did a substantial literature review of studies addressing how usage of AI intersects with sexuality, and found users have four main ways of interacting with AI that have sexual components: Sexual Information and Education; Sexual Counseling and Therapy; Sexual and Romantic Relationships; and Erotica and Pornography. This probably sounds intuitively right to most of us. We've heard of at least a few of these kinds of phenomena relating to AI, whether in movies, TV, social media, or news coverage. Sexually explicit interaction is generally not allowed by mainstream LLM providers, but universally preventing it is impossible. Assorted other generative AI products, as well as self-hosted models, also make producing sexual content quite easy, and OpenAI has announced its intentions to go into the erotica/pornography business. Sexual content from generative AI has a tremendous amount of demand, so it appears the market will provide it, one way or another.
It's important to remember that generative AI tools have no concept of sexual explicitness other than what we impart through the training process. Taboos and social norms are only part of the model insofar as human beings apply them in reinforcement learning or provide them in the training data. To the machine learning model, a sexually explicit image is the same as any other, and words used in erotica have meaning only in their semantic relationships to other words. As with many areas of AI, sexuality gets its meaning and social interpretations from human beings, not from the models.
Having sexual content accessible through generative AI is having significant effects on our culture, and it's important for us to think about what that looks like. We want to protect the safety of individuals and groups and preserve people's rights and freedoms of expression, and the first step to doing that is understanding the current state of affairs.
Information Sharing, Learning, and Education
Where do we learn about sexuality? We learn from observing the world around us, from asking questions, and from our own exploration and experiences. So, with generative AI starting to take on roles in various areas of life, what's the impact on what and how we learn about sexuality specifically?
Generative AI is already playing a significant role in informal and private sex education, just as performing google searches and browsing websites did in the era before. Döring et al. noted that their research found that seeking out sexual health or educational information about sexuality online is quite common, for reasons that we can probably all relate to: convenience, anonymity, and avoidance of judgment. Reliable statistics on how many people are using LLMs for this same kind of exploration are hard to come by, but it's reasonable to expect that the same advantages apply and would make it an appealing way to learn.
So, if this is happening, should we care? Is it really any different to learn about sexuality from google searches versus generative AI? Both sources have accuracy issues (anyone can post content on the internet, after all), so what differentiates generative AI, if anything?
LLM as Source
When we use LLMs to find information, the presentation of that content is quite different from when we do basic web searches. The results are presented in an authoritative tone, and sourcing is often obscured unless we intentionally ask for it and vet it ourselves. As a result, what is being called "AI literacy" becomes critical for effectively interpreting and validating what the LLM is telling us.
If the person using the LLM has this sophistication, however, scholars have found that basic factual information about sexual health is usually accessible from mainstream LLM offerings. The limited studies that have been done so far don't find the quality or accuracy of sexual information from LLMs to be worse than that retrieved in general web searches, according to Döring et al. If that is so, young people seeking important information to keep themselves safe and healthy in their sexual expression may have a valuable tool in generative AI. Because the LLM is more anonymous and interactive, users can ask the questions they really want answered and not be held back by fears of stigma or shame. But hallucinations continue to be an unavoidable problem with LLMs, resulting in occasional false information being served, so user skepticism and sophistication are essential.
Content Bias
We must remember, however, that the perspective presented by the LLM is shaped by the training processes used by the provider. That means the company that created the LLM is embedding cultural norms and attitudes in the model, whether they really mean to or not. Reinforcement learning, a key part of training generative AI models, requires human users to make decisions about whether outputs are acceptable, and they are necessarily going to bring their own beliefs and attitudes to bear on those decisions, even implicitly. When it comes to questions that are matters of opinion rather than fact, we are at the mercy of the choices made by the companies that created and provide access to LLMs. If these companies incentivize and reward more progressive or open-minded sexual attitudes during the reinforcement learning phases, then we can expect that to be reflected in LLM behavior with users. However, researchers have found that this means LLM responses to sexual questions can end up minimizing or devaluing sexual expression that isn't "mainstream", including LGBTQ+ perspectives.
In some cases, this takes the form of LLMs not being permitted to answer questions about sexuality or related topics, a concept known as refusal. LLM providers might simply ban the discussion of such topics from their product, which leaves the user seeking reliable information with nothing. But it can also insinuate to the user that the topic of sexuality is taboo, shameful, or bad; otherwise, why would it be banned? This puts the LLM provider in a difficult position, unquestionably: whose moral standards are they supposed to follow? What kinds of sexual health questions should the chatbot respond to, and where is the boundary? By entrusting sexual education to these kinds of tools, we are accepting the opaque standard these companies choose, without actually knowing what it is or how it was defined.
Visual Content
But as I mentioned earlier, we don't just learn about sexuality from asking questions and looking up facts. We learn from experience and observation as well. In this context, generative AI tools that create images and video become extremely important for how young people understand bodies and sexuality. Döring et al. found a significant amount of implicit bias in the image generation offerings they examined.
“One strand of research on AI-generated information points to the risk that text- and image-generating AI tools will reinstate sexist, racist, ageist, ableist, heteronormative or other problematic stereotypes that are inscribed in the training data fed into the AI models. Such biases are easy to demonstrate such as when AI tools reaffirm cultural norms and stereotypes in their text and image outputs: Simply asked to create an image of “a couple” an AI image generator such as Midjourney (by Midjourney Inc.) will first present a young, able-bodied, normatively attractive, white, mixed-sex couple where the woman’s appearance is more sexualized than that of the man (as tested by the authors with Midjourney Alpha in June 2024).” — https://link.springer.com/article/10.1007/s11930-024-00397-y
As with the text generators, more sophisticated users can tune their prompting and select for the kinds of images they want to see, but if a user isn't sure what they're looking for, or isn't that skilled, this kind of interaction serves to further instill biases.
The Body
As an aside, it's worth considering how AI-generated images may shape our understanding of bodies, in a sexual context or otherwise. There have been threads of conversation in our culture for decades about how internet-accessible pornography has distorted young people's beliefs and expectations about how bodies should look and how sexual behavior should work. I think most analysis of those questions really isn't that different whether you're talking about the internet in general or generative AI.
The one area that does seem different, however, is in how generative AI can produce images and videos that appear photorealistic but show people in physically impossible or near-impossible ways. It takes unrealistic beauty standards to a new level. This can take the form of AI-based filters on real photos, severely distorting the shapes and appearances of real people, or it can be products that create images or videos from whole cloth. We have moved past a time when airbrushing, which could make small distortions of otherwise real bodies, was the main concern, into a time when the physically impossible or near-impossible is being presented to users as "normal" or the expected physical standard. For girls and boys alike, this creates a heavily distorted perspective on how our bodies and those of our intimate partners should look and behave. As I've written about before, our growing inability to tell synthetic from organic content has significantly damaging potential.
On that note, I'd also like to discuss a specific area where the norms and concepts young people learn are profoundly important to ensuring safe, responsible sexual engagement throughout people's lives: consent.
Consent
Consent is a tremendously important concept in our understanding of sexuality. It means, briefly, that all parties involved in any kind of sexual expression or behavior readily and affirmatively agree throughout, and are under no undue coercion or manipulation. When we talk about sexual expression/behavior, this can include the creation or sharing of sexually explicit imagery of those parties, as well as physical interactions.
When it comes to generative AI, this spawns a number of questions, such as:
- If a real person's image or likeness is used or produced by generative AI for sexual content, how do we know whether that person consented?
- If that person didn't consent to being the subject of sexual content, what are their rights and what are the obligations of the generative AI company and the generative AI user? And what are those obligations if they did consent to creating sexual content, but not in the generative AI context?
- How does it affect generative AI users' understanding of consent when they can so easily acquire this kind of content through generative AI, without ever directly interacting with the individual(s)?
What makes this different from older technologies, like airbrushing or photo editing? It's a matter of degree, in some ways. Deepfakes existed well before generative AI, when video editing could be used to put someone else's face into a porn scene or nude photo, but the ease, affordability, and accessibility of this technology has changed dramatically with the dawn of AI. Also, the growing inability of average viewers to detect this artificiality matters, because knowing what's "real" is getting harder and harder.
Copyright and IP
This topic has a lot of common threads with copyright and intellectual property questions. Our society is already starting to grapple with questions of ownership of one's own likeness, and what boundaries we are entitled to set on how our image is used. By and large, generative AI products have little to no effective restriction on how the images of public figures can be rendered. There are some perfunctory attempts to prevent image/video/audio generators from accepting explicit requests to create images (sexual or otherwise) of named public figures, but these are easily outwitted, and it seems to be of relatively minimal concern to generative AI companies, outside of complaints by large corporate interests. Scarlett Johansson has learned this from experience, and the recently launched Sora 2 generates endless deepfake videos of public figures from throughout history.
This applies to people in the sex industry as well. Even if people are involved in sex work or creating erotica or pornography willingly, this doesn’t mean they are consenting to their work being usurped for generative AI creation — this is really no different from the issues of copyright and intellectual property being posed by authors, actors, and artists in mainstream sectors. Just because people create sexual content, this doesn’t make the claim to their rights any less valid, despite social stigma.
I don’t want to portray this as an indictment of all sexual content, or necessarily even sexual content generated by AI. There’s room for debate about when and how artificially generated pornography can be ethical, and certainly I think when consenting adult performers produce pornography organically there’s nothing wrong with that on the face of it. But these issues of consent and individual rights have not been adequately addressed, and these should make us all very nervous. Many people may not think much about the rights of creators in this space, but how we treat their claims legally may create precedents that cascade down to many other scenarios.
Sexual Abuse
However, in the space of sexuality, we must also consider wholly nonconsensually created content, which can cause tremendous harm. Instead of calling things “revenge porn”, scholars are beginning to use the term “AI-generated image-based sexual abuse” to refer to cases where people’s likenesses are used without their permission to generate sexual content, and I think this much better articulates the damage that can be done by this material. Considering this behavior sexual abuse rightly forces us to think more about the experiences of the victims. While image manipulation and fakery has always been somewhat possible, the latest generative AI makes this more achievable, more accessible, and cheaper than ever before, so it makes performing this sort of sexual abuse much more convenient to abusers. It’s important to note that the degree or severity of this abuse is not necessarily defined by the publicness or damage to the victim’s reputation — it’s not important whether people believe that the deepfake or sexual content is real. Victims can still feel deeply violated and traumatized by this material being created about them, regardless of how others feel about it.
Major LLM providers have, to date, held the line on sexual text content being produced by their products (to greater or lesser degrees of success, as Lai 2025 found), but OpenAI's impending move into erotica means that this is likely to change. While text communication has less potential for particularly damaging abuse than visual content, ChatGPT does engage in some multimodal content generation, and we can still imagine scenarios where a user instructs an LLM to produce erotica in the voice or style of real people, and the real people being mimicked could understandably find this upsetting. When OpenAI announced the move, they discussed some safety issues, but these were entirely concerns about the users (mental health issues, for example) and didn't speak to the safety of nonconsenting individuals whose likenesses could be involved. I think this is a major oversight that needs more attention if we can possibly hope to make such a product offering safe.
Learning about Consent
Beyond the immediate harm to victims of sexual abuse and the IP and livelihood harms to creators whose content is used for these purposes, I think it's also important to consider what lessons users absorb from generative AI being able to create likenesses at will, particularly in sexual contexts. When we are given the ability to so readily create someone else's image in whatever form, whether it's a historical figure pitching someone's software product or that same historical figure being represented in a sexual situation, the inherent lesson is that that person's likeness is fair game. Legal nuances aside (which do need to be taken into account), we are specifically asserting that getting someone's approval to engage with them sexually is not necessary, at least when digital technology is involved.
Consider how young people are receiving the implicit messages from this. Kids know they can get in trouble for sharing other people's nudes, sometimes with severe legal consequences, but at the same time, there's an assortment of apps letting them create fake ones, even of real people, with the click of a button. How do we explain the difference and help young people learn about the real harm they could be inflicting even just sitting in front of a screen alone? We have to start thinking about bodily autonomy in the digital space as well as the physical space, because so much of our lives is conducted in the digital context. Deepfakes are not inherently less traumatizing than the sharing of organic nude photos, so why aren't we talking about this capability as a social risk kids need to be educated about? The lessons we want young people to learn about the importance of consent are pretty directly contradicted by the generative AI sphere's approach to sexual content.
Conclusion
You might reasonably finish this asking, "So, what do we do?" and that's a really hard question. I don't believe we can effectively prevent generative AI products from producing sexual content, because the training data simply includes so much of that material; this is reflective of our actual society. Also, there's a clear market for sexual content from generative AI, and some companies will always arise to fill that need. I also don't think LLMs should refuse to respond to sexual questions, where people may be seeking information to help them understand sexuality, human development, sexual health, and safety, because this is so important for everyone, particularly youth, to have access to.
But at the same time, the dangers around sexual abuse and nonconsensual sexual content are serious, as are the unrealistic expectations and body standards being set implicitly. Our legal systems have proven fairly inept at dealing with internet crime over the past decades, and this image-based sexual abuse is no exception. Prevention requires education, not just about the facts and the law, but about the impact that deepfake sexual abuse can have. We also need to provide counter-narratives to the distortions of bodily form that generative AI creates, if we want young people to have healthy relationships with their own bodies and with partners.
Beyond the broad social responsibility all of us share to participate in the project of effectively educating youth, it's the responsibility of generative AI product developers to consider risk and harm mitigation as much as they consider profit targets or user engagement goals. Unfortunately, it doesn't seem like many are doing so today, and that's a shameful failure of people in our field.
In fact, the sexual nature of this topic is less important than understanding the social norms we accept, our responsibilities to keep vulnerable people safe, and balancing this with protecting the rights and freedoms of ordinary people to engage in responsible exploration and behavior. It's not only a question of how we adults conduct our lives, but how young people have opportunities to learn and grow in ways that are safe and respectful of others.
Generative AI can be a tool for good, but the risks it creates need to be acknowledged. It's important to recognize the small and large ways that adding new technology to our cultural space affects how we think and act in our daily lives. By understanding these circumstances, we equip ourselves better to respond to such changes and shape the society we want to have.
Read more of my work at www.stephaniekirmer.com.
Reading
The Impact of Artificial Intelligence on Human Sexuality: A Five-Year Literature Review 2020-2024 …
https://www.cnbc.com/2025/10/15/erotica-coming-to-chatgpt-this-year-says-openai-ceo-sam-altman.html
https://www.georgetown.edu/news/ask-a-professor-openai-v-scarlett-johansson
Watchdog group Public Citizen demands OpenAI withdraw AI video app Sora over deepfake dangers
The Coming Copyright Reckoning for Generative AI
https://asistdl.onlinelibrary.wiley.com/doi/abs/10.1002/pra2.1326
Dehumanization of LGBTQ+ Groups in Sexual Interactions with ChatGPT
https://journals.sagepub.com/doi/full/10.1177/26318318251323714
The Cultural Impact of AI Generated Content: Part 1
AI “nudify” sites lack transparency, researcher says
New Companies Linked to ‘Nudify’ Apps That Ran Ads on Facebook, Instagram
