this post was submitted on 09 May 2024
Technology
Erotic text messages could be considered pornographic work, I guess, like erotic literature. But I think they're just starting to realize how many of their customers jailbreak GPT for that specific purpose, and how good the alternatives that allow this type of chat, such as NovelAI, have become. Given how many other AI services started censoring things, how much that affected their models (like your chatbot partner getting stuck in consent messages as soon as you went anywhere slightly outside vanilla territory), and how much drama that caused throughout those communities, I highly doubt that "loosening" their policy will be enough to sway people towards them instead of the competition.
After experiencing Janitor AI and local models, I'm certainly not coming back to Character AI. Why waste so much time trying to jailbreak a censored model when we have ones that just do as they're told?
Janitor, like most "free" models, degrades too quickly for my liking. And if I'm paying, I might as well use NovelAI + SillyTavern, since they don't have any restrictions on their text-gen models that could interfere with generation. As for local models, I haven't had much luck getting them to run, and I suspect they'd be pretty slow too.
KoboldAI has models trained on erotica (Erebus and Nerybus). It can spread model layers across multiple GPUs, so as long as one is satisfied with the output text, in theory it'd be possible to build a very high-powered machine (in wattage terms, I mean) with something like four RTX 4090s and get something like real-time text generation. That'd be about $8k in parallel compute cards.
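To illustrate the layer-splitting idea: a multi-GPU setup like this typically assigns each transformer layer to one card, in contiguous blocks. This is just a sketch of the partitioning arithmetic under that assumption; the function name and interface are hypothetical, not KoboldAI's actual API.

```python
def split_layers(num_layers, num_gpus):
    """Partition layer indices 0..num_layers-1 into contiguous
    blocks, one block per GPU, as evenly as possible."""
    base, extra = divmod(num_layers, num_gpus)
    assignment, start = [], 0
    for gpu in range(num_gpus):
        # the first `extra` GPUs take one extra layer each
        count = base + (1 if gpu < extra else 0)
        assignment.append(list(range(start, start + count)))
        start += count
    return assignment

# A 32-layer model over 4 GPUs: 8 contiguous layers per card.
print(split_layers(32, 4))
```

In practice the tricky part isn't the split itself but the activations that have to cross the GPU boundary at each block edge, which is why interconnect bandwidth matters for this kind of rig.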
I'm not sure how many people want to spend $8k on a locally-operated sex chatbot, though. I mean, yes, there's the privacy argument, and yes, there are people who spend that much on sex-related paraphernalia, but that price restricts the market an awful lot.
Maybe as software and hardware improve, that will change.
The most obvious way to cut the cost is to do what has been done with computing hardware for decades, back when people were billed by the minute for time on large datacenter machines: share the hardware among multiple users and spread the cost. Most people using a sex chatbot will only occupy the compute hardware a fraction of the time, so many users can share the same machine. If each user occupies the hardware 1% of the time on average, the hardware cost per user drops to $80. I'm pretty sure a lot more people will pay $80 for use of a sex chatbot than $8,000.
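The amortization arithmetic above is simple enough to write down directly. This is just a sketch of the back-of-the-envelope math; the numbers are the ones from the comment, not real pricing.

```python
def cost_per_user(hardware_cost, utilization_per_user):
    """Amortized hardware cost per user, assuming each user occupies
    the machine `utilization_per_user` (a fraction) of the time and
    the machine can be kept fully shared among 1/utilization users."""
    max_users = 1 / utilization_per_user
    return hardware_cost / max_users

# $8,000 rig, each user active 1% of the time -> $80 per user.
print(cost_per_user(8000, 0.01))
```

This ignores scheduling overhead and peak-hour contention (everyone wants the GPU at the same time of day), so real per-user cost would land somewhat higher than the ideal figure.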
They can see and data-mine what people are doing. Their entire business is based on crunching large amounts of data. I think that they have had a very good idea of what their users are doing with their system since the beginning.