OpenAI Is ‘Exploring’ How to Responsibly Generate AI Porn - eviltoast
  • sachabe@lemmy.world · 6 months ago

    So the only thing the article says is:

    The Model Spec document says NSFW content “may include erotica, extreme gore, slurs, and unsolicited profanity.” It is unclear if OpenAI’s explorations of how to responsibly make NSFW content envisage loosening its usage policy only slightly, for example to permit generation of erotic text, or more broadly to allow descriptions or depictions of violence.

    … and somehow Wired turned it into “OpenAI wants to generate porn”.

    This is just pure clickbait.

    • DarkThoughts@fedia.io · 6 months ago

      Erotic text messages could be considered pornographic work I guess, like erotic literature. But I think they’re just starting to realize how many of their customers jailbreak GPT for that specific purpose, and how good the alternatives that allow this type of chat have gotten, such as NovelAI. Given how many other AI services started to censor things and how much that affected their models (like your chatbot partner getting stuck in consent messages as soon as you went anywhere slightly outside vanilla territory), and how much drama that has caused throughout those communities, I highly doubt that “loosening” their policy is going to be enough to sway people towards them instead of the competition.

      • yamanii@lemmy.world · 6 months ago

        After experiencing Janitor AI and local models I’m certainly not coming back to Character AI. Why waste so much time trying to jailbreak a censored model when we have ones that just do as they are told?

        • DarkThoughts@fedia.io · 6 months ago

          Janitor, like most “free” models, degrades too quickly for my liking. And if I pay I might as well use NovelAI + SillyTavern, since they don’t have any restrictions on their text gen models that could interfere with their generation. I didn’t have much luck getting local models to run, and I suspect they’d be pretty slow too.

          • tal@lemmy.today · 6 months ago

            KoboldAI has models trained on erotica (Erebus and Nerybus). It can spread a model’s layers across multiple GPUs, so as long as one is satisfied with the output text, in theory it’d be possible to build a very high-powered machine (in wattage terms) with something like four RTX 4090s and get roughly real-time text generation. That’d be something like $8k in parallel compute cards.
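
            For what it’s worth, the layer-splitting idea isn’t tied to KoboldAI’s own UI. A minimal sketch of the same idea with the Hugging Face transformers/accelerate stack instead (assuming the KoboldAI/OPT-13B-Erebus checkpoint from Hugging Face and enough combined VRAM across the cards to hold the fp16 weights) would look roughly like this:

            ```python
            # Rough sketch: shard an Erebus checkpoint across all visible GPUs.
            # Assumes transformers, accelerate, and torch are installed; the combined
            # VRAM of the cards must be enough to hold the fp16 weights.
            import torch
            from transformers import AutoModelForCausalLM, AutoTokenizer

            model_id = "KoboldAI/OPT-13B-Erebus"

            tokenizer = AutoTokenizer.from_pretrained(model_id)
            model = AutoModelForCausalLM.from_pretrained(
                model_id,
                torch_dtype=torch.float16,
                device_map="auto",  # accelerate spreads the layers over every available GPU
            )

            prompt = "The tavern door creaked open and"
            inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
            output = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.9)
            print(tokenizer.decode(output[0], skip_special_tokens=True))
            ```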

            I’m not sure how many people want to spend $8k on a locally-operated sex chatbot, though. I mean, yes privacy, and yes there are people who do spend that on sex-related paraphernalia, but that’s going to restrict the market an awful lot.

            Maybe as software and hardware improve, that will change.

            The most obvious way to cut the cost is to do what has been done with computing hardware for decades, like back when people were billed for minutes of computing time on large computers in datacenters – have multiple users of the hardware, and spread costs. Leverage the fact that most people using a sex chatbot are only going to be using the compute hardware a fraction of the time, and then have many people use the thing and spread costs across all of them. If any one user uses the hardware 1% of the time on average, that same hardware cost per user is now $80. I’m pretty sure that there are a lot more people who will pay $80 for use of a sex chatbot than $8000.
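
            A back-of-the-envelope version of that amortization, using only the numbers above (real scheduling overhead and peak-hour contention would push the per-user figure up):

            ```python
            # Amortizing shared GPU hardware across users; numbers are illustrative,
            # taken straight from the comment above.
            hardware_cost = 8_000         # ~four RTX 4090s, in dollars
            utilization_per_user = 0.01   # each user occupies the hardware ~1% of the time

            users_per_machine = 1 / utilization_per_user        # 100 users share one box
            cost_per_user = hardware_cost / users_per_machine   # 8000 / 100

            print(f"Hardware cost per user: ${cost_per_user:.0f}")  # ~$80
            ```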

      • tal@lemmy.today · 6 months ago

        But I think they’re just starting to realize how many of their customers jailbreak GPT for that specific purpose

        They can see and data-mine what people are doing. Their entire business is based on crunching large amounts of data. I think that they have had a very good idea of what their users are doing with their system since the beginning.