The dream
  • Ookami38@sh.itjust.works · 11 months ago

    So reading through your post and the article, I think you’re a bit confused about the “curated response” thing. I believe what they’re referring to is users’ ability to give answers a “good answer” or “bad answer” flag that would then later be used for retraining. This could also explain the AI’s drop in quality, if enough people are upvoting bad answers or downvoting good ones.
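
    Just to make concrete what I mean by that kind of feedback loop, here’s a rough sketch (purely hypothetical names and structure, not anything from the article or from OpenAI): users flag answers, only exchanges with good flags get kept for retraining, so if people flag badly, bad answers end up back in the training pool.

    ```python
    # Hypothetical sketch of a flag-and-retrain loop: users mark responses as
    # good or bad, and those flags later decide which exchanges are reused as
    # fine-tuning data. Names and thresholds are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class LoggedResponse:
        prompt: str
        answer: str
        upvotes: int = 0
        downvotes: int = 0

    @dataclass
    class FeedbackStore:
        responses: list = field(default_factory=list)

        def record(self, prompt: str, answer: str) -> LoggedResponse:
            entry = LoggedResponse(prompt, answer)
            self.responses.append(entry)
            return entry

        def select_for_retraining(self, min_score: int = 1) -> list:
            # Keep only exchanges whose net user score clears a threshold.
            # If users upvote bad answers (or downvote good ones), those bad
            # answers pass this filter and get reinforced -- the quality drop
            # mentioned above.
            return [r for r in self.responses
                    if r.upvotes - r.downvotes >= min_score]

    # Example: two logged answers, one flagged well and one flagged badly.
    store = FeedbackStore()
    good = store.record("What is 2 + 2?", "4")
    bad = store.record("What is 2 + 2?", "5")
    good.upvotes = 3
    bad.downvotes = 2
    print([r.answer for r in store.select_for_retraining()])  # ['4']
    ```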

    The article also describes “commanders” reviewing responses and having the code team change the algorithm in response. Again, this isn’t picking responses for the AI. Instead, it’s reviewing responses it’s given, deciding if they’re good or bad, and making changes to the algorithm to get more accurate answers in the future.

    I have not heard anything like what you’re describing, with real people generating the responses in real time for GPT users. I’m open to being wrong, though, if you have another article.

    • OpenStars@kbin.social · 11 months ago

      I might be guilty of misinformation here. Perhaps it was a forerunner to ChatGPT, or even a different (competing) chatbot entirely, where someone would read an answer from the machine before deciding whether to send it on to the end user, and the novelty of ChatGPT was in throwing off the shackles of that older incarnation. I do recall a story along the lines I mentioned, but I cannot find it now, which lends some credence to that thought. In any case, it would have been multiple generations behind the modern ones, so you are correct that it is not so relevant anymore.