Robin Williams' daughter Zelda says AI recreations of her dad are 'personally disturbing'

Robin Williams’ daughter Zelda says AI recreations of her dad are ‘personally disturbing’: ‘The worst bits of everything this industry is’

  • assassin_aragorn@lemmy.world · 1 year ago

    I haven’t watched Star Trek, but if you’re correct, they depicted an incredibly rudimentary and error-prone system. Google “do any African countries start with a K” meme and look at the suggested answer to see just how smart AI is.

    I remain skeptical of AI. If I see evidence suggesting I’m wrong, I’ll be more than happy to admit it. But the technology being touted today is not the general AI envisioned by science fiction, nor is it the sum of everything that’s been studied in the space over the last decade. This is just sophisticated content generation.

    And finally, throwing data at something does not necessarily improve it. This is easily evidenced by the Google search I suggested. The problem with feeding data en masse is that the data may not be correct. And if the data itself is AI output, it can seriously mess up the algorithms. Since these venture-capital-backed companies have given the problem no consideration, there’s no inherent marker for AI output, and because of that the technology will always regress toward mediocrity. And I don’t think I need to explain that throwing a bunch of funding at X does not make X a worthwhile endeavor. Crypto and NFTs come to mind.

    I leave you with this article as a counterexample: https://gizmodo.com/study-finds-chatgpt-capabilities-are-getting-worse-1850655728

    Throwing more data at the models has been making things worse. Although the exact reasons are unclear, it does suggest that AI is woefully unreliable and immature.

    • lloram239@feddit.de · 1 year ago

      Google “do any African countries start with a K” meme and look at the suggested answer to see just how smart AI is.

      Oh noes, somebody used AI wrong and got bad results. What else is new? ChatGPT works on tokens (words or word segments converted to integers), not on characters. Any character-based question will naturally be problematic, since the AI literally doesn’t see the characters you are asking it about. Same with digits and math. The surprising part here isn’t that ChatGPT gets this wrong, that bit is obvious, but the number of questions in that area that it manages to answer correctly anyway.
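      A quick sketch of what that looks like in practice, using OpenAI’s tiktoken library (the encoding name below is the one publicly associated with GPT-3.5/GPT-4-era models; the exact IDs you’ll get are illustrative, not guaranteed):

      ```python
      # Why character-level questions are hard for GPT models: the model
      # receives integer token IDs, not letters. Requires `pip install tiktoken`.
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")  # GPT-3.5/GPT-4-era encoding

      for word in ["Kenya", "Kyrgyzstan", "strawberry"]:
          ids = enc.encode(word)
          pieces = [enc.decode([i]) for i in ids]
          print(word, "->", ids, pieces)

      # A word often splits into several opaque IDs; whether a country
      # "starts with K" is simply not visible in that representation.
      ```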

      This is just sophisticated content generation.

      Whenever I read “just” I can’t help but think of Homer Simpson’s “It only transports matter?”. Seriously, there is nothing “just” about this. What ChatGPT is capable of is utterly mind-boggling. Humans have worked on teaching computers to understand natural language ever since the very first computers, 80 or so years ago, without much success. Even a simple automatic spell checker that actually worked was elusive. ChatGPT is so f’n good at natural language that people don’t even realize how hard a problem that is; they just accept that it works and don’t think about it, because it’s basically 100% correct at understanding language.

      And finally, throwing data at something does not necessarily improve it.

      ChatGPT is a text auto-complete engine. The developers didn’t set out to build a machine that can think, reason, replicate the brain, or even build a chatbot. They built one that tells you what word comes next. And then they threw lots of data at it. Everything ChatGPT is capable of is basically an accident, not design. As it turns out, to predict the next word correctly you have to have a very rich understanding of the world, and GPT figures that out all by itself just by going through lots and lots of text.
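      To make the auto-complete framing concrete, here is a toy next-word predictor (a bigram counter, nowhere near a transformer, but the generation loop has the same shape: predict the next word, append it, repeat):

      ```python
      # A toy "what word comes next" engine. ChatGPT's loop has the same
      # structure, just with a vastly better predictor behind it.
      import random
      from collections import defaultdict

      corpus = "the cat sat on the mat and the cat slept on the mat".split()

      # Count which word follows which in the training text.
      following = defaultdict(list)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev].append(nxt)

      def generate(start, length=8):
          out = [start]
          for _ in range(length):
              candidates = following.get(out[-1])
              if not candidates:
                  break
              out.append(random.choice(candidates))  # pick a plausible next word
          return " ".join(out)

      print(generate("the"))  # e.g. "the cat slept on the mat and the cat"
      ```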

      That’s the part that makes modern AI interesting and scary: we don’t really know why any of this works. We just keep throwing data at the AI and see what sticks. And for the last 10 years, a lot of it stuck. Find a problem space that you have lots of data for, throw it at AI, and get interesting results. No human sat down and taught DALL-E how to draw, and no human taught ChatGPT how to write English; it’s all learned from the data. Worse yet, the lesson learned over the last decade is essentially that human expertise is largely worthless in teaching AIs; you get much better results by simply throwing lots of data at them.

      I leave you with this article as a counterexample

      That is utterly meaningless. OpenAI is constantly tweaking that thing for business reasons, including downgrading it to consume fewer resources and censoring it so it won’t produce anything nasty (Meta didn’t get the memo). The same happened with Bing Chat, and the same thing just happened with DALL-E 3, which until a few days ago could generate celebrity faces and now blocks all requests in that direction.

      When you compare GPT-3.5 with the new/paid GPT-4, i.e. a newly trained version with more data, the latter ends up being far superior to the previous one. Same with DALL-E 2 vs. DALL-E 3.

      Also note that modern AIs don’t learn. They are trained on a dataset once, and that’s it. The models are completely static after that. Nothing you type into them will be remembered by them. The illusion of short-term memory comes from the whole conversation history getting fed into the model each time. The training step is completely separate from chatting with the model.
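      A sketch of that mechanism, following the shape of the OpenAI Python SDK (the model name and exact response fields are assumptions that vary by SDK version; the point is only that the full transcript is resent on every call):

      ```python
      # The "memory" is just the client resending the whole transcript.
      # Based on the OpenAI Python SDK (`pip install openai`); model name
      # and response fields here are illustrative assumptions.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment
      history = [{"role": "system", "content": "You are a helpful assistant."}]

      def chat(user_text):
          history.append({"role": "user", "content": user_text})
          resp = client.chat.completions.create(
              model="gpt-3.5-turbo",
              messages=history,  # the ENTIRE conversation goes out every call
          )
          reply = resp.choices[0].message.content
          history.append({"role": "assistant", "content": reply})
          return reply

      chat("My name is Zelda.")
      print(chat("What is my name?"))  # "remembered" only because it was resent
      ```

      The model weights never change during this exchange; drop the history list and the “memory” disappears.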