Recent AI failures are cracks in the magic - eviltoast
  • Buffalox@lemmy.world
    9 months ago

    I agree 100%, and I think Zuckerberg’s attempt to build a general AI out of an LLM running on a massive 340,000 of Nvidia’s H100 GPUs sounds stupid. Unless there’s a lot more to their attempt, it’s doomed to fail.

    I suppose the idea is something about achieving critical mass, but it’s pretty obvious that that is far from the only factor missing to achieve general AI.

    I still think it’s impressive what they can do with LLMs, and it seems to be a pretty huge step forward. But it’s taken about 40 years from when we had decent “pattern recognition” to get here; the next step could take another 40 years.

    • Lvxferre@mander.xyz
      9 months ago

      I think that Zuckerberg’s attempt is a mix of publicity stunt and “I want [you] to believe!”. Trying to reach AGI through a large enough LLM sounds silly, on the same level as “ants build, right? If we gather enough ants, they’ll build a skyscraper! Chrust me.”

      In fact I wonder if the opposite direction wouldn’t be a bit more feasible - start with some extremely primitive AGI, then “teach” it Language (as a skill) and a language (like Mandarin or English or whatever).

      I’m not sure how many years it’ll take for an AGI to pop up. 100 years perhaps, but I’m just guessing.