“In 10 years, computers will be doing this a million times faster.” The head of Nvidia does not believe trillions of dollars need to be invested in AI chip production

Despite the fact that Nvidia is currently arguably the main beneficiary of the surge of interest in AI, the head of the company, Jensen Huang, does not believe that trillions of dollars need to be invested in AI chip production.

  • rebelsimile@sh.itjust.works · 9 months ago

    Honestly, as someone who has watched the once-fanciful prefixes “giga” and “tera” enter common parlance, and seen kilobytes of RAM turn into gigabytes, it’s really hard for me to think that what he’s saying is impossible.

    • Buddahriffic@lemmy.world · 9 months ago

      Even if he is accurate, specialist hardware will outperform generic hardware at what it is specialized for.

      I remember a story sometime in the 00s about PCs finally getting to the point where they were as fast as one of the WWII code-breaking computers (or something like that). It wasn’t because computer speeds backtracked after WWII; even that ancient hardware got good performance because it was purpose-built, but it couldn’t do anything else and likely would have required a lot of work to adjust to a different kind of cypher scheme, if it could be adapted at all.

      So GP compute might be a million times faster in a decade, but specialist AI chips might be a million times faster than that.

      A hardware neural net might be able to eliminate memory latency by giving each neuron fast registers to handle all of its memory needs. If it doesn’t need to change connections, each connection could be hard-wired. A GPU wouldn’t have a chance at keeping up, no matter how wide that memory bus gets or how many channels it gets split into (rough numbers below). It might even use way less power (though with memory latency eliminated, it could run fast enough to use way more, too).
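
      A rough back-of-the-envelope sketch of that point, with made-up numbers (the model size, precision, and bandwidth below are assumptions, not anything from the thread): for a memory-based accelerator, just streaming the weights sets a hard floor on per-pass time, while a design with the connections hard-wired into the circuit pays nothing for that step.

      ```python
      # Illustrative comparison: time spent only on streaming weights from
      # memory vs. a hard-wired design that never fetches them.
      # All numbers below are assumed for the sake of the example.

      params = 1_000_000_000        # assumed network size: 1B weights
      bytes_per_weight = 2          # assumed fp16 storage
      mem_bandwidth = 1e12          # assumed 1 TB/s memory bus

      weight_bytes = params * bytes_per_weight
      fetch_time_ms = weight_bytes / mem_bandwidth * 1e3

      print(f"Weights to move per pass: {weight_bytes / 1e9:.1f} GB")
      print(f"Floor from memory traffic alone: {fetch_time_ms:.1f} ms per pass")
      # A net with hard-wired connections skips this step entirely, so its
      # floor comes from compute and signal propagation, not the memory bus.
      ```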

    • TheGrandNagus@lemmy.world · 9 months ago

      Nobody is saying it won’t happen eventually. But a million times within the next decade, i.e. roughly 4x better every year for 10 years (quick check below)?

      This generation isn’t better than the last one by anywhere close to that, never mind sustaining 4x for 10 years straight.
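
      Quick sanity check on that factor (just the compounding arithmetic, nothing more):

      ```python
      # "A million times in 10 years" implies an annual factor of 1e6 ** (1/10).
      annual = 1_000_000 ** (1 / 10)
      print(f"Required year-over-year speedup: {annual:.2f}x")   # ~3.98x
      print(f"Compounded over 10 years: {annual ** 10:,.0f}x")   # ~1,000,000x
      ```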

      • fidodo@lemmy.world · 9 months ago

        He was probably not being literal with the number, but when you’re the head of a chip company, you should pick your numbers carefully.