Microsoft Needs So Much Power to Train AI That It's Considering Small Nuclear Reactors - eviltoast
  • eestileib@sh.itjust.works

    For those who haven’t seen this discussion before, I feel like doing the next step in the dance. Cheers Plex.

    It’s important to note that nuclear can satisfy baseload demand, which is particularly important for something like a commercial AI model training facility, which will be scheduled to run at full blast with multiple nines of uptime.
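
    For a sense of scale, “multiple nines” leaves very little room for downtime; the plain availability arithmetic (nothing facility-specific) looks like this:

    ```python
    # Hours of allowed downtime per year at each availability level.
    HOURS_PER_YEAR = 24 * 365  # 8760

    for nines in (2, 3, 4, 5):
        uptime = 1 - 10 ** -nines
        print(f"{uptime:.5%} uptime -> {HOURS_PER_YEAR * (1 - uptime):.3f} h/year down")
    ```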

    Solar+storage is considerably less reliable than a local power plant (be it coal, gas, hydro, or nuclear). I have solar panels in an area that gets wildfire smoke (i.e. soon to be the entire planet), and visible smoke in the air effectively nullifies solar output.

    Solar is fantastic for covering load that is correlated with insolation: for example, colocated with facilities whose demand is dominated by air-conditioning. Data centers do use air-conditioning, but it’s the compute, not the cooling, that drives most of their power draw.

    • FooBarrington@lemmy.world

      While you’re right that baseload is more easily satisfied with nuclear, you’re wrong that it matters in any way for AI model training. This is one of the best uses for solar energy: you train while you have lots of energy, and you pause training while you don’t. Baseload is important for things that absolutely need to get done (e.g. powering machines in hospitals) or for things with a high startup cost (e.g. furnaces). AI model training is neither, so baseload isn’t relevant at all.
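
      Concretely, a minimal sketch of energy-aware training, assuming a hypothetical get_spot_price() price feed and a toy PyTorch model (the threshold, the model, and the price stub are all made up, not any real provider’s API):

      ```python
      import time
      import torch

      PRICE_CEILING = 0.10  # $/kWh; made-up threshold, tune to your market

      def get_spot_price() -> float:
          """Stub: wire this up to a real electricity price feed."""
          return 0.05  # placeholder value

      model = torch.nn.Linear(512, 512)        # stand-in for a real model
      opt = torch.optim.AdamW(model.parameters())

      def train_step() -> None:
          """Stand-in for one optimizer step on a real batch."""
          opt.zero_grad()
          loss = model(torch.randn(8, 512)).pow(2).mean()
          loss.backward()
          opt.step()

      step = 0
      while step < 1_000_000:
          if get_spot_price() > PRICE_CEILING:
              time.sleep(600)   # power is expensive: idle and re-check later
              continue
          train_step()
          step += 1
          if step % 1000 == 0:  # checkpoint often so a pause loses little work
              torch.save({"step": step,
                          "model": model.state_dict(),
                          "opt": opt.state_dict()}, "ckpt.pt")
      ```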

      • eestileib@sh.itjust.works

        It’s not life-critical, but it is financially critical to the company. You aren’t going to build a project on the scale of a data center that is capable of running 24/7 and not run it as much as possible.

        That equipment is expensive and has a relatively short useful lifespan even if it’s not running.

        This is why tire factories and refineries run three shifts; this isn’t a phenomenon unique to data centers.
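
        Back-of-envelope, with made-up numbers for card price and lifespan, the depreciation clock ticks whether or not the machine is busy:

        ```python
        gpu_cost = 30_000  # $ per accelerator (illustrative)
        life_years = 4     # assumed useful life, used or not
        hours = life_years * 365 * 24

        print(f"${gpu_cost / hours:.2f}/hour of depreciation, running or idle")
        ```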

        • FooBarrington@lemmy.world

          > It’s not life-critical, but it is financially critical to the company. You aren’t going to build a project on the scale of a data center that is capable of running 24/7 and not run it as much as possible.

          Sorry, but that’s wrong. You’ll run it as much as is profitable. If the cost of electricity goes up, there is a point where you’ll stop running it, because it becomes too expensive. Even more so considering that AI models don’t have a set goal to reach: you train them as long as you want and can, but the extra training hits diminishing returns after a while.
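
          As a toy illustration, assume loss follows a power law in compute, in the spirit of published scaling laws (the constants here are invented):

          ```python
          # L(C) = a * C**(-alpha): every doubling of compute buys less.
          a, alpha = 10.0, 0.05

          for compute in (1, 2, 4, 8, 16):
              print(f"{compute:>2}x compute -> loss {a * compute ** -alpha:.3f}")
          # with alpha = 0.05, each doubling lowers loss by only ~3.4%
          ```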

          > That equipment is expensive and has a relatively short useful lifespan even if it’s not running.

          Not really; the limiting factor in AI training is mostly the supply of accelerator cards. The cards already in use will stay in use until they fail; they won’t be replaced with newer cards the second those are released.

          > This is why tire factories and refineries run three shifts; this isn’t a phenomenon unique to data centers.

          This is comparing apples and oranges, since tire factories:

          • have long-term planning and production goals to reach

          • have employees whose shifts must be planned

          • have resource input costs (raw materials) that are far higher than their electricity bill

          Of course you want the highest utilisation you can economically reach, but a better comparison would be crypto mining, which also has expensive equipment with a relatively short useful lifespan even when idle, and yet miners stop mining when electricity gets too expensive.
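
          The mining breakeven logic is one line of arithmetic; with made-up numbers for the rig and the reward rate:

          ```python
          hashrate_th = 100          # TH/s (illustrative)
          power_kw = 3.0             # rig draw in kW
          revenue_per_th_day = 0.06  # $/TH/s/day (illustrative)

          daily_revenue = hashrate_th * revenue_per_th_day   # $6.00
          daily_kwh = power_kw * 24                          # 72 kWh
          print(f"mine only while power < ${daily_revenue / daily_kwh:.3f}/kWh")
          ```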

      • guacupado@lemmy.world

        “And you pause training while you don’t.” lmao I don’t know why people keep giving advice in spaces they’ve never worked in.

        • FooBarrington@lemmy.world

          What are you trying to imply? That training Transformer models has to be a continuous process? You know it’s pretty easy to stop and continue training, right?
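
          Resuming really is a few lines in a PyTorch-style sketch (the file name, shapes, and optimizer here are placeholders, not anyone’s production setup):

          ```python
          import torch

          model = torch.nn.Linear(512, 512)  # same architecture as when saved
          opt = torch.optim.AdamW(model.parameters())

          ckpt = torch.load("ckpt.pt")       # checkpoint written during training
          model.load_state_dict(ckpt["model"])
          opt.load_state_dict(ckpt["opt"])
          step = ckpt["step"]
          # the training loop picks up from `step` as if never interrupted
          ```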

          I don’t know why people keep commenting in spaces they’ve never worked in.

          • guacupado@lemmy.world

            No datacenter is shutting off a leg, hall, row, or rack because “we have enough data, guys.” Maybe at your university server room where CS majors are interning. These things are running 24/7/365 with UU tracking specifically to keep them up.

            • FooBarrington@lemmy.world

              What are you talking about? Who said anything close to “we have enough data, guys”?

              Are you ok? You came in with a very snippy and completely wrong comment, and you’re continuing with something completely random.