Microsoft inks deal to restart Three Mile Island nuclear reactor to fuel its voracious AI ambitions

Modern AI data centers consume enormous amounts of power, and they look set to become even more power-hungry in the coming years as companies like Google, Microsoft, Meta, and OpenAI push toward artificial general intelligence (AGI). Oracle has already outlined plans to use nuclear power plants for its 1-gigawatt data centers. Microsoft appears to be following suit: it has just inked a deal to restart a nuclear power plant to feed its data centers, reports Bloomberg.

  • EnoBlk@lemmy.world · 2 months ago

    That’s one group’s opinion. We still see LLMs improving, and I’m sure they will continue to improve and be adapted for whatever future uses we need. I personally find them great in their current state for what I use them for.

    • finitebanjo@lemmy.world · 2 months ago

      What skin do you have in this game? Leading industry experts, who btw want to SELL IT TO YOU, told you it has hit a ceiling. Why do you dispute it so much? Let it die; we will all be better off.

      • EnoBlk@lemmy.world · 2 months ago

        I use them regularly for personal and work projects. They work great at outlining what I need to do in a project, as well as identifying oversights in it. If industry experts are saying this, then why are improvements still being made, and why are they still providing value to people? Just because you don’t use them doesn’t mean they aren’t useful.

        • finitebanjo@lemmy.world · 2 months ago (edited)

          Maybe you saw the news about a major hit to US cybersecurity caused by morons like you copy-pasting from the GeePeeTee? Or about a wave of falsified research papers generated by AI? Or how a lawyer tried to use an AI assistant, resulting in fines and a bar review?

      • areyouevenreal@lemm.ee · 2 months ago

        Even if it didn’t improve further, there are still uses for the LLMs we have today. And that’s only one kind of AI; the kind that makes all the images and videos is completely separate, and it has come a long way too.

        • finitebanjo@lemmy.world · 2 months ago

          I made this chart for you:

          ------ Expectations for AI





          ----- LLMs’ actual usefulness

          ----- What I think of it


          ----- LLMs’ usefulness after accounting for costs

          • areyouevenreal@lemm.ee · 2 months ago

            Bruh, you have no idea about the costs. I doubt you have even tried running AI models on your own hardware. There are literally some models that will run on a decent smartphone. Not every LLM is ChatGPT, enormous in size and resource consumption and hidden behind a veil of closed-source technology.

            Also, that trick isn’t going to work for someone just looking at a comment. Lemmy compresses whitespace because it uses Markdown; it only shows the extra lines when replying.

            Can I ask you something? What did Machine Learning do to you? Did a robot kill your wife?

            • finitebanjo@lemmy.world · 2 months ago

              Earlier this year, the International Energy Agency released its energy usage report and forecast, predicting that the total global electricity consumption of data centers is set to top 1 PWh (petawatt-hour) in 2026. That is more than double its 2022 value and (as the report states) “is equivalent to the electricity consumption of Japan.” SOURCE

              It does fuck all for me except make art and customer service worse on average, but yes, it certainly will result in countless avoidable deaths if we don’t heavily curb its usage soon, as it is projected to quintuple its power draw by 2029.

              • areyouevenreal@lemm.ee · 2 months ago

                I am not talking about things like ChatGPT, which rely more on raw compute and scaling than some other approaches do and are hosted in massive data centers; I actually find that approach wasteful as well. I am talking about some of the open-weights models that use a fraction of the resources for similar quality of output. According to some industry experts, that will be the way forward anyway, as purely making models bigger has limits and is hella expensive.

                Another thing to bear in mind is that training a model is far more resource-intensive than using it, though that’s also being worked on.

                • finitebanjo@lemmy.world · 2 months ago (edited)

                  You put power in and you get worthless garbage out. Do the world a favor and just mine crypto instead; try FoldingCoin out.

                  • areyouevenreal@lemm.ee · 2 months ago

                    I’ve seen teachers use this stuff and get genuinely decent results. I’ve also seen papers where people use LLMs to hack into a computer, which is a damn sophisticated task. So you are either badly informed or just lying. While LLMs aren’t perfect and aren’t a replacement for humans, they are still very much useful. To believe otherwise is folly and shows your personal bias.