NVIDIA has removed “Hot Spot” sensor data from GeForce RTX 50 GPUs - eviltoast

“Jensen sir, 50 series is too hot”

“Easy fix with my massive unparalleled intellect. Just turn off the sensor”

If you needed any more proof that Nvidia is continuing to enshittify their monopoly and milk consumers: hey, let's remove one of the critical things that lets you diagnose a bad card and catch bad situations that might result in GPU death! Don't need that shit. Just buy new ones every 2 years, you poors!

If you buy an Nvidia GPU, you are part of the problem here.

    • empireOfLove2@lemmy.dbzer0.com (OP) · 3 days ago

      Is it though?

      The Hotspot temp sensors are one of the most critical diagnostic sensors an end user can have. When the thermal interface material begins to degrade (or leak out of the rubber gasket, in the case of the 5090's liquid metal), your package temp may only go up a few C, but your Hotspot may increase by 10-20 C or more. That delta indicates problems and is almost certainly one of the leading causes of dead and crashing GPUs; it's also the easiest failure mode to detect and fix.
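      The check described above is simple enough to sketch in a few lines. This is a hypothetical illustration, not an NVIDIA tool: the function name and the 15 C default threshold are my own assumptions (picked because healthy cards usually show a single-digit delta), and you'd feed it readings from a monitoring tool like GPU-Z or HWiNFO.

      ```python
      # Hypothetical sketch: flag likely TIM degradation from the gap between
      # package ("edge") temperature and hotspot temperature. The 15 C default
      # threshold is an assumption based on typical healthy deltas, not an
      # NVIDIA-published figure.

      def tim_degradation_warning(package_c: float, hotspot_c: float,
                                  threshold_c: float = 15.0) -> bool:
          """Return True when the hotspot-to-package delta suggests degraded TIM."""
          delta = hotspot_c - package_c
          return delta > threshold_c

      # A healthy card: 8 C delta, no warning.
      print(tim_degradation_warning(70.0, 78.0))   # False
      # Degraded TIM: 23 C delta, warning.
      print(tim_degradation_warning(72.0, 95.0))   # True
      ```

      Which is exactly the point: with the sensor hidden, even a trivial check like this becomes impossible for the end user.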

      Removing it quite literally has zero engineering justification beyond:

      • hiding from reviewers the fact that the 5090 pulls too much power and runs too hot for a healthy lifespan, even with liquid metal and the special cooler
      • fucking over consumers so they can no longer diagnose their own hardware
      • ensuring more 5090s die rapidly, via lack of critical monitoring, so that Nvidia's funny number can keep going up from people re-buying GPUs that cost more than some used cars every 2 years.

      The sensors are still definitely there. They have to be for thermal management, or else these things would turn into fireworks. The readings are just being hidden from the user.

      This isn't even counting the fact that Hotspot also usually includes sensors inside the VRMs and memory chips, which are even more sensitive to a bad TIM application and to running excessively warm for long periods of time.

    • brucethemoose@lemmy.world · 3 days ago

      It looks bad with the insane TDPs they run at now. They could cut the TDP by 33% and probably lose like 5-10% perf depending on the SKU. Maybe even less.

      It also looks a lot like planned obsolescence.