AMD's Dr. Lisa Su on the role of artificial intelligence in gaming: 'Not everything has to be rendered'
  • QuadratureSurfer@lemmy.world · 5 months ago

    If you’re trying to compare “AI” and electricity use, you need to compare each use case: how we traditionally do something versus how some form of “AI” does it. Even then, we need to ask whether there’s a better way to do it, or whether the gain in productivity is worth it.

    For example, a rain sensor on your car.
    Now, you could set up an AI/ML model with a camera and computer vision to detect when to turn on your windshield wipers.
    But why do that when you could use a little sensor that shines a low-power laser at the window and activates the wipers when it detects a difference in the energy that’s normally reflected back?
    The dedicated sensor will use far less energy and be far more efficient for this use case.
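
    A minimal sketch of that dedicated-sensor logic, assuming a hypothetical reading of how much of the emitted beam comes back from the glass (the constants below are made up for illustration): compare the reflected fraction against a dry-glass baseline and switch the wipers on when enough light is scattered away by water. No camera, no model inference.

    ```python
    # Hypothetical sensor reading: fraction of the emitted beam reflected back.
    # The baseline and threshold are made-up illustrative values.
    DRY_BASELINE = 0.92     # reflectance of a dry windshield
    RAIN_THRESHOLD = 0.15   # drop in reflectance we treat as "water on the glass"

    def should_wipe(reflected_fraction: float) -> bool:
        """Water droplets scatter the beam, so less light comes back when it rains."""
        return (DRY_BASELINE - reflected_fraction) > RAIN_THRESHOLD

    print(should_wipe(0.90))  # False -> dry glass
    print(should_wipe(0.70))  # True  -> rain detected, turn on the wipers
    ```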

    On the other hand, I could spend time and electricity watching a video over and over, trying to translate what someone said from one language to another, or I could use Whisper (another ML model) to translate and transcribe what was said in a matter of seconds. In this case, Whisper uses less electricity.
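
    For a sense of how little that workflow takes, here’s a rough sketch using the open-source openai-whisper package (the file name and model size are placeholders); `task="translate"` makes Whisper output English regardless of the spoken language.

    ```python
    # pip install openai-whisper
    import whisper

    model = whisper.load_model("base")  # smaller models need less compute/electricity
    result = model.transcribe("interview.mp3", task="translate")  # transcribe + translate to English
    print(result["text"])
    ```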

    In the context of this article, we’re talking about DLSS, where Nvidia has trained a few different ML models: one for upscaling, one for optical flow (predicting where pixels/objects will move next), and one for frame generation (predicting what the in-between frames will look like to boost your FPS).
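
    To be clear, DLSS itself is proprietary and runs on the GPU’s tensor cores, so the snippet below is only a toy stand-in for the upscaling idea: render far fewer pixels internally, then blow the frame back up to the display resolution. DLSS does the “blow up” step with a trained network fed the low-res frame, motion vectors, and previous frames; here it’s just a dumb nearest-neighbour repeat.

    ```python
    import numpy as np

    # Pretend this is a frame rendered internally at 1080p (a quarter of 4K's pixels).
    low_res = np.random.rand(1080, 1920, 3).astype(np.float32)  # H x W x RGB

    # Nearest-neighbour upscale to 4K: each rendered pixel is simply repeated 2x2.
    # DLSS instead reconstructs the missing detail with its trained model.
    upscaled = np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)
    print(low_res.shape, "->", upscaled.shape)  # (1080, 1920, 3) -> (2160, 3840, 3)
    ```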

    This can potentially save energy because it puts less of a load on the GPU: most of the rendering happens at a lower resolution before the frame is upscaled at the end. But honestly, I haven’t seen anyone compare the energy-use differences on this yet… and either way, you’re already using a lot of electricity just by gaming.
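
    As a back-of-the-envelope illustration of “most of the rendering happens at a lower resolution” (pixel counts only, not a power measurement): at 4K output, DLSS Quality renders internally at 1440p and DLSS Performance at 1080p.

    ```python
    # Pixels shaded per frame before the upscaler runs, relative to native 4K.
    resolutions = {
        "4K native":                 3840 * 2160,
        "1440p (DLSS Quality @ 4K)": 2560 * 1440,
        "1080p (DLSS Performance)":  1920 * 1080,
    }
    native = resolutions["4K native"]
    for name, pixels in resolutions.items():
        print(f"{name}: {pixels:,} px ({pixels / native:.0%} of native)")
    # -> prints 100%, 44%, and 25% of native, respectively
    ```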