Opera is testing letting you download LLMs for local use, a first for a major browser
    • T156@lemmy.world · 7 months ago

      Not exactly. Most integrated chips have a small pool of dedicated VRAM, plus a bit more that they share with system memory, though it’s generally only a portion, not all of it. It’s only Apple’s unified memory, and maybe some other mobile chips, that have the CPU and GPU share the entire memory pool, for better or worse, as far as I’m aware.

      But it is worth noting that if the model doesn’t fit in VRAM and has to spill into system RAM, the rule of thumb is to have twice the model’s size in RAM. So if you have a 16 GB model and a GPU with only 4 GB of VRAM, and need to offload the rest to the system, you don’t need 16 GB of RAM, you need 32 GB.
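
      As a back-of-the-envelope illustration of that rule of thumb, here’s a minimal Python sketch. The `estimate_memory` helper and the 2× factor are just assumptions taken from the comment above, not anything Opera or any GPU vendor actually specifies:

      ```python
      # Rough estimate of memory needs when a local LLM doesn't fit in VRAM.
      # The 2x factor is the rule of thumb from the comment above, covering
      # the weights plus loading/inference overhead; it is an assumption,
      # not a documented requirement.

      def estimate_memory(model_size_gb: float, vram_gb: float) -> dict:
          """Estimate system RAM needed if a model spills out of VRAM."""
          if model_size_gb <= vram_gb:
              # Whole model fits on the GPU; system RAM isn't the bottleneck.
              return {"fits_in_vram": True, "ram_needed_gb": 0.0}
          # Model spills into system RAM: budget roughly twice its size.
          return {"fits_in_vram": False, "ram_needed_gb": 2 * model_size_gb}

      if __name__ == "__main__":
          # The example from the comment: a 16 GB model on a 4 GB GPU.
          print(estimate_memory(model_size_gb=16, vram_gb=4))
          # -> {'fits_in_vram': False, 'ram_needed_gb': 32}
      ```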