LLMs are surprisingly great at compressing images and audio, DeepMind researchers find
  • falkerie71@sh.itjust.works · 1 year ago

    I don’t know how this would apply to decompression models in actuality, but in general, deep learning is VRAM-intensive mainly during training. That’s because training processes large batches of data at once for better generalization, and the activations and gradients for every example in the batch have to sit in VRAM at the same time.
    But once the model is trained, the end user only feeds in inputs one at a time, so VRAM is usually much less of an issue. There are also lightweight models designed to run on lower-end hardware. (Rough sketch of the batch-size effect below.)
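
    To make the training-vs-inference memory gap concrete, here is a minimal PyTorch sketch. The model, layer sizes, and batch size are made up purely for illustration (it assumes a CUDA GPU); it just compares peak VRAM for one training step on a large batch against one single-input inference pass:

    ```python
    # Minimal sketch (assumes PyTorch and a CUDA GPU): compare peak VRAM for
    # a training step on a large batch vs. a single-input inference pass.
    import torch
    import torch.nn as nn

    device = "cuda"
    model = nn.Sequential(  # toy model, stands in for a real network
        nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, 10),
    ).to(device)

    def peak_vram_mib(fn):
        # reset the peak counter, run the step, report the high-water mark
        torch.cuda.reset_peak_memory_stats()
        fn()
        return torch.cuda.max_memory_allocated() / 2**20

    def train_step():
        # batch of 256 inputs: activations for the whole batch, plus
        # gradients for every parameter, all live in VRAM at once
        x = torch.randn(256, 4096, device=device)
        loss = model(x).sum()
        loss.backward()
        model.zero_grad(set_to_none=True)

    def infer_step():
        # single input, no gradients: only one set of activations needed
        with torch.no_grad():
            x = torch.randn(1, 4096, device=device)
            model(x)

    print(f"training step:  {peak_vram_mib(train_step):.0f} MiB")
    print(f"inference step: {peak_vram_mib(infer_step):.0f} MiB")
    ```

    Real training usually costs even more than this sketch shows, since optimizers like Adam keep extra per-parameter state in VRAM on top of the gradients.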