LLMs are surprisingly great at compressing images and audio, DeepMind researchers find
  • andruid@lemmy.ml · 1 year ago
    Training tends to be more compute-intensive, while inference is more likely to be able to run on a smaller hardware footprint.

    The neater idea would be a standard model, or set of models, so that a single ~30 GB install could serve roughly 80% of target cases; games and video seem like good candidates for this.
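
    The shared-model idea works because compression only needs the sender and receiver to agree on the same probability model: the better the model predicts the data, the fewer bits an entropy coder spends. A minimal sketch of that relationship, using a toy add-one-smoothed byte distribution as a stand-in for a learned model's predictions (all names here are illustrative, not from the paper):

    ```python
    import math
    from collections import Counter

    def code_length_bits(data: bytes, probs: dict) -> float:
        """Ideal entropy-coded length in bits under a symbol model:
        -sum(log2 p(symbol)) over the data (Shannon code length)."""
        return sum(-math.log2(probs[b]) for b in data)

    def laplace_model(data: bytes) -> dict:
        # Add-one smoothed byte distribution; a real system would use
        # the predictions of a shared pretrained model instead.
        counts = Counter(data)
        total = len(data) + 256
        return {b: (counts.get(b, 0) + 1) / total for b in range(256)}

    data = b"abracadabra abracadabra abracadabra"
    model = laplace_model(data)
    bits = code_length_bits(data, model)
    print(f"{len(data) * 8} raw bits -> {bits:.1f} bits under the model")
    ```

    The point for a "ship one standard model" scheme: the model itself is paid for once per platform, and every file afterward only costs its code length under that model.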