An Analysis of DeepMind's 'Language Modeling Is Compression' Paper
  • abhi9u@lemmy.world (OP) · 1 year ago

    Do you mean the number of tokens in the LLM’s tokenizer, or the dictionary size of the compression algorithm?

    The vocabulary size of the pretrained models is not mentioned anywhere in the paper, although they did run an experiment measuring compression performance with tokenizers of different vocabulary sizes.

    If you meant the dictionary size of the compression algorithm: there is no dictionary, because they used arithmetic coding to do the compression, and arithmetic coding doesn’t rely on one.
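
    To illustrate what I mean by “no dictionary”: arithmetic coding just keeps narrowing an interval using the model’s predicted probabilities, one symbol at a time. Here’s a rough, self-contained sketch in Python with a made-up static distribution standing in for the LLM’s per-token predictions (in the paper the model’s probabilities would change at every step; the symbols and numbers below are purely illustrative, not from the paper):

    ```python
    from fractions import Fraction

    # Toy "model": a fixed next-symbol distribution over {'a', 'b', 'c'}.
    # In the paper's setting this would be the LLM's next-token probabilities,
    # recomputed at every step; a static table keeps this example simple.
    MODEL = {'a': Fraction(7, 10), 'b': Fraction(2, 10), 'c': Fraction(1, 10)}

    def cumulative(dist):
        """Return (low, high) cumulative-probability bounds per symbol."""
        bounds, acc = {}, Fraction(0)
        for sym, p in dist.items():
            bounds[sym] = (acc, acc + p)
            acc += p
        return bounds

    def encode(symbols):
        """Shrink the interval [low, high) once per symbol; any number
        inside the final interval identifies the whole sequence."""
        low, high = Fraction(0), Fraction(1)
        bounds = cumulative(MODEL)
        for s in symbols:
            lo, hi = bounds[s]
            width = high - low
            low, high = low + width * lo, low + width * hi
        return (low + high) / 2  # one rational number encodes the sequence

    def decode(code, length):
        """Invert the process: find which symbol's sub-interval contains the code."""
        low, high = Fraction(0), Fraction(1)
        bounds = cumulative(MODEL)
        out = []
        for _ in range(length):
            width = high - low
            for s, (lo, hi) in bounds.items():
                if low + width * lo <= code < low + width * hi:
                    out.append(s)
                    low, high = low + width * lo, low + width * hi
                    break
        return ''.join(out)

    msg = 'aabac'
    code = encode(msg)
    assert decode(code, len(msg)) == msg
    ```

    Real implementations emit bits incrementally with finite-precision integer arithmetic rather than exact fractions, but the point stands: the coder only ever needs the model’s probabilities for the next symbol, never a dictionary of previously seen strings.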