The art critic
  • 0laura@lemmy.world · 4 months ago

    I’m talking about LoRA, not LoRa. I’m a fan of both, though. I’ve been considering getting a Lilygo T-Echo to run Meshtastic for a while. Maybe build a solar-powered RC plane and put a Meshtastic repeater in it; that seems like a cool project.

    https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)

    Low-rank adaptation (LoRA) is an adapter-based technique for efficiently fine-tuning models. The basic idea is to design a low-rank matrix that is then added to the original weight matrix.[13] An adapter, in this context, is a collection of low-rank matrices which, when added to a base model, produces a fine-tuned model. It achieves performance approaching that of full-model fine-tuning at a fraction of the storage cost: a language model with billions of parameters may be LoRA fine-tuned with only several million trainable parameters.
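
    Concretely, the update that description refers to is W′ = W + BA, where the base weight W stays frozen and only the small factors B (d×r) and A (r×k) are trained. Here’s a minimal PyTorch sketch of the idea; the class name `LoRALinear`, the rank `r=8`, and the `alpha` scaling are illustrative assumptions on my part, not something from the quoted article:

    ```python
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen base layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # freeze the original weights
            # Low-rank factors: A maps down to rank r, B maps back up.
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: adapter starts as a no-op
            self.scale = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Base output plus the low-rank correction; only A and B receive gradients.
            return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

    # A 4096x4096 layer has ~16.8M weights; a rank-8 adapter trains only
    # 2 * 4096 * 8 = 65,536 of them (~0.4%).
    layer = LoRALinear(nn.Linear(4096, 4096))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(trainable)  # 65536
    ```

    That parameter count is also why a LoRA “adapter” is cheap to ship and swap: you only store the two small factors per adapted layer, not a full copy of the model.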