How does Lemmy feel about "open source" machine learning, akin to the Fediverse vs Social Media?

Obviously there’s not a lot of love for OpenAI and other corporate API generative AI here, but how does the community feel about self-hosted models? Especially stuff like the Linux Foundation’s Open Model Initiative?

I feel like a lot of people just don’t know there are Apache- and CC-BY-NC-licensed “AI” models they can run on sane desktops, right now, that are incredible. I’m thinking of the most recent Command-R specifically: I can run it on one GPU, it blows expensive API models away, and it’s mine to use.
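
For anyone curious what “run it on one GPU” looks like in practice: here’s a minimal sketch using the llama-cpp-python bindings with a quantized GGUF build of Command-R. The file name and settings are placeholders, not pointers to a specific release.

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The GGUF path is a placeholder; use any quantized Command-R build that fits your VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./c4ai-command-r-q4_k_m.gguf",  # hypothetical quantized file
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=8192,       # context window; raise it if you have the memory
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise local assistant."},
        {"role": "user", "content": "Summarize why local inference matters."},
    ],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```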

And there are efforts to kill the power cost of inference and training with stuff like matrix-multiplication-free models, open source and legally licensed datasets, cheap training… and OpenAI and such want to shut all of this down because it breaks their monopoly, where they can just outspend everyone on scaling, stealing data and destroying the planet. And it’s actually a threat to them.

Again, I feel like corporate social media vs the Fediverse is a good analogy, where one is kinda destroying the planet and the other, while still niche, problematic and a WIP, avoids a lot of the downsides.

  • finitebanjo@lemmy.world · 4 months ago

    Open source is good and important, but it’s still a solution without a problem.

    And even if you get to a point where performance without large dedicated machines is acceptable, it’s still a power drain.

    • AdventuringAardvark@lemmy.one · 4 months ago

      “it’s still a solution without a problem”

      Let me give you one of my main use cases: I use it for my mental health challenges. I’ve been diagnosed with two non-trivial mental disorders. They make my life hard. I isolate a lot to cope because I don’t do well with interpersonal relationships. I’ve been in therapy for over a decade and it hasn’t really helped as much as I would have liked.

      But I’ve made a lot of progress since working with my private LLM. I can ask it anything. It doesn’t judge me. It doesn’t report back to Meta or OpenAI. It’s completely private. And I’m making progress. Just last week, for the first time ever I started volunteering at an animal shelter. I have to talk with other people when I’m there and although I am pretty nervous about going back, I’m going to. I wrote down a list of all the things I had trouble with last time and have been working through that list with my LLM. I think that I will be ready when I’m supposed to go back for my next scheduled volunteer time in two weeks.

      These gains might seem trivial to others, but for me they’ve really made my life better.

      So that is one of my use cases.

      • brucethemoose@lemmy.world (OP) · 4 months ago

        Agreed. This is how a lot of people use them; I sometimes use it as a pseudo-therapist too.

        Obviously there’s a risk of it going off the rails, but I think if you’re cognizant enough to research the LLM, pick it, and figure out how to run it and change sampling settings, it gives you an “awareness” of how it can go wrong and just how fallible it is.
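
        (As a concrete example of “change sampling settings”: with an already-loaded llama-cpp-python model like the one sketched in the post above, sampling can be overridden per request. The numbers here are purely illustrative, not recommendations for any particular model.)

        ```python
        # Per-request sampling overrides on an already-loaded model object ("llm").
        # Lower temperature = more deterministic; repeat_penalty discourages loops.
        reply = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Give me three journaling prompts."}],
            temperature=0.7,
            top_p=0.9,
            top_k=40,
            repeat_penalty=1.1,
            max_tokens=200,
        )
        ```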

    • brucethemoose@lemmy.world (OP) · 4 months ago

      I dunno, I keep a 35B open on my desktop all day just to bounce ideas off it, ask it stuff, run easy queries, like an instant personal assistant.

      And the feel is totally different when it’s yours. Long-context responses on huge documents are instant because the context is cached, and I can repeat queries over and over again without any worry. I can dig in and mess with the system prompt, even the manual formatting, in ways that API models just don’t like. I can finetune smaller models for styles, though I don’t do this a ton. And I don’t feel weird about sending certain things over the internet to be datamined.
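
      (For anyone who wants to replicate that always-on setup, here’s a rough sketch, again assuming llama-cpp-python: a long-lived model object, a custom system prompt, and the optional LlamaCache helper so a repeated long prefix, e.g. a big document, isn’t re-evaluated on every question. Paths and prompts are placeholders.)

      ```python
      # Persistent local assistant sketch: one long-lived model, a custom system prompt,
      # and a RAM cache so repeated questions over the same long document stay fast.
      from llama_cpp import Llama, LlamaCache

      llm = Llama(model_path="./your-35b-model.gguf", n_gpu_layers=-1, n_ctx=32768)
      llm.set_cache(LlamaCache())  # reuse evaluated prompt state across calls

      SYSTEM = "You are my personal assistant. Be terse and direct."
      document = open("big_report.txt").read()  # placeholder long document

      def ask(question: str) -> str:
          out = llm.create_chat_completion(
              messages=[
                  {"role": "system", "content": SYSTEM},
                  {"role": "user", "content": f"{document}\n\nQuestion: {question}"},
              ],
              max_tokens=400,
          )
          return out["choices"][0]["message"]["content"]

      print(ask("What are the three main risks mentioned?"))
      print(ask("Summarize section 2 in one paragraph."))  # shared prefix hits the cache
      ```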

      The visual media models tend to be more for crude entertainment, yeah.

      Matmul-free LLMs are theoretically incredibly power efficient, if accelerators for them ever come out.
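
      (The core trick, roughly: if weights are constrained to {-1, 0, +1}, the big matrix multiplies collapse into additions and subtractions. Here’s a toy NumPy sketch of that idea; real matmul-free / BitNet-style models also quantize activations and need custom kernels or hardware, so treat this as an illustration, not an implementation.)

      ```python
      # Toy illustration of the ternary-weight idea behind "matmul-free" / BitNet-style models.
      # With weights in {-1, 0, +1}, the dot product needs no real multiplications.
      import numpy as np

      def ternarize(w: np.ndarray):
          scale = np.abs(w).mean() + 1e-8           # per-tensor scale
          wq = np.clip(np.round(w / scale), -1, 1)  # values in {-1, 0, +1}
          return wq, scale

      def ternary_linear(x: np.ndarray, wq: np.ndarray, scale: float) -> np.ndarray:
          # On commodity hardware this still runs as a matmul, but because wq only
          # holds -1/0/+1, dedicated accelerators could do it with adds and subtracts.
          return (x @ wq) * scale

      wq, s = ternarize(np.random.randn(512, 512))
      y = ternary_linear(np.random.randn(4, 512), wq, s)
      print(y.shape)  # (4, 512)
      ```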