Home Assistant 2024.6: Dipping our toes in the world of AI using LLMs
  • Scrubbles@poptalk.scrubbles.tech · 5 months ago

    Great, but it’s restrictive, only letting you use OpenAI and Google. I’m already hosting oobabooga text generation; let me use that.
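
    (For context: text-generation-webui, the oobabooga project, ships an OpenAI-compatible API extension, so an OpenAI client can in principle be pointed at it. A minimal sketch, assuming a default local install; the port and model name are placeholders:)

    ```python
    # Minimal sketch: pointing the OpenAI Python client at a local
    # text-generation-webui (oobabooga) instance instead of api.openai.com.
    # Assumes the webui was started with its OpenAI-compatible API enabled;
    # host, port, and model name are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://127.0.0.1:5000/v1",  # assumed default local endpoint
        api_key="not-needed-locally",         # a local server ignores the key
    )

    response = client.chat.completions.create(
        model="local-model",  # placeholder; the webui serves whatever is loaded
        messages=[{"role": "user", "content": "Turn off the kitchen lights."}],
    )
    print(response.choices[0].message.content)
    ```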

    • Zikeji@programming.dev · 5 months ago

      I believe that’s because those two APIs support function calling; open-source support is still coming along.
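
      (To illustrate what function calling means here: the client sends machine-readable tool schemas along with the chat, and the model may answer with a structured call instead of prose. A minimal sketch against the OpenAI chat completions API; the toggle_light tool is a made-up schema, not a real Home Assistant definition:)

      ```python
      # Minimal sketch of function calling with the OpenAI chat completions API.
      # "toggle_light" is an invented example schema.
      from openai import OpenAI

      client = OpenAI()

      tools = [{
          "type": "function",
          "function": {
              "name": "toggle_light",
              "description": "Turn a light on or off.",
              "parameters": {
                  "type": "object",
                  "properties": {
                      "entity_id": {"type": "string"},
                      "state": {"type": "string", "enum": ["on", "off"]},
                  },
                  "required": ["entity_id", "state"],
              },
          },
      }]

      response = client.chat.completions.create(
          model="gpt-4o",
          messages=[{"role": "user", "content": "Turn off the kitchen light."}],
          tools=tools,
      )

      # Instead of free-form text, the model can return a structured tool call
      # that the caller parses and executes.
      for call in response.choices[0].message.tool_calls or []:
          print(call.function.name, call.function.arguments)
      ```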

      • wagesj45@kbin.run · 5 months ago

        Mistral Instruct v0.3 added function calling, but I don’t know whether its implementation method is the same or compatible. Also, it was only released recently. Hopefully we’ll get there soon. :)
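
        (For reference, a rough sketch of Mistral v0.3’s tool-calling prompt format, reconstructed from memory of its v3 tokenizer docs; exact token spelling and whitespace may differ. Tools are injected as special-token-delimited JSON inside the prompt rather than as a separate API field, which is why compatibility with OpenAI-style function calling isn’t automatic:)

        ```python
        # Rough sketch of Mistral Instruct v0.3's tool-calling prompt format;
        # token spelling and whitespace may not be exact. The tool schema is
        # embedded in the prompt itself between special tokens.
        prompt = (
            '[AVAILABLE_TOOLS] [{"type": "function", "function": {'
            '"name": "toggle_light", "description": "Turn a light on or off.", '
            '"parameters": {"type": "object", "properties": {'
            '"entity_id": {"type": "string"}, '
            '"state": {"type": "string", "enum": ["on", "off"]}}}}}]'
            '[/AVAILABLE_TOOLS]'
            '[INST] Turn off the kitchen light. [/INST]'
        )

        # The model then replies with something like:
        #   [TOOL_CALLS] [{"name": "toggle_light",
        #                  "arguments": {"entity_id": "light.kitchen", "state": "off"}}]
        # which the caller has to detect and parse itself.
        ```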

        • Zikeji@programming.dev · 5 months ago

          I saw a few others, but the ones I looked at were basically instruct layers where you’d need to add your own parser. I didn’t find anything (in my 3 minutes of searching) that offers an OpenAI-compatible chat completions endpoint, which is probably the main blocker.
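
          (A sketch of what “add your own parser” amounts to: scanning raw completion text for an embedded tool call. The [TOOL_CALLS] marker follows the Mistral-style convention above; other instruct layers use their own delimiters:)

          ```python
          # Illustrative only: the kind of hand-rolled parser needed when a
          # model emits tool calls as text rather than via an OpenAI-style
          # tool_calls field.
          import json
          import re

          def parse_tool_calls(completion: str):
              """Extract a JSON list of tool calls from raw output, if any."""
              match = re.search(r"\[TOOL_CALLS\]\s*(\[.*\])", completion, re.DOTALL)
              if not match:
                  return None  # plain chat response, no tool call
              try:
                  return json.loads(match.group(1))
              except json.JSONDecodeError:
                  return None  # malformed JSON; caller must handle it

          calls = parse_tool_calls(
              '[TOOL_CALLS] [{"name": "toggle_light", '
              '"arguments": {"entity_id": "light.kitchen", "state": "off"}}]'
          )
          print(calls)
          ```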

          • wagesj45@kbin.run · 5 months ago

            Looking at the documentation, it seems to rely on Mistral’s Python tooling. I’m fairly dumb, so I don’t know whether the tool suggestion from Mistral comes from some kind of separate neural net, or is a special response that you have to parse (or that their client parses for you?).
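
            (A sketch based on the mistral-common examples published alongside the v0.3 model; exact import paths may have shifted between releases. It suggests the tooling is client-side: it serializes the tool schema into the special-token prompt shown earlier, so the tool suggestion comes from the same model rather than a separate network:)

            ```python
            # Sketch of Mistral's client-side tooling (mistral-common); import
            # paths follow the v0.3 model card and may have changed since.
            from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
            from mistral_common.protocol.instruct.messages import UserMessage
            from mistral_common.protocol.instruct.request import ChatCompletionRequest
            from mistral_common.protocol.instruct.tool_calls import Function, Tool

            tokenizer = MistralTokenizer.v3()

            request = ChatCompletionRequest(
                tools=[Tool(function=Function(
                    name="toggle_light",
                    description="Turn a light on or off.",
                    parameters={
                        "type": "object",
                        "properties": {
                            "entity_id": {"type": "string"},
                            "state": {"type": "string", "enum": ["on", "off"]},
                        },
                    },
                ))],
                messages=[UserMessage(content="Turn off the kitchen light.")],
            )

            # Encodes messages + tool schema into the special-token prompt; the
            # same model generates any [TOOL_CALLS] response.
            tokens = tokenizer.encode_chat_completion(request).tokens
            ```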

  • bushvin@lemmy.world · 5 months ago

    Oh cool, implementing mediocre algorithms. What could possibly go wrong?

    • warmaster@lemmy.world · 5 months ago (edited)

      From the release announcement: “Local LLMs have been supported via the Ollama integration since Home Assistant 2024.4. Ollama and the major open source LLM models are not tuned for tool calling, so this has to be built from scratch and was not done in time for this release. We’re collaborating with NVIDIA to get this working; they showed a prototype last week.”

      Are all Ollama-supported algos mediocre? Which ones would be better?
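
      (For context, a minimal sketch of the kind of request the Ollama integration makes against a local Ollama server; the port is Ollama’s default and the model name is a placeholder. At the time of this release the chat endpoint returned plain text with no structured tool-call field, which is why tool calling had to be built on top:)

      ```python
      # Minimal sketch of a local Ollama chat request (default port 11434;
      # model name is a placeholder).
      import json
      import urllib.request

      payload = json.dumps({
          "model": "llama3",
          "messages": [{"role": "user", "content": "Turn off the kitchen light."}],
          "stream": False,
      }).encode()

      req = urllib.request.Request(
          "http://localhost:11434/api/chat",
          data=payload,
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.load(resp)["message"]["content"])
      ```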