• phoneymouse@lemmy.world
    · edited · 1 year ago

    That would be the goal. The tricky part is matching intents that align with some API integration to whatever psychobabble the LLM spits out.

    In other words, the LLM is just predicting the next word, so how do you know when to take an action like turning on the lights, ordering a pizza, setting a timer, etc.? The way that was done with Alexa needs to be adapted to fit the way LLMs work.

    • Deceptichum@kbin.social
      · 1 year ago

      Eh, just ask the LLM to format requests in a way that can be parsed into a function call.

      It’s pretty trivial to get an LLM to do that.
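      A minimal sketch of that idea (the prompt, handler names, and JSON shape are all made up for illustration): tell the model to emit a JSON action object when the user wants something done, then parse the reply and dispatch to a handler, falling back to plain chat when it doesn’t parse.

      ```python
      import json

      # Hypothetical action handlers -- stand-ins for real integrations.
      def turn_on_lights(room):
          return f"lights on in {room}"

      def set_timer(minutes):
          return f"timer set for {minutes} min"

      HANDLERS = {"turn_on_lights": turn_on_lights, "set_timer": set_timer}

      SYSTEM_PROMPT = (
          "If the user asks for an action, reply ONLY with JSON like "
          '{"action": "<name>", "args": {...}}. Otherwise reply in plain text.'
      )

      def dispatch(llm_reply: str) -> str:
          """Parse the model's reply as an action if possible; otherwise pass it through."""
          try:
              msg = json.loads(llm_reply)
              handler = HANDLERS[msg["action"]]
          except (json.JSONDecodeError, KeyError, TypeError):
              return llm_reply  # not an action -- just a chat response
          return handler(**msg["args"])
      ```

      So `dispatch('{"action": "set_timer", "args": {"minutes": 10}}')` runs the timer handler, while `dispatch("sure, sounds good")` just returns the text.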

      • PupBiru@kbin.social
        · 1 year ago

        in fact it’s literally the basis for the “tools” functionality in the new openai/chatgpt stuff!

        that “browse the web”, “execute code”, etc. is all the LLM formatting its output in a specific way
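        For reference, a tool declaration in the OpenAI chat completions API looks roughly like this (schema from memory, so check the current docs; the `turn_on_lights` function is a made-up example). You describe the function in JSON Schema, and the model replies with a `tool_calls` entry naming the function and giving its arguments as a JSON string, which your own code then executes:

        ```python
        # One entry in the "tools" list passed to a chat completion request.
        tools = [{
            "type": "function",
            "function": {
                "name": "turn_on_lights",
                "description": "Turn on the lights in a given room.",
                "parameters": {  # JSON Schema for the arguments
                    "type": "object",
                    "properties": {"room": {"type": "string"}},
                    "required": ["room"],
                },
            },
        }]
        ```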

    • floofloof@lemmy.ca
      · edited · 1 year ago

      Microsoft seems to be attempting this with the new Copilot in Windows. You can ask it to open applications, etc., and also chat with it. But it is still pretty clunky on the assistant side (e.g. I asked it to open my power settings, and after a bit of to and fro it managed to open the Settings app, after which I had to find the power settings myself). And they’re planning to charge for it, starting at an outrageous $30 per month. I just don’t see that it’s worth that to the average user.