• sith (OP) · edited 2 days ago

    It’s for sure not impossible. But my guess is that it’s because you get to know the new model, and your behavior and expectations change. It’s a known phenomenon, and I do believe the developers/companies when they say they didn’t change anything. It’s also quite easy to verify/test this hypothesis with a locally hosted LLM, as sketched below. There are probably a few papers covering this already.
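
    For the curious, here is a minimal sketch of that test (assuming a local Ollama server on its default port; the model tag, prompts and log file name are just placeholders): re-run the same prompts with deterministic settings every few days and compare the output hashes. If the hashes stay identical while your impression of quality drifts, the weights didn’t change.

    ```python
    # Re-run fixed prompts against a pinned local model (via Ollama's
    # /api/generate endpoint) and log output hashes over time.
    import hashlib
    import json
    from datetime import date

    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed default Ollama endpoint
    MODEL = "mistral"  # placeholder model tag
    PROMPTS = [        # fixed test prompts; extend with your own
        "Explain the difference between a process and a thread.",
        "Write a Python function that reverses a linked list.",
    ]

    def generate(prompt: str) -> str:
        payload = {
            "model": MODEL,
            "prompt": prompt,
            "stream": False,
            # deterministic sampling: identical weights should give identical output
            "options": {"temperature": 0, "seed": 42},
        }
        resp = requests.post(OLLAMA_URL, json=payload, timeout=300)
        resp.raise_for_status()
        return resp.json()["response"]

    if __name__ == "__main__":
        hashes = {p: hashlib.sha256(generate(p).encode()).hexdigest() for p in PROMPTS}
        # Append today's run to a log; diff entries from different days.
        with open("model_check.jsonl", "a") as f:
            f.write(json.dumps({"date": date.today().isoformat(), "hashes": hashes}) + "\n")
        print(json.dumps(hashes, indent=2))
    ```

    With temperature 0 and a fixed seed, the same weights on the same build should reproduce the same text, so any drift in the log points at a real change rather than at your expectations.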

    Though it does happen that one is downgraded to a smaller model when using the free versions of OpenAI, Anthropic and others. But in my experience this information is always shown explicitly in the UI. Still, it’s probably quite easy to miss.

    Also, I almost exclusively use the free version of Mistral Large (Le Chat) and I’ve never experienced regression. But Mistral also never downgrades; it just becomes very slow.