According to the analytics firm’s report, worldwide desktop and mobile web traffic to ChatGPT dropped by 9.7% from May to June, and by 10.3% in the US alone. Users are also spending less time on the site overall: the amount of time visitors spent on chat.openai.com was down 8.5%.

The decline, according to David F. Carr, senior insights manager at Similarweb, indicates that interest in ChatGPT is waning and that the novelty of AI chat has worn off. “Chatbots will have to prove their worth, rather than taking it for granted, from here on out,” Carr wrote in the report.

Personally, I’ve noticed a sharp decline in my usage. What felt like a massive shift in technology a few months ago now feels mostly like a novelty. For my work, there just isn’t much ChatGPT can help me with that I can’t do better myself and with less frustration. I can’t trust it for factual information or research. The written material it generates is so generic and formal, and so often missing the nuances I need, that I either end up rewriting it or spend more time instructing ChatGPT on the changes I need than it would have taken to just write it myself in the first place. It’s not great at questions involving logic or any kind of grey area. It’s sometimes useful for brainstorming, but that’s about it. ChatGPT has just naturally fallen out of my workflow. That’s my experience, anyway.

  • exohuman@kbin.social · 1 year ago

    I see this becoming more of an advanced “auto-complete”. It really shouldn’t be authoring anything, but should instead work alongside software to make suggestions on how to improve human-generated work.

    Also, for software development it is a minefield. They train the AIs on code from GitHub and other projects and then suggest it back to users in violation of the licenses that code was released under.

    • Rhaedas@kbin.social · 1 year ago

      That’s exactly what an LLM is: an advanced autocomplete that uses a huge training database to determine the probabilities. It never was anything more, even with the unexpected capabilities that emerged from further development. Models that are fine-tuned for a narrower domain (stepping them back toward a narrow-field LLM) do a lot better than a general-purpose one.
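
      To make the “advanced autocomplete” point concrete, here’s a toy next-word predictor in Python. It just counts which word follows which in a tiny made-up corpus and greedily picks the most probable continuation; an actual LLM replaces the count table with a neural network over tokens, but the generate-the-next-token-from-probabilities loop is the same idea. (Everything below, including the corpus, is invented for the illustration.)

      from collections import Counter, defaultdict

      # Tiny made-up corpus for the illustration.
      corpus = (
          "the model predicts the next word "
          "the model learns the next word from data "
          "the next word is chosen by probability"
      ).split()

      # next_counts["the"] ends up as Counter({"next": 3, "model": 2})
      next_counts = defaultdict(Counter)
      for current, following in zip(corpus, corpus[1:]):
          next_counts[current][following] += 1

      def autocomplete(prompt_word, length=6):
          """Greedily extend prompt_word with the most probable next word."""
          out = [prompt_word]
          for _ in range(length):
              candidates = next_counts.get(out[-1])
              if not candidates:
                  break
              out.append(candidates.most_common(1)[0][0])
          return " ".join(out)

      print(autocomplete("the"))  # e.g. "the next word the next word the"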