They are referencing this paper: "LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset", from September 30.
The paper itself provides some insight into how people use LLMs and how the different use cases are distributed.
The researchers looked at conversations with 25 LLMs; the data was collected from 210K unique IP addresses in the wild via their Vicuna demo and the Chatbot Arena website.
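If anyone wants to poke at the data themselves, here's a minimal sketch, assuming the dataset is the one published on Hugging Face as lmsys/lmsys-chat-1m, that you've accepted its gated-access terms, and that the records carry a model name plus a list of role/content turns as the paper describes (field names here are my assumption, not confirmed from the repo):

```python
# Minimal sketch: load the conversations and peek at one record.
# Assumes: Hugging Face `datasets` installed, gated access granted,
# and field names "model" / "conversation" (role, content) as assumed above.
from datasets import load_dataset

ds = load_dataset("lmsys/lmsys-chat-1m", split="train")

example = ds[0]
print(example["model"])  # which of the 25 LLMs this conversation used
for turn in example["conversation"]:
    # print each turn, truncated, to get a feel for real-world usage
    print(turn["role"], ":", turn["content"][:120])
```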
Hey man,
My experiences with it are kinda double-edged. On the one hand, explaining your situation and feelings to ChatGPT is kind of cathartic, since you're sitting down, writing, and thinking them through. GPT will usually analyse a bit and offer some "options" ranging from totally realistic to wildly imagined. You can then pick the points you want to analyze further and go from there, sometimes with surprising results.
On the other hand, I've ended many such conversations on GPT's note that it's not a mental health specialist and can offer no more aid or insight.
Yeah, I don't really like the way ChatGPT talks to me. It a) lectures me too much, and b) has quite a distinct tone of voice and choice of words that always borders on sounding pretentious, which I don't really like. But I have all the Llamas available and I like them better.
Thx for the insight!