Maybe I’m missing something, but has anyone actually justified this sort of “reasoning” by LLMs? Like, is there actually anything meaningfully different going on? Because it doesn’t seem distinguishable from asking a regular LLM to generate 20 paragraphs of ai fanfic pretending to reason about the original question, and the final result seems about as useful.
Sounds like all it would take is one company doing it right, and they’d clean up. Except somehow, with all the billions being poured into it, every product with ai sprinkled on it is worse than the non-ai-sprinkled alternatives.
Now, maybe this is finally the sign that everyone will accept that The Market is completely fucking stupid and useless, and that literally every company involved in ai is holding it wrong.
Or, and I know it’s a bit of a stretch here, consider the possibility that ai just isn’t very useful except for fooling humans. Maybe you can fool people into paying for it, but it’s a lot harder to fool them into thinking it makes stuff better.