The argument that current LLMs will lead to AGI has always been that they would spontaneously develop independent reasoning through some unknown emergent property appearing as they scale. It hasn't happened, and there's no sign that it will.

That's a dilemma for the big AI companies. They are burning through billions of dollars every month and will need hundreds of billions more to keep scaling - but for what in return?

Current LLMs can still do a lot. They've provided Level 4 self-driving, and seem to be leading to general-purpose robots capable of much useful work. But the headwinds for the global economy look ominous: tit-for-tat protectionist trade wars, inflation, and a global oil shock from a war with Iran all loom on the horizon for 2025.

If the current AI players are about to get wrecked, I doubt it's the end for AI development. Perhaps development will shift to the areas that can actually make money, like Level 4 vehicles and robotics.

  • Greg Clarke@lemmy.ca · 2 months ago

    I'm not defending Sam Altman or the AI hype. A framework that uses an LLM isn't itself an LLM and doesn't have the same limitations. So the accurate media coverage that LLMs may have reached a plateau doesn't mean we won't see continued performance gains from frameworks built around LLMs. OpenAI's o1 is an example: it isn't an LLM, it's a framework that compensates for some of an LLM's deficiencies with other techniques. That's why it doesn't give you an immediately streamed response when you use it; it's not just an LLM.
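
    As a rough illustration of the distinction, here is a minimal sketch of a framework that layers extra steps on top of a bare model call. The `call_llm` helper is hypothetical (o1's internals aren't public), so treat this as a conceptual example rather than how OpenAI actually builds it:

    ```python
    # Conceptual sketch of a "framework around an LLM" - not OpenAI's actual o1 design.

    def call_llm(prompt: str) -> str:
        """Hypothetical single LLM call; wire this to any real chat-completion API."""
        raise NotImplementedError("plug in a real model client here")

    def framework_answer(question: str, n_drafts: int = 3) -> str:
        """Generate several hidden reasoning drafts, then synthesize a final answer.

        The drafts are never shown to the user, which is why a system like this
        can't stream its response token-by-token the way a bare LLM can.
        """
        drafts = [
            call_llm(f"Reason step by step about: {question}\nDraft {i + 1}:")
            for i in range(n_drafts)
        ]
        joined = "\n---\n".join(drafts)
        return call_llm(
            f"Question: {question}\n"
            f"Candidate reasoning:\n{joined}\n"
            "Pick the most consistent reasoning and state a final answer."
        )
    ```

    The point is that a plateau in the underlying model doesn't cap the whole system: everything above `call_llm` can keep improving independently.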