Moore’s law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years.

Is there anything similar for the sophistication of AI, or AGI in particular?
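For concreteness, here is a minimal sketch of the doubling arithmetic behind Moore's law, assuming a clean two-year doubling period (real transistor counts only roughly track it):

```python
def transistors(n0: float, years: float, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years`, starting from n0 and doubling
    every `doubling_period` years (the Moore's law observation)."""
    return n0 * 2 ** (years / doubling_period)

# Example: starting from 1 billion transistors, ten years of two-year doublings
# give 2^5 = 32x, i.e. roughly 32 billion.
print(transistors(1e9, 10))  # 3.2e10
```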

  • TacoEvent · 11 months ago

    What drastically better results are you thinking of?

    • AggressivelyPassive@feddit.de · 11 months ago

      Actual understanding of the prompts, for example? LLMs are just text generators; they have no concept of what's behind the words.

      Thing is, you seem to be completely uncreative, or rather you deny the designers and developers any creativity, if you just assume "now we're done". Would you have thought the same about Siri ten years ago? "Well, it understands that I'm planning a meeting, so AI is done."

      • TacoEvent · 11 months ago

        I see your point. Rereading the OP, it looks like I jumped to a conclusion about LLMs and not AI in general.

        My takeaway still stands for LLMs. These models have gotten huge with little net gain from each increase in size. But a Moore's law equivalent could apply to context sizes, and those still have a long way to go.
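As a rough illustration of what a Moore's-law-style trend for context sizes would mean, here is a hypothetical back-of-the-envelope sketch; the starting and target window sizes are illustrative assumptions, not measurements:

```python
import math

def doublings_needed(current_tokens: int, target_tokens: int) -> int:
    """Number of doublings required to grow a context window from
    current_tokens to at least target_tokens."""
    return math.ceil(math.log2(target_tokens / current_tokens))

# Illustrative example: growing an 8K-token window to 1M tokens
# takes 7 doublings, since 8192 * 2^7 = 1,048,576.
print(doublings_needed(8_192, 1_048_576))  # 7
```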