• snooggums@midwest.social

It would require the “AI” to understand what was being asked instead of mishmashing what has been fed into it. Since plain, solid-color backgrounds are probably barely represented in its training data, and it isn’t intelligent, it includes the things it has associated with the color white.

    ChatGPT is also not intelligent and is designed to respond, so of course it has trouble not responding.

    • webghost0101@sopuli.xyz

      This is part of my main issue with these models, and I believe the key is mathematics.

      I bet that the moment these models can generate perfect mathematical geometry, they will also be able to understand “a background and nothing else.”

  • mindbleach@sh.itjust.works

    Well, yeah. It wasn’t trained on many solid-color 300-byte PNGs. MS Paint bucket fill won’t let you do Jackson Pollock dribbling, either, and it’s terrible at landscapes.
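    For scale, a toy sketch (assuming Pillow is installed; the exact byte count depends on dimensions and encoder settings, but a uniform image compresses to almost nothing):

    ```python
    # Rough illustration of how trivially small a solid-color PNG is.
    # Requires Pillow; the printed size will vary with image dimensions
    # and encoder settings, but stays tiny because the data is uniform.
    import io
    from PIL import Image

    img = Image.new("RGB", (512, 512), color="white")
    buf = io.BytesIO()
    img.save(buf, format="PNG", optimize=True)
    print(f"512x512 solid white PNG: {buf.tell()} bytes")
    ```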

    I was going to say ChatGPT is incapable of inaction, since it just guesses the most-likely next token, and something has to be most likely… but it does know when to return control for another prompt, so it could theoretically do that.
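
    A toy sketch of that last point, with a made-up distribution standing in for a real model: “stop and hand control back” is just another token it can rank most likely.

    ```python
    # Toy greedy-decoding loop; the probabilities are invented for illustration.
    def fake_next_token_probs(context):
        # Hypothetical stand-in for a real model's output distribution.
        if context.endswith("Goodbye."):
            return {"<eos>": 0.9, "Anyway,": 0.1}
        return {"word": 0.6, "another": 0.3, "<eos>": 0.1}

    def generate(prompt, max_tokens=20):
        text = prompt
        for _ in range(max_tokens):
            probs = fake_next_token_probs(text)
            token = max(probs, key=probs.get)   # greedy: pick the most likely token
            if token == "<eos>":                # the model "chooses" to return control
                break
            text += " " + token
        return text

    print(generate("Goodbye."))  # stops immediately: <eos> wins the ranking
    ```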