I’ve fucked around a bit with ChatGPT, and while, yeah, it frequently says wrong or weird stuff, it’s usually fairly subtle shit, the kind of thing I actually had to look up to verify it was wrong.

Now I’m seeing Google telling people to put glue on pizza. That’s a bit bigger than getting the name of George Washington’s wife wrong or Jay Leno’s birthday off by 3 years. Some of these answers seem almost cartoonish in their wrongness; I half suspect some engineer at Google is fucking with it to prove a point.

Is it just that Google’s AI sucks? I’ve seen other people say that AI is now getting info from other AIs and it’s leading to responses getting increasingly bizarre and wrong, so… idk.

  • Rojo27 [he/him]@hexbear.net
    · 1 month ago

    I think part of it is that Gemini is further behind ChatGPT and its training just isn’t all that great. Part of the training data Google uses for Gemini comes from Reddit, as part of a deal in which Reddit shares data with Google to “improve” Gemini. Not sure about it using output from other AI models to train, but that sounds about as dumb as using Reddit.