I’ve fucked around a bit with ChatGPT and while, yeah, it frequently says wrong or weird stuff, it’s usually fairly subtle shit, like crap I actually had to look up to verify it was wrong.

Now I’m seeing Google telling people to put glue on pizza. That’s a bit bigger than getting the name of George Washington’s wife wrong or putting Jay Leno’s birthday off by 3 years. Some of these answers are so cartoonishly wrong that I half suspect some engineer at Google is fucking with it to prove a point.

Is it just that Google’s AI sucks? I’ve seen other people say that AI is now getting info from other AIs, and that it’s leading to responses getting increasingly bizarre and wrong, so… idk.
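
For what it’s worth, here’s a toy sketch of that “AIs learning from other AIs” failure mode. It’s completely made up (a Gaussian stands in for a model, and crudely chopping the tails stands in for models under-representing rare cases), but it shows the mechanism people describe: refit on your own output a few times and the spread collapses.

    import numpy as np

    rng = np.random.default_rng(0)

    # "Real" data the first model trains on: wide spread, rare-but-valid cases included.
    data = rng.normal(loc=0.0, scale=10.0, size=2000)

    for gen in range(1, 11):
        # Fit a model to whatever data this generation sees.
        mu, sigma = data.mean(), data.std()
        # The next generation trains only on this model's output...
        samples = rng.normal(loc=mu, scale=sigma, size=2000)
        # ...minus the tails (crude stand-in for under-representing rare cases).
        data = samples[np.abs(samples - mu) < 1.5 * sigma]
        print(f"gen {gen:2d}: std = {sigma:6.2f}")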

  • amphibian [she/her]@hexbear.net · 1 month ago
    Google’s highest-end AI is actually ranked really highly though, second only to the three best GPT-4 models; it even outranks a couple of other GPT-4 variants. They weren’t really that late (I don’t think they were late at all, tbh), they just didn’t invest in a dedicated AI company the way Microsoft did with OpenAI; they built their own from the ground up. I think the main problem with Google’s search model is that it’s really bad at crawling the web for data and turning that into a coherent answer (toy sketch below). But their LLM alone, as a chatbot, is top notch.
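
    Roughly what I mean by that split, as a made-up sketch (nothing here is Google’s actual pipeline, the names are invented): the “turn the web into an answer” step can hand the model garbage even when the model itself is strong.

        def retrieve(query: str, index: dict[str, str]) -> list[str]:
            """Naive keyword retrieval: return any page sharing a word with the query."""
            words = set(query.lower().split())
            return [page for page in index.values()
                    if words & set(page.lower().split())]

        def answer(query: str, passages: list[str]) -> str:
            """Stand-in for the chatbot step: it's only as good as what it's handed."""
            if not passages:
                return "No sources found."
            # A real system would prompt the LLM with these passages;
            # if a joke post ranks first, the joke becomes the answer.
            return f"Per {len(passages)} source(s): {passages[0]}"

        # One joke post and one real answer, both matching the query.
        index = {
            "joke-post": "Add glue to the sauce so the cheese sticks to the pizza.",
            "real-faq": "Cheese slides off pizza when there is too much oil or sauce.",
        }
        print(answer("cheese pizza", retrieve("cheese pizza", index)))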

    You’re absolutely right about having something to please investors, because Microsoft was way more prepared to integrate Copilot into all their products. Google’s models are super powerful; they just weren’t prepared to package them into a consumer product as soon as they did.