I get the general techbro sphere doing this because they’re fucking idiots, but you are the maintainers of this technology. You know this is ultimately a language model built to synthesize information from pre-existing data sets. This is extremely fucking ghoulish.
I hate the promotion of the idea that we should be using generative language models as search engines or encyclopedias so much it’s unreal, literal misinfo generation machines :agony-shivering:
Here’s the problem. Google used to be good. Now it sucks. Microsoft tried to make Bing a rival by building a crawler and search index better than Google’s and failed. So now we’ve got Microsoft rebranding their Search Engine as a Chat AI and… It doesn’t suck!
I can look for things I had enormous trouble finding online previously and get the results I actually want. This is a good thing! I like good things! I’m probably going to use ChatGPT as a search engine going forward because it works better than Google or DuckDuckGo or whatever else. Do not try to stop me, because I like Search Engines That Return The Results I’m Actually Looking For.
I just recognize that this is a term-limited experience. Eventually, ChatGPT will get bogged down by adware and bloatware and shitty SEO. And then it’ll stop working properly.
At that point, I’ll have to either go back to Alta Vista or Yahoo or whatever else doesn’t suck, or find a Sexy New Thing to use as a search engine.
Sorry, haters. But that’s the Brave New World we live in.
That’s a bubble that will burst once it actually needs to be used for anything mission-critical.
I think they’ll end up making an enterprise version of it that draws from that organization’s information.
It still doesn’t understand context or logic, so the same problem remains.
this could be such useful technology for automating thinking labor but instead it is going to be used as a means to control “The Truth” because loudmouth tech bros are going to cite it as an official source.
I trust BazingaChat over the human because humans are more prone to errors! Human! :so-true:
(very similar arguments are already made by bazingas about why their Teslas are safe, actually)