As a brand new user of ChatGPT, I have never been so incredibly impressed and so rage-inducingly frustrated at exactly the same time by any new tech I've ever tried.
I was using it to help create some simple JavaScript functions and debug some code. It could come up with working functions almost immediately, taking a really interesting approach that I wouldn't have thought of. "Boom," I thought, "this is great! Let's keep going!" Then, immediately afterwards, it would provide absolute shit that couldn't and wouldn't work at all. On multiple occasions it couldn't remember the very code it had just output to me, and when asked to make a few minor changes it constantly spouted brand new, very different functions, usually omitting half the functionality it had before. But every time I typed the code directly into a message myself, it did much better.
It seems that with every question like that I had to start from scratch, or else it would work from clearly wrong (not even close, usually) newly generated code. For example, if I asked it to print exactly the same function it had printed a moment ago, it would excitedly proclaim, "Of course! Here's the exact same function!" and then print a completely different function.
I spent so much time carefully wording my question to get it to correctly help me debug something that I ended up finding the bug myself, simply because I was examining my code so carefully in order to ask a question that would get a relevant answer. So…I guess that's a win? Lol. Then, just for fun, I told ChatGPT that I had found and corrected the bug, and it took responsibility for the fix.
And yet, when it does get it right, it’s really quite impressive.
Just because humans can do something doesn't mean that's the only thing humans do.
Humans have many models of the world running in different modes in parallel, enabling us to make sense of things, rather than just processing language and coming up with plausible-sounding answers within the rules of a given language.
Our understanding of concepts is different from how we process language. This is demonstrated by perfectly intelligent people who can't communicate using spoken or written language (including sign language) but can do so using other methods, which shows that language processing isn't essential to our intelligence.
The way we learn information and integrate it into our neural networks is vastly different from how we train our artificial models using machine learning. Even if we consider just language processing, we definitely don't learn by reading the entirety of written human language many times over, regardless of which language it's written in, until we understand its underlying mechanics well enough to form plausible structures of word-chunk strings without necessarily understanding the concepts behind the word chunks.