This shows that AI isn’t an infallible machine that gets everything right — instead, we can think of it as a person who can think quickly but whose output needs to be double-checked every time. AI is certainly a useful tool in many situations, but we can’t let it do the thinking for us, at least for now.
No, it’s not “like a person who can think.” Unless you mean it’s like an ADHD person who got distracted halfway through the transcript and started working on a different project in the same file.
we can think of it as a person who can think quickly
No.
Do not do this. This way lies madness. It’s a text prediction system which is incredibly complex just to get it to barf out three sentences that sound about right. It is not “thinking” shit.
No, it’s not “like a person who can think.” Unless you mean it’s like an ADHD person who got distracted halfway through the transcript and started working on a different project in the same file.
Agreed.
It’s a more complicated version of that feature where Gmail offers suggested responses like “let me look into that” and “thank you.”
As an ADHD person (among other things), I don’t think I can be replaced with an LLM either.
“Because, unlike some other LLMs, I can speak with an English accent.”