- cross-posted to:
- [email protected]
- [email protected]
“* People ask LLMs to write code
* LLMs recommend imports that don’t actually exist
* Attackers work out what these imports’ names are, and create & upload them with malicious payloads
* People using LLM-written code then auto-add malware themselves”
What’s with the massive outflow of scaremongering AI articles now? This is a huge reach, like, even for an AI scare piece.
I tried their exact input, and it works fine in ChatGPT: it recommends a package called “arangojs”, which (link) seems to be the correct package, one that has a history of 1841 commits. This matches the usual pattern: an article claims “ChatGPT will X”, I try it myself, and “X” works perfectly fine with no issues I can see. That has held for literally every single article explaining how scary ChatGPT is because of “X”.
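For what it’s worth, the kind of check I did by hand (does the suggested package actually exist on the registry?) is trivial to automate. Here’s a minimal sketch, not from the article; “arangojs” is the package from my test above, and the second name is a made-up stand-in for a hallucinated import:

```typescript
// Minimal sketch: check whether each package an LLM suggested actually
// exists on the public npm registry before running `npm install`.
// Requires Node 18+ for the global fetch.

async function packageExists(name: string): Promise<boolean> {
  // The npm registry returns 404 for names that were never published.
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
  return res.ok;
}

async function main(): Promise<void> {
  const suggested = ["arangojs", "some-hallucinated-package"]; // second name is hypothetical
  for (const name of suggested) {
    const ok = await packageExists(name);
    console.log(`${name}: ${ok ? "exists on npm" : "NOT on the registry, do not install"}`);
  }
}

main();
```

Of course, mere existence is exactly what the quoted attack relies on: once an attacker registers the hallucinated name, a check like this passes. So you’d also want to look at the package’s age, repository, and download counts, which is basically what eyeballing those 1841 commits amounts to.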