Nemeski@lemm.ee to Technology@lemmy.worldEnglish · 4 months agoOpenAI’s latest model will block the ‘ignore all previous instructions’ loopholewww.theverge.comexternal-linkmessage-square101fedilinkarrow-up1445arrow-down17cross-posted to: [email protected]
arrow-up1438arrow-down1external-linkOpenAI’s latest model will block the ‘ignore all previous instructions’ loopholewww.theverge.comNemeski@lemm.ee to Technology@lemmy.worldEnglish · 4 months agomessage-square101fedilinkcross-posted to: [email protected]
minus-squareKeenFlame@feddit.nulinkfedilinkEnglisharrow-up11arrow-down1·4 months agoI just love that almost anyone can participate in hacking language models. It just shows how good natural language is as a programming language, and is a great way to explain how useful these things can be when used correctly
minus-squareT156@lemmy.worldlinkfedilinkEnglisharrow-up2·4 months agoIt won’t be long before you end up with language models that suggest ways to break other language models.
I just love that almost anyone can participate in hacking language models. It just shows how good natural language is as a programming language, and is a great way to explain how useful these things can be when used correctly
It won’t be long before you end up with language models that suggest ways to break other language models.