BlushedPotatoPlayers@sopuli.xyz to Technology@lemmy.world, English · 10 months ago
AI chatbots tend to choose violence and nuclear strikes in wargames (www.newscientist.com)
kibiz0r@midwest.social · 10 months ago
For AGI, sure, those kinds of game theory explanations are plausible. But an LLM (or any other kind of statistical model) isn't extracting concepts, forming propositions, and estimating values. It never gets beyond the realm of tokens.
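To make the "realm of tokens" point concrete, here is a minimal sketch of what autoregressive generation looks like at the level the comment is describing. The vocabulary, the `next_token_distribution` stand-in, and the seeding trick are all toy assumptions, not any real model's internals; the only thing the sketch is meant to show is that each step maps token IDs to a probability distribution over token IDs and samples one, with no explicit representation of concepts or values anywhere in the loop.

```python
import random

# Toy vocabulary (hypothetical, for illustration only).
VOCAB = ["the", "model", "chooses", "nuclear", "peace", "tokens", "."]

def next_token_distribution(context_ids):
    """Stand-in for a trained network: returns P(next token | context).

    A real LLM computes this with learned weights over a huge vocabulary;
    here we fake it with a deterministic pseudo-random distribution so the
    sketch stays self-contained and runnable.
    """
    rng = random.Random(hash(tuple(context_ids)))
    weights = [rng.random() for _ in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt_ids, steps=5):
    """Autoregressive sampling loop: the model only ever sees integer IDs."""
    ids = list(prompt_ids)
    for _ in range(steps):
        probs = next_token_distribution(ids)
        # Pick the next token ID according to the distribution and append it.
        ids.append(random.choices(range(len(VOCAB)), weights=probs)[0])
    return ids

if __name__ == "__main__":
    out = generate([0, 1])  # start from the tokens "the model"
    print(" ".join(VOCAB[i] for i in out))
```

Whether that loop, scaled up, amounts to "extracting concepts" is exactly the point under debate in the thread; the sketch only shows the token-in, token-out interface the comment is referring to.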