This is the best summary I could come up with:
In a recent study, posted on arXiv at the end of October, UC San Diego researchers Cameron Jones (a PhD student in cognitive science) and Benjamin Bergen (a professor in the university's Department of Cognitive Science) set up a website, turingtest.live, where they hosted a two-player implementation of the Turing test over the Internet. Their goal was to see how well GPT-4, when prompted in different ways, could convince people it was human.
Surprisingly, ELIZA, developed in the mid-1960s by computer scientist Joseph Weizenbaum at MIT, scored relatively well during the study, achieving a success rate of 27 percent.
In a post on X, Princeton computer science professor Arvind Narayanan wrote, "Important context about the 'ChatGPT doesn't pass the Turing test' paper."
The researchers note that ELIZA's responses tend to be conservative: while this generally leaves the impression of an uncooperative interlocutor, it prevents the system from providing explicit cues such as incorrect information or obscure knowledge.
The most successful strategies for interrogators included speaking in a language other than English, asking about the time or current events, and directly accusing the witness of being an AI model.
“Nevertheless,” they write, “we argue that the test has ongoing relevance as a framework to measure fluent social interaction and deception, and for understanding human strategies to adapt to these devices.”
The original article contains 904 words, the summary contains 204 words. Saved 77%. I’m a bot and I’m open source!