• sp3ctr4l · 2 months ago

    This has utility as a reverse Turing test.

    Seriously, get 1,000 people and do a double-blind experiment where they each get a remotely controlled phone and are told they’re part of a market research project for a new social network.

    Have one group always in the real new social media app, one group always in the fake new social media app, another group that starts in the real one and switches to the fake, and a fourth that does the reverse.

    Mandate that they all post something 3 times a day and view the app for 2 hours a day, run this for a month or two, then explain the actual experiment to them and ask which app version they thought they were in at which points in time.
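
    A rough sketch of how the four arms and the crossover might be laid out, assuming 1,000 participants split evenly and a 60-day run with the switch at the midpoint (the arm names, group sizes, and duration here are illustrative assumptions, not part of the design above):

    ```python
    import random
    from dataclasses import dataclass

    # Hypothetical four-arm layout; all names and numbers are placeholders.
    ARMS = ["real_only", "fake_only", "real_then_fake", "fake_then_real"]
    DAYS = 60                # "a month or two"
    SWITCH_DAY = DAYS // 2   # crossover arms switch halfway through
    POSTS_PER_DAY = 3
    VIEW_HOURS_PER_DAY = 2

    @dataclass
    class Participant:
        pid: int
        arm: str

        def app_on_day(self, day: int) -> str:
            """Which app version this participant sees on a given day."""
            if self.arm == "real_only":
                return "real"
            if self.arm == "fake_only":
                return "fake"
            first, second = self.arm.split("_then_")
            return first if day < SWITCH_DAY else second

    def assign(n: int = 1000, seed: int = 42) -> list[Participant]:
        """Randomly assign n participants evenly across the four arms."""
        rng = random.Random(seed)
        pids = list(range(n))
        rng.shuffle(pids)
        return [Participant(pid, ARMS[i % len(ARMS)]) for i, pid in enumerate(pids)]

    participants = assign()
    p = participants[0]
    print(p.arm, p.app_on_day(40), POSTS_PER_DAY, VIEW_HOURS_PER_DAY)
    ```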

    The ‘real’ app has real people and no bots; the ‘fake’ app has just the isolated user talking to bots. When you switch a group from real to fake, the AI that had been training on the real people now attempts to emulate them in solipsism mode to keep up the illusion. When switching from fake to real, match each of what were bots to its nearest real person per the LLM, but keep showing the now-real people under the already established bot names.
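
    The fake-to-real handoff could look roughly like this sketch: each established bot handle gets bound to the most similar real account, so the participant keeps seeing the same names. The embedding source and the cosine-similarity matching are assumed placeholders, not anything specified above:

    ```python
    import numpy as np

    def nearest_real_person(bot_embeddings: dict[str, np.ndarray],
                            real_embeddings: dict[str, np.ndarray]) -> dict[str, str]:
        """Map each established bot name to its most similar real account,
        so that account's posts are shown under the bot's existing handle.
        Embeddings are assumed to come from whatever model drove the bots;
        several bots may map to the same real person."""
        mapping = {}
        for bot_name, b in bot_embeddings.items():
            best, best_sim = None, -1.0
            for real_name, r in real_embeddings.items():
                sim = float(np.dot(b, r) / (np.linalg.norm(b) * np.linalg.norm(r)))
                if sim > best_sim:
                    best, best_sim = real_name, sim
            mapping[bot_name] = best
        return mapping
    ```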

    All of the versions are seeded with a variety of news sources to kick off discussions.

    Something like that. This isn’t a totally exhaustive experiment design, but it might show whether people can even tell when they are or are not talking to bots.

      • sp3ctr4l · 2 months ago

        I mean, this isn’t profitable at all.

        Do VCs typically fund scientific studies for anything other than free energy machines?