Big if true.

  • AngryPancake@sh.itjust.works
    4 days ago

The docs give an example of a trump character, which is weird (why would they do that?), but people make choices.

    But then I went to the GitHub project of Eliza and just searched the repo for "trump". Granted, this was only about 10 minutes of looking through the code with that keyword, but it definitely seems like everything is in place to build a Trump-like AI. There is also a note that the trump bot doesn't directly reply to questions but often diverts the conversation, so it was definitely tested.

    That's only the main branch; god knows what's in the other branches. I'm sure more info could be gathered if someone invested significant time. Regardless, the program advertises itself as a way to create bots for social media, so surely someone has used it.
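    Extending the search beyond the main branch is straightforward with `git grep`, which can search a pattern in any ref without checking it out. A minimal sketch, run from inside a clone of the Eliza repo (the exact repo URL and branch names are not given here; this just shows the technique):

    ```shell
    # Make sure all remote branches are known locally.
    git fetch --all --quiet

    # Run a case-insensitive grep for the keyword across every
    # remote-tracking branch, printing the branches that match.
    for branch in $(git branch -r --format='%(refname:short)'); do
      # -i: ignore case, -l: list matching files only; we just
      # care whether the branch matches at all (exit status 0).
      if git grep -i -l trump "$branch" >/dev/null 2>&1; then
        echo "matches in: $branch"
      fi
    done
    ```

    Each matching branch can then be inspected in detail with `git grep -in trump <branch>` to see the actual lines.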

    It's difficult to believe OP (I definitely want to), but the software is concerning regardless.

    Edit: I was a bit back and forth about the whole thing, but I feel like investigating the project definitely has some merit. I’m done for now, but I’d like to hear more opinions as well.

    • AngryPancake@sh.itjust.works
      4 days ago

      Actually, thinking more about it, it's quite sinister. The example characters they have available are: c3po, cosmosHelper, Dobby, eternalAi, sbf and trump.

      Of those, I (and I'm guessing most people) only know c3po, Dobby and trump, and trump is the only model of a real human. Now say you want to test the application (which you can from their website if you give them your ChatGPT API token): people are more likely to pick a character they know, so the pick will probably be one of those three. So just running the example with the trump model because you want to test it has already launched a chatbot with right-leaning rhetoric.