• spujb@lemmy.cafe
    eight months ago

    glad i was able to clarify.

    there’s little incentive for these companies to actually address these (brand security) issues

    this is where i disagree, and i think the facts back me up here. bing’s ai no longer draws kirby doing 9/11. OpenAI continues to crack down on ChatGPT saying slurs. it’s apparent to me that they have every incentive to address brand security, because brands are how they are raking in cash.

    • Signtist@lemm.ee
      eight months ago

      Oh, I’m sure they’ll patch anything that gets exposed, absolutely. But that’s just it - there are already several examples of people using AI to do non-brand-friendly stuff, and all the developers have to do is say “whoops, patched” and everyone’s fine. They have no need to go out of their way to pay people to catch these issues early; they can just wait until a PR issue happens, patch whatever caused it, and move on.

      • spujb@lemmy.cafe
        eight months ago

        the fact that you explained the problem doesn’t make it not a problem.