• Pengilly@lemm.ee
    6 months ago

    Fascinating! I often wondered if corporations used hyper-specific prompts in an effort to get an image as close as possible to the original, so they could blame the image generator for plagiarism (then sue them and naturally collect a crap ton of money), but the prompts used here seem very generic, yet the outputs bear an uncanny resemblance to these screencaps.

    There is some debate about the ethics of it, but supposedly there should be no legal problem with using copyrighted images in a training dataset so long as the outputs are transformative (i.e. don’t resemble any one image too closely). I wonder if there’s anything the developers can do to prevent it, or if it’s just something an image model will inevitably do.

    • mindbleach@sh.itjust.works
      6 months ago

      ‘Popular 90s yellow cartoon family’ is pretty goddamn specific. You know exactly what they’re describing.

      Labels are not a game of Taboo.