Here’s some context for the question. When image-generating AIs became available, I tried them out and found that the results were often uncanny or even straight-up horrible. I ended up seeing my fair share of twisted fingers, scary faces and mutated abominations of all kinds.

Some of those pictures made me think: since the AI really loves to create horror movie material, why not take advantage of that? I started asking it to make all sorts of nightmare monsters that could have escaped from movies like The Thing. Oh boy, did it work! I think I’ve found the ideal way to use an image-generating AI. Obviously it can do other stuff too, but in this particular category the results are spot on nearly every time. Making other types of images usually takes creative prompt crafting, editing, time and effort, whereas asking for a “mutated abomination from Hell” is pretty much guaranteed to work on the first try.

What about LLMs, though? Have you noticed that LLMs like ChatGPT tend to gravitate towards a specific style or genre? Is it long-winded business books full of unnecessary repetition, or pointless self-help books that struggle to squeeze even a single good idea into a hundred pages? Is it something even worse? What would be the ideal use for LLMs? What’s the sort of thing where they perform exceptionally well?

  • YourFavouriteNPC@feddit.de · 9 months ago

    Funnily enough: revealing plagiarism. Or even just judging the originality of a given text. Train it to assign an “originality value” between 0 (I’ve seen this exact wording before) and 1 (this whole text is new to me) to help universities, scientific journals or even just high schools judge the amount of novelty a proposed publication really provides.
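
    A rough sketch of what that scorer might look like as a plain prompt wrapper. This is only an illustration of the 0-to-1 idea, not an actual detector: the OpenAI Python client, the model name and the prompt wording are all placeholder assumptions.

    ```python
    # Hypothetical sketch: ask an LLM to rate how original a text is,
    # 0 = "I've seen this exact wording before", 1 = "this whole text is new to me".
    from openai import OpenAI

    client = OpenAI()  # assumes an API key is configured in the environment

    def originality_score(text: str) -> float:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": ("Rate the originality of the user's text on a scale "
                             "from 0 (exact wording seen before) to 1 (entirely new). "
                             "Reply with only the number.")},
                {"role": "user", "content": text},
            ],
        )
        # Naive parse; a real tool would validate the reply instead of trusting it.
        return float(resp.choices[0].message.content.strip())
    ```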

    • Hamartiogonic@sopuli.xyz (OP) · 9 months ago

      Recently I’ve seen some discussion surrounding this. Apparently this method also gives lots of false positives, but at least it should help teachers narrow down which papers may require further investigation.

      • Dr. Dabbles@lemmy.world · 9 months ago

        Recent studies show it doesn’t work at all, and it has likely caused irreparable harm to people whose academic work has been judged by the services out there. It has finally been admitted that it didn’t work and likely never will.

    • Mixel@feddit.de · 9 months ago

      Well yeah, that approach would work if you train it on one model, but that doesn’t mean it would work on another. For the normal user who just uses ChatGPT, though, it’s probably enough to detect it at least 80–90% of the time.