For me, the consequence of viruses is that I completely avoid anywhere that could give me one. No amount of antivirus protection gives me confidence in pages or downloads that feel unreliable.
I think there will be a similar arms race here, one that’s lost from the start. Especially once someone develops a tool like this for LLMs, it will instantly limit them to empty entertainment that costs OpenAI a lot of money to generate. As we already see with legal and academic uses, the model will hallucinate nonsensical facts until it’s as limited as earlier chatbots. They’ll eat their own tail trying to combat this poisoning, and the product will only degrade as a result.
Simply eat data scientists and the problem is solved
🍎:windows-cool:
https://nightshade.cs.uchicago.edu/downloads.html
https://glaze.cs.uchicago.edu/downloads.html
no :tux:
Excellent.
I generated an image the other day, just sort of playing around with stable diffusion to see what the hype was about (kinda neat but fucking difficult to make it do specific things) and one of my renderings had half a pencil in it. Like one of those photos an artist takes of their work to post online with the pencils on it to, idk, show that it’s not photoshop or something?
So at least one of the training images was not legally acquired, likely quite a few, and I thus support the poisoning.
The condemnation of this tool seems to boil down to ‘I am upset, hence, something illegal is being done’.
Ian Goodfellow being the creator of both Generative Adversarial Networks and Adversarial Examples is a career plot twist I don’t think even he was expecting, though the latter was an effort to keep models like the former from causing problems.
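For anyone curious what an "adversarial example" actually is: here’s a minimal sketch of Goodfellow’s fast gradient sign method (FGSM) idea, using a toy linear scorer in numpy rather than a real neural net. The variable names and the toy model are my own illustration, not anything from Nightshade or Glaze, but the core trick is the same: nudge every input dimension a tiny amount in the direction that most increases the loss.

```python
import numpy as np

# Toy linear "classifier": score = w . x, positive score => true class.
# FGSM perturbs the input by eps * sign(gradient of loss w.r.t. input),
# the worst-case step under an L-infinity budget of eps.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # fixed model weights (toy stand-in for a network)
x = rng.normal(size=16)   # a clean input

def score(v):
    return w @ v

# For a loss that decreases as the true-class score rises, the gradient of
# the loss w.r.t. x points along -w, so sign(grad) = sign(-w).
grad = -w
eps = 0.1
x_adv = x + eps * np.sign(grad)   # the adversarial example

# Each coordinate moved by only eps, yet the score drops by exactly
# eps * sum(|w|) -- the maximum possible damage for that budget.
print(score(x), score(x_adv))
```

The punchline is that the perturbation is tiny per pixel (bounded by eps) but its effect adds up across every dimension, which is why poisoned images can look unchanged to humans while reliably steering the model.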
Now we just need to integrate that natively into FOSS artist platforms like Pixelfed.