- cross-posted to:
- [email protected]
Upscayl lets you enlarge and enhance low-resolution images using advanced AI algorithms. Enlarge images without losing quality; it’s almost like magic! 🎩🪄 Upscayl is a cross-platform application built with a Linux-first philosophy. This means that Linux users receive pre-release builds earlier, but Upscayl itself is available on all major desktop operating systems :)
now look what this baby can do …
Huh? A sparkly fart in the sky? 🧐
Holy fuck, a Supernova! Cosmic spectacle of incomprehensible dimensions! 🤯
I wanted to embed the pics in the post via
![alt text](image url)
but it was not working for the Upscayl version, sorry.
The tool Upscayl uses, RealESRGAN, is superior to waifu2x, which is really showing its age at this point. waifu2x is fine if you want an upscale that retains roughly the same level of fidelity while increasing the resolution (i.e., it usually looks blurry). Newer upscalers try to clean up the lines and make the result look like an original.
Edit: Just for fun, here are some tests done by downscaling a picture and then re-upscaling it:
Original Picture
Original Picture (downscaled)
RealESRGAN - Normal Mode
RealESRGAN - Anime Mode
waifu2x
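If you want to reproduce this kind of round-trip test without any external tools, here is a toy, stdlib-only sketch of the downscale-then-re-upscale loop. The nearest-neighbour "upscaler" is just a hypothetical stand-in for a real model, and PSNR is one simple way to score how close the restored image is to the original:

```python
import math

def downscale_2x(img):
    # Box filter: average each 2x2 block of the grayscale image.
    h, w = len(img), len(img[0])
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4
             for x in range(w // 2)] for y in range(h // 2)]

def upscale_2x_nearest(img):
    # Naive upscale: repeat each pixel as a 2x2 block.
    # A real model (RealESRGAN, waifu2x, ...) would go here instead.
    out = []
    for row in img:
        wide = [p for p in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def psnr(a, b):
    # Peak signal-to-noise ratio in dB for 8-bit images; higher is closer.
    n = len(a) * len(a[0])
    mse = sum((pa - pb) ** 2
              for ra, rb in zip(a, b)
              for pa, pb in zip(ra, rb)) / n
    return float('inf') if mse == 0 else 10 * math.log10(255 ** 2 / mse)

# Toy 4x4 checkerboard "image" - fine detail that downscaling destroys.
orig = [[0, 255, 0, 255],
        [255, 0, 255, 0],
        [0, 255, 0, 255],
        [255, 0, 255, 0]]
restored = upscale_2x_nearest(downscale_2x(orig))
print(round(psnr(orig, restored), 2))  # → 6.02
```

Swapping `upscale_2x_nearest` for calls to the actual upscalers (and using real images) gives you a number to put next to the visual comparison above.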
There are also other models available to test beyond the stock RealESRGAN ones (I’m not sure whether Upscayl can use them, but they’re available for CLI use). I recommend using chaiNNer, e.g. you can use a setup like this
This is one using UltraMix - Balanced
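For trying custom models like UltraMix from the command line (outside chaiNNer), the ncnn build of Real-ESRGAN accepts a model directory and a model name. The flags below are from memory of `realesrgan-ncnn-vulkan`, and the model file name is a placeholder, so double-check against your build’s `-h` output:

```shell
# Stock model: -n selects one of the bundled networks
./realesrgan-ncnn-vulkan -i input.png -o output.png -n realesrgan-x4plus

# Custom model: -m points at a folder holding the .param/.bin pair,
# -n is the model's base file name (placeholder name here)
./realesrgan-ncnn-vulkan -i input.png -o output.png \
    -m ./models -n 4x-UltraMix_Balanced
```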
just saw your edits, that is some SUPER COOL STUFF! definitely a big improvement over waifu2x
waifu2x’s underlying technology is the main thing that’s old, I think. There are a lot of NCNN etc. models to use now, tuned for different kinds of detail. My main problem with waifu2x is that it gives you roughly the same fidelity as the original picture, i.e. it’s only slightly sharper than using a generic zoom function. Newer upscalers try to make the image more realistic, as if it had been the original in the first place. In a direct comparison with the original I think there’s a notable loss of detail with this approach (not sure whether that’s inherent or just down to how the model is tuned), but without the original to compare against, I think the images the newer upscalers generate look more pleasing and sharp.
nice!