This is done through ControlNet, an extension architecture for the open-source Stable Diffusion latent diffusion models. Basically, ControlNet takes a reference image, in this case an image of text, and steers the model's inference so the generated output conforms to that reference. While this has been possible for a while, the popularity of this meme took off with the development of a QR code ControlNet that allowed hidden QR codes to be embedded in an image. Since this ControlNet conditions on the black-and-white contrast of the reference, it can push any generated image to match the original pattern. And since latent diffusion is a denoising process, that high-contrast guide survives as low-frequency structure in the result, so applying a simple Gaussian blur (literally by squinting or moving the image away) is enough to reveal the outline of the embedded original.
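If you want to try it yourself, here's a rough sketch of the workflow using Hugging Face diffusers. The checkpoint names and parameter values are just illustrative assumptions; any SD 1.5 QR-code ControlNet checkpoint works the same way:

```python
# Rough sketch of the QR-code ControlNet workflow with Hugging Face diffusers.
# Checkpoint names below are assumptions, not the only option.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# High-contrast black-and-white reference: the text/QR code to hide.
control_image = load_image("reference_text.png").resize((512, 512))

controlnet = ControlNetModel.from_pretrained(
    "monster-labs/control_v1p_sd15_qrcode_monster",  # assumed checkpoint
    torch_dtype=torch.float16,
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "aerial view of a medieval city, detailed oil painting",
    image=control_image,
    # Strength of guidance toward the reference: higher makes the hidden
    # pattern more legible, lower makes it more subtle.
    controlnet_conditioning_scale=1.3,
    num_inference_steps=30,
).images[0]
image.save("hidden_text.png")
```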
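And the "squint test" is just a low-pass filter, so you can reproduce it in a couple of lines with Pillow:

```python
# Blur away the high-frequency detail; the low-frequency contrast pattern
# (i.e. the hidden text) is what remains visible.
from PIL import Image, ImageFilter

Image.open("hidden_text.png").filter(ImageFilter.GaussianBlur(radius=10)).show()
```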
This is the first real explanation I’ve heard, and now it makes sense (though some of it still went over my head), so thank you very much!