• The Facebook stuff is mostly old Stable Diffusion models or DALL-E, because they’re free and relatively easy to use. Midjourney and the newer Stable Diffusion models get hands right most of the time, and they have an inpainting feature so you can tell the computer to redo just that bit when they don’t.
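
  (For the curious: with the open-source models, that inpainting step looks roughly like the sketch below, using the diffusers library. The model ID, file names, and prompt are just placeholders; closed services like Midjourney hide all of this behind a button.)

  ```python
  # Rough sketch of the "redo that bit" inpainting step with Hugging Face diffusers.
  # Model ID, file names, and prompt are placeholders, not any particular service's pipeline.
  import torch
  from diffusers import StableDiffusionInpaintPipeline
  from PIL import Image

  pipe = StableDiffusionInpaintPipeline.from_pretrained(
      "runwayml/stable-diffusion-inpainting",
      torch_dtype=torch.float16,
  ).to("cuda")

  image = Image.open("render.png").convert("RGB")    # the original generation
  mask = Image.open("hand_mask.png").convert("RGB")  # white = the region to redo

  # Only the masked region is re-generated; the rest of the image is kept as-is.
  fixed = pipe(
      prompt="a detailed, anatomically correct hand",
      image=image,
      mask_image=mask,
  ).images[0]
  fixed.save("render_fixed.png")
  ```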

    • amemorablename@lemmygrad.ml
      4 months ago

      Yes and no. It’s not a solved problem so much as a worked-around one. Diffusion models struggle with parts that are especially small and would normally need precision to look right. Some newer setups do better here by increasing the resolution (so otherwise-small parts come out bigger) and/or by tuning the model so it’s stiffer in what it can do, which makes the worst renders less likely (sketch of the resolution trick below).
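
      The resolution trick is basically an upscale-then-refine pass: generate, upscale, then run an img2img pass at low strength so the model redraws detail at a size where it’s no longer tiny. A rough sketch with diffusers (the model ID, sizes, and strength are just illustrative):

      ```python
      # Sketch of an upscale-then-refine ("hires fix" style) pass using diffusers.
      # Model ID, sizes, and strength are illustrative, not any service's actual settings.
      import torch
      from diffusers import StableDiffusionImg2ImgPipeline
      from PIL import Image

      pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",
          torch_dtype=torch.float16,
      ).to("cuda")

      low_res = Image.open("render.png").convert("RGB")      # e.g. a 512x512 generation
      upscaled = low_res.resize((1024, 1024), Image.LANCZOS)

      # Low strength keeps the composition but lets the model redraw fine detail
      # at a scale where small parts (hands, eyes, text) are no longer tiny.
      refined = pipe(
          prompt="same prompt as the original render",
          image=upscaled,
          strength=0.3,
      ).images[0]
      refined.save("render_hires.png")
      ```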

      In other words, fine detail is still a problem for diffusion models. Hands are part of it some of the time, but not the whole of it: they were more like a symptom of the fine-detail problem. Making hands look better doesn’t fix that underlying problem (at least not entirely, and fixing it entirely may not even be possible within the diffusion architecture). So it’s more that they’ve treated the symptom.