• bitsplease@lemmy.ml · 11 months ago

      In using Stable Diffusion for a DnD-related project, I’ve found that it’s actually weirdly hard to get it to generate people (of either sex) who aren’t attractive. I wonder if it’s a bias in the training material, or a deliberate bias introduced into the models because most people want attractive people in their AI pics.

      • bionicjoey@lemmy.ca · 11 months ago

        It’s trained on professionally taken photos. Professional photographers tend to prefer taking photos of attractive subjects.

        • bitsplease@lemmy.ml · 11 months ago

          That’s true, but it’s not like ugly people never get photographed; ultimately, a professional photographer will shoot whoever pays them. That explanation accounts for part of the bias, I think, but not all of it.

          • ErwinLottemann@feddit.de · 11 months ago

            If I got pictures taken by a photographer, I wouldn’t allow them to be used as training data. I don’t even like looking in a mirror. Maybe that’s part of why there are fewer pictures of ugly people to train with.

          • biddy@feddit.nl · 11 months ago

            I would guess that ugly people are less likely to commission photos.

    • FaceDeer@kbin.social · 11 months ago

      They were created via a prompt, and that prompt probably included tags to make them more attractive. It’s standard practice to put tags like “ugly” and “deformed” into the negative prompt just to keep the hands and facial features from going wonky.

      There are no elderly women, no female toddlers, and so forth either. Presumably that’s just not what whoever generated this was going for. You can get those from many AI models if you want them.
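      The negative-prompt trick above is easy to sketch. Below is a minimal, hypothetical helper showing how positive and negative prompt strings are often assembled before being handed to a Stable Diffusion frontend (for example as the `prompt` and `negative_prompt` arguments of a Hugging Face diffusers pipeline); the `build_prompts` helper and the specific tag lists are illustrative assumptions, not any tool’s actual defaults.

```python
# Hypothetical sketch of Stable Diffusion prompt assembly.
# The quality/negative tag lists here are illustrative examples,
# not defaults taken from any particular tool.

def build_prompts(subject: str) -> tuple[str, str]:
    """Return (positive, negative) prompt strings for a subject."""
    quality_tags = "highly detailed, professional photo"
    # Negative tags steer the sampler *away* from these concepts:
    negative_tags = "ugly, deformed, bad anatomy, extra fingers, blurry"
    return f"{subject}, {quality_tags}", negative_tags

pos, neg = build_prompts("portrait of a middle-aged farmer")
print(pos)  # portrait of a middle-aged farmer, highly detailed, professional photo
print(neg)  # ugly, deformed, bad anatomy, extra fingers, blurry
```

      In diffusers this would typically be passed as `pipe(prompt=pos, negative_prompt=neg)`. Since the negative side actively pushes generations away from “ugly” and “deformed”, unattractive faces tend not to survive even when nobody asked for attractive ones.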

    • IHeartBadCode@kbin.social · 11 months ago

      Battleship coordinates: (B10). Also, (I4) looks a lot like my niece. I really think it depends on your definition of “average”, though. But as @fubo indicated, there are zero black people in this photo. There’s some vaguely Asian, roughly Middle Eastern, sort of South American, and whatever is going on in (M8), but there are distinctly zero black people pictured.