• toad31@lemmy.cif.su · 3 months ago (edited)

    I’ll never understand how a statistical model for “next most probable pixel” can arrive at shit like this.

    Probably because you have no experience writing or reading code for AI.

    Why would you think you could understand this? Dunning-Kruger effect?

    • pyre@lemmy.world · 3 months ago

      “I’ll never understand this”

      “Why would you think you could understand this?”

      Maybe you should work on understanding basic English.

    • scarabic@lemmy.world · 3 months ago

      I’ve read enough about how language models work to know they build a statistical graph of words from a huge corpus of training input. But no amount of training on pixels adds up to z-index errors like this.

      Also, wow, your attitude is hostile as hell. I feel sorry for you having to walk around like that. Try not to have a terrible day now, hm?
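
      For anyone curious, here is a minimal sketch of what a “statistical graph of words” means in the simplest possible form: a toy bigram model in Python that picks the next word based on how often it followed the previous one. The tiny corpus and the next_word helper are made up for illustration; real language models are neural networks over subword tokens, not literal lookup tables like this.

      # Toy "statistical graph of words": count which word follows which,
      # then sample continuations from those counts. Purely illustrative.
      import random
      from collections import defaultdict, Counter

      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count how often each word follows each other word.
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def next_word(word):
          """Sample the next word in proportion to how often it followed `word`."""
          counts = follows[word]
          if not counts:
              return None
          words, weights = zip(*counts.items())
          return random.choices(words, weights=weights)[0]

      # Generate a short continuation starting from "the".
      word, out = "the", ["the"]
      for _ in range(5):
          word = next_word(word)
          if word is None:
              break
          out.append(word)
      print(" ".join(out))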