Google’s DeepMind unit is unveiling today a new method it says can invisibly and permanently label images that have been generated by artificial intelligence.

  • beta_tester@lemmy.ml · 1 year ago

    Why it matters: It has become increasingly hard for people to distinguish images made by humans from those generated by AI programs. Google and other tech giants have pledged to develop technical means of doing so.

    You don’t need a watermark for good intentions, and a bad actor simply won’t add one. A watermark may even do harm, because the public will start to assume that any image without a watermark is real.

  • cybirdman@lemmy.ca · edited · 1 year ago

    TBF, I don’t think the purpose of this watermark is to prevent bad actors from passing AI images off as real. That would be a welcome side effect, but it’s not why Google wants this. Ultimately it’s meant to keep AI training data from being contaminated with other AI-generated content. Imagine a training set containing a million images, generated by earlier models, full of mangled fingers and crooked eyes; it would be hard to train a good model on that. Garbage in, garbage out.
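    The filtering idea above can be sketched in a few lines. `detect_watermark` is a hypothetical stand-in for a detector such as DeepMind’s (whose real API is not described here); the point is only the dataset-cleaning step, not how detection works.

    ```python
    # Hypothetical sketch: dropping AI-watermarked images from a training set.
    # detect_watermark is a placeholder; a real detector would decode an
    # invisible signal embedded in the pixels.

    def detect_watermark(image_bytes: bytes) -> bool:
        # Toy stand-in: pretend watermarked files start with a magic prefix.
        return image_bytes.startswith(b"WM")

    def filter_training_set(images: list[bytes]) -> list[bytes]:
        """Keep only images that carry no AI watermark."""
        return [img for img in images if not detect_watermark(img)]

    dataset = [b"WM\x00generated", b"\x89PNG real photo"]
    clean = filter_training_set(dataset)
    ```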

    • SkySyrup@sh.itjust.works · 1 year ago

      I’m not sure that’s the case. For instance, a lot of smaller local models leverage GPT-4 to generate synthetic training data, which drastically improves their output quality. The problem only arises when there is no QC on the model’s output. The same applies to Stable Diffusion.

  • djmarcone@lemm.ee · 1 year ago

    Spoiler: they will secretly give all humans in AI-generated art slightly messed-up hands.

    Mind blown!