- cross-posted to:
- [email protected]
Really good article I read today. I was already impressed by the first wave of image-generator hype, but I haven't kept up with them much since. Apparently they've gotten really good lately. I can't decide if I'm concerned or impressed. Do you think this will actually be used for anything other than memes and misinformation? I thought I'd share it and hear your opinions.
I find the author’s reasoning strained. They use flat Earthers as an example of the power of photographs to “prove” reality?
I question the central premise that photographs were ever the foundation of reality. Haven’t filmmakers been fooling us with photography for over a hundred years?
i think its more about the ease.
you used to need a team, then a qualified professional, now any moron can tell a machine to do it.
That all comes up in the article. The core idea the author is getting at is that fabricating situations is becoming easy in a new way: it has never before been a couple of clicks for the average user. Think less about political turmoil (propaganda has existed as long as there has been politics) and more about how your Karen aunt can add a worm to her Google review photo of spaghetti. Most people won't learn Photoshop; most people can click a few buttons.
I think it’s still important to consider the tomorrow we’re being thrust into even if we could do this on a smaller scale yesterday.
Yeah, clearly the author doesn’t know about Stalin
From the article you clearly didn’t read:
Photography has been used in the service of deception for as long as it has existed. (Consider Victorian spirit photos, the infamous Loch Ness monster photograph, or Stalin’s photographic purges of IRL-purged comrades.)
k
Did you read the article or comment on the title
Yeah, we finally start to get accountability from public officials via bodycam and now here comes technology that will make it trivial to skew the narrative
It’s still rather easy to identify AI-generated pictures, especially of people. There’s still a way to go until we get video that’s good enough that it’s difficult to tell it’s fake. It’s absolutely going to be a problem sooner or later, but I doubt we’re anywhere near that point.
Also, the one benefit this all comes with is plausible deniability when you’re accused of something, even if it really was you. Say you have nudes leak online, for example. You can just say they’re not real, and it would be really difficult to prove otherwise.
This doesn’t even begin to scratch the surface of what you can do with tools like ComfyUI and Stable Diffusion. It’s sad to see how all the hate around AI image generation prevents people from being educated about it, which in the end benefits bad-faith actors using these tools to create misinformation. If people weren’t afraid to learn how it works, they wouldn’t be fooled as easily by it. The days of eight-fingered hands are over.