It was ever thus:
A lie gets halfway around the world before the truth has a chance to get its pants on.
Winston Churchill
lol it’s “boots,” but I like pants better. Makes the truth seem so much cooler ‘cause it was fuuuuuuuuuckin
See, even quotes with errors in them get upvoted before someone can come along and correct them :)
Ah, so one of those clever case-in-point lemmy comments. Very clever. Your plan was masterful
Nah, even my retcon explanation is a lie. I copied the quote from some famous quote website and didn’t catch the error.
Especially if you are using the word “pants” the way that Churchill would have.
I don’t dare to ask why your truth has been naked before…
Because the truth fucks, homie.
And trust me, these generated images are getting scarily good.
I have to agree, I would not be able to spot a single one of them as fake. They look really convincingly authentic IMO.
Stalin famously ordered people he had killed erased from photos.
Imagine what current and future autocratic regimes will be able to achieve when they want to rewrite their histories.
Stalin famously ordered people he had killed erased from photos.
This checks out, here’s an article about it: https://www.history.com/news/josef-stalin-great-purge-photo-retouching
So why are you downvoted? Maybe because your view is too optimistic? And the problem isn’t only with autocratic regimes, but much more general.
How do we validate anything, when everything can be easily faked?
Probably just because some people really like Stalin, and have become convinced his accounts are the truthful ones and everyone else lies about him.
That’s a scary thought!! But all kinds of crazy exist, and I mean people have to be literally crazy to want to live under a regime like Stalin made.
So why are you downvoted?
lemmygrad dot ml
“Photoshopping” something bad has existed for a long time at this point. AI generated images don’t really change anything other than the entire photo being fake instead of just a small section.
I’d disagree. It takes, now, zero know-how to convincingly create a false image. And it takes zero work. So where one photo would take one person a decent amount of time to convincingly pull off, now one person can create 100 images or more in that time, each one a potential time bomb that will go off when it starts getting passed around as evidence of something. And there are uncountable numbers of bad actors on the internet trying to cause a ruckus. This just increased their chances of succeeding at least 100-fold, and opened the access to many, many others who might just do it accidentally, for a joke, or who always wanted to create waves but didn’t have the photoshop skills necessary.
It changes a lot. Good Photoshopping skills would not create the images as shown in the article.
Yeah some of these would be like 100 layer creations if someone was doing it themselves in photoshop – It would take a professional or near-professional level of skills.
The ease and speed with which AI photos can be created, at a quality most photoshoppers could only dream of, does very much change everything.
Wikipedia page on the dude: Nikolai Yezhov
Like 1984.
Honestly, it looks like the picture on the left is fake, like the guy was inserted into it. Just look at his outline, compared with the rest of the background.
(I’m no Stalin fan, just commenting on the picture itself.)
the cat is out of the bag. every nation and company is racing to invent the most advanced AI ever. and we are entering times when negative impact of AI outweighs the positive use of it.
I am really feeling uneasy about the uncertain times ahead of us.
I used to be excited about it, especially the image generation AI.
I believe that the internet has already lost a lot of authenticity in general. The amount of misinformation boomers and gen X lap up on their socials is unreal.
Having advanced image/video AI that would force people to call everything into question, to double check and to fact check sounded good. Except, people aren’t fact checking.
The past we know is a carefully crafted and curated story and not at all accurate as it is. It is valuable to learn and understand, but also to be skeptical. I don’t really think widespread forgery changes that. Historiography is a very important field.
Any serious historical research will have to verify the physical copies exist or existed in a documented way to be admitted as evidence. This is called chain of custody and is already required.
We’ll now need AIs to spot AI fakes. AI wins!
The problem is that it’s a constant war between fake generators and fake detection algorithms. Sort of a digital version of bacteria out-evolving antibiotics.
And for a reasonable price, the AI corporations will sell you the chance to survive in the world they created for you.
Check out Adobe’s Content Authentication Initiative. It won’t prevent those images but it will allow you to verify their source, which in this case should not authenticate.
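To make the provenance idea concrete: here is a minimal, hypothetical sketch of content signing and verification. This is NOT how the Content Authenticity Initiative actually works (the real system uses public-key certificates and embedded C2PA manifests); it just illustrates the core principle that a signature made over the original bytes stops authenticating the moment anyone alters the image. The key name and image bytes are made up for the example.

```python
import hmac
import hashlib

# Hypothetical device credential; real provenance systems use
# per-device certificates, not a shared secret like this.
SIGNING_KEY = b"camera-secret-key"

def sign(image_bytes: bytes) -> str:
    """Produce a signature over the image bytes at capture time."""
    return hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, signature: str) -> bool:
    """Check that the image bytes still match the original signature."""
    return hmac.compare_digest(sign(image_bytes), signature)

original = b"\x89PNG fake image bytes for illustration"
sig = sign(original)

print(verify(original, sig))            # True: untouched image authenticates
print(verify(original + b"edit", sig))  # False: any alteration breaks it
```

The point of the scheme is exactly what the comment above says: a wholly generated fake simply has no valid signature chain, so it “should not authenticate.”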
From the article…
The real danger lies in those images that are crafted with the explicit intention of deceiving people — the ones that are so convincingly realistic that they could easily pass for authentic historical photographs.
Fundamentally, at the meta level, the issue is this: are people allowed to deceive other people by using AI to do so?
Should all realistic AI generated things be labeled as such?
There’s no realistic way to enforce that. The answer is to go the other way. We used to have systems in place for accountability of information. We need to bring back institutions for journalism and historians to be trustworthy sources that cite their work and show their research.
There’s no realistic way to enforce that.
You can still mandate through laws that any AI generated product would have to have a label on it, identifying itself as such. We do the same thing today with other products that are manufactured and sold (recycling icons, etc).
As far as enforcement goes, the public themselves would ultimately (or in addition to) be the enforcers, as the recent British royal family photos scandal suggests.
But ultimately humanity has to start considering laws that affect the whole species, ones that don’t just stop at an individual country’s border.
Don’t get me started on the sham that is recycling icons 😂
I’m all for making regulation that would require media companies to disclose that something is fake if it could reasonably be taken as truth. But that doesn’t solve the problem of anyone with a computer pumping fake images onto the web. What you’re suggesting would require a world government that has chip-level access to anything with a CPU.
As for the public enforcing the truth; that’s what I’m suggesting. Assume anything you see online could be fake. Only trust trustworthy institutions that back up their media with verifiable facts.
What you’re suggesting would require a world government that has chip level access to anything with a CPU.
Well, not something that harsh, but I think we’re looking at losing some of the faux anonymity that we have (no more sock puppet accounts, etc.).
Most people haven’t thought far enough ahead on what this means, all of the ramifications, if we let AI run rampant on the human ‘public square’.
Instead of duplicating my other comment on this subject, I’ll just link to it here.
Physical products are not the same as digital products. Your suggestions are very unrealistic.
Problem with that is that for data, it’s much easier to lie and get away with it. If a bot throws up an unlabelled AI generated image, law enforcement agencies would have a much harder time tracking down who made it.
There could be hundreds, or even thousands, and the moment they pin one down, more will appear.
By comparison, physical products can only be made and enter the country so quickly. There are physical factories where they can be tracked down, and it’s prohibitively expensive to spin up a new product line every time the other one is shut down.
Hot take incoming…
If a bot throws up an unlabelled AI generated image, law enforcement agencies would have a much harder time tracking down who made it.
Well they would just start with the person who has the user account, or the site that the user account is associated with (we might end the days of being able to have sock puppet accounts). Or they get that information from the NSA (the government knows every one of your porn fetishes).
Honestly, I realize what I’m stating is not as easy to do as I’m saying it is, and making it actually work would be kind of ugly and not completely fair to all parties, but it is something that is actually doable, and needed.
We shouldn’t just throw up our hands on day one and say “fuck it, nothing can be done about it”, and then we all suffer in the pollution of the human conversational-sphere to the point that no one can converse with each other anymore because of all the garbage.
When we stop talking to each other, because we think everything is just AI generated, that’s a formula for destruction for the human race. We have to be able to talk to each other, and be confident that we’re actually talking to each other, and not a robot.
/getsoffsoapbox
Well for the majority of human existence we got by on talking to each other in person. So I think the collapse of humanity is a bit dramatic.
Now, as we’ve seen with torrenting, if any country doesn’t comply or enforce laws against how their citizens should interact with the internet you can just VPN through that country to do what you want.
Ok so
- Create the infrastructure for an entire world government.
- Force every country to join and fully enforce laws tying every person to their online accounts.
- Of course this will create a dangerous police-state like China’s government for many countries where speaking out against your government is dealt with harshly. So either abolish free speech or fix all corruption in all the countries in the world.
- Of course this level of control over the world will attract a lot of corruption itself, so build an unassailable global set of checks and balances for how this government should be run that literally everyone on earth can agree on.
Or
Proper journalism.
Well for the majority of human existence we got by on talking to each other in person. So I think the collapse of humanity is a bit dramatic.
We never had the ability to con each other so completely, and in such large numbers, as we do today with the Internet and specialized networks.
And more importantly, you always knew you were talking to another person, and not a conflict bot or an astroturfing bot, or a political party bot, etc. Now, you don’t, which is my point. We can’t solve problems if we don’t know we’re talking to a person versus a not person.
I wouldn’t be so quick to dismiss what I’m saying.
Proper journalism.
If the last couple of years proves anything, that’s not going to save us, not that alone.
You’re making an assumption that 100% of people are aware enough to consume the proper journalism and make the proper decisions.
Right now large swaths of people are being convinced the things that are not true through improper journalism.
AI is creating fake XY, and that is problems, problems, problems everywhere…
During the last decades, IT guys and scientists have always dreamed about using AI for good things. But now AI has become so much better at creating fake things than good things :-(
People created fake photoshop images long before AI…
It’s not really a new problem, people were doing it with their imaginations and stories long before AI came around. The tools of the digital age simply amplified the effect. Healthy skepticism is still the solution, that hasn’t changed.
It’ll never actually go away, though. Of all the possibile ways of looking at any given situation, the vast majority will always be inaccurate. Fiction simply outnumbers nonfiction. Wrong answers outnumber correct answers.
So, the adjustment has to be inside of us, and again, it’s always been necessary. This isn’t fundamentally new.
The new thing is the scope in which fake content is being created. In a very near future most internet content will be fake, including history. That is not something that has happened before in history.
The current AI situation is completely unprecedented in history.
I would disagree. I think if we go back even a few centuries, we find that virtually nobody had a firm grasp on historical fact, due to the printing press not being invented yet, alongside archeological techniques not existing.
I mean, maybe it has happened before in history, but someone changed it via AI and we just don’t know…
“statement headline” + “and here’s how you should think” = fuck right the unholy toe fungal hell off.
It’s an opinion piece, they start out with their claim and try to back it up, it’s not a news article, what is the problem?
Their opinion sucks.
Same energy as saying “slammed” or “blasted”.
Sure it’s an opinion piece, but it’s indicative of very low quality.
So AI really is a seminal paradigm-changing technology. For the worse.
Automatic spam generator.
For the worse.
Not necessarily.
But we’re going to have to deal with the basic issue of deceiving someone with AI, and if any AI generated thing should be labeled or not as such.
Basically, a legislative fix, and not just a free market free for all.
How do you enforce labelling when there will never be a way to reliably test if something was ai generated?
Basic is not a word that fits the situation.
How do you enforce labelling when there will never be a way to reliably test if something was ai generated?
If the label is not there, and then it’s determined that the image is AI generated, as it was with that British royal family picture the other day: crowd-sourced enforcement.
I am not understanding you. Or perhaps you’re not understanding me.
Firstly, the British royal family photo was not ai generated.
If you can’t find a way to test if something is ai generated, who decides what is or isn’t ai generated?
When I read the title I sarcastically thought “Oh no, why is AI deciding to create fake historical photos? Is this the first stage of the robot apocalypse?” I find the title mildly annoying because it puts the blame on the tool and ignores that people are using it to do bad things. I find a lot of discussions about AI do this. It is like people want to avoid admitting that how people are using and training the tool is the issue.
Isn’t the tool part of the issue? If you sell bomb-making parts to someone who then blows up a preschool with them, aren’t you in some way culpable for giving them the tool to do it? Even if you only intended it to be used in limestone quarries?
That really depends on whether the bomb making part is specific to bombs, and if their purchase of that item could be considered legitimately suspicious. Many over the counter products have the potential to be turned into bombs with enough time or effort.
If a murderer uses a hammer, do you think the hardware store they purchased the hammer from should be liable?
You can make crude chemical weapons by mixing bleach with other household items. Should the supermarket be liable for people who use their products in ways they never intended?
Exactly this, many times over.
Most tools with legitimate uses also have unethical uses.
Everything needed to make a bomb can be found at your local Walmart. Nobody blames the gas companies when something gets molotoved.
I would say the supplier is culpable if the tool supplied is made for the purpose of the harm intended or if the supplier is giving the tool to the person who does the harm with the explicit intent for that person to use it for that harm. For example, giving someone an AK-47 to shoot someone or a handgun/rifle with the intent that the user shoot someone with it. If the supplier gives someone a tool to use for one legit purpose but the user uses it for a harmful purpose instead, I don’t think you can blame the supplier for that. For example, giving someone a knife to cut food with, and then the user goes and stabs someone with it instead. That’s entirely on the user and nobody else.
So the potential to do harm should never be considered?
To clarify, instead of intent a better word may be knowledge. If the supplier knows that the user is going to use the tool for harm but gives the tool to the user anyway, then the supplier shares culpability. If the supplier does not (reasonably) know, either through invincible ignorance (the supplier could not reasonably know) or the user’s deception (lying to the supplier), then the supplier is not culpable.
Maybe if the tool’s singular purpose was for killing. I think guns might be a better metaphor there. Explosives have legitimate uses and if you took the proper precautions to vet your customers then it’d be hard to blame you if someone convincingly forged credentials, for example.
Compare to the “Cottingley Fairies” photographs of 1917.
I just listened to the Criminal podcast on that, recently. Fascinating cultural moment.
Can AI write car service manuals that are only slightly incorrect?
@[email protected] draw for me a fake historical photo.
Does that AI think that Indian people are monkeys or something? Because there is a photo there where it clearly made their face look more in line with a monkey’s.
NFTs for digital documents are the solution that comes to my mind for the upcoming massive chaos of AI generated digital material.
What a techbro take.
That’s only a solution if everyone adopts it. Which I doubt will happen on social media.