What's your reasoning on this? Because I don't see why liberal democracy would cease to exist... life would go on if we all know that all pictures can be fabricated. I think this is already the case without AI.
Apparently you missed the problem deepfakes posed...
If you cannot distinguish reality (well), and in fact it becomes possible that most things you see do not exist, then there is nothing to stop a bad actor from producing a fake version of events in which they are elected, control everything, etc.
So, democracy would cease to exist, because democracy relies ultimately on a choice - if you have no choice then you do not have democracy, only a dictatorship.
Photoshop has existed for years and humans have been manipulating photos for longer, what's the difference, really?
If I see a photo in the Guardian newspaper (or any other reputable news outfit) I'm going to presume it's real, and I expect journalists to verify that for me. If I see a random photo that doesn't look quite right on 4chan, I'm not going to immediately assume it's news.
Reach is no different: bots and humans are both able to post to social media. Cost is probably no different at the moment either, since AI isn't perfect; some human interaction is probably needed to make it believable, and because of that, scale is the same too. I think we're approaching all of those things, but it's probably still quite some time away until a machine can be trusted to manipulate the public on its own.
Right, but today we have that problem already. We know that a bad actor journalist can write a fake story. We therefore require sources. If deepfakes come along we will know videos can be fake, and so we will be skeptical, as we are today, and look to proper sources. We will easily come up with some way to validate sources via cryptography or org reputation (e.g. we might trust the NY Times not to just fabricate things).
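The "validate sources via cryptography" idea above is essentially content authentication: the publisher signs what it releases, and readers check the signature before trusting it. A minimal sketch of that idea, using a shared-secret HMAC as a stand-in (a real scheme would use public-key signatures such as Ed25519, so readers only need the outlet's public key; the key and article here are hypothetical):

```python
import hashlib
import hmac

# Hypothetical signing key standing in for a publisher's credential.
PUBLISHER_KEY = b"example-newsroom-signing-key"

def sign_article(body: bytes) -> str:
    """Publisher side: attach an authentication tag to the article."""
    return hmac.new(PUBLISHER_KEY, body, hashlib.sha256).hexdigest()

def verify_article(body: bytes, tag: str) -> bool:
    """Reader side: check the tag before trusting the content."""
    expected = hmac.new(PUBLISHER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

article = b"Reported and verified by our newsroom."
tag = sign_article(article)
print(verify_article(article, tag))         # True: untampered
print(verify_article(article + b"!", tag))  # False: content was altered
```

Note that this only proves who published something, not that it's true; it shifts the question from "is this video real?" to "do I trust this signer?", which is exactly the reputation argument in the comment above.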
This is already barely holding together with mostly human actors doing the astroturfing and creating bullshit "news organizations" expressly to spread propaganda. Automation is going to overwhelm a system that's already teetering.
Yeah but we will get the word out that none of that is trustworthy, then. There will be countermeasures and reactions to this just like previous things. It will certainly be effective to some degree - propaganda is effective for sure - but it won't just be, oh, there are deepfakes, everyone will now just unthinkingly accept them.
1) This effective backlash/education-campaign has not already happened despite there already being significant problems with this kind of thing, and most of it not being that hard to spot, even, and 2) I think the more likely effect is the destruction of shared trust in any set of news sources—we're already pretty damn close to this being the case, in fact. "It's all lies anyway" is a sentiment that favors dictators more than it does democracy.
> This effective backlash/education-campaign has not already happened despite there already being significant problems with this kind of thing
I think it could be claimed that we're clearly in the process of learning to distrust. People distrust media more now than ever [1], because they've demonstrated, again and again, that they can't be trusted. Trust in institutions is down to new lows [2], because they've also shown that they're untrustworthy and/or incompetent. Everyone is shouting "disinformation", "misinformation", and "fake news" constantly, because nobody trusts anything anymore, and rightfully so in many cases.
People already produce all kinds of fake news and doctored photos and false flags and all kinds of things. This has been going on since we developed language and photography I suspect.
People already have trouble telling propaganda from fact. That has been going on since forever.
At the end of the day I don't see this being a game changer. If anything, now video and photos are less evidence for/against something as the potential falseness becomes well known. Congressman X: "no, that wasn't me you saw leaving the hotel with the prostitute, my slimy opponent obviously is deep faking stuff".
And people will continue to believe what they want to believe, in spite of all evidence to the contrary, just like they do right now.
There seems to me a huge difference between a few organizations being able to produce and distribute a total of X amount of self-serving bullshit with some limited reach, and anyone with a bit of money being able to produce 100,000 × X amount of self-serving bullshit and deliver it to exactly the people most likely to respond to it the way they want, anywhere in the world (save, notably, China and North Korea and such), while making it very hard to tell who it's coming from.
An environment in which 90% of the information is adversarial is really bad. It's a severe problem and very challenging to navigate. An environment where 99.9999% of it is adversarial, and it's even harder than before to sort truth from fiction, functionally no longer has any flow of real information whatsoever.
Maybe liberal democracy is not the final outcome of human civilization. You like it and I like it (presumably we were both raised to believe this way) but perhaps it's not really true.
Just to question a base assumption here.
It seems to me that if all the things that are claimed to threaten liberal democracy actually do, liberal democracy might be much less robust and long-lived than previously believed.
Oh, absolutely. I've even come around to thinking that's likely. But one can hope.
[EDIT] One thing I no longer think has any realistic future is the open, semi-anonymous Internet. We're either losing it because despots take over and definitely won't permit that threat to remain unfettered, or we're losing it (in perhaps a gentler-touch way) because we have to, in order to prevent authoritarian takeover and vast civil strife. I don't think we're getting to keep that no matter what happens.
Yep, I think you might be right. It's ultimately too much of a risk to all sorts of powers to have open, unfettered real-time communication and mass dissemination.
Even the "good guys" will call emergency that will never end.
Oh well, it was nice while it lasted. An intellectual Cambrian explosion. And all that porn!
Take it a step further: Can you be arrested for having porn that would be illegal in your country if it was real, but instead it's a thousand generated images/videos? How blurred will those lines get?
Eh, you're asking the wrong question. Training sets aren't made of gold; it might be hard to make good ones, but faking a training set with a program like this is resource-intensive yet possible.
This is already possible today and we don't need AI generated stock photos to do it. A bad actor can already spin events to fit their narrative, suppress dissent and control their population. Dictators have been doing it for centuries and we're seeing it in real time in the form of Putin's Russia right now.
Sure, but being able to do the same thing at 100,000x the scale for the same price seems like a pretty big difference. Throw in the ability to target narrow constituencies with custom messages via modern ad networks, automation-assisted astroturfing, etc., and the whole thing looks like a powder keg to me.
Then everyone quickly realizes it's all garbage. The bigger danger is when people think garbage isn't possible, which has been the case since Stalin erased people from photos 98 years ago.