October 15, 2019
If deepfakes became popular and widely known, people would start distrusting video, and the overlords don’t want people to stop trusting what they see displayed on screens.
Now their move appears to be “but we’ll be able to detect the fakes and tell you what is real.”
Deepfake videos are becoming increasingly easy to make, and harder than ever to detect, experts warn.
As fake videos are used to spread misinformation quickly around the world, there are fears that more effort is going into developing deepfake-generating tools than into detection.
Eline Cheviot, EU tech policy analyst at the Centre for Data Innovation, warned that there is a growing imbalance between the technologies for producing deepfakes and those for detecting them, meaning there is a lack of tools needed to tackle the problem efficiently.
She also warned that humans are no longer able to spot the difference between deepfakes and the real thing, and so are unable to stop weaponised fake news from spreading.
They may soon propose some kind of license for posting videos online with the excuse of “fighting misinformation.”
“Debunking disinformation is increasingly difficult, and deepfakes cannot be detected by other algorithms easily yet,” she said.
“As they get better, it becomes harder to tell if something is real or not.
“You can do some statistical analysis, but it takes time, for instance, it may take 24 hours before one realises that video was a fake one, and in the meantime, the video could have gone viral.”
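To make the "statistical analysis" she mentions concrete, here is a deliberately crude, purely illustrative sketch. It is not how any real detector works — actual deepfake forensics use trained models — and the threshold, scores, and clips below are entirely made up. It just shows the general shape of flagging a clip whose frame-to-frame statistics look unnatural:

```python
# Toy illustration of a "statistical" check on a video clip.
# Assumption: we already have a per-frame difference score for each pair of
# consecutive frames (hypothetical numbers here). The idea sketched is that
# splicing or synthesis *might* introduce unnaturally jumpy transitions.

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    # Population variance of the scores.
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def looks_suspicious(frame_diffs, var_threshold=4.0):
    """Flag a clip whose frame-difference variance exceeds a (hypothetical)
    threshold. The threshold is invented for illustration, not calibrated."""
    return variance(frame_diffs) > var_threshold

# Made-up difference scores for two hypothetical clips.
steady_clip = [1.0, 1.1, 0.9, 1.0, 1.05]   # smooth, consistent motion
jumpy_clip  = [1.0, 6.5, 0.2, 7.1, 0.9]    # erratic jumps between frames

print(looks_suspicious(steady_clip))  # False
print(looks_suspicious(jumpy_clip))   # True
```

Even this trivial check hints at why detection is slow in practice: you need the whole clip, a calibrated baseline for "normal" footage, and time to compute it — while the video is already spreading.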
She added that simply bringing in laws banning or regulating deepfakes is unlikely to be enough, and that politicians need a better understanding of the technology.
Banning or regulating deepfakes would only make those with access to the technology even more powerful: people would assume that since the technology is banned or regulated, it isn't likely to be used, and they'd be far less likely to question what they see on a screen.
Tech entrepreneur Fernando Bruccoleri said tech platforms need to make it easier for people to work out what is real and what is fake.
“I do not think it will be as simple as it seems to pass legislation in the short term,” he said.
“Surely every platform will design tools to detect whether a video is fake or not, as a countermeasure.”
“Tech companies will tell you what is real.”
But Shamir Allibhai, CEO of the video verification company Amber, which specialises in detecting fakes, said it would be impossible to regulate the creation of deepfakes.
Instead, he said, platforms should work to tackle the distribution of such videos, in the same way that they already work to stop the spread of revenge porn.
And he also warned that deepfakes are here to stay, adding: “I think we are going to see significantly more of it in the run-up to the US presidential elections in 2020.”
It comes after a study revealed almost all deepfake videos are porn.
I can see a scenario where internet people making Greta Thunberg porn using deepfake technology results in worldwide chaos and society being pulled apart.
Porn is bad, yes, but that would be hilarious.
Just look at this retard’s face:
It may or may not be illegal to do that kind of thing until she officially turns 18, though.