AI experts have warned that the weaponisation of deepfakes “is only going to get worse” but the real concern is “denying real content”.
Speaking at the Edinburgh TV Festival’s presentation on AI and TV on Wednesday, experts spoke of how AI in TV has the potential for good, including the creation of new jobs, but warned that deepfakes are advancing at an increasing rate.
The event was held at the Edinburgh International Conference Centre and presented by Alex Connock, a senior fellow in management practice at the University of Oxford; Hannah Fry, mathematician, author and radio and TV presenter; and Muslim Alim, commissioning editor for the BBC.
The trio showed the audience a variety of deepfake examples, including a Martin Lewis video in which he appears to endorse a product created by Elon Musk.
They also played a less convincing but humorous video of Donald Trump and Joe Biden’s deepfakes debating, and a realistic video of Facebook founder Mark Zuckerberg speaking fluently in Hindi.
Mr Connock admitted he may not have spotted that the Martin Lewis video was fake, and warned of the danger of deepfakes becoming more advanced.
He said: “We’ve all probably read a lot about deepfakes in the last few years and the good news is they haven’t been that real thus far. So we haven’t yet seen too many examples in the wild that are really convincing.
“The bad news is they’re about to get really real. And I think as we go into the US 2024 election, that’s going to happen.”
He added: “And in fact, if you look at the academic research, it’s quite interesting, because what the recent papers show is that the more fake something is, the more credible it is.
“In other words, if you mix up real and fake, it’s quite easy to spot sometimes. But if it’s purely fake, it’s really hard to spot, so be very alert to that.
“And certainly for anyone who is in the news business, obviously deepfakes are already very real and certainly big in the Ukraine war. Both sides are doing it, and it’s only going to get worse.”
The trio were asked by PA Media how much more realistic deepfakes will become over the next few months, and what measures can be taken to ensure they are identifiable.
Ms Fry said: “There are techniques that you can put into generated content that will allow other algorithms to be able to identify them.
“So, sort of clever little mathematical tricks that you can do, like barcodes that you can’t see, and terribly often it will still work.”
She added: “The thing is, I think the real concern about deepfakes isn’t so much about fake content. It’s about people denying real content. And that, I think, is actually slightly more concerning.
“But also, actually, in broad terms, I’m not as concerned about political misinformation or deepfakes, because I think that we have been in this situation before, you know, with the invention of Photoshop. I think there was a bit of a moral panic about that.
“And I think that actually, you know, it’s about making sure of the source of where it’s coming from. If a video pops up out of nowhere on YouTube, and it shows, I don’t know, some politician doing something kind of bananas,
“I think that you add in the context of where it appeared to sort of support the credibility of it.”