Hey, did you hear about deepfakes being used to spread fake news during elections? It sounds scary!
I read about it! But actually, most deepfake disinformation isn’t aimed at big elections the way people assume. A lot of it stays more hidden.
Hidden? Like how?
Well, imagine fake videos or images being shared in private groups, like on Telegram or WhatsApp. People in those groups see them, but the rest of us might never know they exist.
Whoa, so it’s not like a fake video of a celebrity on YouTube?
Not really. Those get a lot of attention, but the real danger is deepfakes that are harder to spot, like a fake video of an attack supposedly happening in another country. If people have no way to verify it, they might just believe it.
But can’t AI detect deepfakes now?
It’s getting better, but it’s not perfect. A nonprofit called TrueMedia uses AI to identify fake videos, images, and even audio. They also have a forensic team to double-check results.
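Just to make that concrete, here’s a tiny, made-up sketch of how a detect-then-review workflow can look. The thresholds, labels, and scores are invented for illustration; this isn’t TrueMedia’s actual system.

```python
# Hypothetical sketch of a detect-then-review workflow (not TrueMedia's
# real pipeline): an AI model gives each item a "fakeness" score, and
# uncertain cases get routed to human forensic analysts.

def triage(score: float, auto_threshold: float = 0.9, review_threshold: float = 0.5) -> str:
    """Decide what to do with a media item, given a model's fakeness score in [0, 1]."""
    if score >= auto_threshold:
        return "likely fake"            # model is confident: flag it
    if score >= review_threshold:
        return "send to forensic team"  # uncertain: a human double-checks
    return "likely authentic"

# Example scores; in practice these would come from a real detection model.
for score in (0.95, 0.62, 0.10):
    print(score, "->", triage(score))
```

The point of the human step is exactly the “not perfect” part: the model handles the obvious cases, and people handle the ambiguous ones.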
That sounds helpful. How do they figure out how many people see these deepfakes?
It’s tricky. Sometimes they can estimate reach from platform stats, like a post that reports 10 million views. The harder part is measuring impact: whether it actually changes people’s decisions, like how they vote.
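To show what the “easy” half of that looks like, here’s a toy sketch that just adds up reported view counts. The posts and numbers are made up, and real measurement is much messier: duplicate viewers, private groups with no public stats, bot traffic.

```python
# Rough sketch: estimating reach by summing platform-reported view counts.
# Invented example data; this is an upper bound on reach, not a measure of influence.

posts = [
    {"platform": "X", "reported_views": 10_000_000},
    {"platform": "Telegram", "reported_views": 250_000},
    {"platform": "YouTube", "reported_views": 1_200_000},
]

estimated_reach = sum(p["reported_views"] for p in posts)
print(f"Estimated reach (upper bound): {estimated_reach:,} views")
# Reach tells you how many screens it hit, not whether it changed anyone's vote.
```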
So, what can people do to stop this?
One idea is watermarking AI-generated content, but bad actors often find ways to strip or bypass the marks, so it isn’t enough on its own.
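Here’s a toy illustration of why that’s fragile. It’s not a real watermarking standard or library, just a simplified model where the label lives in metadata that disappears the moment someone re-encodes or screenshots the file.

```python
# Toy illustration (not a real standard): a provenance tag stored in a media
# file's metadata is easily lost when the file is re-encoded or re-uploaded,
# which is one reason watermarking alone isn't enough.

original = {"pixels": "...", "metadata": {"ai_generated": True}}

def reencode(media: dict) -> dict:
    """Simulate re-uploading/re-encoding: pixel data survives, metadata doesn't."""
    return {"pixels": media["pixels"], "metadata": {}}

def looks_ai_generated(media: dict) -> bool:
    return media.get("metadata", {}).get("ai_generated", False)

print(looks_ai_generated(original))            # True
print(looks_ai_generated(reencode(original)))  # False: the label is gone
```

Robust watermarks try to hide the signal in the pixels or audio themselves, but even those can be degraded by cropping, compression, or deliberate attacks.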
It sounds like a tough fight.
It is. But experts think we’ll get better at detecting and measuring deepfakes over the next few years. The key is staying informed and questioning suspicious content.
Good point. I’ll try to check things before believing them. Thanks for explaining!