On June 24, 2022, Berlin mayor Franziska Giffey held what seemed like a perfectly normal video call with Kyiv mayor Vitali Klitschko.
Or so she thought.
She grew suspicious only when the supposed mayor, speaking from war-torn Kyiv, asked her to support a gay pride parade.
The caller turned out to be an impostor, not Klitschko. Giffey’s office later said the individual had probably used deepfake technology to trick the mayor of Berlin, although the exact technique remains unknown.
A year or two ago, few people were familiar with deepfakes. Today, most people are. Much of that familiarity comes from their prominence in entertainment apps such as Face Swap and in AI-powered lip-syncing filters on TikTok.
Once mere entertainment tools, they are now being exploited by disinformation actors. Several high-profile stunts have already surfaced, some of them potentially dangerous, such as a deepfake impersonation telling citizens to lay down their arms.
But what’s even scarier is that deepfakes themselves are quickly becoming an ‘obsolete’ method of creating fake video content.
The new kid on the block this year is fully synthetic media. Unlike deepfakes, which are only partially synthetic, grafting an image of one person’s face onto another person’s body in an existing video, fully synthetic media is created from scratch.
This year has seen the rise of text-to-image software that does just that.
It may look like magic, but the technology behind these generators isn’t all that mysterious. It relies on a vast artificial neural network that mimics human perception, trained on millions of images paired with text descriptions.
The user only needs to enter a simple text prompt, and an image comes out. The most popular programs are Stable Diffusion and DALL-E, both of which are now free and openly accessible.
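To give a sense of how low the barrier has become, here is a minimal sketch of text-to-image generation with Stable Diffusion via the open-source Hugging Face diffusers library; the model ID, prompt, and output filename are illustrative assumptions, not a workflow described in this article.

```python
# Minimal text-to-image sketch using the open-source `diffusers` library.
# The model ID, prompt, and output filename are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

# Download the pretrained Stable Diffusion weights (several GB on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision fits on consumer GPUs
)
pipe = pipe.to("cuda")  # requires an NVIDIA GPU; use "cpu" otherwise (slow)

# A single sentence is all the input the generator needs.
prompt = "press photo of a crowd protesting in front of a parliament building"
image = pipe(prompt).images[0]
image.save("synthetic_photo.png")
```

A few lines of code and one sentence of text are enough to produce a photorealistic image of an event that never happened.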
This presents a troubling possibility. These tools are a dream for disinformation actors, who need only imagine the “evidence” their narrative requires and then generate it.
These technologies are already starting to permeate social media, and images are just the beginning.
Just this September, Meta released “Make-A-Video,” which allows users to create “brief, high-quality video clips” from text prompts. Experts warn that synthetic video is even more troublesome than synthetic images, given that short-form video clips are prioritized over text and images in today’s social media environment.
Entertainment aside, the infiltration of synthetic media into apps like TikTok is particularly troubling. TikTok is centered around user-generated content, encouraging people to take existing media, add their own edits, and re-upload the result: an operational model not too different from how deepfakes are made.
A recent study reported by the Associated Press found that one-fifth of videos on TikTok contain misinformation, and that young people increasingly use the app as a search engine for important issues like Covid-19, climate change, and Russia’s invasion of Ukraine.
It’s also much harder to audit than other apps like Twitter.
In short, TikTok is the perfect incubator for such new tactics, which then spread across the web through cross-platform sharing.
To be sure, most disinformation is still created using mundane tactics such as video and audio editing software. By splicing footage, changing its speed, replacing the audio, or simply taking video out of context, disinformation actors can already sow discord with ease.
Seeing is believing
But the dangers of text-to-image generation are already real. We don’t have to expend much creative energy imagining a not-too-distant future in which untraceable synthetic media floods our phones and laptops en masse. Trust in what we see is already tenuous, which is terrifying given the potential impact on democracy.
Today’s high density of news complicates matters: each of us has only a limited capacity to consume it. And we know that debunking is a time-consuming and ineffective remedy. For most of us, seeing is believing.
We need a simple and pervasive solution that helps users quickly identify and understand deceptive images and videos. Any solution that does not let users and journalists identify fake news faster, more easily, and more independently will lag behind the problem.
Currently, the most promising solutions focus on provenance: technologies that embed signatures or invisible watermarks in media at the moment of creation, as proposed by Adobe’s Content Authenticity Initiative. This is a promising but complex approach that requires collaboration across multiple industries, and European policymakers in particular should pay closer attention to it.
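To make the provenance idea concrete, here is a toy sketch of its core principle, signing media bytes at creation time and verifying them later, written with Python’s cryptography library. This is an assumption-laden illustration only: the real Content Authenticity Initiative / C2PA standard embeds signed manifests in file metadata and is considerably more sophisticated, and the filename below is hypothetical.

```python
# Toy sketch of media provenance: sign bytes at creation, verify later.
# This is NOT the Content Authenticity Initiative / C2PA format, which
# embeds signed manifests in file metadata; it only shows the principle.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice, the camera or creation tool holds this key in secure hardware.
creator_key = Ed25519PrivateKey.generate()
public_key = creator_key.public_key()

# "Capture": sign the raw media bytes at the moment of creation.
with open("photo.png", "rb") as f:  # hypothetical media file
    media = f.read()
signature = creator_key.sign(media)

# Later, a platform or journalist checks that the bytes are untouched.
try:
    public_key.verify(signature, media)
    print("Provenance intact: media matches the creator's signature.")
except InvalidSignature:
    print("Media was altered after signing, or the signature is forged.")
```

Any edit to the file, even a single pixel, invalidates the signature, which is what would let users verify an image quickly instead of debunking it slowly.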
We live in a fast-paced world and disinformation moves faster than current solutions. It’s time to catch up.