
Artificial intelligence creations are becoming increasingly convincing, and soon we will not be able to distinguish authentic records of real events (e.g., photos and videos) from AI-generated products. AI will dramatically change the way we approach information: photos and videos will no longer guarantee truthfulness, because any of them may (or may not) have been generated artificially.

Greater emphasis will be placed on the media literacy of online users and their ability to critically evaluate, question, and verify information. In the real world, however, it is impossible to verify everything (we would have time for nothing else). The importance of specific sources of information, such as certain individuals or organizations, will also increase, since it can be assumed that their information will be high quality and unbiased. One way or another, we must gradually get used to the fact that what we see and hear on the internet may not capture reality at all.

The problem is that before most people adjust to this paradigm shift (which may take several years), artificial intelligence will cause many problems. One of them will be targeted interference in elections, one of the fundamental democratic processes. With AI, it is possible to create and spread, on a large scale, videos or audio recordings of politicians saying absolutely anything. Political marketing will make massive use of AI to damage political opponents and polish the image of its own candidates, and supporters and opponents of individual political rivals will be active as well.

This is already happening in practice. Examples include fraudulent recordings circulated during the pre-election contest in Slovakia (which we wrote about here), as well as current cases from the 2024 US presidential race. For example, fake AI-generated photos of Donald Trump surrounded by African Americans are being circulated to encourage Black voters to vote for him; these forgeries were probably created by supporters of the politician. At first glance the materials are very credible, but on closer inspection you can spot, for example, unnaturally shiny skin, missing fingers, or various extraneous artifacts.


AI-generated photos used in the election campaign in the USA

Unfortunately, creating similar materials is extremely easy, whether photos or videos. Josef Šlerka and his team recently published an experiment with a deep fake video of Václav Klaus in which he says sentences about global climate change that he would probably never utter in the real world. The post drew a significant response: many users were indeed fooled, commented on the video, and did not recognize it as a fake. A large portion of internet users unfortunately does not realize that such videos can now be made, within minutes, by anyone who learns to work with AI tools. The range of possible misuse of these creations is extremely wide.

Deep fake videos have so far been used primarily in fraud; suffice it to mention the classic fraudulent investment schemes in which deep fake videos of well-known politicians promised miraculous wealth. Targets of the fraudsters have included Andrej Babiš, Michal Žantovský, and Daniel Beneš, as well as President Petr Pavel and other well-known personalities. The logos of ČEZ, the Government Office, and CNN Prima News have also been misused. More about this type of fraud here.

Deep fake video with Andrej Babiš used in fraud

However, it can be safely assumed that in the near future AI products will begin to be massively misused in elections and will actively influence voters' moods and preferences. Imagine, for example, that a well-known politician (such as a president or prime minister) calls us and persuades us to give them our vote, or, conversely, persuades us not to go to the polls at all. Does it sound like sci-fi? This is exactly what happened in the USA, where an artificially created voice of Joe Biden urged Democratic voters not to go to the polls (more on CNN).

Similar scenarios can be expected in other countries. The risks of election manipulation are also highlighted by the European Union itself, which is preparing for the European elections and appealing to major tech platforms such as X, TikTok, and Facebook to identify and label AI-generated content. The United Kingdom and other European countries likewise fear election manipulation through deep fake videos.

What should we prepare for before the elections in the Czech Republic? A flood of deep fake videos of Czech politicians and other well-known personalities designed to influence voters.

It is highly likely that a wave of deep fake videos will appear in the Czech Republic before the elections. As previous experiments show, a large proportion of social media users cannot identify deep fake videos as fakes and instead perceive them as authentic records of reality.

Deep fake video with Tomio Okamura (more videos here)

Deep fake video with Alena Schillerová (received via WhatsApp)

The above videos of politicians are very simple, and it is evident that they were created from source photos found on the internet. Creating a fraudulent deep fake video takes only a few minutes: all you need is a sample of the person's voice (a few minutes of speech) and their appearance (an original photo or video). Then it is enough to use online applications (their names will not be mentioned here, but they are publicly available) to create a simple or more advanced video. Deep fake videos circulating on the internet are often amateurish, but among them we can also find more polished and realistic pieces in which more complex algorithms have been used.

An example is the recent video "apology" by Andrej Babiš for his email attempts to find compromising material on Minister Lipavský. Here the use of AI is significantly more advanced, and it can be very difficult for a layperson to detect that it is a deep fake.

Deep fake video with Andrej Babiš circulating on Facebook

Some recommendations on the issue of deep fake videos

A. Investigate the source: Always check where the video comes from. Trusted sources are more likely to share verified information. If the video comes from an unknown or questionable source, be cautious.

B. Focus on quality: Deep fake videos often contain visual or audio flaws. Look for errors such as unusual lip movements, suspicious shadows, poor intonation, overly perfect language, etc., which might indicate manipulation.

C. Verify information: If the video seems suspicious, try to find other sources that could confirm or refute the information. If you can't find any other information on the topic, be careful. You can also use various applications that can help identify deep fake videos.

D. Educate yourself: The more you learn about deep fake technologies and how they work, the better you will be able to recognize them. Follow the latest developments in artificial intelligence and media literacy. A helpful resource could be the AI course from E-Bezpečí.

E. Be wary of videos that evoke emotions: If the video tries to provoke a strong emotional reaction or seems too shocking or controversial, it should be a warning sign that it might be a fake. Disinformation and hoaxes often play on emotions to obscure rational judgment and prompt a hasty reaction.

F. Learn how to respond: If you come across a deep fake video, do not share it further. Instead, inform the relevant platforms (social networks) and warn others. Strengthening the community against the spread of disinformation is crucial.

G. Actively point out sources spreading false information and support trustworthy institutions.

For E-Bezpečí
Prof. Kamil Kopecký
Palacký University Olomouc


How to cite this text:
KOPECKÝ, Kamil. Generativní umělá inteligence výrazně mění náš pohled na informace. Čemu budeme věřit? A jak ovlivní např. volby? (2). E-Bezpečí, roč. 9 (2024), č. 1, s. 46-50. ISSN 2571-1679. Dostupné z: https://www.e-bezpeci.cz/index.php?view=article&id=4074



