Ukrainian activists propose AI regulation to combat Russian disinformation
At a conference held by the Institute of Innovative Governance, experts discussed the spread of Russian disinformation through AI technologies and suggested ways to combat deepfakes and PSYOPs, Rubryka reports.
What is the problem?
Disinformation is a powerful tool of the Russian propaganda machine, used to demoralize Ukrainians, spread inflammatory lies, and promote anti-Ukrainian narratives.
"The issue of disinformation is one of the most important ones the ministry addresses — both before and after the start of the full-scale invasion. This is a top priority for us," said Deputy Minister of Culture and Information Policy Taras Shevchenko.
According to Olha Yurkova, co-founder of Stopfake, a project that debunks propaganda and fake news, Russian propagandists actively use artificial intelligence to create deepfakes and spread disinformation on social media through newly created online communities and channels. Their propaganda evokes negative emotions, exploits Ukraine's weak points, such as corruption, and demoralizes Ukrainian society.
What is the solution?
Media literacy experts propose adopting artificial intelligence legislation that regulates AI and obliges developers and users to follow set rules. The European Parliament has already approved a draft law on AI regulation. Since it won't enter into force until 2026, the European Union plans to conclude temporary voluntary pacts with technology companies in the meantime.
Activists also believe it is important to improve the media literacy of Ukrainians and encourage users to report fakes on social media. Ihor Rozkladnyi, Deputy Director of the Center for Democracy and the Rule of Law, pointed out that ordinary users can cooperate with platforms by filing complaints about false content; thanks to these user reports, the system will block disinformation.
If users instead share fakes on their own pages, this only accelerates the spread of disinformation:
"The primary spreader of disinformation is an ordinary consumer who presses the 'share' button on social media; then someone else picks it up, and the fake or disinformation spreads very rapidly in real time," said Olha Yurkova. "This is very dangerous, and it is at this very moment that it is important to stop the spread of disinformation."
How does it work?
The law approved by the European Parliament will regulate the use of AI according to risk level: the higher the risk to people's rights or health, the greater the obligations. High-risk fields include education, critical infrastructure, public order, and migration management.
The law will also require the labeling of content created by AI. It will apply to generative systems that can write texts and create images, audio, and other media files. Google DeepMind is already testing the SynthID tool, which marks AI-generated images with imperceptible watermarks.
At the discussion, the experts proposed introducing a media literacy course in high schools, where students would learn to work with social media and check sources of information. Activists also stressed the importance of open training sessions and conferences to raise the level of critical thinking and attentiveness among Ukrainians.
"The war made people understand that information security impacts physical security, and they became more interested in it," said Valeria Kovtun, head of the national media literacy project "Filtr." "However, it is not entirely correct to say that people have become much more media literate. They pay more attention to it, but I think we still have a lot of work to do. The fact that society wants to work on this [on media literacy — ed.] is already a big, significant step."
Rubryka reported that Ukrainian developers have created Mantis Analytics, an artificial intelligence-based platform that can monitor and analyze events and detect manipulation in the information space.
The Center of National Resistance also presented a comic, "Time of the Russians is Coming," based on the book and created with the help of artificial intelligence.