Montenegro facing a challenge – How to protect yourself from AI-generated false information?
Written by: Jelena Nelević Martinović
What do an interview with an experienced Montenegrin doctor recommending a medicine, conducted by a journalist of the national broadcaster, and students who wrote the same essay en masse during the graduation exam have in common? Artificial intelligence (AI).
Designed to serve as a tool that facilitates access to information, it can also be a method for learning and progressing, and can be used for editing and proofreading; in short, it is an assistant in collecting information that the user ultimately needs to filter and use.
Conversely, it has also brought serious manipulation and contributed to a feeling of insecurity, since we are no longer sure whether what we see with our own eyes is accurate. When misused, it has evolved into a powerful instrument for creating and distributing false information.
Content created by AI includes realistic photos and videos, and social media users accept them without questioning their truth. Such false content is so realistic that even experts find it difficult to analyze it fully and declare it untrue.
Furthermore, it is almost impossible for the average consumer of social networks and media to verify it, even if they are media literate. The same applies to the newsrooms that further distribute supposedly accurate content: a large number of media outlets do not have the capacity and expertise to scrutinize it.
AI misuse is already happening in Montenegro. The position and title of the former director of the Podgorica Health Center, Dr. Danilo Jokić, were misused for marketing purposes, as was the name of the cardiologist from Nikšić, Dr. Radovan Mijanović, who allegedly gave recommendations for antihypertensive drugs, cleaning of blood vessels, varicose veins, or improving erection. Dr. Jokić had earlier been the target of marketing wizards and creators of false information, and RTCG, the Public Broadcaster, was also compromised when an interview Dr. Jokić gave to that outlet in September of last year was manipulated.
On a fake page with the RTCG logo, but without the real address of that media outlet, news was published that Dr. Jokić had been arrested because he allegedly claimed that there is a drug for hypertension that doctors do not prescribe to patients, and that tens of thousands of people die because of that, not because of the virus. In that text, he was also presented as the chief specialist in the Center for Heart Diseases and a vascular surgeon, whereas at that moment he was the director of the Podgorica Health Center and is, in fact, a subspecialist in addiction diseases. RTCG journalist Julijana Žugić, who conducted the original interview, was also misused. This alleged interview and text still exist on the page, dated July 23 of this year.
Dr. Jokić, after being misused several times for false marketing purposes, reported the unidentified person to the Police Directorate, i.e. the department for the fight against cybercrime. He later stated to the media that he was told it would be difficult to find the perpetrators. From that report in November 2023 until today, the police have not identified the fraudsters.
Shortly before that, a fake interview appeared on social networks with an alleged "doctor Dušan Zečević" who had "developed a product for varicose veins treatment", in which citizens of Montenegro were offered a "special discount" of 50 percent for the purchase of one product.
Dr. Mijanović also spoke to journalists of RTCG, the Public Broadcaster, eight months ago, not about erection problems but about his profession and hobbies, as can be heard in his original guest appearance in the studio during the show "Our Heritage", available on the YouTube channel of RTCG. His speech was edited over the original video and "turned" into an advertisement for the products. Consequently, Dr. Mijanović filed a complaint against an unidentified person. Notwithstanding the filed complaint, the same "advertisement" has remained on the Facebook page "Soft land" since March 8 of this year, and it continues to serve as a potential source of profit for fraudsters.
Comparing Dr. Mijanović's original guest appearance with the deepfake advertisement, you can notice that the voice has gone through AI filters and that the manner of speaking differs, so there is a way for the media literate to understand what it is about. Unfortunately, the majority of viewers do not have the skills to recognize the fraud.
This way of abusing AI is called a deepfake. Videos created with the help of artificial intelligence skillfully simulate a person's appearance or voice; it takes only a few seconds of real audio or video of a person to construct a false statement.
The European Union passed the Digital Services Act to fight disinformation. According to that law, platforms must mitigate the risk of disinformation spreading and can be held accountable and fined. The Artificial Intelligence Act was also passed in the EU, which includes the requirement to flag manipulated videos. However, it has not yet entered into force.
The issue of the use of artificial intelligence (AI) in the media and social networks has been problematic for a long time, but it seems that Montenegro is still far away from this topic.
The Ministry of Culture and Media (MKM) said that the Action Plan for the implementation of the Media Strategy foresees the preparation of an Analysis on the establishment of communication with global Internet companies and social networks to combat disinformation and hate speech.
“Given that Montenegro is not yet a member of the EU, the Digital Services Act of the European Union, the EU Regulation that enters into force this year and refers to the obligations of social network founders towards EU member states, does not apply to Montenegro. Having in mind the size of our market, the only way to define the responsibility of the owners of Facebook and other social networks towards our country at this stage is to start communicating with them on this matter. The same applies to global streaming platforms, including video-on-demand streaming platforms such as Netflix and HBO,” MKM said.
Now there is less and less time for Montenegro to catch up with the global flywheel, warns Marko Banović, an analyst at the Digital Forensic Center. He believes that, compared to the EU, Montenegro is delayed in the adoption of the media strategy and new media laws by at least four to five years.
“With the new technology and innovation brought by AI, the Strategy and new laws will not provide an adequate response to the newly created circumstances to protect against the misuse of artificial intelligence. These documents need to be updated and adjusted to the realities of the modern media environment. Moreover, Montenegro urgently needs a special strategy to fight against foreign interference and manipulation of information,” he claims.
This topic is still being developed in Montenegro, and the existing legislative framework and the Media Strategy do not address the specific challenges that AI brings regarding the spread of disinformation.
“The rapid progress of this technology does not leave much space for waiting. Decision makers in Montenegro must be aware that we must not get into a situation of being late in establishing a strategic and legislative framework related to AI, as was the case with the fight against disinformation and foreign interference,” Banović emphasized.
Banović underlines that in the EU and the rest of the world, regulation of the use of AI in the media and of the spread of false content is at different stages of development. The European Union is working on legislation dealing with AI, such as the Artificial Intelligence Act (AI Act), but regulating the spread of false information remains a challenge due to the pace at which the technology is developing and the ways it is being used.
Radoje Cerović, the psychologist and communicologist, highlights the importance of media literacy and a more serious approach to the problem.
“It is a noteworthy question to what extent we will ever be able to seriously reduce the space for manipulating our senses, and thus our emotions, attitudes, and behavior, through various (so far ridiculously shallow) forms of education on ‘media literacy’,” warns Cerović.
For the Montenegrin media to be ready to face the challenges brought by AI, more education, training, and adaptation of the legislative framework is needed. Through media literacy and appropriate strategies, the media can assist immensely in the fight against the spread of false information and misinformation.
Marko Banović also points out that, unfortunately, the Montenegrin media scene is currently not ready to face the challenge of adjustments to new circumstances.
“The issue of readiness and training of media professionals to use AI as a tool, as well as to combat misuse, becomes essential. This is also an opportunity for the media to grow into public educators about the risks arising from improper use of AI and the spread of disinformation,” continues Banović.
Traditional Montenegrin media are still not as burdened by those who spread fake news, hate speech, misinformation, and the like, but it seems that social networks are increasingly occupying the space that belongs to the media.
Asked whether the media should be the ones to make the audience media literate, first of all through their own media literacy, i.e. their ability to control AI, to learn to use it as a tool, to recognize it, and in some way present the danger to the public, Banović replies that the increase in the use of artificial intelligence is a cause for concern.
“It enables the manipulation of information in a sophisticated and difficult-to-detect manner. However, at the same time, artificial intelligence can be a key tool in the fight against disinformation. Montenegrin media can play a key role in educating the public about the risks arising from AI. The media can also use AI as a tool to improve their content and to help detect fake content, but it is paramount that the media are trained and have appropriate mechanisms to prevent misuse,” claims our interlocutor.
He reminds us that AI has long been used to create and spread information manipulations, primarily through generating fake news, images, videos, and audio recordings.
“Nevertheless, with the arrival of the so-called ‘generative models’ of the ChatGPT type, we have entered a new era of artificial intelligence that enables the production of a huge amount of content, both visual and textual, which is very difficult to detect as synthetic, and therefore difficult to fight against in disinformation campaigns. The two documented cases of AI misuse show how important it is to establish mechanisms to prevent the spread of false information. Given that AI can generate highly persuasive content, the average consumer often cannot tell the difference between real and fake information. This situation highlights the importance of digital media literacy and education,” Banović pointed out.
Psychologist and communicologist Radoje Cerović reminds us that people have succeeded before in deceiving, or hacking, biological mechanisms and misusing them in a similar fashion.
“For example, when we invented chemical drugs, we ‘hacked’ the natural motivational mechanisms in the brain with them, and later we managed to abuse them for profit. Our innate motivational mechanism (the so-called ‘dopamine pathway’) is thus ‘cheated’. Natural mechanisms of behavioral control were not made for heroin or methamphetamine, and with their appearance on the scene, we got powerful tools for controlling, even maliciously, someone’s behavior,” he explains, drawing a parallel with trust in fake news that we are unable to recognize.
Cerović thinks that, in a similar way, advanced artificial intelligence mechanisms, from image creation to information shaping, effectively “hack” our perception of reality.
“After all, we say ‘I saw it with my own eyes’, which almost made sense until recently. How can we learn that ‘believing one’s own eyes’ is an inadequate reality-control system? If you shape the input in a convincing manner and use emotional content, that mechanism is more powerful than any hypnosis. A convincing picture of a scene of violence can start a war. A created reality can eliminate a person from public life (or ‘cancel’ them, as it is fashionable to say today) along with their public discourse. In the end, we can now completely ‘create’ a non-existent person who will influence our perception of reality or our behavior with its guided communication,” Cerović points out.
He emphasizes that all such cases will bypass our “reality control” mechanisms and directly cause the intended reaction, the one desired by those who have the intent and tools to manipulate our behavior.
This text was produced with the financial support of the National Endowment for Democracy. The content is solely the responsibility of the authors and publishers of the Media Institute of Montenegro and does not necessarily reflect the views of the donors.