A new study by the Institute for Strategic Dialogue (ISD) reveals a concerning trend: AI chatbots are frequently incorporating Russian propaganda into their responses about the war in Ukraine. The research, which analyzed answers from prominent AI platforms like OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, and Deepseek’s V3.2, highlights how these systems are susceptible to mirroring biased language and relying on Russian sources, particularly when addressing specific topics.
Methodology and Key Findings
The ISD researchers posed over 300 questions in five languages about the conflict in Ukraine. These questions were intentionally crafted with biased, malicious, or neutral language to assess how each chatbot would respond and which sources they would draw upon. The study found that Russian sources appeared significantly more often in answers to biased and malicious prompts, raising concerns about the potential for these platforms to inadvertently amplify disinformation.
Platform-Specific Observations
Each AI chatbot exhibited distinct behaviors when responding to queries about Ukraine:
- ChatGPT: Demonstrated a strong tendency to provide Russian sources when presented with biased or malicious prompts, offering three times as many Russian links as it did in response to neutral questions.
- Grok: Was most prone to citing Russian sources even in neutral questions. A notable pattern was Grok’s direct quoting of journalists from Russia Today, blurring the line between state-backed propaganda and personal opinion. Researchers also noted that Grok may not effectively detect and restrict content from sanctioned Russian state media, even when reposted by third parties.
- Deepseek: Provided the highest volume of links to Russian-backed sources in two queries, directing users to platforms like VT Foreign Policy, which spreads content from known Russian propaganda groups.
- Gemini: Showed the most discerning behavior, refusing to answer some malicious prompts by citing concerns about potentially inappropriate or unsafe topics. While it recognized the risks associated with biased prompts, it sometimes failed to disclose the sources used to formulate its responses.
The Role of ‘Data Voids’ and Confirmation Bias
The study identified a key factor contributing to this phenomenon: the presence of “data voids.” These are search terms or topics lacking a substantial amount of high-quality, reliable information. In these instances, chatbots appear more likely to rely on available sources, even if they originate from potentially biased or propagandistic outlets.
Researchers emphasized that the findings confirm the presence of “confirmation bias” in AI systems. This means that chatbots tend to mimic the language used in prompts, influencing both how they phrase their responses and the sources they prioritize.
Topics of Concern
The prevalence of Russian sources varied based on the subject matter being queried:
- Military Recruitment: Russian state sources were most frequently cited when questions concerned Ukraine’s military recruitment efforts. Grok cited at least one Russian source in 40% of its responses, while ChatGPT followed with over 28%.
- War Crimes and Ukrainian Refugees: These topics yielded the fewest Russian-backed sources across all four chatbots.
Conclusion
The study’s findings underscore a pressing need for greater vigilance regarding the potential for AI chatbots to disseminate Russian propaganda. While some platforms like Gemini have demonstrated an ability to recognize and refuse malicious prompts, the overall trend highlights the need for improved source verification, algorithmic adjustments to mitigate confirmation bias, and efforts to fill data voids with reliable, unbiased information. Addressing these challenges is crucial to ensuring that AI chatbots serve as trustworthy sources of information and do not inadvertently contribute to the spread of disinformation.