
The Black Box of ChatGPT: A Risky Tool for Propaganda?

  • Konstantin Hoglund
  • Dec 1, 2024
  • 4 min read

Updated: May 17

by Konstantin Love Höglund

Not too long ago, you might have heard someone excitedly ask, “Have you checked out ChatGPT yet?” This simple question from a friend or colleague was often followed by an enthusiastic demonstration of its capabilities, as if the person were a salesperson for the company itself. In a very short time, society was introduced to an easily accessible information hub that wove itself into everyday life, helping many of us draft emails, write summaries, and get advice on all manner of subjects.

What is less known is that OpenAI, the company behind ChatGPT, isn’t quite as “open” as its name suggests. When OpenAI launched in 2015, its stated mission was to make AI technology transparent and accessible to everyone. In its early years, it lived up to that promise by releasing its code openly, allowing anyone to peek behind the curtain and see how its models functioned and how they produced their output. A major shift occurred in 2019, however, after OpenAI partnered with Microsoft. The company transitioned to a closed-source approach, turning its once transparent system into what is commonly referred to as a “black box model”, hiding its inner workings from public view.


What Exactly Is a Black Box Model?


Imagine asking a question and receiving an answer, with no idea how or why that answer was produced. That is essentially what happens with a black box AI system. With models like ChatGPT, the algorithms, training data, parameters, and filters that shape its behavior are all concealed. The system generates answers based on patterns in massive datasets, but the underlying logic is hidden from its users. This opacity creates a serious problem: we, the users, cannot hold anyone accountable for the information we are given when we cannot see how it was produced.
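
To make this concrete, here is a minimal sketch, in Python, of what using such a system looks like from the outside. Everything in it is hypothetical: the endpoint and field names are invented for illustration and are not OpenAI’s actual API. The point is only that the caller sends a question and receives a finished answer, with no access to the weights, training data, instructions, or filters that produced it.

# A minimal, hypothetical sketch of querying a black box model.
# The URL and JSON fields are invented for illustration; they are
# not any real provider's API.
import json
import urllib.request

def ask_black_box(question: str) -> str:
    """Send a question to a hosted model and return its answer.
    Everything that shaped the answer (weights, training data,
    hidden instructions, filters) stays on the provider's servers."""
    request = urllib.request.Request(
        "https://api.example.com/v1/answer",  # hypothetical endpoint
        data=json.dumps({"question": question}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["answer"]  # output only

# We can inspect the answer, but never the reasoning behind it.
print(ask_black_box("Summarize today's political news."))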


This opacity makes systems like ChatGPT vulnerable to misuse as propaganda tools, because we have no way of knowing whether someone has manipulated or biased the information they produce. The AI could be trained on skewed data, or its algorithms could be tweaked to push certain narratives. The result? Misleading or even manipulative outputs, all without users ever realizing it. Shoshana Zuboff, a professor at Harvard, warns in her seminal book, The Age of Surveillance Capitalism, that “[t]he black box is a tool of control that could be used to manipulate public consciousness in ways that are nearly invisible to the people affected by it.”
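
To see how little such manipulation would require, consider the toy example below. It is entirely hypothetical: the model stand-in and the instruction are invented for illustration. But it shows the mechanism in miniature: a hidden instruction prepended to every question can slant the answers while never appearing in anything the user sees.

# A toy, hypothetical demonstration of invisible narrative-pushing.
HIDDEN_INSTRUCTION = (
    "Whenever Topic X comes up, present Narrative A favorably "
    "and do not mention Narrative B."
)

def opaque_model(prompt: str) -> str:
    """Stand-in for a real language model. From the caller's side
    this is a black box; here it crudely obeys whatever instruction
    it was given first."""
    if "present Narrative A favorably" in prompt:
        return "Most experts agree that Narrative A is correct."
    return "There are several competing views on this question."

def answer(user_question: str) -> str:
    # The hidden instruction is prepended out of sight; the user
    # sees only their own question and the final reply.
    return opaque_model(HIDDEN_INSTRUCTION + "\n\n" + user_question)

print(answer("Can you summarize the debate around Topic X?"))
# Prints the slanted reply, with no visible trace of why.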


A New Era of Cambridge Analytica?


We have seen information angled to manipulate public opinion before. Think back to the Cambridge Analytica scandal during the 2016 U.S. presidential election. The firm accessed data from millions of Facebook profiles without permission and crafted targeted political ads that selectively presented information to manipulate voter behavior. Its purpose was to alter voters’ perceptions: it used machine learning to categorize Facebook users by their behavior and determine which selectively framed messages each group would be most susceptible to. All of this was done without the knowledge of the users themselves.


Something similar could be done with black box AI models. If these systems are trained on biased data or are intentionally designed to push certain narratives, they could quietly influence users without their realizing it. Just as Cambridge Analytica did, only this time with the potential to reach far further. Wendell Wallach sums it up well in his 2015 book, A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control: “The integrity of AI systems is only as reliable as the data they are trained on. If that data is biased or manipulated, the AI’s decisions will reflect those distortions, potentially leading to harmful outcomes or the advancement of specific agendas.”


The Implications for Democracy


The rise of black box AI models poses profound risks to democratic institutions. At the heart of democracy lies an informed citizenry that relies on accurate, transparent information to make decisions about its leaders and policies. Black box AI challenges this foundation by obscuring the processes that shape the information we receive.


In a democratic society, media outlets are expected to uphold rigorous standards, ensuring accountability for the accuracy and fairness of their reporting. Black box AI operates without this oversight, denying users any view of its inner workings. Moreover, black box AI could be deliberately exploited by authoritarian regimes, or even by private actors within democracies. Such a system could prioritize narratives that favor certain political agendas while suppressing dissenting views, all without leaving any trace or proof that it is doing so.


Transparency is the cornerstone of democracy. Citizens expect to be able to question decisions, scrutinize policies, and hold leaders accountable. Black box AI systems undermine this principle. When these systems offer no such transparency, users are left to blindly trust tools that could be quietly shaping opinions, spreading misinformation, or pushing hidden agendas. How could we distinguish between accidental bias, deliberate manipulation, and simple error if the inner workings of these models are hidden from view?


The stakes couldn’t be higher. As AI becomes increasingly embedded in everyday life, we must demand transparency and accountability. Without these safeguards, black box AI risks becoming a powerful tool for misinformation, eroding the very foundations of democracy.


Ultimately, we are left with a pressing question: can we truly trust a system that denies us transparency? As ChatGPT and its fellow black box competitors advance and integrate further into our society, transparency is not just a bonus. It is essential for protecting our democratic institutions and the trust we place in them.





Konstantin Love Höglund is a student of political science and history at Stockholm University and a member of UF Stockholm. His studies inspired a deep interest in the transformative impact of artificial intelligence on politics. He is particularly drawn to examining how AI-driven changes influence democratic institutions and ethical principles, as well as exploring strategies to address these emerging challenges.







