Language in Focus: Decoding the AI Discourse

How We Talk About AI and Why It Matters for Social Change Work

Artificial Intelligence, or AI, tools like ChatGPT have taken the world by storm. We are collectively witnessing a colossal rush to integrate these new and impressive tools into areas of our work life as diverse as speech writing and the drafting of grant proposals. Depending on whom you are talking to, AI is either the next step toward limitless human capacity or toward the downfall of human civilization.

One element these passionate and exaggerated discourses have in common is their reliance on language tropes about AI technology, tropes that stem from the history and contours of the field. The discussion below offers some tips to keep in mind about how we talk about AI, particularly in the context of the pursuit of social and political systems change.

Anthropomorphizing AI

We think about AI as human-like, imbuing it with meaning-making qualities that encourage us to treat this technology as having a life of its own. But AI technology is actually driven by its human creators. This regularly comes across in the language used around AI:

For example, “Artificial Intelligence” is itself a term loaded with meaning, yet many critics have pointed out that ChatGPT and other AI tools like it are poorly described by this moniker. The name is buttressed by the widespread but mistaken belief that these tools are akin to non-human brains, mimicking neural processes and even emotions. This is not the case. These tools (large language models) are in fact highly complex probability calculators that simply choose the most likely next word based on what has come before. That is computation, not human intelligence. This basic argument is summed up by critics who describe ChatGPT as a “stochastic parrot”: a parroting device that strings words together based on the probabilistic distribution of words in context.
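To make that concrete, here is a minimal, illustrative sketch of next-word selection in Python. The tiny vocabulary, the scores, and the example context are invented for demonstration only; real large language models perform the same kind of calculation over tens of thousands of tokens with billions of learned parameters, but the principle is the same.

    import math
    import random

    # Toy illustration, not a real model: a language model assigns a score
    # (a "logit") to every candidate next word given the words so far.
    # These candidate words and scores are made up for this example.
    context = "The grant proposal was"
    candidate_logits = {
        "approved": 2.1,
        "rejected": 1.3,
        "submitted": 0.9,
        "purple": -3.0,
    }

    # A softmax turns the raw scores into a probability distribution.
    max_logit = max(candidate_logits.values())
    exp_scores = {w: math.exp(s - max_logit) for w, s in candidate_logits.items()}
    total = sum(exp_scores.values())
    probs = {w: v / total for w, v in exp_scores.items()}

    # The tool then draws the next word from that distribution: no intention,
    # no understanding, just a sample from probabilities over word sequences.
    next_word = random.choices(list(probs), weights=list(probs.values()))[0]
    print(context, next_word)

The point of the sketch is simply that every apparent “decision” reduces to arithmetic over patterns learned from training data.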

The assumption of human-like intelligence implies intention, the weighing of pros and cons, understanding of parallel themes, and even the experience of sensory phenomena. None of these assumptions is correct. So what political, social, and ethical implications play out when we impute human-like qualities to intricate calculators?

Instead:
Some AI experts, such as Luc Julia, have suggested that “augmented intelligence” is a more accurate term, since it implies that these tools augment human intelligence. This shifts our attention back to those who have agency, and responsibility, in the unfolding drama of AI: human tech creators and programmers. It is essential to keep in mind that humans control AI and have the power to determine the future of AI technology.

Another example of anthropomorphizing AI is the naming of errors by ChatGPT and similar tools as “hallucinations.” By using this term, we associate factual inaccuracies produced by a chatbot with the human sensory experience of perceiving something that is not actually present. This language choice produces the impression that AI has brain-like qualities. Our reality today includes an erratic push toward Star Wars androids like C-3PO, but the language we use to talk about it gives the impression that C-3PO is already here.

Despite our tendency to assume human-like qualities in AI tools, many people are prone to overlooking their most human (and problematic) qualities: the biases inherent in the underlying data used to train them. Given the mounting evidence of such biases, we have an ethical obligation to flag factual errors and retrain AI systems to mitigate bias. Using language that obscures the very existence of errors, such as referring to them as “hallucinations,” weakens our ability as a society to build better systems because it avoids speaking plainly about the problems at hand.

Instead:
Let’s use plain language to say what we mean. The term “hallucination” is misleading. We are actually talking about factual errors that are difficult to detect because the technology was not built to catch or flag factual inaccuracies. This fact alone encourages a critical stance when thinking through the use of this technology in conjunction with search engines and other everyday tools.

Mystifying AI

A common narrative holds that the achievements of generative AI tools like ChatGPT are so impressive, so exponentially more successful than those of earlier generations, that they push beyond rational explanation. In fact, this aura of magical performance has been contested within the field since the early 1950s, when AI discussions at technical gatherings were still purely theoretical. Today the mystique surfaces in news stories and tweets, and even in statements by those designing the systems. For example, Palantir CEO Alex Karp recently wrote: “It is not at all clear — not even to the scientists and programmers who build them — how or why the generative language and image models work.”

Language that mystifies AI influences the way we assume humans interact with these new technologies. It obscures the very real human labor behind the massive datasets used to train tools like ChatGPT. When we talk about ChatGPT and tools like it as producing superhuman results that cannot be fully explained, we ignore some of what is documented about how these accomplishments come about. For example, there is mounting evidence that companies in the AI supply chain are directly facilitating relentless exploitation of, and harm to, the workers in Kenya, India, and the Philippines who moderate obscene and dangerous content.

Part of the magic behind ChatGPT and other tools is their apparent ability to choose the right words, images, or soundbites and to (appear to) make accurate guesses much of the time. This is accomplished through human content moderators poring over and tagging the internet’s most vile and disturbing content so the AI tools will later recognize it as inappropriate. Initial reports like those linked above indicate that this is taxing, psychologically damaging work. In this case, “magic” is actually human exploitation.

Campolo and Crawford succinctly identify this mystifying dynamic as “enchanted determinism.” Even without their references to social scientific theory, the name is apt given our collective enchantment with AI accomplishments and the immediacy with which the concept-to-market push is producing real-life effects that determine outcomes for many. Some of these effects are clearly harmful: the flaws of facial recognition technology, for example, or the proposed use of AI to provide medical advice.

A related narrative is that AI technology is quickly developing beyond our control. Its magical, indiscernible qualities feed the myth of possible, or current, autonomous evolution, a concept often referred to as artificial general intelligence. But critical analysis of recent AI advancements has pointed out that by spreading the story that AI technology will soon be out of our control, we obscure the very concrete responsibility and oversight that tech developers do have. It also reduces attention to the question of why AI developers claim it is “impossible” to know how AI tools produce their results. The reality is that AI is neither autonomous nor inscrutable: it is developed by tech industry leaders who can choose to put resources toward understanding its causal mechanisms. Calls for government regulation conveniently allow those individuals, and their companies, to obscure their responsibilities and position themselves as the experts needed to guide government action. This posturing comes at the expense of the scholars, community activists, journalists, and others currently critical of their interests and claims.

Instead:
Let’s not propagate the story of AI as beyond human understanding. When we notice this theme in our communications environments, let’s name it directly and adopt language that supports demands for human responsibility and accountability in the development of these technologies.

This means choosing to ask what power and labor relations lie behind claims of non-human expertise and unknowability. Scrutiny of this narrative also reminds us to probe the construction of AI tools themselves by paying attention to how bias enters these systems. Oversight of this kind needs to be democratized, and we have a role to play.

Finally, let’s tell different stories through our communications: stories that set aside dubious guidance from companies profiting from AI’s development and instead amplify organizations, scholar-practitioners, and journalists dedicated to thinking ethically about AI and the collective social good.

Schedule a confidential consultation to learn how our strategic communications offerings can elevate your organization’s impact.