Is AI worth it for nonprofit communications?

In 2023, we shared some of our insights about the novelty and pitfalls of society’s collective approach to AI and why it is important to examine how we talk about AI. There has been, and continues to be, significant hype about large language models (LLMs), popularized first by ChatGPT and now a core offering from a wide range of technology companies. Since we originally wrote on the topic, one significant change has been the adoption and integration of ChatGPT-like technology across nearly every major suite of online products, from search engines to customer service chatbots to virtual therapists.

Although many of these products are framed as offering something new and innovative, now that we have had time to observe AI and reflect on its use in society, recent data shows that it is much like the systems and technology tools that came before it. This is particularly true with respect to representation and structural inequality.

Public opinion data suggests that issues of representation and structural inequality are built into AI systems in ways that mimic other areas of society and the institutions that have long been subject to scrutiny by the nonprofit and social change sector. Newly released data from the Pew Research Center shows that the public is significantly more concerned about the increased use of AI than AI experts are: 51% of the public say they are “more concerned than excited,” while the trend among experts is nearly the opposite, with 47% indicating they are more excited than concerned. Such wariness might have been understandable for a brand-new technology, but public concern has actually increased over time.

We see trends in excitement about AI that reflect the conventional structural inequalities many have flagged in AI development and in the technology industry at large. These predictable dynamics of bias and inequity show up in two ways in the newly released data:

Gender

Among both AI experts and the public, “women are far less likely than men to say women’s perspectives are well accounted for in AI design.”

  • In the general population, 22% of women and 33% of men say that women’s perspectives are well accounted for.
  • Among AI experts, 27% of women but 50% of men say that women’s perspectives are well accounted for.
  • In the general adult population, there is no significant difference between women and men in the share who perceive that men’s experiences and views are taken into account. Among AI experts, however, 73% of men agree that the views and experiences of men are taken into account “very or somewhat well” (17 percentage points higher than any other group), and 86% of AI experts who are women believe that men’s views and experiences are “taken into account very or somewhat well.”

Among both the public and AI experts, men report being:

  • More excited than women about the increased use of AI in daily life
  • More excited than concerned about AI in daily life 

Among AI experts, women are significantly more concerned than their male counterparts about data misuse, bias, and inaccurate information produced by AI, reporting levels of concern 2 to 20 percentage points higher across these issues.

Race and Ethnicity

Both groups, experts and the public at large, report that “White adults’ views are better represented than other racial or ethnic groups’ when it comes to AI design.” Among AI experts, 73% indicated that “the people designing AI take the experiences and views of White adults into account at least somewhat well.” Only “half say the same for Asian adults,” with even smaller shares saying so about Black (27%) and Hispanic (25%) adults.

What story do these numbers tell us? 

Taken together, these numbers demonstrate several themes that are familiar to anyone seeking to advance social change: 

  • People who benefit from systems and are embedded in them are less likely to examine them critically.
  • The experiences and views of those at the top of the social hierarchy are embedded into AI, which thus mimics long-standing institutions that emerged over the last few centuries (e.g., religious institutions, law, academia, and media). The experiences and views of White people, and of men, take precedence over those of women overall, and of Black and brown men and women in particular.

Rather than something entirely new that can magically solve the problems previous technologies and social systems have grappled with, AI is shaping up to be a mirror, reflecting and reinforcing the structural biases of the human-built systems that preceded it.

What does this mean for the nonprofit and philanthropy sectors? 

The use, maintenance, and impact of AI raise a wide range of ethical questions. Nonprofit and philanthropic organizations may be tempted to adopt these tools to ease an ever-present underlying challenge: intense competition for resources that must be allocated across a growing number of priorities. That pressure will only grow as the sectors continue to come under attack by the current administration. Within this context, cutting costs by using AI tools to develop communications or other work can be tempting. However, anyone working in the social change sector should consider the following when making these decisions for their organizations:

  • Put in the extra work

    Given what we know to date about AI systems, using them in a nonprofit or philanthropic context requires additional scrutiny and effort to address bias, environmental impact, and the other ethical concerns stemming from their use. Treat this as extra work your staff must take on to compensate for the known shortcomings of LLMs such as ChatGPT. Once you count the cost of checking for errors, reviewing for bias, and finding and supplying the additional source material you want the LLM to draw on, these tools may not be saving time, resources, or energy at all. As a purely hypothetical illustration: if an LLM turns a two-hour drafting task into 15 minutes but fact-checking and bias review add another 90 minutes, the net saving is only 15 minutes, which may not justify the risks.
  • Observe

    As with all communications efforts, it is important to observe the impacts and outcomes of changes to your messaging or communications practices. If your organization does decide to adopt AI tools in communications, take stock of baseline measures before you make the change: audience sentiment, responsiveness to calls to action, engagement with various types of content, and so on. This will give you the ability to observe, in real time and through ongoing measurement, the impact of using AI in your messaging; one simple way to compare before and after is sketched below this list. Having this data will allow you and your teams to make informed, proactive decisions about whether the shift has helped you better meet your organizational objectives, and to weigh the risks, challenges, or costs that may accompany any documented benefits.
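
For teams that already export content metrics, for example from an email platform or a social media dashboard, a simple before-and-after comparison can make this concrete. Below is a minimal sketch in Python; the file names and column names (engagement_rate, cta_click_rate) are hypothetical stand-ins for whatever your analytics tools actually produce, not references to any specific platform.

```python
# A minimal sketch for comparing communications metrics before and after
# adopting AI tools. File names and column names are hypothetical;
# adapt them to whatever your analytics exports actually contain.
import csv
from statistics import mean

def load_metrics(path):
    """Average per-content metrics from a CSV export.

    Assumes hypothetical columns 'engagement_rate' and 'cta_click_rate',
    one row per published piece of content, values as decimals.
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return {
        "engagement": mean(float(row["engagement_rate"]) for row in rows),
        "cta_clicks": mean(float(row["cta_click_rate"]) for row in rows),
    }

# Baseline: content published before the change.
baseline = load_metrics("pre_ai_content_metrics.csv")
# Ongoing measurement: content published after adopting AI tools.
current = load_metrics("post_ai_content_metrics.csv")

for metric, before in baseline.items():
    after = current[metric]
    change = (after - before) / before * 100  # percent change vs. baseline
    print(f"{metric}: {before:.3f} -> {after:.3f} ({change:+.1f}%)")
```

Audience sentiment is harder to quantify, but even a rough comparison like this gives leadership something firmer than impressions when deciding whether to keep, adjust, or roll back an AI-assisted workflow.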

Leaders who aspire to advance equitable change should take the data discussed here as a strong indication that, although AI may seem novel, in many ways it presents the same challenges the sector has been working to address for decades, and that experts in AI ethics have been flagging for some time now. Remember: sometimes what appears to be new and shiny is just more of the same.

If you need help with your communications, schedule a confidential consultation with The Wakeman Agency to learn how our strategic communications offerings can elevate your organization’s impact.