Ethical AI in Living Color – Using AI to Advance Diversity, Equity and Inclusion

By Lelani Clark

Recent news has seen the term Diversity, Equity and Inclusion (DEI) weaponized and backlash against equity-based programs and initiatives intensify. However, taking the temperature of our current climate, I believe the fight has just begun. This highly charged moment is an opportunity for a true culture shift, and AI has the potential to play a key role in driving positive social change.

Artificial Intelligence (AI) is a transformative tool, boosting innovation, efficiency, and productivity across industries. As with any evolving technology, however, it presents both opportunities and challenges. Paired with a DEI lens, AI can become a powerful ally for promoting social justice, especially in communications and cause-related marketing. Specifically, Ethical AI that integrates DEI principles adheres to guidelines that prioritize fundamental human-centered values while avoiding harm. As a result, communicators who use these critical tools can develop communications campaigns that motivate diverse audiences and represent marginalized communities with authenticity, dignity and respect.

AI-driven tools with a DEI focus can analyze language in marketing and communications to detect gender, racial or cultural biases, allowing organizations to refine their messaging to be more inclusive. This ensures that communications are culturally sensitive and resonate with a wider audience. Significantly, DEI-focused AI helps prevent organizations from falling into the trap of performative DEI efforts or the reinforcement of harmful stereotypes.
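
What might such a language check look like under the hood? The sketch below is a deliberately simplified, rule-based illustration in Python. The term list and suggested alternatives are hypothetical examples of my own, not the lexicon of any particular commercial tool; real DEI-focused platforms typically combine much larger, community-reviewed lexicons with trained language models.

    # Minimal, illustrative sketch of a rule-based inclusive-language check.
    # The term list is a hypothetical example, not an authoritative DEI lexicon.
    import re

    SUGGESTIONS = {
        "chairman": "chair or chairperson",
        "manpower": "workforce or staffing",
        "whitelist": "allowlist",
        "blacklist": "blocklist",
    }

    def flag_terms(copy: str) -> list[dict]:
        """Return flagged terms in marketing copy with suggested alternatives."""
        findings = []
        for term, suggestion in SUGGESTIONS.items():
            for match in re.finditer(rf"\b{re.escape(term)}\b", copy, re.IGNORECASE):
                findings.append({
                    "term": match.group(0),
                    "position": match.start(),
                    "suggestion": suggestion,
                })
        return findings

    if __name__ == "__main__":
        draft = "Our chairman thanked the manpower behind the product whitelist."
        for finding in flag_terms(draft):
            print(f"Consider replacing '{finding['term']}' with {finding['suggestion']}.")

In practice, a flag like this is a prompt for a human editor's judgment, not an automatic rewrite.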

Monsters and Ghosts in the Machine

Joy Buolamwini, bestselling author of “Unmasking AI: My Mission to Protect What Is Human in a World of Machines” and an “Ethical AI” firebrand, has been a vocal leader in making the case for DEI-informed AI to increase tech diversity, center marginalized communities and close the digital divide. In her book and lectures, she addresses the “coded gaze” and “coded bias” that dominate the tech industry, highlighting the biases embedded in algorithms that dehumanize BIPOC communities. Faulty facial recognition software, for example, disproportionately targets and racially profiles Black and brown people, turning them into digital boogeymen and phantoms.

As an advocate for “algorithmic justice,” Dr. Buolamwini has pushed for diverse representation at the developer level and for AI systems that combat systemic racism, gender discrimination and ableism. Her book serves as a valuable resource for communicators. We too must be vigilant in ensuring that the tools used to enhance our work are not harmful to the communities and organizations we represent.

The New Digital Culturalists

A new generation of diverse tech leaders is disrupting the traditionally exclusive, white male-dominated “tech bro” industry by building ethical and inclusive AI systems. Large Language Model (LLM) chatbots like Latimer and ChatBlackGPT, along with organizations such as Black AI Think Tank, are at the forefront of advocating for deep inclusion and developing anti-bias AI tools to ensure underserved communities are represented in authentic ways. Their mission is to combat the whitewashing, misrepresentation and erasure of BIPOC histories in technology.

I recently attended the National Black AI Literacy Day event hosted by Black AI Think Tank and ChatBlackGPT’s listening session with industry leaders, which emphasized the need for transparency, diverse representation, culturally sensitive datasets and the development of ethical AI systems. These leaders are advocating for anti-bias tools, policy recommendations, and accountability from Big Tech to ensure more BIPOC developers and tech leaders are included as decision makers in the tech industry, especially at the C-suite level. They are leading an AI revolution, demanding a seat at the table and more skin in the game to make sure BIPOC communities have agency and control the narrative when it comes to preserving historical and cultural accuracy.

The Future of Ethical AI

Incorporating Ethical AI in communications can be instrumental in futureproofing DEI programs and initiatives. To maximize the benefits of ethical and responsible AI while mitigating risks, it’s important for organizations and communicators to follow these best practices:

  • Use inclusive AI tools that monitor bias through equity assessments during the design phase and incorporate diverse training datasets based on various demographics, cultures and perspectives (see the sketch after this list).
  • Ensure transparent AI best practices are used across all departments within an organization, building trust in AI systems by making information about data use and algorithms accessible.
  • Provide ongoing education and training on AI’s ethical implications and opportunities to advance DEI initiatives, ensuring buy-in from leadership and staff.
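
As a concrete, heavily simplified illustration of the first practice above, the Python sketch below reports how well each group is represented in a training dataset and flags groups that fall below a chosen threshold. The field name, groups and threshold are assumptions made for the example; a genuine equity assessment covers far more dimensions, including how the data was collected and labeled.

    # Illustrative representation check for a training dataset (hypothetical
    # field, groups and threshold; not a full equity assessment).
    from collections import Counter

    def representation_report(records: list[dict], field: str, min_share: float = 0.10) -> dict:
        """Return each group's share of the dataset and flag under-represented groups."""
        counts = Counter(record[field] for record in records)
        total = sum(counts.values())
        report = {}
        for group, count in counts.items():
            share = count / total
            report[group] = {"share": round(share, 3), "under_represented": share < min_share}
        return report

    if __name__ == "__main__":
        sample = [
            {"community": "urban"}, {"community": "urban"}, {"community": "urban"},
            {"community": "suburban"}, {"community": "suburban"},
            {"community": "rural"},
        ]
        print(representation_report(sample, "community", min_share=0.2))

A flag here is a signal to revisit data sourcing, not a verdict; the point is to make representation visible early in the design phase.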

AI has the potential to be a game changer in advancing DEI in communications and positioning organizations as change agents, but only if it incorporates a commitment to digital equity and inclusion. By prioritizing ethical AI practices, organizations can ensure AI becomes a transformative force in fostering more inclusion and promoting social justice.

For more hot topics and engaging content on Ethical AI, check out Vanguard Communications’ AI Taskforce blog series.

About Lelani Clark

Lelani Clark is Associate Director and Senior Media Relations Strategist at Vanguard Communications. As a certified DEI advocate, she serves on the AI Taskforce, focusing on the intersection of AI and DEI. Her work centers on using Ethical AI in communications to amplify the voices of BIPOC communities and promote social justice. Ms. Clark is a professional member of WWPR.

Ethical AI for PR: Five Questions to Guide Your Strategy

By Brenda Foster 

ChatGPT has become my best friend. I know it will deliver what I need in a pinch—even if it’s not perfect. A quick list of TV and radio stations serving Bristol, TN? Check. Ten recipes using the quickly expiring root vegetables in my fridge? Done! 

Much like human relationships, I’m aware of AI’s limitations and vulnerabilities—and I always have an eye out for signs that the relationship might be toxic. 

A University of California, Riverside study estimated that 20 to 50 ChatGPT queries consume roughly half a liter of fresh water, largely through evaporative cooling at data centers. On the other side of the ledger, a study published in Nature’s Scientific Reports found that AI tools emit between 130 and 1,500 times less carbon per page of text generated than human writers. Weighing all the tradeoffs can be overwhelming, and it’s easy to see why many in PR are concerned about exploring AI use in their work.

Understanding how to assess the ethics of AI use in PR is crucial, particularly as we’re being doused with a firehose of AI innovation. Our firm’s AI task force has spent the past six months exploring environmental, copyright and other well-known issues to guide our PR colleagues and clients. We developed five key questions to help us determine whether a tool meets our ethical standards. 

Who is impacted if we use this tool?

The personal and professional impact of AI tools can be far-reaching, with competing costs and benefits to weigh. For example:

  • Are there trained professionals who are losing opportunities because they’re being replaced with AI?
  • Is it ethical to use an AI-generated actor, voice or model in place of the real thing?
  • Are text or images being generated from work that was originally created by someone else? 

Fully replacing human performers and writers devalues human artistry and eliminates the depth of emotion and authenticity that AI cannot replicate. However, there are incredible opportunities for AI to help us work faster and smarter. AI tools can give a PR team room to move beyond the repetitive, mundane tasks that keep its members from fully using their skills and creativity. Evaluating these issues for each tool is a critical part of practicing ethical AI.

Who is left out if we use this tool?

Dr. Joy Buolamwini has spent her career unmasking the coded gaze of technology, where baked-in prejudice abounds, including bias related to race and gender. AI algorithms have offered lower credit limits to women and incorrectly flagged Black defendants as future criminals at twice the rate of white defendants. While AI shows potential in bridging barriers for people with disabilities, it may not fully address their diverse needs. When assessing these tools, research how the AI was trained and how its developers monitor for bias. Most importantly, ensure developers maintain a continuous feedback loop with users to quickly identify and correct biases.

How does this tool help us pursue our mission?

Despite concerns about AI conflicting with organizational ethics, AI can significantly enhance the pursuit of a mission. Its ability to quickly analyze large datasets allows for more efficient monitoring of trends and challenges. Even with AI’s potential environmental toll, it is already being used to pinpoint the areas of greatest need related to deforestation and climate change, and schools can use it to track individual student progress and identify the interventions most likely to help. However, overreliance on AI can diminish human interaction, leading to a loss of empathy and understanding in sensitive situations and potentially alienating donors or customers. AI’s capabilities might also cause mission drift by shifting focus to data and metrics over the qualitative activities that support the core mission. Selecting AI tools for PR should involve weighing their benefits and risks against the organization’s goals.

What do our employees need to maximize use of this tool?

Diving into new AI tools can be exhilarating—and encouraging employees to experiment is an important part of gaining enthusiasm and support for advancing technology. The downside is that, without clear guardrails and a training plan, users can quickly find themselves in an ethical pickle. ChatGPT is the perfect example of a free tool that offers endless possibilities for generating information, yet headlines about misuse, plagiarism and poor data abound. Start with a policy that outlines basic organizational operating principles related to AI. For each new tool, take the time to educate and inform employees about the functions and benefits, then stay in touch over the first few weeks to determine whether there are any operational or ethical concerns. Provide training and support for those who are less comfortable with recent technology and highlight success stories when the tool improves processes or outcomes. Finally, be open to criticisms or concerns about the tool and its impact on employee growth and satisfaction. 

What are the ultimate risks and harms of using this tool?

There will be AI tools for PR that, no matter how exciting and helpful, just don’t meet an organization’s ethical or privacy standards. Many tools haven’t addressed very real concerns about bias, while others operate in open systems where your information and data become part of the algorithm’s training. Some PR tools are currently free, but what happens when a department builds them into its operations and the tool suddenly carries a subscription fee? Practicing ethical AI in PR starts with a thoughtful, purposeful approach that considers the tough questions about privacy and security, bias and fairness, copyright, costs and mission alignment.

There are certainly tremendous tradeoffs to consider in using AI in PR, but that doesn’t mean we should ignore AI and hope it goes away. Like computers, the internet and social media, these tools will evolve in exciting and unexpected ways. Only with our eyes open can we forge a path toward their ethical use in our profession.

Brenda K. Foster, M.P.A., is a senior vice president at Vanguard Communications in Washington, D.C., and an instructor for the graduate program at American University’s School of Communication. She was named a PR News Top Woman in PR and was a finalist for WWPR Woman of the Year.
