Ethical AI: Policy Development Tips for PR Professionals 

by Allison Gross, associate director, Vanguard Communications

As ChatGPT and other artificial intelligence (AI) tools emerge in our workplaces, many in our firms are equal parts giddy and guarded about what it means for our short-term tasks and our long-term jobs. And the truth is, the evolution is happening so quickly that any understanding of what AI is and what it does holds up for only a few moments. (I mean, AI is advancing at such a rapid pace that I worry this blog post might feel outdated by the time you finish it!)

Through our research into how to capitalize on all that AI has to offer, we believe the challenges we need to address with policies are clear – ethics, privacy, misinformation, and transparency.

Not surprisingly, AI usage hit a new peak in 2024. According to a McKinsey survey, 65% of respondents report their organizations regularly use generative AI – nearly double the rate from just 10 months ago. But the fast growth has come with growing pains: 44% of those respondents say they have already faced at least one problem related to AI use.

Implementing an ethical AI policy is a critical step to harness the benefits of generative AI while mitigating its risks, and that has been our focus for the past six months. 

We have developed a robust AI policy, including an AI FAQs document that provides guidance and best practices for all client-related activities, as well as an AI tool vetting checklist (understanding that not all AI tools are created equal). Creating these resources was a challenging process given the constantly changing landscape of AI. However, it was necessary to give our staff the parameters that enable them to use AI ethically in this moment and ensure that their usage aligns with our values.

Here is how we tackled the AI opportunity. 

Assemble an AI team

Creating an ethical AI policy should not be a solo mission. We established an AI task force that brings together a diverse mix of roles and perspectives. The task force meets regularly to assess issues arising from the constantly changing AI environment, including updating our guidelines to better address the unique challenges (and opportunities) AI presents. We walk through new tools together. We share stories of where AI has gone all wrong (e.g., those emails that you just KNOW were written by a chatbot). And we brainstorm how to use specific tools so that we aren’t just chasing the bright shiny thing but actually assessing how each one will improve our efficiency without crossing ethical lines.

By bringing together a variety of perspectives, we strive to ensure that our policy is comprehensive and considerate of different viewpoints.

Align your AI policy with your core values

At Vanguard, we pride ourselves on a people-first approach, which means that, to the extent possible, we use only human voice artists and real people in images and video.

Your organization should determine the ethical principles and values that guide AI development and deployment, such as inclusiveness, human well-being, transparency, and accountability. These principles will lay the groundwork for a comprehensive and ethical AI policy. We think of AI as an assistant, never an author. AI should enhance, not replace, human intelligence and decision-making. Your AI policy should also include human oversight to avoid overreliance on automated decisions.

Prioritize inclusivity

Our AI task force created an AI tool vetting checklist that includes three key questions that our staff should consider before using any AI tool:

1) Is the AI tool transparent on how it formulates responses and what dataset/sources it uses? 

2) Does the AI tool formulate responses based on hard data? 

3) Does the AI tool include features to accommodate users with different types of disabilities (e.g., visual, auditory, cognitive)? 

These questions help ensure that any AI tool we are considering supports our Diversity, Equity, and Inclusiveness (DEI) efforts, because they reveal how the tool generates its responses. It is essential for our staff to understand that AI can replicate the same types of biases exhibited by people.

Train and educate

A standalone policy is not enough. Pair the policy with sessions to educate staff about the ethics surrounding the use of AI. We hold regular learning sessions to help staff stay updated on best practices and new developments in AI. These trainings cover various topics, including AI implications for DEI, using AI ethically in writing, and the basics of our policies. 

We also recommend creating a collaborative space (we use a Teams channel) where staff share news, research, upcoming webinars, and their personal experiences, both good and bad, with colleagues.

Promote transparency

Your policy should emphasize transparency in using AI, just as you are transparent about how you bill clients or staff your project teams. Ours requires us to be open and honest with our stakeholders about how we use AI. Used appropriately, AI is simply another professional tool that can help us generate ideas and provide direction, but it can take time for organizations to come on board. Be patient with them.

Embrace the iterative process

AI is continuously evolving, and your first policy cannot possibly cover every consideration your organization will face in using AI. Consider building into your policy a process to regularly review and update it as the environment changes and new AI tools are introduced.

Our work with AI is just beginning, and as PR pros we probably can’t be experts right away, but we can build a framework of understanding for ourselves and for our clients. As we continue to explore new AI tools and technologies, our commitment to ethical practices remains steadfast. In fact, we almost never reference “AI” on its own; in our conversations, it’s always “ethical AI.”

By regularly updating our policies and providing ongoing education for our team, we aim to ensure that our use of AI aligns with our values and serves our mission.

Allison Gross is an associate director at Vanguard Communications in Washington, D.C., and a member of WWPR. She has extensive experience in health communications and marketing activities for government and nonprofit clients. At Vanguard, she is part of the AI task force and oversees the production of communications campaigns and materials for a Medicaid managed care organization in D.C. Before joining Vanguard, Allison led the overall marketing and communications strategy for the Primary Care Collaborative (PCC). Prior to PCC, she developed and executed communications campaigns to promote the 340B Drug Discount Program at the American Pharmacists Association.