Author and AI: The Perfect Pair for Advocacy Communications

By LeAnne DeFrancesco, vice president at Vanguard Communications

When I joined my company’s AI Task Force in early 2024, I knew I was going to be the skeptic on the team. I had an open mind about AI tools for other tasks, but for writing? I was not enthused, mostly because I assumed AI outputs would be stale, cold and only tell half the story.

To me, writing is a sacred process that not just anyone can do. You may learn technique and pick up style, but I still believe that people are born with a storytelling brain, or they aren’t. They know how to thread together thoughtful content that keeps readers interested, or they don’t. They can put themselves in the shoes of their audiences and deliver what they need to know and care about, or they can’t.

So asking a machine to create a truly original, nuanced piece of content seemed ludicrous.

At the same time, I saw — like everyone else did — the media headlines that gave me pause.

Dynamic Duo

The more I researched, read, talked to colleagues and consulted with peers about how they use AI to help them generate ideas or be a sounding board for their content, the more my stance on AI softened. Particularly in an industry with unforgiving deadlines and late-breaking curveballs, if AI could help me deliver for my boss or my clients by providing small, research-y shortcuts — without being dishonest or misleading — why not give it a go?

The same is true in advocacy communications. More often than not, staff at advocacy organizations are stretched thin. There is a lot to develop in a short window of time. Breaking news on their issue changes their day in an instant. Response time must be quick, yet messaging must be on point. They need shortcuts without sacrificing the message.

If your goal is to change people’s hearts and minds about something, it’s not enough to just deliver the facts and summarize, which is what AI is good at. You need context, emotion and a personal story. You need impact, examples and turns of phrase that resonate with people. You might need humor, you might need shock. This is what humans are good at.

So really, combining robot with writer (or AI with author) is a perfect coupling.

Don’t Be Talked Into Breaking Up

There have been many threads on social media, blogs and podcasts pitting writers against AI. But in my view, there is no fight here, as long as leaders know how to use AI ethically and provide guidance for their employees to do the same.  

  • AI is good at some things, like searching online information quickly, and should be embraced for that quality. It can be clutch.
  • Humans are good at other things, like providing the “color commentary” around issues and making things personal, relatable and memorable.

Writing is indeed an art, and AI is a science. A tool. A technology that can help us get to our beautiful prose and thoughtful executions more quickly.

Which, for those of us in PR and advocacy communications, sounds dreamy.

LeAnne DeFrancesco is a vice president at Vanguard Communications in Washington, D.C., where she leads the firm’s Design and Editorial practice. She joined WWPR’s Pro Bono Committee in 2018, where she has helped several D.C.-based organizations enhance their PR and communications activities.

Ethical AI: Policy Development Tips for PR Professionals 

By Allison Gross, associate director at Vanguard Communications

As ChatGPT and other artificial intelligence (AI) tools make their way into our workplace, many in our firms are equal parts giddy and guarded about what it means for our short-term tasks and our long-term jobs. And the truth is, the evolution is happening so quickly that any grasp of what AI is and what it does holds up for only a few moments. (I mean, AI is advancing at such a rapid pace that I worry this blog post might feel outdated by the time you finish it!)

Through our research into capitalizing on all that AI has to offer, we believe the challenges we need to address through policy are clear: ethics, privacy, misinformation, and transparency.

Not surprisingly, AI usage hit a new peak in 2024. According to a McKinsey survey, 65% of respondents report that their organizations regularly use generative AI – nearly double the rate from just 10 months earlier. However, alongside that rapid growth, many users have already experienced negative impacts: 44% of those respondents indicated they have faced at least one problem related to AI use.

Implementing an ethical AI policy is a critical step to harness the benefits of generative AI while mitigating its risks, and that has been our focus for the past six months. 

We have developed a robust AI policy, including an AI FAQs document that provides guidance and best practices for all client-related activities, as well as an AI tool vetting checklist (understanding that not all AI tools are created equal). Creating these resources was a challenging process given the constantly changing landscape of AI. However, it was necessary to give our staff the parameters that enable them to use AI ethically in this moment and ensure that their usage aligns with our values.

Here is how we tackled the AI opportunity. 

Assemble an AI team

Creating an ethical AI policy should not be a solo mission. We established an AI task force that brings together a diverse team and a range of perspectives. The task force meets regularly to assess issues related to the constantly changing AI environment, including updating guidelines to better address the unique challenges (and opportunities) presented by AI. We walk through new tools together. We share stories of where AI has gone wrong (e.g., those emails that you just KNOW were written by a chatbot). And we brainstorm on how to use specific tools so that we aren't just chasing the bright shiny thing but are actually assessing how it will help our efficiency without crossing ethical lines.

By bringing together a variety of perspectives, we strive to ensure that our policy is comprehensive and considerate of different viewpoints.

Align your AI policy with your core values

At Vanguard, we pride ourselves on a people-first approach, which means that we use only human voice artists and real people in images and video to the extent possible. 

Your organization should determine the ethical principles and values that guide AI development and deployment, such as inclusiveness, human well-being, transparency and accountability. These principles will lay the groundwork for a comprehensive and ethical AI policy. We think of AI as an assistant, never an author. AI should enhance, not replace, human intelligence and decision-making. Your AI policy should also include human oversight to avoid overreliance on automated decisions.

Prioritize inclusivity

Our AI task force created an AI tool vetting checklist that includes three key questions that our staff should consider before using any AI tool:

1) Is the AI tool transparent on how it formulates responses and what dataset/sources it uses? 

2) Does the AI tool formulate responses based on hard data? 

3) Does the AI tool include features to accommodate users with different types of disabilities (e.g., visual, auditory, cognitive)? 

These questions help ensure that whatever AI tool we are considering supports our Diversity, Equity, and Inclusiveness (DEI) efforts, because we can identify how the tool generates its responses. It is essential for our staff to understand that AI can replicate the same types of biases exhibited by people.

Train and educate

A standalone policy is not enough. Pair the policy with sessions to educate staff about the ethics surrounding the use of AI. We hold regular learning sessions to help staff stay updated on best practices and new developments in AI. These trainings cover various topics, including AI implications for DEI, using AI ethically in writing, and the basics of our policies. 

We also recommend creating a collaborative space (we use a Teams channel) where staff share news, research, upcoming webinars, and their personal experiences, both good and bad, with colleagues.

Promote transparency

Your policy should emphasize transparency in using AI, just as you are transparent about how you bill clients or staff your project teams. Our policy requires us to be open and honest with our stakeholders about how we use AI. Used appropriately, AI is simply another professional tool that can help us generate ideas and provide direction, but it can take time for organizations to come on board. Be patient with them.

Embrace the iterative process

AI is continuously evolving, and your first policy cannot possibly cover every consideration your organization will face in using AI. Consider building into your policy a process to regularly review and update it as the environment changes and new AI tools are introduced.

Our work with AI is just beginning, and as PR pros we probably can’t be experts right away, but we can build a framework of understanding for ourselves and for our clients. As we continue to explore new AI tools and technologies, our commitment to ethical practices remains steadfast. In fact, we almost never reference “AI” on its own; in our conversations, it’s always “ethical AI.”

By regularly updating our policies and providing ongoing education for our team, we aim to ensure that our use of AI aligns with our values and serves our mission.

Allison Gross is an associate director at Vanguard Communications in Washington, D.C., and a member of WWPR. She has extensive experience in health communications and marketing activities for government and nonprofit clients. At Vanguard, she is part of the AI task force and oversees the production of communications campaigns and materials for a Medicaid managed care organization in D.C. Before joining Vanguard, Allison led the overall marketing and communications strategy for the Primary Care Collaborative (PCC). Prior to PCC, she developed and executed communications campaigns to promote the 340B Drug Discount Program at the American Pharmacists Association.
