Are you a non-profit organization dedicated to empowering women and/or families in the Washington D.C. metropolitan area? Do you have ambitious communication goals but lack the resources to achieve them?
Washington Women in Public Relations (WWPR) is excited to announce our search for a new pro bono client for a two-year partnership, commencing in January 2026.
This is a unique opportunity to elevate your organization’s mission and amplify your impact. Our team of experienced communications professionals offers a comprehensive range of services, including:
Strategic communications planning
Media relations and training
Brand development and messaging
Executive communications
Digital communications and website support
Social media strategy and engagement
Event planning and promotion
And more!
We’re seeking a partner who is ready to collaborate closely with us to achieve their key communications objectives, enhance their visibility, and ultimately strengthen their ability to serve the community. As part of the engagement, WWPR will conduct a communications audit to evaluate existing key messages, marketing materials, social media sites, and other relevant platforms, and provide recommendations for key objectives and priorities.
Eligibility Requirements
To be considered, organizations must:
Be based or headquartered in the Washington D.C. metropolitan area.
Focus on serving women and/or families.
Hold a 501(c)(3) status.
Have been operating for at least 24 months.
Designate a dedicated point person to work directly with the WWPR team.
Ready to Apply?
Don’t miss this chance to transform your communications efforts. Applications are due by 11:59 p.m. on Friday, August 29th, 2025.
You may also download a Google Docs version of the form here to draft your responses before submitting. Please note that all nominations must be submitted via the Google Form; emailed submissions will not be accepted.
By LeAnne DeFrancesco, vice president at Vanguard Communications
If you make a living from writing, your hackles have likely been up for a while about artificial intelligence (AI). I don’t mean the customer service chatbots asking, “Can I help you find what you’re looking for?” on a website, or even the expanding search capabilities. I’m talking about generative AI that puts together words and phrases that make sense, are creative and do the legwork of writing in an instant.
Wait – Was that my career that just flew out the window?
Writers, be comforted. Your career is intact, but there’s no reason you can’t make AI your bestie. In fact, we know that writers and authors are using AI as a creative and organizing tool, not a replacement for the beautiful subtleties and nuances of language and persuasion that only humans can provide.
And that’s really where the intersection of ethical AI and writing lies: Can we find a way to use AI to help us do our jobs better without passing it off as the work product of a trained, experienced author?
Think of it this way: The internet has made research vastly more accessible. It provides a shortcut TO the research; it isn’t doing the writing. And that’s how we can be ethical AI users, by using the tool, not the talent.
The Guardrails
A few weeks back I was putting together a piece of marketing content that I had written in one form or another at least 100 times. It’s so familiar to me, yet I couldn’t find the words to start. Maybe at this point in my career I have too many reference points.
So, I sat looking at the screen for about 10 minutes and then decided to plug my carefully engineered prompt into ChatGPT. Et voilà! The results reminded me of what I wanted to say, not necessarily HOW I wanted to say it, but they gave me the bones.
That’s Guardrail #1: Never let an AI-generated piece of content speak for you.
When you do that, you give it creative control. Let it gather and summarize, then take the wheel yourself to make it a great piece of content that reflects your expertise, insights and understanding of the assignment.
Guardrail #2: Make sure you can stand behind your work product.
AI can be famously wrong. In a process called “hallucinating,” AI tools can actually make things up when they don’t know the answers. So even if you get something out of ChatGPT that sounds great and on point, fact-check it. Writers can find a lot of inspiration in AI tools but should never depend on their outputs. Credibility can be lost in an instant.
Guardrail #3: Be transparent.
I think the communications world and all the clients we support are quite aware that AI is in the mix and is likely used every day in some way, shape or form. That doesn’t mean they are excited about it. In fact, some of our clients are extremely wary about what it can mean for their organizations. At Vanguard, we are intentional about how we use AI. When it has been a companion to us in a work product, we say so. (e.g., “We used AI to help us brainstorm these campaign names.”)
For brainstorming purposes, AI is just a few steps away from plugging ideas into Google, so it feels like a natural and “safe” step for us. But we still always reveal it, and our clients appreciate it.
Our Value Continues
Organizations want to see words that reflect our experiences and knowledge, and that expertise goes deeper than what the internet can uncover. AI wasn’t there for the conversation I had with a colleague that made me pivot on a campaign direction. And it wasn’t part of the meeting I had with my client last week where they gave me feedback on an op-ed draft. Those are my experiences, which will lead to better drafts.
And that’s how I can sleep at night. AI is not omnipotent, but using it ethically is a muscle we are all going to have to strengthen. Right now, while large language models are still relatively new, it’s a good time to set up a healthy relationship with AI.
Here are some tips that helped me get comfortable with AI as a writing partner:
Start small, giving whatever platform you like a few exploratory tasks—even personal ones that carry no professional risk. (I had ChatGPT develop an invitation to a summer solstice party I was throwing last summer, just to see where it landed and where it missed.)
Watch webinars and videos on what AI can really do. I use it sparingly in client work, but as noted earlier, it has some real potential to save time on marketing tasks, which feels like a better role for it.
Be curious. To trust AI and use it well, you are going to have to lean into it. Research the pitfalls and know where you need to draw the line.
AI was not used in the development of this blog post.
LeAnne DeFrancesco is a vice president at Vanguard Communications in Washington, D.C., where she leads the firm’s Design and Editorial practice. She joined WWPR’s Pro Bono Committee in 2018, where she has helped several D.C.-based organizations enhance their PR and communications activities.
By Gabriela Linares, lead marketing specialist at Washington Gas
As we celebrate Hispanic Heritage Month, I find myself reflecting deeply on my journey as a Latina woman—one shaped by my upbringing, professional challenges, and personal triumphs. From my international childhood to my career in leadership roles, this journey highlights the resilience of Hispanic women while emphasizing the continued fight for equity, transparency, and opportunity in the workplace.
A Global Upbringing: Shaped by Culture and Identity
My story begins in Washington, D.C., where I was born to Venezuelan diplomats. Growing up across cities like Savannah, Georgia; Vienna, Austria; and Madrid, Spain, I had a childhood that was a melting pot of diverse experiences. Summers spent in Venezuela, the Dominican Republic, and Nicaragua further enriched my cultural perspective. This global upbringing shaped my worldview, helping me appreciate the richness of different cultures and making me fluent in Spanish, German, and French. It also taught me to cherish my heritage, and that pride remains at the core of who I am today.
Living in Vienna, Austria, at the age of eight, surrounded by students from over 80 countries at an international school, I learned the true meaning of diversity and inclusion. We were equals, learning from one another’s differences and celebrating them. It was here that the seeds of global unity and respect for all cultures were planted in my heart. Attending the British secondary (high) school at King’s College in Madrid, Spain, among international students, further cemented the importance of global interconnectedness, acceptance, and respect. These formative experiences set the foundation for the rest of my life and career, where I’ve worked to embody those same values.
Throughout my career, I’ve remained deeply connected to my Venezuelan roots. From the language and traditions to the music and food that tie me to my family and culture, these elements have been guiding lights as I’ve navigated the complexities of the professional world. My Venezuelan heritage, especially during Hispanic Heritage Month, reminds me of the strength, vibrancy, and joy that we carry with us wherever we go.
Venezuelan Influence and the Fight for Justice
Growing up with Venezuelan parents, I was also deeply impacted by the political struggles in my home country. The legacy of Simón Bolívar and the ongoing quest for democracy in Latin America shaped my passion for justice, equality, and social responsibility. These struggles are part of a broader context for many in the Hispanic community who strive to create better lives despite political and economic instability. I believe that supporting democracy and human rights is key to addressing migration challenges and providing economic opportunities across borders.
As I reflect on this, I see how my background fuels my dedication to fostering equity in the workplace. We need to create environments that promote fairness, diversity, and inclusion for all, including Hispanic professionals.
Breaking Ground Professionally
Entering the professional world, I quickly realized that my international background offered unique advantages. One of my career-defining moments came when I led the launch of the first Internet Service Provider in the Dominican Republic, a groundbreaking achievement that connected the country to the digital age. This experience, along with my ability to navigate multicultural spaces, led to many successes, including my role as Vice President of Marketing, where I achieved 300% annual growth and earned Deloitte’s Fast 50 awards for five consecutive years.
Yet, despite these accomplishments, my journey has been fraught with challenges. Like many Latina women, I’ve faced discrimination in the workplace. There have been instances where I was passed over for promotions or paid less than my peers despite my qualifications and contributions. These experiences reflect a larger issue that many Hispanic women face—unequal pay and limited opportunities for advancement.
Facing Discrimination: A Personal Story
One particular experience stands out in my mind. After giving a presentation at a company town hall, a cleaning lady I had befriended approached me with a heartfelt message. She told me that when she saw my name called to speak, she prayed I would do well, saying she wanted me to show that Latinas are capable of so much more than society often assumes. That conversation stuck with me. It highlighted the stereotypes that Hispanic women, even in professional settings, still fight to overcome.
It’s moments like these that remind me of the urgency of fighting for pay equity and equal opportunities for Hispanic women in the workforce. We must continue breaking down barriers and challenging the status quo.
The Road Ahead: Pay Equity and Accountability
As we celebrate Hispanic Heritage Month, we must not only honor our heritage but also commit to making lasting change. Hispanic women still earn less than their peers, and many continue to face barriers in career progression. Pay transparency and accountability are crucial steps toward closing the wage gap. It’s not just about fairness in salary; it’s about respect, dignity, and the value of our contributions.
Together, we can advocate for pay equity, push for systemic changes, and create spaces where Hispanic women can thrive. I’m proud of my journey, but there is still much work to do to ensure the road ahead is smoother for the next generation of Latina professionals. As we celebrate Hispanic Heritage Month, let’s uplift one another and continue to push for progress. Our community is resilient, and together, we can achieve the equity we deserve.
Hispanic Heritage Month is a time to celebrate our culture, contributions, and the road ahead. I am proud of my journey and the challenges I’ve overcome, but I remain committed to ensuring that the path is smoother for those who come after me. Pay equity is not just a goal; it is a necessity.
By Allison Gross, associate director, Vanguard Communications
As ChatGPT and other artificial intelligence (AI) tools make their way into our workplaces, many in our firms are equal parts giddy and guarded about what they mean for our short-term tasks and our long-term jobs. And the truth is, the evolution is happening so quickly that any understanding of what AI is and what it does can only be relied upon for a few moments. (I mean, AI is advancing at such a rapid pace that I worry this blog post might feel outdated by the time you finish it!)
Through our research and our focus on capitalizing on all that AI has to offer, we believe the challenges we need to address with policies are clear: ethics, privacy, misinformation, and transparency.
Not surprisingly, AI usage hit a new peak in 2024. According to a McKinsey survey, 65% of respondents report their organizations regularly use generative AI, nearly double the rate from just 10 months earlier. However, alongside this rapid growth, many users have already experienced some negative impacts, with 44% of those respondents indicating they have faced at least one problem related to AI use.
Implementing an ethical AI policy is a critical step to harness the benefits of generative AI while mitigating its risks, and that has been our focus for the past six months.
We have developed a robust AI policy, including an AI FAQs document that provides guidance and best practices for use on all client-related activities, as well as an AI tool vetting checklist, understanding that not all AI tools are created equal. Creating these resources was a challenging process given the constantly changing landscape of AI. However, it was necessary to provide our staff with parameters that enable them to use AI ethically in this moment and ensure that their usage aligns with our values.
Here is how we tackled the AI opportunity.
Assemble an AI team
Creating an ethical AI policy should not be a solo mission. We established an AI task force composed of a diverse team with a range of perspectives. The task force meets regularly to assess a variety of issues related to the constantly changing AI environment, including updating guidelines to better address the unique challenges (and opportunities) presented by AI. We walk through new tools together. We share stories of where AI has gone all wrong (e.g., those emails that you just KNOW were written by a chatbot). And we brainstorm on how to use specific tools so that we aren’t just chasing the bright shiny thing but actually processing how it will help our efficiency without crossing ethical lines.
By bringing together a variety of perspectives, we strive to ensure that our policy is comprehensive and considerate of different viewpoints.
Align your AI policy with your core values
At Vanguard, we pride ourselves on a people-first approach, which means that we use only human voice artists and real people in images and video to the extent possible.
Your organization should determine the ethical principles and values that guide AI development and deployment, such as inclusiveness, human well-being, transparency and accountability. These principles will lay the groundwork for a comprehensive and ethical AI policy. We think of AI as an assistant, never an author. AI should enhance, not replace, human intelligence and decision-making. Your AI policy should also include human oversight to avoid overreliance on automated decisions.
Prioritize inclusivity
Our AI task force created an AI tool vetting checklist that includes three key questions that our staff should consider before using any AI tool:
1) Is the AI tool transparent on how it formulates responses and what dataset/sources it uses?
2) Does the AI tool formulate responses based on hard data?
3) Does the AI tool include features to accommodate users with different types of disabilities (e.g., visual, auditory, cognitive)?
These questions help ensure that whatever AI tool we are considering supports our Diversity, Equity, and Inclusiveness (DEI) efforts because we can identify how the tool is generating its response. It is essential for our staff to understand that AI can replicate the same types of biases exhibited by people.
Train and educate
A standalone policy is not enough. Pair the policy with sessions to educate staff about the ethics surrounding the use of AI. We hold regular learning sessions to help staff stay updated on best practices and new developments in AI. These trainings cover various topics, including AI implications for DEI, using AI ethically in writing, and the basics of our policies.
We also recommend creating a collaborative space (we use a Teams channel) where staff share news, research, upcoming webinars, and their personal experiences, both good and bad, with colleagues.
Promote transparency
Your policy should emphasize transparency in using AI, just as you are transparent about how you bill clients or staff your project teams. Our policy requires that we be open and honest with our stakeholders about how we use AI. If used in the appropriate manner, AI is simply another professional tool that can help us generate ideas and provide direction, but it can take time for organizations to come on board. Be patient with them.
Embrace the iterative process
AI is continuously evolving, and your first policy cannot possibly cover all of your organization’s considerations in using AI. Think about building into your policy a process to regularly review and update it as the environment changes and new AI tools are introduced.
Our work with AI is just beginning, and as PR pros we probably can’t be experts right away, but we can build a framework of understanding for ourselves and for our clients. As we continue to explore new AI tools and technologies, our commitment to ethical practices remains steadfast. In fact, we almost never reference “AI” on its own; in our conversations, it’s always “ethical AI.”
By regularly updating our policies and providing ongoing education for our team, we aim to ensure that our use of AI aligns with our values and serves our mission.
Allison Gross is an associate director at Vanguard Communications in Washington, D.C. and a member of WWPR. She has extensive experience in health communications and marketing activities for government and nonprofit clients. At Vanguard, she is part of the AI task force and oversees the production of communications campaigns and materials for a Medicaid managed care organization in D.C. Before joining Vanguard, Allison led the overall marketing and communications strategy for the Primary Care Collaborative (PCC). Prior to PCC, she developed and executed communications campaigns to promote the 340B Drug Discount Program at the American Pharmacists Association.
ChatGPT has become my best friend. I know it will deliver what I need in a pinch—even if it’s not perfect. A quick list of TV and radio stations serving Bristol, TN? Check. Ten recipes using the quickly expiring root vegetables in my fridge? Done!
Much like human relationships, I’m aware of AI’s limitations and vulnerabilities—and I always have an eye out for signs that the relationship might be toxic.
A University of California, Riverside study showed that 20 to 50 ChatGPT queries use around a half liter of fresh water in the form of steam emissions, but a study in Nature highlighted a tradeoff in the other direction: AI tools emit between 130 and 1,500 times less carbon per page of text generated than human writers. Considering all the tradeoffs can be overwhelming, and it’s easy to see why many in PR are concerned about exploring AI use in their work.
Understanding how to assess the ethics of AI use in PR is crucial, particularly as we’re being doused with a firehose of AI innovation. Our firm’s AI task force has spent the past six months exploring environmental, copyright and other well-known issues to guide our PR colleagues and clients. We developed five key questions to help us determine whether a tool meets our ethical standards.
Who is impacted if we use this tool?
The personal and professional impact of AI tools can be far-reaching and fraught with competing barriers and benefits. For example:
Are there trained professionals who are losing opportunities because they’re being replaced with AI?
Is it ethical to use an AI-generated actor, voice or model in place of the real thing?
Are text or images being generated from work that was originally created by someone else?
Fully replacing human performers and writers devalues human artistry and eliminates the depth of emotion and authenticity that AI cannot replicate. However, there are incredible opportunities for AI to help us work faster and smarter. AI tools can boost opportunities for a PR team by giving them a chance to move beyond repetitive, mundane tasks that don’t allow them to fully use their skills and creativity. Evaluating these issues related to each tool is a critical part of practicing ethical AI.
Who is left out if we use this tool?
Dr. Joy Buolamwini has spent her career unmasking the coded gaze of technology, where baked-in prejudice abounds, including bias related to race and gender. AI algorithms have offered lower credit limits to women and incorrectly flagged Black defendants as future criminals at twice the rate of white defendants. While AI shows potential in bridging barriers for people with disabilities, it may not fully address their diverse needs. When assessing these tools, research how the AI was trained and how its developers monitor for bias. Most importantly, ensure developers maintain a continuous feedback loop with users to quickly identify and correct biases.
How does this tool help us pursue our mission?
Despite concerns about AI conflicting with organizational ethics, AI can significantly enhance the pursuit of a mission. Its ability to quickly analyze large datasets allows for more efficient monitoring of trends and challenges. Even with the potential environmental toll of AI, it’s currently being deployed to precisely determine the most critical areas of need related to deforestation and climate change. Schools can use AI to track individual student progress and pinpoint specific interventions to help. However, overreliance on AI can diminish human interaction, leading to a loss of empathy and understanding in sensitive situations, potentially alienating donors or customers. AI’s capabilities might also cause mission drift by shifting focus to data and metrics over qualitative activities that support the core mission. Selecting AI tools for PR should involve weighing their benefits and risks against the organization’s goals.
What do our employees need to maximize use of this tool?
Diving into new AI tools can be exhilarating—and encouraging employees to experiment is an important part of gaining enthusiasm and support for advancing technology. The downside is that, without clear guardrails and a training plan, users can quickly find themselves in an ethical pickle. ChatGPT is the perfect example of a free tool that offers endless possibilities for generating information, yet headlines about misuse, plagiarism and poor data abound. Start with a policy that outlines basic organizational operating principles related to AI. For each new tool, take the time to educate and inform employees about the functions and benefits, then stay in touch over the first few weeks to determine whether there are any operational or ethical concerns. Provide training and support for those who are less comfortable with recent technology and highlight success stories when the tool improves processes or outcomes. Finally, be open to criticisms or concerns about the tool and its impact on employee growth and satisfaction.
What are the ultimate risks and harms of using this tool?
There will be AI tools for PR that, no matter how exciting and helpful, just don’t meet the ethical or privacy standards for an organization. Many tools haven’t addressed some very real concerns about bias, while others operate in open systems where your information and data become part of the algorithm’s training. Some PR tools are currently free, but what happens when a department makes them part of its operation, and the tool suddenly has a subscription fee? Practicing ethical AI in PR starts with a thoughtful, purposeful approach that considers the tough questions about privacy and security, bias and fairness, copyright, costs and mission alignment.
There are certainly tremendous tradeoffs to consider related to using AI in PR, but that doesn’t mean we should ignore AI and hope it goes away. Like computers, the internet and social media, these tools will evolve in exciting and unexpected ways. Only with our eyes open can we forge the path toward the ethical use of them in our profession.
Brenda K. Foster, M.P.A., is a senior vice president at Vanguard Communications in Washington, D.C., and an instructor for the graduate program at American University’s School of Communication. She was named a PR News Top Woman in PR and was a finalist for WWPR Woman of the Year.