You Have an AI Policy. Now What?
By Pete Pachal and Melissa Flynn, APR
September 2025
As AI continues to work its way into every facet of business, companies small and large are rushing to create and adopt AI policies.
AI is a powerful but imperfect tool, and it’s important that everyone using it understands its limitations and the potential consequences of misuse. Managing risk is at the heart of every company’s success.
Yet as any good PR person knows, there’s a difference between policy and real-life implementation. So, the question is: Now that you have a policy, how is your company “living” it?
Take this scenario: A member of your team realizes an AI notetaker wasn’t turned off during a client meeting, and the confidential notes were sent to the entire team. Then one of those recipients forwards the notes to people outside the team.
Suddenly, your confidential discussions about an upcoming project are effectively public. How do you deal with this? Does your policy do more than just provide a framework for Responsible AI and a process for identifying issues as they arise?
Consider these four common AI ethical challenges facing PR and communications professionals today. These situations can happen even with a solid AI policy in place, so it’s important to think them through to come up with actionable solutions.
Leaky signs of AI
The most obvious use case for generative AI is to create content. And even though AI has gotten a lot better at writing over the past two-and-a-half years, we’ve all become attuned to the signs of AI text: repeated sentence structure, constantly saying “it’s not this; it’s that,” and em dashes sprinkled all over the place. You don’t want any of that appearing in content that might be client- or public-facing.
The solution: That’s why, when creating any workflow around AI content, it’s imperative to edit out the telltale signs of AI with surgical precision. That means using the right model and tool, getting your prompting just right, and knowing where in the process humans need to be in the loop.
And not just any humans: people with the right expertise to correct AI quirks and to check the facts as well. AI hallucinations are still a problem, and might even be getting worse.
It’s crucial that subject matter experts vet the output of AI — especially if what it says is going to go into the latticework of any message, project or campaign. If it turns out to be wrong, then the whole thing could fall apart.
Fact-check everything. Look back at source data, revisit leadership quotes and make sure key product information is accurate rather than “filled in” by AI.
AI repurposes content, so unless you check new sources and data, challenge assumptions and question language, you’re at risk of passing along incorrect information — with your brand name attached to it.
Lack of transparency
Telltale leaks of AI are a problem, but they’re a much bigger problem if you haven’t set expectations. Using AI and then representing its output as authentically human — either deliberately or by implication — is not an ethical practice, and it can be extremely damaging if the truth is revealed.
The solution: Be transparent and unambiguous. Really, it would be hard to find a PR firm that isn’t using AI in some way in 2025, so there’s no reason to hide that it plays a role in your process. And that role should be as clear as possible. Simply slapping an “AI-assisted” label or similar language on something isn’t enough.
While it’s not always practical to put detailed explanations of AI workflows on every piece of content, linking to more detailed AI guidelines typically is.
Bias
It’s true: Bias is inherent in many AI models. AI is a reflection of both its human designers and the data that goes into it, so it’s not surprising that its outputs replicate and amplify the biases that data contains. And if we’re not careful, those biases make their way into our AI-enhanced writing, content and campaigns.
As stewards of brand voice and reputation, we have a responsibility to correct for this bias and protect our brands and our clients’ brands. The good news is that AI can correct for its own bias if you tell it to.
The solution: Create go-to prompt templates or libraries to help neutralize bias. When we train PR teams, we coach them to think through the kinds of biases they’ll likely face, then help them create prompts — think gender-neutral and inclusive language, for example — to test for and eliminate bias from their writing and research.
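To make that concrete, here is one hypothetical sketch of what a reusable bias-check template could look like if a team chose to keep it in a small script rather than a shared document. The wording, placeholder name and function below are illustrative assumptions, not a standard and not drawn from any specific tool or training program mentioned above.

```python
# Hypothetical, illustrative bias-check prompt template. The wording and
# names are examples only, not drawn from any specific tool or library.
BIAS_CHECK_PROMPT = """\
Review the following draft for biased or non-inclusive language.
Check specifically for:
- Gendered terms where gender-neutral language would work
- Assumptions about age, ability, culture or family structure
- Stereotypes in examples, names or scenarios

For each issue, quote the passage, explain the concern and suggest a
neutral alternative. Do not rewrite anything else.

DRAFT:
{draft_text}
"""


def build_bias_check_prompt(draft_text: str) -> str:
    """Insert the draft to be reviewed into the reusable template."""
    return BIAS_CHECK_PROMPT.format(draft_text=draft_text)


# Example: paste the returned prompt into whichever AI assistant the team uses.
if __name__ == "__main__":
    print(build_bias_check_prompt("Every chairman should brief his spokesmen early."))
```

Teams that don’t use scripts at all can keep the same template text in a shared document and fill in the draft by hand; the point is that the checks are written once, agreed on and reused, rather than improvised for every assignment.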
Deepfakes don’t die
AI has made it trivial to create realistic-looking videos or photos from just a few lines of text. But most of them still aren’t that good: there are usually artifacts that give them away, such as an extra finger, a supporting character who disappears or a background that’s just too perfect.
In fact, in many cases, online sleuths will do a lot of the work to expose and call out AI fakery.
But just because deepfakes are easy to debunk doesn’t mean they aren’t a serious problem. For starters, your staff needs to be trained on how to spot AI-generated imagery and have a go-to network of experts for support. The response needs to be swift and certain to have any hope of getting the truth out.
More important, a crisis doesn’t end by simply proving something’s AI. If you’re lucky, then a social network might take a deepfake video down in response to a complaint, but the truth is audiences often don’t care — especially if it’s entertaining. You’ll still need to have an action plan for how to handle its continued spread, and the fallout that results.
As AI continues to permeate our working lives, the gap between having a policy and implementing it effectively will only widen if companies don’t take proactive steps.
The organizations that will be able to deal with situations like these are the ones that move beyond checkbox compliance to develop robust, practical frameworks for AI use. That means investing in training, creating clear accountability structures, and building workflows that recognize both the tremendous potential of AI and the damage it can do when used poorly.

