How PR Pros Can Counter AI Misinformation

August 2023

Almost every senior-level communicator has a story to tell about combating misinformation. Maybe they’ve had to correct spurious rumors spread by disgruntled employees or debunk a competitor’s false accusations on social media.

Unfortunately, the proliferation of generative artificial intelligence technology, like ChatGPT, is making it more difficult for PR professionals to counter misinformation.

A recent joint research paper from Georgetown and Stanford universities and the company OpenAI pointed out that AI “will improve the content, reduce the cost and increase the scale” of misinformation campaigns, while introducing “new forms of deception.” The Congressional Research Service issued a report in April warning that the United States’ adversaries may release AI-generated videos that could “erode public trust, negatively affect public discourse or even sway an election.”

Creating fake content gets easier

But it’s not just governments that have access to misinformation-creating AI. AI democratizes the ability to create and spread misinformation effectively, at a scope, speed, scale and quality that was previously the province of state actors alone.

Today, AI can easily and inexpensively produce deepfakes — realistic, computer-generated videos of fabricated content. One researcher recently made a deepfake video of himself giving a lecture that never happened. According to NPR, creating that false video took the researcher eight minutes and cost him $11, using a commercially available AI platform.

Now imagine this scenario: A disgruntled employee creates a deepfake video of their company’s CEO holding a press conference to announce a product recall. Then, using an AI-generated bot army, the employee spreads the phony but realistic-looking video across social media. This type of fake news has the potential to hurt brands, reputations and stock prices. And it’s already starting to happen.

In May, an AI-generated photo of an explosion at the Pentagon went viral on Twitter, amplified by Russian state media. The hoax caused the S&P 500 to briefly drop three-tenths of a percentage point, before the Department of Defense and the Arlington County Fire Department jointly debunked the false report.

Artificial intelligence is getting better at generating realistic content — and not just deepfakes. In 2019, a Harvard researcher submitted AI-generated comments to a Medicaid public-comment process. As Wired reported, people couldn’t tell the comments were fake. The researcher had used GPT-2, which, at the time, was a cutting-edge AI tool. That tool has since been eclipsed by GPT-4, which was released to the public this March and costs just $20 a month. While GPT-4 claims to have misinformation safeguards built in, there are plenty of other readily available AI services that don’t.

With AI now publicly accessible, there are almost no barriers to entering the misinformation game. Someone who wants to harm your organization’s reputation needs only a laptop, a few dollars and an internet connection. They don’t even need computer expertise. Given the right text prompt, AI will write custom-made misinformation software.

Artificial intelligence can also analyze audiences faster, cheaper and perhaps more precisely than we humans ever could — even if it can’t truly understand an audience. With a few more prompts, AI can then use that analysis to almost instantaneously create and deliver customized, targeted misinformation. 

Understanding the ethical uses of AI

We’re at the dawn of the AI-generated misinformation age and nobody knows where it will lead. But as communicators, there are a few things we can do to protect our clients, stakeholders and organizations from this new threat. 

It starts with understanding how to ethically use AI.

With the right prompts, AI can perform many tasks that will make our jobs easier. In moments, the technology can churn out semi-decent content that only requires editing and fact-checking before we can use it. 

AI can also analyze articles to spot trends, spit out summaries of media coverage on a given topic, and quickly and cheaply create useful graphics and videos.

By familiarizing ourselves with this new technology, we become better equipped to prepare our organizations for AI-generated misinformation.

Navigating the new age of misinformation 

Unfortunately, there’s not a single process, policy or technological solution that can adequately safeguard organizations from AI misinformation. But the good news is that, as trusted advisers and communicators, we are well positioned to help our organizations and clients navigate the age of AI misinformation.

Our job starts with showing clients, leaders and stakeholders how AI can help — or hurt — their brands and reputations. We do this by planning for worst-case scenarios and asking hard questions along the way.

Some questions to consider: Does the communications function have a strong partnership with the IT department? Are other parts of the organization — such as operations, finance, legal and human resources — considering how AI-generated misinformation might affect their work? And how can we help them mitigate that risk?

As PR counselors, we can also help our organizations put systems and processes in place to monitor and detect misinformation, whether from AI or other sources. At the same time, organizations need to have a process to quickly validate information, because not every unflattering video will be a deepfake.

We can also use our communications expertise to advise organizational leaders on how and when to respond to misinformation. We can pre-identify spokespeople and content distribution channels and help ensure the organizations we serve have plans in place to share timely and accurate information — assuming that it’s appropriate to do so.

For better or for worse, AI is here. By learning about and mastering the technology, our PR profession can protect the reputations of clients and organizations and help lead them into the future.

A Suite of AI Resources Now Available for PRSA Members

A new section on the PRSA website allows members to access hours of on-demand programming as well as articles, blogs and video interviews specifically focused on AI. A companion page on MyPRSA also offers members an opportunity to engage in discussion and share best practices related to AI. PRSA will upload additional content on an ongoing basis. These AI resources are available on the PRSA website.
