40% use AI-powered media monitoring tools, and 31% said AI features were part of their analytics and reporting dashboards. 72% said they use generative tools like ChatGPT to brainstorm ideas, and 67% are working with AI to write or refine press releases. Obviously, whether we trust it or not, AI is part of our lives.

For me, I see it as the “Roto-Rooter” of writing – a very effective dissolver of writer’s block. No matter how stuck you are or how disorganized your thoughts may be, it has ideas. But that doesn’t mean you should use all of them. It’s one thing to generate an idea that you can back up with data or personal experience. It’s another to take an AI-generated analysis of your PR results into a boardroom.

AI Is Really Good for Eroding Trust

Fifteen years ago, when I wrote Measure What Matters, what mattered most was trust. It is still the cornerstone of communications. If people don’t trust what you say – or the platforms and media outlets you say it on, or the spokesperson who says it – your efforts are wasted. Which is why, if you’re relying on AI as your source, you can lose the trust of your audience very quickly.

Last year I had some 30 survey responses that my survey platform spit out as an incomprehensible spreadsheet of words. I asked ChatGPT to segment it and turn it into what I would consider a “normal” survey database, and it did. More importantly, because the database was relatively small, I could manually check the percentages, and I was pleased to find that they were all correct.

I also learned to trust AI from more detailed research showing that it could reliably be used to identify a crisis and recommend the best response (“best” being defined as the response that would most quickly turn the deluge of negative press back to neutral). I trusted that data because FullIntel, the company that had run the AI experiment, also backed it up with human-analyzed data against which we could check the results. As it happened, AI and humans agreed more than 90% of the time. Does that mean I trust AI to analyze my data? No. It means I trust FullIntel, or my own eyes, to check and validate the data. As long as I can statistically validate it, I’d take AI data into a boardroom any day.

However, most of the trust in AI that the FullIntel experience engendered was quickly eliminated. I needed to figure out how many times my client’s prominent CEO was quoted in some 2,000 articles. Knowing that he had been a frequent spokesperson and had even testified at a congressional hearing, I figured it would be an easy job for ChatGPT. Its answer was … zero. I experimented with different prompts and then realized it would be faster to use the filter function in an Excel pivot table to get the information I needed. So much for trust in AI.

The point is not that you can never trust AI. It’s that for public relations and communications professionals, who are constantly struggling to prove their worth and credibility in the boardroom, AI can be a minefield without a significant amount of human oversight. And the problem is only going to get worse. Conservative estimates suggest that a third or more of text these days is generated by AI, and there’s no automated way to tell whether it’s true or false.
Which means that your media monitoring platforms are probably ingesting AI-assisted content that looks like human reporting but might have been written by a journalist or generated by a bot, which in turn feeds into the “themes” and “trends” that the monitoring platforms summarize and deliver in dashboards with zero indication of origin or veracity. Not great for building trust. The practical impact is that if you are a brand or an organization with opponents or competitors, it’s now incredibly easy for them to flood the zone with completely fabricated statements that can thrust you into crisis mode even though they aren’t true.

The phenomenon is called the “illusory truth effect,” and it works because our brains evolved to process information efficiently, not accurately. In short, our brains mistake familiarity for truth. When a statement is repeated often enough, it begins to seem normal; your brain thinks, “I’ve heard this before, so it must be true.” Over time we remember the claim but forget the context and the source. And the more stressed we become from the content we read or the conflict we experience, the more our brains want to take the easiest possible route, so we are even more likely to believe whatever is most familiar. Which is why, if you are trying to debunk a myth and in the process you repeat that myth, you are essentially reinforcing it.

Media Monitoring Has to Reinvent Itself

If AI can, and no doubt will, be used to repeat lies ever faster, there are currently very few ways to detect the problem. At the moment, most media monitoring tools are built on the PR practices of a decade ago. They ingest content and report the results, regardless of whether the content or the results are true or false. They focus on “top tier” media that no one is reading and report a lot of other numbers that are generally meaningless if not downright silly. (No, you did not reach all 8 billion people on the planet with news about your new product, especially since most of them don’t need it and can’t buy it anyway.)

What these tools need to capture in 2026 is:

- The actual channels where people’s opinions are being formed, be they Substacks, TikTok, personal blogs, or newsletters.
- The historical accuracy of those outlets, i.e., a credibility score.
- Automated fact-checking of messages or phrases that are repeated with abnormal frequency, against reality or at least the client’s key messages.
- Automated credibility checks when the same theme is repeated identically across multiple platforms and outlets.

The good news is that AI, with sufficient oversight, checking, and training, can monitor and report on these elements. We just need to teach it how.
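To make the last two items concrete, here is a minimal sketch, in Python, of what a “same claim repeated across many outlets” check could look like. Everything in it is an assumption for illustration: the article data, the crude normalization, and the five-outlet threshold are placeholders rather than any vendor’s API, and a production system would need proper sentence splitting and near-duplicate matching instead of exact matches.

```python
# Hypothetical sketch: flag claims that appear nearly word-for-word across
# many distinct outlets, a hint that the "coverage" may be a single planted
# or AI-generated talking point rather than independent reporting.

import re
from collections import defaultdict

def normalize(sentence: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so trivially
    reformatted copies of the same claim still match."""
    sentence = sentence.lower()
    sentence = re.sub(r"[^a-z0-9 ]+", " ", sentence)
    return re.sub(r"\s+", " ", sentence).strip()

def flag_repeated_claims(articles, min_outlets=5, min_words=8):
    """articles: iterable of (outlet_name, article_text) pairs.
    Returns claims that appear in at least `min_outlets` distinct outlets,
    most widely repeated first."""
    outlets_by_claim = defaultdict(set)
    for outlet, text in articles:
        # Naive sentence split; a real pipeline would do better.
        for sentence in re.split(r"[.!?]+", text):
            claim = normalize(sentence)
            if len(claim.split()) >= min_words:  # ignore short fragments
                outlets_by_claim[claim].add(outlet)
    flagged = [(claim, outlets) for claim, outlets in outlets_by_claim.items()
               if len(outlets) >= min_outlets]
    return sorted(flagged, key=lambda pair: len(pair[1]), reverse=True)

# Usage, with whatever your monitoring export looks like:
# for claim, outlets in flag_repeated_claims(my_articles):
#     print(len(outlets), "outlets repeat:", claim)
```

Whatever a check like this surfaces still needs a human to decide whether the repetition is a wire story, a coordinated talking point, or a genuine consensus – which is exactly the oversight AI can’t supply on its own.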