The Institute for Public Relations celebrated “Measurement Month” in November. The organization populated its online newsletters during the month with items about how to evaluate public relations effectiveness. Offerings included “Teaching measurement and evaluation in interesting times,” “Issue measurement as a first step to issue management,” “Latest evaluation models—UK government evaluation cycle, EU guidelines, and more,” and “2024 and the changing world of entertainment and sport PR measurement.”
The goal was to highlight the importance of research and evaluation to public relations practice. The Florida-based institute was founded in 1956 to promote research-based approaches to public relations.
Also during November, Muck Rack, a media software and database company, issued its State of PR Measurement report for 2024. That report reflected survey responses from more than 400 public relations practitioners. Results showed:
- Only 38% were “extremely confident” or “very confident” in the metrics they used to demonstrate the effectiveness of their work. Another 49% were “somewhat confident.”
- More than one-third had trouble tracking the effectiveness of their work.
- Linking public relations results to business goals was the most difficult measurement challenge.
Practitioners used many methods to assess results. Metrics included the number of stories placed or pitches made, message reach, share of voice, social media engagement, and effects of public relations activity on sales, leads, or revenue.
Skepticism about measurement methods may be justified
Some survey respondents questioned the reliability of every metric in the survey. Such skepticism may be justified. Just because we have measurements doesn’t mean we know what those numbers represent. We must be sure of what we are measuring. Furthermore, we must determine if our metrics are accurately assessing the situation.
For example, some businesses routinely send customers a one-question follow-up survey to assess satisfaction. The question asks how likely the customer is to recommend the company to a friend or family member.
Frederick Reichheld said in a 2003 Harvard Business Review article that the “likely to recommend” behavior was “perhaps the strongest sign of customer loyalty.” Consequently, the single Net Promoter Score (NPS) question could provide a useful predictor of top-line business growth.
Since 2003, use of the question has drifted. Managers unfamiliar with Reichheld’s original research now use the NPS item to gauge overall customer satisfaction, a use inconsistent with the original intent. Results, therefore, may not be reliable.
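The scoring behind the NPS question is public and simple: respondents rate their likelihood to recommend on a 0–10 scale, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch in Python (the function name is illustrative):

```python
# Standard Net Promoter Score calculation.
# Respondents rate "likelihood to recommend" on a 0-10 scale:
#   9-10 = promoters, 7-8 = passives, 0-6 = detractors.
# NPS = % promoters - % detractors, giving a score from -100 to +100.

def net_promoter_score(scores: list[int]) -> float:
    """Return the NPS for a list of 0-10 'likely to recommend' ratings."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Example: 3 promoters and 1 detractor among 4 responses yields 50.0
print(net_promoter_score([10, 10, 9, 5]))
```

Note that passives (7–8) count in the denominator but not the numerator, which is one reason the single number can mislead managers who read it as a plain satisfaction average.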
A customer, for instance, might be perfectly willing to recommend the company but know that no occasion to recommend the business to anyone would normally arise. That customer might therefore answer that he or she is not very likely to recommend the company.
Misguided responses could cost companies
The customer may be completely satisfied with his or her experience. Managers reviewing the response, however, may think just the opposite, draw the wrong conclusions about what the business is doing, and react inappropriately. A misguided response could cost the company money.
Statistician W. Edwards Deming, father of the Total Quality Management movement, said collecting data wasn’t enough. Data are just numbers until they are interpreted. Interpretation takes human analysis. Each analysis reflects our biases about what we think those numbers are measuring. A lot depends on the unit of analysis, the way data are collected, and how results are dissected.
Deming disputed the claim, often attributed to business guru Peter Drucker, that you can’t manage what you don’t measure. Deming called an absolute belief in the measurement claim “a costly myth.”
In Out of the Crisis (1986, pp. 121-126), Deming said that not all business elements that must be managed could be measured. Managers nevertheless needed to deal with those topics. Therefore, managers had to clearly understand what data were and weren’t telling them.
To plan effectively, public relations practitioners need to know whom they want to reach, how they want to influence people in those groups, and what responses will indicate that a public relations effort has succeeded—and advanced the organizational mission. To answer these questions, practitioners need accurate measurements.
The November Muck Rack report indicates that many public relations practitioners aren’t sure they are collecting the information they need to effectively plan or accurately track the success of their work.
Feedback from individuals in key publics needed
I agree. Story placement and pitch counts measure output by practitioners, not outcomes among people in key publics or connections to business goals. Message reach, share of voice, and social media engagement focus on communication dynamics, not on what the people who receive those messages think about the company sending them. Correlations of sales, lead generation, or revenue with public relations activities can’t show direct causation. Public relations activities may successfully influence what people in key publics think about an organization, yet the changed perceptions may not translate into commercial transactions.
Public relations work might enhance an organization’s reputation, boost the value of goodwill, and help accomplish business goals without directly influencing sales or revenue. To know for sure, practitioners need direct feedback from individuals in the publics we want to influence.
American humorist Mark Twain popularized the idea of lies, damn lies, and statistics. Public relations practitioners must be confident about the data they collect and the statistics they use to plan and evaluate their work. Otherwise, they risk making costly mistakes or lying to themselves and clients. The intelligence underlying their plan might be flawed, or their measurement results might not accurately assess the situation.