Studies slam charity watchdogs’ ratings

Todd Cohen

Two studies commissioned by the Direct Marketing Association Nonprofit Federation say watchdog groups’ ratings of charities suffer from major flaws that could seriously damage charities.

In response, chief executives of three watchdogs cited in the research papers, which they say they have not seen, say their groups provide a critical service in holding charities accountable.

One study, by George E. Mitchell, research assistant with the Transnational NGO Initiative at the Moynihan Institute of Global Affairs at Syracuse University, says the three big watchdog agencies, whose ratings are widely seen as measures of nonprofits’ organizational effectiveness or accountability and influence billions of dollars in charitable giving each year, suffer from “severe shortcomings.”

The second study, by Jessica E. Sowa, associate professor in the School of Public Affairs at the University of Colorado in Denver, says charity rating scales, including those used by the three big watchdogs, “contain some inherent limitations that may not provide a full picture to donors and could have a long-term detrimental impact on the way in which nonprofit organizations operate.”

Both studies were delivered at the annual conference of the Direct Marketing Association Nonprofit Federation.

Ken Berger, president and CEO of Charity Navigator, one of the watchdogs criticized by the two studies, says rating services do important work.

The logic of the two studies ultimately “implies a total lack of accountability,” Berger says, “because they’re arguing every charity is so unique and complex, you can’t make any kind of comparison.”

The Direct Marketing Association Nonprofit Federation, he says, “has always been hostile to us because we’ve always cautioned donors about the high fees” charged by charity telemarketers.

The research

Mitchell’s study says watchdogs base their ratings and accreditation on financial benchmarking that is arbitrary and inconsistent and does not truly measure nonprofits’ effectiveness or efficiency, and that can reflect discretion the watchdogs exercise.

Based on in-depth interviews with top leaders, mainly presidents and CEOs, at 152 international nonprofits rated by Charity Navigator, the study says watchdogs’ ratings also hold nonprofits accountable to some benchmarks they may have little ability to influence.

“All three watchdogs evaluate the cost of raising one dollar for the nonprofits they rate, but this metric is determined not by nonprofits alone but collectively by both nonprofits and donors,” says the study, “Reframing the Discussion about Nonprofit Effectiveness.”

Sowa’s study says the “complex nature of organizational effectiveness in the nonprofit sector precludes the development of a single rating scale that will ever provide the full picture of a nonprofit organization’s performance.”

Based on an examination of the design of current charity rating scales against research on nonprofit organizational effectiveness, Sowa concludes that charity-rating scales “should be redesigned to include more comprehensive measures of effectiveness.”

At a minimum, “charity raters need to become much more explicit about the limitations associated with their ratings,” says her study, “Charity Rating Scales: The Challenge of Developing ‘Effective’ Measures of Nonprofit Organizational Effectiveness.”

More study and attention is needed, the study says, “to determine whether existing ratings are doing more harm than good in their goal to promote informed giving and improved performance in the nonprofit sector.”

Mitchell’s study says that while few nonprofit leaders define organizational effectiveness as “overhead minimization,” watchdogs still “implicitly” apply that definition in evaluating nonprofits.

“In the absence of credible data for evaluating nonprofits on the basis of their programmatic performance, agencies look to financial proxies and checklists of standards instead,” the study says.

“It is easy to mistake watchdog ratings for measures of organizational effectiveness, including financial efficiency,” it says. “A more appropriate measure of organizational effectiveness would measure the extent to which organizations are achieving their promised goals.”

A fundamental problem with financial benchmarking, the study says, is that “different types of organizations doing different types of things should probably be held accountable to different standards, but consensus is elusive on the appropriate way to segment organizations for this purpose.”

The study says it is “important not to underestimate the influence watchdogs can have over potential donors’ giving decisions, particularly in an environment in which credible and accessible information about nonprofits is hard to come by.”

Watchdog ratings “may be influential not in spite of being misinterpreted but because they are misinterpreted,” the study says. “While watchdogs are usually careful to subdue their claims about what their ratings measure, this also means that they provide little guidance to help consumers interpret their scores appropriately.”

The “interpretative ambiguity” that results, the study says, “encourages consumers to impute more meaning into watchdog ratings than the ratings themselves justify.”

Watchdogs use discretion when “segmenting” nonprofits for benchmarking, and choose “arbitrary and inconsistent” benchmarks, the study says.

Some watchdogs “selectively accommodate” exceptions based on the way they have segmented nonprofits or on the discretion of evaluators, it says.

All three watchdog systems – at Charity Navigator, the Better Business Bureau Wise Giving Alliance, and the American Institute of Philanthropy – “incorrectly imply that nonprofits unilaterally determine the cost of fundraising,” the study says.

And it says a “single arcane or trivial criterion can marginally determine accreditation status since certification is all-or-nothing.”

What’s more, the study says, watchdog and accreditation designations are based on the “untested assumption” that specific financial ratios are “good indicators for organizational effectiveness, efficiency or accountability, but credible evidence is lacking that any such connections exist empirically.”

Watchdog agencies also “fail to provide adequate guidance about what their ratings measure,” contributing to ambiguity in interpreting the ratings, the study says. “They implicitly employ incorrect definitions of organizational effectiveness and efficiency and wrongly substitute functional expense-ratios for cost-effectiveness.”

The study says watchdog ratings may be biased against smaller organizations and groups involved in advocacy rather than service-delivery, and may contribute to the “extreme inequality observed among nonprofits and encourage resource misallocation within the nonprofit sector.”

Watchdogs “rely excessively on poor quality financial data incapable of measuring either effectiveness or efficiency,” yet watchdogs and other stakeholders lack alternatives that are easy to access or credible, the study says.

“Some agencies have acknowledged these limitations and are revising their systems, but their efforts are limited by the quantity and quality of information generated and routinely disclosed by nonprofits,” the study says.

It says nonprofits have an opportunity to influence the development of ratings systems and take part in talks with watchdogs, donors and the general public to “reframe the conversation about nonprofit effectiveness from one focused on financial proxies and overhead minimization to one emphasizing sustainable long-term impact and outcome accountability.”

To help make that change happen, the study says, nonprofits should “augment their monitoring and evaluation capabilities and refocus attention on indicators of outcome accountability and cost-effectiveness.”

Watchdogs’ response

Berger of Charity Navigator says the studies commissioned by the Direct Marketing Association Nonprofit Federation are “talking about a theoretical reality.”

Many of the measures the studies recommend for evaluating charities “don’t currently exist,” he says. “There is no standard available data on charities’ effectiveness at this point. The only standardized data that’s currently available is financial.”

What’s more, he says, the fact that the rating services “may not have all the pieces of the puzzle does not mean they are not useful.”

And the claims the research papers make are “behind the times,” he says.

“Part of what we are working on, all of us, is to try to continuously improve by adding more of the pieces of the puzzle,” he says. “We do not just look at financials.”

The studies, he says, “are looking at a straw dog that does not exist. It’s another example of ‘no good deed goes unpunished.’”

The purpose of the Direct Marketing Association Nonprofit Federation is to “market happy stories and the great work of a charity,” Berger says. “So anybody who is going to be critical and not only give happy news, they are not going to like it because it threatens their profit, it threatens their work to have mitigating information.”

Art Taylor, president and CEO of the Better Business Bureau Wise Giving Alliance, says some watchdogs use only financial ratios that by themselves do not “tell the story of an organization’s performance.”

He says his organization, however, uses a set of 20 standards that look at a broad range of measurements, including two standards that “encourage charities to assess their own effectiveness in meeting their missions.”

Daniel Borochoff, president and founder of the American Institute of Philanthropy, says that while watchdogs lack the expertise to conduct program evaluations of charities in thousands of programmatic areas, ratings of financial performance “are vitally important right now because the donating public is watching their money very carefully and they want to know what the charity is doing with their money and if it is being spent on bona fide program services they want to support.”

“Variation in reporting by charities and a lack of accountability is causing a lot of confusion,” he says. “Charities are in denial when they say financial performance measurements are somehow not relevant or important.”

Measuring financial performance is not perfect, he says, “but it’s one of the initial screens donors can make.”

Watchdogs provide a critical service, Borochoff says.

“It’s pathetic and sickening that there are so many charities that are ripping the public off, and fundraisers are ripping everybody off,” he said. “Financial performance measures are essential to the nonprofit sector and for the donating public to be able to allocate their scarce charitable resources.”
