By Joanne G. Carman and Richard M. Clerkin
Figuring out how to evaluate the work that they do can be tricky for many nonprofit organizations. While most nonprofits are conducting some type of assessment, ranging from outcomes evaluation (which focuses on program results) to formative evaluation (which focuses on organizational learning), small nonprofit organizations in North Carolina report that they struggle with evaluation capacity issues.
In the third survey of Trend Spotter, a special project of the Institute for Nonprofits at NC State University, we asked small North Carolina nonprofits with budgets under $600,000 to describe their evaluation practices. Eighty-three nonprofit organizations completed the survey.
Seventy-three percent of the 83 nonprofit organizations reported that they had conducted some type of evaluation during the last two years. The majority of these organizations reported conducting outcomes evaluation focused on tracking the short- or long-term results of a program.
Measuring outcomes, however, can be challenging for nonprofit organizations that are required to report outcomes evaluation findings to funders about grants or contracts. As one of the survey respondents noted, “Some funder-mandated measurable objectives do not accurately measure some of the program outcomes.”
While the required measurement tool may not be an ideal match for a program, nearly every survey respondent asked to identify the greatest challenge in conducting evaluation pointed to a lack of capacity.
For example, some survey respondents described how they had “limited” or “short” staff, or that cost was the issue. Others were more concerned about not having enough time, describing how staff members are already stretched thin trying to manage day-to-day operations. As one respondent reported, their greatest challenge was “finding the time given the size of staff and number of programs.”
Similarly, another noted “taking sufficient time for a comprehensive evaluation amidst the busy time of everyday work.” A third respondent was much more emphatic, responding “TIME!! It is very labor intensive and we’re already wearing many, many hats.”
Respondents also noted specific concerns about the lack of evaluation expertise. These organizations reported how they struggled to design evaluation tools that were meaningful and captured all of the necessary information, yet didn’t require significant staff time or experience. We have “limited expertise among staff members on how to do professional evaluations and how to create evaluation tools that really measure what we are trying to evaluate,” said one respondent.
Some nonprofit organizations find it challenging to figure out the best ways to present evaluation data or how to place their findings into the proper context. “We would like to be able to compare our programs results to others, but [we] have not figured out the best way to do that,” said Samantha Hayes of Life Challenge of Western North Carolina, which operates in Cullowhee.
Some challenges were related to the implementation of evaluation activities, such as receiving low response rates to surveys and door-to-door canvassing, and the reluctance among some clients to participate in evaluation activities. A few expressed concerns about the quality of data they collect, with respect to the lack of representativeness or lack of coverage. Some questioned the truthfulness or accuracy of the data they collect.
In addition to outcomes evaluation, however, more than half of the nonprofit organizations reported conducting formative evaluation. Formative evaluation focuses on organizational learning: figuring out what is working well and what isn’t, and using this knowledge to make changes.
Almost all of these organizations described how their evaluation efforts help them to plan or revise programs and strategies. These organizations also described how they used evaluation data to make decisions about resource allocations, staffing and whether to expand or replicate programs. They also used evaluation data to share best practices or lessons learned.
Not surprisingly, for most of these organizations, the primary audience for the evaluation work was stakeholder groups internal to the organization, such as the board of directors, the CEO and management team, staff or clients, as opposed to external funders or grant-makers.
Trend Spotter respondents also were engaged in other types of evaluation. Among those doing evaluation, half had conducted implementation evaluation (ensuring programs are delivered as planned) and satisfaction studies (assessing how satisfied their clients or stakeholders are). One quarter had conducted developmental evaluation (focused on strategic learning and innovation) and engaged in benchmarking (comparing their program’s results to similar programs).
Findings from this Trend Spotter survey were similar to the findings reported in State of Evaluation 2012: Evaluation Practice and Capacity in the Nonprofit Sector, a national survey of nonprofit organizations conducted by Innovation Network Inc. According to this report, smaller nonprofit organizations, like the Trend Spotter organizations, are not only less likely to conduct evaluation than larger nonprofit organizations, but they also are less likely to have staff who are dedicated to conducting evaluation or have experience working with an external evaluator.
Consistent with Trend Spotter reporting, outcomes evaluation continues to be the most prevalent type of evaluation conducted by nonprofit organizations. Yet, among the Trend Spotter members who had conducted evaluation in the last two years, more had engaged in formative evaluation (62 percent) compared to the national sample (17 percent). This may be because the Trend Spotter project targets smaller organizations. Developmentally, they may still be trying to figure out what is working and what isn’t.
Andrew Rodgers, executive director of the RiverRun International Film Festival in Winston-Salem, offered an especially insightful comment about the challenges associated with evaluation in small nonprofit organizations with limited resources and evaluation expertise. “If we’re not careful, we can quickly become swamped in data overload,” he cautioned. “It’s important to identify the most important metrics as well as areas of inquiry, in order to avoid data overload.”
Joanne G. Carman is an Associate Professor in the Political Science and Public Administration Department at the University of North Carolina-Charlotte. Her teaching and research focus on program evaluation and nonprofit management, and she is the coordinator of the Graduate Certificate in Nonprofit Management.
Richard M. Clerkin is an Associate Professor in the Public Administration Department at NC State University. His teaching and research focus on philanthropy and management, and he is the director of the Graduate Certificate in Nonprofit Management.