The DASCorps Survival Guide: Program Evaluation

Why Evaluate?
DASCorps VISTAs build dozens of new programs, systems, strategies, infrastructure, and ways of operating throughout the year, all of which are meant to endure long after their year of service ends. Just as important as planning and building these capacity-building initiatives is ensuring that they are properly evaluated.

Evaluation does more than prove whether you succeeded or failed; it also shows you how, where, and why you succeeded or failed, so you can make improvements and changes accordingly.

Here are some specific ways evaluation will help you and your organization:

  • Verify the effectiveness of programs or projects (necessary for ALL funders)
  • Improve the future outcomes and impact of your activities
  • Generate relevant data for marketing or publishing
  • Compare your programs and projects to identify your organization’s strongest and weakest undertakings
  • Find out ‘what is really going on’ in a program, project, or organization (not just what is said in grant reports, marketing materials, or during meetings)
  • Figure out where gaps or excesses are located and make adjustments so resources are allocated more efficiently
  • Hold more effective and productive meetings, since less guesswork or ‘gut feeling’ is needed to understand “what is actually going on”

When Do You Evaluate?
VISTAs do sometimes create large-scale organizational evaluations for their host organization, but most evaluations are small and tied to specific projects and programming.

After you complete a strong or successful project or program, it is your role as a capacity builder to make sure that it is not a one-off and can be replicated. Part of that is documenting the process of the project or program you are involved with, but another part is creating the mechanism to evaluate and assess how and why your program or project worked.

Also, and especially, perform evaluations after an unsuccessful undertaking. There are just as many lessons to be learned from catastrophic failures as from monumental successes, so always evaluate both. Don’t be afraid of or ignore failure; learn from it!

Data and What To Do With It
Collecting and analyzing data is the backbone of almost any evaluation or assessment you will make. Data and information provide the ingredients of most successful decision-making. With data being so critical, you need a broad understanding of how to get and use data when evaluating.

Data: Qualitative and Quantitative
The difference between qualitative and quantitative data is pretty straightforward. Quantitative data is anything numerical or calculated (number of participants, age range, demographics, time or money spent on activities), while qualitative data is anything that is not numerical or calculated (usually descriptive text, but it can also be other types of media).

However, you can easily make qualitative data more quantitative by grouping or categorizing content and then calculating based on those groupings. For example, if you have a text field in a survey, you can look for common patterns and group responses accordingly (what sociologists and anthropologists call ‘coding’).
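
To make that concrete, here is a minimal sketch in Python of turning coded survey responses into counts you can compare. The responses, the categories, and the keyword rules are all made up for illustration; in practice the categories would emerge from reading the responses themselves.

```python
from collections import Counter

# Hypothetical open-ended survey responses (qualitative data).
responses = [
    "The schedule was hard to work around my job",
    "Loved the mentors, learned a lot",
    "Wish there was more equipment to go around",
    "Scheduling conflicts made it hard to attend",
    "Great instructors and very supportive staff",
]

# A hand-built 'codebook': keywords that map a response to a category.
# In practice you would read the responses first, let categories emerge,
# and then refine the keywords.
codebook = {
    "scheduling": ["schedule", "scheduling", "attend"],
    "staff": ["mentor", "instructor", "staff"],
    "resources": ["equipment", "supplies", "materials"],
}

def code_response(text):
    """Return every category whose keywords appear in the response."""
    text = text.lower()
    return [category for category, keywords in codebook.items()
            if any(keyword in text for keyword in keywords)]

# Tally how often each category appears across all responses.
counts = Counter(category for r in responses for category in code_response(r))
print(counts)  # e.g. Counter({'scheduling': 2, 'staff': 2, 'resources': 1})
```

Once responses are tallied this way, the category counts can be charted or compared across program cycles like any other quantitative measure.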

Getting The Data
The Field Guide to Nonprofit Program Design, Marketing and Evaluation (published by Authenticity Consulting) describes a variety of methods and avenues to collect different types of data to suit your evaluation needs. Here is a listing comparing data collection methods, adapted from their guide:

Surveys, Polls, and Questionnaires
When you need lots of information (mostly quantitative) quickly and efficiently in a non-intrusive way
+ Anonymous
+ Cheap
+ Easy to analyze and compare
- May not be thoughtfully completed
- Easy to bias answers via wording
- Impersonal

Focus Groups
When you want to examine a specific issue in group discussion to draw out common patterns (concentrates on feelings, experiences, reactions, complaints, and suggestions)
+ Understand common perceptions
+ Broad range and depth of information in short time span
- Requires strong facilitator
- Difficult to schedule around people

One-on-one Interviews
When you want to understand individual/on-the-ground experiences more (mostly qualitative)
+ Strengthens relationship between interviewer and interviewee
+ Broad range and depth of information
- Time and resource intensive
- Difficult to compare data across interviews

Observation
When you want a first-hand account of “what is actually going on”; typically focuses on programmatic or organizational processes (mostly qualitative)
+ Real-time analysis
+ Adapts to most any condition
- Relies on subjective interpretation
- Very complex to document and organize

Document Research
When you want to analyze an organizational or program process by looking at communications, memos, emails, meeting minutes, financials, applications, etc. (almost always for internal purposes)
+ Information already exists
+ Limits bias since info already created
- Time consuming
- Complex to categorize and understand
- Information may be incomplete
- Difficult to know what to look for

Case Studies
Creating: When you want a comprehensive and well-formatted examination of your own programs or organization (for external stakeholders)
Comparing: When you want to compare related programs or organizations doing similar work to put your own work into a larger context
+ Thorough and well-researched
+ Persuasive way to frame work
+ Great deal of input and voices represented
- Very time consuming to research and create
- Limited scope and breadth

Statistical Data
When you need background information (almost always quantitative) on a community, industry, field, etc. that can be found in available archival, governmental, institutional, or tech data (e.g., census figures, demographic studies, web traffic reports, Google Analytics)
+ Information already exists
+ Broad scope and breadth of information
+ Relatively easy to find
+/- Conducted by outside agency
- May not be regularly updated
- Very limited depth of information

Data Driven Decisions
There is no phrase that gets more buzz across all sectors of the economy than “data-driven decision-making”. Essentially, it signifies a departure from ‘gut decisions’ based on intuition toward decision-making grounded in information and data gathered from the field.

To start building in data-driven decision-making, ask yourself, “At what point would a critical decision happen?” After you identify those points, work evaluations into a timeline around those ‘critical junctures’ so that data and evaluations inform the decisions being made.

For example, if you are starting up an afterschool radio program you will need to set evaluation benchmarks for students to ensure they are meeting their individual goals and project due dates. With projects due once every two weeks, the timeframe is tight for each student in your program so you will need to make fairly regular ‘critical decisions’ (maybe once every two weeks). These evaluations might document where each student is in the production process, what skills they still need to learn, or how much additional time might be needed for them to complete their project.

To take that a step further, suppose a board of directors’ committee meets once every three months to decide the future direction of that same program. With this in mind, you would want to ensure that a formal evaluation geared toward the data and information relevant to that committee’s decision is completed every three months, just before their meeting. Relevant data here would not cover how individual students or student projects are going, but would be more quantitative: recruitment numbers, retention percentages, and costs of programming.

Yet with these two examples you are still looking at the same single program, so when creating your methods for collecting relevant data, build in the mechanisms and processes to support both evaluations.
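
As a rough sketch of that quarterly roll-up, the following Python example computes the kind of figures a board committee might ask for: recruitment numbers, a retention percentage, and cost per participant. The records, field names, and cost figure are hypothetical.

```python
# Hypothetical enrollment records for one quarter of the radio program.
enrollments = [
    {"student": "A", "enrolled": True, "completed_quarter": True},
    {"student": "B", "enrolled": True, "completed_quarter": False},
    {"student": "C", "enrolled": True, "completed_quarter": True},
    {"student": "D", "enrolled": True, "completed_quarter": True},
]
quarterly_cost = 4200.00  # staff time, equipment, supplies (made-up figure)

recruited = sum(1 for e in enrollments if e["enrolled"])
retained = sum(1 for e in enrollments if e["completed_quarter"])
retention_rate = retained / recruited * 100
cost_per_participant = quarterly_cost / recruited

print(f"Recruited: {recruited}")                             # Recruited: 4
print(f"Retention: {retention_rate:.0f}%")                   # Retention: 75%
print(f"Cost per participant: ${cost_per_participant:.2f}")  # Cost per participant: $1050.00
```

The same report can be regenerated before each board meeting simply by refreshing the underlying records, which keeps the three-month evaluation cycle from becoming a scramble.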

Constructing a Framework for Your Evaluations
The Center for What Works and The Urban Institute have published a series of incredibly useful nonprofit evaluation tutorials called the Outcomes Indicators Project. It summarizes types of assessments and evaluations that cut across the nonprofit sector, no matter the organizational mission, while also detailing 14 program-specific reports. These focused reports cover areas ranging from youth mentoring and community organizing to adult education, family literacy, the performing arts, and more.

For a thorough listing of different types of outcomes and commonly used indicators for each, read The Nonprofit Taxonomy of Outcomes report, also published as part of the Outcomes Indicators Project. It is too lengthy to adapt here, but it is highly recommended reading (it’s like a cheat sheet for crafting effective measurement methods). You can download it here: www.urban.org/center/met/projects/upload/taxonomy_of_outcomes.pdf

External Stakeholders Focus on Outcomes
External stakeholders (such as the community, other agencies you work with, and foundation or government funders) will usually want to hear about your outcomes (what you’ve accomplished) rather than evaluations of internal processes and organizational (in)efficiency. Funders, in particular, are very concerned with outcomes reporting, as they want to see organizations held accountable for what they do with their allotted funds.

Since most of your evaluation for external stakeholders will be based upon outcomes and impact results, it may be beneficial to reexamine what exactly those outcomes will be and how you will achieve them. See the Social Entrepreneurship section for a quick how-to on creating a Logic Model for the program, project, or organization you will be evaluating. It may be useful to start at the Impacts stage (end) and work back through outcomes all the way to inputs (start).
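
As a small illustration of that backward-planning order, here is a sketch of a logic model for the afterschool radio program example used earlier in this guide. Every entry is a hypothetical placeholder; the point is the structure, read from impacts back to inputs.

```python
# A toy logic model: each stage maps to a list of example entries.
logic_model = {
    "impacts":    ["Youth pursue media careers or further education"],
    "outcomes":   ["Students gain audio production and storytelling skills"],
    "outputs":    ["Student-produced radio pieces aired each semester"],
    "activities": ["Bi-weekly production projects", "One-on-one mentoring sessions"],
    "inputs":     ["VISTA coordinator time", "Recording equipment", "Studio space"],
}

# Reading the model end-to-start mirrors the planning order suggested above:
# decide the impacts you want, then work backward to the inputs you need.
for stage in ["impacts", "outcomes", "outputs", "activities", "inputs"]:
    print(f"{stage.upper()}: {'; '.join(logic_model[stage])}")
```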

Last Words
Remember, as your VISTA year ends, one of your most important roles as a capacity builder is to make sure your organization has the documentation, evaluations, and recommendations necessary to continue your work after you’ve left. So make sure that you plan, coordinate, and set aside time to create and complete evaluations.

Further Resources
Davenport, Thomas H. and Harris, Jeanne G. Competing on Analytics. Boston: Harvard Business School Press, 2007.
Management Help: Basic Guide to Program Evaluation
United Way Outcome Measurement Resource Network
Urban Institute and The Center for What Works - Outcomes Indicators Project