
Comprehensive Community Initiatives: Improving the Lives of Youth and Families Through Systems Change. A Toolkit for Federal Managers
2. Be clear and realistic about your goals for multisite evaluation--the questions you want to answer and how the findings will be used.
What should I consider as I begin to plan for a multisite evaluation?
To plan for a multisite evaluation...
  • Be clear from the beginning about what you want to find out.
  • Work with the multisite evaluator to select the particular questions you want to focus on.
  • Get clarity about what stakeholders want from the evaluation.

Be clear from the beginning about what you want to find out in order to make best use of evaluation resources. Because a CCI is far more complex than a conventional service-delivery program, evaluating a CCI is also more complex, time-consuming, and challenging. (See The Challenges of Evaluating CCIs (page 12) in the Literature Review).

Work with the multisite evaluator to select the particular questions you want to focus on. Your theory of change and logic model will help to clarify the range of questions that could be addressed by the evaluation. But you won't be able to answer them all. Weigh your priorities and obligations to make a selection.

Get clarity about what stakeholders want from the evaluation. As you frame the goals for the evaluation and zero in on the questions you want to answer, it's critical to identify and engage all the stakeholders. Find out:

  • What expectations does each stakeholder have for the evaluation? What questions will each need to have answered by the end of the funding cycle?
  • If multiple Federal agencies are involved in funding the initiative, what congressional mandates are attached to their funding and participation?
  • What information does each stakeholder need to gauge the progress of the CCI, monitor quality, and make adjustments?
  • What kinds of systems change and community outcomes are stakeholders looking for?
  • What questions might be addressed to build the base of knowledge for future CCIs?

Resolve conflicting priorities among stakeholders, and point out expectations that are unrealistic or unattainable given the available resources and time. Make sure to balance stakeholder priorities in the choice of questions to be addressed. Develop a memorandum of understanding (MOU) among the stakeholders to codify these decisions.

When stakeholders want different things from evaluation...

"...politicians may desire 'quick wins,' funders want to see that their investment achieves results, grantees may want ongoing feedback so they can improve their performance, and researchers may emphasize 'clear results' and 'academic credibility'" (Coote et al., 2004, p. xi; Mott, 2003).
CCI evaluation can be a "lightning rod for tensions" and unresolved questions about the purpose and audience of the evaluation, the outcomes to be evaluated, the method of evaluation, and the role of the evaluator (Kubisch et al., 2002, p. 70).

Put questions in a form that is clear and answerable so the evaluator can use them as a starting point for the design of a proposed multisite evaluation--including the type of evaluation to use, the number of sites to include, and the timeframe--taking into account the resources available for the evaluation.

What qualities should I look for when I select a multisite evaluator?

A multisite evaluation of a CCI calls for skills that go beyond those needed to evaluate a conventional service-delivery program. When selecting an evaluator, look for someone who is:

  • Experienced with evaluating systems change as well as community outcomes.
  • Knowledgeable about theory of change and logic models, and able to facilitate a group of stakeholders as they conceptualize the CCI.
  • Able to accommodate variation among sites--flexible and respectful of their unique characteristics while still maintaining the coherence of the overall evaluation.
  • Able to relate well to sites--involving participants in the process and helping them to build a capacity for monitoring and evaluating their work.
  • Skilled in building systems for continuous feedback so that both funders and sites will have up-to-date information to guide decisions over the life of the initiative.

For information about specific types of expertise see Areas of Evaluator Expertise.

Because it may be difficult to find this combination of characteristics in a single evaluator, consider assembling a team.

How can I measure outcomes for a CCI--given that it takes more than a decade to see changes in a community?

Identify and measure interim outcomes. Systems change takes a long time, and demonstrating the impacts from systems change takes even longer. As you plan for evaluation:

  • Acknowledge that you won't be able to demonstrate long-term change in 3-5 years.
  • Designate interim outcomes--milestones along the path to long-term community change.
  • Use your theory of change to demonstrate that these interim outcomes are situated on critical pathways to desired long-term outcomes. Show that, in time, the initiative will likely result in those long-term outcomes.

This approach has been used by numerous multisite evaluations. (See also What Evaluation Has Shown (page 18) in the Literature Review and Using a Logic Model to Conceptualize a CCI).

What decisions will I need to make as I work with evaluators to design a multisite evaluation?
When working with your evaluators to develop the multisite evaluation...
  • Determine the rigor of the design.
  • Choose what you will measure.
  • Select strategies for implementing the evaluation.

Determine how rigorous you want the evaluation to be.

  • Consider three options ranging from most to least rigorous:
    • An experimental design with randomized control groups.
    • A quasi-experimental design with comparison groups.
    • A non-experimental design without control or comparison groups.

    For each option, there are trade-offs. In general, the more rigorous the design, the more you can learn--but the more expensive the evaluation will be.

  • Consider incorporating a counterfactual analysis as a part of the design.
  • Pose the following questions to your evaluator:
    • Should the evaluation use a control group? A comparison group?
    • What laws, policies, and ethical issues arise for each design?
    • What are the costs of each design?
    • Should we incorporate a counterfactual analysis in the evaluation?
CCIs with limited funding for evaluation may face restricted methodological choices because they lack the resources to develop and implement an experimental research design (Coffman, 2007).

Choose what you will measure.

  • Account first for the measures mandated by partner agencies. Then discuss what other measures will get you the answers to questions you identified during planning.
  • Include process measures to track how sites are implementing programs and systems change along the way. This will also give you and the sites interim feedback to make quality improvements.
  • Measure outcomes to learn in what ways the initiative has changed policies and practices and made a difference for individuals involved in programs or affected by system changes.
  • Use impact measures when taking the long view of the changes made to the conditions in the community and its residents. You might begin to track indicators, but you won't be able to draw conclusions about impacts for many years.
  • Work with your evaluator to explore these questions:
    • What can we realistically measure given our budget and timeframe?
    • How will we examine both programming and systems change?
    • How will we track mandated measures?
    • How will we measure process, outcomes, and impacts?

See examples of a multisite evaluation methodology and a solicitation for a multisite evaluation.

Select the strategies for implementing the evaluation. Consider strategies that will help you address the challenges of evaluating a CCI.

  • A two-tiered strategy: Collect basic data from all sites, and then select sentinel sites for more in-depth study. This economizes on evaluation time and costs. See The SS/HS Cross-Site Evaluation--School Violence and Safety: Evaluation Design.
  • A staggered startup to accommodate new sites that join the initiative each year. As you learn more about what you want to know, you can add more queries to your evaluation model.
  • Participatory evaluation. Consider the level of participation you want from stakeholders--at both the site and Federal levels--in planning and implementing the evaluation. Participatory evaluation may be a good cultural fit for some tribal communities. (See Sections I through IV in the Phase II CIRCLE Evaluation Proposal.) (See also: Using Community Based Participatory Evaluation (CBPE) Methods as a Tool to Sustain a Community Health Coalition.)
  • Address these questions with your evaluator:
    • Should we include all or some of the initiative's sites in the evaluation, and to what degree?
    • How will we bring on sites that start at a later date, after the multisite evaluation has already begun?
    • To what degree will we involve stakeholders in the identification of the evaluation issues, the design of the evaluation, the collection and analysis of the data, and the action taken as a result of the evaluation findings?


Coffman, J. 2007. A Framework for Evaluating Systems Initiatives. Build Strong Foundations for Our Youngest Children Initiative.

Coote, A. et al. 2004. Finding Out What Works: Building Knowledge about Complex, Community-Based Initiatives. London, England: King's Fund Publications.

Kubisch, A. et al. 2002. Voices from the Field II: Reflections on Comprehensive Community Change. Washington, DC: Aspen Institute.

Mott, A. 2003. Evaluation: The Good News for Funders. Washington, DC: Neighborhood Funders Group.

A counterfactual analysis investigates what the community outcome would have been if a given policy had not been in place, or if the target group had not been exposed to the intervention. A counterfactual approach to evaluation is a good fit for systems-change initiatives.
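The counterfactual logic above can be illustrated numerically. One common quasi-experimental way to approximate the counterfactual is a difference-in-differences comparison between an intervention community and a comparison community. The sketch below is illustrative only; all figures are hypothetical, chosen to show the arithmetic rather than to model any real initiative:

```python
# Illustrative sketch of a counterfactual estimate using a
# difference-in-differences comparison. All numbers are hypothetical
# rates of a community outcome (e.g., incidents per 1,000 youth)
# measured before and after an initiative.

def difference_in_differences(treated_before, treated_after,
                              comparison_before, comparison_after):
    """Estimate the initiative's effect net of the background trend.

    The comparison community's change stands in for the counterfactual:
    what would likely have happened in the intervention community if
    the initiative had not been in place.
    """
    treated_change = treated_after - treated_before
    background_change = comparison_after - comparison_before
    return treated_change - background_change

# Hypothetical data: the intervention site fell from 48 to 36,
# while the comparison site fell from 50 to 46 on its own.
effect = difference_in_differences(
    treated_before=48.0, treated_after=36.0,
    comparison_before=50.0, comparison_after=46.0,
)
print(effect)  # -8.0: estimated change attributable to the initiative
```

Without the comparison community, the full 12-point drop might be credited to the initiative; subtracting the background trend attributes only 8 points to it, which is the point of building a counterfactual into the design.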