A Different Kind of Risk-Taking: Improving Evaluation Practice at the Jim Joseph Foundation

"We're in the business of risk-taking," is something Chip Edelsberg, executive director of the Jim Joseph Foundation, likes to say. Generally speaking, Edelsberg's notion of risk-taking refers to the investments the foundation makes in its grantees and their programs. The mission of the foundation, which has assets of roughly $1 billion, is to foster compelling, effective Jewish learning experiences for young Jews. Between 2006 and June 2014, the foundation granted more than $300 million to increase the number and quality of Jewish educators, expand opportunities for Jewish learning, and build a strong field for Jewish learning (Jim Joseph Foundation, 2014). Rarely is there an established research base for the kinds of initiatives the foundation supports in Jewish education. In the spring of 2013, though, Edelsberg had another kind of risk in mind.

What might be gained, Edelsberg wondered, if foundation staff brought together a group of competing evaluation firms with whom they had worked in the past to consider ways to improve the foundation's practice and use of evaluation? The idea had emerged from a study, commissioned by the foundation, of its evaluation practices from its inception in 2006 through 2012. The study was conducted by Lee Shulman, president emeritus of the Carnegie Foundation for the Advancement of Teaching and Charles E. Ducommun Professor of Education Emeritus at Stanford University. Edelsberg thought it was a risk worth taking, and the board of the foundation agreed. Edelsberg also made the bold decision to allow a doctoral student in evaluation studies at the University of Minnesota to study the venture.

In the winter of 2013, a colleague of mine from the field of Jewish education, then a staff member at the foundation, heard about my research interest in the role evaluation plays in the work of foundations and their grantees and offered to connect me with Edelsberg. Edelsberg described the idea for what became the "evaluators' consortium," and I asked about the possibility of studying the process as a case study for my dissertation. By the time the consortium first met in October 2013, I had launched the research with the agreement of the foundation's board and the participating evaluators. The purpose of the study was to explore what occurred when a foundation inaugurated an innovative approach to evaluation practice, examining both the factors that supported successful implementation of the innovation and the impediments to its success. It also sought to provide insights into the elements of organizational culture, practices, circumstances, and structures that can support effective evaluation practice in the foundation field. The foundation gave me access to documents and invited me to observe meetings of the consortium held both in person and electronically. Over the course of the consortium's first year, I interviewed all foundation program staff members, Shulman (who served as the facilitator), a member of the board, and each of the participating evaluators.

In the initial stages of the work, the goals for the consortium were general and somewhat vague. The foundation hoped to establish a more efficient process for selecting evaluators for foundation grants, stimulate collaboration among the evaluators, explore possibilities for conducting cluster evaluations and/or meta-analyses, and examine ways the foundation could improve its overall program of evaluation. One hope was that, in coming together, the evaluators would help the foundation define an agenda for the group's work. In spite of the uncertainty of the initiative's outcomes, all the evaluation firms that were asked accepted Edelsberg's invitation to participate — a testament to the nature of the relationship they already had with Edelsberg and the foundation, and an indication of what a deeper relationship with the foundation meant to the evaluators. The consortium met in two face-to-face gatherings and two web-based conferences, with email communication among the participants between convenings.

There was some discomfort among participants about the initial lack of clarity around outcomes and the timeline, especially as the evaluators were participating without compensation. Both foundation staff and evaluators wondered how long they would be able to continue without a clear focus. But an idea that emerged toward the end of the first convening gained traction in the months leading up to the second: What if the group developed a set of outcomes and measures for Jewishness (or Jewish identity/growth/development) that could be used across organizations, initiatives, and programs? Nothing like this existed in the field of Jewish education. The notion of a tangible product, one that could be used by the evaluators, by the foundation, and by the field at large, had broad appeal. The evaluators had some concerns about committing to this goal, however: while worthwhile, it was ambitious, difficult to achieve, and time-consuming.

The consortium's work on measures of Jewish growth came at a critical time for the foundation. At about the same time as the evaluators' consortium was launched, the foundation had begun work on one of its most ambitious projects to date, the Initiative on Jewish Teen Education and Engagement. The strategy behind the initiative, which linked directly to the foundation's mission to "foster compelling, effective Jewish learning experiences" for teens and young adults, included working in partnership with funders in up to ten local communities in the U.S. to incubate new models of learning and involvement for Jewish teens. The initiative itself grew out of an understanding of the importance of this stage of the life cycle in human development, coupled with a reading of the data on low participation rates of Jewish teens in the Jewish educational experiences available to them in their communities (Informing Change, Jim Joseph Foundation, & Rosov Consulting, 2013). On the eve of the launch of the initiative, the foundation was particularly interested in measures of Jewish growth that could play a role in evaluating the work within and across communities.

Over the course of the first year, the consortium helped the foundation develop the vision for a cross-community evaluation of the Teen Initiative, including more in-depth work on outcomes and measures of Jewish growth. In an unprecedented step for the foundation, the staff asked the members of the consortium for feedback on a draft of the evaluation RFP and made changes on the basis of their suggestions. At the end of the year, the foundation awarded a million-dollar, four-year contract to two of the participating firms to conduct the cross-community evaluation. Another member of the consortium is participating as a consultant on pieces of that work. And the fourth member has been contracted by several of the local communities to conduct community-based evaluations.

In addition to shaping the cross-community evaluation and taking the first steps on the development of outcomes and measures of Jewish growth, the initiative produced several other outcomes for the foundation and the participating evaluators. The foundation clarified its ideas about effective evaluation practices. Foundation staff members developed the capacity to think differently about evaluation. Relationships were strengthened between foundation staff and evaluators and between individual evaluators and evaluation firms. The initiative also created relationships among competitors who entered into collaboration with one another to their own benefit and to the benefit of the foundation and its grantees. Through its success with the consortium, the foundation was emboldened to consider other new approaches to evaluation. Finally, as a result of the work done with the consortium, the foundation was able to introduce evaluators and high-quality evaluation practices to other funders and communities.

The data collection for my dissertation came to a close in August 2014, nearly a year after the first convening of the evaluators' consortium. In the thirteen months since then, the consortium has continued to meet. Its current goals, according to a blog post written by the foundation's Sandy Edwards and Stacie Cherner, include:

  • a plan for researchers, funders, and practitioners to agree on common constructs of Jewish learning and growth;
  • the development of a set of standardized questions that can be utilized across the foundation's portfolio of grantees;
  • field testing of a "universal toolkit" for collecting data on common outcomes and demographics; and
  • a plan for longitudinal testing and for disseminating resources aimed at encouraging the use of universal sets of tools.

Various factors supported the success of the consortium. One was the foundation's willingness to take a risk and accept the possibility of failure. A learning culture at the foundation and a commitment to field building were also factors, as was the foundation's ongoing approach to evaluation. Program officers at the foundation work in partnership with grantees to develop evaluation RFPs and hire evaluators; the foundation then funds the evaluation of those grants. Members of the program staff are engaged in nearly all stages of the evaluations of grants they manage, and the staff cultivates relationships with grantees and the evaluators with whom it works. Beyond accountability, the foundation is committed to learning from evaluation and using it to inform its grantmaking. The foundation also shares the majority of completed evaluation reports on its website.

To understand the success of the consortium, one also needs to appreciate the role of its leaders and participants, including the foundation's own leader, Chip Edelsberg, whose commitment to the initiative in particular and to evaluation in general, and whose ability to cultivate relationships with others, were crucial to the success of the project. Also critical were the intellectual leadership and facilitation provided by Lee Shulman. In addition, participation carried benefits for the evaluators, including the possibility of being signed to an evaluation contract by the foundation, enhanced relationships with foundation staff, and opportunities for professional development and peer learning. All of these encouraged participation, and all who were invited agreed to take part. It was no small feat, though, that the evaluators agreed to work alongside firms with which they compete for contracts, to share their expertise with one another, to participate without direct compensation or promises of future work, and to do so without the benefit of defined outcomes (early on) or a clear-cut timeline. The small size of the Jewish education field, and the even smaller size of its evaluation subfield, also facilitated the work: the players were known to one another, by reputation if not personally, and recognized that the work had the potential to make a difference in the field.

The example of the consortium is worthy of consideration by other foundations engaged in strategic philanthropy, although the model is likely to involve practices that represent a significant departure from "business as usual." Strategic philanthropy requires that outcomes be spelled out in advance and that progress be measured against those outcomes. When applying this requirement to the practice of evaluation, a foundation ought to recognize the need to accommodate emergent goals and to tolerate uncertainty. Not only is it impossible to specify all possible outcomes of an innovation or intervention, but the attempt to define outcomes in advance may also limit the foundation's consideration of promising courses of action. Working in an emergent way requires some faith in the process, trust in the people promoting the innovation or intervention, and a fairly clear understanding of its potential benefits. It also requires a champion who is willing to take risks and encourage others to follow an uncharted path.

It may seem counterintuitive to bring competitors together to work on behalf of a foundation's evaluation program. Convening competitors in a collaborative venture, though, can create capacity, build networks, and magnify potential outcomes. At the same time, careful consideration needs to be given to the conditions under which collaboration takes place, who facilitates it, and the expectations established throughout the process. Cultivating relationships is a critical step in introducing and sustaining innovation in evaluation practice.

Risk-taking is central to the work of foundation leaders as they hone their strategies, strive to make effective investments in organizations and programs, and pursue missions related to social change and progress. The approach pioneered by the evaluators' consortium merits the attention of other foundation leaders. Working collaboratively with a diverse group of external evaluators who bring a range of skills, perspectives, and expertise has the potential for significant payoffs, both for the foundations that adopt the model and, ultimately, for the fields they hope to impact.
