What can a researcher do to foster a good partnership with an implementing organization?

In a previous blog post I discussed what a researcher should look for in an implementing partner with whom they want to do an RCT. But what does an implementer want in a research partner, and how can a researcher make themselves a better partner?

I)  Answer questions the partner wants answered

Start by listening. A researcher will go into a partnership with ideas about what they want to test, but it is important to understand what the implementer wants to learn from the partnership. Work together to come up with a design that answers key questions from both parties. Sometimes this doesn't require adding another arm to the study, but rather collecting good monitoring data or quantitative descriptive data on conditions in the population.

II)  Be flexible about the evaluation design

The research design you have in your head initially is almost never the design that ends up being implemented. It is critical to respond flexibly to the practical concerns raised by the implementer. One of the main reasons randomized evaluations have taken off in development over the last twenty years is the range of tools that have been developed to introduce an element of randomization in various ways. It is important to go into a partnership with all those tools in mind and to use the flexibility they provide to achieve a rigorous study that also takes the implementer's concerns into account.

A common concern implementers have about randomization is that they will lose the ability to choose the individuals or communities they think are most likely to benefit from the intervention; for example, a training program may want to enroll students who have some education, but not too much. These concerns are relatively easy to address: agree to drop individuals or communities that don't fit the criteria, as long as enough remain to randomize some into treatment and some into control. This may require expanding the geographic scope of the program. Randomization in the bubble can be a useful design in these cases.
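To make this concrete, here is a minimal sketch in Python of how a "bubble" assignment might be coded. The cutoffs and names are hypothetical: units the implementer is confident should receive the program are enrolled outright, units that clearly don't fit the criteria are dropped, and only the marginal cases, those "in the bubble," are randomized.

```python
import random

def bubble_assignment(scores, enroll_above=80, exclude_below=40, seed=42):
    """Sketch of randomization 'in the bubble'.

    scores: dict mapping a unit id to the implementer's targeting score.
    Units scoring at or above enroll_above are always enrolled; units
    below exclude_below are dropped; only the marginal cases in between
    (the 'bubble') are randomized into treatment or control.
    The cutoffs here are illustrative, not from any real program.
    """
    rng = random.Random(seed)  # fixed seed keeps the assignment reproducible
    assignment = {}
    for unit, score in scores.items():
        if score >= enroll_above:
            assignment[unit] = "treatment"  # sure thing: implementer's pick
        elif score < exclude_below:
            assignment[unit] = "dropped"    # does not fit the criteria
        else:
            assignment[unit] = rng.choice(["treatment", "control"])
    return assignment

# Example: only units scoring between 40 and 79 are randomized.
print(bubble_assignment({"A": 90, "B": 55, "C": 30, "D": 70}))
```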

Randomized phase-in designs, in which everyone eventually receives the program but the order of rollout is randomized, are also useful for addressing implementer concerns, although they come with important downsides (Glennerster and Takavarasha 2013 detail the pros and cons of different randomization techniques).
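For illustration, a minimal sketch of how a phase-in might be assigned, again in Python with hypothetical names: every community eventually gets the program, but the wave order is random, so communities in later waves serve as a temporary comparison group for earlier ones.

```python
import random

def phase_in_waves(communities, n_waves=3, seed=7):
    """Sketch of a randomized phase-in: randomly order communities
    into rollout waves of (nearly) equal size. Later waves act as
    the comparison group until they are phased in themselves."""
    rng = random.Random(seed)  # fixed seed so the rollout order is reproducible
    shuffled = list(communities)
    rng.shuffle(shuffled)
    # deal the randomly shuffled list round-robin into waves 1..n_waves
    return {c: (i % n_waves) + 1 for i, c in enumerate(shuffled)}

# Example with nine hypothetical villages assigned to three waves.
print(phase_in_waves(["village_%d" % i for i in range(1, 10)]))
```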

There can and should be limits to this flexibility. If an implementing organization repeatedly turns down research designs carefully tailored to address concerns they've raised previously, at some point the researcher needs to assess whether the implementer actually wants the evaluation to succeed. This is a very hard judgment to make, and it is often clouded by an unwillingness to walk away from an idea the researcher has invested a lot of time in. In this situation, the key question to focus on is whether the implementer is also trying to overcome the practical obstacles to the evaluation. If not, it probably makes sense to walk away and let go of the sunk costs. Better to walk away now than to be forced to later, when even more time and money have been invested.

III)  Share expertise

Many partners are interested in learning more about impact evaluation as part of the process of engaging in an evaluation. Take the time to explain impact evaluation techniques to them and involve them in every step of the process. Offer to run training on randomized evaluations or a Stata workshop for the organization's staff. An organization-wide understanding of RCTs also has important benefits for the research. In Bangladesh, employees of the Bangladesh Development Society were so well versed in the logic of RCTs that they intervened when they noticed girls from surrounding communities attending program activities. Unprompted, they explained to those communities that this could contaminate the control group and asked that only local girls attend.

Researchers often have considerable expertise in specific elements of program design, including monitoring systems and incentives, as well as knowledge of potential funding sources, all of which implementers can value highly. Many researchers end up providing technical assistance on monitoring systems and program design that goes well beyond the program being evaluated. The goodwill earned is invaluable when difficult issues arise later in the evaluation process.

IV)  Provide intermediate outputs

While implementing partners benefit from the final evaluation results, the timescales of project funding and reporting are very different from academic timelines. Often an implementing organization will need to seek funding to keep the program going before the endline survey has even been conducted, and several years before the final evaluation report is complete. It is therefore very helpful to provide intermediate outputs. These can include: a write-up of a needs assessment, drawing on existing data and/or qualitative work, that feeds into project design; a description of similar programs elsewhere; a baseline report that provides detailed descriptive data on conditions at the start of the program; or regular reports from any ongoing monitoring of project implementation the researchers are doing. Usually researchers collect these data but don't write them up until the final paper. Being conscious of the implementer's different timescale and getting these products out early can make them much more useful.

V)  Have a local presence and keep in frequent contact

Partnerships take work and face time. A field experiment is not something you set up, walk away from, and come back to some time later to discover the results. Stuff will happen, especially in developing countries: strikes, funding cuts, price rises, Ebola outbreaks. It is important to have a member of the research team on the ground to help the implementing partner think through how to deal with minor and major shocks in a way that fits the needs of both the implementer and the researcher. Even in the middle of multiyear projects I have weekly calls with my research assistants, who either sit in the offices of the implementer or visit them frequently. We always have plenty to talk about. I also visit the research site once, and often twice, a year. Common issues that come up during the evaluation are lower-than-expected program take-up, higher-than-expected costs of running the program, uneven implementation quality, and new ideas on how to improve the program.

Investment pays off

The benefits of investing in long-term partnerships are high. Some of the most interesting RCTs have come out of long-term partnerships between researchers and implementers that span multiple evaluations. Once trust in the researcher and familiarity with RCTs have been established, implementers are often more willing to randomize different elements of their programs and to try new approaches. Indeed, they often become the drivers of new ideas to test.