
Learning from failures

I’d like to share with you a World Bank blog post reviewing a new book called FAILING IN THE FIELD: WHAT WE CAN LEARN WHEN FIELD RESEARCH GOES WRONG.

It is well known that we learn and progress most when we fail. That is why this book seems interesting to me. It is not yet available at amazon.de and can only be shipped from the US.

Book Review: Failing in the Field – Karlan and Appel on what we can learn from things going wrong

Submitted by David McKenzie on 2016/10/10

Dean Karlan and Jacob Appel have a new book out called Failing in the Field: What we can learn when field research goes wrong. It is intended to highlight research failures and what we can learn from them, sharing stories that might otherwise be told only over a drink at the end of a conference, if at all. It draws on a number of Dean’s own studies, as well as those of several other researchers who have shared stories and lessons. The book is a good short read (I finished it in an hour), and definitely worth the time for anyone involved in collecting field data or running an experiment.

A typology of failures
The book divides failures into five broad categories, and highlights types of failures, examples, and lessons under each:

  1. Inappropriate research settings – this includes doing projects in the wrong place (e.g. a malaria prevention program where malaria is not such an issue), at the wrong time (e.g. when a project delay meant the Indian monsoon season started, making roads impassable and making it more difficult for clients to raise chickens), or with a technically infeasible solution (e.g. trying to deliver multimedia financial literacy in rural Peru using DVDs when loan officers weren’t able to find audio and video setups).
  2. Technical design flaws – this includes survey design errors, such as bloated surveys full of questions with no clear plan for how they will be used in the analysis, and poorly designed questions; inadequate measurement protocols (they have an example where, because their survey offered a couple of dollars in incentivized games, others would pretend to be the survey respondents in order to participate, and they didn’t have good ID systems); and mistakes in randomization protocols (e.g. a marketing firm sorting a list of donors by date of last donation and splitting it in half, so that the treatment group were all more recent donors than the control; see the sketch after this list).
  3. Partner organization challenges – a big part here is realizing that even if the top staff are committed, the lower-tier staff may have limited bandwidth and flexibility. One example comes from programs that tried to use existing loan officers to deliver financial literacy training, only to find that many of them were not good teachers; another from bank tellers who decided to ignore a script for informing customers of a new product because they felt it slowed them down from serving clients as quickly as possible.
  4. Survey and Measurement Execution Problems – these include survey programming failures (e.g. trying to program a randomized question ordering, but it ends up skipping the question for half the sample); misbehaving surveyors (who make up data); not being able to keep track of respondents (as noted in the impersonation example above); and measurement tools not working (as in my RFID example).
  5. Low Participation Rates – they separate this into low participation during intake (when fewer people apply for the program than expected) and low participation after random assignment (when fewer of those assigned to treatment actually take it up). They note how partner organizations are often overconfident on both counts. Examples include financial education in Peru, where only 1 percent of groups assigned to treatment completed the full training, and a new loan product in Ghana, where delays in processing and a cumbersome application process meant that while 15 percent of business owners visited the branch, only 5 percent applied and 0.9 percent received a loan.
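
As an aside on the randomization mistake in item 2: below is a minimal sketch in Python of why sorting donors by recency before splitting the list in half breaks the comparison, and how a simple seeded shuffle avoids it. The donor data and variable names are hypothetical, invented purely for illustration; they are not taken from the book or from the study it describes.

  # Illustrative sketch only: hypothetical donor data, not from the book.
  import random
  from datetime import date, timedelta

  # Hypothetical donor list: (donor_id, date_of_last_donation)
  donors = [(i, date(2016, 1, 1) - timedelta(days=3 * i)) for i in range(1000)]

  # Flawed protocol: sort by recency, then split the list in half.
  # The treatment arm then contains only the most recent donors, so any
  # difference in outcomes is confounded with donation recency.
  by_recency = sorted(donors, key=lambda d: d[1], reverse=True)
  half = len(by_recency) // 2
  flawed_treatment, flawed_control = by_recency[:half], by_recency[half:]

  # Safer protocol: shuffle with a recorded seed, then split, so recency is
  # balanced across the two arms in expectation and the draw is reproducible.
  rng = random.Random(20161010)
  shuffled = donors[:]
  rng.shuffle(shuffled)
  treatment, control = shuffled[:half], shuffled[half:]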

After discussing each of these general categories with some examples, the book then goes into more depth on case studies of six failed projects, providing much more detail on what went wrong and why, as well as the lessons learned.

A few general lessons
Many of the failures seem to have come from a lack of piloting and from working with immature products: studies that launched new products without first sorting out all the implementation issues. They note the challenge of doing this – the researchers and partners are often excited and eager to launch their new product, and adding a step that might take everyone back to the drawing board might seem politically untenable.
One lesson they note is that individual failures tend to snowball quickly if they are not caught, so a single study can end up facing several of these at once.
Another is that researchers find it hard to know when to walk away. They give an example of a project on supply-chain credit, where the researchers lined up funding, a partner bank, a consumer product distributor etc., and then kept hitting roadblocks such as software problems at the bank or changes in design. After three baseline surveys, all of which had to be scrapped, and nearly three years, they finally abandoned the project – but “more than once they considered shutting down the project, but there always seemed to be a glimmer of hope that encouraged them to try again or hold another meeting with the partner”. Another example comes up in one of the case studies – when a research team’s planned project on sugarcane fell through, they hastily put together a poultry loan product instead, which also failed.

Some reactions
I was struck most of all by how mundane many of the stories of failure were – products were launched too soon, they involved additional work for people in the partner organization who didn’t do this work, not enough people applied for a program, someone messed up survey coding, etc. Failure here is not coming from the survey team getting accused of witchcraft, enumerators who have never ridden a motorcycle before claiming they were experts, or the research team all contracting malaria. Instead it comes, by and large, from a set of problems that in retrospect could often have been controlled and may seem obvious to an outside party. This is why sharing these lessons is all the more important – these are not all self-deprecating funny stories, but things researchers may otherwise not share for fear of coming out looking bad.
The second point was that all of the examples in the book came from working with NGOs, reflecting much of the work Dean has done. Working with governments brings a whole new set of ways to fail. From basic bureaucracy problems to political economy to much less ability of researchers to control what is getting done, I am sure there are many lessons that can be drawn from such failures.

View on our site at http://blogs.worldbank.org/impactevaluations/book-review-failing-field-karlan-and-appel-what-we-can-learn-things-going-wrong.

Capacity development is a key to success

Recently I carried out an evaluation of a programme on improving the food security of households in Burkina Faso, Senegal, the Democratic Republic of Congo and Ethiopia. The activities ranged from the formation and strengthening of community-based organisations to the improvement of agricultural productivity, the creation of cereal banks, the improvement of market access and awareness raising on natural resource protection, among many other activities.

The most important result: training and capacity building are the most important and sustainable activities to include in development programmes. If you get them right and follow up on the training measures regularly, you are well on the way to success, just as the often-quoted proverb says: give a man/woman a fish and you feed him/her for a day; teach him/her how to fish and you feed him/her for a lifetime.

Fishermen in Sri Lanka

Not only technical capacities and skills are needed, but also good management of time, human resources and money throughout the whole project cycle. This includes a good and realistic match between these three pillars and the planned activities.

Not surprisingly, the involvement of key stakeholders in project design is another key to success: it massively increases the chances that project and programme activities succeed.

What does history have to do with evaluation?

My first degree is in history and languages, my second in rural development with a specialisation in evaluation. Only after some time did I find the connection between these fields, which at first sight seem to have little in common: in evaluation we often look back to find out how things happened, what worked, what did not work, and why. In history we do more or less the same. If we don’t know the past, we can’t improve the present and the future. And we need language to communicate effectively with different stakeholders.

So, in the end, the combination of historical science and evaluation doesn’t seem so strange anymore. What do you think?

Evaluation in development cooperation – tensions between feasibility and scientific rigour

On Monday, 21 January 2015, I had the chance to attend a very interesting talk on this topic given by Dr. Eva Terberger from KfW in Germany at the Vienna University of Economics. Dr. Terberger is the head of the independent evaluation unit within KfW.

It was interesting to see how, influenced by the global discourse on rigorous evaluations and randomised controlled trials (RCTs) as the gold standard of evaluation (driven especially by Esther Duflo’s Poverty Action Lab), KfW made several attempts to design and carry out experimental evaluations of infrastructure projects. Based on these experiences, Dr. Terberger shared some reflections on RCTs:

  • in fragile contexts, randomisation is hardly possible (for example, in Mali an RCT was jeopardised by the outbreak of the civil war)
  • knowledge can only be transferred between programmes to a limited extent, because context matters so much
  • the cost of carrying out such experiments is extremely high relative to the limited information gained, so they found the method not very cost-effective
  • measuring the impact of large infrastructure projects such as those financed by KfW is almost impossible with this method, as no control groups can be defined

In conclusion, KfW prefers to adopt a mixed methods approach with expert judgements based on data, facts and appraisals.

It will be interesting to see whether other institutions around the world will adopt RCTs as THE standard in development evaluation, or whether RCTs in development evaluation will themselves remain an experiment or a short-term phenomenon.

HAPPY NEW YEAR 2015!

Today I woke up and one of my first thoughts was of the MDGs (Millennium Development Goals) that we should have achieved by the end of this year: eight goals that had the potential to improve the lives of millions of people in the world and to make this world a better place to live. It is now clear that the goals will not be reached by the end of this year on a global scale. Certainly, there have been large improvements and successes in some countries, but, unfortunately, the same cannot be said today for all goals and for all countries.

Millennium Development Goals

Why could the MDGs not be achieved overall? Was the time span too short, were the goals too ambitious, was too little money made available, was the political will lacking, or was implementation poor?

It will be interesting to analyse the achievements of countries that in 2000 had a similar development status, a similar context and similar amounts of money available, and to find out why some are doing better now than others. Are the success factors more endogenous or exogenous? And what comes next? For several years now, scholars, practitioners and politicians have been discussing post-2015 strategies.

I am convinced that goal setting is a fundamental step prior to action. Where do we want to go? Which means do we have available to get there, and how much time? But we know from many failures in development practice that all those who are supposed to reach the goals need to be involved in agreeing on them. In fact, one critique of the MDGs is exactly this lack of involvement, and also that the goals did not respect local contexts and needs. Maybe every country should develop its own post-MDG agenda and a context-specific “roadmap” linked to PRSPs. Fifteen years is quite short for achieving structural transformation in the many areas covered by the original eight goals. Sometimes we forget how much time Western countries needed to get to where they are now.

I wish all those involved in the development of policies and goals and in the implementation of development measures that they learn a lot from the past 15 years, so that they can create a new vision for their countries. Political commitment, democracy, equality, respect for human rights and good governance are certainly among the necessary preconditions for doing better.