Failing in the field (book review)

I bought Dean Karlan and Jacob Appel’s book Failing in the field: What we can learn when field research goes wrong as a potential addition to our Research Methods course reading list. And while the short and very well-written text provides some practical insights into how to learn from failure in development field research, the further I read, the uneasier I grew about some of the book’s underlying discourses.

First and foremost, the book is about randomized controlled trials (RCTs), which is clearly the authors’ area of expertise, and at no point in the book is the view of field research broadened.

Today, conversations about poverty alleviation and development are much more focused on evidence than they were before, a shift due, in large part, to the radical drop in the price of data and the growth of randomized controlled trials (RCTs) (p.2).
This obviously sets the tone for the book, but it also frames ‘failing’ and ‘field research’ in a particular way right away: Failing happens when you do not do your RCT homework correctly before, during and after field research in communities in the global South. Relationships, power dynamics and, very often, people disappear from the authors’ view, and the discussion of failing never engages with multi- or trans-disciplinary perspectives on learning from and avoiding failure. But let’s return to these aspects a bit later.

A refreshingly engaging narrative on getting field work right
The great strength of the book is that it is part of an interesting new genre of books written in a more conversational style. Very accessible and relatively jargon-free, the short book reminds me of a series of ‘super blog posts’ or edited interviews and is a welcome break from academic textbooks or original research articles. There is a difference between the double-spaced pre-print journal article pdf file that easily runs to fifty pages and this handy hardcover book, which is suitable for undergrads, practitioners and researchers alike to discuss some common challenges and how to do a better research job:

Bottom line, a bad RCT can be worse than doing no study at all: it teaches us little, uses up resources that could be spent on providing more services (even if of uncertain value), likely sours people on the notion of RCTs and research in general, and if believed may even steer us in the wrong direction (p.11).
In the first part, Karlan and Appel present leading causes of research failures. These are definitely worth highlighting, but I would have liked to see maybe one chapter that addresses the broader context, not just technical problems in setting up and implementing a research study.
But there is a ‘black box’, a research team or a partner organization, and I would have liked to look more inside it: How is failure communicated within different institutional frameworks (universities, donors, governments, communities)? Where is the modern equivalent of Latour’s Laboratory Life? And, much simpler: Did you talk to an anthropologist, geographer or engineer about the study?
The majority of researchers who are cited by name in the book are men, and there is often an ‘I failed’ tone to the insights rather than a ‘we fixed it’ one. There are definitely some ‘voices’ missing in the conversation, and I wonder whether that could be a major aspect of failure that has not been addressed so far.

Make (tough) decisions and pull out if necessary
One of the important common themes in the second part, with its six longer case studies, is finding ‘good enough’ solutions, fitting different contextual layers, to the research challenges outlined in the previous part. The challenges of the case study on credit and financial literacy in rural Peru (chapter 6) sum up the sentiment well:
(I)ntegrating technology is more of a hurdle than the researchers had initially thought, and future efforts will likely require a bigger investment to lay the groundwork for success. Not only must the educational content be high quality, but there are prerequisites, too: well-trained and charismatic trainers, functional equipment, power, and reasonably tech-savvy users, to name a few (p.82).
Some of these challenges have already been discussed in the ICT4D community, but I am actually a bit more worried about a successful project of that nature than about the failing one introduced in the book. Does better technology lead to better results, or to a tech-solutionism that looks good for researchers but maybe not for the rural communities?

As chickens are back in the development spotlight thanks to Bill Gates and his critics, the poultry loan case study (chapter 9) is a good reminder of how tricky implementation is:

Fixed on an idea, with a grant disbursed and partially spent, and a project team ready to go to the field, the dissolution of the original sugarcane plan triggered a similar loss frame: We are going to lose our opportunity to learn, balk on this grant, and disappoint our funders; what can we do? (pp.112-13).
Regardless of your good intentions, methodological sophistication or data-driven plans, development interventions will remain complicated and messy, and often cannot easily be scaled up from one local context to the next.

Don’t leave the field to the political scientists & economists!
As I said in the beginning, I wish there was maybe one extra chapter to address qualitative aspects, the location of ‘the field’ (as opposed to multi-sited research including sites in the global North and along the aid chain(s)), or aspects of culture and power that other disciplines have engaged with for a long time. Maybe some projects even fail because of their multi- or trans-disciplinary nature, and because researchers overestimated the impact it would have on field research?
And while the book certainly is proof of the importance of communicating failure and of writing about field research differently, I would have liked to see those communicative aspects become part of the five mantras for better research that the authors propose in their conclusion.

Engaging in a well-understood context and careful data collection are certainly important points to take away from the book, but I am a little worried that people, culture and other ‘softer’ aspects may get ignored too easily:
Cultivate buy-in from senior management down to front-line managers and employees (p.134).
This language perpetuates a managerial discourse, and possibly a mindset, that may ultimately contribute to some failures rather than eliminate pitfalls.

At the same time, the book’s biggest achievement is as a conversation starter about better and ‘good enough’ field research, and about doing a research job ethically and well.

Failing in the field is a great primer for students and non-academic researchers who are embarking on the exciting journey of data collection and fieldwork. But while getting research design and implementation right, we should remember not to leave ‘the field’ to the political scientists and economists alone ;)!

Karlan, Dean & Appel, Jacob: Failing in the field: What we can learn when field research goes wrong. ISBN 978-0-691-16189-1, 176pp, GBP 24.95, Princeton, NJ: Princeton University Press, 2016.
