Artificial Intelligence (AI) & ChatGPT in development and humanitarian work – a curated collection

As AI, ChatGPT & the broader discussion of the tools & technology behind them enter the digital #globaldev sphere, I have started a curated collection of the articles & podcasts that have caught my attention so far.

Much of the content is about early experiments with ChatGPT, with a few writers sharing broader reflections on potential opportunities & problems – often along the lines of what has previously been discussed in the ICT4D field around 'digital divides', 'technology not solving deeper-rooted political issues' & 'we don't know yet'...
Not surprisingly, almost all of the authors are based (in organizations) in the Global North, but the collection is not as gender-imbalanced as I would have thought, given that (technical) #globaldev writing on the Internet is still dominated by male writers.

I will update the collection on a rolling basis – so don't hesitate to point me to anything I have overlooked so far!

P.S.: I asked Nightcafe for an AI-generated image of 'humanitarian aid helping people in a disaster' & I particularly liked those blue-ish dots that may indicate UN-branded emergency supplies...

UPDATE 4 September: I reversed the chronological order of the articles so the newest ones now come first & added a few resources I found during the summer...
UPDATE 1 December: I added a few more new resources

How will funders respond to AI-generated proposals?
Is AI the great equaliser for fundraising?
AI in proposal writing is democratising the playing field. No longer do nonprofits have to rely heavily on professional grant writers or the opportunities to network at foreign conferences, often out of reach for many. It’s saving countless hours that staff would spend writing and re-writing the problem and solutions. No more arbitrary rejection of sound ideas based on grammatical errors or typos, especially by non-native speakers, or other factors that disadvantage historically marginalised groups. Most importantly, AI tools allow the redirection of our collective energies towards more impactful and meaningful work – the type of work that makes us come alive and, in the long run, reduces burnout from bureaucratic violence.
Neema Iyer, 27 November 2023, focuses on fundraising aspects in her post.

5 things to be aware of when using AI for media and communications in the international development sector
As you can see, what ChatGPT creates is a script of the imagined interview that draws on some of the key arguments for providing ODA. However, it uses some outdated language such as “developing”, “developed” and “less fortunate”, which demonstrates the limitations of ChatGPT and the lack of nuance in the tool and the prompt. One way around this could be asking in the prompt for the tool to refrain from using certain words.
As we have seen above, AI programmes like ChatGPT are useful for supporting our messaging creation, or predicting the type of questions journalists might ask in an interview, but always remember, AI tools are what they are: a tool, but not a solution. We cannot rely on them to produce all of our content or final products. But they can be used as a springboard to get you started.
Jessica Salter for BOND, 17 November 2023, with insights into applying AI tools in communication for #globaldev work.

What’s next for Emerging AI in Evaluation? Takeaways from the 2023 AEA Conference
The evaluation field as a whole needs to learn more about the low risk, high gain ways we can use emerging AI tools – where results are useful and valid and the potential for inaccuracies and harm are minimal. A non-exhaustive set of questions we might begin with includes:
Linda Raftree for MERL Tech, 6 November 2023, with nuanced reflections & interesting questions to ask yourself when applying AI tools.

AI for Evaluation: a way out of the bullshit?
I think AI is an invitation for us to get out of the bullshit. For that, we first have to get better at recognising bullshit (...). And we need a sector-wide understanding that bullshit is — well, bullshit, in the sense that it will keep us from making progress on any of the big issues the sector is trying to tackle. We can’t edit away climate change with yet another document that regurgitates the risks it poses to humankind, we actually have to get things done. We don’t have time for bullshit.
Sonja Wiencke, 18 September 2023, with more reflections on AI & evaluation.

Taking stock: Generative AI, humanitarian action, and the aid worker
This blog has mapped out three broad conversations around generative AI happening in the aid sector. One of the interesting aspects of these conversations is the broad recognition that it is here. In terms of the humanitarian workforce, generative AI entails an expansion of the digital literacy requirement in the sector, while also promulgating a sense of fear and urgency. Old dilemmas persist – data and cybersecurity issues are not going away – and some new ones are on the horizon.
As illustrated by this post, the change engendered by generative AI appears slated to be different than what is usually promised by either salvational or dystopian tech talk: the change is both mundane and incremental – and represents a fundamental yet little-understood disruption of humanitarianism. In the context of the digital transformation of aid, imaginaries of failure regularly amount to the utopianist and fantastical. Yet, as has been noted, while AI systems can exceed human performance in many ways, they can also fail in ways that a human never would.
Kristin Bergtora Sandvik for Global Policy, 28 July 2023, shares some broader reflections in line with her forthcoming book on 'Humanitarian Extractivism'.

Chatbots/ChatGPT: The New Ally in NGO Proposal Writing and Fundraising
AI can simplify your donor engagement by helping you write notes for a meeting with a donor, draft advocacy papers, personalize fundraising emails, and generate a pitch for a donor based on simple notes from your team. It can even assist in planning your donor engagement. Just prompt the AI with something like, "Develop a plan for engaging with Donor X over the next year, based on their funding patterns and our organization's projects in Sector Y." The AI would then lay out a plan for regular, strategic engagement with the donor and assist you in generating the content of each activity.
Ali Al Mokdad, 2 July 2023, shares some practical tips on LinkedIn on working with ChatGPT in NGO writing.

Caution: Free Generative AI Solutions Like ChatGPT Don’t Understand USAID – Yet
Either way, I would advise much caution when asking LLMs detailed digital development questions. Certainly do not rely on these generic solutions for formal reporting purposes.
Wayan Vota for ICTworks, 22 June 2023, uses some of the major generative artificial intelligence solutions to ask: How Do Public GenAI Tools Interpret USAID Projects?

Why Chayn took down its chatbot in 2020 and what we’ve learned about building culturally-aware chatbots since
Chayn’s journey with chatbots started in 2017 when we launched our Little Window bot. In April 2020, we unceremoniously retired it. This blog post is about why we did that, what we learned, and some insights from a report we did in 2021 for UNICEF’s Safer chatbots project looking at building culturally-aware chatbots. Our research and recommendations fed into UNICEF’s Safer Chatbots Implementation guide but we’re now also sharing our research findings as recently we’ve engaged with many people who are working on chatbots and are finding it useful.
Hera Hussain for Chayn, 7 June 2023; even though Chayn's experience with chatbots is ancient in Internet terms, the reflections are a timely reminder about the pitfalls of using chatbots that are also often powered by AI.

Is the future of imagery and photography being rewritten by AI?
Someone at our event wrote in the chat that “real people with real stories will always be important”, and I think that’s the thing: you can’t use AI to write the kind of human stories INGOs produce. AI will keep growing and improving, but I think INGOs will continue to tell stories and use real photographers. AI might just be another tool in your toolbox.
In the past, I saw NGO campaigns, such as one produced by Shelter in the 1990s, using real stories accompanied by footage posed by models as it would have been unethical to show real people in those situations. Will the NGO sector use AI in a similar way? I guess that depends on how successful those campaigns are.
Your industry has a very strong ethical foundation, and I’d be really interested to know how you end up using the technology. How are you navigating those ethical issues? Are you getting the things you wanted from it? Those are questions I am desperate to know the answers to!
Rachel Erskine talks to Hamish Crooks for BOND about AI & photography, 7 June 2023; that good ethics are AI-proof ethics is one of the take-aways for me.

Generative AI may be the next AK-47
But such boilerplate responses on ethics leave us with more questions: How will AI be regulated in places where the rule of law is weak? How will tech firms designing AI ensure their tech is “governed by ethical principles”, particularly in light of recent layoffs? What happens when these tools fall into the hands of non-state armed groups? And how will refugees and humanitarian actors protect themselves from the potential harms?
Sarah W. Spencer for the New Humanitarian, 30 May 2023, asks some pertinent questions about AI & fragile contexts.

Harnessing the power of Artificial Intelligence to uncover patterns of violence
In a nutshell, this rich collaboration between political and computer scientists from the ICRC, EPFL and ETHZ produced a model that uses artificial intelligence to give us a clearer and faster picture of evolutions in the use and type of violence by armed forces and groups. It will improve accuracy in the classification of events – using categories more closely related to the ICRC’s mandate – and enrich the data with more granularity. The time savings associated with automating this process will allow ICRC colleagues to more fully triangulate events with other data sources, including the ICRC’s own, to obtain greater accuracy. The visualization of patterns of violence will permit us to overlay other information such as political events which might have provoked violence or brought restraint, and allow us to observe whether the topics discussed in the ICRC’s confidential bilateral dialogue with commanders correlate with any changes in behaviour. The model can also be used to track patterns of violence that might threaten our field teams, and thus support decision-making processes on safety and security.
Fiona Terry & Fabien Dany for ICRC's Humanitarian Law & Policy, 25 May 2023. The main reason I included this article is to highlight that 'artificial intelligence' has more humanitarian applications than producing more or less decent text blurbs to standard questions for funding applications or reports.

So I asked ChatGPT to write about the pros and cons of AI for International Development. What do you think?
Seems quite happy to exceed its brief. I never asked it for ‘ways to harness its potential for the greater good’, but it did it anyway. Lots of policy ‘shoulds’ that I did not request.
Very techno-regulatory – pretty feeble on politics and power – e.g. nothing about how AI is likely to interact with fragile/conflict affected/predatory states.
Overall, it seems more balanced on development implications than the UK government White Paper I wrote about last week, but that’s probably down to the framing of the question.
Duncan Green again for From Poverty To Power, 22 May 2023, also experiments with ChatGPT.

How do we Start Thinking About AI and Development?
Doesn’t look like the Foreign Commonwealth and Development Office has been anywhere near it, because there is nothing on AI as a global public good/bad. This is all about short-term British National Interest.
So if we want to go broader, how should we think about AI as a huge opportunity/threat for global development (which it is)?
Duncan Green for From Poverty To Power, 19 May 2023, on a discussion between UK #globaldev NGOs on the topic of AI with a focus on legal frameworks & implications.

The machines are coming for written applications and written reports. Are we ready?
Suddenly, community members and people with lived experience of the issues discussing applications and making decisions is not seen as a risky way to make grants. Getting those you fund together and genuinely listening to them talk about the impact they are having, what is working and what is not working becomes a much more reliable way to get a ‘report’. It becomes easier for everyone to learn about the impact of unrestricted and non-project-based funding, as project-based and linear reports become redundant. Devolving money to local and national grant-makers in the South now makes perfect sense. Everything that many people argue is too ‘risky‘ has now become the perfect mitigation for everyone using ChatGPT to write applications and reports.
Matt Jackson for BOND, 16 May 2023, introduces two different scenarios on the impact of AI on learning & knowledge management.

Stepping back from Data and AI for Good – current trends and ways forward
This paper analyses the development of the initiatives from a rhetorical slogan into a research program that understands itself as a ‘field’ of applications. It discusses recent academic literature on the topic to show a problematic entanglement between the promotion of initiatives and prescriptions of what ‘good’ ought to be. In contrast, we call researchers to take a practical and analytical step back. The paper provides a framework for future research by calling for descriptive research on the composition of the initiatives and critical research that draws from broader social science debates on computational techniques.
The empirical part of the paper provides first steps towards this direction by positioning Data and AI for Good initiatives as part of a single continuum and situating it within a historical trajectory that has its immediate precursor in ICT for Development initiatives.
Ville Aula & James Bowles with an open access article for Big Data & Society, 9 May 2023.

ChatGPT and the most marginalized
Like most people, I think ChatGPT, like other language technology, can be used for good and for evil. And, like most language technology, until the international development sector decides to harness our data appropriately and invest in developing language technology for the most marginalized, the digital divide is just going to get bigger and bigger.
Aimee Ansari on LinkedIn, 20 April 2023, focuses on digital literacy, the dominance of English & a potential broadening digital divide.

Chats with Chatbots BARD and ChatGPT: Humanitarian Applications
Both chatbots waded through truckloads of irrelevant data after I eliminated mainstream media contributions and hot-button topics during the Covid-19 pandemic to produce solid humanitarian aid uses for Artificial Intelligence based on peer-reviewed journals and articles.
Lisa Nowatzki, 6 April 2023, documents her experiments using two AI platforms to produce a text on how they could be used in a meaningful way for humanitarian work.

How could artificial intelligence help (and hinder) our humanitarian work?
About 30 of our colleagues from across almost every department recently got together to explore some of the possibilities and pitfalls of generative AI tools such as ChatGPT. Here are some of our ideas, reflections and questions on how these new tools could potentially augment and automate elements of our work.
Matthew Gwynfryn Thomas for Insight & Improvement at British Red Cross, 27 March 2023, with an interesting initial list of uses of ChatGPT for services, research, product development, administration & more.

Four ways ChatGPT could help level the humanitarian playing field
Make no mistake: ChatGPT is not the silver bullet that rectifies the aid sector’s power imbalances, or that drives its stalled localisation reform promises.
Real change can only come from real people who hold real power – and the willingness to give up some of that power.
But ignoring ChatGPT altogether is a missed opportunity. If humanitarians are serious about creating more just and effective emergency responses, we can use all the help we can get – real or artificial.
Aanjalie Roane for the New Humanitarian, 20 March 2023, focuses on the potential of amplifying local voices, navigating funding mazes, minimizing administrative costs & identifying local organizations with the support of tools like ChatGPT.

Liam Nicoll with IRC's Signpost Initiative
Liam Nicoll, Product Lead with the International Rescue Committee’s Signpost initiative speaks with Humanitarian AI Today (...) about Signpost’s work providing people with accurate, accessible and timely information in times of crisis and about open data sharing, ChatGPT and humanitarian applications of artificial intelligence.
Alexandra Pittman & Brent Phillips with another podcast episode for Humanitarian AI Today, 6 March 2023.

How are humanitarians using AI tools like Chat GPT?
Last month CALP ran a poll on its LinkedIn channel to find out exactly how our followers were using artificial intelligence tools like ChatGPT for their jobs as humanitarians. 77% had not started using these tools yet. We got in touch with some of the other 23% (who are using various tools) to see what we could learn from them. Nine people got back to us, and their stories are below.
Rory Crew, Sonja Ruetzel, Jack Barton, Thomas Byrnes, Emily Wegener, Vimaris Rivera, Joel Kaiser, Stephen Kimotho & Jackson Mushagalusa for CALP Network, 20 February 2023, on how their organizations are early adopters of AI tools.

Gaurav Nemade on Humanitarian AI Today
Gaurav, Christopher and Megan broach the growing impact of AI on the humanitarian sector, challenges and opportunities that humanitarian actors should be conscious of and how humanitarian actors can get involved and contribute to helping train, fine-tune, adapt and improve AI applications, making them suitable for use across the humanitarian community.
Gaurav Nemade, Christopher Hoffman, Megan DeMatteo & Brent Phillips with a podcast for Humanitarian AI Today, 5 February 2023.

ChatGPT: Considering the Role of Artificial Intelligence in the Field of Evaluation
The AI looks so good because a lot of developmental and humanitarian work is based on set approaches and jargon. We play by the book, when writing projects, when monitoring and evaluating change. This has advantages of course (we should not always reinvent the wheel!). But this is also where an AI works best. It is like these professionals good at making any project look cool, using the right words: nice, streamlined, even when reality is messy. And, sadly, what surfaces about many projects and programmes are just these sanitized proposals/reportings: confirmation of preset causal chains, with pre-set indicators… whilst local partners and change makers would tell more interesting and varied stories. It is the sanitized stories which eventually travel up the reporting chain, and into the AI of the future.
Silva Ferretti for the American Evaluation Society, 20 January 2023, with an interesting point about some #globaldev writing already looking as if it was produced by an algorithm that knows the jargon and buzzwords only too well...

Responsible AI in Africa – Challenges and Opportunities
It is a unique collection that brings together prominent AI scholars to discuss AI ethics from theoretical and practical African perspectives and makes a case for African values, interests, expectations and principles to underpin the design, development and deployment (DDD) of AI in Africa. The book is a first in that it pays attention to the socio-cultural contexts of Responsible AI that is sensitive to African cultures and societies.
Damian Okaibedi Eke, Kutoma Wakunuma & Simisola Akintoye with an open access edited book from Palgrave Macmillan published 2 January 2023.

ChatGPT is Coming for Your International Development Job
If we test ChatGPT on international development questions… well the answers speak for themselves with common questions we hear from donor and humanitarian organization staff. Who needs a marketing team when ChatGPT can give you answers like these?
Wayan Vota for ICTworks, 8 December 2022, with some initial examples from ChatGPT; this was the earliest post I could find on the topic.

Humanitarian AI: The hype, the hope and the future
This Network Paper attempts to explore the benefits, opportunities, risks and obstacles to using AI/ML in the humanitarian sector. It seeks to unpack the myths and rhetoric related to AI/ML and evaluate the range of arguments made in favour of or against their use, drawing on literature and interviews with scores of experts across the aid and technology industries.
Sarah W. Spencer for ODI's Humanitarian Practice Network, 8 November 2021, with an interesting 'how it started' background paper.

Nonhuman humanitarianism: when 'AI for good' can be harmful
Yet AI applications still have powerful consequences. Apart from the risks associated with misinformation and data safeguarding, chatbots reduce communication to its barest instrumental forms, which creates disconnects between affected communities and aid agencies. This disconnect is compounded by the extraction of value from data and experimentation with untested technologies. By reflecting the values of their designers and by asserting Eurocentric values in their programmed interactions, chatbots reproduce the coloniality of power.
Mirca Madianou with an academic open access article for Information, Communication & Society, 8 April 2021, focusing on AI chatbots.

