How could artificial intelligence help (and hinder) our humanitarian work?

--

We love an acronym at the British Red Cross — and over the last six months, a new one has been on people’s lips: GPT.

In November 2022, a tech company called OpenAI published a new web app — ChatGPT — that set the internet ablaze with excitement. But what is a GPT, why would you want to chat with it, and what does it all mean for humanitarians?

Introducing ChatGPT 🤖💬

I asked ChatGPT to introduce itself; this is what it wrote (lightly edited for style):

“ChatGPT stands for Chatbot Generative Pre-trained Transformer. It is a state-of-the-art ‘generative artificial intelligence (AI)’ that can understand and generate human-like text. Generative AI is a type of artificial intelligence that can create content, like text, images, or even music and video.”

Humanitarian robot talks to its human friends in a flooded town.
Source: Microsoft Image Creator, using the prompt: “Humanitarian robot talks to its human friends in a flooded town.”

What have people used ChatGPT for? 💻

Since ChatGPT launched, people around the world have used it to generate articles, blog posts (I’ve used it to help write this one), social media posts, corporate documents such as job descriptions and report templates, and much more besides.

Here are some standout examples:

What could ChatGPT mean for humanitarians? 🦸‍♂️

About 30 of our colleagues from across almost every department recently got together to explore some of the possibilities and pitfalls of generative AI tools such as ChatGPT. Here are some of our ideas, reflections and questions on how these new tools could potentially augment and automate elements of our work:

Services

  • Could ChatGPT help with triaging our support?
  • Self-reflection and human connection are key parts of casework; AI should not replace that, only facilitate it. Some people have tried using ChatGPT to provide mental health support, but it is unlikely to match a mental health professional at interpreting questions and giving clinical-quality responses. (Anyone looking for mental health support should go to the NHS or use accredited technology recommended by their health practitioner.)
  • We must avoid giving it personal or sensitive information.
  • Writing tenders and analysing contracts.

Research, analytics, evaluation, policy

  • Summarising research and evaluation reports.
  • Suggesting actionable recommendations.
  • Exploring research questions and gaps in knowledge.
  • Writing programming code to support data scientists.
  • Potentially asking it to summarise data — for example: “What is the percentage change in numbers of people experiencing displacement since 2019?”
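Answers like that last one are worth double-checking, because generative AI can get arithmetic wrong while sounding confident. A quick way to sanity-check a percentage-change figure yourself is a few lines of Python (the numbers below are hypothetical, purely for illustration, not real displacement data):

```python
def percentage_change(baseline: float, current: float) -> float:
    """Percentage change from a baseline value to a current value."""
    return (current - baseline) / baseline * 100

# Hypothetical figures for illustration only:
displaced_2019 = 100_000
displaced_2023 = 125_000

print(f"{percentage_change(displaced_2019, displaced_2023):+.1f}%")  # prints "+25.0%"
```

If the model's summary and a check like this disagree, trust the calculation and re-examine the prompt (or the source data) before publishing the number.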

Innovation and product development

  • Supporting design sprints — for example, by making plans, offering ideas, and suggesting audiences and stakeholders to speak to.
  • Analysing user research, audience insights, and other information to find starting points in product development.

Management and administration

  • Writing job descriptions, interview guides, and assessments. Could it even shortlist candidates?
  • Editing and summarising email drafts and other text.
  • Generating ideas for workshop facilitation plans (although it can lack the necessary subject-matter knowledge).
  • Writing step-by-step guides for making an Excel template (e.g. to track performance indicators).

I also asked ChatGPT to give us some of its own ideas. It came up with:

  1. Crisis communication: ChatGPT can be used to develop chatbots that provide real-time information to affected populations during emergencies, helping to alleviate panic and confusion.
  2. Language translation: Generative AI can create accurate translations on-the-fly, allowing humanitarian workers to communicate effectively with diverse communities, regardless of language barriers.
  3. Resource allocation: ChatGPT can analyse and synthesise data from various sources to optimise resource distribution, ensuring aid reaches those who need it most.
  4. Training and education: Generative AI can create realistic training scenarios and educational materials to enhance the skills and knowledge of humanitarian workers.

A genie’s lamp in front of an old computer.
Source: Microsoft Image Creator, using the prompt: “A 1980s desktop computer with green code on the screen emerging from a genie’s lamp, 2D Disney cell animation, the genie looks like Aladdin’s Genie.”

What if generative AI goes rogue? 🦹‍♂️

As powerful as generative AI can be, there are potential pitfalls when it comes to humanitarian work:

  1. Misinformation: The ability of generative AI like ChatGPT to create human-like text could lead to the spread of misinformation or fake news during crises, creating confusion and hampering relief efforts.
  2. Bias and discrimination: If ChatGPT is trained on biased data, it may inadvertently generate content that discriminates against certain groups or communities.
  3. Privacy concerns: The use of generative AI to analyse and synthesise large amounts of data may raise privacy concerns, as sensitive information about individuals could be inadvertently exposed or misused.
  4. Ethical dilemmas: The use of ChatGPT in humanitarian work raises ethical questions about the balance between efficiency and the human touch, as well as the potential dehumanisation of the aid process.

Striking the right balance ⚖️

To harness the full potential of generative AI while minimising harm, it’s crucial to find the right balance. Here are a few suggestions:

  1. Develop ethical guidelines: Establish clear ethical guidelines for the use of generative AI, like ChatGPT, in humanitarian work, ensuring that technology is used responsibly and with respect for human dignity.
  2. Address bias: Continuously monitor and address potential biases in generative AI systems, striving to make them as fair and inclusive as possible.
  3. Prioritise privacy: Implement strong data protection measures to safeguard the privacy of affected communities and respect their rights.
  4. Human-AI collaboration: Utilise generative AI as a tool to support and enhance the work of human professionals, rather than replace them entirely.
  5. Foster collaboration: Encourage collaboration between generative AI developers, humanitarian workers, and affected communities to ensure AI solutions are designed with their needs in mind.

Renaissance painting of a butler offering you a sleek, modern laptop.
Source: Microsoft Image Creator, using the prompt: “Renaissance painting of a butler offering you a sleek, modern laptop.”

This blog post was written in collaboration with ChatGPT, using prompts such as “Write a conversational, non-technical blog post titled: ‘How Could Generative AI, Especially ChatGPT, Help (and Harm) Our Humanitarian Work?’”. The images in this post were generated using Bing Image Creator.

--

Matthew Gwynfryn Thomas
Insight and Improvement at British Red Cross

Anthropologist, analyst, writer. Humans confuse me; I study them with science and stories.