Buzzword 5: AI for good, in this case disaster risk reduction
- Ildiko Almasi Simsic
- Sep 22, 2025
- 3 min read
This one hit my sweet spot. Technology and working with people, together, in service of decisions that matter. For Episode 5 of The No Nonsense Sustainability Podcast I invited David Daou to talk about AI for disaster risk management. David lives at the overlap of modelling and practice, which is exactly where I think our field needs to be.
Why this episode
Too many AI conversations stop at the shiny part. We say “garbage in, garbage out,” then skip the hard question that follows, which is how to get good data in the first place. In disaster work the cost of skipping that question is very real. False alarms erode trust. Missed events cost lives and livelihoods. I wanted to talk about building models with people, not for people, and about digitising local knowledge so it can shape the model rather than sit in a footnote.
What we dug into
1) From “for people” to “with people”
We talked about shifting from scientists modelling for communities to modelling with them. That means participatory mapping, local hazard timelines, seasonal calendars, and short workshops to validate what the model thinks. You can call it co-production. I call it listening and then proving you listened.
2) The missing middle of “good data”
“Garbage in” is obvious. The “how” of good data is less glamorous. David broke it into practical habits: clear problem framing, fit-for-purpose indicators, ground truth collection, simple metadata, and a fast path to correct mistakes. No fancy pipeline needed.
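To make one of those habits concrete, here is a minimal sketch, entirely my illustration rather than anything David prescribed, of what simple metadata and a fast path to correct mistakes can look like for a single ground-truth observation. The field names are hypothetical; the point is that provenance and corrections live next to the data, not in someone’s inbox.

```python
# A minimal sketch (not from the episode) of one ground-truth record with
# simple metadata and a built-in correction path. All field names are
# hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Observation:
    indicator: str          # fit-for-purpose indicator, e.g. "flood_depth_cm"
    value: float
    place: str
    observed_on: date
    source: str             # who collected it and how
    corrections: list[str] = field(default_factory=list)  # fast path to fix mistakes

    def correct(self, new_value: float, reason: str) -> None:
        """Record the old value and why it changed, then update in place."""
        self.corrections.append(f"{self.value} -> {new_value}: {reason}")
        self.value = new_value

obs = Observation("flood_depth_cm", 40.0, "Lower Ward", date(2025, 3, 2),
                  "community mapper, staff gauge")
obs.correct(55.0, "re-read the gauge photo after the validation workshop")
```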
3) Context beats elegance
Some general models travel well. Others do not. A flood model tuned for one delta can be a poor fit for an upland catchment with different soils, slopes, and settlement patterns. The fix is not always a bigger model. It is often better local inputs, different thresholds, and the humility to change the loss function to match the decision you need to make.
4) Local knowledge is data
We spoke about digitising indigenous and community knowledge in ways that survive the AI pipeline. That can mean codified place names and landmarks, community-maintained risk maps, and short text narratives that explain “why this place floods” or “where the fire usually jumps.” Numbers are the core of machine learning, but the way we turn stories into features decides whether the model fits the place.
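As a rough illustration of what that digitising step can look like, here is a small sketch that turns community-recorded narrative tags into features a model can actually learn from. The places, landmarks, and tags are made up.

```python
# A minimal sketch (not from the episode) of turning digitised community
# knowledge into model features. Tag names like "floods_after_heavy_rain"
# are hypothetical.
import pandas as pd

# Community-maintained records: one row per place, mixing codified landmarks
# and short narrative tags collected in workshops.
records = pd.DataFrame([
    {"place": "Lower Ward", "landmark": "old ferry crossing",
     "narrative_tags": ["floods_after_heavy_rain", "drains_blocked"]},
    {"place": "Hill Ridge", "landmark": "water tower",
     "narrative_tags": ["fire_jumps_the_gully"]},
])

# Each narrative tag becomes a binary column, ready to sit alongside
# whatever sensor or satellite features already exist.
tag_features = (
    pd.get_dummies(records["narrative_tags"].explode())
    .groupby(level=0).max()
)
features = records[["place", "landmark"]].join(tag_features)
print(features)
```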
5) Evaluation that reflects the real risk
Accuracy is a comfort metric. In early warning, what matters is the cost of being wrong. For flash floods or landslides you might accept more false positives to avoid deadly false negatives, then manage alert fatigue with better targeting and messaging. Make the metric match the harm you want to avoid.
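Here is one way to make that concrete, offered as a sketch rather than a recipe: an evaluation that prices a missed event far higher than a false alarm. The cost numbers below are hypothetical and, in practice, belong to the community and responders, not to the data scientist.

```python
# A minimal sketch (not from the episode) of a cost-weighted evaluation:
# missed events (false negatives) cost far more than false alarms.
import numpy as np

def expected_cost(y_true, y_pred, cost_false_negative=50.0, cost_false_positive=1.0):
    """Average cost per decision under an asymmetric cost structure."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    fn = np.sum((y_true == 1) & (y_pred == 0))  # missed events
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false alarms
    return (fn * cost_false_negative + fp * cost_false_positive) / len(y_true)

# Two warning systems with identical 80% accuracy, very different harm:
y_true        = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0]
misses_events = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # never warns
over_warns    = [1, 1, 0, 0, 1, 1, 0, 0, 0, 0]   # warns too often
print(expected_cost(y_true, misses_events))  # 10.0 per decision
print(expected_cost(y_true, over_warns))     # 0.2 per decision
```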
Off mic, but important
We both see the private sector and startups as a big part of what can support AI in disaster risk management. We move faster, innovate at a higher rate, and there are several ready-to-use tools that could be procured. Except that procurement rules are often skewed against small companies that have not been around for long. Collaboration was the only buzzword I buzzed in this episode, because everyone talks about it, yet as a tech startup CEO I have no idea what it actually means in practice. We run into the same wall in procurement, despite having robust data protection standards and a track record of building AI tools.
What I learned from David
Great disaster models are social artefacts as much as technical ones. The model works because the people around it trust it, and they trust it because they helped shape it, they can see how it learns, and it respects their knowledge.
What I am still wrestling with
How do we fund the boring parts that make the system reliable? Who owns the long tail of maintenance once the pilot ends? How do we balance openness with safety when maps can help responders and harm communities at the same time?
This episode is for anyone who wants AI that helps real people on real days, not just in slide decks. If you listen, I would love your examples. Where did local knowledge change the model? What one practice made your warnings more trusted?