Landmine detection revisited: the inverse unicorn problem

A couple of weeks ago I wrote about an interesting idea to clear landmines using the power of the wind. A reader asked me to comment more on the value of using these wind-powered “Kafons” to do an initial assay of a suspected minefield, an idea I mentioned at the end of my video on the subject. In particular, how good would the devices be at detecting the existence of mines if they were very sparse in an area? In a sense, this is the inverse of the unicorn problem; instead of trying to find every last mine, we’re concerned with finding the very first one, if indeed any landmines are there. Put another way: how hard do we have to look before we can safely conclude that unicorns don’t exist?

Download the code for this simulation.

The animated plot shown at the top of this post represents a small sample of data from the simulation I ran. Each blue dot shows the progress of testing at one location to see whether that field has mines. I’ve cut off the testing at $30,000, which is 600 kafons at their estimated cost of $50 each (feel free to go into the code and change the cutoff to whatever you want). The dots at the top, with numbers above them, represent testing that used all 600 kafons without finding any mines. Does this mean no mines exist? Sometimes, but not always. The number above the dot shows the true number of mines in that field during that particular simulation. As you can see, it’s very possible for the field to have several mines, yet still not have any detected, even after trying with hundreds of kafons. In the entire simulation, there were 283 trials with a single mine in the field. On 36 of those occasions, the mine was detected (and, in the simulation, detonated). The other 87% of the time, we spent a (virtual) $30,000 and failed to detect its presence.

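The real code is in the download link above, but here is a minimal sketch of the basic loop behind each trial: release kafons one at a time, at $50 apiece, until either a mine is detonated or the $30,000 budget is exhausted. The per-kafon, per-mine hit probability below is an invented placeholder, not a number taken from my simulation.

```python
import random

KAFON_COST = 50          # estimated cost per kafon
MAX_KAFONS = 600         # 600 * $50 = the $30,000 cutoff
P_HIT_PER_MINE = 0.002   # invented per-kafon, per-mine detonation chance

def run_trial(n_mines, rng=random):
    """Release kafons one at a time until one detonates a mine or the budget runs out.
    Returns (dollars_spent, mine_found)."""
    for k in range(1, MAX_KAFONS + 1):
        # chance this kafon rolls past every mine in the field without hitting any
        p_miss_all = (1.0 - P_HIT_PER_MINE) ** n_mines
        if rng.random() > p_miss_all:
            return k * KAFON_COST, True     # found (and detonated) a mine
    return MAX_KAFONS * KAFON_COST, False   # spent $30,000 and found nothing

# Example: fields that each contain exactly one mine
trials = [run_trial(n_mines=1) for _ in range(1000)]
hits = sum(found for _, found in trials)
print(f"mine detected in {hits / len(trials):.0%} of single-mine trials")
```
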
I’ve shown the results as an animation so that you can put yourself in the position of someone trying to decide whether to continue testing a field for mines or move on to another location. Each new test costs additional funds, but when do you stop? My $30,000 cutoff is arbitrary. It represents a best guess on my part as to when it would be better to use other methods to test for landmines. These kinds of optimal stopping points can be extremely difficult to determine, especially when, as in this case, those testing for landmines will have to deal with the problem of sunk costs: imagine you’ve just spent $30,000 testing for mines in a field you suspect is dangerous, but you haven’t found anything. The very next kafon, at a cost of just $50, could be the one to find a mine. Do you continue? In my simulation, with this particular distribution of probabilities, once no mine was found in the first 300 kafons, it was very unlikely one would ever be found (although, as mentioned, even when no mine was detected after 600 kafons, the field was still way more likely than not to have mines).

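As a side note on the sunk-cost trap: if we somehow knew the per-kafon hit probability, the chance that the next 300 kafons find the mine would be exactly the same whether or not the first 300 came up empty, because each kafon is an independent trial. What 300 empty-handed kafons really change is our belief about which field we are in. The sketch below only illustrates the first point, again with an invented hit probability.

```python
def p_detect(p_hit, n_kafons):
    """Chance that at least one of n_kafons detonates a single mine, assuming each
    kafon independently hits it with probability p_hit (an invented number here)."""
    return 1.0 - (1.0 - p_hit) ** n_kafons

p = 0.002
first_300 = p_detect(p, 300)
# chance kafons 301-600 find the mine, given that the first 300 found nothing
next_300_given_miss = (p_detect(p, 600) - first_300) / (1.0 - first_300)
print(first_300, next_300_given_miss)  # the same number: money already spent doesn't change the odds
```
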
Based on the results of the simulation, using kafons to detect mines is cheap when the probability of finding a mine is very high, but in that case I would imagine there’s already strong evidence that landmines exist. In the case where landmines are more sparse, testing with kafons is expensive and the question of when to stop testing is difficult. Note that in real-world use, we don’t know the underlying probability of a mine in the field; we could be anywhere along the x-axis of the plot shown at top. All we see, in real time, is a rising cost and no mine found.

If we know how much (new) area is covered by each additional kafon, and if we assume that kafon coverage and landmine placement are randomly distributed (at least from our position of ignorance), then we can come up with probability estimates for the chances that a field has an undetected landmine after each additional kafon has had a chance to detect mines. The biggest challenge is that, unless I’m missing something, this exact probability is unanswerable unless we assume a prior distribution on the number of landmines in our field.

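For example, suppose we put a Poisson prior on the number of mines (my assumption, not something the data gives us) and assume each kafon independently detects each mine with some fixed probability. Then after K kafons with no detections, a single mine has evaded everything so far with probability q = (1 - p)^K, the posterior count of mines is Poisson with mean shrunk by q, and the chance that at least one mine is still sitting in the field has a simple closed form, sketched below with invented numbers.

```python
import math

def p_mine_remains(prior_mean, p_hit, kafons_used):
    """Posterior probability that at least one mine is still in the field after
    kafons_used kafons have found nothing, under a Poisson(prior_mean) prior on the
    mine count and an assumed per-kafon, per-mine detection probability p_hit."""
    q = (1.0 - p_hit) ** kafons_used        # chance one mine evades every kafon so far
    # Given no detections, the mine count is Poisson(prior_mean * q),
    # so the chance the field is still dangerous is 1 - exp(-prior_mean * q).
    return 1.0 - math.exp(-prior_mean * q)

# Example: prior expectation of 2 mines, 0.2% per-kafon hit chance (both invented)
for k in (0, 100, 300, 600):
    print(k, round(p_mine_remains(prior_mean=2.0, p_hit=0.002, kafons_used=k), 3))
```

Swap in a different prior and you get a different answer, which is exactly the problem described above.
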
We wanted to let the data speak for itself in terms of whether we have hidden mines or not, but in the end, our final beliefs will depend as much on our previous hunch as on the data itself. Which is, in effect, exactly what we were hoping to avoid by sending out kafons to detect mines.

Shown below is a full plot of all 2,800 trials; each dot at the top might represent dozens of failed attempts to find a mine.

One comment

  1. Catherine Villara Giumini

    I am a student working on a digital storytelling project about demining.
    Can you please simplify your statistics into a summary that I can translate
    into a digital story? Thank you, Cathy