

15 Mar 16

Probability Podcast Ep2: Imprecise probabilities with Gert de Cooman


I happened to be travelling through Brussels, so I stopped by Ghent, the world hotspot for research into imprecise probabilities, and set up an interview with Gert de Cooman. Gert has been working in imprecise probabilities for more than twenty years, is a founding member and former President of SIPTA, the Society for Imprecise Probability: Theories and Applications, and has helped organize many of the ISIPTA conferences and SIPTA Schools.

Topics include fair betting rates, Dutch books, Monte Carlo methods, Markov chains, utility, and the foundations of probability theory. We had a rich, wide-ranging discussion. You may need to listen two (or more!) times to process everything.


Episode on SoundCloud


30 Jan 14

Probability Podcast

I’ve produced a pilot episode of a “Probability Podcast”. Please have a listen and let me know if you’d be interested in hearing more episodes. Thanks!

The different approaches of Fermat and Pascal
Pascal’s solution, which may have come first (we don’t have all of the letters between Pascal and Fermat, and the order of the letters we do have is a matter of some debate), is to start at a point where the score is even and the next point wins, then work backwards, solving a series of recursive equations. To find the split at any score, first note that if, at a score of (x,x), the next point for either player results in a win, then the pot at (x,x) would be split evenly. Player A’s share of the pot at (x-1,x) is then the chance of his winning the next point, times the share due him at (x,x). Once you know the split in the case where player A (or B) trails by one point, you can solve for the case where a player trails by two, and so on.

Fermat took a combinatorial approach. Suppose that the winner is the first person to score N points, and that player A has a points and player B has b points when the game is stopped. Fermat first noted that the maximum number of games left to be played is 2N-a-b-1 (supposing both players bring their scores up to N-1, and then one final game is played to determine the winner). Fermat then counted the number of distinct ways these 2N-a-b-1 games might play out, and how many of those result in a victory for player A or for player B. Since each of these combinations is equally likely, the pot should be split in proportion to the number of combinations favoring each player, divided by the total number of combinations.

To understand the two approaches to solving the problem of points I have created the diagram shown at right.

Suppose each number in parentheses represents the score of players A and B, respectively. The current score, 3 to 2, is circled. The first person to score 4 points wins. All of the paths that could have led to the current score are shown above the point (3,2). If player A wins the next point then the game is over. If player B wins, either player can win the game by winning the next point. Squares represent games won by player A; the star marks a win for player B. The dashed lines are paths that make up combinations in Fermat’s solution, even though these points would not be played out.

Pascal’s solution for the pot distribution at (3,2) would be to note that if the score were tied at (3,3), we would split the pot evenly. Since we are at (3,2), however, there is only a one-in-two chance that we will reach (3,3), at which point there is a one-in-two chance that player A will win the game. Therefore the proportion of the pot that goes to player A is 1/2 + (1/2)(1/2) = 3/4, whereas player B is due (1/2)(1/2) = 1/4.

Fermat’s approach would be to note that there are a total of 4 paths that lead from point (3,2) to the level where a total of 7 points have been played:

(3,2)→(4,2)→(5,2)
(3,2)→(4,2)→(4,3)
(3,2)→(3,3)→(4,3)
(3,2)→(3,3)→(3,4)

Of these, 3 represent victories for player A and 1 is a victory for player B. Therefore player A should get 3/4 of the pot and player B gets 1/4 of the pot.

As you can see, Pascal’s and Fermat’s solutions yield the same split. This is true for any starting point. Fermat’s approach is generally agreed to be superior, as Pascal’s recursive equations can become very complicated. By contrast, Fermat’s combinatorial method can be computed quickly using what we now call Pascal’s Triangle or its related equations. However, both approaches are important for the development of probability theory.
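
To make the comparison concrete, here is a small R sketch (mine, not part of the original post) that computes player A’s share both ways; for the (3,2) example above, both give 3/4.

# Sketch: player A's share of the pot, by Pascal's recursion and by Fermat's
# combinatorial count. N is the number of points needed to win; (a, b) is the
# score for players A and B when play stops; each point is won with probability 1/2.
pascalShare = function(a, b, N) {
	if(a == N) return(1)	# player A has already won
	if(b == N) return(0)	# player B has already won
	0.5 * pascalShare(a + 1, b, N) + 0.5 * pascalShare(a, b + 1, N)
}

fermatShare = function(a, b, N) {
	k = 2*N - a - b - 1	# maximum number of games left to play
	# Player A wins exactly when A takes at least N - a of those k points
	sum(dbinom((N - a):k, k, 0.5))
}

pascalShare(3, 2, 4)	# 0.75
fermatShare(3, 2, 4)	# 0.75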


10 Dec 13

Prize for statistics students?

In order to promote work on statistical simulations, as well as thinking about deeper issues in data analysis, I’m considering starting a prize for students.

Here are my ideas:

* One prize would be for the most innovative use of Monte Carlo methods to model a problem in pure or applied statistics. This prize would be offered in two divisions: undergraduate and graduate.

* One prize would be for an essay that explores the foundations of probability theory or statistics with an emphasis on epistemological issues. This would be open to all students.

* Prizes would be in the $3,000 – $6,000 range.

* The judging committee would be drawn from professors, students and industry.

What are your thoughts? Specifically:

* If you’re a student, is this something you’d apply for?

* If you’re a professor or instructor, do you think your students would be interested in this? Would you pass along the information to them?

* If you represent a company, could you see advantages to sponsoring one of the prizes?

* What changes or suggestions do you have?


22 Oct 13

The disgrace of the mandatory census

In 2011, Audrey Tobias refused to provide Statistics Canada with a filled-out copy of her census form, as mandated by law. Her refusal, and her decision to stand by it, led to a trial in which the 89-year-old faced jail time. Although Tobias stated that her act was a protest against the use of US military contractor Lockheed Martin to process the forms, and not against the mandatory nature of the census itself, this was really a trial of the government’s power to compel citizens to provide it with private information. As Tobias’ lawyer, Peter Rosenthal, argued, compelling Tobias to fill out the form on threat of jail was a violation of the Canadian Charter of Rights and Freedoms and its provisions for freedom of conscience and expression.

The judge in the case, Ramez Khawly, rejected Rosenthal’s argument, but found a way to find Tobias not guilty anyway on the basis of his doubt about her intent in not filling out the form. Perhaps sensing the outrage that might ensue over punishing an octogenarian for a non-violent act of civil disobedience, Khawly was nevertheless too fearful, or obtuse, to uphold an argument that would set a highly inconvenient precedent from the standpoint of the state. The judge both justified and exposed his particular mix of cowardice and compassion by asking, “Could they [the Crown] not have found a more palatable profile to prosecute as a test case?”

I suppose I shouldn’t be surprised by the judge’s politically expedient decision. What shocks me is the reaction of many regular citizens, and in particular of some fellow statisticians. Let me be as clear as possible about this: support for the mandatory census is a moral abomination and a professional disgrace. It should go without saying that informed consent is a baseline, a bare minimum for morality when conducting experiments with human subjects. Forcing citizens to divulge information they would otherwise wish to keep private, on pain of throwing them in a locked cage, does not qualify as informed consent!

There is no point here in arguing that what’s being requested is a minor inconvenience, or an inconsequential imposition. Informed consent doesn’t mean “what we think you should consent to.” More than anything else, statistics is about understanding the inherent uncertainties in measurement, prediction, and extrapolation. That you might not object to answering certain questions gives you no reason to assume your preferences are universal. Finally, note that to at least a small group of revolutionaries, the right not to divulge certain information to authorities was so important that it was written right into the Bill of Rights.

Besides the argument that the census is minimally invasive, I’ve also heard it argued that the value of obtaining complete data outweighs concerns of privacy and choice. To this I say that our desire, as statisticians, for complete and reliable data isn’t some ethical trump card, nor is it the scientific version of a religious indulgence that purifies our transgressions.

Dealing with incomplete and imprecise data isn’t some unique problem that can be overcome at the point of a gun; it’s the very heart and soul of statistics! In the real world, there is no such thing as indisputably complete or infinitely precise data. That’s why we have confidence intervals, likelihood estimates, rules for data cleaning, and a wide variety of sampling procedures. In fact, these sampling procedures, if properly chosen and well executed, can be more accurate than a census.

I call on all those who work for StatsCan or other organizations to refuse to participate in any non-consensual surveys, to stand up for their own good name and the good name of the profession, and to focus their energies on finding creative, scientifically sound, non-coercive ways to obtain high quality data.


31 Jul 13

A probability cookbook

Randomness – Probability = Chance

Chance – Randomness = Fate

Fate + God = Predestination

Probability + Epistemology = Types of Randomness

Subjective Probability = Betting + Coherence

Propensity theory = Probability + Animism

Kolmogorov Axioms = Probability – randomness – chance

Probability + Complexity = Cryptography

Chaos + Ignorance = Randomness

Regression: Data = Signal + Noise

Bayesian:
Posterior = Prior [latex] \times [/latex] Likelihood
Prior + Data [latex] \rightarrow [/latex] Probability

Probabilist:
Probability [latex] \rightarrow [/latex] Frequency

Statistical:
Frequency [latex] \rightarrow [/latex] Probability

Big Data:
Predictive value [latex] \gg [/latex] Model simplicity
High dimensions + Fast computers = De chao ordo


10 Jul 13

Updates to types of randomness

Just a quick note that I’ve gone through and made some revisions to A classification scheme for types of randomness. If you haven’t yet read this post, I’d highly recommend it. If you have, go read it again!


3 Jul 13

The hat trick

In his book Quantum Computing Since Democritus, Scott Aaronson poses the following question:

Suppose that you’re at a party where every guest is given a hat as they walk in. Each hat has either a pineapple or a watermelon on top, picked at random with equal probability. The guests don’t get to see the fruit on their own hats, but they can see all of the other hats. At no point in the evening can they communicate about what’s on their heads. At midnight, each person predicts the fruit on their own hat, simultaneously. If more than 50% of the guests get the correct answer, they’re given new Tesla cars. If less than 50% of the guests get it right, they’re given anxious goats to take care of. What strategy (if any) can they use to maximize their chances of winning the cars?

Answer: there is no strategy that works.

Kidding! Of course there’s a strategy, as you can tell by the length of this post. Did you come up with any ideas? At first glance, it seems like the problem has no solution. If you can’t communicate with the other party goers, how can you find out any information about the fruit on your own head? Since each person was independently given a pineapple or a watermelon with equal probability, what they have on their heads tells you nothing about what you have on your head, right?

My own initial strategy, after considerable (but not enough!) thought, was to bet on regression to the mean. Suppose you see 7 pineapples and 2 watermelons. The process of handing out hats is more likely to generate a pineapple/watermelon ratio of 7 to 3 than 8 to 2 (it’s most likely to generate an equal number of each type, with every step away from a 5/5 ratio less and less likely). Thus, I figured it would be best to vote that my own hat was whichever fruit moved the group closer to the mean. Following my strategy, we all ended up with goats. What did I do wrong?

The key to solving this problem is to realize that the initial process for handing out hats is irrelevant. All that matters is that, from the perspective of a given person, their own hat is a random sample of size 1 from a room known to contain either 7 pineapples and 3 watermelons, or 8 pineapples and 2 watermelons. Thus, each person knows that the probability a randomly sampled guest will have a pineapple on their head is somewhere between 70% and 80%. More precisely, it’s either 70% or 80%. In any case, so long as every person votes as though they are in the majority (that is, votes for the fruit they see more of), the guests who really are in the majority will be guessing correctly.

I simulated this strategy using parties of different sizes, all of them odd (to avoid the issue of having an equal number of each hat type). Here’s the plot, with each point representing the mean winning percentage over 500 trials for each group size. As always, you can find my code at the end of the post.

As you can tell from the chart, once we have 11 or more guests, it’s highly likely that we all win Teslas.

One way to look at this problem is through the lens of the anthropic principle. That is, we need to take into account how what we observe gives us information about ourselves, irrespective of the original process that made each of our hats what they are. What matters is that from the perspective of each party goer, their view comprises a random sample from the particular, finite distribution of pineapples and watermelons that was set in stone once everyone had entered the room. In other words, even if the original probability of getting a pineapple was 99%, if you see more watermelons than pineapples, that’s what you should vote for.

This problem, by the way, is related to Condorcet’s Jury Theorem (featured on the most recent episode of Erik Seligman’s Math Mutation podcast). Condorcet showed, using the properties of the binomial distribution, that if each juror has a better than 50% chance of voting in accordance with the true nature of the defendant, then the more jurors you add, the more likely the majority vote will be correct. And vice versa. Condorcet assumed independence, which we don’t have, because our strategy ensures that every person will vote the same way so long as the difference between types of hats is more than 2.
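
As a quick numeric illustration of Condorcet’s result (my own sketch, separate from the simulation code below), the probability that a majority of n independent jurors votes correctly comes straight from the binomial distribution, and it climbs toward 1 as n grows:

# Sketch: chance that the majority of n independent jurors is correct when
# each juror is right with probability p > 0.5 (n odd, so no ties are possible)
majorityCorrect = function(n, p) {
	sum(dbinom(((n + 1)/2):n, n, p))
}

sapply(c(3, 11, 51), majorityCorrect, p = 0.6)
# roughly 0.65, 0.75, 0.93, rising toward 1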

# Code by Matt Asher for StatisticsBlog.com
# Feel free to modify and redistribute, but please keep this header
set.seed(101)
iters = 500
numbPeople = seq(1, 41, 2)
wins = rep(0, length(numbPeople))

cntr = 1
for(n in numbPeople) {
	for(i in 1:iters) {
		goodGuesses = 0
		hats = sample(c(-1,1), n, replace = T)
		disc = sum(hats)
		for(h in 1:n) {
		
			personHas = hats[h]
			# Cast a vote based on what this person sees
			personSees = disc - personHas
			
			# In case of a tie, the person chooses randomly.
			if(personSees == 0) {
				personSees = sample(c(-1,1),1)
			}
			
			personBelievesHeHas = sign(personSees)
			
			if(personBelievesHeHas == personHas) {
				goodGuesses = goodGuesses + 1
			}
			
		}
		
		# A win requires more than half of the guests to guess correctly
		if(goodGuesses > n/2) {
		
			# We win the cars, wooo-hooo!
			wins[cntr] = wins[cntr] + 1
		}
	}
	
	cntr = cntr + 1
}

winningPercents = wins/iters

plot(numbPeople, winningPercents, col="blue", pch=20, xlab="Number of people", ylab="Probability that the majority votes correctly")

1 Jul 13

Morality needs probability, manifesto addendum

Just added to my Big Bright Green Manifesto Machine. You might need to read this through a couple times; it’s a difficult concept since it lives in a collective blind spot for us:

Doing ethics without probability is like performing surgery with a wooden spoon — it’s a blunt instrument capable of only the most basic operations, and more likely to kill the patient than heal them. Implicitly, we understand this need for probability in making ethical judgements, yet most people recoil when the calculus of probabilities is made explicit, because it seems cold, because the math frightens and confuses them, or because letting odds remain unestimated and unacknowledged allows people to confuse positive outcomes with moral behavior, sweeping hidden risks under the rug when things go well, or claiming ignorance when they don’t. It’s time to acknowledge — directly, explicitly, mathematically — that morality needs probability. For ethics to move forward it must be integrated with our knowledge of randomness and partial entailment.

Here’s an example of how we already take probability into account implicitly. If we retrieve our lost ball from someone’s yard without asking first, we justify this based on our belief that the owner is more likely to be bothered by us interrupting their dinner than by our temporary trespass on their lawn. The greater the probability of great harm, the higher the level of certainty we demand. Our most heated debates involve situations where the probability of harm from both action and inaction is high. If someone’s dog is stuck in a hot car on a sunny day, should you break in and try to save it? Does the chance of the dog dying of heatstroke justify a forced entry that will probably result in expensive damage and an irate owner (though it’s possible they would be grateful instead)? If you decide to break in, how long should you wait first? What prior distribution should you put on the owner’s return time, and how do you update your prior as time goes by? If the waiting time is chi-square distributed with low degrees of freedom, your concern for the dog might be unjustified. If it follows the unreliable friend distribution, you may be that dog’s only hope.

As I hope is becoming clear, questions of morality cannot be resolved without asking questions about probability. If the example above seems trivial (perhaps the owner’s property rights trump your concern for a dog), then substitute a toddler who looks uncomfortably warm for the animal. Now how long do you wait, and how do you deal with the risk that smashing a window might harm the child?


30 May 13

Uncovering the Unreliable Friend Distribution

Head down to your local hardware store and pick up a smoke detector. Pop off the cover and look inside. You’ll see a label that mentions Americium 241, a radioactive isotope. Put on your HEV suit, grab a pair of tweezers and a fine-tipped pen, and remove the 0.3 millionths of a gram of Americium. If you need reading glasses, now might be a good time to put them on. Pick out one of the atoms and label it with an X. Now watch closely. Sooner or later, it will spit out an alpha particle.

Just how long will you have to wait? Decay rates are measured in half-lives, the amount of time needed for half of the particles to decay (any particular atom has a 1/2 chance of decaying in this time as well). The stated half-life for this isotope is 432 years, and your waiting time will follow an exponential distribution. The strange, oddly beguiling quality of this distribution is that its conditional probabilities remain constant. In other words, no matter how long you’ve waited, there’s still a 1 in 2 chance that your Americium isotope will decay in the next 432 years. Waiting for an exponentially distributed event to happen leads to an odd feeling, at least for me. The longer you wait, the more you “expect” the event to happen soon, even knowing that your expected wait time never changes. I wrote about that feeling previously, and created an exponential timer you can try out for yourself. I would suggest setting it to less than 432 years.
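
You can check that claim directly in R (a small sketch of mine, using the 432-year half-life quoted above):

# Memorylessness sketch: a 432-year half-life implies rate = log(2)/432, and the
# chance of surviving another 432 years is 1/2 no matter how long we've waited.
rate = log(2) / 432
pexp(432, rate, lower.tail = FALSE)	# P(T > 432) = 0.5
pexp(864, rate, lower.tail = FALSE) / pexp(432, rate, lower.tail = FALSE)	# P(T > 864 | T > 432) = 0.5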

Cranking uncertainty up to 11
Recently, as I waited patiently for my own particle of Americium to give up its alpha, I got to thinking about conditional uncertainty. No matter how long we wait for our event, we never get any smarter about when it will happen. But we don’t get any dumber, either. Would it be possible, I wondered, to build a kind of “super-exponential” distribution, where the longer we wait, the less we know? In other words, can we take our level of uncertainty up to 11?

Imagine the following scenario: first we sample from a standard uniform distribution, which gives us a number somewhere between 0 and 1. Call this number [latex]U[/latex]. Then we take [latex]U[/latex] (without looking at it!), and plug it into the exponential distribution as the parameter [latex]\lambda[/latex]. This gives us a random variable with a mean waiting time of [latex]\frac{1}{U}[/latex] for the first occurrence. (Note that this mean isn’t the same as the half-life, which is actually the median. To convert from mean to half-life, multiply by the natural log of 2).
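
Here’s the construction in a couple of lines of R (a sketch of mine; the full simulation code appears at the end of the post):

# Sketch of the construction: draw the hidden parameter U, then use it as the
# rate of an exponential waiting time.
U = runif(1)	# hidden rate, uniform on (0, 1)
waitingTime = rexp(1, rate = U)	# mean wait 1/U, half-life log(2)/U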

My prediction was that this method would increase the overall level of uncertainty about our waiting time, and, even worse, make our uncertainty grow over time. Why? The longer we’ve waited, I figured, the more likely our (presumed) [latex]\lambda[/latex] will be small, which in turn means the expectation and variance of our exponential waiting time grow, widening our confidence intervals.

At this point, I had the vague feeling that this probability distribution should already exist as a known thing, that it may even be a version of something I’ve encountered before. Another way to look at the exponential is in terms of the failure rate, or, conversely, the survival rate. When Edward Norton, the unnamed narrator of Fight Club (I know, I know), says that “on a long enough timeline, the survival rate for everyone drops to zero,” this is what he means. Only Norton is referring to the cumulative survival rate, whereas it’s usually most interesting to look at the instantaneous (or marginal, for my economist friends) rate. For the exponential this rate is constant, i.e. flat. There is a distribution specifically crafted to let you simulate failure rates when the rate itself is variable: it’s called the Weibull. It can be used to model products whose expected durability increases with time (note that we are not saying the product becomes more durable over time, but that the fact that it has survived tells us that it is highly durable). Did I just rediscover the Weibull, or one of its friends in the same family of extreme value distributions?

Before breaking out my great big Compendium of Probability Distributions, I dove right in with a quick Monte Carlo simulation. As with all my posts using R, you’ll find the code at the end of this post.

A wave of plots
Here’s the histogram for our sample, with the rightmost tail chopped off (because your screen, unlike mine, is just too damn small):

So it looks like a variant of the exponential, but this plot doesn’t tell us much. To really understand the distribution we have to see it as if we were inside the distribution, waiting for the event to happen. All we know is the process, and we have to come up with a guess about our distribution curve conditional on how long we’ve waited so far. In order to understand this curve, we first need to make a guess about [latex]\lambda[/latex], which is to say [latex]U[/latex]. Can we put a probability distribution on [latex]U[/latex] given how long we’ve been waiting so far? Yes, we most certainly can! And, because our prior distribution on [latex]U[/latex] is uniform (of course), our posterior is just our likelihood (up to a normalizing constant). Here’s what our (posterior) curves look like:

Each curve is a probability distribution on our belief about [latex]U[/latex]. In other words, the peaks represent what we believe to be the most likely value for [latex]U[/latex], given how long we’ve waited so far. The biggest curve is our distribution for [latex]U[/latex] after waiting for one unit of time (let’s just call them “minutes”). As you can tell, if we continue to wait, our maximum likelihood estimate (MLE) for [latex]U[/latex] shifts left, and it looks like our curve flattens out. But wait! Each of these curves has a different area. To treat them like a true probability distribution, we should normalize each of the areas to one. Here’s what those same curves look like after normalization:

From this handsome chart (the same one from the beginning of the post), we can tell that the expected range of values for [latex]U[/latex] is narrowing, not broadening. So could our uncertainty be decreasing along with our wait, as we home in on the true value of [latex]U[/latex]? Let’s take a look at what happens to our additional wait time as time passes.

You can think of these curves as the chance that your friend will show up in the coming minutes, given how long you’ve already been waiting. At the very beginning of your wait, modeled by the orange curve at the far left, you can be almost certain that your friend will show up in the next 10 minutes. But by the time you’ve been waiting for 500 minutes, as seen in the blue curve at the far right, you are only 50% sure that she will show up in the next 500 minutes. Are those probabilities exact? It seems like it, but let’s zoom in on the first 25 minutes:

The X’s represent the median time for your friend’s arrival. If this was always equal to your wait time so far, all of the X’s would be in a straight line at 0.5. From this plot, it’s clear that this is not the case from the beginning, but only becomes so as you wait longer. So what have we got here? At this point I’m at the limit of what I can get out of Monte Carlo. It’s time to do math! (or not, feel free to skip this next section).

The formula
To get the pdf for this distribution, I start by noting that if we had two possible choices for [latex]\lambda[/latex] with a one-half chance each of being picked, then the probability our waiting time would be less than [latex]x[/latex] would be:

[latex] P(t < x) = \frac{1}{2}(1 - e^{-x \lambda_1}) + \frac{1}{2} (1 - e^{-x \lambda_2})[/latex]

where [latex](1 - e^{-x \lambda_i})[/latex] is the cumulative distribution function (CDF) of the exponential distribution with parameter [latex]\lambda_i[/latex]. If you really know your exponential, you may have noticed some similarities with the hyperexponential distribution, but we’re gonna take it to the limit, and create a kind of hyper-hyperexponential. More generally, for a sample of [latex]\lambda_i[/latex]:

[latex]P(t < x) = \frac{1}{n} \sum_{i=1}^n (1 - e^{-\lambda_i x}) [/latex]

Since the [latex]\lambda_i[/latex] are uniformly distributed, the more of them we sample, the more our order statistics are going to look like [latex](\frac{1}{n}, \frac{2}{n}, \frac{3}{n}, \ldots, \frac{n}{n})[/latex] where our sample size is n (proof is left as an exercise for you, my dear reader). Plugging these values in gives:

[latex]P(t < x) = \frac{1}{n}\sum_{i=1}^n (1 - e^{-xi/n})[/latex]

Ready to take it to the limit?

[latex] \lim_{n \rightarrow \infty} \frac{1}{n}\sum_{i=1}^n (1 - e^{-xi/n}) = \int_0^1 (1 - e^{-xt}) \, dt[/latex]


Solving this integral, we get:

[latex] F(x) = \frac{e^{-x} + x - 1}{x}[/latex]
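
As a quick sanity check (a sketch of mine), the closed form agrees with direct numerical integration of the limit above:

# Sanity check: the closed-form CDF matches numerical integration of the
# limiting average of exponential CDFs, here evaluated at x = 3.
x = 3
integrate(function(u) 1 - exp(-x * u), 0, 1)$value	# approximately 0.6833
(exp(-x) + x - 1) / x	# same value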


Did we get it right?
Maybe you trust my math, maybe you don’t and skimmed over the last section. Either way, let’s see how well the math matches the data. Here I’ve plotted the log of the observed (Monte Carlo) density versus what the math says it should be:

Looks like we nailed it, no? But wait, why are the blue points at the beginning of the curve in between the red points? That’s because we took the differences between points on the empirical CDF, so each density reading is really in-between the true pdf values. So far as Monte Carlo confirmation goes, it doesn’t get much better than this.

Introducing, the Unreliable Friend Distribution!
So far as I can tell, other than the hyperexponential, which is merely similar and more limited, this is a brand new distribution. Have you ever been waiting for someone, and the more they make you wait, the more you suspect they’ve forgotten about you completely? In that person’s honor, I’m calling this the Unreliable Friend Distribution (UFD).

As seems appropriate for such a distribution, the expected wait time for the UFD is infinite. Which means that no matter how late your unreliable friend shows up, you should be grateful that he came early.

The code:

# Code by Matt Asher for StatisticsBlog.com
# Feel free to modify and redistribute, but please keep this header

set.seed(943) #I remembered this time!

# Initial MC sampling
trials = 10^7
results = rexp(trials, runif(trials))

# Plot of likelihood curves for U based on waiting time
colr = sample(colours(), 1000, replace=T)

lik = function(p, t){
	return((1 - p)^(t-1)*p)
}

# x-values to plot
p = seq(0,1,0.0001)

# Waiting times
t = 1:20

dataMatrix = matrix(nrow=length(t), ncol=length(p))

for(i in t) {
	dataMatrix[i,]=lik(p,rep(i+1,length(p)))
}

plot(p, dataMatrix[1,], col=colr[1], pch=".", cex=3, bty="n" )

for(i in 2:max(t)) {
	points(p, dataMatrix[i,], col=colr[i], pch=".", cex=3)
}

# Let's standardize the area of each curve
standardMatrix = dataMatrix/rowSums(dataMatrix)

plot(p, standardMatrix[1,], col=colr[1], pch=".", cex=3, bty="n", ylim=c(0,max(standardMatrix)))

for(i in 2:max(t)) {
	points(p, standardMatrix[i,], col=colr[i], pch=".", cex=3)
}

# Find wait time curves conditional on having waited t minutes

# We need tail probabilities, let's find them!
t = 0:1000
tailP = rep(0,max(t))
for(i in t) {
	tailP[(1+i)] = length(results[results>i])/trials
}

show = seq(1,25,1) 

# Blank Plot
plot(0,0,col="white", xlim = c(0,2*max(show)), ylim = c(0, 1), ylab="Probability that your friend will have shown up", xlab="Time")

for(i in show) {
	# Normalizing the probabilities so that tailP[i] = 1
	tmp = tailP[(i+1):(max(t)+1)]
	tmp = tmp * 1/tmp[1]
	tmp = 1-tmp
	
	print(length(tmp[tmp<.5]))
	
	# par(new = TRUE)
	lines(i:(max(t)), tmp, col=sample(colours(), 1), lwd=3)
	
	
	# Find the index of the closest tmp to tmp[i]
	xloc = which.min(abs(tmp[i] - tmp))
	
	# Put a point where we cross time 2t on the curve
	points(i+xloc-1, tmp[i], pch=4, col="black", cex=2, lwd=3)

}




plot(0,0,col="white", xlim = c(0,100), ylim = c(0, 0.25))

t = 1:20

# NOTE: the loop body below was garbled in the original post (the angle brackets
# appear to have been eaten by the HTML), so this is a plausible reconstruction:
# overlay the density of the remaining wait, conditional on having waited i.
for(i in t) {
	tmp = results[results > i] - i
	points(density(tmp), col=colr[i], pch=".", cex=3)
}

tpdf = function(x) {
	toReturn = (-x*exp(-x)+1-exp(-x))/x^2
    return(toReturn)
}

tF = function(x) {
	toReturn = (exp(-x) + x - 1)/x
	return(toReturn)
}

lengths = rep(0,1000)
for(i in 0:1000) {
    lengths[(i+1)] = length(results[results>i])
}

empericalF = 1 - (lengths/trials)
empericalf = diff(empericalF)

# Because this is the perfect size for the dots!
plot(log(tpdf(1:1000)), col=rgb(0,0,1,.2), pch=20, cex=1.3728, xlab="Wait time", ylab="Log of density")
points(log(empericalf), col=rgb(1,0,0,.2), pch=20, cex=1.3728)

21 May 13

What are the chances this headline will still be true in 10 years?

In this post I’ll be discussing the ideas presented in The Half-Life of Facts, by Samuel Arbesman. The book argues that facts, which we often take to be iron-clad, unchanging laws of the universe, are regularly discovered to be false or replaced by updated versions. He argues that while it’s impossible to predict in advance how long a particular fact will endure, in aggregate truth values decay at stable rates. In effect, Arbesman is proposing a kind of Law of Large Numbers for belief.

Arbesman’s thesis, I should say right up front, is highly appealing to me. It fits my belief that all facts are, to some extent, fuzzy, uncertain, contingent, and most importantly prone to revision over time as new information comes in. Of course, some facts, or categories of facts, are more likely to be revised than others. What I hoped to get from Arbesman’s book was a deep analysis of why some facts (or fictions) last longer than others, and how you might quantify different categories of facts from the viewpoint of survival analysis.

What are facts?
Arbesman defines facts as “individual states of knowledge awareness.” His main way of subdividing facts is on the basis of how quickly they change, from those constantly in flux (the current weather) to the very stable (the number of continents). In between are what Arbesman calls “mesofacts,” those which change at an intermediate timescale. Most of our scientific knowledge fits in this category.

When I mentioned the continents, you may have wondered whether I was referring to the number of huge landmasses on earth (a slow-changing fact, by any measure), or what we consider to be a continent. For example, if scientists decide that Madagascar or Baffin Island should be called a continent, the quantity of large land-masses on earth hasn’t changed.

Brawndo, the thirst mutilator!
This may seem like an obvious distinction, but it’s one that Arbesman fails to make. He conflates facts about the earth with nomenclature, confusing words with objects. The worst example of this confusion occurs in the chapter on how facts spread. Arbesman explains how we came to use the word “brontosaurus” for what, by scientific convention, should be called “apatosaurus”, as this name came first. Here the “fact” that changed doesn’t really have anything to do with the nature of dinosaurs; it has to do with the name we’ve decided to give it (which is, of course, a matter of convention, and arbitrary). To Arbesman, though, this issue of nomenclature becomes an “erroneous” fact which has “sadly” persisted for way too long.

The conflation of semantics and understanding allows Arbesman to hide a normative decree in a linguistic assessment. If my explanation of the confusion between the descriptive and prescriptive is, itself, confusing, consider Mike Judge’s wonderful illustration from the film Idiocracy. The main character tries to explain to the people of the future that their plants are dying because they are being irrigated with Brawndo, a sport drink. Here’s how their conversation goes:

Arbesman’s failure to draw a line around what are facts, and what aren’t, leads to even deeper confusions. Making this distinction clear would be, no doubt, a very difficult task. But instead of attempting it, and risking falling into “an epistemological rabbit hole,” Arbesman shrugs and paraphrases the supremely weasely Supreme Court Justice Potter Stewart, who said that no precise, legal definition of pornography was needed, because “I know it when I see it.”

Without a line (however fuzzy) drawn around his subject, Arbesman quickly wanders off from an insightful discussion of the decay rate of information in physics, medicine and scientific models in general, to a broad discussion of the things in our world that change. This transition is completed in the chapter titled “Moore’s law of everything,” in which Arbesman compares exponential growth in computing power to other technologies with accelerating levels of change, like transportation. At this point it’s no longer clear which are the facts under consideration. Is it the maximum number of transistors per chip? Is it our model of how technology changes? Or is it the rate of change of change itself?

Is change a constant?
This last question might be the most interesting one of all. More clearly stated, what is the derivative of the half-life of facts, for a given category? And one step beyond that, are these derivatives themselves stable? I want to know what the evidence says. Are medical facts becoming obsolete faster than ever? Has our knowledge about basic physical concepts like inertia begun to solidify? Arbesman hints at these questions, but just barely. I was very disappointed by his lack of rigor and quantification. Perhaps this field of study still needs its Darwin or John Graunt, someone willing to spend years or decades compiling and analyzing the minutiae of how facts change, before coming up with a well-informed model of truth decay.

My own suspicion? The stability of a fact is proportional to how well the related field of study is established, and to how long that particular fact has been considered valid. Thus the lifespan of facts would be Weibull distributed, or have some variant of the Unreliable Friend distribution (more about that in a future post). Arbesman hints at this possibility when discussing the history of mathematical proof. He notes that the waiting time for a conjecture to be settled follows a heavy-tailed distribution, which makes it difficult to predict how much longer it will take for mathematicians to come to a conclusion about long-standing problems.

But even this attempt at a more nuanced view of half-lives hints at another problem with Arbesman’s incomplete taxonomy of facts, and his unwillingness to specify which facts we are discussing. In the case of mathematics, it seems at first that he might be referring to the underlying proposition itself. This leads me to wonder if Arbesman is positing (at least implicitly) a Schrödinger’s cat view of mathematics, where Fermat’s Last Theorem (FLT) exists in a state of superposition, both true and false and indeterminate all at once, waiting for Andrew Wiles to come along to open the lid, peer into the box, and declare it “true.” Another interpretation is that the fact being discussed is the social phenomenon: mathematicians went from believing that FLT was probably true but definitely unproven, to believing that FLT was indisputably true. Based on his initial definition in terms of awareness, I assume it’s the latter. Unfortunately, no clarification is forthcoming, and Arbesman misses out on an opportunity to comment on the two most interesting twists in the FLT saga, especially from the point of view of evaluating “facts”. For one, Wiles made a crucial mistake in the first official version of his proof, and for the other, Wiles’s proof depends on a newer, and somewhat controversial, mathematical assumption (the Axiom of Choice).

Chart from The Half-Life of Facts showing the increase in transportation speeds over time.

The depths of shallowness
I suppose there’s a limit to how much depth we can expect from a general interest book. Still, I’m disappointed that the author seems to explicitly avoid discussing the basic, hard puzzles of knowledge: How close to the (real?) truth are the “facts” we are learning today? What is the probability that these will later be found to be untrue? Does that probability go to one on a long enough timeline, and to what extent can we quantify that timeline?

Instead of rigorous analysis, Arbesman fills out his short book by rehashing famous stories from well-known research papers (if I have to read about the gorilla on the basketball court one more time, I just might go apeshit). We do get occasional bits of insight, usually in the form of quotes, like Lord Kelvin’s insistence that anything that can be measured, can be measured incorrectly, or John M. Smith’s quip that “Statistics is the science that lets you do twenty experiments a year and publish one false result in Nature.”

This last quote refers to the p-value, which Arbesman does a decent job of explaining, though I’m not sure he fully understands it. He quotes John Ioannidis saying that, “If a study is small, it can yield a positive result more easily due to random chance.” However, the use of a fixed p-value cutoff generally ensures that the exact opposite is true (see this delightfully humorous video about “The power of the test”). The structure of hypothesis testing can be tricky, but since Arbesman is described on the book jacket as an applied mathematician, I’m not willing to grade him on a curve.

There’s one other confusion in Arbesman’s book that I feel compelled to point out, since it may just be the most insidious (and common) epistemological mistake of all: the conflation of facts, predictions, and models. Arbesman mixes them all together in a short passage. In describing computer simulation of a social network, he says:

“When [the researchers] ran this experiment, they discovered that weak ties aren’t that important to spreading knowledge. While weak ties do in fact hold the network together, much as Granovetter suspected, they aren’t integral for spreading facts.”

Did you catch that? Arbesman went from describing a model (in this case a computer simulation) that generated a prediction (about the spread of information), to asserting a fact about our world (weak ties “aren’t integral for spreading facts”).

Am I just being annoying, noxious, always lingering?
Am I being overly fussy (to use the nicer word)? Am I too focused on precise definitions and picky distinctions, at the cost of missing the bigger picture? I don’t think so. The history of scientific progress, and in particular statistics, shows a strong correlation between linguistic and taxonomic advances. We can look back and see how progress is stifled by a lack of common, well-defined terms. For example, some of the early attempts to understand probability disintegrated into confused debates that could have been avoided with a clear statement of terms. More recently, E.T. Jaynes resolved Bertrand’s paradox of the random chord by explicitly defining the characteristics a “random” chord would need to have.

If Arbesman is sloppy with the details, can he at least get credit for presenting the broader story in context? To some extent, I think so. As a general tour of how facts change, there’s no mistaking the basic message: facts do change, and we can be particularly blind (or caught off guard) when it comes to changes which happen at a medium pace. I wish, though, that Arbesman had explicitly connected this broader story with what is, to me, the central lesson: all of our beliefs should come with a measure of doubt!

To understand this doubt mathematically, we use probability theory. To understand it in practice, we use a framework for statistical inference. There are a number of these frameworks available, each with its own strengths and weaknesses. Hume said we could never infer anything from anything, giving us a kind of historical “null hypothesis” of inference, one that’s been soundly rejected by the evidence of scientific and technological progress. Fisher and von Mises maintained that probability should be restricted to long-term frequencies. Keynes and Jeffreys spoke of subjective probabilities and degrees of rational belief. Jaynes viewed probability theory as an extension of logical deduction.

All modern approaches to inference share the assumption that knowledge is not static, and that empirical evidence provides partial information. Full certainty, to the extent that it exists at all, is to be found only in the very long run (mathematically speaking, at the infinite limit). As such, we need to recognize the provisional nature of all facts.