June, 2010


19 Jun 10

The perfect fake

Usually when you are doing Monte Carlo testing, you want fake data that’s good, but not too good. You may want a sample taken from the Uniform distribution, but you don’t want your values to be perfectly evenly spaced. In other words, if you were to order your sample values from lowest to highest, you don’t want them to all be equidistant. That might lead to problems if your underlying data or model has periods or cycles, and in any case it may fail to provide correct information about what would happen with real data samples.

However, there are times when you want the sample to be “perfect”. For example, in systematic sampling you may wish to select every 10th object from a population that is already ordered from smallest to biggest. This method of sampling can reduce the variance of your estimate without introducing bias. Generating the numbers for this perfect sample is quite easy in the case of the Uniform distribution. For example, R gives you a couple easy ways to do it:

# Generate a set of 100 equidistant values between 5 and 10 (inclusive)
x <- seq(5,10,length=100)
 
# Generate every 12th integer between 50 and 1000
x <- seq(50,1000,12)

When it comes to other distributions, grabbing a perfect sample is much harder. Even people who do a lot of MC testing and modeling may not need perfect samples every day, but it comes up often enough that R should really have the ability to do it baked right into the language. However, I wasn’t able to find such a function in R or in any of the packages, based on my searches at Google and RSeek. So what else could I do but roll my own?

# Function returns a "perfect" sample of size n from distribution myDist
# The sample is based on evenly spaced quantile levels strictly between 0 and 1
# If the distribution takes additional parameters, these can be specified in the vector params
# Created by Matt Asher of StatisticsBlog.com
perfect.sample <- function(n, myDist, params = c()) {
	x <- seq(0,1,length=(n+2))[2:(n+1)]
 
	if(length(params)) {
	  	toEval <- paste(c("sapply(x,q", myDist, ",", paste(params,collapse=","), ")"), collapse="")
	} else {
	  	toEval <- paste(c("sapply(x,q", myDist, paste(params,collapse=","), ")"), collapse="")
	} 
 
	eval(parse(text=toEval))
}

This function should work with any distribution that follows the naming convention of using “qname” for the quantile function of the distribution, and that takes a vector of probabilities as its first parameter. The histogram at the top of this post shows the density of the Laplace, aka Double Exponential, distribution. Here is the code I used to create it:

# Needed library for the Laplace distribution (provides qlaplace)
library(VGAM)
z <- perfect.sample(5000,"laplace",c(0,1))
hist(z,breaks=800,col="blue",border=0,main="Histogram from a perfect Laplace sample")

As you can see, my function plays nice with distributions specified in other packages. Here are a couple more examples using standard R distributions:

# Examples:
perfect.sample(100,"norm")
 
# Sampling from the uniform distribution with min=10 and max=20
z <- perfect.sample(50,"unif",c(10,20))
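As an aside, here’s a sketch of an alternative way to write the same function without building a string for eval(). This variant is my own, using match.fun and do.call, and it relies on the same assumption that a quantile function named “qname” exists:

# Alternative version (my own sketch, not the one used for the plots above):
# look up the quantile function by name and call it directly
perfect.sample2 <- function(n, myDist, params = c()) {
	x <- seq(0,1,length=(n+2))[2:(n+1)]
	qFun <- match.fun(paste("q", myDist, sep=""))
	do.call(qFun, c(list(x), as.list(params)))
}
 
# Should agree with perfect.sample, e.g.
# all.equal(perfect.sample2(50,"unif",c(10,20)), perfect.sample(50,"unif",c(10,20)))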

Besides plotting the results with a histogram, there are specific tests you can run to check whether values are consistent with sampling from a known distribution. Here are tests for uniformity and normality. You should get a p-value at (or very near) 1 for both of these:

# Test to verify that this is a perfect sample, requires library ddst
# Note: the test checks against Uniform(0,1), so generate a fresh (0,1) sample first
library(ddst)
z <- perfect.sample(50,"unif")
ddst.uniform.test(z, compute.p=TRUE)
 
# Needed for the Shapiro-Wilk Normality Test
library(stats)
z = perfect.sample(1000,"norm")
shapiro.test(z)
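For contrast, an ordinary random sample of the same size will typically give a much less extreme p-value (this quick comparison is my addition):

# An ordinary, imperfect sample for comparison; its p-value bounces around
shapiro.test(rnorm(1000))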

If you notice any bugs with the “perfect.sample” function please let me know. Also let me know if you find yourself using the function on a regular basis.


18 Jun 10

Those dice aren’t loaded, they’re just strange

I must confess to feeling an almost obsessive fascination with intransitive games, dice, and other artifacts. The most famous intransitive game is rock, scissors, paper. Rock beats scissors.  Scissors beats paper. Paper beats rock. Everyone older than 7 seems to know this, but very few people are aware that dice can exhibit this same behavior, at least in terms of expectation. Die A can beat die B more than half the time, die B can beat die C more than half the time, and die C can beat die A more than half the time.

How is this possible? Consider the following three dice, each with three sides (For the sake of most of this post and in my source code I pretend to have a 3-sided die. If you prefer the regular 6-sided ones, just double up every number. It makes no difference to the probabilities or outcomes.):

Die A: 1, 5, 9
Die B: 3, 4, 8
Die C: 2, 6, 7

Die A beats B 5/9 of the time which beats C 5/9 of the time which beats A 5/9 of the time. Note that the ratios don’t all have to be the same. Here’s another intransitive trio:

Die A: 2, 4 ,9
Die B: 1, 8, 7
Die C: 3, 5, 6

Take a moment to calculate the relative winning percentages, or just trust me that they are not all the same…. Did you trust me? Will you trust me now, and in the future?

In order to find these particular dice I wrote some code in R to automate the search. The following functions calculate the winning percentage for one die over another and check for intransitivity:

# Return the proportion of the time that d1 beats d2. 
# Dice need to have same number of sides
calcWinP <- function(d1,d2) {
	sides = length(d1)
	d1Vd2 = 0
 
	for(i in 1:sides) {
		for(j in 1:sides) {
			if(d1[i] > d2[j]) {
				d1Vd2 = d1Vd2 + 1
			}
		}
	}
 
	return( d1Vd2/(sides^2) )
}
 
# Assumes dice have no ties. 
# All dice must have the same number of sides.
# How many times do I have to tell you that?
checkIntransitivity <- function(d1,d2,d3) {
	d1beatsd2 = calcWinP(d1,d2)
 
	if (d1beatsd2 > 0.5) {
		if(calcWinP(d2,d3) > 0.5) {
			if(calcWinP(d3,d1) > 0.5) {
				return(TRUE)
			}
		}
	} else {
		# Check if d1 beats d3, if so check if d3 beats d2
		if(calcWinP(d1,d3) > 0.5) {
			if(calcWinP(d3,d2) > 0.5) {
				return(TRUE)
			}
		}
	}
	# Regular old transitivity.
	return(FALSE)
}
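Once these functions are defined, it’s easy to check the claims about the two trios above (this snippet is my addition):

# First trio: each die beats the next 5/9 of the time
calcWinP(c(1,5,9), c(3,4,8))   # A beats B: 0.556
calcWinP(c(3,4,8), c(2,6,7))   # B beats C: 0.556
calcWinP(c(2,6,7), c(1,5,9))   # C beats A: 0.556
 
# Second trio: the middle matchup is lopsided
calcWinP(c(2,4,9), c(1,8,7))   # A beats B: 0.556
calcWinP(c(1,8,7), c(3,5,6))   # B beats C: 0.667
calcWinP(c(3,5,6), c(2,4,9))   # C beats A: 0.556
 
checkIntransitivity(c(1,5,9), c(3,4,8), c(2,6,7))   # TRUE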

I then checked every possible combination. How many unique configurations are there? Every die has three numbers on it, and you have three dice for a total of nine numbers. To make things simpler and avoid ties, no number can be used more than once. If the sides of each die were ordered and the dice themselves were ordered, you’d have 9! different combinations, which is to say a whole mess of them. But our basic unit of interest here isn’t the digits, it’s the dice. So let’s think about it like this: for die A you can choose 3 of the 9 numbers, for die B you can pick 3 of the remaining 6, and for die C you’re stuck with whatever 3 are left. Multiply this all together:

choose(9,3)*choose(6,3)

and you get 1680 possibilities. But wait, what’s that you say? You don’t care which die is A, which is B, and which is C? Fantastic. That reduces the number of “unique” configurations by a factor of 3!, which is to say 6, at least if my back-of-the-envelope calculations are correct. Final tally? 280.
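If you’d rather have R do that arithmetic (a quick sanity check of my own):

# 1680 labeled configurations, divided by the 3! ways to label the dice
choose(9,3) * choose(6,3) / factorial(3)   # 280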

Not bad. Unfortunately, there was no obvious way to enumerate each of these 280 combinations (at least, none obvious to me). So I ended up using a lot of scratch work and messing around in the R console until I had what I believed to be the right batch. Sorry, I no longer have the code to show you for that. After testing those 280 configurations, I found a total of 5 intransitive ones, including the 2 trios shown previously and the following 3 sets:

Die A: 2, 9, 3
Die B: 1, 6, 8
Die C: 4, 7, 5

Die A: 7, 1, 8
Die B: 5, 6, 4
Die C: 9, 3, 2

Die A: 7, 3, 5
Die B: 2, 9, 4
Die C: 8, 6, 1

Did I make a mistake? According to my calculations, 5/280 of the combinations are intransitive. That represents 1.786% of the total. How might I verify this? That’s right, it’s Monte Carlo time.

Using the following code, I created all 9! permutations of dice and sides and checked each of those 362,880 arrangements, then also sampled random arrangements many, many times:

library(e1071) # Makes making permutations easy
allPerms = permutations(9)
intransFound = 0
for(i in 1:dim(allPerms)[1]) {
	d1 = allPerms[i,1:3]
	d2 = allPerms[i,4:6]
	d3 = allPerms[i,7:9]
	if(checkIntransitivity(d1,d2,d3)) {
		intransFound = intransFound + 1
	}
}
 
print(intransFound)
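# (Added check) Exact fraction of the 9! arrangements that are intransitive;
# compare with the hand-counted 5/280
print(intransFound / dim(allPerms)[1])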
 
found = 0	
tries = 100000
for(i in 1:tries) {
	one2nine = sample(1:9,9)
	d1 = one2nine[1:3]
	d2 = one2nine[4:6]
	d3 = one2nine[7:9]
 
	if( checkIntransitivity(d1,d2,d3)) {
		found = found + 1
		# Uncomment below if you want to see them.
		#print("found one")
		#print(d1)
		#print(d2)
		#print(d3)
		#flush.console()
	}
}
 
print(found/tries)

Final percentage: 1.807%. That’s pretty close to 5/280, and much closer than it is to either 4/280 or 6/280, so I’m going to conclude that I got them all and got it right.

What happens if your dice have fewer, or more, sides? Turns out you need at least 3 sides to achieve intransitivity. Can you have it with 4 sides? What about 5, 6, or 7? To estimate the fraction of dice configurations which are intransitive for different numbers of sides, I wrote the following code. Note that this could take a while to run, depending on the number of “tries” you use:

# Intransitivity vs. number of sides
results = rep(0,12)
tries = 100000
 
for(j in 4:12) {
	found = 0	
 
	for(i in 1:tries) {
		one2nine = sample(1:(3*j),(3*j))
		d1 = one2nine[1:j]
		d2 = one2nine[(j+1):(2*j)]
		d3 = one2nine[(2*j+1):(3*j)]
 
		if( checkIntransitivity(d1,d2,d3)) {
			found = found + 1
		}
	}
 
	results[j] = found/tries
	print("Found:")
	print(results[j])
	flush.console()
}
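Once the loop finishes, a quick plot of the estimates makes any trend across the number of sides easier to spot (this snippet is my addition, not part of the original run):

# Estimated fraction of intransitive configurations, by sides per die
plot(4:12, results[4:12], type="b", col="blue",
	xlab="Sides per die", ylab="Estimated fraction intransitive")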

If you wait through all that, you might notice some interesting patterns emerge, which probably have explanations rooted in theory, but it’s getting on nap time, so I’ll wrap this post up.

I think what fascinates me the most about intransitive dice, and games like rock, scissors, paper, is that they represent breakdowns in what math folks like to call a “total order”. Most of our calculations are done in this nice land of numbers where you can count on transitivity: A>B and B>C, therefore A>C. Every item has its place in the hierarchy, and “ties” only occur between an object and itself. “Total order” is a good name in that these are comfortable spaces to do calculations, where nothing all that unexpected happens (usually, ok?). For excitement and unexpected delight, you need to relax those orders, and the more relaxing the better. Incidentally, if instead your goal is frustration and dirty looks from your friends at a party, try pretending that you can apply the methods of a total order (like the calculus) to economics, consumer choice, and love.

One final note before drifting off… in statistics we have at least one delightfully unexpected instance of intransitivity: correlations. Just because X is positively correlated with Y and Y is positively correlated with Z, doesn’t mean that X and Z are positively correlated. Strange, no? But you can prove it with an example covariance matrix. Have you got one to show me?
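Here’s one way to cook up such an example. The specific numbers below are just ones I picked; any positive definite matrix with this sign pattern will do:

# A valid correlation matrix where cor(X,Y) > 0 and cor(Y,Z) > 0, but cor(X,Z) < 0
Sigma = matrix(c( 1.0, 0.5, -0.4,
                  0.5, 1.0,  0.5,
                 -0.4, 0.5,  1.0), nrow=3, byrow=TRUE)
 
eigen(Sigma)$values   # all positive, so this really is a covariance matrix
 
# Sanity check by simulation
library(MASS)
samples = mvrnorm(100000, mu=c(0,0,0), Sigma=Sigma)
cor(samples)          # close to Sigma, including the negative X-Z entry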


16 Jun 10

Five dumb arguments smart people make

When smart people make dumb arguments, those arguments tend to fall into one of a few categories. I’ve documented five of the most common bad arguments I see at websites where otherwise intelligent geeks, math nerds and skeptics hang out and discuss things. Chances are you’ve encountered at least one of these arguments; maybe you’ve even used one of them yourself.

#1: Occam’s razor

In simple terms, the idea of Occam’s razor is that, whenever possible, simple models are to be preferred. Note that Occam’s razor tells you absolutely nothing about whether a model or a theory is good or bad, useful or worthless. It’s a rule of thumb. And while not necessarily a bad one, in practice it tends to act like intellect retardant, putting out active minds who question existing (often simplistic) beliefs and scientific constructs.

Let me make this very clear: Occam’s razor doesn’t prove anything. In particular, and despite how it is commonly used, it doesn’t show that the more likely, or “simple”, explanation is the correct one. Unlikely, complicated things happen all the time. If you don’t believe this, go flip a coin a thousand times. I guarantee you that the exact result you get will only happen once in a godzillion tries. In terms of (absolute) likelihood, explaining what you just observed with probability theory and stochastic processes is way more complicated than the assertion “that’s what God wanted”, a theory which, if true, explains your effectively impossible result with 100% likelihood. I’m guessing that’s not the can of worms you hoped to open up with Occam’s razor?

#2: You’re a hypocrite.

Yes, and so are you. So what? Do I really need to explain why this argumentum-ad-hypocrisum (note, made-up Latin) is so bad? Given that accusations of hypocrisy are almost as popular as kittens in the blogosphere, I suspect I do. So here goes: showing that your opponent is a hypocrite proves nothing except that they are human. It doesn’t make their arguments wrong, and it only weakens them under very limited circumstances, like when you catch a sworn breatharian sneaking a pint of Häagen-Dazs to keep from starving to death. Beyond that, calling your opponent a hypocrite has less nutritional value than a Peep.

#3: That’s just an anecdote. It doesn’t prove anything.


Repeat after me: Anecdotes are evidence. Nothing more, but certainly nothing less. In the context of a common event followed by another common event (I got a headache after stopping at three red lights in a row), anecdotal evidence is nearly useless. In the case of rarer events (I got kuru after eating my sister), anecdotal evidence can be extraordinarily powerful. Taken to its extreme, discounting anecdotal evidence led (presumably intelligent) academics to hold firm to fallacies like the idea that fireflies can’t all flash in unison, long after anecdotal evidence had come in from reliable observers.

#4: No known mechanism


There are lots of intelligent ways to argue that homeopathy doesn’t work. You can cite studies (although you may find that not all of them confirm your beliefs), or you can say that believers are just the random subset of the people who tried it and afterwards felt an improvement. I’m not going to weigh in on the subject, except to note that one argument I see used fairly often is that homeopathy doesn’t work because it can’t work, and it can’t work because we don’t understand how it possibly could.

To see how bad this argument is, you need to look at the assumptions behind it and view it in historical context. What people are really saying with this argument is: our current scientific model is comprehensive and infallible. It accounts for all observations, and it has no holes or leaks. I’m going to assume that you are able to see the problem with this mindset yourself. I’ll just note that one particularly unfortunate use of this argument led doctors to reject hand washing before performing operations or delivering babies. After all, evidence that fastidious midwives had lower infection rates was purely anecdotal (see above), and there was no reason to believe cleanliness could make a difference in the pre-germ-theory era. There was no known mechanism.

#5: Three guys with boards


I’m calling this the “three guys with boards” argument in honor of those skeptics who, whenever someone mentions crop circles, declare “it’s a hoax. Three guys with boards admitted they did it.” In fact, some guys with boards did indeed admit to creating a crop circle, and they showed us how they did it. So what’s wrong with this argument?

The problem is that you can’t discount all anomalous observations as “fakes” just because some are known to be fakes, nor does the possibility of faking an event mean that all such events must have been faked. Obviously, scientists don’t like playing games of intellectual whack-a-mole. If you research enough supposed “unexplained mysteries”, and come away convinced each time that the mystery is bogus, the tendency to dismiss other, similar claims outright is understandable. That said, “three guys with boards” is still a dumb argument, especially when you are going against claims of specific evidence that can’t be easily explained away as a hoax. For example, we have large, complicated, precisely implemented crop circles done in a short span of time and exhibiting strangely bent stalks. This evidence may fall far short of irrefutable proof of alien intervention, but it does require much more than dismissively stating that we know they are all fakes. We don’t.

More broadly, any attempt to automatically sort new observations into known categories (often categories that make us comfortable) is a bad idea. Unfortunately there are no shortcuts when it comes to evaluating data or evidence. It has to be done the hard way, one piece at a time.


14 Jun 10

Repulsive dots pattern, the difference of distance

What if you wanted to randomly place objects into a field, where the more objects you had already placed, the more they rejected newcomers placed nearby? To find out, I set up a simulation. The code, shown at the end, isn’t all that interesting, and the plots shown below aren’t all that special. I think the one interesting part of this is how the clustering changes depending on how distance is measured. One of the plots uses the traditional “L2” (Euclidean) distance, the other uses the “L1” (Manhattan taxi cab) measure. Each plot shown below has almost exactly the same number of dots (277 vs 279). Can you tell which uses L1 and which uses L2 just by looking?

Plot A:

Plot B:

Here’s the code. Run it and see for yourself. Make sure to adjust the values which have comments next to them. Uncommenting “print(force)” can help you pick a maxRepulse value.

calcRepulse <- function(x,y,dots,use="L2") {
	force = 0
	i = 1
	while(i <= dim(dots)[1] && dots[i,1] != 0) {
		if(use == "L2") {
			force = force + 1/( (x-dots[i,1])^2 + (y-dots[i,2])^2 )
		} else if(use == "L1") {
			force = force + 1/( abs(x-dots[i,1]) + abs(y-dots[i,2]) )
		}
		i = i+1
	}
	# print(force)
	return(force)
}
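# Quick illustration of the two measures (my addition, not part of the original script):
# with a single dot at (0.5, 0.5), a candidate point at (0.8, 0.5) feels a much
# stronger repulsive force under L2 than under L1 at this short distance
calcRepulse(0.8, 0.5, matrix(c(0.5, 0.5), ncol=2), use="L2")   # 1/0.3^2 = 11.11
calcRepulse(0.8, 0.5, matrix(c(0.5, 0.5), ncol=2), use="L1")   # 1/0.3   = 3.33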
 
par(bg="black")
par(mar=c(0,0,0,0))
plot(c(0,1),c(0,1),col="white",pch=".",xlim=c(0,1),ylim=c(0,1))
 
place = 1 #Maximum number of dots to place, change this to something bigger
dots = matrix(rep(0,place*2),ncol=2)
maxTries = place * 10
maxRepulse = 1 # Anything above this will be rejected as too repulsive a location
dist2use = "" # Pick L1 or L2
 
placed = 0
tries = 0
 
 
while(placed < place && tries < maxTries) {
	x = runif(1)
	y = runif(1)
 
	if(calcRepulse(x,y,dots,dist2use) < maxRepulse) {
		dots[(placed + 1),1] = x
		dots[(placed + 1),2] = y
		placed = placed + 1
		points(x,y,col="blue",pch=20)
	}
 
	tries = tries + 1
}

12 Jun 10

A different way to view probability densities

The standard, textbook way to represent a density function looks like this:

Perhaps you have seen this before? (Plot created in R, all source code from this post is included at the end). Not only will you find this plot in statistics books, you’ll also see it in medical texts, sociology, and even economics books. It gives you a clear view of how likely an observation is to fall in a particular range of x. So what’s the problem?

The problem is that what usually concerns us isn’t probability in isolation. What matters is the impact that observations have on some other metric of importance, like the total or average. The key thing we want to know about a distribution is: What range of observations will contribute the most to our expected value, and in what way? We want a measure of influence.

Here’s the plot of the Cauchy density:

From this view, it doesn’t look all that different from the Normal. Sure it’s a little narrower, with “fatter tails”, but no radical difference, right? Of course, the Cauchy is radically different from the normal distribution. Those slightly fatter tails give very little visual indication that the Cauchy is so extreme-valued that it has no expected value. Integrating to find the expectation gives you infinity in both directions. If your distribution is like this, you’ve got problems and your plot should tell you that right away.
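For the record, that divergence is easy to see directly from the Cauchy density: the positive half of the integral alone already blows up,

\int_0^{\infty} x \cdot \frac{1}{\pi (1 + x^2)} \, dx = \lim_{b \to \infty} \frac{\ln(1 + b^2)}{2\pi} = \infty

and the negative half diverges to -\infty in the same way, so no expected value exists.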

Here’s another way to visualize these two probability distributions:

Go ahead and click on the image above to see the full view. I’ll wait for you…

See? By plotting the density multiplied by the observation value on the y-axis, you get a very clear view of how the different ranges of the function affect the expectation. Looking at these, it should be obvious that the Cauchy is an entirely different beast. In the normal distribution, extreme values are so rare as to be irrelevant. This is why researchers like to find ways to treat their sample as normally distributed: a small sample gives enough information to tell the whole story. But if your life (or livelihood) depends on a sum or total amount, you’re probably best off plotting your (empirical) density in the way shown above.

Another bit of insight from this view is that the greatest contribution to the expectation comes at 1 and -1, which for the standard Normal isn’t the mean, but rather plus or minus one standard deviation (the square root of the second central moment). That’s not a coincidence, but it’s also not always the case, as we shall see. But first, what do things look like when a distribution gets completely out of hand?

The Student’s t distribution, on 1 degree of freedom, is identical to the Cauchy. But why stop at a single DF? You can go all the way down to the smallest (positive) fraction.

The closer you get to zero, the flatter the curve gets. Can we ever flatten it out completely? Not for a continuous distribution with support over an infinite range. Why not? Because in order for value * density to flatline indefinitely, the density function would have to be some multiple of \frac{1}{x}, and the area under that function diverges as we go to infinity; densities are supposed to integrate to 1, not infinity, right?

What would the plot look like for a continuous function that extends to infinity in just one direction? Here’s the regular Exponential(1) density function plot:

Now look at the plot showing contribution to expectation:

Were you guessing it would peak at 1?  Again, the expectation plot provides insight into which ranges of the distribution will have the greatest impact on our aggregate values.

Before I go look at a discrete distribution, try to picture what the expectation curve would look like for the standard Uniform(0,1) distribution. Did you picture a diagonal line?

Can we flatten things out completely with an infinitely-supported discrete distribution? Perhaps you’ve heard of the St. Petersburg Paradox. It’s a gambling game that works like this: you flip a coin until tails comes up. If you see one head before a tails, you get $1. For 2 heads you get $2, for 3 heads $4, and so on. The payoff doubles each time, and the chances of reaching the next payoff are halved. The paradox is that even though the vast majority of your winnings will be quite modest, your expectation is infinite: the payoff of 2^(k-1) dollars comes up with probability 1/2^k, so every term in the expectation sum contributes exactly 50 cents, and there are infinitely many terms. The regular view of the probability mass function provides almost no insight:

But take a look at the expectation plot:

Flat as a Nebraska wheat field. You can tell right away that something unusual is happening here.

I could go on with more examples, but hopefully you are beginning to see the value in this type of plot. Here is the code; feel free to experiment with other distributions as well.

# Useful way to make dots look like a line
x = seq(-5,5,length=1500)
 
# You've seen this before. Our good friend the Normal
plot(x,dnorm(x),pch=20,col="blue", main="Standard Normal density function")
 
# Cauchy looks a little different, but it's not obvious how different it is 
plot(x,dcauchy(x),pch=20,col="blue", main="Cauchy density function")
 
# New way of plotting the same
plot(x,dnorm(x)*x,pch=20,col="blue", main="Normal density: contribution to expectation")
abline(h=0,lty="dashed",col="gray")
 
plot(x,dcauchy(x)*x,pch=20,col="blue", main="Cauchy density: contribution to expectation")
abline(h=0,lty="dashed",col="gray")
 
# Extreme student-t action:
plot(x,dt(x,0.001)*x,pch=20,col="blue", main="Student's t on 0.001 d.f. contribution to expectation")
abline(h=0,lty="dashed",col="gray")
 
 
# The Exponential
x = seq(0,10,length=1500)
plot(x,dexp(x,1),pch=20,col="blue", main="Standard Exponential density function")
 
# The expectation view:
plot(x,dexp(x,1)*x,pch=20,col="blue", main="Exponential density: contribution to expectation")
 
# What do we see with the St. Petersburg Paradox
x = 2^(0:30)
dStPete <- function(x) {
	return (1/(2*x))
}
 
# Note the log
plot(x,dStPete(x),pch=20,col="blue", main="St. Petersburg mass function", log="x", xlab="Payoff", ylab="Probability",ylim=c(0,.5))
 
# Now we see the light
plot(x,dStPete(x)*x,pch=20,col="blue", main="St. Petersburg mass fcn: contribution to expectation", xlab="Payoff", log="x", ylab="Payoff times probability",ylim=c(0,.5))
abline(h=0,lty="dashed",col="gray")
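One distribution mentioned above but not included in the listing is the standard Uniform(0,1); a quick addition of my own confirms the diagonal line:

# Uniform(0,1): density is 1 on (0,1), so value times density is just y = x
x = seq(0,1,length=1500)
plot(x,dunif(x)*x,pch=20,col="blue", main="Uniform(0,1): contribution to expectation")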