June, 2010


29
Jun 10

Entropy augmentation the modulo way

Long before I had heard about the connection between entropy and probability theory, I knew about it from the physical sciences. This is most likely how you met it, too. You heard that entropy in the universe is always increasing, and, if you’re like me, that made very little sense. Then you may have heard that entropy is a measure of disorder: over time, things fall apart. This makes a little more sense, especially to those teenagers tasked with cleaning their own rooms. Later on, perhaps you got a more precise, mathematical definition of entropy that still didn’t fully mesh with the world as we observe it. Here on earth, we see structures getting built up over time: plants convert raw energy to sunflowers, bees build honeycombs, humans build roads. Things do sometimes fall apart. More precisely, levels of complexity tend to grow incrementally over long periods of time, then collapse very quickly. This particular asymmetry seems to be an ironclad rule for our world, one I assume everyone understands, at least implicitly, though I can’t remember anywhere this rule is written down as such.

In the world of probability, entropy is a measure of unpredictability. Claude Shannon, who created the field of Information Theory, gave us an equation to measure how much, or how little, is known about an incoming message a priori. If we know for sure exactly what the message will be, our entropy is 0. There is no uncertainty. If we don’t know anything about the outcome except that it will be one of a finite number of possibilities, we should assume uniform probability for any one of the outcomes. Uncertainty, and with it entropy, is maximized. The more you look into the intersection of entropy and statistics, the more you find surprising, yet somehow obvious in retrospect, connections. For example, among continuous distributions with fixed mean and standard deviation, the Normal distribution has maximal entropy. Surprised? Think about how quickly a sum of uniformly distributed random variables converges to the Normal distribution. Better yet, check it out for yourself:

n = 4
tally = rep(0,10000)
for(i in 1:n) {
	tally = tally + runif(10000)
}

hist(tally, breaks=50, col="blue")

Try increasing and decreasing “n” and see how quickly the bell curve begins to appear.
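To tie this back to Shannon’s formula, here’s a quick sketch of the two extremes mentioned above: a certain outcome has zero entropy, a uniform distribution over the outcomes has the maximum. The “shannon.entropy” helper is just something I cooked up for illustration, not a built-in function.

# Shannon entropy (in bits) of a discrete probability vector
shannon.entropy <- function(p) {
	p = p[p > 0]    # treat 0 * log(0) as 0
	-sum(p * log2(p))
}

shannon.entropy(c(1, 0, 0, 0))           # a certain outcome: 0 bits
shannon.entropy(rep(1/4, 4))             # uniform over 4 outcomes: 2 bits, the maximum
shannon.entropy(c(0.7, 0.1, 0.1, 0.1))   # somewhere in between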

Lately I’ve been thinking about how to take any general distribution and increase the entropy. The method I like best involves chopping off the tails and “wrapping” these extreme values back around to the middle. Here’s the function I created:

# Wrap extreme values back toward the middle: keep the sign, take the magnitude modulo "mod"
smartMod <- function(x, mod) {
	sgn = sign(x)
	x = abs(x)
	x = x %% mod
	return(sgn * x)
}

Now is a perfect time to use a version of our “perfect sample” function:

perfect.sample <- function(dist, n, ...) {
	match.fun(paste('q', dist, sep=''))((1:n) / (n+1), ...)
}

The image at the top of this post shows the Chi Square distribution on 2 degrees of freedom, with Modulo 3 Entropy Enhancement (see how nice that sounds?). Here’s the code to replicate the image:

hist(smartMod(perfect.sample("chisq",10000,2),3),breaks=70,col="blue",main="Entropy enhanced Chi-Square distribution")

Here’s another plot, using the Normal distribution and Modulo 1.5:
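You can generate something very much like it with the same recipe as the Chi Square example above, reusing smartMod and perfect.sample:

hist(smartMod(perfect.sample("norm",10000),1.5),breaks=70,col="blue",main="Entropy enhanced Normal distribution")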

One nice property of this method of increasing entropy is that you get a smooth transition with logical extremes: as your choice of Mod goes to infinity, the distribution remains unchanged. As your Mod number converges to 0, entropy (for that given width) is maximized. Here are three views of the Laplace, with Mods 5, 1.5, and 0.25, respectively, generated with the sketch below. See how nicely it flattens out? (Note you will need the library “VGAM” to sample from the Laplace).
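Here is one way to produce those three views, using the same functions as above; the three-panel layout and the explicit location and scale arguments (0 and 1) are just my choices for the sketch:

# Three views of the Laplace with different Mods; requires VGAM for qlaplace
library(VGAM)
par(mfrow=c(3,1))
for(m in c(5, 1.5, 0.25)) {
	hist(smartMod(perfect.sample("laplace",10000,0,1),m), breaks=70, col="blue",
		main=paste("Entropy enhanced Laplace, Mod", m))
}
par(mfrow=c(1,1))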

It’s not clear to me yet how entropy enhancement could be of practical use. But everyone loves enhancements, right? And who among us doesn’t long for a little extra entropy from time to time, no?


26
Jun 10

Weekend art in R (Part 2)

I put together four of the best looking images generated by the code shown here:

# More aRt
par(bg="white")
par(mar=c(0,0,0,0))
plot(c(0,1),c(0,1),col="white",pch=".",xlim=c(0,1),ylim=c(0,1))
iters = 500
for(i in 1:iters) {
	center = runif(2)
	size = 1/rbeta(2,1,3)
 
	# Let's create random HTML-style colors
	color = sample(c(0:9,"A","B","C","D","E","F"),12,replace=T)
	fill = paste("#", paste(color[1:6],collapse=""),sep="")
	brdr = paste("#", paste(color[7:12],collapse=""),sep="")
 
	points(center[1], center[2], col=fill, pch=20, cex=size)
	points(center[1], center[2], col=brdr, pch=21, cex=size, lwd=runif(1,1,4)) # border drawn in its own random color
}

Weekend art Part 1 is here.


25
Jun 10

A rare event in tennis

70-68

That was the final score in the final set, won by John Isner over Nicolas Mahut at Wimbledon. In the final set of a match the winner has to beat the other player by at least 2 games, so, as in baseball, matches can (theoretically) go on indefinitely. In practice, even final sets that go into “extra innings” tend to end quickly. The previous record for the number of games in a final set was about a third as many. For some commentary on the odds related to this extraordinary match, check out:

A mathematician watches tennis II

Wikipedia has a list of Longest tennis match records.


23
Jun 10

Data is dumb, try to be smart

I blurred out my personal information, but otherwise the check shown above is exactly as I received it. Yahoo! really did send me a check for ZERO DOLLARS 00 CENTS. Obviously, mailing out checks for nothing is wasteful and pointless (shall I try cashing it? Will the bank “credit” me for nothing?). It could be that this check isn’t even “legal”, if I’m to believe the Best Answer by Yahoo!’s crack team of expert volunteers. But instead of making this a “Ha! Ha! Big Company Does Something Stupid” post, I thought I’d dig a little deeper into the story behind the check.

Until recently, Yahoo! had an advertising program similar to Google’s Adsense. Webmasters could show ads from Yahoo!’s network and share in the profits when people clicked on the links. Although basically similar, Yahoo!’s version was much more restrictive: fewer websites were accepted into their network and webmasters could only display ads to visitors from the United States. I can understand why Yahoo! would want to limit payment to clicks coming from a single (huge and wealthy) country, but the way they implemented this requirement struck me as odd. It was the webmaster’s responsibility to make sure no one from outside the US saw their ads. This meant that I had to install Geo-IP software on my server, which maps the unique numerical identifiers of visitors to their country of origin. Installing this software is a pain, and you have to update it regularly with the latest free or paid versions of the database, or else the information becomes inaccurate. Yahoo!, with all its infrastructure and CPU power and economies of scale, was making thousands of individual webmasters perform a task best handled by a giant IT company, like, say, Yahoo!

The other strange, and occasionally frustrating, thing about Yahoo!’s program was the high volatility in payment amounts. On average, the earnings from their ad network were good, but I would see days when a website made $50 and others when it made $2.11, sometimes from the same number of visitors or even the same number of clicks. Variability is built into programs like this, but I found Yahoo!’s results to be way more inconsistent than Google’s or even some of the much smaller companies with their own advertising networks. You might think that only the average matters, but high volatility leads to poorer decision making. Imagine the following two scenarios: in one, you see the revenue from an advertising program slowly and steadily decline over the course of months. Time to find a different program. In the other scenario, payouts vary widely from month to month. Some are fantastic, some very poor. How can you tell when to leave the program, and when to expand your use of it? There are ways to measure non-obvious trends with time series analysis. But no matter which tool you use, the more “noise” in your data stream, the worse your predictions will be.

High volatility in Yahoo!’s payouts was so prolonged that it couldn’t have been just a matter of growing pains. Either no-one at Yahoo! was looking into payment variability at the level of individual users (it’s possible that the total revenue generated by their program, summed across all users, was much more stable), or they didn’t see why this was a problem.

Finally, we have the issue of the worthless check. When Yahoo! closed their program, they had to send out final checks, even to those users with balances below the regular minimum threshold (most programs require $50 or so in earnings before they send you a check). One possibility is that Yahoo!’s algorithm never looked to see if the amount owed was positive before cutting a check. That would be really dumb. Another, slightly more favorable explanation is that they really owed me 0.5 cents, so their algorithm saw this as a positive balance. It looked like I should be getting a check, but then Yahoo!’s check-printing software rounded down to the nearest penny and I ended up with $0.00. Either way, I see this as a final indicator that Yahoo! just isn’t thinking very hard about their data. By contrast, Google and other companies seem to understand that their business depends on managing data with a huge amount of human intelligence and understanding. Data is dumb, Yahoo! needs to be smarter.


22
Jun 10

Reaching escape velocity

Sample once from the Uniform(0,1) distribution. Call the resulting value [latex]x[/latex]. Multiply this result by some constant [latex]c[/latex]. Repeat the process, this time sampling from Uniform(0, [latex]x*c[/latex]). What happens when the multiplier is 2? How big does the multiplier have to be to force divergence? Try it and see:

iters = 200
locations = rep(0,iters)
top = 1
multiplier = 2
for(i in 1:iters) {
	locations[i] = runif(1,0,top)
	
	top = locations[i] * multiplier
}

windows() # Opens a new plot device on Windows; use x11() or quartz() on other platforms
plot(locations[1:i],1:i,pch=20,col="blue",xlim=c(0,max(locations)),ylim=c(0,iters),xlab="Location",ylab="Iteration")

# Optional save as movie, not a good idea for more than a few hundred iterations. I warned you!
# library("animation")
# saveMovie(for (i in 1:iters) plot(locations[1:i],1:i,pch=20,col="blue",xlim=c(0,max(locations)),ylim=c(0,iters),xlab="Location",ylab="Iteration"),loop=1,interval=.1)

19
Jun 10

The perfect fake

Usually when you are doing Monte Carlo testing, you want fake data that’s good, but not too good. You may want a sample taken from the Uniform distribution, but you don’t want your values to be uniformly distributed. In other words, if you were to order your sample values from lowest to highest, you don’t want them to all be equidistant. That might lead to problems if your underlying data or model has periods or cycles, and in any case it may fail to provide correct information about what would happen with real data samples.

However, there are times when you want the sample to be “perfect”. For example, in systematic sampling you may wish to select every 10th object from a population that is already ordered from smallest to biggest. This method of sampling can reduce the variance of your estimate without introducing bias. Generating the numbers for this perfect sample is quite easy in the case of the Uniform distribution. For example, R gives you a couple easy ways to do it:

# Generate a set of 100 equidistant values between 5 and 10 (inclusive)
x <- seq(5,10,length=100)

# Generate every 12th integer between 50 and 1000
x <- seq(50,1000,12)

When it comes to other distributions, grabbing a perfect sample is much harder. Even people who do a lot of MC testing and modeling may not need perfect samples every day, but it comes up often enough that R should really have the ability to do it baked right into the language. However, I wasn't able to find such a function in R or in any of the packages, based on my searches at Google and RSeek. So what else could I do but roll my own?

# Function returns a "perfect" sample of size n from distribution myDist
# The sample is based on uniformly distributed quantiles between 0 and 1 (exclusive)
# If the distribution takes additional parameters, these can be specified in the vector params
# Created by Matt Asher of StatisticsBlog.com
perfect.sample <- function(n, myDist, params = c()) {
	x <- seq(0,1,length=(n+2))[2:(n+1)]

	if(length(params)) {
		toEval <- paste(c("sapply(x,q", myDist, ",", paste(params,collapse=","), ")"), collapse="")
	} else {
		toEval <- paste(c("sapply(x,q", myDist, paste(params,collapse=","), ")"), collapse="")
	}

	eval(parse(text=toEval))
}

This function should work with any distribution that follows R's naming convention, in which the quantile function for distribution "name" is called "qname" and takes a vector of probabilities as its first argument. The histogram at the top of this post shows the density of the Laplace, aka Double Exponential, distribution. Here is the code I used to create it:

# Needed library for laplace
library(VGAM)
z <- perfect.sample(5000,"laplace",c(0,1))
hist(z,breaks=800,col="blue",border=0,main="Histogram from a perfect Laplace sample")

As you can see, my function plays nice with distributions specified in other packages. Here are a couple more examples using standard R distributions:

# Examples:
perfect.sample(100,"norm")

# Sampling from the uniform distribution with min=10 and max=20
z <- perfect.sample(50,"unif",c(10,20))

Besides plotting the results with a histogram, there are specific tests you can run to see if values are consistent with sampling from a known distribution. Here are tests for uniformity and normality. You should get a p-value of 1 for both of these:

# Test to verify that this is a perfect sample, requires library ddst
# Note only tests to see if it is Uniform(0,1) distributed
library(ddst)
ddst.uniform.test(z, compute.p=TRUE)

# Needed for the Shapiro-Wilk Normality Test
library(stats)
z = perfect.sample(1000,"norm")
shapiro.test(z)

If you notice any bugs with the "perfect.sample" function please let me know. Also let me know if you find yourself using the function on a regular basis.


18
Jun 10

Those dice aren’t loaded, they’re just strange

I must confess to feeling an almost obsessive fascination with intransitive games, dice, and other artifacts. The most famous intransitive game is rock, scissors, paper. Rock beats scissors.  Scissors beats paper. Paper beats rock. Everyone older than 7 seems to know this, but very few people are aware that dice can exhibit this same behavior, at least in terms of expectation. Die A can beat die B more than half the time, die B can beat die C more than half the time, and die C can beat die A more than half the time.

How is this possible? Consider the following three dice, each with three sides (For the sake of most of this post and in my source code I pretend to have a 3-sided die. If you prefer the regular 6-sided ones, just double up every number. It makes no difference to the probabilities or outcomes.):

Die A: 1, 5, 9
Die B: 3, 4, 8
Die C: 2, 6, 7

Die A beats die B [latex]5/9[/latex] of the time, die B beats die C [latex]5/9[/latex] of the time, and die C beats die A [latex]5/9[/latex] of the time. Note that the ratios don’t all have to be the same. Here’s another intransitive trio:

Die A: 2, 4, 9
Die B: 1, 8, 7
Die C: 3, 5, 6

Take a moment to calculate the relative winning percentages, or just trust me that they are not all the same… Did you trust me? Will you trust me in the future?
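If you’d rather verify than trust, here’s a quick sanity check using base R’s outer() function (just my shorthand, separate from the search code that follows):

A = c(2, 4, 9); B = c(1, 8, 7); C = c(3, 5, 6)
mean(outer(A, B, ">"))   # A beats B 5/9 of the time
mean(outer(B, C, ">"))   # B beats C 6/9 of the time
mean(outer(C, A, ">"))   # C beats A 5/9 of the time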

In order to find these particular dice I wrote some code in R to automate the search. The following functions calculate the winning percentage for one die over another and check for intransitivity:

# Return the proportion of the time that d1 beats d2. 
# Dice need to have same number of sides
calcWinP <- function(d1,d2) {
	sides = length(d1)
	d1Vd2 = 0
	
	for(i in 1:sides) {
		for(j in 1:sides) {
			if(d1[i] > d2[j]) {
				d1Vd2 = d1Vd2 + 1
			}
		}
	}
	
	return( d1Vd2/(sides^2) )
}

# Assumes dice have no ties. 
# All dice must have the same number of sides.
# How many times do I have to tell you that?
checkIntransitivity <- function(d1,d2,d3) {
	d1beatsd2 = calcWinP(d1,d2)
	
	if (d1beatsd2 > 0.5) {
		if(calcWinP(d2,d3) > 0.5) {
			if(calcWinP(d3,d1) > 0.5) {
				return(TRUE)
			}
		}
	} else {
		# Check if d1 beats d3, if so check if d3 beats d2
		if(calcWinP(d1,d3) > 0.5) {
			if(calcWinP(d3,d2) > 0.5) {
				return(TRUE)
			}
		}
	}
	# Regular old transitivity.
	return(FALSE)
}

I then checked every possible combination. How many unique configurations are there? Every die has three numbers on it, and you have three dice for a total of nine numbers. To make things simpler and avoid ties, no number can be used more than once. If the sides of each die were ordered and the dice themselves were ordered, you’d have [latex]9![/latex] different combinations, which is to say a whole mess of them. But our basic unit of interest here isn’t the digits, it’s the dice. So let’s think about it like this: for die A you can choose 3 of the 9 numbers, for die B you can pick 3 of the remaining 6, and for die C you’re stuck with whatever 3 are left. Multiply this all together:

choose(9,3)*choose(6,3)

and you get 1680 possibilities. But wait? What’s that you say? You don’t care which die is A, which is B, and which is C? Fantastic. That reduces the number of “unique” configurations by [latex]3![/latex], which is to say 6, at least if my back-of-the-envelope calculations are correct. Final tally? 280.
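In R, that final count is:

choose(9,3) * choose(6,3) / factorial(3)   # 1680 ordered assignments divided by 3! relabelings = 280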

Not bad. Unfortunately, there is no obvious way to enumerate each of these 280 combinations (at least not to me there isn’t). So I ended up using a lot of scratch work and messing around in the R console until I had what I believed to be the right batch. Sorry, I no longer have the code to show you for that. After testing those 280 configurations, I found a total of 5 intransitive ones, including the 2 sets shown previously and the following 3 sets:

Die A: 2, 9, 3
Die B: 1, 6, 8
Die C: 4, 7, 5

Die A: 7, 1, 8
Die B: 5, 6, 4
Die C: 9, 3, 2

Die A: 7, 3, 5
Die B: 2, 9, 4
Die C: 8, 6, 1

Did I make a mistake? According to my calculations, [latex]5/280[/latex] of the combinations are intransitive. That represents about 1.786% of the total. How might I verify this? That’s right, it’s Monte Carlo time.

Using the following code, I created all [latex]9![/latex] permutations of dice and sides, then sampled from those 362,880 sets of dice many, many times:

library(e1071) # Makes making permutations easy
allPerms = permutations(9)
intransFound = 0
for(i in 1:dim(allPerms)[1]) {
	d1 = allPerms[i,1:3]
	d2 = allPerms[i,4:6]
	d3 = allPerms[i,7:9]
	if(checkIntransitivity(d1,d2,d3)) {
		intransFound = intransFound + 1
	}
}

print(intransFound)
	
found = 0	
tries = 100000
for(i in 1:tries) {
	one2nine = sample(1:9,9)
	d1 = one2nine[1:3]
	d2 = one2nine[4:6]
	d3 = one2nine[7:9]
	
	if( checkIntransitivity(d1,d2,d3)) {
		found = found + 1
		# Uncomment below if you want to see them.
		#print("found one")
		#print(d1)
		#print(d2)
		#print(d3)
		#flush.console()
	}
}

print(found/tries)

Final percentage: 1.807%. That’s pretty close to [latex]5/280[/latex], and much closer than it is to either [latex]4/280[/latex] or [latex]6/280[/latex], so I’m going to conclude that I got them all and got it right.

What happens if your dice have fewer, or more, sides? Turns out you need at least 3 sides to achieve intransitivity. Can you have it with 4 sides? What about 5, 6, or 7? To estimate the fraction of dice configurations which are intransitive for different numbers of sides, I wrote the following code. Note that this could take a while to run, depending on the number of “tries” you use:

# Transitivity vs sides.
results = rep(0,12)
tries = 100000
	
for(j in 4:12) {
	found = 0	

	for(i in 1:tries) {
		one2nine = sample(1:(3*j),(3*j))
		d1 = one2nine[1:j]
		d2 = one2nine[(j+1):(2*j)]
		d3 = one2nine[(2*j+1):(3*j)]
		
		if( checkIntransitivity(d1,d2,d3)) {
			found = found + 1
		}
	}
	
	results[j] = found/tries
	print("Found:")
	print(results[j])
	flush.console()
}

If you wait through all that, you might notice some interesting patterns emerge, which probably have explanations rooted in theory, but it’s getting on nap time, so I’ll wrap this post up.

I think what fascinates me the most about intransitive dice, and games like rock, scissors, paper, is that they represent breakdowns in what math folks like to call a “total order”. Most of our calculations are done in this nice land of numbers where you can count on transitivity. [latex]A>B[/latex] and [latex]B>C[/latex], therefore [latex]A>C[/latex]. Every item has its place in the hierarchy, and “ties” only occur between an object and itself. “Total order” is a good name in that these are comfortable spaces to do calculations where nothing all that unexpected happens (usually, ok?). For excitement and unexpected delight, you need to relax those orders, and the more relaxing the better. Incidentally, if instead your goal is frustration and dirty looks from your friends at a party, try pretending that you can apply the methods of a total order (like the calculus) to economics, consumer choice, and love.

One final note before drifting off… in statistics we have at least one delightfully unexpected instance of intransitivity: correlations. Just because [latex]X[/latex] is positively correlated with [latex]Y[/latex] and [latex]Y[/latex] is positively correlated with [latex]Z[/latex], doesn’t mean that [latex]X[/latex] and [latex]Z[/latex] are positively correlated. Strange, no? But you can prove it with an example covariance matrix. Have you got one to show me?
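Here’s one such matrix to get you started (a made-up example, nothing special about these particular numbers), along with a quick check that it really is a valid correlation matrix and a simulation to confirm the sign pattern. The sampling uses mvrnorm from the MASS package:

# One correlation matrix where X~Y and Y~Z are positive but X~Z is negative
S = matrix(c( 1.0,  0.5, -0.4,
              0.5,  1.0,  0.5,
             -0.4,  0.5,  1.0), nrow=3, byrow=TRUE)

eigen(S)$values   # all positive, so S really is a legitimate correlation matrix

library(MASS)     # for mvrnorm
x = mvrnorm(100000, mu=c(0,0,0), Sigma=S)
cor(x)            # the empirical correlations should land close to S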


16
Jun 10

Five dumb arguments smart people make

When smart people make dumb arguments they tend to fall into one of a few categories. I’ve documented five of the most common bad arguments I see at websites where otherwise intelligent geeks, math nerds and skeptics hang out and discuss things. Chances are you’ve encountered at least one of these arguments, maybe you’ve even used one of them yourself.

#1: Occam’s razor

In simple terms, the idea of Occam’s razor is that, whenever possible, simple models are to be preferred. Note that Occam’s razor tells you absolutely nothing about whether a model or a theory is good or bad, useful or worthless. It’s a rule of thumb. And while not necessarily a bad one, in practice it tends to act like intellect retardant, putting out active minds who question existing (often simplistic) beliefs and scientific constructs.

Let me make this very clear: Occam’s razor doesn’t prove anything. In particular, and despite how it is commonly used, it doesn’t show that the more likely, or “simple”, explanation is the correct one. Unlikely, complicated things happen all the time. If you don’t believe this, go flip a coin a thousand times. I guarantee you that the exact result you get will only happen once in a godzillion tries. In terms of (absolute) likelihood, explaining what you just observed with probability theory and stochastic processes is way more complicated than the assertion “that’s what God wanted”, a theory which, if true, explains your effectively impossible result with 100% likelihood. I’m guessing that’s not the can of worms you hoped to open up with Occam’s razor?

#2: You’re a hypocrite.

Yes, and so are you. So what? Do I really need to explain why this argumentum-ad-hypocrisum (note, made-up Latin) is so bad? Given that accusations of hypocrisy are almost as popular as kittens in the blogosphere, I suspect I do. So here goes: showing that your opponent is a hypocrite proves nothing except that they are human. It doesn’t make their arguments wrong, and it only weakens them under very limited circumstances, like when you catch a sworn breatharian sneaking a pint of Häagen-Dazs to keep from starving to death. Beyond that, calling your opponent a hypocrite has less nutritional value than a Peep.

#3: That’s just an anecdote. It doesn’t prove anything.


Repeat after me: Anecdotes are evidence. Nothing more, but certainly nothing less. In the context of a common event followed by another common event (I got a headache after stopping at three red lights in a row), anecdotal evidence is nearly useless. In the context of more rare events (I got kuru after eating my sister), anecdotal evidence can be extraordinarily powerful. Taken to its extreme, discounting anecdotal evidence led (presumably intelligent) academics to hold firm to fallacies like the idea that fireflies can’t all flash in unison, long after anecdotal evidence had come in from reliable observers.

#4: No known mechanism


There are lots of intelligent ways to argue that homeopathy doesn’t work. You can cite studies (although you may find that not all of them confirm your beliefs), or you can say that believers are the random subset of the people who have tried it and then afterwards felt an improvement. I’m not going to weigh in on the subject, except to note that one argument I see used fairly often is that homeopathy doesn’t work because it can’t work, and it can’t work because we don’t understand how it possibly could.

To see how bad this argument is, you need to look at the assumptions behind it and view it in historical context. What people are really saying with this argument is: Our current scientific model is comprehensive and infallible. It accounts for all observations, and it has no holes or leaks. I’m going to assume that you are able to see the problem with this mindset yourself. I’ll just note that one particularly unfortunate use of this argument led doctors to reject hand washing before performing operations or delivering babies. After all, evidence that fastidious midwives had lower infection rates was purely anecdotal (see above), and there was no reason to believe cleanliness could make a difference in the days before germ theory. There was no known mechanism.

#5: Three guys with boards


I’m calling this the “three guys with boards” argument in honor of those skeptics who, whenever someone mentions crop circles, declare “it’s a hoax. Three guys with boards admitted they did it.” In fact, some guys with boards did indeed admit to creating a crop circle, and they showed us how they did it. So what’s wrong with this argument?

The problem is that you can’t discount all anomalous observations as “fakes” just because some are known to be fakes, nor does the possibility of faking an event mean that all such events must have been faked. Obviously, scientists don’t like playing games of intellectual whack-a-mole. If you research enough supposed “unexplained mysteries”, and come away convinced each time that the mystery is bogus, the tendency to dismiss other, similar claims outright is understandable. That said, “three guys with boards” is still a dumb argument, especially when you are going up against claims of specific evidence that can’t be easily explained away as a hoax. For example, we have large, complicated, precisely implemented crop circles done in a short span of time and exhibiting strangely bent stalks. This evidence may fall far short of irrefutable proof of alien intervention, but it does demand more of a response than dismissively stating that we know they are all fakes. We don’t.

More broadly, any attempt to automatically sort new observations into known categories (often categories that make us comfortable) is a bad idea. Unfortunately there are no shortcuts when it comes to evaluating data or evidence. It has to be done the hard way, one piece at a time.


14
Jun 10

Repulsive dots pattern, the difference of distance

What if you wanted to randomly place objects into a field, and the more objects you had, the more they rejected newcomers placed nearby? To find out, I set up a simulation. The code, shown at the end, isn’t all that interesting, and the plots shown below aren’t all that special. I think there is one interesting part of this, and that’s how the clustering changes depending on how distance is measured. One of the plots uses the traditional “L2” (Euclidean) distance, the other uses the “L1” (Manhattan taxicab) measure. Each plot shown below has almost exactly the same number of dots (277 vs 279). Can you tell which uses L1 and which uses L2 just by looking?

Plot A:

Plot B:

Here’s the code. Run it and see for yourself. Make sure to adjust the values which have comments next to them. Uncommenting “print(force)” can help you pick a maxRepulse value.

calcRepulse <- function(x,y,dots,use="L2") {
	force = 0
	i = 1
	while(i <= dim(dots)[1] && dots[i,1] != 0) {
		if(use == "L2") {
			force = force + 1/( (x-dots[i,1])^2 + (y-dots[i,2])^2 )
		} else if(use == "L1") {
			force = force + 1/( abs(x-dots[i,1]) + abs(y-dots[i,2]) )
		}
		i = i+1
	}
	# print(force)
	return(force)
}

par(bg="black")
par(mar=c(0,0,0,0))
plot(c(0,1),c(0,1),col="white",pch=".",xlim=c(0,1),ylim=c(0,1))

place = 1 #Maximum number of dots to place, change this to something bigger
dots = matrix(rep(0,place*2),ncol=2)
maxTries = place * 10
maxRepulse = 1 # Anything above this will be rejected as too repulsive a location
dist2use = "L2" # Pick "L1" or "L2"

placed = 0
tries = 0


while(placed < place && tries < maxTries) {
	x = runif(1)
	y = runif(1)
	
	if(calcRepulse(x,y,dots,dist2use) < maxRepulse) {
		dots[(placed + 1),1] = x
		dots[(placed + 1),2] = y
		placed = placed + 1
		points(x,y,col="blue",pch=20)
	}

	tries = tries + 1
}

12
Jun 10

A different way to view probability densities

The standard, textbook way to represent a density function looks like this:

Perhaps you have seen this before? (Plot created in R, all source code from this post is included at the end). Not only will you find this plot in statistics books, you’ll also see it in medical texts, sociology, and even economics books. It gives you a clear view of how likely an observation is to fall in a particular range of [latex]x[/latex]. So what’s the problem?

The problem is that what usually concerns us isn’t probability in isolation. What matters is the impact that observations have on some other metric of importance, like the total or average. The key thing we want to know about a distribution is: What range of observations will contribute the most to our expected value, and in what way? We want a measure of influence.

Here’s the plot of the Cauchy density:

From this view, it doesn’t look all that different from the Normal. Sure, it’s a little narrower, with “fatter tails”, but no radical difference, right? Of course, the Cauchy is radically different from the normal distribution. Those slightly fatter tails give very little visual indication that the Cauchy is so extreme-valued that it has no expected value. Integrating to find the expectation gives you infinity in both directions. If your distribution is like this, you’ve got problems and your plot should tell you that right away.

Here’s another way to visualize these two probability distributions:

Go ahead and click on the image above to see the full view. I’ll wait for you…

See? By plotting the density multiplied by the observation value on the y-axis, you get a very clear view of how the different ranges of the function affect the expectation. Looking at these, it should be obvious that the Cauchy is an entirely different beast. In the normal distribution, extreme values are so rare as to be irrelevant. This is why researchers like to find ways to treat their sample as normally distributed: a small sample gives enough information to tell the whole story. But if your life (or livelihood) depends on a sum or total amount, you’re probably best off plotting your (empirical) density in the way shown above.
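If your data comes as a sample rather than a known density, here’s a minimal sketch of the same view using R’s built-in kernel density estimator; the rexp() call below is just a stand-in for whatever your real observations are:

# Contribution-to-expectation view of an empirical sample
z = rexp(10000, rate=1)   # stand-in for real data
d = density(z)            # kernel density estimate
plot(d$x, d$y * d$x, pch=20, col="blue",
	main="Empirical density: contribution to expectation",
	xlab="x", ylab="x times estimated density")
abline(h=0, lty="dashed", col="gray")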

Another bit of insight from this view is that the greatest contribution to the expectation comes at 1 and -1, which in the case of the standard Normal isn’t the mean, but rather plus or minus one standard deviation (the square root of the second central moment). That’s not a coincidence, but it’s also not always the case, as we shall see. But first, what do things look like when a distribution gets completely out of hand?
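(To see why: the contribution curve for the standard Normal is [latex]x\phi(x)[/latex], whose derivative is [latex]\phi(x)(1 - x^2)[/latex], so the peaks fall exactly at [latex]x = \pm 1[/latex].)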

The Student’s t distribution on 1 degree of freedom is identical to the Cauchy. But why stop at a single DF? You can go all the way down to the smallest (positive) fraction.

The closer you get to zero, the flatter the curve gets. Can we ever flatten it out completely? Not for a continuous distribution with support over an infinite range. Why not? Because in order for [latex]value \times density[/latex] to stay flat indefinitely, the density function would have to be some multiple of [latex]\frac{1}{x}[/latex], and of course the area under that function diverges as we go to infinity, while densities are supposed to integrate to 1, not infinity, right?
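(Spelled out: if [latex]x \cdot f(x)[/latex] were constant at some level [latex]c[/latex], then [latex]f(x) = c/x[/latex], and [latex]\int_1^{\infty} \frac{c}{x}\,dx = \lim_{b \to \infty} c\ln(b) = \infty[/latex], which no proper density can do.)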

What would the plot look like for a continuous function that extends to infinity in just one direction? Here’s the regular Exponential(1) density function plot:

Now look at the plot showing contribution to expectation:

Were you guessing it would peak at 1?  Again, the expectation plot provides insight into which ranges of the distribution will have the greatest impact on our aggregate values.

Before I go look at a discrete distribution, try to picture what the expectation curve would look like for the standard [latex]Uniform(0,1)[/latex] distribution. Did you picture a diagonal line?
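If you’d like to check your guess, here’s a one-liner in the same style as the code at the end of this post:

x = seq(0, 1, length=500)
# dunif(x) is 1 everywhere on (0,1), so this traces the diagonal y = x
plot(x, dunif(x)*x, pch=20, col="blue", main="Uniform(0,1): contribution to expectation")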

Can we flatten things out completely with an infinitely-supported discrete distribution? Perhaps you’ve heard of the St. Petersburg Paradox. It’s a gambling game that works like this: you flip a coin until tails comes up. If the tails arrives on the very first flip, you get $1. For 1 head before the tails you get $2, for 2 heads $4, and so on. The payoff doubles each time, and the chances of reaching the next payoff are halved. The paradox is that even though the vast majority of your winnings will be quite modest, your expectation is infinite. The regular view of the probability mass function provides almost no insight:

But take a look at the expectation plot:

Flat as a Nebraska wheat field. You can tell right away that something unusual is happening here.
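(The flat line is the infinite expectation made visible: with the payoffs and probabilities used in the code below, each possible payoff [latex]x = 2^k[/latex] has probability [latex]\frac{1}{2x}[/latex], so every one of them contributes [latex]x \cdot \frac{1}{2x} = \frac{1}{2}[/latex] to the expectation, and the sum [latex]\sum_k \frac{1}{2}[/latex] diverges.)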

I could go on with more examples, but hopefully you are beginning to see the value in this type of plot. Here is the code, feel free to experiment with other distributions as well.

# Useful way to make dots look like a line
x = seq(-5,5,length=1500)

# You've seen this before. Our good friend the Normal
plot(x,dnorm(x),pch=20,col="blue", main="Standard Normal density function")

# Cauchy looks a little different, but it's not obvious how different it is 
plot(x,dcauchy(x),pch=20,col="blue", main="Cauchy density function")

# New way of plotting the same
plot(x,dnorm(x)*x,pch=20,col="blue", main="Normal density: contribution to expectation")
abline(h=0,lty="dashed",col="gray")

plot(x,dcauchy(x)*x,pch=20,col="blue", main="Cauchy density: contribution to expectation")
abline(h=0,lty="dashed",col="gray")

# Extreme student-t action:
plot(x,dt(x,0.001)*x,pch=20,col="blue", main="Student's t on 0.001 d.f. contribution to expectation")
abline(h=0,lty="dashed",col="gray")


# The Exponential
x = seq(0,10,length=1500)
plot(x,dexp(x,1),pch=20,col="blue", main="Standard Exponential density function")

# The expectation view:
plot(x,dexp(x,1)*x,pch=20,col="blue", main="Exponential mass contribution to expectation")

# What do we see with the St. Petersburg Paradox
x = 2^(0:30)
dStPete <- function(x) {
	return (1/(2*x))
}

# Note the log
plot(x,dStPete(x),pch=20,col="blue", main="St. Petersburg mass function", log="x", xlab="Payoff", ylab="Probability",ylim=c(0,.5))

# Now we see the light
plot(x,dStPete(x)*x,pch=20,col="blue", main="St. Petersburg mass fcn: contribution to expectation", xlab="Payoff", log="x", ylab="Payoff times probability",ylim=c(0,.5))
abline(h=0,lty="dashed",col="gray")