Oct 13

The week in stats (Oct. 7th edition)

  • The picture above is a very well-known mathematical construction called the fractal cat. Brian Lee Yung Rowe shows how to construct fractal artworks using R.
  • Arthur Charpentier of Freakonometrics explains how to construct ROC (Receiver Operating Characteristic) curves in R, as well as how to interpret and plot them. This is useful for those in fields that frequently encounter binary classification problems, such as finance, engineering, or biostatistics.
  • There are many kinds of intervals in statistics. To name a few of the common ones: confidence intervals, prediction intervals, credible intervals, and tolerance intervals. Each is useful and serves its own purpose. You should not only know their names, but also when to use them and why.
  • A map of the most visited website for every country in the world, as well as the internet population of each country.
  • Suppose that you drop 5 blue marbles and 5 red marbles randomly (and uniformly) on the interval [0,1]. What is the probability that the marbles will interleave each other?
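The ROC construction mentioned above can be sketched in a few lines. This is a minimal illustration, not Charpentier's code: it assumes binary labels (1 = positive) and real-valued scores with no ties, and sweeps the decision threshold down through the sorted scores.

```python
def roc_curve(labels, scores):
    """Compute ROC points (FPR, TPR) by sweeping the decision
    threshold over the scores, highest first. Assumes binary
    labels (1 = positive, 0 = negative) and untied scores."""
    pairs = sorted(zip(scores, labels), reverse=True)
    P = sum(labels)           # number of positives
    N = len(labels) - P       # number of negatives
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1           # threshold now admits one more true positive
        else:
            fp += 1           # ... or one more false positive
        points.append((fp / N, tp / P))
    return points

# A perfectly ranked toy example: both positives score above both negatives,
# so the curve rises to TPR = 1 before any false positives appear.
pts = roc_curve([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2])
```

Plotting FPR on the x-axis against TPR on the y-axis gives the familiar curve; the area under it is a common summary of classifier quality.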
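To make the interval distinction above concrete, here is a numeric sketch of my own (using a large-sample normal approximation with z = 1.96 rather than exact t quantiles): a confidence interval targets the unknown mean, while a prediction interval targets a single future observation, so the prediction interval is always the wider of the two.

```python
import math
import statistics

def intervals(sample, z=1.96):
    """Approximate 95% confidence interval for the mean, and 95%
    prediction interval for one new observation, assuming roughly
    normal data and a large enough sample for the z approximation."""
    n = len(sample)
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    # CI half-width shrinks with sqrt(n): uncertainty about the mean.
    ci = (m - z * s / math.sqrt(n), m + z * s / math.sqrt(n))
    # PI half-width includes the new observation's own variability.
    pi = (m - z * s * math.sqrt(1 + 1 / n), m + z * s * math.sqrt(1 + 1 / n))
    return ci, pi
```

Running this on any sample shows the prediction interval strictly containing the confidence interval, which is the practical reason the two must not be confused.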
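The marble puzzle can be checked by simulation. Reading "interleave" as "the two colors strictly alternate along [0,1]" (my interpretation), symmetry makes all C(10,5) = 252 color orderings equally likely and only 2 of them alternate, giving 2/252 = 1/126 ≈ 0.0079. A quick Monte Carlo sketch:

```python
import random

def interleaved(n_each=5, trials=200_000, seed=42):
    """Estimate the probability that n_each blue and n_each red
    marbles, dropped uniformly on [0,1], strictly alternate in
    color when read from left to right."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        drops = ([(rng.random(), 'B') for _ in range(n_each)] +
                 [(rng.random(), 'R') for _ in range(n_each)])
        colors = [c for _, c in sorted(drops)]
        # Alternating means no two adjacent marbles share a color.
        if all(a != b for a, b in zip(colors, colors[1:])):
            hits += 1
    return hits / trials
```

The estimate lands close to 1/126, in line with the counting argument.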

Sep 13

The week in stats (Sept. 30th edition)

Sep 13

The week in stats (Sept. 16th edition)

Sep 13

The week in stats (Sept. 9th edition)

Bayesian Evolution

Aug 13

The week in stats (Aug. 19th edition)

  • Data science is emerging as a hot new field, but is it really different from statistics? Wesley discusses why data science is more than just a title.
  • Are you in the market research industry? If you ever run into incomplete data, here is how machine learning can help to fill in the gaps.
  • This year, more than 6,000 people attended the Joint Statistical Meetings, the largest statistical meeting in the world. If you missed the 2013 JSM, this summary will bring you up to speed.
  • Why an infinite number of monkeys (or even just one monkey!) typing at random will eventually crank out a complete play every bit as melodramatic as The Bard’s famous Hamlet.
  • Egon Pearson (11 August 1895 – 12 June 1980) was one of the most prominent figures in the history of statistics. His most important contributions include the Neyman–Pearson (1933) theory of hypothesis testing and the promotion of statistical methods in industry. However, most people fail to realize that Pearson’s contributions go well beyond hypothesis testing. Here are some early pioneering works of Pearson that have been neglected.