One of our SeaTUG members wrote a blog post about "small data." The premise is that we don't always have all the data in the world to build our analysis. In Clark Stevens' example, he works at Jobaline, a relatively new job placement company. They have an ever-growing database (~200 million records), but when a specific request comes in, he often has a much smaller sample to evaluate. So what do we do when N is small? Take a read of his blog to help minimize the risk of making bad decisions from poor analysis.