After I made the last post, I thought I should go back and add a TL;DR version. But a few minutes into Preacher it was clear I was getting nothing else done last night; the season premiere was

**insane**.

But it's a new day, so here's a TL;DR haiku:

*the soup was cloudy*

*but good, rice added for the*

*unexpected guest*

(That might be too subtle; should've gone with a limerick.)

I was horrible at Stats in college. And I don't understand any of what you just said. But thanks for trying!

Here's an analogy. If you took a stats class, one of the things they probably had you learn was how to compare the means from two samples, or the mean from a sample with some fixed value. You may have had to look up a number in the back of the book, and you would use a different table depending on whether you knew the standard deviation (they'd probably call this a z test) or did not know it (a t test). The z test is just based on the normal distribution. The t distribution is a bit fatter, making tests based on it more conservative, because it incorporates all of the possible values that the standard deviation could have had.
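
If a couple of numbers help, here's a tiny illustration (just for this comment, not part of the actual analysis) of why the t test is more conservative: for the same significance level, the critical value from the t distribution is larger than the one from the normal distribution, so you need stronger evidence before you reject.

```python
# Toy comparison of z vs. t critical values -- an illustration only,
# with made-up sample size and significance level.
from scipy import stats

n = 10        # small sample, so the difference is easy to see
alpha = 0.05  # two-sided significance level

z_crit = stats.norm.ppf(1 - alpha / 2)           # known sigma -> z test
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)    # unknown sigma -> t test

print(f"z critical value: {z_crit:.3f}")  # ~1.960
print(f"t critical value: {t_crit:.3f}")  # ~2.262 -- fatter tails, more caution
```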

That's a general principle: when there are things you don't know, you have to move that uncertainty into your model so that you'll be more cautious about drawing conclusions.

"It was my understanding that there would be no math."

Well then, no discussion of conditional probabilities and maximum entropy optimizations for you, young man! But +1 like for classic SNL.

Thanks for the explanation, Mark!

To paraphrase for those who got a bit lost, he essentially said that he tested the data in different ways, taking into account that many of the times were estimated and not exact, and no matter how he rearranged the data, the same overall results came through. So he's very confident about the general trend of results as discussed in the article.

That's right. Another, probably better, way I could have approached this would be to skip ahead to the model and then go into detail about its parts. I'm assuming three parts:

(1) Your request is granted access to the form at a specific time on the onPeak server. This time (or something equivalent to it) is known only by that server.

(2) The next time your browser refreshes, it sees the access and goes to the form, possibly with some lag. Somewhere in here the browser loads the elements that we can see in the browser cache.

(3) At some point you notice that your browser has gone to the form, and you translate that into an estimated time; a very nervous brain is involved.

So we have:

Browser_timestamp = Actual_timestamp + Refresh_time + Lag_time

Estimated_time = Browser_timestamp + Brain_adjustment

and Refresh_time, Lag_time, and Brain_adjustment are random factors that need to be estimated and described. My earlier post was some high-level detail about trying to figure out the Brain_adjustment part. But when it's all done, the questions become:

Are the estimated factors in my model reasonable?

Are they sufficient to account for the results that we saw?

Are the trends we're looking for still present once I've included the uncertainties?

And in all cases the answer was a pretty comfortable yes.
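
To make the model a bit more concrete, here's a rough simulation sketch of the setup above. The particular distributions and numbers (a uniform refresh window, exponential lag, a noisy brain adjustment) are placeholders for illustration, not the ones I actually estimated; the point is just to show how the random factors smear the estimated time away from the actual grant time.

```python
# Rough sketch of the timestamp model -- all distributions and parameters
# below are placeholder assumptions, not fitted values.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

actual_timestamp = 0.0                        # true grant time on the server (seconds)
refresh_time = rng.uniform(0, 30, n)          # assumed: refresh lands anywhere in a 30 s cycle
lag_time = rng.exponential(2.0, n)            # assumed: a couple of seconds of page/network lag
brain_adjustment = rng.normal(5.0, 10.0, n)   # assumed: nervous-brain recall/rounding error

browser_timestamp = actual_timestamp + refresh_time + lag_time
estimated_time = browser_timestamp + brain_adjustment

error = estimated_time - actual_timestamp
print(f"median error: {np.median(error):.1f} s")
print(f"90% of estimates within: {np.quantile(np.abs(error), 0.9):.1f} s")
```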

In conclusion,

*Mark made some posts, quite unclear*

*That left readers in need of a beer*

*But it all turned out okay*

*When later on in the day*

*They saw Conan notifications appear!*