Author Topic: SDCC 2017 Hotel Lottery - Analysis and reflection  (Read 4183 times)

Online SteveD

  • Administrator
  • Volunteer HQ
  • *****
  • Join Date: Nov 2014
  • Posts: 2541
  • Karma: 0
  • Liked: 1680
Re: SDCC 2017 Hotel Lottery - Analysis and reflection
« Reply #15 on: June 26, 2017, 08:19:17 AM »
"It was my understanding that there would be no math." :o

Offline Transmute Jun

  • Stan Lee's Hospitality Suite
  • *******
  • Join Date: Mar 2012
  • Posts: 23719
  • Karma: 5
  • Queen of the Bird Missiles
  • Liked: 9714
Re: SDCC 2017 Hotel Lottery - Analysis and reflection
« Reply #16 on: June 26, 2017, 08:26:53 AM »
Thanks for the explanation, Mark!

To paraphrase for those who got a bit lost, he essentially said that he tested the data in different ways, taking into account that many of the times were estimated and not exact, and no matter how he rearranged the data, the same overall results came through. So he's very confident about the general trend of results as discussed in the article.

Offline mark

  • Volunteer HQ
  • ******
  • Join Date: Mar 2015
  • Posts: 3917
  • Karma: 0
  • Liked: 2023
Re: SDCC 2017 Hotel Lottery - Analysis and reflection
« Reply #17 on: June 26, 2017, 01:17:36 PM »
After I made the last post, I thought I should go back and add a TL;DR version. But a few minutes into Preacher it was clear I was getting nothing else done last night; the season premiere was insane.

But it's a new day, so here's a TL;DR haiku:

the soup was cloudy
but good, rice added for the
unexpected guest


(that might be too subtle, should've gone with a limerick.)

Quote
I was horrible at Stats in college.  And I don't understand any of what you just said ???.  But thanks for trying!

Here's an analogy: if you took a stats class, one of the things they probably had you learn was how to compare the means of two samples, or the mean of a sample with some fixed value. You may have had to look up a number in the back of the book, and you'd use a different table depending on whether you knew the standard deviation (they'd probably call that a z test) or didn't know it (a t test). The z test is based on the normal distribution. The t distribution is a bit fatter in the tails, which makes tests based on it more conservative, because it incorporates all of the possible values the standard deviation could have had.
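
If you want to see that fatter-tails effect in actual numbers, here's a quick sketch (the statistic and sample sizes below are made up for illustration, nothing from the hotel data) comparing the two-sided p-values the normal and t distributions give for the same observed statistic:

[code]
# Rough sketch with made-up numbers (not the hotel data): why a t test is
# more conservative than a z test when the standard deviation is estimated.
from scipy import stats

stat = 2.0                                # same observed test statistic
for df in (5, 15, 50):                    # degrees of freedom = n - 1
    p_z = 2 * stats.norm.sf(stat)         # z test: normal tails
    p_t = 2 * stats.t.sf(stat, df)        # t test: fatter tails
    print(f"df={df:2d}  z-test p={p_z:.3f}  t-test p={p_t:.3f}")
# The t-test p-values are larger (more cautious) and shrink toward the
# z-test value as the sample size grows.
[/code]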

That's a general principle: when there are things you don't know, you have to move that uncertainty into your model so that you'll be more cautious about drawing conclusions.

Quote from: SteveD on June 26, 2017, 08:19:17 AM
"It was my understanding that there would be no math." :o

Well then, no discussion of conditional probabilities and maximum entropy optimizations for you, young man! But +1 like for classic SNL.

Quote from: Transmute Jun on June 26, 2017, 08:26:53 AM
Thanks for the explanation, Mark!

To paraphrase for those who got a bit lost, he essentially said that he tested the data in different ways, taking into account that many of the times were estimated and not exact, and no matter how he rearranged the data, the same overall results came through. So he's very confident about the general trend of results as discussed in the article.

That's right. Another, probably better, way I could have approached this would have been to start with the model and then go into detail about its parts. I'm assuming the process has 3 parts:

(1) Your request is granted access to the form at a specific time on the onPeak server. This time (or something equivalent to it) is known only to that server.

(2) The next time your browser refreshes, it sees the access and goes to the form, possibly with some lag. At some point in this step the browser loads the elements that we can later see in the browser cache.

(3) At some point you notice that your browser has gone to the form, and you translate that into an estimated time; a very nervous brain is involved.

So we have:

Browser_timestamp = Actual_timestamp + Refresh_time + Lag_time
Estimated_time = Browser_timestamp + Brain_adjustment

and Refresh_time, Lag_time and Brain_adjustment are random factors that need to be estimated and described. My earlier post was some high-level detail about trying to figure out the Brain_adjustment part. But once it's all done, the questions become:

Are the estimated factors in my model reasonable?

Are they sufficient to account for the results that we saw?

Are the trends we're looking for still present once I've included the uncertainties?

And in all cases the answer was a pretty comfortable yes.
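
If anyone wants to poke at the model themselves, here's a toy simulation (every distribution and number in it is a placeholder I made up for illustration, not one of the values estimated from the survey) showing how the random factors stack onto the actual timestamps and how you'd check whether a trend survives the added noise:

[code]
# Toy simulation (placeholder distributions, not the fitted ones): layer the
# random factors from the model onto some "actual" access times and see
# whether an underlying trend is still visible in the estimated times.
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Pretend actual form-access times (minutes after 9:00); earlier = better odds
actual = rng.uniform(0, 10, size=n)
got_good_hotel = actual + rng.normal(0, 1, size=n) < 4   # hypothetical outcome

# Random factors from the model, all made up for illustration:
refresh = rng.uniform(0, 0.5, size=n)          # browser refresh interval
lag = rng.exponential(0.05, size=n)            # network / server lag
brain = rng.normal(0.2, 0.3, size=n)           # nervous-human estimation error

browser = actual + refresh + lag               # Browser_timestamp
estimated = browser + brain                    # Estimated_time (what people report)

# Does the trend (earlier estimated time -> better odds) survive the noise?
early = estimated < np.median(estimated)
print("good-hotel rate, earlier half:", got_good_hotel[early].mean().round(2))
print("good-hotel rate, later half:  ", got_good_hotel[~early].mean().round(2))
[/code]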

In conclusion,
Mark made some posts, quite unclear
That left readers in need of a beer
But it all turned out okay
When later on in the day
They saw Conan notifications appear!
:)

Offline Transmute Jun

  • Stan Lee's Hospitality Suite
  • *******
  • Join Date: Mar 2012
  • Posts: 23719
  • Karma: 5
  • Queen of the Bird Missiles
  • Liked: 9714
Re: SDCC 2017 Hotel Lottery - Analysis and reflection
« Reply #18 on: June 26, 2017, 01:37:40 PM »
Quote from: mark on June 26, 2017, 01:17:36 PM
But it all turned out okay
When later on in the day
They saw Conan notifications appear!
:)

From your keyboard to 1iota's ears!

Offline TardisMom

  • Supporter
  • Volunteer HQ
  • ******
  • Join Date: Aug 2012
  • Posts: 3346
  • Karma: 0
  • Liked: 1845
Re: SDCC 2017 Hotel Lottery - Analysis and reflection
« Reply #19 on: June 26, 2017, 03:53:56 PM »
I think [member=4270]mark[/member] needs a dedicated thread in which to post a haiku/limerick/poem each and every day.  Or more than one, if that's how his day is going.