Author Topic: John Hollinger's Power Rankings  (Read 6903 times)


Re: John Hollinger's Power Rankings
« Reply #15 on: December 01, 2008, 11:00:32 PM »

Offline nickagneta

  • James Naismith
  • *********************************
  • Posts: 48120
  • Tommy Points: 8794
  • President of Jaylen Brown Fan Club
Quote
The rankings also consider home/away breakdown, and the results of the last 10 games.

Quote
What is ridiculous is not the stat, but rather being bothered by the rankings. Just enjoy it.
Amen. I'm getting close to the point where I'm not going to read Celticsblog threads about Hollinger's stats anymore. Too many people completely miss the boat and seem to have no capacity to view the stats with even a modicum of perspective.

But, for old time's sake, I'll offer my own explanation one last time. Hollinger's statistical power rankings are just ONE piece of a puzzle. You can't see the whole NBA picture just by looking at Hollinger's stats - they simply don't tell you everything. But at the same time, I don't think you can see the whole picture without Hollinger's stats. Just like a puzzle piece. The puzzle piece is not a complete puzzle by itself, nor is the puzzle complete without that puzzle piece.

You guys are right - everyone else is missing the point. Every statistic has its flaws. This statistic uses margin of victory as one of its inputs, and for a team like the Celtics - one that is confident it can win every game and just cruises until the 4th quarter - the margin of victory won't be that great, even though they may still unquestionably be the team to beat. As a result, their rank is going to be lower than it should be. For comparison, look at the Lakers and last year's Celtics, both of which consistently won by huge margins, and both of which were 1st in the power rankings.

Hollinger's power rankings solely rely on the numbers, and I'm sure if you asked Hollinger directly, he would not say that he believes that the Celtics were the 4th best team in the NBA. It's just that his formula states it that way. Every metric has its flaws.
But if a metric is coming up with obvious errors over and over again, then why keep it? It obviously isn't doing the job right or adequately capturing the dynamic. And if it is not, there is no reason why it shouldn't be criticized.

It is obvious that the Celtics are not the fourth best team in the league right now. It is obvious that the Pacers aren't the 10th best team in the league right now. There is no way that the Suns, Nets and Hawks deserve to be rated as low as they are.

The metric is obviously wrong and poorly contrived.

Re: John Hollinger's Power Rankings
« Reply #16 on: December 01, 2008, 11:33:16 PM »

Offline fairweatherfan

  • Johnny Most
  • ********************
  • Posts: 20738
  • Tommy Points: 2365
  • Be the posts you wish to see in the world.

But if a metric is coming up with obvious errors over and over again, then why keep it? It obviously isn't doing the job right or adequately capturing the dynamic. And if it is not, there is no reason why it shouldn't be criticized.

It is obvious that the Celtics are not the fourth best team in the league right now. It is obvious that the Pacers aren't the 10th best team in the league right now. There is no way that the Suns, Nets and Hawks deserve to be rated as low as they are.

The metric is obviously wrong and poorly contrived.

I disagree - as a guy who uses a lot of stats in my job, there are several things to be said in defense of Hollinger's system.  The first is that it's based on a small sample size right now, so there are going to be some fluky results.  As the season goes on, it becomes more reflective of reality.

Second is that, while we're clearly better than Portland, one of the system's fundamental qualities is that it ignores subjective things like pre-existing opinions.  All teams start out equal, and objective data - what happens on the court - is the ONLY thing that goes into the formula.  This can be a disadvantage too, but removing the human element takes the bias of opinions out of the equation, which can shed some light on things.  Indiana and Milwaukee are good examples - they have lost a lot of games, but have played a lot of good teams and generally played them close.  Both teams are playing better than their record indicates, and this is supported by the data.

Hollinger's formula isn't thrown together at random or "made up" either (not your words, but others have said this) - it looks at data from past seasons and weights variables according to how predictive they are of regular season and playoff success.  For example, margin of victory is a stronger predictor of future wins - and even of winning a championship - than current W-L record, so the model weights it more strongly.  Same thing with recent performance.

Lastly, the model suffers from what I call "BCS syndrome" - the only time people care about it is when it deviates from conventional wisdom.  When this happens, people usually get upset and criticize the model heavily, just like with the BCS.  The problem is, if the model just parroted the opinions of the average fan, it would be useless - it wouldn't give any information above and beyond what people already thought.

The model is MOST useful when it gives a different result from what people expect, but that's unfortunately when people have the lowest opinion of it.  Portland's high ranking tells us they've been playing very well in situations where a mediocre or even good NBA team wouldn't - what they've done so far is the profile of an elite team.  Phoenix's lower ranking says they aren't, so far, as good as their record would indicate - and if you look at the data or watch them play, that appears to be true.  When the model gives a surprising result, that's a time to look a little closer at a team and see whether they're over- or underperforming in ways that people aren't picking up on yet.

Now, it's far from perfect: the reliance on objective data means it'll make mistakes, and the mistakes will be bigger and more frequent earlier in the year.  But conventional wisdom is often wrong too - it was "obvious" preseason that New Orleans was a top-5 team and that Denver and Portland were average at best; both calls have been wrong so far, and Hollinger's numbers reflected those teams' actual performance significantly earlier than most people did.  Basically, it's a tool, not an oracle - it helps look at team performance from a different perspective, one that doesn't care about opinions, hype and pre-season rankings.  It's not perfect and never will be, but it can be useful.

Ok, I think I just got carpal tunnel from typing all this out, but it's a topic I'm passionate about.

Re: John Hollinger's Power Rankings
« Reply #17 on: December 01, 2008, 11:48:50 PM »

Offline guava_wrench

  • Satch Sanders
  • *********
  • Posts: 9931
  • Tommy Points: 777

Quote from: nickagneta
But if a metric is coming up with obvious errors over and over again then why keep it? [...] The metric is obviously wrong and poorly contrived.

Quote from: fairweatherfan
I disagree - as a guy who uses a lot of stats in my job, there are several things to be said in defense of Hollinger's system. [...] It's not perfect and never will be, but it can be useful.
TP. Well said.

I think people get so caught up with W-L record that they miss other nuances, and a metric like Hollinger's provides a different perspective.

Like with any stat, it is important to understand its shortcomings to use it properly. That means understanding what it measures.

Re: John Hollinger's Power Rankings
« Reply #18 on: December 02, 2008, 07:27:12 PM »

Offline Hoops

  • Jayson Tatum
  • Posts: 956
  • Tommy Points: 5

Quote from: nickagneta
But if a metric is coming up with obvious errors over and over again then why keep it? [...] The metric is obviously wrong and poorly contrived.

Quote from: fairweatherfan
I disagree - as a guy who uses a lot of stats in my job, there are several things to be said in defense of Hollinger's system. [...] It's not perfect and never will be, but it can be useful.
Thank you. Much better than my attempt at explaining it.

You mentioned using statistics at work every day, which reminds me of how so many people look at math (algebra, calculus, etc.) and say, "I don't see how this could ever be useful in the real world." People who understand statistics and math appreciate their usefulness and understand that advanced math and stats are involved in virtually every aspect of our daily lives. Everyone else just doesn't seem to get it. Not sure if it really brings any bliss, but it definitely qualifies as ignorance.

Re: John Hollinger's Power Rankings
« Reply #19 on: December 02, 2008, 07:41:17 PM »

Offline moiso

  • Tiny Archibald
  • *******
  • Posts: 7642
  • Tommy Points: 441
Well said, fairweatherfan.  It's so early in the season... stuff will fall in line.  You wouldn't call Shaq a 90% free-throw shooter if he went 9 for his first 10 to start the year.  I really enjoy Hollinger.

Re: John Hollinger's Power Rankings
« Reply #20 on: December 02, 2008, 08:12:00 PM »

Offline nickagneta

  • James Naismith
  • *********************************
  • Posts: 48120
  • Tommy Points: 8794
  • President of Jaylen Brown Fan Club

Quote from: nickagneta
But if a metric is coming up with obvious errors over and over again then why keep it? [...] The metric is obviously wrong and poorly contrived.

Quote from: fairweatherfan
I disagree - as a guy who uses a lot of stats in my job, there are several things to be said in defense of Hollinger's system. [...] It's not perfect and never will be, but it can be useful.
While I agree that statistics are an extremely useful tool and that deciphering and interpreting them is a worthwhile exercise, there is a huge difference between interpreting statistics and warping what they say into something completely different. Hollinger is going back and taking completely arbitrary results that rely on tons of variables - human variables, at that - and trying to find patterns in them.

Here's the problem. Hollinger is not Asimov's Hari Seldon, able to turn arbitrary human results into a predictable mathematical formulation. What he does is no different from a guy who looks at a horse race card and predicts the winners based on his system, or the guy who tries to find mathematical patterns to pick out his 6 Megabucks numbers every week. People who do these things are trying to find a pattern in random number generation. But by its very definition, there is no pattern in random numbers.

And unfortunately, that's what statistics created by humans playing a game are. They are random number generation, and Hollinger's formulation,

RATING = (((SOS-0.5)/0.037)*0.67) + (((SOSL10-0.5)/0.037)*0.33) + 100 + (0.67*(MARG+(((ROAD-HOME)*3.5)/(GAMES))) + (0.33*(MARGL10+(((ROAD10-HOME10)*3.5)/(10)))))

which uses averages and percentages as a way of prioritizing different aspects of his formula, is nothing more than an attempt to find a pattern in a random number sequence. He is taking current results and comparing them to past random results to come up with a formula that predicts the relative strength of a team.
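For readers who want to see the arithmetic, the formula quoted above translates directly into code. This is just a mechanical transcription of that one line; the inputs below are hypothetical, not any real team's numbers:

```python
def hollinger_rating(sos, sos_l10, marg, marg_l10,
                     road, home, road10, home10, games):
    """Evaluate the power-rating formula quoted above.

    sos / sos_l10   - strength of schedule, season / last 10 games
    marg / marg_l10 - average scoring margin, season / last 10 games
    road, home      - road and home games played this season
    road10, home10  - road and home games among the last 10
    """
    schedule = (((sos - 0.5) / 0.037) * 0.67
                + ((sos_l10 - 0.5) / 0.037) * 0.33)
    margin = (0.67 * (marg + ((road - home) * 3.5) / games)
              + 0.33 * (marg_l10 + ((road10 - home10) * 3.5) / 10))
    return schedule + 100 + margin

# Hypothetical team: slightly tough schedule, +6.0 season margin,
# +4.5 margin over the last 10, even home/road split after 20 games.
print(round(hollinger_rating(0.52, 0.55, 6.0, 4.5, 10, 10, 5, 5, 20), 2))  # → 106.31
```

Note that a team with a dead-average schedule (SOS = 0.5), zero scoring margin, and an even home/road split rates exactly 100, which is how the formula centers the league.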

I have no faith in mathematics attempting to predict or rate the actions of humans. Just because Hollinger feels he sees patterns revolving around margin of victory and how a team is performing lately compared to overall doesn't mean they are there. He is dealing with random results that differ from year to year and game to game.

This is why I feel his rankings are a farce, and I will never give them any credence.
« Last Edit: December 02, 2008, 09:07:36 PM by nickagneta »

Re: John Hollinger's Power Rankings
« Reply #21 on: December 02, 2008, 08:50:43 PM »

Offline Fafnir

  • Bill Russell
  • ******************************
  • Posts: 30859
  • Tommy Points: 1327
Nick, it looks like you posted inside a quote box. I picked out the part I'm responding to.

Quote
I have no faith in mathematics attempting to predict or rate the actions of humans. Just because Hollinger feels he sees patterns that revolve around margin of victory and how a team is performing lately as compared to overall, doesn't mean they are there. He is dealing with random results that differ from year to year and game to game.
People aren't random though. They are often irrational, but not random.

Especially when you have a large sample size and lots of data about their behavior. When people behave in groups they become more predictable. Basketball players are for the most part pretty consistent over the course of a season with what they produce.

I don't think his rankings are the be-all and end-all, especially since basketball is a game of individual matchups that can be exploited. But they are a tool, and lots of statisticians have developed their own tools. I like to look at a variety of them myself. It is all information to consider.

Just because a formula is a linear model with a number of coefficients doesn't mean it's invalid. There are a lot of ways to evaluate whether a term is statistically significant - in other words, whether it adds anything to the model. I don't know for sure that Hollinger and others run this analysis, but I'd be very surprised if they did not.
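For anyone curious what that significance check looks like in practice, here is a minimal sketch on synthetic data (the numbers and the "junk" variable are invented for illustration; this is not Hollinger's actual data or model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic "team-seasons": wins really do depend on scoring margin,
# while the second predictor is pure noise.
margin = rng.normal(0, 5, n)        # average scoring margin
junk = rng.normal(0, 1, n)          # an irrelevant variable
wins = 41 + 2.5 * margin + rng.normal(0, 4, n)

# Ordinary least squares fit: wins ~ intercept + margin + junk
X = np.column_stack([np.ones(n), margin, junk])
beta, *_ = np.linalg.lstsq(X, wins, rcond=None)

# A term's t-statistic is its estimate divided by its standard error;
# |t| well above ~2 suggests the term adds real information to the model.
resid = wins - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t = beta / se

print(f"margin: coef {beta[1]:.2f}, t = {t[1]:.1f}")   # large |t|: keep the term
print(f"junk:   coef {beta[2]:.2f}, t = {t[2]:.1f}")   # small |t|: adds nothing
```

A modeler building a formula like Hollinger's would keep terms that come out like `margin` here and drop ones that come out like `junk`.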

Re: John Hollinger's Power Rankings
« Reply #22 on: December 02, 2008, 09:15:32 PM »

Offline fairweatherfan

  • Johnny Most
  • ********************
  • Posts: 20738
  • Tommy Points: 2365
  • Be the posts you wish to see in the world.
Quote from: nickagneta
Here's the problem. Hollinger is not Asimov's Hari Seldon that can turn arbitrary human results into a predictable mathematical formulation. [...] He is dealing with random results that differ from year to year and game to game. This is why I feel his rankings are a farce and will never give any credence to his rankings.

No offense Nick, but if you truly believe the basic stats generated during a basketball game are completely random numbers, you have a fundamental misunderstanding of how the world works.  To broaden the topic: your belief that human behavior cannot be predicted or evaluated by mathematics, applied as generally as you state it above, would also force you to believe:

- That every insurance company in the world should go out of business very quickly (using actuarial tables and statistical modeling of human behavior to predict costs and benefits is the entire business model of most insurance companies)

- That the prediction models used in clinical assessments of mentally ill individuals - models that predict their likelihood of violent outbursts, relapses, etc., and that have repeatedly been shown to beat the subjective ratings of even the most skilled and well-trained clinicians by multiple standard deviations - are just somehow getting lucky over and over again.

- That every standardized test ever devised is worthless, as is grading in general - after all, these are just random patterns generated by "players" (test-takers), and the idea that past performance predicts future success must be false.

- That the entire field of economics is either a farce or a completely blind shot in the dark.  Economics is all about analyzing human behavior and predicting future behavior, quite often (I'm not an economist, so I don't know exactly how often) using advanced statistical modeling.

- The stock market will inevitably collapse...whoops, ok moving on.

Whether you accept it or not, many of our fundamental societal institutions are founded on predicting future human behavior from statistical modeling of past trends, and for the most part, most of the time, they work.  As with other things, we tend to notice the one time they break down and ignore the 99 times things go smoothly, but, since there isn't anarchy just yet, these things have held up pretty well.

Bringing it back to basketball, the argument that wins, losses and scoring margin - the main stats in Hollinger's formula - are just randomly generated numbers comparable to a lottery draw makes no sense.  If that were correct, every game would simply be a coin flip, we'd see a repeat champion approximately once every 900 years, and nearly all teams would be within 2 or 3 games of .500 every season.  There would be no point in even following the sport, because no one would actually be better than anyone else - it would just be a string of random events.

But I don't think you actually believe that - I think you're confusing individual event probabilities with trends in much larger samples.  A given shot might trickle in or roll off the rim regardless of whether Ray Allen or Rajon Rondo is shooting it, but over 1,000 shots, Ray will basically always come out ahead.  The numbers obtained from the large sample are therefore very indicative of ability, and so can be used to predict future shooting success, even though the single-shot sample is, well, kind of random.

Same deal with this model.  A single event within a game can be a total fluke; a full game can be a fluke, though it's less likely.  A season takes a host of improbable on-court events (which is all we're talking about here - not injuries, lottery luck, etc.) to become a fluke.  A decade's worth of seasons will almost always very closely reflect the actual abilities of your team.  As the sample gets bigger, prediction with a good model gets better.  Not perfect - Hollinger has never claimed it's anything more than a tool to look at the league differently - but better.
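The single-shot versus 1,000-shot point is easy to check with a quick simulation (the shooting percentages below are hypothetical, not the players' real numbers):

```python
import random

random.seed(42)

def made_shots(pct, n):
    """Count made shots in n attempts for a shooter whose true percentage is pct."""
    return sum(random.random() < pct for _ in range(n))

better, worse = 0.45, 0.38   # hypothetical true shooting percentages
trials = 1000

# Head-to-head on a single shot: the better shooter outscores the
# worse one well under half the time (usually both miss or both hit).
single = sum(made_shots(better, 1) > made_shots(worse, 1) for _ in range(trials))

# Head-to-head over 1000 shots each: the better shooter is almost
# always ahead, because single-shot flukes wash out in the large sample.
long_run = sum(made_shots(better, 1000) > made_shots(worse, 1000) for _ in range(trials))

print(f"better shooter ahead after 1 shot:     {single / trials:.0%} of trials")
print(f"better shooter ahead after 1000 shots: {long_run / trials:.0%} of trials")
```

The same logic is why a ranking model built on a handful of games is shakier than one built on a full season of them.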

Re: John Hollinger's Power Rankings
« Reply #23 on: December 02, 2008, 10:01:02 PM »

Offline guava_wrench

  • Satch Sanders
  • *********
  • Posts: 9931
  • Tommy Points: 777

But if a metric is coming up with obvious errors over and over again then why keep it? It obviously isn't doing the job right or properly capturing the dynamic adequately enough. And if it is not then there is no reason why it shouldn't be criticized.

It is obvious that the Celtics are not the fourth best team in the league right now. It is obvious that the Pacers aren't the 10th best team in the league right now. There is no way that the Suns, Nets and Hawks deserve to be rated as low as they are.

The metric is obviously wrong and poorly contrived.


I disagree - as a guy who uses a lot of stats in my job, there are several things to be said in defense of Hollinger's system.  The first is that it's based on a small sample size right now, so there are going to be some fluky results.  As the season gets further on it becomes more reflective of reality.

Second is that, while we're clearly better than Portland, one of the system's fundamental qualities is that it ignores subjective things like pre-existing opinions.  All teams start out equal, and only objective data - what happens on the court is the ONLY thing that goes into the formula.  This can be a disadvantage too, but by removing the human element, it removes the bias of opinions from the equation, which can shed some light on things.  Indiana or Milwaukee are good examples - they have lost a lot of games, but have played a lot of good teams and generally played them close.  Both teams are playing better than their record indicates, and this is supported by the data.

Hollinger's formula isn't thrown together at random or "made up" either (not your words but others have said this) - it looks at data from past seasons and weights variables according to how predictive they are of regular season and playoff success.  For example, margin of victory is a stronger predictor of future wins and even winning a championship than  current W-L record, so the model weights it more strongly.  Same thing with recent performance. 

Lastly, the model suffers from what I call "BCS syndrome" - the only time people care about it is when it deviates from conventional wisdom.  When that happens, people usually get upset and criticize the model heavily, like with the BCS.  The problem is, if the model just parroted the opinions of the average fan, it would be useless - it wouldn't give any additional information above and beyond what people already thought.

The model is MOST useful when it gives a different result from what people expect, but that's unfortunately when people have the lowest opinion of it.  Portland's high ranking tells us they've been playing very well in situations where a mediocre or even good NBA team wouldn't - what they've done so far is the profile of an elite team.  Phoenix's lower ranking says they aren't, so far, as good as their record would indicate - and if you take a look at the data or watch them play, that appears to be true.  When the model gives a surprising result, that's a time to look a little closer at a team and see if they maybe aren't over or underperforming in ways that people aren't picking up on yet. 

Now, it's far from perfect: the reliance on objective data means it'll make mistakes, and the mistakes will be bigger and more frequent earlier in the year.  But conventional wisdom is often wrong too - it was "obvious" preseason that New Orleans was a top-5 team and that Denver and Portland were average at best; both calls have been wrong so far, and Hollinger's numbers reflected actual performance significantly earlier than most people.  Basically, it's providing a tool, not an oracle - it helps look at team performance from a different perspective, one that doesn't care about opinions, hype, and pre-season rankings.  It's not perfect and never will be, but it can be useful.

Ok, I think I just got carpal tunnel from typing all this out, but it's a topic I'm passionate about.
While I agree that statistics are an extremely useful tool and that deciphering and interpreting them is great, there is a huge difference between interpreting statistics and warping what they say into something completely different. Hollinger is going back and taking completely arbitrary results that rely on tons of variables - human variables at that - and trying to find patterns in them.

Here's the problem. Hollinger is not Asimov's Hari Seldon, able to turn arbitrary human results into a predictable mathematical formulation. What he does is no different than the guy who looks at a horse race card and predicts the winners based on his system, or the guy who tries to find mathematical patterns in picking out his 6 Megabucks numbers every week. People who do these things are trying to find a pattern in random number generation. But by its very definition there is no pattern in random numbers.

And unfortunately, that's what statistics created by humans playing a game are. They are random number generation, and Hollinger's formulation,

RATING = (((SOS-0.5)/0.037)*0.67) + (((SOSL10-0.5)/0.037)*0.33) + 100 + (0.67*(MARG+(((ROAD-HOME)*3.5)/(GAMES))) + (0.33*(MARGL10+(((ROAD10-HOME10)*3.5)/(10)))))

which uses averages and percentages to prioritize different aspects of his formula, is nothing more than an attempt to find a pattern in a random number sequence. He is taking current results and comparing them to past random results to come up with a formula that predicts the relative strength of a team.
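For what it's worth, the formula quoted above is mechanical enough to transcribe directly into code. A sketch (variable names follow the abbreviations in the formula; reading ROAD/HOME as counts of road and home games played is my assumption):

```python
def hollinger_rating(sos, sos_l10, marg, marg_l10,
                     road, home, road10, home10, games):
    """Power rating, transcribed from the formula quoted above.

    sos / sos_l10   - strength of schedule, full season / last 10 games
    marg / marg_l10 - average scoring margin, full season / last 10 games
    road, home      - road and home games played (road10/home10: last 10)
    """
    sos_term = (((sos - 0.5) / 0.037) * 0.67
                + ((sos_l10 - 0.5) / 0.037) * 0.33)
    marg_term = (0.67 * (marg + (road - home) * 3.5 / games)
                 + 0.33 * (marg_l10 + (road10 - home10) * 3.5 / 10))
    return sos_term + 100 + marg_term


# A perfectly average team - .500 schedule, zero scoring margin, even
# home/road split - lands exactly on the baseline of 100.
print(hollinger_rating(0.5, 0.5, 0.0, 0.0, 10, 10, 5, 5, 20))  # 100.0
```

Whatever one thinks of the inputs, the arithmetic itself is just a weighted blend of schedule strength and scoring margin around a baseline of 100, with the last 10 games weighted at a third.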

I have no faith in mathematics attempting to predict or rate the actions of humans. Just because Hollinger feels he sees patterns that revolve around margin of victory and how a team is performing lately as compared to overall, doesn't mean they are there. He is dealing with random results that differ from year to year and game to game.

This is why I feel his rankings are a farce, and why I will never give them any credence.

His rankings are for fun. He has had disclaimers on the rankings. If you can't see the value of having those rankings, then fine.

But your analysis is way off. Random results? Random number sequence? Where are you getting that from? SOS and score differential are used in all computer rankings, such as the BCS rankings. There is nothing random about that. Even the weights are not random, though they may be flawed.

Re: John Hollinger's Power Rankings
« Reply #24 on: December 02, 2008, 10:20:27 PM »

Offline nickagneta


No offense Nick, but if you truly believe the basic stats generated during a basketball game are completely random numbers, you have a fundamental misunderstanding of how the world works.  To broaden the topic: your belief that human behavior cannot be predicted or evaluated by mathematics would, if applied as generally as you state it above, also force you to believe:

- That every insurance company in the world should go out of business very quickly (using actuarial tables and data modeling of human behavior to predict costs and benefits is the entire business model of most of them)

- That prediction models used in clinical assessments of mentally ill individuals to predict their likelihood of violent outbursts, relapses, etc - which have been repeatedly proven to be multiple standard deviations better than subjective ratings of even the most skilled and well-trained clinicians - are just somehow getting lucky over and over again.

- That every standardized test ever devised is worthless, as is grading in general - after all, these are just random patterns being generated by "players" (test-takers) and the idea that past performance predicts future success must be false.

- That the entire field of economics is either a farce or a completely blind shot in the dark.  Economics is all about analysis of human behavior and prediction of future behavior, quite often (not an economist, so don't know how often) using advanced statistical modeling.

- The stock market will inevitably collapse...whoops, ok moving on.

Whether you accept it or not, many of our fundamental societal institutions are founded on the philosophy of predicting future human behavior based on statistical modeling of past trends, and for the most part, most of the time, they work.  As with other things, we tend to notice the one time they break down and ignore the 99 times things go smoothly, but, as there isn't anarchy just yet, these things have held up pretty well.

Bringing it back to basketball, the argument that wins and losses and scoring margin - the main stats in Hollinger's formula - are just randomly generated numbers comparable to a lottery draw makes no sense.  If that was correct, each and every game would simply be a coinflip, we'd see a repeat champion approximately once every 900 years, and nearly all teams would be within 2 or 3 games of .500 all season, every season.  There would be no point in even following the sport, because it's not like anyone's actually better than anyone else, it's just a string of random events.

But I don't think you actually believe that - I think you are confusing individual event probabilities with trends in much larger samples.  A given shot might trickle in or roll off the rim independently of whether Ray Allen or Rajon Rondo is shooting it, but over 1000 shots, Ray will basically always come out ahead.  The numbers obtained with the large sample, then, are very indicative of ability, and so can be used to predict future shooting success, even though the single shot sample is kind of, well, random.  Same deal with this model.  A single event within a game can be a total fluke - a full game can be a fluke, though it's less likely.  A season takes a host of improbable on-court events (all we're talking about here, not injuries, lottery luck, etc) to become a fluke.  A decade's worth of seasons will almost always very closely reflect the actual abilities of your team.  As the sample gets bigger, prediction, with a good model, gets better.  Not perfect (which Hollinger has never claimed to be, nor anything better than a tool to look at the league differently), but better. 
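That shooting example is easy to simulate. A quick sketch (the 45% vs 40% figures and the trial counts are made up purely for illustration):

```python
import random

random.seed(1)

def make_shots(fg_pct, n):
    """Count makes in n attempts for a shooter with the given FG%."""
    return sum(random.random() < fg_pct for _ in range(n))

# Any single shot is close to a coin flip either way, but over 1000
# attempts the better shooter essentially always comes out ahead.
better_wins = sum(make_shots(0.45, 1000) > make_shots(0.40, 1000)
                  for _ in range(200))
print(f"better shooter ahead in {better_wins} of 200 simulated seasons")
```

Run it and the 45% shooter finishes ahead in nearly every one of the 200 simulated "seasons" - the single-shot randomness washes out long before you get to a meaningful sample.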

No offense, fairweatherfan, but I would counter that you do not understand the difference between checking numbers against tables, graphs, and percentages and rendering an opinion, versus the socio-environmental and human variables involved in mathematical formulations derived from human beings interacting with each other.

Actuaries use data not to predict individual human behavior but to create tables that give risk assessments based on data input about a certain client. If someone smokes, the stats show they are X amount more likely to die before the age of 70 than someone who doesn't. Those are probability tables, which are vastly different from what I am speaking of.

Your standardized testing example doesn't hold water either, because it is the results that are analyzed. Each test-taker plays against a standard, unchanging test, and then the results are studied. In basketball the players are playing against human beings who are neither standardized nor unchanging.

Economics isn't a good example either, as economics deals with the wider number generation of unemployment rates, GNP, national income, output, consumption, inflation, savings, investment, international trade, and other numbers created by huge numbers of individuals interacting and creating results.

Economics isn't a bad example of the point you are trying to make, but it is off base. Hollinger is attempting to use the results of two human teams competing against each other and give them significance for completely different human teams. Everything is way too fluid.

Mathematical formulations that predict human behavior on vast scales, like economics or clinical assessment algorithms for mentally ill patients, create a general idea of where a vast group may trend. In basketball the dynamic is vastly smaller. For instance, clinical assessment algorithms are generated to predict how patients with certain diseases and certain past behavioral patterns may generally trend when introduced to different medications, situations, or stimuli. But they will not predict what Joe, Tom, Bill, Susan, and Jack will do - just how they might react.

Hollinger is trying to take highly volatile stats, changing from year to year, generated by one small group interacting with other small groups, and find linear formulations that will predict the relative strength of a completely different and equally small group of people. Even using advanced algorithms, that would be nearly impossible to do.

I'm sorry, but your semi-sarcastic examples completely underwhelmed me. They do not pertain to the dynamics I am discussing. A 12-person team that plays only five players at a time against another 12-person team that also plays only five at a time is just way too variable for the results to be anything other than random number generation. There are no constants that connect the situations from year to year.

Take a look at the 1976-77 Celtics. They had a 44-38 record and gave up more points than they scored. Hollinger's numbers would probably find them to be a very mediocre team at best, or maybe even a bad team. The 1975-76 team scored about 3 points more than they allowed and went 54-28. Hollinger would probably find them to be a pretty good team. But they were virtually the same team, except that Cowens and Scott both missed half a season in '77. The difference is the '77 team went on to lose in the EC semis and the '76 team won it all. Given their stats, would Hollinger's numbers have predicted their relative strengths properly? I'm not sure, because I don't want to do the work, but comparatively speaking, maybe, maybe not. It's a toss-up, because the numbers for each year are so randomly generated.


Re: John Hollinger's Power Rankings
« Reply #25 on: December 02, 2008, 10:45:32 PM »

Offline fairweatherfan

Well, I apologize for being sarcastic Nick, and it's obvious you're more familiar with the material than I gave you credit for.  But blanket statements like "mathematics cannot predict or rate human behavior" and "occurrences over the course of an NBA season are purely random events similar to a lottery draw" (both paraphrased, but I think they reflect your point) are not, in my opinion, defensible.  While it's legit to argue the merits and shortcomings of this specific model in this specific area, both of those statements are, again in my opinion, unequivocally false.

As for my examples, you can disagree with them individually, or argue that they aren't analogous to the events of an NBA season, but I don't think it's arguable that people can and do predict human behavior based on existing data and past trends, and that these predictive models are often more successful than any subjective tool.  It seems like you were arguing that that was not the case in any area of assessment or prediction. 

I think the general nature of what you were saying was what riled me up more than the criticisms of the model itself.  I hear people say things like that all the time in my line of work, and it probably creates a kneejerk reaction by this point.  So I apologize for any lack of civility I might've shown.

Either way, my nerd rage is over - neither one of us is gonna change our minds.  I do know that Hollinger has correctly predicted the last 2 champions from his model, including one pick that contradicted the W-L leader.  That's not a huge sample, but it's about as good an indicator as a blind model can have so far.

Re: John Hollinger's Power Rankings
« Reply #26 on: December 06, 2008, 09:43:16 AM »

Offline Kwhit10

I love how the Celtics pretty much dominate the Blazers yet the Blazers' power ranking actually went up...?

Re: John Hollinger's Power Rankings
« Reply #27 on: December 09, 2008, 11:19:12 AM »

Offline Fafnir

I love how the Celtics pretty much dominate the Blazers yet the Blazers' power ranking actually went up...?
That is actually pretty easy to explain. The formula gives extra weight to the last ten games. The game that "fell off" after we beat them was a 5 point loss to Golden State.

The formula weights a 15-point loss to the C's as better than a 5-point loss to the Warriors. A bit silly, but not that ridiculous - they are still playing very good ball on the whole.
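Using the weights from the formula quoted earlier in the thread, you can see how that happens: when a small loss to a weak team rolls out of the 10-game window and a bigger loss to a strong team rolls in, the gain in the last-10 SOS term can outweigh the hit to the last-10 margin term. The opponent win percentages and margins below are made up for illustration:

```python
# Last-10-games components of the rating, per the formula quoted above.
def last10_terms(sos_l10, marg_l10):
    sos_part = ((sos_l10 - 0.5) / 0.037) * 0.33   # schedule strength, last 10
    marg_part = 0.33 * marg_l10                   # scoring margin, last 10
    return sos_part + marg_part

# Before: the window includes a 5-point loss to a .400 team.
before = last10_terms(sos_l10=0.540, marg_l10=6.0)

# After: that game drops off and a 15-point loss to a .800 team enters,
# so avg opponent win% rises by .040 while avg margin falls by 1 point.
after = last10_terms(sos_l10=0.580, marg_l10=5.0)

print(after > before)  # the SOS gain slightly outweighs the margin hit
```

With these numbers the rating ticks up by a few hundredths of a point despite the worse loss, which is roughly the Portland situation being described.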

Re: John Hollinger's Power Rankings
« Reply #28 on: December 09, 2008, 11:23:32 AM »

Offline crownsy

Plus they have a slight edge in the all-important "style points" category that notes margin of victory - which, if we eschewed giving the bench garbage time in the 4th, we'd have them beat on.

Re: John Hollinger's Power Rankings
« Reply #29 on: December 09, 2008, 01:32:51 PM »

Offline guava_wrench

I love how the Celtics pretty much dominate the Blazers yet the Blazer's power ranking actually went up...?

The SOS seems to be overvalued this early in the season. The problem with having the best record is that you can't play yourself, so you end up with a lower SOS.

Portland has had a lot of road games.

Keep in mind that a team's Hollinger score can go up if their most recent game was better than the game it replaced in the last-10 window.