Are the Official World Golf Rankings Biased?

Scott Rushing

Are the Official World Golf Rankings biased against PGA Tour players? A recent analysis suggests they are.

I will try not to take things out of context, but I do suggest you review the entire article yourself for more detail. There is a wealth of information in the full report, and it's difficult to digest it all here. You can read the entire article here: http://www.columbia.edu/~mnb2/broadie/Assets/owgr_20120507_broadie_rendl...

Are the Official World Golf Rankings Biased? May 12, 2012

Mark Broadie, Graduate School of Business, Columbia University
Richard J. Rendleman, Jr., Tuck School of Business, Dartmouth College

In this paper, we investigate whether the OWGR system is biased for or against any of the tours, and if so, by how much. To investigate any potential bias, we compare
the OWGR system with two unbiased methods for estimating golfer skill and performance.

The OWGR awards points to participants in all majors, WGC events, and regular events on the tours. The winner of each event receives a specified number of OWGR points, and all players who make the cut are awarded points on a declining basis, depending on their finishing positions. The analysis compares that method with two alternatives. The first is a score-based skill estimation (SBSE) method, which uses scoring data to estimate golfer skill, taking into account the relative difficulty of the course in each tournament round. The second is the Sagarin method, which uses win-loss-tie and scoring-differential results for golfers playing in the same tournaments to rank golfers. Neither the score-based skill method nor the Sagarin method uses tour information in calculating player ranks, and therefore neither is biased for or against any tour.
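The "declining basis" of OWGR points can be pictured with a small sketch. The real OWGR points tables are event-specific and set by committee; the numbers and decay scheme below are purely hypothetical, chosen only to show the general shape of an allocation that falls off with finishing position.

```python
def declining_points(winner_points, finish_position, decay=0.85):
    """Hypothetical points scheme: the winner receives winner_points,
    and each finishing position below gets a fixed fraction (decay)
    of the position above it. Illustrative only; not the real OWGR table."""
    return winner_points * decay ** (finish_position - 1)

# Winner of a hypothetical 24-point event:
print(declining_points(24, 1))            # → 24.0
# Tenth place receives a much smaller share:
print(round(declining_points(24, 10), 2))
```

Any monotonically declining schedule has this character; the paper's point is that the actual schedule is set by committee judgment rather than derived from scoring data.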

Using data from 2002 to 2010 and comparing the resulting ranks from the OWGR and score-based methods, we find that PGA Tour golfers are penalized by an average of 26 to 37 OWGR ranking positions compared to non-PGA Tour golfers.

A major obstacle to ranking golfers on a single scale is that professional golfers play in different tournaments on different courses against different competition. Tournament finishing positions are not necessarily comparable across tournaments. For example, winning a minor tournament against lesser competition is likely a less impressive accomplishment than finishing tenth at a major. Scores at different tournaments are not comparable because of differences in course difficulty. For example, many golfers on the Nationwide Tour, the developmental tour for the PGA Tour, have average scores that are lower (i.e., better) than the average scores of PGA Tour golfers. This happens not because the players are better but because the courses are easier. The two methods described next measure the performance of golfers relative to other golfers playing on the same course on the same day, and then tie the results together through the play of golfers who compete on multiple tours and those who participate in majors and WGC events. Both the score-based skill and Sagarin methods use 18-hole scores as input, while the OWGR method uses tournament finishing positions and does not use scoring information directly.
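The relative-to-field idea can be sketched in a few lines. This is a simplified illustration, not the authors' implementation: the paper's actual SBSE is a full statistical model, and the round data below is made up.

```python
from collections import defaultdict

# Sketch of score-based skill estimation: grade each golfer against the
# field average for the same course on the same day, so that easier
# courses don't make their fields look stronger.
rounds = [
    # (round_id, golfer, 18-hole score) -- hypothetical data
    ("hard_course_day1", "A", 72),
    ("hard_course_day1", "B", 74),
    ("easy_course_day1", "C", 68),
    ("easy_course_day1", "D", 70),
]

# Field average per round captures that course/day's difficulty.
by_round = defaultdict(list)
for rnd, golfer, score in rounds:
    by_round[rnd].append(score)
field_avg = {rnd: sum(s) / len(s) for rnd, s in by_round.items()}

# A golfer's "neutral score" averages (score - field average):
# lower is better, and it is comparable across courses.
diffs = defaultdict(list)
for rnd, golfer, score in rounds:
    diffs[golfer].append(score - field_avg[rnd])
neutral_score = {g: sum(d) / len(d) for g, d in diffs.items()}

print(neutral_score)  # A and C each beat their field by 1 stroke
```

Note that golfer C's raw score (68) is lower than A's (72), yet both grade out identically once course difficulty is removed, which mirrors the Nationwide/PGA observation above.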

Using 18-hole scoring data of all golfers participating in tournaments from 2002 to 2010 on the major tours (PGA, Europe, Japan, Asia, Sunshine and Australasia) and from the developmental tours (Nationwide and Challenge), we test for bias in OWGR rankings by comparing the OWGR rankings with two methods, score-based skill estimation (SBSE) and Sagarin, which do not use tour information in their computations. We find a persistent, large and statistically significant bias in the OWGR rankings against PGA Tour golfers; a golfer of a given estimated SBSE skill level, or a given Sagarin rank, is likely to be penalized in the OWGR rankings for playing events on the PGA tour and rewarded for playing elsewhere.

The general bias can be illustrated vividly with a specific example. At the end of 2010, PGA Tour player Nick Watney and Yuta Ikeda of the Japan Tour had roughly the same OWGR rankings: Watney was ranked 36 in the OWGR while Ikeda was ranked 40. But according to our SBSEs, Watney's mean neutral score was estimated to be 0.98 strokes lower (better) than Ikeda's. Watney was ranked 11 on the basis of SBSE while Ikeda was ranked 93, a difference of 82 ranking positions. These two golfers had similar OWGRs but very different SBSE ranks based on their performances in the Jan 2009-Dec 2010 period.

These findings are important because OWGR rankings determine, in part, eligibility to play in major tournaments, World Golf Championships and other events. Designing a ranking system is difficult, and we are not proposing the SBSE method as an alternative world ranking system. However, a ranking system in which points are determined by a committee, rather than by objective analysis, can easily lead to the biases described in this paper.