Welcome back to another edition of Basketball, Stat. Hopefully you enjoyed the last post I did on PER. As a follow up, I wanted to do a post covering Regularized Adjusted Plus/Minus (RAPM). For those of you who might have missed the first post or introduction, I wanted to do a series covering many of the advanced stats used in the NBA. We see these stats used all the time to compare players, but I know for myself at least, I didn’t have a firm grasp of what went into these numbers. So given that, I wanted to build that understanding for myself and share what I find!
The Basics
To get to RAPM, we first have to get to Adjusted Plus/Minus, or APM. APM is a variation of the standard Plus/Minus stat. If you’re unfamiliar with Plus/Minus, it is a very simple concept: the net score while a given player is on the court is defined as his Plus/Minus. In other words, if Player A enters the game with a score of 50-50, then exits the game with his team winning 75-70, he would have a Plus/Minus of +5 for that “stint.” This stat is cumulative. You can use it for certain stretches of a game (e.g. the Lakers were +10 when LeBron was on the court in the first half), for an entire game (e.g. even though the Lakers lost, they were +3 when LeBron was on the court), or even throughout the season (e.g. the Lakers are only 4th in the West, but are +100 with LeBron on the court so far this season).
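If you like seeing things in code, here’s a minimal sketch of that bookkeeping in Python. The stint structure and field names are made up for illustration; real play-by-play data would need to be parsed into this shape first.

```python
# A minimal sketch of cumulative Plus/Minus, assuming play-by-play data has
# already been parsed into "stints". The field names here are hypothetical.

def plus_minus(stints, player):
    """Sum the stint margins for every stint the player was on the court."""
    total = 0
    for stint in stints:
        if player in stint["players_on_court"]:
            total += stint["team_points"] - stint["opponent_points"]
    return total

# The 50-50 to 75-70 example above: one stint in which Player A's team
# outscored the opponent 25-20, giving him a Plus/Minus of +5.
stints = [{"players_on_court": {"Player A"}, "team_points": 25, "opponent_points": 20}]
print(plus_minus(stints, "Player A"))  # 5
```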
I’m sure you can already see the potential flaws in using standard Plus/Minus to measure player performance. For example, it doesn’t account for the quality of your competition or of the other players on your team. You can put nearly any player as the fifth man in a lineup of Steph, LeBron, AD, and Giannis, and that player will have a positive Plus/Minus. Similarly, if your minutes only come against garbage-time lineups, your Plus/Minus will be higher than it should be.
This also raises some questions: Does a player with an unexpectedly high Plus/Minus benefit from circumstances, or is he doing something that creates value we might not be seeing? Is a player with an unexpectedly low Plus/Minus not as good as we thought, or is he just in an unfortunate situation? To answer some of these questions, many people have tweaked the stat to account for various factors, but Dan Rosenbaum was the one who really articulated and defined Adjusted Plus/Minus in 2004. Here’s part of what he wrote:
However, these “unadjusted” plus/minus ratings do not measure the value of a player per se; they measure the value of the player relative to the players that substitute in for him. In addition, there are differences in the quality of players that players play with and against. A weak starter on a team with exceptionally good starters (relative to bench players) will generally get a very good unadjusted plus/minus rating – regardless of their actual contribution to the team.
Thus, a better measure of player value would “adjust” these plus/minus ratings to account for the quality of players that a given player plays with and against. In addition, it would account for home court advantage and for clutch time/garbage time play. Thus, unlike in unadjusted plus/minus ratings, these “adjusted” plus/minus ratings do not reward players simply for being fortunate to being playing with teammates better than their opponents.
With that said, there’s still a lot of ambiguity around APM, as there are many different versions by different people with slight tweaks. But we will focus specifically on the work Dan did for APM, which will then launch us into RAPM!
The “Formula” – APM
APM is a little denser than a statistic like PER, as it isn’t calculated by simply plugging numbers into a formula and getting a number back. To get APM, we must return to our college math classes and remember how to do a linear regression. But fear not! This isn’t an attempt to teach you linear algebra; I will do my best to explain it in a way the average basketball fan can understand.
Returning to our Plus/Minus example, let’s take the idea of stints. We can define a stint as a period of play in which no substitutions are made, and each stint has a margin (e.g. +2 if Team A wins that stint by two points). In his original article, Rosenbaum analyzed more than 60,000 stints across two seasons to calculate his first public set of APM stats.
For each stint, we have 10 players on a court, five players for Team A, the home team, and five players for Team B, the away team. Let’s call the players for Team A as follows: P₁, P₂, P₃, P₄, P₅. And for Team B, we will have: P₆, P₇, P₈, P₉, P₁₀.
If a player is playing on the home team, P = 1. If a player is playing on the away team, P = -1. And if a player is not in the game for that stint, P = 0.
APM tries to figure out how these players work together to reach the margin on the court during that time, while also accounting for home court advantage (b₀). So let’s lay that out:
Margin = b₀ + b₁P₁ + b₂P₂ + b₃P₃ + b₄P₄ + b₅P₅ + b₆P₆ + b₇P₇ + b₈P₈ + b₉P₉ + b₁₀P₁₀
“Margin” is the offensive rating (points per 100 possessions) of Team A minus the offensive rating of Team B during that stint. Note that we use the difference in offensive ratings instead of the raw score difference so that the margin isn’t distorted when one team gets more possessions than the other. This makes it a “per 100 possessions” margin.
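For the code-curious, here’s a hedged sketch of how a single stint could be turned into a row of that regression using the +1/-1/0 encoding above. The player indices and field names are assumptions for illustration, not anyone’s actual pipeline.

```python
import numpy as np

def encode_stint(home_ids, away_ids, n_players):
    """One design-matrix row: P = 1 for home players, -1 for away, 0 otherwise."""
    row = np.zeros(n_players)          # one column per player in the league
    row[list(home_ids)] = 1.0          # the five home players
    row[list(away_ids)] = -1.0         # the five away players
    return row

def stint_margin(home_pts, home_poss, away_pts, away_poss):
    """Per-100-possession margin: home offensive rating minus away offensive rating."""
    return 100.0 * home_pts / home_poss - 100.0 * away_pts / away_poss
```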
If we were actually in a college linear algebra class, this is the part where we would start using matrices with lots of 1s, 0s, and -1s to solve for the “b” coefficients. But alas, we’re not, so I’ll save you the eye-sore and just assume we run a linear regression here.
But in simple terms, what the regression is doing is finding the values of all of the “b”s, one for each player, that best fit this equation across all of the stints we are evaluating.
So the regression solves for all of the “b” coefficients, and those coefficients are each player’s Pure APM (this is how Dan refers to it). You can interpret a player’s Pure APM as the value he adds per 100 possessions according to the regression. It uses thousands of stints as data points, seeing how all of the players impact each other while minimizing the squared error of the regression. So we solve (to an extent) the issue of players playing with either elite or subpar lineups skewing their standard Plus/Minus, and we get a closer representation of how much value they are adding to their team.
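Here’s a minimal sketch of that regression step, assuming the rows and margins come from an encoding like the one sketched above. It’s plain ordinary least squares with an intercept column for home-court advantage (b₀), not Dan’s exact pipeline.

```python
import numpy as np

def pure_apm(stint_rows, margins):
    """Return (home-court advantage b0, one coefficient per player) via OLS."""
    X = np.column_stack([np.ones(len(stint_rows)), np.vstack(stint_rows)])  # prepend b0 column
    y = np.asarray(margins)
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimizes squared error over all stints
    return coefs[0], coefs[1:]
```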
If you look at these raw values, you will absolutely see some outliers, and even small adjustments to the numbers will cause huge swings in the results. So Dan does a few things to try to adjust his results and turn Pure APM into APM, but this is where we will part ways with Dan’s APM and introduce RAPM!
The “Formula” – RAPM
As we begin our discussion of RAPM, this is where things can get extremely math-heavy! Again, I won’t deep-dive into what would be covered in a college linear algebra class, but if you wish to do so yourself, here is a great article that gets into that level of detail. I also want to mention Jeremias Engelmann, since he’s largely credited with the creation of RAPM. He gave an awesome lecture on it that you can find here.
With that said, let’s talk about what RAPM is compared to APM. With APM, we will inevitably have players with very little playing time who produce giant outliers. Since APM is accounting for all of the relationships between the players on the floor, these outliers affect the whole model, and you end up with some extremely “off” values that don’t make sense. On top of that, even the numbers that do make sense come with a large margin of error according to the model. And again, small tweaks to any of the inputs can change the results drastically.
So RAPM applies a filter, to put it simply. The filter is a technique called ridge regression that attempts to filter out the “bad” pieces of data without changing the ones that are valid. For those interested, here’s the equation:
β = (XᵀX + λI)⁻¹XᵀY
This isn’t a perfect analogy by any means, but for the less mathematically inclined, you can think of it like a water faucet whose flow hinges on the value of λ (lambda) in the filter equation. If lambda is zero, the faucet is turned all the way on, meaning every piece of data gets through untouched, so at lambda = 0, RAPM is the same as APM since the filter isn’t actually restricting any of the data. Conversely, as lambda approaches infinity, it’s as if the faucet is turned completely off, so all of the data is restricted, and RAPM = 0 for everyone! So we need to find the sweet spot for lambda so that we’re controlling the data at the right level. It’s not extremely important for us here, but if you’re curious at all, Joseph Sill wrote a paper for the Sloan Conference and found that the ideal lambda is close to 2,000 for one year of data, or close to 3,000 for three years of data.
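Here’s a hedged sketch of that “filter” in code, straight from the closed-form ridge equation above. For brevity it regularizes every column and leaves out the home-court intercept, which real implementations usually handle separately; the default lambda simply echoes the one-season value from Sill’s paper.

```python
import numpy as np

def rapm(stint_rows, margins, lam=2000.0):
    """Closed-form ridge regression: beta = (X'X + lambda * I)^-1 X'y."""
    X = np.vstack(stint_rows)
    y = np.asarray(margins)
    identity = np.eye(X.shape[1])
    # lam = 0 reproduces the OLS (APM) fit; larger lam pulls every coefficient toward 0.
    return np.linalg.solve(X.T @ X + lam * identity, X.T @ y)
```

In practice, lambda is usually chosen by cross-validation (hold out some stints and pick the lambda that predicts them best) rather than hard-coded.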
So when you apply this filter, it gives us a much better balance in terms of how much weight each piece of data gets. You avoid the problem of “overfitting,” where the model takes every piece of data at face value, including the guy who played five minutes, went 4/4 from three, and posted a Plus/Minus of +10 in those random minutes. That kind of data throws off normal APM, which would likely overrate that player because it has no way to automatically recognize the outlier. But RAPM does!
So that is how RAPM is calculated! There really is a lot of room for minor variations, so if you go looking at various RAPM calculations that are available online, you might notice they’re all different. Again, given the nature of how it’s calculated, that’s just going to be part of the package with RAPM. But in general, the numbers should all be relatively close and ultimately tell you the same information with only slight nuances. For example, some people will calculate ORAPM and DRAPM separately, then add the two together for overall RAPM. Some won’t separate them. Some will use two seasons of data at a time, some will use just one. Some will have slightly different values of lambda. So it’s important when looking at RAPM measurements to realize what the model is using.
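To make the ORAPM/DRAPM variation concrete, here’s a hedged sketch of one common way to set up that data: each stint becomes two rows, one per direction of play, with separate “offense” and “defense” columns for every player. Sign conventions and details vary from model to model, so treat this purely as an illustration.

```python
import numpy as np

def offense_defense_rows(stint, n_players):
    """Turn one stint into two (row, target) pairs: home offense and away offense."""
    row_home_off = np.zeros(2 * n_players)   # columns 0..n-1: offense, n..2n-1: defense
    row_away_off = np.zeros(2 * n_players)
    for p in stint["home_players"]:
        row_home_off[p] = 1.0                # home players on offense...
        row_away_off[n_players + p] = 1.0    # ...and on defense in the other direction
    for p in stint["away_players"]:
        row_home_off[n_players + p] = 1.0
        row_away_off[p] = 1.0
    # Targets: points scored per 100 possessions in each direction of play.
    y_home_off = 100.0 * stint["home_points"] / stint["home_possessions"]
    y_away_off = 100.0 * stint["away_points"] / stint["away_possessions"]
    return (row_home_off, y_home_off), (row_away_off, y_away_off)
```

Running the same ridge regression on these rows gives each player an offensive coefficient and a defensive coefficient; since a lower defensive coefficient means fewer points allowed, implementations built this way typically flip its sign before adding the two into a single RAPM.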
Taking a Step Back: Strengths and Weaknesses
You might have noticed that we haven’t used a single box score statistic in calculating RAPM other than the score of the game. That’s one of my favorite things about this stat! Ultimately, the point of basketball is to outscore your opponent, and you do that by maximizing the difference between the rate at which you score and the rate at which your opponent scores. RAPM focuses exclusively on how well players do that. Your points, rebounds, and assists absolutely play a part in how well you do it, but RAPM looks below the surface of those numbers to see how you’re impacting the game based on the score while you’re on the court, while accounting for who else is on the floor.
With that said, there are still some head-scratching results at times. And given the nature of how it’s calculated, RAPM makes it extremely difficult to go back and find where the errors come from. That is perhaps my biggest frustration with the stat. Going through people’s RAPM data and seeing an extremely random player in the top 10 raises lots of questions: Is this player extremely underrated? Do we need to adjust the model because we’re getting an outlier result? Are we going to get this outlier result no matter how we adjust, and do we just need to ignore it?
I think we’ll see something that’s a common theme with every advanced stat out there: we can’t rely on RAPM blindly. RAPM helps us uncover the players that traditional box score stats miss, like the Shane Battiers of the world. But even when we see an unexpected player, RAPM alone isn’t reliable enough to tell us that he is a positive-impact player. So ultimately I treat it as a clue or a piece of the puzzle, as should be the case with any stat or player evaluation. But we can be more specific here and say that this is a tool that looks specifically for impact on the scoring margin, regardless of the box score. So RAPM is a stat in which a player creating value on defense, via screen assists, or through “hockey assists” (a pass leading to an assist) will shine. Executing a well-designed scheme and being in the right spot so that a teammate gets an open shot is something RAPM recognizes (not explicitly, but through the scoring margin).
Historical Examples
We want to look back at actual data from past years and compare it to what we expect to see from RAPM. But as we discussed earlier, there is no single, canonical way to calculate RAPM; there are different values of lambda, multi-year vs. single-year datasets, etc. So we need to choose one data source, and I will be looking at this website and its calculation of RAPM. In their “About” section, they detail how they’ve calculated RAPM, but for simplicity: they use a single-year calculation, where ORAPM and DRAPM are each calculated and then added together for overall RAPM.
We can really make our points about RAPM just by looking at one data set, so let’s take a look at the top 10 from the 2018-19 regular season:
| Rank | Player | Team | Poss | ORAPM | DRAPM | RAPM |
|------|--------|------|------|-------|-------|------|
| 1 | Kevin Durant | GSW | 12,315 | 5.2647 | 1.4193 | 6.684 |
| 2 | Danny Green | TOR | 9,830 | 4.297 | 2.1749 | 6.4719 |
| 3 | Stephen Curry | GSW | 10,669 | 4.3916 | 1.0492 | 5.4409 |
| 4 | Paul George | OKC | 13,310 | 1.8271 | 3.5296 | 5.3567 |
| 5 | Jrue Holiday | NOP | 11,097 | 3.2686 | 1.7695 | 5.0382 |
| 6 | Derrick White | SAS | 7,424 | 2.5176 | 2.2822 | 4.7998 |
| 7 | Jayson Tatum | BOS | 10,820 | 2.0052 | 1.9543 | 3.9595 |
| 8 | Joel Embiid | PHI | 9,886 | 2.406 | 1.4051 | 3.8111 |
| 9 | Al Horford | BOS | 8,644 | 3.145 | 0.5891 | 3.734 |
| 10 | Jimmy Butler | PHI | 8,225 | 2.7189 | 0.9049 | 3.6238 |
Well, that’s interesting! With our first small set of data, we already have quite a few unexpected results! Durant at 1 makes sense, and Steph Curry at 3 makes a lot of sense too: he plays often alongside Durant and is an elite player in his own right. Paul George had an incredible season last year, so him at 4 looks solid as well. But Danny Green at 2, Jrue Holiday at 5, and Derrick White at 6 are huge surprises! Again, we don’t want to take these numbers as prescriptive, so our takeaway shouldn’t be “Danny Green was better than Steph last year.” That’s clearly not the case to anyone who follows basketball.
But it should prompt us to ask, “Did Danny Green, Jrue Holiday, and Derrick White provide a lot of value that we’re not seeing in the box score?” I would guess the answer is yes, but to know, we really need to dig into each of those players’ seasons. We need to watch film to see what they’re doing on defense and off the ball. We need to look at spacing data to see if maybe Danny Green’s shooting is drawing defenders to him and increasing the spacing on the floor while he’s there. We need to look at the other players on their teams and see if they have similar RAPMs that could be positively impacting these numbers.
On the opposite end of the spectrum, we see a number of “star” players that might surprise us far down the rankings:
- Kemba Walker ranked 460, with an RAPM of -0.5252
- Ben Simmons ranked 445, with an RAPM of -0.4669
- Devin Booker ranked 435, with an RAPM of -0.4186
- Blake Griffin ranked 425, with an RAPM of -0.3952
- D’Angelo Russell ranked 390, with an RAPM of -0.2611
Several of these I expected, while others were big surprises. Devin Booker is often cited as someone who puts up “empty stats.” I don’t fully believe that’s the case, but I do think his scoring output has caused him to be overrated, and he hasn’t impacted winning at a high level. But does this mean Devin Booker is a bad player? Absolutely not! It means that if we assume Devin Booker is going to be a future All-Star, we should rethink that opinion and look more closely at the areas of his game that we might be missing. Is he missing many defensive rotations? Is he over-shooting and dragging down his and/or his team’s scoring efficiency? Again, these are things we have to watch for, but RAPM isn’t going to tell us; it simply alerts us to a potential player misevaluation.
Conclusion
Essentially, RAPM is a “jumping off” point. It’s a signal that maybe we’re missing something. Looking back at this list, I would say that Al Horford in the top 10 makes a lot of sense, but it’s not an opinion that would have been widely accepted 5 years ago. He’s another player in the Shane Battier mold that impacts the game without filling up the box score. RAPM tells us that. But that doesn’t mean we should blindly accept it! I would be hard pressed to believe that Danny Green was the best player on the Raptors, much less the 2nd best player in the league last year. I would venture to guess, however, that his impact was underrated by the average fan. Similar things can be said for Jrue Holiday and Derrick White (and I’m especially interested in digging into how White ended up there!).
Since RAPM was invented, several newer metrics have been created that improve on it, such as RPM (Real Plus/Minus) and PIPM (Player Impact Plus/Minus). I will likely visit both of those metrics in a future post.
But hopefully this was helpful for you and educational! I know that for me, learning how APM and RAPM were calculated was extremely helpful in terms of understanding what I’m really looking at when I see these numbers. Thank you all again for your time reading this! If you have any thoughts, feedback, or requests, please let me know as I would love to hear them!