Hot Streaks and Playoff Success in MLS

Do teams that enter the MLS playoffs in good form do better in the postseason?

Read Michael Lewis’ Moneyball, and you may be surprised to learn that the most famous efficiency project in sports history did not end in a glorious title. For all the work Athletics’ GM Billy Beane did to build a team that won way more games than their payroll should have allowed, those teams couldn’t get over the hump in October. Eventually, Beane threw his hands up and famously said, “My s*** doesn’t work in the playoffs.” Now, the Red Sox took his methods and broke a massive title drought a couple of years later, but the point was simple: you can raise your chances of winning as much as you want, but in the small sample sizes of the playoffs, even the best teams can get eliminated. Unlikely things happen all the time (especially in single-elimination tournaments).

This leads to the obvious question: what can teams do to maximize their chances of making a deep postseason run? PEDs seem like an obvious choice but are generally frowned upon.

How about getting hot at the right time? Yes, that favorite hobby horse of drivetime radio sports hosts. Some team goes on a winning streak right as the playoffs approach, maybe taking the last spot, and the pundits talk about how they are the team “no one wants to face”. Heck, Steve Davis mentioned it a few times in the SKC broadcast last weekend. Such analysis happens across sports, which is a positive in this case, because it means people have investigated whether those claims hold up in a lot of different scenarios.

In the NBA (whose playoffs are structured differently than MLS, though gameplay flows in a similar manner), teams peaking at the right time don’t appear to do better in the playoffs than others (in fact, maybe there’s some evidence for the opposite). In the NFL (which shares MLS’ single-elimination playoff structure), late-season success bears no relationship to postseason progress, with the hottest and coldest teams accruing almost identical playoff records. Other studies have found similar results in baseball.

There is not a comparable body of literature in soccer because there are few leagues globally that use an American-style playoff system. However, there have been studies examining “form” in soccer, and they routinely find that “form” or “momentum,” or whatever you want to call it, fails as a predictor of future results.

But single-elimination playoff games are really different from your run-of-the-mill league game, so let’s talk about MLS playoffs.

I went back through every MLS postseason since 2012 (mainly because that’s how far back Matt Montgomery’s excellent Form Guide goes). For each playoff team, I collected two inputs (points-per-game and goal-differential-per-game) on a full season, final-five-games, and final-ten-games basis. I then collected their points- and goal-differential-per-game in the playoffs, as well as the number of rounds they won in the playoffs and the maximum number of rounds they could win.
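For the curious, the bookkeeping here is simple enough to sketch in a few lines of pandas. This is an illustration rather than my actual pipeline, and the column names (team, points, goal_diff) are hypothetical stand-ins for the real dataset:

```python
# Minimal sketch of the form-metric bookkeeping. Column names
# ("team", "points", "goal_diff") are hypothetical stand-ins for the
# real dataset; rows are assumed to be in chronological order.
import pandas as pd

def form_metrics(games, last_n=None):
    """Per-team points-per-game and goal-differential-per-game.

    If last_n is given, only each team's final last_n games count,
    which produces the final-five and final-ten "form" versions.
    """
    if last_n is not None:
        games = games.groupby("team", group_keys=False).tail(last_n)
    agg = games.groupby("team")[["points", "goal_diff"]].mean()
    return agg.rename(columns={"points": "ppg", "goal_diff": "gd_per_game"})
```

Calling `form_metrics(df)` gives the full-season numbers; `form_metrics(df, last_n=5)` gives the last-five version, and so on.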

First things first: can we predict playoff success at all? I ran very simple linear regressions of all the inputs against the main outputs, which you can see below. Key takeaways:

  • Nothing is especially good at predicting playoff results. The “best” matchup (full season goal differential per game predicting playoff GD/G) could only explain ~10% of the total variation in playoff outcomes.
  • Full-year results were more predictive than “form” results in basically every case. Results from the final five games were especially useless.
  • In general, regular season goal differentials were better at predicting playoff results than regular season points.
  • Predicting playoff progress (here defined as [playoff rounds won] / [total possible playoff rounds]; eg: if a team needs to win four rounds to win MLS Cup and they advance once, they get 0.25 = 1/4) was way more difficult than predicting average game results.
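The regression itself is nothing fancy: a one-variable least-squares fit and its R². Here is a sketch in plain numpy, with the inputs standing in for whichever regular season metric and playoff outcome you pair up:

```python
# One-variable least-squares fit and its R^2 -- the "how much of the
# variation does this input explain?" number quoted above. The arrays
# passed in are whatever season input / playoff output pair you test.
import numpy as np

def r_squared(x, y):
    """Share of the variance in y explained by a linear fit on x."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)   # unexplained variation
    ss_tot = np.sum((y - y.mean()) ** 2)  # total variation
    return 1 - ss_res / ss_tot
```

An R² of ~0.10, like the best pairing above, means roughly 90% of the variation in playoff outcomes is coming from somewhere else.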

We probably should have expected this wouldn’t go well. MLS is notoriously difficult to predict in general, and predicting one- or two-legged playoff matchups is even tougher. Plus, we’re not talking about massive sample sizes here: 114 playoff rounds and 155 playoff games.

One question you could raise: how much of the above has to do with home-field advantage? Maybe the dominance of full-season results over late-season form has to do with the fact that teams with better full-season results are more likely to play home games in the playoffs. One seemingly obvious step would be to adjust results for venue.

Using the American Soccer Analysis database, I went back through that same period and calculated the average PPG earned by home (1.96) and away (0.86) teams in the playoffs, as well as the average goal differential (+0.70 for home teams)*. I then compared teams’ per-season playoff performance to their expected performance. I exclude playoff progress here because in 2018 and earlier, most playoff rounds were two-legged, which reduces our sample size beyond repair.

*Side note: Those numbers are even more skewed towards home teams than regular season MLS games, which are notoriously home-friendly already. Why do home teams tend to do better in the playoffs than in the regular season? I suspect it’s because the home team is, on average, a higher seed than the away team, whereas in the regular season, they are of equal quality on average.
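Mechanically, the venue adjustment works like this. The 1.96/0.86 baselines are the playoff averages above; the function itself is an illustrative sketch, not my exact code:

```python
# Venue-adjusted playoff performance: observed PPG minus what the
# league-wide home/away playoff baselines alone would predict.
# Baselines (1.96 home PPG, 0.86 away PPG) are the averages cited above.
HOME_PPG, AWAY_PPG = 1.96, 0.86

def ppg_vs_expected(points, home_games, away_games):
    """Observed playoff PPG minus venue-only expected PPG."""
    games = home_games + away_games
    observed = points / games
    expected = (HOME_PPG * home_games + AWAY_PPG * away_games) / games
    return observed - expected
```

For example, a team that drew two home games and one away game and took six points comes out about +0.41 above venue expectation.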

I’ll save you the indignity of seeing the above analysis repeated. Once you adjust for venue, all the explanatory power of regular season results goes away in the aggregate. Let’s take a slightly different tack: is there a difference in venue-adjusted playoff performance between the postseason participants with the best regular season results and those with the worst?

After separating all the teams along our various metrics by quartiles, I looked at where teams in the top and bottom regular season quartiles ended up by playoff outcome quartile.
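The bucketing step is just pandas’ qcut. A sketch, with a hypothetical metric column standing in for the real data:

```python
# Quartile bucketing via pandas.qcut. The DataFrame and column name are
# hypothetical; in the real analysis, `col` would be any of the regular
# season or playoff metrics described above (1 = bottom, 4 = top).
import pandas as pd

def add_quartiles(df, col):
    """Append a `<col>_q` column labeling each row's quartile of `col`."""
    df = df.copy()
    df[col + "_q"] = pd.qcut(df[col], 4, labels=[1, 2, 3, 4])
    return df
```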

Here’s an example of one pairing:

And here’s the full set:

There is no real pattern, except that in every case the top-quartile regular season teams were far more likely to beat expectations in the postseason than the bottom-quartile teams. That should set off alarm bells. We’re no longer seeing the same trends in the data as before – now everything is moving in the same direction.

In a way, though, we shouldn’t be too surprised. After all, end-of-season performance is highly correlated with full-season performance. Part of that is because teams that are good overall in a season also tend to be good late in the season, but it’s also because points earned late in the season count toward season totals. All the information in the late-season “form” is also included in full-year metrics. If full-year metrics are predictive overall, then our late-season metrics should be too.

But we’re not interested in whether we can predict the playoffs with full-year records. We want to know if late-season form predicts postseason play. To do that, we have to hold full season form constant. A good example: SKC in 2018, ORL in 2020, CLB in 2020, and SEA in 2021 all entered the playoffs with full-year PPGs near 1.8 but had wildly different run-ins to the playoffs (2.2, 1.8, 1.4, and 0.6 PPG in their final five games). If we cluster it like that, do the “hotter” teams perform better in the playoffs?

Let’s look at one possible way to slice our data like that; then we’ll look at every possible way to slice our data like that, and then we’ll summarize and conclude.

Okay, here’s full-year PPG held constant while we vary final-five-game PPG and compare groups by playoff PPG vs expected (that is, adjusted for venue).

Look at that yellow square in the bottom left. That is the average observed PPG in the playoffs vs expected for teams that are in the top quartile for last-five-game PPG (they are really hot going into the playoffs) but the bottom quartile for full-year PPG (they have a poor season-long record relative to other playoff teams). An example of a team in that bucket: 2016 DC United, who finished with 1.35 PPG that year but collected 12 of the last 15 points. Teams like that averaged playoff PPGs almost a full point lower than we would expect, knowing only how many home and away games they got in the playoffs. Following that yellow line to the right, we see more teams that had low full-year PPGs, but they get progressively less “hot” until we get to the last square: teams like Inter Miami in 2020, who had a low full-year PPG (1.04 for MIA) and were “cold” going into the playoffs (taking only six of their last 15 possible points).

If form predicts playoff success, we should expect these lines to slope downward because teams in worse form should perform worse in the playoffs. In the case of this chart, we don’t see that pattern. If anything, this shows that there is a Goldilocks Zone for form – it pays to be neither too hot nor too cold.
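For those following along at home, the grid behind that chart is a pivot table: average the venue-adjusted playoff result within each (full-year quartile, final-five quartile) cell. A sketch, with hypothetical column names:

```python
# Mean venue-adjusted playoff PPG in each (full-year quartile,
# last-five quartile) cell. Column names ("fy_q", "l5_q", "ppg_vs_exp")
# are hypothetical stand-ins for the real dataset.
import pandas as pd

def form_grid(df):
    """Rows: full-year quartile. Columns: last-five-game quartile."""
    return df.pivot_table(index="fy_q", columns="l5_q",
                          values="ppg_vs_exp", aggfunc="mean")
```

Reading across a single row of that table is exactly the “hold full-year form constant, vary late-season form” comparison.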

As promised, here’s that same chart for all our other ways to slice the data.

Is there a clear downward-sloping pattern in this data? Not really. The “average of averages” line has about the same shape as our example above (first column, third row in the array). You can run more sophisticated tests on this stuff to detect a trend, and I did, but suffice it to say this does not pass the “Hot damn! Check out this chart!” test. After accounting for venue and for full-year performance, late-season “form” does not predict playoff results in MLS in the way suggested by the usual story.

A few caveats to the analysis:

  • As you can imagine, some of these filters slice the data down until it’s wafer thin (eg: only one team in the dataset entered the playoffs in the top quartile for FY PPG but the bottom quartile for final-ten-game PPG: the 2020 Crew, who won the whole thing).
  • We mostly considered per-game results instead of success in advancing.
  • We didn’t consider xG or other more sophisticated predictors.
  • Is five games too long to represent “form”? Is ten games too short?
  • We made no specific matchup adjustments in either the regular season or the playoffs, and no venue adjustment for regular season records.

I’d also like to say that I’m not stupid – I know teams can change during the season. Consider these two made-up situations.

  1. A galaxy-brain coach forces his team to adopt a new tactic to start the year: touch the ball only with the hips. It goes poorly, and the coach is fired midway through the year. The new coach allows regular kicks, the team starts winning games, and they make the playoffs. Changing tactics can change teams.
  2. The Sounders are near the bottom of the league when they buy a DP #10 named Nicolas Lodeiro. The team immediately gets WAY better, storms into the playoffs, and wins MLS Cup. Changing the personnel can change teams.

In both cases, the teams’ results as they neared the playoffs were more indicative of future postseason performance than the full year was. A dramatic change in the team explained why they were all of a sudden so much better than before. The above analysis should not be construed as denying that there can be legitimate reasons for improved performance. What it speaks to, however, is that (i) soccer is a highly random sport, and (ii) we as humans are really quick to assign patterns to randomness. Most of the time, a run of good results is just a run of good results.