AI Falls Down on World Cup Predictions

July 20, 2018 4:01 AM
  • CDC Gaming Reports

Not long after the World Cup began, I ran a piece in CDC International (AI Predictions for World Cup abound, July 2) detailing some of the latest efforts by academics and financial institutions alike to use artificial intelligence and machine learning techniques to predict the tournament’s results.


The most notable and comprehensive predictions mentioned in the piece came from none other than Goldman Sachs, which, considering the business it's in, clearly has a vested interest in figuring out how well machine learning and AI can predict future events of all kinds. Now that the Cup is over, I thought it might be fun and educational to look back at how well, or how badly, the various AI predictions ultimately turned out – whether any of them had France and Croatia in the final, for starters, and how close the AI came to the bookmakers in predicting various outcomes.

The AI predictions covered in my original article variously had Brazil, Germany and Spain as World Cup champions, none of whom, of course, even progressed to the final. Other, more humble prognostications, made by AIs and humans alike, fared a bit better: of eleven football writers on the staff of the Telegraph, three picked France to go all the way. Finnish sports data scientists AccuScore, on the other hand, had Argentina winning the Cup; Argentina were arguably the most disappointing team in the tournament, barely scraping into the Round of 16 before losing there to France. This is not to say that the bookies don't employ some sort of machine learning in their algorithms; they may do in some cases, and they certainly rely on digital aggregation of statistics. AccuScore, for their part, say they use a different algorithm for each player. But much of the methodology bookmakers currently use is strictly in-house and naturally off limits to researchers.
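To make that "different algorithm for each player" idea concrete, here is a minimal, purely hypothetical sketch of how per-player ratings might be rolled up into a team strength and then into a match win probability. AccuScore's actual models are proprietary; the ratings, the simple averaging, and the logistic conversion below are illustrative assumptions, not their method.

```python
# Hypothetical sketch: rolling per-player ratings up into a match win probability.
# The ratings, weights and logistic conversion here are invented for illustration;
# they do not represent AccuScore's (proprietary) per-player algorithms.

import math

def team_strength(player_ratings):
    """Average a squad's per-player ratings into a single team-strength number."""
    return sum(player_ratings) / len(player_ratings)

def win_probability(strength_a, strength_b, scale=0.5):
    """Convert a strength gap into a win probability with a logistic curve."""
    return 1.0 / (1.0 + math.exp(-(strength_a - strength_b) / scale))

# Toy squads: each number is a made-up rating for one player in the starting XI.
france = [7.8, 7.5, 7.9, 8.1, 7.2, 7.6, 8.3, 7.4, 7.7, 8.0, 7.9]
croatia = [7.6, 7.3, 7.8, 7.5, 7.1, 7.4, 8.2, 7.2, 7.5, 7.7, 7.6]

p = win_probability(team_strength(france), team_strength(croatia))
print(f"France win probability (toy model): {p:.2f}")
```

Real systems would weight players by position, minutes played, form and opposition rather than taking a flat average, but the basic shape – player-level numbers feeding a team-level probability – is the same.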

Goldman Sachs also updated its predictions as the tournament unfolded, saying prior to the semi-finals that Belgium would beat England in the final; in the event, Belgium beat England in the third-place playoff instead. Four years earlier, Goldman Sachs ran a smaller exercise along the same lines, and that year the financial giant correctly predicted three of the four semi-finalists.

Why is AI still so limited in predicting football results? Twenty years after Deep Blue beat Kasparov, AI has beaten many of the world's toughest human challengers at all sorts of games, most recently DeepMind's victory over Go grandmaster Ke Jie last year. The lay answer is that football is a complex and hard-to-predict game. The analyst's answer would lean on terms like "confounding variables": simply put, micro-events within matches that set off chain reactions. Since none of the AI simulations went so far as to model full 90-minute matches, these kinds of factors could not be controlled for.
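For a sense of what those pre-match simulations typically look like, here is a minimal sketch, assuming the common approach of drawing each side's goals from a Poisson distribution around an expected-goals figure and repeating the draw many times. The expected-goals numbers below are invented, and nothing inside the loop reacts to in-match events (a red card, an early goal forcing a tactical change), which is exactly the limitation described above.

```python
# Minimal Monte Carlo sketch of a pre-match simulation: final scorelines are
# drawn from Poisson distributions around invented expected-goals figures.
# No in-match micro-events are modelled, so chain reactions within a game
# are invisible to this kind of forecast.

import math
import random

def poisson(rng, lam):
    """Draw one Poisson-distributed goal count (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_match(xg_a, xg_b, n=100_000, seed=42):
    """Monte Carlo one fixture; returns (win, draw, loss) probabilities for side A."""
    rng = random.Random(seed)
    wins = draws = 0
    for _ in range(n):
        goals_a, goals_b = poisson(rng, xg_a), poisson(rng, xg_b)
        if goals_a > goals_b:
            wins += 1
        elif goals_a == goals_b:
            draws += 1
    return wins / n, draws / n, 1 - (wins + draws) / n

print(simulate_match(1.6, 1.1))  # invented expected-goals figures for the two sides
```

Chain a simulator like this over every fixture and you get tournament-level probabilities of the sort Goldman Sachs published; the weakness is that the expected-goals inputs are fixed before kick-off, so anything that shifts them mid-match is simply outside the model.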

AI as used for predicting complex real-world events still has a long way to go, but there’s no reason to think that it won’t approach excellence at these tasks eventually. It wasn’t long ago that people were saying AI would never win a trivia contest, either.

In the meantime, it's business as usual for the bookies, who, incidentally, also mostly failed to predict France's eventual victory – to say nothing of the fact that many of them offered odds of 60-1 on Croatia winning it all, which came reasonably close to happening. Croatia certainly fought valiantly against those odds, and everyone loves an underdog, right? (Except maybe the bookies.)