Really! Players cannot detect differences in pars
By Anthony F. Lucas, Ph.D. & Katherine A. Spilde, Ph.D.
August 17, 2019 at 3:53 pm

We deeply appreciate the opportunity to engage in this conversation about one of the key questions of our day. Frankly, we came at this project from a position of skepticism ourselves, as it seems that a player would notice the difference between a 10% and a 5% game. Certainly this is what all of us in casino operations have been told. Players consistently complain that the slots are too “tight,” and that they can tell that “we” have “tightened” them. Often, as any operator will share with a laugh, they say this even after “looser” machines have been installed, which should have been a strong hint!

It’s not so much that gamblers cannot perceive a difference in the outcomes produced by different pars; it’s that there isn’t one to perceive. More on this later, but first, it may be helpful to examine a few of the assumptions that underlie Mr. Frank’s recent response to our work.

He argues that moving a game from 7% to 9% will noticeably reduce the average play time. This conclusion stems from taking a buy-in, say $100, and dividing it by 7% to arrive at $1,429 in expected coin-in. The same calculation results in $1,111 in expected coin-in for the 9% game. These calculations rely on some important yet unrealistic assumptions. First, each game must win precisely 7% or 9% of every wager, respectively. Second, and most importantly, every player must play off their jackpots down to zero credits. This must happen even if the player wins Megabucks. This is the only way that this simplistic calculation can produce the difference in average play time that Mr. Frank uses as the premise of his argument. But no player will lose precisely 7% or 9% on every spin (not even close), and some players will leave the casino as winners. So this math doesn’t work for estimating the average play time.
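Mr. Frank’s arithmetic is easy to reproduce; the point is the assumption buried inside it. Here is a minimal sketch in Python (the function name is ours, for illustration only):

```python
def naive_expected_coin_in(buy_in, par):
    """Buy-in divided by par: the calculation implied by Mr. Frank's argument.

    This is only the expected coin-in if the player recycles every credit
    won, jackpots included, back through the machine until reaching zero.
    """
    return buy_in / par

print(round(naive_expected_coin_in(100, 0.07), 2))  # 1428.57
print(round(naive_expected_coin_in(100, 0.09), 2))  # 1111.11
```

The formula silently converts a one-time buy-in into a play-to-extinction session, which is exactly the unrealistic assumption described above.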
The next example further pumps the brakes on the assertion that play time is critically dependent on par. Let’s say we have two games: Game A and Game B. To keep it simple, let’s say both games have only one payout and accept only one credit per spin. Game A features a 999,000-credit jackpot that hits, on average, once in every 1 million spins. Game B has the same 1-million-spin cycle, but its lone jackpot pays 800,000 credits. Therefore, the pars for Game A and Game B are 0.1% and 20%, respectively. If 10,000 players were to make 100 one-credit wagers on each game, then the average (or expected) time on device would be identical. This would occur in spite of wildly different pars. Staying with the current example, 9,999 of these players would lose every spin on each game, with a lone lucky winner on each game collecting all of the coin-out. The only way for a difference in play time to manifest would be continued play by these winners, should any occur. We realize this is an unusual example, but it demonstrates the limitations of assuming that par and play time are critically dependent – they are not.

Imagine that you just encountered a player who lost every spin on Game B, with its 20% par. She is not very happy. She demands that you direct her to a loose game. Would you send her to Game A? We didn’t think so. Par is not a reliable proxy for loose or tight.

In our earlier cliffhanger, we noted that individual players do not produce different outcomes on games with distinctly different pars. We hope this statement now seems a little less abstract, but let’s bolster it with some research results. We conducted several simulations of reel slot play to understand what happens when players engage games with different pars, under experimentally controlled rules of engagement. What we learned may surprise you.
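Returning briefly to the Game A/Game B thought experiment: it is simple enough to simulate directly. In the Python sketch below, the jackpot sizes and cycle come from the example above, while the seed and function names are our own illustrative choices:

```python
import random

def play(jackpot, cycle=1_000_000, players=10_000, spins=100, seed=42):
    """Simulate `players` sessions of `spins` one-credit wagers on a game
    whose only payout is `jackpot` credits, hitting once per `cycle` spins
    on average. Returns each player's net outcome in credits."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(players):
        net = -spins  # every spin costs one credit
        for _ in range(spins):
            if rng.random() < 1 / cycle:
                net += jackpot
        outcomes.append(net)
    return outcomes

game_a = play(999_000)  # Game A's lone jackpot
game_b = play(800_000)  # Game B's lone jackpot

# With the same seed, the jackpot lands on the same spins in both games,
# so the set of players who lose every wager is identical; only the size
# of the rare win differs.
print(sum(o == -100 for o in game_a), sum(o == -100 for o in game_b))
```

Nearly every one of the 10,000 players finishes down exactly 100 credits on both games, despite the wildly different pars, which is the point of the example.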
For instance, when we graphed the session-level outcomes produced by 10,000 gamblers who played both a 6% game and a 12% game, both formed somewhat normal distributions. Each gambler placed 1,000 same-sized wagers on each game. The surprise was that these two outcome distributions occupied the same space. That is, imagine the outcomes from the 6% game expressed as a red bell curve on a number line. Next, imagine overlaying a blue bell curve representing the outcomes produced by the 12% game. After the overlay, you will be looking at a purple bell curve. Only very tiny slivers of blue and red will remain.

These tiny slivers represent the number of players (out of 10,000) who produced an outcome that could lead them to conclude that the games had different pars. Of course, this assumes they recorded the results of each spin and conducted a two-independent-samples t-test. We’re pretty sure most players wouldn’t go to that extreme. This experiment was repeated 89 times, at different levels of volatility, spins per trip, and par differences. The results were consistent. This is why we say there is no difference to detect at the session level.

Some argued that these simulations did not include actual gamblers in a live casino setting, hence the series of field studies that Mr. Frank now questions. These critics contended that, over time, players would detect the differences in pars of otherwise identical games, by somehow stitching together all of their disjointed short-term interactions with the games. These field studies examined this claim by analyzing the game-level performance of experimentally controlled two-game pairings. Although we came at the problem from a different angle, the results lined up with those from our simulations. Even with a strong disincentive to play the high-par games, these machines earned significantly more revenue, over samples ranging from 6 to 9+ months. These sample durations greatly exceeded the minimums recommended by AGEM.
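Our simulations used full reel-game paytables, but even a stylized single-payout version of the 6%-versus-12% comparison shows why the two bell curves overlap. In the Python sketch below, the payout size, hit probability, and seed are our own illustrative choices, not the paytables used in the study:

```python
import random
import statistics

def session_net(par, spins=1_000, win=50, rng=random):
    """Net credits after `spins` one-credit wagers on a stylized game that
    pays `win` credits with probability (1 - par) / win, else nothing."""
    hit_prob = (1 - par) / win
    net = -spins  # every spin costs one credit
    for _ in range(spins):
        if rng.random() < hit_prob:
            net += win
    return net

rng = random.Random(7)
low = [session_net(0.06, rng=rng) for _ in range(2_000)]   # 6% par sessions
high = [session_net(0.12, rng=rng) for _ in range(2_000)]  # 12% par sessions

# Session means sit roughly 60 credits apart, but each distribution is
# over 200 credits wide, so the red and blue curves mostly coincide.
print(statistics.mean(low), statistics.pstdev(low))
print(statistics.mean(high), statistics.pstdev(high))
```

With the gap between the means a small fraction of the session standard deviation, almost no individual session outcome tells a player which par they faced.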
The field studies also demonstrated no evidence of par detection in the form of revenue migration to the low-par games. These results have held across 17 two-game pairings, 3 countries, 6 casinos, 9 game titles, and 10 par differences. Much of this research was facilitated by the manufacturing sector.

Given the consistency between the results of the carefully designed field studies and those from the simulations, which is more likely: (1) our findings were due to a 9-month supply of new, fool-me-once players (as Mr. Frank suggests); or (2) players cannot detect differences in pars, even over extensive periods of time? Before you answer, you may want to consider the work of Nobel laureate Daniel Kahneman and his collaborator Amos Tversky, who elegantly demonstrated the fallibility of humans in estimating probabilities, especially those of infrequent events (like jackpots).

Finally, where are the field studies that demonstrate this supposed hypersensitivity to par? We are familiar with the 2015 AGEM study and its lack of scientific rigor. Mr. Frank offers additional counterarguments based on changes in annual market-level revenues, attributing the cause to a single variable – hold percentage. What about the countless other impacts on market revenues? Are we to ignore those? This is anecdotal evidence.

We feel like we’re at a poker game, it’s the showdown, and we have presented a full house. The other side says, “Nice, but we have four of a kind” and attempts to reach for the pot without showing their cards. We need to see some cards.