Dear devs, please fix your cheating AI on console

So we actually ran some tests on the AI last week, because we’ve been hearing a few complaints here… we simulated input from the human player and ran 816 matches overnight on one of our console test kits. The simulation uses the AI to determine the human player’s move, but that only provides the input for which Gems to switch and which spells to cast… in ALL other respects the actual Gem Board can’t tell whether that input comes from a Human Player or an AI player.
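
To give a sense of what “simulated input” means in practice: the harness hands moves to the board through the same interface that controller input goes through, so the board code never knows which side is driven by a person. Here’s a rough sketch of that idea in Python - purely illustrative, with made-up class and function names rather than our actual engine code:

```python
# Illustrative sketch only - the names here are invented for this post.
# The point: the board consumes Move objects and never sees who produced them.

from dataclasses import dataclass
from typing import Protocol, Tuple, Union


@dataclass
class SwapMove:
    """Swap the Gems at two adjacent board positions."""
    a: Tuple[int, int]
    b: Tuple[int, int]


@dataclass
class SpellCast:
    """Cast a spell by id (targeting details omitted)."""
    spell_id: int


Move = Union[SwapMove, SpellCast]


class MoveSource(Protocol):
    """Anything that can supply the next move: controller input or the AI."""
    def next_move(self, board_state) -> Move: ...


class AIMoveSource:
    """Stands in for the human by asking the same AI for a move."""
    def __init__(self, ai):
        self.ai = ai

    def next_move(self, board_state) -> Move:
        return self.ai.choose_move(board_state)


def play_match(board, human_source: MoveSource, ai_source: MoveSource):
    """The board only ever receives Move objects; it cannot tell which
    side is a real player and which is the simulated one."""
    while not board.game_over():
        source = human_source if board.active_side() == "human" else ai_source
        board.apply_move(source.next_move(board.state()))
    return board.result()
```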

For the record:
a) This was done with all cheating in the human’s favor disabled
b) We define a “lucky” drop as one of the following events (relative weighting is shown in brackets; a rough scoring sketch follows the list):

  • A Skull drops in to form a 4+ Skull match, when that skull was not previously visible on the board (5 pts)
  • A Gem drops in to form a 4+ match, when that gem was not previously visible on the board (3 pts)
  • A Skull drops in to a space where the opponent then immediately gets a 4- or 5-of-a-kind Skull match (4 pts)
  • A Gem drops in to a space where the opponent then immediately gets a 4- or 5-of-a-kind match (2 pts)
  • A Skull Gem drops in to set up a Skull match for the opponent (3 pts)
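
To make those weightings concrete, the scoring side boils down to something like this (again an illustrative Python sketch - the event names are invented for this post, and detecting each condition on the board is assumed to happen elsewhere in the harness):

```python
# Illustrative only: event detection happens elsewhere; this just maps each
# "lucky" event to its weight and totals the score for one side in one game.

from enum import Enum, auto


class LuckyEvent(Enum):
    SKULL_FORMS_4PLUS = auto()          # unseen Skull drops in, makes a 4+ Skull match
    GEM_FORMS_4PLUS = auto()            # unseen Gem drops in, makes a 4+ match
    SKULL_GIVES_OPPONENT_4OR5 = auto()  # opponent immediately gets 4/5-of-a-kind Skulls
    GEM_GIVES_OPPONENT_4OR5 = auto()    # opponent immediately gets a 4/5-of-a-kind
    SKULL_SETS_UP_OPPONENT = auto()     # Skull drop sets up a Skull match for the opponent


WEIGHTS = {
    LuckyEvent.SKULL_FORMS_4PLUS: 5,
    LuckyEvent.GEM_FORMS_4PLUS: 3,
    LuckyEvent.SKULL_GIVES_OPPONENT_4OR5: 4,
    LuckyEvent.GEM_GIVES_OPPONENT_4OR5: 2,
    LuckyEvent.SKULL_SETS_UP_OPPONENT: 3,
}


def luck_score(events):
    """Total weighted luck score for the lucky events observed
    for one side over one game."""
    return sum(WEIGHTS[e] for e in events)
```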

Here are the results from the 816 games:

LUCK SCORES (based on weightings above)

  • Average Player Luck Score: 24.3
  • Average AI Luck Score: 25.0
  • Difference in Luck Scores: on average, the AI was 2.9% luckier than the human player (checked below)

LUCKY GAMES (number of games in which each side had the higher luck score)

  • Games with Equal Luck: 4
  • Games where the Human was Luckier: 414
  • Games where the AI was Luckier: 398
  • Difference in Lucky Games: the human player was luckier in 4% more games than the AI
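
For anyone who wants to double-check the two percentages, they fall straight out of the raw numbers above (taking “luckier” relative to the human’s average score and “more games” relative to the AI’s count):

```python
# Quick sanity check of the two quoted percentages against the raw numbers.

player_avg, ai_avg = 24.3, 25.0
print(f"AI luckier by {(ai_avg - player_avg) / player_avg:.1%}")  # -> 2.9%

equal, human_luckier, ai_luckier = 4, 414, 398
assert equal + human_luckier + ai_luckier == 816  # every game accounted for
print(f"Human luckier in {(human_luckier - ai_luckier) / ai_luckier:.1%} more games")  # -> 4.0%
```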

Now that’s not a conclusive result, the sample size could certainly be bigger… and obviously we could change the weightings… but I think we can see a clear trend: both sides are receiving fairly similar “luck scores” in the games we ran.
