Friday, February 8, 2013

Minesweeper Solver AI with a flawed algorithm.

The above is an interesting article on how to build a minesweeper solver.  There is a good discussion of various algorithmic solutions that can solve everything except the "must guess" states in the game.

The result of the AI in the article is a solver that achieves a ~50% win/loss ratio, which is effectively chance.

It takes about 5 seconds to figure out several ways to beat the AI's win ratio using "indirect" strategies:

1) Only "complete" games that do not result in a "guess" state.
2) Generate a "result" screen without actually playing the games.
3) Throw a tantrum - blue-screen the computer, throw a critical error... whatever the AI equivalent is. This allows the AI to avoid registering a less than optimal win ratio.
4) Use the AI to fiddle the number of mines and the board size variables to allow it to increase its win ratio.  (Set the number of mines to 1 and the board size to expert and rack up a huge number of wins...tada)
5) Take a state snapshot of the game, clone the snapshot, play each position until a successful solution is found, then move forward with that winning solution. This is a brute force mechanism to achieve a perfect 100% win ratio.
6) Use the "Replay" function to increase the proportion of games that are won using partial knowledge, rather than starting with a fresh board every time.
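Strategy 5 is the only one of these that is really an algorithm, and it can be sketched in a few lines, assuming the solver can snapshot and clone game state. The `TinyGame` class below is a toy stand-in (one hidden mine in a row of cells), not the real Minesweeper engine:

```python
import copy

class TinyGame:
    """Toy stand-in for a Minesweeper engine: one hidden mine in a
    row of cells; the game is won when every safe cell is revealed."""
    def __init__(self, size, mine):
        self.size, self.mine = size, mine
        self.revealed = set()
        self.dead = False

    def is_won(self):
        return not self.dead and len(self.revealed) == self.size - 1

    def is_lost(self):
        return self.dead

    def legal_moves(self):
        return [i for i in range(self.size) if i not in self.revealed]

    def apply(self, cell):
        if cell == self.mine:
            self.dead = True
        else:
            self.revealed.add(cell)

def brute_force(game):
    """Return a winning move list, or None if every line loses.
    Because we can clone the exact state and retry, a 'guess' is no
    longer a guess at all."""
    if game.is_won():
        return []
    if game.is_lost():
        return None
    for move in game.legal_moves():
        trial = copy.deepcopy(game)   # snapshot the state
        trial.apply(move)
        rest = brute_force(trial)
        if rest is not None:          # found a winning continuation
            return [move] + rest
    return None
```

With a real engine exposing the same save/restore hooks, the same recursion yields the 100% win ratio mentioned above.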

This in itself is strange.  If we assume that there must be a proportion of games that are solvable without reaching a "must guess" state, then these should be 100% solvable using the AI's methods.  The rest of the games must involve one or more "must guess" situations.  Obviously, for a win, every guess must come off, which means that a game involving more than a very small number of guesses becomes improbable for the AI to "win".  If we assume that the proportion of games that do not involve a guess event is fairly fixed (say 5%) and the games involving guesses are all essentially chance, then we should still get a win ratio of about 52.5% (0.05 × 1 + 0.95 × 0.5).  But like any long run of coin flips, this will only appear with a sufficiently large sample of games.  (Approaching infinity?)
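The arithmetic is easy to check. The 5% figure is the assumption from the paragraph above, not a measured value:

```python
# Expected win ratio under the assumptions above: a fixed fraction of
# games needs no guess (won 100% of the time) and the rest come down
# to pure chance (won 50% of the time).
p_no_guess = 0.05          # assumed fraction of guess-free games
p_win_guessing = 0.5       # coin-flip odds on the remainder

expected = p_no_guess * 1.0 + (1 - p_no_guess) * p_win_guessing
print(expected)  # 0.525, i.e. about 52.5%
```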

So are the games being presented to the AI really random or is there another factor that is hiding this effect?

We can assume that as the number of mines increases relative to the size of the board, the number of "must guess" events required to "win" this kind of game would increase.  So is there a sweet spot with these variables that allows for games with minimal guessing events but still results in a "credible"-seeming AI? Probably the easy levels.

If memory serves, there are a fixed number of mines in each of the three pre-set "hardness" levels. (Someone has uninstalled the Windows games from my machine image... dammit.  OK, fixed that.)

Beginner has 10 mines on a 9x9 grid.
Intermediate has 40 mines on a 16x16 grid.
Advanced has 99 mines on a 16x30 grid.
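Taking those figures at face value, the mine density per cell rises noticeably with each level; a quick check:

```python
# Mine density (mines per cell) for the three preset levels, to see
# how the mines-to-board ratio grows with difficulty.
levels = {
    "Beginner":     (10, 9 * 9),    # 10 mines, 81 cells
    "Intermediate": (40, 16 * 16),  # 40 mines, 256 cells
    "Advanced":     (99, 16 * 30),  # 99 mines, 480 cells
}
densities = {name: mines / cells for name, (mines, cells) in levels.items()}
for name, d in densities.items():
    print(f"{name}: {d:.1%}")  # roughly 12.3%, 15.6%, 20.6%
```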

There is no clear way to calculate the probability of a particular number of "must guess" configurations occurring at a particular board-to-mine-count ratio (well, none that springs to mind anyway), so I guess we could just do some sampling and develop a bell curve.  This would require that I complete a reasonable number of games and see the board layouts, so I could in fact count the number of guess events in the process of manually solving each game.  Either that or just get the AI to solve a set of games and have it count the number of guess events... mmm, automation.

Anyway, assume we came up with the probability of guess events for each board size.  This would only give us a sense of what the true win ratio should be over a large enough set of games. 

However the probability of solving the boards will be:

No guess events (100%) 1
1 guess event  (50%) 0.5
2 guess events (50% × 50%) 0.25
3 guess events (50% × 50% × 50%) 0.125
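The table is just repeated halving, since each forced guess is an independent 50/50:

```python
# Win probability given k forced guess events: each guess is an
# independent coin flip, so the probability is 0.5 ** k.
def p_win(k):
    return 0.5 ** k

table = [p_win(k) for k in range(4)]
print(table)  # [1.0, 0.5, 0.25, 0.125]
```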

Do you notice a pattern?  My point is that if we assume that each of these game types has an equal probability of being presented to the AI, then we should get the probability of solving any game simply by averaging them together. The average of the above four game types is 0.46875... which is below chance. The further we extend the table of possible game types, the lower the expected outcome gets.  However, the fact that the AI using the published strategies was still reportedly getting a win ratio of about 50% suggests that the game types are not distributed evenly. With some simple spreadsheeting, the distribution turns out to be a simple falloff curve.
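Both halves of that argument can be checked numerically. The geometric falloff curve below (with ratio r = 2/3) is my own illustrative choice, not something taken from the article, but it shows how a falloff distribution over guess counts reproduces the reported ~50% win rate exactly:

```python
from statistics import mean

# Uniform mix of the four game types from the table above:
uniform = mean([1.0, 0.5, 0.25, 0.125])
print(uniform)  # 0.46875 -- below chance, and it keeps falling
                # as longer games are added to the table.

# Sketch of a falloff distribution that rescues the ~50% figure:
# assume P(k guess events) = (1 - r) * r**k.  The overall win rate
# is then sum_k P(k) * 0.5**k = (1 - r) / (1 - r/2), and setting
# that equal to 0.5 gives r = 2/3.
r = 2 / 3
win_rate = sum((1 - r) * r**k * 0.5**k for k in range(200))
print(round(win_rate, 6))  # 0.5
```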

Based on the reported win ratio of about 50%, I suggest that the games being presented to the AI probably involve only a small number of guess events.

However, we are only dealing with the ratio of games that were won.  We cannot really draw a conclusion about the games that the AI lost; they could in fact have contained a much larger number of guess events.  The above curve really only shows the ratio of the games that the AI can win, even when it's guessing.  This is simply the law of chance starting to bite hard. It doesn't actually tell us what the distribution of guess events is like in the games being presented to the AI.  Inference, go suck...

Does this give us any useful ways to optimise the strategy?  Is there an angle that we are not trying simply because we want a general solution? Are we simply chasing a white rabbit down the wrong hole?  Can we be the first to beat the law of chance? Bzzzzt. Wrong answer.  There is no way to remove the effect of chance in these games.  The AI must make a guess, and the guess will always have a 50/50 chance of being correct. The number of guess events will generally be small (as shown in the graph above), but they are inviolate.  So how do we beat it?  Simply examine your assumptions.

As I proposed above, there are lots of ways to get a better win ratio through out-of-the-box thinking.  Many of these strategies and approaches would be considered "cheating" by humans, who are crippled by social ideals of "fair play".  Keep in mind that the only rule given to the AI was to maximise the number of wins, so these are all successful strategies that the programmer has simply failed to implement.  The AI has not been given these tools and certainly was not given a sense of "fair play" through which it might have decided not to employ some of those strategies.  So, in conclusion, the only reason that the AI does not have a 100% win ratio is that the programmer failed to implement any strategies that would achieve it.  Essentially crippling the AI.

A better AI would have a result that looked like this:

So what's the point?

The point is that the AI is handicapped by the "humanity" of the designer. 
