How to Win Sudoku Using AI
What Is Sudoku (aka 数独)?
The name Sudoku comes from Japan and translates to ‘number’ (Su) and ‘single’ (Doku). However, while the name indicates Japanese heritage, the puzzle’s lineage is usually traced to Leonhard Euler, a very famous 18th-century Swiss mathematician whose Latin squares are its closest ancestor. Since then the puzzle has experienced wide international adoption, even amongst computers!
So how do you play Sudoku? Well, to start, the puzzle is presented as a board (like tic-tac-toe, but nine by nine) of 81 squares and, depending on the difficulty of the puzzle, some squares come already filled in with numbers while the rest are left blank. Below you can see a Sudoku puzzle board on the left that hasn’t been solved and the same one on the right that has been solved.
To solve the puzzle, you need to fill in the blank squares with numbers ranging from 1 to 9. The catch is that each square must contain a value that is unique to its row, column, and box. These rows, columns, and boxes will be referred to as regions. A row consists of nine squares from left to right, a column consists of nine squares from top to bottom, and a box is one of the nine 3x3 blocks the board is evenly divided into.
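Since everything that follows leans on these regions, it helps to pin them down in code. Here’s a minimal sketch in Python (the representation and names are my own, not from any particular library): the 81 squares are indexed 0–80 in row-major order, and a board is valid when no region repeats a filled-in digit.

```python
# Index the 81 squares 0..80, row-major; '.' marks a blank.
ROWS = [[r * 9 + c for c in range(9)] for r in range(9)]
COLS = [[r * 9 + c for r in range(9)] for c in range(9)]
BOXES = [[(br * 3 + r) * 9 + (bc * 3 + c)
          for r in range(3) for c in range(3)]
         for br in range(3) for bc in range(3)]
REGIONS = ROWS + COLS + BOXES  # 27 regions of 9 squares each

def is_valid(board):
    """board: a string of 81 chars, digits '1'-'9' or '.' for blanks.
    True if no region contains a repeated digit."""
    for region in REGIONS:
        digits = [board[i] for i in region if board[i] != '.']
        if len(digits) != len(set(digits)):
            return False
    return True
```

Every rule of the puzzle reduces to this one uniqueness check applied to 27 overlapping regions.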
Now there are different variants of this puzzle, but we’ll only focus on the original. If you haven’t played Sudoku, go ahead and try it out! People have been playing it (and its Latin-square ancestors) for a long time, and it can be a lot of fun (and sometimes frustrating).
Solving Sudoku Using AI
Okay, now that you know the rules of Sudoku and have hopefully gotten some experience playing it, you might’ve noticed some patterns in the process you took to find your way to a solution. More likely than not, you started out by filling in a square that could’ve only taken one possible value. Then maybe you began solving the rest of the squares by eliminating all the values a particular square couldn’t take and choosing the remaining one.
There were probably a few times when you had to remember several values for multiple squares. Maybe you ran into a couple of forks in the road where you had to pick between two separate paths to find a solution, and one of those paths either helped you solve the puzzle or produced a dead end, forcing you to backtrack.
This is a good introduction to two common strategies called elimination and only choice, which require you to list all the possible values each square could take (elimination) and then fill in the squares that have only one possible value listed (only choice). To solve the puzzle, you then repeat this cycle over and over until no blanks remain.
Now to teach a computer how to do this, we need a common artificial intelligence method called constraint propagation. Put simply, the computer repeatedly applies local rules (constraints) to narrow down the possible values each variable can take, and every deduction it makes propagates outward to narrow the options elsewhere. In this case, our constraints are that each square holds exactly one value from 1 to 9 and that no value repeats within a region.
So how does this method look in action? Well let’s start by looking at the following Sudoku board. Don’t worry, this puzzle is pretty easy and is even easier for a computer when using constraint propagation.
To start, we’d want to list all the possible values each square could take so we can narrow down the possible choices as well as to discover “already solved” squares (i.e., squares that have only one possible value). Based on the board below, you can see that all the possible values have been listed per square and that some of them only have one value.
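Listing the possibilities is mechanical: start each blank square at 1–9 and strike out every digit already placed in any region the square belongs to. A rough sketch, assuming the same 81-character string representation as before (the helper names here are mine):

```python
def region_mates(i):
    """Indices of squares sharing a row, column, or 3x3 box with square i."""
    r, c = divmod(i, 9)
    row = {r * 9 + k for k in range(9)}
    col = {k * 9 + c for k in range(9)}
    box = {(r - r % 3 + dr) * 9 + (c - c % 3 + dc)
           for dr in range(3) for dc in range(3)}
    return (row | col | box) - {i}

def candidates(board, i):
    """Possible digits for square i after elimination."""
    if board[i] != '.':
        return {board[i]}  # already solved
    seen = {board[j] for j in region_mates(i)}
    return set('123456789') - seen
```

A square whose candidate set comes back with a single digit is one of those “already solved” squares.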
The obvious thing to do now is to solve the squares that have been narrowed down to a single value. Following the rules of Sudoku, we can then look at individual regions (rows, columns, and boxes) to work out which values must go in which squares. This is the process of drilling down to local constraints; in this case, regions.
For example, let’s look at the top middle box colored in red in the image below. You can see that we’ve almost solved this region, but not quite. Since each value within this box must be unique, we know that none of 2, 3, 5, 6, or 8 is a possible value for the unsolved squares. We can also see that the value 1 is only possible in the upper right square of this region, while every other unsolved square in the region is limited to 4, 7, or 9.
This leads us to assign 1 to that square: it’s the only square in the region where a 1 can legally go, so it’s the only choice that keeps the region’s values unique. The computer then does the same thing over and over across all regions until a solution pops up.
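That over-and-over loop can be sketched as follows: re-apply elimination and only choice until neither one changes the board. This is my own rough sketch (the helpers mirror the earlier ones), not Norvig’s actual code:

```python
# Regions: 9 rows, 9 columns, 9 boxes over squares indexed 0..80 row-major.
ROWS = [[r * 9 + c for c in range(9)] for r in range(9)]
COLS = [[r * 9 + c for r in range(9)] for c in range(9)]
BOXES = [[(br * 3 + r) * 9 + (bc * 3 + c)
          for r in range(3) for c in range(3)]
         for br in range(3) for bc in range(3)]
REGIONS = ROWS + COLS + BOXES

def candidates(board, i):
    """Digits square i could still take (elimination)."""
    if board[i] != '.':
        return {board[i]}
    r, c = divmod(i, 9)
    mates = ({r * 9 + k for k in range(9)} |
             {k * 9 + c for k in range(9)} |
             {(r - r % 3 + dr) * 9 + (c - c % 3 + dc)
              for dr in range(3) for dc in range(3)}) - {i}
    return set('123456789') - {board[j] for j in mates}

def solve_forced(board):
    """Apply elimination and only choice until neither changes the board."""
    board = list(board)
    changed = True
    while changed:
        changed = False
        # Elimination: fill any square narrowed to a single candidate.
        for i in range(81):
            if board[i] == '.':
                opts = candidates(board, i)
                if len(opts) == 1:
                    board[i] = opts.pop()
                    changed = True
        # Only choice: a digit that fits in just one square of a region.
        for region in REGIONS:
            for d in '123456789':
                spots = [i for i in region
                         if board[i] == '.' and d in candidates(board, i)]
                if len(spots) == 1:
                    board[spots[0]] = d
                    changed = True
    return ''.join(board)
```

On easy puzzles this loop alone reaches a full solution; on harder ones it stalls, which is where the next section picks up.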
By using constraint propagation, we can teach a computer how to solve many Sudoku puzzles just by focusing on local constraints. But you might know from experience that there are times when the board reaches a “stalled” state where every unsolved square can still take on multiple values. Maybe the fewest values to choose from for any particular unsolved square is two! This is an instance of what we’ll loosely call naked twins: a square left holding a pair of candidates, which means there are multiple paths to choose from when on the path to greatness (solving the puzzle). Let’s get to greatness.
If you ever stall out on an incomplete solution while solving a Sudoku puzzle, you know that a choice you made earlier, valid at the time, led you toward an invalid solution. That means you have to backtrack to the point where you made that choice and make a different one. Hopefully, you left breadcrumbs.
Implementing a search method that allows a computer to traverse multiple scenarios that could possibly lead to a valid solution is exactly what we want here, and when confronted with naked twins, we know we have two choices. When given the choice between two different directions, we have to branch out into two different worlds of possibilities. This can be modeled as a tree; with two choices per fork, a binary tree.
Now a binary tree consists of a root node from which up to two branches extend, and each of those child nodes can have two branches of its own, and so on.
Now think of node A as a point in time when we have to make one of two decisions, leading to either B or C. In Sudoku, this is the same problem as running into naked twins; you must choose one of two paths that may or may not lead to a valid solution.
If you look at the board above, you’ll notice a stalled Sudoku board with a square on the bottom left containing the values 8 and 9. Since we’re stuck at an incomplete solution, we have to take a leap into the unknown, which means choosing to solve the puzzle one way and hoping it works out. If it doesn’t, we know where we went wrong and can correct our mistake by backtracking and choosing the other option.
And who knows, maybe we’ll confront ANOTHER set of naked twins after choosing either 8 or 9, which would lead to another set of possibilities and thus another set of branches in our tree! By modeling the choices as a binary tree, we can have a computer traverse the possible solutions using a depth-first search, which will eventually lead us to a valid solution.
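One way to sketch that search: recursion gives us the tree for free, and returning None signals a dead end so the caller backtracks and tries the other branch. This is my own minimal sketch (not Norvig’s implementation), again using an 81-character string with ‘.’ for blanks:

```python
def candidates(board, i):
    """Digits square i could still take, given its row, column, and box."""
    if board[i] != '.':
        return {board[i]}
    r, c = divmod(i, 9)
    mates = ({r * 9 + k for k in range(9)} |
             {k * 9 + c for k in range(9)} |
             {(r - r % 3 + dr) * 9 + (c - c % 3 + dc)
              for dr in range(3) for dc in range(3)}) - {i}
    return set('123456789') - {board[j] for j in mates}

def search(board):
    """Depth-first search with backtracking. Branch on an unsolved square
    with the fewest candidates (often a pair) and try each digit in turn;
    a dead end returns None, sending us back up the tree."""
    blanks = [i for i in range(81) if board[i] == '.']
    if not blanks:
        return board  # every square filled: a valid solution
    # Fewest candidates first; a contradicted square (zero candidates)
    # gets picked immediately, and its empty loop below triggers backtracking.
    i = min(blanks, key=lambda j: len(candidates(board, j)))
    for d in sorted(candidates(board, i)):
        result = search(board[:i] + d + board[i + 1:])
        if result is not None:
            return result
    return None  # all branches failed: backtrack
```

Because the board is an immutable string, each recursive call gets its own copy; undoing a bad choice is simply returning from the call, so the breadcrumbs are the call stack itself.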
Is This Intelligence?
Now a computer doesn’t know that a solution exists. It simply moves forward in time, poking around, hoping to find one. While we humans know that a proper Sudoku board has exactly one valid solution, the computer doesn’t. This means it has to be smart about finding that solution by making proper choices, because the time it takes to solve the puzzle can’t be known in advance; in fact, the generalized version of Sudoku on larger boards is an NP-complete problem.
Some might not be convinced that this is intelligence, but that really depends on your definition of intelligence. It may not be talking to us like Siri, but it sure is working out a problem with no guarantee of a solution (as far as it’s aware) and without us giving it explicit conditional statements. In the case of solving Sudoku puzzles, we’re just telling the computer: “You’re allowed to do this, but not this. Now solve the problem.” Then it figures it out.
Peter Norvig, a well-known artificial intelligence researcher, showed how to apply constraint propagation and depth-first search to solve Sudoku puzzles and discusses it in fair detail here. He even dives into the code to show how it’s possible to teach a computer how to do it.
We walked through how constraint propagation combined with depth-first search fits so well when solving the classic puzzle of Sudoku, which better informs us on how to teach a computer to do it on its own. If you’re interested in learning more about this topic, I highly suggest reading Peter Norvig’s post mentioned above. You can also take a look at the code I wrote to solve this problem here.