Posted 12/11/2024
This post is about theoretical computer science. I’ve written it without getting too far into academic vocabulary, but this is a disclaimer that if thinking about Turing machines sounds dry as sand, this will not be the post for you.
In computer science we are often interested in how difficult a problem is, defined by how the number of steps required to solve a problem scales up as the size of the problem increases. We also use this kind of asymptotic analysis to discuss how well an algorithm scales, something I have written about before. I ended that post by discussing problem complexity classes, and especially three groups of interest:
P - the set of decision problems that can be solved in polynomial time or better, meaning they are relatively approachable
NP - the set of decision problems whose solutions can be verified in polynomial time or better, but which may be much slower to solve (a short verification sketch follows this list)
NP-Hard - the set of problems at least as hard as the hardest problems in NP (an NP-Hard problem that is itself in NP is called NP-Complete), meaning this category also includes problems whose solutions cannot even be verified in polynomial time
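To make the verification idea concrete, here's a sketch using Subset Sum ("is there a subset of these numbers that adds up to a target?"), a classic NP-Complete problem. Checking a proposed subset takes a few linear passes over the input, while the only obvious way to find one from scratch is to try exponentially many subsets. The function names and structure here are mine, purely for illustration:

```python
from itertools import combinations

def verify_subset_sum(numbers, target, candidate):
    """Verify a proposed Subset Sum solution in polynomial (here, roughly linear) time."""
    pool = list(numbers)
    for x in candidate:
        if x in pool:
            pool.remove(x)
        else:
            return False              # candidate uses a value not in the input
    return sum(candidate) == target

def solve_subset_sum(numbers, target):
    """Solve Subset Sum by brute force: try every subset (exponential time)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

numbers = [3, 34, 4, 12, 5, 2]
solution = solve_subset_sum(numbers, 9)                    # slow in general: 2^n subsets
print(solution, verify_subset_sum(numbers, 9, solution))   # fast check: [4, 5] True
```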
However, there is a second way to define NP: the set of all problems that can be solved in polynomial time by a Non-Deterministic Turing Machine. This is in fact where the name comes from, “Nondeterministic, Polynomial time.” In my undergraduate foundations class we glossed over this equivalent definition as a footnote and focused on “if you can verify in polynomial-time it’s in NP.” I never understood how this non-deterministic definition worked or why the two are equivalent, so many years later I’m digging in.
First, a quick refresher: A Turing Machine is a simple model of computation. It’s a theoretical machine that has an “input” (idealized as a tape of symbols drawn from a finite alphabet), and a current state from a finite list of states. The machine can do things like move the tape forward and backward so the tape head points at a different position, read symbols, write symbols, and change the current state.
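As a concrete (and entirely made-up) example, here's a tiny deterministic Turing machine sketched in Python. The machine below just walks right along the tape, flipping bits until it hits a blank; the transition table maps each (state, symbol) pair to exactly one action:

```python
# A minimal deterministic Turing machine sketch (illustrative only).
# Each transition is (symbol to write, head movement, next state).

def run_dtm(tape, transitions, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        if head >= len(tape):
            tape.append(blank)                               # extend the tape as needed
        write, move, state = transitions[(state, tape[head])]  # exactly one choice
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Example machine: flip 0s and 1s while moving right, halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_dtm("10110", flip_bits))  # -> "01001_"
```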
A deterministic Turing Machine can take only one particular action given a particular state and input. Since its action is wholly determined by the state and input, it's deterministic. A non-deterministic Turing Machine (NTM hereafter), by contrast, is one that can have several possible actions available for a particular state and input.
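In code, the only thing that changes is the shape of the transition table - one action per (state, symbol) pair for a deterministic machine, a list of allowed actions for a non-deterministic one (continuing the illustrative conventions from the sketch above):

```python
# Deterministic: each (state, symbol) pair maps to exactly one action.
dtm_transitions = {
    ("start", "0"): ("1", "R", "start"),
}

# Non-deterministic: each (state, symbol) pair may map to several actions,
# and the machine is free to take any of them.
ntm_transitions = {
    ("start", "0"): [("1", "R", "start"), ("0", "L", "check")],
}
```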
So how do we evaluate an NTM if it can take more than one action at any given step? We usually think of an NTM as a branching process, where it executes all possible actions concurrently, perhaps in parallel universes. Then, once one path of the NTM leads to a result, we return that one resulting path and discard the other branches of the evaluation tree. Another way of thinking about this is that the NTM always guesses perfectly which of its available actions to take to yield a result in as few steps as possible.
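One way to make the "parallel universes" picture concrete is to simulate the NTM deterministically: keep a frontier of every live configuration, advance them all in lockstep, and return as soon as any branch accepts. This is a rough sketch under the same made-up conventions as above, not a standard implementation:

```python
def simulate_ntm(tape, transitions, start="start", accept="accept", blank="_", max_steps=1000):
    """Explore every branch of an NTM breadth-first, one NTM 'step' per round.

    A configuration is (state, head position, tape). Each round expands all
    actions available to every live configuration; we accept as soon as any
    branch reaches the accepting state and simply discard the rest.
    """
    frontier = [(start, 0, tuple(tape))]
    for _ in range(max_steps):
        if not frontier:
            return False                           # every branch died out: reject
        next_frontier = []
        for state, head, tp in frontier:
            if state == accept:
                return True                        # one branch accepted; drop the others
            cells = list(tp) + [blank] * (head + 1 - len(tp))
            for write, move, nxt in transitions.get((state, cells[head]), []):
                new_cells = list(cells)
                new_cells[head] = write
                new_head = max(0, head + (1 if move == "R" else -1))
                next_frontier.append((nxt, new_head, tuple(new_cells)))
        frontier = next_frontier
    return False                                   # gave up after max_steps rounds
```

The catch, of course, is that the frontier can grow exponentially with the number of rounds: this deterministic simulation pays for every branch, while the NTM itself is only "charged" for the depth of the tree.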
As an example, imagine a breadth-first search on a square grid where you can move up, down, left, and right. We can represent the first two steps of such a search in an evaluation tree, as follows:
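Each node of that tree is just the sequence of moves taken so far; a few lines of Python (purely illustrative) can enumerate the first two levels:

```python
from itertools import product

MOVES = ["left", "right", "down", "up"]

# Each node of the evaluation tree is the sequence of moves tried so far.
for depth in (1, 2):
    level = [" ".join(path) for path in product(MOVES, repeat=depth)]
    print(f"level {depth} ({len(level)} nodes):", ", ".join(level[:4]), "...")
```

This prints 4 nodes at the first level ("left, right, down, up") and 16 at the second ("left left, left right, left down, left up, ...").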
A deterministic Turing Machine evaluates each node in the evaluation tree one by one; that is, it evaluates “left, right, down, up,” then “left left, left right, left down, left up,” and so on. Therefore, the runtime of the breadth-first search scales with the size of the evaluation tree, which grows exponentially. However, a non-deterministic Turing machine evaluates each of its possible paths concurrently (or, alternatively, always guesses correctly which step to take). It evaluates the first four moves in one parallel step, then all sixteen second moves in a second parallel step. Therefore, the number of steps a deterministic TM needs scales with the total number of nodes in the tree, while the number of steps an NTM needs scales only with the depth of the evaluation tree.
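Putting rough numbers on that gap (assuming a branching factor of four, as in the grid example):

```python
# Deterministic TM: visits every node in the tree (sum of 4^k for k = 1..d).
# Non-deterministic TM: one "parallel" step per level, so just d steps.
for d in (5, 10, 20):
    dtm_steps = sum(4**k for k in range(1, d + 1))
    print(f"depth {d}: DTM ~ {dtm_steps:,} node visits, NTM ~ {d} steps")
```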
Note that when the NTM returns its answer - a path from the start point to the end point on the grid - a verifier walks through that single path step by step. The verifier doesn’t need to make any complex decisions or take multiple branching actions per input; it just reads the path one step at a time, confirms that each step is a valid move for the search, and checks that the start and end points of the path are correct. Therefore, the verifier can always be a deterministic Turing machine.
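Here's what such a verifier could look like for the grid example, with cells represented as (row, column) pairs - my own representation choice, just to make it runnable:

```python
def verify_grid_path(path, start, goal):
    """Check a proposed grid path in time linear in the path length.

    We only confirm the endpoints and that each consecutive pair of cells
    differs by exactly one step up/down/left/right - no search, no branching.
    """
    if not path or path[0] != start or path[-1] != goal:
        return False
    for (r1, c1), (r2, c2) in zip(path, path[1:]):
        if abs(r1 - r2) + abs(c1 - c2) != 1:
            return False                     # not a legal single move
    return True

print(verify_grid_path([(0, 0), (0, 1), (1, 1)], (0, 0), (1, 1)))  # True
print(verify_grid_path([(0, 0), (1, 1)], (0, 0), (1, 1)))          # False: diagonal jump
```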
So, if the depth of the evaluation tree scales polynomially with the input size, an NTM can solve the problem in polynomial time - and a deterministic TM can verify that same answer in polynomial time, just by replaying the single accepting path. The reverse direction also holds: if a deterministic TM can verify a proposed solution in polynomial time, an NTM can solve the problem by non-deterministically guessing a candidate solution and then running the verifier on it. That’s why the two definitions of NP are equivalent.
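That reverse direction is really just the "perfect guesser" picture in code: guess a candidate solution, then verify it. Here's a rough sketch for Subset Sum (the guess-and-check structure is the point, not the particular code); simulating it deterministically means looping over every possible guess, but any single branch only does a polynomial amount of work:

```python
from itertools import product

def verify(numbers, target, choices):
    """Polynomial-time verifier: choices[i] says whether to include numbers[i]."""
    return sum(x for x, keep in zip(numbers, choices) if keep) == target

def ntm_solve(numbers, target):
    """Guess-and-check: non-deterministically pick include/exclude for each number,
    then verify. Each branch does only polynomial work; the exponential cost below
    comes from deterministically simulating every branch."""
    for choices in product([False, True], repeat=len(numbers)):  # all 2^n guesses
        if verify(numbers, target, choices):
            return [x for x, keep in zip(numbers, choices) if keep]
    return None

print(ntm_solve([3, 34, 4, 12, 5, 2], 9))  # -> [4, 5]
```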
Okay, so that’s why we can describe NP problems in terms of a rather impractical non-deterministic Turing Machine, but what’s the advantage of doing so? Remember that there are two ways of evaluating a non-deterministic Turing Machine: we can think of each possible action for an input executing “in parallel” and then discarding the false paths once a solution is found, or we can think of the Turing Machine always correctly “guessing” the action that will lead to an answer in the fewest steps. Using this second framing, a problem beyond NP is one where “even if you guessed the right action to take at every step and bee-lined towards a solution, the runtime would still grow faster than any polynomial in the input size.”
Now P is the set of problems we can solve in polynomial time with no guesswork, NP consists of problems we can solve in polynomial time with perfect guesswork at each step, and anything beyond NP can’t be solved in polynomial time even with perfect guesswork.