In contrast, an intractable problem requires an exponential amount of time to solve (as far as we know). Increase the input by just a few digits, and the problem is ten times as hard. Another few digits, and another factor of ten. The problem quickly overwhelms your computer, or any computer that we can envision in the foreseeable future.
A language is in the class p if it is recognized by a deterministic tm, taking no more than p(|w|) steps, for some fixed polynomial p(n). (Here |w| is the length of the input word in the language.) Such a language is considered tractable. This may not help when p(n) = n^10000, but most tractable problems in the real world have p(n) ≤ n^6, and are amenable to computer solutions.
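For instance, the language of binary palindromes is in p. Here is a sketch in python; the function name is mine, and a high level language stands in for the tm, as discussed below.

    def is_palindrome(w):
        # Decide the language of palindromes over {0,1}.
        # One pass over the input: at most |w| comparisons, so p(n) = n works here.
        # (A single-tape tm, shuttling back and forth, needs roughly n^2 steps,
        # which is still polynomial.)
        return w == w[::-1]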
A language is in the class np if it is recognized by a nondeterministic tm, taking no more than p(|w|) steps, for some fixed polynomial p(n). The letters np stand for "nondeterministic polynomial" time. These problems, at least those not known to lie in p, are considered intractable.
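An equivalent way to picture an np language, not the definition above but a standard characterization, is guess and check: the nondeterministic machine guesses a certificate, and a deterministic machine verifies it in polynomial time. Here is a sketch of such a verifier for subset sum; the choice of problem and the names are mine.

    def verify_subset_sum(numbers, target, certificate):
        # numbers: a list of integers; target: an integer.
        # certificate: the nondeterministic "guess", a list of indices into numbers.
        # The check runs in time polynomial in the length of the input.
        chosen = set(certificate)
        if any(i < 0 or i >= len(numbers) for i in chosen):
            return False
        return sum(numbers[i] for i in chosen) == target

The nondeterministic machine finds a valid certificate, if there is one, in polynomially many steps; a deterministic machine is left to try them all, which is the exponential blowup described below.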
Since a deterministic tm is a restricted version of a nondeterministic tm, every p language is also an np language. What about the converse? Can every np language be recognized deterministically in polynomial time? Does p = np? Are intractable problems really tractable after all? This is perhaps the most important open question in mathematics and computer science today. If the answer is yes, then public key cryptography as we know it falls apart, and electronic transactions are no longer secure. More on this later.
Since the execution time is bounded, every np language is recursive. Count the state transitions, and if you haven't found a solution after p(|w|) steps, halt in a failure state. The turing machine always stops, declaring success or failure. In fact the modified machine, which counts its own steps on a separate track, still recognizes the language in polynomial time. (More on this later.) Swapping the success and failure states of a deterministic machine shows that the complement of a p language is p. The same trick does not work for a nondeterministic machine, which accepts whenever any one of its computation paths succeeds, so whether np is closed under complement is itself an open question.
An assortment of problems are undecidable, i.e. not recursive, and these languages cannot be in p or np.
As mentioned above, a separate track can be used to count instructions and put the tm in a failure state after p(|w|) steps. This overhead slows down the machine, but doesn't change the class of the problem, be it p or np. Hence we can always view these machines as recursive, giving an answer (yes or no) in polynomial time.
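Here is a sketch of that clock, under an interface I've invented for illustration: the machine is a step function that either halts or hands back a new configuration.

    def run_with_clock(step, config, budget):
        # step(config) returns ("accept",), ("reject",), or ("continue", new_config).
        # budget plays the role of the count kept on the separate track,
        # e.g. budget = len(w)**2 for a quadratic machine.
        for _ in range(budget):
            outcome = step(config)
            if outcome[0] == "accept":
                return True
            if outcome[0] == "reject":
                return False
            config = outcome[1]
        return False  # out of time: halt in the failure state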
We can put additional bells and whistles on our tm, as long as each enhancement can be simulated by a plain tm with at most a polynomial slowdown. Eventually the tm looks like a modern computer, with its own instruction set, on-chip registers, and a random access memory. The solution can be expressed as an algorithm in a high level language, rather than a 50 page state transition matrix for a tm. As long as the algorithm runs in polynomial time, we're all right. The corresponding tm, driven only by a state machine, would also run in polynomial time.
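As a concrete example of such an algorithm, here is breadth first search deciding reachability in a directed graph; the representation (a dictionary of adjacency lists) is my choice.

    from collections import deque

    def reachable(graph, s, t):
        # graph: dict mapping each vertex to a list of its out-neighbors.
        # Runs in O(V + E) time, so the language { (G, s, t) : t is reachable
        # from s } is in p; a plain tm simulating this costs only a polynomial
        # factor more.
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for v in graph.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False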
Of course we cannot simply add nondeterminism to a deterministic computer; that feature is too powerful. It tears through the polynomial envelope. If m1 is nondeterministic and m2 is deterministic, m2 can simulate m1, and solve the same problem, but now the slowdown is exponential. An np problem, quickly solved by m1, is solved by m2 in exponential time. Basically, m2 has to try each possible transition, at each step, instead of running all transitions in parallel. If m1 presents k choices at each step, m2 has to try k^p(|w|) transition sequences, looking for success.
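In practice the brute force search looks like this. Here is a sketch for satisfiability, with my own encoding of clauses: each step of the "guess" has k = 2 choices, one per variable, so the deterministic search examines up to 2^n assignments.

    from itertools import product

    def brute_force_sat(clauses, n):
        # clauses: a list of clauses; each clause is a list of literals,
        # where +i means variable i is true and -i means variable i is false.
        # Try all 2**n assignments -- the deterministic simulation of the guess.
        for bits in product([False, True], repeat=n):
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True
        return False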
As I write this, people are trying to build a quantum computer, which is sometimes described as essentially nondeterministic. This is an attempt to break the np barrier through engineering, rather than mathematics. However, some experts claim the quantum computer, even if it could be built, is not a true analog of the nondeterministic turing machine. It is a beast unto itself, capable of solving some problems, but not others. Yes, it can factor large composite numbers, but factoring is (probably) not an np complete problem. For more on this, check out Scott Aaronson's article.
Since l2 has been mapped into a subset of l1, without too much fuss, we say l2 has been reduced to l1. If we can solve the harder problem, l1, we can certainly solve the easier problem l2: translate each instance of l2 into an instance of l1 and solve that.
This is a lot of notation for something very intuitive. If you don't know how to solve a problem, perhaps you can convert it into another problem that you do know how to solve.
Note that reduction is transitive. If l1 reduces to l2 reduces to l3, then l1 reduces to l3.
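As an illustration (my choice of problems, not one used elsewhere in this article), independent set reduces to vertex cover: a graph on n vertices has an independent set of size at least k exactly when it has a vertex cover of size at most n - k. The instance map is trivial, so any solver for vertex cover, even the brute force stand-in sketched here, immediately solves independent set.

    from itertools import combinations

    def has_vertex_cover(graph, k):
        # graph: dict mapping each vertex to a set of its neighbors.
        # Brute force stand-in for "a solver for the problem we reduced to".
        nodes = list(graph)
        for size in range(min(k, len(nodes)) + 1):
            for cover in combinations(nodes, size):
                s = set(cover)
                if all(u in s or v in s for u in graph for v in graph[u]):
                    return True
        return False

    def has_independent_set(graph, k):
        # The reduction: map (G, k) to (G, n - k) and ask about vertex cover.
        return has_vertex_cover(graph, len(graph) - k)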
A language lc is np complete if it is an np language, and it is np hard.
Is there really such a language - one that embraces all possible np languages? There is - as we shall see in the next section.
Now assume p ≠ np, as most people believe. Suppose there is a procedure that determines whether an np language belongs to p. Set s3 to the languages in p, a nontrivial collection since (by assumption) some np languages lie outside it, and apply rice's theorem again.
There is no general procedure that distinguishes an np problem from a p problem. If you have discovered an efficient algorithm, the problem is tractable; otherwise, you can't be sure.