Universal Heuristics

How do humans solve “unsolvable” problems?

Summary

Many crucial problems defeat the current computing arts but yield to our brains. A great many of them can be stated as the task of inverting easily computable functions. Still other problems, such as extrapolation, are closely related to this form. We have no idea which difficulties are intrinsic to these problems and which merely reflect our ignorance. We will remain in the dark until major advances are made on our fundamental questions, such as whether P equals NP.
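
To make the inversion form concrete, here is a toy sketch in Python; the function names and the brute-force candidate enumeration are purely illustrative. Given an easily computable f and a target y, the task is to find any x with f(x) = y. Factoring is one instance, with f being multiplication of two integers; finding a proof is another, with f being the easily computable proof checker.

    # Toy sketch of the inversion-problem form (illustrative names only):
    # given an easily computable f and a target y, find any x with f(x) == y.

    def multiply(pair):
        p, q = pair
        return p * q

    def invert_by_search(f, y, candidates):
        """Brute-force inversion: computing f is easy; finding a preimage need not be."""
        for x in candidates:
            if f(x) == y:
                return x
        return None

    # "Invert" multiplication to factor 8633 = 89 * 97.
    pairs = ((p, q) for p in range(2, 100) for q in range(2, 100))
    print(invert_by_search(multiply, 8633, pairs))   # -> (89, 97)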

And yet, traveling salesmen do get to their destinations, mathematicians do find proofs of their theorems, and physicists do find patterns in transformations of their elementary particles! How do they do this, and how could computers emulate their success?

Rationale

The brains of insects solve problems of a complexity, and with an efficiency, that we cannot dream of matching. Yet few would be flattered by a comparison to the brain of an insect :-). What advantage do we humans have? One is the ability to solve new problems, those on which evolution did not train generations of our ancestors. We must have some fairly universal methods, not restricted to the specifics of particular problems. Of course, it is hard to tell how, say, mathematicians search for their proofs. Yet the diversity and dynamism of mathematical achievements suggest that some quite universal methods must be at work.

In fact, whatever the difficulty of inversion problems, we know a “theoretically” optimal algorithm for them all: an algorithm that cannot be sped up by more than a constant factor, even on a subset of instances. It searches for solutions, but in order of increasing complexity, not increasing length. A fairly short solution may be hard to find, while a much longer one could be straightforward. Complexity can be roughly defined as the minimal sum of (1) the length of a prefixless binary program p that transforms the given instance into its solution and (2) the logarithm of the running time of p. (Realistically, p runs on data that include not only the specification of the instance but also all other available relevant information, possibly including access to a huge database, such as a library, or even the Internet.)
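
In program form, this optimal search can be realized by dovetailing: in phase k, every candidate program of length at most k is restarted and run for 2^(k-L) steps, where L is its length, so a program of length L receives roughly a 2^(-L) share of the total work, and a program that finds the solution in t steps delivers it after about 2^L * t work overall, matching the "length plus log of running time" measure. The sketch below is only a schematic illustration: real programs are replaced by Python generators, and the 5-bit "length" assigned to the toy program is an arbitrary stand-in.

    # Schematic sketch of the optimal search schedule (assumed interface, not a
    # faithful implementation).  "Programs" are generators yielding candidate
    # solutions; `programs` stands in for an enumeration of all prefixless
    # binary programs, each paired with its length in bits.

    def optimal_search(programs, is_solution, max_phase=30):
        for k in range(max_phase + 1):
            for make_program, length in programs:
                if length > k:
                    continue
                gen = make_program()                  # restart the program from scratch
                for _ in range(2 ** (k - length)):    # its step budget in this phase
                    try:
                        candidate = next(gen)
                    except StopIteration:
                        break                         # program halted without success
                    if is_solution(candidate):        # cheap check: inversions verify fast
                        return candidate
        return None

    # Toy usage: invert f(x) = x*x for the target 169.
    def counting_program():                           # pretend this is a 5-bit program
        x = 0
        while True:
            yield x
            x += 1

    print(optimal_search([(counting_program, 5)], lambda x: x * x == 169))   # -> 13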

Extrapolation could be done by a double use of this concept. The likelihood of a given extrapolation consistent with the known data decreases exponentially with the length of its shortest description. This principle, known as Occam's Razor, is hard to implement, since short descriptions may be extremely hard to find. Decoding these descriptions should not take more time than the complexity of the process that generated the data. However, even when the decoding time is reasonable, finding short descriptions may be exponentially hard. Yet this is an inversion problem, and the optimal search above applies.
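
As a toy illustration of this exponential weighting (the two rules and their "description lengths" below are invented for the example): both rules fit the observed data 2, 4, 8, 16, but the shorter description gets almost all of the weight for the next term.

    # Toy Occam-razor weighting: each rule consistent with the data is weighted
    # 2**(-description length in bits), then the weights are normalized.
    # The hypotheses and bit counts are made up for illustration.

    def occam_weights(hypotheses):
        """hypotheses: dict mapping a rule's name to its description length in bits."""
        raw = {name: 2.0 ** -bits for name, bits in hypotheses.items()}
        total = sum(raw.values())
        return {name: w / total for name, w in raw.items()}

    # Two rules that both fit the observed data 2, 4, 8, 16:
    candidates = {
        "double each term, predict 32      (say, a 20-bit program)": 20,
        "circle-division rule, predict 31  (say, a 35-bit program)": 35,
    }
    for name, weight in occam_weights(candidates).items():
        print(f"{weight:.6f}  {name}")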

Such approaches contrast with the methods currently employed in computer science: universal algorithms are used heavily, but mostly for negative results.

Yet these methods are optimal only up to constant factors. Nothing is known about these factors, and simplistic implementations make them completely unreasonable. Current theory cannot even answer straightforward questions, such as: is it true that some such optimal algorithm cannot be sped up 10-fold on infinitely many instances? Yet humans do seem to use such generic methods successfully, raising hopes for a reasonable handling of these factors. Something interesting may lie here.

Contributors and Credits

Initial text: Leonid Levin.
