The topic of search has been at the heart of AI since the General Problem Solver (GPS). Early fundamental search algorithms included generate-and-test, branch-and-bound, the A* algorithm, iterative deepening, and tree search guided by global problem structure, by information elicited during search, or by intelligent backtracking.
The contribution of constraint programming is to allow the end user to
control the search, combining generic techniques and problem-specific
heuristics.
A nice illustration of this is the n-queens problem: how to place
n queens on an n×n chess board so that no two queens can attack
each other.
For n=8, depth-first generate-and-test with backtracking is quite
sufficient, finding a solution in a few seconds.
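To make this concrete, here is a minimal sketch of such a search in Python, using the standard one-queen-per-row representation (the representation and all names are illustrative choices, not taken from the original):

    def queens_backtrack(n):
        """Depth-first generate-and-test with backtracking for n-queens."""
        cols = []                      # cols[r] = column of the queen in row r

        def attacks(r, c):
            # A queen at (r, c) clashes with an earlier queen if they
            # share a column or a diagonal.
            return any(c == cols[i] or abs(c - cols[i]) == r - i
                       for i in range(r))

        def place(r):
            if r == n:
                return True            # all n queens placed
            for c in range(n):
                if not attacks(r, c):
                    cols.append(c)
                    if place(r + 1):
                        return True
                    cols.pop()         # undo and try the next column
            return False

        return cols if place(0) else None

Each row is assigned a column in turn, and whenever a row has no safe column left, the most recent choice is undone and the next alternative tried.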
However, when n=16, it is necessary to interleave constraint
propagation with the search so as to obtain a solution quickly.
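One simple way to realise this interleaving is forward checking: whenever a queen is placed, the squares it attacks are deleted from the domains of the queens still to be placed, and the search backtracks as soon as some domain becomes empty. The text does not specify which form of propagation was used, so the following sketch, which extends the one above, should be read as just one plausible choice:

    def queens_propagate(n):
        """Depth-first search interleaved with forward checking."""
        domains = [set(range(n)) for _ in range(n)]  # candidate columns per row
        cols = []

        def prune(r, c):
            """Delete values attacked by a queen at (r, c) from later rows,
            returning the deletions so they can be undone on backtracking."""
            removed = []
            for r2 in range(r + 1, n):
                d = r2 - r
                for v in (c, c - d, c + d):
                    if v in domains[r2]:
                        domains[r2].discard(v)
                        removed.append((r2, v))
            return removed

        def place(r):
            if r == n:
                return True
            for c in sorted(domains[r]):
                cols.append(c)
                removed = prune(r, c)
                # Proceed only if no future queen has been left
                # without a possible column.
                if all(domains[r2] for r2 in range(r + 1, n)) and place(r + 1):
                    return True
                for r2, v in removed:  # undo the propagation
                    domains[r2].add(v)
                cols.pop()
            return False

        return cols if place(0) else None

Because attacked values are deleted as soon as a queen is placed, inconsistent branches are abandoned several levels earlier than under pure generate-and-test.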
When n=32, even this is not enough, and more intelligence must be
added to the search algorithm.
A very general technique is to choose, as the next variable to label,
the one with the smallest domain, i.e. the one with the fewest
remaining possible values.
This is called first-fail.
It is particularly effective in conjunction with constraint
propagation, which reduces the size of the domains of the unlabelled
variables (as described for arc-consistency above).
For n-queens, the first-fail technique works very well, yielding a
solution within a second.
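In implementation terms, first-fail changes only how the next variable to label is selected. Where the sketch above labelled rows in a fixed order, a first-fail version would select a row with a helper along these lines (hypothetical code; assignment is assumed to map already-labelled rows to their columns, and prune would then need to cover all unlabelled rows, not just later ones):

    def first_fail_row(domains, assignment):
        """Select the unlabelled row whose domain is smallest."""
        unlabelled = (r for r in range(len(domains)) if r not in assignment)
        return min(unlabelled, key=lambda r: len(domains[r]))

The intuition is that the variable most likely to fail is tackled while the search tree is still shallow, so failures are discovered cheaply.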
Unfortunately, even first-fail doesn't scale up beyond n=70.
However, there is a problem-specific heuristic which starts by placing
queens in the middle of the board and then moves outwards.
With the combination of depth-first search interleaved with
constraint propagation, using the first-fail ordering to choose
which queen to place next, and placing queens in the centre of the
board first, the 70-queens problem is solved within a second, and
the algorithm scales up easily to 200 queens.
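The centre-first heuristic can be read as a value ordering: when labelling a queen, try the columns nearest the middle of the board first. The text leaves the details open, so the following is only one plausible rendering:

    def middle_out(values, n):
        """Order candidate columns by distance from the board's centre."""
        centre = (n - 1) / 2
        return sorted(values, key=lambda c: abs(c - centre))

Plugging middle_out(domains[r], n) in place of sorted(domains[r]) in the forward-checking sketch, together with first_fail_row, gives the combination described here. One intuition is that central squares lie on the longest diagonals, so committing to them early gives propagation the most to prune.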