## 13.5  More Advanced Local Search Methods

In the following we discuss several examples of local search methods. These methods were originally developed for unconstrained problems, but they also work for certain classes of constrained problems.

The ECLiPSe code for all the examples in this section is available in the file knapsack_ls.ecl in the doc/examples directory of your ECLiPSe installation.

### 13.5.1  The Knapsack Example

We will demonstrate the local search methods using the well-known knapsack problem: given a container of a given capacity and a set of items with given weights and profit values, determine which items to pack into the container such that their total weight does not exceed the container's capacity and the sum of their profits is maximal.

The model for this problem involves N boolean variables, a single inequality constraint to ensure the capacity restriction, and an equality to define the objective function.
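In the standard 0/1 knapsack notation, with x_i indicating whether item i is packed, p_i and w_i the profit and weight of item i, and C the capacity, this model reads:

```
\max \sum_{i=1}^{N} p_i x_i
\quad \text{subject to} \quad
\sum_{i=1}^{N} w_i x_i \le C,
\qquad x_i \in \{0,1\},\; i = 1,\dots,N
```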

```
:- lib(ic).
:- lib(repair).

knapsack(N, Profits, Weights, Capacity, Opt) :-
    length(Vars, N),
    Vars :: 0..1,
    Capacity #>= Weights*Vars  r_conflict cap,
    Profit tent_is Profits*Vars,
    % Params stands for the method-specific parameters of
    % the local search routines discussed below
    local_search(Params, Vars, Profit, Opt).
```
The parameters have the following meaning:
• N - the number of items (integer)
• Profits - a list of N integers (profit per item)
• Weights - a list of N integers (weight per item)
• Capacity - the capacity of the knapsack (integer)
• Opt - the optimal result (output)

### 13.5.2  Search Code Schema

In the literature, e.g. in [18], local search methods are often characterised by the following nested-loop program schema:
```
local_search:
    set starting state
    while global_condition
        while local_condition
            select a move
            if acceptable, do the move
            if new optimum, remember it
        endwhile
        set restart state
    endwhile
```
We give three examples of local search methods coded in ECLiPSe that follow this schema: random walk, simulated annealing and tabu search. Random walk and tabu search do not use the full schema, as there is only a single loop with a single termination condition.

### 13.5.3  Random walk

The idea of Random walk is to start from a random tentative assignment of variables to 0 (item not in knapsack) or 1 (item in knapsack), then to remove random items (changing 1 to 0) if the knapsack's capacity is exceeded and to add random items (changing 0 to 1) if there is capacity left. We do a fixed number (MaxIter) of such steps and keep track of the best solution encountered.

Each step consists of:
• Changing the tentative value of some variable, which in turn causes the automatic recomputation of the conflict constraint set and the tentative objective value.
• Checking whether the move led to a solution, and whether this solution is better than the best one so far.
Here is the ECLiPSe program. We assume that the problem has been set up as explained above. The violation of the capacity constraint is checked by looking at the conflict constraints. If there are no conflict constraints, the constraints are all tentatively satisfied and the current tentative values form a solution to the problem. The associated profit is obtained by looking at the tentative value of the Profit variable (which is being constantly updated by tent_is).
```
random_walk(MaxIter, VarArr, Profit, Opt) :-
    init_tent_values(VarArr, random),           % starting point
    (
        for(_,1,MaxIter),                       % do MaxIter steps
        fromto(0, Best, NewBest, Opt),          % track the optimum
        param(Profit,VarArr)
    do
        ( conflict_constraints(cap,[]) ->       % it's a solution!
            Profit tent_get CurrentProfit,      % what is its profit?
            ( CurrentProfit > Best ->           % new optimum?
                printf("Found solution with profit %w%n", [CurrentProfit]),
                NewBest = CurrentProfit         % yes, remember it
            ;
                NewBest = Best                  % no, ignore
            ),
            change_random(VarArr, 0, 1)         % add another item
        ;
            NewBest = Best,
            change_random(VarArr, 1, 0)         % remove an item
        )
    ).
```
The auxiliary predicate init_tent_values sets the tentative values of all variables in the array randomly to 0 or 1. The change_random predicate changes a randomly selected variable with a tentative value of 0 to 1, or vice versa. Note that we are using an array, rather than a list of variables, to provide more convenient random access. The complete code and the auxiliary predicate definitions can be found in the file knapsack_ls.ecl in the doc/examples directory of your ECLiPSe installation.
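For illustration only, the two auxiliary predicates could be written as follows (this is a sketch, not the code from knapsack_ls.ecl; the resampling version of change_random assumes that at least one variable with the tentative value From exists, otherwise it would loop forever):

```
% Illustrative sketches -- see knapsack_ls.ecl for the real definitions.

init_tent_values(VarArr, random) :-
    ( foreacharg(X, VarArr) do
        V is random mod 2,                  % random 0 or 1
        X tent_set V
    ).

change_random(VarArr, From, To) :-
    functor(VarArr, _, N),
    X is VarArr[random mod N + 1],          % pick a random variable
    X tent_get Val,
    ( Val =:= From ->
        X tent_set To                       % flip it From -> To
    ;
        change_random(VarArr, From, To)     % resample until one matches
    ).
```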

### 13.5.4  Simulated Annealing

Simulated Annealing is a slightly more complex variant of local search. It follows the nested loop schema and uses a similar move operator to the random walk example. The main differences are in the termination conditions and in the acceptance criterion for a move. The outer loop simulates the cooling process by reducing the temperature variable `T`, the inner loop does random moves until `MaxIter` steps have been done without improvement of the objective.

The acceptance criterion is the classical one for simulated annealing: uphill moves are always accepted, downhill moves with a probability that decreases with the temperature. The search routine must be invoked with appropriate start and end temperatures; these should roughly correspond to the maximum and minimum profit changes that a move can incur.
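In the code, this criterion appears as the test exp((CurrentProfit-PrevProfit)/T) > frandom, where frandom yields a uniform random number in [0,1). Equivalently, a move that changes the profit by ΔP is accepted with probability

```
P(\text{accept}) \;=\; \min\!\left(1,\; e^{\Delta P / T}\right)
```

For ΔP ≥ 0 (uphill) the exponential is at least 1, so the move is always accepted; for ΔP < 0 the acceptance probability shrinks as the temperature T decreases.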
```
sim_anneal(Tinit, Tend, MaxIter, VarArr, Profit, Opt) :-
    starting_solution(VarArr),                      % starting solution
    (
        fromto(Tinit, T, Tnext, Tend),
        fromto(0, Opt1, Opt4, Opt),
        param(MaxIter,Profit,VarArr,Tend)
    do
        printf("Temperature is %d%n", [T]),
        (
            fromto(MaxIter, J0, J1, 0),
            fromto(Opt1, Opt2, Opt3, Opt4),
            param(VarArr,Profit,T)
        do
            Profit tent_get PrevProfit,
            (
                flip_random(VarArr),                % try a move
                Profit tent_get CurrentProfit,
                exp((CurrentProfit-PrevProfit)/T) > frandom,
                conflict_constraints(cap,[])        % is it a solution?
            ->
                ( CurrentProfit > Opt2 ->           % is it a new optimum?
                    printf("Found solution with profit %w%n", [CurrentProfit]),
                    Opt3 = CurrentProfit,           % accept and remember
                    J1 = J0
                ; CurrentProfit > PrevProfit ->
                    Opt3 = Opt2, J1 = J0            % accept
                ;
                    Opt3 = Opt2, J1 is J0-1         % accept
                )
            ;
                Opt3 = Opt2, J1 is J0-1             % reject
            )
        ),
        Tnext is max(fix(0.8*T), Tend)
    ).

flip_random(VarArr) :-
    functor(VarArr, _, N),
    X is VarArr[random mod N + 1],
    X tent_get Old,
    New is 1-Old,
    X tent_set New.
```

### 13.5.5  Tabu Search

Another variant of local search is tabu search. Here, a number of moves (usually the most recent ones) are remembered (the tabu list) to direct the search. Moves are selected by an acceptance criterion, with a different (generally stronger) acceptance criterion for moves in the tabu list. As with most local search methods, there are many possible variants and concrete instances of this basic idea. For example, it has to be specified how a move is added to or removed from the tabu list, along with the different acceptance criteria.

In the following simple example, the tabu list has a length determined by the parameter TabuSize. The local moves consist of either adding the item with the best relative profit into the knapsack, or removing the worst one from the knapsack. In both cases, the move is remembered in the fixed-size tabu list, and the complementary move is forbidden for the next TabuSize moves.
```
tabu_search(TabuSize, MaxIter, VarArr, Profit, Opt) :-
    starting_solution(VarArr),                  % starting solution
    tabu_init(TabuSize, none, Tabu0),
    (
        fromto(MaxIter, I0, I1, 0),
        fromto(Tabu0, Tabu1, Tabu2, _),
        fromto(0, Opt1, Opt2, Opt),
        param(VarArr,Profit)
    do
        (
            try_set_best(VarArr, MoveId),       % try uphill move
            conflict_constraints(cap,[]),       % is it a solution?
            tabu_add(MoveId, Tabu1, Tabu2)      % is it allowed?
        ->
            Profit tent_get CurrentProfit,
            ( CurrentProfit > Opt1 ->           % is it a new optimum?
                printf("Found solution with profit %w%n", [CurrentProfit]),
                Opt2 = CurrentProfit            % accept and remember
            ;
                Opt2 = Opt1                     % accept
            ),
            I1 is I0-1
        ;
            (
                try_clear_worst(VarArr, MoveId), % try downhill move
                tabu_add(MoveId, Tabu1, Tabu2)  % is it allowed?
            ->
                I1 is I0-1, Opt2 = Opt1         % reject
            ;
                I1 = 0,                         % no moves possible, stop
                Opt2 = Opt1                     % reject
            )
        )
    ).
```
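The helpers tabu_init/3 and tabu_add/3 are defined in knapsack_ls.ecl. A minimal sketch of the intended behaviour, assuming moves are identified by hypothetical terms set(I) and clear(I) for item I, might look like this:

```
% Illustrative sketch only; the real definitions are in knapsack_ls.ecl.
% The tabu list is kept as a fixed-length list of recent move identifiers.

:- lib(lists).

tabu_init(Size, Init, Tabu) :-
    ( for(_,1,Size), foreach(Init, Tabu), param(Init) do true ).

% Fails (i.e. the move is forbidden) if the complementary move is still
% in the tabu list; otherwise records the move and drops the oldest entry.
tabu_add(MoveId, Tabu0, [MoveId|Tabu1]) :-
    complementary(MoveId, Opposite),
    \+ member(Opposite, Tabu0),
    once(append(Tabu1, [_Oldest], Tabu0)).

complementary(set(I), clear(I)).        % hypothetical move terms
complementary(clear(I), set(I)).
```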

In practice, tabu search often forms only a skeleton around which a more complex search algorithm is built; an example is the application of tabu search to the job-shop problem, see e.g. [19].

 Repair can be used to implement a wide variety of local search and hybrid search techniques.

Figure 13.5: Implementing Search