Re: [eclipse-clp-users] Performance-related issues in heuristic optimization with ECLiPSe

From: Thorsten Winterer <thorsten_winterer_at_...126...>
Date: Wed, 18 Jun 2008 09:45:01 +0200
> For variable selection, I found "most_constrained" and "first_fail" to be the most effective. They pick variables in such a way that the search tree does not get very wide, so a solution is reached faster (fewer backtracks).
> 
> For value ordering, it really depends on the problem. I believe indomain_min would be suitable in your case, since smaller values mean earlier start times and thus better costs.
> 
> Finally, for the search strategy: if your heuristics are suitable and efficient, good solutions will be found near the left-hand side of the search tree. In this case, I would avoid credit-based search, because its search moves quickly towards the middle and right-hand side of the tree (i.e. away from the good solutions). bbs stays on the left-hand side, but who knows how many backtracks would be required for a good solution? So what I used myself is a combination of dbs and lds: the search is limited to the left side (where the good solutions are found), and many alternatives are considered while searching for better solutions.
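
For reference, such a dbs/lds combination can be expressed with search/6 from lib(ic_search) roughly as follows (just a sketch; the parameters are illustrative and the option syntax is written from memory):

    :- lib(ic), lib(ic_search).

    schedule(Starts) :-
        search(Starts, 0, most_constrained, indomain_min,
               dbs(5, lds(2)),   % all alternatives in the first 5 levels,
               []).              % at most 2 discrepancies below that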



I had to solve a shop scheduling problem recently, and got good results with dynamic ordering of variables and values. At each node, I had to select a task to schedule next and a machine on which the task was to be processed. Both selections were done heuristically (largest unscheduled task with earliest possible start time first, machine with earliest availability first).
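
A minimal sketch of the idea (illustrative only; it shows just the start-time part and assumes lib(ic) and lib(ic_search)):

    :- lib(ic), lib(ic_search).

    label_dynamic([]).
    label_dynamic(Starts) :-
        Starts = [_|_],
        % dynamic variable ordering: pick the task that can start earliest
        delete(Start, Starts, Rest, 0, smallest),
        % dynamic value ordering: try the earliest possible start time first
        indomain(Start, min),
        label_dynamic(Rest).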

Also, it is important to prune unproductive branches of the search tree where possible, e.g. by adding additional constraints on backtracking. In my scheduling problem, I had subsets of machines with the same characteristics, so I could restrict myself to trying one machine of each kind at each search node.
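
The machine-symmetry part could look roughly like this (purely illustrative; the class representation and pick_representative/2 are assumptions, not actual code):

    :- lib(lists).

    % one choice point per class of identical machines,
    % instead of one per individual machine
    choose_machine(Machine, Classes) :-
        member(class(_Kind, Members), Classes),
        pick_representative(Members, Machine).   % deterministic, e.g. earliest available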

As to the different versions of the cumulative constraint: they differ in the amount of propagation. edge_finder3 does the most propagation and is therefore slower per node than the others. But more propagation can also mean a smaller search tree, i.e. overall your program might be faster.
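
Since these libraries provide the same cumulative/4 goal, switching between them is just a matter of which library you load, so it is easy to compare (a sketch, assuming the fd-based libraries):

    :- lib(edge_finder3).    % or lib(edge_finder), or lib(cumulative)

    post_resource(Starts, Durations, Resources, Capacity) :-
        cumulative(Starts, Durations, Resources, Capacity).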

Use a cost function that can prune the search tree during the search. If your cost can only be computed once the schedule is complete, it is useless for pruning.
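
For example, if the cost is the makespan, it can be tied to the start variables by constraints, so its lower bound tightens as tasks get fixed and partial schedules can already be pruned against the current bound (a sketch, assuming lib(ic) and lib(ic_global); names are illustrative):

    :- lib(ic), lib(ic_global).

    makespan_cost(Starts, Durations, Makespan) :-
        ( foreach(S, Starts), foreach(D, Durations), foreach(E, Ends)
        do
            E #= S + D
        ),
        maxlist(Ends, Makespan).    % bounds propagate during the search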

Finally, if you want a good but sub-optimal solution in a fraction of a second (and I'm not sure that "fraction of a second" is achievable for a large scheduling problem), forget about optimization using bb_min. You should instead concentrate on finding a good and fast search heuristic, which will give you a solution very quickly. Then you can use this sub-optimal solution in an outer optimization loop to generate additional constraints that will help your heuristic find a better solution in the next iteration.
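
Such an outer loop could be sketched as follows (purely illustrative: problem/2 is assumed to set up the variables, constraints and cost of your model, and heuristic_search/1 stands for your own fast search heuristic):

    :- lib(ic).

    improve_loop(Best) :-
        setval(best, none),
        ( fromto(go, _, Next, stop) do
            ( problem(Starts, Cost),
              % additional constraint derived from the previous solution
              ( getval(best, sol(Bound, _)) -> Cost #< Bound ; true ),
              heuristic_search(Starts)
            ->
              setval(best, sol(Cost, Starts)),
              Next = go
            ;
              Next = stop
            )
        ),
        getval(best, sol(_, Best)).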

Cheers,
Thorsten


Received on Wed Jun 18 2008 - 00:45:11 CEST
