Hi Marco,

These instances were automatically generated from two task graphs that were
randomly generated with the same parameters (e.g. number of nodes and
connection rate), and were mapped to the same hardware architecture. Their
complexities are not exactly identical, but they should be quite close (I
suppose...), since they have similar numbers of variables (142 and 148) and
constraints. More complex instances (with hundreds of variables) have also
been generated, and many of those also finish within a couple of seconds.

I have modified the code as you suggested and am running the experiments
right now. Thanks.

BR,
Jacky

> There can be various reasons.
> One reason could be that you found a phase transition.
> It looks as if your instances were randomly generated: which parameters
> did you use to generate these instances?
>
> If you used the same parameters for the two instances, then they could
> be the so-called "exceptionally hard problems" (see works by Barbara
> Smith and colleagues).
> If this is the case, the usual solution is to use a randomized search
> strategy with restarts at increasing timeouts (see works by Carla Gomes
> and colleagues).
>
> As a basic implementation, you could change your search strategy as follows:
>
>     TimeOut :: 1..300,
>     bb_min((
>         indomain(TimeOut), writeln(timeout(TimeOut)),
>
>         bb_min((
>             search(Starts, 0, smallest, indomain_random, complete, []),
>             search([R], 0, smallest, indomain, complete, [])
>         ),
>         Cost, bb_options{strategy: restart, delta: 0.0001,
>                          timeout: TimeOut, report_failure: Cost})
>     ),
>     Cost, bb_options{strategy: restart, delta: 0.0001,
>                      report_failure: Cost}),
>
> Of course, this linearly increasing timeout is very simplistic; Luby
> sequences would probably give better results:
>
> http://www.cs.berkeley.edu/~sinclair/vegas.ps
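
For reference, a minimal sketch (not from the original exchange) of how the
Luby sequence could be computed in plain Prolog/ECLiPSe and used to derive the
successive timeouts instead of the linear 1..300 enumeration; the predicate
name luby/2 and the idea of multiplying a base timeout by the sequence value
are assumptions for illustration only:

    % luby(+I, -L): L is the I-th value of the Luby sequence
    % 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,...
    % Restart number I would then run with timeout BaseTimeOut * L.
    luby(I, L) :-
        luby(I, 1, L).

    luby(I, K, L) :-
        ( I =:= 2^K - 1 ->
            L is 2^(K - 1)          % I closes a block: value is 2^(K-1)
        ; I < 2^K - 1 ->
            K1 is K - 1,
            I1 is I - 2^K1 + 1,     % recurse into the repeated prefix
            luby(I1, L)
        ;
            K1 is K + 1,            % I lies beyond this block: widen K
            luby(I, K1, L)
        ).

A query such as luby(7, L) gives L = 4, so the seventh restart would run with
four times the base timeout.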