On Wed, Aug 07, 2002 at 09:22:54AM +0100, Marc van Dongen wrote:

> Sorry about the delay in replying. We've had a power cut
> over the weekend and mail is coming in slowly.

Yes, I was a bit surprised to receive a message saying the mail hadn't
been delivered for 4 hours, when it had actually been 3 days... :)

> Warwick Harvey (wh@icparc.ic.ac.uk) wrote:
> : What is it about minimize's behaviour that you would like to change, in
> : order to try to exploit this?
>
> If minimize( labeling( VARS ), EXPR ) results in a valid
> labeling and cost COST, then it will post the constraint
> EXPR < COST and will search for new solutions to
> labeling( VARS ). If constraint propagation forces the domain
> of EXPR to be such that COST - 1, COST - 2, ..., COST - N + 1
> are no longer in it, then I would like to stop. Especially if
> EXPR has many values in its domain, this may save a lot.

The problem here is that I suspect you're unlikely to get a suitable upper
bound on the cost before starting the branch-and-bound process, at which
point the bound is only valid in the current branch. Everything I can
think of to overcome this really goes against the grain, in that it's
pretty much the opposite of what one would normally do: if your initial
COST is a long way from optimal and/or N is small, these approaches could
bite badly. But you probably know all this.

Anyway, your best option (at least as a starting point) is probably to
write your own labelling predicate which first imposes the constraint
EXPR >= COST - N + 1 and then calls labeling/1 (or something more
intelligent) as normal. If no solution is found, there is no solution
within N of the current COST, and you can stop. If a solution is found,
the search needs to be restarted with the new value for COST. There are
optimisation predicates in the branch_and_bound library which (at least
when given the relevant options) do restart whenever a solution is found;
use one of them.
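To make the restart scheme above concrete, here is a small sketch of its control flow. This is deliberately not ECLiPSe code (in ECLiPSe you would post the EXPR >= COST - N + 1 constraint and call labeling/1); `solve` is a hypothetical stand-in that brute-forces a toy list of (solution, cost) pairs, just so the loop can be run end to end.

```python
# A sketch (assumptions: `solve` and the toy instance are invented for
# illustration; a real implementation would be a labelling predicate in
# ECLiPSe with the window posted as a constraint).

def solve(candidates, lo, hi):
    """Return some (solution, cost) with lo <= cost < hi, or None.

    Stands in for labeling/1 running under the posted constraints
    EXPR >= lo (i.e. COST - N + 1) and EXPR < hi (i.e. COST).
    """
    for sol, cost in candidates:
        if lo <= cost < hi:
            return sol, cost
    return None

def minimize_with_window(candidates, init_sol, init_cost, n):
    """Restart-based branch and bound that stops as soon as no solution
    exists in the window [COST - N + 1, COST), instead of proving that
    nothing at all is cheaper than COST."""
    best, cost = init_sol, init_cost
    while True:
        found = solve(candidates, cost - n + 1, cost)
        if found is None:
            # No solution within N of the incumbent: accept it as
            # "good enough" and stop early.
            return best, cost
        best, cost = found  # restart with the improved bound

# Toy instance: (solution, cost) pairs a real search might enumerate.
toy = [("a", 10), ("b", 7), ("c", 6), ("d", 2)]
```

With a generous window (`n = 5`) the loop walks down 10 → 7 → 6 → 2 and returns `("d", 2)`; with a tight window (`n = 3`) it stops immediately at `("a", 10)` because no solution has cost in [8, 10). That second case is exactly the "if N is small these approaches could bite badly" trade-off: early stopping trades optimality for search time.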
If that's not enough, you could try digging out an ECLiPSe 5.2
distribution, which includes the source code of the branch_and_bound
module, and try modifying it to do what you want.

> : Note that if a solution has been found with cost COST, then minimize
> : proceeds to search for a solution with *any* cost less than COST. If it
> : proceeded to try cost = COST-1, etc., then yes, one could stop when COST-N
> : had no solution, but that's not the way it works (because in general that's
> : not a particularly smart way to do it).
>
> I know it's not the way it works. That's why I asked if it's
> possible to do it differently :-)

Sorry. :) We get questions from users with a wide variety of knowledge
and experience, and sometimes it's difficult to know when one is saying
too much or not enough, and either is bad... I tend to err on the side of
saying too much, which gets me into trouble sometimes (particularly when
I'm careless about how I say it).

> : Is there some way of determining that COST-1 through COST-N have no solution
> : that would be significantly cheaper than just looking for anything less than
> : COST-1?
>
> I believe that near an almost-optimal solution it is
> far cheaper to find a solution with cost between
> COST-1 and COST-N, or to prove that such a solution does not
> exist. That's why I'm so interested in it.

Do you have good heuristics for finding an almost-optimal solution, or
some way of knowing when the solution you have is almost optimal? (Just
trying to increase my own understanding. :)

Cheers,
Warwick

Received on Wed Aug 07 15:02:09 2002
This archive was generated by hypermail 2.1.8 : Wed 16 Nov 2005 06:07:16 PM GMT