Re: Re: maximum finite domain constraint for eclipse

From: Joachim Schimpf <j.schimpf_at_icparc.ic.ac.uk>
Date: Sun 04 Feb 2001 06:36:28 PM GMT
Message-ID: <3A7DA12C.A11DCF56@icparc.ic.ac.uk>

Rui wrote:
> On a special note to the list. I don't know what is the policy about posts
> and so I am not sure if this kind of post fits in.

Seems to be just the thing to discuss here :-)


> Btw, Any comments about my implementation? Is it efficient?
> Can it be improved?

It is in fact surprisingly tricky to get these things right.
Even this apparently straightforward max-constraint shows many
of the problems that can come up, but that makes it a perfect
example to study!


First, your code can be simplified by removing all the special
handling of non-variables. This is possible because most of the
primitives that work on domain variables also work on integers
(considering them as variables with identical upper and lower
bound). This leaves essentially the last clause of your code.
The only place where you still have to check for variables is
where you decide whether to re-suspend or not:

maxfd1(A, B, M):-
        dvar_range(A, MinA, MaxA),
        dvar_range(B, MinB, MaxB),
        ( MinA >= MaxB ->
	    A = M
        ; MinB >= MaxA ->
	    B = M
	;
	    Min is max(MinA, MinB),
	    Max is max(MaxA, MaxB),
	    ( nonvar(A),nonvar(B) ->
		true
	    ;
		suspend(maxfd1(A, B, M), 3, [(A,B)->fd:min,(A,B)->fd:max])
	    ),
	    M::Min..Max
	).

Another important change I have made was to swap the order of
suspend(...) and M::Min..Max.
This can be important for correctness when the constraint is
used in a constraint-network together with other constraints
that have higher priority. The line M::Min..Max affects the
domain of M. This may wake other constraints, and if these have
higher priority, they are executed immediately (before M::Min..Max
returns). These other constraints may directly or indirectly
affect the domain of A and B again, which in turn requires maxfd/3
to be executed again. But that can only work if we have already
suspended it before!
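
For contrast, here is a sketch of the problematic ordering (this is
what NOT to write; same body as above, only the two goals swapped):

            M::Min..Max,        % may wake higher-priority constraints
                                % which narrow A or B again, but ...
            suspend(maxfd1(A, B, M), 3, [(A,B)->fd:min,(A,B)->fd:max])
                                % ... maxfd1 is not yet suspended at
                                % that point, so it misses those updates.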


The next point is that you don't do all the propagation that is
possible! It is natural to think of the max-constraint as computing
the maximum of two numbers, but in fact a constraint is a relation.
Therefore the "output" argument M can also affect the "input"
arguments A and B. For example

[eclipse 119]: [A,B]::1..5, M::2..3, maxfd1(A,B,M).

A = A{[1..5]}
B = B{[1..5]}
M = M{[2, 3]}

should ideally change the upper bounds of A and B to 3.
The easiest way to add this behaviour is to reuse our existing
code and define

maxfd2(A, B, M) :-
	M #>= A,
	M #>= B,
	maxfd1(A, B, M).

This now behaves as expected:

[eclipse 127]: [A,B]::1..5, M::2..3, maxfd2(A,B,M).

A = A{[1..3]}
B = B{[1..3]}
M = M{[2, 3]}


If you look at the efficiency now, you find that some of the
work that maxfd1/3 does is now redundant because the two inequalities
already take care of updating the lower bound of M when the lower bounds
of A or B change. So we could simplify maxfd1 such that it only reacts
to changes to the upper bounds of A and B by updating the upper
bound of M.
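
A reduced propagator along these lines could look as follows (just a
sketch, untested; the name maxfd1_ub and the exact wake conditions are
my assumptions):

maxfd1_ub(A, B, M) :-
        dvar_range(A, _MinA, MaxA),
        dvar_range(B, _MinB, MaxB),
        ( nonvar(A), nonvar(B) ->
            true
        ;
            % wake only on upper bound changes of A or B,
            % and suspend before updating M (see above)
            suspend(maxfd1_ub(A, B, M), 3, [(A,B)->fd:max])
        ),
        Max is max(MaxA, MaxB),
        M #=< Max.              % only M's upper bound is updated

This would then be combined with the two inequalities M #>= A and
M #>= B, as in maxfd2.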


An interesting general point is that we have now split the single
max-constraint into 3 "propagators" which share the job of implementing
the constraint's semantics. This is a typical choice that one has to
make when implementing a constraint:

- either one propagator that does everything
- or many specialised propagators

There is no general rule about what is more efficient. It depends on
the particularities of the constraints and also on the actual pattern
of domain updates at runtime.  For example, when only lower bound
updates to A happen at runtime, then cheap, specialised propagators
will be more efficient because the irrelevant ones never wake up. 
If updates happen randomly to all 6 bounds, then a single propagator
might be more efficient because there is less waking and re-suspending
going on.


So let's try to pack everything into a single propagator:

maxfd3(A, B, M):-
        dvar_range(A, MinA, MaxA),
        dvar_range(B, MinB, MaxB),
        dvar_range(M, _MinM, MaxM),
        ( MinA >= MaxB ->
	    A = M
        ; MinB >= MaxA ->
	    B = M
	;
	    call_priority((
		Max is max(MaxA, MaxB),
		Min is max(MinA, MinB),
		dvar_remove_smaller(M, Min),	% M :: Min..Max
		dvar_remove_greater(M, Max),

		dvar_remove_greater(A, MaxM),	% A #=< MaxM
		dvar_remove_greater(B, MaxM),	% B #=< MaxM

		Vars = v(A,B,M),
		( nonground(2,Vars,_) ->
		    suspend(maxfd3(A, B, M), 3, [Vars->fd:max,Vars->fd:min])
		;
		    true
		)
	    ), 2)
	).

This does the same as maxfd2 above. A few remarks:

I have now used the lower-level primitives dvar_remove_smaller
and dvar_remove_greater instead of ::/2 and #=</2.
This is somewhat more efficient, but otherwise equivalent.

Also, I have wrapped all the updates of M, A and B together with
the re-suspend code into a call_priority/2. This makes the whole
sequence sufficiently "atomic", i.e. any waking of other constraints
will be postponed until the end of the call_priority. This is an
alternative way to achieve what we did before by playing with the
order of modification and re-suspension.


If you look at the code, you see that the lower bound of M is never
used.  Is it really irrelevant?  No, it turns out that we have in
fact missed another propagation opportunity! For example:

[eclipse 12]: A::1..3, B::1..9, M::4..9, maxfd3(A,B,M).

A = A{[1..3]}
B = B{[1..9]}
M = M{[4..9]}

Here we can infer that A cannot possibly be the maximum, because its
domain does not overlap with M's. We can therefore raise B's lower
bound to 4, or even unify B and M. The improved code is then:

maxfd4(A, B, M):-
        dvar_range(A, MinA, MaxA),
        dvar_range(B, MinB, MaxB),
        dvar_range(M, MinM, MaxM),
        ( MinA >= MaxB ->
	    A = M
        ; MinB >= MaxA ->
	    B = M
	; MinM > MaxB ->
	    A = M
	; MinM > MaxA ->
	    B = M
	;
	    call_priority((
		Max is max(MaxA, MaxB),
		Min is max(MinA, MinB),
		dvar_remove_smaller(M, Min),	% M :: Min..Max
		dvar_remove_greater(M, Max),

		dvar_remove_greater(A, MaxM),	% A #=< MaxM
		dvar_remove_greater(B, MaxM),	% B #=< MaxM

		Vars = v(A,B,M),
		( nonground(2,Vars,_) ->
		    suspend(maxfd4(A, B, M), 3, [Vars->fd:max,Vars->fd:min])
		;
		    true
		)
	    ), 2)
	).


This now does all the propagation possible. How can we prove that?
One way is to use ECLiPSe's lib(propia) to implement a perfect (but
somewhat inefficient) constraint:

:- lib(propia).

maxfd_perfect(A, B, M):-
	(labeling([A,B,M]),max(A,B,M)) infers most.

This can now be compared with maxfd4, and it turns out that they have
the same behaviour, so we haven't missed any more opportunities.
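
For example, on the query from above (the prompt number is made up),
one would expect maxfd_perfect to produce the same domain reductions
as maxfd4, i.e. to unify B and M:

[eclipse 13]: A::1..3, B::1..9, M::4..9, maxfd_perfect(A,B,M).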


After all this, it is worth remembering that the most "complete"
implementation of a constraint is not necessarily the best under all
circumstances.  In our example, the perfect constraint has to wake up
on all 6 possible changes (upper/lower bound change on 3 variables). 
Every change may or may not represent a propagation opportunity.  If
it doesn't, we have just wasted time, and we would possibly have been
better off using a cheaper, incomplete implementation.  By the way,
this is one reason why most so-called "benchmarks" of constraint
systems are quite useless.


There are more interesting aspects, but I leave them for later...
Just want to mention that library(fd_global) contains a
generalisation of the max-constraint called maxlist/2, which computes
the maximum of a list, but also works efficiently for 2 variables.
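
Using it instead of a hand-written propagator would then simply be
(a sketch, assuming the interface is maxlist(List, Max)):

:- lib(fd_global).

maxfd(A, B, M) :- maxlist([A, B], M).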


-- 
 Joachim Schimpf              /             phone: +44 20 7594 8187
 IC-Parc, Imperial College   /            mailto:J.Schimpf@ic.ac.uk
 London SW7 2AZ, UK         /    http://www.icparc.ic.ac.uk/eclipse
Received on Sun Feb 04 18:39:00 2001
