Re: Incremental propagation

From: Joachim Schimpf <J.Schimpf_at_icparc.ic.ac.uk>
Date: Fri 25 Feb 2000 05:50:55 PM GMT
Message-ID: <38B6C0FF.87000857@icparc.ic.ac.uk>
Aleksandar Bakic wrote:
> 
> It would be great if, with some tricks, I could replace
> 
> start
> time
>      constraints([A,B,C],Outs),
> t0   [A,B,C] = [1,2,3],
> t0+N labeling(Outs)
> 
> with something like
> 
> start
> time
>      constraints([A,B,C],Outs),
>      read(A),
> t0   read(B),
> t1   read(C), % value of C is entered at t2
> t2+M labeling(Outs)
> 
> where M << N.
> ...
> I just do not like the fact that
> without tricks, instantiating only one or all Ins variables takes
> about the same time, resulting in K-fold slowdown when Ins has K
> variables.

In principle, propagation is always incremental, i.e. once you
instantiate a variable, it propagates. As you have correctly
observed, when you instantiate several variables with a single
statement like [A,B,C] = [1,2,3], then this is "atomic" and
there is only one propagation sequence which considers the effects
of all three instantiations together.

Unfortunately, you cannot expect that propagating a single variable
update is necessarily cheaper than propagating several. Keep in mind
that you have a large network of variables, linked by constraints.
Typically, that network is connected, so when you touch one variable,
this may well cause updates to many (even all) others. How quickly
such a change is "absorbed" in practice depends on the structure of
your problem.

The only thing you can expect is that propagating several changes
together is likely to be cheaper than the sum of individual propagations.
This is basically because the propagation can take shortcuts and does
not have to produce so many intermediate states. In particular, if you
look at the extreme case of instantiating ALL variables together,
the propagation becomes trivial as it amounts only to checking
whether the constraints are satisfied.
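
To make this concrete, here is a small sketch (not ECLiPSe code; a toy
bounds-propagation model in Python, with invented names like propagate)
that counts constraint revisions on a chain X0..X9 constrained by
X[i+1] = X[i] + 1. Instantiating the variables one at a time forces a
full propagation sequence after every assignment, whereas instantiating
all of them first and propagating once amounts to a single checking pass:

```python
# Toy bounds-propagation network: a chain X0..X9 with constraints
# X[i+1] = X[i] + 1, domains represented as (lo, hi) intervals.
# Counts constraint revisions to compare incremental vs batch updates.

def propagate(domains, constraints, dirty):
    """AC-3-style worklist; returns the number of constraint revisions."""
    queue = list(dirty)
    revisions = 0
    while queue:
        i = queue.pop()
        for (a, b) in constraints:
            if i in (a, b):
                revisions += 1
                lo_a, hi_a = domains[a]
                lo_b, hi_b = domains[b]
                # enforce X[b] = X[a] + 1 on the interval bounds
                new_a = (max(lo_a, lo_b - 1), min(hi_a, hi_b - 1))
                new_b = (max(lo_b, lo_a + 1), min(hi_b, hi_a + 1))
                if new_a != domains[a]:
                    domains[a] = new_a
                    queue.append(a)
                if new_b != domains[b]:
                    domains[b] = new_b
                    queue.append(b)
    return revisions

N = 10
constraints = [(i, i + 1) for i in range(N - 1)]

# Incremental: fix X0..X9 one by one, propagating after each assignment.
domains = {i: (0, 100) for i in range(N)}
incremental = 0
for i in range(N):
    domains[i] = (i, i)
    incremental += propagate(domains, constraints, [i])

# Batch: fix all variables first, then run one propagation sequence.
domains = {i: (0, 100) for i in range(N)}
for i in range(N):
    domains[i] = (i, i)
batch = propagate(domains, constraints, list(range(N)))

print(incremental, batch)
```

Running this shows the batch count well below the incremental total: in
the all-instantiated case every revision is just a satisfaction check,
while the incremental run keeps rebuilding intermediate states, exactly
the shortcut effect described above.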

As far as I understand, your question was whether you can shorten
the time it takes from the input of the last few variables to getting
a solution. If your last variables are still involved in lots of
constraints, then the answer is probably no. However, if the earlier
variable instantiations have already led to a substantial simplification
of the constraint network, and a smaller number of remaining constraints,
then you can probably gain something. Maybe your problem structure
suggests groups of variables which belong to "sub-problems" and
can be instantiated and propagated together, thus eliminating
a large part of the constraint network more cheaply.


NOTE: This is a little technicality that might fool your experiments:
Eclipse considers sequences of built-in predicates (like =/2 or read/1)
as atomic and does not trigger propagation between them.
Therefore A=1,B=2,C=3 actually behaves the same as [A,B,C]=[1,2,3].
To see the effect of 3 individual propagations, you can for example
write A=1,true,B=2,true,C=3. This will effectively separate the 3
instantiations and force 3 propagation sequences.


> In C++, I used a list of references as Ins, ...

These references just refer to your Eclipse-variables; it makes
no difference whether you instantiate them via the C++ reference
or directly from the Eclipse code. However, as long as you don't
resume the Eclipse execution (ec_resume()) no propagation happens.
When you resume, propagation happens for all the instantiations that
have occurred since the last ec_resume(). This is therefore similar
to instantiating several variables together, like in [A,B,C] = [1,2,3].


------------------------------------------------------------------------
 Joachim Schimpf                /               phone: +44 20 7594 8187
 IC-Parc, Imperial College     /              mailto:J.Schimpf@ic.ac.uk
 London SW7 2AZ, UK           /      http://www.icparc.ic.ac.uk/eclipse
Received on Fri Feb 25 17:58:58 2000

This archive was generated by hypermail 2.1.8 : Wed 16 Nov 2005 06:07:05 PM GMT