AttributedVariables

Author: Joachim Schimpf, 2017-09-11

This is part of an email discussion, in response to Markus Triska's remarks on attributed variables at https://www.metalevel.at/prolog/attributedvariables

Background

A constraint solver is a meta-program: it reasons about object-level variables as objects. Lloyd and Hill's Gödel programming language had clean facilities for this: it had variables at the object level and a corresponding ground representation at the meta level. In Prolog we are more relaxed about this distinction, but that makes it even more important for a programmer to be aware of when a variable is simply a variable, and when it is an object that is being reasoned about at the meta level.
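For illustration, compare the following two goals: in the first, X is simply used as a logical variable; in the second, the program steps up a level and reasons about X as a datum, via the meta-logical var/1.

    % Object level: X takes part in the computation as a variable.
    ?- X = f(a).
    % Meta level: the program inspects X as an object in its own right.
    ?- ( var(X) -> Kind = variable ; Kind = nonvariable ).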

Muddling this up is a source of much confusion, and there is probably a lack of literature and teaching material addressing the principles.

What's the question?

What follows deals specifically with the "binding-serialization" feature of verify_attributes/3, i.e. the property that between two invocations of verify_attributes/3 only a single attributed variable can have been bound.

I am _not_ discussing the "goal-return" feature here (although that is possibly more important).
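To make the binding-serialization feature concrete, here is a minimal sketch of such a hook in SICStus style, using library(atts). The attribute name "tracked" and the module are invented for this illustration; this is not code from any of the systems discussed.

    % Minimal illustration of binding-serialization (assumed example).
    :- module(demo, [demo/0]).
    :- use_module(library(atts)).
    :- attribute(tracked/0).

    % Called for each attributed variable about to be bound; the binding
    % itself is undone while the hook runs and is redone afterwards.
    verify_attributes(Var, Value, []) :-
        (   get_atts(Var, tracked)
        ->  format("about to bind a tracked variable to ~q~n", [Value])
        ;   true
        ).

    % A single unification binds two attributed variables, but the hook
    % is invoked twice, once per variable: that is the serialization.
    demo :-
        put_atts(X, tracked),
        put_atts(Y, tracked),
        [X,Y] = [1,2].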

Is the serialization feature necessary?

Let us spell out what is happening in Markus's minatotask.pl program:

1. the solver code maintains a ZDD. It uses the object-level variables as meta-level node identifiers. This is generally not a good idea, but it isn't the main problem here.

2. in each propagation step, the ZDD is simplified. The code uses an algorithm that can only deal with a single variable instantiation at a time.

3. in each propagation step, if all nodes corresponding to a particular variable disappear from the ZDD, that variable must be set to 0. Markus's code does not explicitly keep track of which variables were in the ZDD, or which are removed. Instead, he uses a hack: if an object-level variable is still a variable at propagation time, he interprets this as an indication that the variable was a node in the ZDD at the end of the previous propagation round (see all_others_0//2).
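Schematically, the trick in step 3 amounts to something like the following (all names invented; this is a hedged reconstruction, not Markus's actual code, which uses a DCG):

    % Any still-unbound variable is taken to have been a ZDD node in the
    % previous round; if it no longer occurs in the ZDD, set it to 0.
    set_vanished_to_zero([], _Zdd).
    set_vanished_to_zero([V|Vs], Zdd) :-
        (   var(V), \+ zdd_contains_var(Zdd, V)
        ->  V = 0
        ;   true
        ),
        set_vanished_to_zero(Vs, Zdd).

    % Assumed representation: the node identifiers of the ZDD are exactly
    % the variables occurring in the term that represents it.
    zdd_contains_var(Zdd, V) :-
        term_variables(Zdd, NodeVars),
        member_eq(V, NodeVars).

    member_eq(X, [Y|_]) :- X == Y, !.
    member_eq(X, [_|Ys]) :- member_eq(X, Ys).

The var(V) test carries information only if nothing else can have been bound since the previous propagation round, and that is exactly where the dependence on serialized bindings creeps in.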

What can we learn from this?

  • IF you choose an algorithm that can only incorporate a single variable change at a time into your meta-level data structure;
  • AND/OR you use a programming trick that explicitly relies on the object level changing only in a certain restricted way between propagator invocations;
  • THEN indeed that works only in a system where variable bindings are artificially serialized.

Does that mean that the serialization feature is necessary? No, because the problem goes away as soon as proper meta-level data structures and a robust propagation algorithm are used (see the attached ECLiPSe code for illustration; it doesn't even need any attribute hooks, just standard coroutining).
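The attached code itself is not reproduced here, but the robust pattern is easy to sketch with plain coroutining (freeze/2, as found in SICStus or SWI; all predicate names below are mine): the propagator rescans its whole variable list on every waking, so any number of bindings may have happened in between.

    % Sketch of a rescanning propagator (illustration only, not the
    % attached ECLiPSe code).  Vars is the propagator's own meta-level
    % record of the variables it is interested in.
    propagate(Vars) :-
        still_unbound(Vars, Unbound),
        length(Vars, N),
        length(Unbound, U),
        Bound is N - U,
        format("woken: ~w of ~w variables bound so far~n", [Bound, N]),
        (   Unbound = [V|_]
        ->  % For brevity we suspend on a single variable; a real solver
            % would attach the waking condition to all remaining ones.
            freeze(V, propagate(Vars))
        ;   true    % everything bound: nothing left to do
        ).

    % Keep only the variables that are still unbound.
    still_unbound([], []).
    still_unbound([X|Xs], Vs) :-
        (   var(X) -> Vs = [X|Vs1] ; Vs = Vs1 ),
        still_unbound(Xs, Vs1).

After setting up propagate([X,Y,Z]) and then executing [X,Y] = [1,2], the propagator is woken once and simply observes both instantiations together: no serialization is needed for correctness.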

As a side remark: the pre-unify handler scheme was used in ECLiPSe in the early 1990s, and is still supported. Today, over 20 ECLiPSe libraries use attributes in a variety of ways, but none of them employs the pre-unify handler.

Is the serialization feature desirable?

As a reminder of what we are talking about, the SICStus manual says:

    
    For each relevant variable:
        For each relevant module M, M:verify_attributes/3
            is called, collecting a list of returned Goals.
        The variable binding is redone.
        Any Goals are called.
        Any blocked goals are called.

This imposes a variable-centric structure on _all_ propagation that is triggered by unifications, meaning that both "unification hooks" and (to some extent) woken goals see only a single variable binding at a time.

However, most constraint propagators (such as FD constraints) react not only to unifications (X=c), but also to other primitive constraints such as domain updates (X>=c, X\=c). Should all these be serialized and propagated individually as well? Correct me if I am wrong, but I don't think anyone is asking for this.

It seems clear that the limitation "a single change at a time" is not something that can be expected to hold for constraint propagators in general. Consequently, such propagators must be expected to respond correctly to multiple domain changes on multiple variables, including multiple instantiations (which are, after all, just a particular kind of domain change).

It is then unclear why one would insist on having the "single change" restriction in the special case of unification propagation (this is where Occam's much-quoted razor comes in).

In my opinion it is much more sensible to accept as a general rule that, whenever a propagator is invoked (no matter how), many things may have happened since its previous invocation: bound changes, instantiations, aliasing...

Is the serialization feature undesirable?

With its delayed bindings (or unbindings), the scheme interferes seriously with the standard machinery of Prolog engines, which carries a risk of unintended consequences.

What it buys is that certain fragile propagation algorithms can be made to work. That might not be something to be encouraged.

Performance-wise, the variable-centric structure of the hook invocations has a couple of disadvantages:

  • all propagators are run with only a single variable touched, even AC3-style propagators that would naturally deal with multiple touched variables.
  • propagators cannot be globally ordered by complexity (quick checks/propagators first, slow/complex ones later): the slow propagators must finish with the first variable before the quick ones on the second one can be run.

Summing up

Don't get me wrong: I am the first to agree that there are real problems with interfacing constraint solvers to Prolog (e.g. aliasing, or propagators interrupting each other). But this particular aspect of the attribute mechanism isn't, in my opinion, one of them.

Cheers, Joachim
