The IC (Interval Constraint) library is a hybrid integer/real interval arithmetic constraint solver. Its aim is to make it convenient for programmers to write hybrid solutions to problems, mixing together integer and real constraints and variables.
Previously, if one wished to mix integer and real constraints, one had to import separate solvers for each, with the solvers using different representations for the domains of variables. This meant any variable appearing in both kinds of constraints would end up with two domain representations, with extra constraints necessary to keep the two representations synchronised.
IC is a general interval propagation solver which can be used to solve problems over both integer and real variables. Integer variables behave much like those from the old finite domain solver FD, while real variables behave much like those from the old real interval arithmetic solver RIA. IC allows both kinds of variables to be mixed seamlessly in constraints, since (with a few exceptions) the same propagation algorithms are used throughout and all variables have the same underlying representation (indeed, a real variable can be turned into an integer variable simply by imposing an integrality constraint on it).
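For illustration, a minimal sketch of mixing the two kinds of variable in one problem (the predicate name hybrid_example is invented for this example; ::/2, $::/2, $>=/2 and integers/1 are standard lib(ic) primitives):

:- lib(ic).

hybrid_example(X, Y) :-
    X :: 1..10,          % integer bounds give X an integer domain
    Y $:: 0.0..10.0,     % $:: makes Y a continuous (real) variable
    Y $>= X + 0.5,       % a constraint mixing both kinds of variable
    integers([Y]).       % imposing integrality turns Y into an integer variable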
IC replaces the ‘fd’, ‘ria’ and ‘range’ libraries. Since IC does not support symbolic domains, there is a separate symbolic solver library ‘ic_symbolic’, to provide the non-numeric functionality of ‘fd’.
The main difference between IC’s interval solving and RIA’s is that IC is aware of and utilises the bounded real numeric type. This means bounded reals may appear in IC constraints, and IC variables may be unified with bounded reals (though direct unification is not recommended: it is preferable to use an equality constraint to do the assignment). In contrast, RIA will fail with a type error if bounded reals are used in either of these cases.
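For example (the predicate names and the value 0.99__1.01 below are purely illustrative):

good_assign(X) :- X $= 0.99__1.01.    % preferred: assign via an equality constraint
risky_assign(X) :- X = 0.99__1.01.    % direct unification also works, but is not recommended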
The main problem with using floating point arithmetic instead of real arithmetic for numerical computation or constraint solving is that it is only approximate. Finite precision means a floating point value may only approximate the intended real number, and rounding errors may be introduced by every computation. Worse, one cannot tell from looking at an answer how much error has crept in: the result may be very close to the true solution, or the errors may have accumulated to the point where they are significant. It can therefore be hard to know whether the answer one obtains is actually a solution (it may have been unintentionally included due to errors), or, worse, whether answers have been missed (unintentionally excluded due to errors).
Interval arithmetic is one way to manage the error problem. Essentially each real number is represented by a pair of floating point bounds. The true value of the number may not be known, but it is definitely known to lie between the two bounds. Any arithmetic operation to be performed is then done using these bounds, with the resulting interval widened to take into account any possible error in the operation, thus ensuring that the resulting interval contains the true answer. This is the principle behind the bounded real arithmetic type.
Note that interval arithmetic does not guarantee small errors; it merely provides a way of knowing how large the error may have become.
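As an illustration of the bounded real type at work (outside any constraint), here is a simple arithmetic sketch; breal/1 is the standard type conversion function:

?- X is breal(0.1) + breal(0.2).
% X is bound to a bounded real whose two float bounds are guaranteed to
% enclose the true sum 0.3, however the individual operations round.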
One drawback of the interval approach is that arithmetic comparisons can no longer always be answered with a simple “yes” or “no”; sometimes the only possible answer is “don’t know”. This is reflected in the behaviour of the arithmetic comparators (=:=, >=, etc.) when applied to bounded reals which overlap each other. In such a case one cannot know whether the true value of one is greater than, less than, or equal to the other, and so a delayed goal is left behind. This delayed goal indicates that the computation is to be considered a success only if the condition it contains is actually true. For example, if the delayed goal left behind was 0.2__0.4 >= 0.1__0.3, then the computation should be considered a success only if the true value represented by the bounded real on the left is greater than or equal to that of the bounded real on the right. If the width of the intervals in any such delayed goals is non-trivial, this indicates a problem with numerical accuracy. It is up to the user to decide how large an error is tolerable for any given application.
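For instance, the comparison from the example above can be tried directly (the second goal uses non-overlapping intervals for contrast):

?- 0.2__0.4 >= 0.1__0.3.
% succeeds, leaving the delayed goal 0.2__0.4 >= 0.1__0.3 behind

?- 0.4__0.5 >= 0.1__0.3.
% the intervals do not overlap, so this simply succeeds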
In order to ensure the soundness of the results it produces, the IC solver does almost all computation using interval arithmetic. As part of this, the first thing done to a constraint when it is given to the solver is to convert all non-integer numbers in it to bounded reals. Note that for this conversion, floating point numbers are assumed to refer to the closest representable float value, as per the type conversion predicate breal/2. This lack of widening when converting floating point numbers to bounded reals is fine if the floating point number is exactly the intended real number, but if there is any uncertainty, that uncertainty should be encoded by using a bounded real in the constraint instead of a float.
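For example, if a value is only known to lie somewhere near 0.1, that uncertainty can be written into the constraint explicitly (the bounds below are purely illustrative):

X $= 0.1          % 0.1 is taken to mean exactly the float closest to 0.1
X $= 0.09__0.11   % an explicit bounded real encodes the uncertainty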
One of the drawbacks of this approach is that the user is not protected from the fundamental inaccuracies that may occur when representing decimal numbers with binary floating point values. The user should therefore be aware that some numbers given explicitly in their program may not be safely represented as a bounded real that spans the exact decimal value, e.g. X $= 0.1 or, equivalently, X is breal(0.1).
This may lead to unexpected results, such as:

[eclipse 2]: X $= 0.1, Y $= 0.09999999999999999, X $> Y.

X = 0.1__0.1
Y = 0.099999999999999992__0.099999999999999992
Yes (0.00s cpu)

[eclipse 3]: X $= 0.1, Y $= 0.099999999999999999, X $> Y.

No (0.00s cpu)
This potential source of confusion arises only with values which are explicitly given within a program. By replacing the assignment to Y with an expression which evaluates to the same real value we get
[eclipse 4]: X $= 0.1, Y $= 0.1 - 0.000000000000000001, X $> Y.

X = 0.1__0.1
Y = 0.099999999999999992__0.1

Delayed goals:
    ic : (0 > -1.3877787807814457e-17__-0.0)
Yes (0.00s cpu)
Note the delayed goal indicating the conditions under which the original goal should be considered to have succeeded.
To load the IC library into your program, simply add the following directive at an appropriate point in your code.
:- lib(ic).
The IC library solves constraint problems over the reals. It is not limited to linear constraints, so it can be used to solve general problems like:
[eclipse 2]: ln(X) $>= sin(X).

X = X{0.36787944117144228 .. 1.0Inf}

Delayed goals:
    ...
Yes (0.01s cpu)
The IC library treats linear and non-linear constraints differently. Linear constraints are handled by a single propagator, whereas non-linear constraints are broken down into simpler ternary/binary/unary propagators.
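As a rough sketch of what this means (T1 and T2 below are just illustrative auxiliary variables; the actual decomposition performed by the solver may differ):

% handled by a single linear propagator:
3*X + 4*Y - Z $= 10

% decomposed into simpler propagators over auxiliary variables,
% conceptually something like T1 $= X*Y, T2 $= sin(Z), T1 + T2 $= 1:
X*Y + sin(Z) $= 1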
Any relational constraint ($=, $>=, #=, etc.) can be reified simply by including it in an expression, where it will evaluate to its reified truth value.

User-defined constraints may also be included in constraint expressions, where they are treated in a similar manner to the user-defined functions found in expressions handled by is/2. That is to say, they are called at run-time with an extra argument to collect the result. Note, however, that user-defined constraints/functions, when used in IC, should be deterministic; those which leave choice points may not behave as expected.
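For instance, a hypothetical deterministic helper midpoint/3 could be used as a two-argument function inside a constraint expression; at run-time IC calls it with an extra argument appended to collect the result (both predicate names here are invented for this sketch):

% midpoint(X, Y, M): M is constrained to the midpoint of X and Y
midpoint(X, Y, M) :- M $= (X + Y) / 2.

% Z $>= midpoint(A, B) causes midpoint(A, B, Aux) to be called and
% the constraint Z $>= Aux to be imposed
above_mid(A, B, Z) :- Z $>= midpoint(A, B).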
Variables appearing in arithmetic IC constraints at compile-time are assumed to be IC variables unless they are wrapped in an eval/1 term. See section 3.1.7 for a more detailed explanation of usage.
The following arithmetic expressions can be used inside the constraints:

X
    Variables. If X is not yet an interval variable, it is turned into one by implicitly constraining it to be a real variable.

+-Expr
    Expr or -Expr. The result is an interval enclosing both. If, however, either bound is infeasible, then the result is the bound that is feasible. If neither bound is feasible, the goal fails.

sqr(Expr)
    The square of Expr. Logically equivalent to Expr*Expr, but with better operational behaviour.

exp(Expr)
    Same as e^Expr.

rsqr(Expr)
    The reverse of sqr(Expr), i.e. +-sqrt(Expr).

rpow(E1,E2)
    The reverse of exponentiation, i.e. finds X in E1 = X^E2.

and
    Reified constraint conjunction, e.g. B #= (X$>3 and X$<8)

or
    Reified constraint disjunction, e.g. B #= (X$>3 or X$<8)

=>
    Reified constraint implication, e.g. B #= (X$>3 => X$<8)

neg
    Reified constraint negation, e.g. B #= (neg X$>3)

Constraints
    Within the expression context, a constraint evaluates to its reified truth value. If the constraint is entailed by the state of the constraint store, then the (sub-)expression evaluates to 1. If it is disentailed by the state of the constraint store, then it evaluates to 0. If its reified status is unknown, then it evaluates to an integral variable with domain 0..1.
    Note: The simple cases (e.g. Bool #= (X #> 5)) are equivalent to directly calling the reified forms of the basic constraints (e.g. #>(X, 5, Bool)).

foo(Arg1, ..., ArgN)
    Any other function is evaluated by calling call(foo(Arg1, Arg2 ... ArgN, Aux)), with the extra argument Aux collecting the result. If no lookup module is specified, then the current module will be used. This behaviour mimics that of is/2.
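As a small sketch of the reified truth values described above (the bounds and goals are illustrative):

?- X :: 0..10, B #= (X #> 5), X #= 7.
% X #> 5 is now entailed, so B is set to 1

?- X :: 0..10, B #= (X #> 5), X #= 3.
% X #> 5 is disentailed, so B is set to 0

?- X :: 0..10, B #= (X #> 5).
% entailment unknown: B is left as a 0..1 integral variable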
The eval/1 wrapper inside arithmetic constraints is used to indicate that a variable will be bound to an expression at run-time. This feature is typically only needed by programs which generate their constraints dynamically at run-time. For example:
broken_sum(Xs, Sum) :-
    ( foreach(X, Xs), fromto(Expr, S1, S2, 0) do
        S1 = X + S2
    ),
    Sum $= Expr.
The above implementation of a summation constraint will not work as intended because the variable Expr will be treated like an IC variable when it is in fact the term +(X1,+(X2,+(...))) which is constructed in the for-loop. In order to get the desired functionality, one must wrap the variable Expr in an eval/1.
working_sum(Xs, Sum) :-
    ( foreach(X, Xs), fromto(Expr, S1, S2, 0) do
        S1 = X + S2
    ),
    Sum $= eval(Expr).
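A possible use of working_sum/2, with the effect sketched in the comments:

?- working_sum([A, B, C], 10), A $= 2, B $= 3.
% sets up A + B + C = 10 over three IC variables; once A and B are
% fixed, C is constrained to the value 5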