Title: Draft Proposed Standard for Prolog Evaluable Predicates
Author: R. A. O'Keefe

There are several different implementations of Prolog which resemble Dec-10 Prolog closely enough that it is worth trying to move programs from one version to another. To facilitate this, it is important to have a clear specification of the semantics of the standard predicates (syntax is comparatively unimportant). This paper presents such a specification for term classification, construction, and comparison, for arithmetic, and for control structures. The proposed standard extends Dec-10 Prolog in the interests of portability between future systems. This is a draft, published for comment.

This work was supported by a Commonwealth Scholarship. Computing was done on the Edinburgh ICF KL-10 under SERC grant GR/C/06226.

    atom(A) & name(A, S) & name(X, S) -> X = A

is not always true. And an implementor whose host language supports strings particularly well might prefer it if the "string" argument of name/2 were a string and not a sequence of character codes. But neither of these reasons for dissatisfaction with name/2 is licence to change it. Instead, a new standard should define more satisfactory operations as well (as this one does). This brings me to another point about this proposed standard. It includes some things that are

    ! Undefined predicate: pljs/3
    ! pljs(3, 17, _273)

is recommended. It is redundant, but when the culprit call is very large it may be difficult for the programmer to count the arguments. As a general matter, it is recommended that all error messages have the form

    ! <error phrase>: <detail>
    ! <the culprit goal>

and that some form of dialogue then be entered. In the case of undefined predicates, the user must have the option of failing

    current_predicate(Name, Goal)

(see section 4.6 of the Dec-10 Prolog manual) which recognises user predicates. Up till now it has meant that the predicate in question has at least one clause. However, it would be more useful if it meant that the predicate in question has at some time had clauses, for then a Prolog interpreter written in Prolog could easily report undefined predicate errors, the Prolog pretty-printer could distinguish between a command to print a predicate which currently has no clauses and a command to print a non-predicate, and so on. Note that the requirement that only assertions can make a predicate defined does not preclude a declaration

    :- data F/N.

which indicates a predicate which will be dynamically built and changed, as we can write

    declare_data(F, N) :-
        functor(Goal, F, N),
        (   current_predicate(F, Goal)
        ;   assert(Goal),
            retract(Goal)
        ),  !.

This works with either definition of current_predicate.
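The proposed meaning also makes it easy for a Prolog interpreter written in Prolog to report undefined predicates itself. A minimal sketch, assuming the revised current_predicate/2 semantics; solve/1 and report_undefined/1 are names invented here:

```prolog
% Sketch of an interpreter that reports undefined predicates.
solve(true) :- !.
solve((A, B)) :- !,
    solve(A),
    solve(B).
solve(Goal) :-
    functor(Goal, F, N),
    (   current_predicate(F, Goal) ->
        clause(Goal, Body),         % the predicate has (had) clauses
        solve(Body)
    ;   report_undefined(F/N)       % hypothetical error routine
    ).
```

A "data" predicate whose clauses have all been retracted still counts as defined, so solve/1 simply fails on it rather than reporting an error.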

    ! Type Error: argument 2 of plus/3 is not an integer:
    ! plus(_273, a, _275)

An implementation should be able to provide

    handle overflow in plus(X,Y,Z) where nonvar(X) and nonvar(Y)
    by calling eval(Z is X+Y).

But getting this right would require similar hooks into type failures as well, and the whole thing would end up as an incredible mess. So overflows should not be continuable. (It would be wrong to fail an overflowed goal, because there

domain errors, such as sqrt(-1). These should be handled in the same way as integer division by 0.

overflows. These should be handled like integer overflows, that is, they must be detected, and must not be continuable. There may of course be other floating point exceptions, such as trying to perform arithmetic on an IEEE NaN value. With the exception of domain errors (like division by 0), these represent the failure of the implementation to handle a well defined value, and so should be presented as overflows.

underflow. It is common practice to replace an underflowed result by 0. There are two problems with this. The first is the fact that the ordered field laws

    X > 0 & Y > 0 => X*Y > 0
    X < 0 & Y > 0 => X*Y < 0
    X*Y = 0 => X = 0 or Y = 0

and so on can be falsified. In a language allegedly based on logic this would be a pity. The second is that underflow represents a complete "loss of precision" and indicates a poor formulation of a calculation problem. As there is a vast literature on numerical analysis, and a good deal is known about coping with the defects of floating point, it would be a great pity if this indication of a badly expressed algorithm were not brought to the programmer's notice.
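On a system that silently replaces underflowed results by 0, the third ordered field law is easy to falsify. A sketch, assuming IEEE doubles and a host that flushes rather than reporting the underflow:

```prolog
% X > 0 and Y > 0, yet the computed X*Y is 0: so X*Y = 0 no longer
% implies X = 0 or Y = 0.  (1.0e-200 squared is below the smallest
% representable double.)
falsify_field_law :-
    X = 1.0e-200,
    Y = 1.0e-200,
    X > 0.0,
    Y > 0.0,
    Z is X * Y,          % underflows
    Z =:= 0.0.
```

On an implementation that reports underflow as this standard recommends, the goal Z is X * Y would instead raise an error, bringing the badly posed computation to the programmer's attention.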

    gen_nat(N) :-               % gen-erate nat-ural
        nonvar(N),              % if we aren't to generate it
        !,                      % demand that it is an integer
        integer(N), N >= 0.     % and non-negative.
    gen_nat(N) :-               % otherwise, generate an
        gen_nat(0, N).          % integer >= 0

    gen_nat(L, L).
    gen_nat(L, N) :-            % generate natural > L
        succ(L, M),             % M = L+1
        gen_nat(M, N).          % generate natural >= M

    gen_int(I) :-               % gen-erate int-eger
        nonvar(I),              % if we aren't to generate it
        !,                      % demand that it is an integer.
        integer(I).
    gen_int(0).                 % generate 0
    gen_int(I) :-               % generate +/- N for N > 0
        gen_nat(1, N),
        (   I = N
        ;   plus(I, N, 0)       % I+N = 0
        ).

It is also better style to use an explicit generator, so that the human reader can better grasp what the algorithm is up to. The basic problem is that when there are infinitely many solutions to a single goal, blind chronological backtracking such as Prologs generally use will stick at that goal, when the cause of the failure may lie elsewhere entirely. Reporting an instantiation fault instead is a good way to ask the programmer for more help in the control component of the program. A system such as MU Prolog should not report instantiation faults, but should delay such goals until more variables are instantiated. When such a system deadlocks (because all the goals it might work on are delayed), that is its equivalent of reporting an instantiation fault. If there is enough information in the goal to determine that it cannot succeed, it should fail. So an instantiation fault should generally correspond to a question with at least one answer (which the Prolog doesn't think it can afford to find). If the system supports user defined error handlers, and if an error handler can generate solutions, then it might be an idea to allow the user's error handler to propose solutions and use the built in code to check them.
But when the user's handler/generator fails, it will have failed after a finite number of solutions, and most instantiation faults correspond to questions with an infinite number of solutions, so the error should be re-signalled (by-passing the user's handlers). Instantiation faults are not failure-continuable, though they are success-continuable. Instantiation faults are indicated in the specifications below by

    length([], 0).
    length([_|Tail], N) :-
        length(Tail, M),
        succ(M, N).

or, to avoid runaway recursion, as

    length(List, Length) :-
        nonvar(Length), !,
        length_gen(Length, List).
    length(List, Length) :-
        length_chk(List, Length).

    length_chk([], 0).
    length_chk([_|Tail], N) :-
        length_chk(Tail, M),
        succ(M, N).

    length_gen(0, []).
    length_gen(N, [_|Tail]) :-
        succ(M, N),
        length_gen(M, Tail).

This is a very useful predicate, and to get the utmost speed, it might be coded in the host language, or even in micro-code. We would like such an implementation to have the best affordable approximation to its logical semantics. The arithmetic predicates are a case where the relation represented cannot be described in Prolog. If numbers are represented using successor notation, then Prolog can indeed express all the arithmetic operations. But otherwise, even for the relation N=M+1, there are infinitely many values for M, and to describe this relation in Prolog we would need an infinite number of clauses. Still, the

-   if there is sufficient information in a question to determine that it has a unique answer, the implementation should yield that answer.
-   if there is sufficient information in a question to determine that it has NO answer, the implementation should fail.
-   if neither of these cases holds, but the implementation can tell that there is a finite number of solutions, it may enumerate them. But it need not.
-   if the implementation cannot tell that there is a finite number of solutions, or if it doesn't check, it should either adopt some other control strategy (such as delaying) or it should report an instantiation fault. It must NOT backtrack over an infinite set of solutions.

Unfortunately, Dec-10 Prolog has not followed this design principle. For example, the length/2 predicate in Dec-10 Prolog only works one way: length(X, 4) reports an instantiation fault instead of computing the unique solution X = [_,_,_,_]. However, there is an important consequence of adopting this principle: it is

    D_size(D, Size) :-
        bind Size to the number of elements in D, or, if D is a
        mapping, to the number of elements in the domain of D.
    D_member(Element, D) :-
        recognise or enumerate elements of the collection.
    D_memberchk(Element, D) :-
        recognise elements of the collection, do not enumerate.
        May delay if Element is unbound, but need not.
    check_D(Pred, D) :-
        check that Pred(X) is true for each element X of D.
    part_D(Pred, D, TrueD, FailD) :-
        distribute the elements of D into two objects TrueD/FailD of
        the same type according as Pred(Elem) succeeds/fails.
    map_D(Pred, Input_D, Output_D) :-
        construct a new object of type D, where corresponding elements
        I and O of Input_D and Output_D are related by Pred(I,O).

These conventions need thinking about. There could also be some others. Given D_member, we can define bounded existential quantification as

    some_D(Pred, D) :-
        D_member(Element, D),
        Pred(Element), !.

Note the presence of the cut. Since there is no "official" way to find out what Element was involved, the only effect of the cut should be to eliminate redundant proofs of the same fact. However, if Pred (which can be a compound term) contains variables, or if it has side effects, or if the collection D contains variables, the program might be able to detect the difference. If it matters, the programmer can easily write the calls to D_member and Pred explicitly, and then has the successful element as well. Similarly, we can define bounded universal quantification as

    each_D(Pred, D) :-
        forall(D_member(Element, D), Pred(Element)).

Again, this will not bind any free variables in Pred or D, and not only will such bindings not be propagated to the caller, they are not kept from one Element to the next. In contrast, the check_D predicate

    check_list(divisible_by(Y), [72,6,30,1296])

might instantiate Y to 6, 3, and 2, while

    each_list(divisible_by(Y), [5,3,7])

might succeed, because each of the numbers has a non-unit factor, leaving Y uninstantiated. Some data structures represent finite functions in various ways. Note that map_D predicates only act on the

    for all D_obj there is a unique R_obj such that
    D_R(D_obj, R_obj)

but

  * for all R_obj there is a unique D_obj such that
  * D_R(D_obj, R_obj)

If this second rule is true, there should also be a predicate

    ?- cp "*zeelunk*".

(known as 'apropos' in many Lisps) to find out. We should further rule that operation names (make, test, check [hence "chk" in memberchk, as a

                         true
                           |
                 +---------+---------+
                 |                   |
               nonvar               var
                 |
       +---------+-----------+
       |                     |
    compound          atomic=constant
                             |
             +---------------+-----------+
             |               |           |
           number          string   atom=symbol
             |
      +------+------+
      |             |
   rational      complex
      |             |
   integer        float

Every standard Prolog implementation is required to support the var, nonvar, compound, atom, and integer types, and to supply all these type testing predicates. A recognition predicate for an unsupported type must always simply

    proper_rational(Q) :-
        rational(Q), \+ integer(Q).

and similarly there is no standard type for a complex number with a non-zero imaginary part, as this may be defined by

    proper_complex(F) :-
        complex(F), \+ float(F).

    ..., \+ number(X), ..., number(X)

can actually succeed, provided X is a variable at the time of the first test and is suitably instantiated at the time of the second. This sort of behaviour is what makes these predicates "metalogical" instead of being equivalent to infinite tables.
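To see the point in a runnable form, consider this clause; p/1 is a name invented for illustration:

```prolog
% The same test fails, then succeeds, on the same variable, once it
% has been instantiated in between.
p(X) :-
    \+ number(X),   % succeeds while X is still a variable
    X = 3,
    number(X).      % succeeds now that X is the number 3

% ?- p(X).   succeeds, binding X = 3
```

No relation definable by a (possibly infinite) table of facts could behave this way, which is exactly why the type testing predicates are metalogical.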

    compound(Term) -> exists N. exists A. arg(N, Term, A).

    simple(Term) :- \+ compound(Term).

Dec-10 Prolog uses the name "atomic" to mean a non-variable term other than a compound term. In a mixed language system this can be confusing. Logicians normally use the name "constant" for such terms. For compatibility, a standard Prolog implementation

    atomic(Term) :- nonvar(Term), \+ compound(Term).

but it is recommended that constant/1 be supplied as well, with identical meaning. It is unfortunate that a number of writers have invented new names for compound terms. 'complex' we've already dealt with as being needed for complex numbers. Another popular not-invented-here name for compound terms has been "structures". This is a particularly unhelpful name, as

    structure(X) :- compound(X).

in his own programs. What is forbidden is having this in a standard Prolog

    8-bit ASCII to EBCDIC              EBCDIC to 8-bit ASCII
    [256-entry hexadecimal translation tables for each direction]

But what about other alphabets, such as Hebrew, Greek, Cyrillic, Kanji? The answer is that there is nothing in this standard to prohibit their use. The range of character codes must be at least 0..127; it may be more. In particular, a 16-bit character set may be used. All that this standard requires is that the codes 0..127 bear their ASCII meanings. In order to cope with extended alphabets it may be necessary to have both 8-bit and 16-bit strings; as atom names are immutable they may be stored in the most economical format.

    lisp(array(X))

    term_type(Term, Type) :-
        var(Term),
        wait_for_binding_of(Term),  % or report an instantiation fault
        term_type1(Term, Type).
    term_type(Term, Type) :-
        nonvar(Term),
        term_type1(Term, Type).

    term_type1(Term, compound) :- compound(Term).
    term_type1(Term, atom)     :- atom(Term).
    term_type1(Term, string)   :- string(Term).
    term_type1(Term, integer)  :- integer(Term).
    term_type1(Term, rational) :- rational(Term).
    term_type1(Term, float)    :- float(Term).
    term_type1(Term, complex)  :- complex(Term).
    term_type1(Term, number)   :- number(Term).
    term_type1(Term, atomic)   :- atomic(Term).

Except for the existence of the predicate term_type1, this is to be taken literally. If term_type(X, T) is called, and X ends up bound to 3, term_type is to enumerate the types integer, rational, number, and atomic.
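Collecting the enumeration makes the requirement concrete. A sketch, assuming a findall/3 in the usual style; types_of/2 is a name invented here:

```prolog
% Gather every type that term_type/2 ascribes to a term, in clause
% order.  An integer is also a rational, a number, and atomic.
types_of(Term, Types) :-
    findall(Type, term_type(Term, Type), Types).

% ?- types_of(3, Ts).   binds Ts = [integer, rational, number, atomic]
```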

    string_chars(String, ListOfChars)
        if string(String) {
            unify ListOfChars with [c1,...,cn] where String has n
            characters and their codes are c1,...,cn.
        }
        if var(String) {
            check that ListOfChars is a valid list of character codes
            if it is too long, report a string overflow fault
            unify String with a new string containing those codes
        }
        if nonvar(String) and \+string(String) {
            type_fail(1, string)
        }
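The check that ListOfChars is a valid list of character codes (specified in full below) goes directly into Prolog. A sketch, assuming max_char = 127 and the hypothetical error primitives instantiation_fault/0 and type_fail/2:

```prolog
% Walk the list, faulting on variables and type-failing on anything
% that is not a character code or not a proper list.
valid_chars(X) :-
    var(X), !,
    instantiation_fault.
valid_chars([]) :- !.
valid_chars([H|T]) :- !,
    (   var(H) ->
        instantiation_fault
    ;   integer(H), H >= 0, H =< 127 ->
        valid_chars(T)
    ;   type_fail(2, chars)
    ).
valid_chars(_) :-                 % not a variable, [], or a pair
    type_fail(2, chars).
```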

To check that X is a valid list of character codes:

    if var(X), report an instantiation fault.
    if X = [], succeed.
    if X = [H|T] {
        if var(H), report an instantiation fault.
        if \+ integer(H) or H < 0 or H > max_char, type_fail(2, chars).
        check that T is a valid list of character codes.
    }
    otherwise type_fail(2, chars).

Note that this follows the naming convention: given any string there is always exactly one list of character codes which corresponds to it, but given a list of integers there may not be a corresponding string (the list might be too long, or some of the integers might be outside the range of character codes). Note also that the string is

    atom_string(Atom, String)
        if var(Atom) and var(String), report an instantiation fault.
        if nonvar(Atom) and \+ atom(Atom), type_fail(1, atom).
        if nonvar(String) and \+ string(String), type_fail(2, string).
        if nonvar(Atom) {
            unify String with a new string whose contents are the
            character codes of the name of Atom
        } else {
            unify Atom with the atom whose name is String, creating
            the atom if necessary.
        }

This definition suggests that it is possible for an atom to have any name at all that can be represented as a string. However, some Lisps place different restrictions on strings and atom names. In particular, many Lisps demand that any constant whose name looks like the name of a number

    atom_chars(Atom, ListOfChars) :-
        nonvar(Atom),
        atom_string(Atom, String),
        string_chars(String, ListOfChars).
    atom_chars(Atom, ListOfChars) :-
        var(Atom),
        string_chars(String, ListOfChars),
        atom_string(Atom, String).

Although atom_chars is defined in terms of atom_string, it is perfectly permissible to make atom_chars the primitive and define atom_string in terms of it, or even to omit atom_string entirely. There is a subtle point in this definition: we only require the ListOfChars to be a ground list when the Atom is unbound. When we know the Atom, we can use it to fill in the variables in the ListOfChars. So

    atom_chars(append, [97,X,X,101|R])

must succeed and bind X = 112, R = [110,100].
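The same partial-instantiation behaviour is available today from the standard atom_codes/2, which plays the role of atom_chars here (an assumption about the host system):

```prolog
% "append" has the codes [97,112,112,101,110,100]; the partially
% instantiated pattern is filled in from the atom.
demo(X, R) :-
    atom_codes(append, [97,X,X,101|R]).

% ?- demo(X, R).   binds X = 112, R = [110,100]
```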

    number_string(Number, String)
        if var(Number) and var(String), report an instantiation fault.
        if nonvar(Number) and \+ number(Number), type_fail(1, number).
        if nonvar(String) and \+ string(String), type_fail(2, string).
        if nonvar(String) {
            if String cannot be parsed as a number, simply fail.
            unify Number with the number represented by String
        } else {
            "print" Number and unify String with a new string
            holding the result
        }

    number_chars(Number, ListOfChars) :-
        var(Number),
        string_chars(String, ListOfChars),
        number_string(Number, String).
    number_chars(Number, ListOfChars) :-
        nonvar(Number),
        number_string(Number, String),
        string_chars(String, ListOfChars).

There are two subtle points here. The first is the same as with atom_chars; number_chars(12, [X,50]) must succeed and bind X = 49. The second is what happens when the String or ListOfChars is a valid string or character list, but the characters cannot be parsed as a number. Calling number_string(X, "1.e") looks like some sort of mistake. But when both arguments are bound there are two ways of proceeding. We could take the number, "print" it into an internal buffer, and compare the result with the given string. Or we could convert the string to a number, and compare the result with the given number. In fact it is the former method which is more general. It lets us determine, for example, that number_string(1, "2/3") must be false, without any need to implement general rational numbers. It would be difficult to explain to a novice why number_string(1, "x") should behave any differently from atom_string(fred, "x"). The clinching argument in favour of doing nothing special when the string or character list does not represent a number is that it would be very difficult indeed to check the syntax of a non-ground list of characters. For consistency then, "bad syntax" must simply fail in all cases. There is a third subtle point which I'd better make clear in the code: the "unification" of the String with the print name of the number (when the number is instantiated) should ignore case. This is a tiresome detail, and only matters if floats are implemented. Here is the syntax of numbers:

    <number>  ::= <integer> | <ratio> | <float> | <complex>
    <integer> ::= <sign> <digits>
    <sign>    ::= <empty> | -
    <digits>  ::= (0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9)+
    <ratio>   ::= <integer> / <digits>
    <float>   ::= {as in Common Lisp}
    <complex> ::= <float> I <float>

The first thing to note is that + is not a sign. It would make sense if it was, but unfortunately

    ?- name(X, "+2"), atom(X).

succeeds in Dec-10 Prolog. Given that

    ?- X is +2.

succeeds in C Prolog, binding X to the integer 2, I feel that this is an anomaly which should be eliminated. The next thing to note is that because of the ruling that "bad syntax" simply fails, number_string does not have to understand the syntax of number types which are not implemented. To be compatible with Common Lisp, the denominator in N/D should not be 0. In PRESS, however, we found it very useful to have +infinity (1/0), -infinity (-1/0), and undefined (0/0) values. Reproducing the behaviour of the LONG package in a low-level implementation of bignums and ratios is quite easy. On the other hand, an implementation which tries to share the world with LISP could find it difficult to do this. So for the moment this standard does not say whether X/0 must be permitted or not. It should be the case that

    p(X) :-
        number_chars(X, L),
        number_chars(Y, L),
        Y = X.

succeeds for any number X. In the case of integers and ratios this is easy, and if you can do it for floats it is easy to do it for complex numbers. But for floats this is a very stringent requirement, and it means that the floating point I/O routines supplied by the host language (whether LISP, Pop, or C) are unlikely to be usable. There's no helping that, it's a property we

    name(Constant, ListOfChars) :-
        string(Constant), !,
        string_chars(Constant, ListOfChars).
    name(Constant, ListOfChars) :-
        number(Constant), !,
        number_chars(Constant, ListOfChars).
    name(Constant, ListOfChars) :-
        atom(Constant), !,
        atom_chars(Constant, ListOfChars).
    name(Constant, ListOfChars) :-
        var(Constant),
        number_chars(Constant, ListOfChars), !.
    name(Constant, ListOfChars) :-
        var(Constant), !,
        atom_chars(Constant, ListOfChars).
    name(Constant, ListOfChars) :-
        type_fail(1, atom-or-number).

As usual, it is the effect only which is being defined, and these clauses need not be visible. The main reason for defining this predicate is backwards compatibility. There is a rather strange aspect of it. It was pointed out earlier that number_chars need not recognise the syntax of unimplemented number classes. This means that in an implementation such as Dec-10 Prolog which lacks floats, name(X, "1.2e3") may bind X to an atom (as in fact it does) instead of reporting a representation fault.

    name(X, "(QUOTE (F))")

might call the Interlisp function PACKC and then check that the result is an atom or number, and fail on finding that it isn't. name/2 is required to work with any integer or unquoted Prolog identifier; a program that uses it only for those can expect to fit in with any host language. But new programs should generally use atom_chars or number_chars. It is extremely useful to have a single predicate for converting a constant to a list of ASCII codes. Given that name/2 is already strange, adding the first clause so that it can convert a string to a list of characters increases its usefulness without increasing its strangeness too much. It is already the case that

    nonvar(X), name(X, L), name(Y, L)

does not imply that X = Y; consider the case X = '1'. So the fact that name can convert a string to a list of characters but not conversely is not additionally strange. There are two predicates which "construct" numbers from other numbers.

  * rational(Q, N, D)                   % NOT the real definition!
  *     if nonvar(Q) and \+rational(Q), type_fail(1, rational).
  *     if nonvar(N) and \+integer(N), type_fail(2, integer).
  *     if nonvar(D) and \+integer(D), type_fail(3, integer).
  *     if nonvar(N) and nonvar(D) {
  *         if the implementation cannot handle 1/0, 0/0, -1/0 {
  *             if D = 0, zero_divide.
  *         }
  *         unify Q with the value of N/D
  *     } else
  *     if nonvar(Q) {
  *         let N1, D1 be such that N1/D1 = Q and gcd(N1,D1) = 1.
  *         if nonvar(D) {
  *             if D = 0 then {
  *                 if D1 =/= 0 then fail.
  *                 unify N with N1.
  *             } else {
  *                 if gcd(|D|,D1) =/= D1 then fail.
  *                 unify N with N1*D/D1.
  *             }
  *         } else
  *         if nonvar(N) then {
  *             if gcd(|N|,|N1|) =/= |N1| then fail.
  *             unify D with D1*N/N1.
  *         } else {
  *             unify N with N1, D with D1.
  *         }
  *     } else {
  *         report an instantiation fault.
  *     }

    complex(C, R, I)
        if nonvar(C) and \+complex(C), type_fail(1, complex).
        if nonvar(R) and \+float(R), type_fail(2, float).
        if nonvar(I) and \+float(I), type_fail(3, float).
        if nonvar(C) {
            unify R with re(C), I with im(C).
        } else if nonvar(R) and nonvar(I) {
            unify C with R+i*I.
        } else {
            report an instantiation fault.
        }

There is no problem with complex/3. We could almost define it in Prolog as

    complex(c(R,I), R, I) :-
        float(R), float(I), I \== 0.0.
    complex(R, R, 0.0) :-
        float(R).

R and I cannot be rational numbers. If they could be, we'd run into the problem discussed in the next paragraph. There is quite a nasty little problem with rational/3. For any given pair of integers N, D there is exactly one rational number N/D (possibly including +infinity, -infinity, and undefined as rational numbers). However, for any given rational number Q there are infinitely many ways of expressing it as the quotient of two integers (with the exception of 0/0). According to the design principle, then, rational(Q, N, D) for var(N) and var(D) should report an instantiation fault or delay until at least one of N and D is bound. But according to the description above, it will succeed uniquely. This has the extremely nasty effect that

    rational(1/2, N, D), D = 4

will fail (as rational/3 bound N=1, D=2) but

    D = 4, rational(1/2, N, D)

will succeed. It could be argued that this is due to rational/3 trying to be too clever, and when nonvar(Q) it should not take the other arguments into account. That would make both calls fail. But that would mean that

    rational(Q, 2, 4), rational(Q, 2, 4)

would fail. If we ever hope to prove anything about Prolog programs, this sort of behaviour in the primitives is totally unacceptable. The conclusion I am forced to is that there is no

    divide(Q, N, D)
        if nonvar(Q) and \+rational(Q), type_fail(1, rational).
        if nonvar(N) and \+ integer(N), type_fail(2, integer).
        if nonvar(D) and \+ integer(D), type_fail(3, integer).
        if var(N) or var(D), instantiation fault.
        unify Q with N/D.

    rational(Q, N, D)
        if nonvar(Q) and \+rational(Q), type_fail(1, rational).
        if nonvar(N) and \+ integer(N), type_fail(2, integer).
        if nonvar(D) and \+ integer(D), type_fail(3, integer).
        if nonvar(Q) {
            let N1, D1 be such that Q = N1/D1, D1 >= 0, gcd(N1,D1) = 1.
            unify N = N1, D = D1.
        } else if nonvar(N) and nonvar(D) {
            if D < 0 or gcd(N,D) =/= 1, simply fail.
            unify Q with N/D.
        } else {
            instantiation fault.
        }

The purpose of divide/3 is to construct a rational number by performing the division. The purpose of rational/3 is to construct a rational number by packing together two integers which already have the right form, or to unpack a rational into two integers. These definitions are free of the problems which made the previous definition of rational/3 worthless. Recall that these predicates may have to report a representation fault if the complex or rational data types are not implemented. In the interests of helping people debug their code, a representation fault may be reported even when the result is representable, that is

    rational(X, X, 1)
    complex(X, X, 0.0)

may report a representation fault.
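For concreteness, divide/3 can be approximated over a term representation of rationals. The canonical N/D term representation and the gcd/3 helper are inventions of this sketch, not mandated by the standard (which leaves the rational type to the implementation):

```prolog
% Greatest common divisor of two non-negative integers by Euclid's
% algorithm; gcd(0, B, B) falls out of the first clause.
gcd(X, 0, G) :- !, G is abs(X).
gcd(X, Y, G) :- Z is X mod Y, gcd(Y, Z, G).

% divide(Q, N, D): Q is the canonical N1/D1 term with D1 > 0 and
% gcd(N1, D1) = 1.  Rejects D = 0 (the implementation-dependent case).
divide(Q, N, D) :-
    integer(N), integer(D), D =\= 0,
    A is abs(N), B is abs(D),
    gcd(A, B, G),
    (   N * D < 0 -> N1 is -(A // G)
    ;   N1 is A // G
    ),
    D1 is B // G,
    Q = N1/D1.

% ?- divide(Q, 2, 4).    binds Q = 1/2
% ?- divide(Q, 2, -4).   binds Q = -1/2
```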

    functor(Term, Fsymbol, Arity)
        if nonvar(Fsymbol) and \+atomic(Fsymbol), type_fail(2, atomic).
        if nonvar(Arity) and (\+integer(Arity) or Arity < 0),
            type_fail(3, natural).
        if var(Term) {
            if var(Fsymbol) or var(Arity), instantiation fault.
            if Arity = 0 {
                unify Term = Fsymbol.
            } else {
                if Arity > some limit, implementation fault.
                construct a new term with principal functor Fsymbol
                and arity Arity, all of whose arguments are distinct
                unbound new variables
                unify Term = this new term
            }
        } else {
            if atomic(Term) {
                unify Fsymbol = Term, Arity = 0.
            } else {
                compound(Term) is true.
                unify Fsymbol = the principal function symbol of Term
                and Arity = the arity of Term
            }
        }

This looks amazingly complicated, but it mostly follows from the design principle. If Term is a nonvariable, or Fsymbol and Arity are both nonvariables, we have enough information to succeed at most once; otherwise there is an instantiation fault, and instantiation faults should be checked for before type failures. The Term could be anything. The Arity must be a non-negative integer, and the call should type fail if it is not.

    The Arity argument of functor/3 and the ArgNo argument of arg/3
    MUST NOT be integer expressions. The call MUST type fail if given
    such erroneous values.

The inquisitive programmer is entitled to use functor(a(1,2), a, 1+1) to discover what a type failure looks like. When the Term is a variable, permitting integer expressions here looks like a useful extension. However, permitting integer expressions here would preclude efficient compilation of a construct which ought to be cheap. For if

    functor(T, a, 1+1)

were to be allowed, consistency would mean that

    functor(a(1,2), a, X)

would have to be able to enumerate X = 2, X = 1+1, and all the other integer expressions with value 2. This is obviously silly, and the extra "generality" of allowing an integer expression when you are building a term is spurious, as you can always write

    N is Expr,
    functor(Term, F, N)

The one thing which is not obvious is that functor(C, C, 0) must succeed for any constant C, so that if constant(C), functor(C, F, N) binds F = C, N = 0, and functor(T, C, 0) binds T = C. Since there are values of N for which functor(T, 72.3, N) succeeds, we can only type fail if the function symbol is a compound term. An implementation is permitted to impose an upper bound on the arity of a term. However, all arities up to 200 must be accepted. This is the only Prolog primitive which needs to check this limit. =.. is defined in terms of functor, so no separate check is needed. Note that the error reported is an implementation fault, not a user error. functor(T, a, 9999) is perfectly well defined; if a Prolog can't do it, that's its fault, not the programmer's.
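The "N is Expr" idiom can of course be packaged once and for all; make_term/3 is a helper name invented here:

```prolog
% Build a term whose arity is given by an integer expression, by
% evaluating the expression first and then calling functor/3 with a
% plain integer, as the ruling above requires.
make_term(F, Expr, Term) :-
    N is Expr,
    functor(Term, F, N).

% ?- make_term(a, 1+1, T).   binds T = a(_, _)
```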

    arg(ArgNo, Term, Argument)
        if var(ArgNo) or var(Term), instantiation fault.
        if \+integer(ArgNo) or ArgNo < 1, type fail 1.
        if constant(Term) and \+atom(Term), type fail 2.
        if ArgNo > the arity of Term, simply fail.
        unify Argument with the ArgNo'th argument of Term.

Note that arg(1, a, X) simply fails, while arg(1, 0, X) type fails. This is because 'a' might have come from a call functor(T,a,N) where N could take on any non-negative value, and it makes sense to consider it as the "empty" case of a "vector" class. But no other atomic value has any related compound terms, so it makes sense to type fail for them. A common pattern for low level code is
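Concretely, the specification above gives (a sketch):

```prolog
?- arg(1, foo(a,b), X).   % binds X = a
?- arg(3, foo(a,b), X).   % fails: 3 > the arity of the term
?- arg(1, a, X).          % simply fails: an atom is an "empty vector"
?- arg(1, 0, X).          % type fails: 0 has no related compound terms
?- arg(N, foo(a,b), X).   % instantiation fault: ArgNo unbound
```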

    foo(Old, New, Context) :-
        var(Old), !,
        var_foo(Old, New, Context).
    foo(Old, New, Context) :-
        nonvar_foo(Old, New, Context).
    foo(Old, New, Context) :-
        functor(Old, F, N),
        functor(New, F, N),
        foo(N, Old, New, Context).

    foo(0, _, _, _) :- !.
    foo(N, Old, New, Context) :-
        succ(M, N),
        arg(N, Old, OldArg),
        foo(OldArg, NewArg, Context),
        arg(N, New, NewArg),
        foo(M, Old, New, Context).

The succ/2 call shields the arg/3 calls from N < 1. If the succ/2 call were moved down, and the cut were not there, the possibility of arg/3 being called with N=0 would arise, and in this context we would prefer arg(0, 72.3, X) just to fail. If we habitually use the template shown (and if the compiler is smart enough the cut can be dispensed with) it doesn't matter whether arg/3 fails or type fails for N=0 or for general constants, and the usefulness of picking up likely errors suggests that type failure is best. =.. (pronounced "univ") converts a term to a list and vice versa. It is defined in an appendix. That definition relies on length/2 (as defined above) and functor/3 to generate all appropriate instantiation faults and type failures. It is interesting to note that

    X =.. [F,A1,...,An],

where the list is spelled out in the code, can be compiled nearly as efficiently as

    X = f(A1,...,An),

certainly without ever generating the actual list. While the general case of univ is to be avoided as turning over more non-(local stack) store than the use of functor and arg, this case is quite reasonable. If it were generally appreciated that this can be compiled efficiently, some of the demand for "variable functors" might go away, but I suppose that is too much sense to hope for. While these three predicates are all we actually need, it is useful to define same_functor as well:

    same_functor(T1, T2) :-
        same_functor(T1, T2, _, _).
    same_functor(T1, T2, N) :-
        same_functor(T1, T2, _, N).
    same_functor(T1, T2, F, N) :-
        nonvar(T1),
        functor(T1, F, N),
        functor(T2, F, N).
    same_functor(T1, T2, F, N) :-
        var(T1),
        functor(T2, F, N),
        functor(T1, F, N).

The reason for wanting these predicates is that they are reversible. With them available, we can write
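With the definitions above, same_functor/3 builds whichever term is uninstantiated from the other, so both of these queries behave the same way:

```prolog
?- same_functor(foo(a,b), T, N).   % binds T = foo(_,_), N = 2
?- same_functor(T, foo(a,b), N).   % binds T = foo(_,_), N = 2
```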

    foo(Old, New, Context) :-
        var(Old), !,
        var_foo(Old, New, Context).
    foo(Old, New, Context) :-
        nonvar_foo(Old, New, Context).
    foo(Old, New, Context) :-
        same_functor(Old, New, N),
        foo(N, Old, New, Context).

    foo(0, _, _, _) :- !.
    foo(N, Old, New, Context) :-
        succ(M, N),
        arg(N, Old, OldArg),
        arg(N, New, NewArg),
        foo(OldArg, NewArg, Context),
        foo(M, Old, New, Context).

and so propagate this reversibility up a level without too much pain. In Dec-10 Prolog (where amongst other things we can't swap the arg/3 and foo/3 calls in the last clause as I just did) the demands of "efficiency" force logic out into the cold when one writes low level term hacking code. This should not be so. A standard conforming implementation can defeat the point of these predicates by implementing them exactly as written, but at least reversible code will still run. The point of putting these predicates in the standard, of course, is to prevent the names being used for something else.

    variables, in a standard order (roughly, oldest first - the order is
        NOT related to the names of variables);
    integers, from -"infinity" to +"infinity";
    atoms, in alphabetical (i.e. ASCII) order;
    compound terms, ordered first by arity, then by the principal function
        symbol, then by the arguments in left to right order.

This predicate is defined because sort and keysort need it, and they are defined because bagof and setof need them. The existence of a total order on terms permits the implementation of efficient algorithms for many data structures, such as linear time set operations, N^2 lg N graph operations, and the like. The fact that the ordering is defined for variables presents difficulties. There are logical anomalies, such as the fact that

    X @< 1

may be true at one time (because X is unbound) and false at another (because X is bound to 72). Undoubtedly the most serious problem is that there is an unstated assumption. So long as we do not bind any variables,

    "Major" types should correspond to intervals in the standard order.

In the existing standard order, the major types are thus true, var, nonvar, compound, atomic, number, atom. This doesn't uniquely define where things like strings should go. Strings are more like atoms than they are like any other data type, so they should be adjacent to atoms in the standard order. But there is another "data type" which is unnamed: the set of terms T for which

    functor(T, F, N), atom(F)

is true. That should be an interval too. So strings cannot go between atoms and compound terms. The only place left for strings is less than any atom but greater than any other kind of constant. The position of other data types (such as bit maps) is even less clear. One thing which is absolutely clear is that

    var(X) :- X @< _.

and it really would be a pity to lose this property. (The anonymous variable is younger than any other variable, and the variable ordering is by age.) So other data types must be greater than variables and less than strings. But this doesn't define their order with respect to numbers. Indeed, they could fall into two groups, one on either side. However, another unnamed "type" might be of interest, the "standard" constants, and that means that non-standard constants must be less than numbers. There is a problem with complex numbers. There is no way that we can provide a total order which is compatible with the standard arithmetic ordering on complex numbers, as the standard mathematical definition has it that

    variables, in some time-invariant order;
    all "other" atomic objects, in some time-invariant
        implementation-defined order;
    integers and floating point numbers, ordered according to numeric
        value (but see section 7.4 for 2 -vs- 2.0), and complex numbers in
        lexicographic order;
    strings, in lexicographic order;
    atoms, in lexicographic order;
    compound terms, ordered first by arity, then by function symbol
        (always an atom), then by arguments from left to right.

The fundamental predicate here is

    compare(R, X, Y)
        If X comes before Y in the standard order, unify R with '<'.
        If X is identical to Y, unify R with '='.
        If X comes after Y in the standard order, unify R with '>'.

There are six term comparison predicates defined in terms of compare/3.

    X @< Y  :- compare(<, X, Y).
    X @>= Y :- \+ compare(<, X, Y).
    X == Y  :- compare(=, X, Y).
    X \== Y :- \+ compare(=, X, Y).
    X @> Y  :- compare(>, X, Y).
    X @=< Y :- \+ compare(>, X, Y).

There are two kinds of Prolog implementation where these predicates are not desirable. One is implementations like PopLog, where it is difficult to order variables. The other is implementations like MU Prolog which have coroutining capabilities, and would rather wait until they can ascribe a definite order. One point of this proposed standard is to ensure that certain predicates, even if not implemented, shall not have their names used to mean something else. It is therefore incumbent on me to propose some useful alternative for these other systems so that they shan't make these names do the wrong things. Therefore I define
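A sketch of what the standard order gives, with numbers before atoms and smaller arities before larger:

```prolog
?- compare(R, 1, b).            % binds R = '<'  (numbers precede atoms)
?- compare(R, f(b), g(a)).      % binds R = '<'  (same arity, so f @< g decides)
?- compare(R, g(a), f(a,b)).    % binds R = '<'  (arity 1 precedes arity 2)
?- 1 @< b, f(a,b) @> g(a).      % succeeds
```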

    term_compare(R, X, Y) :-
        if X and Y are identical, unify R with '='.
        otherwise, find the first place in a left to right scan where
        they differ.  If either one has a variable in that place, wait
        for that variable to be bound and try again if the implementation
        supports coroutining, or report an instantiation fault if it
        doesn't.
        call compare(R, X, Y).

This is not quite the same as demanding that X and Y be ground. term_compare(=,A+B,A+B) should succeed. What is required is that the call be delayed until X and Y are either identical or clearly different in a place where the ordering can be determined. When term_compare eventually returns a result, it returns the same result that compare would then. The point is that if term_compare says that the ordering is such and such, it will never change its mind. (Unlike compare/3 with variables.) The infix relations are

    X $< Y  :- term_compare(<, X, Y).
    X $>= Y :- term_compare(R, X, Y), (R = '=' ; R = '>').
    X $> Y  :- term_compare(>, X, Y).
    X $=< Y :- term_compare(R, X, Y), (R = '=' ; R = '<').

There is a subtle point here.

    X $= Y
        unify X=Y.  If this fails, fail.  If no variables were bound,
        succeed.  Otherwise, undo the bindings and wait until some
        variable in X or Y is bound.
    X $\= Y
        unify X=Y.  If this fails, succeed.  If no variables were bound,
        fail.  Otherwise, undo the bindings and wait until some variable
        in X or Y is bound.

This gives us rather a lot of "equality" predicates.
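A sketch of the behaviour this implies (the delaying case assumes a coroutining implementation):

```prolog
?- a+B $= a+B.     % succeeds at once: unification binds nothing
?- f(a) $= g(a).   % fails at once: the terms cannot unify
?- f(X) $= f(a).   % would bind X, so undo that and wait for X to be bound
```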

    keysort(PairList, SortedPairList)
        sorts a list of Key-Value pairs so that the Keys are in standard
        order, retains duplicates, and unifies the result with
        SortedPairList.  This is needed for bagof/3 so that solutions
        with the same bindings for free variables can be grouped together.
        It is vital that this sort be stable.
    sort(Random, Sorted)
        sorts the elements of the Random list into standard order,
        eliminates duplicates, and unifies the result with Sorted.  This
        is used by setof/3 to convert a chronologically ordered list to a
        set representation, and is used by the ordered-set library
        package to convert lists to sets.

C Prolog added two more predicates:

    msort(Random, Sorted)
        sorts a list of terms as sort/2 does, but does not eliminate
        duplicates.  This is useful for putting bags into a standard form.
    merge(L1, L2, Merged)
        combines two ordered lists, retaining duplicates.

Now there are many variations on sorting that could be useful. It would occasionally be useful to sort a list into descending order instead of sorting it into ascending order and then reversing it, to use other pairing symbols than '-', such as '=', or to use some argument of an element other than the first as a key. Therefore I have chosen to standardise, not the Dec-10/C Prolog routines, but a more general pair sort/4 and merge/5 in terms of which the current predicates can be defined. An appendix gives the definitions of these predicates, and also definitions of sort/2, msort/2, keysort/2, merge/3 in terms of sort/4 and merge/5. The details of the code are not to be regarded as part of the standard. I have a C version of these routines which is quite efficient.

    sort(Key, Order, Random, Sorted)
        sorts the list Random according to the Key and Order
        specifications, and unifies Sorted with the result.  Key is a
        non-negative integer.  If Key=0, it means that the entire terms
        are to be compared.  If Key=N>0, it means that the Nth arguments
        of the terms are to be compared.  Order says whether the list is
        to be sorted into ascending (<, =<) or descending (>, >=) order
        and whether duplicates are to be retained (=<, >=) or eliminated
        (<, >).

The way to remember the Order argument is that it is the relation which holds between adjacent elements in the Sorted result. A warning to implementors:
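The appendix has the official definitions, but the reduction of the Dec-10/C Prolog predicates to sort/4 runs along these lines (a sketch; the merge/5 argument order shown is my assumption):

```prolog
sort(Random, Sorted)   :- sort(0, <,  Random, Sorted).
msort(Random, Sorted)  :- sort(0, =<, Random, Sorted).
keysort(Pairs, Sorted) :- sort(1, =<, Pairs, Sorted).   % compare 1st arguments, i.e. the Keys
merge(L1, L2, Merged)  :- merge(0, =<, L1, L2, Merged).
```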

    append([], L, L).
    append([H|T], L, [H|R]) :-
        append(T, L, R).

    member(X, [X|_]).
    member(X, [_|T]) :-
        member(X, T).

    memberchk(X, [X|_]) :- !.
    memberchk(X, [_|T]) :-
        memberchk(X, T).

The definitions of member/2 and memberchk/2 are to be taken literally. Too many programmers have been unable to resist the temptation to use memberchk(X, L) to add X to the end of a list with an unbound tail for any other definition to be acceptable. append/3 is another matter. The clauses above define the values it computes and the order it computes them in. However, it may report an instantiation fault instead of entering a runaway recursion, or delay in a coroutining Prolog, and it is strongly recommended that it should. It may also type fail, while member/2 and memberchk/2 may not. It is worth noting that if L is bound to a term [T1,...,Tn] (that is, it is a list and does not end with a variable), then

    append(L, X, Y)
        has at most one solution, whatever X and Y are, and cannot
        backtrack at all.
    append(X, Y, L)
        has at most length(L)+1 solutions, whatever X and Y are, and
        though it can backtrack over these it cannot run away without
        finding a solution.  A Prolog implementation is required to find
        all the relevant solutions.  In this sense append/3 is not
        primitive, because a primitive would be allowed to delay or
        instantiation fault.
    append(X, L, Y)
        however, can backtrack indefinitely if X and Y are variables.
        This may be reported as an instantiation fault.

The question does arise: what does functor([_|_], Cons, 2) bind Cons to, and what does name([], Nil) bind Nil to? Historically, the answers have been Cons=(.), Nil="[]". There is little or nothing to gain in a new Prolog by changing the Cons symbol. Indeed, several new Prolog systems represent list cells in a special compact fashion where the Cons symbol or functor is not actually stored. Having the Cons symbol as (.) means that with a suitable operator declaration a.b.c.[] is an acceptable way of writing the list [a,b,c], and it would be a pity to lose this. A Prolog embedded in Lisp might as well continue the Cons=(.) tradition. And of course there are programs which ask functor(X,'.',2) to test whether X is a list instead of asking X=[_|_]. The name of [] is another matter. There is a substantial benefit to Prologs embedded in Lisp or Pop if Nil="NIL" or Nil="nil" is permitted, or perhaps even Nil="()". Now there is already something strange about the token [] in Prolog. There may be any number of spaces between the [ and the ]; in fact [] is really a
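For example, with the first argument and the whole list reversed, the bounded case enumerates every split of a proper list:

```prolog
?- append(X, Y, [a,b]).
%   X = [],    Y = [a,b] ;
%   X = [a],   Y = [b]   ;
%   X = [a,b], Y = []    ;
%   then fails: exactly length(L)+1 = 3 solutions
```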

    The empty list may be represented by one of the atoms '[]', '()',
    'nil', or 'NIL', the choice made once and for all by the implementor.
    It may not be represented by any other atom.  The parser must turn
    the empty list token [ ] into this atom.

So neither the truth nor the falsehood of []='[]' or []=nil is guaranteed. But the truth of length([],0) is guaranteed even though that of length(nil,0) is not, and the truth of []\='Nil' is guaranteed.

    string_size(StringOrAtom, Size)
        if nonvar(Size) and (\+integer(Size) or Size < 0),
            type_fail(2, natural).
        if atom(StringOrAtom) {
            atom_chars(StringOrAtom, L), length(L, Size)
        } else if string(StringOrAtom) {
            string_chars(StringOrAtom, L), length(L, Size)
        } else if var(StringOrAtom) {
            instantiation_fault
        } else {
            type_fail(1, string-or-atom)
        }

    string_char(CharNo, StringOrAtom, CharCode)
        if nonvar(CharNo) and (\+integer(CharNo) or CharNo < 1),
            type_fail(1, natural).
        if nonvar(CharCode) and (\+integer(CharCode) or CharCode < 0),
            type_fail(3, natural).
        if nonvar(StringOrAtom) and \+string(StringOrAtom) and
            \+atom(StringOrAtom), type_fail(2, string-or-atom).
        if var(CharNo) or var(StringOrAtom), instantiation_fault.
        succ(K, CharNo), length(P, K),
        if atom(StringOrAtom) {
            atom_chars(StringOrAtom, L), append(P, [CharCode|_], L)
        } else {
            string_chars(StringOrAtom, L), append(P, [CharCode|_], L)
        }

    substring(StringOrAtom, Offset, Length, SubString)
        StringOrAtom must be a string or atom
        Offset must be an integer >= 0
        Length must be an integer >= 0
        if bound, SubString must be a string or atom.
        atom(StringOrAtom) -> atom_chars(StringOrAtom, L)
                            ; string_chars(StringOrAtom, L).
        length(Drop, Offset), length(Keep, Length),
        append(Drop, X, L), append(Keep, _, X),
        atom(StringOrAtom) -> atom_chars(SubString, Keep)
                            ; string_chars(SubString, Keep).

    concat(A, B, C) <->
        A is a string or atom, B is a string, atom or number, C is the
        same sort of constant as A, and the corresponding lists of
        characters satisfy append(La,Lb,Lc).

    concat(A, B, C)
        if nonvar(A) and \+(string(A) or atom(A)),
            type_fail(1, string-or-atom).
        if nonvar(C) and \+(string(C) or atom(C)),
            type_fail(3, string-or-atom).
        if var(B), instantiation_fault.
        if \+(string(B) or atom(B) or number(B)), type_fail(2, constant).
        string(B) -> string_chars(B, Lb) ; name(B, Lb).
        if nonvar(A) {
            string(A) -> string_chars(A, La) ; atom_chars(A, La).
            append(La, Lb, Lc).
            string(A) -> string_chars(C, Lc) ; atom_chars(C, Lc).
        } else if nonvar(C) {
            string(C) -> string_chars(C, Lc) ; atom_chars(C, Lc).
            append(La, Lb, Lc).
            string(C) -> string_chars(A, La) ; atom_chars(A, La).
        } else {
            instantiation_fault.
        }

The first thing to note about these is that they can be applied to atoms as well as to strings, so a Prolog system which does not support strings should still support these predicates. In both substring and concat the type of the result is the same as the type of the first argument. The name 'concat' was chosen instead of something with "string" in it because 'concat' is already in the Prolog library with this meaning. It allows the second argument to be a number because 'concat' mainly exists for gensym/2; one wants to call

    part_string(StringOrAtom, Neck, Before, After) :-
        string_size(StringOrAtom, Size),
        Keep is Size-Neck,
        substring(StringOrAtom, 0, Neck, Before),
        substring(StringOrAtom, Neck, Keep, After).

NB: the operation is not string_part; that would take a string and a predicate and part the characters according to some property. The argument order of string_char was chosen to resemble arg/3. Extending these predicates to accept lists of ASCII codes might seem a useful thing to do. However, that would leave us with the problem of distinguishing between the empty list [] and the atom []. So the extension is ruled out.
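Assuming the specifications above, these queries illustrate the intended behaviour on atoms:

```prolog
?- string_size(lamb, N).         % binds N = 4
?- substring(lamb, 1, 2, S).     % binds S = am (an atom, like the 1st argument)
?- concat(fred, 23, X).          % binds X = fred23 (as gensym/2 wants)
?- part_string(lamb, 1, B, A).   % binds B = l, A = amb
```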

They work on integers, not on expressions. They are invertible (that is, succ(M,N) will solve for N given M or for M given N) and so follow the "design principle" in a way that "N is M+1" does not. They can be compiled into efficient code with little effort. For example, a compiler noticing that Sum is unbound in the goal plus(X,Y,Sum) can generate the equivalent of

    Sum is X+Y

but without the necessity of invoking a tree walker to evaluate X and Y; the efficiency of these predicates is due to the fact that they do this. Further, as the arguments can only be constants or variables (the compiler, detecting anything else, can replace such a goal by code to report a type failure) there is no need to build compound terms at run time or put these variables in the global stack. They do not seduce beginners into writing "Sum = X+Y", which is a particularly nasty bug, as if Sum later on appears in an arithmetic relation or an 'is' it will appear to have the right value. A clause containing at most one of the comparison predicates and at most one of the computation predicates is generally clearer than a clause containing "is". This is of course a matter of personal taste. However, there are things you can do with 'is' that you can't do with these predicates. Most notably, C Prolog, NIP, Prolog-X, and PopLog all provide floating point.

    lt(X, Y) <-> integer(X) & integer(Y) & "X is less than Y".
        if nonvar(X) and not integer(X) then type fail.
        if nonvar(Y) and not integer(Y) then type fail.
        if var(X) then instantiation fault.
        if var(Y) then instantiation fault.
        if X @< Y then succeed else fail.

    le(X, Y) <-> integer(X) & integer(Y) & "X is less than or equal to Y".
        if nonvar(X) and not integer(X) then type fail.
        if nonvar(Y) and not integer(Y) then type fail.
        if var(X) then instantiation fault.
        if var(Y) then instantiation fault.
        if Y @< X then fail else succeed.

There are three ways of comparing two numbers.

    lt(X, Y)
        works only for integers, fails for non-integers, and reports an
        instantiation fault if either argument is a variable and the
        other is a variable or integer.
    X @< Y
        compares arbitrary terms, and defines an order even on variables.
        It generalises micro-PROLOG's "LESS" predicate, and is
        indispensable for the implementation of setof/3, bagof/3, and
        efficient implementation of sets and finite maps.  The fact that
        "1 @< b" is defined is useful, but when you intend to compare
        integers only it is even more useful to say so.
    X < Y
        compares arbitrary ground arithmetic expressions, by evaluating
        both and then calling @< on the results.  A Prolog definition for
        it is given in the second appendix.

The term comparison predicate compare/3 is the fundamental operation used to define all the comparison predicates. It is adequately defined in the Dec-10 Prolog manual, except to note that which ordering is given to variables isn't all that important. PopLog could for example put a "timestamp" in the spare word of a variable record and do comparison on that. It is important to note that the "law"

    le(X, Y) <-> \+ lt(X, Y)

does NOT hold: both le/2 and lt/2 fail when either argument is a nonvariable non-integer, so le(a,b) fails while \+ lt(a,b) succeeds.

    gt(X, Y) :- lt(Y, X).
    ge(X, Y) :- le(Y, X).

These predicates are not otherwise necessary, but it can be clearer to use them. While they are

    - if X and Y are both integers, the goal should be replaced by "true"
      or "fail" as appropriate.  This case is most likely to arise in
      automatically generated code such as that produced by the unfolder.
    - if X or Y is a non-variable non-integer, the call should be
      replaced by a type_fail call.
    - if X or Y is a variable which must be uninstantiated at this point,
      the call should be replaced by an instantiation_fault call.
    - otherwise the compiler can generate the obvious machine code.
      There should be an implementation subroutine with a fast calling
      method that dereferences a variable and checks that the result is
      an integer; all the arithmetic code can use it.

If anyone wants to make these operators, they are welcome to do so, but questions of syntax apart from relation name and argument order are beyond the scope of this document.

    between(Lower, Upper, X) <->
        integer(Lower) & integer(Upper) & integer(X) &
        Lower =< X =< Upper.

        if nonvar(Lower) and \+integer(Lower), type_fail(1,integer).
        if nonvar(Upper) and \+integer(Upper), type_fail(2,integer).
        if nonvar(X) and \+integer(X), type_fail(3,integer).
        if nonvar(Lower) and nonvar(Upper) then
            if Lower > Upper then fail
            else if var(X) then {
                try binding X to each of the integers Lower..Upper in
                turn, but the search order is not specified
            } else {
                if Lower =< X =< Upper then succeed else fail
            }
        else if nonvar(X) then {
            if nonvar(Lower) and Lower > X then fail
            else if nonvar(Upper) and Upper < X then fail
            else instantiation fault
        } else {
            instantiation fault.
        }

    range(Lower, X, Upper) :-
        between(Lower, Upper, X).

The pseudo-code for between/3 looks quite complicated, but the Prolog

    foo(X, Lower, Upper)
    foo(Lower, X, Upper)
    foo(Lower, Upper, X)

As with the question of whether it is a good idea to have gt as well as lt, the ground of the decision is "what is a useful partially applied form to give maplist/checklist/mapassoc/...?" The answer is immediate: between(L,U) is a useful combination and the others are not, because X is the only parameter which has a finite solution set and can thus be generated. So between/3 is uniquely determined as the best possible bounded enumeration predicate. Why then have I included range/3? Just as a gesture of politeness to Harry Barrow, who is the only Prolog programmer not at Edinburgh ever to contribute to the Prolog subroutine library. (Udi Shapiro has contributed the Concurrent Prolog system, but while it contains a number of utility predicates they are not split out so that people can get at them; he has contributed to the
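For example, assuming the usual library checklist/2, the partially applied form reads naturally:

```prolog
?- between(1, 3, X).                    % enumerates X = 1, 2, 3 (in some order)
?- between(1, 3, 2).                    % succeeds: a test, no search
?- checklist(between(0'0, 0'9), "19").  % every code in the list is a digit code
```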

    - if there is enough information at compile time to determine the
      result, generate a "true", "fail", "type fail", or "instantiation
      fault" call as appropriate.
    - if X is known to be uninstantiated (perhaps this is the first
      mention of it in the clause), call directly to the enumeration case
      (which has to check that Lower and Upper are integers in the right
      order).
    - it can happen that X is known to be instantiated.  If it is
      mentioned in integer(X) or any other arithmetic relation upstream
      of this point, it must be instantiated to an integer, and
      comparison code can be generated as for the two le-s.

This means that in the clause

    case_shift(Ucase, Lcase) :-
        plus(Ucase, 32, Lcase),
        between(0'A, 0'Z, Ucase).

the call to "between" is known

    case_shift(Ucase, Lcase) :-
        between(0'A, 0'Z, Ucase),
        plus(Ucase, 32, Lcase).

as that can compute all the solutions. Should we include gen_nat and/or gen_int in some form as standard generators as well? I'm a bit unhappy about including any form of

    succ(P, S) <-> integer(P) & integer(S) & S = P+1 & P >= 0.
        if nonvar(P) and not integer(P) then type fail.
        if nonvar(S) and not integer(S) then type fail.
        if integer(P) then {
            if P >= 0 and S unifies with P+1 then succeed else fail
        } else if integer(S) then {
            if S > 0 then unify P with S-1 and succeed else fail
        } else instantiation fault.

    plus(A, B, Sum) <-> integer(A) & integer(B) & integer(Sum) &
                        A+B = Sum.
        if nonvar(A) and not integer(A) then type fail.
        if nonvar(B) and not integer(B) then type fail.
        if nonvar(Sum) and not integer(Sum) then type fail.
        if var(A) then {
            if var(B) or var(Sum) then instantiation fault.
            unify A with Sum-B and succeed.
        } else if var(B) then {
            if var(Sum) then instantiation fault.
            unify B with Sum-A and succeed.
        } else {
            if Sum unifies with A+B then succeed else fail.
        }

Note: if the implementation does not support bignums, these predicates can report an integer overflow fault. As I said at the beginning, I originally copied succ/2 from Prolog-X. But in Prolog-X succ(X,Y) and plus(1,X,Y) were identical. Why the difference? (Prolog-X has now changed to this definition, I believe.) The difference is that plus is for
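Invertibility means each of these modes works:

```prolog
?- succ(3, S).       % binds S = 4
?- succ(P, 4).       % binds P = 3
?- succ(P, 0).       % fails: 0 is not a successor
?- plus(2, B, 5).    % binds B = 3
?- plus(2, 3, Sum).  % binds Sum = 5
```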

    p(0, ..., ...) :- !,
        <base case>.
    p(N, ..., ...) :-
        <step case>(..., ...),
        M is N-1,
        p(M, ..., ...).

The reason we need the cut in the first clause is to ensure that the "is" in the second clause won't count down to -1, -2, -3, ... Of course we could (and should!) write this without the cut, by putting a test in the second clause. And that would also get around the possibility of a call with the output arguments so instantiated that the base case fails before it gets to the cut. With 'succ' available, we combine the test and the subtraction, so that the well behaved code is no longer nor more complex than the hacky code. And a compiler could perhaps recognise succ as a form of pattern matching. So we get

    p(0, ..., ...) :-
        <base case>.
    p(N, ..., ...) :-
        succ(M, N),        % N is a successor natural
        <step case>(..., ...),
        p(M, ..., ...).

If p/3 is called with its first argument instantiated to anything other than a non-negative integer, it will simply fail, without any need for complicated cuts and tests. It is just like

    p(0, ...) :- ...
    p(s(M), ...) :- ...

in that respect. This strong similarity to the obvious pattern matching definition of successor is the reason for succ's utility.
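As a concrete instance of the template (my example, not part of the standard), summing the first N naturals needs neither cut nor explicit test:

```prolog
sum_to(0, 0).
sum_to(N, S) :-
        succ(M, N),        % fails quietly unless N is a positive integer
        sum_to(M, S0),
        plus(S0, N, S).

% ?- sum_to(4, S).    binds S = 10
% ?- sum_to(-1, S).   simply fails
```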

    times(A, B, P) <-> integer(A) & integer(B) & integer(P) & A*B = P.
        if nonvar(A) & not integer(A) then type fail.
        if nonvar(B) & not integer(B) then type fail.
        if nonvar(P) & not integer(P) then type fail.
        if nonvar(A) & nonvar(B) then
            if P unifies with A*B then succeed else fail
        if nonvar(A) & nonvar(P) then        % B is var
            if A = 0 then
                if P = 0 then instantiation fault else fail
            else if A divides P then unify B with P/A and succeed
            else fail
        if nonvar(B) & nonvar(P) then        % A is var
            if B = 0 then
                if P = 0 then instantiation fault else fail
            else if B divides P then unify A with P/B and succeed
            else fail
        else instantiation fault

As usual, this definition is forced by the decision that the predicate shall apply to integers and that the design principle shall be followed. Only the argument order is left. The argument order is the same as the argument order of plus/3, which is to say that it has the usual output at the right. This order is good for mapping, e.g. to multiply a list of integers by 7 we have only to call

    maplist(times(7), SingleFold, SevenFold)

We can also use it to test for divisibility: times(7, _, X) is true if and only if X is an integer divisible by 7. As with plus, it doesn't take a very smart compiler to figure out what special purpose code to generate; e.g. the divisibility test above notes that the second argument is a void variable, so it can generate

    if var(X) then instantiation fault
    else if not integer(X) then type fail
    else if X is divisible by 7 then succeed else fail

The main special case to watch out for, of course, is when the third argument is a new variable. Having two sets of arithmetic relations (the 'is' family and the 'plus' family) may seem like a bad idea. But they cover two different kinds of problem. With the 'is' family we can express involved

    maplist(times(_), BaseList, Multiples)

which would be much harder to write using 'is' and hard to understand once written. While with 'is' we can write

    D is (sqrt(B^2 - 4*A*C) - B)/(2*A)

which cannot be expressed at all with the 'plus' family (which is restricted to integers in order to have logically simple and efficiently implementable semantics), and which would be completely obscure if it could be so expressed. Note that in the call times(X, Y, 24), although we don't have enough information to determine a unique solution, we do have enough to determine all 16 (1,24; -1,-24; 2,12; -2,-12; ...; 24,1; -24,-1). A Prolog implementation which backtracked over the factors of P in this way conforms to the standard. So does one which reports an instantiation fault. And so does one which delays.

    X div Y = sign(X/Y)*floor(abs(X/Y))
    X mod Y = X - (X div Y)*Y

that is, division truncates towards 0.

    divide(A, B, Q, R) <->
        integer(A) & integer(B) & integer(Q) & integer(R) &
        Q = A div B & R = A mod B

Prolog code for this is given in the Appendix. As usual, if any argument is a nonvariable noninteger, divide/4 type fails. If A and B are given, Q and R are uniquely determined. If B, Q, and R are given, A must be B*Q+R, but it is still possible for divide/4 to fail, as (B*Q+R) div B need not be Q.
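A sketch of the all-arguments-ground case, assuming an implementation whose // truncates towards zero as div requires (the appendix definition handles the other instantiation patterns and the type failures):

```prolog
divide(A, B, Q, R) :-
        integer(A), integer(B),
        Q is A // B,        % assumes // truncates towards 0
        R is A - Q*B.
```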

    Q is X/Y, R is X mod Y,

in Dec-10 Prolog to

    divide(X, Y, _, Rem)

and it is a stupid compiler which can't exploit this.

    p(A, B, C, Z) :-        % Z is (A-C)*(A-B)
        plus(A, X, B),
        plus(A, Y, C),
        times(X, Y, Z).

this information is all we need to tell which way round to use the two 'plus' calls. The other information is when a variable is instantiated. For the purpose of compiling arithmetic expressions, all we need to know is whether a variable is an integer or not. After an integer(X) test, or after any of 'plus' family calls, we know that each of the variables involved is an integer. This information is all we need to tell that times(X,Y,Z)

    ?- ge(X, 0),                    % X:int >= 0
       ge(Y, 0),                    % Y:int >= 0
       plus(X, Y, T1), lt(T1, 4),   % X+Y < 4
       times(2, X, T2), gt(T2, Y),  % 2X > Y
       times(2, Y, T3), le(T3, X).  % 2Y =< X

whose unique solution is X=2, Y=1. Detecting this is considerably easier than with 'is'. This isn't much use in practical programming, but it could be important in a "Prolog Technology Theorem Prover" such as Stickel has suggested.

    V is E        evaluates E
    X =:= Y       evaluates X and Y
    X =\= Y       evaluates X and Y
    X =< Y        evaluates X and Y
    X >= Y        evaluates X and Y
    X < Y         evaluates X and Y
    X > Y         evaluates X and Y
    put(C)        evaluates C
    ttyput(C)     evaluates C
    skip(C)       evaluates C
    ttyskip(C)    evaluates C
    tab(N)        evaluates N
    ttytab(N)     evaluates N

put(C): if C is a variable, instantiation fault. if C is an integer in the range 0..127 append the character C to the current output stream otherwise the implementation MAY type fail the implementation MAY evaluate C/\127 as an arithmetic expression and append the result to the current output stream the implementation may NOT do anything else ttyput(C): as put(C), but reading "standard output stream" for "current output stream" throughout. skip(C): if C is a variable, instantiation fault. if C is an integer in the range 0..127 repeat D := next character from current input stream until D = 26 or D = C if D =/= C then fail otherwise the implementation MAY type fail the implementation MAY evaluate C/\127 as an arithmetic expression and call skip() on the result the implementation may NOT do anything else ttyskip(C): as skip(C) reading "standard input stream" for "current input stream" throughout. tab(N): if N is a variable, instantiation fault if N is an integer then if N < 0 the implementation MAY type fail write max(N,0) spaces to the current output stream otherwise the implementation MAY type fail the implementation MAY evaluate N\/0 as an arithmetic expression and call tab() on the result. ttytab(N): as tab(N) but reading "standard output stream" for "current output stream".Thus for example, it is not defined whether tab(1+2) writes three spaces or type fails, but if it succeeds it may not do anything but write three spaces. Nor is it defined whether tab(-4) succeeds or type fails, but if it succeeds it must have no effect. If an implementation does support the extended forms of these predicates, so that put("0"+N) does normally work, it is recommended that when the "type failures produce error messages" flag is set these predicates should nevertheless produce type failure error reports. 
The point of this is that argument evaluation will make it easier to convert Dec-10 Prolog programs, but the type failure messages will help you find the places that need to be changed if the program is to run under other standard-conforming Prolog systems that choose to type fail. There is a subtle point in the specifications above: /\ and \/ insist on integer arguments and produce integer results. So even if a Prolog system supports argument evaluation in these predicates, it must reject tab(2.7). A Prolog system MAY NOT accept floating point numbers in these predicates. (Do you round up? down? truncate?)
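The liberal option described above can be sketched in Prolog itself. Here instantiation_fault/0 and write_spaces/1 are hypothetical hooks standing in for the implementation's internals; this is an illustration of the permitted behaviour, not a definitive implementation:

```prolog
% A sketch of tab/1 under the rules above, choosing the liberal option.
tab(N) :-
    var(N), !,
    instantiation_fault.
tab(N) :-
    integer(N), !,
    M is max(N, 0),         % an implementation MAY type fail when N < 0
    write_spaces(M).
tab(E) :-
    N is E \/ 0,            % \/ insists on integers, so tab(2.7) is rejected
    tab(N).
```

Note that because \/ rejects floating point arguments, the third clause cannot smuggle tab(2.7) through.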

    + /1    complex conjugate (as in APL)
    + /2    addition (same as PLUS in Common Lisp)
    - /1    negation (same as MINUS in Common Lisp)
    - /2    subtraction (same as DIFFERENCE in Common Lisp)
    * /2    multiplication (same as TIMES in Common Lisp)
    / /2    This one is a problem.  See the discussion in the text.
    ^ /2    exponentiation (same as in Common Lisp)

The !(X) and $(X) forms of Dec-10 Prolog are peculiar to that implementation. With bignums available (as they should be), there is no need for them. The point of these operations was that Dec-10 Prolog arithmetic used 36-bit 2s-complement for calculations, but could only store 18 bits. (Actually, this isn't quite true, but how many people know about xwd?) Dec-10 Prolog does not define +X at all. C Prolog defines +X to be X for real and integer X. If complex numbers are admitted, we need a complex conjugate operator. Since the conjugate of a real number is that same real number, making +X be the complex conjugate of X agrees with C Prolog on all the values that C Prolog can represent. Whether complex/3 or rational/3 always give a representation fault or not, +/1, num/1, den/1, re/1, and im/1 must not. However, if complex numbers are not implemented

    div /2  divide(X,Y,Q,R) -> Q is X div Y
    mod /2  divide(X,Y,Q,R) -> R is X mod Y
    num /1  rational(Q,N,D) -> N is num(Q)
    den /1  rational(Q,N,D) -> D is den(Q)
    re  /1  complex(C,R,I) -> R is re(C)
    im  /1  complex(C,R,I) -> I is im(C)
    i   /2  complex(C,R,I) -> C is R i I
    /\  /2  bitwise conjunction
    \/  /2  bitwise disjunction
    \   /1  bitwise complement
    <<  /2  left shift
    >>  /2  right shift

it is strongly recommended that new Prolog implementations should evaluate [X] as they evaluate X. It is, after all, very cheap to do this (especially at compile time), and what

    X == Y -> X = Y

So the natural desire to make "2 =:= 2.0" true forces us to the conclusion that "2 = 2.0" must be true. (This is perfectly natural on mathematical grounds as well.) And that in turn forces the conclusion that integer(2.0). I find some of these conclusions pretty revolting. We are warned in all good textbooks on numerical analysis that "=" and "=/=" are virtually meaningless in floating point computations, and the most one should ever use is

 -  be consistent by forcing floating point results into an integer representation whenever possible. This requires a check in name/2 and is/2 and nowhere else (if there is a routine for constructing "boxed" floats, that is the place to put the check).
 -  be consistent by converting floating point results whenever an integer is wanted. This requires checks all over the place, and is likely to have a substantial performance cost.
 -  be consistent by ruling that 2.0 < 2 but there is no number lying between them.
 -  be inconsistent; have 2 == 2.0 but 2 \= 2.0 and not(integer(2.0)).

A mixed language system such as PopLog or a Prolog cohabiting with PSL (or even a Prolog cohabiting with Fortran or C)

    X < N iff X-epsilon < N
    X > N iff X-epsilon > N   (since X \= N)
    X > N iff X-epsilon > N
    X < N iff X-epsilon < N   (since X \= N)

and of course all comparisons of floats with floats are preserved. Since epsilon is an infinitesimal,

    no floating point value ever unifies with, compares equal to as a
    term, or is regarded as having the same numerical value as an integer

An unpleasant property of floating point representations is that

    float(N) < N   and   float(N+1) > N+1

can both be true. So it seems unimportant to me whether N.0 is always taken to be less than N, greater than N, either depending on the sign, or either depending on the 17th bit when the moon is full. We're not going to get anything very pleasant no matter what we do. We should insist that if

    statistics(Key, ValueOrList)

predicate. In order to make life easier for compilers, other program manipulation tools, and human readers, I suggest the following design principle for arithmetic operations:

    The value of any arithmetic expression f(X1,...,Xn) may only depend
    on the functor f/n and the values of the arguments X1,...,Xn.

This doesn't mean that C Prolog can't have its little instrumentation functions, only that they aren't standard, and that a code unfolder is entitled to rearrange code in ways that will change the values of heapsize, stacksize, and/or cputime. New Prolog systems should provide a statistics/2 feature, perhaps in a machine dependent module. We should be able to agree on a set of basic constants (and machine-dependent constants) which could usefully be in the standard. There are currently no

    X /\ Y   does a bitwise and on X and Y.
    X \/ Y   does a bitwise or on X and Y.
    \Y       does a bitwise complement.
    X << N   shifts X left N bits (right if N < 0).
    X >> N   shifts X right N bits (left if N < 0).

An interesting thing to note about shifts is that if we had chosen any specific word length (such as 18 bits) we would have had to choose between "logical" and "arithmetic" shifts. But as we regard integers as extending infinitely far to the left (and

No Prolog implementation is obliged to provide any arithmetic facilities beyond those described here. Any additional arithmetic functions should use the names, argument orders, and semantics of suitable Common Lisp functions, except that nospread functions may be implemented as binary operations.

 -  the subexpressions of an arithmetic expression may be evaluated in any order
 -  if more than one error could be reported about the same arithmetic expression, exactly one of them must be reported, but it is not defined which
 -  because a continuable error in an arithmetic expression does not imply that there are no non-continuable errors, continuable errors are not continuable when reported from arithmetic expression evaluation

    X =:= Y :- foo(X, Y, =).
    X =\= Y :- baz(X, Y, =).
    X  <  Y :- foo(X, Y, <).
    X  >= Y :- baz(X, Y, <).
    X  >  Y :- foo(X, Y, >).
    X  =< Y :- baz(X, Y, >).

    foo(X, Y, R) :- A is X, B is Y, compare(R, A, B).
    baz(X, Y, R) :- A is X, B is Y, \+ compare(R, A, B).

foo and baz are to be taken as macros, and do not form part of the standard. Note that the signs are "=<" and ">=",

    lt(X, Y)   X and Y must both be integers
    X @< Y     X and Y may be any terms, but the number order is right
    X  < Y     X and Y are arithmetic expressions

With this range of expression there is no need to restrict '<'. {Term comparison will be dealt with in another note.} A pretty extension would be to define these relational operators as arithmetic functions. The definition would be

    if X R Y then (X R Y) evaluates to X.
    otherwise the predicate evaluating X R Y fails.

This could be useful in "X is Y-1 > 0", but its intended use is of course for

    X := 1; X := 2

is legal, meaningful, and leaves X = 2. In Prolog,

    X is 1, X is 2

is legal, meaningful, and equivalent to

    1 := 1

is illegal and meaningless, while in Prolog,

    1 is 1

is legal, meaningful, and true. I fail to see what we can possibly gain by lying to the programmer this way. The third point is that it really is very stupid to take away a possibly useful symbol when we already have a perfectly good name for the operation in question. A Prolog system could well be augmented with true destructive assignment. For example, in old Dec-10 Prolog (this compiler is no longer available to users, it is just used for building the Dec-10 Prolog system itself), you can write

    haulong(N, B) :-
        N1 is local, B1 is local,
        N1 := N, B1 := 0,
        while N1 > 0 do (N1 := N1/2, B1 +:= 1),
        B is B1.

Here := and +:= are genuine destructive assignment. A Prolog system embedded in Lisp or Pop might well have a use for

    atom := Expr

as equivalent to

    T is Expr, lisp(set(atom, T))

While the use of destructive assignment generally would be a regrettable retrograde step, it may have a place in low level interface code, and if it is to be used at all it would be as well to use a symbol which some people understand. This is too vague to be a requirement of the standard, so it is just a recommendation:

':='/2 should be used only for true destructive assignment

    p(Y, Z) :- Y > 1, Z is (Y+1)/(Y-1).

the compiler can stash the result of evaluating Y in the

 -  a Prolog compiler in a mixed language system should be completely ignorant of the syntax and special forms of the other languages
 -  in a mixed language system Prolog should be able to call functions in the other languages, passing data structures constructed in the same way as data structures passed to Prolog predicates

In general, it will not be possible to implement Prolog procedure calls and host language procedure calls in the same way and an interface primitive will be needed.

"Assert" may rewrite any special form in any way provided that the transformation is provably equivalent to the original clause without reference to any user-defined predicate. Calls to user-defined predicates, and their arguments, may not be rewritten.The point of this is that if I call

    assert((p(X) :- true, (p(X+1), (Y=1, Y=2))))

and later look at the data base using listing or clause, I have no right to complain if I see

    p(X) :- p(X+1), fail.

but I

    p(X) :- true, fail.
    p(X) :- fail.
    p(X) :- Y=1, p(X+Y), Y=2.
    p(X) :- p(X+Y), (Y=1,Y=2).
    p(X) :- call(p(X+1)), fail.

In particular, conjunctions and disjunctions may be "flattened" (so that the left argument of (A,B) never has ,/2 as its principal functor), 'true' need not appear at all, and variables appearing in a goal position may have call(_) wrapped around them. Further, if a Prolog system is smart enough to spot that a call to some system defined (but

    end_of_line(C) :- C is 0'J-64.

to

    end_of_line(C) :- C = 10.

it is

    end_of_line(10).

and it is not allowed to optimise a false clause out of existence. Thus the sequence of clause heads found by clause or listing must be exactly the same in every implementation. It could be useful to allow other transformations on clauses. One that would be particularly pleasant would be to rewrite grammar rules into ordinary clauses, thus letting a program assert new grammar rules. However, in the interests of portability,

    'Assert' must not transform a clause except as permitted in the
    previous boldface paragraph.

Why is this non-restrictive? Because the user doesn't have to call the assert family directly. If you want to assert grammar rules, there is nothing to stop you defining

    expand_assert(Formula) :-
        expand_term(Formula, Clause),
        assert(Clause).

and using that instead. This applies to any form of macro expansion. Using expand_assert instead of providing hooks in the ordinary assert predicate means that macro expansion has to be implemented only once, and the user does not have to decide whether to expand a certain macro at read time or at assert time.
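For instance, assuming expand_term/2 performs the usual grammar-rule translation, a program could then add grammar rules at run time:

```prolog
?- expand_assert(( greeting --> [hello], [world] )).
% asserts (roughly) the translated clause
%   greeting(S0, S) :- 'C'(S0, hello, S1), 'C'(S1, world, S).
```

The exact shape of the translated clause is whatever expand_term/2 produces in the implementation at hand; the clause shown is only indicative.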

    true.

and no others. As with all standard predicates, it must be impossible to change the effect of this predicate, and although it has a clausal definition, that definition need not be visible to the user and if visible need not agree with that given here. 'true' is mainly needed so that clause/2 can supply something in its second argument as the "body" of a unit clause. It is also needed for conditionals, so that we can write (a->true;b->c;d). This too can be seen as supplying the body of a unit clause. Although "true." and "true. true. true." are logically equivalent, they are not procedurally equivalent. True must succeed exactly once. However, clause(true, X) might report any number of clauses, and any of them might potentially succeed. All that is required is that at most one of them will succeed in any given call, without producing any side effects.

    fail :- integer(fred).

Note that there is no guarantee that clause(fail, X) will find no clauses. It might find any number. But whenever called, none of them may succeed, and there may be no side effects, not even any output. (Appearing in a trace is a main effect of the tracer, not a side effect of fail.)

    repeat.
    repeat :- repeat.

These clauses may or may not be visible to the user. The key property of repeat is that it must not use up any stack space. The question ":- repeat, fail." must run until stopped by the user. Repeat is permitted one side effect: on any "call" (but not on "redo") it may print an insulting message on the standard error channel.

    repeat(N) :- ge(N, 1).
    repeat(N) :- gt(N, 1), succ(M, N), repeat(M).

These clauses may or may not be visible to the user. Repeat(N) must not use up any stack space. If given a variable as argument, it must report an instantiation fault. Given anything other than a non-negative integer, it may type fail or it may just fail. Repeat(N) is

    repeat(Goal) :- ( call(Goal), fail ; true ).

i.e. repeat the Goal until it fails. So that programs written for these other Prologs can be run in a standard conforming Prolog, repeat/1 may not be used for this purpose. That's the only reason that repeat/1 is in this standard, to stop the name being used for any sensible purpose. There is a better way of getting the same effect, and that is to call between(1,N,X) for some possibly anonymous variable X. That is why repeat/1 is required to print an insulting message, so that the code will be converted to the standard form.
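between/3 is not specified in this section, but the intended predicate can be sketched as follows (a real implementation would add the same argument checks as repeat/1, and would enumerate without using stack space):

```prolog
% between(L, U, X): enumerate the integers X with L =< X =< U in order.
between(L, U, L) :-
    L =< U.
between(L, U, X) :-
    L < U,
    L1 is L + 1,
    between(L1, U, X).
```

So ":- between(1, N, _), Body, fail." is the standard form of a bounded failure-driven loop.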

Just to make this a wee bit clearer, I do NOT mean that call should behave as if there were a clause

    call(X) :-          % non-example
        X.              % non-example

Many Prolog implementations would translate this to call(X):-call(X) as they are entitled to. No, I mean that call(p(1,X,Y)) should behave as if the definition was

    call(p(1,X,Y)) :-   % example
        p(1, X, Y).     % example

and that call((p, !, q; r)) should behave as if the definition was

    call((p, !, q; r)) :-   % example
        p, !, q; r.         % example

There are two consequences of this which deserve to be spelled out in full. One is that call/1 is more like LISP's "eval" than like "apply". That is, the term being called can be an instance of any control structure at all: a simple predicate call, a forall/2, an if-then-else, another call/1, and so on. The second is that while

    ?- assert(( z(X) :- repeat, X )),
       clause(z(X), Body),
       X = !.

binds Body = (repeat,call(!)) which is equivalent to repeat. But

    ?- X = !,
       assert(( z(X) :- repeat, X )),
       clause(z(X), Body).

binds Body = (repeat,!) which is equivalent to (!). Now any program where this makes a difference is very badly written. While the fact that a variable goal can be rewritten by assert into call(G) does spoil some of the substitution properties, it is actually a very useful thing for programming. It means that a predicate which calls one of its arguments does not have to worry about a cut being smuggled in inside a Trojan Horse. This integrity benefit is well worth having, whatever the meretricious attractions of micro-PROLOG's "meta-variable". This does raise another question. What is a Prolog system to do if a goal is still a variable, or is some constant other than an atom? call/1 has to worry about this. Since a constant other than an atom can

    var(G)                -> instantiation_fault;
    atomic(G), \+ atom(G) -> type_fail(1, compound);
    system(G)             -> call the system predicate;
    current_predicate(G)  -> interpret the clauses for G if any,
                             or call the compiled code if any;
    otherwise,               G is an undefined predicate.

This needn't stop an implementor using an integer (perhaps the index of a branch of a case statement, or the address of a procedure) as the body of a clause in bootstrap code. It only means that such clauses must never be visible to the user, and it must not be possible for the user to create such clauses, and if the user does call(72) it must type fail instead of invoking primitive 72 (even if there is such a primitive). If the bootstrap is such that there is a need for a predicate like call/1 but which is capable of invoking primitives this way, fine, but it must have another name.
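The decision procedure above can be rendered in Prolog, with instantiation_fault/0, type_fail/2, call_system/1, interpret_clauses/1, and undefined_predicate/1 as hypothetical hooks into the implementation:

```prolog
% A sketch of call/1's dispatch on the shape of its argument.
call_goal(G) :- var(G), !, instantiation_fault.
call_goal(G) :- atomic(G), \+ atom(G), !, type_fail(1, compound).
call_goal(G) :- system(G), !, call_system(G).
call_goal(G) :- current_predicate(_, G), !, interpret_clauses(G).
call_goal(G) :- undefined_predicate(G).
```

A real system would of course fold this test into the calling machinery rather than run a clause-by-clause dispatch.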

    apply(Pred, ArgList) :-
        Pred =.. [F|GivenArgs],
        append(GivenArgs, ArgList, AllArgs),
        Goal =.. [F|AllArgs],
        call(Goal).

This is the predicate which has been used up till now for applying a computed predicate to a computed argument list. It has a number of serious disadvantages. First, it is usually implemented exactly as written above. This is wasteful. It creates a new list which is never needed again, and a new compound term which is never needed again. If it is built in at a low enough level there is no need for these data structures. Even if apply/2 takes care not to construct superfluous data, the calling site will still construct an argument list which isn't really needed. The second problem is that even when we know what Pred is like, it is difficult (in the general case, impossible) to type check a call to apply. Every time I have seen apply used, there has been a specific number of arguments at the calling site, commonly one or two. Therefore, I propose a

    call(p(X1,...,Xm), Y1,...,Yn) :-
        p(X1,...,Xm,Y1,...,Yn).

That is, call(P, X, Y) is equivalent to apply(P, [X,Y]) except that no extra data structures have to be created and it can easily be type checked. The requirement for P is the same as for call(P): it must be an atom or a compound term. For complete consistency, call/N for N>1 should be able to handle control structures too; since call((A,B)) works so should call(','(A), B). However, it may be considerably easier to implement this operation if it only works for system and current predicates, and does not work for arbitrary control structures. So call/N may be thought of as "apply", with call/1 being "eval". apply would have been the preferred name for this operation if only it hadn't already been used for the list version.
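Where call/N is not primitive, it can be approximated, inefficiently, with exactly the data structures the primitive is meant to avoid:

```prolog
% call/3 via univ and append; a real implementation would extend the
% argument frame directly instead of building these intermediate terms.
call(P, Y1, Y2) :-
    P =.. [F|GivenArgs],
    append(GivenArgs, [Y1, Y2], AllArgs),
    Goal =.. [F|AllArgs],
    call(Goal).
```

So, assuming a plus/3 predicate exists, call(plus(1), 2, X) behaves as plus(1, 2, X).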

    once(Body) :- call(Body), !.

Since cuts cannot get through "call", a cut which appears in the Goal is confined to the once(_) form and does not have the effect of cutting the parent goal. Once need not be implemented exactly this way. It is quite straightforward for a Prolog compiler to open-code this form, but if it does the final cut and any internal cuts must be implemented in a way which cannot be expressed in Prolog source language. If the Body contains no cuts, the expansion (Body->true) can be used.
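For example, once/1 commits to the first solution of a backtrackable goal:

```prolog
?- once(member(X, [a, b, c])).
% binds X = a and leaves no alternatives on backtracking;
% a cut inside the argument would be confined to the once(_) form.
```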

    ( Body -> fail ; true )

Since disjunction and if-then-else are transparent to cuts, so should negation be. That is, if we write

    p :- \+ !.
    p.

this should have exactly the same effect as

    p :- ( ! -> fail ; true ).
    p.

which is to say that every call to p must fail. If however, we write

    p :- X = !, \+ X.
    p.

this is equivalent to

    p :- X = !, ( call(X) -> fail ; true ).
    p.

and as the cut cannot escape from the call, every call to p must succeed (in the second clause). This is all rather unpleasant. What is particularly unpleasant is that it makes Dec-10 Prolog non-standard. (On the other hand, it was non-standard already, because the compiler does not understand if-then-else). I have tried several times to discover a rule which would enable a user to predict which special forms were transparent to cuts and which were not. Making conjunction the only transparent form would make sense, but the language which results is not powerful enough to write an intelligible meta-circular interpreter. To do that disjunction has to be transparent to cuts as well. The cleanest rule I could come up with was that conjunction, disjunction, and if-then-else should be transparent to cuts (as they are in Dec-10 Prolog), and that any other form was transparent if and only if it could be macro expanded into a form involving these three only, without

    disjoint(S1, S2) :- not (member(X, S1), memberchk(X, S2)).

it is safe to proceed with the call provided S1 and S2 are ground. But in

    non_member(X, S) :- not(member(X, S)).

it is not safe to proceed with the call until X is ground as well as S, because X can be seen outside. So a compiler should handle sound negation by compiling code to check that the variables which appear elsewhere in the clause are ground. If you write not(X). A sound if-then-else can be defined as well. We might as well copy MU-Prolog on that one.
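The simplest safe approximation negates only ground goals. This is stricter than checking just the variables visible outside the negation, but it illustrates the check; ground/1 and instantiation_fault/0 are assumed helpers, not part of this standard:

```prolog
% A sound approximation to negation as failure: only negate ground goals.
sound_not(Goal) :-
    ground(Goal), !,
    \+ Goal.
sound_not(_) :-
    instantiation_fault.    % or delay the goal, as MU-Prolog does
```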

    \+ (Gen, \+ Test)

which is in turn equivalent to

    ( Gen, ( Test -> fail ; true ) -> fail ; true )

Logically, this is bounded quantification. For example,

    subset(Small, Big) :-
        forall(member(X, Small), memberchk(X, Big)).

Procedurally it is just another form of failure driven loop, permitting iteration without recursion. It is only useful when the only information to leave the loop is the success or failure indication. Used sparingly, particularly when bounded quantification is intended, this can make programs clearer. But in Dec-10 Prolog it is avoided because the compiler generates a call to the interpreter, and remembering to put in all the public declarations can be a pain. One would be tempted to use the expanded form, if the Dec-10 compiler understood it. Calls to forall can be open coded quite effectively. However, as it has this macro definition, we see that a cut inside either the generator form or the test form can escape and cut the parent goal. Just like negation, this differs from existing Prologs. Just like negation, I don't think that matters.
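Where open coding is not available, the macro definition given above can be provided as an ordinary predicate, though then a cut inside either argument is confined by call and cannot cut the parent goal, unlike the open-coded form:

```prolog
% forall(Generator, Test): Test holds for every solution of Generator.
forall(Generator, Test) :-
    \+ (Generator, \+ Test).
```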

    prolog_clause(Clause, Head, Body)
    prolog_bounded_quantification(Formula, Generator, Test)
    prolog_negation(Formula, PositivePart)
    prolog_if_branch(Formula, Hypothesis, Conclusion)
    prolog_conjunction(Formula, ListOfConjuncts)
    prolog_disjunction(Formula, ListOfDisjuncts)

Versions of these predicates for Dec-10 Prolog are given in an appendix. The point of taking the conjuncts &c as lists is to emphasise Prolog's right to normalise these operators, indeed, to allow for the possibility that the internal forms may

    for given L0,
        if   prolog_conjunction(F1, L0) succeeds
        then prolog_conjunction(F1, L1) succeeds
        and  prolog_conjunction(F2, L1) succeeds and F2 == F1.

    for given F0,
        if   prolog_conjunction(F0, L1) succeeds
        and  prolog_conjunction(F1, L1) succeeds
        then prolog_conjunction(F1, L2) succeeds and L2 == L1.

with similar axioms for prolog_disjunction and prolog_if_then_else. Any formula has a unique decomposition, if any; it is just that the formula might have been initially constructed by some other means, and might not actually make sense. For example, we might find that

    prolog_conjunction((1,2), [1,2]) succeeds
    but prolog_conjunction(X, [1,2]) fails.

This section of the draft is the most tentative. All things considered, it might be much cleaner to have separate compose_THING and decompose_THING operations.
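A decomposition-only sketch of prolog_conjunction/2 for Dec-10 syntax might look like this; the appendix versions are the ones to copy, and the compose direction is omitted here:

```prolog
% Decompose a conjunction into the list of its conjuncts, flattening
% nested commas.  With an unbound first argument this simply fails,
% matching the behaviour illustrated in the text.
prolog_conjunction(Formula, List) :-
    nonvar(Formula),
    conjuncts(Formula, List, []).

conjuncts((A, B), L0, L) :- !,
    conjuncts(A, L0, L1),
    conjuncts(B, L1, L).
conjuncts(Goal, [Goal|L], L).
```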

    cases
        Hypothesis1 -> Conclusion1
    ;   Hypothesis2 -> Conclusion2
        ...
    ;   Hypothesisn -> Conclusionn

as follows:

    DoneIt is local, DoneIt := 0,
    (   Hypothesis1, DoneIt := 1, Conclusion1
    ;   DoneIt =:= 0, Hypothesis2, DoneIt := 1, Conclusion2
        ...
    ;   DoneIt =:= 0, Hypothesisn, Conclusionn
    )

Every case except the first has the test "DoneIt =:= 0" before its hypothesis (it is redundant before the first case). Every case except the last has the assignment "DoneIt := 1" after its hypothesis (the variable is dead after the last case). If a case lacks the if-then-else arrow, the translation retains the test but drops the assignment. This corresponds almost exactly to the desired logical reading of an if-then-else:

    (   H1 and C1
    or  not H1 and H2 and C2
        ...
    or  not H1 and not H2 and ... and not Hn-1 and Hn and Cn
    )

where the negations are not actually calculated but are implicit from position. In particular, the hypotheses may backtrack; having passed an arrow once it thereafter acts just like a comma. Since tests are generally determinate, this makes little or no practical difference. The "cases" construct is implemented in C Prolog in exactly this fashion. Given a reasonable Prolog compiler which can handle disjunction, it is very easy to add the cases construct, whether using this technique or whether by fiddling with the choice point list directly. Note that in so far as it can be expressed in Prolog at all, the cases construct is open coded, so that it is "natural" for it to be transparent to cuts. Cases and if-then-else are to a large degree a substitute for the cut, and one might expect that a programmer who used them would not be putting cuts in them. So it might seem a matter of indifference what a cut inside a cases or if-then-else does. However, there is at least one Prolog programmer here whose code includes

    p(...) :-
        ...
        ( ... -> ! ; true ),
        ... .

I laughed immoderately when I saw it, then groaned when I realised that he really meant to keep this in his program and thought it genuinely useful. He does at least deserve to be told when this will work and when it will not, while the rest of us continue to look for efficient alternatives to the cut that make our code even clearer, the Philosopher's Stone, the Elixir of Life, and a 1 GigaLIPS machine. The conventional if-then-else form is exactly the same except that it doesn't have "cases" in front.

    (   H1 -> C1
    ;   H2 -> C2
        ...
    ;   Else
    )

is

    (cases  once(H1) -> C1
    ;       once(H2) -> C2
            ...
    ;       true -> Else
    )

except that cuts can escape from the Hi (should anyone be so incredibly foolish as to put them there) as well as the Ci. The if-then-else arrow functions exactly as a cut with narrow scope. It is correspondingly easy to implement. My feeling is that for most practical programs it doesn't matter whether we use the if-then-else form or the cases form, and that beyond that it is too early to say which is better. There is a fair bit of existing Dec-10 and C Prolog code which relies on the current if-then-else and its interaction with cut. So the if-then-else

    current_atom(Atom)
        if var(Atom) {
            try binding Atom to each "currently known" atom in turn;
            the order is not specified
        } else {
            if atom(Atom) then succeed else type_fail(1, atom).
        }

It is obvious that this is the analogue of InterLisp's MAPATOMS. What is not obvious is which atoms are "currently known". Can the question

    ?- current_atom(X), X = fred.

ever fail? If we list all the atoms at one time, and list all the atoms at a later time, must the first lot be a subset of the second? In Dec-10 Prolog, C Prolog, and Prolog-X the answers are no and yes. In these Prologs atoms are never forgotten. Bearing in mind that future Prolog systems will be able to interface to external data bases, and so may have a much larger turnover of atoms, all that we can reasonably require is that

    Every atom appearing in any clause or other internal data base
    record (possibly as the key) and every atom which currently appears
    in the stacks is "currently known", but if all references to and
    from an atom are lost it may become unknown.

current_atom is easily implemented by searching the symbol table rather than the data base and stacks. It is up to the garbage collector to prune the symbol table if it so wishes. A warning to implementors: be

    current_functor(Atom, Term)
        if nonvar(Atom) and \+atom(Atom), type_fail(1, atom).
        if constant(Term) and \+atom(Term), type_fail(2, compound).
        if nonvar(Term) {
            unify Atom with the principal function symbol of Term
        } else {
            current_atom(Atom)
            try functor(Term, Atom, N) for each N such that Atom/N is
            a "currently known" functor; the order is not defined
        }

Here we have an even nastier problem. In Dec-10 Prolog, C Prolog, Prolog-X, and NIP, each current functor is represented by a physical block of memory in the symbol table. In Poplog and Quintus Prolog, however, the functor is implicit. This looks like bad news; we might have to examine the stacks and data base to find out what functors are "current". The answer is to redefine what "current" means.

    A functor is current if it appears in a clause or internal data base
    record (possibly as the key).

This is a minimal requirement; other functors may be "current" as well. One unpleasant aspect of the Dec-10 definition is that if you have a "vector" class which you represent say as vector(E1,...,En), a functor block is left hanging around forever for each size of vector you have ever used. Such behaviour is permitted by this standard, but not required. vector/N must show up if there is such a term somewhere in the data base, but vector-like structures are commonly held only in the stacks, and this definition permits current_functor to ignore the stacks. An easy way to implement current_functor is to associate a set of integers with each atom in the symbol table. Assert{a|z} and Record{a|z} have to update these sets to include all the functors which appear in the data base (and of course they may appear there differently from the way they appear in the stacks), and the garbage collector may but need not prune the sets. The integer 0 need not appear explicitly in these sets, as current_atom(X) => current_functor(X,X). The mapping might be represented as a hash table in any of several ways. In particular, current_functor is

    current_predicate(Atom, Goal) :-
        current_functor(Atom, Goal),
        "there have been clauses for Goal".
    current_predicate(Goal) :-
        current_predicate(_, Goal).

The Dec-10 Prolog manual is actually very clear on this point. It says "it only generates functors corresponding to predicates for which there currently exists an interpreted procedure". In David Warren's papers, "procedure" means "clause". So what this means is that we could define the Dec-10 version of current_predicate as

    current_predicate(Atom, Goal) :-
        current_functor(Atom, Goal),
        \+ \+ clause(Goal, _).

Now if we want that definition, we can easily program it. I have argued earlier in this document that a more useful definition would be that a predicate is "current" if and only if there has been at least one 'assert' for that predicate since its last 'abolish'. One way to accomplish this is to have a mapping from atoms and arities to "procedure headers", where this mapping is void for undefined or abolished predicates, but "retract" leaves the header intact even when it removes the last clause.

    system(Atom, Goal) :-
        current_functor(Atom, Goal),
        "Goal has compiled code and compiled code only".
    system(Goal) :-
        system(_, Goal).

The distinction between system/[1-2] and current_predicate/[1-2] is that a "system" predicate has

    safe_clause(Head, Body) :-
        current_predicate(Head),
        clause(Head, Body).

which has the additional benefit of being able to search the whole data base. (clause will report an instantiation fault if Head is unbound.) The fact that 'assert' in Dec-10 Prolog does

    with_input_from_file(FileName, Body)
    with_input_from_chars(ListOfChars, Body)
    with_input_from_string(String, Body)
    with_input_from_call(Pred, Body)

and output redirected by

    with_output_to_file(FileName, Body)
    with_output_to_chars(ListOfChars, Body)
    with_output_to_string(String, Body)
    with_output_to_call(Pred, Body)

and of course, for interacting with the user,

    with_io_to_terminal(Body)

But none of this redesign is anywhere near being well enough thought out to include in a standard, and it does require new machinery in an implementation. (For example, it would be

@Include(Util:Arith.Pl)

    Term =.. [FunctionSymbol|Args] :-
        nonvar(Term), !,
        functor(Term, FunctionSymbol, Arity),
        $argument_list(0, Arity, Term, Args).
    Term =.. [Term] :-
        atomic(Term), !.
    Term =.. [FunctionSymbol|Args] :-
        atom(FunctionSymbol),
        length(Args, Arity),
        functor(Term, FunctionSymbol, Arity),
        $argument_list(0, Arity, Term, Args).

    $argument_list(N, N, _, []) :- !.
    $argument_list(I, N, Term, [Arg|Args]) :-
        J is I+1,
        arg(J, Term, Arg),
        $argument_list(J, N, Term, Args).

    repeat(N) :-
        telling(Old), tell(user),
        write('It is pointlessly stupid to use repeat/1.'), nl,
        write('Why don''t you use between/3 instead, fathead?'), nl,
        tell(Old),
        between(1, N, _).

@Include(Util:Sorts.Pl)

@Include(Util:DeCons.Pl)