In this post I will be talking about *axioms* in C++ concepts. C++ (i.e., C++11) does not have language support for concepts, so I will be talking about a nonexistent feature; a possible future addition to C++. Yet, since concepts in C++ have been considered and discussed for a long while, you are probably familiar with the idea. If you have also noted the new keyword `axiom` proposed along with concepts, feel you need to know more about what it is good for, and find the information in N2887 insufficient, this post is just for you.

### Generic programming in C++

The STL, the most notable library built using the generic programming (GP) paradigm, has been around since 1994. C++ was able to ‘host’ it without any language-level concepts. If we look at SGI’s documentation of STL’s algorithm `find`, we can see that the function’s third parameter’s type must *model* the *concept* Equality Comparable. If we look at the concept definition, it requires a number of things. First, expressions `x == y` and `x != y` must be well defined and their results must be convertible to `bool`. Second, both operators must be defined ‘consistently’, i.e. expression `x != y` must always return the same value as expression `!(x == y)`. More such semantic requirements are defined in section ‘Invariants’; for instance, invariant *Reflexivity* says that any element must always be equal to itself, or more formally, `(x == x) == true` for every `x`.

Moreover, Boost’s Concept Check library enables you to define concepts and verify that template arguments model them — it prevents long stacks of error messages. This is not the only library for concept checking; other examples are the Catsfoot library and the concept checking module in the Origin library. So why do we still need language support for concepts in C++?

- For simpler and cleaner template argument validation. The Concept Check library does amazing things to enable template checking, but it has some limitations. Error messages are still long and a bit cryptic. It uses lots of macros, and requires you to change the way you define functions. You have to manually create concept archetypes, which effectively forces you to define the concept’s interface twice. Some requirements cannot be checked. This is very elegantly described by Dave Abrahams in his article.
- To enable concept-based function overloading. This subject has been explained by Herb Sutter a couple of times. For instance, see here.
- To enable *concept-based optimizations*. This is the part where axioms may play an important role. I will try to expand on that in this post.
- To make generic programming easy to learn for casual programmers. This is a psychological but very valid argument. If you provide an elegant way of specifying concepts, it is easy to pick up and attractive. And defining your own concepts is a valuable lesson because it gives you an opportunity to think about what you really do and do not require of the template parameters.
- To make compilation faster. If concepts are compiler-intrinsic rather than a macro-and-template-based library, they will compile and work faster.

This all appears to be sufficient motivation to add concepts to C++. And in fact much work has been done to introduce them into C++. We have one ready concepts implementation, ConceptGCC, and another in active development: ConceptClang. Dave Abrahams is now writing a series of posts in C++Next on the development of new experimental features in Clang. A new concept design is also in development. The controversial question was and is whether expressing semantic (as opposed to syntactic) requirements on template arguments should also be aided by the new language features. Semantic requirements like *Reflexivity* for concept `EqualityComparable` are obviously made, assumed and utilized; but it is not obvious whether they should also be supported directly by the language. Axioms are meant to express semantic requirements or assumptions about template arguments.

### A short example

Let’s try to illustrate how a definition of the abovementioned SGI concept Equality Comparable would look in concept-enabled C++. There is an important problem with that: C++ does not support concepts, and since there are a couple of controversies as to how concept syntax and semantics should look and behave, it is not clear how future concepts will work. So, do not take the syntax below too literally.

```
concept EqualityComparable<typename T>
{
    bool operator==(const T& x, const T& y);
    bool operator!=(const T& x, const T& y) { return !(x == y); }

    axiom Consistency(T x, T y)  { (x == y) <=> !(x != y); }
    axiom Identity(T& x, T& y)   { &x == &y => x == y; }
    axiom Reflexivity(T x)       { x == x; }
    axiom Symmetry(T x, T y)     { x == y => y == x; }
    axiom Transitivity(T x, T y, T z) { (x == y && y == z) => x == z; }
}
```

Let’s analyze what it says. First, we require all models of `EqualityComparable` to provide `operator==`. Second, functions constrained by `EqualityComparable` are allowed to use `operator!=`; models do not need to define it, however: if it is not defined, we fall back to the supplied default implementation. Next come the axioms. The axiom parameter declaration reads “for every value `x` and `y` of type `T` (that appears in our function template)”. The body of axiom `Consistency` says that wherever expression `x == y` occurs in the body of a function template constrained by our concept, it can be safely replaced with expression `!(x != y)`, and vice versa. Token `<=>` indicates that the two expressions are equivalent: they render the same value and side effects. Now for axiom `Identity`: the parameter declaration reads “for every *object* `x` and `y` of type `T`…” That is, the addition of `&` says that we are more interested in objects than values: we want to change the value they store or take their address. The body reads “if `x` and `y` are located at the same memory address (if they represent the same object), then expression `x == y` is expected to return `true`”. That is, token `=>` indicates that some semantics are required only when a specified condition holds. Axiom `Reflexivity` does not contain any special token inside. It says that any value of type `T` (encountered while executing a function instantiated from a template constrained by our concept) shall always compare equal to itself.

Unlike syntactic requirements (e.g., requiring `operator==`), semantic requirements expressed with axioms cannot be validated by the compiler when matching a model against the concept. So axioms only express how we would like the models to behave; there is no way to force the models to implement the required semantics.

After this brief introduction to axioms, we can see that it is possible to express our expectations of the models’ semantics in C++. Now comes the main question of this post: is it worth expressing semantic requirements directly in C++? The fact that it is possible to do so does not imply that it needs to be added to the language. Perhaps expectations about semantics should be left to documentation? The following is not a balanced discussion of the pros and cons of axioms. In this post I only want to show the pros, that is, the motivation that led to the introduction of axioms. I leave the discussion of drawbacks and the criticism of axioms for another post.

### Axioms for documentation

Everyone who appreciates the power of assertions will easily recognize that axioms provide similar assistance in expressing assumptions about code. The power of `assert`s is not (or not only) in their potential to terminate the program, but in that you can see what is being asserted. Similarly for axioms: they will always be syntax- and type-checked by the compiler and will always stay in sync with the rest of the code (unlike comments or documentation).

Also, note that the syntax of axioms very closely follows first-order logic. Token `=>` works like implication; token `<=>` works like equivalence; axiom parameters work like a universal quantifier. It is also possible to ‘emulate’ an existential quantifier using a 0-argument function in the concept’s interface. This form of notation is very precise and well understood. A formal description like this may appear arcane at first, but once you learn it, it lets you communicate your assumptions in a clear, unambiguous way — for the same reason mathematics is a universally accepted language of science. Even computers can understand assertions stated this way and use this knowledge in various ways. One of the goals is to establish an ISO-standard notation for assumptions about the semantics of C++ code that can be utilized by different programs for code analysis.

Another reason for introducing axioms is to distinguish concepts that are identical in terms of syntactic requirements, but still differ *conceptually* — by required semantics. For instance, we may have a concept (say, `HasPlus`) that requires `operator+` and makes no other requirements on the operator’s behavior. Then we may need a refinement of the concept where `operator+` is associative (like `std::string`). Then we may need a further refinement of the latter concept that additionally requires that `operator+` is commutative (like the addition for arithmetic types). Some (most?) programmers will usually not read the documentation shipped with the code (if it even exists and is up to date) but will look directly at the concept definitions to understand which concepts they are supposed to model. If they read the following definitions, they might get a bit confused.

```
concept HasPlus<typename T> : Regular<T>
{
    T operator+( T, T );
};

concept HasAssociativePlus<typename T> : HasPlus<T> { };

concept HasCommutativePlus<typename T> : HasAssociativePlus<T> { };
```

A programmer might ask, “why have three concepts that require exactly the same thing?” And it really looks confusing; yet the three concepts are not in fact the same, even though their definitions appear identical. With axioms, the three concepts become clearer:

```
concept HasPlus<typename T> : Regular<T>
{
    T operator+( T, T );
};

concept HasAssociativePlus<typename T> : HasPlus<T>
{
    axiom Associativity( T x, T y, T z ) { (x + y) + z <=> x + (y + z); }
};

concept HasCommutativePlus<typename T> : HasAssociativePlus<T>
{
    axiom Commutativity( T x, T y ) { x + y <=> y + x; }
};
```

Now it really looks as if each concept adds something. A similar thing can be observed with the STL concepts `InputIterator` and `ForwardIterator`. Syntactically, they are equivalent. The difference is only in semantics, and it is expressed with axiom `MultiPass`.

```
axiom MultiPass( IT a, IT b )
{
    (a == b) => *a <=> *b;
    (a == b) => ++a == ++b;
}
```

And one can find more such examples. E.g., concept `StrictWeakOrder` without axioms is indistinguishable from any 2-argument predicate.

### Axioms for enforcing sound concepts

A new, alternative approach to implementing concepts, described by Andrew Sutton and Bjarne Stroustrup (see here), requires that in order to define a concept you have to specify at least one axiom. This is based on the premise that concepts, as sound, meaningful and universal abstractions, always carry some semantic requirements. In other words, if it is not possible to come up with an axiom for your abstraction, then your abstraction does not qualify as a concept: you should use a simpler tool, like a type trait or a *constraint*, instead.

One of the criteria for allowing new features into C++ is whether they change the way people design and think about programs, as opposed to just adding some convenience. In this sense axioms change the way people think: they shift the attention towards specifying expectations of types, and they encourage a focus on the design of interfaces.

### Axioms for program transformations

Axioms enable the programmers who define concepts to convey some information to various tools, using C++ code. Such a tool can be a compiler, which can use the semantic assumptions described in axioms to optimize the program. Consider the following non-template function.

```cpp
unsigned numberOfSeconds( unsigned days )
{
    return 24 * days * 60 * 60;
}
```

The compiler will not respect the left-to-right order of applying the multiplications. Instead, in order to achieve maximum run-time performance, it will perform only one multiplication: `days * 86400`. This optimization is called *constant folding*. Now, let’s try to make the function a template that will work for any `int`-like type.

```cpp
template< typename T >
T numberOfSeconds( T days )
{
    return T(24) * days * T(60) * T(60);
}
```

Now, while compiling the template, the compiler will perform no multiplication optimizations, because it cannot assume anything about the operation: `operator*` might do anything, depending on how it is defined for `T`. Without knowing enough about the type, the compiler cannot assume that the multiplication is commutative or associative. If we instantiate the template with type `int`, another pass of optimizations after the instantiation could recognize that `int`’s multiplication is commutative and associative, and restore the optimizations. However, performing optimizations at compile time increases the compilation time, and the next time the template is instantiated, say with `long`, the optimization process will have to be repeated, slowing down the compilation. Moreover, if the template is instantiated with a user-defined type `BigInt`, no optimization will be performed at all, because the compiler knows nothing about the properties of `operator*` for type `BigInt`.

One of the goals of C++ is that user-defined types should be treated the same way as built-in types (within reason, of course), so that for templates it should not matter whether they operate on user-defined or built-in types. Axioms that state the equivalence of expressions help unite both kinds of types in the aspect of semantic assumptions known to the compiler. A concept-enabled C++ compiler should be able (but not required) to apply the same code transformations and optimizations for user-defined types as it does for built-in types. Obviously, the compiler cannot just assume that every multiplication is commutative, because it is not always so; matrix multiplication is one example. So there needs to be a way to define multiplication’s semantic properties *selectively*. Axioms define semantic properties per concept. In the above example we need to state three semantic properties: associativity, commutativity, and something we may call ‘value propagation’. The concept with axioms could look as follows.

```
concept IntegerLike<typename T> : CopyConstructible<T>
{
    T::T( int );          // for using literals, like '24'
    T operator*( T, T );

    axiom Commutativity( T x, T y )        { x * y <=> y * x; }
    axiom Associativity( T x, T y, T z )   { (x * y) * z <=> x * (y * z); }
    axiom ValuePropagation( int x, int y ) { T(x) * T(y) <=> T(x * y); }
}
```

With the above semantic assumptions, if we constrain the above template with concept `IntegerLike`, the multiplication can be optimized within the template code; thus, each time the template is instantiated, no additional optimization is required. We optimize only once for all possible instantiations, even for user-defined types. This is an example of an optimization that speeds up run-time execution (and possibly compilation). It is not the only kind of optimization. Consider the following axiom.

```
axiom AvoidAdditionOverflow( T x, T y )
{
    y + x - x <=> y;
}
```

The problem with the left-hand side expression (if it appears in the code, or becomes present as the result of a previous pass of optimizations) is that when the operations are executed left to right, `y + x` may overflow. For built-in types signed addition overflow is *undefined behaviour*, so the compiler can freely apply the above transformation even without any special transformation rules. However, for user-defined types we do not know what to expect of an overflow; most likely it will throw an exception. By substituting the expression we eliminate a potential throw. This may change the observable behaviour of the program if the program intends to catch the exception. Since the throw in this case signals an *implementation limitation*, the axiom says that the compiler may avoid those limitations if the expression structure allows it. The next example, although similar, is trickier.

```
axiom AvoidCancellation( T x, T y )
{
    y * x / x <=> y;
}
```

The difference is that (for numbers) when `x` is 0, multiplying by it *cancels* the result, and dividing back by 0 does not restore the previous value. It is not obvious to everyone that the two expressions should be interchangeable for `x == 0`. Some people, for some types, would prefer the left-hand side to yield an indeterminate value or throw. However, for some types it does make sense. Therefore this transformation should be applied more selectively than the previous one.

The examples above show only the transformations on the original source code. But compilers can also apply such transformations to the results of other optimizations (like function inlining) — we cannot spot these opportunities directly in our source code.

### Axioms for automatic tests

My compiler’s STL implementation, when it comes to calling the predicate that compares values in `map` operations, uses an intermediate function that, in *release* mode, is defined more or less as follows:

```cpp
return pred(a, b); // a < b
```

whereas in debugging mode the same function is defined as follows.

```cpp
if( pred(a, b) ) {          // if a < b
    ASSERT( !pred(b, a) );  // then a <= b
    return true;
}
return false;
```

(In fact, the code in the library is different, but the idea behind it is the same.) This additional code for detecting incorrectly defined predicates is based on the already existing concept `StrictWeakOrder` and its axiom `Antisymmetry`. Even though concepts as a language feature are not there, every STL user is familiar with the semantic assumption about comparisons: if `a` precedes `b`, then `b` must not precede `a`. If we had a concept-enabled C++ compiler, the antisymmetry axiom would be defined like this.

```
concept StrictWeakOrder<typename T, typename Pred>
{
    // requirements

    axiom Antisymmetry( Pred pred, T a, T b )
    {
        pred(a, b) => !pred(b, a) <=> true;
    }
}
```

The additional equivalence to `true` indicates that calling `pred` has no side effects — or, more precisely, that calling or not calling `pred` a second time can be treated as equivalent by the compiler. This is exactly the requirement that we expect of C assertion predicates, so that they can be enabled and disabled without the risk of affecting program behavior in an uncontrolled way. If we are able to define assertions like the one in `Antisymmetry` with axioms, compilers or other tools will be able to automatically insert additional tests into template code in situations where the performance impact is acceptable, such as debug builds.

One could even think of automatically creating unit test suites based on axioms. In such cases axioms could be treated as ordinary functions called by the tests. However, it is not clear how the arguments for the axiom parameters would be chosen.

### Axioms for code analysis

It is possible to imagine a tool that would statically analyze C++ code and make assertions about it. E.g., given our axiom `StrictWeakOrder::Antisymmetry`, if a function template constrained by our concept is instantiated and called with values that can be deduced statically (only by looking at the program code), the tool might check whether the property defined by the axiom holds for those values. It is just a theory, though, because for most practical usages the values that get into functions are known only at run time. Code analyzers shouldn’t check the validity of axioms ‘for all values’, because C++ axioms apply only to values rather than types. Similar tools could also detect whether the axioms you specify contradict one another.

### More…

And that’s it for today. There is much more to be told about axioms: what is wrong with them, what technical issues they cause, how they should be used and how they can be overused, and what they cannot do. But I will leave that for the next posts, if this one raises sufficient interest.

I think that simply improving error messages is reason enough to introduce concepts. That is, if all they do is check some basic syntactic requirements of the template parameters that would be sufficient to be a really useful extension.

The problem now with templates layered within templates is that when the compiler can’t match a function/variable, it cannot determine what the problem is: is the type wrong, or is the code using the type wrong? Since it has no clue, it is forced to produce pages of error information showing the complete instantiation path to the problem.

A concept would allow a valuable pre-check. If the type has a concept, the compiler can check the type for concept compliance before it even instantiates the template. That way, if an error is found in the type, it can be reported cleanly. Similarly, if you are using a concepted type and try to use a member that doesn’t exist in the concept, the compiler also knows that it is this code, not the type, that is doing something wrong — thus again being able to print a short error message for that exact code.

But this aspect of concepts — reducing the “stack” of template instantiations — can already be achieved with Boost.Concept Check or similar library-only solutions. They remove from the overload resolution set those function templates whose concept requirements are not satisfied.

Alternatively, you can just write something like:

Then the first error message will be that of the static_assert.

The problem with these solutions is that, while they eliminate the template-instantiation-stack problem, they are still ugly, because they rely on tricks to do it.
