Desired compile-time failures

When new features are proposed for C++, it is desired that they not introduce breaking changes. This is typically understood as:

  1. Every program that used to compile (was well-formed) continues to compile with the same semantics.
  2. A program that failed to compile (was ill-formed) can now be made well-formed and assigned new, desired semantics.

For instance, the following was an invalid C++03 program:

#include <string>
#include <utility> // for std::move (C++11)

void feature(std::string s1)
{
  std::string s2 = std::move(s1); // no std::move in C++03
}

Therefore, it does no harm to make it well-formed code in C++11 and assign it some useful semantics. This rule is not followed in 100% of the cases, but this is the general idea.

However, even though it works in most of the cases, I believe that this criterion of a “safe addition” is not technically correct, as it fails to take into account an important fact: failure to compile certain programs is a useful, important feature, and if these programs suddenly start to compile, it can cause harm. In this post we will go through the cases where compile-time failure is considered a useful feature.

First, type-system errors are so basic and common a feature that we may often forget how helpful they are. If we try to compile the following program:

#include <vector>

void consume(int i);

int main()
{
  std::vector<int> v;
  consume(v); // ERROR
}

We get a type-system error: one cannot convert a vector to an integer. What I am saying is plainly obvious, yet it has saved my day many a time. This is a feature of statically typed languages: it is the compiler that informs you about type-system errors, rather than your users. We may even appreciate it, but can it be called a feature? In a case like the one above, that would be artificial: it is an obvious consequence of the type system. If a function (like consume) does not have a signature that works with the given type, then it doesn’t work. Period. But the picture looks different when you consider a library like Boost.Units, which performs compile-time dimensional analysis. It works more or less like this:

Time     t1(10.1 * seconds), t2(10.2 * seconds);
Distance s1(12.0 * meters),  s2(21.0 * meters);

Time     tX = t1 + t2; // ok
Velocity vX = s1 / t1; // ok
Time     tY = t1 * t2; // ERROR: unit mismatch
Distance sX = s1 + t2; // ERROR: unit mismatch

Such a library turns a unit mismatch (a term from dimensional analysis) into a type mismatch (a term understood by the compiler). Now, this can definitely be advertised as a feature, and indeed it is the flagship feature of Boost.Units. Every user of this library relies on the guarantee that the last two lines always fail to compile. If for some reason they ever started to compile, on some newer compiler or after a library upgrade, this would be a serious bug, because the very goal of such a library is to render compile-time (type-system) failures.

Boost.Units is a spectacular use of the C++ type system, but there are more mundane cases where we have to make a non-trivial effort to make certain statements fail to compile. As one example, consider the bug in the Boost.Rational library that I described the other day. It is one of many bugs caused by implicit conversions. In short, it can be illustrated with the following example:

struct Rational
{
  int num, den;
  Rational (int n, int d = 1) : num(n), den(d) {}
};

Rational r = 0.5;

The last line just works; and because it works, a normal user will assume that the result will be:

assert (r.num == 1);
assert (r.den == 2);

But the result is different, because the meaning of this initialization is:

Rational r = (int) 0.5;

We never wanted this initialization to be valid. We never declared a constructor taking a double. It was injected there against our will; and now we have to make an extra effort to make the conversion from double illegal.

Counteracting implicit conversions

There are a couple of ways in which we can make the adverse conversion illegal:

  1. Declare the converting constructor (from double) private.
  2. Declare the converting constructor as a template and use enable_if to disable it for floating-point types.
  3. Declare the converting constructor and put a static_assert inside.
  4. Declare the converting constructor as deleted.

Declaring the unwanted constructor private is probably the oldest and best-known way of achieving the goal. It has certain limitations, though. A member function declared private is still a function: it is accessible to other member functions and friends, and we do not want them to use our function either. We can leave the function declared but not defined, which is slightly better, but still has some drawbacks. The problem is then detected not at compile time, but only at link time; and if you are compiling a library, you will not be warned at all. Also, it is a hack: the linker’s message that it cannot resolve the symbol is not likely to help you identify the problem. Additionally, the private-function hack will not work for free (non-member) functions. This is not a problem in our example, but it is a problem in general.

Another solution, available even in C++03, is to use the SFINAE trick in the form of enable_if. The mechanism behind it was briefly described in this post. Whether enable_if will work or not depends on how we use it. If we just add another constructor template to our class, the trick will not work:

#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_floating_point.hpp>

using boost::disable_if;
using boost::is_floating_point;

#define DISABLE_IF(C) typename disable_if<C, int>::type = 0

struct Rational
{
  int num, den;
  Rational(int n, int d = 1) : num(n), den(d) {}

  template <typename T>
    Rational(T n, DISABLE_IF(is_floating_point<T>))
    : num(n), den(1) {}
};

Rational r = 0.5; // still compiles!

This is because of how the enable_if mechanism works. If the condition is not satisfied (or, in the case of disable_if, if it is satisfied), the corresponding function is removed from the set of candidate functions. It will not be considered when selecting the best overload; but the process of selecting the best overload continues with the remaining functions. If another one matches (as in the case above), it will be used. The following, on the other hand, achieves our goal:

struct Rational
{
  int num, den;
  
  template <typename T>
  Rational(T n, T d = 1, DISABLE_IF(is_floating_point<T>))
    : num(n), den(d) {}
};

Rational r = 0.5; // fails as expected

Now, after removing the function from the candidates, there is no other candidate function left, and we achieve our desired compile-time failure. The only inconvenience we could experience now is that the failure message may convey too little information: “cannot convert double to Rational.” Not that bad in our case, but we could have wished for a better one: “Conversion from floating point numbers not yet implemented. Consider casting the argument to int or using function rational_from_double.”

In order to produce a custom message upon failure, we can use static_assert, or — if it is not available on your compiler — Boost.StaticAssert. Let’s give it a try:

struct Rational
{
  int num, den;
  Rational(int n, int d = 1) : num(n), den(d) {}

  Rational(double n) : num(n), den(1)
  {
    static_assert(false, "conversion from floating point...");
  }
};

Rational r = 0.5;

This doesn’t work as expected: the static assertion fails the compilation in every translation unit that includes our header. Whether we try to convert from double, or even use type Rational at all, is irrelevant. This illustrates how static assertions work: they render a compilation error as soon as the condition in the assertion can be determined. In our case, the result can be determined as soon as the compiler sees the declaration of the class.

In order to get the thing right, we have to make sure that the condition cannot be determined while compiling the declaration of the class, but at the same time it should be determined upon an attempt to convert from double. This is where templates can help us again:

struct Rational
{
  int num, den;
  Rational(int n, int d = 1) : num(n), den(d) {}
 
  template <typename T>
    Rational(T n) : num(n), den(1)
    {
      static_assert(!is_floating_point<T>::value,
                    "conversion from floating point...");
    }
};


Rational r = 0.5; // fails as expected

Now, the Boolean value in the condition can only be computed when we instantiate the template with a particular T. Plus, we are in control of the error message. But the solution is not ideal yet. Some people, in some contexts, want to test whether one type is convertible to another with the is_convertible type trait. They expect the following tests to pass:

using boost::is_convertible; // or std::

static_assert(is_convertible<int, Rational>::value, "");
static_assert(!is_convertible<double, Rational>::value, "");

If you try it, you will find that the second assertion fails: even though converting double to Rational results in a compile-time failure, double is convertible to Rational! So, you may ask what it really means “to be convertible” then. For the purpose of is_convertible, and also for the SFINAE mechanism, to be convertible means that there exists a conversion function from the source type to the destination type that can be selected in the overload resolution process. It is irrelevant whether this function is only declared (and not defined), whether it has a static assertion inside, or whether it triggers other template instantiations that would render an error. We only check whether overload resolution would succeed.

The fourth option is to use a C++11 feature: deleted functions:

struct Rational
{
  int num, den;
  Rational(int n, int d = 1) : num(n), den(d) {}
  Rational(double n) = delete;
};

Rational r = 0.5; // fails as expected

The intuitive meaning of such a declaration with the keyword delete is “I do not want this function.” To be more precise, a deleted function participates in overload resolution: when looking for the best matching overload, it is treated as a normal function; but when it is selected, this results in an ill-formed program: in the case of SFINAE, a substitution failure, and in the case of is_convertible, a negative answer. So, this is somewhat different from the enable_if solution: the deleted function is “sticky”; it wins the overload resolution and thereby immediately triggers an error. The error message is slightly better now: “cannot convert double to Rational because the selected constructor has been explicitly deleted.” At least it indicates that someone has made an effort to prevent this conversion, and there is a single place in the code responsible for the failure: there you can put a comment with an explanation.

Thus, as we have seen, deleted functions offer a certain advantage over functions with a static_assert inside; but the latter have one advantage over deleted functions: with static_assert you can display an arbitrary, clear message to the user. There appears to be no ideal solution at the moment. One has recently been proposed, though, in N4186. If it were accepted, we would be able to use it like this:

// NOT IN C++ (YET):
struct Rational
{
  int num, den;
  Rational(int n, int d = 1) : num(n), den(d) {}
  Rational(double n) = delete("conversion from floating...");
};

Rational r = 0.5; // fails as expected

This works like a deleted function, except that when the compiler error is generated, the message we provide is required to be displayed along with it, as in the case of static_assert.

Testing the feature

For better or worse, all the above solutions achieve their goal: they result in a compilation failure when someone inadvertently tries to pass a double where a Rational belongs. We may want to call it a “negative feature”: its value lies in the fact that some programs fail to compile, and this is desired. Like any other feature, we want to test it with some sort of unit test. In C++11 this is possible to a great extent with the extended SFINAE rules. But to make the task more difficult (and more real-life), we want to test it also on C++03 compilers. We also have to consider situations where static_assert is used to trigger the failure, in which case type traits or SFINAE techniques will not work. This is the task I face when maintaining a Boost library.

Such a negative feature is not testable from within C++. There is no function that tries to compile a program and “returns false” when the compilation fails (for whatever reason). We have to test it from outside a C++ program. The way it is solved in Boost is this: I write a small, minimal, deliberately ill-formed program that illustrates which construct I expect my library to turn into a compiler error. With a build tool, I run a compiler on it and inspect the result. If compilation fails, the build tool reports success; if compilation succeeds, the tool reports failure. Typically, there would be a number of such to-be-failed programs, and I want to test all of them. For instance, in our example, we could decide to also prevent the following use, which as of now is still legal:

Rational r(0.5, 0.4);

You can see this technique used in the Boost regression test results. See here for the results of Boost.Optional. An example of such an expected-to-fail program can be found here.


12 Responses to Desired compile-time failures

  1. Matt Dziubinski says:

    Nice article!

    One quick question — I’m wondering, what are your thoughts on marking the ctor explicit, i.e., `explicit Rational (int n, int d = 1) : num(n), den(d) {}`?

    This also prevents implicit conversion from `double`. One disadvantage(?) is that it would also prevent other — possibly desirable — implicit conversions (e.g., from `unsigned int`).

    That being said, given the context (using type safety for unit safety), perhaps it’s better to err on the safe side (and leave the option to enable more conversions to the class designer, by explicitly adding converting ctors where — and only where — desirable)?

    • One quick question — I’m wondering, what are your thoughts on marking the ctor explicit

      — I tried to outline the situation with Boost.Rational in this article. In short, they wanted to offer a conversion from int. Without going into a discussion of whether that is a good idea or not, once they offer a conversion from int, the other conversion, from double, immediately kicks in.

      An explicit constructor fixes a subset of the problems, but not all. We would still have this:

      struct Toy
      {
        Rational factor_;
        Toy() : factor_(0.5) {}
      };
      

      That being said, given the context (using type safety for unit safety), perhaps it’s better to err on the safe side (and leave the option to enable more conversions to the class designer, by explicitly adding converting ctors where — and only where — desirable)?

      — You lost me here. Are you trying to say that implicit conversions are bad as a rule? I am slowly starting to lean towards this view. I am also not sure what you mean by “explicitly adding converting constructors”.

      • mttpd says:

        Thanks for the reply!

        I see, Boost.Rational basically follows a decision to favor convenience here (to an extent).

        I’ve also noticed that this would only take care of copy initialization (`Rational r = 0.5;`) but not of direct initialization (`Rational r(0.5);`), which seems to be somewhat limiting. (Makes sense, given that implicit conversion is defined in terms of the former.)

        Yes on the implicit conversions. (From which follows the explicit ctors remark: By banning implicit conversions we effectively allow only construction from the types for which the ctors are explicitly defined — as in: the ones we write manually/explicitly add to the class at the point of definition.)

        I can see that there’s a convenience-safety trade-off — and perhaps in some situations the convenience aspect can be preferable. That being said, implicit conversions bugs can be really hard to track. Thus, overall (like you) I’m also becoming increasingly inclined to avoid these.

  2. gnzlbg says:

    I’ve wished a lot of times for a:

    static_assert_illformed(expr, message);

    feature. I really don’t know why the STL implementation maintainers aren’t fighting for this:

    std::list<int> a = {1, 2, 3};
    static_assert_illformed(std::sort(std::begin(a), std::end(a)),
                            "sort requires random access iterators");

    • You may be interested in reading this discussion. The committee decided not to allow speculative compilation, as it might put too much burden on compiler vendors, and you can fairly easily achieve a similar result with an external build system.

      • gnzlbg says:

        Thanks for the link, I see you (as a library writer) just wanted to have the same feature. I also see that most library writers in the discussion also agree with the feature.

        I disagree, however, with the argument that you can _fairly easily_ achieve a similar result with an external build system. That is just a lie.

        You can test for compilation failures by putting every single test in its own TU and using a build and testing system that allows you to check for compilation failures (like bjam).

        OTOH we have static assert which lets you write the tests immediately, next to the relevant code. A very easy, natural, and maintainable solution.

        Finally, we have library writers that want to do this right now anyhow. If it is possible, why then do most Boost libraries and most STL implementations not have any such compile-failure tests?

        Because those tests in single TUs are a pain to write, to set up, and to maintain; they are out of line, and if you are not using bjam (who uses bjam besides Boost?) but e.g. CMake, you are in for even more pain. People prefer not to test at all rather than write those tests. Hell, people even prefer to implement full Concepts as a library using SFINAE to partially test their libraries than to write those compilation-failure tests.

        Can it be done right now? Yes. People that need it do it? No. Why? Because compared with e.g. static assert, it is not something “fairly easy” to do, but more like “insanely difficult”.

        I know it will be hard to implement such a feature (I can think of 1000 reasons why it would mess things up). But I cannot understand why people are discussing adding testing support to C++, and this feature is not the first one on the list.

        Code not compiling is C++’s first line of defense. Right now, this line of defense is untestable. This has to be fixed, now, somehow. If there is no other solution than speculative compilation, I’m very sorry for the compiler writers, but they decided to write C++ compilers, so they cannot really argue that it is not their own fault.

        • I do not think the situation is that bad. If you choose not to use static_assert (provided you can afford that) but stick to SFINAE and deleted functions, you *are* able to test. As Paul suggests in the other comment, if you “hack” the conditions so that they also contain a C-string with a custom message, you can get closer to the capability of static_assert.

    • szborows says:

      I believe Concepts Lite is what you are looking for 🙂

      • Not necessarily. Concepts Lite only checks the valid expressions “on the surface”, much like SFINAE and is_convertible. It will not detect that an expression would be invalid due to a nested static_assert.

  3. Paul says:

    > The only inconvenience we could experience now is that the failure message may convey too little information: “cannot convert double to Rational.”

    This is not true. I compile this:

    #include <type_traits>

    #define REQUIRES(...) typename std::enable_if<(__VA_ARGS__), int>::type = 0

    struct Rational
    {
      int num, den;

      template <typename T>
      Rational(T n, T d = 1, REQUIRES(!std::is_floating_point<T>()))
      : num(n), den(d) {}
    };

    int main()
    {
      Rational r = 0.5; // fails as expected
    }

    And I get an error like this:

    rational.cpp:17:14: error: no viable conversion from ‘double’ to ‘Rational’
    Rational r = 0.5; // fails as expected
    ^ ~~~
    rational.cpp:6:8: note: candidate constructor (the implicit copy constructor) not viable: no known conversion from ‘double’ to ‘const Rational &’ for 1st argument
    struct Rational
    ^
    rational.cpp:6:8: note: candidate constructor (the implicit move constructor) not viable: no known conversion from ‘double’ to ‘Rational &&’ for 1st argument
    struct Rational
    ^
    rational.cpp:11:26: note: candidate template ignored: disabled by ‘enable_if’ [with T = double]
    Rational(T n, T d = 1, REQUIRES(!std::is_floating_point<T>()))
    ^

    So it explains pretty clearly why each of the overloads can’t be called, including the disabled constructor. However, if `[with T = double] !std::is_floating_point<T>()` is not clear enough, you can always insert a custom message by changing the `REQUIRES` clause to `REQUIRES(!std::is_floating_point<T>() && "conversion from floating point…")`, so now you have an error like this:

    rational.cpp:18:14: error: no viable conversion from ‘double’ to ‘Rational’
    Rational r = 0.5; // fails as expected
    ^ ~~~
    rational.cpp:6:8: note: candidate constructor (the implicit copy constructor) not viable: no known conversion from ‘double’ to ‘const Rational &’ for 1st argument
    struct Rational
    ^
    rational.cpp:6:8: note: candidate constructor (the implicit move constructor) not viable: no known conversion from ‘double’ to ‘Rational &&’ for 1st argument
    struct Rational
    ^
    rational.cpp:12:26: note: candidate template ignored: disabled by ‘enable_if’ [with T = double]
    Rational(T n, T d = 1, REQUIRES(!std::is_floating_point<T>() && "conversion from floating point…"))
    ^

    Using `enable_if` should be preferred over `static_assert`, since it produces the most helpful error. First, it points to the place where the user made the error instead of inside library code, and second, it gives information about why the overload can’t be called.

  4. ++supporters for N4186, that WOULD be nice.

    • Chris Glover says:

      I really really hope not. In my opinion this just adds unnecessary complication when a simple comment on the deleted function will do.

      // Conversion from floating point not implemented because…
      Rational(double n) = delete;
