Why make your classes final?

In this short post I wanted to share a small but interesting thing I learned. You probably know that in C++11 it is possible to require that no one inherit from your class. You just declare your class as “final”:

struct B final {
  int i;
};

struct D : B {   // ERROR
  int j;
};

This feature is already implemented in new compilers. I know of at least three: Clang 3.0, GCC 4.7 and VC 11. I often wondered why anyone would want to prohibit inheritance from one’s class in C++. Recently I came across one good example where this is useful.

Perhaps there are some good reasons to do that, which I always failed to understand or notice. I would be glad to learn about them. The one I found comes from the recent ISO C++ Committee’s mailing. The mailing is in general an interesting resource to learn about the development and the future of C++. You can find the pre-Portland mailing here. The proposal in question is N3407 (Proposal to Add Decimal Floating Point Support to C++). The author proposes the addition of three types that would represent decimal floating-point types: decimal32, decimal64 and decimal128. Implementations (i.e., compiler writers) would have the freedom to implement them as either built-in or user-defined types. This freedom has a certain impact on programmers. Suppose you happen to work with an implementation that provides decimal64 as a user-defined type, and suppose you decide to inherit from it in order to provide some additional features. So far, so good. But now suppose you need to migrate your program or library to other platforms (and compilers), and one of them implements decimal64 as a built-in type. Now your program no longer compiles, and it may take a long time to rewrite it.

In contrast, if type decimal64 (on platforms where it is a user-defined type) is declared final, the compiler warns you even on your initial platform that you must not inherit from it if your program/library is to remain platform-independent or compiler-version-independent. So, in general, you may want to prevent inheritance because you allow for the possibility of changing your implementation to a built-in type, and you want to spare your users unpleasant surprises in the future. The same technique could be applied when implementing type std::vector<T>::iterator, which on some platforms/configurations could be implemented as T*.
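
To illustrate the idea (the class body below is only my own placeholder, nothing taken from N3407), an implementation that today ships decimal64 as a user-defined type could declare it final, so that no code starts relying on deriving from it:

// Hypothetical sketch: decimal64 as a user-defined type that may one
// day be replaced with a built-in type.
class decimal64 final {
public:
  decimal64() = default;
  explicit decimal64(double v) : repr_(v) {}  // placeholder constructor
  // ... arithmetic operators, conversions, etc. ...
private:
  double repr_;  // stand-in representation, not a real decimal encoding
};

// struct my_decimal : decimal64 { };  // ERROR: cannot derive from a final class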


44 Responses to Why make your classes final?

  1. Not an impressive reason. Java and .NET have dozens of final classes, and they don’t have the platform-dependence problem.
    It’s just that I want my class *not* to be treated as “is-a” relative of another class.

  2. Alb says:

    Hello,

    Thanks for your post, it was quite interesting. I have been wondering for a while why the final keyword was added.

    I never found what the interest could be in deciding to stop an inheritance hierarchy (of course, optimisation, while being a perfectly acceptable goal, is not a *design* interest).

    In fact, I am still not convinced of the interest of such a new feature. The argument that it can avoid platform migration problems did not convince me. First, it seems to concern only a few developers, the compiler/platform ones. Second, there exist tricks to prohibit inheritance in C++03. Third, from a migration point of view, it’s just like making assumptions about the size of standard types, about the vector implementation, and so on: it’s just a big mistake. Also, it makes no sense to derive from a decimalXX type, since inheritance is more appropriate for reference semantics and decimalXX clearly has value semantics (*!~^@ # Java).

    In short, I am not convinced by the example proposed and I have still not found a design interest for the new final keyword, neither for inheritance nor for virtual functions.

    But I still hope to find such an example, since I am convinced that it can’t have been added just to ‘be like the other mainstream languages’…

  3. Hi Ajay.
    Thanks for taking the time to talk about it. But I believe it does not address my question. It just re-states the question using different words. Why would you “want your class not to be treated as ‘is-a’ relative to another class”?

    I know some good answers for Java, which does not have a concept of a constant object, but not for C++. I would be glad to learn some, though.

    • Constant objects and final/sealed classes are two different things; your comparison is wrong! Java’s final keyword, however, is the equivalent of the C++ ‘const’ keyword for data assignment.

      I said, in a situation, where I don’t want my class to be inherited (or say, be in “is-a” relationship), I may use final keyword.
      But not for the reasoning you mentioned. In fact, Java/.NET do not have the independence problem, yet they have final classes. C++ does have this problem, but didn’t have the final keyword (before C++11).

        • Certainly, constant objects and final classes are different things; but the latter feature can be used to cover for the lack of the former feature in Java, as described here. But this argument only makes sense for Java — not C++.

        I said, in a situation, where I don’t want my class to be inherited (or say, be in “is-a” relationship), I may use final keyword.

        — True, but the most interesting question remains unanswered: why would you not want your class to be inherited from?

        In fact, Java/.NET do not have the independence problem, yet they have final classes. C++ does have this problem, […]

        — What is an “independence problem”?

          • I meant the platform/compiler independence problem for the code. Managed languages wouldn’t suffer from that (the justification you gave for the decimal classes).

          To answer, let me try:
          1. The type I created is concrete, and it doesn’t make sense for it to be in an “is-a” relationship. But it logically fits a “has-a” relationship. Design Patterns do favor “has-a” over “is-a”. A ‘Rectangle’ class and a Time class are examples.
          2. In an inheritance chain, I have written my class, which implements a few virtual methods. This new class is designed such that it cannot work for other classes that may derive from the same base class/interface. Say IView, my class CView, and another class CViewEx. But CViewEx, if instantiated *against* IView, may break because of CView, which I implemented in a different way. Maybe I used DirectX to render text, but the CViewEx implementer may assume GDI (as might be documented by IView).
          3. I have exported a class whose data members I don’t want to make public. By this, I also mean I don’t want the *types* of those members to be public. Maybe I am using concurrent_vector, and the client (inheritor) may not have this vector. For this, I would just hide everything behind a handle/pointer. If I do that, it won’t be inheritable anyway, since the actual data types are not known to the inheritor at all!

          I personally encountered (2), and (3).

  4. Michael Price says:

    One great reason is to force consumers of your library to use composition instead of inheritance.

    This would be important for a type that does not have a virtual destructor (for performance reasons). Case-in-point: deriving from a standard library container such as std::vector.

    Sure, there’s always private inheritance, but why allow something that’s so very easy for newcomers to your library to get wrong?
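
    For illustration, a minimal sketch of the composition alternative (the names are made up):

    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    // Wrap the container instead of deriving from it: no one can delete
    // a NameList through a std::vector<std::string>*.
    class NameList final {
    public:
      void add(std::string n) { names_.push_back(std::move(n)); }
      std::size_t size() const { return names_.size(); }
    private:
      std::vector<std::string> names_;  // has-a, not is-a
    };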

    • Hi Michael, thanks for the explanation. I must admit I am a bit suspicious about this argument, though. If my intention is to “force my consumers to use composition instead of inheritance”, then by all means: final classes are the perfect tool for the job. I am just not sure if this idea of forcing my consumers to do anything is a noble intention.

      I agree that inheriting from a class with a non-virtual public destructor is likely to cause problems, but: (1) the author of the to-be-derived-from class may not be the best person to make the call to prohibit this, (2) I am aware of use cases where such inheritance makes sense and is reasonably safe.

      In my programs I often define lots of ‘concrete’, or value-semantic, classes, whose destructors are non-virtual and public. I do not expect that there will ever be a need to inherit from them, but I leave this question open for my consumers. Does your example mean that in order to follow good practice I should make every one of them final? Or, in other words, if my class has a public non-virtual destructor, should I automatically define it as final?

      Finally, let me give you an example where I find it useful to have a non-final value-semantic class with non-virtual destructor:

      struct CarrierKey
      {
        string carrierName;
      };
      
      struct ServiceKey : CarrierKey
      {
        string flightNumber;
      };
      
      struct FlightKey : ServiceKey 
      {
        Date flightDate;
      };
      

      Say, I use the three classes as keys for different DB tables: some tables store information per carrier, some per service, some per flight. When I define my keys this way, I can use FlightKey as a ServiceKey without any risk of slicing, regardless of whether I pass arguments by value or by reference:

      void use1( ServiceKey key );
      void use2( ServiceKey const& key );
      
      FlightKey myFlight;
      use1(myFlight);  // ok
      use2(myFlight);  // ok
      

      The example of std::vector is a better one, but still I can think of situations where you want to inherit from it only to provide a custom constructor that sets up a special initial value of the vector, but leaves the guts of the implementation intact; for instance, it does not need to do anything in the destructor. Here again, I can pass the vector around by value and even destroy it through a pointer without any harm.

      You could argue that one should use a factory function instead, but that would be at the cost of invoking a move-constructor, which might be unacceptable in some situations.
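
      For concreteness, here is a sketch of the kind of derivation from std::vector I described above (the name Histogram is made up); it only adds a constructor and no data members:

      #include <cstddef>
      #include <vector>

      struct Histogram : std::vector<int>   // not meant as OO subclassing
      {
        explicit Histogram(std::size_t buckets)
          : std::vector<int>(buckets, 0) {} // start with 'buckets' zeroed counters
      };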

  5. Thanks for the examples, Ajay. I am trying to figure out whether they make a sufficient rationale for the addition of final classes to C++; for instance, whether there already exist other ways to address the same problem.

    1. The type I created is concrete, and it doesn’t make sense for it to be in an ‘is-a’ relationship. But it logically fits a ‘has-a’ relationship. Design Patterns do favor ‘has-a’ over ‘is-a’. A Rectangle class and a Time class are examples.

    Certainly, in many (most?) cases it is better to just contain an object of type Time than to derive from it. But the moment of defining class Time may not be the best place to prohibit inheritance once and for all. In other words, the guy who considers inheriting his type from Time should (IMO) make the call whether to contain or inherit, because he has all the necessary knowledge and context. The programmer who defines class Time cannot imagine all possible use cases of his type, so his decision to prohibit inheritance is likely to be premature and over-restrictive.

    2. In an inheritance chain, I have written my class, which implements a few virtual methods. This new class is designed such that it cannot work for other classes that may derive from the same base class/interface. Say IView, my class CView, and another class CViewEx. But CViewEx, if instantiated *against* IView, may break because of CView, which I implemented in a different way. Maybe I used DirectX to render text, but the CViewEx implementer may assume GDI (as might be documented by IView).

    I am not sure I understand. This is what I understand: CView inherits directly from IView and CViewEx inherits directly from IView. If CViewEx inherited from CView, it wouldn’t work because of the incompatibility you described. Did I get it right? But if so, I also think that a final class is not a good tool for the job. If the author of CViewEx knows that inheriting from CView would not work, he should make the decision not to inherit from CView. CView need not prevent the inheritance. Imagine that some different programmer wants to add another class CSuperView and it so happens that it is ‘compatible’ with CView and much of CView‘s logic can be re-used. If CView were defined as final, CSuperView would have to repeat lots of redundant code. In short, again, the author of CView is not in a good position to judge whether no one would ever have a reason to inherit from his class.

    3. I have exported a class whose data members I don’t want to make public. By this, I also mean I don’t want the *types* of those members to be public. Maybe I am using concurrent_vector, and the client (inheritor) may not have this vector. For this, I would just hide everything behind a handle/pointer. If I do that, it won’t be inheritable anyway, since the actual data types are not known to the inheritor at all!

    I do not understand what you are trying to say. See this declaration:

    class Mystery
    {
      struct SecretType1{ /*...*/ }; // private type
      struct SecretType2{ /*...*/ }; // private type
      
      SecretType1 mem1;
      SecretType2 mem2;
      // ...
    };
    

    Types SecretType1 and SecretType2 are private. They have not been made public and I did not have to use ‘final’. Is this what you mean? Alternatively, I could have used a “handle-body” (or pimpl) idiom:

    class Mystery
    {
      class MysteryImpl; // only declaration
      unique_ptr<MysteryImpl> pimpl;
    
    public:
      // ...
    };
    

    But again, I did not need any ‘final’ here. So, I do not understand how making my class ‘final’ would help hide anything.

  6. alfC says:

    But isn’t this going against Strouptrup goal (or desire) of making built-in types and user-defined types to be in equal footing? The goal should be to make possible to inherit from built-in types, not preventing it.

    • Being able to inherit from built-in types — while surprising at first glance — appears doable and well defined; and indeed it would help reduce the gap between built-in types and classes. But since we do not have it (yet), it looks like N3407 took the reasonable approach of fitting into reality.

      But one might say that if C++ ever adds the ability to inherit from built-in types (and enums?), the list of arguments for having final classes shrinks even more.

  7. philippe dunski says:

    Hello,
    First, please excuse my bad English, but I’m a French speaker 😛
    Second, I haven’t read all the comments, so someone has maybe already said something similar; please excuse me too in that case.

    My thinking is that the keyword final should be related to the class’s value or entity semantics and to the rules of programming by contract.

    I mean: only classes that have entity semantics should be candidates for an inheritance relation, but, even if a class has entity semantics, you also have to respect some programming-by-contract rules in order to comply with the LSP.

    On the one hand, you may, for example, have to create a class “Point”: a point clearly has value semantics, since you can have two points at different memory addresses that are considered equal (with the same X-axis and Y-axis coordinates).

    This “Point” class will maybe have an equality operator and will never have virtual member functions. It’s simple: you must NOT inherit from this class; the Point type will NEVER BE PART OF A (public) INHERITANCE HIERARCHY.

    As far as I can see, it would be useful to declare this Point class as “final” ;).

    On the other hand, to comply with the LSP, you have to respect three programming-by-contract rules:
    1- preconditions cannot be strengthened in the subtype
    2- postconditions cannot be weakened in the subtype
    3- the base type’s invariants have to be respected in the subtype.
    Following those rules, you CANNOT consider having a (public) inheritance relationship between a “simple list” and a “sorted list”, since the sorted list has a precondition and a postcondition: it is sorted before any access to its elements and should be sorted after any access to its elements.

    As having a precondition is stronger than having no precondition at all, the precondition rule is violated, and you should never make “sorted list” inherit (publicly) from the simple list.

    I would then not be shocked if std::list (and other collection classes) were one day declared as “final” ;).

    • Hi Philippe. Thank you for your comment. Having read it, as well as a couple of other comments in this post, I can see that programmers from “pure OO” languages have a different view on what class inheritance is for than C++ programmers, for whom OO is only one of a couple of programming styles (or paradigms). This is probably why it is so easy to see why final classes are useful in a “pure OO” language, but very hard (if they are useful at all) in a multi-paradigm language like C++.

      Yes, similar observations have been made in other comments also. In C++ it is useful to derive from structs (or PODs) that not only do not have virtual methods, but in fact do not have any methods at all. For one example, see this comment. You may have other reasons to inherit from some class than to represent an is-a relationship from the OO world. Sometimes you may want to derive from a “value type”.

      One should never have SortedList inherit from List — this is true. However, some other type LoggingList (which logs some list operations more thoroughly) could (under the rules of the Liskov Substitution Principle) and should be able to inherit from List. If List declares “no one can inherit from me”, it prevents many cases of valid inheritance, only to prevent some potentially invalid inheritance.

      To give a more hard-core counter-example: class Dog should never inherit from class DropDownList, because the two classes have nothing whatsoever in common. But does that mean DropDownList should be declared ‘final’?

      Regards,
      &rzej

  8. ethouris says:

    For me, the idea of final classes is completely wrong for C++. The idea is perfectly fine for final methods – that’s because you should be able to prevent deriving classes from automatically exposing a particular method as a slot for overriding (which helps precisely define exactly which overridable methods the class exposes).

    For classes it’s different – in languages like Java it may make sense because there every method is virtual (unless final) and you derive a class to make use of overriding. That’s not always the case in C++ – you may derive just to have a little syntactic sugar (and you find it better than creating an external function to do the same), you’d like to inherit typedefs, or you extend some template instantiation to tailor it to your needs – none of these things can be found in other OO languages. So final, declared for a class, should not make the class non-derivable; it should only mark all virtual methods as final. This way, you can still derive, but you cannot “subclass”.
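
    For example, a quick sketch of the “syntactic sugar” kind of derivation (my own names; no OO subclassing intended):

    #include <map>
    #include <string>
    #include <vector>

    // Shortens a long template instantiation and pulls in its typedefs.
    struct Index : std::map<std::string, std::vector<int>>
    {
      using base = std::map<std::string, std::vector<int>>;
      using base::base;  // inherit the constructors (C++11)
    };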

    Since you have found a good example of using final, there may be an additional thing – the compiler should issue a warning when such a class is derived from. The standard may express it as good advice, as warnings are normally not part of the standard.

  9. QbProg says:

    I have an interesting usage for final classes: devirtualization.
    I don’t know if compilers will actually do that, but a final class could avoid the usage of virtual calls if the type is known to be final.
    There are many times when, despite having a virtual hierarchy, you end up using an object through its effective (most derived) type. In that case, the compiler must still make the virtual call instead of a direct function call (because the effective object could potentially be another derived class). With final, the compiler can assume there’s no child class.
    class Base { virtual A (); };
    class Deriv final { virtual A() override; };

    Base * baseptr = ….
    Deriv * derivptr = static_cast(baseptr);
    derivptr->A(); // with final could be a direct function call instead of a virtual one

    • QbProg says:

      Ok, too quick, very bad code 🙂

      class Base { public: virtual void A(); };
      class Deriv final : public Base { public: virtual void A() override; };
      
      Base * baseptr = ...;
      Deriv * derivptr = static_cast<Deriv*>(baseptr);
      derivptr->A(); // with final could be a direct function call instead of a virtual one
      
  10. Cedric says:

    QbProg: that was exactly what I was about to suggest: the final keyword can be used by compilers to avoid virtual calls, and this can lead to dramatic optimizations.
    I too have been wondering about good usages of the final keyword.

  11. andyp says:

    Most of the reasons not to inherit from some class are listed here:
    http://stackoverflow.com/questions/6806173/why-should-not-i-subclass-inherit-standard-containers
    Mostly it is related to std containers but applicable to other classes too:

    1. Lack of virtual destructor
    2. Assumptions on method implementation (e.g. complexity of operations on std containers). You certainly would like to prohibit inheritance if you want to enforce the assumptions.
    3. (Least relevant and most debatable) Inheriting to extend functionality is almost always a bad idea. Inheritance should rather be used to provide polymorphic behavior.
    • Here again, I would say that the answer is too biased towards OO, and does not take into account the multi-paradigm nature of C++. The StackOverflow title uses the phrase “subclass/inherit” in a way that suggests that “to subclass” means the same as “to inherit”. I am not an expert on naming, but to me the two are slightly different. And in the context of this discussion the subtle difference becomes significant. In my vocabulary, “to inherit” means to type the colon in the class definition in the source code:

      class X : public Y // ...
      

      “To subclass” (to me) means to introduce an ‘is-a’ relation in the OO sense, in the UML-diagram sense, to adhere (or not) to the Liskov Substitution Principle, etc.

      People may use inheritance (at the source-code level) to describe sub-classing (at the OO-thinking level) or for other purposes, like “veneers”, opaque typedefs, or other things mentioned in this discussion. The question on StackOverflow only covers inheritance for subclassing.
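
      To make the “veneer” case concrete, a rough sketch (the name is made up): a derived class that adds no data members and no virtual functions, only a convenience constructor:

      #include <cstddef>
      #include <vector>

      template <typename T>
      struct ZeroedVector : std::vector<T>   // inheritance, but no OO subclassing
      {
        explicit ZeroedVector(std::size_t n) : std::vector<T>(n, T()) {}
      };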

      • andyp says:

        Veneers do not look like a good practice to me. Why not have a free-standing function operating on parts of the class hierarchy instead? Why should everything be a class?

        Back to the “Why make your classes final?” question – the most convincing reason for me is the assumptions on method implementation in other parts of the code. By making the class final you indicate that you just do not want to deal with any overridden implementations of your class methods. This can be useful for library writers.

          • @andyp: I believe we are reaching the argument that could be summarized as “What do I consider good? What do others consider good? Should I prevent others from doing things that I consider bad and they consider good?”

          Veneers do not look like a good practice to me. Why not have a free-standing function operating on parts of the class hierarchy instead? Why should everything be a class?

          I believe that others do consider them a good practice, given that they write papers on it. I agree that a lot of what they do could be done by free-standing functions, and there is no harm or shame in using them (free-standing functions). However, veneers do offer some advantages. First, one single name (the class name) already unambiguously indicates a number of things that will work differently (rather than using a number of functions in a number of places). Second, if you want to customize the behavior of the constructor, you cannot do the same with a function without a potential loss of efficiency.

          But this is just veneers. Consider opaque typedefs for another example. Do you consider opaque typedefs a bad programming practice?

          Back to the “Why make your classes final?” question – the most convincing reason for me is the assumptions on method implementation in other parts of the code.

          Yes, for you. Opaque typedefs or extension of POD types are considered by you as less sound, right?

          By making the class final you indicate that you just do not want to deal with any overridden implementations of your class methods.

          Not quite so. You indicate far more: you do not want to allow opaque typedefs of your type, the use of veneers, or any other technique that programmers other than you may want to use.

          This can be useful for library writers.

          If the library author’s goal is to impose his way of looking at things (“inheritance shall only be used for OO subclassing”), then yes, I agree. But if the library author intends to address a wide variety of programmers who use different styles of programming, making his types final may disturb them.

  12. Dinka says:

    Having just recently re-read Scott Meyers’ “Item 33: Make non-leaf classes abstract”, I think final would come in handy in his Animal/Lizard/Chicken example. The problem of the assignment operator, as presented, is solved by making Animal abstract and Lizard and Chicken leaf classes. However, the semantics of the assignment operator will break as soon as a new class inherits from any of the leaf classes. By making the leaf classes final, you make sure that the person extending the design is aware that the current design counts on Lizard and Chicken being leaf classes, and will make adjustments accordingly.

    Or to try and generalize a bit more, if your design represents a certain concept and counts on certain classes to be leaf classes, then making the classes final will make sure your point gets across if there is ever a conceptual change that requires a design extension/change.
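
    For illustration, a sketch of how that might look (the class names come from Scott’s example; the bodies are just minimal stand-ins of mine):

    class Animal                        // non-leaf, kept abstract
    {
    public:
      virtual ~Animal() = default;
      virtual void eat() = 0;
    protected:
      Animal& operator=(const Animal&) = default;  // no assignment through Animal&
    };

    class Lizard final : public Animal  // leaf: the design counts on this
    {
    public:
      void eat() override {}
      Lizard& operator=(const Lizard&) = default;
    };

    class Chicken final : public Animal // leaf: the design counts on this
    {
    public:
      void eat() override {}
      Chicken& operator=(const Chicken&) = default;
    };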

    • @Dinka: Thanks for your comment. Again, as in the case of a couple of other comments here, I believe that your point of view is biased towards OO programming techniques. Note that in C++ people use inheritance for reasons other than representing an is-a relation in the OO sense. These reasons could be:

      1. Opaque typedefs
      2. extension of POD types
      3. “veneers”

      For all of these techniques the statement “the semantics of the assignment operator will break as soon as a new class inherits from any of the leaf classes” is incorrect.

      • Dinka says:

        Hi Andrzej,
        Thank you for replying to my comment.
        You mention 3 cases of non-breaking inheritance.
        I would not discuss 2) extending POD types, as I am not talking about POD types.

        Regarding 1) Opaque typedefs:
        From the document that I know of (N1891: Progress toward Opaque Typedefs for C++0X), it doesn’t look to me as if opaque typedefs would use inheritance.
        From Chapter 2 of the proposal:
        3. Are OT (opaque-type) and UT (underlying-type) related by inheritance?
        Proposed answer: is_base_of<UT,OT> shall be false, and
        is_base_of<OT,UT> shall also be false.

        I don’t know what happened to this proposal, but to an untrained eye it doesn’t look like it made it into the standard, so basing the validity of one accepted new feature on a theoretical one is a moot point. (Please correct me if I’m wrong in any of these statements, or if you were referring to something else when you talk about opaque typedefs.)

        Regarding 3) Veneer types.
        I do accept that you would not be breaking the design with a veneer type. However, I would still argue it’s a conceptual change, since what was designed to be a leaf class is a leaf class no more.
        Also, while veneers might be the best approach in certain cases, in others they are not. If I am free to modify the code (the re-design suggested in Scott’s example would suggest we have that freedom), and there are only a few classes where the addition makes sense, I would argue it is better to extend the class than to add a veneer.

        My point being that there are certain situations when final improves the clarity of design. While it might be the case that it only applies to some design choices, and that it is biased towards OO programming techniques, I would still argue that, as long as it improves some designs, it is a valid addition.

        • @Dinka: this discussion is getting interesting. After what you have written I am able to express my position more clearly. You say that you sometimes need to require that no one “specialize” your class (in the OO sense), and final classes enable you to say that. Am I right?

          But if we want to answer the question “what do final classes do”, the answer is not “they ensure that no one specializes my class in the OO sense, and nothing more”. The answer is “they prohibit a number of things simultaneously: one is class specialization, but also lots of other programming techniques”.

          One can think of a different mechanism, which would prevent class specialization in the OO sense, but would still allow the usage of other techniques that rely on C++ inheritance. It is possible. We could allow inheritance, but prevent any method overriding. In that case everyone would be happy. But with final classes we have a controversy: you solve one problem by introducing difficulties for other programmers.

          I mentioned three techniques that rely on inheritance but have nothing to do with OO sub-typing. I am sure there are many more, but these three came to my mind quickly. For one other example, one can choose to change composition into private inheritance in order to enable the Empty Base Optimization, as sketched below.
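
          A minimal sketch of that last case (the names are invented by me):

          #include <cstddef>

          struct EmptyPolicy {};               // stateless

          class WithMember
          {
            EmptyPolicy policy_;               // occupies at least one byte (plus padding)
            int*        data_;
          };

          class WithBase : private EmptyPolicy // EBO: the empty base adds no size
          {
            int* data_;
          };

          static_assert(sizeof(WithBase) <= sizeof(WithMember),
                        "with EBO the base version is typically smaller");

          If EmptyPolicy were declared final, this option would simply not be available.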

          To answer your question about opaque typedefs, I understand it as a concept or an idea that can be “materialized” in different ways. One is a language extension as proposed in N1891, another can be a library solution:

          template <typename UT>
          struct Opaque : UT
          {
            // 'inherit' constructors (C++11)
            using UT::UT;
          };
          

          Now, X and Opaque<X> have the same public interface. But it only works when UT is a non-union class type that is not declared final.
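
          A usage sketch of the above (X and Meters are my own illustration):

          struct X
          {
            explicit X(int v) : value(v) {}
            int value;
          };

          using Meters = Opaque<X>;   // distinct type with X's interface

          Meters m(5);                // uses X's constructor, inherited via 'using UT::UT'
          int    v = m.value;         // X's public members are available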

  13. I think ‘final’ can be used to prohibit polymorphism. Say you have a Base class and there’s a function – maybe an allocator or factory class – that receives a Base* pointer and relies on it being a real instance of Base, not a Derived with possibly different memory requirements, and you want to make sure that the user does not accidentally pass a derived class pointer to it.

    I’m not sure if it would be useful in the real world, though, since the side effects of disabling inheritance are too big for that small benefit.
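
    A sketch of that situation (all names made up): a pool that hands out and reclaims slots of exactly sizeof(Widget) bytes:

    class Widget final      // final: no derived object can be larger than this
    {
      int data_[4];
    };

    class WidgetPool
    {
    public:
      void release(Widget* w)
      {
        w->~Widget();
        // ...return the sizeof(Widget)-sized slot to the free list...
      }
      // allocation side omitted
    };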

  14. Dinka says:

    Hi Andrzej,

    I think I see what you are getting at. Your premise is that final is undesirable because it prevents valid design choices as well as preventing misuse of implemented behaviour? To elaborate, in my example it would be bad (read “break the design”) to implement a derived class which contains additional data, but it would not be bad to implement a veneer-like derived class.

    I tried to think of a case where there is no safe way of extending a class, but nothing came to mind.

    I then tried to think of other design choices that do the same (i.e. prevent inappropriate as well as valid use cases). The first thing that came to mind is any design pattern that makes the constructor private (because a private constructor is so similar in its consequences to making the class final). However, you could argue that examples like this do not prove that final is good, but rather that these other methods are also flawed.

    My next step was to try and find the original motivation for final (otherwise known as find_the_smarter_and_more_experienced_person_to_fight_your_battles approach 🙂 ) which drew a blank. The only motivation I found was “This is a good attribute because it allows the compiler to emit a message if the class or function is extended “ (N2761). IMHO, that is not sufficient motivation.

    To conclude, I think I’m now leaning towards your view of final. I would, however, still use it in the environment where I have control of the complete code (i.e. where final can be removed if there is a legitimate reason to do so) as a hint to the future developer in situations like the one I mentioned in my first post.

    Thank you for making me flex my brain cells 🙂

    Regards,
    Dinka

    • This is also my impression: that there was not sufficient motivation to add “final classes” to C++. I tried to ask the experts (see this discussion), and they also appeared to confirm that the only motivation for it was “because other languages have it”, which sounds poor.

      Then, from the discussion in this post I learned that it can be useful in some cases. One is yours: when the entire class hierarchy is your implementation detail that you do not expose to anyone, and you want the compiler to warn you about some things.

      Second, mentioned by QbProg in this discussion, is “devirtualization” of some function calls: it gives the guarantee to the compiler that certain calls will never be virtual and it can skip the vtable look-up. You sacrifice some elegance or extensibility/usage options in order to gain performance — a valid compromise in C++ realm.

      Third, when you do not want your users to get used to your class being “derivable from” because you consider changing your implementation into using a built-in type. This is the situation I tried to describe in the post.

      I believe it is the most interesting discussion in this blog (so far).
      Regards,
      &rzej

  15. Tony says:

    How many times have I cursed the authors of very well known and widely used libraries for making all the other methods virtual except the one that I really need? Many. Would final make things much worse? I doubt it. If C++ is multi-paradigm rather than pure OO, should it avoid a simpler syntax for prohibiting inheritance that users coming from more pure OO languages may want? The argument for this is probably at the same level as the argument that other languages have it.

  16. Thomas says:

    Final is wrong and bad. Just because Java stuck its head in the fire to fix a design error doesn’t mean all other languages have to, and they shouldn’t. Final causes far more pain than it fixes. A class’s method list is an API, and by making any part of it final you’re pretending to be able to second-guess all the uses that API will ever be put to for the rest of eternity.

    Don’t be silly; don’t use “final”.

    • Hi Thomas. To a great extent I agree with your position, but let me point out one thing that I learned from the comments on this post. “Using «final»” means two different things:

      1. Annotating member functions as final.
      2. Annotating entire classes as final.

      While I do not know of any practical usage of the former (much like you), I see one usage of the latter: giving a hint to the compiler for it to be able to apply useful run-time optimizations. QbProg mentioned that in the comments above. This optimization is called de-virtualization.

      • Cedric says:

        Agreed with Andrzej,
        We may see the final keyword more as a pragma to inform the compiler that there is no chance that the actual run-time class differs from the declared class, opening the door for optimizations like bypassing the virtual table and, mainly, inlining function calls.

      • Thomas says:

        I understand that, but I think this is a bit like handing out shotguns to handle a mosquito problem; the damage done by people who use it for other things far outweighs the utility for the very few that actually need this functionality.

        “Final” not only sounds rather positive; the fact that one has to use it in Java (as a kludge for a bug in the language design) means that users coming from Java think it’s perfectly normal, and it’s all too common to see it plastered all over the place, even in code by people who have only read Java code. A “devirtualize” keyword that did exactly the same thing would be substantially better, as it at least sounds like a special action.

        An even better solution would be a compiler (or compiler option) that spots the inlining opportunity without having to mangle the language with such a massive over-reaction to what is a very minor optimization chance.

        • Cedric says:

          Hi Thomas,

          I understand your position, and sometimes the cure can be worse than the disease. Though I don’t see it like that in this situation – a personal point of view, and I have not practiced Java.

          One last thing: I doubt that a compiler would be able to inline virtual functions, because the code could be compiled in a separate compilation unit (say a lib) and called from outside. At least in this situation the compiler has no information on the pointers that may be used. But it is true that optimization pragmas should be identified as such, and that could have been the case for the final keyword.

          Now it’s done, let’s focus on best practice and train people to use final wisely…

      • Chris says:

        The former also permits avoiding a virtual function call. Any call to a final overrider via an instance or reference or pointer that has the static type that declares the method final can be de-virtualized.

        class IFile
        {
        public:
          virtual bool Write(/*...*/) = 0;
          // etc
        };
        
        class DiskFile : public IFile //< Not a final class
        {
        public:
          virtual bool Write(/*...*/) final override;
          // etc
        };
        
        void Foo()
        {
          DiskFile df;
        
          df.Write(/*...*/); //< Can be de-virtualized, even inlined
        }
        
  17. Ricardo Costa says:

    Another article has been written about this subject with some insights:
    http://msinilo.pl/blog/?p=1329

  18. dyp says:

    There might be a surprising way in which final could be useful: To manipulate the “layout” of the type. Please note that the following is speculative.

    The argument is based on the observation that the assembly produced by g++ for an assignment to a struct with padding at its end is dependent on whether or not that struct has an empty base class:

    struct nonderived
    {
        double d;
        char c[3];
        // 7 bytes padding
    };
    
    struct empty {};
    struct derived : empty
    {
        double d;
        char c[3];
        // 7 bytes padding
    };
    

    If one assigns to a nonderived, g++ will overwrite the padding, while, when assigning to a derived, it will generate three one-byte assignments for the character array so as not to overwrite the padding. (clang++ does not overwrite the padding in either case and produces a two-byte plus a one-byte assignment for the character array.)

    This has been brought to my attention via a StackOverflow question. I speculate that the reason is C compatibility: In C, since there is no derivation, one can safely overwrite the padding. In C++, this padding can be reused by a derived class:

    struct nonderived_child : nonderived
    {
        char x;
        // 7 bytes padding
    };
    
    struct derived_child : derived
    {
        char x;
        // 6 bytes padding reused
    };
    

    The size of nonderived_child in g++ is 24 bytes, while a derived_child is only 16 bytes large and equivalent in size to a derived; the padding is reused.

    Now, in theory, final should make it possible to safely overwrite trailing padding:

    struct derived_final final : empty
    {
        double d;
        char c[3];
        // 7 bytes padding
    };
    

    This might be useful if overwriting the padding can lead to more efficient code, which is suggested by some crude benchmarks I’ve run, comparing derived to nonderived (e.g. std::sort becomes 3 % faster and `= {}` assignment becomes 17 % faster when run in a tight loop). g++ does not seem to overwrite the padding when the class is marked as final; it does not even overwrite the padding when it has to deal with an array of objects passed via pointer. That is, I expected pointer arithmetic to guarantee that the dynamic type of the array elements is known locally via the type of the pointer, but the optimizer doesn’t make use of this guarantee.

  19. boccaraj says:

    Hi Andrzej,
    I’m no Java expert, but I gather that the point of final classes in Java is to guarantee that objects are immutable.
    Maybe this can apply to C++ too: if a class is final and all of its methods are const, then its interface says that no objects of this class can be modified. Without final, you could have a derived class that adds new member functions that modify the object.
    As a result, if you’re passed a reference (or a const reference) to an object of a final class, you have the guarantee that it won’t be modified by someone else, so you can safely use it across threads, reason about it, or whatever benefits of immutable objects.
    What do you think of that?
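
    A small sketch of that idea (my own example, not taken from Java):

    #include <string>
    #include <utility>

    class UserId final              // final + const-only interface
    {
    public:
      explicit UserId(std::string name) : name_(std::move(name)) {}
      const std::string& name() const { return name_; }
    private:
      const std::string name_;      // set once, in the constructor
    };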

    • Well, personally, I find this unconvincing. Do I need a type that is constant by definition? The practice I am used to is that you have types that are potentially mutable and that you create immutable instances of the class (const objects). If I really need an immutable class, I declare its member data as const, and no inheritance can change that. Someone could still const_cast it and get mutable access. But if they are so determined, is there a point in me stopping them? And in that case even final classes would give me no thread-safety guarantee. Similarly, if I make it clear that I want all my class objects to be immutable, and someone makes an effort to counteract my intentions (by deriving and giving mutable access), maybe he knows what he is doing. In C++ safety features are used to prevent accidental misuse, not deliberate misuse.

  20. Ricardo Abreu says:

    One reason I see to make a class final is to relieve other code from having to worry about slicing. For instance, you can safely declare `void f(A a);` if A is final, knowing that no one can call f with a derived CustomA. I suppose no one *should* call such an f with CustomA anyway, but it’s an easy mistake and not something the author of `CustomA` can do anything about.
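
    A sketch of that situation (A and f are placeholder names):

    struct A final
    {
      int x;
    };

    void f(A a);   // no caller can pass a (sliced) object of a class derived
                   // from A, because no such class can exist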
