C++’s best feature

Update: my remark on the exceptional life-time of temporaries in array initialization was incorrect. That part is now fixed. I have also included some essential information suggested by Herb Sutter.

C++, if you want to learn all of it, is big, difficult and tricky. If you look at what some people do with it, you might get scared. New features are being added. It takes years to learn every corner of the language.

But you do not need to learn all of it. Effective use of C++ requires only the knowledge of a couple of its essential features. In this post, I am going to write about one C++ feature that I consider one of the most important. The one that makes me choose C++ rather than other popular programming languages.

Determined object life-time

Each object you create in your program has a precisely determined life-time. Once you decide on a particular life-time, you know exactly at which point your object’s life-time starts, and when it ends.

For automatic variables, their life-time starts at their declaration, after the initialization finishes normally (not via an exception); their life-time ends upon leaving the scope in which they were declared. For two automatic objects declared next to each other, the one whose life-time began earlier ends later.

For function parameters, their life-time begins just before the function starts executing, and ends just after the function finishes executing.

For globals (variables defined at namespace scope), their life-time starts before main starts, and ends after main ends. For two globals defined in the same translation unit (a file, after header inclusion), the one defined higher starts life earlier and ends later. For two globals defined in different translation units, no assumptions about their relative life-times can be made.

For almost all temporaries (with two well-defined exceptions), their life-time begins when a function returns by value inside a bigger expression (or when they are created explicitly), and ends after the full expression has been evaluated.

The two exceptions are as follows. First, when a temporary is bound to a global or automatic reference, its life-time lasts as long as the reference’s:

Base && b = Derived{};
int main()
{
  // ...
  b.modify();
  // ...
}
// life-time of temporary Derived ends here

(Note that the reference has a different type (a base class) than the temporary, and that we can modify the temporary. ‘Temporary’ typically means ‘short-lived’, but when bound to a global reference it will live as long as any other global in the program.)

The second exception applies when we initialize a built-in array of objects of user-defined type. In that case, if a default constructor is used to initialize the n-th array element, and that default constructor has one or more default arguments, the life-time of every temporary created in a default argument ends before we proceed to initializing the (n + 1)-th element. But you will probably never need to know this one.

For member sub-objects of classes, their life-time begins when their mother’s life-time starts, and ends when the mother’s life-time ends.

Similarly for other kinds of object life-time: function-local statics, thread-locals, or cases where we manually control an object’s life-time, for instance with new/delete, with allocators, or with optional. In all these cases it is well defined and predictable where exactly the life-time of an object starts and where it ends.

If the initialization of an object whose life-time is about to start fails (an exception is thrown), its life-time never starts.

This is, in short, the nature of deterministic object life-time. So, what does non-deterministic life-time look like? You will not find it in C++ (yet), but you can see it in other languages, with ‘garbage collection’ support. There, you create an object, and its life-time starts, but you do not know when its life-time will end. You are guaranteed that it will not end while you still hold a reference to the object, but once you drop the last reference, the object can live arbitrarily long, possibly until the termination of the process.

So, what is so important about the deterministic object life-time?

The destructor

C++ guarantees that for any object of class type, its destructor will be called the moment the object’s life-time ends. A destructor is a member function in our object’s class: guaranteed to be the very last function that will be called for our object.

Everyone knows that already, but not everyone appreciates what it really buys us. First, and most importantly, you can use the destructor to clean up the resources that the object acquired throughout its life-time. This clean-up is encapsulated, hidden from the user: you (the user) do not have to call any dispose or close function, and there is no way you can forget to clean the resources up. You do not even need to know whether the type you are currently using (especially inside a template, where it is just a T for you) manages resources or not. Also, the object’s resources are cleaned up immediately when the object is no longer needed, not at some indeterminate point in the future. Resources are released as soon as possible. This prevents resource leaks: no garbage is left blocking resources for an unpredictable time. (Do not think only about memory when you read ‘resource’: think about open DB connections or sockets.)

The deterministic object life-time also determines the relative destruction order of objects. If a number of automatic objects are declared in a scope, they are destroyed in the reverse order of their declaration (and initialization). Similarly for class member sub-objects: they are destroyed in the reverse order of their declaration in the class body (and initialization). This is essential when the resources depend on one another.

This feature is superior to garbage collection in the following aspects:

  1. It provides a uniform approach to all resources one can think of — not only memory.
  2. Resources are released immediately after they are no longer used, not when (if ever) the garbage collector decides to clean the memory (and call the finalizers).
  3. It does not require the run-time overhead of a garbage collector.

Garbage-collected languages tend to offer substitutes for resource-handling techniques: the using statement in C# or the try-with-resources statement in Java. While these are a step in the right direction, they are still inferior to the destructor.

  1. Resource management is exposed to the user: you need to know that the type you are using manages a resource, and you must add extra code to request the resource release. If you forget the guarding statement, the resource is leaked.
  2. If the class maintainer changes the implementation from one that doesn’t manage a resource to one that does, users have to change their code as well: this is the consequence of resource disposal not being encapsulated.
  3. It doesn’t work well with generic programming: you cannot write code that operates uniformly on types that do and do not manage resources.

Finally (pun intended), such guarding statements are only a substitute for the cases handled by C++ ‘automatic’ objects, created in function or block scope. C++ provides other kinds of object life-time as well. For instance, you can make a resource-managing object a member of another ‘master’ object and thereby ensure that the resource is held as long as the master object lives.

Imagine the following function that opens n file streams and returns them in a collection. Another function reads them, and the files are closed automatically:

vector<ifstream> produce()
{
  vector<ifstream> ans;
  for (int i = 0; i < 10; ++i) {
    ans.emplace_back(name(i));  // name(i) yields the i-th file name
  }
  return ans;
}

void consumer()
{
  vector<ifstream> files = produce();
  for (ifstream& f: files) {
    read(f);
  }
} // close all files

How do you do that with ‘using’ or ‘try-with-resources’ statement?

Note a certain trick here: we make use of another important C++ feature, the move constructor. We rely on the fact that std::ifstream is not copyable, but is moveable. (However, users of GCC 4.8.1 will not notice this, because its standard library does not yet implement movable file streams.) The same goes, transitively, for std::vector<std::ifstream>. The move operation is like an emulation of yet another, unique, object life-time. We have a ‘virtual’ or ‘artificial’ life-time of a resource (a collection of file handles) that starts with the life-time of object ans and ends with the life-time of a different object defined in a different scope: files.

Note that throughout this ‘extended’ life-time of the collection of file handles, each handle is protected against being leaked in case an exception is thrown. Even if function name throws in the fifth iteration, the four previously created elements are guaranteed to be released inside function produce.

Similarly, what you cannot do with ‘guarding’ statements is this:

class CombinedResource
{
  std::fstream f;
  Socket s;   // Socket: some RAII socket type

public:
  CombinedResource(std::string name, unsigned port)
    : f{name}, s{port} {}
  // no explicit destructor needed
};


This already gives you a couple of useful resource-safety guarantees. Your two resources will be released when CombinedResource’s life-time ends: this is done in the implicitly defined destructor, in the reverse order of initialization, and you do not have to type anything. If the initialization of the second resource, s, fails in the constructor (ends in an exception), the destructor of the already-initialized f is called immediately, before the exception propagates out of our constructor. You get all these guarantees for free.

How do you do that with ‘using’ or ‘try-with-resources’ statement?

The dark sides

It is fair to mention here that there are reasons why some people do not like destructors. There are situations where a garbage collector is superior to the ‘no garbage’ solution offered by C++. For instance, with a garbage collector (if you can afford to use one), you can nicely represent a cyclic graph by just allocating nodes and connecting them with pointers (or ‘references’, if you will). In C++ this will not work even with ‘smart’ pointers. Of course, nodes in such garbage-collected graphs cannot manage resources, because they might be leaked: ‘using’/‘try-with-resources’ statements cannot help there, and finalizers are not guaranteed to be called.

Also, I heard people say that there exist efficient concurrent algorithms that can only work with garbage-collector’s assistance. I admit I have never seen one.

Some people dislike the fact that you cannot see the destructor call in the code. What is an advantage to some discourages others. When you analyse or debug code, it can slip your attention that a destructor is called and has side effects. I did fall into such a trap once, when debugging a big, entangled piece of code. The object behind a raw pointer I was holding would suddenly become rubbish for no apparent reason: I could see no function call that would cause this. Only later did I realize that the same object was referred to by a unique_ptr, which silently went out of scope. For temporary objects it can be even worse: you see neither the destructor nor the object itself.

There is one restriction on using destructors: in order for them to correctly interact with stack unwinding (caused by exceptions), they must not emit exceptions themselves. This restriction can prove very hard for people who need to signal resource release failure, or who use destructors for other purposes.

Note that in C++11, unless you declare your destructor as noexcept(false), it is implicitly declared noexcept, and std::terminate will be called if an exception escapes from it. If you want to signal resource-release failure with an exception, the recommendation is to also provide a member function like release, which users call explicitly; the destructor then checks whether the resource has already been released and, if not, releases it ‘silently’ (swallowing any exceptions).

One other potential downside of using destructors for resource release is that sometimes you need to introduce an additional, somewhat artificial scope (block) inside your function only to trigger the destructor of an automatic object long before the function’s scope ends. Consider:

void Type::fun()
{
  doSomeProcessing1();
  {
    std::lock_guard<std::mutex> g{mutex_};
    read(sharedData_);
  }
  doSomeProcessing2();
}

Here, we had to introduce an additional scope so that the mutex is not held while we perform doSomeProcessing2: we want to release the resource (the mutex) as soon as we stop using it. This now looks somewhat similar to a ‘using’/‘try-with-resources’ statement, but there are two differences:

  1. This is an exception rather than the rule.
  2. If we forget about the scope, the resource is held for too long, but not leaked — because the destructor is still bound to be called.

And that’s it. Personally, I find the destructor one of the most elegant and practical features in programming languages. And note that I have not even started explaining its other strength: the interaction with the exception-handling mechanism. This is what attracts me to C++ far more than performance: the elegance.

And one final note: I did not want to make a claim that this feature is really the best one in the language. I just wanted the title to be catchy.


34 Responses to C++’s best feature

  1. Michal Mocny says:

    Great article, as always. I also happen to agree that the destructor is one of the key features of the language. That, combined with value semantics/explicit references means you always know what is going on with your data.

  2. Fernando Pelliccioni says:

    Hi Andrzej,

    “Second exception to the rule is applicable when we initialize a built-in array. In that case, the life-time of a temporary created for the purpose of initializing n-th element ends when we proceed to initializing (n + 1)-th element. But you will probably never need to know this one.”

    Could you point to the Standard ?
    I can not prove the sentence using Clang.
    GCC works fine.

    struct myclass
    {
        myclass(int a) : a(a)
        {
            std::cout << "myclass() " << a << std::endl;
        }
        ~myclass()
        {
            std::cout << "~myclass()" << a << std::endl;
        }
        int a;
    };

    myclass operator+(myclass const& a, myclass const& b)
    {
        return myclass(a.a + b.a);
    }

    void f()
    {
        {
            myclass arr_temp[] = { myclass(8) + myclass(8), myclass(3), myclass(1) };
        }
        std::cout << "f ending…" << std::endl;
    }

    Thanks!

    • Hi Fernando, no wonder you could not prove it, because what I said was wrong. In fact, what the standard (N3290) says is:

      “There are two contexts in which temporaries are destroyed at a different point than the end of the full-expression. The first context is when a default constructor is called to initialize an element of an array. If the constructor has one or more default arguments, the destruction of every temporary created in a default argument is sequenced before the construction of the next array element, if any.”

      This is sect 12.2, para 4.

      Sorry for the confusion, and thanks for digging into this.

      &rzej

  3. Joel Lamotte says:

    If you don’t have deterministic lifetime, it might not end well.
    *badam-tish*

  4. Herb Sutter says:

    Great article, Andrzej. Two comments:

    1. Definitely mention exception safety! RAII code is just naturally exception-safe with immediate cleanup, not leaking stuff until the GC gets around to finding it. This is especially important with scarce resources like file handles, as you showed.

    2. One nit: C# ‘using’ and Java ‘try-with-resources’ are halfway solutions, and this is perhaps best shown by C++’s automatic destruction of member subobjects which ‘using’ and ‘try-with-resources’ don’t touch — in C# and Java you have to manually call each member’s Dispose from your Dispose, coding it all by hand. I think that shows off the difference even more than the produce/consumer example in the article. — That said, the produce/consumer example is still good to show part of the difference, but perhaps you should also mention exception safety. If you ignore exception safety, then in C# or Java you would write a ‘using’ or ‘try-with-resources’ inside consumer() only and the code has the desired effect — not too bad. However, doing that leaves produce() not exception-safe… or at least, if an exception occurs, stuff leaks until collected.

    Thanks for shining a light on this! BTW I don’t think you need your disclaimer at the end — even Bjarne has pointed out that this is C++’s most important feature, and I fully agree. Good stuff, thanks.

  5. Stephan T. Lavavej says:

    Instead of “ans.push_back(ifstream{name(i)});” you can say “ans.emplace_back(name(i));” which is shorter and more efficient.

  6. mmmmario says:

    I’ve also heard from Herb Sutter many times that garbage collection in C++ is necessary to enable some lock-free algorithms, but never heard of any of them.

  7. Krzysiek says:

    Well, destructors have another serious downside: they must not emit any exceptions, or the whole resource clean-up cycle is catastrophically interrupted. This means you either need to release the resources explicitly before the destructor is called (as in GC languages) or have try-catch blocks around all function calls in dtors.

    I once stumbled upon an exception interrupting the clean-up. Some deeply nested sub-object threw an exception in its destructor, which was silently caught and swallowed a dozen levels higher, leaving hundreds of megabytes of unreleased memory and resulting in subsequent bad allocations (sometimes I bless the 32-bit address space for making memory leaks easier to spot).

    The issue may be more related to how exceptions work than to destructors, but nonetheless it would be nice to have a way of properly signalling errors by destructors.

    • Just to add something on top of what you said: programmers are encouraged to design destructors in such a way that they do not throw exceptions (whether one likes it or not). In C++11 it is even more important, because if you try to throw from a destructor, std::terminate will be called. There will be no chance for you to catch the exception, as in the example you describe.

      • fjw says:

        I think the call to std::terminate isn’t that bad though: if you cannot end the lifetime of a resource, you have most likely done something wrong at another point in your code, and chances are you have already entered undefined-behaviour-land. As calling std::terminate from this position reduces the dangers of an attack and may prevent further bad stuff, it is really a nice thing to do here.

  8. I’m wondering… why is it tagged C++03 when it talks about move semantics, unique_ptr, std::lock_guard, etc.?

    • Wow, I didn’t realize people are looking at such things. Indeed, it now fits more into C++11. When I started to write it, I was only going to talk about the destructor. Let me fix it.

      • That’s where you come to realize C++11 is a great improvement to C++: it gives even more power to the OLTC approach (“Object LifeTime Control”, as I like to refer to scoping, move semantics, construction/destruction order, determined object life-time, etc.), which I believe is really one of C++’s most important features. It permits secure, bug-safe, robust and clear code: exactly what we need in a good language (and, to our greatest pleasure, C++ also has a lot more to offer)!

  9. mortoray says:

    I agree this is a very strong feature of C++.

    I think part of the exception problem with destructors is that their purpose is not clear. There is no way to distinguish between an exception flow and a non-exception flow. Consider a stream object. On normal cleanup it should flush any buffers it has, but this can fail and cause an exception (which makes sense, since you’ve lost data). However, in an exception flow it doesn’t make sense to flush the buffers: you should close the file handle and unwind. Flushing buffers is something you should only be doing in the non-error return path. The exception path should be limited to doing minimal cleanup.

    This creates a problem for developers. Anytime a destructor has two roles, logical completion and cleanup, you’ll be stuck for a good solution.

    • You bring up a very interesting point: there are generally two reasons for using destructors: “logical completion” and “resource clean-up”.

      If you only use destructors for resource clean-up, you do not face this problem. The requirement that you must not throw is usually acceptable. You feel that you are using the right tool for the right job.

      When you use destructors for “logical completion” (flushing buffers, committing transactions), where, as you say, you do want exceptions, you do have a problem. And this perhaps answers the question about the purpose of destructors. Perhaps using them for this purpose is a bit of an abuse. Perhaps such usage requires an additional language feature, or one needs to rely on explicit function calls, which is not a bad idea for functions that potentially throw.

      • In theory, a destructor could distinguish between an exception and non-exception path by using the C++11 facility std::current_exception. If the return value is an empty std::exception_ptr, no exception is being handled.

        I think we should all feel wildly conflicted as to whether this should be considered in production code, but it’s a possibility for the cases where it really is paramount to distinguish between the two cases. 🙂

  10. Superb article, as always. Let me share three (unrelated) observations:

    It is a very interesting catch that one of the always-mentioned advantages of GC languages is that they make memory management a bit easier (although I never considered C++’s memory management hard; the choice between stack and heap objects is definitely more flexibility than having only heap objects), yet the same feature actually makes managing any other resource considerably harder.

    The scoped_lock is an interesting beast. Note that in the example you don’t want *automatic* management of the lock: you want it to be locked here, and released there, period. You trick the language to do the “automation” where you want it to happen; if you really don’t like the extra scope you might as well just write “mutex.lock(); read(sharedData_); mutex.unlock();” without any extra scopes, that’s what the in-lined constructor/destructor will do anyway. (But I guess we both agree that it’s unsafe – I think the situation can be improved, with e.g. some kind of using-like construct.)

    This one is for the comments: I always found the throwing-destructor problem a non-issue. Conceptually, releasing resources cannot fail: it is always some other operation that the library is trying to do, and not the actual resource release. It’s a bad interface: if you cannot flush at destruction of your file stream, it is because you cannot write to the file, so the exception should have been thrown at writing to the stream; it’s not the destructor that should “fail” (or ignore failures). Realistically, though, I don’t see any better solution than this yet. If the library has a separate flush() call, and can guarantee that destruction will not fail (or ignores any failure silently) after a successful flush(), then the user has the option to handle the error. If the user decides to skip manual error handling, what is supposed to happen when it actually fails, anyway? I see two options: (a) ignore it (that’s what happens currently in std::fstream); (b) throw an exception, and if the user can and wants to handle it, then everyone’s happy, else kill the program. All options are supported by C++.

  11. yyyy says:

    Nice article. Could you give me some advice on how I could make C programmers believe that destructors and some basic template skills can make code much easier to maintain?
    I tried my best to explain the benefits of destructors to many C programmers in former companies, but most of them refuse to accept those benefits. They told me it “will” kill the performance, that only lazy, incompetent programmers would rely on something like a destructor, that if you want to be close to the metal you should handle all of the resources yourself, that C is always a much better solution than C++, and so on. Whatever; it was not a discussion but more like a “religious fight” from my viewpoint. Even when you show them the code and benchmarks, they still insist that C is a far superior choice.

    • My opinion: it is not possible. You yourself (adequately) use words “believe” and “religious”, which means you also figured out that such discussions are not about convincing one another with evidence or logical arguments.

      To me, it is not clear if your colleagues already use a C++ compiler but refuse to use C++-specific features, or if you are trying to convince them to switch from C to C++.

      In general, it boils down to personal sense of balance between advantages and disadvantages. My sense, based on my experience, is that the advantages of using destructors far outweigh the disadvantages. I can imagine that someone else’s sense is different. If your program throws exceptions, you practically cannot survive without destructors. C programmers usually do not accept the exception handling mechanism. And if you reject exceptions, the destructor does not look that appealing any-more (although it is still appealing to me).

      But if you are trying to convince someone to switch from C to C++, then their fear may be well justified. C++ is big compared to C. They may fear they would be forced to learn all of it. And to learn C++ well, it takes lots of time.

  12. Howard Hinnant says:

    Great article. I agree 100%, more so if I could. I’ve long believed the C++ destructor to be the defining feature of the language. And I’ve grown incredulous at other languages which have obviously been influenced by C++ and failed to steal its very most important and effective feature: the destructor.

    I once asked Bjarne about his invention of the destructor. He replied nonchalantly, it just seemed like the right thing to do to balance the constructor. I’m paraphrasing, I don’t recall the exact quote. But that has got to be the biggest understatement I’ve ever heard in programming language design.

    If I had to pick C++’s cornerstone, I would most certainly point to the destructor.

  13. Regarding the concurrent algorithms, I think it’s about designing lock-free data structures. In such data structures there is a big problem when you need to destroy nodes, because you need to do so in a way that is well sequenced and still lock-free, and it cannot be done with a simple atomic operation because it usually involves releasing some memory. If only you could *leak* that memory, it would be so much simpler: all you would need to do would be to CAS away the link to the dropped node. If you have a garbage collector around, it enables exactly that: it lets you “leak” that memory, as it will catch up with it later. Your code becomes super simple. There are a couple of algorithms to solve this problem without a garbage collector, but they are rather complex, and arguably they amount to implementing your own highly specialised variant of garbage collection.

    • mortoray says:

      Can you give a specific data structure that is made easier with GC? I’ve done a lot of low-level lock-free and wait-free algorithms and have never needed a GC. Granted, some manual memory management tends to be involved, but that is something which tends not to be available at all in languages with GC. The manual memory management I’ve done has also been extremely simple: either a simple pool or a ring buffer.

    • Krzysiek says:

      I am no expert in GC languages, so here is my question: doesn’t “leaking” memory mean that at some undetermined time the garbage collector has to step in to reclaim the memory stopping the rest of the program? How does GC interact with lock-free algorithms? I guess the lock-free guarantees still hold but is the performance hit acceptable (in places where LF-algorithms are used not only to avoid deadlocks but also to gain performance)?

  14. Vinipsmaker says:

    Reblogged this on Vinipsmaker labs and commented:
    I think I could make a beautiful XML construction code exploring this feature. I wish some scripting language adopted this feature.

  15. mgaunard says:

    There is nothing that prevents one from writing a precise garbage collector in C++, using destructors to manage the heap.

    There are indeed problems with memory reclamation with many lockfree algorithms. Special algorithms and techniques have however been devised to deal with this.

  16. Alex Bath says:

    Well said. My favourite feature of C++ too, even though I hardly ever put code in any destructor!

  17. priyanka says:

    nice content to improve c++ skill..

  18. Joker_vD says:

    Well, the article is great, but it basically says “C++ allows to flexibly overload }, which is brilliant, and so C++ is a probably better choice than other languages”. Well, the ability to overload } is important, but it doesn’t outweigh C++’s other warts, sadly—there are too many of them. Like, that problem with “perfect forwarding” constructor?

    And I wouldn’t even agree that objects have a precisely defined lifetime: if there are many std::shared_ptr’s referencing one object, you never know when it will die, and if that object happens to hold the last std::shared_ptr’s to some other objects… suddenly its destruction causes an avalanche of other destructors being called, and you basically get the same behaviour as a sudden garbage collection: execution seems to stop for some time, then it resumes.

    Oh, and about how to do the first example with “using”: well, one can define class DisposableList<T> : List<T>, IDisposable where T : IDisposable { … } and write “foreach (var x in this) { x.Dispose(); }” in its Dispose() method and use DisposableList as a replacement for vector… but writing Dispose correctly is surprisingly non-trivial, so yeah, no big gain. One could alternatively write “foreach (var f in files) { using (f) { read(f); } }” in consumer(), but that has a bit different semantics.

    All that said, RAII is definitely one of the few things that make me not give up on C++ completely. I bloody want predictable resource usage times! But then again, destructors must not throw—however, I’ve seen proposals on propagating multiple exceptions at once, but that would require one common base class for all things throwable to exist. I mean, sometimes you just can’t release the resource in the destructor: the ReleaseResource() returns you E_BUSY or ERROR_INPROGRESS, and now what? You have to stash it somewhere, say in a global “std::vector to_release_later”, or something. Such a thing would probably go into the factory which you use to acquire resources, but then again, if you destroy your factory, you probably want to release all resources acquired with it (say, that factory was representing a DB connection, and the resources are prepared statements associated with it—when the connection closes, all those queries are rotten)—but the factory can’t track where the spawned resource wrappers go.

    • @Joker_vD: Thanks for an interesting comment. I would like to clarify a couple of things.

      “C++ allows to flexibly overload }” — “overload }”? What do you mean by that?

      “I wouldn’t even agree that objects have precisely defined lifetime” — I see what you mean, and to some extent I agree. But let me add this for clarification. The shared_ptr example is just one instance of a broader class: in C++ you have the choice to manually control object lifetime with new and delete. With this ability you can implement arbitrarily bizarre and unpredictable situations, for instance:

      int * p = new int {10};
      // ...
      if (rand() % 2) delete p;  // whether *p is ever destroyed is decided at run time
      

      But if you choose to do it, the responsibility for the mess is on you rather than on the language. The situation with shared_ptr is somewhat similar. If you choose to use it, you are responsible for the consequences. You have the option, though, not to choose shared_ptr.
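For contrast, a quick sketch of opting back into a determined lifetime by handing the same kind of heap object to a single owner via unique_ptr (the Tracked type is just instrumentation invented for this example):

```cpp
#include <memory>

bool destroyed = false;  // instrumentation, not part of the technique

struct Tracked {
    ~Tracked() { destroyed = true; }
};

void deterministic() {
    auto p = std::make_unique<Tracked>();  // single, named owner on the heap
    // ... use *p ...
}   // the Tracked object is destroyed exactly here, on every path out
```

No call site has to remember a delete, and the point of destruction is as predictable as for an automatic variable.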

      In my experience people generally use shared_ptr for two primary reasons:

      1. Because they have a clear situation where a number of “clients” use the same object, it is not clear which client will finish last, and there is no better way to manage such a situation. E.g., when a number of threads share state and we do not want to keep a global reference to the object.
      2. Because they want garbage collection back, and want to be able to freely produce garbage. They think that by using shared_ptr they have re-enabled garbage collection.

      The first case is legitimate, but it doesn’t occur as often as people think. The second is just asking for trouble. If for no other reason, then because reference counting is not garbage collection, and you will soon run into trouble, e.g., with reference cycles.
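To make the reference-cycle trouble concrete, a small sketch: two nodes that point at each other through shared_ptr keep each other alive forever (the alive counter is instrumentation added for the example):

```cpp
#include <memory>

struct Node {
    std::shared_ptr<Node> next;
    static int alive;                 // instrumentation for the example
    Node()  { ++alive; }
    ~Node() { --alive; }
};
int Node::alive = 0;

void make_cycle() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->next = a;   // cycle: each node keeps the other's count above zero
}   // a and b go out of scope here, but each count only drops to 1: a leak
```

Breaking such a cycle is the owner’s job, e.g. by declaring one of the links as std::weak_ptr.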

      But when you use shared_ptrs correctly (have no cycles), even if you get the “avalanche”, the destructors are called in a well-defined order. In contrast, when garbage-collecting a cycle of references there is no single good order of finalization.

      The “avalanche” is not a property of shared_ptrs. You get the same effect when destroying a vector of elements that contain other nested elements: you also get a lot of destructors called. Releasing resources takes time, no matter at which point it happens. The point is that in C++ you choose exactly when it happens. (Or, as with new and delete, you can choose to depart from predictable results; but you are not forced to.)
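The same “avalanche” with plain containers can be sketched with an instrumented element type: every nested destructor runs at one chosen, well-defined point, namely the closing brace of the owning scope (Leaf, Branch, and the alive counter are all made up for the example):

```cpp
#include <vector>

struct Leaf {
    static int alive;                 // instrumentation for the example
    Leaf()            { ++alive; }
    Leaf(const Leaf&) { ++alive; }
    ~Leaf()           { --alive; }
};
int Leaf::alive = 0;

struct Branch {
    std::vector<Leaf> leaves;         // nested elements
};

void avalanche() {
    std::vector<Branch> tree(2);      // two branches
    tree[0].leaves.resize(3);
    tree[1].leaves.resize(1);
}   // all four Leaf destructors run right here, in a well-defined order
```

The cost of destruction is the same either way; what C++ gives you is the choice of exactly when it is paid.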

      “I mean, sometimes you just can’t release the resource in the destructor: the ReleaseResource() returns you E_BUSY or ERROR_INPROGRESS, and now what?” — In order for me to better understand your point, could you give me an example of a C++ library out there where releasing a resource might result in E_BUSY? I admit that so far I have only seen attempts to acquire or alter a resource cause E_BUSY — never the release.

      Regards,
      &rzej

      • Joker_vD says:

        What do I mean by “overload }”? That reaching the end of a scope causes specific, user-written code to be executed. Pretty much what I would mean by “overload =”: that reaching an “a = b” statement would cause specific, user-written code to be executed.
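That “overloading }” can be shown in a couple of lines: the destructor is ordinary user-written code, and reaching the closing brace is what invokes it (the OnScopeExit name and the log are invented for the sketch):

```cpp
#include <string>
#include <vector>

std::vector<std::string> log;  // records the order in which code runs

struct OnScopeExit {
    std::string msg;
    ~OnScopeExit() { log.push_back(msg); }  // user code, run at '}'
};

void f() {
    OnScopeExit guard{"leaving f"};
    log.push_back("inside f");
}   // reaching this brace executes guard's destructor
```

The compiler inserts the call; the programmer only supplies the body, which is exactly the sense in which `}` is “overloaded”.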

        But when you use shared_ptrs correctly (have no cycles), even if you get the “avalanche”, the destructors are called in a well-defined order. In contrast, when garbage-collecting a cycle of references there is no single good order of finalization.
        The unspecified finalization order is rarely an issue, as I recall from my C# experience.

        Releasing resources takes time, no matter at which point it happens. The point is that in C++ you choose exactly when it happens.
        When the last of the shared_ptrs goes out of scope, which happens who knows when, maybe even on another thread. Really, your first bullet talks about a “situation where a number of ‘clients’ use the same object, and it is not clear which client will finish last, and there is no better way to manage such situation” — that is still non-deterministic.

        As for when trying to release the resource may yield an E_BUSY error… I doubt I could point to a C++ library (since throwing in destructors is verboten), but here are some examples from C libraries: shutdown()/closesocket() from WinSock returns the WSAEINPROGRESS error if a blocking send/recv/whatever is going on on that socket; sqlite3_close() returns SQLITE_BUSY if there are unfinalized prepared statements associated with that database connection; heck, I once got ERROR_BUSY from TerminateProcess() — that one finally made me decide to buy a more powerful computer.

        Arguably, that may be the case where you indeed want to throw in your destructor, because such a situation indicates a bug in your program.

        Of course, there are not many C APIs doing such tricks because, well, how do you recover from a failure during the cleanup? It’s much easier to just force-free the resource-in-use and then make functions working with it return EINVAL. That may introduce race conditions, but who cares.
