A customizable framework

In this post I want to describe a problem my colleagues have faced a couple of times recently, and show how it can be solved in C++. Here is the goal. We want to provide a function (or a set of overloaded functions) that ‘does the right job’ for ‘practically any type’, or for ‘as many types as possible’. As an example of such a ‘job’, consider std::hash: what we want to avoid is the situation where you want to use some type X as a key in the standard hash map, but you cannot, because std::hash does not ‘work’ for X. In order to minimize the disappointment, the Standard Library makes sure std::hash works with any reasonable built-in or standard-library type. For all the other types, which the Standard Library cannot know in advance, it offers a way to ‘customize’ std::hash so that they can be made to work with hash maps.

For another popular example, consider the Boost.Serialization library. Its goal is that almost any type should be serializable through the same interface: the library knows how to serialize built-in, std and popular Boost types, and it offers a way to teach it to serialize new types.

We are going to see a number of ways in which such a customizable framework can be implemented. We will be using information from the previous post, “Overload resolution”.

The task

For an example, I have chosen a task that is quite simple, but it should serve my goal of illustrating the practical problems developers face along the way. We want to be able to tell how much memory a given object occupies, both on the stack and on the heap. Let me give you an example. If we illustrate an std::vector as follows:

We can see the blue part, representing the “handle”: those are the three pointers that give us access to the rest of the data; the size of this part can be measured with operator sizeof. The green color represents the allocated heap memory (let’s forget there are different allocators for the moment). This cannot be measured with sizeof and needs to be computed manually for each type.
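As a rough sketch of this decomposition (the helper name rough_vector_usage is mine, and the default allocator is assumed), the two parts can be computed like this:

```cpp
#include <cstddef>
#include <vector>

// Stack part + heap part of a std::vector, assuming the default allocator.
// The handle is what sizeof measures; the heap block must be computed by hand.
template <typename T>
std::size_t rough_vector_usage(const std::vector<T>& v)
{
  return sizeof v                    // the "handle": typically three pointers
       + v.capacity() * sizeof(T);   // the owned heap block, spare capacity included
}
```

For element types that themselves own heap memory this is still an underestimate; computing the usage correctly for arbitrary element types is exactly what the rest of this post is about.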

Now, we want to provide a ‘framework’ that can already compute the memory usage for many popular and built-in types, and offers a way to compute the usage for new types.

Conceptually, the plan is as follows. We are going to have a function called something like mem_usage. Using it on a given type X should have the following effects:

  1. For scalar types, it just uses operator sizeof. In fact, it can be generalized to all trivially-copyable types.
  2. We provide a custom definition for a number of common types that we know in advance (e.g., std::vector, boost::optional).
  3. For other types, by default a compile-time error should be issued.
  4. We offer a way for the users to customize our framework for their own types.

Required member function — a non-solution

The first thing that usually comes to mind in this case, a sort of habit, is to think: let’s require that every type participating in our framework provide a member function mem_usage, and we just use it.

But this will not work. While you can force your colleagues on the team to add a member function mem_usage to all their classes, you cannot force the built-in scalar types to have a member; nor can you force std types to have a member of your liking. In fact, this requirement is unrealistic for any third-party library you may need to work with.

A perhaps even more hardcore idea is to require that if you want some type X to be usable with our framework, it should derive from a polymorphic interface class MemUsageAble. Not only does this fail to solve the problem (of the framework working with any type), it also unnecessarily increases the size occupied by the objects (they need to store a pointer to a vtable); and in the case of our task this affects the very measurement we are making. Also imagine that we need to use two frameworks: one forces the types to inherit from one polymorphic interface, the other requires the types to inherit from another. This becomes unbearable.

Therefore, rather than expecting a member function of all the types, we had better define functions outside the type: this works alike for built-in scalar types, third-party types, and your own types.

Function overloads

From the previous post we already know that we cannot use function template specializations. Thus, for our first attempt, we will use function (template) overloading and rely on ADL (argument-dependent lookup).

We can implement requirements 1 and 3 above with one function template. In order to test whether a given type is trivially copyable, we can use the type trait std::is_trivially_copyable. However, because that trait is not available in GCC until version 5.0, I decided to use another one, std::is_trivial, so that the examples work on more compilers.

We will use the enable_if trick to conditionally remove our function from the overload set:

namespace framework
{
  template <typename T>
    typename std::enable_if<std::is_trivial<T>::value,
                            size_t>::type
    mem_usage1(const T& v) { return sizeof v; }
}

Type trait std::is_trivial is only available since C++11, but otherwise everything we discuss here applies to C++03. (You can use Boost libraries, such as Boost.TypeTraits or Boost.StaticAssert, to emulate the missing features.) In the remaining examples I will use the C++14 alias template std::enable_if_t: this is to make the examples shorter, but I still claim that the same is achievable in C++03, with somewhat longer syntax.
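For illustration, here is a sketch of that longer C++03-style syntax: a hand-rolled enable_if (the name enable_if_ is mine; in real C++03 code the trait would come from Boost.TypeTraits, but I use std::is_trivial here to keep the sketch self-contained):

```cpp
#include <cstddef>
#include <type_traits>

// A hand-rolled enable_if in the C++03 style: a primary template with no
// member `type`, and a partial specialization for `true` that provides it.
template <bool B, typename T = void> struct enable_if_ {};
template <typename T> struct enable_if_<true, T> { typedef T type; };

namespace framework
{
  template <typename T>
    typename enable_if_<std::is_trivial<T>::value, std::size_t>::type
    mem_usage1(const T& v) { return sizeof v; }
}
```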

Now, how do you compute the memory usage of std::vector? Assuming the default allocator, it is the size of the handle, plus the recursive memory usage of each vector element, plus the remaining capacity. But before we proceed to the implementation, we have to face a technical question: in which namespace should we define the function?

It is reasonable to assume that our framework will also come with a number of ‘algorithms’: function templates that make use of mem_usage1, for instance:

namespace framework
{
  namespace algo
  {
    template <typename T>
    size_t score(const T& v)
    {
      // do some more things
      return mem_usage1(v);
    }
  }
}

In the previous post we concluded that, in order for the overload resolution to be immune to header inclusion order, we have to declare our overload in the namespace enclosing the type for which we are overloading. But this would mean declaring an overload of mem_usage1 in namespace std, and that in turn triggers undefined behavior. Quoting the Standard ([namespace.std]/1):

The behavior of a C++ program is undefined if it adds declarations or definitions to namespace std or to a namespace within namespace std unless otherwise specified.

Luckily, because std is almost a part of the language, and every piece of the program knows about it and its contents, we can define our mem_usage1 overload inside namespace framework, just below the primary overload, and before any other function that may need to use it:

namespace framework
{
  // primary overload:
  template <typename T>
    std::enable_if_t<std::is_trivial<T>::value, size_t>
    mem_usage1(const T& v) { return sizeof v; }

  // overload for std::vector:
  template <typename T>
    size_t mem_usage1(const std::vector<T>& v)
    { 
      size_t ans = sizeof(v);
      for (const T& e : v) ans += mem_usage1(e);
      ans += (v.capacity() - v.size()) * sizeof(T);
      return ans;
    }
}

We can get away with this only because std is so special: every other library in the world knows about std and can include its headers.

There is a price to pay for this trick, though: our framework now unconditionally includes the header <vector>. Even users that never measure vectors now transitively include the standard header.

But here comes the first problem. Suppose we also want to provide an overload for std::pair. We could declare the overloads in the following order:

namespace framework
{
  // primary overload:
  template <typename T>
    std::enable_if_t<std::is_trivial<T>::value, size_t>
    mem_usage1(const T& v) { return sizeof v; }

  // overload for std::vector:
  template <typename T>
    size_t mem_usage1(const std::vector<T>& v)
    { 
      size_t ans = sizeof(v);
      for (const T& e : v) ans += mem_usage1(e);
      ans += (v.capacity() - v.size()) * sizeof(T);
      return ans;
    }

  // overload for std::pair:
  template <typename T, typename U>
    size_t mem_usage1(const std::pair<T, U>& v)
    { 
      return mem_usage1(v.first) + mem_usage1(v.second);
    }
}

But if we want to use these definitions with type std::vector<std::pair<int, int>>:

int main()
{
  std::vector<std::pair<int, int>> vp;
  framework::mem_usage1(vp);
}

We get a compile-time error. This is because of the lookup rules in templates:

  1. For overloads defined in the namespaces of the types they operate on: we can see all of them,
  2. For overloads defined in the namespace of the function template we are parsing: we only see the overloads declared prior to our template.

In our case, we first select and parse the overload for std::vector, and this works; but inside it we need to find an overload for std::pair. However, we are in namespace framework, so we can only see the preceding declarations, and the overload for std::pair is defined only later.

If we reversed the overload declarations, we would fix our particular problem, but we would introduce a similar one for type std::pair<std::vector<int>, std::vector<int>>.

The way to solve it is to use forward declarations:

namespace framework
{
  // primary overload:
  template <typename T>
    std::enable_if_t<std::is_trivial<T>::value, size_t>
    mem_usage1(const T& v) { return sizeof v; }

  // forward declare overload for std::pair:
  template <typename T, typename U>
    size_t mem_usage1(const std::pair<T, U>& v);

  // overload for std::vector:
  template <typename T>
    size_t mem_usage1(const std::vector<T>& v)
    { 
      size_t ans = sizeof(v);
      for (const T& e : v) ans += mem_usage1(e);
      ans += (v.capacity() - v.size()) * sizeof(T);
      return ans;
    }

  // overload for std::pair:
  template <typename T, typename U>
    size_t mem_usage1(const std::pair<T, U>& v)
    { 
      return mem_usage1(v.first) + mem_usage1(v.second);
    }
}

Now, suppose we want to provide an overload for type boost::optional. This task is somewhat easier, because namespace boost is not special in any way, and we are allowed to add declarations to it:

namespace boost
{
  template <typename T>
    size_t mem_usage1(const optional<T>& v)
    {
      using framework::mem_usage1;
      
      size_t ans = sizeof(v);
      if (v) ans += mem_usage1(*v) - sizeof(*v);
      return ans;
    }
}

The memory occupied by boost::optional is its sizeof (the initialized-flag and the storage for T) plus, if the optional contains a value, the memory usage of the remote parts (because the handle of T is already included in the sizeof).

Now, because we define this overload in the same namespace as the argument type, we can put this declaration after every template that may be using it, because it will be picked up by ADL in the second phase of overload resolution. However, we have to make sure that this overload is defined after the overload for std::vector; otherwise the optional overload will not see the vector overload when we use the framework with type boost::optional<std::vector<int>>. By now it looks convoluted from the framework implementer’s perspective; but for the users we allow a flexible header inclusion model. That is, both of the following inclusion orders will work:

#include <framework.hpp>
#include <boost/optional.hpp>
#include "glue_between_framework_and_optional.hpp"

as well as:

#include <boost/optional.hpp>
#include <framework.hpp>
#include "glue_between_framework_and_optional.hpp"

Also note that in the implementation of the last overload I used a using-declaration. This is needed for the overload resolution to consider both namespace framework and the argument-dependent namespaces. If I forgot it, I would get a compile-time error. Similarly, if I called framework::mem_usage1 with qualification, I would disable ADL and get a compile-time error in other cases.

Now, anyone who is going to use our mem_usage1 overloads will have to do the same: add a using-declaration and then call without namespace qualification. In order to spare the users this trouble, we can provide a convenience function that already does it:

namespace framework
{
  template <typename T>
    size_t mem_usage(const T& v) { return mem_usage1(v); }
}

Because I declare it also in namespace framework, I can skip the using-declaration: it is implied. The users can now call it qualified:

int main()
{
  boost::optional<std::vector<int>> ov;
  framework::mem_usage(ov); // works!
}

Going back to the overload for boost::optional, it works because optional in the current Boost version (1.59) is declared directly in namespace boost:

namespace boost
{
  template <typename T>
    class optional;
}

If it was changed to:

namespace boost
{
  namespace optional_ns
  {        
    template <typename T>
      class optional;
  }

  using namespace optional_ns;
}

My overload would cease to work, even though boost::optional would still work. (And there are good reasons to add such an additional namespace optional_ns; at some point it might in fact happen.) I do not know how to prepare this framework solution for such a namespace change.

Another drawback of this overload-based solution is that it is easy to misspell the name of one of the overloads. The compiler will not protest at the point where you define your framework; it will only protest when the users try to use it.

This framework design has been chosen for std::swap (and boost::swap is the equivalent of wrapper framework::mem_usage from our example). Our example differs from std::swap in two respects, though. First, we cannot afford to define our framework in namespace std. Second, we do not provide a default implementation that works for any type T before the user provides her customization. This way we avoid a class of ODR-violation problems that std::swap comes with.
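For comparison, here is a minimal sketch of the std::swap design just mentioned: the customization is an ADL-found overload, and generic code reaches it through the ‘two-step’ idiom (the names ns, Flag and generic_swap are mine; generic_swap plays the role that boost::swap plays for std::swap):

```cpp
#include <utility>

namespace ns
{
  struct Flag { int v; };
  // The customization: an overload next to the type, found by ADL.
  void swap(Flag& a, Flag& b) { int t = a.v; a.v = b.v; b.v = t; }
}

// A wrapper in the spirit of boost::swap: it performs the two-step
// so that callers do not have to remember it themselves.
template <typename T>
void generic_swap(T& a, T& b)
{
  using std::swap; // fallback for types with no custom overload
  swap(a, b);      // ADL may pick a better-matching overload
}
```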

For a full working example of this framework design, see here.

Function overloads with ADL tag

A lot of the complications in the previous design come from the fact that we cannot declare overloads in namespace std. Declaring overloads in foreign namespaces (like boost) works, but it is susceptible to injected in-between namespaces (like boost::optional_ns in the example above); it also looks a bit inelegant and confusing: why would we declare something in somebody else’s namespace?

These problems can be avoided with a clever trick. We change the interface of our function (template) so that it takes a second argument, which never changes across the overloads and is declared in our namespace framework:

namespace framework
{
  struct adl_tag {}; // empty class

  // primary overload:
  template <typename T>
    std::enable_if_t<std::is_trivial<T>::value, size_t>
    mem_usage2(const T& v, adl_tag)
    {
      return sizeof v;
    }
}

Can you see what it buys us?

If we now define an overload for std::vector in namespace framework, and call it without scope qualifications:

namespace framework
{
  template <typename T>
    size_t mem_usage2(const std::vector<T>& v, adl_tag tag)
    { 
      size_t ans = sizeof(v);
      for (const T& e : v)
        ans += mem_usage2(e, tag); // pass the tag down
      ans += (v.capacity() - v.size()) * sizeof(T);
      return ans;
    }
}

int main()
{
  std::vector<int> v;
  mem_usage2(v, framework::adl_tag{});
}

It just works! It works because we now have two function arguments: one from namespace std, the other from namespace framework. The second phase of overload resolution in templates (like overload resolution outside templates) performs an argument-dependent lookup, and because we have two arguments, two namespaces are searched. This way we force ADL to search namespace framework regardless of the namespace in which the first argument’s type is defined; and because the second phase considers overloads declared even after the template that calls them, we no longer have to worry about the order of the overload declarations.
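To see this ordering freedom in action, here is a sketch where the overload for std::pair is declared only after the std::vector overload that calls it, with no forward declaration needed:

```cpp
#include <cstddef>
#include <type_traits>
#include <utility>
#include <vector>

namespace framework
{
  struct adl_tag {}; // empty class

  // primary overload for trivial types:
  template <typename T>
    std::enable_if_t<std::is_trivial<T>::value, std::size_t>
    mem_usage2(const T& v, adl_tag) { return sizeof v; }

  // overload for std::vector; calls mem_usage2 on the elements:
  template <typename T>
    std::size_t mem_usage2(const std::vector<T>& v, adl_tag tag)
    {
      std::size_t ans = sizeof v;
      for (const T& e : v) ans += mem_usage2(e, tag);
      ans += (v.capacity() - v.size()) * sizeof(T);
      return ans;
    }

  // declared only here, AFTER the vector overload that calls it: the
  // adl_tag argument drags namespace framework into second-phase lookup,
  // so no forward declaration is required
  template <typename T, typename U>
    std::size_t mem_usage2(const std::pair<T, U>& v, adl_tag tag)
    {
      return mem_usage2(v.first, tag) + mem_usage2(v.second, tag);
    }
}
```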

To some extent, this is the approach taken by the Boost.Serialization library: it expects that one of the arguments in the overloads is always a serialization ‘archive’, which corresponds to our ADL tag; but because the ‘archive’ carries meaningful state, the solution does not look like a trick.

We can conceal the confusing tag from the users by defining a wrapper function:

namespace framework
{
  template <typename T>
    size_t mem_usage(const T& v)
    { 
      return mem_usage2(v, adl_tag{});
    }
}

For a full working example of this framework design see here.

Class template specializations

As we have seen in the previous post, the natural choice for such framework customizations, function template specializations, does not work, because one is not allowed to partially specialize a function template. However, this restriction does not apply to class templates, so we might as well use those. It is going to be a bit artificial, as we do not really need any class with state, but it happens to work. We will be specializing and instantiating classes only to call one static member function from their scope.

So, the first task: how do we make a generic facility that returns sizeof(X) for a trivially copyable type (or a trivial one, due to the GCC shortcoming), fails to compile for other types, and is implemented entirely with classes?

namespace framework
{ 
  template <typename T>
  struct mem_usage3 
  {
    static_assert (std::is_trivial<T>::value, "customize!");
    static size_t get(const T& v) { return sizeof v; }
  };
} 

Our primary template (unless specialized) binds to any type, except that for non-trivial types it fires a compile-time error with a static_assert. The usage is a bit clumsy:

int main()
{
  int i = 0;
  framework::mem_usage3<int>::get(i);
}

But again, we can wrap it into a convenience function:

namespace framework
{
  template <typename T>
  size_t mem_usage(const T& v)
  {
    return framework::mem_usage3<T>::get(v);
  }
} 

int main()
{
  int i = 0;
  framework::mem_usage(i);
}

We customize the framework by declaring (partial or full) class template specializations. Here is an example for std::pair:

namespace framework
{
  template <typename T, typename U>
  struct mem_usage3< std::pair<T, U> >
  {
    static size_t get(const std::pair<T, U>& v)
    { 
      return mem_usage3<T>::get(v.first)
           + mem_usage3<U>::get(v.second);
    }
  };
}

Specializing for other types is quite simple: you do it in exactly the same namespace as the primary template. An additional safety feature of this technique is that if you make a spelling mistake while customizing the framework, the compiler will immediately issue a compile-time error, because you are specializing a nonexistent class template.
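For instance, a user could plug a hypothetical type user::Name (owning an std::string) into the framework like this; the names user and Name are mine, and the string measurement is a rough estimate that ignores the small-string optimization:

```cpp
#include <cstddef>
#include <string>
#include <type_traits>

namespace framework
{
  // the primary template from the post:
  template <typename T>
  struct mem_usage3
  {
    static_assert(std::is_trivial<T>::value, "customize!");
    static std::size_t get(const T& v) { return sizeof v; }
  };
}

// a hypothetical user-defined type with a remote part:
namespace user
{
  struct Name { std::string value; };
}

// the customization: a full specialization, added in namespace framework:
namespace framework
{
  template <>
  struct mem_usage3<user::Name>
  {
    static std::size_t get(const user::Name& n)
    {
      // handle + heap buffer; a rough estimate ignoring the
      // small-string optimization
      return sizeof n + n.value.capacity();
    }
  };
}
```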

Another prominent difference from the overload-based solutions is that a class specialization for X does not automatically work for types publicly derived from X. Let me explain. If you have two types related by inheritance:

namespace ns_x
{
  struct X {};
}

namespace ns_y
{
  struct Y : ns_x::X {};
}

And you define a function (overload) for X reachable via ADL:

namespace ns_x
{
  size_t mem_usage1(const X&) { return 1; }
}

It becomes immediately reachable via ADL for Y, even if the two types reside in unrelated namespaces:

int main()
{
  ns_y::Y y;
  mem_usage1(y); // works
}

This may be a desired or an adverse feature, depending on the types X and Y and the logic of the overloaded function; but the point is: you get it with the overload-based techniques, and you do not get it with the class template specialization technique.

For a full working example of this framework implementation, see here. This technique has been chosen for std::hash, although it may not be visible at first, because in the case of std::hash a non-static member function (the function call operator) is used, which requires creating a temporary object:

int main()
{
  int i = 0;
  std::hash<int>{}(i);
}

But the idea stays the same.

This technique becomes an attractive choice when a framework requires two or more operations to be available on the type. The class scope becomes a convenient way of bundling the operations together.
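As a sketch (with names of my own invention), a single traits class can bundle, say, the stack and heap parts of the measurement as two static members, where the overload-based designs would need two separately named customization points:

```cpp
#include <cstddef>

namespace framework
{
  // hypothetical traits bundle: two related operations customized
  // together by specializing one class template
  template <typename T>
  struct storage_traits
  {
    static std::size_t stack_part(const T& v) { return sizeof v; }
    static std::size_t heap_part(const T&)    { return 0; } // trivial types own no heap
  };
}
```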

Conclusion

In all three techniques there is one common aspect: the customization points (named mem_usage1, mem_usage2 and mem_usage3) are separate from the exposed interface: function mem_usage. This is a particular application of a general good practice: separate the interface from the customization points. This applies not only to templates and overloads. For another application, see the article “Virtuality” by Herb Sutter.

While std::swap serves as both the customization point and the interface, there are many problems with this (forgetting to include some overloads being one of them). There are attempts at providing separate interface and customization points that nonetheless share the same name, as proposed by Eric Niebler in N4381.

I am very grateful to Tomasz Kamiński for sharing his insights on the subject with me, and helping improve this post.

References

  1. Robert Ramey, Boost.Serialization library.
  2. Herb Sutter, “Virtuality”.
  3. Eric Niebler, “Suggested Design for Customization Points”.
  4. Bjørn Reese, “Partiality for Functions”.

28 Responses to A customizable framework

  1. Patrice Roy says:

    Hi!

    Nice article, as usual. The adl_tag trick is nice. Small typos:

    – «mem_usage1(const T& v) { return sizeof v; }» where the parens around v are missing
    – a few and glue_between_framework_and_opitonal where letters ‘t’ and ‘i’ have been swapped 🙂

    Cheers!

  2. Sebastian Redl says:

    I use the adl_tag trick in my own stuff as well, though I’ve taken to call it extension_point. I feel that this name better expresses its purpose (“this is an extension point of the library”) and less a technical detail (“enable ADL for this function”).

  3. Robert Ramey says:

    As the author of the Boost Serialization library I’m very pleased to see the library used in this example. It’s also quite illuminating to me. For the first time I more or less understand what I was doing in 2003. At the time, I was had a list of requirements – non-intrusiveness among them and I just did whatever it took to “make it work”. I don’t really have a point here, though I’m sure there is some sort of lesson here somewhere. As usual, a very interesting and well written article.

  4. viboes says:

Customizing using specialization of class templates used to be named traits classes. The standard already has char_traits and allocator_traits, and there is a proposal for executor_traits. Note that these traits can be specialized even if they are in namespace std.

    There is an additional characteristic of the traits. We associate the customized operation to a single class, independently of the number of parameters of the operations. This makes the customization more predictive, as it is expected that the author of a class add also the customization of this class.

    I’m experimenting since a some time with a similar structure. I use to find out the concept/type of classes that are related to the operations to customize (let me call it MyConcept, Functor, Monad, Monois). I place the operations associated to the concept inside the namespace MyConcept and name the customization point MyConcept_traits. There is an additional concept_tag tag type so that the user can inherit from MyConcept::concept_tag to associate the namespace MyConcept to the type if desired. This allows to have direct access to operators.

    A Concept R that is a refinement of another concept C add a using namespace C. Having the functions in a namespace allows the user to introduce the function associated to a concept all at once.


  6. I know it’s just a toy example but doesn’t your mem_usage3 specialization for std::pair potentially fail to report space taken by padding due to alignment requirements on U?

  7. Dusketha says:

    I have waited this reserch for a long time.
    Thank you very much for share.
    I have tryed to make customizable function template that is compiled as expected.
    But my study is not enough. I could only trials not backed by theory.
    http://stackoverflow.com/questions/34454223/how-to-write-customizable-function-templates-in-modern-c
    I think, coming C++1z concepts will let customization more complex.
    I hope customization method will be studied more and standardized.
    Especially want to see std:: function templates become customizable.
    Thanks again.

    • Dusketha says:

      one more thing I want is customizing point for operator function.
      I am not sure to customize binary operator*() template is safe.

  8. Wow, quite impressive. It’s amazing what you can do with C++ when you put your mind to it.

  9. Anatoliy says:

    What do you think about this http://talesofcpp.fusionfenix.com/post-8/true-story-i-will-always-find-you kind of customization points design? They uses simple transforming function and operator decltype in default template argument of class template. Does it fit your classification from mem_usage1 through mem_usage3?

    • Hi Anatoliy. Thanks for an interesting link. If my understanding of the article is correct, this does not even qualify for a solution to the problem I am solving.

      One of my goals is that the customization must not be intrusive: it must not affect the declaration (or even the header file) of the type T we want to add support for. In the case of Spirit customization points, they require that the T (in the article it is called geometry::literal_point_parser) must already been declared as being a parser. This is what they use the CRTP for:

      struct point_parser : x3::parser<point_parser>
      // this means declare point_parser as x3::parser
      

      You cannot make a decision that any type, like int is a parser.

      • Anatoliy says:

        I think I can make int a parser by specializing boost::spirit::x3::extension::as_parser (surely, in its own namespace, this is inconvenient) or using boost::spirit::x3::extension::detail::as_parser_guard::deduce_as_parser machinery.

        • I do not know Boost.Spirit well enough form a definite opinion. But from what I understand the point of the article is how you teach the framework to recognize your parser as a parser in the framework’s sense. To me, clearly not every type is or can be a parser. You could probably force an int to be recognized as a parser, but how would it parse text? You could probably add a number of associated functions that can parse whatever, but that is beginning to go against the grain.

          Frameworks I was describing work with any type in the sense that you can swap any object that contains a data, or hash it, or serialize it, or measure its size.

      • viboes says:

        IIUC Sprit X3 has two level of customization points. 1 uses the trait as_parser. The second uses the ADL overload as_spirit_parser. There is a specialization of as_parser when as_spirit_parser is found by ADL

        I have used these 2 level customization point as a transition to be backward compatible, e.g. std::swap, std::begin, …

        The user can be explicit and specialize the trait. Users that overloaded the function in its own namespace continue to work. With this 2 levels I can now specialize swap_trait for my own concept, as e.g. ProductType. Note that oveload will not works even with SFINAE, as we don’t know where to locate the overload.

        I’m not completely against using ADL if the overloaded function has a name that is specific enough, as it is the case of as_spirit_parser. However having customization points with names as swap, begin, end, size, data, get, … make them almost keywords (as N1691 reported). Did the user wanted really to customize the standard library. What if a 3PP library uses the same names? How the user can choose?

        IMHO, using traits is the more explicit way. N1691 proposed a feature to been able to specialize traits from its own namespace, so that the user doesn’t need to exit from its own namespace and enter the namespace to specialize:

        “3. Invite clients to (explicitly or partially) specialize a library class template that has a static member function implementing the procedure”

        I believe that we shouldn’t abandon the traits approach even if we don’t have this syntactic sugar feature in the language.

        http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1691.html

  10. Dmitri says:

    Very good article, Andrzej. Thank you.

    I have one question though. I have discovered that for the first approach (mem_usage1), VS 2015 compiles fine the following code w/o forward declaration for std::pair:

std::vector<std::pair<int, int>> v2;
framework::mem_usage1(v2);

    Do you see the same behavior? Is it a bug of VS 2015 C++ compiler or I have missed something?

*****
namespace framework
{
template <typename T>
typename std::enable_if<std::is_trivial<T>::value, size_t>::type mem_usage1(const T& v)
{
return sizeof v;
}

//template <typename T, typename U>
//size_t mem_usage1(const std::pair<T, U>& v);

template <typename T>
size_t mem_usage1(const std::vector<T>& v)
{
size_t ans = sizeof(v);

for (const T& e : v)
ans += mem_usage1(e);

ans += (v.capacity() - v.size()) * sizeof(T);
return ans;
}

template <typename T, typename U>
size_t mem_usage1(const std::pair<T, U>& v)
{
return mem_usage1(v.first) + mem_usage1(v.second);
}

template <typename T>
size_t mem_usage(const T& v) { return mem_usage1(v); }
}
*****

    • Indeed, in VS 2015, removing the forward declaration does not affect the compilation. The compilation is affected on GCC and Clang. Apparently VS has its own rules for name look-up.

      (Note: WordPress has a peculiar rules for putting code snippets inside comments. I put it here.)

  11. viboes says:

    How to avoid slicing while overloading?

    This is a variation of the “Function overloads with ADL tag”. In addition to this adl tag type contains the type we want to overload.

    namespace framework
    {
      template <typename T>
      struct adl_tag {}; // empty class
    }
    

    As adl_tag<D> don’t inherits from adl_tag<B> if D inherits from B, the overload will not be taken in account if there is not an exact match.

    namespace framework
    {
      // primary overload:
      template <typename T>
        std::enable_if_t<std::is_trivial<T>::value, size_t>
        mem_usage2(const T& v, adl_tag<T>)
        {
          return sizeof v;
        }
    
      template <typename T>
        size_t mem_usage(const T& v)
        {
          return mem_usage2(v, adl_tag<T>{});
        }
    }
    
  12. viboes says:

    How to allow slicing while specializing?

    This is a little bit more complex. We need to add an Enabler parameter that extend the conditions. Take a look at Hana. I believe that this could be used in combination of is_base_of.

    namespace framework
    { 
      template <typename T, typename Enabler=void>
      struct mth3 : mth3<T, when<true>> {};
      // specialization for trivial types
      template <typename T>
      struct mth3<T, when<std::is_trivial<T>::value>> 
      { ...  };
      // specialization for a polymorphic type X
      template <>
      struct mth3<X>  {
        static size_t get(const X& v)    {       return ...;    }    
      };
      // specialization for a polymorphic type X and types inheriting from X
      template <typename T>
      struct mth3<T, when<std::is_base_of<X, T>::value>> { 
        static size_t get(const X& v)    {       return ...;    }    
      };
    } 
    

    This works a far as the enabling conditions are independent. Otherwise we need a more specific specialization.

  13. Adam Badura says:

    Do you know why std::begin and std::end functions use the std::swap approach? Why the class template specialization approach wasn’t chosen?

    • I do not know what were the authors’ intentions, but my guess is: consistency. To have one uniform (not necessarily ideal) solution is probably better than to have different and different solutions to the same problem in different places.

      • Adam Badura says:

        Well, this could be a reasonable justification (even though it doesn’t convince me) if not for std::hash that you mentioned yourself… But if you don’t know the reasoning behind this choice then there is not much to talk about…

    • viboes says:

      I believe the committee doesn’t like too much the specialization approach. Overload and ADL is a mechanism that allows the user to declare the overloads near the class and even declare it as a friend function. The traits approach request the user to specialize the class template inside the std namespace. I prefer with difference the trait approach (as Boost.Hana does).

      I don’t like neither having the same name for the point of use and the point of customization. I would prefer to have something more specific for the point of customization so tat we don’t customize by accident.

      Note that in addition of having std::begin/std::end we have also range based for loops which are based on ADL. We have the same for structure binding and `get(t)`.

      There is an old paper that describes all the problems we have with the ADL approach. It presents an approach that solves the problems identified. I don’t know why the ideas were rejected.

  14. viboes says:

    I wanted to note that approaches 2 (tags) and 3 (traits) have the advantaged to allow the use of the customization point `framework::mem_usage` instead of the detail of the customization. Instead of using `mem_usage2(v, tag)` or `framework::mem_usage3::get(v)` we can use `framework::mem_usage(v)`. This allows to concentrate in `framework::mem_usage` some constraints and really follow the Method pattern (as Eric describes in N4381).

    * tags: http://melpon.org/wandbox/permlink/Koth5V5nlap9KaPC
    * traits: http://melpon.org/wandbox/permlink/96W9BNYaYOLAjxtO

    I have added an specialization
    * traits and concepts : http://melpon.org/wandbox/permlink/TCW6a0pYAujO5e0O
    * functions (~N4381): http://melpon.org/wandbox/permlink/V7dheXF5olhkaxJ5

  15. viboes says:

    Alternatively we could forward declare the mem_usage function instead of each one of the mem_usage1 overloads.

    http://melpon.org/wandbox/permlink/v5wePVY5teyDnPOf
