Self Unemployed

C++ Needs Better I/O Facilities

C++ is not well liked by a large number of non-C++ programmers, much in the same way that some people don’t like burritos. These people are wrong, and there’s nothing we can do to change their minds. They will continue to hate on the purity of burritos for their entire lives, never knowing how delicious they taste, having heard from others in the echo chamber that they were burned by a burrito once. Thus, for them, all burritos are the second coming of the anti-christ.

One of the arguments that has stood the test of time against C++ is the streams library. It’s even a point of contention within the C++ community. The main arguments against the entire streams library are usually:

  • Overloading operator <<
  • Performance
  • 'God Objects'

We’re going to ignore that first bullet point, however, because it is entirely a matter of opinion. Developers can argue about overloading operators and what should and should not be overloaded all day (and they usually do).

However, streams are (to say the least) less performant than their C stdio counterparts, enough so to warrant a variety of Stack Overflow posts about it. Out of all the classes and interfaces in the C++ standard library, the streams library is the least modern (and indeed the one that received the least attention during the C++11 and C++14 revisions). Among these issues are inheritance, virtual functions, and a lack of user-defined allocator support at type instantiation time. Out of all the C++ standard libraries, streams are the least C++, and arguably the most over-engineered. C++’s motto is “only pay for what you use”, and a C++ programmer shouldn’t have to pay such a heavy tax for something as simple as reading bytes from a file, string, or other resource.

The last bullet point, that streams are God Objects, is the most important. If there is one issue that I have with C++ streams, it is that they do too much. They do formatting and scanning of text, reading and writing of binary data, handling of locale information (though not in a way that gets it right without something like Boost.Locale), buffer overflow and underflow handling, position seeking, and worst of all, the very nature of a stream results in an overcomplicated user-defined stream insertion/extraction overload. When we rely on Argument Dependent Lookup and the ostream& operator << overload, we end up in a bit of a pickle. What happens if we want our user-defined type to be output as binary instead of text? How do we, as the user of this library, decide whether we want a binary formatted output function or a text formatted output function? There is no good (or rather, easy) answer. What ends up happening is a user provides an ostream& operator << so they can just dump text to the console or to a string for debug information. There is no way to say “when writing to this resource, we treat it as binary data; when writing to a different one, we will use text formatting for printing log information”. We can’t just let ADL kick in and take care of the rest for us. It is for this reason that we end up with libraries like Boost.Serialization and cereal, so that developers can explicitly state “here’s how we store our data”, even if it is for the smallest of utilities.

How do we solve this? The committee won’t (or maybe can’t, backwards compatibility is a big deal). The streams library is here to stay. But, we need a better alternative. One that lets us rely on ADL, that doesn’t focus on the use of inheritance or virtual functions, that works in a way that lets the user decide how they read and write binary data vs formatting or scanning text data, or even printing.

What we need are better, more generic I/O facilities for C++. We need a library that splits up the different Concepts of Resources, Readers, Writers, Streams, Formatters, Scanners, Buffers, and even smaller concepts such as position Seeking within a Resource.

Something like this wouldn’t replace serialization libraries like Boost.Serialization, but it would be a better foundation for their output.

I’ve looked to other programming languages for inspiration on what a potential Modern C++ I/O API might look like. Rust has some pretty good concepts, some that would even map 1:1 with C++. However, they rely on some Rust-specific language features. They do not make a distinction between Read/Write and Format/Scan, nor do they treat each possible Resource as one, opting instead to write one Reader for each possible Resource (MemReader, FileReader, BufReader, etc.). Even Java has some decent concepts (allowing for a writeObject function); however, because Java is “all aboot the oop”, it relies on a user defining a class to handle Read/Write vs. Format/Scan, as well as on its built-in reflection system.

With these ideas in mind, I was able to develop some brief concepts that a Modern C++ I/O API should express. First, we have our verbs. These are the functions that are used for ADL to allow a generic approach to perform actions on those types which meet our Concepts. All of them relate to either a Concept, or are taken from a C stdio-like name.

  • read : Read binary data from a Resource
  • write : Write binary data to a Resource
  • scan : Read text from a Resource
  • format : Write text to a Resource
  • open : Open a resource for I/O operations
  • close : Close a resource, ending I/O operations
  • flush : Flush a Buffer to its Resource
  • sync : Synchronize a resource with the operating system, if possible
  • tell : Get the current position for I/O operations
  • seek : Set the current position for I/O operations
  • skip : Read and discard data from a Resource until a non-white-space or given character is encountered. Used only on Scanners
  • print : Output text to stdout or stderr

We now need to express our Concepts. These would have an equivalent type trait available to check if a given type meets one of these requirements, allowing for SFINAE within APIs that rely on them.

  • Reader
  • Writer
  • Stream : Is both a Reader and Writer
  • Pipe : Holds both a Reader and Writer. For every Read, there is a Write
  • Scanner
  • Formatter
  • Channel : Is both a Scanner and Formatter
  • Filter : Holds both a Scanner and Formatter. For every Scan, there is a Format
  • Resource : Represents a data source or data target.
  • Buffer : Manages a Resource by buffering I/O operations.
  • Seeker : Allows moving the current I/O operation position of a Resource.
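To make this concrete, here is a minimal sketch of how a verb and its matching trait could fit together. Every name in it is hypothetical (nothing like this exists in any standard library), and a toy in-memory Resource stands in for a file or blob:

```cpp
#include <cstddef>
#include <cstring>
#include <string>
#include <type_traits>
#include <utility>

namespace io {

// A toy in-memory Resource: just bytes plus a current position.
struct memory_resource {
  std::string data;
  std::size_t pos;
  explicit memory_resource(std::string d) : data(std::move(d)), pos(0) { }
};

// The 'read' verb: a free function found via ADL, not a virtual member.
inline std::size_t read(memory_resource& res, char* out, std::size_t len) {
  std::size_t avail = res.data.size() - res.pos;
  std::size_t n = len < avail ? len : avail;
  std::memcpy(out, res.data.data() + res.pos, n);
  res.pos += n;
  return n;
}

// The Reader trait: satisfied by any type for which a call to
// read(T&, char*, size_t) is well-formed, enabling SFINAE in generic APIs.
template <typename T, typename = void>
struct is_reader : std::false_type { };

template <typename T>
struct is_reader<T, decltype(void(read(std::declval<T&>(),
                                       std::declval<char*>(),
                                       std::declval<std::size_t>())))>
  : std::true_type { };

} /* namespace io */
```

Any generic function could then constrain itself with is_reader, and a third-party type would become a Reader simply by supplying a read overload in its own namespace.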

This kind of approach would work well for mixins, as well as allowing user-defined resources. For instance, implementing a Reader-only Resource for the SQLite3 Blob object would be as simple as implementing a basic read function that wraps the sqlite3_blob_read function. One would also be able to hook into additional types in other libraries, such as libsdl’s SDL_RWops. The possibilities for inter-library interaction are numerous, possibly limitless.

If there were a library that expressed these concepts, it would most certainly make C++’s approach to I/O competitive with other languages.

INVOKE Unpackable Types in C++11

I’ve been working on a library that adds some C++14 library support to C++11 where possible. It’s currently at RC1 status (I’ve not built and uploaded the documentation, nor have I created packages yet) and can be downloaded here. This is the first in a series of posts where I’ll talk about what is inside this library, and the hows and whys of each component. Today, this post will discuss unpacking tuples as arguments to a function.

A while back on the C++ std-proposals list (before the proposal of a std::invoke) someone had suggested an apply_tuple function. I got to thinking that I could perhaps express tuple application in terms of the INVOKE expression. Specifically, given a std::tuple of an integer and a string, and some functor that takes an integer and a string, could I unpack that tuple directly into the functor? What’s more, given a tuple of some functor, and some set of arguments equivalent to the rest of the tuple’s size, could I invoke the tuple directly?

To sum up the answer to this question: YES.

To extend on it just a little bit: not only can we unpack a tuple, we can unpack anything that interacts with std::get<N> (where N is some value of type std::size_t) and std::tuple_size. We can also go a step further and ‘runtime-unpack’ (runpack) any type that has an ‘at’ member function that takes a std::size_t.

So how did I do it? We have to take a few (read: a whole buttload of) steps to get the semantics required to express all this in terms of the INVOKE pseudo-expression. For the purpose of keeping this short and to the point, I’ll be skimming over quite a bit of code. First, let’s identify what we need.

  • A way to express whether a set of types fulfill an INVOKE expression (i.e., a type trait)
  • A way to get all values from an unpackable (via std::get<N>) and to do so with a variadic template.
  • A way to signify we wish to unpack a template value explicitly (so as to not interfere with the regular INVOKE expression rules)

To save time, we’re going to assume we have a working std::invoke implementation.

Now then, let’s get that third bullet out of the way. We’re going to want two different semantics for our unpacking. The first and primary form is general compile time unpacking. This relies on the use of std::get<N>, and in C++14 will allow one to perform unpacking with constexpr. We can’t do this in C++11 because std::get<N> is not marked constexpr. The second form is wide element access runtime unpacking. We rely on the ‘.at’ member function for a given type to also perform bounds checking at runtime. This means the runpack form has the capacity to throw an exception. And that’s fine. The easiest way to differentiate from the normal INVOKE expressions is to simply make some sentinel types, like so:

Link To Gist
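For illustration, sentinel types along these lines would do the job (the exact names are my guess, not necessarily what the gist contained): they are empty tag structs whose only purpose is to select the unpacking overloads of invoke, plus instances to pass as the first argument.

```cpp
// Tags selecting the unpacking behavior of invoke.
struct unpack_t { };   /* compile-time unpacking via std::get<N> */
struct runpack_t { };  /* runtime unpacking via .at(std::size_t) */

// Instances to pass as the first argument: invoke(unpack, f, tuple).
constexpr unpack_t unpack { };
constexpr runpack_t runpack { };
```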

But that’s easy as hell.

Next, we need a way to generate index sequences. With the wonderfully added integer_sequence type we can do this easily:

Link To Gist
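Something along these lines works. Since std::integer_sequence itself is C++14, the C++11 backport has to provide its own; this is the simplest linear-recursion sketch, not necessarily the gist’s version:

```cpp
#include <cstddef>
#include <type_traits>

// An empty carrier type for a pack of indices.
template <std::size_t... Is>
struct index_sequence { };

namespace impl {
// Recursive generator: peel N down to 0, prepending each index.
template <std::size_t N, std::size_t... Is>
struct make_indices : make_indices<N - 1, N - 1, Is...> { };

template <std::size_t... Is>
struct make_indices<0, Is...> { typedef index_sequence<Is...> type; };
} /* namespace impl */

template <std::size_t N>
using make_index_sequence = typename impl::make_indices<N>::type;

static_assert(std::is_same<make_index_sequence<3>,
                           index_sequence<0, 1, 2>>::value,
              "make_index_sequence<3> expands to 0, 1, 2");
```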

Next we need to know if something is_invokable. We’re not going to call our type trait that, however. Within the std::invoke paper, a reference is made to some arbitrary std::invoke_of and std::invokable traits that do what we require. So let’s write our own!

Link To Gist
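A sketch of such a trait for the functor case (the member-pointer arms of INVOKE, and the class_of_t machinery, are omitted for brevity; the names below are mine, not the library’s):

```cpp
#include <type_traits>
#include <utility>

namespace impl {
// Preferred overload: well-formed only if F can be called with Args...
template <typename F, typename... Args>
auto try_invoke(int)
    -> decltype(std::declval<F>()(std::declval<Args>()...),
                std::true_type { });

// Fallback selected when the call expression above is ill-formed.
template <typename, typename...>
auto try_invoke(...) -> std::false_type;
} /* namespace impl */

// Inherits true_type or false_type depending on which overload wins.
template <typename F, typename... Args>
struct is_invokable : decltype(impl::try_invoke<F, Args...>(0)) { };
```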

The class_of_t trait is able to figure out the type of class that a member function is a part of, or get the type that a functor is directly.

We need to define the INVOKE that takes an unpackable. First it will take our sentinel type, followed by a callable, and finally the type to unpack. In the case of the compile time unpacker, we have a second overload to take just the unpack sentinel, and the unpackable object itself.

The runpack overload uses function traits to determine the arity of the callable, and as such, does not allow one to invoke just the runpackable type.

Link To Gist
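Pulling these overloads together, the compile-time unpack could look roughly like this. This is my reconstruction, not the actual gist: the sentinel type is declared inline here for self-containment, std::make_index_sequence (C++14) stands in for the backported generator, and the runpack overload with its function-traits arity detection is left out.

```cpp
#include <cstddef>
#include <string>
#include <tuple>
#include <utility>

struct unpack_t { };
constexpr unpack_t unpack { };

namespace impl {
// Expand the indices, fetch each element via std::get, forward them all.
template <typename F, typename Tuple, std::size_t... Is>
auto unpack_invoke(F&& f, Tuple&& t, std::index_sequence<Is...>)
    -> decltype(std::forward<F>(f)(std::get<Is>(std::forward<Tuple>(t))...)) {
  return std::forward<F>(f)(std::get<Is>(std::forward<Tuple>(t))...);
}
} /* namespace impl */

// invoke(unpack, f, tuple): apply every element of the tuple to f.
// Perfect forwarding keeps this overhead-free.
template <typename F, typename Tuple,
          std::size_t N =
            std::tuple_size<typename std::decay<Tuple>::type>::value>
auto invoke(unpack_t, F&& f, Tuple&& t)
    -> decltype(impl::unpack_invoke(std::forward<F>(f),
                                    std::forward<Tuple>(t),
                                    std::make_index_sequence<N> { })) {
  return impl::unpack_invoke(std::forward<F>(f), std::forward<Tuple>(t),
                             std::make_index_sequence<N> { });
}
```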

In the above source, the is_unpackable and is_runpackable type traits simply check if std::get<N> and std::tuple_size are available for the unpackable, and if Runpackable::at(std::size_t) is valid.

In the case of the runpack, we use some function traits to determine the arity of the callable. In the case of the unpack, we simply use std::tuple_size. This value then generates our index_sequence, which is then used to call the impl::runpack or impl::unpack function. The parameters are then unpacked in place and passed through to invoke. Thanks to perfect forwarding, we get no overhead.

So, how would some of this code work? Like so.

Link To Gist

And there you have it. A way to execute a std::tuple directly, or unpack a std::vector into a function as arguments. We also got really lucky and walked away from a very poorly thought out pun (runpack).

RValue Reference Implications In C++11

There are several implications that one has to realize when it comes to using rvalue references in C++11 APIs. For starters, requiring an rvalue reference as a parameter to a function implies the following:

  • The object being passed in may be mutated
  • The object being passed in may exist until program exit
  • You (the caller) no longer own the object being passed in

This is fairly large in terms of writing an API. In days of yore, before the existence of rvalues and move semantics, one might have used a pointer, or non-const reference to designate that an object would be mutated. They may have gone a step further and passed the object in by value to show that a deep copy would be made as the object passed in was no longer your concern. But none of these implied one very important feature that C++ technically lacks.
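Those implications can be sketched as a sink-style API (a hypothetical example, not from any real library):

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

class log_buffer {
  std::vector<std::string> lines_;
public:
  // Taking std::string&& tells the caller: the string may be gutted, it
  // may live as long as this buffer does (possibly until program exit),
  // and it is no longer theirs to use.
  void push(std::string&& line) { lines_.push_back(std::move(line)); }
  std::size_t size() const { return lines_.size(); }
};
```

A caller must write buf.push(std::move(s)) and accept that s is afterwards in a valid but unspecified state; the signature alone communicates the transfer of ownership.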

C99 added the restrict keyword. When applied to a pointer, the programmer is declaring to the compiler that for the lifetime of the pointer, only that pointer or a value derived from the pointer (such as pointer + some_offset) will access the data to which it points. This limits pointer aliasing, and helps the compiler to perform some cache optimizations.

C++, however, does not have such a keyword. Yes, gcc and clang support the keyword being applied to references, but this is non-standard (and, I should note, it also breaks type traits and SFINAE if you expect a T const& but pass in a T const& restrict). There are several proposals currently up for C++14 and beyond where we will be able to apply restrict as a modifier to types, and I’m all for that, as forcing constant lvalue references to not refer to the same object is a good idea. BUT! Let us not forget the last implication of an rvalue reference (a properly moved one, at that). Specifically:

  • The rvalue reference is declaring to the compiler that, for the lifetime of that object, only that object, or a value derived from the object (such as T const& ref = rvalue_reference), will access the data contained within the object

In other words, an rvalue reference can declare to the compiler that the programmer intends for the object to only ever be accessed at one point.

Of course, this isn’t one hundred percent true, as an object that uses shallow copies internally would get around it. However, this leads to the same result as it would in C99: undefined behavior. After all, declaring to the user that your API must fully and wholly own the object being passed in, and then magically mutating data contained within, could be considered the same as casting away const on a reference to a const object just so that one may mutate it (which, I should note, is guaranteed to be undefined behavior).

As far as I know, there are no compilers that will make assumptions or attempt to optimize based on this last implication. Whether they will or not also remains to be seen.

A Day With Rust-Lang 0.4

I was fairly sick today, so I decided to spend some time trying out Mozilla’s Rust language. To briefly sum up my experience in a sentence: “Why is the documentation inaccurate?” A majority of my time was spent shotgun programming. When examples contained within the rust-lang tutorial resulted in internal compiler errors, I knew I was in for a bad time.

Rust is definitely interesting. It’s effectively a research project at the moment, but it has the potential to be “the next big thing”. Coming from a primarily C++ background, I found the syntax to be terse, but satisfying and expressive. While the creators of Rust cite Go, Ruby, SML, and others as their sources of input, I get a large C++/Python/OCaml vibe, especially in terms of tool support (and frustration at confusing compiler errors).

I do have a few issues with the syntax in its current form. There is no way for the compiler to understand that pure fn add(Type, Type) -> Type and pure fn add(Type, f32) -> Type are not the same function. This resulted in an overload issue, while implementing two trait instances for the Add<R, S> trait. In C++, Python, or OCaml, this wouldn’t be an issue, especially if Type is explicitly stated, and is not generic.

Rust also requires you to explicitly cast a float to an integer (or vice versa). I actually like this. It’s something that bites people in C++, due to integral types performing implicit casting all over the place. What I don’t like is that the float type is the largest floating point type available on a given host. Instead, to be explicit about the size of a floating point type you want, you need to use f32, or f64. This also has the consequence of requiring you to cast floating point literals to a 32-bit float like 0.4f32. This is fine, but I do take issue with automatic float type aliasing depending on the host platform. It should be an explicit thing, I’d argue. That, or float should simply be a trait. (As it stands now, float implements the num::Num trait, which all other C-like integral types implement)

I’m also a bit disappointed in the engineers working on the language. Despite the language having been in development for quite some time, at no point has anyone suggested to change the keyword fn to fun. The inability to make the joke that rust puts the “fun” back into “fun”ctional programming is, in my opinion, one of the worst crimes against puns. The developers of rust need to seriously rethink this opinion. If anything, I would put this as their highest priority. In other words, rust needs to be a bit more punnable.

I’m definitely going to be keeping a close eye on the language. It’s gained my attention.

Your First Job Is the One You Remember Most

The first real job I had was at a small software company. Our team was small. Really small. 3 people to be exact. When looking to hire on someone new to the team, our team lead would occasionally show me people’s resumes, usually with a statement of “look at how bad this was”. As a brief note, our team lead was from Russia which, while related to this story, is for another post at another time.

There was one resume that really stood out. It’s the kind people usually get rejected for. 1 page of summary, statement, and references. 1 page of technical experience. 7 1/2 pages of work experience. No, that’s not a typo.

Every single position this man had held dating back to 1988. 24 years (older than I am, at the time of this post) of GPS related software. Hardware for satellites. Mentions of assembly, Ada, C, C++, a bit of Java. But it was the oldest job on his resume, the one at the end of his monster list of work experience (his first job), that really caught my attention. The one that management didn’t seem to notice. When I pointed it out to our team lead, his eyes got really wide. The job was from his college days. Moscow, 1988 to be specific. The job description was simple: “Designed and implemented tracking and navigation software for intercontinental ballistic missiles.”

We unfortunately didn’t hire anyone for another 6 months.

Conditional Inheritance in Python

I was messing around earlier today, trying to come up with a decent default value for a class to be imported in a build system I’ve been writing for a bit now. I was using a silly if-elif-else chain and a function to return the proper class. I then wondered if it would be possible to have a default class using conditional inheritance. In Python, the ternary is expressed like so:

x = value if condition else not_value

This expression can be used anywhere, even when defining classes. In the case of the build system, I was trying to set the default C/C++ compiler. This resulted in the following:

import sys

windows = sys.platform == 'win32'
macosx = sys.platform == 'darwin'
linux = 'linux' in sys.platform

class GCC(object): pass
class Clang(GCC): pass
class MSVC(object): pass

class CXX(MSVC if windows else (GCC if linux else Clang)): pass

Surprisingly (or maybe not surprisingly), it worked without issue. Whether this is something that is actually recommended is up for debate, but now I can set a ‘default’ compiler class within the toolchain (while allowing the user to change it if they wish).

Figured this was worth sharing. However, a word of warning:

Don’t actually use this in production code

Substitution Failure Is Not An Error (It Is Also Not Human) Part 2

Last time, I wrote about SFINAE in relation to functions. This time, I’ll be covering its use in structs, and all the neat little things this lets you do.

I had promised to show off a neat trick involving platform specific code within templates with extremely minimal use of the C preprocessor (simple defines are used).

Unfortunately, I found out that, according to the C++11 standard, what I had done actually violates the standard, and in gcc 4.7 this trick will not work. The reason it currently does is that all the compilers out there have only implemented some of the new rules, while still following some from C++03.

I’m still going to show off the trick, however, don’t expect it to work. The concepts behind it still do, but for platform specific calls, everything breaks.

But enough about that.

Like functions, structs and classes are also templatable. However, we can go a step further and partially specialize a struct, like so:

template <typename T, bool condition>
struct special { };

template <typename T>
struct special<T, true> { };

In the above example, we are telling the compiler that whenever the second template parameter (condition) is true, we want to use the second definition. Better yet, template parameters work more or less like function parameters, so we can even give the condition a default value:

template <typename T, bool condition=true>
struct special { };

Even better, since we would most likely not use the condition variable name, we don’t even need to give it a name (just a type, like C++ functions allow)

template <typename T, bool=true>
struct special { };

With this in mind, let’s say we wanted to use this knowledge to get some compiler specific intrinsics used. And maybe we want to do it without #if statements all over the place. Luckily, we can simply use a #define to know what compiler we are using.

#if defined(_MSC_VER)
  #define COMPILER_IS_MSVC 1
#else
  #define COMPILER_IS_MSVC 0
#endif

For this example, we’re going to implement basic integer-only byteswap functions to handle 16, 32, and 64 bit integers. The three compilers we care about all have builtin intrinsics for this functionality, but only gcc and clang use the same naming conventions (and with good reason, considering this is a compiler specific feature). With the previous post on SFINAE under our belts, we can take our knowledge of functions and wrap them. But first, we need to create a simple class (the default one, where COMPILER_IS_MSVC is going to be true).

template <typename T, bool=COMPILER_IS_MSVC> class swap {
public:
  static inline uint64_t call(uint64_t v) { return _byteswap_uint64(v); }
};

template <typename T> class swap<T, false> {
public:
  static inline uint64_t call(uint64_t v) { return __builtin_bswap64(v); }
};

Now, if we tried to compile the above on any platform, it would fail. This is because the types of the function call are known when the class template is instantiated. The types aren’t dependent, so no substitution is actually going to take place. So we need to get a bit clever, and create a wrapper type that only yields a usable return type when the input type matches the one we expect.

template <typename T, typename U> struct combo {
  typedef typename std::conditional<
    std::is_same<T, U>::value, T, void
  >::type return_type;

  typedef typename std::conditional<
    std::is_same<T, return_type>::value, T, T*
  >::type param_type;
};

The combo type can be used like so

typedef typename combo<T, uint64_t>::return_type qword;
typedef typename combo<T, uint64_t>::param_type qparam;

static inline qword call(qparam val) { return __builtin_bswap64(val); }

Now, in the event that type T is not a uint64_t, the qword type becomes void, and qparam becomes T*. This means that, according to the rules of SFINAE, this function cannot satisfy the type substitution of T. Assuming that we are using a compiler that performs two-phase name lookup, the contents of the function are never evaluated, and the compiler continues its work. We can do this with all of the basic fixed-size types.

template <typename T, typename U> struct combo {
  typedef typename std::conditional<
    std::is_same<T, U>::value, T, void
  >::type return_type;

  typedef typename std::conditional<
    std::is_same<T, return_type>::value, T, T*
  >::type param_type;
};

/* msvc */
template <typename T, bool=COMPILER_IS_MSVC> class swap {
  typedef typename combo<T, uint64_t>::return_type qword;
  typedef typename combo<T, uint32_t>::return_type dword;
  typedef typename combo<T, uint16_t>::return_type word;

  typedef typename combo<T, uint64_t>::param_type qparam;
  typedef typename combo<T, uint32_t>::param_type dparam;
  typedef typename combo<T, uint16_t>::param_type wparam;

public:
  static inline qword call(qparam val) { return _byteswap_uint64(val); }
  static inline dword call(dparam val) { return _byteswap_ulong(val); }
  static inline word call(wparam val) { return _byteswap_ushort(val); }
};


/* gcc/clang */
template <typename T> class swap<T, false> {
  typedef typename combo<T, uint64_t>::return_type qword;
  typedef typename combo<T, uint32_t>::return_type dword;
  typedef typename combo<T, uint16_t>::return_type word;

  typedef typename combo<T, uint64_t>::param_type qparam;
  typedef typename combo<T, uint32_t>::param_type dparam;
  typedef typename combo<T, uint16_t>::param_type wparam;

public:
  static inline qword call(qparam val) { return __builtin_bswap64(val); }
  static inline dword call(dparam val) { return __builtin_bswap32(val); }
  static inline word call(wparam val) {
    return ((val & 0xFF00) >> 8) | ((val & 0x00FF) << 8);
  }
};
We’re not done, of course. We need a way to wrap this struct, so let’s create a mini wrapper function which will also do the error handling for us, so that we don’t have to worry about incorrect types getting through and an actual error resulting from no proper substitution being available.

template <typename T> inline T swap_bytes(T val) {
  static_assert(std::is_integral<T>::value,
                "Only integral types are allowed");
  return swap<T>::call(val);
}
And that’s that. We’ve now created an implementation of compiler specific code that is nearly free of the C preprocessor (only the simple defines remain). Granted, it’s verbose as heck because we didn’t resort to preprocessor macros to cut down on the return_type and param_type typedefs, but it’s a pretty darn good example of “what could have been” if C++ compilers weren’t getting so good at what they do.

Substitution Failure Is Not An Error (It is also not a Waffle House) Part 1

With the finalization of the new C++ standard, I’ve begun tinkering with some features of the language that I haven’t really needed until now, or for which I chose a hackier route. At the suggestion of a friend, I’ve decided to ‘esplain’ what I’ve done recently that, aside from basic #ifndef and #include statements, allows circumventing platform or compiler specific code without use of the C preprocessor.

But, before I actually show that off, I think some explanation of some concepts that are used to achieve this might be necessary, especially because some folks struggle with it. As such, I’m writing this as a series, with the explanation of SFINAE (the easier to type acronym of the title) in conjunction with function overloading and deduction to be the main focus of this post. To help explain the concept, we’ll be using a non-car analogy. Not because car analogies are bad (they are, in fact, the worst kind of analogies), but because the example for SFINAE involving food is a little easier to stomach.

So, first, we need to comprehend a few basic, simple concepts. Specifically, that you can overload functions, and you can overload template functions.

template <typename T>
T conjunction(T) { return T(); }

Above is a simple function with a signature of (T) -> T (notation that mirrors C++11’s optional trailing-return-type declaration syntax, and that is used elsewhere, such as in Haskell, OCaml, F#, and even Python, sort of). That is, whatever the type of the parameter it takes will be the type of the item returned. This makes it easier for the compiler to deduce what is returned by the function.

template <typename T>
T conjunction(T) { return T(); }

auto val = conjunction(5);

Effectively, the compiler says “conjunction deduction, what’s your function?” To which it replies “Hooking up integrals and scalars and numbers”.

Now, here’s where we can get a little neat. Because functions support overloading, we can specialize the conjunction function to make the deduction require fewer assumptions.

template <typename T>
T conjunction(T) { return T(); }

void conjunction() { }

auto x = conjunction(5);

Now, conjunction has a second overload for the case where there is no argument at all (the closest we can get to a T of void, since a parameter of actual type void is not valid). We can do this with literally anything, as long as we are explicit about what we overload with the function. However, what happens when we have the following?

void conjunction(int16_t);
void conjunction(uint32_t);

conjunction(0xFFFF);

Which conjunction overload will the compiler select? In this case it selects the second, and always the second, because 0xFFFF is an int literal. However, the int16_t variant of conjunction was declared first, so why did the compiler skip it, and why did it not error out given that 0xFFFF is an int literal and is not unsigned (thereby making it an int32_t)? Because of SFINAE. The compiler looked at the varying factors we gave it, and discarded the functions whose signatures did not match, or were not able to be converted.

Let’s get analogical all up in this. Suppose you wished to dine at a restaurant in your current residence. Before selecting what you think is the best place to eat, varying factors are taken into account. These can range from information you know beforehand (e.g., never setting foot inside an establishment contained within a city-block sized market/emporium) to information that might be dependent on information you may not know until you need to know it (Is this the place where that guy took a monkey hostage and sang the Russian National Anthem backwards? Or was that Denny’s?). Sometimes, a restaurant can be right out (Waffle House), or not appropriate for the occasion (an Anniversary Dinner at Waffle House), or maybe result in undefined behavior (Waffle House).

Effectively, SFINAE can be viewed as a reason to avoid an error (or in the case of the restaurant analogy, a Waffle House) based on constraints that are known before the actual visitation to the restaurant, while still having the option to attempt to visit another if it turns out that possible constraints are ineligible (Just because the W is out in the Waffle House sign doesn’t mean it stopped being a Waffle House).

With SFINAE, we can sometimes get this bizarre effect where the compiler will choose the best function available (and if we’re not careful, the wrong one), simply so that it may not error. Applied to the restaurant analogy, we can sometimes get this bizarre effect where a person will choose the best restaurant available (and if we’re not careful, the wrong one), simply so that they don’t have to step foot into a Waffle House.

Next time, we’ll talk about Waffle House partial template specialization and how it relates to SFINAE.