
During development of libraries (mainly for use in internal projects) I have come across the "problem" of how to design them in a generic way. I am going to demonstrate with an example library that I have implemented and use in code.

But first, here is a summary of the ways I know to "distribute" a library in C++:

  1. Statically linked
  2. Dynamically linked
  3. Header only
  4. Direct inclusion into source tree

In (1), the library code is compiled into a set of object files which is later linked into the final executable at link time. The set of object files and include headers are included in the library "package." (2) is similar to (1) except that the linking occurs at run-time (or load time) using O.S. facilities.

(3) is quite common in C++ with libraries that use templates (e.g., the C++ Standard Template Library). All the code is contained within header files distributed to and included by the project. This code is either all inline or templated (which requires the complete definition at the time of use).

In (4) the complete source tree (headers and sources) is "included" into the project's tree and compiled alongside the project as if it were part of it. For example, this is often seen when using git submodules to "link to" code.

If I am missing any options, please enlighten me.


Now, this is where my real problem comes into play...

Ideally, I design a library in such a way that it is generic enough to be used with any project (within limits). This often means that the library needs to be "configured" differently to be compatible with the specifics of the project. Here are the methods I know of for providing this configurability:

  5. Dependency injection (Dynamic/Run-time polymorphism)
  6. Templates/CRTP (Static polymorphism)
  7. Preprocessor conditionals
  8. "Tweaks" (configuration headers)

(5) is (in my opinion) the simplest to work with. Pieces of code are decoupled via abstract interfaces so that they do not directly depend on each other. The library depends only upon the abstract interface, which is passed in (i.e., injected) as a parameter (e.g., to a function or constructor). In this way, the library may be compiled as its own unit (i.e., as a static or dynamic library) with no dependency other than the interface header and the compiler.

(6) allows a library to be generic over changes in type or usage at compile time via templates. This restricts the library to being distributed only via (3) or (4). It can also lead to too much code having to be templated (and thus header-only).

(7) is very common in C/C++, but it causes many problems. Preprocessor defines are used to change the way the library compiles, and perhaps even its interfaces. The problem with this solution is that it does not play well with distribution methods (1) and (2) (static and dynamic libraries), because the project and library must be compiled with exactly the same preprocessor definitions. If not, at best a link-time error is encountered; at worst, the project links but unexpected things happen at runtime.

I learned about (8) from https://vector-of-bool.github.io/2020/10/04/lib-configuration.html. The concept is similar to (7) in that it uses the preprocessor; however, the compile-time configuration is contained within a single "tweak" file. Again, this restricts the library to being distributed via (3) or (4).


Time for a real example. I wanted to write a logging library for C++ that met requirements I couldn't find in any other library. But, I wanted to be able to use this library both on embedded systems and a standard PC. The latter is freer to use the standard library datatypes, strings, standard IO, etc... The former must be restricted to more deterministic constructs.

For example, I wanted an AbstractEvent to "stream" a string representation of itself (i.e., to be printed to some output).

class AbstractEvent
{
public:
    virtual ~AbstractEvent() = default; // virtual destructor for the polymorphic base

    virtual OStream& stream(OStream& os) const = 0;

    friend OStream& operator<<(OStream& os, const AbstractEvent& v)
    {
        return v.stream(os);
    }
};

OStream is a configurable type, chosen per use case, that must meet a set of named requirements (the same way the standard template library works). However, I opted not to use templates because I didn't want the complexity of everything becoming a template. I would like, for example, to compile my logging library for embedded systems where OStream is defined to be a "light" stream, and to compile the library on a PC where OStream is defined to be std::ostream. The functionality of the library remains the same because both OStreams expose the same interface.

My solution for my library was to use a combination of (8) and (4) (via submodules).

//In AbstractEvent.hpp
#include "OStream.hpp"

//In OStream.hpp
#if __has_include(<logging/OStream.tweaks.hpp>)
#   include <logging/OStream.tweaks.hpp>
#endif

#ifndef LOGGING_OSTREAM
#   define LOGGING_OSTREAM std::ostream
#endif

#ifndef LOGGING_OSTREAM_INCLUDE
#   define LOGGING_OSTREAM_INCLUDE <iostream>
#endif

#include LOGGING_OSTREAM_INCLUDE

namespace logging
{

using OStream = LOGGING_OSTREAM;

} //namespace logging

//For embedded systems
//In logging/OStream.tweaks.hpp
#define LOGGING_OSTREAM_INCLUDE <etl/string_stream.h> //ETL is a STL-ish library for embedded systems
#define LOGGING_OSTREAM etl::string_stream

//For PC
//Use defaults (i.e., don't define a tweaks header)

I then add the library to my project via a git submodule.

My question is, have I missed any other design alternatives in my lists above? Are "configurable types" an indication of bad design?

  • In the spirit of implementing and debugging the logging library only once, why not build and package ETL for both the embedded and PC projects, implement the logging library to depend only on ETL, and package it as a separate project, then add it as dependency to both application projects? Does the PC variant of the logging library need STL definitions which are not supported in ETL? The submodule way is helpful if you're planning to continue developing the logging library in parallel with the application library; in that case just make it part of the app. You can spin it off when stable. Commented Sep 13, 2023 at 20:28

1 Answer


I am not a C++ professional, but if you design a logging library you should keep in mind that logging is a highly environment- and context-dependent thing, and if people adopt your library, in the future there will be different functionalities to customize:

  • Log Formatting
  • Log Collection/Transport
  • Log Display/Storage

In each of those areas, you could later encounter a lot of different implementations on different platforms. Some implementations could be platform-specific, some of them more generic.

The idea in the design of your log framework would be to keep your basic logging library independent of the platform and do platform-specific implementations in separate, independent modules. For embedded devices, you may have no way to display logs; you would rather transport them via network to a log server or store them in files. For desktops, you may want to show them on the console. So I would design the log framework in a way that the user chooses the components they want to use. That means your embedded implementation goes into one module, your desktop implementation goes into another, and your interfaces and the core go into yet another.

When someone wants to use your library, they choose which log transport, which log formatter, etc. they want to use. Maybe they implement their own variants.

And in that case, you can share your library components in both ways:

  • as source code for static linking (for people who want to customize it for their platform), including a static library build definition.
  • as dynamic link library precompiled for different platforms

Not sure if that helps, hope so.

  • Your point "as dynamic link library precompiled for different platforms" helps. You write the library to have flexibility if the user wants it, at the cost of requiring it be statically linked into (compiled with) the project, but have the option to choose defaults for popular platforms. Commented Sep 8, 2023 at 18:11
  • Yes, and instead of using the preprocessor to munge the code for different platforms into the same codebase, I advocate factoring the platform-dependent implementations into separate libraries and having a platform-independent core library. Commented Sep 8, 2023 at 18:39
