
So I tried implementing an easily extendable solution for sqrt decomposition. I deduced that only the identity value, the operation, and the block-update logic change, and the rest of the code stays the same. So I created 3 functions:

  • T identity()
  • T operation(const T&a, const T&b)
  • T block_update_calc(const T old_block_value, const T new_data_value, const T &old_data_value)

I first tried inheritance, where these were pure virtual functions in a base class, implemented in derived classes. But I then found out that we can't call virtual functions of a derived class in the constructor of the base class, which can only be mitigated at the cost of clumsy syntax.

The current solution is template based, and AdditionSqrtDecomposition and MultiplicationSqrtDecomposition behave exactly like I want, but the code looks a bit messy. How can I improve it?

Usage:

vector<int> nums(N);

AdditionSqrtDecomposition<int> sd_addition(nums);
MultiplicationSqrtDecomposition<int> sd_multiplication(nums);

Implementation

#ifndef SQRT_DECOMPOSITION_HPP
#define SQRT_DECOMPOSITION_HPP

#include <cmath>
#include <vector>

template <typename T, class Core> class SqrtDecomposition {
public:
  std::vector<T> _data, _decomp;
  int _block_size;

  // constructors, assignment, destructor
  explicit SqrtDecomposition(const std::vector<T> &data)
      : _data(data), _decomp(ceil(sqrt(data.size())), Core::identitiy()),
        _block_size(::sqrt(data.size())) {

    int curr_block = -1;
    for (int i = 0; i < _data.size(); ++i) {
      if (i % _block_size == 0) {
        curr_block++;
      }
      auto &d = _decomp[curr_block];
      d = Core::operation(d, _data[i]);
    }
  }

  T query(int l, int r) {
    T result = Core::identitiy(); // identitiy
    for (; l <= r and l % _block_size != 0; l++) {
      // individuals before a block
      result = Core::operation(result, _data[l]);
    }

    for (int i = l / _block_size; l + _block_size <= r; ++i, l += _block_size) {
      // block representatives
      result = Core::operation(result, _decomp[i]);
    }

    for (; l <= r; ++l) {
      // individuals after a block
      result = Core::operation(result, _data[l]);
    }
    return result;
  }

  void update(int index, T value) {
    int representating_block = index / _block_size;
    _decomp[representating_block] = Core::block_update_calc(
        _decomp[representating_block], value, _data[index]);
    _data[index] = value;
  }
};

// interface
template <typename T> class OperationCore {
public:
  static T identitiy() = 0;
  static T operation(const T &a, const T &b) = 0;
  static T block_update_calc(const T old_block_value, const T new_data_value,
                             const T &old_data_value) = 0;
};

template <typename T> class AdditionCore {
public:
  static T identitiy() { return 0; }
  static T operation(const T &a, const T &b) { return a + b; }
  static T block_update_calc(const T old_block_value, const T new_data_value,
                             const T &old_data_value) {
    return old_block_value + new_data_value - old_data_value;
  }
};

template <typename T> class MultiplicationCore {
public:
  static T identitiy() { return 1; }
  static T operation(const T &a, const T &b) { return a * b; }
  static T block_update_calc(const T old_block_value, const T new_data_value,
                             const T &old_data_value) {
    return (old_block_value * new_data_value) / old_data_value;
  }
};

template <typename T>
class AdditionSqrtDecomposition : public SqrtDecomposition<T, AdditionCore<T>> {
  using SqrtDecomposition<T, AdditionCore<T>>::SqrtDecomposition;
};

template <typename T>
class MultiplicationSqrtDecomposition
    : public SqrtDecomposition<T, MultiplicationCore<T>> {
  using SqrtDecomposition<T, MultiplicationCore<T>>::SqrtDecomposition;
};

#endif // SQRT_DECOMPOSITION_HPP

1 Answer

Design review

Your design is mostly sound. It can be simplified a little bit, and optionally made a bit more rigorous, but the basic design is not bad.

The name for what you’re doing is the “Policy” design pattern. The gist is that you create a class templated on policy types, and then concrete policy classes to control the behaviour of the main class. Even the standard library uses that pattern in places, though mostly as the closely related “Traits” design pattern. For example, std::string is actually std::basic_string<char, std::char_traits<char>, std::allocator<char>>, where std::char_traits is a traits type, and std::allocator is a policy type for (de)allocation. You can replace the allocator policy type with anything you want: you can have an allocator that logs every allocation, an allocator that allocates from a memory pool, an allocator that does nothing but uses stack memory… literally anything you want.

Your policy types are very good, too; you’ve boiled them down to a small number of minimal operations. That’s perfect.

Now, the OperationCore class is pure nonsense. I’m surprised it even compiles. (For the record, it looks like Clang compiles it, but GCC does not.) You seem to have confused static member functions with virtual functions. Those are two entirely different things; completely unrelated. A function cannot be both static and virtual; the idea doesn’t even make sense. So this:

static T identitiy() = 0;

makes no sense.

But that’s okay, because the entire OperationCore class serves no purpose, and isn’t used anywhere, so it can just be dumped.

If you want a way to validate the policy interface, the modern, C++20 way to do it would be to use a concept:

template <typename Policy, typename T>
concept sqrt_decomposition_policy = requires(T t)
{
    { Policy::identity() } -> std::same_as<T>;
    { Policy::operation(t, t) } -> std::same_as<T>;
    { Policy::block_update_calc(t, t, t) } -> std::same_as<T>;
};

And your class would be like:

template <typename T, typename Policy>
    requires sqrt_decomposition_policy<Policy, T>
    // also maybe floating_point<T> or integral<T> or a concept combining them
class sqrt_decomposition
{
    // ...

But since you’re apparently working in C++14, your only practical option is a traits type. This would be much easier to do with C++17, because you’d have std::void_t, std::conjunction, and so on. But it’s still doable in C++14. Here’s a very brief, very rough overview:

namespace _detail {

template <typename...>
using void_t = void;

template <typename Policy, typename T, typename = void>
struct has_valid_sqrt_decomposition_policy_identity_function : std::false_type {};

template <typename Policy, typename T>
struct has_valid_sqrt_decomposition_policy_identity_function<Policy, T, void_t<
    // this only checks that an identity function exists
    decltype(Policy::identity())
    // you still need to confirm that it returns a T
>> : std::true_type {};

// ... and so on for the other functions ...

} // namespace _detail

template <typename Policy, typename T>
struct is_sqrt_decomposition_policy
    : std::integral_constant<bool,
        _detail::has_valid_sqrt_decomposition_policy_identity_function<Policy, T>::value
        and _detail::has_valid_sqrt_decomposition_policy_operation_function<Policy, T>::value
        and _detail::has_valid_sqrt_decomposition_policy_block_update_calc_function<Policy, T>::value
    >
{};

And the class would be like:

template <typename T, typename Policy>
class sqrt_decomposition
{
    static_assert(is_sqrt_decomposition_policy<Policy, T>::value, "");

    // ...

Frankly, I wouldn’t bother. I would either use C++20 concepts, or just forget it completely. The legacy solution is so ridiculously over-complicated, and already obsolete, it’s not worth it. It’s not worth learning how to do it. It’s not worth the headache of trying to maintain it.

Once you have the main class, and the policy classes, there is no need for inheritance at all, never mind polymorphism. All you need is this:

template <typename T, typename Policy>
class SqrtDecomposition
{
    // ...
};

template <typename T>
class AdditionPolicy
{
    // ...
};

template <typename T>
class MultiplicationPolicy
{
    // ...
};

template <typename T>
using AdditionSqrtDecomposition = SqrtDecomposition<T, AdditionPolicy<T>>;

template <typename T>
using MultiplicationSqrtDecomposition = SqrtDecomposition<T, MultiplicationPolicy<T>>;

It is also possible to use template templates (no, that is not a typo: “template templates”) to simplify the last two lines a tiny bit:

template <typename T, template <typename> class Policy>
class SqrtDecomposition
{
    // ...
};

template <typename T>
class AdditionPolicy
{
    // ...
};

template <typename T>
class MultiplicationPolicy
{
    // ...
};

template <typename T>
using AdditionSqrtDecomposition = SqrtDecomposition<T, AdditionPolicy>;

template <typename T>
using MultiplicationSqrtDecomposition = SqrtDecomposition<T, MultiplicationPolicy>;

I would not recommend this, though. It’s too expert-level, and it doesn’t really add anything for the complexity. In fact, it removes flexibility. So don’t do it. I just had to mention it for completeness.

Okay, that’s about all for the design. Let’s get to the actual code.

Code review

template <typename T, class Core> class SqrtDecomposition {

Why mix typename and class? That sends the signal that you are using the different terms as code for something… but what? This only creates confusion. Stick to typename, or class.

Also, I would suggest instead of “Core” everywhere, you use “Policy”, since that is the name of the design pattern, and everyone who knows design patterns will understand immediately exactly what’s going on.

public:
  std::vector<T> _data, _decomp;
  int _block_size;

You probably want these to be private, not public.

std::vector<T> _data, _decomp;

Don’t do this. Place each declaration on its own line.

int _block_size;

You have a number of subtle bugs in your code that all come back to the fact that you are treating std::vector<T>::size_type and int interchangeably. They are not.

First, std::vector<T>::size_type is an unsigned value, while int is signed. Mixing signed and unsigned values is a gateway to grief.

Second, std::vector<T>::size_type could be many times larger than an int (and I believe it is on Windows). By forcing _data.size() into an int, you could be truncating values and getting garbage numbers as a result.

The best solution to this problem may lie in the constructor:

  explicit SqrtDecomposition(const std::vector<T> &data)
      : _data(data), _decomp(ceil(sqrt(data.size())), Core::identitiy()),
        _block_size(::sqrt(data.size())) {

So, here’s the thing: _data never grows. Neither does _decomp. So right here, in this constructor, you can check one time to make sure that _data.size() can fit in an int, and if so, you’re safe throughout. Here’s what you could do:

template <typename T, typename Policy>
class SqrtDecomposition
{
    std::vector<T> _data;
    std::vector<T> _decomp;
    int _data_size;
    int _block_size;

public:
    explicit SqrtDecomposition(std::vector<T> data)
    {
        // you could expose this value as a static const data member, because
        // it is useful information to users
        constexpr auto max_data_size = static_cast<unsigned int>(std::numeric_limits<int>::max());

        if (data.size() > max_data_size)
            throw std::domain_error{"data set is too large to decompose"};

        // (at this point, you might as well also check that data.size() isn't
        // zero, and any other checks you want to do on the input data before
        // moving it)

        _data = std::move(data);

        // now you know this is safe:
        _data_size = static_cast<int>(_data.size());

        _block_size = static_cast<int>(std::sqrt(_data_size)); // or sqrt(data.size()), doesn't matter anymore

        auto const decomp_size = static_cast<typename std::vector<T>::size_type>(std::ceil(std::sqrt(_data_size)));
        _decomp = std::vector<T>(decomp_size, Policy::identity());

        // ...
    }

Note the addition of a new _data_size data member, which keeps _data.size() as an int. Having the data size as an int will save you a lot of grief later on. The downside is: it’s an extra data member in your class, which takes up space. It probably doesn’t matter, given that you’re already working with large amounts of data. But if it does, then you can drop the _data_size member, and just remember to do auto const data_size = static_cast<int>(data.size()); in every member function.

Also, I recommend making max_data_size a static data member, because it would allow people to check the preconditions for the constructor. Incidentally, you should document the preconditions and postconditions of the constructor and every other function. For now, since we don’t have contracts in C++ yet, you should at least include comments explaining what the expected input is. Clearly you’re not expecting a zero-sized vector (or are you? if so, you have numerous bugs…). You should at least document that in a comment. Same goes for the max vector size, and the range of values for the data vector contents, and so on. I should be able to read a comment at the top of your function telling me what the valid inputs are. It shouldn’t be a mystery.

(Incidentally, you forgot some std:: prefixes for std::ceil() and std::sqrt().)

    int curr_block = -1;
    for (int i = 0; i < _data.size(); ++i) {
      if (i % _block_size == 0) {
        curr_block++;
      }
      auto &d = _decomp[curr_block];
      d = Core::operation(d, _data[i]);
    }

This loop is dangerous for a couple of reasons.

First, you’re comparing signed and unsigned integers. That’s always a bad idea. The fix here is to use:

for (int i = 0; i < _data_size; ++i) // if you've defined _data_size as above
// or:
for (int i = 0; i < static_cast<int>(_data.size()); ++i)

You know the latter is safe if you did the check described earlier.

The second problem is that you are doing int curr_block = -1; then _decomp[curr_block]. In between you have a block that will increment curr_block to zero on the first iteration, so it probably seems safe. And it is… for now… until someone else (possibly you in the future) goes monkeying around in the code, and maybe moves the if block to after the other statements, or who knows what else. (This isn’t even that crazy an idea, because an if check in a loop is inefficient, and especially so when it’s doing an expensive modulo op every loop iteration. Someone could very easily be tempted to refactor this to loop in block steps.) This is a very brittle pattern, very easy to break.

I’m not going to give a concrete suggestion of what to refactor it to, though. There are just too many options, all with pros and cons. At the very least, I would add a comment that warns about the fact that that if block must run on the first iteration, or UB.

Also:

auto &d = _decomp[curr_block];
d = Core::operation(d, _data[i]);

seems a lot less readable than simply:

_decomp[curr_block] = Core::operation(_decomp[curr_block], _data[i]);

Also also (and this applies everywhere): in C++, the convention is to write the type modifier next to the type, not next to the identifier. In plain English:

  • auto &d: this is C style.
  • auto& d: this is C++ style.

This applies also to things like const std::vector<T>&:

  • const std::vector<T> &data: this is C style.
  • const std::vector<T>& data: this is C++ style.
  • std::vector<T> const& data: also C++ style, and more technically correct, never mind what the core guidelines say, but if you prefer west const, that’s fine too.

Also also also, I know x % y == 0 is widely understood as meaning “true if x is a multiple of y”, but it still helps readability to spell it out:

// somewhere else, maybe in a file of simple utility functions:
template <typename T, typename U> // could be constrained in C++20, or you could use a static assert
constexpr auto is_multiple(T const& t, U const& u) noexcept(noexcept(bool(t % u == 0))) -> bool
{
    return bool(t % u == 0);
}


    // and in your code:

    int curr_block = -1;
    for (int i = 0; i < _data_size; ++i) {
      if (is_multiple(i, _block_size)) {
        curr_block++;
      }
      auto &d = _decomp[curr_block];
      d = Core::operation(d, _data[i]);
    }

    // ...

    for (; l <= r and not is_multiple(l, _block_size); ++l) {
      // individuals before a block
      result = Core::operation(result, _data[l]);
    }

There’s no reason to make your code obscure when it costs nothing to make it clear.

  T query(int l, int r) {

Again, there’s no documentation telling me what the preconditions for this function are. Is it okay if r is less than l? Is it okay if l is greater than the data size? Is it okay if either are zero? Or negative? Document your preconditions!

Now, let’s go back over the SqrtDecomposition class, and consider some other improvements. First, I would suggest adding constexpr everywhere. Because why not? Even if very little can be constexpr in C++14, as you start using more modern versions of C++, more and more things can be done at compile time. Might as well future-proof your class.

I would also suggest either documenting your expectations for the types, or spelling them out explicitly. In C++20 and beyond, that usually means concepts, but you can go a long way with static asserts, even in C++14:

template <typename T, typename Policy>
class SqrtDecomposition
{
    // Requirements on T.
    static_assert(std::is_arithmetic<T>::value, "");
    static_assert(not std::is_same<std::remove_cv_t<std::remove_reference_t<T>>, bool>::value, "");

    // ...

In your constructor, you take the data vector by const&. Prior to C++11, that was always the right thing to do. But now we have move semantics.

Since you are going to be copying the data vector anyway, this is a taking parameter… not merely a reading parameter. When you are reading a value, it makes sense to use const&, because you never want to copy it or change it, you just want to inspect it. But whenever you are taking a value, you should take it by value. Here’s why. Suppose I do:

auto decomp = AdditionSqrtDecomposition<int>{std::vector<int>{1, 2, 3, 4, 5}};

If your constructor takes a const&, then this will create a vector… then copy that vector into a new _data vector inside the constructor.

If your constructor simply takes the vector by value, then this will create a vector… then move that vector into _data.

So your constructor should be:

    explicit constexpr SqrtDecomposition(std::vector<T> data)
      : _data{std::move(data)}
      , _decomp(std::ceil(std::sqrt(_data.size())), Core::identity())
      , _block_size(static_cast<int>(std::sqrt(_data.size())))
    {

Note that once you move data, you can’t do data.size() anymore. You have to do _data.size().

I would recommend validating the input data before moving it into _data. But you could validate it after; doesn’t matter.

Now, once you’ve done this, you could consider allowing other input than just vectors. For example, what if I want to use a std::array of input data? Or a boost::numeric::ublas::vector<double>? Or maybe I want to read the input data right out of a data file?

Supporting arbitrary ranges is much easier since C++20… but it’s not exactly hard in C++14:

    template <typename InputRange>
    explicit constexpr SqrtDecomposition(InputRange&& r)
    {
        for (auto& v : r)
            _data.push_back(v);

        _initialize();
    }

    template <typename InputIterator, typename Sentinel>
    constexpr SqrtDecomposition(InputIterator first, Sentinel last)
    {
        for (; first != last; ++first)
            _data.push_back(*first);

        _initialize();
    }

    constexpr SqrtDecomposition(std::initializer_list<T> il)
    {
        _data.resize(il.size());
        std::copy(il.begin(), il.end(), _data.begin());

        _initialize();
    }

    // private helper function
    constexpr auto _initialize() -> void
    {
        // do the rest of the initialize of _decomp and so on
    }

This will work for any arbitrary range, or any arbitrary iterator/sentinel pair. You could do:

AdditionSqrtDecomposition<int>{std::vector<int>{1, 2, 3, 4, 5}};

// or with an array:
auto data = std::array<double, 3>{1.0, 3.0, 5.0};
AdditionSqrtDecomposition<double>{data};

// or directly out of a file:
auto in = std::ifstream{"datafile"};
AdditionSqrtDecomposition<double>(std::istream_iterator<double>{in}, std::istream_iterator<double>{});

// or input the data directly in the constructor:
AdditionSqrtDecomposition<double>{2.0, 4.0, 6.0};

Now what I’ve written isn’t the most efficient way to do it, but increasing efficiency is simply a matter of adding more overloads, and maybe some enable_ifs. (In C++20, it’s trivial to make this super efficient with concepts and ranges.)

Another efficiency improvement you might consider is abandoning vectors. Normally std::vector is the default container to use for runtime-sized data sets. However, there are two factors that make your case different:

  1. You never resize the vectors. After constructing _data and _decomp they stay the same size… forever. That pretty much voids most of the benefits of using a vector.
  2. You need to calculate and maintain the sizes separately anyway. Because you are using the sizes in calculations and loops where you expect them to be ints, you might as well just keep them as ints.

So maybe:

template <typename T, typename Policy>
class SqrtDecomposition
{
    std::unique_ptr<T[]> _data;
    std::unique_ptr<T[]> _decomp;
    int _data_size;
    int _decomp_size;

public:
    template <typename InputIterator, typename Sentinel>
    constexpr SqrtDecomposition(InputIterator first, Sentinel last)
    {
        // For efficiency, you have to handle things differently depending on
        // whether the iterators are input iterators or forward iterators
        // (or better). Not hard to do even in C++14, but verbose, so I'll
        // leave it to you to figure out.

        // Input iterators only ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        auto buffer = std::deque<T>{};
        for (; first != last; ++first)
            buffer.push_back(*first);

        _data_size = _validate_data_size(buffer.size());

        _data = std::unique_ptr<T[]>(new T[_data_size]);
        std::copy(buffer.begin(), buffer.end(), _data.get());

        _initialize();

        // Forward iterators or better ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        // note: this won't work prior to C++20 with sentinel types; you'll
        // have to roll your own distance function.
        _data_size = _validate_data_size(std::distance(first, last));

        _data = std::unique_ptr<T[]>(new T[_data_size]);
        // again, won't work with sentinel types before C++20. not hard to
        // roll your own.
        std::copy(first, last, _data.get());

        _initialize();
    }

    template <typename InputRange>
    explicit constexpr SqrtDecomposition(InputRange&& r)
    {
        // For efficiency, you need to check for sized ranges, and input
        // versus forward-or-better ranges. This requires C++20, but you can
        // *almost* get the same effect before, with a little work. I'll
        // leave it as an exercise for the reader.
    }

    constexpr SqrtDecomposition(std::initializer_list<T> il) :
        SqrtDecomposition(begin(il), end(il))
    {}

private:
    template <typename SizeType>
    constexpr auto _validate_data_size(SizeType s) -> int
    {
        // Check whether SizeType is signed or not, of course. Trivial to
        // handle in C++17 with if constexpr, or in C++20 with concepts.
        // Not hard in C++14, but you're on your own.

        if (s == 0)
            throw std::domain_error{"dataset is empty"};
        if (s > max_data_size)
            throw std::domain_error{"dataset size is too large"};

        return static_cast<int>(s);
    }
};

One other thing you might consider adding is an allocator argument, to allow users to customize where/how the internal allocations happen. Not much changes in the code above when using an allocator. You just need an allocator-aware smart pointer (unfortunately, one doesn’t exist in the standard, but it’s not hard to make), and of course you need to use it rather than new, but otherwise, pretty much nothing changes. Well, you’d need to add allocator template arguments, of course:

template <typename T, typename Policy, typename Allocator>
class SqrtDecomposition
{
    // ...
};

template <typename T>
class AdditionPolicy
{
    // ...
};

template <typename T>
class MultiplicationPolicy
{
    // ...
};

template <typename T, typename Allocator = std::allocator<T>>
using AdditionSqrtDecomposition = SqrtDecomposition<T, AdditionPolicy<T>, Allocator>;

template <typename T, typename Allocator = std::allocator<T>>
using MultiplicationSqrtDecomposition = SqrtDecomposition<T, MultiplicationPolicy<T>, Allocator>;

Finally, I’ll just review one of the policy classes, since they’re more or less identical:

template <typename T> class AdditionCore {
public:
  static T identitiy() { return 0; }
  static T operation(const T &a, const T &b) { return a + b; }
  static T block_update_calc(const T old_block_value, const T new_data_value,
                             const T &old_data_value) {
    return old_block_value + new_data_value - old_data_value;
  }
};

As mentioned earlier, I’d suggest naming this AdditionPolicy rather than AdditionCore, or maybe even SqrtDecompositionAdditionPolicy to be really explicit. It’s not a big deal if it’s verbose, because no one will be using it directly. (They’ll use the AdditionSqrtDecomposition<T> (or maybe AdditionSqrtDecomposition<T, Allocator>) alias.)

There’s no reason all of these functions can’t be constexpr.

Check the spelling of identity()!

Also, identity(), and only identity(), can be noexcept. noexcept means “cannot possibly fail”, and there is no conceivable way identity() can fail. But operation() (for example) could fail if you tried to add std::numeric_limits<int>::max() and anything other than zero. Leaving off noexcept does NOT necessarily mean “will throw an exception”. It just admits that a function can conceivably fail, and even if it doesn’t throw an exception now, it might do so in the future, or maybe in debug mode or something.

In fact, you could even make a regular addition policy… and a “debug” addition policy that does everything exactly the same, but checks for over/underflow, and maybe logs or throws or whatever. You could define the decomposition type with the regular policy when not debugging or testing, and the special debug policy type when debugging. For example:

template <typename T>
struct AdditionPolicy
{
    static constexpr auto identity() noexcept -> T { return T{}; }

    static constexpr auto operation(T a, T b) -> T { return T(a + b); }

    static constexpr auto block_update_calc(T old_block_value, T new_data_value, T old_data_value)
    {
        return T((old_block_value + new_data_value) - old_data_value);
    }
};

template <typename T>
struct DebugAdditionPolicy
{
    static constexpr auto identity() noexcept -> T { return T{}; }

    static constexpr auto operation(T a, T b) -> T
    {
        if ((std::numeric_limits<T>::max() - a) < b) // very basic check...
            throw std::overflow_error{"addition would overflow"}; // or assert, or log, or whatever

        return T(a + b);
    }

    static constexpr auto block_update_calc(T old_block_value, T new_data_value, T old_data_value)
    {
        // do checks here, too

        return T((old_block_value + new_data_value) - old_data_value);
    }
};

#ifdef NDEBUG
    template <typename T>
    using AdditionSqrtDecomposition = SqrtDecomposition<T, AdditionPolicy<T>>;
#else
    template <typename T>
    using AdditionSqrtDecomposition = SqrtDecomposition<T, DebugAdditionPolicy<T>>;
#endif // NDEBUG

Summary

  • Dump OperationCore. It’s not even legal C++. Even if it were, it serves no purpose.

  • If you really want a way to validate policies at compile time, consider moving to C++20. It’s possible to validate policies with C++14-era type traits… but it’s massively complex, and verbose. Not really worth it since C++20 is already out.

  • Don’t create AdditionSqrtDecomposition and MultiplicationSqrtDecomposition by inheritance. It’s overly complicated, and misguided; if SqrtDecomposition were meant to be a base type, it should have a virtual destructor. Use aliases instead.

  • Document everything better. You don’t necessarily need to explain the whole purpose of the class, because, presumably, anyone using it should know what it’s for. But at a bare minimum, you do need to document the preconditions and postconditions for every function (including constructors, destructors, etc.). That’s because while you can reasonably presume that only someone who understands your class would actually use it in code… the person reviewing or maintaining that code should not need the same level of understanding. A reviewer (or a static analyzer!) should be able to look at a function’s preconditions and postconditions, and—without understanding what black magic is happening inside—be able to spot when someone is passing invalid arguments (or when your functions’ returns are being used as invalid arguments to something else). (To verify that the black magic inside the class is okay, that’s what unit testing is for. Which, by the way…)

  • Test your code properly. That means writing unit tests, preferably with a proper unit testing framework. Test various normal input values, and make sure everything works as expected. Test edge cases. You don’t need to test failures due to precondition violations. But make sure that whenever the preconditions are satisfied, the postconditions are as well.

  • Consider using the established and widely-understood design pattern term “policy”, rather than the vague, idiosyncratic term “core”.

  • Watch out for signed/unsigned comparisons, and for cramming larger types (like std::vector<T>::size_type) into types with smaller ranges (like int, on some platforms).

  • Consider adding more constructors to make the type more flexible, so it can be constructed from things other than vectors, including arbitrary ranges, arbitrary iterator/sentinel pairs, and even directly constructed with data.

  • (More difficult, complex, long-term suggestion.) Consider adding an allocator policy.

  • I can't thank you enough! I learned a lot and also about the things to learn, thanks for taking out the time for such a detailed explanation and review. Commented Nov 28, 2021 at 6:24