29

I'm a junior in my company, and one of the coding rules they have is: "a constructor must never fail" (i.e., never throw). So what if I give it an invalid parameter? Then the object should be constructed anyway, and there should be another way to check whether the object is valid, like an isValid() method.

For context, we are developing desktop software (no web) in C++. I work mainly on the core of the app, which can be seen as a DLL used by the rest of the software.

I am surprised by this, and after a bit of questioning, I understand the following reasoning:

  1. When something throws, at some point in the call stack this should be handled (using try...catch).

  2. If a throw is not caught, stack unwinding will go all the way to the top, at which point it will encounter a global catch block that will do something like this: display the message "Something went horribly wrong", try to generate a crash report, and shut down the app.

  3. We cannot be sure that future programmers will use your object correctly and handle all edge cases. If someone uses my object, forgets to add a catch, and passes through testing, we are at risk of having a "Something went horribly wrong" message in production, and this should be avoided at all cost.

  4. So, an object constructor can never throw. In fact, no function should throw (outside of a very small part of the code that has a specific catch clause that doesn't crash the app).

While I understand the reasoning, I find the consequences to be unsettling: I may not use exceptions.

And it's a shame, because they are useful. Instead, we have a lot of if (myobj.isValid()) everywhere, and empty constructors alongside init(....) methods, which I have a feeling is a bad pattern.

But we can't find a flaw in this reasoning, especially in point 3. What would be a better approach, one that allows programmers to throw when needed and ensures that we never trigger the ultimate "Something went wrong" fail-safe?


Basic example: my object needs to store a size as a double, and the size must always be positive. But C++ doesn't have an unsigned double, so we have:

class MyObj {
public:
   MyObj(double size) { m_size = size; }
   double GetSize() { return m_size; }
private:
   double m_size;
};

What if someone calls MyObj(-3)?  I can't throw in the constructor, so the object must be created. Then, I have to add a method bool isValid() { return m_size > 0; }.

And all callers have to check validity; it's tedious. What I want is to have my object and always be sure that the size is positive when I use it.

24
  • 5
    One phrase to look up for background reading would be "define errors out of existence"; particularly for the if (size < 0) {m_size = 0} example Commented Apr 19 at 9:30
  • 10
    @sayanel You can't never trigger the failsafe. But put in place good software engineering practices (code reviews, appropriate levels of testing, etc) and you can get to a point where it's triggered rarely enough not to be an issue. At the moment, your company's cure (no throw constructors) is arguably worse than the disease (sometimes triggering a failsafe). Commented Apr 19 at 10:53
  • 53
    @sayanel, the counter-argument for point 3 is that future programmers might forget to call isValid on an object that isn't valid and as a result cause a malfunction of the program. That malfunction could be less obvious but more devastating than an "oops" message to the user and it includes corrupting the user's critical data. Commented Apr 19 at 11:14
  • 7
    A better, more common rule is "A constructor should never give an invalid object". To follow this coding rule, public constructors should not be used, and a factory returning an Either<Object, Error> (or whatever the C++ equivalent is) sounds more appropriate.
    – njzk2
    Commented Apr 21 at 11:55
  • 12
    Re 'We are at risk of having a "Something went horribly wrong" message in production, and this should be avoided at all cost.' I must disagree strongly with the "at all cost" part. Something much worse than a "something went wrong" crash is ... something went wrong and the app didn't crash, it did the wrong thing. If you can avoid throwing because you can arrange to always leave a valid state, then that's good, 100% agree you should strive for this. If you avoid throwing by just leaving things in an invalid state and trusting all other code will handle it, then that's worse.
    – Ben
    Commented Apr 22 at 2:42

14 Answers

51

We cannot be sure that future programmers will use your object correctly and handle all edge cases. If someone uses my object, forgets to add a catch, and passes through testing, we are at risk of having a "Something went horribly wrong" message in production, and this should be avoided at all cost.

This seems like a very dangerous mentality, as it could easily result in errors being hidden. Some developer might reason that crashing is horrible, so if there is a failure we can just ignore it, retry endlessly, use some default value, or apply some other non-standard and poorly documented error handling.

The failure is then not caught by QA, so the failure handling is never tested, and there are no "something went horribly wrong" messages in production. Except the system might not actually do what it is supposed to; it may produce faulty data. And if you are unlucky, it may not be immediately obvious that the data is faulty. By the time the failure is actually discovered, it may have caused an untold amount of harm.

Because of this, it is usually a good idea to "fail fast" to find errors more quickly and to limit the harm they can cause. Reliability and failure handling are an entire engineering discipline in themselves. But if you are concerned about software issues, a good place to start is to ensure the following exist:

  • Specifications/documentation about the intended behavior
  • Code analysis tools ("treat warnings as errors", static analyzers etc)
  • Code reviews
  • Automated tests
  • Manual Testing

Each can be done at a level suitable for the reliability you require. For high reliability applications the QA work may greatly exceed the actual development work.

A similar argument applies against having a separate IsValid() method, since that could just as easily be missed, resulting in hidden errors. A factory plus a Result/Option object, as suggested by other answers, is a better solution, since the compiler will help ensure correct usage.

In any case, the reliability of your application should not rely on a single person "not forgetting something"; several things should need to go wrong before a problem affects actual users.

2
  • 3
    I'd argue that it is just as likely that a developer forgets to put .IsValid() after creating the object as them forgetting to put a try/catch. "The developer might forget" excuse is a bad one to begin with. And it might be easier to spot a missing try/catch in code reviews. Or to create a code analyzer to check for. Commented Apr 22 at 12:38
  • @TheEvilMetal agreed, added a paragraph that I hope makes that point.
    – JonasH
    Commented Apr 22 at 13:44
30

This kind of complaint usually comes up in companies/teams that are themselves new to enterprise-level software engineering. If developers forget to add a catch and the code goes all the way to production, then it simply means the company has a deficiency in QA.

But there are ways to solve the problem, up to some point.

In particular, the factory pattern can be used to avoid throwing an exception. If an object cannot be created from the passed parameters, either return null or, to be a little more advanced, a Result class which contains either the successfully created object or a collection of error codes describing what happened during object creation.

Of course, if the developers forget to check the outcome of the factory methods to see whether everything went fine, then you will have issues. But a clear return value will at least raise some questions for its users.

Academically speaking, constructors are supposed to ensure objects created are valid and ready to be used. That means throwing exceptions. In the real world, things are a little bit more complicated. That's why creational design patterns exist.

The no-throw policy can also be addressed with the operation result design pattern. It might have some merit based on performance, especially if a lot of exceptions are thrown in quick succession (in a loop, e.g.). But this depends on the specifics of the project. Note that both the result pattern and the throw-exception pattern assume that errors will be handled and will not go unnoticed.

There are libraries that help implement the result pattern, for example this one for C++: https://github.com/bitwizeshift/result

6
  • teams that are themselves new to enterprise level software engineering. As I said, I'm a junior so I'm not the best judge of it, but I doubt this is the case: the company has existed for 30+ years, developing this software from the ground up, with devs who were there from the start. There is no doubt their solution does work, even if it baffles me.
    – sayanel
    Commented Apr 19 at 10:13
  • 35
    @sayanel unfortunately it's totally possible for a company to last 30 years whilst using questionable software engineering practices
    – Caleth
    Commented Apr 19 at 10:30
  • 1
    @sayanel a global no throw property might make sense based on performance requirements. An exception has some degree of performance overhead due to stack unwinding. It depends on the specifics of the project, which is why I did not explicitly mention anything about it. The Result return value can be used to deal with that as well, but that is a broader discussion. Most applications will not benefit from a no-throw policy like that.
    – Ccm
    Commented Apr 19 at 10:39
  • 12
    @sayanel Exceptions were not accepted as best practice 30 years ago. Since you have the original developers still on the team, and they have only worked in this environment, their practices may be outdated.
    – Schwern
    Commented Apr 19 at 19:48
  • 6
    Don't try to solve the 'no throw' policy. You are a junior, and are trying to change 30 year old practices while one of the founders is still developing. It seems you have talked to the seniors about it, that's what you should do. But they don't want it changed. Once they have made up their decision you should accept it, even though you are right. If you push it further it might affect your career at the company. Just learn from the experience, remember it and take advice from juniors when you're a senior. Or at least explain to them why it's not possible.
    – Polygorial
    Commented Apr 21 at 7:16
25

Rather than have a throwing constructor, you can have a function that returns either an object, or nothing. Callers will have to handle the possibility of there being no object.

This isn't quite "Have my object", but it does satisfy "always be sure that size is positive when I use it".

From your example:

class MyObj {
private:
   MyObj(double size) { m_size = size; }
   double m_size;
public:
   double GetSize() { return m_size; }
   static std::optional<MyObj> Create(double size) { 
      if (size < 0) return std::nullopt; 
      return MyObj(size); 
   }
};

The reasoning you were supplied with is somewhat faulty. The immediate caller of a function does not have to be wrapped in try ... catch .... It can be some distance up the call stack, where an appropriate action can be taken. The try ... catch ... you describe in point 2 is such a place, and is sometimes called the "last-chance exception handler".

It is also somewhat worrying that part of the last-chance exception handler is an autosave, because by definition, you don't know what's gone wrong when you get there, so you might be saving an inconsistent state.

Point 3 is also faulty reasoning. You've swapped the possibility of having a "Something went horribly wrong" message in production, for the possibility of something going horribly wrong silently, and corrupting all your data. It doesn't actually address "We cannot be sure that future programmers will correctly use your object and handle correctly all edge cases", it just makes errors quieter.

8
  • 1
    Regarding point 1: I expressed it wrong and corrected it. Regarding the autosave: you're right, it's not an autosave, it's a crash report. Regarding point 3: I guess the idea is that when you code it you know what you are doing (as opposed to someone who uses your code three years in the future). So it's your responsibility to write code that makes valid objects.
    – sayanel
    Commented Apr 19 at 10:25
  • 2
    @sayanel also optional, although for void functions you'd probably just swap to bool to indicate success, as optional<void> isn't valid
    – Caleth
    Commented Apr 19 at 10:42
  • 4
    @sayanel Everyone who uses the code needs to understand it, especially the person who has to maintain it 3 (or 30) years in the future.
    – Schwern
    Commented Apr 19 at 19:52
  • 4
    +1 for the final paragraph, which is the real problem the OP's management haven't understood. Even in a simple desktop application, it's better to lose the last hour's work than to save a corrupted file and lose everything. If this is an application with more significant implications when it goes wrong - a banking app, for example, or stock or forex trading - then there could be legal consequences if the application carries out incorrect operations after an error.
    – Graham
    Commented Apr 20 at 11:29
  • 4
    ... For an example of how bad that can go, the UK is currently dealing with a 25-year-long scandal, where failures in Post Office IT systems resulted in over 500 people being wrongly convicted. Many were sent to jail, many were bankrupted, most lost friends or family as a result, some were unable to work again. We're still waiting to see what consequences there will be for the people who carried it out.
    – Graham
    Commented Apr 20 at 11:33
13

For the problem you're trying to solve, programmers forgetting to deal with failure states when creating an object, there isn't really a difference in foolproofness between throwing an exception and expecting a caller to check whether the object is valid after making it (or null, as suggested in other answers), other than perhaps IDE support for null-check reminders. However, the consequences of failing to check are quite different. If you throw an exception in the constructor, you get an exception with specific details of the failure, at the point of failure. If you return null, you get a null reference exception or a segfault the first time you try to access the object, hopefully close to where you tried to construct it. Since this option generally has better IDE support, it might be preferable to exceptions for you. The worst of the options is what you're doing now, because you might have an object that doesn't fail until some other part of the code uses it in some unexpected way, since that part of the code expects the object it gets to be in a valid state.

The way to safeguard against exceptions crashing the whole program is to put global try-catch blocks in places where the application can recover from an unknown error. Depending on your IDE there might also be a way to have it warn when a function or constructor can throw an exception that isn't handled.

4
  • In case of nullptr (and probably optional nullopt?), static analysis can detect that a pointer has been assigned and not checked, but I don't think it can detect that a throw is not caught
    – sayanel
    Commented Apr 19 at 12:28
  • 2
    @sayanel That's true, and you can also include [[nodiscard]]. IMO that pattern is fine if it works better for you since it still prevents an invalid object from possibly being used. You could also do something like C#'s "try" methods e.g. bool TryCreateFoo(int arg1, char arg2, Foo &newObj). Commented Apr 19 at 13:20
  • Actually the third argument might have to be a pointer, I don't use C++ much. Commented Apr 19 at 13:27
  • This exactly. Whether you're forgetting to handle an exception or check a null pointer, you've already screwed up, the train is off the rails, and crashing is the best thing you can do - it's better than, say, passing this null to a Save() method and overwriting the user's data with an empty file (and then probably crashing a couple calls down the line anyway). And as a caller, if I call your constructor/factory and get a null where I don't expect one (because if I did I would've checked for validity myself), what exactly am I supposed to do with it? Commented Apr 19 at 18:38
11

The real danger with exceptions in constructors is that the destructor will not be called. This is true no matter how nicely you handle the error in your surrounding code. That means if your class does anything that requires teardown, and then encounters an exception later in the constructor, your destructor will never run. (It will run the destructors of fully constructed member variables, though. If you use new you have problems; if you use std::vector you're fine.)

#include <iostream>
#include <stdexcept>

class MyObj {
    char *myArray;
public:
    MyObj(int size);
    ~MyObj();
};

MyObj::MyObj(int size) {
    std::cout << "Constructor called!" << std::endl;
    myArray = new char[size];
    throw std::runtime_error("Oh no!");
}

MyObj::~MyObj() {
    if (myArray) {
        std::cout << "Destructor called!" << std::endl;
        delete[] myArray;
        myArray = nullptr;
    }
}

int main()
{
    try {
        MyObj example(10);
    } catch (...) {
        std::cout << "Handled error nicely!" << std::endl;
    }
}

Output:

Constructor called!
Handled error nicely!

We didn't free myArray! Not good! So you shouldn't throw once you've already done something that requires your destructor to be called.

Another issue is that constructors are often called in initializer lists. Suppose you have a constructor that might error:

#include <stdexcept>

class SubObj {
public:
    SubObj(int size);
};

SubObj::SubObj(int size) {
    if (size < 0) {
        throw std::runtime_error("Oh no!");
    }
}

class MyObj {
    SubObj mySubObj;
public:
    MyObj(int size);
};

MyObj::MyObj(int size) : mySubObj(size) {}

How do you handle this? It turns out there is a way, the function-try-block, but it's likely unfamiliar syntax to most developers, and there's no way to try/catch just a single initializer in the initializer list.

MyObj::MyObj(int size)
try : mySubObj(size) {
    // entire constructor body enclosed in this block
}
catch (...) {
    // note: a constructor's function-try-block cannot swallow the
    // exception; it is automatically rethrown when this handler ends
}
5
  • 4
    But isn't this really a programming error when writing the constructor? Just as one has to write the destructor in a fashion that prevents leaks, you have to write the constructor in the same way. When you write "new" in your code, it is about time to really be vigilant. I would say that writing "new" on one line and "throw" on a subsequent line without releasing the memory before the throw is an error in any context, not just a constructor.
    – daramarak
    Commented Apr 21 at 12:57
  • @daramarak Yes it's a programming error, but a very easy one to make and part of the reason that doing exception handling correctly is hard. Therefore I think it's definitely worth pointing out as a "gotcha" that you need to be aware of. Commented Apr 22 at 7:03
  • 1
    @JorisTimmermans I agree. My point was that this is not something specific to constructors and throwing in constructors; it is just as hard in any other context where you do resource allocation. But if your point is that exceptions make programming in C++ harder: yes, they do. Exceptions require strict control of resource allocation.
    – daramarak
    Commented Apr 22 at 12:08
  • @daramarak If the ctor completes, I believe it's guaranteed that the dtor will be called at the end of the lifecycle, including if the object falls out of scope due to an exception and catch. So Obj a; a.alloc(); throw myError(); should call a's dtor and free resources.
    – Kaia
    Commented Apr 22 at 22:00
    @Kaia I don't disagree with this. My point was that any function that has new something(); throw somethingElse(); is bound to leak if you do not explicitly set up some way of cleaning up. Not just constructors. My main point is: if you don't have extreme vigilance when writing new, something will bite you. Always prefer make_unique and friends. Throwing and constructors are not what should be avoided; new is.
    – daramarak
    Commented Apr 28 at 7:48
8

...one of the coding rules they have is: "a constructor must never fail" (i.e. never throw)

I think the best way to understand these rules is to separate "a constructor object must never fail" from your interpretation, "i.e. never throw." Let's start with "A constructor must never fail" and we'll tie it to "never throw" later.

First off, permitting failure is mandatory in programming. There's a theorem, Rice's Theorem, which says a particular semantic property (such as "failing") cannot be proven for all programs (unless it's a trivial property, which we won't worry about here). So if proving that your program doesn't fail is important (and in enterprise settings, it absolutely is), it is essential that the company rephrase "failure" in a syntactic way. We cannot prove that a program cannot "fail" directly, but we can prove that a particular boolean will never be false if we're really careful with our rigor.

At the top levels, this sort of mathematical proof is essential. I must prove that my server will not crash, or I must prove that my server will not leak sensitive customer information. Of course, we also have users inputting incorrect things (and sometimes adversaries intentionally constructing malformed requests). So what do we do? We end up needing to prove that the failing behavior is handled. Instead of "open" being defined as a function which opens a file, we re-define it to be a function which opens a file if it exists, or provides a way to notify the caller that the file was not opened. As part of the verification of that caller, they'll look at your "open" API, and make sure the caller handled both the "open a file" and "don't open a file" paths.

It's not so much that things cannot fail, which would be an affront to Rice's theorem. Instead, it is that a thing must handle any failures in a well-defined way, and then that is thought of as "handling an issue" rather than "failing."

So how do we do this? Your company is using a common approach: returning booleans. This is, arguably, the obvious solution. Every function returns a success/fail indicator, and the caller checks it.

You suggest another obvious alternative, exceptions. These are a big deal in many languages. With exceptions, functions are not required to return a success/fail indicator, and the caller is no longer required to check it. Instead, all of these checks can be "lumped together," into catch blocks.

Your argument appears to be that these catch blocks are superior. And as someone who uses both approaches, I can see why. It's nice to be able to write a bunch of code that doesn't worry about failures, and "wrap" it in a single try/catch block which handles all of my errors. It's easier to see that that catch block is in place.

However, the devil is in the details. What does it mean to find yourself in a catch block? It means you have executed an "exceptional control flow" that jumped you to your catch block, possibly from inside a sub-sub-sub function. What do you know at that time? You know the contents of the exception. Nothing else.

So now you are in an environment where you must tell a caller enough information to let them "not fail" by handling the errors. What can you tell them? Can you tell them enough? Let's give an example. You have a function that opens a file, reads in some numbers, and then does some arithmetic on them. This arithmetic may divide by zero, and you choose to report this via an exception. The caller gets a DivideByZero error.

Is the file still open? Could I possibly run out of file handles because your function kept opening files and not closing them?

This is a surmountable problem. You can write exception handling systems which provably handle all of these details correctly. But it turns out to be hard. Much harder than you might think, near the start of your career. It's not bad for a 1000 line program, but as you get to 100,000 or a million lines of code, it gets downright difficult. So in high-reliability situations, it is not uncommon to fall back on the uglier and more verbose approach of if-statements. That approach typically permits easier proofs of correctness.

So now we can get back to the idea of constructors not throwing exceptions. We see that this isn't actually an attribute of constructors at all. It's an attribute of the entire approach to providing guarantees about the behavior of functions. So how do we get to their statement, "Constructors must never fail"? Well, how do you report the failure? A constructor cannot return a boolean to indicate failure, and we just discussed that it can't throw an exception. What is it to do?

Well, the only remaining answer is to always put the class into a meaningful state. I've seen code with plenty of mIsValid Booleans sprinkled through it, which are set to false if construction "failed." If your company defines "fail" such that setting an "invalid" flag isn't "failure," because you handled it in a defined documented way, then their position is consistent. Or, one can go down the approach it sounds like they are doing, where the constructors trivially cannot fail, and there is a separate init function which can fail (and can return a boolean).

And this is why we get to the situation you describe. Good job for getting to the end of this post. I wanted to be long-winded to describe the full thought process. I think the issues regarding exceptions are obscured by lots of people who know the rules for handling failures, but not the full rationale why. This can lead to lots of false starts where an individual will tell you "we don't do X because of Y," but Y is a total red herring. They know they need junior developers to follow the rules, but they may not really understand why. To understand why, one has to go all the way up to the senior staff who are held responsible for the success of a company's products, and ask what their needs are. Then one has to go all the way down to the junior developer fresh out of school, and ask how we ensure their product meets the senior staff's needs. It's a big round trip!

3
  • It's a very interesting read, thank you. And unlike most responses, it explains why my company has this rule. We are indeed closer to the million lines of code, and from your post I understand that this "pattern" (boolean return, init method...) is safer and less error-prone. But I am a bit surprised that the more "modern" approach (exceptions) isn't safer/better/easier at large scale (I know newer isn't always better, but I have the feeling that in software, newer means simplifying dev work)
    – sayanel
    Commented Apr 20 at 7:18
  • 1
    @sayanel When exceptions were invented, they were trying to make things easier, but probably overshot the mark a bit. For lots of software, where high reliability is desirable but not required, they're quite effective. Several languages have explored ways to make things more provable (such as requiring you to explicitly list what exception classes you might throw), but we're still working on it as an industry.
    – Cort Ammon
    Commented Apr 20 at 19:36
  • I wouldn't assume your company is doing this just because this is a huge codebase, @sayanel, unless there are more things in line to make this other approach robust as well.
    – Ángel
    Commented Apr 20 at 21:03
6

Throwing an exception from inside a constructor is just an absolute pain. Say you have a local variable with an instance that threw an error during construction. What’s the state of that instance? Or an instance in dynamic memory. If the constructor throws, what happens to the memory? You really need to read the C++ standard very, very, very carefully.

And that’s easily avoided by having a factory creating instances. Or a class method if you prefer that (which then is just a simple factory). That code allocates an instance with a simple constructor that doesn’t throw, and then calls instance methods to set up the object the way you like it. So you avoid having to know the intricacies of the language and having to deal with them correctly.

At that point, if there are problems, you can return an instance where invalid() returns true, or deallocate the instance and return null, or throw an exception. If you think the problem is due to a programming error, assert.

If your colleagues don’t like exceptions but you use them, they will ask you to remove them. That’s not a hill you want to die on. Personally I’d prefer returning a null pointer because the caller cannot ignore that, but that’s just me. And whatever you do, make sure an instance is always in a state where the destructor can be called safely.

Anyway, I don’t mind how many exceptions you throw - just don’t throw them from constructors if you value your sanity! Calling a function that either returns a constructed instance, or returns NULL or throws an exception is much healthier just because of the C++ language.

5

A related topic is 2-phase construction, e.g. https://stackoverflow.com/questions/3021572/two-phase-construction-in-c

There are sometimes reasons not to rely on exception handling, and there is some language support in C++ to ensure that the actual return value of the init (or factory method) isn't ignored:

https://stackoverflow.com/questions/76489630/explanation-of-nodiscard-in-c17

However, as the other answers have also indicated: if there is an error, you need to deal with it at some point, and moving where that point is doesn't necessarily reduce the complexity.

0
4

Initially I wanted to comment, but I guess I'll post an answer after all, albeit one that tackles the problem from a completely different angle.

There seems to me to be a very fundamental difference between the mindsets of SW engineers and businesspeople.

Many SW engineers demand, first and foremost, that the application's functionality is well defined. Therefore, if the app behaves in a way that violates the developers' assumptions (e.g. a 'boneheaded' exception is thrown, in Eric Lippert's nomenclature) then the app MUST crash. (The problem should be logged and fixed ASAP; the app may be immediately relaunched; QA efforts and good programming patterns should ensure such issues are rare; nonetheless, bugs are never completely avoidable, and if one is detected at runtime, the app MUST crash.)

There are many good reasons for these demands; many have been posted on this site many times already, and I'm not going to reiterate them here.

Rather, I want to attempt to show the opposing mindset and show that the opposing mindset also has some merits.

From my experience, businesspeople often demand the direct opposite: that the app must NEVER crash. This doesn't only include managers of the software house, who can be accused of trying to sweep the problems of their app under the carpet. Crucially, this also includes clients.

The reasoning is that malfunctioning is partial functioning, which is still functioning. Crashing, to the contrary, is no functioning at all. If the client depends on the app working, if the client's operations cannot continue without the app working, then the app not working is the worst thing that can happen to the client. In such a case the client cannot continue their operations until a fix arrives. This is most unacceptable.

If, however, the app 'just' malfunctions, then the client is inconvenienced, but they can continue operations. Bugs can often be worked around. Data corruption can be fixed manually, especially since data corruption due to bugs tends to be somewhat localized.

Yes - one can argue that in case of a malfunction the app must crash but be immediately relaunched, thereby providing the best of both worlds. However, bugs often occur deterministically at well-defined places. For example, until a fix arrives, the app always crashes when the client attempts to process a given order. Then the client cannot process that order at all. If, however, the app allowed the client to process the order but corrupted its data, the client could 'just' manually fix the corrupted data and continue operating.

A real-life example: we were making an app that, in the background, was supposed to transfer orders from an e-commerce platform such as WooCommerce to an ERP system. The intended end users were small stores without dedicated IT specialists. The idea was that they would only need to look at their ERP system for new orders, rather than at the ERP system as well as, manually, at each e-commerce platform.

Assume that, due to a bug in our application, a single problematic order deterministically crashes our service. The problem is logged and the application is restarted automatically, but then it crashes again on the same buggy order. Best case scenario, this order is never transferred; worst case, no other order is transferred either. This is the worst thing that can happen: the very purpose of the app is to transfer orders. Yes, the crashes are logged, but no one looks at logs (remember, the small store has no dedicated IT team). If the client's staff never see the order in their ERP system, they never act on it. The consequences for a store that simply ignores a (paid) order can be dire; the consequences for a store that ignores all orders, even more so. The vendor will probably need days, if not weeks, to fix the bug, which means the store is effectively closed, without notice to buyers, for weeks. This is a tragedy that must be avoided at all costs.

Now assume that, on the other hand, the order is transferred, but in a buggy way. It does appear in the ERP system, but its data is corrupted and visibly garbage. Then the store employees are at least alerted that an order has arrived - they can manually look at their e-commerce system to get the uncorrupted data. Alternatively, the data is subtly corrupted, for example the price is wrong. Then the worst case is that a corrective invoice has to be issued with an apology to the buyer. This is strictly preferable to an order not appearing at all.

I could not pierce through this mentality. There were non-negotiable demands that all orders must always be transferred, no matter what, and that I must avoid crashes at all costs; no reason was good enough to crash the app. Again: yes, I should strive to avoid bugs. But trying my best to avoid bugs was not considered a good enough reason to crash the app when a bug happened anyway, since bugs can never be completely avoided - and everyone knew that, so no one considered it reasonable to demand that programmers write code in which no bug may ever, ever happen.

From the comment under the OP:

@sayanel, the counter-argument for point 3 is that future programmers might forget to call isValid on an object that isn't valid and as a result cause a malfunction of the program. That malfunction could be less obvious but more devastating than an "oops" message to the user and it includes corrupting the user's critical data. – Bart van Ingen Schenau 23 hours ago

Well, no. Many times businesspeople explicitly prefer a malfunction to a crash, as I explained above.

I don't necessarily support the above mindset. Nonetheless, in my experience, it is quite common and I believe it does have its merits that tend to be overlooked by developers insisting that apps should always crash to prevent possible malfunctions.

9
  • 4
    You make a good point showing the "business mentality". Still, I consider that it depends heavily on the kind of bug. Using your very same scenario: if all orders failed, that would be bad, but it would be detected very quickly. If only 60% of orders got through, it might go undetected for months or even years, with your customer blissfully unaware of all the clients they are discarding.
    – Ángel
    Commented Apr 20 at 21:16
  • A strategy that you can sometimes use is to accept any input from the outside, and if you don’t like it, replace/remove things in a deterministic way until it is acceptable. Basically input that you might have received and processed without any complaints. A trivial example is supposedly UTF-8 input that can always be turned into valid UTF-8 by removing the first offending byte.
    – gnasher729
    Commented Apr 20 at 21:19
  • 5
    @Ángel is right, "business people" don't want the app to crash, but it's because they haven't thought of "the worst thing that may happen". It's not that a few orders don't get through immediately and need manual correction. It's that you send thousands of orders over a period of years with the wrong price and end up going bust, being sued and being thrown in prison for fraud en.wikipedia.org/wiki/British_Post_Office_scandal
    – Ewan
    Commented Apr 21 at 12:13
  • 6
    I come from a company that had a philosophy of never crashing, because the chance of something serious being wrong was very slim, and the clients depended on the system always being up and running. It went very badly: keeping a malfunctioning system up led to more and more corruption and to errors that were impossible to capture, and the clients felt that the system was horribly unstable despite it never crashing. Only after adopting a "crash on error" policy with a stringent test regimen were we able to restore stability. Instead, we supply a watchdog that restarts the application.
    – daramarak
    Commented Apr 21 at 13:11
  • 4
    I come from an industry where the customers expect our products to function 24/7 with 100% availability. That would suggest a no-crash policy, but to the contrary, to ensure the safety of our customers, we must be able to crash when things go wrong. We take a lot of measures to avoid crashing, but if we hit a state where we are no longer sure what the software is doing, it is better to crash and ensure safety than try to limp on and possibly kill customers. Commented Apr 22 at 6:53
3

I'm personally on "Team no-Throw", so I would not rely on exception handling in this situation.

I would do one of two things:

  1. Do as the team suggests and create the object in an error state that can be checked.

or

  1. Alternatively, use a factory-style function to create the object, and return null if it fails (the function validates the input). You can check the return value for null to know whether it's in an error state and handle things from there.
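
As a rough sketch of the second option in C++, the factory owns the validation and the constructor itself never fails (Widget and its "name must not be empty" rule are made up for illustration):

```cpp
#include <memory>
#include <string>

class Widget {
public:
    // The factory validates input, so the constructor never needs to throw.
    static std::unique_ptr<Widget> create(const std::string& name) {
        if (name.empty()) {
            return nullptr;  // signal failure without throwing
        }
        return std::unique_ptr<Widget>(new Widget(name));
    }

    const std::string& name() const { return name_; }

private:
    explicit Widget(std::string name) : name_(std::move(name)) {}
    std::string name_;
};
```

The caller then checks once, at the construction site: `if (auto w = Widget::create(input)) { /* use *w */ } else { /* handle the error */ }`.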
3

Sometimes, when we try to improve an approach to work well for even more use cases, we lose track of the first goal and end up becoming the very thing we swore to destroy. It seems like this has happened in your company.

This requires some autopsy, so I'll address points from your question in order:

Then, the object should be constructed anyway and there should be another way to check if the object is valid, like an isValid() method.

It's subjective whether that is the right approach or not. Not necessarily a problem; but I do want to point out that a significant number of people disagree with this approach, so it's not as clear-cut as your company is making it out to be.

This is one part business context, one part subjective coding style.

  1. When something throw, at one point in the call stack this should be handled (using try...catch).

Ideally, yes, exceptions should be caught. The application blowing up due to a foreseeable mistake is not good quality.

Foreshadowing: notice the stressed part in the above.

  1. If a throw is not caught, stack unwinding will go all the way to the top, at which point it will encounter a global catch block that will do something like this: display the message "Something went horribly wrong", try to generate a crash report, and shut down the app.

I would put an asterisk on having a global catch-all that catches everything without even considering if it can do anything with it. It's contextual.

If there is a security concern, such as not leaking implementation information to an untrusted consumer (which is often the case for web-based APIs), yeah a last line of defense that catches any exceptions that bubble to the top, logs them, and returns an uninformative 500 Something went wrong to the requester is perfectly okay.

However, this does not seem to be your scenario, as you're (a) dealing with a desktop app and more importantly (b) are intending to still crash your app and are just logging details before you do so.

Be very careful about this catch-all making your developers lazy in not bothering to see if specific exceptions can be handled more gracefully on another level. The existence of a last line of defense should not lead to you ignoring all other defences.

I suspect that your company may have run afoul of the above advice, and their "no exceptions" rule is in response to developers never really considering if there's an appropriate catch for thrown exceptions. It just sounds like it.

  1. We cannot be sure that future programmers will correctly use your object and handle correctly all edge cases. If someone uses my object, forgets to add catch, and passes through testing, we are at risk of having a "Something went horribly wrong" message in production, and this should be avoided at all cost.

This is where it goes severely off the rails. At its very core, the issue here is the expectation that an implementation is aware of and responsible for any and all of its consumers.

The designer of the throw should not be aware of who is using their design, nor if and how they choose to handle a raised exception. That's the entire point of an exception: the consumer gets to opt in to handling specific exceptions, or otherwise let them bubble up further (in cases where they can't meaningfully handle them anyway).

Dictating that the design of the class should be different because you can't universally prevent that the consumer might do something wrong with it is asinine. This is arguing that we shouldn't have forks because someone might stab themselves with it. Not only is that helicopter management, it also misses the point that stabbing things is the explicit purpose of a fork.

Stepping out of the analogy, giving your consumer the ability to choose to handle the exception or not is inherently part of the purpose of an exception in the first place. Anyone who suggests that potentially not catching an exception is the reason to disallow throwing them does not understand exceptions.

Instead, we have a lot of if (myobj.isValid()) everywhere, and empty constructors alongside with init(....) methods, which I have a feeling are a bad pattern.

As unforgiving as I was in the previous paragraph, I'm going to call for nuance here. While throwing exceptions can be a very useful practice, it should not be done for expected outcomes. Exceptions are, by their very definition, an expression of an exceptional scenario.

With regards to object construction, the question is who is constructing the object. For example, if you are deserializing JSON from an external source, then receiving bad data (i.e. structurally sound but not passing an arbitrary validation rule) is well within reasonable expectations and therefore throwing an exception would not be a great way to handle this.

On the other hand, if this class is only being instantiated by your own private domain logic, then any bad values being passed into it are a sign of a significant flaw in your domain logic, which is something you want to have fail loudly, because if your domain logic is not doing what you think it does, that can have significant impact on your application's behavior.
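
To make the distinction concrete, here is a minimal C++ sketch (the Percentage type and its 0-100 rule are invented for illustration): the boundary code treats bad input as an expected outcome and returns a value, while the internal type throws, because an out-of-range value reaching it can only mean a bug in our own logic.

```cpp
#include <optional>
#include <stdexcept>
#include <string>

// Internal domain type: only our own code constructs it, so a bad
// value indicates a bug in our domain logic -- fail loudly.
class Percentage {
public:
    explicit Percentage(int value) : value_(value) {
        if (value < 0 || value > 100) {
            throw std::logic_error("Percentage out of range: bug in caller");
        }
    }
    int value() const { return value_; }
private:
    int value_;
};

// Boundary parsing: bad external input is an expected outcome,
// so report it as a value rather than letting an exception escape.
std::optional<Percentage> parsePercentage(const std::string& raw) {
    try {
        int v = std::stoi(raw);  // throws on non-numeric garbage
        if (v < 0 || v > 100) return std::nullopt;
        return Percentage(v);
    } catch (const std::exception&) {
        return std::nullopt;  // expected failure: caller handles it
    }
}
```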

Anyone who tries to tell you that the approach should be universal, without even investigating the context, is not a good source of information and advice.

What would be a better approach, that allows programmers to throw when needed, and ensure that we never trigger the ultimate "Something went wrong" fail-safe?

As established above, I disagree with the premise that we should always have perfect coverage in consciously catching specific exceptions - that runs against the very nature of what an exception is.
As established slightly further above, it can at the same time make sense to have a catch-all last line of defense in cases where you'd otherwise leak sensitive information to people who should not be privy to exception details.

A more healthy flow would be something along the lines of:

  • Design phase - do we throw an exception?
    • Is the bad path a "normal" outcome or is it describing something unreasonable?
    • Can we continue with the current situation or has it become impossible to deliver what the consumer expects?
  • Consumer phase - do we handle any exceptions?
    • Which exceptions are being thrown?
    • Can we meaningfully respond to (some of) these scenarios and handle them better than letting them bubble up?
  • Security phase - do we need to stop the bubbling up?
    • Is the eventual recipient of the exception information a trusted actor, or should we block this information from reaching them?

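The consumer phase above can be sketched in C++ like this (parseQuantity and its fallback value are hypothetical): the consumer catches only the scenario it can respond to meaningfully, and deliberately lets everything else bubble up.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical parse step: std::stoi throws std::invalid_argument on
// non-numeric garbage and std::out_of_range on overflow.
int parseQuantity(const std::string& raw) {
    return std::stoi(raw);
}

int quantityOrDefault(const std::string& raw) {
    try {
        return parseQuantity(raw);
    } catch (const std::invalid_argument&) {
        // We CAN respond meaningfully to garbage input: fall back to 1.
        return 1;
    }
    // std::out_of_range (and anything else) deliberately keeps bubbling
    // up: at this level we have no meaningful way to handle it.
}
```
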
After all this has been considered, if an exception still bubbles up and crashes the app, there are only three possible remaining explanations.

  • Maybe this is a new exception that didn't exist before. The solution is to evaluate it using the above guideline.
  • Maybe we made a mistake in how we handle this scenario. The solution is to re-evaluate it.
  • Maybe our evaluation is correct, and no one along the way has any way of handling this elegantly. The solution here is that there is no better solution available than what you already have.
1

Your company's policy is making simplifying assumptions that lead to wrong decisions.

That the user sees a "something went horribly wrong" message is irrelevant. What's relevant is whether you can reasonably prevent something from actually going horribly wrong. Preventing a message from popping up doesn't help with that in any way.

It's actually the opposite: in any big-enough project, allowing objects to exist that are not actually valid (which seems to be your company's "solution") is virtually guaranteed to cause things to go horribly wrong later on, because now no consumer of an object can assume that it's valid. Every single function that the object is passed to needs to check its validity and have a workaround for the !is_valid() case, and it's virtually guaranteed that some callers will forget that and break.

A better solution is to make the constructor of your class Foo private (so that it can't be called from outside the class) and delegate construction to a static factory method that returns a std::optional<Foo>. Any code that constructs an instance of Foo would

  • call that factory method
  • check the return value for std::nullopt, and handle that case
  • extract the Foo from the std::optional<Foo> and proceed with only using the Foo object, which is guaranteed to be valid (because there's no way to construct an invalid Foo)

This way, all code that just uses a Foo can rely on the fact that all Foos are valid. Only code that wants to construct a Foo has to verify that the Foo was indeed constructed correctly. You'll need code reviews and tests to ensure that that's indeed the case, but this is still much safer than also having to check all users of Foo.
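
A minimal sketch of this pattern (the validation rule "value must be non-negative" is just an illustrative stand-in):

```cpp
#include <optional>

class Foo {
public:
    // The factory is the only way to obtain a Foo; it validates up front.
    static std::optional<Foo> create(int value) {
        if (value < 0) {
            return std::nullopt;  // construction refused; caller handles it
        }
        return Foo(value);
    }

    int value() const { return value_; }

private:
    explicit Foo(int value) : value_(value) {}  // unreachable from outside
    int value_;
};

// Usage: the construction site is the ONLY place that must check.
// auto foo = Foo::create(userInput);
// if (!foo) { /* handle the failure here */ }
// use(*foo);  // everywhere else, a Foo is valid by construction
```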

0

If your constructor can never throw, then presumably it is assumed that the arguments passed to it will never be invalid.

So assert that to be the case. Then if your assumption is wrong, the program will terminate immediately instead of entering an undefined state. At that point, you can fix the bug and move on.
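
A minimal sketch of this approach (Connection and its port range are made up for illustration):

```cpp
#include <cassert>

class Connection {
public:
    explicit Connection(int port) : port_(port) {
        // Precondition check: a bad port is a bug in the caller, so we
        // stop at the point of the mistake rather than limping on.
        assert(port > 0 && port <= 65535 && "invalid port passed to Connection");
    }

    int port() const { return port_; }

private:
    int port_;
};
```

Note that assert compiles away when NDEBUG is defined, so whether the check is active in production builds is a deliberate choice, which is exactly what the comments below debate.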

8
  • 5
    This seems the worst of both worlds. If an assert fails in production you still wind up with "Something went wrong". If you turn off asserts in production you wind up not checking the error and lots of undefined behavior possibly trashing production data.
    – Schwern
    Commented Apr 19 at 19:55
  • @Schwern It depends on what you consider worst. I work on a project where we develop software for heat pumps that also periodically sends data to the cloud. We can deal with a single instance sending trash to the cloud; we cannot deal with the heat pump restarting every 15 minutes because our code crashes at the same spot every time. Commented Apr 21 at 22:20
  • @Yksisarvinen How do you know the invalid controller code isn't damaging the heat pump? Or heating the refrigerator? Or about to cause an electrical fire? Why is the system designed such that an error sending data to the cloud, something very predictable, is allowed to crash the whole controller? Pretending exceptional inputs never happen in production doesn't make your product more robust, it just hides the problems until they start sending smoke signals.
    – Schwern
    Commented Apr 22 at 0:05
  • @Schwern The OP's team is assuming that the preconditions of a constructor will never be violated. (And rightly so. Calling functions with data you don't understand and counting on them to handle it no matter what is going to result in bug-ridden garbage.) The assertion approach recognizes that coders are only human and will screw up from time to time, but it doesn't try to "handle" the garbage input; it just lets the coders know they have a bug to fix. The point of assertions is that by the time the code makes it to the users, the class of bugs they check for shouldn't exist anymore.
    – Ray
    Commented Apr 22 at 17:23
  • @Ray What you described applies to all error handling, but assertions are unique in that they get turned off in production. Testing will find some bugs, but it can never find all bugs. Production often stresses code in unique ways that reveal bugs. With no checks for bad input in production, without "handling the garbage", those bugs are much harder to find and can cause chaos to the production environment which leads to corrupt data being passed around. This could be corrupting production data, turning their house into a sauna, or destroying their heat pump.
    – Schwern
    Commented Apr 22 at 17:36
-4

The need to throw exceptions in a constructor is often the result of a poorly designed class, meaning that your class does not accurately represent the kind of objects you intend to represent. Ideally, you should write your class in such a way that every possible combination of values taken by the class fields represents a valid instance of your class.

For instance, one way to represent objects that have a start date and an end date as a part of their defining characteristics looks something like:

public class Foo {
   private Date startDate;
   
   private Date endDate;

   public Foo(Date startDate, Date endDate) {
      if (startDate.after(endDate)) {
         throw new StartDateBiggerThanEndDateException();
      }

      this.startDate = startDate;
      this.endDate = endDate;
   }

   public Long getDuration() {
      // duration in milliseconds
      return endDate.getTime() - startDate.getTime();
   }

}

In this example, an exception is thrown whenever someone tries to create an instance with a start date that is later than the end date.

You can rewrite this class to prevent having invalid state:

public class Foo {

  private Date date1;
   
  private Date date2;

  public Foo(Date date1, Date date2) {
     this.date1 = date1;
     this.date2 = date2;
  }

  public Date getStartDate() {
     return DateUtils.min(date1, date2);
  }

  public Date getEndDate() {
     return DateUtils.max(date1, date2);
  }

  public Long getDuration() {
     return DateUtils.diff(getEndDate(), getStartDate());
  }

  @Override
  public boolean equals(Object other) {
    if (!(other instanceof Foo)) {
      return false;
    }
    Foo otherFoo = (Foo) other;
    return this.getStartDate().equals(otherFoo.getStartDate())
        && this.getEndDate().equals(otherFoo.getEndDate());
  }

}

Of course, depending on the specific use case, such rewrites are not always this straightforward, but they can significantly cut down on the number of exceptions you would otherwise have to handle.

16
  • 3
    Yeah, I think the other answers have missed the point: "Don't throw in constructors" isn't really addressed by returning null or throwing in a factory. You have just dodged the rule.
    – Ewan
    Commented Apr 19 at 16:18
  • 1
    It might not be a good idea to introduce additional complexity to your class for the sake of not throwing or otherwise rejecting behaviour that isn't really needed. Let's say 99% of people are going to use this class with date1 <= date2, and you have a bug that only occurs when date2 > date1 - how long until such a bug gets caught when it's likely that even maintainers of Foo are going to forget you can provide the dates backwards? What if such a bug is an exploitable vulnerability? Commented Apr 19 at 18:26
  • 2
    Would it not be simpler to have the constructor simply swap the dates if they're in the wrong order? e.g. this.startDate = DateUtils.min(date1, date2); this.endDate = DateUtils.max(date1, date2); Then you don't have to override Equals() or provide those getStartDate(), and getEndDate() functions, since every object will have the dates in the correct order. Commented Apr 19 at 18:34
  • 1
    Ewan, throwing in a constructor is tricky. You’ll have to consult the C++ standard to do it safely and it is always tricky. I don’t like tricky things.
    – gnasher729
    Commented Apr 19 at 19:35
  • 2
    @MaciejStachowski I think you have an incorrect impression here. If you define the class such that, instead of passing in min and max dates, you just pass two dates and the smaller one is assigned to the min, then you're not "overstuffing" your class with logic; you've just provided a constructor that can't fail.
    – Ewan
    Commented Apr 21 at 12:07
