A computer program is essentially a group of components interacting with each other through a predefined set of interfaces.
Often in computer programming, an unexpected input or race condition (an unexpected scenario, in general terms) occurs that a component does not support. In those cases the component raises an exception and hopes that an appropriate method up the call stack will handle it.
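A minimal sketch of this propagation, with illustrative names: a low-level component raises on input it does not support, and a caller higher in the stack, which knows a sensible fallback, handles it.

```python
def parse_port(raw: str) -> int:
    # Raises ValueError on non-numeric input (an unexpected scenario
    # this component does not support).
    port = int(raw)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def load_config(raw_port: str) -> int:
    # This caller is the "appropriate method in the stack": it knows
    # a sensible default, so it handles the exception here.
    try:
        return parse_port(raw_port)
    except ValueError:
        return 8080  # fall back to a default port

print(load_config("443"))  # → 443
print(load_config("abc"))  # → 8080
```

If `load_config` did not catch the `ValueError`, it would keep propagating upward, and with no handler anywhere in the stack, the process would crash.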
If an exception is left unhandled, it crashes the process.
I like looking for similarities between life and programming, and this was one case where I thought the safety nets in life are much like exception handlers: they are designed to handle a specific type of exception and take an appropriate action in response, because if no one handles the exception, it will end up crashing the process.
In general, there are two schools of thought on unhandled exceptions.
One holds that a process should be quickly recoverable. In case of an unhandled exception, the process should crash and restart from scratch: it will request memory and CPU from the system again and start afresh. This approach is considered good if we don't want to hide exceptions, and we want to force system administrators to look at exceptions urgently as they happen, because otherwise the process will not run at all.
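This fail-fast approach can be sketched as a worker that never catches unexpected exceptions, paired with an external supervisor that restarts it from scratch so every run starts with fresh memory and state. The worker script name and restart limit here are illustrative assumptions, not a prescribed implementation.

```python
import subprocess
import sys
import time

def supervise(cmd: list[str], max_restarts: int = 5) -> None:
    # Restart the worker whenever it crashes; the worker itself does
    # no catch-all handling, so any unhandled exception kills it.
    restarts = 0
    while restarts <= max_restarts:
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return  # clean exit, nothing to do
        # The crash is visible to administrators via the exit code;
        # the worker comes back as a completely fresh process.
        restarts += 1
        print(f"worker crashed (code {result.returncode}), "
              f"restart {restarts}/{max_restarts}", file=sys.stderr)
        time.sleep(1)  # brief backoff before restarting

# Illustrative usage: supervise([sys.executable, "worker.py"])
```

Process managers such as systemd or Kubernetes play the supervisor role in practice; the point is that recovery happens outside the crashed process, not inside it.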
The second approach is more forgiving. Here we handle every exception scenario and never let the process crash. If there are unhandled exceptions, we swallow them and let the process continue. The exceptions can still be logged for administrators to look at, but there is no urgency in fixing them.
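A sketch of this catch-all style, with an illustrative job list: a top-level loop swallows every unexpected exception, logs it for later review, and keeps the process alive.

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("worker")

def run_job(job: dict) -> None:
    # Illustrative work that may fail on bad input.
    print(job["name"], 100 / job["divisor"])

def main_loop(jobs: list[dict]) -> int:
    failures = 0
    for job in jobs:
        try:
            run_job(job)
        except Exception:
            # Eat the exception: log it for administrators, move on,
            # never crash.
            log.exception("job failed, continuing")
            failures += 1
    return failures

jobs = [
    {"name": "ok", "divisor": 4},
    {"name": "bad", "divisor": 0},   # ZeroDivisionError, swallowed
    {"name": "broken"},              # KeyError, swallowed
]
print(main_loop(jobs))  # → 2
```

Note the bare `except Exception`: it keeps the loop alive through any failure, but the process carries on with whatever partial state the failed job left behind.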
The advantage of the first approach is that the process always remains clean: we know any unexpected issue will crash it, so by definition it must handle all scenarios to stay running. The disadvantage is that if the system has high CPU usage or low memory at the time the process crashes, it will not be able to accommodate the crashed process, and the process won't be able to come back up.
In the second approach, on the other hand, the process never trusts the system and always handles exceptions itself. Through dirty workarounds it keeps itself up no matter what, which over time leads to dangling threads, leaked handles, and mismanaged components that remain in a bad state for as long as the process is up.
I personally find the first approach more desirable: it's clean, idealistic, and robust. But it only works if every process on the system behaves the same way. If only one process follows the first approach while the rest follow the second, looking after only themselves, the first process will not find enough memory or CPU to rise again after a crash.