This was actually asked in an interview, but it’s also good GTD knowledge.
https://stackoverflow.com/questions/4012498/what-to-do-if-debug-runs-fine-but-release-crashes points out —
- fewer uninitialized variables — A debug build is more forgiving because it is often configured to fill variables that were never explicitly initialized with a known value.
- For example, perhaps you're deleting an uninitialized pointer. In a debug build it works because the pointer was zeroed, and `delete` on a null pointer is a no-op. In a release build the pointer holds garbage, so `delete ptr` actually corrupts memory or crashes.
https://stackoverflow.com/questions/186237/program-only-crashes-as-release-build-how-to-debug points out —
- guard bytes on the stack frame — the debug build puts more padding on the stack, so an out-of-bounds write is less likely to clobber something important.
I have frequently run into this from reading/writing beyond an array's bounds.
- relative timing between operations changes between debug and release builds, so a latent race condition may surface in only one of them.
Echoed on P260 of [[art of concurrency]], which says that (in theory) it's possible to hit a threading error with optimization on yet see no such error without optimization; if the code is correctly synchronized, that would represent a bug in the compiler.
P75 of [[moving from c to c++]] hints that compiler optimization may lead to "critical bugs", but I'm skeptical.
- poor use of assert can introduce side effects that exist only in the debug build. Release builds typically define NDEBUG, which compiles assert() away entirely (assertion failures are unwelcome in production), so any side effect inside the asserted expression disappears with it.