Having discussed possible symptoms, let's now try to understand their possible root causes.
1. True leaks.
When developing native (C/C++) code you may simply forget to free allocated memory. This can be, for example:
a. Something as simple as this:
char* p = (char*)malloc (1 * 1024);
//free (p); //I will never forget to uncomment this later
b. Architecture design deficiency. Unclear object ownership and life-span management, leading to a failure to properly destroy objects.
I encountered this deficiency in Salome (the SMESH module in particular). Plain pointers proliferate there (instead of smart pointers such as boost::shared_ptr), with complex dependencies between objects. I presume that, among the multiple developers maintaining the code, someone eventually lost track of which objects should destroy which. Here is the most recent work-around I had to make: to destroy sub-meshes in SMESH_Mesh you have to call SetShapeToMesh() with a null shape. This destroys all objects stored in the internal map, which would otherwise be leaked (the SMESH_Mesh::~SMESH_Mesh() destructor does not destroy them):
/*! Frees resources allocated in SMESH_Mesh which otherwise leak - a bug in Salome */
mySMesh->SetShapeToMesh (TopoDS_Shape()); //a null shape makes SMESH_Mesh destroy its sub-meshes
where mySMesh points to the SMESH_Mesh instance being released.
c. Cycles between smart pointers. If two objects hold smart pointers to each other, neither will ever be destroyed (their reference counters never reach zero). I described this issue in the very first post.
True leaks are usually caught well by memory checkers.
2. Memory caching by memory allocators.
A lot of complex software comes with an integrated memory allocator that manages memory more efficiently (at least in terms of speed and/or footprint) than the default allocator provided by the OS or the C run-time library. Open CASCADE can use its own allocator (activated by the environment variable MMGT_OPT=1), Intel TBB's (MMGT_OPT=2), or the default system allocator (MMGT_OPT=0).
Though OSes provide better and better allocators, custom ones are likely to stay for the foreseeable future because they solve particular problems efficiently (e.g. thread-safety and scalability, in the case of TBB). If you are curious, you might want to check some comparisons I conducted with the default, OCC and TBB allocators here.
The central idea of such allocators is caching: previously allocated memory chunks are reused for further allocations. Thus, when your application object is destroyed, its memory is effectively retained by the allocator and is not returned to the system. That is why, in particular, Task Manager will not show the memory level returning to its previous value even after all your document objects have been destroyed upon closing an MDI document. Allocators may apply different policies to retain or return these memory blocks. For instance, both OCC and TBB treat small and large blocks differently: the latter are returned sooner (as the chances of their reuse are smaller), while the former may never be returned until the application terminates.
3. Static objects residing in memory.
It is a widespread practice to create static objects that live throughout the application's life-time and get destroyed only upon program termination. Consider this:
theSingleton will be created when the library containing it is loaded, and will be destroyed when that library is unloaded (effectively at application termination, unless the library is explicitly unloaded before that).
There are multiple examples of such constructs in OCC code.
Below is the Inspector screenshot of the (false positive) leak shown in the screenshot in Part 1:
4. Unused data residing in memory.
Similarly, there are cases when some data are stored in static objects and used to pass state between algorithm calls. I gave some examples in an earlier post. I believe this is bad design and should be avoided, but it may occur in third-party code. It is not really a leak but rather wasted memory, which again only gets freed upon program termination.
(to be continued)