Is my memory leaking? Part 3.


OK, now that you have some clues about possible root causes, what's next?

#1. If you believe you have found a memory leak, first of all, do not panic. Be patient and identify whether this is a true leak or a false positive. For that:

#2. Use efficient memory checking tools (Intel Parallel Inspector XE, Valgrind, etc.).

#3. If you want to rule out the custom memory allocator, be sure to set the environment variable MMGT_OPT=0 and experiment further.

#4. If the issue is gone and you suspected an allocator problem, then chances are it was a false positive, not an allocator issue. Though you may continue investigating, of course ;-).

#5. When designing the architecture of your application, make sure you understand how your memory is managed. Do consider consistent use of smart pointers – e.g. boost's or Open CASCADE handles. That will save you multiple hours of tedious debugging.

#6. If you are paranoid about memory consumption, you might want to periodically call Standard::Purge() when using the OCC allocator (MMGT_OPT=1). It is supposed to free unused small memory blocks. You could do this when closing the MDI document, for instance. (I have never used it myself, though.)

#7. Advanced developers may want to step further and use fine-tuning techniques such as memory pools (or regions). See NCollection_IncAllocator as an example; NCollection containers can accept it as an extra argument. TBB is going to provide support for thread-safe pools in future versions.

#8. Black belts may also want to experiment with the memory tracing routines enabled in the OCC allocator. See Standard_MMgrOpt::SetCallBackFunction(), which is called after each alloc/free. This can be any user callback function that traces sizes of requested/freed chunks, addresses, etc.

So, this is what I was able to recall on this subject; I hope it will be helpful.
As I said in the beginning, any extensions or other best practices are welcome.



Is my memory leaking? Part 2.


Having discussed possible symptoms, let's now try to understand their possible root causes.

1. True leaks.
When developing in native (C/C++) code you may simply forget to free allocated memory. For example:

a. Something as simple as this:
char* p = (char*)malloc (1 * 1024);

//do work...

//free (p); //will never forget to uncomment this later

b. Architecture design deficiencies. Unclear object ownership and life-span management lead to failures to properly destroy objects.
I found this deficiency in Salome (the SMESH module in particular). There is a proliferation of plain pointers (not smart pointers like boost::shared_ptr) with complex dependencies between objects. I presume the multiple developers maintaining the code simply lost track of which objects should destroy which. Here is the most recent work-around I had to make – to destroy sub-meshes in SMESH_Mesh you have to call ShapeToMesh() with a null shape. This destroys all objects stored in an internal map which would otherwise be leaked (the SMESH_Mesh::~SMESH_Mesh() destructor does not destroy them):

/*! Frees resources allocated in SMESH_Mesh which otherwise leak
    - bug in Salome. */
TopoDS_Shape aNull;
mySMesh->ShapeToMesh (aNull);

where mySMesh is defined as follows:

boost::shared_ptr<SMESH_Mesh> mySMesh;

c. Cycles between smart pointers. If you have two smart pointers referring to each other, they will never be destroyed (as the reference counter never reaches zero). I described this issue in the very first post.

True leaks are usually well caught by memory checkers.

2. Memory caching by memory allocators.

A lot of complex software comes with integrated memory allocators that can manage memory more efficiently (at least in terms of speed and/or footprint) than the default allocators (part of the OS or C run-time library). Open CASCADE comes with its own (activated by the environment variable MMGT_OPT=1), can use Intel TBB's (MMGT_OPT=2), or the default system allocator (MMGT_OPT=0).

Though OSes provide better and better allocators, custom ones are likely to stay for the foreseeable future because they efficiently solve particular problems (e.g. thread-safety and scalability in the case of TBB). If you are curious, you might want to check some comparisons I conducted with the default, OCC and TBB allocators here.

The central idea of such allocators is caching and reuse of previously allocated memory chunks for further allocations. Thus, when your application object is destroyed, its memory is effectively retained by the allocator and is not returned to the system. That is why, in particular, you won't see the memory level in Task Manager return to its previous value even if all your document objects got destroyed after closing the MDI document. Allocators may apply different policies to retain/return these memory blocks. For instance, both OCC and TBB treat small and large blocks differently; the latter are returned faster (as the chances of their reuse are smaller), while the former may never be returned until the application terminates.

3. Static objects residing in memory.

It is a widespread practice to create static objects which live throughout the application's life-time and get destroyed only upon program termination. Consider this:


static boost::shared_ptr<MyClass> theSingleton (new MyClass());

MyClass* MyClass::Instance()
{
    return theSingleton.get();
}

theSingleton will be created when the library containing it is loaded, and destroyed when that library is unloaded (effectively when the application terminates, unless it is explicitly unloaded before that).

There are multiple examples of such constructs in OCC code.

Below is the Inspector screenshot of the (false positive) leak reported in the screenshot in Part 1:

Static objects in TKTopAlgo

4. Unused data residing in memory.

Similar to the above, there are cases when some data is stored with the help of static objects and used to pass state between algorithm calls. I gave some examples in an earlier post. I believe this is bad design and should be avoided, but it may happen in third-party code. It's not really a leak but essentially wasted memory, which again only gets freed upon program termination.

(to be continued)


Is my memory leaking? Part 1.

There often appear posts on the Open CASCADE forum, either questions or complaints, claiming that there are persistent memory leaks. Truth be told, this happens on many forums of other software products I have visited, so it is not something OCC-specific.
So I'd like to shed some light, which would hopefully help someone to understand the issue in the future. As always, extensions and any comments are welcome.

So how does one detect that there is a memory leak? I would suggest the following possibilities (presumably in order of decreasing frequency of use by developers):

Using Visual Studio built-in features
(I never use these though).
Put the following lines in a source file of your executable:

#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

And into the main() function (typically right before it returns):

_CrtDumpMemoryLeaks();

For more information you can read this MSDN page (select the appropriate VS version).

Under the debugger, in the Output window you will see something like:

Detected memory leaks!
Dumping objects ->
{35171} normal block at 0x04CD0068, 260 bytes long.
Data: < > 02 00 00 00 00 00 00 00 CD CD CD CD CD CD CD CD
{349} normal block at 0x0330E350, 100 bytes long.
{246} normal block at 0x03309390, 108 bytes long.
Data: < E ( > 01 00 00 00 1A 00 00 00 45 01 00 00 28 0A 00 00

Once you see this you might find yourself yelling "how can this !@#$% product exist with such fundamental bugs ?!!". But as soon as you see that some of this attributes to your own code, you may go "hmm, is this really an error?" As you dig deeper, things may go different ways. You may discover a bug in your product or a tool limitation (the latter is more likely though).

Windows Task Manager
You notice the level of consumed memory before you start some piece of your code (e.g. opening a document in an MDI application).

MDI app memory level before opening a document

Then you do some actions and check the new level. For instance, after closing the document in the MDI app, you would expect the level to return to the previous one. If it does not (and as a rule, it does not!), you start asking why.

Same app memory level after closing an opened document

Specialized memory checking tools
This includes Valgrind, BoundsChecker (I never used those two), Rational Purify, Intel Parallel Inspector XE, etc. I used Purify in the late 1990s and since 2008 have stuck to Intel Inspector. Here is a sample report generated by Inspector:

Intel Parallel Inspector XE reporting memory leaks

You can try an evaluation version on the Intel site here.

Debug print
Adding simple output in the constructor and destructor is a simple yet effective practice to quickly check whether your object is destroyed when you expect it to be.

Custom memory checkers/profilers
You might want to write an ad-hoc memory profiler – some hooks that trace allocation and deallocation routines (malloc/free, new/delete), counting allocated and deallocated bytes. I created one myself when tracing memory in CAD Exchanger. If there is sufficient interest, I could publish it. The idea is to bracket pieces of code where you expect all memory allocated within that region to be deallocated by its end.

(to be continued...)