Quiz - Where am I?

The question is: in C++, how do you detect whether an object is allocated on the stack or on the heap?

On Windows, stack addresses tend to fall in a particular range (around 0x80000000 on a 32-bit system). If the address of a variable falls in this range, you could say that the object is allocated on the stack; otherwise it is allocated on the heap. This detection technique is not preferable: it may not work on other operating systems (such as Linux), and it relies on platform-specific information, making it a non-portable solution.
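A minimal sketch of that idea, purely for illustration (the boundary value below is made up, and size_t is assumed to be pointer-sized; this is exactly the kind of platform guesswork the rest of this post avoids):

//
// StackGuess.cpp - non-portable sketch, hypothetical boundary
//

#include <iostream>
#include <cstddef>

bool LooksLikeStackAddress(const void* p)
{
   const std::size_t kStackBoundary = 0x7F000000;              // hypothetical value
   return reinterpret_cast<std::size_t>(p) >= kStackBoundary;  // address-range guess
}

int main()
{
   int local = 0;
   int* heapInt = new int(0);

   std::cout << "local looks like stack? " << LooksLikeStackAddress(&local) << std::endl;
   std::cout << "heapInt looks like stack? " << LooksLikeStackAddress(heapInt) << std::endl;

   delete heapInt;
   return 0;
}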

Let us try to use standard C++ to solve the problem. OK, we want to know where an object is allocated. In C++, the new operator is responsible for allocating an object on the heap.

It was very thoughtful of Stroustrup to keep object allocation separate from initialization/construction. The new operator is responsible only for allocation; the compiler weaves in the code that performs the allocation and then calls the constructor. It was also thoughtful to allow specifying the location where the object should be constructed (placement new), which does not necessarily require overloading the new operator.
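For instance, here is a minimal placement-new sketch (not part of the solution below): the caller supplies the storage, and the new expression only runs the constructor.

//
// PlacementDemo.cpp - illustrative only
//

#include <new>
#include <cstdlib>

struct Point
{
   int x, y;
   Point(int x, int y) : x(x), y(y) {}
};

void PlacementNewDemo()
{
   void* storage = std::malloc(sizeof(Point)); // storage obtained separately
   Point* p = new (storage) Point(3, 4);       // placement new: construction only
   p->~Point();                                // destroy explicitly; no delete expression
   std::free(storage);                         // release the raw storage ourselves
}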

C++ lets you take control of object allocation by overloading the new operator. By taking control, you can detect where an object is allocated and also keep a count of the objects allocated on the stack/heap. The following code snippet illustrates this:

//
// SomeClass.h
//

#pragma once

#include <iostream>
#include <string>
#include <stdlib.h>
#include <new>
#include <deque>
#include <algorithm>


class SomeClass;
typedef std::deque<SomeClass*>      SomeClassDB;
typedef SomeClassDB::iterator    SomeClassDBIter;
typedef SomeClassDB::const_iterator SomeClassDBConstIter;

class SomeClass
{
private: static SomeClassDB heapObjects;
private: static SomeClassDB stackObjects;

private: double value;
private: bool isOnHeap;

public: SomeClass(double d) : value(d), isOnHeap(SomeClass::IsOnHeap(this))
        {
           // If our operator new registered this address, the object is on the
           // heap; otherwise it was created on the stack, so record it there.
           if (!IsOnHeap())
           {
              SomeClass::stackObjects.push_back(this);
           }

           PrintLocationInfo();
        }

public: ~SomeClass()
        {
           // Unregister from whichever container this object was recorded in,
           // so both counts reflect live objects only.
           SomeClassDB& db = IsOnHeap() ? SomeClass::heapObjects
                                        : SomeClass::stackObjects;

           SomeClassDBIter iter = std::find(db.begin(), db.end(), this);

           if (iter != db.end())
           {
              db.erase(iter);
           }
        }

public: double Value() const
        {
           return this->value;
        }

public: bool IsOnHeap() const
        {
           return this->isOnHeap;
        }

public: bool IsOnStack() const
        {
           return !IsOnHeap();
        }

public: std::string Location() const
        {
           return IsOnHeap() ? "Heap" : "Stack";
        }

public: void PrintLocationInfo() const
        {
           std::cout << Value() << " is on " << Location() << std::endl;
        }

private: static bool IsOnHeap(SomeClass* scPtr)
         {
            SomeClassDBConstIter iter = std::find(SomeClass::heapObjects.begin(), SomeClass::heapObjects.end(), scPtr);
            return iter != SomeClass::heapObjects.end();
         }

private: static bool IsOnStack(SomeClass* scPtr)
         {
            return !IsOnHeap(scPtr);
         }

public: static void* operator new(size_t size)
        {
           // Every heap allocation goes through here, so register the
           // address before the constructor runs.
           SomeClass* scPtr = (SomeClass*)malloc(size);
           if (scPtr == NULL) { throw std::bad_alloc(); }  // honor operator new's contract
           SomeClass::heapObjects.push_back(scPtr);
           return scPtr;
        }

public: static void operator delete(void* ptr)
        {
           // The destructor has already unregistered the pointer; just free it.
           free(ptr);
        }

public: static size_t HeapCount()
        {
           return SomeClass::heapObjects.size();
        }

public: static size_t StackCount()
        {
           return SomeClass::stackObjects.size();
        }

public: static void PrintObjectCount()
        {
           std::cout << "OnHeap: " << HeapCount() << " .... OnStack: " << StackCount() << std::endl;
        }
};

//
// SomeClass.cpp
//

#include "SomeClass.h"

SomeClassDB SomeClass::heapObjects;
SomeClassDB SomeClass::stackObjects;

int main (int argc, char *argv[])
{
   SomeClass sc(0.123);
   SomeClass* scPtr = new SomeClass(1.234);

   SomeClass::PrintObjectCount();

   {
      SomeClass sc1(2.345);
      SomeClass::PrintObjectCount();
   }

   delete scPtr;

   SomeClass::PrintObjectCount();

   return 0;
}
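Since the destructor unregisters an object from whichever list it was recorded in, the output of the above program should look something like this:

0.123 is on Stack
1.234 is on Heap
OnHeap: 1 .... OnStack: 1
2.345 is on Stack
OnHeap: 1 .... OnStack: 2
OnHeap: 0 .... OnStack: 1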

Be aware that you should implement a custom operator delete whenever you provide a custom operator new. That is only logical: only the custom implementation that allocated the memory can know how to deallocate it. Note also that only objects created through this class's operator new are registered as heap objects; an object created with new[] or embedded inside another heap-allocated object would be reported as being on the stack. The above technique of overloading new/delete is portable and safe since it is standard C++. As always, writing standard C++ is fun.

But why would one care to know where an object is allocated, or to keep a count of objects? I don't think it is something you would require for production purposes. It could be for development/debugging purposes; maybe you want to detect memory leaks or see a general distribution of objects. You can take control of allocation by overloading new for a particular class, or for all classes by declaring a global new operator, as sketched below.
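Here is a minimal sketch of the global variant, assuming all we want is a rough count of live heap blocks (the counter and the use of malloc/free here are illustrative, not tied to the class above):

//
// GlobalNewCount.cpp - illustrative sketch
//

#include <cstdio>
#include <cstdlib>
#include <new>

static std::size_t liveHeapBlocks = 0;   // blocks currently allocated and not yet freed

void* operator new(std::size_t size)
{
   void* p = std::malloc(size);
   if (p == NULL) { throw std::bad_alloc(); }
   ++liveHeapBlocks;                     // count every successful allocation
   return p;
}

void operator delete(void* p) throw()
{
   if (p != NULL)
   {
      --liveHeapBlocks;                  // and every matching deallocation
      std::free(p);
   }
}

int main()
{
   int* i = new int(42);
   std::printf("live heap blocks: %lu\n", (unsigned long)liveHeapBlocks);
   delete i;
   std::printf("live heap blocks: %lu\n", (unsigned long)liveHeapBlocks);
   return 0;
}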
