
Managed Debugging Assistant !!!

The loader lock is a synchronization object that provides mutual exclusion during DLL loading and unloading. It prevents a DLL from being re-entered before it has been completely initialized (in its DllMain).

When a DLL's load code executes, the loader lock is acquired, and it is released only after initialization completes. A deadlock is possible when threads do not properly synchronize on the loader lock. This mostly happens when code holding some other lock calls Win32 APIs (LoadLibrary, GetProcAddress, FreeLibrary, etc.) that also require the loader lock. It is often seen in mixed managed/unmanaged code, usually unintentionally, because the CLR may have to call those APIs itself, for example during a platform invoke call to one of the Win32 APIs listed above.
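As a rough illustration, here is a minimal sketch (all DLL and symbol names are hypothetical) of the classic two-lock deadlock: a worker thread takes an ordinary critical section and then calls LoadLibrary, which waits on the loader lock; meanwhile another thread is inside DllMain, already holding the loader lock, and tries to enter the same critical section.

// Sketch of the classic loader-lock deadlock (hypothetical names).
// Thread A: takes g_cs, then LoadLibrary blocks on the loader lock.
// Thread B: inside DllMain (loader lock held), blocks on g_cs.
#include <windows.h>

static CRITICAL_SECTION g_cs;   // shared lock, assumed initialized elsewhere

// Runs on an ordinary worker thread (Thread A).
DWORD WINAPI WorkerThread(LPVOID)
{
    ::EnterCriticalSection(&g_cs);
    // Blocks on the loader lock if another thread is currently inside DllMain.
    HMODULE h = ::LoadLibraryW(L"SomeOther.dll");   // hypothetical DLL name
    if (h != NULL)
        ::FreeLibrary(h);
    ::LeaveCriticalSection(&g_cs);
    return 0;
}

// Runs with the loader lock held (Thread B).
BOOL WINAPI DllMain(HINSTANCE, DWORD fdwReason, LPVOID)
{
    if (fdwReason == DLL_PROCESS_ATTACH || fdwReason == DLL_THREAD_ATTACH)
    {
        // DANGEROUS: waiting here on a lock that a LoadLibrary-calling
        // thread may already hold leads to a deadlock.
        ::EnterCriticalSection(&g_cs);
        // ... initialization that touches shared state ...
        ::LeaveCriticalSection(&g_cs);
    }
    return TRUE;
}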

For instance, if an unmanaged DLL's DllMain entry point tries to CoCreate a managed object that has been exposed to COM, it is attempting to execute managed code inside the loader lock.
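To make that concrete, here is a minimal sketch of the problematic pattern; CLSID_ManagedWorker is a hypothetical CLSID for a COM-visible managed class (the real GUID would come from the [Guid] attribute on the .NET class). The CoCreateInstance call forces the CLR to load assemblies and run managed code while the loader lock is held, which is exactly what the MDA flags.

// Sketch only: CoCreating a COM-visible managed object from DllMain.
#include <windows.h>

// Hypothetical CLSID of the COM-exposed managed class.
static const CLSID CLSID_ManagedWorker =
    { 0x12345678, 0x1234, 0x1234, { 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc } };

BOOL WINAPI DllMain(HINSTANCE, DWORD fdwReason, LPVOID)
{
    if (fdwReason == DLL_PROCESS_ATTACH)
    {
        // DANGEROUS: this runs under the loader lock. Creating the managed
        // object makes the CLR load and initialize assemblies while the
        // lock is held.
        IUnknown* pWorker = NULL;
        HRESULT hr = ::CoCreateInstance(CLSID_ManagedWorker, NULL,
                                        CLSCTX_INPROC_SERVER, IID_IUnknown,
                                        reinterpret_cast<void**>(&pWorker));
        if (SUCCEEDED(hr) && pWorker != NULL)
            pWorker->Release();
    }
    return TRUE;
}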

The Managed Debugging Assistant (MDA), a facility available in .NET 2.0/VS 2005, detects this situation while debugging and pops up a dialog box. We can then break into the code, look at the stack trace, and resolve it. The feature can be disabled if not needed.
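For reference, individual MDAs can also be toggled per application. A minimal sketch of the MSDN-documented configuration file, named ApplicationName.exe.mda.config and placed next to the executable (the exact element attributes may vary by framework version), that turns the loaderLock assistant on would look like this; in Visual Studio the same assistant can be unchecked under Debug > Exceptions > Managed Debugging Assistants.

<mdaConfig>
  <assistants>
    <!-- Enable the loaderLock MDA for this application. -->
    <loaderLock />
  </assistants>
</mdaConfig>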

So what could be the effect of this deadlock? The MDA saved me a whole lot of time and effort when such a dialog popped up in my project; I do not know if I would have found the reason on my own. If the thread that deadlocks happens to be the GC thread, or any thread that loads and unloads my assemblies, I need not explain the disastrous effect any further. And a programmer like me, new to the .NET environment and not yet past its fascinating external features, would hardly ponder the internals.
