Anyone who has worked on a big enterprise project knows memory leaks are like rats in a big hotel. You might not notice when there are few of them, but you always have to be on guard in case they overpopulate, break into the kitchen, and poop on everything.
Finding, fixing, and learning to avoid memory leaks is an important skill. In this article, I'll list 8 best-practice techniques, gathered from my own experience and from senior .NET developers who advised me. These techniques will teach you to detect when the application has a memory leak problem, to find the specific memory leak, and to fix it. Finally, I'll include strategies to monitor and report on memory leaks for a deployed program.
Defining Memory Leaks in .NET
In a garbage-collected environment, the term "memory leak" is a bit counterintuitive. How can memory leak when there's a garbage collector that frees it for you?
There are 2 related core causes for this. The first is when you have objects that are still referenced but are effectively unused. Since they are referenced, the garbage collector won't collect them, and they remain in memory forever. This can happen, for example, when you subscribe to events but never unsubscribe.
The second cause is when you somehow allocate unmanaged memory (without garbage collection) and don’t free it. This is not so hard to do. .NET itself has a lot of classes that allocate unmanaged memory. Almost anything that involves streams, graphics, the file system or network calls does that under the hood. Usually, these classes implement a Dispose method, which frees the memory (we’ll talk about that later). You can easily allocate unmanaged memory yourself with special .NET classes (like Marshal) or PInvoke (there’s an example of this further on).
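As a minimal sketch of this second cause, here's how unmanaged memory can leak with the Marshal class. The class and method names are mine, for illustration only:

```csharp
using System;
using System.Runtime.InteropServices;

class UnmanagedLeakDemo
{
    public static IntPtr LeakBuffer()
    {
        // AllocHGlobal returns unmanaged memory that the GC knows nothing about.
        IntPtr buffer = Marshal.AllocHGlobal(1024 * 1024); // 1 MB
        return buffer;
        // If no one ever calls Marshal.FreeHGlobal(buffer), this memory
        // stays allocated for the lifetime of the process - a true leak.
    }

    public static void FreeBuffer(IntPtr buffer)
    {
        Marshal.FreeHGlobal(buffer); // the correct cleanup
    }
}
```

The garbage collector never touches that buffer; only an explicit FreeHGlobal call releases it.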
Let’s move on to my best practice techniques list:
1. Detect a Memory Leak problem with the Diagnostic Tool Window
If you go to Debug | Windows | Show Diagnostic Tools, you’ll see this window. If you’re like me, you probably saw this tool window after installing Visual Studio, closed it immediately, and never thought of it again. The Diagnostic Tools Window can be quite useful though. It can easily help you detect 2 problems: Memory Leaks and GC Pressure.
When you have Memory Leaks, the Process Memory graph looks like this:
You can see from the yellow GC markers along the top that the GC keeps trying to free memory, yet the process memory still keeps rising.
When you have GC Pressure, the Process Memory graph looks like this:
GC Pressure is when you are creating new objects and disposing of them too quickly for the garbage collector to keep up. As you see in the picture, the memory is close to its limit and the GC bursts are very frequent.
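To see what GC pressure looks like, here's a contrived sketch (the sizes and iteration counts are arbitrary choices of mine) that allocates many short-lived objects in a tight loop, forcing frequent gen-0 collections:

```csharp
using System;

class GcPressureDemo
{
    public static int ChurnAllocations(int iterations)
    {
        int total = 0;
        for (int i = 0; i < iterations; i++)
        {
            // Each array dies immediately, but the GC still has to
            // collect it, so gen-0 collections become very frequent.
            var temp = new byte[1024];
            total += temp.Length;
        }
        return total;
    }
}
```

Running something like this while watching the Diagnostic Tools window produces exactly the dense burst pattern described above.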
You won’t be able to find specific memory leaks this way, but you can detect that you have a memory leak problem, which is useful by itself. In Visual Studio Enterprise, the Diagnostic Tools window also includes a built-in memory profiler, which does allow you to find the specific leak. We’ll talk about memory profiling in best practice #3.
2. Detect Memory Leak problems with the Task Manager, Process Explorer or PerfMon
The second easiest way to detect major memory leak problems is with the Task Manager or Process Explorer (from SysInternals). These tools can show the amount of memory your process uses. If it consistently increases over time, you probably have a memory leak.
PerfMon is a bit harder to use but can show a nice graph of your memory usage over time. Here’s a graph of my application that endlessly allocates memory without freeing it. I’m using the Process | Private Bytes counter.
Note that this method is notoriously unreliable. You might see an increase in memory usage just because the GC hasn’t collected it yet. There’s also the matter of shared memory versus private memory, so you can both miss memory leaks and diagnose memory leaks that aren’t your own. Finally, you might mistake GC Pressure for a memory leak. In that case, you don’t have a memory leak, but you create and dispose of objects so fast that the GC can’t keep up.
Despite the disadvantages, I mention this technique because it’s both easy to use and sometimes your only tool. It’s also a decent indicator something is wrong when observing for a very long period of time.
3. Use a memory profiler to detect memory leaks
A memory profiler is like the chef’s knife of handling memory leaks. It’s the main tool to find and fix them. While other techniques can be easier to use or cheaper (profiler licenses are costly), it’s best to be proficient with at least one memory profiler to effectively solve memory leak problems.
The big names in .NET memory profilers include dotMemory, SciTech’s .NET Memory Profiler, and ANTS Memory Profiler.
All memory profilers work in a similar way. You can either attach to a running process or open a dump file. The profiler will create a snapshot of your process’s current memory heap. You can analyze the snapshot in all kinds of ways. For example, here’s a list of all the allocated objects in the current snapshot:
You can see how many instances of each type are allocated, how much memory they take, and the reference path to a GC Root.
A GC Root is an object which the GC can’t free, so anything that the GC root references also can’t be freed. Static objects and the local objects in the current active Threads are GC Roots. Read more in Understanding Garbage Collection in .NET .
The quickest and most useful profiling technique is to compare 2 snapshots where the memory should return to the same state. The first snapshot is taken before an operation, and another snapshot is taken after the operation. The exact steps are:
- Start with some kind of Idle state in your application. This could be the Main Menu or something similar.
- Take a snapshot with the memory profiler, either by attaching to the process or by saving a dump.
- Run an operation where you suspect a memory leak is created. Return to the Idle state at the end of it.
- Take a second snapshot.
- Compare both snapshots with your memory profiler.
- Investigate the newly created instances; they are probably memory leaks. Examine the “path to GC Root” and try to understand why those objects weren’t freed.
Here’s a great video where 2 snapshots are compared in SciTech memory profiler and the memory leak is found:
4. Use “Make Object ID” to find memory leaks
In my last article, 5 Techniques to avoid Memory Leaks by Events in C# .NET you should know, I showed a technique to find a memory leak by placing a breakpoint in the class’s finalizer. I’ll show you a similar method here that’s even easier to use and doesn’t require code changes. This one utilizes the debugger’s Make Object ID feature and the Immediate Window.
Suppose you suspect a certain class has a memory leak. In other words, you suspect that after running a certain scenario, this class stays referenced and never collected by the GC. To find out if the GC actually collected it, follow these steps:
- Place a breakpoint where the instance of the class is created.
- Hover over the variable to open the debugger’s data-tip, then right-click and choose Make Object ID. You can type $1 in the Immediate Window to verify that the Object ID was created correctly.
- Finish the scenario that was supposed to free your instance from references.
- Force garbage collection with the well-known magic lines:
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
- Type $1 again in the immediate window. If it returns null, then the GC collected your object. If not, you have a memory leak.
Here’s me debugging a scenario that has a memory leak:
Important: This practice doesn’t work well with the .NET Core 2.X debugger (there’s a known issue). Forcing garbage collection in the same scope as the object’s allocation doesn’t free that object. With a little more effort, you can do it by forcing garbage collection in another method, outside the allocation’s scope.
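A sketch of that workaround (all names here are mine): allocate in one method and force collection from the caller, so the allocating frame is already gone when the GC runs:

```csharp
using System;
using System.Runtime.CompilerServices;

class CollectionScopeDemo
{
    public static bool IsCollectedAfterScopeEnds()
    {
        WeakReference weakRef = AllocateAndRelease();
        // We're now outside the method where the object was allocated,
        // so forcing collection here can actually free it.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        return !weakRef.IsAlive; // true means the object was collected
    }

    [MethodImpl(MethodImplOptions.NoInlining)] // keep the allocation in its own frame
    private static WeakReference AllocateAndRelease()
    {
        var instance = new object();
        return new WeakReference(instance);
    }
}
```

The NoInlining attribute is a defensive choice to stop the JIT from merging the two frames, which could otherwise reintroduce the scope problem.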
5. Beware of common memory leak sources
There’s always a risk of causing a memory leak, but certain patterns are much more likely to do so. I suggest being extra careful when using these, and proactively checking for memory leaks with techniques like the previous best practice.
Here are some of the more common offenders:
- Events in .NET are notorious for causing memory leaks. You can innocently subscribe to an event and cause a damaging memory leak without even suspecting it. This subject is so important that I dedicated an entire article to it: 5 Techniques to avoid Memory Leaks by Events in C# .NET you should know
- Static variables, collections, and static events in particular should always look suspicious. Remember that all static variables are GC Roots, so they are never collected by the GC.
- Caching functionality – any type of caching mechanism can easily cause memory leaks. By storing cache information in memory, it will eventually fill up and cause an OutOfMemoryException. The solution can be to periodically delete older cache entries or to limit the cache size.
- WPF Bindings can be dangerous. The rule of thumb is to always bind to a DependencyObject or to an INotifyPropertyChanged object. When you fail to do so, WPF will create a strong reference to your binding source (meaning the ViewModel) from a static variable, causing a memory leak. More information on WPF Binding leaks in this helpful StackOverflow thread.
- Captured members – it might be clear that a method registered as an event handler keeps its object referenced, but when a variable is captured in an anonymous method, that variable’s object is also referenced. Here’s an example of a memory leak:
public class MyClass
{
    private int _wiFiChangesCounter = 0;

    public MyClass(WiFiManager wiFiManager)
    {
        // The lambda captures 'this' (to increment _wiFiChangesCounter),
        // so wiFiManager now references this instance through the event.
        wiFiManager.WiFiSignalChanged += (s, e) => _wiFiChangesCounter++;
    }
}
- Threads that never terminate – the live stack of each of your threads is considered a GC Root. This means that until a thread terminates, any references from variables on its stack will not be collected by the GC. This includes Timers as well. If your Timer’s Tick handler is an instance method, then that instance is considered referenced and will not be collected. Here’s an example of a memory leak:
public class MyClass
{
    public MyClass(WiFiManager wiFiManager)
    {
        // The active timer is rooted by the runtime's timer queue, and its
        // callback references this instance, so this instance is never collected.
        Timer timer = new Timer(HandleTick);
        timer.Change(TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(5));
    }

    private void HandleTick(object state)
    {
        // do something
    }
}
For more on this subject, check out my article 8 Ways You can Cause Memory Leaks in .NET .
6. Use the Dispose pattern to prevent unmanaged memory leaks
Your .NET application constantly uses unmanaged resources. The .NET Framework itself relies heavily on unmanaged code for internal operations, optimization, and Win32 API calls. Anytime you use Streams, Graphics, or Files, you’re probably using unmanaged resources under the hood.
.NET Framework classes that use unmanaged code usually implement IDisposable. That’s because unmanaged resources need to be freed explicitly, and the Dispose method is where that happens. The best practice is to call it with the using statement:
public void Foo()
{
    using (var stream = new FileStream(@"C:\Temp\SomeFile.txt",
        FileMode.OpenOrCreate))
    {
        // do stuff
    } // stream.Dispose() will be called even if an exception occurs
}
The using statement transforms the code into a try / finally statement behind the scenes, where the Dispose method is called in the finally part.
But, even if you don’t call the Dispose method, those resources will be freed because .NET classes use the Dispose Pattern . This basically means that if Dispose wasn’t called before, it’s called from the Finalizer when the object is garbage collected. That is, if you don’t have a memory leak and the Finalizer really is called.
When you’re allocating unmanaged resources yourself, then you definitely should use the Dispose pattern. Here’s an example:
public class MyClass : IDisposable
{
private IntPtr _bufferPtr;
private const int BUFFER_SIZE = 1024 * 1024; // 1 MB
private bool _disposed = false;
public MyClass()
{
_bufferPtr = Marshal.AllocHGlobal(BUFFER_SIZE);
}
protected virtual void Dispose(bool disposing)
{
if (_disposed)
return;
if (disposing)
{
// Free any other managed objects here.
}
// Free any unmanaged objects here.
Marshal.FreeHGlobal(_bufferPtr);
_disposed = true;
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
~MyClass()
{
Dispose(false);
}
}
The point of this pattern is to allow explicit disposal of resources. But also to add a safeguard that your resources will be disposed during garbage collection (in the Finalizer) if the Dispose() wasn’t called.
The GC.SuppressFinalize(this) call is also important. It makes sure the Finalizer isn’t invoked during garbage collection if the object was already disposed.
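To show the consumer side, here’s a self-contained sketch. NativeBuffer is a simplified stand-in of my own for the MyClass above (no finalizer, plus a Disposed flag for demonstration):

```csharp
using System;
using System.Runtime.InteropServices;

// A simplified stand-in for the MyClass above, so this snippet compiles on its own.
public class NativeBuffer : IDisposable
{
    private IntPtr _bufferPtr = Marshal.AllocHGlobal(1024 * 1024); // 1 MB
    public bool Disposed { get; private set; }

    public void Dispose()
    {
        if (Disposed) return;
        Marshal.FreeHGlobal(_bufferPtr);
        Disposed = true;
        GC.SuppressFinalize(this);
    }
}

class DisposeUsageDemo
{
    public static bool Run()
    {
        using (var buffer = new NativeBuffer())
        {
            // work with the unmanaged buffer
        } // Dispose() is called here, even if an exception occurred inside
        return true;
    }
}
```

The using statement gives deterministic cleanup; the finalizer in the full pattern is only the safety net for callers who forget it.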
7. Add Memory Telemetry from Code
Sometimes, you might want to periodically log your memory usage. Maybe you suspect your production Server has a memory leak. Perhaps you want to take some action when your memory reaches a certain limit. Or maybe you’re just in the good habit of monitoring your memory.
There’s a lot of information we can get from the app itself. Getting current memory in-use is as simple as:
Process currentProc = Process.GetCurrentProcess();
var bytesInUse = currentProc.PrivateMemorySize64;
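You can combine that with the managed-heap size for a simple telemetry line. The format and the helper name here are my own choices:

```csharp
using System;
using System.Diagnostics;

class MemoryTelemetry
{
    public static string Snapshot()
    {
        Process proc = Process.GetCurrentProcess();
        proc.Refresh(); // make sure cached counter values are up to date

        long privateBytes = proc.PrivateMemorySize64; // managed + unmanaged, process-private
        long managedBytes = GC.GetTotalMemory(forceFullCollection: false); // managed heap only

        return $"PrivateBytes={privateBytes}, ManagedHeap={managedBytes}";
    }
}
```

Logging both numbers periodically helps tell a managed leak (both grow) from an unmanaged one (private bytes grow while the managed heap stays flat).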
For more information, you can use the PerformanceCounter class that’s used for PerfMon:
PerformanceCounter ctr1 = new PerformanceCounter("Process", "Private Bytes", Process.GetCurrentProcess().ProcessName);
PerformanceCounter ctr2 = new PerformanceCounter(".NET CLR Memory", "# Gen 0 Collections", Process.GetCurrentProcess().ProcessName);
PerformanceCounter ctr3 = new PerformanceCounter(".NET CLR Memory", "# Gen 1 Collections", Process.GetCurrentProcess().ProcessName);
PerformanceCounter ctr4 = new PerformanceCounter(".NET CLR Memory", "# Gen 2 Collections", Process.GetCurrentProcess().ProcessName);
PerformanceCounter ctr5 = new PerformanceCounter(".NET CLR Memory", "Gen 0 heap size", Process.GetCurrentProcess().ProcessName);
//...
Debug.WriteLine("ctr1 = " + ctr1.NextValue());
Debug.WriteLine("ctr2 = " + ctr2.NextValue());
Debug.WriteLine("ctr3 = " + ctr3.NextValue());
Debug.WriteLine("ctr4 = " + ctr4.NextValue());
Debug.WriteLine("ctr5 = " + ctr5.NextValue());
Information from any PerfMon counter is available, which is plenty.
You can go even deeper though. CLR MD (Microsoft.Diagnostics.Runtime) allows you to inspect your current memory heap and get almost any possible information. For example, you can print all the allocated types in memory, including instance counts, paths to roots, and so on. You pretty much get a memory profiler from code.
To get a whiff of what you can achieve with CLR MD, check out Dudi Keleti’s DumpMiner .
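For a taste, here’s a hedged sketch assuming the ClrMD 2.x API from the Microsoft.Diagnostics.Runtime NuGet package. It attaches to a live process by id and prints the top types by instance count:

```csharp
using System;
using System.Linq;
using Microsoft.Diagnostics.Runtime; // NuGet: Microsoft.Diagnostics.Runtime

class HeapInspector
{
    public static void PrintTopTypes(int pid)
    {
        // Suspend the target so the heap doesn't change while we walk it.
        using var dataTarget = DataTarget.AttachToProcess(pid, suspend: true);
        ClrRuntime runtime = dataTarget.ClrVersions[0].CreateRuntime();

        var topTypes = runtime.Heap.EnumerateObjects()
            .Where(obj => obj.Type != null)
            .GroupBy(obj => obj.Type.Name)
            .OrderByDescending(g => g.Count())
            .Take(10);

        foreach (var group in topTypes)
            Console.WriteLine($"{group.Key}: {group.Count()} instances");
    }
}
```

Running a snapshot like this periodically and diffing the counts is essentially the two-snapshot comparison from best practice #3, done from code.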
All this information can be logged to a file, or even better, to a telemetry tool like Application Insights.
8. Test for memory leaks
It’s a great practice to proactively test for memory leaks. And it’s not that hard. Here’s a short pattern you can use:
[Test]
public void MemoryLeakTest()
{
    var leakyObject = new MyClass();
    var weakRef = new WeakReference(leakyObject);

    // Run an operation with leakyObject, then clear our own reference
    leakyObject = null;

    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
    Assert.IsFalse(weakRef.IsAlive);
}
For more in-depth testing, memory profilers like SciTech’s .NET Memory Profiler and dotMemory provide a testing API:
MemAssertion.NoInstances(typeof(MyLeakyClass));
MemAssertion.NoNewInstances(typeof(MyLeakyClass), lastSnapshot);
MemAssertion.MaxNewInstances(typeof(Bitmap), 10);
Summary
Don’t know about you, but my new year’s resolution is: Better memory management.
I hope this post gave you some value and I’d love it if you subscribe to my blog or leave a comment below. Any feedback is welcome.
Some good techniques, but the fundamentals are [in my professional opinion] flawed. If there is an accessible reference, there is NOT a leak. If there are unmanaged resources, then the allocating/owning class should always have a finalizer, and once again there is NOT a leak.
Now, I do agree that longer than intended lifecycles can create huge and unnecessary memory demands, and in many cases [after making system level measurements to determine if there is an actual performance impact] this needs to be addressed.
Yet focusing on the elements that have been categorized as leaks will not even look at what can be the biggest culprits!
David, I think it's a matter of definition. My claim is that in .NET the term "Memory Leak" includes both "referenced-but-unused class" and classic memory leak.
Whether objects are referenced without need, or like in C++ are no longer referenced, the result of both is the same: Bigger memory footprint and eventual OutOfMemoryException.
Hi, nice article! I was going to make almost the same comment. In your "debugging a scenario that has a memory leak", I don't think this is a memory leak; it's how the garbage collector works, as I remember from testing before. For that same scenario, try leaving the app running for a while, like a few minutes, and you'd notice your object becomes null. I remember the garbage collector is smart enough to know when to start cleaning, but you can also force it to kick in like you did later, though maybe that's not always a good idea either.
Glad you liked it Hasan. Yeah, forcing garbage collection in production is almost always a very bad idea.
I have to disagree with you.
I can write an infinite loop adding integers to a list, and while it will cause me to run out of memory, it is not a memory leak. It is just me doing something stupid. Most of your examples were the same, like the endless cache or the thread instructed to run forever. In all of these cases, the programmer is doing something dumb, and all things--even dumb things--require memory.
There is plenty of useful information in your article, but calling everything a leak will just confuse people about what they really are, how insidious they can be, and thus the best ways to find, fix, and avoid them. I appreciate the time you put into this, but it is really an article about memory management, not memory leaks. Pretending otherwise is counterproductive.
Hi Daniel,
Thanks for the feedback. You are not the first one to point out, but I have to disagree. As I said in a previous comment, it's a matter of definition.
In managed code, an object that is referenced but not in use can still be considered a memory leak. Granted, it's not a memory leak like in C++ where an object isn't referenced by anything.
But, it's accepted in the industry that a memory leak in managed languages like C# and Java includes referenced objects that aren't in use.
Don't worry about these people's complaints. They don't understand that the term memory leak in .NET INCLUDES things like leftover references that weren't handled. When referenced objects not in use start causing memory issues, we call this a leak. Don't let their misunderstanding of CONTEXT ruin your great article.
Thanks, Josh. It's OK, I'm not discouraged by criticism and thanks for the feedback.
Cool!! Thanks for this article!)
Thank for sharing. Very informative.
Nice Article
Hi, true memory leaks can occur only if you are managing native memory; otherwise you have a reference to the object somewhere. But in C# it's very hard to determine when an object is going to be freed. You can of course use the IDisposable mechanism from the article and force it, but why use C# then? The main advantage of using a language with a GC is gone. I've seen some C# wrappers for native libraries that clearly leak, but a) it's very hard to pinpoint your leak and b) you have no way of fixing it (not that it's always possible in C/C++ either, but at least it behaves consistently).
Is there some tool, that can check GC manually, ie. some tool that can display all allocated memory on breakpoint in debug mode ? That would be very helpful.
Hi Filip,
Absolutely, you can use a memory profiler like dotMemory to see all allocated instances. It can attach to process or open a memory Dump.
Also, Visual Studio Enterprise can show you that information. It has a built-in memory profiler.
Great article.
I want you to add PerfView name on "3. Use a memory profiler to detect memory leaks".
Thanks.
I think the reason is that after you've waited for finalizers to finish, the objects that were referenced by the classes with finalizers can now also be collected.
Thanks for the article. Regarding the sentence in section #4, "You can force garbage collection in the end by typing the magic lines in the immediate window, making this technique a fully debugging experience, with no need to change code." I tried this on my system (VS2017), typing the GC methods in the Immediate Window, and the debugger hangs on the call to GC.WaitForPendingFinalizers(). It all works fine with the magic lines in the code. Is there something I'm doing wrong?
You're doing it right, it should work. I guess there's some kind of problem in the finalizer queue. Maybe one of the finalizers is on a deadlock. Which is strange because you say it works in code. I'd try to run this from Watch window - In some multi-threaded scenarios, it can utilize all threads when the immediate window can't. Also, you can save a dump file when it hangs and debug it. Or attach from a different VS when it hangs.
I tried the dump file. Here's the stack trace:
[Managed to Native Transition] WindowsBase.dll!System.Windows.Threading.DispatcherSynchronizationContext.Wait(System.IntPtr[] waitHandles, bool waitAll, int millisecondsTimeout) Unknown [Native to Managed Transition] [Managed to Native Transition] mscorlib.dll!System.GC.WaitForPendingFinalizers() Unknown [Native to Managed Transition] [Managed to Native Transition] [Function Evaluation]
[My code]
It looks like it's stuck in the DispatcherSynchronizationContext.Wait() call. I'm seeing a few reports of this on the Web.
Deadlock maybe. Or immediate-window multi-threading limitations.
I also tried running the magic code in the Watch window. The results were...interesting. I got a Microsoft Visual Studio Error pop-up saying:
"Evaluating the function 'System.GC.WaitForPendingFinalizers' timed out and needed to be aborted in an unsafe way. This may have corrupted the target process."
Oh... yeah that would explain it. So whatever expression you run in the immediate window or watch window has a timeout (1 minute I think). If that timeout is exceeded, VS aborts the evaluation and shows you that scary message. After which, you pretty much have to stop debugging and start over. So perhaps your finalizers take too much time?
Hi Michael, interesting article!
I'm running into trouble with your leak test. After it failing with some of my custom objects I want to test, I tried it with a simple string, which does no better.
Here's the code:
[TestCase]
public void TestStringReference()
{
WeakReference weakRef;
{
var options = "Test";
weakRef = new WeakReference(options);
}
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
Assert.IsFalse(weakRef.IsAlive);
}
The (string) variable is outside the braces, so it should fall out of scope and release its reference to the string. However, even after the garbage collection calls, weakRef.IsAlive is still true, and the underlying object can still be accessed through weakRef.
What am I missing?
Jamie, if you are running any type of debugger then objects are typically kept alive until the end of the method.
Hi Michael, Thanks for your great article.
I am running into a memory leak problem these days. I have a .NET Core app deployed in AWS ECS; after I upgraded it from 2.1 to 3.1, I encountered the problem. The container reserves 512M of memory with no hard memory limit. After the app service started, it initially used 40% of the reserved memory, then took more and more, increasing continuously to more than 300%. I want to dump the memory of the app but haven't found a suitable tool, since it's running as an ECS task.
Could you help give me any advice about how to trace this problem. Thanks a lot!
Hi,
Use a memory profiler and compare 2 snapshots as described in item 3 here: https://wordpress-245057-756510.cloudwaysapps.com/find-fix-and-avoid-memory-leaks-in-c-net-8-best-practices/
Is it on Linux?
For Linux you can use dotMemory agent program or PerfView