Memory Footprint Comparison of .NET IDEs by Matt Wrock

I got an Amazon gift certificate a couple weeks ago as a birthday present and decided to get an Asus netbook (Thanks Mom and Dad!!) So far I really like it. It has 2GB of RAM and a dual-core Atom processor. It’s very light and small enough to lug around comfortably, but not so small that it is unusable. I also figured out that I can tether with my Samsung Focus phone, which is very cool when I don’t have WiFi.

While I do not intend for this device to become my primary at-home coding station, I at least want to be capable of doing light coding on it at the times and places of my choosing. (US military leaders seem to like that phrase, and I admit that I am rather attracted to the feeling of free will, self-determination and “screw you, it’s all about me” ethos that the phrase exudes.) Oh yeah, I was talking about my new netbook.

So I installed both SharpDevelop and MonoDevelop, because both appeared to be strong lightweight contenders against Visual Studio but still seemed to be full-fledged IDEs and not just snippet or one-off file compilers. I have not done any significant coding yet on either, beyond loading my OSS project that I have been working on and successfully building it.

What is interesting is the RAM footprint of each IDE:

  • Visual Studio Ultimate with ReSharper (on my laptop): >300MB
  • SharpDevelop: ~90MB
  • MonoDevelop: ~30MB

I would really like to port my project to an Apache module one day, so unless there is a compelling reason to switch, I’m sticking with MonoDevelop for now.

Recycling an Application Pool with C# (Part 2) by Matt Wrock

The other day I shared how to use the DirectoryServices namespace to restart an app pool via C# code. The code I used had two key flaws:

  1. It used a Thread.Sleep(2000) to wait for the old app pool to be destroyed.
  2. The use of DirectoryServices required enabling the Windows feature: IIS Metabase and IIS 6 configuration compatibility.

Also just to recap why you would even want to do this: My use of this code is for writing integration tests of web app functionality. It allows me to test scenarios where I want to ensure a certain outcome after the application restarts. It also helps to isolate test state from one test to another.

Anyhoo, a coworker of mine, @mmanela (Matt Manela), mentioned hosting a PowerShell script instead of the DirectoryServices implementation. As we discussed it further, we assumed that the PowerShell WebAdministration module was probably using some other API, and that it would be interesting to discover what that was and see if you could use it directly. Well, after spending some quality time with Reflector and the WebAdministration module DLLs, I was not able to tell what that API was. However, I did discover another API that appeared to be a better alternative to DirectoryServices.

The API can be found in %WinDir%\system32\Inetsrv\Microsoft.Web.Administration.dll. See this post for a good overview. Here is my new helper method:

// Requires a reference to %WinDir%\system32\Inetsrv\Microsoft.Web.Administration.dll
// and using directives for Microsoft.Web.Administration, System.Diagnostics
// and System.Threading.
public static void RecyclePool()
{
    using (var manager = new ServerManager())
    {
        var pool = manager.ApplicationPools["RequestReduce"];

        // Grab the current worker process (if one is running) so we can
        // watch for it to terminate after the recycle
        Process process = null;
        if (pool.WorkerProcesses.Count > 0)
            process = Process.GetProcessById(pool.WorkerProcesses[0].ProcessId);

        pool.Recycle();

        // Block until the old worker process has fully exited
        if (process != null)
        {
            while (!process.HasExited)
                Thread.Sleep(0);
            process.Dispose();
        }
    }
}

So in addition to using a different API, I’m also no longer using the hacky Thread.Sleep(2000) to wait for the app pool to die. Instead, I use this API to get the process ID of the about-to-be-recycled app pool and then wait for that process to exit. I have tested this and it works perfectly: my tests move on as soon as the app pool is completely destroyed. I don’t have to wait any extra time if the recycle happens more quickly than two seconds, and I don’t risk a failed test if two seconds is not long enough. In case you are wondering why it is so important to wait for the old app pool’s worker process to terminate before proceeding: I may have cleanup code that deletes files, and that code will likely fail if the old worker process still has a lock on a file.
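To illustrate how this fits into a test, here is a minimal sketch of the kind of integration test this enables. The test page URL is hypothetical and I am assuming an xUnit-style test; only RecyclePool itself comes from the code above:

using System.Net;
using Xunit;

public class RecycleFacts
{
    [Fact]
    public void Pool_recycles_cleanly_between_tests()
    {
        using (var client = new WebClient())
        {
            // Hypothetical page on the local site under test
            var before = client.DownloadString("http://localhost/testpage");

            // Blocks until the old worker process has fully exited,
            // so cleanup code that follows won't hit file locks
            IntegrationFactHelper.RecyclePool();

            // The next request is served by a brand new worker process
            var after = client.DownloadString("http://localhost/testpage");
            Assert.Equal(before, after);
        }
    }
}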

Recycling an Application Pool with C# by Matt Wrock

I have been developing a CSS background image spriting, merging and minification application where I often force an app pool recycle in my integration tests. This is handy because it essentially allows me to reset the state of a web application in my test, and it is a bit more lightweight than performing a full IIS reset. I can specifically test certain scenarios where I want to make sure that some data or state can be persisted if the application is shut down. It’s also a good way to isolate a test case from the effects of other integration tests. Things like output caching or any statically held application memory resources are flushed, and the next test has a clean slate.

To perform an app pool refresh, I use the following helper method in a class called IntegrationFactHelper:

public static void RecyclePool()
{
    // Bind to the application pool via the IIS ADSI provider
    using (var pool =
        new DirectoryEntry("IIS://localhost/W3SVC/AppPools/RequestReduce"))
    {
        pool.Invoke("Recycle", null);
    }

    // Give the recycle time to complete (see caveat below)
    Thread.Sleep(2000);
}

Make sure you have a using statement pulling in the System.DirectoryServices namespace. The path above (IIS://localhost/W3SVC/AppPools/RequestReduce) would be the path to your IIS application pool. Note that the last segment is the name of the application pool, not the IIS site.

I'm not too proud of the Thread.Sleep(2000) here. I just have not invested time in finding a better way to actually wait for the pool to restart. The call to Invoke does not block and wait for the restart to complete. I briefly played with polling the application pool's state, but still found that after the pool claimed to be started (or whatever the state name is), the app was unresponsive. I tend to think that I have not investigated that far enough, and I would be delighted if someone commented with a more elegant way to accomplish this. Having said that, I have found that on my system, two seconds is the sweet spot.
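For what it's worth, the polling I toyed with looked roughly like the sketch below. It assumes the IIS metabase's AppPoolState property, where 2 means started; treat it as a sketch rather than a recommendation since, as I said, the pool can claim to be started while the app is still unresponsive:

using System.DirectoryServices;
using System.Threading;

// Sketch only: poll the ADSI AppPoolState property (2 == started).
// Even when the state reads started, the app may still be warming up.
private static void WaitForPoolToStart(string poolPath)
{
    using (var pool = new DirectoryEntry(poolPath))
    {
        while (true)
        {
            pool.RefreshCache();
            if ((int)pool.InvokeGet("AppPoolState") == 2)
                return;
            Thread.Sleep(100);
        }
    }
}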

UPDATE: See this post for an improved implementation that avoids this Thread.Sleep kludge and also gets around the dependency discussed below.

One cautionary and rather annoying note on using this DirectoryServices call on IIS applications. You may encounter this not-so-delightful error:

System.Runtime.InteropServices.COMException : Unknown error (0x80005000)
 at System.DirectoryServices.DirectoryEntry.Bind(Boolean throwIfFail)
 at System.DirectoryServices.DirectoryEntry.Bind()
 at System.DirectoryServices.DirectoryEntry.get_NativeObject()
 at System.DirectoryServices.DirectoryEntry.Invoke(String methodName, Object[] args)

Isn't that nice? I love unknown errors...almost as much as unspecified ones.

There may be other causes, but I have found that one reason this error may occur is if you have not enabled the Windows feature: IIS Metabase and IIS 6 configuration compatibility (see image below). I am using IIS 7, but this feature is required to use the above code.

Three pitfalls to avoid when writing a response filter by Matt Wrock

I was looking at a response filter that someone else had written yesterday, and I noticed a few things it was doing that you ideally want to avoid in a response filter. This is a particularly fresh topic for me since I am nearing the end of V1 development on a response filter that will automatically find a response's CSS files, merge them, find their background images, sprite the ones it can, and then create new, minified CSS that uses these sprites. I'll be blogging much more on that next month.

Now, writing a good filter that will work with any site and be performant is not particularly easy. If your filter is limited to a small site or smaller sites, this advice may be considered to lie in the category of premature optimization. But real quick...before I elaborate on these pitfalls...

What is a response filter?

A response filter is simply a class that derives from System.IO.Stream. This class is attached to an HttpResponse's Filter property like so:

Response.Filter = new MyFilter(HttpContext.Current.Response.Filter, 
    HttpContext.Current.Response.ContentEncoding);

As the underlying response outputs to its OutputStream, this output is sent to the filter which has the opportunity to examine and manipulate the response before it gets to the browser. The filter does this by overriding Stream's Write method:

public override void Write(byte[] buffer, int offset, int count);

When the filter is ready to send its transformed response to the browser or just forward the buffer on unchanged, it then calls the underlying stream's write method. So your filter might have code like this:

private readonly Encoding encoding;

public ResponseFilter(Stream baseStream, Encoding encoding)
{
    this.encoding = encoding;
    BaseStream = baseStream;
}

protected Stream BaseStream { get; private set; }

public override void Write(byte[] buffer, int offset, int count)
{
    // Encode the wrapper text with the response's actual encoding
    var header = encoding.GetBytes("I am wrapping");
    var footer = encoding.GetBytes("your response");
    BaseStream.Write(header, 0, header.Length);
    BaseStream.Write(buffer, offset, count);
    BaseStream.Write(footer, 0, footer.Length);
}

// (Stream's other abstract members are omitted here; a real filter must
// override them too, typically delegating to BaseStream.)

This is a common implementation used for adding compression to a site or ensuring that a site's content is always wrapped in a common header and footer.
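As an aside, for plain compression you often don't even need a custom Stream subclass; the framework's GZipStream can be chained directly onto the existing filter. A minimal sketch, assuming the client sent an Accept-Encoding header that allows gzip:

using System.IO.Compression;
using System.Web;

// Chain a GZipStream onto whatever filter is already attached so all
// output is compressed on its way to the browser.
var response = HttpContext.Current.Response;
response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
response.AppendHeader("Content-Encoding", "gzip");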

So with that background, here are some things to try and avoid in a solid filter: 

Assuming UTF-8

This is easy to overlook, and honestly it will work most of the time, but if you think that your filter will ever be dropped on a Japanese website, or any website intended to be localized to a double-byte Unicode locale, you might be disappointed. Very disappointed. Avoid doing something like this:

BaseStream.Write(encoding.GetBytes("I am wrapping"), 0, 
    "I am wrapping".Length);

In a Japanese locale, the underlying encoding will be a double-byte Unicode encoding, so the byte array returned by GetBytes will be twice as long as "I am wrapping".Length, which implicitly assumes a single-byte encoding like UTF-8. So the users see just half the string. But that's ok, the first half was way better.
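The fix is cheap: encode first, then let the resulting byte array report its own length, as the earlier example already does:

// Safe: the count comes from the encoded array itself, so it is
// correct no matter what encoding the response uses.
var headerBytes = encoding.GetBytes("I am wrapping");
BaseStream.Write(headerBytes, 0, headerBytes.Length);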

Copying the buffer to a string

 You might be tempted to do something like this:

public override void Write(byte[] buffer, int offset, int count)
{
    var output = encoding.GetString(buffer, offset, count);
    var newOut = encoding.GetBytes("header" + output + "footer");
    BaseStream.Write(newOut, 0, newOut.Length);
}

You have now managed to double the memory footprint of the original response by copying it to a new variable. This can be a sensitive issue with filters since they often process almost ALL output in a site. Unfortunately, if you need to do a lot of text searching and replacing on the original byte array and you want to be efficient, this can be difficult and tedious code to write, read and test. I intend to devote a future post to this topic exclusively.
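Just to give a flavor of that byte-level work, here is a minimal sketch of searching the raw buffer without ever decoding it to a string. The helper and its naive scan are mine, not code from any of the filters discussed here:

// Naive byte-level IndexOf over the written window of the buffer.
// Avoids allocating a string copy of the response just to search it.
private static int IndexOf(byte[] buffer, int offset, int count, byte[] pattern)
{
    for (var i = offset; i <= offset + count - pattern.Length; i++)
    {
        var match = true;
        for (var j = 0; j < pattern.Length; j++)
        {
            if (buffer[i + j] != pattern[j])
            {
                match = false;
                break;
            }
        }
        if (match)
            return i;
    }
    return -1;
}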

Ignoring the offset and count parameters

You might think that using the offset and count parameters in your Write override is not necessary. After all, you are confident that your transformations can go to the browser as is, because you don't have any code that would need to do further processing on the buffer. Well, maybe you don't, but someone else might. You may have no control over the fact that someday another HttpModule will be added to the site that registers another filter. Response filtering fully supports chaining several filters together. Someone else's module might have the code mentioned above in its own class:

Response.Filter = new MyFilter(HttpContext.Current.Response.Filter, 
    HttpContext.Current.Response.ContentEncoding);

So if this is called after your own filter was added to the response, then YOU are HttpContext.Current.Response.Filter. That new filter might do something like:

public override void Write(byte[] buffer, int offset, int count)
{
    // FindOpeningHead is this filter's (hypothetical) helper returning the
    // start and end positions of the head element within the buffer window
    int[] headBoundaryIndexes = FindOpeningHead(buffer, offset, count);
    BaseStream.Write(buffer, offset, headBoundaryIndexes[0] - offset);
    BaseStream.Write(anEvenBetterHead, 0, anEvenBetterHead.Length);
    BaseStream.Write(buffer, headBoundaryIndexes[1], (offset + count) - headBoundaryIndexes[1]);
}

So if your filter is this filter's BaseStream and your Write looks like this:

public override void Write(byte[] buffer, int offset, int count)
{
    var output = Encoding.UTF8.GetString(buffer);
    var newOut = output.Replace("super", "super duper");
    BaseStream.Write(Encoding.UTF8.GetBytes(newOut), 0, newOut.Length);
}

Ouch. Your users are probably looking at something other than what you intended. The upstream filter was trying to replace the head, but now there are three. After several years in the industry and meticulous experimentation, I have found that 1 is the perfect number of heads in a web page.

Oh and look, this code managed to violate all three admonitions in one blow.
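For contrast, a version of that last Write that dodges the first and third pitfalls might look like the sketch below. It still round-trips through a string, so truly fixing the second pitfall requires the byte-level approach sketched earlier:

public override void Write(byte[] buffer, int offset, int count)
{
    // Respect the window handed to us by any upstream filter
    var output = encoding.GetString(buffer, offset, count);
    var newOut = encoding.GetBytes(output.Replace("super", "super duper"));

    // Write the encoded byte length, not the string length
    BaseStream.Write(newOut, 0, newOut.Length);
}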