RequestReduce

Released RequestReduce 1.8: Making website optimization accessible to even more platforms by Matt Wrock

This week RequestReduce 1.8 was released, expanding its range of platform compatibility along with some minor bug fixes.

Key Features Released

  • Syncing generated sprites and bundles across multiple web servers using SQL Server is now .NET 3.5 compatible. Thanks to Mads Storm (@madsstorm) for migrating the Entity Framework 4 implementation over to PetaPoco.
  • Added support for Azure CDN end points. See below for API details needed to enable this.
  • Fixed dashboard and cache flushing to function on IIS6
  • Ability to manually attach the RequestReduce Response Filter earlier in the request processing pipeline via a new API call.
  • Fixed .Less implementation to pass querystring parameters. Thanks to Andrew Cohen (@omegaluz) for this bug fix.

There were a couple of bugs caught by some users the day of release, but those were fixed in the first 24 hours and all is stable now. You can now get this version from Nuget or RequestReduce.com. It’s been very satisfying hearing from users who run RequestReduce on platforms such as classic ASP and even PHP on IIS, and I’m glad to be able to expand this usage even further.

Why RequestReduce is no longer using Entity Framework

The short answer here is compatibility with .NET 3.5. It may seem odd, as we stand on the precipice of the release of .NET 4.5, that this would be a significant concern, but I have received several requests to support SQL Server synchronization on .NET 3.5. A lot of shops are still on 3.5, and the SQL Server option is a compelling enterprise feature. It’s what we use at Microsoft’s EPX organization to sync the generated bundles and sprites across approximately 50 web servers. Since Entity Framework Code First is only compatible with .NET 4.0, we had to drop it in favor of a solution that works with .NET 3.5.

The reason I originally chose to implement this feature using Entity Framework was mainly to become more familiar with how it worked and how it compared to the ORM I have historically used, nHibernate. The data access needs of RequestReduce.SqlServer are actually quite trivial, so I felt it would be a good project to test out this ORM with little risk. In the end, I achieved what I wanted, which was to understand how it works at a nuts and bolts level beyond the white papers and podcasts I had been exposed to. I have to say that it has come a long way since my initial exposure to it a few years back. The Code First functionality felt very much like my nHibernate/Fluent nHibernate workflow. It still has some maturing to do, especially in regard to caching.

Mads Storm was kind enough to submit a pull request overhauling the EF implementation using a micro ORM called PetaPoco. While I certainly could have ported RequestReduce to straight ADO.NET given its simple data needs, the PetaPoco migration was simple because it follows a similar pattern to Entity Framework. I would definitely recommend PetaPoco to anyone interested in a micro ORM that needs .NET 3.5 compatibility. I had previously been interested in using a framework like Massive, Simple.Data, or Dapper. However, all of these make use of the .NET 4 dynamic type. PetaPoco is the only micro ORM I am aware of that is compatible with .NET 3.5.

How to integrate RequestReduce with Azure CDN Endpoints

Azure’s CDN (content delivery network) implementation is a little different from most standard CDNs like Akamai. My experience working with a couple of the major CDN vendors has been that you point your URLs to the same URL you would use locally, except that the host name is one dedicated to static content, whose DNS points to your CDN provider. The CDN provider serves your content from its own cache, which is geographically located close to the requesting browser. If the CDN does not have the content cached, it makes a normal HTTP call to the “origin” server (your local server) using the same URL it was given but with the host name of your local site. Azure follows this same model with the exception that it expects your CDN content to reside in a directory (physical or virtual) explicitly named “CDN”.

Standard Implementation:

Browser –> http://cdn.yoursite.com/images/logo.png –> CDN Provider (cold cache) –> http://www.yoursite.com/images/logo.png

Azure Implementation:

Browser –> http://azurecdn.com/images/logo.png –> CDN Provider (cold cache) –> http://www.yoursite.com/cdn/images/logo.png

RequestReduce allows applications to serve its generated content via a CDN or cookieless domain by specifying a ContentHost configuration setting. When this setting is provided, RequestReduce serves all of its generated JavaScript and CSS, and any local resources embedded in the CSS, using the host provided in the ContentHost setting. However, when using Azure CDN endpoints, not only the host but also the path differs, so this solution fails: http://longazurecdnhostname.com/images/logo.png fails to get content from http://friendlylocalhostname.com/images/logo.png because the content is actually located at http://friendlylocalhostname.com/cdn/images/logo.png. RequestReduce’s ContentHost setting will now work with Azure as long as you include this API call somewhere in your application’s startup code:

RequestReduce.Api.Registry.UrlTransformer = (x, y, z) => z.Replace("/cdn/", "/");

This tells RequestReduce to remove the CDN directory from the path whenever it generates a URL.
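For a bit more context, here is the same registration spelled out with comments. This is a minimal sketch that assumes the third lambda parameter carries the ContentHost-prefixed URL RequestReduce is about to emit (the first two parameters are ignored, just as in the one-liner above), and the example path is illustrative:

// Somewhere in your application's startup code (e.g. Application_Start).
// Assumption: the third argument (z) is the URL RequestReduce is about to emit,
// already pointing at the configured ContentHost.
RequestReduce.Api.Registry.UrlTransformer = (x, y, z) =>
{
    // http://longazurecdnhostname.com/cdn/images/logo.png
    // becomes
    // http://longazurecdnhostname.com/images/logo.png
    return z.Replace("/cdn/", "/");
};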

Attaching the RequestReduce response filter early in the request

RequestReduce uses a response filter to dynamically analyze your web site’s markup and manipulate it, replacing multiple CSS and JavaScript references with bundled CSS and JavaScript files and transforming the background images in the CSS into sprites where it can. RequestReduce waits until the last possible moment of the request processing pipeline to attach itself to the response so that it has all of the information needed to make an informed decision about whether or not it should attach itself. This works well in almost all cases.

There are rare cases where an application has another response filter that either does not play nicely with other response filters (by failing to chain its neighboring filter correctly) or manipulates the content of the response in a way that requires RequestReduce to filter the content after that filter has performed its manipulations.

I ran into this last week working with the MSDN and Technet Dev Centers in their adoption of RequestReduce. They have a response filter that gets attached in an MVC controller action filter, which runs before RequestReduce attaches itself. The nature of chained response filters is that the first filter to attach itself is the last filter to receive the response. Since the dev center response filter explicitly removes some excess CSS and JavaScript, it is important that RequestReduce receives the content last and is therefore attached first. To accommodate this scenario, I added the following API method that they were able to call in their action filter just before attaching their own filter:

RequestReduce.Api.Registry.InstallResponseFilter();

This tells RequestReduce to attach itself now.
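To make that concrete, here is a minimal sketch (my illustration, not the dev centers' actual code) of an MVC action filter that forces RequestReduce to attach first, leaving room to attach the application's own response filter afterwards:

using System.Web.Mvc;

// Minimal sketch: force RequestReduce to attach its response filter from an
// MVC action filter, before any other filter is chained onto the response.
public class AttachRequestReduceEarlyAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // RequestReduce attaches itself now instead of waiting until the end of
        // the pipeline, so it ends up being the last filter to see the content.
        RequestReduce.Api.Registry.InstallResponseFilter();

        // ...attach your own response filter here, after RequestReduce...

        base.OnActionExecuting(filterContext);
    }
}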

Now excuse me as I slip into my 100% polyester leisure suit…

So what are you waiting for? Head over to Nuget and download RequestReduce today! It will make your site faster or my name isn’t Matt Wrock. Oh…and it’s Freeeeeeeeeeeeeee!!!!

Bug fixes and enhancements included in RequestReduce 1.7.26 by Matt Wrock

I usually don’t blog on bug fix releases. However, the bug fix release I deployed today addresses a couple of serious bugs (albeit edge cases), and their fixes forced a few significant enhancements I want to call out.

  1. CSS that references the same image twice could produce sprite sheets that cut off some of the images at the end of the sprite sheet.
  2. The logic that determines which CSS selectors can be included within another CSS selector sometimes breaks with child selectors (e.g. .parent-class > .child).
  3. Upgraded to v4.44 of the MS Ajax Minifier, which addresses a bug that causes a JS error in IE8 with Google’s Prettify.js. While IE8 throws an error, all other browsers suffer because this bug essentially breaks the intentions of prettify.js.
  4. RequestReduce was not handling @font-face urls or some data URIs. These are now supported.

There were other bugs but these are the really important ones worth mentioning. At work (Microsoft MSDN and Technet web properties) we are preparing for our February release, and there have been a couple of minor issues raised in addition to the bugs above that have forced the following enhancements.

  1. RequestReduce now handles background-position properties that use pixels, percentages or the directional attributes of top, left, right, center, bottom. Previously, RequestReduce did not consider percentage values or positive pixel units.
  2. Some tweaks to the nQuant quantizer parameters increase the quality of the images without sacrificing file size. While RequestReduce generally produces high quality png8 sprites, there are rare cases where images may appear grainy or overly opaque. Today’s fix addresses most of these instances. There may still be times when image quality is sub-par. This is most likely to happen in very smooth gradients that have a lot of variance in transparency levels. The best way to deal with these is to either disable image quantization or decrease the spriteColorCount setting. The fewer colors that the nQuant quantizer has to reduce to 255, the higher the quality of the final image. There is really no hard and fast rule here. RequestReduce defaults to 5000, but depending on your images, 10,000 may be fine or you may need to reduce to 2000.
  3. Thanks to PureKrome, the RequestReduce dashboard is a little easier on the eyes when displaying lists of urls and it includes exception info for failed requests. I really like this and it improved my quality of life doing some debugging last night.
  4. Added an API hook to allow one to transform the absolute URL generated by RequestReduce. A popular application of this is to add prefixed subdomains to CDN hosts to allow browsers to load more content at the same time (see the sketch after this list). See the API Wiki for details.
  5. Upgraded the SassAndCoffee dependency to 2.0.2. This eliminates some of the x64 instability issues of the former version and uses the new IE Chakra JS engine if available. It no longer uses V8.
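Regarding item 4 above, here is a minimal sketch of the shard-subdomain idea using the UrlTransformer hook; the host names and the hash-based shard selection are my own illustrative assumptions, not RequestReduce defaults, and the parameter names reflect my reading of what the second and third arguments carry:

// Spread RequestReduce-generated URLs across a couple of CDN host aliases so
// browsers can open more parallel connections. Hosts and hashing scheme are
// purely illustrative.
RequestReduce.Api.Registry.UrlTransformer = (context, originalUrl, newUrl) =>
{
    var shards = new[] { "cdn1.yoursite.com", "cdn2.yoursite.com" };
    // Hash the URL so the same resource always maps to the same shard,
    // which keeps browser caching effective.
    var shard = shards[(newUrl.GetHashCode() & 0x7fffffff) % shards.Length];
    return newUrl.Replace("www.yoursite.com", shard);
};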

Released RequestReduce 1.7.0: Giving the RequestReduce onboarding story a happy beginning by Matt Wrock

About six weeks ago I blogged about an issue with RequestReduce and its limitations in resolving the image properties of each CSS class. To recap, until today, RequestReduce treated each CSS class as an atomic unit and ignored any other classes it might be able to inherit from. The worst side effect of this is a page that already has sprites but uses one class to specify the image, width, height, and repeatability, and then uses several separate classes, each containing the background-position of one image in the sprite sheet. Something like this:

.nav-home a span{
    display:block;
    width:110px;
    padding:120px 0 0 0;
    margin:5px;
    float:left;
    background:url(../images/ui/sprite-home-nav.png?cdn_id=h37) no-repeat 0 1px;
    cursor:pointer
}

.nav-home a.get-started span{background-position:0 1px}

.nav-home a.download span{background-position:-110px 1px}

.nav-home a.forums span{background-position:-220px 1px}

.nav-home a.host span{background-position:-330px 1px}

What RequestReduce would do in a case like this is resprite .nav-home a span because it has all of the properties needed to construct the viewport and parse out the sprite correctly. However, once this was done, the lower four classes containing the positions of the actual images rendered a distorted image. This is because RequestReduce created a new sprite image with the original images placed in different positions than they were on the original sprite sheet, so the background positions of the other nav-home images point to invalid positions.

If you are creating a site that pays tribute to abstract art, you may be pleasantly surprised by these transformations. You may be saying, “If only RequestReduce would change my font to wing dings, it would be the perfect tool.” Well, unfortunately you are not the RequestReduce target audience.

RequestReduce should never change the visual rendering of a site

One of the primary underlying principles I try to adhere to throughout the development of RequestReduce is to leave no visible trace of its interaction. The default behavior is always to optimize as much as possible without risk of breaking the page. For example, I could move scripts to the bottom or dynamically create script tags in the DOM to load them asynchronously and in many cases improve rendering performance but very often this would break functionality. Any behavior that could potentially break a page must be “requested” or opted in to via config or API calls.

This spriting behavior violated this rule all too often. I honestly did not know how widespread this pattern was. My vision is to have people drop RequestReduce onto their site and have it “just work” without any tweaking. What I had been finding was that many, many sites (and most, if not all, “sophisticated” sites already using some spriting) render with unpleasant side effects when they deploy RequestReduce without adjustments. While I have done my best to warn users of this in my docs and provide guidance on how to prepare a site for RequestReduce, I had always thought that the need for this documentation and guidance would be more the exception than the rule.

I have now participated in onboarding some fairly large web properties at Microsoft onto RequestReduce. The process of making these adjustments really proved to be a major burden. It’s not hard like rocket science; it’s just very tedious and time consuming. I think we’d all rather be building rockets than twiddling with CSS classes.

Locating other CSS properties that may be used in another CSS class

It just seemed to me that, given a little work, one could discover properties from one CSS class that could be included in another. So my first stab at this was a very thorough reverse engineering of CSS inheritance and specificity scoring. For every class, I determined all the other classes that could potentially “contribute” to that class. So given a selector:

h1.large .icon a

DOM elements that can inherit from this class could also inherit from:

a
.icon a
h1 .icon a
.large .icon a
.large a
etc...

For every class that had a “transformable” property (background-image or background-position), I would iterate over all other classes containing properties I was interested in (width, height, padding, background properties) and order them by specificity. The rules of CSS specificity can be found here. Essentially, each ID in a selector is given a score of 100, each class and pseudo-class a score of 10, and each element and pseudo-element a score of 1. Inline styles get a score of 1000, but I can’t see the DOM, and the “universal” selector * is given a score of 0. When two selectors have a matching score, the winner is the one that appears last in the CSS.
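As a rough worked example of that scoring (an illustrative sketch only, not RequestReduce's actual parser), the method below tallies 100 per ID, 10 per class, and 1 per element name in a selector:

using System;
using System.Text.RegularExpressions;

public static class SpecificityExample
{
    // Naive specificity score for illustration: 100 per #id, 10 per .class,
    // 1 per element name. Real CSS specificity has more edge cases
    // (pseudo-classes, attribute selectors, inline styles, etc.).
    public static int Score(string selector)
    {
        int ids = Regex.Matches(selector, @"#[\w-]+").Count;
        int classes = Regex.Matches(selector, @"\.[\w-]+").Count;
        int elements = Regex.Matches(selector, @"(?:^|[\s>+~])[a-zA-Z][\w-]*").Count;
        return ids * 100 + classes * 10 + elements;
    }

    public static void Main()
    {
        Console.WriteLine(Score("h1.large .icon a")); // 22: two classes, two elements
        Console.WriteLine(Score("#nav .item a"));     // 111: one id, one class, one element
    }
}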

Once I had this sorted list, I would iterate down the list stealing missing properties until all my properties were occupied or I hit the end of the list.

At first this worked great and I thought I was really on to something, but I quickly realized that this was breaking the experience all too often. Given the endless possibilities of DOM structures, there is just no way to calculate, without knowledge of the DOM, which class is truly representative. Eventually I settled on matching only a selector that has a background-position but no background-image with the most eligible and specific selector containing a background-image. While even this strategy could break down, so far every page I throw this at renders perfectly.
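Expressed as a sketch (illustrative only; the types and names below are invented for this example and are not RequestReduce's internals), that final heuristic amounts to: for a selector that declares a background-position but no background-image, borrow the image from the most specific eligible selector that does declare one:

using System.Collections.Generic;
using System.Linq;

// Invented model for illustration; not RequestReduce's internal types.
public class CssClass
{
    public string Selector;
    public int Specificity;              // score computed as described above
    public string BackgroundImage;       // null when the class declares none
    public string BackgroundPosition;    // null when the class declares none
}

public static class SpriteMatching
{
    // Returns the candidate whose background-image the target should borrow,
    // or null when there is nothing to resolve.
    public static CssClass FindImageSource(CssClass target, IEnumerable<CssClass> eligibleCandidates)
    {
        if (target.BackgroundImage != null || target.BackgroundPosition == null)
            return null;

        return eligibleCandidates
            .Where(c => c.BackgroundImage != null)
            .OrderByDescending(c => c.Specificity)
            .FirstOrDefault();
    }
}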

Although this feature does not add any new optimization and mainly assists larger sites in adopting RequestReduce, I’m excited to provide a smoother adoption path. As a project owner who wants his library to be used, I want adoption to be as smooth and frictionless as possible.

What else is in v1.7.0?

Here is a list of the other features that made it into this release:

  1. Improved performance of processing pages with lots of sprites. This is done by loading each master sprite sheet into memory only once and not each time an individual sprite image is found.
  2. Prevent RequestReduce from creating an empty file when it processes a single script or CSS file containing only a comment, which becomes empty after minification.
  3. Support Windows Authentication when pulling script/css/sprite resources.

What’s Next?

Good question. Probably being able to process scripts due to expire in less than a week. Soon after that I want to start tackling foreground image spriting.

Comparing RequestReduce with other popular minifiers by Matt Wrock

I have been asked several times now how RequestReduce compares or differs from popular minification and bundling solutions like SquishIt, Cassette, and the upcoming ASP.NET 4.5 minification and bundling features. Before I say anything, let me note that RequestReduce is an OSS project; I make no money from it and in fact lose quite a bit of time to it. This comparison is not at all intended to make a statement that the other solutions out there suck and therefore you should use my super cool Wrock star solution. The solutions currently out there are all great tools written by great developers. Also, I am a Microsoft employee and do not in any way wish to compete with my employer. I am nothing but supportive of the ASP.NET team’s progress in enhancing performance out of the box with ASP.NET 4.5. That all said, RequestReduce does take a unique approach to bundling and minification that I want to point out in this post.

Automatically Discovers CSS and Javascript with no code or config

One of my primary objectives with RequestReduce is to have it optimize a website with absolutely no coding intervention. My philosophy is that a developer or team should not have to adjust their style or conventions to work with an auto-minifying tool. Currently, most of the popular tools require you to either inject code or a control into your ASP.NET page or MVC view to act as the touch point that defines what should be minified.

Being able to avoid adding such code is obviously ideal for legacy apps where you might not even have the ability to change code or have no idea where to begin. I also like it for green field projects. I just don’t think that a tool like RequestReduce should have a noticeable presence in your code.

RequestReduce uses a response filter to scan your response for all <link> tags in the head and all <script> tags in the page. As long as the href or src points to a URL that returns a CSS or JavaScript content type, it will be processed by RequestReduce. The exception to this rule is JavaScript with no-store or no-cache in its response headers, or that expires in less than a week; RequestReduce ignores those. Also, RequestReduce, by default, ignores JavaScript pulled from the Google or Microsoft CDNs. The idea there is that such content has a high likelihood of already being cached in the user’s browser. RequestReduce does expose configuration settings and an API to give more fine-tuned control over what CSS and JavaScript to filter.

Minifies and Combines External and Dynamic Content

Most of the popular minification and bundling solutions are file based. In other words, they pull the original unminified resources from the file system and assume everything is already on your server. While this obviously covers most cases, it does not cover external scripts or things like WebResource.axd and ScriptResource.axd, which are generated dynamically.

RequestReduce is HTTP based. It pulls down original content via HTTP, which means it can pull down any CSS or JavaScript as long as it is publicly available from a URL. This is great for a lot of blog and CMS systems that rely heavily on WebResources and ScriptResources. It is also great for external content. Now, as stated above, RequestReduce ignores “near future” expiring scripts. However, toward the top of my backlog is a feature to handle those. Imagine being able to include those pesky social media scripts.

Automatically Sprites CSS Background images

Anyone who has created sprite sheets from scratch knows how tedious that process can be. As a site adds images in new releases, those sprite sheets have to be revised, which has an engineering cost and a risk of being forgotten. Ask your engineering team who wants to do the spriting and don’t expect a huge show of hands. RequestReduce parses the CSS, looks for images that it thinks it can sprite, and then generates the sprite sheets on the fly.

There are limitations in what RequestReduce will find, and there is potential to distort the page rendering in some cases when images are already sprited. Much of that can be easily mitigated. Please see this wiki and also this one for hints and explanations on how to improve and optimize the RequestReduce spriting experience. The very next feature I am working on should alleviate a lot of the mess that can sometimes occur on the fraction of sites that already have sprites, and will also allow RequestReduce to sprite even more images. I have a few more sprite-related upgrades planned, and I also plan to address spriting foreground images. How cool would that be?

Deferred Processing

RequestReduce never blocks a request while waiting to minify and bundle resources. If RequestReduce has not already done the minification and bundling, it will send the original response and queue the resources for processing. In the case of RequestReduce this is particularly important since the spriting can be quite costly. Once resources have been processed, all subsequent requests for those resources will serve the optimized content using optimized caching headers and etags.
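Conceptually, the deferral works something like the sketch below. This is a simplified model of the idea, not RequestReduce's actual implementation, and the type and member names are invented for illustration:

using System.Collections.Concurrent;

// Simplified conceptual model: never block the current request on
// minification, bundling, or spriting work.
public class DeferredOptimizer
{
    private readonly ConcurrentDictionary<string, string> optimizedByKey =
        new ConcurrentDictionary<string, string>();
    private readonly ConcurrentQueue<string> workQueue = new ConcurrentQueue<string>();

    public string Transform(string pageKey, string originalHtml)
    {
        string optimized;
        if (optimizedByKey.TryGetValue(pageKey, out optimized))
            return optimized;           // already processed: serve the optimized markup

        workQueue.Enqueue(pageKey);     // hand the work off to a background worker
        return originalHtml;            // serve the original response untouched
    }

    // Called by the background worker once bundling and spriting complete.
    public void MarkComplete(string pageKey, string optimizedHtml)
    {
        optimizedByKey[pageKey] = optimizedHtml;
    }
}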

SqlServer Content Synchronization and easy integration with CDNs and Cookie-Less domains

RequestReduce allows you to easily configure an alternate host name where you would like requests for static resources to be sent. This works great for CDNs and cookieless domains, and it supports web performance best practices.

Also, since RequestReduce can synchronize optimized content via SQL Server, it becomes an ideal solution for many web farm implementations. A common problem in a web farm scenario is that a request for the base page provides URLs for scripts and CSS that point to the optimized files; a different server then receives those requests, and if the resources have not yet been processed on that server, a 404 can ensue. This can also be handled with a common static file share. See this wiki for more info on this.

Now a lot of the current solutions out there do provide integration points for you to extend their processing and plug in these kinds of features into their frameworks. RequestReduce attempts to provide these features out of the box.

Why not just do all of this at Build Time?

This is another common and somewhat related question I get. On the one hand, I totally agree. In fact, in probably most scenarios out there on the net today, a build time solution will suffice. Most sites don’t deal with dynamic or external content, which are the areas where a build time solution simply won’t work. A build time solution also imposes a lot less risk. There are no extra moving parts running in production to minify and bundle your code that can break. If this breakage interferes with your site’s ability to serve its CSS and JavaScript, the results can be akin to total downtime. Also, with a build time solution, you know exactly what is going into production and your test team can confidently sign off on what they tested.

I intend to eventually add features to RequestReduce to provide a better build time experience. To me, the beauty of a run time solution is not having to worry about declaratively adding new resources to the build tasks. As long as the tool is stable, I can have confidence that new resources (images, CSS, and JavaScript) will get picked up in the optimization. Also, the potential for optimizing external resources is huge. There is a fair amount to be done here to fully leverage this potential, but much of the web’s performance degradation can be blamed on resources served from external sites.

I really hope this answers many of the questions about what makes RequestReduce different from other similar tools. Please do not hesitate to ask for more clarification in the comments if it does not, or if you feel I have missed anything significant.

Microsoft blogging platform gains 33% performance boost after adopting RequestReduce by Matt Wrock

Here is a Keynote performance graph covering a few hours before and after launch.

pic.twitter.com/UoqwYVWz

Looks good to me.

A win for Microsoft. Another win for the Open Source Community

Today Microsoft completed its onboarding of RequestReduce onto its MSDN and Technet blogging platform. Huge thanks and shout out to Matt Hawley (@matthawley), who played a pivotal role in this launch! In fact, he DID launch it. Matt also made a significant contribution to RequestReduce by yanking the .NET 4.0-dependent SqlServer store out into a separate package and tweaking the build script to make it compatible with .NET 3.5.

I am very pleased to report that upon launch, readers can now access a site that is 33% faster than it was before launch. This is a win for Microsoft, the multitudes of readers that visit its content every day and a win for the open source software (OSS) community at large. I like to think that this demonstrates Microsoft’s growing commitment to OSS. In just the past couple of years, Microsoft has made giant strides to fostering the open source ecosystem. This is just another small step forward.

Just to be clear: I am a Microsoft employee. However, I develop RequestReduce on my own time away from work, and there have been non-Microsoft contributions. The features that I build into RequestReduce do not originate from some grand Microsoft Project Gantt chart. My releases of RequestReduce downloads do not require multiple levels of sign-off from the Microsoft covenant of elders.

Furthermore, my team and the teams I work closely with use tons of various OSS projects with the full blessing of the Microsoft legal department (I hear they wear special underwear but am not sure – more on this later). We use nHibernate, StructureMap, Moq, XUnit, Castle Windsor, Service Stack, Json.Net, PSake and a lot more. These are baked into Microsoft properties that many .NET devs visit every day, like the Visual Studio Gallery, various MSDN properties, and even the web service you call when you go to Visual Studio’s Extension Manager.

RequestReduce: Optimized for improving performance of brown field apps

While RequestReduce is suited to optimize any website, it is particularly ideal for sites that have already been built and are suffering from poor, and sometimes extremely poor, performance. The MSDN and Technet blogging platform is a perfect example of this. It is built on top of third-party, non-Microsoft blogging software that uses ASP.NET 3.5. Let’s just say there was no lack of webresource.axd and scriptresource.axd, especially if you are fond of fitting…say…30 or 40 in a single page. I mean why not? Certainly god created the .axd extension for a reason and intended it to multiply and be fruitful.

RequestReduce is ideally architected for these scenarios. It follows a simple drop-in-and-just-work model. Unlike a lot of the other minifying and bundling solutions, it filters your page and dynamically locates your JavaScript and CSS regardless of whether it lives on your server or on Twitter’s CDN. It can process any resource with a text/css or JavaScript MIME type, even resources that are dynamically generated, as in the case of ScriptResources. It works well with high traffic, multi-server enterprise topologies because it is fast and provides multiple caching solutions, including one that caches in a central SQL Server instance.

So what exactly does RequestReduce do?

Well, after helping you lose weight, quit smoking, and significantly enhancing your sex life (I did say “significantly”, right?…good), it makes your website faster by minifying your CSS and JavaScript, combining your CSS and what JavaScript it can without breaking your scripts, and attempting to locate background images it can combine into sprite files. It also optimizes the color palette and compression of these sprites. It’s like running YSlow and then clicking the “optimize now” button. Really? You haven’t seen that button?

The end result is fewer bytes and fewer HTTP requests for your browser to handle when rendering a page, with very little work on your part. Do you really want to generate your own sprites each time you are handed a new image, or remember to add that new script to your bundling config? Do you get frustrated when your site goes to production and you realize you misspelled Cache-Control as “Cace-Cntrl” and your test team did not catch it because they were busy ensuring your app solved world hunger even if a user entered a space followed by an asterisk in your search box? Well, you can get this functionality on your own site now at http://www.requestreduce.com or via Nuget.