Poll: How Often Should New Firefox Versions Be Released?

September 21, 2011 in Firefox, HttpWatch, Internet Explorer

Earlier this year Mozilla shifted from releasing a new version every year or so, to once every six weeks.

So in the previous four years we had five major new builds of Firefox, but this year we’ve already had versions 4, 5 and 6.

Releasing often seems like a good idea, unless you're in a controlled corporate environment or you develop extensions for a living.

While changing to this new model, Mozilla largely gave up on backwards compatibility to speed up its development process. In the past, many interfaces were said to be 'frozen', meaning that script-based and native binary extensions could rely on them at any point in the future. That has all changed: anything can now be updated, and there's no guarantee that code in an extension will work with a new version of Firefox.

For native binary components like HttpWatch the picture is much worse. Binary components must be recompiled to work with each new release:

That means it’s impossible for us to ship a version of HttpWatch that will work with a future release of Firefox. Also, we have to add at least one new DLL to our install program for every new Firefox release. It’s not just developer centric tools like HttpWatch that are affected. Even consumer focussed add-ons like RoboForm need updating for every Firefox release.

Of course, Chrome has always been updated frequently, but it has a much smaller extension ecosystem because it doesn't have the range of APIs available in Firefox or Internet Explorer. The frequent updates to Chrome therefore cause fewer issues: there are fewer extensions, and the extension API is less likely to change because it is so much more restricted.

In comparison, Microsoft has been the master of backwards compatibility across versions of Internet Explorer. For example, HttpWatch 3.2 was last compiled nearly 5 years ago but still works with IE 9 on Windows 7:

IE’s longer release cycles and excellent backwards compatibility really appeal to corporate users compared to Firefox’s new release model.

There was even some talk of increasing the frequency of the Firefox releases to once every five weeks or less. The resulting discussion on Slashdot gave rise to these negative comments about the change:

Have they totally lost it? It’s not like the browser world is making sudden great progress. It’s a mature technology. The big problem today is getting stuff fixed.

Sorry i have other things to do than repackage FF for deployment every 5 weeks.

What FF user actually wants this model? Most of them don’t. Releasing at the same speed as Chrome isn’t going to win over Chrome users, but it will chase FF users off. That’s what we’re seeing here.

If they keep this up, I will remove it from our labs. I am not going to deal with this s**t. Release bug fixes as often as you need to, but new features need to be something that doesn’t happen too often. I can’t go and test this s**t every few weeks, nor do I want to deal with things that are outdated. I like FF, but this policy they have is pushing me to dump it. I haven’t yet, but we’ll see.

Extensions stop working at random without any good reason and in record time. So many of us use Firefox over Chrome because of extensions. This plan is just terrible.

Of course, we are biased because short release cycles for Firefox create more work for us. What do you think?

[polldaddy poll=”5521932″]

Investigating the Network Performance Of Firefox 5

June 10, 2011 in Firefox, HttpWatch, Internet Explorer, Optimization

Things are happening fast at Mozilla. Although Firefox 4 was only released three months ago, Firefox 5 is only days away from final release. We can also expect to see versions 6 and 7 later this year.

One of the major performance related changes in Firefox 5 is an improvement in the way that keep-alive HTTP connections are re-used. Previously, there was a simple FIFO queue. So if Firefox ever tried to reuse a TCP connection it would simply use the connection that had been idle for the longest period of time.

However, not all connections are equal. Connections that have already transferred a lot of data are likely to be faster than those that have only carried a small amount. This is caused by TCP's congestion window mechanism: the server's send window grows as data is successfully delivered, so a well-used connection can move more data per round trip.
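To get a feel for why this matters, here is a rough back-of-the-envelope calculation of how many round trips an idealized slow-start sender needs to deliver a response. The numbers (segment size, initial window sizes) are illustrative assumptions, not measurements from Firefox or any real server:

```python
# Idealized TCP slow start: the congestion window doubles each round trip
# until the whole response has been sent (no losses, no receive-window limit).

MSS = 1460  # typical TCP segment payload in bytes (assumption)

def rtts_to_send(total_bytes, initial_cwnd_segments):
    """Round trips needed to deliver total_bytes, starting from the given window."""
    cwnd = initial_cwnd_segments
    sent, rtts = 0, 0
    while sent < total_bytes:
        sent += cwnd * MSS   # one window's worth of data per round trip
        cwnd *= 2            # window doubles each RTT during slow start
        rtts += 1
    return rtts

print(rtts_to_send(25_000, 3))   # cold connection: 3 round trips for 25KB
print(rtts_to_send(25_000, 32))  # warmed-up connection: the whole 25KB fits in 1 RTT
```

On a high-latency link, the difference between one round trip and three is exactly the kind of gap the Firefox 5 change is trying to exploit.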

To find out more please take a look at John Rauser’s excellent and entertaining talk at last year’s Velocity conference:


TCP and the Lower Bound of Web Performance – John Rauser

Slides for TCP and the Lower Bound of Web Performance

One of the major changes in Firefox 5 is that it now sorts the idle connections by congestion window size. Connections with the highest congestion window will be used first as described in the related bug report:

Right now the idle persistent connection pool is a FIFO.

What really distinguishes different connections to the same server is the size of the sending congestion window (CWND) on the server. If the window is large enough to support the next response document then it can all be transferred (by definition) in 1 RTT.

Connections with smaller windows are going to be limited by the RTT while they grow their windows.

All else being equal, which as far as I can tell it is, we want to use the big ones. We cannot directly tell what the server’s CWND is of course, but the history of the connection provides a clue – connections which have moved large flights of data (single responses, or aggregate pipelines of responses) will have given the server the best chance for opening that window in the past.
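The change described in the bug report can be summarized in a few lines. This is a minimal sketch of the two selection policies, not Mozilla's actual code; each idle connection is modelled as a `(name, bytes_previously_moved)` pair, with bytes moved serving as the observable proxy for the server's congestion window:

```python
from collections import deque

def pick_fifo(idle):
    """Firefox 4 style: reuse the connection that has been idle the longest."""
    return idle.popleft()

def pick_largest_cwnd(idle):
    """Firefox 5 style: reuse the connection that has moved the most data,
    since its server-side congestion window is likely to be the largest."""
    best = max(idle, key=lambda conn: conn[1])
    idle.remove(best)
    return best

idle = deque([("conn-A", 4_000), ("conn-B", 180_000), ("conn-C", 12_000)])
print(pick_largest_cwnd(idle))  # ('conn-B', 180000) – largest transfer history wins
```

Under the FIFO policy the same pool would have handed back `conn-A`, the longest-idle connection, regardless of how little data it had moved.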

We’ve just updated HttpWatch to work with Firefox 5 beta 5 and decided to see if we could measure any performance gain from this change. Initially, we were disappointed to find no obvious improvement. This was probably because the use of up to six connections per host name allows a fair amount of averaging out of the congestion window. Also, even if a reused connection is particularly fast, its effect may be swamped by the other resources being downloaded at the same time.

The blog post that originally described the benefit of ordering connections by congestion window size used a best-case scenario:

Using an experiment designed to show the best case, the results are better than I expected for such a minor tweak. This was my process:

  • construct a base page mixed with several small and several large images plus a link to a 25KB image. There are 6 objects on the base page.
  • load the base page – FF4 will use six parallel connections to do it
  • click on the link to the 25KB image – this will use an idle persistent connection. Measure the performance of this load.

Based on this we tried the same sort of process. First opening the HttpWatch Overview page and then clicking on a link to open a full resolution screen shot:

The performance benefit we measured in this scenario was substantial. We consistently found that the screenshot image loaded about twice as fast in Firefox 5 as it did in Firefox 4.

Here’s the screen shot image being loaded in Firefox 4:

and then in Firefox 5:

With HttpWatch it’s possible to track how the browser uses connections. You can do this by adding the Client Port column to the main grid. Each new TCP connection will use a different local TCP port and this is a handy way to see how connections have been used on a page.
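The reason the Client Port column works for this is that every outbound TCP connection is assigned its own ephemeral local port by the operating system. A minimal sketch of that behavior, where a throwaway local listener stands in for a web server:

```python
import socket

# Throwaway listener on an ephemeral port; it stands in for a web server.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(5)
host, port = server.getsockname()

# Open two connections to the same server, as a browser would for parallel downloads.
c1 = socket.create_connection((host, port))
c2 = socket.create_connection((host, port))

port1 = c1.getsockname()[1]  # local (client) port of connection 1
port2 = c2.getsockname()[1]  # local (client) port of connection 2
print(port1, port2)          # two distinct ephemeral ports, one per TCP connection

c1.close()
c2.close()
server.close()
```

Because the local port stays the same for the lifetime of a connection, requests sharing a Client Port value in the HttpWatch grid must have reused the same keep-alive connection.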

In Firefox 4 you can see that the screen shot image was loaded on the first connection that became idle:

This connection hadn’t done much work so its congestion window would have been relatively small.

In Firefox 5, the screenshot image was downloaded using the connection that had loaded the largest amount of data on the previous page. It would have a much larger congestion window and would therefore be able to download the image much more quickly:

Although this relatively simple change in Firefox 5 typically didn't make much difference, in certain scenarios the performance improvement can be dramatic.

So how does IE 9 compare? It appears to use the same FIFO algorithm as Firefox 4 with similar loading times:

IE 9 – What’s Changed?

May 4, 2011 in HTTPS, HttpWatch, Internet Explorer, Javascript

Now that IE 9 has been released and is widely used, we wanted to follow up on some of our previous IE related blog posts to see how things have changed.

1. Using a VPN Still Clobbers IE 9 Performance

We previously reported on the scaling back of the maximum number of concurrent connections in IE 8 when a PC uses a VPN connection. This happened even if the browser traffic didn't go over that connection.

Unfortunately, IE 9 is affected by VPN connections in the same way:

There is a subtle difference though. IE 8 would dynamically change its behavior as you connected or disconnected the VPN. IE 9 seems to check for dial-up or VPN connections only at startup to determine the connection behavior for the rest of the session. For example, any active dial-up or VPN connection found when IE 9 starts will cause it to use a maximum of two connections per hostname. This limit remains until IE 9 is closed, regardless of whether the dial-up or VPN connections remain active.

2. IE 9 Mixed Content Warning Improved But Needs PRG

In previous blog posts we’ve covered the mixed content warning issues in IE and the problems it causes. It got even worse in IE 8 as the modal dialog was worded in a way that caused a great deal of confusion with no apparent benefit for ordinary web users.

A big step forward was taken in IE 9 by using a modeless dialog. It displays a simple message to indicate that not all the content was downloaded because some resources used unencrypted HTTP connections:

You can now ignore the message or simply click on the X to dismiss the warning.

Watch out for the 'Show all content' button though. Previous mixed content warning dialogs simply blocked the download of non-secure content until you clicked the appropriate button. In IE 9, 'Show all content' causes a complete refresh of the page. If your page was the result of a POST (e.g. a form submission) and you didn't use the POST-Redirect-GET pattern, the user will see this dialog instead of the updated page:
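The POST-Redirect-GET pattern avoids this by answering the POST with a redirect, so any later refresh re-issues a harmless GET. A minimal sketch using hypothetical handler functions that return `(status, headers, body)` tuples; the names and the `/comments` URL are illustrative, not any real framework's API:

```python
# POST-Redirect-GET: never render a page directly from a POST.

saved_comments = []

def handle_post(form_data):
    """Process the form, then answer with 303 See Other so that a refresh
    (or IE 9's 'Show all content' reload) re-issues a GET, not the POST."""
    saved_comments.append(form_data["comment"])
    return (303, {"Location": "/comments"}, b"")

def handle_get():
    """The redirect target: safe to refresh or revisit any number of times."""
    body = "\n".join(saved_comments).encode()
    return (200, {"Content-Type": "text/plain"}, body)

status, headers, _ = handle_post({"comment": "Nice screenshots!"})
print(status, headers["Location"])  # 303 /comments
```

With this flow, the page the user actually sees is always the result of a GET, so a full refresh never triggers the browser's "resend the form data?" prompt.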

3. Another Reason to Favor IE 9 32-bit over IE 9 64-bit

We previously wrote about why IE 8 64-bit was not the default version of IE on Windows Vista 64-bit. This was because commonly used plugins such as Flash, Silverlight and Java did not support 64-bit.

IE 9 32-bit remains the default version used on Windows 7 x64 for exactly the same reason:

However, there's another reason to favor IE 9 32-bit: it ships with an advanced JIT compiler that compiles JavaScript into native machine code for improved performance. This JIT compiler only supports the x86 instruction set at the moment, so most JavaScript benchmarks run much more quickly in IE 9 32-bit than in the 64-bit version.

Here’s what ZDNet had to say about the 32-bit and 64-bit versions of IE 9:

OK, so what conclusions can we draw? Well, let’s begin with the obvious and say that Internet Explorer 9 64-bit is an absolute dog when it comes to JavaScript performance. This is to be expected given that IE 9 64-bit is using an older, slower JavaScript engine, while IE 9 32-bit was using the newer, more efficient Chakra JIT.

4. IE 9 Pinned Sites Are Great But They Disable All Add-ons

One nice feature of IE 9 is the ability to create pinned sites in Windows 7. A pinned site sits on the taskbar like a pinned application and can be quickly accessed when required. The web site can also provide customizations such as jump lists.

Unfortunately, all add-ons including HttpWatch are disabled when you do this. The reason given for this is:

The reason Add-ons don’t run on pinned sites is that we wanted to remove any non-site specific extension points (like toolbars and BHOs) from altering the original browsing experience created by the site.

It doesn't seem unreasonable to block a debugging tool like HttpWatch, but it's a shame that productivity tools such as RoboForm are not available.
