How Secure Are Query Strings Over HTTPS?

February 20, 2009 in HTTPS, HttpWatch

A common question we hear is “Can parameters be safely passed in URLs to secure web sites?” The question often arises after a customer has looked at an HTTPS request in HttpWatch and wondered who else can see this data.

For example, let’s suppose we pass a password in a query string parameter using the following secure URL:

https://www.httpwatch.com/?password=mypassword

HttpWatch is able to show the contents of a secure request because it is integrated with the browser and can view the data before it is encrypted by the SSL connection used for HTTPS requests:

If you look at the same request in a network sniffer, such as Network Monitor, you would just see the encrypted data going backwards and forwards. No URLs, headers or content are visible in the packet trace:

You can rely on an HTTPS request being secure so long as:

  • No SSL certificate warnings were ignored
  • The private key used by the web server to initiate the SSL connection is not available outside of the web server itself.

So at the network level, URL parameters are secure, but there are some other ways in which URL based data can leak:

  1. URLs are stored in web server logs – typically the whole URL of each request is stored in a server log. This means that any sensitive data in the URL (e.g. a password) is being saved in clear text on the server. Here’s the entry that was stored in the httpwatch.com server log when a query string was used to send a password over HTTPS:


    2009-02-20 10:18:27 W3SVC4326 WWW 208.101.31.210 GET /Default.htm password=mypassword 443 ...

    It’s generally agreed that storing clear text passwords is never a good idea even on the server.

  2. URLs are stored in the browser history – browsers save URL parameters in their history even if the secure pages themselves are not cached. Here’s the IE history displaying the URL parameter:

    Query string parameters will also be stored if the user creates a bookmark.

  3. URLs are passed in Referrer headers – if a secure page uses resources, such as javascript, images or analytics services, the URL is passed in the Referrer request header of each embedded request. Sometimes the query string parameters may be delivered to and stored by third party sites. In HttpWatch you can see that our password query string parameter is being sent across to Google Analytics:
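For example, here’s a simplified sketch of one of those embedded requests (the analytics host name and path are purely illustrative). The full page URL, password included, travels in the Referer header, which is how the header name is actually spelled on the wire:

    GET /collect?type=pageview HTTP/1.1
    Host: analytics.example.com
    Referer: https://www.httpwatch.com/?password=mypassword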

Conclusion

The solution to this problem requires two steps:

  • Only pass around sensitive data if absolutely necessary. Once a user is authenticated it is best to identify them with a session ID that has a limited lifetime.
  • Use non-persistent, session level cookies to hold session IDs and other private data.

The advantages of using session level cookies to carry this information are:

  • They are not stored in the browser’s history or on disk
  • They are usually not stored in server logs
  • They are not passed to embedded resources such as images or javascript libraries
  • They only apply to the domain and path for which they were issued

Here’s an example of the ASP.NET session cookie that is used in our online store to identify a user:

Notice that the cookie is limited to the domain store.httpwatch.com and it expires at the end of the browser session (i.e. it is not stored to disk).
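If you wanted to issue a cookie like this yourself, here’s a minimal sketch using classic ASP.NET (the handler and cookie names are hypothetical). The key point is that no Expires date is set, which is what makes the cookie non-persistent:

using System;
using System.Web;

// Minimal sketch: an HTTP handler that issues a non-persistent,
// session-level cookie. The handler and cookie names are hypothetical.
public class IssueSessionCookie : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        HttpCookie sessionCookie =
            new HttpCookie("MySessionId", Guid.NewGuid().ToString("N"));

        // Keep the cookie away from client-side script
        sessionCookie.HttpOnly = true;

        // No Expires date is set, so the cookie lives only for the
        // browser session and is never written to disk
        context.Response.Cookies.Add(sessionCookie);
    }

    public bool IsReusable { get { return false; } }
}

ASP.NET’s own session cookie works in the same way: it is issued without an Expires date, so it disappears when the browser is closed.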

You can of course use query string parameters with HTTPS, but don’t use them for anything that could present a security problem. For example, you could safely use them to identify part numbers or types of display like ‘accountview’ or ‘printpage’, but don’t use them for passwords, credit card numbers or other pieces of information that should not be publicly available.

You can check your SSL/TLS configuration with our new SSL test tool, SSLRobot. It will also look for potential issues with the certificates, ciphers and protocols used by your site. Try it now for free!

The Firefox Process Model

February 10, 2009 in Automation, Firefox, HttpWatch

One of the interesting new features in Google’s Chrome browser is the use of one Windows process per site or tab. This helps to enforce isolation between tabs and prevents a problem in one tab from crashing the whole browser.

In comparison, Firefox seems to have a simplistic process model on Windows. It doesn’t matter how many tabs or windows you open, or how many times you start Firefox – by default you get one instance of firefox.exe:

Firefox Process Model

In Internet Explorer you can create a separate instance of the browser process just by starting another copy of iexplore.exe.

There are advantages to Firefox’s single process model:

  1. It uses fewer system resources per tab compared to creating multiple Windows processes.
  2. Firefox can use fast in-process data access and synchronization objects when it interacts with the history, cookie and cache data stores.

However, the lack of isolation means that if anything causes a page to crash, you’ll lose all your Firefox tabs and windows. This is mitigated to some degree by Firefox’s ability to restart the browser and reload the set of pages displayed in the previous session.

So what do you do if you are developing an add-on for Firefox or you want to run automated tests in Firefox whilst still using Firefox to browse in the normal way?

In Firefox, multi-process support is provided through the use of profiles. When Firefox is installed, you automatically get one default profile that contains user settings, browsing history, the browser cache and persistent cookies. Additional profiles can be created using the Firefox Profile Manager.

The Profile Manager is built into Firefox and is started by running this command in Start->Run:

firefox -P -no-remote

The -P flag indicates that the Profile Manager should be started and the -no-remote flag indicates that any running instances of Firefox should be ignored. If you run without this flag and you have already started Firefox, the command will simply open a new Firefox window without displaying the Profile Manager.

The Profile Manager has a simple user interface that allows you to create, delete and rename profiles:

You can start Firefox in a non-default profile by using the following command line:

firefox -P <myprofile> -no-remote

For example, if you created a new profile called AutoTest:

You could set up a shortcut like this to start Firefox in the AutoTest profile:
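The shortcut’s target is just the same command with the profile name filled in. For example (the firefox.exe path shown here is the typical default install location and may differ on your machine):

"C:\Program Files\Mozilla Firefox\firefox.exe" -P AutoTest -no-remote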

Each profile uses its own copy of the firefox.exe process, as well as its own settings, browser cache, history and persistent cookies. This provides more isolation than you would achieve by running multiple processes in IE. You can even separately enable or disable add-ons like Firebug or HttpWatch in each profile.

Internet Explorer’s cache and persistent cookies are maintained on a per-user basis, making it difficult to run separate instances with their own storage and settings. With Firefox you simply use different profiles. For example, you could use your default profile for normal browsing and have a separate profile for another purpose such as automated testing:

The HttpWatch automation interface in version 6.0 supports the use of profiles with Firefox. The profile name can be passed to the Attach and New methods of the Firefox plugin object. Passing an empty string indicates that you want to use the default profile.

Here’s a modified version of the page load test that we previously featured. It’s written in C# and uses a non-default profile to run the test:

// Set a reference to the HttpWatch COM library
// to start using the HttpWatch namespace
using HttpWatch;                
 
namespace EmptyCacheTest
{
    class Program
    {
        static void Main(string[] args)
        {
            string url = "http://www.httpwatch.com";
            string profileName = "AutoTest";
 
            Controller controller = new Controller(); 
 
            // Create an instance of Firefox in the specified profile
            Plugin plugin = controller.Firefox.New(profileName);                
 
            // Clear out all existing cache entries
            plugin.ClearCache();                
 
            plugin.Record();
            plugin.GotoURL(url);                
 
            // Wait for the page to download
            controller.Wait(plugin, -1);                
 
            plugin.Stop();                
 
            // Find the load time for the first page recorded
            double pageLoadTimeSecs =
                plugin.Log.Pages[0].Entries.Summary.Time;                
 
            System.Console.WriteLine( "The empty cache load time for '" +
                url + "' was " + pageLoadTimeSecs.ToString() + " secs");                
 
            // Uncomment the next line to save the results
            // plugin.Log.Save(@"c:\temp\emptytestcache.hwl");                
 
            plugin.CloseBrowser();
        }
    }
}
