Web Performance


Why should you care about web performance?


Over the last decade or so, web pages have become increasingly complex and rich. A modern web page can contain several images, multiple CSS and JavaScript files, videos, and Flash or Silverlight objects. This complexity comes at a cost – increased page load times.

A number of organisations have conducted studies of how performance relates to user experience, conversions and page abandonment. In a nutshell:

  • Slow web pages can lead to
    • higher rates of page abandonment
    • a poor user experience, which may deter future visits
  • These factors in turn can impact conversion rates, whether that is directly through products/services you sell through your site or via display advertising.
  • Complex, unoptimised pages containing a large number of resources increase your hosting costs due to increased traffic.
  • Faster web pages are likely to have a higher search engine ranking (at least in the case of Google, which now includes speed as a factor in its search ranking algorithms).
  • You can increase scalability by optimising your site for performance.


Major technology companies such as Google, Amazon, Microsoft and Yahoo! have placed great importance on web performance. Google has a site dedicated to “making the web faster”, and Yahoo! also provides information on how to speed up your website. Steve Souders, a former Yahoo! and now Google employee, has some good guidance on web performance on his website.

Google has also developed the SPDY protocol, which enhances the existing HTTP protocol in order to improve performance. From a client perspective, Google Chrome, Firefox and the Opera browser all support the protocol. On the web server side, both Apache and NGINX will support SPDY in future releases. Some Google services also support the protocol, Twitter has enabled SPDY on its web servers, and Facebook and WordPress have plans to implement it.

So, as you can see, performance matters.

Of course, some might say that if your business revolves around search and display advertising then it is naturally in your interest that web sites load quickly, so that:
  • You are able to crawl web pages faster;
  • Your bandwidth costs for crawling those sites are lower;
  • Sites that display Google ads will load faster and are therefore more likely to lead to click-throughs/impressions.

There may be some truth in that, but it is also of benefit to you; and besides, Google’s Internet connectivity is believed to be obtained primarily through peering agreements rather than by purchasing transit.

You can’t improve what you don’t measure

There are plenty of resources on the web with advice on how to improve the performance of your site (such as the Google and Yahoo! links provided earlier in this article). However, it is difficult to improve something you do not measure; more importantly, measuring lets you make smarter decisions about where to invest your time and money.

There are a number of tools that you can use to analyse the performance of a web page, some of which focus on the “front-end” and some on the back-end infrastructure serving your web site. The best place to start is the front-end, simply because this is what your users/customers will see and front-end optimisations are generally easier to implement, cost less and will deliver the most bang for your buck.

Browser Based Tools

The Developer Tools built into the Chrome and Safari web browsers include a number of panels for capturing and analysing information on the performance of a web page.


  • Network – the Network panel lets you inspect resources (CSS, images, JavaScript etc) that are downloaded over the network. The information is presented as a timeline waterfall, which is extremely useful for understanding how various page elements are retrieved and which elements are taking the longest to be retrieved.

  • Timeline – the Timeline panel gives you a complete overview of where time is spent when loading and using your web app or page. All events, from loading resources to parsing JavaScript, calculating styles, and repainting are plotted on a timeline.

  • Profiles – the Profiles panel lets you profile the execution time and memory usage of a web app or page so that you can optimise your code to improve performance.

  • Audits – the Audits panel lets you run tests to analyse a web page in terms of the overall page performance (how long it takes to fetch resources, parse and render the page etc) and the network utilisation.


To run an audit, click the Run button and then press the circular button to begin 'recording' the session.

The button will turn red to indicate that the information is being recorded.


The Firebug extension provides similar information to Developer Tools for the Firefox web browser.

PageSpeed Insights is a browser extension available for Firefox and Chrome that can analyse the performance of a web page and provide suggestions on how to improve performance. If you don’t use Firefox or Chrome you can also run PageSpeed from the following website: https://developers.google.com/speed/pagespeed/insights.




WebPageTest is an online tool used to measure the loading time of a web page. In addition to providing performance information and suggestions similar to those from Developer Tools and PageSpeed, the tool has a number of great features, such as:

  • The ability to run tests from different geographic locations
  • The ability to test with different browsers
  • Screenshots of what your page looks like from the point at which it starts to render.



Here’s an example run of WebPageTest against http://itv.com with the location set to London and the browser set to IE7. The summary page shows a snapshot of the page load time, content breakdown and PageSpeed scores.




Here’s a waterfall timeline from the Details tab:



Each row in the waterfall graph is a resource used by the page, and time is plotted on the horizontal axis. You can see that for the base HTML on www.itv.com much of the time is spent downloading the content (the blue part of the bar). The “Time to First Byte” measures how long it takes from the client sending the request to when it starts receiving the response. This figure provides an indication of the time spent processing the request on the server side. Note the emphasis on indication: the figure also includes other sources of latency (e.g. network latency on the internal network). If a significant proportion of the total page load time is spent on server-side processing, then this would be a good area to focus your optimisation efforts on.
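
If you want a quick, rough check of Time to First Byte from the command line, curl can print its own timing variables (note that this measures a single request from wherever you run it, so it includes your own network latency):

 curl -s -o /dev/null -w "TTFB: %{time_starttransfer}s  Total: %{time_total}s\n" http://www.itv.com/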

Ideally, what you want to see is parallel downloads rather than the sequential downloading of resources seen in this example. This is just the top half of the timeline, but you can see a number of JavaScript files and WebResource.axd files being loaded upfront, and these block the rendering of the page.

In all, around 172 requests are made; the Content Breakdown tab of WebPageTest shows that of these, 71 are images (40%) and 64 are JavaScript files (36%).

The Performance Review tab shows, for each of the resources on the page, whether various optimisations have been applied, such as Gzip compression, minified CSS and JS, and the use of Content Delivery Networks (CDNs). Minifying CSS and JS removes extra whitespace, formatting and comments, and in the case of JS will also “shrink” variable names.



The Domains tab provides a breakdown of the page content by domain.



The PageSpeed tab shows which Google PageSpeed rules the page adheres to and what improvements can be made.




In applying the suggestions from these tools, be aware that there is a trade-off between performance and cost, time or manageability. For example, minifying CSS and JS will improve performance but makes your code less readable and harder to debug. In this particular case, that can be remedied by making minification part of the deployment process, so you always maintain a copy of the CSS and JS that contains comments and meaningful variable names.
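
For example, a deployment step along these lines (using the YUI Compressor here; the jar name and file paths are purely illustrative) lets you keep the readable source files while serving the minified copies:

 java -jar yuicompressor.jar src/site.js -o build/site.min.js
 java -jar yuicompressor.jar src/site.css -o build/site.min.css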

Server side tools

As mentioned, you should look to optimise the “front end” first (i.e. minify CSS and JS, enable compression etc.), as these are generally quick wins. However, if the tools discussed above show that most of the page load time is due to server-side processing (a high Time to First Byte for resources may be indicative of this), then you need to look at the back end.

Unfortunately, measuring performance on the server side is a bit trickier and depends on the server-side language/platform you are using. That said, there are a number of commercial offerings in this area, such as NewRelic, Gomez and DynaTrace, that can provide some level of insight.

In terms of “free” tools, your best bet would be to instrument your code to collect metrics, as there don’t seem to be many open source tools in this area (at least none that I am aware of). Sure, there are monitoring tools such as Zabbix, Nagios and Zenoss, but these are focused on server-related monitoring rather than application performance measurement.
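
As a very basic sketch of what such instrumentation might look like in PHP (expensive_operation() is just a placeholder for your own code), you can time a block of code with microtime() and write the result to a log:

    <?php
    $start = microtime(true);

    expensive_operation();   // placeholder for the code you want to measure

    // microtime(true) returns seconds as a float, so convert to milliseconds
    $elapsedMs = (microtime(true) - $start) * 1000;
    error_log(sprintf('expensive_operation took %.2f ms', $elapsedMs));

Collecting timings like this for database queries or calls to external services gives you the same "you can't improve what you don't measure" benefit on the back end.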

That said, there are a number of free web server load testing/benchmarking tools which can be useful for understanding the load that your web application and server can handle. They can also be useful for measuring the impact of any optimisations that you make.


ApacheBench (ab) is a tool supplied with the Apache HTTP server that is used for benchmarking your web server. ab allows you to request a specific web page an arbitrary number of times and to simulate a simultaneous visit by any number of users (concurrent requests).
For example, here we send a total of 100 HTTP GET requests, 10 at a time (10 concurrent requests). Note the trailing forward slash – if you do not specify a particular page you need to include the trailing slash, otherwise ab will issue an invalid URL error message.

ab -n 100 -c 10 http://example.com/

The results include general information about the web server, time taken to run the tests, the number of complete requests, failed requests etc. The Connection Times section is of particular interest as it breaks down the timing information into:
  • Connection  – how long it takes for a connection to be opened to the web server
  • Processing – how long the request takes, from the connection being opened to the end of the response being received
  • Waiting – how long the client waits between sending the request and receiving the first byte of the response (an indication of server processing time)
  • Total – how long the entire transaction takes
If the waiting time is very high relative to the other metrics, this could indicate that your code is too slow.
Although you can use ab for POST requests, it is best suited to (and easier to use for) GET requests. Here’s an example of a POST request:

 ab -n 100 -c 10 -T 'application/x-www-form-urlencoded' -p post_params.txt http://www.example.com/form.php  

The -T command line switch specifies the content type, and post_params.txt is a text file containing the parameters for the POST request in plain text format, such as: key1=value1&key2=value2
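
For example, you could create the parameters file as follows (the field names here are purely illustrative):

 echo 'name=test&comment=hello' > post_params.txt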

Another Apache project, JMeter, which is GUI based, can also be used for benchmarking.

Siege, like ab, allows you to simulate user traffic to your web page. It also provides a number of additional features, such as:
  • The ability to run load simulations on a list of URLs you specify within a text file (see the example below).
  • The ability to pause (delay) before issuing the next request, simulating a user reading the page before moving on to another page on your web site.
The following command simulates 5 concurrent users for a period of ten seconds. The -d option specifies the delay between requests.

 siege -c 5 -d 1 -t 10S http://www.example.com/
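
To use the URL-file feature mentioned above, put one URL per line in a text file and point siege at it with the -f option (urls.txt is just an example file name):

 siege -c 10 -d 2 -t 1M -f urls.txt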

httperf is another tool for measuring web server performance. The following command causes httperf to open connections to host www.example.com, send a request on each connection, receive the reply, close the connection and then print some performance statistics. A total of 100 connections are created, at a fixed rate of 10 per second:

 httperf --hog --server www.example.com --num-conns 100 --rate 10 --timeout 5


For ASP.NET-based sites, the Web Capacity Analysis Tool (WCAT) can provide similar information to ab. The following tutorial provides a good overview of using WCAT.


Code Profiling


There are two commonly used tools for profiling PHP: Xdebug and XHProf. The latter was written by Facebook and has an associated GUI, XHGui.

While Xdebug can provide a lot of granular information, it has a significant performance overhead and therefore may not be suitable for production environments. For a good tutorial on using XHProf, check out the link below: http://phpmaster.com/the-need-for-speed-profiling-with-xhprof-and-xhgui/
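
As a rough sketch of how XHProf is typically used (the include paths depend on where the xhprof_lib directory is installed on your system, and run_my_application() is a placeholder for your own code):

    <?php
    // Start profiling, collecting CPU and memory data as well as wall time
    xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);

    run_my_application();   // the code you want to profile

    $xhprofData = xhprof_disable();

    // Save the run so it can be viewed in the XHProf (or XHGui) web interface
    include_once '/path/to/xhprof_lib/utils/xhprof_lib.php';
    include_once '/path/to/xhprof_lib/utils/xhprof_runs.php';
    $xhprofRuns = new XHProfRuns_Default();
    $runId = $xhprofRuns->save_run($xhprofData, 'my_app');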

Other web development languages most likely have similar tools; you should be able to find them through a Google search or by browsing Stack Overflow (http://stackoverflow.com/).

You should be aware that profiling code incurs a performance penalty of its own, and this should be taken into account when interpreting the results. Code profiling is complex and requires a great deal of expertise, so unless your code is really slow, go for the low-hanging fruit: optimise your CSS and JS and use caching and compression before optimising back-end code, or indeed before you look to add faster storage, CPUs or more RAM.
