
When I was a teenager I had a job at Burger King. If you have never noticed it before, there is a large digital clock next to the drive-through window that tells the employee how long the customer has been waiting for their order. Measuring drive-through wait time is a core KPI for these stores (of course, this was before I knew what a core KPI meant). In practice, this meant the drive-through attendant would often just hit the "served" button as soon as the car pulled up to the window, instead of when the customer was actually handed their order. To counter this, the stores started installing pressure pads underneath the drive-through windows to detect when the car actually drove away, instead of relying on the teenager at the window. This is the real reason they will sometimes ask you to pull forward so they can bring you your food, instead of making you wait at the window for a delayed order; it's not for your convenience, they want you off that pressure pad.

My point here is the importance of measuring the entire transaction when gauging performance. Web developers face the same problem as the fast food drive-through. The next time you hear someone talk about how quickly their page responded, ask yourself: are they talking about the amount of time it took for the customer to actually view and interact with the page, or the amount of time it took for the HTML to be received by the browser (the "first byte" measurement)?

Measuring the point at which the HTML page was served up by the server is like a fast food chain only measuring the time it took to get the customer to the window. It's part of the story, but as far as the customer is concerned it's not as important as the time it takes to complete the entire transaction so they can begin to chow down. According to Steve Souders, Google's Head Performance Engineer, 80 to 90% of the performance gains on a web page are to be found on the front-end, not on the back-end or network layer; yet many developers still ignore the time it takes the browser to actually process the HTML DOM and display the initial page.

As developers we should be measuring the "to screen" time: the time it takes for the page to actually display to the user. However, this can be a tricky measurement to capture. That is why I took special notice when I recently stumbled on the Navigation Timing API while watching Steve Souders' InfoQ video. This API presents some interesting possibilities for performance timing, covering server performance, network performance, and browser rendering performance. As a test of this potential I created a little JavaScript component. I have added it to GitHub as JSPagePerf, if you would like to check it out. Here is an example of the output when I timed a page on my local machine (which is why there are no DNS/TCP timing results):


[Download JSPagePerf Component]
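To give a flavor of the kind of math involved, here is a minimal sketch (not the actual JSPagePerf source) of how the phases can be derived from a Navigation Timing record. The `timing` argument is the browser's `performance.timing` object, or any object with the same fields:

```javascript
// Illustrative sketch: derive phase durations (in milliseconds) from a
// Navigation Timing record. Pass the browser's performance.timing object.
function pagePhases(timing) {
  return {
    dns: timing.domainLookupEnd - timing.domainLookupStart,
    tcp: timing.connectEnd - timing.connectStart,
    firstByte: timing.responseStart - timing.requestStart, // server think time
    download: timing.responseEnd - timing.responseStart,
    domProcessing: timing.domComplete - timing.domLoading,
    toScreen: timing.loadEventEnd - timing.navigationStart // whole transaction
  };
}
```

In a browser you would call `pagePhases(performance.timing)` after the window's load event has fired, since `loadEventEnd` is zero before then. On a local page the DNS and TCP figures come out as zero, which is exactly what my test run above showed.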

There are two things to keep in mind when looking at this component and the Navigation Timing API. First, this timing does not include any asynchronous events that take place after the page loads (such as AJAX calls). Capturing the timing of those events requires a different method, which in many cases will be specific to the application and its implementation. Second, the Navigation Timing API is only supported in IE9+, Firefox 19+, and Chrome 24+.
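Because of that limited browser support, any code using the API should feature-detect it rather than assume it exists. Here is one hedged way to do that (the vendor-prefixed fallback names are an assumption based on early prefixed builds, and the `win` parameter exists only to make the check testable outside a browser):

```javascript
// Return the Navigation Timing record if the browser exposes it, else null.
// Accepts a window-like object so the check can be exercised in tests.
function getNavigationTiming(win) {
  win = win || (typeof window !== 'undefined' ? window : {});
  var perf = win.performance || win.msPerformance || win.webkitPerformance;
  return (perf && perf.timing) ? perf.timing : null;
}
```

In an unsupported browser this returns `null`, and the component can simply skip its measurements instead of throwing.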

After experimenting with the API and creating this basic component, I am quite excited about its potential. Not only can it help developers fine-tune the performance of their applications, it could easily be used to create intelligent applications that change their behavior for users with slower network response times or browser rendering speeds. After all, this API seems far more elegant than installing pressure pads under the web servers.
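To make that "intelligent application" idea concrete, here is a toy sketch of the decision. The threshold and the mode names are invented for illustration; a real application would pick values suited to its own pages:

```javascript
// Toy sketch: choose a delivery mode from a measured "to screen" time.
// timing is a Navigation Timing record; thresholdMs is a made-up cutoff.
function chooseMode(timing, thresholdMs) {
  thresholdMs = thresholdMs || 3000;
  var toScreen = timing.loadEventEnd - timing.navigationStart;
  return toScreen > thresholdMs ? 'lightweight' : 'full';
}
```

A slow first page load could flip subsequent pages into a lighter mode, perhaps with fewer images or deferred widgets, without any server-side guesswork about the user's connection.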


If you are interested in learning more about the Navigation Timing API, check out Sam Dutton's great article on it over at HTML5 Rocks.