Speedtest Page Optimization

Over the last few weeks I optimized the page load speed of the new typing test version and wanted to record my steps.

Using Google Page Speed Addon

I first installed the Google “Page Speed” add-on, which analyzes the page and suggests optimizations. Without any optimizations the page receives 86 out of 100 points, pretty good 🙂

Using Yahoo’s “Best practices for speeding up your website” article

I’ll also use the Best Practices for Speeding Up Your Web Site article from Yahoo to optimize the Typing Test.

Step 1: Creating a sprite

Before creating the sprite, it’s good practice to replace your current images with the optimized versions the Google Page Speed add-on offers you.

The first thing I will do is create a sprite of all the graphic files (one sprite contains all the images of the website), thereby reducing HTTP requests to a minimum. I will use Photoshop to create the sprite, but there are also a few tools out there that can help you (search for “sprite generator”).

One thing you have to keep in mind while creating your sprite is that you can’t group “repeatable” images into the same file as “non-repeatable” images. If you use “background-repeat” in your CSS, you have to create an extra file for every repeat direction, e.g.:

  • one image sprite for all non-repeatable images (like the logo, icons and user-interface elements)
  • one for “repeat-x” images (these are probably the ones you will also use for your backgrounds etc.)
  • one for “repeat-y” images (sometimes you want to repeat a background vertically, e.g. for a box with a fixed width but a dynamic height)

I used the non-repeatable and the repeat-x sprites for the speedtest. But I also had an image that repeats in both directions. Sadly, those can’t be combined with the other sprites and have to be loaded separately (one HTTP request for every “repeat-all” image).
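To illustrate how a sprite is used in CSS: each element gets the sprite as its background and shifts the visible region with background-position. The file names and pixel offsets below are made up for the example.

```css
/* Hypothetical sprite usage: icons.png holds all non-repeatable images.
   The 16x16 home icon is assumed to sit 32px from the top of the sprite. */
.icon-home {
  width: 16px;
  height: 16px;
  background: url("icons.png") no-repeat 0 -32px;
}

/* repeat-x images must live in a separate sprite file */
.header {
  background: url("bg-repeat-x.png") repeat-x 0 0;
}
```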

Step 2: Caching

10-fast-fingers.com has around 58% new visitors and 42% returning visitors. Caching can therefore reduce the server load significantly. I’m not really familiar with the best approaches to caching static files; I just added this code to my .htaccess and hoped for the best:

# CONFIGURE media caching (requires mod_expires)
<IfModule mod_expires.c>
  # only cache static assets; an unrestricted ExpiresDefault would also
  # cache the HTML pages themselves for a year
  <FilesMatch "\.(css|js|png|jpe?g|gif|ico)$">
    ExpiresActive on
    ExpiresDefault "access plus 1 year"
  </FilesMatch>
</IfModule>

seems to do the trick…

One thing to keep in mind with this approach: if you change one of your static files, don’t forget to add a “random parameter” to force the client to load the updated file. E.g. you change your style.css; some clients won’t load the updated file (because the cached copy hasn’t reached its expiration date). By appending a parameter like “style.css?version=23423797” the browser is forced to reload the style.css file.
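Instead of picking a random number by hand, the file’s modification time makes a handy version parameter, since it changes exactly when the file does. A minimal PHP sketch (the file name is just an example):

```php
<link rel="stylesheet"
      href="style.css?version=<?php echo filemtime('style.css'); ?>">
```

As long as style.css is untouched the URL stays stable and the cached copy is used; as soon as you edit the file, the URL changes and every browser fetches the new version.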

Step 3: combining and minimizing files

To further reduce the number of HTTP requests, I need to combine as many static files as possible. I could create one big minified JS file containing all the JavaScript code and one big minified CSS file containing all the CSS code, but that would be hard to maintain: every time I changed something, I would have to recreate these files (and when humans repeat things, errors occur).
Instead I will use a PHP script which merges and minifies all the necessary files and then gzips the result. If the combined file already exists (and no source files have changed) it will serve the cached one; otherwise it will recreate the file on the fly.
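The linked article uses a PHP script for this; the core idea (merge, crudely minify, gzip, and rebuild only when a source file changed) can be sketched in shell. The file names and the sed-based “minification” here are placeholders, not the real script:

```shell
#!/bin/sh
# Sketch of the merge + minify + gzip + cache idea (the real site uses PHP).
# a.css / b.css / all.min.css.gz are made-up names for this demo.
set -e

cat > a.css <<'EOF'
body { color: red; }
/* a comment that the minifier should drop */
EOF
cat > b.css <<'EOF'
h1   {   font-size : 2em ; }
EOF

OUT=all.min.css.gz

# Rebuild only if the cached file is missing or any source file is newer.
if [ ! -f "$OUT" ] || [ a.css -nt "$OUT" ] || [ b.css -nt "$OUT" ]; then
  # very naive minify: drop single-line /* ... */ comments, squeeze whitespace
  cat a.css b.css | sed 's|/\*[^*]*\*/||g' | tr -s ' \t' ' ' | gzip -c > "$OUT"
fi

gunzip -c "$OUT"
```

On the second run the `-nt` checks fail and the cached gzip file is served as-is, which is exactly the behavior the PHP approach gives you per request.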

The whole process is described here: Make your pages load faster by combining and compressing javascript and css files

This method is especially useful if you have a lot of stylesheets and scripts, and it even gives you the possibility to split your stylesheets into logical parts. But keep in mind that you probably won’t be able to merge all of your scripts and styles. Especially with complex jQuery plugins (those with their own file structure) or IE fallback stylesheets it might be tricky to integrate them (but you can still load those the way you used to).

There is more to optimizing a website for performance than what I did here. Still, these steps account for at least 50% of the performance gain, if not more. If you have any additional (and easy to implement) suggestions, let me know in the comments 🙂