Google Maps Places API Nearby (Example)


A Nearby app example using the Google Maps Places API to find places like ATMs, restaurants and airports near you (with full source code).


Etsy API Search with jQuery (Example)


Playing around with the Etsy API. Here’s a working example of how to make a simple search request to the Etsy API with jQuery.


Top Bootstrap Widget Components add-ons


Must-have Bootstrap widget/component extensions that enhance and extend Twitter Bootstrap for effective development with the framework.

Top Workflow Tools for Front-end Developers


A curated list of workflow tools for front-end developers to speed up your existing workflow and take it to the next level.

If you want to contribute, please comment and your resource will be added.


Speed up your site load time to meet expectations of visitors and search engines



Even with increasingly fast Internet access and responsive browsers, online visitors have little-to-no tolerance for slow websites. No pun intended, but the quest for faster websites is a never-ending race for web designers and developers alike. Several surveys and research projects have revealed that a strong correlation exists between page speed and overall user experience, which ultimately governs conversion rate and engagement levels. According to some of these surveys, nearly half of all web users expect a site to load within three seconds or less. Further, a one-second delay in page response decreases user satisfaction by 16 percent.

In addition to affecting user experience, site load time is now also a factor search engines use to determine rankings. And, of course, optimizing a web page can help significantly in reducing operating costs such as bandwidth and other server resources. Several techniques can help in optimizing site speed, and the good news is that most of them aren’t that difficult to implement.

Our featured article this month teaches you how to speed up your site to meet the expectations of visitors and search engines. We’ve compiled a list of tools to help you measure your site’s speed and identify bottlenecks. We’ll also review techniques like CSS sprites, JavaScript minification, lazy loading, and CDN.

Tools to Measure your Site Speed

Before undertaking any site optimization tweaks, it might be wise to run your site against a site speed tool. This will allow you to benchmark your site and identify bottlenecks that are slowing your pages. Below is a selection of online services that you can use to measure your site’s speed:

Pingdom offers a really comprehensive load time testing tool. By simply typing in your URL, Pingdom will help you analyze the load speed of your websites and learn how to make them faster. It lets you identify what about a web page is fast, slow, too big, what best practices you’re not following, and so on.

GTMetrix has a suite of features and options to make optimizing your website speed clear and easy. By signing up for an account, you will be able to track your page load time history in interactive graphs, test from multiple locations and set up monitored alerts.

YSlow Plugin
YSlow is a popular Firefox plugin that sits in your browser and helps you analyze web pages and understand why they’re slow, based on Yahoo!’s rules for high performance web sites.

Recommended Reading on Site Speed Optimization

To accelerate your site effectively, you might want to go through the following articles. They will teach you the basics of optimization and provide handy tips.

Best Practices for Speeding Up Your Web Site 
Yahoo’s ‘Best Practices for Speeding Up Your Web Site’ is a must-read. This article outlines 35 best practices divided into 7 categories for making web pages fast.

Google’s Make the Web Faster
Google has a dedicated resource center that covers several aspects of page speed optimization. You will find tools that will help you analyze and optimize your site as well as articles and best practices on caching, request overhead, payload size, browser rendering and mobile speed optimization.

Why You Need a Seriously Fast Website
A recently-published article that explains how your page load time can affect every metric you care about, from bounce rate and search ranking to conversions, pageviews and even revenue.


Caching

Caching is a popular method used to speed up pages – it can be implemented on the client-side (browser) or server-side.

Caching Tutorial for Web Authors and Webmasters
A great introductory article on caching. Through this article, you will learn what exactly a web cache is and when it is most effective, along with notes on implementing caching on the server and client sides.

Caching HTTP Headers with Cache-Control
An article that explains how you can use Cache-Control: max-age to inform browsers that a component won’t change for a defined period. This way you can avoid unneeded further requests if the browser already has the component in its cache, and primed-cache page views will therefore be faster.
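For illustration, a response that may be cached for 30 days would carry a header like this (the value is illustrative):

```http
Cache-Control: max-age=2592000
```

max-age is expressed in seconds; 2592000 is 30 days.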

HTML5: Offline Application Caching 
HTML5 introduced several new features to the web, such as multi-threaded JavaScript, cross-document messaging and local storage. This article will teach you how to use the offline application caching feature of HTML5.

PHP Caching Class
With this script you can easily make your PHP scripts 50% faster by caching SQL results, big arrays or, for example, HTML tables that contain dynamic content.

Lazy Loading

Lazy loading is the opposite of pre-loading: elements are loaded on demand, only when they are needed. It is a commonly used technique for long pages with graphical content – images are loaded only when the user reaches a certain point (usually when they scroll further down the page). It is, however, not limited to images – you can use it to lazy-load HTML content, JavaScript files, and much more.
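At its heart, any lazy-loader is a visibility check run on scroll events; the decision itself can be sketched as a pure function (the function, parameter names and numbers below are illustrative, not taken from any particular plugin):

```javascript
// Decide whether an element has scrolled into view (plus an optional
// threshold so images can start loading slightly before they appear).
// All values are pixel offsets measured from the top of the document.
function shouldLoad(elementTop, scrollTop, viewportHeight, threshold) {
  return elementTop < scrollTop + viewportHeight + threshold;
}

// In a browser you would run this on scroll and, once it returns true,
// swap a data-src attribute into src to trigger the actual download.
console.log(shouldLoad(2000, 0, 800, 0));    // false: far below the fold
console.log(shouldLoad(2000, 1300, 800, 0)); // true: 1300 + 800 > 2000
```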

Lazy Load jQuery Plugin
Lazy Load is a jQuery plugin that delays the loading of images in long web pages. Images outside of the viewport (the visible part of the web page) won’t be loaded before the user scrolls to them.

YUI Image Loader
The ImageLoader Utility is a Yahoo User Interface (YUI) component that allows you as an implementer to delay the loading of images on your web page until such a time as your user is likely to see them.

Image Sprites

Image sprites are becoming increasingly popular as they offer a great way to reduce site load time and server resources. The technique works by combining several images into a single larger image to avoid multiple HTTP requests. Below is our selection of resources that will help you get started with image sprites:

The Mystery of CSS Sprites
A nice introductory article on CSS sprites – you will learn what they are, when to use them, what’s the best approach to slicing your images, etc.

SpriteBox eases the process of creating CSS classes and IDs from a single sprite image. After you upload your image, you use its selection tool to mark the position of the individual sprites, and SpriteBox will generate the relevant horizontal and vertical background positions.

Creating CSS Sprites with PHP
The one major drawback of CSS Sprites is maintaining the images and recompiling and uploading every time one of the sprites changes. CSS Auto Sprites changes all that by automatically building the CSS sprite image from files in a directory as well as generating the CSS code for you.
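As a quick illustration of the technique these resources cover, each icon is a fixed-size window onto one shared image, shifted into place with background-position (class names, file name and offsets are illustrative):

```css
/* One physical image, many icons; coordinates are illustrative */
.icon            { background: url(sprite.png) no-repeat; width: 16px; height: 16px; }
.icon--search    { background-position: 0 0; }
.icon--cart      { background-position: -16px 0; }
.icon--user      { background-position: -32px 0; }
```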


Minification

Minification is the process of removing unnecessary characters from code to reduce its size, thereby improving load times. When applied to your CSS and JS files, you can achieve up to a 20% reduction in file size. Here are some tools that will allow you to minify your files:

Minify is a PHP5 app that helps you follow several of Yahoo!’s Rules for High Performance Web Sites. It combines multiple CSS or JavaScript files, removes unnecessary whitespace and comments, and serves them with gzip encoding and optimal client-side cache headers.

Microsoft Ajax Minifier
The Microsoft Ajax Minifier enables you to improve the performance of your web applications by reducing the size of your Cascading Style Sheet and JavaScript files.

Online JavaScript/CSS Compression Using YUI Compressor
An online tool provided by YUI that will help you compress your JavaScript and CSS files. You will be given several compression options and you can upload or copy-and-paste your source.
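To make the idea concrete, here is a toy minifier that strips comments and collapses whitespace – a naive sketch only, nothing like as robust as the tools above:

```javascript
// Naive CSS minifier: strips /* */ comments and collapses whitespace.
// A toy sketch only - real tools handle strings, data URIs and many
// other edge cases that this deliberately ignores.
function naiveMinifyCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')   // remove comments
    .replace(/\s+/g, ' ')               // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, '$1')  // drop spaces around punctuation
    .trim();
}

const input = `
/* button styles */
.btn {
  color: #fff;
  background: #333;
}`;
console.log(naiveMinifyCss(input)); // -> ".btn{color:#fff;background:#333;}"
```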

CDN Networks

CDN networks are distributed servers spread across different geographic locations aimed at serving web objects (text, scripts), files (documents, software) and multimedia content (audio, video). By serving content from a location that is closest to the end user, CDN networks are able to dramatically speed up the time your site takes to load. Below is a list of popular CDN networks:

Amazon CloudFront CDN
Amazon CloudFront is a web service for content delivery. It integrates with other Amazon Web Services to give developers and businesses an easy way to distribute content to end users with low latency, high data transfer speeds, and no commitments.

Windows Azure Content Delivery Network
The Windows Azure Content Delivery Network (CDN) offers developers a global solution for delivering high-bandwidth content that’s hosted in Windows Azure. The CDN caches publicly available objects at strategically placed locations to provide maximum bandwidth for delivering content to users.

Google PageSpeed Service
PageSpeed Service is an online service to automatically speed up loading of your web pages. PageSpeed Service fetches content from your servers, rewrites your pages by applying web performance best practices and serves them to end users via Google’s servers across the globe.

CloudFlare designed its CDN (Content Delivery Network) without the legacy of the last 15 years. Their proprietary technology takes advantage of recent changes to hardware, web server technology and network routing.

Hosted Libraries

Based on the concept of a CDN, hosted libraries allow you to call popular JavaScript libraries hosted on a content distribution network. Below is a list of some of the hosted libraries:

Google Hosted Libraries
Google hosts several popular JavaScript libraries including Dojo, Ext Core, jQuery, jQuery UI, MooTools and Prototype. It also makes an effort to make the latest stable versions available as soon as they are released, and will host prior releases indefinitely as well.

Microsoft AJAX Content Delivery Network 
Microsoft also runs its own CDN for hosting popular libraries. It includes jQuery and several ASP.NET JavaScript files, such as those for ASP.NET Web Forms and ASP.NET MVC.

One of the issues with the Google and Microsoft CDNs is that they only host popular libraries. CDNJS offers a similar CDN service, but it hosts over 250 libraries, making it the most comprehensive hosted library service.

Front-end performance for front-end developers and web designers


It’s hard, if not impossible, to deny that performance is by far one of the most critical aspects of any decent web project, be it a small portfolio site, a mobile-first web app, right through to a full-scale ecommerce project. Studies, articles and personal experience all tell us that fast is best.

Performance is not only hugely important, it is incredibly interesting, and something I am getting more and more involved in at both work (I’m forever pestering our Lead Performance Engineer) and in side projects and CSS Wizardry (I’m forever pestering Andy Davies).

I’m going to share – in this massive article – a load of quick, simple and downright intriguing bits of perf knowledge to act as a primer for web designers and front-end developers alike; hopefully this article will serve as a decent introduction for anyone wanting to start learning about perf, and making their front-ends blazingly fast. These tips are all things you can implement by yourself very easily. It just takes a bit of cunning and some basic knowledge of how browsers work and you’re ready to game the system!

This huge post won’t have loads of confusing graphs and numbers to crunch, but instead concerns itself with theory and first-hand performance techniques that I have arrived at as a result of reading, monitoring, collaborating and tinkering (I spend a lot of time glued to CSS Wizardry’s waterfall charts). I will also link to other articles on similar topics to help reinforce any key points. Enjoy!

N.B. This article does require a small amount of basic performance knowledge up-front, but anything covered that you aren’t familiar with should be just a Google search away!

  1. The basics
    1. Styles at the top, scripts at the bottom
    2. Make fewer requests
    3. Maximising parallelisation
  2. HTTP requests and DNS lookups
    1. DNS prefetching
  3. Resource prefetching
  4. CSS and performance
  5. Gzipping and minifying
  6. Optimising images
    1. Spriting
    2. Retina images
    3. Progressive JPGs
    4. Use no images at all
  7. Further reading

The basics

There are a few things all designers and front-end developers will likely know about performance, things like making as few requests as possible, optimising images, putting stylesheets in the <head>, putting JS before the </body>, minifying JS and CSS and so on. These fundamentals will already get you on your way to faster experiences for users, but there’s more… much more.

It is also very important to remember that – for all they give us headaches every day of our working lives – browsers are very clever; they do a lot to optimise performance for you. A lot of perf know-how is merely understanding, exploiting and manipulating what a browser does for us already.

Styles at the top, scripts at the bottom

This is a really basic rule, and one that should be super easy to follow most of the time, but why does it matter? Put very shortly:

  • CSS blocks rendering, so you need to deal with it right away (i.e. at the top of the document, in your <head>).
  • JS blocks downloads, so you need to deal with these last to ensure that they don’t hold up anything else on the page.

CSS blocks rendering because of a browser’s desire to render pages progressively; browsers want to render things as they get to them, and in order. If styles are a long way down the page the browser can’t render that CSS until it gets to it. This is so that the browser can avoid redraws of styles if they alter something that was previously rendered further up the document. A browser won’t render a page until it has all the available style information, and if you put that style information at the bottom of the document you’re making the browser wait, blocking rendering.

So, you put your CSS at the top of the page so that the browser can start rendering right away.

JavaScript blocks downloads for a number of reasons (this is the browser being clever again) but firstly, we need to know how downloading assets in browsers actually happens; simply put, a browser will download as many assets as it can from a single domain in parallel. The more domains it is pulling from, the more assets can be downloaded, in parallel, at once.

JavaScript interrupts this process, blocking parallel downloads from any and all domains, because:

  • The script being called might alter the page, meaning the browser will have to deal with that before it can move on to anything else. In order for it to deal with that eventuality it stops downloading anything else in order to focus solely on that.
  • Scripts usually need to be loaded in a certain order for them to work, for example, loading jQuery before you load a plugin. Browsers block parallel downloads with JavaScript so that they don’t start downloading jQuery and your plugin at the same time; it should be pretty obvious that if you were to start downloading both in parallel, your plugin could arrive before jQuery does.

So, because browsers stop all other downloads whilst JavaScript is being fetched, it is usually a good idea to put your JavaScript as late in the document as possible. I’m sure you’ve all seen blank sections of pages where a third party piece of JS is taking ages to load and blocking the fetching and rendering of the rest of the page’s assets; this is JavaScript’s blocking in action.

Apparently, however, modern browsers get smarter still. I’m going to give you an excerpt from an email from Andy Davies to me, because he explains far better than I can:

Modern browsers will download JS in parallel and only rendering is blocked until the script has been executed (it obviously has to be downloaded too).

Downloading of the script will often be done by the browser’s look ahead pre-loader.

When a browser is blocked from rendering a page, e.g. waiting for CSS, or for JS to execute, the look ahead pre-parser scans the rest of the page looking for resources it could download.

Some browsers e.g. Chrome, will prioritise the download of assets e.g. if scripts and images are both waiting to be downloaded it will download the script first.

Smart stuff!

So, to allow a page to begin rendering as fast as possible, put your styles at the top. To prevent JS’ blocking affecting your rendering, put scripts at the bottom.
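Put together, the canonical document shape looks something like this (file names are illustrative):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- CSS first: it blocks rendering, so get it to the browser early -->
  <link rel="stylesheet" href="css/main.css">
</head>
<body>
  <p>Page content…</p>
  <!-- Scripts last: they block downloads, so keep them out of the way -->
  <script src="js/main.js"></script>
</body>
</html>
```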

Make fewer requests

The other really obvious and basic performance optimisation is simply downloading less. Every asset a page requires is an extra HTTP request; the browser has to go off and get every single asset required to render a page. Each of these requests can incur DNS lookups, redirects, 404s etc. Every HTTP request you make, whether it is for a stylesheet, an image, a web font, a JS file, you name it, is a potentially very expensive operation. Minimising these requests is one of the quickest performance optimisations you can make.

Going back to browsers and parallelisation; most browsers will only download a handful of assets from each referenced domain at a time, and JS, remember, will block these downloads anyway. Every HTTP request you make should be well justified, and not taken lightly.

Maximising parallelisation

In order to get the browser to download more assets in parallel, you can serve them from different domains. If a browser can only fetch, say, two assets at once from a domain, then serving content from two domains means it can fetch four assets at once; serving from three domains means six parallel downloads.

A lot of sites have static/asset domains; Twitter and Facebook, for example, both serve their static assets from dedicated asset domains.

Using these static asset domains, Twitter and Facebook can serve more assets in parallel; assets from the main domain and the asset domain can be downloaded in tandem. This is a really simple way to get more concurrent downloads happening on your page, and even better when coupled with actual CDN technology that can help decrease latency by serving assets from a more suitable physical location.

This is all well and good, but later we’ll discuss how serving from subdomains can actually, in certain circumstances, be detrimental to performance.

So, there are our performance basics out of the way:

  • Put stylesheets at the top of a document
  • Put JavaScript at the bottom (where possible)
  • Make as few HTTP requests as possible
  • Serving assets from multiple domains can increase the number of assets a browser can download in parallel.

HTTP requests and DNS lookups

Every time you request an asset from any domain, out goes an HTTP request with the relevant headers, the resource is reached, and a response is sent back. This is a vast over-simplification of the process, but it’s about as much as you really need to know. This is an HTTP request, and all assets you reference are subject to this round trip. These requests are the main bottleneck when it comes to front-end performance because, as we covered, browsers are limited by how many of these requests can happen in parallel. This is why we often want to use subdomains; to allow these requests to happen on several domains, allowing a greater number of requests to happen at the same time.

A problem with this, however, is DNS lookup. Each time (from a cold cache) a new domain is referenced, the HTTP request is subject to a time-consuming DNS lookup (anywhere between 20 and 120 milliseconds) in which the outgoing request looks up where the asset actually lives; the internet is tied together by IP addresses, which are referenced by hostnames, which are managed by DNS.

If each new domain you reference has the upfront cost of a DNS lookup, you have to be sure that it’s actually going to be worth it. If you are a small site (like CSS Wizardry, for example) then serving assets from a subdomain will likely not be worth it; the browser can probably fetch several under-parallelised assets from one domain quicker than it can perform DNS lookups across multiple domains and parallelise those.

If you have perhaps a dozen assets, you might want to consider serving them from one subdomain; an extra DNS lookup is probably worth it in order to better parallelise that amount of assets. If you have, say, 40 assets, it might be worth sharding those assets across two subdomains; two extra DNS lookups will be worth it in order to serve your site from a total of three domains.

DNS lookups are expensive, so you need to determine which is more suitable for your site; the overhead of lookups or just serving everything from one domain.

It is important to remember that as soon as the HTML is requested from a given host, the DNS lookup for that host has already happened, so subsequent requests to anything on the same domain are not subject to further DNS lookups.

DNS prefetching

If you, like me, want to have a Twitter widget on your site, and Analytics, and maybe some web fonts, then you will have to link to some other domains, which means you’ll have to incur DNS lookups. My advice would always be not to use any and every widget without properly considering its performance impact first, but for any you do deem necessary, the following is useful…

Because these things are on other domains it does mean that, for example, your web font CSS will download in parallel to your own CSS, which is a benefit in a way, but scripts will still block (unless they’re async).

The problem here, really, is the DNS lookups involved with third party domains. Fortunately, there is a super quick and easy way to speed this process up: DNS prefetching.

DNS prefetching does exactly what it says on the tin, and could not be simpler to implement. If you need to request assets from, say, a widget provider’s domain, then you can prefetch that hostname’s DNS by simply adding this early on in the <head> of your page:
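A minimal example of the link element in question (the hostname is illustrative, not a real widget host):

```html
<!-- Resolve the widget host's DNS early; hostname is illustrative -->
<link rel="dns-prefetch" href="//widgets.example.com">
```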

That simple line will tell supportive browsers to start prefetching the DNS for that domain a fraction before it’s actually needed. This means that the DNS lookup process will already be underway by the time the browser hits the <script> element that actually requests the widget. It just gives the browser a small head start.

This simple link element (which I use on CSS Wizardry) is totally backward compatible and will not negatively impact performance. Think of it like performance progressive enhancement!


Resource prefetching

As well as prefetching DNS, it can be handy to prefetch any resources your site will need as well. In order to know what we might want to prefetch, we first must understand how and when a browser would usually request a resource.

Web fonts and images referenced in CSS behave in much the same way; the browser will start downloading them once it hits a piece of HTML that requires them. Like I mentioned before, browsers are very clever and this is another great example of that. Imagine if browsers downloaded images referenced in CSS as soon as they saw the declarations:
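Something like the following, where one stylesheet declares a different image for each page of a four-page site (selectors and file names are illustrative):

```css
/* Illustrative: one image per page, all declared in one stylesheet */
.page--home      { background-image: url(home.jpg); }
.page--about     { background-image: url(about.jpg); }
.page--portfolio { background-image: url(portfolio.jpg); }
.page--contact   { background-image: url(contact.jpg); }
```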

If the browser didn’t wait until it encountered the HTML that needed these images, then hitting the home page would download all four of them. This is wasteful, so browsers make sure they definitely need an image before they start downloading it. The problem here is that the download, therefore, doesn’t happen until pretty late on.

If we can be completely sure we always want a certain CSS image used on every page, then we can trick the browser into downloading it early, before it encounters the HTML that needs it. To do this is incredibly simple, but can be a little dirty, depending on how you go about it.

The dirty way, and likely most bulletproof, is to have a hidden <div> in every page which houses the CSS images as <img> elements with empty alt attributes. I do this with CSS Wizardry’s sprite; because I know it is needed on every page, I can confidently prefetch it by referencing it in my HTML. The way browsers handle inline <img>s is quite nice in that they are prefetched and grabbed super early, so by making the browser load my sprite as an <img> in my markup, it can begin downloading it before the CSS requires it. I can get a head start on that download by referencing it (hidden) in my HTML first.
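As a sketch, that hidden prefetching <div> might look like this (class name and path are illustrative):

```html
<!-- Hidden on every page purely so the sprite starts downloading early -->
<div class="hidden">
  <img src="/img/css/sprite.png" alt="">
</div>
```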

There is some confusion around the second, ‘cleaner’ way, which looks a lot like the DNS prefetching example:
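It looks something like this (the path to the sprite is illustrative):

```html
<!-- Explicitly ask the browser to fetch the sprite ahead of time;
     the path is illustrative -->
<link rel="prefetch" href="/img/css/sprite.png">
```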

This explicitly tells my browser to start prefetching my sprite, regardless of any decision it might make after evaluating my CSS.

The confusion lies around a seeming disparity between two articles; based on this article from MDN, it seems that prefetch is a hint for browsers to maybe begin prefetching an asset if they’re idle. Contrarily, however, this article from Planet Performance seems to suggest the browser will always prefetch the assets if it supports rel="prefetch", and makes no mention of idle time. Waterfalls I’ve looked at seem to suggest the latter is true, but an odd WebKit quirk whereby you can’t observe prefetch in action if you have Developer Tools open (talk about Schrödinger’s Perf…) means I can’t be 100% sure. Any clarification on this would be greatly appreciated.

I mentioned that fonts and images work very much the same way; the above rules apply exactly the same for font-files, but you can’t load a font in a hidden <div> (you’ll need to use the prefetch link).

So basically, what we’re doing here is ‘tricking’ the browser into downloading an asset ahead of time, so that by the time it does come to applying our CSS, it already has the resource downloaded (or at least on its way). Nifty!


CSS and performance

A lot of advice states that, if you are using asset domains, you should serve all static assets from them; this includes CSS, JS, images etc.

One thing we found at work, though, is that you should not serve CSS from an asset/subdomain…

Remember earlier, when we discussed how CSS blocks rendering? A browser wants to get hold of your CSS as soon as it can, if not sooner; CSS is on your critical path. Your critical path is the necessary journey between a user requesting your page and then actually seeing something. Because it blocks rendering, CSS is on the critical path, JS and images aren’t. You want to keep this journey along the critical path as quick as possible, this means no DNS lookups.

At work, we were building a site whose staging environment served all assets from the same host, but when it came round to making the staging environment more live-like, we began serving our assets from separate subdomains. This meant all images, JS, CSS, fonts etc. were coming from different domains, thus incurring DNS lookups. The problem here is that, from a cold cache, the DNS lookup required to start grabbing the CSS file was actually slowing the critical path right down. Our graphs spiked massively, indicating a delay that in theory shouldn’t have happened; best practice dictates that you should shard lots of assets over subdomains, right? Not CSS. The DNS lookup required took up a substantial amount of time, which delayed rendering of the page.

CSS is one of performance’s worst enemies, as outlined by Stoyan Stefanov, because of this rendering blockage. It’s also worth noting that a browser will download all CSS before it will begin rendering your page. This means that your print.css will be requested even if the browser is only rendering a page on screen. It means that any stylesheets that are only used based on a media query (e.g. <link rel="stylesheet" media="screen and (min-device-width: 800px)" href="desktop.css">) will still be downloaded even if they aren’t needed.

That said, I have been informed by Andy Davies that WebKit actually prioritises its CSS download order so that only CSS needed to initially render the page comes first, and other styles, e.g. print.css are deferred until as late as possible. Awesome!

Knowing this about CSS allows us to make a few decisions, all based around the knowledge that CSS blocks rendering, is all requested, and is on your critical path:

  • Never serve it from a static/asset domain because this can incur DNS lookups that further delay rendering.
  • Serve it early so the browser can get on with things.
  • Concatenate it. Because a browser will fetch all CSS anyway, you’re better off squishing it all into the same HTTP request.
  • Gzip and minify it so that the browser has less to download.
  • Cache the hell out of it so that the whole process above happens as infrequently as possible.

CSS is on your critical path, so you need to be done with it early; it blocks rendering, meaning slow experiences for your users. Moving CSS to a subdomain killed our performance.


Gzipping and minifying

These are two really simple things you can (and should) be doing with your text assets; minifying them to remove any comments and whitespace, and gzipping them to compress them down further.

If you were to pick one over the other, gzip alone is more effective than minification alone. However, you should really do both, if you can.

Turning gzip on usually requires a little .htaccess trickery but, as my good friend Nick Payne points out, .htaccess is actually not particularly performant from a server-side perspective; .htaccess is evaluated on every incoming request, so it’s actually a lot of overhead.

This, taken from the Apache docs:

You should avoid using .htaccess files completely if you have access to the httpd main server config file. Using .htaccess files slows down your Apache HTTP server. Any directive that you can include in a .htaccess file is better set in a Directory block, as it will have the same effect with better performance.

If you do only have access to .htaccess then I wouldn’t worry; the cost of this overhead usually won’t be a concern. Enabling gzip with .htaccess is actually really simple to implement. Minification isn’t necessarily so easy unless you have a build process, or use something like CodeKit or a preprocessor which compiles straight to minified output.
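As a sketch of what that .htaccess snippet can look like, assuming Apache with mod_deflate available (the MIME-type list is abbreviated):

```apache
# Compress common text-based assets with mod_deflate (Apache 2.x)
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css text/plain
  AddOutputFilterByType DEFLATE application/javascript application/json
</IfModule>
```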

Interestingly, the main reason I moved inuit.css over to Sass was – initially – so I could conveniently compile a minified version.

Minification, for the most part, simply removes whitespace and comments; if you write as many comments in your code as I do then you really do want to minify your assets.

Gzip, like any compression algorithm, takes any text-based source and compresses it based on repeated/repeatable strings. Most code compresses really well via gzip, as all code tends to have repeated strings in it; e.g. background-image over and over in CSS, <strong> over and over in markup…

Gzip really squishes the size of your assets massively, and you should definitely enable it. For a decent .htaccess snippet, check out how the HTML5 Boilerplate handles stuff.

Compressing your content makes a gigantic saving. At the time of writing, inuit.css weighs in at 77 kilobytes. Minified and gzipped it weighs just 5.52 kilobytes; minify and gzip give us a 93% saving. And, because gzip works well on text-based assets, you can gzip SVGs and even some font file formats!

Optimising images

I’m not overly knowledgeable in the art of optimising images other than running them through an optimisation tool, but dealing with images themselves, post-processing, is a pretty interesting topic.


Sprites are pretty much mandatory if you want a performant site; load one larger image over one HTTP request instead of several images over several requests. The problem is, however, that not all images are immediately spriteable; you might have an icon that needs to be the background image on a fluid-width element, but you clearly can’t sprite this as sprites don’t work on non-fixed-dimension elements. You could always just put a lot of whitespace around the image in the sprite sheet, but wasted pixels in sprites are a performance concern of their own.

In order to combat the unspriteability of certain elements, we need what is known as a ‘spriting element’. This is basically an empty element, usually an <i>, whose sole job it is to remain empty and just carry a background image.

I used these when I built Sky Bet, YouTube use them, Facebook use them, Jonathan Snook has an entire section on them in SMACSS.

The basic premise is that, if you can’t sprite an element because it’s fluid, you place an empty element inside it that you can fix the dimensions of that you then can sprite up, for example:
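A sketch of what that looks like (the class names, dimensions and sprite positions are all illustrative):

```html
<li>
  <a href="/basket/">
    <i class="s s--basket"></i>
    Basket
  </a>
</li>
```

```css
/* The spriting element: fixed dimensions, so it can carry a sprite */
.s {
    display: inline-block;
    width: 16px;
    height: 16px;
    background-image: url(/img/sprite.png);
}

.s--basket {
    background-position: 0 -16px;
}
```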

Here we can’t sprite the <li> or the <a> so we have an empty <i> in there which carries the icon instead. This is one of the things I love most about performance; you’re combining clever techniques to improve page speeds whilst also using traditionally ‘bad’ markup. Lovely stuff!


Retina images

You don’t need to retina everything. A 2x image contains four times the amount of pixels as the same image in standard resolution. Four. Times. Whilst this doesn’t necessarily mean four times the file size over the wire – thanks to the image’s own encoding – this does mean that once the image is decompressed for rendering in the browser, there are four times the usual amount of pixels that need storing in memory.

If we stop and think; retina images are most frequently (although not always) needed to provide a crisper UI to phones. Phones have much less memory than most other devices. Retina serves memory-hogging images to devices that aren’t known for having copious amounts of memory… Think twice about whether you really need retina images across the board, or whether you can make sensible compromises.

Retina is great for a nice, sharp experience, but there’s no point in a crisp experience if it takes five seconds to download. In most cases speed should win over aesthetics.

You can be clever and serve 1.5x images to all devices in a bid to provide good-enough images to everyone, but the best option would be – in my opinion – to use retina only sparingly.

If your stats are forgiving enough, you could perhaps opt for SVGs or an icon font instead of bitmap images. I use SVG on CSS Wizardry which gives me the benefits of:

  • Resolution independence
  • Minification
  • Gzipping

At work Matt Allen made us an icon font which can be used with your spriting element to provide retina-ready, scalable icons.

You could also look to use third-party image services to load images depending on the device and its context.

Progressive JPGs

One interesting aspect of performance is perceived performance; not necessarily what your numbers tell you, but how fast a site feels.

When displaying large JPG images, you’re probably more than familiar with their jerky loading; a hundred pixels of the image, pause, another fifty, pause, in jump another two hundred pixels, then bam, the whole image loads.

This is the traditional baseline JPG at work, a very jerky experience indeed. By switching over to progressive JPGs you can make them load in a much nicer fashion; their first appearance is of the whole image, but very pixellated, then it slowly comes into focus. This sounds worse than the previous method but it feels faster; the user has something to look at instantly and the quality of the image progressively improves. These images are typically a tad larger than their baseline counterparts, but make the overall experience feel a lot faster.

To enable progressive JPGs you simply need to check the relevant box when saving for web and devices in Photoshop; job done!


Use no images at all

Better than spriting, and SVGs, and avoiding retina, is to avoid images altogether. If you can 100% replicate a design using an image, but 75% replicate it using pure CSS, then prefer the pure CSS solution (provided it doesn’t result in 100 more lines of code, of course!). Avoiding images means potentially avoiding HTTP requests, but also aids maintenance. If you can avoid using images then try and do so.


So there we have a shed load of (but still just a few) things you can do to exploit the browser and make your front-ends even faster. Knowing a few things about how browsers work really allows us to manipulate them further and push our front-ends further still.

If you have any additions – or indeed disagreements or corrections – then please do join the discussion over on Branch; this whole world of perf is still relatively new to me so I’m keen to keep learning from others, and push things even further.

I really hope that wall of text managed to enlighten you in at least some way, and that you came away with some new knowledge of things you’d perhaps never thought about. I also hope that, if you weren’t already, this article has helped to get you at least half as interested in performance as I have found myself.

I would like to explicitly thank Nick Payne and Andy Davies for their help clearing a few things up in the writing of this article; cheers guys!

Further reading

If you enjoyed that, and want to know more, I can’t recommend the following enough:

Animated Responsive Layout with CSS3 Media Queries

By Coding No Comments

What Are Media Queries

CSS3 brings us many new features to use in our web designs, and one feature that can help improve website usability is Media Queries.

Media Queries Boilerplate
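The boilerplate itself did not survive the conversion of this post; a typical starting point looks something like this (the breakpoint values and selectors are illustrative):

```css
/* Tablets and small screens */
@media only screen and (max-width: 768px) {
    .container { width: 90%; }
}

/* Smartphones */
@media only screen and (max-width: 480px) {
    .container { width: 100%; }
}
```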

Animating the layout change

With CSS we can add animation to different elements, and we can also assign animation to individual properties on those elements.

If we are using media queries then we are most likely going to change the width and height of an element so that it can fit on the page. As we know that the width and height are changing, we can animate the width and height CSS properties by using the following code.
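The snippet referred to here was lost in conversion; the idea is a plain CSS transition on width and height, something like the following (the vendor prefixes reflect the browsers of the time, and the duration is illustrative):

```css
.element {
    -webkit-transition: width 0.5s ease, height 0.5s ease;
       -moz-transition: width 0.5s ease, height 0.5s ease;
         -o-transition: width 0.5s ease, height 0.5s ease;
            transition: width 0.5s ease, height 0.5s ease;
}
```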

Adding the above to an element targeted by your media queries animates the width and height changes those queries trigger.

Web developers go mobile by mastering new development skills (Dutch Mobile Conference)

By Coding No Comments

Web developers go mobile by mastering new development skills

Amsterdam, March 13, 2012 — The first Dutch Mobile Conference (DMC), June 7, 8 & 9 in Amsterdam, will help developers create their next innovative mobile web apps.

The rise of web applications

Back in the ’90s, computers ran proprietary (native) software which had to be recreated for every operating system (Windows, Macintosh, Linux). Once the internet became widely adopted, many of these software packages moved to the web. This is exactly what is happening in the mobile market now. There are various solutions for packaging and distributing these web apps through the different app stores (like the Apple App Store and Android Market), so they do not need to be distributed over the web only. This technology has major advantages: you avoid rewriting your entire code-base for each mobile platform, and you can even bypass the app stores by distributing through the web. We already see popular products like Google’s Gmail, the Financial Times and Facebook doing just that.

Moving to mobile devices

Moving software development from desktop systems to mobile devices requires developers to master new skills. The Dutch Mobile Conference focuses on educating and connecting developers to achieve this, so everyone will benefit from a better mobile experience.

Keynotes by Douglas Crockford and Joe Stump

As a highlight, the conference is being kicked off with Yahoo’s Douglas Crockford talking about programming and ‘your brain’. Douglas, also the inventor of the popular JSON format, puts it like this: “Programming is serious business that should never be undertaken in ignorance.” Joe Stump, CTO and co-founder of SimpleGeo and formerly Lead Architect for Digg, closes the conference.

Speakers from all over the world

On the first day, visitors will have a choice of 4 tutorials which go in-depth on their subjects. On the second and third day there will be a total of 23 sessions on various subjects such as PhoneGap, CocoonJS, PhantomJS, Titanium, user-interface improvements, and using the new touch interfaces to make applications feel natural. The conference offers a large variety of international speakers, like Sencha’s Ed Spencer (known for his work on Sencha Touch and Ext JS), Jim Hoskins (working for Treehouse and writer for Think Vitamin), Ibon Tolosana (CTO at Ludei, known for apps like Sumon) and many others. For more information, you can visit the conference website.

Ibuildings, Vlissingen
tel: +31 88 00 24 000
Martin de Keijzer, [email protected] (host)
Angela Sleutel, [email protected] (marketing & PR)
Source: Dutch Mobile Conference / Ibuildings

About Ibuildings

Ibuildings was established in 1999 with the goal of specialising in the development of websites and web applications. The company started out using just PHP technology and later expanded into various other areas surrounding this ecosystem. Through this focus we have built up a lot of knowledge and experience and attracted some of the best developers. Ibuildings also supports the community by supporting external events and organising the Dutch PHP Conference and the Dutch Mobile Conference.

Learning HTML, CSS, and Javascript from the Ground Up

By Coding No Comments

Are you looking for a basic understanding of how UIs are created on the web, or do you want to brush up on outdated UI development knowledge? Or maybe you’d like to learn more about the medium you’re designing for and gain basic tools for prototyping designs? Do you want a better understanding of the web and how Google makes the pages that are its face to the world? If so, “HTML, CSS, and JavaScript from the Ground Up” is for you.

Click here to download the CSS, HTML and JavaScript lesson exercises and supporting files



HTML Presentation

HTML Exercise


CSS Presentation

CSS Walkthrough

CSS Exercise


JavaScript Presentation

JavaScript Exercise

Create Pure CSS Icons using just CSS and semantic HTML

By Coding, Mobile No Comments

An experiment that uses pseudo-elements to create 84 simple GUI icons using CSS and semantic HTML. Shared as an exercise in creative problem solving and working within constraints. This is not a “production ready” CSS icon set. The HTML is very simple and it relies on CSS pseudo-elements rather than extraneous HTML elements.

Known support: Firefox 3.5+, Safari 5+, Chrome 5+, Opera 10.6+.


Pseudo-elements provide many possibilities to developers interested in writing semantic HTML. If you’re looking for practical uses of pseudo-elements, this experiment gave me the idea for the technique used to create Multiple Backgrounds and Borders with CSS 2.1. But these GUI icons were a technical exercise.

Example code

Some of these GUI icons can only be created in browsers that support CSS3 transforms.

The HTML is a basic unordered list of links.

Each icon uses its own set of styles. For example, the key parts of the CSS responsible for the “expand” icon are as follows:
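The original snippets were stripped from this copy; a hedged reconstruction of the pattern (the class names, colours and dimensions are illustrative, not the demo’s actual code):

```html
<ul>
  <li><a class="icon icon-expand" href="#">Expand</a></li>
</ul>
```

```css
.icon {
    position: relative;
    display: inline-block;
    width: 16px;
    height: 16px;
    overflow: hidden;
}

/* The icon is drawn entirely with generated content */
.icon-expand:before,
.icon-expand:after {
    content: "";
    position: absolute;
    width: 6px;
    height: 6px;
}

.icon-expand:before {
    top: 0;
    left: 0;
    border-top: 2px solid #444;
    border-left: 2px solid #444;   /* top-left corner bracket */
}

.icon-expand:after {
    right: 0;
    bottom: 0;
    border-right: 2px solid #444;
    border-bottom: 2px solid #444; /* bottom-right corner bracket */
}
```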

The demo page contains a full set of user interaction and media player control icons, as well as other common icons. For now, several icons actually require more than one element as CSS 2.1 only specifies 2 pseudo-elements per element that can contain generated content. The CSS3 Generated and Replaced Content Module allows for an unlimited number of pseudo-elements but has yet to be fully implemented in any modern browser.


Make your website faster by optimizing your JavaScript code

By Coding No Comments

Recommended experience: Working knowledge of JavaScript

Client-side scripting can make your application dynamic and active, but the browser’s interpretation of this code can itself introduce inefficiencies, and the performance of different constructs varies from client to client. Here we discuss a few tips and best practices to optimize your JavaScript code.

Working with strings

String concatenation causes major problems with Internet Explorer 6 and 7 garbage collection performance. Although these issues have been addressed in Internet Explorer 8 — concatenating is actually slightly more efficient on IE8 and other non-IE browsers such as Chrome — if a significant portion of your user population uses Internet Explorer 6 or 7, you should pay serious attention to the way you build your strings.

Consider this example:

var veryLongMessage =
    'This is a long string that due to our strict line length limit of' +
    maxCharsPerLine +
    ' characters per line must be wrapped. ' +
    percentWhoDislike +
    '% of engineers dislike this rule. The line length limit is for ' +
    ' style purposes, but we don\'t want it to have a performance impact.' +
    ' So the question is how should we do the wrapping?';

Instead of concatenation, try using a join:

var veryLongMessage =
    ['This is a long string that due to our strict line length limit of',
     maxCharsPerLine,
     ' characters per line must be wrapped. ',
     percentWhoDislike,
     '% of engineers dislike this rule. The line length limit is for ',
     ' style purposes, but we don\'t want it to have a performance impact.',
     ' So the question is how should we do the wrapping?'].join('');

Similarly, building up a string across conditional statements and/or loops by using concatenation can be very inefficient. The wrong way:

var fibonacciStr = 'First 20 Fibonacci Numbers<br>';
for (var i = 0; i < 20; i++) {
  fibonacciStr += i + ' = ' + fibonacci(i) + '<br>';
}

The right way:

var strBuilder = ['First 20 fibonacci numbers:'];
for (var i = 0; i < 20; i++) {
  strBuilder.push(i, ' = ', fibonacci(i));
}
var fibonacciStr = strBuilder.join('');

Building strings with portions coming from helper functions

Build up long strings by passing string builders (either an array or a helper class) into functions, to avoid temporary result strings.

For example, assuming buildMenuItemHtml_ needs to build up a string from literals and variables and would use a string builder internally, instead of using:

var strBuilder = [];
for (var i = 0, length = menuItems.length; i < length; i++) {
  strBuilder.push(this.buildMenuItemHtml_(menuItems[i]));
}
var menuHtml = strBuilder.join('');


use:

var strBuilder = [];
for (var i = 0, length = menuItems.length; i < length; i++) {
  this.buildMenuItem_(menuItems[i], strBuilder);
}
var menuHtml = strBuilder.join('');

Defining class methods

The following is inefficient, as each time an instance of baz.Bar is constructed, a new function and closure is created for foo:

baz.Bar = function() {
  // constructor body
  this.foo = function() {
    // method body
  };
};

The preferred approach is:

baz.Bar = function() {
  // constructor body
};

baz.Bar.prototype.foo = function() {
  // method body
};
With this approach, no matter how many instances of baz.Bar are constructed, only a single function is ever created for foo, and no closures are created.

Initializing instance variables

Place instance variable declaration/initialization on the prototype for instance variables with value type (rather than reference type) initialization values (i.e. values of type number, Boolean, null, undefined, or string). This avoids unnecessarily running the initialization code each time the constructor is called. (This can’t be done for instance variables whose initial value is dependent on arguments to the constructor, or some other state at time of construction.)

For example, instead of:

foo.Bar = function() {
  this.prop1_ = 4;
  this.prop2_ = true;
  this.prop3_ = [];
  this.prop4_ = 'blah';
};


use:

foo.Bar = function() {
  this.prop3_ = [];
};

foo.Bar.prototype.prop1_ = 4;
foo.Bar.prototype.prop2_ = true;
foo.Bar.prototype.prop4_ = 'blah';

Avoiding pitfalls with closures

Closures are a powerful and useful feature of JavaScript; however, they have several drawbacks, including:

  • They are the most common source of memory leaks.
  • Creating a closure is significantly slower than creating an inner function without a closure, and much slower than reusing a static function. For example:

    function setupAlertTimeout() {
      var msg = 'Message to alert';
      window.setTimeout(function() { alert(msg); }, 100);
    }

    is slower than:

    function setupAlertTimeout() {
      window.setTimeout(function() {
        var msg = 'Message to alert';
        alert(msg);
      }, 100);
    }

    which is slower than:

    function alertMsg() {
      var msg = 'Message to alert';
      alert(msg);
    }

    function setupAlertTimeout() {
      window.setTimeout(alertMsg, 100);
    }

  • They add a level to the scope chain. When the browser resolves properties, each level of the scope chain must be checked. In the following example:

    var a = 'a';

    function createFunctionWithClosure() {
      var b = 'b';
      return function () {
        var c = 'c';
        a;
        b;
        c;
      };
    }

    var f = createFunctionWithClosure();
    f();

    when f is invoked, referencing a is slower than referencing b, which is slower than referencing c.

See IE+JScript Performance Recommendations Part 3: JavaScript Code inefficiencies for information on when to use closures with IE.

Avoiding with

Avoid using with in your code. It has a negative impact on performance, as it modifies the scope chain, making it more expensive to look up variables in other scopes.
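For example (a sketch; the object and property names are illustrative):

```javascript
var config = { width: 100, height: 50 };

// Avoid: `with` pushes `config` onto the scope chain, so every
// identifier inside the block is first checked against it.
// with (config) {
//   area = width * height;
// }

// Prefer an explicit (or locally aliased) reference instead:
var c = config;
var area = c.width * c.height;
console.log(area); // 5000
```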

Avoiding browser memory leaks

Memory leaks are an all too common problem with web applications, and can result in huge performance hits. As the memory usage of the browser grows, your web application, along with the rest of the user’s system, slows down. The most common memory leaks for web applications involve circular references between the JavaScript engine and the browser’s C++ objects implementing the DOM (e.g. between the JavaScript engine and Internet Explorer’s COM infrastructure, or between the JavaScript engine and Firefox’s XPCOM infrastructure).

Here are some rules of thumb for avoiding memory leaks:

Use an event system for attaching event handlers

The most common circular reference pattern [ DOM element –> event handler –> closure scope –> DOM element ] is discussed in this MSDN blog post. To avoid this problem, use one of the well-tested event systems for attaching event handlers, such as those in Google doctype, Dojo, or jQuery.

In addition, using inline event handlers can lead to another kind of leak in IE. This is not the common circular reference type leak, but rather a leak of an internal temporary anonymous script object. For details, see the section on “DOM Insertion Order Leak Model” in Understanding and Solving Internet Explorer Leak Patterns and an example in this JavaScript Kit tutorial.

Avoid expando properties

Expando properties are arbitrary JavaScript properties on DOM elements and are a common source of circular references. You can use expando properties without introducing memory leaks, but it is pretty easy to introduce one by accident. The leak pattern here is [ DOM element –> expando –> intermediary object –> DOM element ]. The best thing to do is to just avoid using them. If you do use them, only use values with primitive types. If you do use non-primitive values, nullify the expando property when it is no longer needed. See the section on “Circular References” in Understanding and Solving Internet Explorer Leak Patterns.
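A sketch of the pattern and the fix (plain objects stand in for DOM nodes so the example is self-contained; the property names are illustrative):

```javascript
// In a real page `element` would be a DOM node obtained via the DOM API.
var element = { id: 'menu' };

// Expando property whose value points back at the element:
// [ DOM element –> expando –> intermediary object –> DOM element ]
element.controller = { element: element };

// Break the cycle explicitly once the intermediary is no longer needed:
element.controller = null;
console.log(element.controller); // null
```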

Improve web standards support and accessibility in html emails

By Coding No Comments

Web standards are just as important for email as they are for the web, and that means working with email client developers to ensure that emails render consistently.

The Email Standards Project is about working with email client developers and the design community to improve web standards support and accessibility in email. The project was formed out of frustration with the inconsistent rendering of HTML emails in major email clients.

Doctype TV is the show for people who make websites.

By Coding No Comments

Doctype is the show for people who make websites. Whether you’re a designer that wants to learn some coding or a developer that thinks everything they make looks like crap, Doctype is the show for you!

Every episode, Nick and Jim break down the latest topics and trends in web design and web development, from CSS3 and iconography to jQuery and web security. Created by Done21, the show was started to help web designers, web developers, freelancers, engineers, and creative professionals improve their skills and stay on top of their game.

Increasing site speed will improve your Google page ranking

By Coding No Comments

Site speed reflects how quickly a website responds to web requests.

Speeding up websites is important — not just to site owners, but to all Internet users. Faster sites create happy users and we’ve seen in our internal studies that when a site responds slowly, visitors spend less time there. But faster sites don’t just improve user experience; recent data shows that improving site speed also reduces operating costs. Like us, our users place a lot of value in speed — that’s why we’ve decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.

If you are a site owner, webmaster or a web author, here are some free tools that you can use to evaluate the speed of your site:

Read More