Writing efficient CSS selectors

Efficient CSS is not a new topic, nor one that I really need to cover, but it’s something I’m really interested in and have been keeping an eye on more and more since working at Sky.

A lot of people forget, or simply don’t realise, that CSS can be both performant and non-performant. This is easily forgiven, however, when you realise just how hard it is to actually notice non-performant CSS.

These rules only really apply to high performance websites where speed is a feature, and where thousands of DOM elements can appear on any given page. But best practice is best practice, and it doesn’t matter whether you’re building the next Facebook or a site for the local decorator; it’s always good to know…

CSS selectors

CSS selectors will not be new to most of us; the more basic ones are type (e.g. div), ID (e.g. #header) and class (e.g. .tweet) selectors respectively.

Less common ones include basic pseudo-classes (e.g. :hover) and more complex CSS3 selectors, such as the structural pseudo-class :first-child or the ‘regex’-style attribute selector [class^="grid-"].
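
For illustration, here are some made-up rules (the declarations are placeholders) showing each of these in use:

/* Type selector: matches every div element */
div { margin:0; }

/* ID selector: matches the one element with id="header" */
#header { width:960px; }

/* Class selector: matches every element carrying class="tweet" */
.tweet { border:1px solid #ccc; }

/* Pseudo-class: matches links, but only while hovered */
a:hover { text-decoration:underline; }

/* Attribute (‘regex’-style) selector: matches class values starting ‘grid-’ */
[class^="grid-"] { float:left; }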

Selectors have an inherent efficiency, and to quote Steve Souders, the order of more to less efficient CSS selectors goes thus:

  1. ID, e.g. #header
  2. Class, e.g. .promo
  3. Type, e.g. div
  4. Adjacent sibling, e.g. h2 + p
  5. Child, e.g. li > ul
  6. Descendant, e.g. ul a
  7. Universal, i.e. *
  8. Attribute, e.g. [type="text"]
  9. Pseudo-classes/-elements, e.g. a:hover

Quoted from Even Faster Websites by Steve Souders
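
To put the combinator-based entries in that list in context, here are some hypothetical examples (the declarations are placeholders again):

/* Adjacent sibling: a p immediately following an h2 */
h2 + p { margin-top:0; }

/* Child: a ul that is a direct child of an li */
li > ul { margin-left:1em; }

/* Descendant: any a anywhere inside a ul */
ul a { color:inherit; }

/* Universal: every single element on the page */
* { margin:0; padding:0; }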

It is important to note that, although an ID is technically faster and more performant, it is barely so. Using Steve Souders’ CSS Test Creator we can see that an ID selector and a class selector show very little difference in reflow speed.

In Firefox 6 on a Windows machine I get an average reflow figure of 10.9 for a simple class selector. An ID selector gave a mean of 12.5, so this actually reflowed slower than a class.

The difference in speed between an ID and a class is almost totally irrelevant.

A test selecting on a type (<a>), rather than a class or ID, gave a much slower reflow.

A test on a heavily overqualified descendant selector gave a figure of around 440!
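
I won’t reproduce the exact selector the test used, but a heavily overqualified descendant selector along these lines is the kind of thing that produces numbers like that:

/* A made-up, heavily overqualified descendant selector */
html body div div div p a { color:red; }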

From this we can see that the difference between IDs/classes and types/descendant selectors is fairly huge, while the difference between IDs and classes themselves is slight.

N.B. These numbers can vary massively between machine and browser. I strongly encourage you to run/play with your own.

Combining selectors

You can have standalone selectors such as #nav, which will select any element with an ID of ‘nav’, or you can have combined selectors such as #nav a, which will match any anchors within any element with an ID of ‘nav’.

Now, we read these left-to-right: we look for #nav first and then any a elements inside it. Browsers read these differently; browsers read right-to-left.
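
As a rough sketch of what that means in practice (the declaration is a placeholder):

/* We read:       ‘find #nav, then style the a elements inside it.’
   Browsers read: ‘find every a on the page, then check whether each
   one has an ancestor with an ID of “nav”.’ */
#nav a { color:#fff; }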

For a quick graphical illustration of why browsers do this, consider that it is the same reason most of you will save time solving this puzzle by starting at the smiley face (the target) first and working your way back:

[Image: a maze puzzle used as a model of how browsers read CSS selectors right to left]

It’s route 1, by the way.

For an in-depth explanation of why they do this, see this discussion on Stack Overflow.

It’s more efficient for a browser to start at the right-most element (the one it knows it wants to style) and work its way back up the DOM tree than it is to start high up the DOM tree and take a journey down that might not even end up at the right-most selector—also known as the key selector.

This has a very significant impact on the performance of CSS selectors…

The key selector

The key selector, as discussed, is the right-most part of a larger CSS selector. This is what the browser looks for first.

Remember back up there where we discussed which types of selector are the most performant? Whichever one of those is the key selector will affect the selector’s performance; when writing efficient CSS it is this key selector that holds the, well, key, to performant matching.

A selector whose key selector is a class, like this:

#content .intro{}

is probably quite performant, as classes are inherently performant selectors. The browser will look for all instances of .intro (of which there aren’t likely to be many) and then go looking up the DOM tree to see whether each matched element lives in an element with an ID of ‘content’.

However, the following selector is not very performant at all:

#content *{}

What this does is look at every single element on the page (that’s every single one) and then check whether any of those live in the #content parent. This is a very un-performant selector, as the key selector is a very expensive one.
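
If you genuinely need to affect everything inside #content, a cheaper approach, at least for properties that inherit, is to set them on the parent and let inheritance do the work; a quick sketch:

/* color and font-family inherit, so everything inside #content
   picks them up without the universal selector ever being matched */
#content { color:#333; font-family:sans-serif; }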

Using this knowledge we can make better decisions as to our classing and selecting of elements.

Let’s say you have a massive page; it’s enormous and you’re a big, big site. On that page are hundreds or even thousands of <a>s. There is also a small section of social media links in a <ul> with an ID of ‘social’; let’s say there is a Twitter, a Facebook, a Dribbble and a Google+ link. We have four social media links on this page and hundreds of other anchors besides.

This selector therefore is unreasonably expensive and not very performant:

#social a{}

What will happen here is the browser will assess all the thousands of links on that page before settling on the four inside the #social section. Our key selector matches far too many other elements that we aren’t interested in.

To remedy this we can add a more specific and explicit class of .social-link to each of the <a>s in the social area. But this goes against what we know; we know not to put unnecessary classes on elements when we can use (c)leaner markup.

This is why I find performance so interesting; it’s a weird balance between web standards best practices and sheer speed.

Whereas we would normally have:

<ul id="social">
    <li><a href="#" class="twitter">Twitter</a></li>
    <li><a href="#" class="facebook">Facebook</a></li>
    <li><a href="#" class="dribble">Dribbble</a></li>
    <li><a href="#" class="gplus">Google+</a></li>
</ul>

with this CSS:

#social a{}

We’d now have:

<ul id="social">
    <li><a href="#" class="social-link twitter">Twitter</a></li>
    <li><a href="#" class="social-link facebook">Facebook</a></li>
    <li><a href="#" class="social-link dribble">Dribbble</a></li>
    <li><a href="#" class="social-link gplus">Google+</a></li>
</ul>

with this CSS:

#social .social-link{}

This new key selector will match far fewer elements, which means the browser can find and style them faster and move on to the next thing.

And, we can actually get this selector down further to .social-link{} by not overqualifying it; read on to the next section for that…

So, to recap, your key selector is the one which determines just how much work the browser will have to do, so this is the one to keep an eye on.

Overqualifying selectors

Okay, so now we know what a key selector is, and that it is where most of the work comes from, we can look to optimise further. The best thing about having nice explicit key selectors is that you can often avoid overqualifying selectors. An overqualified selector might look like:

html body .wrapper #content a{}

There is just too much going on here, and at least three of these selectors are totally unnecessary. That could, at the very most, be this:

#content a{}

So what?

Well, the first one means that the browser has to look for all a elements, then check that they’re in an element with an ID of ‘content’, and so on and so on right the way up to the html element. This causes the browser to make way too many checks that we really don’t need. Knowing this, we can get more realistic selectors like this:

ul#nav li a{}

Down to just:

#nav a{}

We know that if the a is inside the #nav it has to be inside an li, so we can instantly drop the li from the selector. Then, as the nav has an ID, we know that only one exists in the page, so the element it is applied to is wholly irrelevant; we can also drop the ul.
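
A step-by-step sketch of that trimming (the declaration is a placeholder):

ul#nav li a { color:#fff; } /* original, overqualified */
#nav li a { color:#fff; }   /* drop the ul: only one #nav exists */
#nav a { color:#fff; }      /* drop the li: every a in #nav is inside one */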

Overqualified selectors make the browser work harder than it needs to and use up its time; make your selectors leaner and more performant by cutting out the unnecessary bits.

Is all this really necessary?

The short answer is: probably not.

The longer answer is: it depends on the site you’re building. If you’re working on your next portfolio then go for clean code over CSS selector performance, because you really aren’t likely to notice the difference.

If you’re building the next Amazon, where microseconds in page speed do make a difference, then maybe, but even then maybe not.

Browsers will only ever get better at CSS parsing speeds, even mobile ones. You are very unlikely to ever notice slow CSS selectors on a website.

But

It is still happening; browsers still have to do all the work we’ve talked about, no matter how quick they get. Even if you don’t need or even want to implement any of this, it is something that is definitely worth knowing. Bear in mind that selectors can be expensive and that you should avoid the more glaring ones where possible. That means if you find yourself writing something like:

div:nth-of-type(3) ul:last-child li:nth-of-type(odd) *{ font-weight:bold }

Then you’re probably doing it wrong.
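
If you really did need that styling, a saner rewrite, assuming you can add a class to the elements you actually mean, would be something like this sketch:

/* .emphasised is a hypothetical class added directly to the
   elements in question; the key selector now matches only them */
.emphasised { font-weight:bold; }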

Now, I’m still kind of new to the world of selector efficiency myself so if I’ve missed anything, or you have anything to add, please pop it in the comments!

More on CSS selector efficiency

I cannot recommend the website and books of Steve Souders enough. That’s pretty much all the further reading recommendation you’ll need. The guy knows his stuff!

By Harry Roberts on Saturday, September 17th, 2011 in Web Development.


23 Responses to ‘Writing efficient CSS selectors’


  1. Kevin said on 17 September, 2011 at 12:16 pm

    I knew browsers read from right-to-left but I never really knew what it meant. Very insightful article, Harry! It’s definitely something I’ll keep in the back of my mind from now on!


  2. Fitchy said on 17 September, 2011 at 1:42 pm

    Great article Harry, learnt a lot from it. I’ll definitely take more care with my selectors. It’s one of those things I tend not to think about too much but if I can get into these good habits it won’t be a problem I have to think about when the size of our projects starts to grow.


  3. clokey2k said on 17 September, 2011 at 4:45 pm

    I think the best thing to take away from this is to avoid over-qualified selectors. You may shave 1-2 microseconds off the render, but you will definitely have saved a few bytes in your CSS! It’s win/win.
    It lends itself to scalability.


  4. Nikki Strømsnes said on 17 September, 2011 at 4:50 pm

    Thanks for a great article. Many of us tend to forget these small things, especially when we’re busy… or we think “meh, I’ll optimize it later” (yeah, right). In the end, it does make a difference, especially when it comes to selector specificity (I am at war against !importants).


  5. Ralph said on 17 September, 2011 at 11:05 pm

    You definitely cleared up some things for me re CSS selectors. I also knew that browsers read from right to left, but now I know why and it makes perfect sense.

    As a freelancer I have small clients who require small websites, and for those I choose clean and lean HTML, so CSS3 selectors (in combination with selectivizr & DOMAssistant for IE) are what I use. If a content-heavy website project where speed is a must comes my way, then I will go for HTML full of classes to hook my styles on. Great article!


  6. Jitendra Vyas said on 18 September, 2011 at 5:29 am

    Instead of #social .social-link{}, can’t we use #social > a{}?

    Adding more classes will increase the size of the HTML, which also makes the page slower.


  7. Jitendra Vyas said on 18 September, 2011 at 5:37 am

    If even #nav a{} is slower, because the browser will check all a elements first, then should we fill our mark-up with classes all over to make the selectors more performant?


  8. Patrick Samphire said on 18 September, 2011 at 8:27 am

    Excellent reminders, Harry. As Nikki says, it’s far too easy to think you’ll tidy it all up later and then forget.

    I think the biggest obstacle to more efficient selectors is that designers will eventually hand sites over to non-expert content contributors, and there’s no way that these people are going to be putting in the appropriate classes, for example (and no reason why they should be expected to). So we end up either overqualifying selectors or choosing less efficient means to make sure the website remains robust.

    Luckily, as you say, it doesn’t matter for most sites, and where it does, developers can probably put in place a CMS that handles some of the content issues.

    Despite this, you do remind me to make sure the CSS is more efficient (and just more beautiful) on my next project.


  9. Scott said on 18 September, 2011 at 2:25 pm

    Instead of the descendant selector, you should use the child selector, i.e.
    #nav > li > a
    will be faster than
    #nav a
    in most cases, because the browser only needs to check two parent elements, not every one right up to the root.


  10. John Boxall said on 18 September, 2011 at 5:31 pm

    Better to use specific selectors so you spend less time negating the values of an overly zealous general selector. By writing non-specific CSS you’re setting yourself up for maintenance headaches later on.

    Much of this advice is in direct opposition to the benefits provided by CSS supersets like LESS and Sass, which provide support for selector nesting. Indeed, 37signals recently posted about the benefits of using complex selectors to make SCSS more maintainable:

    http://37signals.com/svn/posts/3003-css-taking-control-of-the-cascade

    I think it’s great to be ‘aware’ of the performance implications of complex CSS selectors. Just don’t let it get in the way of making great websites!


  11. Caroline Murphy said on 18 September, 2011 at 9:12 pm

    …using your “.social-link” as the example, wouldn’t the half-way house be

    #social .twitter
    #social .facebook

    etc.? Then you’d still be specific enough for, say, a different background image, and clean enough for us humans to understand the styling?


  12. Ben Cooper said on 19 September, 2011 at 9:50 am

    Great article Harry …. more of this stuff please?

    @Caroline – What would you put the #social on? Because if you put the ID on the ul, it would still mean that to target all the elements you would have to write “#social li”.


  13. Pablinho said on 19 September, 2011 at 1:25 pm

    I knew JS (jQuery particularly) reads selectors from right to left, but I had no idea CSS was read the same way.

    This will make me change a bit the way I code and use my selectors !

    thanks for sharing !


  14. David Fitzgibbon said on 19 September, 2011 at 2:49 pm

    Like Pablinho, above, I’ve thought about how this can apply to JS.

    Harry do you know if this is the same way that jQuery would search the DOM?

    If that’s the case then using these practices could really speed up js applications.

    Great article either way, you’ve taught me something I didn’t even know I didn’t know!


  15. Pacoup said on 19 September, 2011 at 5:21 pm

    Well I think it goes further than just CSS selectors. These are one thing, but the way content is laid out can make a huge difference.

    Some websites will display dog slow, even with GPU-accelerated layout in IE9. See http://www.k2nblog.com for an example. If you open this in Chrome, which has probably the least efficient layout engine despite having the fastest page loads, the browser will basically die, even on a Core i5; in any case it’s not comfortable to browse at all.

    This isn’t the first time I’ve seen this. So many websites of little complexity seem to have just crazy code.

    Taking a look at the code from K2NBlog in a performance auditor such as the one included in Chrome, you will see that the stats are rather staggering in comparison to a website like Engadget which has comparably more content on page.

    6303 unused CSS rules vs 2704 for Engadget
    38 JS files vs 20 JS files
    15 CSS files vs 10 CSS files

    Moreover, loading K2NBlog generates over 20 browser errors & warnings in Chrome’s console, while Engadget generates 9.

    Neither website is particularly well made either, with K2NBlog generating 149 validation errors and 33 warnings, and Engadget going as far as 349 errors and 96 warnings.

    I think all of these factors combined really contribute to a website being unnecessarily slow. So rather than saying optimizing CSS depends on the website you’re making, I’d rather stress that it doesn’t at all. Making quality code should be of utmost importance.

    Too many people in this industry are allowed to work on HTML when they are highly unqualified, which gives way to the mess the Web is in general.

    The same applies to software in general too. Too many times have I seen brilliantly manufactured hardware fail to work because of bad software, smartphones being a prime example IMO.


  16. Bruno said on 19 September, 2011 at 8:04 pm

    Hi,

    Interesting post indeed.

    I am personally reluctant to strip out too much code, as it lowers the maintainability. This is why I tend to accept marginally slower sites for ease of evolution/fixing. As you wrote it, it’s a matter of compromise. And fortunately, it’s often not perceptible. (In a couple of years, a just-in-time optimizer à la Hotspot will do the work for us anyway.)

    And as a side note: it’s not “if the a is inside an li it has to be inside the #nav”, it’s “if the a is inside the #nav it has to be inside a li” that allows us to remove the li.

    Cheers,
    Bruno.


  17. Marcos Zanona said on 22 September, 2011 at 8:00 am

    Thanks a lot for the post it is fantastic, one question though.

    Are `.container.piece` and `.piece.container` different in terms of performance?

    Saying you have like 20 `pieces` on your document but only 3 of them are also `container` ?

    Cheers


  18. Arturo Molina said on 14 October, 2011 at 3:46 pm

    Great post, man. I was making the mistake of overqualifying selectors because I thought they were actually more performant (like in jQuery, which now I’m not even sure about).


  19. fjpoblam said on 20 October, 2011 at 3:08 pm

    How about a site nested at most two qualifiers deep (e.g., header>a) and with only a few pseudos (e.g., td:first-child)? No ids, no classes. Can that be efficient? Or is it *under* qualified?


  20. Aaron Layton said on 20 October, 2011 at 11:10 pm

    Great article and definitely a great read :-D

    I also seem to over qualify my selectors but hardly ever tag qualify them.

    The reason I over qualify them is for usability and extensibility. As I work on large sites I sometimes need to make changes to only a certain page, in this instance I always add a unique body class.

    As for the social links, I wouldn’t do this
    #social .twitter
    #social .facebook

    Instead I would just do
    .social-link {}
    and then
    .social-link.twitter {}

    or if you still need to use ie6
    #social .twitter


  21. Andres Roberto Rojas said on 7 June, 2012 at 4:54 am

    Nice article Harry. I am optimizing my CSS code right now! I really like all the theory around front end. People think it is just as easy as writing a lot of messy code, but I am happy people like you remind others that front end has its best practices and its science : )


  22. Andres Roberto Rojas said on 7 June, 2012 at 5:23 am

    Hey Harry, I got a question! If I write a selector like a.myclass

    …will the browser look for every anchor in the page and then select the ones with the myclass class, or will it look for every element with the myclass class and then check whether they are anchors?


  23. JAYA said on 13 June, 2012 at 6:02 am

    Hi Andres Roberto, if you use a.myclass then the browser will look for every anchor in the page and then select the ones with the myclass class.

