Memory footprint of Facebook’s Like button iframes

I have done a rough review of Facebook's Like button memory footprint. Since every Like button sits in an iframe, I assumed this would have a heavy impact on a browser's memory footprint. Web developers should keep that in mind when putting many of them on a single page. Not to mention that each Like button comes with its own JavaScript and CSS files, that Facebook is most certainly tracking even people who do not use the button, and so on and so forth…

There *are* very good reasons not to use Facebook’s Like button regardless of its memory footprint.

The tests were run with Firefox 3.6.10 on Vista, with Firebug and some other plugins enabled, on a client's WordPress blog. I recorded the number of iframes, the peak memory value (the highest value during or shortly after page load) and the idle memory value (20–30 s after page load).

Keep in mind that this is by no means scientific. YMMV.

Iframes   Peak (~MB)   Idle (~MB)
1         105          50
11        108          57
21        125          70
31        149          80
41        175          100

Evidently the idle memory footprint scales at roughly 1.0–1.3 MB per iframe: going from 1 to 41 iframes adds about 50 MB of idle memory, i.e. (100 − 50) MB / 40 ≈ 1.25 MB per iframe. That is quite a lot.

And do remember that each iframe comes with its

  • own HTML,
  • inline JavaScript,
  • inline CSS,
  • external JavaScript,
  • external CSS and
  • at least one image.

I have not counted the exact number of requests, but I expect 4, maybe 5, HTTP requests per Facebook Like button iframe. With 40 of them on one page that adds up to roughly 160 HTTP requests (40 × 4) just for the Facebook Like buttons.

From a Green Computing point of view it is quite insane.

My conclusion: if you *really* want to use Like buttons, do not put more than half a dozen to a dozen of them on a single page.

CSS Level 3 Media Queries

In my humble opinion, the upcoming CSS Level 3 Media Queries are one of the major improvements in web development.

Yesterday I implemented them on my website so that it adapts to various browser window widths.

See them in action on YouTube!

Try it for yourself and tell me what you think :) (CSS 3 media queries are currently supported in FF 3.5+, Chrome, Opera, Safari and IE9+)
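
A minimal sketch of what such a rule can look like; the selector, breakpoint and property values are made up for this example:

  /* default layout for wide windows: content plus a floated sidebar */
  #sidebar { float: right; width: 300px; }

  /* narrow windows: let the sidebar drop below the content */
  @media screen and (max-width: 800px) {
    #sidebar { float: none; width: auto; }
  }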

Measures against Slowloris attacks

In a Slowloris attack a client (or a botnet) opens a large number of connections to a web server and holds them open. It never sends complete requests, so you might not find a single request from the attacker in the Apache log, which is quite devious. The malicious client keeps opening new connections with incomplete requests while Apache waits for the requests to complete before serving them. Meanwhile, regular clients cannot open new connections and thus do not get served; the site becomes unresponsive.

I just wanted to share my experience with anti-Slowloris measures on small-scale Apache web servers.

Apache 2.2.15 comes with mod_reqtimeout. The module's default settings work out of the box. During my own local Slowloris attack HTTP latency fluctuated quite a lot, but Apache remained responsive. If you are using Apache 2.2.15, go for mod_reqtimeout and you are done.
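
For reference, a rough sketch of the relevant configuration. The directive is RequestReadTimeout; the values below are what I take to be the documented defaults, so check the mod_reqtimeout documentation for your version and the module path on your system:

  # load the module (the path may differ on your system)
  LoadModule reqtimeout_module modules/mod_reqtimeout.so

  # give a client 20 to 40 seconds to finish sending the request headers,
  # extended beyond 20 seconds only while it keeps sending at least
  # 500 bytes per second; the same idea applies to the request body
  RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500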

Debian 5.0 only ships Apache 2.2.9, and there is no mod_reqtimeout for that version.

My first choice for Apache 2.2.9 was mod_qos, which compiled smoothly. When an attack is launched, HTTP latency rises sharply for about five seconds. After that, latency normalizes quickly. Quite an impressive result. On the other hand, while site users reported no obvious problems, the module spammed Apache's error log with backtraces, which forced me to look for another solution.

The next option was mod_antiloris, a very small module with just over 5 kB of source. It compiled smoothly and worked out of the box. There are no configuration options, though. During an attack, HTTP latency rises quickly and remains high. The site gets somewhat less responsive, but Apache continues to answer requests. Not as impressive as mod_qos, but at least it does not clog the error log with backtraces, so I am sticking with mod_antiloris for the time being.

Keep in mind that I have only tested attacks from a single client. A botnet executing a Slowloris attack is a completely different story.

And BTW: YMMV.

If you have a different approach for plain Debian systems, feel free to comment.

Web fonts slowly picking up pace

Web fonts have been adopted by all major web browsers. What users of Firefox 3.1+ with NoScript might not know is that NoScript blocks @font-face by default, so they do not get to see the nice fonts. There are two easy ways to enable web fonts, though:

  1. Left-click the NoScript icon in the bottom-right corner of the browser window and enable all “font@” entries under “Blocked Objects”. This temporarily allows web fonts on the current page.
  2. To enable web fonts permanently, left-click the NoScript icon in the bottom-right corner of the browser window and select “Options”. Open the “Embeddings” tab, uncheck “Forbid @font-face” and click “OK”.

Do we actually need web fonts?

Yes and no.

Yes: Typography enthusiasts have been waiting for years. Now browsers have widely adopted web fonts and every now and then even a free (to use) font comes along. The time for web fonts is now. And why should print designers have fonts and web designers should not? If man can travel into space the web should have web fonts.

No: Have I been waiting for web fonts? Certainly not. The web got along very well for 16 years without them. I like the web the way it is. Bad readability was mostly caused by bad color choices, not by fonts. To me, web fonts are primarily just another way of hurting someone's eyes.

Beware, though: most fonts have strict licenses which do not allow web distribution. So web designers have to resort to fonts with liberal licenses, Bitstream Vera and Droid Sans being two of them. Check font licenses carefully before jumping on the web fonts train!
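
For completeness, embedding such a font boils down to a @font-face rule roughly like this sketch; the file paths are placeholders and which formats you serve depends on the browsers you target:

  /* declare the font; the URLs are placeholders */
  @font-face {
    font-family: "Droid Sans";
    src: url("/fonts/DroidSans.woff") format("woff"),
         url("/fonts/DroidSans.ttf") format("truetype");
  }

  /* then use it like any other font family, with sane fallbacks */
  body {
    font-family: "Droid Sans", Helvetica, Arial, sans-serif;
  }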

Browser detection on the verge of IE6’s death

Internet Explorer 6 is on the verge of death. Google will phase out support for this old fellow very soon. FINALLY, I might add. While they will invite IE6 users to install Chrome, Firefox or some other modern browser, other websites redirect old browsers to their mobile sites. So it is important to minimize false positives in order not to confuse or aggravate users.

Just out of curiosity I examined about 1 million Internet Explorer HTTP requests. The shocking result: about 5% of the user agent strings were anything but unambiguous. Want to see examples? Look at this:

Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; Mozilla/4.0 (compatible; MSIE 7.0; Win32; 1&1); .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; Mozilla/4.0 (compatible; MSIE 7.0; Win32; 1&1); Mozilla/4.0 (compatible; MSIE 7.0; Win32; 1&1))

This is a mess, to say the least. It is an IE 8 all right, but if a program looks for “MSIE 7”, this user agent matches too.

IE 6 is also affected. I have encountered user agent strings with up to three different “MSIE \d” matches, this one for example:

Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; Mozilla/4.0 (compatible; MSIE 8.0; Win32; WEB.DE); SIMBAR={omitted}; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) ; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 1.1.4322; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)

This string matches MSIE 8, MSIE 7 and MSIE 6. You cannot even assume that the first MSIE match indicates the browser's real version.

Conclusion

I do not know what causes all of this; poorly written browser extensions, maybe. In any case, you cannot reliably detect browsers by matching a single substring anymore. Developers, watch out. Things got complicated again, thanks to Microsoft :)

One way to detect an Internet Explorer's real version could be to collect all “MSIE (\d)” matches and then take the highest version number as an indicator.
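
A quick sketch of that idea in JavaScript; the function name is mine, I use \d+ so that future two-digit versions are not cut off, and the result is of course only a heuristic:

  // Return the highest MSIE version found in a user agent string,
  // or null if the string does not mention MSIE at all.
  function detectIEVersion(ua) {
    var matches = ua.match(/MSIE (\d+)/g);
    if (!matches) {
      return null;
    }
    var highest = 0;
    for (var i = 0; i < matches.length; i++) {
      var version = parseInt(matches[i].replace("MSIE ", ""), 10);
      if (version > highest) {
        highest = version;
      }
    }
    return highest;
  }

  // For the second user agent string quoted above this returns 8,
  // even though the first MSIE token claims version 7.
  // Example call: detectIEVersion(navigator.userAgent);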