Site Optimization Tutorial
Lesson 3

by Jason Cook

Page 3 — Cache In

Network (or Proxy) Caching

We previously discussed how browser-side caches store commonly-used images on users' hard drives, but it's important to note that similar caches exist all along the highways and byways of the Internet. These "Network Caches" make websites appear more responsive because information doesn't have to travel nearly as far to reach the user's computer.

Some webmasters are leery of network caches. They worry that remote caches might serve out-of-date versions of their site — an understandable concern, especially for sites like weblogs that update frequently. But even with a constantly-updating site, there are images and other pieces of content which don't change all that often. Said content would download a lot faster from a nearby network cache than it would from your server.

Thankfully, you can get a site "dialed in" pretty nicely with just a basic knowledge of cache-controls. You can force certain elements to stay cached for days on end while keeping other elements from being stored at all. META tags in your document won't cut it, though. You'll need to configure some HTTP settings to make caching really work for you. (Improved cache-controls, incidentally, are another benefit of using HTTP 1.1.) Anyhow, if all this piques your interest, start with Mark Nottingham's excellent Caching Tutorial For Web Authors.
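To give you the flavor, here's a minimal sketch of those HTTP settings, assuming an Apache server with the mod_expires and mod_headers modules enabled. The file types and lifetimes are only illustrative; tune them to how often your own content actually changes.

    # .htaccess sketch -- assumes Apache with mod_expires and mod_headers
    <IfModule mod_expires.c>
        ExpiresActive On
        # Images rarely change, so let browser and network caches
        # hold onto them for a week
        ExpiresByType image/png "access plus 7 days"
        ExpiresByType image/jpeg "access plus 7 days"
    </IfModule>
    <IfModule mod_headers.c>
        # Frequently-updated pages: every cache must check back
        # with the server before serving its copy
        <FilesMatch "\.html$">
            Header set Cache-Control "no-cache, must-revalidate"
        </FilesMatch>
    </IfModule>

With rules like these, a weblog's front page stays fresh while its graphics ride along on whatever cache sits closest to the reader.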

Every Bit Counts

Alrighty. So you've done the big stuff — dropped the bit depth on every PNG, cranked up the HTTP 1.1 compression, and taken a (metaphorical) weedwhacker to your old, convoluted table layouts. Yet you're still obsessed with how small, how fast, and how modem-user-friendly you can make your site. Ready to jump into some seriously obsessive-compulsive optimization?

You know those TV commercials where they zoom in on a supposedly "clean" kitchen counter, only to reveal wee anthropomorphic germ-creatures at play?

Well, you can similarly clean every extraneous detail from a site's layout and still have some nasty, nasty cruft living in the source code. What's the point of novel-length META keyword lists and content tags? C'mon, do you still believe that search engines look at that stuff? Not in this millennium. You'll get better search referrals by thinking carefully about the real content on your pages and building an authoritative site that's widely linked to.

Streamlining the <head> section, clearing out unneeded META keyword/author/description content and likewise junking giant scripts, makes a bigger impact, kilobyte for kilobyte, than sacrifices made elsewhere on the page. A short <head> ensures that the initial chunks of data the user receives contain some "real" content, which gets displayed immediately. That's another notch for "perceived speed" improvements.
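As a hypothetical illustration (the site name and filenames here are made up), compare a cruft-laden <head> with a stripped-down one:

    <!-- Before: kilobytes the search engines ignore and the reader waits on -->
    <head>
    <title>Widgets Inc.</title>
    <meta name="keywords" content="widgets, widget, cheap widgets, discount
        widgets, best widgets, widget sale, buy widgets online, widgetry">
    <meta name="description" content="Several paragraphs of marketing copy...">
    <meta name="author" content="Widgets Inc. Webmaster">
    <script type="text/javascript">
        // dozens of lines of rollover code used on one page of the site
    </script>
    </head>

    <!-- After: a title and a shared stylesheet are usually plenty -->
    <head>
    <title>Widgets Inc.</title>
    <link rel="stylesheet" type="text/css" href="site.css">
    </head>

Any scripts you genuinely need can move into an external .js file, where they download once and get cached instead of riding along in every page's <head>.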

Of course, there are plenty of regular <body> bytes still worth tossing. Start with HTML comments, redundant whitespace, and returns. Stripping all these invisible items from your source code yields a few extra kilobytes of savings on an average page. You can do this manually if you like, or check out utilities like VSE Web Site Turbo and WebOverdrive, which can batch-process the drudgery of culling extraneous spaces, tabs, comments, line breaks, and the like.
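Here's a hypothetical fragment showing what those tools do. (One caveat: whitespace inside <pre> blocks is significant, so don't strip blindly there.)

    <!-- Before: comments, tabs, and returns the visitor never sees -->
    <body>
        <!-- main navigation table starts here -->
        <table>
            <tr>
                <td> Home </td>
            </tr>
        </table>
    </body>

    <!-- After: the same markup, minus the invisible bytes -->
    <body><table><tr><td>Home</td></tr></table></body>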


