Network (or Proxy) Caching
We previously discussed how browser-side caches store commonly used images on users' hard drives, but it's important to note that similar caches exist all along the highways and byways of the Internet. These "Network Caches" make websites feel more responsive because information doesn't have to travel nearly as far to reach the user's computer.
Some webmasters are leery of network caches. They worry that remote caches might serve out-of-date versions of their site, an understandable concern, especially for sites like weblogs that update frequently. But even on a constantly updating site, there are images and other pieces of content that don't change all that often. Said content would download a lot faster from a nearby network cache than it would from your server.
Thankfully, you can get a site "dialed in" pretty nicely with just a basic knowledge of cache controls. You can force certain elements to be cached for days on end while keeping other elements from being stored at all. META tags in your document won't cut it, though. You'll need to configure some HTTP settings to make caching really work for you. (Improved cache controls, incidentally, are another benefit of using HTTP 1.1.) Anyhow, if all this piques your interest, start with Mark Nottingham's excellent Caching Tutorial For Web Authors.
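To make that concrete, here's a sketch of what those HTTP settings might look like on an Apache server. It assumes the `mod_expires` and `mod_headers` modules are available; the file types and expiration windows are illustrative choices, not recommendations:

```apache
# Cache images aggressively -- they rarely change.
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/png "access plus 7 days"
  ExpiresByType image/gif "access plus 7 days"
</IfModule>

# Keep frequently updated HTML pages out of caches entirely.
<IfModule mod_headers.c>
  <FilesMatch "\.html$">
    Header set Cache-Control "no-store"
  </FilesMatch>
</IfModule>
```

With a split like this, a weblog's front page always comes fresh from your server, while its graphics ride along from whatever cache happens to be nearby.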
Every Bit Counts
Alrighty. So you've done the big stuff: dropped the bit depth on every PNG, cranked up the HTTP 1.1 compression, and taken a (metaphorical) weedwhacker to your old, convoluted table layouts. Yet you're still obsessed with how small, how fast, and how modem-user-friendly you can make your site. Ready to jump into some seriously obsessive-compulsive optimization?
You know those TV commercials where they zoom in on a supposedly "clean" kitchen counter, only to reveal wee anthropomorphic germ-creatures at play?
Well, you can similarly clean every extraneous detail from a site's layout and still have some nasty, nasty cruft living in the source code. What's the point of novel-length META keyword lists and content tags? C'mon, do you still believe that search engines look at that stuff? Not in this millennium. You'll get better search referrals by thinking carefully about the real content on your pages and building an authoritative site that's linked to widely.
Ridding the <head>er section of unneeded META keyword/author/description content, and likewise junking giant scripts, makes a bigger impact, kilobyte for kilobyte, than sacrifices made elsewhere on the page. A short <head> ensures the initial chunks of data the user receives contain some "real" content, which gets displayed immediately. That's another notch for "perceived speed" improvements.
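For illustration, a stripped-down <head> might carry little more than a title and a stylesheet link. (The title and filename below are placeholders, not a prescription.)

```html
<head>
  <title>My Site: Current Entries</title>
  <link rel="stylesheet" href="/styles/main.css">
</head>
```

Everything the old keyword-stuffed version did for you, this version does too, in a fraction of the bytes, and the browser reaches your actual content that much sooner.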
Of course, there are plenty of regular <body> bytes still worth tossing. Start with HTML comments, redundant whitespace, and returns. Stripping all these invisible items from your source code yields extra kilobytes of savings on average. You can do this manually if you like, or check out utilities like VSE Web Site Turbo and WebOverdrive that can batch-process the drudgery of culling extraneous spaces, tabs, comments, line breaks, and the like.
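If you'd rather roll your own, the core of what those utilities do can be sketched in a few lines of Python. This is a deliberately naive version: it assumes the page contains no <pre> blocks, inline scripts, or other whitespace-sensitive content, which the commercial tools handle more carefully.

```python
import re

def strip_cruft(html: str) -> str:
    """Naive HTML slimmer: drops comments and redundant whitespace.

    Assumes no <pre>, <textarea>, or inline <script> content whose
    whitespace matters -- a sketch, not a production minifier.
    """
    # Remove HTML comments, including multi-line ones.
    html = re.sub(r"<!--.*?-->", "", html, flags=re.DOTALL)
    # Collapse runs of spaces, tabs, and returns into a single space.
    html = re.sub(r"\s+", " ", html)
    # Drop the leftover spaces between adjacent tags entirely.
    html = re.sub(r">\s+<", "><", html)
    return html.strip()

page = "<html>  <!-- old layout table -->\n<body>\n  <p>Hello</p>\n</body> </html>"
print(strip_cruft(page))  # -> <html><body><p>Hello</p></body></html>
```

Run it over a directory of pages and the invisible bytes add up quickly; just diff the output in a browser first to make sure nothing whitespace-sensitive got flattened.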