Are You A Ctrl-Clicker Too?

They say that the first step in achieving a cure is to actually admit you are an addict. So here we go: "My name is Alex and I'm an inveterate Ctrl-clicker". There, I feel better already. Perhaps I'm already half-way to kicking the habit, though I have to report that I'm having trouble finding an appropriate support group. I was looking forward to sitting round in a circle in some draughty church hall and confessing that I often have up to ten browser tabs open on the same website!

Perhaps I'm just extraordinarily impatient, but surely the whole reason that browsers have tabbed windows is to let you view several pages at once? When I'm looking for something on a site, be it information on some programming technique on MSDN, a gift for my wife's birthday on Amazon, or just reading the news on The Register, my left thumb automatically strays to the Ctrl key as I click multiple links in the page. Then I can quickly skip through the now-loaded pages to see if they contain anything I'm looking for, without losing my starting point. Surely this is how everyone uses the web?

A while ago I noticed my ISA Server would occasionally log an event that "may indicate the computer at [some IP address] is infected with a virus or is attacking the host", and that it would no longer be allowed to create TCP connections. After the initial panic attack I realized that this event exactly coincided with a spasm of mad Ctrl-clicking, but I never investigated further until I worked on some sample code for our "Claims Based Identity & Access Control Guide". This required installing the Fiddler HTTP proxy utility to examine the HTTP requests and responses that implement the authentication exchanges.

Out of interest I cleared the log and hit the Amazon.co.uk home page to look for a replacement hard disk for one of my aging machines while Fiddler was running. I suppose the fact that the browser made 67 requests to the server isn't unexpected - there's a lot of content on the home page. However, searching for suitable hard disks and then Ctrl-clicking on the seven that looked interesting had - within a couple of seconds - generated a total of 639 requests. And 254 of these were not to Amazon at all, but went to various tracking and advertising services. In total my browser downloaded 9,031,111 bytes! No wonder ISA Server gets suspicious...

And even something innocuous such as using a search engine results in a steady flow of requests. In my tests Bing made 22 requests to the server when it loaded, and then another request each time I typed in the search box, downloading something like 800 bytes each time. Google only made 16 requests initially, but every letter typed generated five requests, returning something like 14,000 bytes each time as it updated the list of links in the page. Such is the magic of JavaScript running within the pages to make searching easier and quicker...
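For the curious, the kind of thing going on under the covers looks roughly like the sketch below. It's only an illustration, not the real pages' code - the /suggest endpoint and the element IDs are invented for the example, and it assumes elements with those IDs exist in the page.

    // A minimal sketch of a per-keystroke suggestion request; the /suggest
    // endpoint and the element IDs are hypothetical, not Bing's or Google's.
    var box = document.getElementById("search-box");
    var list = document.getElementById("suggestions");
    box.addEventListener("keyup", function () {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/suggest?q=" + encodeURIComponent(box.value));
        xhr.onload = function () {
            // Replace the list of suggested links with whatever came back
            list.innerHTML = xhr.responseText;
        };
        xhr.send();
    });

Multiply that by every keystroke, and the request counts Fiddler reports start to make sense.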

However, if you read about the latest plans Google have to speed up searching in their Chrome browser, you'll probably need to prepare yourself for even heavier network loads. The idea is to have the browser automatically download, into a hidden tab, the entire target page for the link they figure you are most likely to click, so that it can spring instantly into view when you do what they expect. It sounds like a great plan until you consider the ramifications.
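As I understand it, the mechanism this kind of speculative loading is built on is Chrome's prerender link hint, which page authors can add themselves. The snippet below is just a sketch with an invented target URL, not anything from Google's own pages.

    // A sketch of the prerender hint: ask Chrome to fetch and render a
    // likely next page in a hidden tab, ready to swap in instantly.
    // The target URL here is hypothetical, purely for illustration.
    var hint = document.createElement("link");
    hint.rel = "prerender";
    hint.href = "http://www.example.com/likely-next-page";
    document.getElementsByTagName("head")[0].appendChild(hint);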

For example, they are already suggesting web page developers use their implementation of the W3C's Page Visibility API to minimize the activities occurring in the page when it's not visible. And presumably the hidden page will change each time you type a letter in the search box and a different one becomes the "most likely choice". Perhaps it will store them all, so typing a really long search string in the text box is not going to be a good idea unless you have plenty of available memory.
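As a rough illustration of what that suggestion means for page authors, here is a minimal sketch using the Page Visibility API in its unprefixed form (early Chrome builds expose it with a webkit prefix). The pause and resume functions are hypothetical stand-ins for whatever work the page would otherwise be doing.

    // Hypothetical stand-ins for the page's background activity
    function pauseBackgroundWork()  { /* stop timers, animations, polling */ }
    function resumeBackgroundWork() { /* start them up again */ }

    // Stop non-essential work while the page is hidden, and pick it up
    // again when the tab actually becomes visible to the user.
    document.addEventListener("visibilitychange", function () {
        if (document.hidden) {
            pauseBackgroundWork();
        } else {
            resumeBackgroundWork();
        }
    });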

Even more worrying, of course, is what happens if the "most likely choice" turns out to be a page you really don't want to download. Maybe it contains some of that type of content euphemistically labelled "NSFW" (not safe for work), so a simple typing error when searching for the definition of the C# keyword "public" could get you fired when the administrator examines the proxy server logs. Or initiate a visit from the FBI when you just wanted to find out why your server bombed again.

And what happens if the "most likely choice" turns out to be a page from one of those free antivirus scan sites that you studiously avoid when they pop up in the search results? Are writers of drive-by viruses likely to abide by the Page Visibility recommendations, and not execute their malicious code when the page is running in a hidden tab? But I suppose I'm just being a bit naive and old-fashioned in thinking that I'd like my browser to only access resources and sites that I want to load. As Fiddler reveals, that stopped happening a long while ago.

So maybe there is a market opening up for a new search engine - one that just takes your search string and sends back a plain HTML page containing the list of matching links. In fact, I seem to remember that one company used this approach as its USP at one time, though I suppose there's no money in it now. But I can't help wondering where our continual drift from browser/server to almost full client-server interaction, in which the server has increasing control over the client (your browser and computer), will lead us. And I've voiced my opinions plenty of times in the past on the prospect of JavaScript becoming our default programming language.

Mind you, I watched the video of the Windows 8 public preview the other day. The default interface is all HTML5 and JavaScript, and the browser runs full screen by default. And I didn't see any Ctrl-clicking going on there either - it's all scroll, drag, hold, and point - so maybe my addiction will be forcibly curtailed once I upgrade...