You Have To Trust Somebody...

After spending part of the seasonal holiday break reorganizing my network and removing ISA Server, this week's task was reviewing the result to see whether it fixed the problems or just introduced more, and assessing what impact it has had on the security and resilience of the network as a whole.

I always liked the fact that ISA Server sat between my internal domain network and the different subnet that hosted the router and modems. It felt like a warm blanket that would protect the internal servers and clients from anything nasty that crept in through the modems, and prevent anything untoward from escaping out onto the ‘Net.

The new configuration should, however, do much the same. OK, so the load-balancing router is now on the internal subnet, but its firewall contains all the outbound rules that were in ISA Server, so nothing untoward should be leaking out through some nefarious open port. And all incoming requests are blocked. Beyond the router are two different subnets connecting it to the ADSL and cable modems, and both of those have their firewalls set to block all incoming packets. So I effectively have a perimeter network (we're not allowed to call it a DMZ any more) as well.

But there's no doubt that ISA Server does a lot more clever stuff than my router firewall. For example, it would occasionally tell me that a specific client had more than the safe number of concurrent connections open when I went on a mad spree of opening lots of new tabs in IE.

ISA Server also contained a custom deny rule for a set of domains that were identified as being doubtful or dangerous, using lists I downloaded from a malware domains service that I subscribe to. I can't easily replicate this in the router's firewall, so another solution was required. Which meant investigating some blocking solution that could be applied to the entire network.
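To give an idea of what replicating it by hand would involve, a script could turn the downloaded list into DNS "sinkhole" entries for the internal DNS server to serve instead of the real records. The sketch below is just an illustration of the idea (the file names and the 0.0.0.0 sinkhole address are my own assumptions, not anything that came from ISA Server):

# A rough sketch: read a downloaded malware-domain list and emit
# "sinkhole" entries that an internal DNS server could serve instead,
# so lookups for those domains go nowhere. The file names and the
# 0.0.0.0 sinkhole address are assumptions for illustration only.

INPUT_LIST = "malware-domains.txt"      # one domain per line, '#' for comments
OUTPUT_FILE = "sinkhole-records.txt"    # lines suitable for a hosts file or zone import
SINKHOLE_IP = "0.0.0.0"                 # non-routable address to swallow the traffic

def load_domains(path):
    """Return a de-duplicated, sorted list of domains, ignoring blanks and comments."""
    domains = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            entry = line.strip().lower()
            if entry and not entry.startswith("#"):
                domains.add(entry)
    return sorted(domains)

def write_sinkhole(domains, path):
    """Write one 'IP domain' line per blocked domain."""
    with open(path, "w", encoding="utf-8") as f:
        for domain in domains:
            f.write(f"{SINKHOLE_IP} {domain}\n")

if __name__ == "__main__":
    blocked = load_domains(INPUT_LIST)
    write_sinkhole(blocked, OUTPUT_FILE)
    print(f"Wrote {len(blocked)} sinkhole entries to {OUTPUT_FILE}")

Of course, maintaining something like that by hand every time the list changes is exactly the chore I was hoping to avoid, hence the search for something that covers the whole network.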

Here in Britain, our deeply untechnical Government has responded to media-generated panic around the evils of the Internet by mandating that all ISPs introduce filtering for all subscribers. What would be really useful would be a system that blocked both illegal and malicious sites and content. Something like this could go a long way towards reducing the spread of viruses and Trojans, and make the Web safer for everyone. But, of course, that doesn't get votes.

Instead, we have a half-baked scheme that is supposed to block "inappropriate content" to "protect children and vulnerable adults". That's a great idea in principle, though some experts consider it totally unworkable. But it's better than nothing, I guess, even if nobody seems to know exactly what will be blocked. I asked my ISPs for more details of (a) how it works – is it a safe DNS mechanism, URL filtering, or both; and (b) whether it will block known phishing sites and sites containing malware.

The answer to both questions was, as you'd probably expect, "no comment". They either don't know, can't tell me (or they'd have to kill me), or won't reveal details in order to maintain the integrity of the mechanism. I suspect that they know it won't really be effective, especially against malware, and they're just doing it because not doing so would look bad.

So the next stage was to investigate the "safe DNS services" that are available on the ‘Net. Some companies that focus on identifying malicious sites offer DNS lookup services that automatically redirect requests for dangerous sites to a default "blocked" URL by returning a replacement IP address. The idea is that you simply point your own DNS to their DNS servers and you get a layer of protection against client computers accessing dangerous sites.

Previously I've used the DNS servers exposed by my ISPs, or public ones such as those from Google and OpenNIC, which don't seem to do any of this clever stuff. But of the several safe DNS services I explored, some were less than ideal. At one, the secondary DNS server was offline or failing; at another, every DNS lookup took five seconds. In the end the two candidates I identified were Norton ConnectSafe and OpenDNS. Both require sign-up, but as far as I can tell both are free. In fact, you can see the DNS server addresses even without signing up.
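A quick latency check is enough to weed out the slow ones. The rough Python sketch below shows the kind of thing I mean; it assumes the third-party dnspython package (nslookup and a stopwatch work just as well), and it only lists the published OpenDNS addresses because the ConnectSafe ones come from their sign-up page:

# A quick-and-dirty timing check for candidate DNS resolvers, assuming
# the third-party dnspython package is installed (pip install dnspython).
import time
import dns.resolver

CANDIDATES = {
    "OpenDNS primary": "208.67.222.222",
    "OpenDNS secondary": "208.67.220.220",
    # Add the ConnectSafe addresses from their sign-up page here if you use them.
}

def time_lookup(name, server):
    """Return the elapsed seconds for a single A-record lookup against server."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    resolver.lifetime = 5  # anything slower than this is no use anyway
    start = time.perf_counter()
    resolver.resolve(name, "A")
    return time.perf_counter() - start

if __name__ == "__main__":
    for label, server in CANDIDATES.items():
        elapsed = time_lookup("example.com", server)
        print(f"{label} ({server}): {elapsed:.3f}s")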

Playing with nslookup against these DNS servers revealed that they seem fast and efficient. OpenDNS says it blocks malware and phishing sites, whereas Norton ConnectSafe has separate DNS server pairs for different levels of filtering. However, ConnectSafe seems to be in some transitional state between v1 and v2 at the moment, with conflicting messages when you try to test your setup. And neither its test page nor the OpenDNS one showed that filtering was enabled, though the OpenDNS site contains some example URLs you can use to test that their DNS filtering is working.
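For anyone who wants to check from the command line rather than relying on the test pages, the sort of comparison I mean looks like this in Python. It's a rough sketch assuming the third-party dnspython package, and you'd substitute one of the test names from the OpenDNS site for the placeholder; if filtering is working, the filtering resolver returns the address of its "blocked" page rather than the real one:

# A minimal sketch of a filtering check, assuming dnspython is installed.
# Compare the answer from an unfiltered public resolver with the answer
# from OpenDNS for a name that the service says it blocks.
import dns.resolver

PLAIN_DNS = "8.8.8.8"         # Google public DNS, no filtering
OPENDNS = "208.67.222.222"    # OpenDNS resolver (published address)

def addresses(name, server):
    """Return the sorted A-record addresses for name as answered by server."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    resolver.lifetime = 5
    return sorted(rdata.address for rdata in resolver.resolve(name, "A"))

if __name__ == "__main__":
    # Substitute one of the test names listed on the OpenDNS site.
    test_name = "example.com"
    plain = addresses(test_name, PLAIN_DNS)
    filtered = addresses(test_name, OPENDNS)
    print("Plain DNS:", plain)
    print("OpenDNS:  ", filtered)
    print("Filtering in effect?", "yes" if plain != filtered else "apparently not")

If the two answers match for a name that is supposed to be blocked, the requests probably aren't being filtered at all.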

The other issue I found with ConnectSafe is that the DNS Forwarders tab in Windows Server DNS Manager can't resolve their name servers (though they seem to work OK afterwards), whereas the OpenDNS servers can be resolved. Not that this should make any difference to the way DNS lookups work, but it was annoying enough to make me choose OpenDNS. Though I guess I could include both sets as Forwarders. It's likely that both of them keep their malware lists more up to date than I ever did.
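Incidentally, if you'd rather script the change than click around the Forwarders tab, something along these lines should do the same job. It's a rough sketch of my own, shelling out to the built-in dnscmd tool on the DNS server itself (run it there with administrative rights), using OpenDNS's published resolver addresses:

# A rough sketch: set the DNS server's forwarders from a script by
# calling the built-in dnscmd tool, instead of using the Forwarders tab.
import subprocess

FORWARDERS = ["208.67.222.222", "208.67.220.220"]  # OpenDNS resolvers

def set_forwarders(addresses):
    """Replace the DNS server's forwarder list with the given addresses."""
    # dnscmd /ResetForwarders overwrites the existing list, so include
    # every forwarder you want to keep.
    subprocess.run(["dnscmd", "/ResetForwarders", *addresses], check=True)

if __name__ == "__main__":
    set_forwarders(FORWARDERS)
    print("Forwarders set to:", ", ".join(FORWARDERS))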

So now I've removed all but the OpenDNS ones from my DNS Forwarders list for the time being while I see how well it works. Of course, what's actually going on is something equivalent to DNS poisoning, where the browser shows the URL you expect but you end up on a different site. But (hopefully) their redirection is done in a good way. I did read reports on the Web of these services hijacking Google searches and displaying annoying popups, but I'm not convinced that a reputable service would do that. Though I will be doubly vigilant for strange behaviour now.

Though I guess, at some point, you just have to trust somebody...