Why does Silverlight have a restricted port range for Sockets?
Silverlight restricts the ports of outgoing TCP socket connections to the range 4502 – 4534. Connecting to a different port requires the use of a server-side proxy or port redirector.
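Even within that range, Silverlight only completes a socket connection if the target server first serves a socket policy file over TCP port 943. As a rough sketch (element names follow the published clientaccesspolicy.xml schema; treat the exact contents as illustrative, not a hardened policy), a file granting any caller access to the full range looks like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- Illustrative: allows any origin; real deployments should scope this. -->
      <allow-from>
        <domain uri="*" />
      </allow-from>
      <grant-to>
        <!-- Only the restricted Silverlight range can be granted. -->
        <socket-resource port="4502-4534" protocol="tcp" />
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```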
One of the most common questions we hear from customers about this is, “Why do you restrict the port range in Silverlight? It doesn’t add any extra security.”
Actually, it does. The short explanation is, it gives network administrators control over their infrastructure by providing a convenient way to distinguish and route Silverlight traffic. For the long answer, read on.
Desktop trust model
When you run an application on your desktop you typically:
- Have intentionally downloaded and/or installed the application
- Have intentionally executed the application
In the case of a managed corporate environment you:
- Have been granted permission by an IT administrator to install and/or execute applications on your PC
In short, by installing and explicitly executing an application, you are asserting your trust that the application will not, for instance, go rummaging around your file system or corporate network.
Web trust model
The web trust model is different. A web browser is a trusted desktop application, so per above, there are expectations it will not do anything malicious. Since web content can come from anywhere, there are no security guarantees about the intentions of the content provider.
Moreover, the explicitness of application install and execution is not present by design. That is, just by navigating to a website, a number of Silverlight applications could be started – an advertisement playing in the corner of the page, a hidden application with no UI, etc.
None of these Silverlight applications should be able to break out of the “sandbox” trust model without your knowledge, and nothing short of application signing, a domain trust model, prompting, etc. could establish that trust.
We’ve worked hard to keep the experience as unobtrusive as possible by generally avoiding prompting. But even when necessary, such trust models are fragile in nature because there is such a tendency to just click OK when you’re on a trusted site, even if the content came from elsewhere.
User vs. IT admin security decision
The other consideration is whether the decision to trust a website or Silverlight application rests with the web browser user or with the IT administrator. An insecure client on the network can be an entry point to other normally secured systems.
Here is one well-known FTP attack that illustrates why a trust decision like this is necessary and why it needs to rest with the IT admin.
The FTP protocol has a PORT command which is typically used to establish connections using “active” FTP. It can also be used to initiate server-to-server transfers.
In a malicious case, the command can be exploited to perform port scanning, or in the case of some active packet filtering devices, to actually open ports in the firewall.
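The PORT command itself is just a short ASCII line: its argument packs an IP address and port number into six decimal bytes, which is why an in-page application with raw socket access could trivially forge one. A minimal sketch of that encoding (function names are illustrative, following RFC 959's h1,h2,h3,h4,p1,p2 format):

```python
# Sketch of FTP PORT command encoding (RFC 959). The argument is
# h1,h2,h3,h4,p1,p2 where the IP is h1.h2.h3.h4 and the port is
# p1*256 + p2. Helper names here are illustrative, not a real API.

def encode_port(ip: str, port: int) -> str:
    """Build the PORT command line an FTP client would send."""
    p1, p2 = divmod(port, 256)
    return "PORT {},{},{}".format(ip.replace(".", ","), p1, p2)

def decode_port(command: str) -> tuple[str, int]:
    """Recover the (ip, port) a PORT command asks the server to use."""
    parts = command.split()[1].split(",")
    ip = ".".join(parts[:4])
    port = int(parts[4]) * 256 + int(parts[5])
    return ip, port
```

Because the command is this simple to construct, an active packet filter watching port 21 has no way to distinguish a forged PORT line from a legitimate one without additional context.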
With a desktop FTP client application that the user or IT admin has installed, the behavior of that client is trusted to be benign and these commands are issued at the request of the user. The active packet filter is doing what it was configured to do by opening the necessary ports to allow the connection.
Now imagine if Silverlight were allowed to send those same commands. You visit a website, a hidden application sets up a TCP connection back to its server of origin, and then it sends the PORT command, which promptly opens a hole in your firewall. The website then uses this open port to establish a connection back to a victim machine on the internal network. The user never intended this action, so the trust model is broken. Moreover, the entire network has been placed at risk, since the connection could target a different computer than the one belonging to the user who indicated trust of the application.
Now, such attacks can typically be mitigated through additional configuration and patching, but this class of attack tends to re-surface with various protocols because of the liberties active filters take.
Similar exploits exist for HTTP. You’ll notice that Silverlight, Flash, XmlHttpRequest, etc. block a number of request headers. Now imagine if we allowed TCP connections over port 80 and a malicious application could craft its own HTTP request, effectively bypassing our HTTP implementation, which has these checks.
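The point of routing all HTTP through the sandboxed stack can be sketched like this: the runtime's request builder refuses certain headers, so page content cannot forge them, whereas writing raw bytes to port 80 would skip the check entirely. The header list below is an illustrative subset, not the exact list any runtime enforces:

```python
# Sketch: why a sandboxed HTTP stack matters. Browser-hosted APIs
# refuse to let page content set certain request headers; a raw TCP
# socket to port 80 would bypass this check. FORBIDDEN_HEADERS is an
# illustrative subset, not the exact list any runtime uses.

FORBIDDEN_HEADERS = {"host", "referer", "content-length", "connection"}

def build_request(method: str, path: str, headers: dict[str, str]) -> str:
    """Build a raw HTTP/1.1 request, rejecting sandboxed headers."""
    for name in headers:
        if name.lower() in FORBIDDEN_HEADERS:
            raise ValueError(f"header {name!r} may not be set by page content")
    lines = [f"{method} {path} HTTP/1.1"]
    lines += [f"{k}: {v}" for k, v in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"
```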
For a detailed look at one such threat, please see this article about CERT’s VU#435052. Security researcher Dan Kaminsky has published a presentation and paper on the subject.
In a corporate environment, IT administrators need to be able to secure their networks against such attacks, so there must be a way for them to retain control of these security decisions and to also be able to distinguish Silverlight traffic from trusted application traffic that might be using similar protocols.
Port ranges and transparent proxy abuse
So how do port ranges help? Well, it’s unlikely your active packet filter scanning FTP port 21 is going to notice if someone attempts to send a maliciously crafted command over port 4502. Moreover, with a clearly identified port range of Silverlight-only traffic, it’s easy to configure such filtering devices to handle that traffic differently and with an appropriate level of trust.
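In other words, the port number becomes a cheap classification key for the filtering device. A toy sketch of such a routing decision (the policy labels are illustrative; real devices would express this in their own rule language):

```python
# Sketch: a packet filter treating the Silverlight-only port range
# differently from ports carrying trusted desktop protocols.
# The policy labels returned here are illustrative.

SILVERLIGHT_PORTS = range(4502, 4535)  # 4502-4534 inclusive

def classify(dest_port: int) -> str:
    """Decide how a filtering device might handle a connection."""
    if dest_port in SILVERLIGHT_PORTS:
        return "silverlight"  # e.g. no protocol-aware rewriting or pinholing
    if dest_port == 21:
        return "ftp"          # active filter may inspect PORT commands here
    return "default"
```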
Other solutions
There are other solutions to this problem. I briefly mentioned some approaches the application model could take, along with some of the drawbacks. Another alternative would be to require obfuscation or encryption of all traffic to keep active packet filters from inspecting packets. This has the drawbacks of not necessarily being compatible with all protocols, of defeating the optimizations active packet filters can provide, and of making it more difficult to distinguish desktop application traffic from Silverlight-initiated network traffic.
Conclusion
For the time being, we’ve settled on the port range restrictions as the best compromise to maintain security and protect our customers. We understand this makes interoperability and connectivity challenging for some deployments, but we are planning to address that feedback with future work in this area.
I hope this helps to clear up some of the questions out there about this, and if you have more, please let us know. Thanks!
Comments
- Anonymous
June 26, 2009
The comment has been removed
- Anonymous
June 26, 2009
Thanks for the feedback! So there are two things at work here:
- How to give trusted SL apps permission to do more
- How to ensure network admins have ultimate control over these decisions

For the first case, you suggested a firewall in the browser. During feature design, we did discuss this as a potential option, but we ruled it out for some of these reasons:
- Firewalls are only about network trust; maybe there are other features besides networking that should be enabled too
- Firewalls require networking expertise. You need to know what a "port" is and you need to know enough about domains to be able to distinguish contoso.com from contoso.com.zz.
- Such a firewall could rely on prompting, but because content can come from multiple locations, it's difficult to make a good trust call in some circumstances, more so depending on your level of expertise. Imagine being on a perfectly valid site like contoso.com, and some advertisement in the corner attempts an operation which causes prompting: "contoso.com.zz requires access to port 21, allow?" The user is on the Contoso site, so they may presume anything that comes up from that site must be safe. It's also difficult to tell on a page with multiple applications which one actually initiated the request for elevation, and it might even have no UI at all. The article mentions a couple of other ways this trust could be established that would require less networking expertise and still maintain a comparable level of security.

For the second case of network admin control, Active Directory + Group Policy does seem a worthy candidate, doesn't it? :) Even with such options at our disposal, there are still a number of challenges and security implications. Rest assured though, we hear the feedback loud and clear and would like to provide a secure solution for our customers who need additional connectivity.
Anonymous
October 08, 2009
This does not make sense. Adobe, which uses a domain file served by port 943, does not have those port restrictions at all, and attacks, as far as I know, have not been an issue. We have developed this finance application: http://www.softcapital.com/radarlite It gets its streaming quotes through port 80. Our clients are at work and normally restricted from ports other than 80 and 21, so porting this application to Silverlight, which was our intention, is completely useless. Please rethink this; serving a policy file should be sufficient.
- Anonymous
December 07, 2009
The comment has been removed
- Anonymous
January 18, 2010
The comment has been removed
- Anonymous
January 18, 2010
Thank you for the feedback. One thing you might want to consider with that approach is that users behind a corporate proxy (as in most enterprises) will still be unable to connect; your solution would need to support proxy configuration and authentication. The path you're going down is workable on the desktop, where you're not in a sandbox and can discover some of that information or rely on native APIs, but in the browser or in a web-only environment, you really need something that can transparently traverse HTTP proxies. There are a number of existing solutions like Comet & BOSH, or emerging ones like Web Sockets (http://tools.ietf.org/html/draft-hixie-thewebsocketprotocol-68), which may be a better fit for the end-to-end connectivity you would like to achieve. For now, that's the best advice I can offer. Your scenario and feedback are appreciated and will be considered in product planning.
- Anonymous
January 19, 2010
Thanks for that, Aaron. I hadn't considered proxies before; I was counting on being able to connect to port 80. I think I will have to use long polling HTTP to pass messages back to the client. I'm not excited about this approach because it will add a lot of overhead due to headers and the need now to encode data in some way, which will add more overhead. I'm also concerned about the added latency proxies will introduce. I understand that this is no fault of Silverlight's, it's just the nature of the web.