Dirty Little Secrets of Desktop Security

A few issues were raised in the great comments from Norm, Chaim, and Tim in response to my recent post on DLL security. To address some of these issues (and add a few more), I've decided to re-post an article I wrote a few years ago when I was with Falafel Software. Soberingly, the article remains timely because the state of desktop security has made only incremental progress since I originally wrote this piece. The article in its entirety follows below, reprinted with permission from Falafel.

The Dirty Little Secrets of Desktop Security
(or, Why We’re All Screwed)

If you can read this you are too close. That tired little pun pops into my head whenever I stop to consider the problem of desktop security. By the time you (and your desktop security software vendors) even know of the latest virus or worm it’s entirely possible that you have already been infected. “But wait,” you may say, “I’m much more security conscious than the average bear. I always promptly download and install vendor-supplied security patches, and I run antivirus, intrusion detection/prevention, and personal firewall software on my PCs.” Surely such security conscious users aren’t as susceptible to the next Code Red or Blaster worm as users less concerned with security? Right? Wrong. The dirty little secret of desktop security is that none of these measures may be enough to save you from the next wave of PC worms.

How malware spreads
To understand how this can be we must first understand how these infections spread from one computer to another. The most widespread recent outbreaks have replicated themselves in one of two ways: via email and by compromising legitimate services running on the target computer.

Email-borne malware, while remaining an enormous problem, is among the simplest to protect against. Most commercial antivirus or personal firewall software is capable of filtering incoming email for potentially dangerous payloads. Most email-borne malware is also spread more as a result of social engineering (e.g., tricking people into invoking an email attachment by taking advantage of natural human trust or curiosity) than as a result of any technical wizardry. For these reasons this article will not focus on this type of malware.

Services running on the target computer that are vulnerable to malware can be generalized into two flavors: those that do not need to receive unsolicited incoming network traffic and those that must listen for unsolicited incoming network connections to function properly.

The former group includes applications, such as web browsers, that accept inbound network traffic only in response to a corresponding outbound request, as well as various services that unnecessarily leave the computer open to external access. These open services can provide a convenient means for malware to enter your computer. The recent Blaster worm, for example, exploited a vulnerability in the Microsoft Windows Remote Procedure Call (RPC) service. Personal firewalls offer excellent protection for this class of applications and services: they block all incoming network traffic that is not a response to a corresponding outbound request or does not originate from a trusted network host.
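The blocking rule just described can be sketched in a few lines of code. This is a minimal illustration of the stateful, default-deny logic, with made-up class and host names, not any vendor's actual implementation:

```python
# Sketch of a personal firewall's stateful rule: inbound traffic is
# allowed only if it answers an earlier outbound request or comes from
# an explicitly trusted host. All names here are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src: str        # remote host the packet came from
    dst_port: int   # local port the packet is addressed to
    inbound: bool

class PersonalFirewall:
    def __init__(self, trusted_hosts=()):
        self.trusted = set(trusted_hosts)
        # (remote_host, local_port) pairs for connections we initiated
        self.outbound_sessions = set()

    def send(self, remote: str, local_port: int) -> None:
        # Record each outbound request so replies can be matched to it.
        self.outbound_sessions.add((remote, local_port))

    def allow(self, pkt: Packet) -> bool:
        if not pkt.inbound:
            return True  # outbound traffic is permitted
        if pkt.src in self.trusted:
            return True  # traffic from a trusted network host
        # Otherwise, only replies to our own outbound requests get through.
        return (pkt.src, pkt.dst_port) in self.outbound_sessions

fw = PersonalFirewall(trusted_hosts={"10.0.0.5"})
fw.send("www.example.com", 51000)                      # we browse a web site
reply = Packet("www.example.com", 51000, inbound=True)
probe = Packet("evil.example.net", 135, inbound=True)  # e.g., an RPC scan
print(fw.allow(reply))  # True  - response to our own request
print(fw.allow(probe))  # False - unsolicited, so it is blocked
```

Note that an unsolicited probe of the RPC port, the avenue Blaster used, is dropped, while the web reply we asked for passes through.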

The latter group of applications and services, those that must accept unsolicited network traffic in order to function, present a much larger problem. These applications and services include instant messenger (IM) applications, file and printer sharing, web servers, database servers, and other kinds of internet servers. I will refer to these applications and services generically as server software. What makes server software so difficult to secure? Let’s look at how traditional desktop security technology is employed to protect server software.

Patching
The industry’s growing awareness of security issues has forced software publishers to become more diligent about patching security vulnerabilities. That’s certainly good news; patching faulty software absolutely helps make the Internet safer. The bad news is that, simply put, patching can never solve the problem. The most glaring reason is that patches are often only available after an exploit has been released in the wild. When one considers that SQL Slammer infected about 90% of the vulnerable hosts on the internet (about 75,000 servers) in its first 10 minutes of existence, it becomes almost laughably obvious that such a voracious worm could never have been prevented by an after-the-fact patch (see https://www.caida.org/analysis/security/sapphire/ for SQL Slammer analysis).

While patches remain useful and necessary for protecting against known attacks, the continued abundance of Code Red scans (see https://www.caida.org/analysis/security/code-red/) still ambling about the Internet is a living testament to the fact that even old and well-known vulnerabilities continue to go un-patched in far too many cases. Keeping up with patches is difficult for a number of reasons.

For home users, ignorance of the threats or limited technical expertise is the most likely cause of un-patched software. Microsoft, in particular, has made great progress in improving this situation. Newer versions of Windows can automatically download (and even install) critical security updates, and the https://windowsupdate.microsoft.com website does a great job of automating the otherwise painful process of keeping the operating system up-to-date. However, this progress is stymied in some respects by the still vast numbers of older Windows installations out there and the fact that some users simply do not use the automated tools available to them.

Corporate environments present a different and more disturbing set of problems. Patches are released with such brisk frequency that it becomes a very painful (and sometimes downright impossible) task for an IT staff to keep up with their distribution across the user base. Patching in the corporate environment normally involves testing interactions with standard software and other patches, developing a deployment strategy, performing the deployment, and then handling any issues that arise among the user base. Often patches are bunched up and distributed en masse rather than one at a time in order to make the process more manageable and reduce the number of desktop “touches.” Even still, keeping up with patches across a large user base remains an expensive and resource-intensive enterprise.

Keep in mind also that even if patching is performed promptly and perfectly every time, users remain potentially susceptible to the new attack for which a patch does not yet exist.

Antivirus
Antivirus technology works by scanning executable files and comparing them against a database of known malware. If the antivirus software detects a match, it can take corrective action such as deleting or quarantining the file in question. Once again, this means that there will often be a window of opportunity for malware: the period of time between a new exploit being unleashed and the signature database being updated for that exploit. Also making detection difficult is some malware’s use of self-modifying code, which attempts to defy signature-based detection. Additionally, most antivirus technology does not provide protection against threats – including many buffer overflow exploits such as SQL Slammer – that do not deposit executable files on the local machine. Despite these limitations, it’s important to reduce the window of opportunity for infections by setting your antivirus tool to automatically download updates on a frequent basis – preferably daily. A number of vendors have quality antivirus offerings, including Symantec, McAfee, Trend Micro, and Computer Associates.
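The scan-and-match approach, and its blind spot for anything not yet in the database, can be sketched as follows. The signatures and names here are entirely fabricated for illustration; real antivirus engines use far more sophisticated matching:

```python
# Sketch of signature-based antivirus scanning: compare a file's bytes
# against a database of known-malware byte patterns. The patterns and
# malware names below are invented for illustration only.

SIGNATURES = {
    b"fake-worm-payload-v1": "Worm.Example.A",     # hypothetical pattern
    b"fake-trojan-marker": "Trojan.Example.B",     # hypothetical pattern
}

def scan(data: bytes):
    """Return the name of the first matching signature, or None."""
    for pattern, name in SIGNATURES.items():
        if pattern in data:
            return name
    # A brand-new worm with no published signature passes undetected --
    # the window of opportunity described above.
    return None

clean = b"just an ordinary document"
infected = b"MZ\x90\x00...fake-worm-payload-v1..."
print(scan(clean))     # None
print(scan(infected))  # Worm.Example.A
```

The key weakness is visible in the final branch: anything absent from `SIGNATURES` is silently treated as clean, which is exactly why the update lag matters.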

Intrusion Detection/Prevention
Intrusion detection and prevention (ID/P) applies the antivirus concept to network traffic: it works by inspecting network traffic and comparing patterns against a database of known attack profiles. ID/P again falls short in the area of new attacks; the system can only protect against what is contained in the signature database. ID/P is also notorious for false positives, which can make the system challenging to manage, especially for non-technical users. Because of this, quality personal firewalls often deliver better network security than ID/P utilities.
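Both behaviors, matching known attack profiles and tripping on harmless traffic, show up in even a toy version of the idea. The rules below are invented for illustration (the first loosely echoes the Code Red request shape mentioned later):

```python
# Sketch of ID/P signature matching over network payloads, including a
# deliberately loose rule that produces the kind of false positive the
# text describes. Rule names and patterns are made up.

import re

RULES = [
    ("CodeRed-style overflow", re.compile(rb"GET /default\.ida\?N{100,}")),
    ("Suspicious SQL keyword", re.compile(rb"(?i)xp_cmdshell")),
]

def inspect(payload: bytes):
    """Return the names of all rules the payload triggers."""
    return [name for name, pattern in RULES if pattern.search(payload)]

attack = b"GET /default.ida?" + b"N" * 224 + b"%u9090..."
benign = b"POST /forum HTTP/1.1\r\n\r\nHow do I disable xp_cmdshell safely?"

print(inspect(attack))  # ['CodeRed-style overflow']
print(inspect(benign))  # ['Suspicious SQL keyword']  <- a false positive
```

The second result is the management headache: an innocent forum post about hardening a database trips the same rule as a real attack, and someone has to triage the alert.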

Personal Firewalls
Personal firewalls work by blocking unapproved network traffic in or out of a computer. Rather than relying on a database of “bad” behavior, modern personal firewall policies are usually programmed to understand “good” behavior and to assume anything not explicitly good is bad and therefore to be blocked. While highly effective at blocking unwanted traffic, personal firewalls fall short in protecting legitimate server software. This is because firewalls see the network activity as valid (an incoming web server request, for example) but cannot detect that the network payload may be exploitive (e.g., a Code Red request).

Regarding personal firewalls, it’s important to note the distinction between application-aware personal firewalls and more limited port-protocol-address firewalls. Application-aware firewalls look not only at network traffic but also at which applications are engaging in that traffic; leading vendors of this type of software include Zone Labs, Symantec, and Sygate. More limited port-protocol-address firewalls can only manage traffic based on its destination, source, or type and pay no attention to which application is communicating on the network. An example of this type of firewall is the one built into Windows XP. I recommend an application-aware firewall for the most complete protection against untrusted applications.
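The practical difference between the two firewall types comes down to what each rule can see. This sketch contrasts them using hypothetical executable names and a deliberately simple allowed-port set:

```python
# Sketch contrasting a port-protocol-address rule with an
# application-aware rule. Application and port choices are illustrative.

ALLOWED_PORTS = {80, 443}              # e.g., outbound web traffic
TRUSTED_APPS = {"iexplore.exe", "outlook.exe"}  # hypothetical app list

def port_rule_allows(port: int) -> bool:
    # A port-protocol-address firewall can only say "port 80 is allowed";
    # it has no idea which program is doing the talking.
    return port in ALLOWED_PORTS

def app_rule_allows(app: str, port: int) -> bool:
    # An application-aware firewall also checks which executable is
    # making the connection, so an untrusted program is blocked even
    # on an otherwise "allowed" port.
    return port in ALLOWED_PORTS and app in TRUSTED_APPS

print(port_rule_allows(80))                 # True - any program may use port 80
print(app_rule_allows("iexplore.exe", 80))  # True - trusted app, allowed port
print(app_rule_allows("trojan.exe", 80))    # False - same port, untrusted app
```

The third case is why I prefer application-aware firewalls: malware that phones home over port 80 sails straight through a port-only rule but is stopped by the application check.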

Some vendors are blending these traditional security methods to provide broader protection, which is fundamentally a good thing, although the issue of protection against new, unknown vulnerabilities remains unaddressed. For example, running all the latest patches, antivirus, ID/P, and a personal firewall on a machine on the day SQL Slammer first appeared would not have prevented infection of a SQL Server installation.

While I do want to raise awareness of the menace presented by new server software worms, I would also like to point out that you should not interpret the issues raised here as reason not to use desktop security software. Thanks to those not-so-conscientious computer owners, well-known malware continues to bounce around the Internet despite having been addressed by security vendors long ago. Diligently applying vendor-supplied patches, running antivirus software with up-to-date signatures, and using personal firewall software will protect you against the thousands of known viruses and worms as well as most forms of network attack. What these things cannot protect you from is a new exploit against trusted server software. If antivirus and ID/P technologies are always behind the curve and firewalls are unable to tell good data from bad, the question becomes: how can server software be protected?

Looking for solutions
First off, the software development community needs to take responsibility for writing better software. While progress is being made, we’re not quite there yet. Defensive coding and security analysis must become as native to the development process as version control and unit testing.

Given that any database of known exploits would almost by definition never be totally up-to-date, it seems safe to exclude a signature-based approach as a solution to the server software problem. The opposite, “guilty until proven innocent” approach used by firewalls is much better at combating new, unknown threats. In this spirit, a new area of technology known as “application firewalls” has been emerging. Application firewalls inspect network traffic coming into (and potentially out of) server software and validate that the traffic constitutes a well-formed, valid communication. An example of this type of protection is Microsoft’s URLScan utility, which enables the administrator to define which web server requests should be allowed to reach the web server. This same type of technology can be applied to other kinds of servers as well. However, because an application firewall needs to be tuned for each specific installation, configuration will remain a challenge. Until the configuration problem is solved, application firewalls will remain in the realm of trained IT professionals.
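The “guilty until proven innocent” idea can be sketched as an allowlist filter in the spirit of URLScan. To be clear, the verbs, extensions, and length limit below are illustrative choices of mine, not URLScan’s actual defaults:

```python
# Sketch of an application-firewall allowlist in the spirit of URLScan:
# only requests matching the administrator's definition of "well-formed"
# reach the web server. The specific rules here are hypothetical.

ALLOWED_VERBS = {"GET", "HEAD", "POST"}
ALLOWED_EXTENSIONS = {".html", ".htm", ".css", ".gif", ".jpg"}
MAX_URL_LENGTH = 260

def request_allowed(verb: str, url: str) -> bool:
    """Default-deny: reject anything not explicitly known to be good."""
    if verb not in ALLOWED_VERBS:
        return False
    if len(url) > MAX_URL_LENGTH:
        return False  # oversized requests (common in buffer overflows)
    # Require a known, static file extension -- .ida, .exe, etc. fail here.
    dot = url.rfind(".")
    return dot != -1 and url[dot:].lower() in ALLOWED_EXTENSIONS

print(request_allowed("GET", "/index.html"))              # True
print(request_allowed("GET", "/default.ida?" + "N" * 224))  # False
```

Note that the Code Red-shaped request is rejected without any signature for Code Red in particular: `.ida?NNN...` simply isn’t on the list of good extensions. That is the appeal of the approach against brand-new worms, and also why each installation needs its own tuning, since the allowlist must match what that server legitimately serves.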

One novel solution to the worm problem is the use of a so-called antivirus virus: a worm that uses the same techniques as the bad guys to seek out and “infect” vulnerable machines. Once resident, the worm attempts to patch one or more of the security holes on the host – a kind of cyber vigilante, if you will. A recent example of this type of worm is the Welchia (or Nachi) worm (see https://www.symantec.com/avcenter/venc/data/w32.welchia.worm.html for details), which was written to find and patch computers vulnerable to the RPC exploit mentioned earlier. To date, these “good” bugs haven’t been well received by the security community, mostly because they are of unknown origin and unknown quality, hog network bandwidth, and make fundamental changes to the OS without the user’s permission. However, I feel there is some promise in this approach. While it doesn’t remove the gap between observation and antidote, it does narrow that gap by racing the infection around the Internet and patching vulnerable hosts before they can be exploited. If such antivirus viruses were produced and supported by legitimate software vendors, they could constitute the white blood cells in the bloodstream of the Internet.

Whatever shape the solutions may take, one thing is certain: the PC software industry must find a solution to the problem of vulnerable server software.