Security technologies

Traditional antivirus solutions – are they effective against today’s threats?

It’s clear that the nature of the threat to PC users has changed significantly over the years. Today’s threats are more complex than ever before. Much of today’s malware, and this includes Trojans, backdoors and spammers’ proxy servers as well as viruses and worms, is purpose-built to hijack users’ machines; and a single Trojan can easily be found on many thousands of infected PCs. Malicious code may be embedded in e-mail, injected into fake software packs, or placed on ‘grey-zone’ web pages for download by a Trojan installed on an infected machine. The scale of the problem, in terms of numbers alone, has also continued to increase – the Kaspersky Lab antivirus databases now contain close to 100,000 records.

In recent years, the ability of traditional signature-based antivirus solutions to deal with the growing complexity of today’s attacks has been called into question. Starting with the CodeRed outbreak in July 2001, some commentators began to assert that the signature scanner had become obsolete and that the future belonged to behavioral analysis of one kind or another.

‘The signature-based desktop antivirus software used by most enterprises provides marginal value today, and that value is steadily decreasing. Gartner believes that enterprises should begin to augment and eventually replace signature-based techniques with more-robust approaches. Enterprises that don’t will be swamped by malicious software riding the coming wave of Web services.’
Signature-Based Virus Detection at the Desktop is Dying, Gartner, 2001.

So how has the threat changed and what conclusions can be drawn about the effectiveness of traditional antivirus solutions?

The speed at which today’s threats strike has changed out of all proportion. In the ‘good old days’, the ability of a virus to spread was limited by the user. Viruses could only travel as fast as users’ activity let them. Boot sector viruses, which represented the biggest threat to PC users, accounting for more than 75% of all infections until the mid-1990s, relied on the exchange of floppy disks in order to spread. This meant that they moved only slowly in global terms and infections tended to be localized. Things changed significantly when macro viruses first appeared in 1995. The fact that they targeted data files (primarily Microsoft Office documents and spreadsheets) gave them something of a head start. So too did the fact that email was at this time starting to become a key business tool. This meant that the virus could ‘piggyback’ on all emails sent by the infected user and so find its way to anyone contacted by the user. But even for macro viruses, user action was required: they relied on an unsuspecting user exchanging infected files.

Melissa, which appeared in March 1999, marked a quantum leap forward in terms of speed of infection. On the face of it, Melissa seemed to be just another macro virus. However, unlike earlier macro viruses, which waited for the user to send the infected data, Melissa ‘hijacked’ the email system to distribute itself proactively. (Melissa wasn’t the first virus that sought to use email to ‘mass mail’ itself. In 1998, the RedTeam virus included code to send itself via the Internet. However, this virus targeted Eudora mail only and failed to spread in significant numbers.) All that was required of the user was to double-click on the infected email attachment. After this, the virus harvested email addresses from the Outlook address book and sent itself directly to the contacts listed in it. This ‘mass-mailing’ allowed Melissa to spread further and faster than any previous macro virus. Corporate email systems quickly became clogged with email and many simply bowed under the pressure. It’s hardly surprising that Melissa set a trend. Since March 1999, nearly all of the major viruses and worms to threaten corporate and home users alike have included mass-mailing capability.

Other developments have also combined to enable threats to spread further and faster than ever before.

In the first place, an increasing number of threats now make use of system exploits to enable them to get a foothold in the corporate network and spread more rapidly. Such attack methods were previously associated with the activities of hackers, rather than virus writers, so this has marked a significant departure from the older generation of viruses. Previously, virus writers relied on their own code in order to spread and let the unsuspecting user do the rest. Increasingly, today’s threats have woken up to the potential ‘helping hand’ provided by vulnerabilities in common applications and operating systems. Interestingly, Melissa was the first threat to make use of such a vulnerability, tapping into the spreading capability offered by Microsoft Outlook. However, it wasn’t until 2001, with the appearance of CodeRed and Nimda, that this started to become the stock-in-trade of viruses and worms. CodeRed, which appeared in July 2001, was a ‘fileless’ worm. In a complete departure from existing virus practice, the worm existed only in memory and made no attempt to infect files on an infected machine. The worm used a Microsoft IIS server vulnerability (described in MS01-033, ‘Unchecked Buffer in Index Server ISAPI Extension Could Enable Web Server Compromise’) to attack Windows 2000 servers. It spread via TCP/IP transmissions on port 80, launching itself in memory via a buffer overflow and then sending itself in the same way to other vulnerable servers. Nimda appeared shortly afterwards, in September 2001. Nimda infected files but, unlike earlier mass-mailing threats, didn’t rely on the user to click on an infected EXE file attached to an email message. Instead, it made use of an Internet Explorer vulnerability (described in MS01-020, ‘Incorrect MIME header can cause Outlook to execute e-mail attachment’) to launch itself automatically on vulnerable systems. This was a six-month-old vulnerability, but a great many systems remained un-patched and vulnerable to attack, and this helped Nimda to infect systems all over the globe in the space of just a few hours.

The use of system exploits has now become commonplace. In fact, some threats have avoided the use of ‘traditional’ virus techniques altogether. Lovesan, Welchia and, more recently, Sasser are examples of Internet worms pure and simple. There’s no mass-mailing, there’s no requirement for a user to run an infected program. Instead, these threats spread directly across the Internet, from machine to machine, using various system vulnerabilities. Lovesan exploited the MS03-026 vulnerability (‘Buffer Overrun In RPC Interface Could Allow Code Execution’). Welchia exploited the same vulnerability, plus MS03-007 (‘Unchecked Buffer In Windows Component Could Cause Server Compromise’). Sasser exploited the MS04-011 vulnerability (a buffer overflow in the Windows LSASS.EXE service).

Others have combined the use of system exploits with other infection methods. Nimda, for example, incorporated several attack mechanisms. As well as the mass-mailing aspect of the virus outlined above, Nimda also appended viral exploit code (in the form of malicious JavaScript) to HTML files. If the infected machine was a web server, users could become infected when they accessed the infected pages. Nimda went even further in its efforts to spread across the corporate network by scanning the network for accessible resources and dropping copies of itself there, to be run by unsuspecting users. On infected machines, the virus also converted the local drive(s) to open shares, providing remote access to anyone with malicious intent. For good measure, Nimda also used the MS00-078 exploit (‘Web Server Folder Traversal’) in Microsoft IIS (Internet Information Server) to infect vulnerable servers by downloading a copy of itself from already infected machines on the network. Nimda’s multi-faceted attack strategy, coupled with its use of system vulnerabilities, led many to refer to it as a ‘compound’ threat.

This trend has continued. Many of today’s ‘successful’ threats (successful from the author’s perspective, that is) make use of multiple attack mechanisms and use system vulnerabilities to bypass the user and launch code automatically, dramatically reducing the ‘lead time’ between the appearance of a new threat and it reaching epidemic proportions. There’s no question that today’s threats are faster than ever before. Where it used to take weeks, or even months, for a virus to achieve widespread circulation, today’s threats can achieve worldwide distribution in hours – riding on the back of our business-critical email infrastructure and exploiting the increasing number of system vulnerabilities that give them a springboard into the corporate enterprise.

Not all successful threats make use of system vulnerabilities. The mass-mailing technique pioneered by Melissa in 1999 continues to prove successful, particularly where ‘social engineering’ is used to conceal the malicious intent of the e-mail borne virus or worm. Social engineering refers to a non-technical breach of security that relies heavily on human interaction, tricking users into breaking normal security measures. In the context of viruses and worms specifically, it means any trick used to beguile naïve users into running the malicious code, typically by clicking on an attached file. The results can be very effective for the virus author. The Swen worm (September 2003) masqueraded as a special patch from Microsoft that would close all security vulnerabilities. Swen was very convincing, not just because it used realistic dialog boxes, but because it came hard on the heels of Lovesan and Welchia and appeared at a time when the patching of vulnerable systems was being given a high profile. Mimail.i (November 2003) combined social engineering with a ‘phishing’ scam, trying to trick users into providing PayPal credit card information, which was then forwarded to the author of the worm. Such phishing scams have become increasingly common in recent months.

The number of new threats continues to grow steadily, with several hundred new threats appearing every day. As outlined above, many of today’s threats are a composite ‘bundle’ containing different types of threat. Malicious code writers have at their disposal a wide-ranging malware ‘menu’. Alongside the ‘traditional’ threat from viruses, there are now e-mail and Internet worms, Trojans and various other types of threat. Often a virus or worm will drop a backdoor Trojan onto the infected system. This allows remote access to the machine by the author of the virus or worm, or by whoever has ‘leased’ the Trojan from them for spam propagation or other malicious purposes. It’s clear, for example, that successive variants of Sobig, Mydoom, Bagle and Netsky have been used to ‘seed’ computers in the field for later use. Or the code may include a downloader Trojan, specifically designed to pull down malicious code from a remote site – perhaps an update to the virus or worm. Then again, it may include a denial-of-service (DoS) attack, designed to bring down particular web sites, as with CodeRed, Mydoom and Zafi.b.

The increasing speed of attack has placed a greater emphasis than ever before on the speed at which antivirus vendors respond to new threats. In the ‘good old days’ referred to above, quarterly updates were enough for most customers. Later, monthly updates became standard. Then, in the wake of Melissa and LoveLetter, most anti-virus vendors switched to weekly virus definition updates. Now, several of them offer daily updates. (One antivirus vendor, Kaspersky Lab, offers hourly virus definition updates.) Speed of response has even featured in some recent independent antivirus tests (AV-Test.org & GEGA IT-Solutions GbR reviewed speed of response several times in 2004). Nevertheless, although virus definitions are available faster than ever before, there is still a gap between the appearance of a new threat and the means of blocking it, a gap that allows a virus or worm to spread unchecked. And this brings us back to the questions posed earlier. Can anti-virus products cope with the threats of today and tomorrow? Is the signature scanner obsolete? Does the future belong to behavioral analysis or other generic technologies? In order to answer these questions, we need to take a look at the proposed alternatives.

One of the technologies often seen as a successor to signature-based scanning is behavior blocking. It’s not a new idea. Some security vendors adopted this approach in the early 1990s, in response to a sharp rise in the number of viruses, which they believed might overwhelm antivirus researchers.

Traditional antivirus scanners hold signatures of malicious code in a database and cross-check against this database during the scan process. Behavior blockers, by contrast, determine whether an application is malicious or not according to what it does. If an application does something that falls outside the range of acceptable actions, its operation is restricted. For example, trying to write to certain parts of the system registry, or writing to pre-defined folders, may be defined as a threat. The action can be blocked, or the user notified about the attempted action. This fairly simple approach can be further refined. It’s possible, for example, to restrict the access of one application (let’s say allowing Internet Explorer read-only access to limited portions of the system registry) while giving unrestricted access to other applications.
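
As a rough illustration of how such policy rules might look, here is a minimal Python sketch that checks each attempted action against a small rule table, with per-application exceptions. The application names, registry keys, folders and rules are invented for the example and don’t reflect any particular product.

# Minimal sketch of a rule-based behavior blocker (illustrative only).
# Application names, paths and the rule set are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Action:
    process: str   # name of the program performing the action
    kind: str      # e.g. "registry_write", "file_write"
    target: str    # registry key or file path being touched

# Policy rules: (action kind, protected location prefix, processes explicitly allowed)
RULES = [
    ("registry_write", r"HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run", {"installer.exe"}),
    ("file_write",     r"C:\Windows\System32",                                {"trustedupdate.exe"}),
]

def evaluate(action: Action) -> str:
    """Return 'block' if the action violates policy, otherwise 'allow'."""
    for kind, protected_prefix, allowed in RULES:
        if action.kind == kind and action.target.startswith(protected_prefix):
            # Per-application refinement: some programs get wider access.
            return "allow" if action.process in allowed else "block"
    return "allow"

if __name__ == "__main__":
    print(evaluate(Action("unknown.exe", "registry_write",
                          r"HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\evil")))    # block
    print(evaluate(Action("installer.exe", "registry_write",
                          r"HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\setup")))   # allow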

An alternative behavioral method is to ‘wrap’ a downloaded application and restrict its action on the local system – the application is run in a protective ‘sandbox’ (or ‘playground’, or ‘secure cache’) to limit its actions according to a pre-defined policy. The activity performed by the program is checked against a set of rules. Depending on the policy set, the program’s actions may be considered a violation of the policy, in which case the rogue action is blocked.
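
The sketch below, again purely illustrative, shows the sandbox idea in miniature: a wrapper records each action a downloaded program attempts and refuses anything that falls outside a pre-defined policy. The policy settings, action names and addresses are hypothetical.

# Illustrative sketch of the 'sandbox' / 'playground' idea: the wrapped program
# must ask permission before acting, and actions outside the policy are blocked.

class PolicyViolation(Exception):
    pass

class Sandbox:
    def __init__(self, allowed_dirs, allow_network=False):
        self.allowed_dirs = tuple(allowed_dirs)
        self.allow_network = allow_network
        self.log = []                      # record of everything the program tried

    def request(self, action, detail):
        """The wrapped program asks permission before performing an action."""
        self.log.append((action, detail))
        if action == "write_file" and not detail.startswith(self.allowed_dirs):
            raise PolicyViolation(f"write outside sandbox: {detail}")
        if action == "open_connection" and not self.allow_network:
            raise PolicyViolation(f"network access denied: {detail}")
        # Action is within policy, so it would be passed through to the system here.

# Usage: the downloaded program only ever acts via the sandbox wrapper.
box = Sandbox(allowed_dirs=["/tmp/playground"])
box.request("write_file", "/tmp/playground/output.txt")     # permitted
try:
    box.request("open_connection", "198.51.100.7:25")       # blocked by policy
except PolicyViolation as err:
    print("blocked:", err)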

The chief benefit of a behavior blocker, according to its proponents, is that it’s able to distinguish between ‘good’ and ‘bad’ programs without the need for a professional virus researcher to analyze the code. Since there’s no need for ongoing analysis of each specific threat, there’s also no need to keep updating virus signatures. So users are protected against new threats in advance, without the need for traditional anti-virus updates.

However, there have always been potential drawbacks with behavioral analysis. One major problem lies in the fact that there’s a grey area between actions that are clearly ‘bad’ and those that are legitimate. What’s bad in a hostile program may be good in a legitimate program. For example, the low-level disk writes carried out by a virus, worm or Trojan, perhaps to erase data from your hard disk, are also used legitimately by the operating system. And how is a behavior blocker deployed on a file-server to know whether a modification to (or deletion of) a document is being carried out legitimately by a user or is the result of a hostile program on the infected user’s machine? After all, a virus or worm is simply a program that copies itself. Beyond this, it may do what any other normal program does. As a result, it can be very difficult to determine what rules you should use to define something as ‘bad’. And there’s clearly a risk of false alarms, flagging an application or process as ‘bad’ when it’s perfectly legitimate. This risk increases with the growth of spyware, adware, dialers and other ‘unwanted’, but non-viral, programs.

Leading on from this, there’s another potential problem. Where a behavior blocker is installed at the desktop, the user may be left to decide whether an alert it generates is valid or not. However, many users will be ill-equipped to know whether the program should be allowed to continue, or they may not even understand the alert. There are two possible outcomes here. The first is that the user, unable to distinguish between a legal and an illegal operation, simply keys in ‘OK’ – and keeps doing so. On the one occasion when it is a genuine virus, they also key in ‘OK’ … and the virus slips through the net! The second is that the corporate IT department, plagued by constant questions from users, switches off the behavior blocker.

One way to side-step this problem altogether is to apply behavioral analysis at the Internet gateway, rather than at the desktop. This allows the corporate IT department to make the call on what’s considered suspicious, rather than the bemused end-user. However, it’s important to note that while e-mail is a common source of infection (88% of respondents to the ICSA Computer Virus Prevalence Survey 2003 reported infection via e-mail), it’s not the only one. Nor do all threats enter via the Internet gateway.

Intrusion prevention systems (IPS) may be considered the successor to the behavioral analysis techniques discussed above, although some IPS systems also offer signature-based detection.

Host-based IPS, designed to protect desktops and servers, typically employ behavioral analysis to detect malicious code. They do this by monitoring all calls made to the system and matching them against policies based on ‘normal’ behavior. Such policies can be quite granular, since they may be applied to specific applications. In this way, activity such as opening ports on the system, port scanning, attempts to escalate privileges on the system and injection of code into running processes can be blocked as abnormal behavior. Some IPS systems supplement behavioral analysis with signatures of known hostile code.
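
A much-simplified sketch of this approach might look like the following; the application profiles, event names and verdicts are invented for illustration and are not taken from any real product.

# Hedged sketch of the host-based IPS idea: system activity is matched against
# a per-application profile of 'normal' behavior, and anything outside the
# profile, or on the always-suspicious list, is blocked as anomalous.

NORMAL_BEHAVIOR = {
    # application  -> operations it is expected to perform
    "winword.exe": {"read_file", "write_file"},
    "httpd":       {"read_file", "accept_connection", "write_log"},
}

SUSPICIOUS_ALWAYS = {"inject_code", "escalate_privileges", "port_scan"}

def check_event(process: str, operation: str) -> str:
    if operation in SUSPICIOUS_ALWAYS:
        return "block"                       # abnormal for any application
    profile = NORMAL_BEHAVIOR.get(process)
    if profile is not None and operation not in profile:
        return "block"                       # outside this application's profile
    return "allow"

print(check_event("winword.exe", "write_file"))   # allow
print(check_event("winword.exe", "open_port"))    # block: not in the word processor's profile
print(check_event("httpd", "inject_code"))        # block: always suspicious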

Network-based IPS, deployed inline on the network, filter packets for malicious code, looking for abnormal bandwidth usage or for non-standard traffic (like malformed packets). Network-based IPS is particularly useful for picking up DoS attacks, or the traffic generated by network-based worms.
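
The toy example below illustrates the general idea of inline packet inspection: structurally malformed packets are dropped, and a crude per-host rate check flags worm-like flooding. The packet fields, thresholds and addresses are invented for the example.

# Simplified illustration of inline packet filtering (not a real IPS engine).

from collections import defaultdict, deque
import time

RATE_WINDOW = 10          # seconds of history kept per source host
RATE_LIMIT = 500          # packets per window treated as 'abnormal'

history = defaultdict(deque)   # source IP -> timestamps of recent packets

def inspect(packet: dict) -> str:
    # 1. Structural checks: reject obviously malformed packets.
    if packet.get("length", 0) > 65535 or packet.get("ttl", 1) <= 0:
        return "drop"

    # 2. Volume check: crude detection of scanning or flooding behavior.
    now = time.time()
    seen = history[packet["src"]]
    seen.append(now)
    while seen and now - seen[0] > RATE_WINDOW:
        seen.popleft()
    if len(seen) > RATE_LIMIT:
        return "drop"
    return "forward"

print(inspect({"src": "203.0.113.5", "dst": "192.0.2.1", "length": 70000, "ttl": 64}))  # drop (malformed)
print(inspect({"src": "203.0.113.5", "dst": "192.0.2.1", "length": 512,   "ttl": 64}))  # forward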

IPS vendors claim that these technologies offer effective protection against so-called ‘zero-day’ attacks based on vulnerabilities that are either unknown, or for which no patch exists. Unlike intrusion detection systems (IDS) that alert on abnormal activity, IPS is able to block malicious code. That said, few IPS vendors would claim that they offer a replacement for traditional anti-virus protection, but rather position their products as a complementary solution.

One of the key problems with IPS, as with older behavioral analysis programs, is the risk of false alarms. To try and minimize this risk, most of them include an alert-only ‘learning mode’ that allows the product to build up a picture of what ‘normal’ behavior looks like in a specific environment. The down-side to this is that they need to be carefully tuned, so there’s a much bigger administrative overhead than with traditional anti-virus protection.

Moreover, like traditional signature-based scanners, IPS require periodic updates, since they handle specific types of threat. New types of threat appear all the time, and a behavioral system will be unable to detect a hostile program that uses a new attack method until it has been updated.

Personal firewalls offer another way of monitoring and blocking unwanted traffic. A personal firewall, as the name suggests (and in contrast to a traditional gateway firewall), is installed on a desktop or server. It works like a desktop ‘traffic warden’, inspecting inbound and outbound traffic on the computer and allowing or blocking connections based on pre-defined policies. There are typically two aspects to personal firewalls. On the one hand, they offer application filtering. That is, they allow rules to be set for commonly used applications like web browsers, ICQ, instant messaging programs and others. On the other, they provide packet filtering. They analyze data transfers (headers, protocol used, ports, IP addresses, etc.) and filter packets based on the policies set.
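
To make the two layers concrete, here is a hypothetical sketch combining per-application rules with simple packet filtering. The rule tables, application names and port choices are invented examples only.

# Sketch of a personal firewall's two layers: application rules plus packet filtering.

APP_RULES = {
    # application     -> (verdict, set of permitted remote ports)
    "browser.exe":     ("allow", {80, 443}),
    "mailclient.exe":  ("allow", {25, 110, 143}),
    "unknown.exe":     ("block", set()),
}

PACKET_RULES = [
    # (direction, protocol, port, verdict); first match wins
    ("inbound", "tcp", 135, "block"),    # e.g. blocking the RPC port abused by Lovesan
    ("inbound", "tcp", 445, "block"),
]

def filter_connection(app: str, direction: str, protocol: str, port: int) -> str:
    # Packet-level rules are checked first.
    for d, proto, p, verdict in PACKET_RULES:
        if (d, proto, p) == (direction, protocol, port):
            return verdict
    # Then application-level rules.
    verdict, ports = APP_RULES.get(app, ("block", set()))
    if verdict == "allow" and port in ports:
        return "allow"
    return "block"

print(filter_connection("browser.exe", "outbound", "tcp", 443))   # allow
print(filter_connection("unknown.exe", "outbound", "tcp", 6667))  # block: unknown application
print(filter_connection("browser.exe", "inbound",  "tcp", 135))   # block: packet rule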

As with IPS, the aim is to protect against attacks designed to steal confidential information, network worms and malicious code that seeks to turn the victim’s machine into a ‘zombie’ for use in spam attacks.

The up-side of personal firewalls is that they provide generic protection from unknown malicious code attacks, rather than relying on signatures of known threats. The down-side is that they have to be carefully tailored, to ‘learn’ what’s ‘normal’ or ‘acceptable’ for each specific environment.

So far, we’ve looked at the changing threat and some of the technologies seen as alternatives to traditional signature-based scanners. However, it’s important to remember that each successive generation of malicious code has changed the threat landscape and antivirus programs have constantly evolved to meet the new types of threat. In this sense, there’s no such thing as ‘traditional’ antivirus protection.

In the first place, it’s worth remarking that antivirus programs have never been purely signature-based. In the early days, when viruses numbered in the tens, rather than the tens of thousands, checksumming techniques formed a core element of antivirus defenses. And detection of polymorphic viruses has always involved the use of additional techniques, including code analysis and emulation.
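
For illustration only, the checksumming (integrity-checking) idea can be sketched in a few lines: a checksum is recorded for each clean file, and any file whose contents later differ from the baseline is flagged. The file names and storage format here are invented, and early products typically used simpler checksums such as CRCs rather than modern hashes.

# Minimal integrity-checking sketch: record checksums of clean files, flag changes later.

import hashlib
import json
import os

def checksum(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_baseline(paths, store="baseline.json"):
    """Record a checksum for every existing file in 'paths'."""
    baseline = {p: checksum(p) for p in paths if os.path.isfile(p)}
    with open(store, "w") as f:
        json.dump(baseline, f)

def verify(store="baseline.json"):
    """Warn about any file that has changed, or disappeared, since the baseline."""
    with open(store) as f:
        baseline = json.load(f)
    for path, known in baseline.items():
        if not os.path.isfile(path) or checksum(path) != known:
            print(f"WARNING: {path} has changed since the baseline was taken")

# Usage: build_baseline(["program1.exe", "program2.exe"]) once, then verify() on later scans.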

Equally, heuristic analysis for finding new, unknown threats has been in use for more than ten years. Heuristic analysis involves inspecting the code in a file (or other object) to see if it contains virus-like instructions. If the number of virus-like instructions crosses a pre-defined threshold, the file is reported as a possible virus and the customer is asked to send in a sample for further analysis. Heuristic analysis has been refined over the years and has brought positive results in detecting new threats. Of course, if heuristics isn’t tuned carefully, there’s a risk of false alarms. That’s why most anti-virus vendors that incorporate heuristics into their products reduce their sensitivity to minimize the risk of false alarms. And many vendors disable heuristics by default. The other drawback is that heuristics is ‘find-only’. In order to clean, it’s necessary to know what changes the specific virus, worm or Trojan has made to the affected object.
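
As a toy illustration of threshold-based heuristics, the sketch below adds a weight for each ‘virus-like’ trait found in a file and flags the file only if the total crosses a pre-defined threshold. The traits, weights and threshold are invented for the example and are far cruder than real heuristic engines.

# Toy threshold-based heuristic scanner (illustrative only).

HEURISTIC_TRAITS = {
    b"CreateRemoteThread": 3,   # code injection into another process
    b"WriteProcessMemory": 3,
    b"RegSetValue":        1,   # registry modification (also common in clean programs)
    b"MAPISendMail":       2,   # mass-mailing capability
}

THRESHOLD = 5   # lowering this catches more threats but raises the false-alarm rate

def heuristic_scan(data: bytes) -> bool:
    """Return True if the object looks like a possible virus."""
    score = sum(weight for trait, weight in HEURISTIC_TRAITS.items() if trait in data)
    return score >= THRESHOLD

sample = b"...WriteProcessMemory...CreateRemoteThread...MAPISendMail..."
print(heuristic_scan(sample))          # True: flagged as a possible virus
print(heuristic_scan(b"hello world"))  # False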

During recent years, several anti-virus vendors have developed generic detection methods for finding and removing unknown threats. The starting-point for generic detection is that ‘successful’ threats (that is, from the malware author’s perspective) are often copied by others, or further refined by the original author(s). The result is a spate of viruses, worms or Trojans, each one distinct but belonging to the same family. In many cases, the number of variants can run into tens, or even hundreds. Generic detection involves creating a virus definition that is able to identify all threats belonging to the same family. So, when NewVirus appears, the definition created to detect it will also successfully identify NewVirus.b, NewVirus.c, NewVirus.d, etc. if and when they’re created. Such techniques extend also to detection of exploit code that may be used by a virus or worm. Of course, generic detection is not guaranteed to find all variants in the family. Nevertheless, it has yielded considerable success for a number of antivirus vendors.
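
The principle can be illustrated with a toy example: a single ‘family’ definition describes byte patterns shared by all variants while tolerating the bytes that change between them, so later variants match the same definition. The family name and patterns below are hypothetical, not real signatures.

# Rough sketch of generic (family) detection using a tolerant pattern per family.

import re

FAMILY_SIGNATURES = {
    # One generic definition covers every variant that keeps the shared structure.
    "NewVirus (generic)": re.compile(rb"FAMILY_MARKER.{0,16}PAYLOAD_LOADER", re.DOTALL),
}

def scan(data: bytes):
    """Return the family name if any generic definition matches, else None."""
    for family, pattern in FAMILY_SIGNATURES.items():
        if pattern.search(data):
            return family
    return None

variant_a = b"junkFAMILY_MARKER\x01\x02PAYLOAD_LOADERmore"
variant_d = b"junkFAMILY_MARKER\x90\x90\x90\x90PAYLOAD_LOADERmore"   # later, modified variant
print(scan(variant_a))        # 'NewVirus (generic)'
print(scan(variant_d))        # the same definition still matches
print(scan(b"clean file"))    # None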

More recently, antivirus vendors have started to look at additional technologies to enhance their ability to find new, unknown threats. This includes behavioral analysis, IPS and personal firewall technology. In some cases, vendors have acquired technology developed by third parties. Others have developed the technology themselves. Currently, most vendors market these technologies separately, although integration of all these different technologies has begun and it seems likely that this will continue. Ultimately, it is almost certain that anti-virus vendors will offer single products containing complementary technologies designed to deal with new threats without the need for specific signatures. And this includes not only behavioral analysis, IPS and personal firewall technology. Increasingly, the blocking of spam e-mail and other undesirable content, together with detection of ‘unwanted’ programs (spyware, adware, dialers, etc.), is also being provided by anti-virus vendors as part of a ‘holistic’ solution. This convergence of technologies to block malicious code mirrors the recent trend in malicious code itself, with its ‘melting-pot’ of viruses, worms, Trojans and other threats. One thing seems certain: signature-based detection will continue to form part of this overall solution to malicious code attacks.
