Automation Potentials in the Realm of IT Security
by Mixter
http://mixter.void.ru or http://mixter.warrior2k.com


Introduction

Only recently have the automation and sophistication of processes related to information security and penetration techniques begun to emerge as an increasingly important paradigm in the information security world.

Obviously, all forms of technology and software change and develop into new, more advanced, more practical forms. After all the time that information security has been around and matured -- it has been a commercial profession at least since the Internet began to become widely used -- we may soon see changes in this profession that are bigger and more radical than anything we've seen before, with both new and old technical concepts serving as a foundation for new forms of security technologies, services, and standards.

Most of the recent developments in automated tools related to information security have been of a malicious nature. I decided not to write this article as a scare story of dangerous things to come, because there is already too much fear, uncertainty, and doubt about the future of incidents on the Internet, and because I see such hype as a contributing factor to an escalation in incidents: it is a possible motivation for media-attention-seeking blackhats, such as website defacers, and therefore only feeds a self-fulfilling prophecy. I prefer to think of the automation of security-related software as a paradigm that can help both blackhats and whitehats, and to emphasize the constructive, legitimate uses of this concept in information security.

Taking into account the needs of the security community and of customers, as well as the nature of current security problems, both technical and non-technical, this article will try to demonstrate how automation may become an essential concept in information security, and how it may be used in the future.


Recent incidents and related factors

Interestingly, many of the things happening recently and currently on both blackhat and whitehat sides of the infosec frontier are relevant to the topic of automated security software.

This may be more obvious on the blackhat side, with the recent waves of worms, increasingly automated and simplified intrusion and trojan techniques, and more unsophisticated attackers discovering and using worms and other automated tools. The development of more effective worms also has other consequences. Even when the vulnerability that a worm uses to spread has been widely patched, the parts of the worm other than the actual exploit continue to be useful, such as the algorithms for installing trojans and backdoors, picking targets, and stealth. Certain parts of the code of popular and effective worms may continue to serve as frameworks for the creation of similar worms using new exploits, long after the originally exploited vulnerability has been patched.

One also has to take into account that the shift toward more automation in the process of compromising and penetrating systems leads to other benefits for blackhats, such as more anonymity, less accountability, and therefore less risk of getting caught. Certainly, the automation of intrusion procedures facilitates launching intrusions and scans from already compromised hosts, or even from a chain of compromised hosts. Being able to commence attacks remotely through chains of compromised hosts makes it easier for blackhats to evade tougher surveillance, intrusion detection, and even better cooperation between private and law enforcement efforts. Though harsher punishments and surveillance laws are still being pushed forward at an astonishing rate, laws, law enforcement, and deterrence are increasingly losing their ability to fend off intrusions.



Needs and problems of information security

While the current incidents are short-term developments (you really can't predict too much about future incidents, only about the possibilities, because that depends on what kinds of individuals plan, prepare, and commit them), there may be real economic, technical, and practical incentives for redesigning information security technology to become more automated and autonomous in the long term.

It is true that most of the practical whitehat applications of things like automated penetration tools and worms have yet to emerge. But already, some existing programs demonstrate some of the possibilities. Security scanners with scripting languages, such as Nessus, or NAI's CASL standard, already automate more than the usual security and penetration tools did. They can evaluate the input of remote services and determine their own behavior dynamically, depending on the responses they get from the scanned services. This is notably different from a simple comparison to determine whether a particular server version is vulnerable or not: a script-driven, automated process can consist of any number of custom evaluation procedures and behavioral responses.
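
To illustrate the idea (not Nessus's NASL or CASL themselves, just a minimal Python sketch), the following check adapts its behavior to what the remote service actually answers, rather than performing one static version comparison. The host name, port, and banner strings are hypothetical examples.

import socket

def grab_banner(host, port, timeout=5.0):
    """Connect to a TCP service and return its greeting banner, if any."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        try:
            return sock.recv(1024).decode("ascii", errors="replace").strip()
        except socket.timeout:
            return ""

def evaluate_smtp(host):
    """Choose follow-up probes dynamically, based on the responses received."""
    banner = grab_banner(host, 25)
    findings = []
    if not banner:
        return ["no SMTP banner received; service filtered or silent"]
    findings.append("banner: " + banner)
    # Behavior decided at runtime: only if the banner suggests a particular
    # MTA would the script continue with further protocol-level probes.
    if "Sendmail" in banner:
        findings.append("Sendmail detected; would continue with EXPN/VRFY checks")
    else:
        findings.append("unknown MTA; falling back to generic capability probes")
    return findings

if __name__ == "__main__":
    for line in evaluate_smtp("mail.example.com"):   # hypothetical target
        print(line)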

In my opinion, some existing intrusion detection solutions, and related tools for simplifying and evaluating logfiles, can also be seen as an early form of automated software. Why? Because they have the potential to facilitate an originally time-consuming human task -- setting up relatively reliable system audit trails, securing them, e.g. by setting up loghosts, and, hardest of all, evaluating all the input and searching for intrusion patterns. Admittedly, given the current rate of false positives and other problems such as performance and IDS evasion, the automation and "smart" behavior present in Intrusion Detection Systems today is neither perfect nor particularly advanced. However, the trend toward improving capabilities such as artificial intelligence and autonomous behavior (e.g. intrusion response) in software like IDS is certainly there, and is already improving things today.

Besides the increasing complexity of handling regular administrative security tasks manually, and the necessity of facilitating such tasks, what other problems exist that can be solved through automation? To answer that, we have to take a few basic human factors into account. In reality, people usually default to investing little effort and avoiding hard thinking. After all, this is why we have (too) easy operating systems, all the default configurations, and the many security problems arising out of the use of these default configurations. More automated security can facilitate administrative tasks, giving security professionals and administrators the opportunity to preprogram complex tasks, such as auditing, in a more secure manner.

Most people tend to follow authority, so they will choose popular software. Software becomes popular because it is easy. Most people want easy software because they like to avoid thinking. And the worst thing for security is that they demand feature-rich, and therefore complex, systems. Making servers and operating system distributions complex but still easy to use usually means making them insecure, for example by opening up services by default so that people won't have to set them up. Many vendors know this, but to be successful, they have to adapt their own software to the more popular, easy software designs, often making it less secure. Rather than trying to fight these human problems, people need to start fully acknowledging them and to find ways around them. Automation of security features may offer such ways, even without making software hard to use. Implementing active auditing in operating systems and server software by default, e.g. querying vulnerability databases and checking the system's own configuration and versions against the latest information, could mitigate problems caused by the usual way that people operate software. The essential task will be to implement mandatory security checks that require no user interaction, and to make them part of the defaults.

Also, manually performed routine administrative processes often produce small but fatal mistakes, and good security is neglected due to time constraints, or simply due to uncertainty. Consider that even on well-maintained systems, administrators have a hard time coping with all newly appearing vulnerabilities and installing all the necessary patches. They may simply lack the time and resources to keep track of vulnerabilities and manage security updates in a fully reliable manner, especially if they have to supervise a large and diverse network structure. Automation could be used to cope with such uncertainties and unreliable conditions, with automated tools accessing standardized databases that describe system and software configurations, known vulnerabilities, and known intrusion patterns. Private exploits will continue to exist, but such a system could help an admin be reasonably certain that any given system is secured against all publicly known exploits at any given time. An automated security tool not reacting to a publicly known vulnerability would then represent, by definition, a vulnerability in that tool or its database, but not a conceptual problem.

When taking into account these problems arising from the way people usually think and interact with computer systems, we start to see how the automation of information security processes may facilitate a change from old paradigms -- the avoidance of the increasingly complex tasks of securing, operating, and maintaining software, networks, and policies -- to new paradigms of programming and preconfiguring such complex routine tasks according to a set of standard guidelines and databases, automating much of this complexity away for end users, including administrators.



Possible forms of automated software

The extent of automation of security software can be determined by a few significant key factors: smartness (artificial intelligence algorithms, situation-based behavior, anomaly- and trial-and-error-based learning), the range of autonomous behavior (e.g. the number of features that can be performed without direct user interaction), and the flexibility of input evaluation and autonomous behavior (the flexibility and effectiveness of a database of rules, patterns, and learned or external data that plays a role in determining the software's behavior).

I think that the applications of automation in purely malicious software are relatively limited, just as the types of malicious actions on the Internet are limited. The areas I can think of include the automated picking of target networks (e.g. automating the search for webpages to deface); the automation of sniffing, with more intelligent sniffers being able to recognize passwords and information that can be used to gain access through trust relationships (it is possible that we will see a worm that propagates just or mostly by sniffing passwords); toolkits that run on single hosts and automate the bulk exploitation of larger networks (those are out today, but we are going to see better evaluation algorithms, more flexible frameworks for exploits, traffic polymorphism, stealth, databases, modular and updateable tools, and so on); and of course worms. Malicious worms will probably continue to be an interesting topic; although they always attack and spread, the things they might do afterward are hardly predictable. From DoS worms to worms stealing documents and even credit cards from a webserver, anything is theoretically possible.

More advanced forms of malicious tools for reaching this limited set of blackhat objectives are certainly imaginable, though. For example, a company is supposedly planning to offer an automated intrusion framework to commercial security enterprises, which can not only work as a worm, but also link compromised hosts together in a peer-to-peer network with encrypted, stealthy links, possibly allowing an intruder to control a network of hosts from an arbitrary remote point of control. Since I superficially considered and planned, but never implemented, a similar peer-to-peer application some time ago, I can acknowledge that while this concept creates a lot of detail problems and questions for its designer, it is theoretically feasible. To sum it up, we may be able to predict the intentions of blackhats in the future, but not in detail how sophisticated the technology of their software may become.

In the area of security software, the possible uses of automated technology are much more diverse. They exist at least for the improvement of tasks such as vulnerability assessment, prevention, penetration testing, and even crisis management. I will try to explain such potential uses by giving a few simple examples.



Vulnerability and versioning standards

Thinking about vulnerability assessment, much could be improved compared to today's common practices. What's actually needed to make vulnerability assessment more automated, efficient, and reliable is a standardized protocol for categorizing the details, versions, and patchlevels of the configuration of existing operating systems and the software in use, particularly servers and suid applications. This is also an example of how security automation doesn't necessarily have to be a remote network process. Vulnerability assessment would mean developing a set of functions and standards for analyzing and categorizing the configuration (most of the process being local analysis) according to a generalized versioning standard of some sort, then merging the extracted information with a common vulnerability database such as ICAT (icat.nist.gov) or CVE (cve.mitre.org), extended with ID fields of that versioning standard, to yield a clear, reliable report on what known vulnerabilities exist in a system, and perhaps additional information that can be sent to an automatic update tool capable of understanding the same standardized protocol. Naturally, the development of this standard, and especially of a common framework to extract the necessary information, would be a larger project requiring coordination and teamwork. That aside, a standard and implementation like the one described above could be developed relatively soon and used to improve the efficiency and reliability of vulnerability assessment dramatically.
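
A rough Python sketch of the kind of merge described above: a locally extracted inventory, expressed in a hypothetical "vendor:product:version" ID scheme, is joined against a CVE/ICAT-style database extended with the same ID field. The ID scheme, inventory, and database entries are assumptions for illustration only; no such standard is implied to exist yet, and the CVE entries are abbreviated.

# Local inventory, as a hypothetical extraction tool might report it.
inventory = [
    "isc:bind:8.2.1",
    "washington-university:wu-ftpd:2.6.0",
    "openbsd:openssh:2.9",
]

# Vulnerability database entries extended with the same versioning IDs.
vuln_db = {
    "CVE-1999-0833": {"ids": ["isc:bind:8.2.1"],
                      "summary": "BIND NXT record buffer overflow"},
    "CVE-2000-0573": {"ids": ["washington-university:wu-ftpd:2.6.0"],
                      "summary": "wu-ftpd SITE EXEC format string vulnerability"},
}

def assess(inventory, vuln_db):
    """Yield (cve, summary, matching_id) for every known issue found locally."""
    installed = set(inventory)
    for cve, entry in vuln_db.items():
        for vid in entry["ids"]:
            if vid in installed:
                yield cve, entry["summary"], vid

for cve, summary, vid in assess(inventory, vuln_db):
    # A real tool could hand these findings to an automatic update component
    # that understands the same standardized protocol.
    print(f"{vid}: {cve} -- {summary}")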




Automated system penetration

In scenarios related to penetration testing, improved network scanners and related reconnaissance and auditing tools could work together with standardized vulnerability databases in a fashion very similar to the one described above. While talking about penetration testing and the proof-of-concept automation of intrusions and exploits, I would like to present a little more information on the concepts behind an intrusion, to help the reader form a picture of all the possible steps actually involved, some of which are optional or depend on others, but all of which can be automated:

Step 1: Target discovery. An intruder (person or software agent) has to start off by determining viable targets. Depending on his intentions, this may include resolving hostnames, random searches through network blocks, sequential scans of neighboring networks, or reading specified targets from a database.

Step 2: Detection of security measures. If we are talking about a sophisticated intrusion or penetration test, this will include searching for possible traps between the attacker and his victim, such as detectable IDS systems, firewalls, or other logging mechanisms.

Step 3: Evading existing security measures. Given positive results in step 2, and as an alternative to leaving a secured system alone, an attacker may want or need to use known methods of IDS evasion, obfuscation, and stealth, such as fragmented traffic, decoy packets, non-protocol-compliant traffic encoding, and so on.

Step 4: Target analysis. This is what is normally covered by the basic functions of network scanners, such as port scanning, banner capturing, and establishment of sessions or application protocol specific scanning to determine versions or specific vulnerabilities. A non-modular tool for automating the whole process of system penetration would definitely need to incorporate this functionality.

Step 5: Evaluation of gathered information. From the data accumulated in step 4, vulnerabilities have to be determined, and appropriate exploits have to be chosen to try to exploit these vulnerabilities. An automated tool would need to possess a comprehensive database of all vulnerable versions of the software it can test, and of the related programs and methods known to be able to exploit each vulnerability.

Step 6: Attack. The actual intrusion, using remote exploits, possibly also other means of intrusion like abusing trust relationships or brute force.

Step 7: Privilege elevation. Compromising internal security from inside the host. This could mean breaking out of a chrooted or non-root environment, but also subverting the kernel and audit trails by installing trojans and backdoors.

Step 8: Spying. Snooping and intelligent evaluation of data and traffic, abusing traffic relationships, spreading to a LAN or intranet, abusing trust relationships to gain access to other parts of the network, etc.

However, the automation of active vulnerability scanning would not necessarily have to take such "dangerous" forms with considerable potential for abuse; instead of automating steps like active exploitation, tools could simply use a common set of requests and queries specific to each application layer protocol to determine version information and verify responses to supposedly vulnerable commands (which might, for example, be needed to check remotely for applied patches that didn't change version numbers). Generally, as long as there is some kind of interaction between a smart agent and a network service, including harmless reconnaissance-only activity, we can speak of an automated penetration testing technology.
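
As a minimal reconnaissance-only sketch in this spirit, the following Python code sends an HTTP HEAD request and compares the Server header against a small, hypothetical list of version strings believed to be vulnerable. No exploitation is involved; the target host and the version strings are illustrative assumptions.

import http.client

SUSPECT_SERVER_STRINGS = [
    # Hypothetical entries; a real tool would pull these from a database.
    "Apache/1.3.19",
    "Microsoft-IIS/5.0",
]

def check_http_version(host, port=80):
    """Return the Server header and whether it matches a suspect version."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("HEAD", "/")
        server = conn.getresponse().getheader("Server", "")
    finally:
        conn.close()
    flagged = any(s in server for s in SUSPECT_SERVER_STRINGS)
    return server, flagged

if __name__ == "__main__":
    server, flagged = check_http_version("www.example.com")   # hypothetical target
    print(f"Server header: {server!r}; flagged: {flagged}")
    # Where a patch did not change the version string, a follow-up check would
    # have to verify the response to a known-fixed request instead.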




Prospects for security policies

One of the hardest and most complex tasks for security professionals that could eventually benefit from automation is the design and auditing of security policies. If you think about it, policies are supposed to be general, logical guidelines that describe how systems and networks should be used, may be used, and must not be used. Consider a program that understands a markup language describing how a network is set up, how existing security measures are deployed, and what kinds of people have access to which parts of the infrastructure, and that computes a general set of basic rules and guidelines for a security policy applying to this scenario, on which security experts can then build and develop a full-fledged security policy. Consider smart agents working inside and outside of a network to simulate or even emulate client behavior and interactions, testing which parts of a policy are being enforced and finding infrastructure deficits, e.g. in firewalling, auditing, remote access, and so on. I doubt that the automation of processes related to security policies can ever fully replace human interaction and analysis in this field, but as I said, it could create and supervise a basic policy framework on which further work can be built.
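
A toy Python sketch of the first idea: a machine-readable description of a network (a plain data structure standing in for a markup language) from which a program derives a first cut of baseline policy rules for humans to refine. The schema, segment names, roles, and derived rules are all illustrative assumptions.

# Hypothetical network description, as a markup parser might produce it.
network = {
    "segments": [
        {"name": "dmz",      "services": ["http", "smtp"], "internet_facing": True},
        {"name": "internal", "services": ["file", "db"],   "internet_facing": False},
    ],
    "roles": [
        {"name": "admins", "access": ["dmz", "internal"], "remote_access": True},
        {"name": "staff",  "access": ["internal"],        "remote_access": False},
    ],
}

def derive_baseline(net):
    """Compute a basic set of policy guidelines from the network description."""
    rules = []
    for seg in net["segments"]:
        if seg["internet_facing"]:
            rules.append(f"segment '{seg['name']}': only {', '.join(seg['services'])} "
                         "may be reachable from the Internet; everything else filtered")
            rules.append(f"segment '{seg['name']}': central logging and IDS coverage required")
        else:
            rules.append(f"segment '{seg['name']}': no direct Internet exposure; "
                         "access only from authorized roles")
    for role in net["roles"]:
        if role["remote_access"]:
            rules.append(f"role '{role['name']}': remote access only via encrypted, "
                         "authenticated channels")
    return rules

for rule in derive_baseline(network):
    print("-", rule)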



Automated crisis management

Crisis management, meaning security services in response to an existing threat or intrusion scenario, offers interesting possibilities for automated software solutions. In this area, I'm mostly thinking about fighting fire with fire using so-called "anti-worms". Anti-worms are not a concept or technology that I came up with personally; as far as I can remember, they were mentioned as an idea to combat Code Red and related incidents of widespread worms. The basic idea is to use self-replicating worms in a controlled fashion. Imagine a self-replicating worm, one with whitehat instead of blackhat ambitions, that goes around and exploits the vulnerable services that a malicious worm targets, trying to spread to a limited number of hosts, then patching and securing its victim, alerting a local administrator account, and finally removing itself from the system completely. Alternatively, anti-worms could scan for and target known backdoors set up by a malicious worm instead.

Many security professionals have recently also been able to observe large numbers of incidents with the same pattern, e.g. whole network ranges being targeted by an automated exploit that installs simple port backdoors, always on the same port. The anti-worm strategy could also work to find and report automated compromises, and those of less sophisticated attackers.

However, such worms don't even have to be actively scanning and self-spreading. An additional concept, the use of passive "worms", could make such rather intrusive activity superfluous. What are passive "worms"? Normal worms work by propagating themselves actively, using each compromised target as a new base to launch attacks and replicate further. It is important to take into account that a successful Internet worm, to become widespread, usually has to target hosts in as many different networks as possible, infecting hosts in most of the larger parts and segments of the Internet, and that it will probably use random or half-random target network selection for the sake of efficiency. Wait a minute! We actually have a method of detecting remote attacks that generate traffic visible in many different parts of the Internet. That's right: backscatter analysis, a strategy originally developed to gather statistics about DoS and DDoS attacks (as spoofed Denial-of-Service traffic naturally generates some error response packets, such as TCP RSTs and ICMP errors, toward the spoofed sources).

The implementation of backscatter analysis (and of passive worms) is rather self-explanatory. Simply install a number of sensors (hosts that wait for signs of attack-related traffic) on many physically different networks around the Internet, and let them wait for signs of an attack -- in our case, not for DoS-related traffic, but for exploit attempts characteristic of a current worm. Such an installation could perhaps be compared with a global network of honeypots. However, the real goal is to gather the data necessary for a passive "worm" to take action. This simply means having automated software ready and waiting for a worm attack, which then tries to attack each infected machine it sees. Again, this could be done either by using the favorite exploit of the targeted worm, or by using a backdoor left behind by that worm. The passive "worm" itself never needs to spread as long as it is widely deployed on a sufficient number of sensors. In addition to fixing the security hole and removing the worm on each infected machine, it could optionally do things like skimming through the remote audit trails in search of more worm attack signatures, and thus more hosts to fix.
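
A detection-only Python sketch of the sensor side of this idea: a listener waits on a port that a known worm probes, matches a characteristic request signature, and records the infected source for a (human or automated) response handler. The signature resembles Code Red's probe but is only illustrative, the port is 8080 so the sketch runs unprivileged (a real sensor would bind the worm's actual target port), and no counter-attack logic is included here.

import socket

SIGNATURE = b"GET /default.ida?"   # Code Red-style probe signature (illustrative)

def run_sensor(bind_addr="0.0.0.0", port=8080, logfile="infected_hosts.log"):
    """Wait for worm probes and record the source addresses for follow-up."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, port))
    srv.listen(5)
    print(f"sensor listening on {bind_addr}:{port}")
    while True:
        conn, (src, _) = srv.accept()
        conn.settimeout(5)
        try:
            data = conn.recv(4096)
        except socket.timeout:
            data = b""
        finally:
            conn.close()
        if SIGNATURE in data:
            # Hand the infected host over to whatever response process is
            # legally and contractually permitted; here we only log it.
            with open(logfile, "a") as log:
                log.write(src + "\n")
            print(f"worm probe from {src} recorded for response handling")

if __name__ == "__main__":
    run_sensor()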

While reading this paragraph, I'm sure you have been asking yourself how this could ever be implemented in reality, given the legal problems that arise. Of course, this concept is not meant to be implemented without finding a legal, professionally acceptable solution.

It is imaginable that companies may sign security auditing contracts, proactively or in response to an existing worm threat, thus giving a company with such a "worm solution" explicit permission to (counter-)attack them in the event of a worm infection. Anti-worms and passive worms could easily be programmed to commence active attacks only against the network ranges of contracted customers, and to only issue a warning or note when encountering unknown infected systems. A special information security insurance policy of some sort would also be possible, covering damage done by worms, or specifically by involvement in third-party incidents related to worms, DDoS installations, or other automated attacks, with an extra clause covering the legality of anti-worm deployment within the insured networks in the case of an ongoing related incident.



Legal and law enforcement factors and their implications

More surveillance laws, harsher penalties, and faster coordination of law enforcement agencies are all responses of law enforcement to a growing threat of infosec-related incidents. However, what side effects might stronger legislation and law enforcement action have?

Inevitably, more and more Internet users are becoming aware of these trends, including most blackhats. Many recent incidents show that the primary way blackhats evade these threats is to increasingly automate, relay, encrypt, and otherwise obscure their activities. This is why the trend in the blackhat community is toward using only compromised hosts to launch further activities, even including simple scans and telnet sessions. Naturally, the use of relayed, encrypted, and obscured pathways for their activities becomes a necessity for blackhats in order not to risk becoming subject to new laws and surveillance tactics.

The relaying and automation of attacks creates a new problem: classic intrusion detection, monitoring, and logging are being rendered increasingly useless. While they can still document most attacks and exploits that occur, they cannot find the original source of the attack, the machine and real origin of the blackhat himself, but only a subset of the compromised hosts that the blackhat uses for launching attacks, automated or manual.

This is my favorite example to demonstrate that new legislation and law enforcement actions cannot be seen as a last recourse or solution to malicious activities in cyberspace; rather, law enforcement actions can occasionally make a situation worse in the long term when the strategy behind them is inconsistent. But what could be the solution to this problem of decreasing accountability? Such solutions may lie in future forms of Intrusion Detection technology. You may have noticed that I am not talking much about how Network Intrusion Detection could change and improve as other areas of security technology evolve. This is because I am currently working on an IDS solution called SASS, a company project, which tries to establish new key paradigms for IDS, one of them being to ensure the accountability of the party causing an incident.

However, one form of legislative approach that I tend to support is the emphasis on third-party responsibilities. The problem of relaying and obscuring origins through compromised machines demonstrates very clearly that accountability on the Internet can only be as good and reliable as the weakest parts of the Internet. Many people like to symbolically compare security-related tools with potential for malicious and intrusive use to weapons in real life, and the publishers of such tools to arms manufacturers. However, it could be argued in a similar fashion that third parties leaving their systems open to attackers are like negligent owners of remotely controllable rocket launching silos, protecting them only poorly, and sometimes not at all.



Full disclosure by default, the future of security?

From the widest perspective, the biggest current problem isn't the flaws in existing security measures, nor the people who don't apply them correctly; the biggest threat to the overall security and accountability of the Internet are the people with no security at all. Every day, hundreds of people gain their first experience with the basics of computer security, but every day thousands or more end users, and perhaps at least a dozen new enterprises, enter the Internet for the first time, often with no clue about security, threats, and protection.

I've come to the conclusion that the mass of security problems, from the possibility of installing remote agents on thousands of systems, to the de facto anonymity of attackers, to the unhindered exponential spreading of Internet worms through third parties, is a result of so many participants on the Internet being totally unaware of security practices, including the need to upgrade, check vendor information, and look at system logs and processes. That percentage may be relatively small, but even if only 5% of the Internet infrastructure is fully insecure, that might already be significantly more than a million hosts.

Nowadays, infosec companies are offering services like security intelligence and vulnerability alerts to notify their customers of new security problems that affect them, with some of the better services keeping an updated profile of the software versions their clients use. In the future, security processes might become more effective at fighting security problems if the full disclosure movement becomes more efficient. This could happen through the use of new standards and protocols for the computerized classification and distribution of vulnerability information, instead of only advisories that first need to be read, understood, and acted upon by humans, and are therefore slower and less efficient. Security companies could be contracted for automated patching, auditing, or monitoring as instant responses to new vulnerabilities; alternatively, new software could perform such tasks.
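
A hedged Python sketch of what a machine-readable advisory and its automated consumer might look like, so that no human has to read prose before anything happens. The advisory schema, ID, affected versions, and chosen actions are assumptions for illustration, not an existing standard.

import json

# A hypothetical machine-readable advisory, as a feed might deliver it.
advisory_json = """
{
  "id": "ADV-2002-0001",
  "affects": [{"product": "wu-ftpd", "versions": ["2.6.0", "2.6.1"]}],
  "severity": "high",
  "remediation": {"type": "patch", "reference": "vendor-patch-url-here"}
}
"""

# Assumed client software profile kept by the security service.
client_profile = {"wu-ftpd": "2.6.1", "apache": "1.3.26"}

def handle_advisory(advisory, profile):
    """Match an advisory against the client profile and choose an action."""
    for entry in advisory["affects"]:
        product, versions = entry["product"], entry["versions"]
        if profile.get(product) in versions:
            if advisory["remediation"]["type"] == "patch":
                return f"schedule automated patch for {product} ({advisory['id']})"
            return f"raise alert for {product} ({advisory['id']})"
    return "advisory does not apply; no action"

print(handle_advisory(json.loads(advisory_json), client_profile))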

But with respect to the idea of full disclosure and transparency, more radical approaches for improving security on the Internet are imaginable, which could involve the deployment of automated processes. Reflecting on the number of people fully unaware of or uncaring about security on the Internet, seeing the whole problem from a different metaphorical perspective may help us understand how it could be solved. If we think of the whole Internet as an organism with the same needs for protection, security practices could be seen as its immune system. The important parallels between the Internet and an organism are that a disease or injury in any part of it is relevant to the rest, and can possibly affect other parts. And like an immune system, security includes the need for coordination (a factor that has been improving in recent years) and for proactive measures.

In more concrete terms, this could mean actively analyzing certain remote services on the Internet on a broad scale, just enough to compile a database of the software in use and its versions. The networks for which security problems could be deduced from this data could then be notified in a systematic fashion, possibly by an automated notification and response handling system. Such a project, as controversial as it may seem, could largely improve the security situation on the net, and I know of several people who are already thinking about the possibilities of such a project.

There may be less radical avenues, however, to actively make the Internet more transparent and more secure. For example, we may have different security classes on the Internet in the future. High security classes could include networks that are certified to have passed an audit, and networks that run software for intrusion detection, automatic self-auditing, and vulnerability updating. Middle security classes could represent networks that enlist themselves but only do irregular auditing and monitoring. And lower to blacklisted security classes might be comprised of temporary and permanent entries for hosts with a history of incidents, no administrative response to intrusions, spamming, and similar activity. Naturally, for such a hypothetical model to work, a standardized protocol and database would need to exist, and applications as well as server software would need to understand that standard in order to differentiate between security classes.
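
A toy Python sketch of how an application might consult such a registry before deciding how to treat a peer network. The registry contents, class names, and query interface are all assumptions; in reality this would be a standardized protocol and a shared database rather than a local table.

# Hypothetical registry: network prefix -> security class.
REGISTRY = {
    "192.0.2.":    "high",         # certified audit, self-auditing, IDS in place
    "198.51.100.": "middle",       # self-enlisted, irregular auditing
    "203.0.113.":  "blacklisted",  # history of incidents, no administrative response
}

def classify(addr):
    """Look up the security class of an address; 'unknown' if unlisted."""
    for prefix, cls in REGISTRY.items():
        if addr.startswith(prefix):
            return cls
    return "unknown"

def policy_for(addr):
    """Example of class-dependent behavior a server could apply."""
    return {
        "high":        "accept",
        "middle":      "accept, but log and rate-limit",
        "unknown":     "accept, but log and rate-limit",
        "blacklisted": "reject",
    }[classify(addr)]

for addr in ("192.0.2.10", "203.0.113.7", "10.1.2.3"):
    print(addr, "->", policy_for(addr))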



New security technologies and their acceptance

Obviously, there is still much fear about the effects of automated security technologies, because of the abuse potential in some future kinds of security software. I'm finishing my article with the following anecdote about the acceptance of security software, which everyone anxious about the dangers that automated tools might bring should remember.

Think back to the time when the first network security scanners were released. Among them were the web-based SATAN, and ISS, a scanner to check and penetrate a handful of services commonly known to be vulnerable. ISS was a commercial scanner from the beginning, designed for whitehat use by security professionals. Back then, many people saw this differently. At that time, scanners like ISS were seen as a new threat to the Internet; CIAC, CERT, and other organizations even released warnings and advisories about it; its author may have been seen by some people as a gray-hatted renegade coder...

It's important to point out that ISS is and was a commercial tool, with precautions and protection mechanisms in place. However, it still leaked to the underground sooner rather than later -- as I think commercial tools for automating penetration testing will, too, even if they are kept closed-source -- and even today, older versions of ISS count among the most widespread commercial security tools available as pirated software.

The time of the release of the first network scanners could also be seen as the early roots of today's full-disclosure movement, oriented around articles like "Improving the Security of Your Site by Breaking Into It" and the earliest open-source scanners like SATAN. SATAN -- nowadays a humble, simple, and very outdated tool -- provoked even more controversy, a bigger wave of alerts and advisories, and more media hype than any other security tool ever got.

Yet network scanners, both open-source and commercial, are commonplace today, deployed by penetration testers, security companies, owners of big networks, security-conscious administrators, and an army of other open-source geeks. This development may be a significant parallel to the future of automated security tools. Today, few if any people would dispute the many positive effects of having a multitude of open-source security tools available.


Layout editing by Travis Owens <owens@oneida-nation.org>