Network State Monitoring: A Network Security Assessment Concept

Andrew J. Stewart (Co-Creator: Andrew D. Kennedy)
Version 1.1
May 2, 2000

1 Introduction

This white paper introduces Network State Monitoring - a network security assessment concept. Network State Monitoring involves the transposition of established host-based security state monitoring ideas into the network security assessment problem-domain. The purpose of this paper is to present the concept of Network State Monitoring, and to generate discussion on the possibility of its construction and productization.

2 State Monitoring

It is necessary to begin with a description of "state monitoring" itself. State monitoring refers to the process of comparison between desired and actual state. In the context of computer security, the desired state is a security-policy-compliant configuration; the actual state is the current configuration. Fundamentally, state monitoring concerns comparison in order to detect change.

3 Host State Monitoring

Host-based state monitoring tools enable the difference between desired and actual state to be identified; this security "gap" can then (hypothetically) be closed. Traditionally, host-based state monitoring tools are configured to perform their "sweep" (the process of determining actual state, i.e. the current configuration) at scheduled, regular intervals - for example, once per day or once per week; this ongoing process enables trend reporting. Host-based state monitoring is an established computer security technology; ISS System Scanner is one example.

4 Network State Monitoring

Network State Monitoring is a new approach to network security assessment. The impetus for attempting to re-invent (or perhaps evolve) network security assessment comes from the realization that the rate of growth of the modern enterprise network environment, coupled with its complexity, makes the detection of change - and more precisely the detection of change over time - increasingly crucial for security. This philosophy differs from traditional network security assessment, in which the focus is largely on the unambiguous and rapid determination of security vulnerabilities.

As described, host-based state monitoring involves the scheduled, regular analysis of security on a host, in order that change can be identified. It is logical to attempt to transpose this concept onto the task of network security assessment in order to enable change monitoring for networks.

ISS Internet Scanner and other traditional network security assessment tools are fundamentally concerned with the identification of security vulnerabilities - for example, the use of an insecure version of a 'daemon' (a network-enabled application), the misconfiguration of a network service (such as a web server), etc. Within such tools, the security problem identification heuristic (and consequently the majority of the intellectual capital and "value" embedded in the tool) resides both in the network traffic injected into the network by the tool (the stimulus) and in the interpretation of the response generated by the target host(s); an instance of this process comprises a 'signature' or 'check'. The number of checks a tool can perform is often used as a basis for comparison between tools.

The Network State Monitoring approach inverts this paradigm. Data on the composition and configuration of the target network is collected using a relatively static set of techniques, and only then is a security heuristic separately applied.
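As a rough sketch of this inversion (illustrative only - the class, function names, and port values below are assumptions, not taken from any existing product), data collection records nothing but generic observations, and the security heuristic is applied to those observations afterwards:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class Observation:
        host: str             # IP address of a responding host
        port: Optional[int]   # a listening TCP port, or None for "host is alive"
        seen_at: datetime

    def collect(sweep_results: dict) -> List[Observation]:
        # Record generic facts only ("I detected a host", "I detected a
        # listening port"). sweep_results stands in for ping sweep / port
        # scan output, e.g. {"10.0.0.1": [22, 80], "10.0.0.9": []}.
        now = datetime.utcnow()
        observations = []
        for host, ports in sweep_results.items():
            observations.append(Observation(host, None, now))
            observations.extend(Observation(host, p, now) for p in ports)
        return observations

    def analyse(observations: List[Observation]) -> List[str]:
        # The security heuristic lives here, not in the network traffic itself.
        suspect_ports = {31337, 12345}   # ports associated with "back door" programs
        return [f"{o.host}:{o.port} warrants review"
                for o in observations if o.port in suspect_ports]

    findings = analyse(collect({"10.0.0.1": [22, 80], "10.0.0.9": [31337]}))

The point of the sketch is simply that the heuristic can be changed, or re-run, without re-scanning the network.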
Network State Monitoring is a two-stage process of data collection followed by data analysis - a clean separation exists between the two stages. This two-stage process is a valid model for performing network security assessment, because for many network security assessment tasks there is no implicit need to perform analysis "on the network" (i.e. to embed intelligence in a check). It also eases the task of collecting data on a target environment over time, through the regular and automated repetition of data collection (network interrogation) techniques. Below, the data collection and data analysis stages of Network State Monitoring are described.

5 Data Collection

In order to detect change, the existing state of affairs must be recorded. Traditional network security assessment tools function by attempting to "snapshot" the security status of a target network using checks. The Network State Monitoring approach is different - it involves the regular re-population of a database with data gathered from the target environment; this enables data analysis to be performed separately, on an ad-hoc basis. This separation between data collection and data analysis enables multiple instances of data analysis to be performed in parallel (by separate users, at different levels of privilege to the underlying data, etc.), and also removes the perceived "effort" of having to run numerous network scans, because the task of data collection is both scheduled and automated (as it is in host-based state monitoring tools).

The data collection stage of Network State Monitoring would employ generic network information gathering techniques; generic in the sense that the techniques are relatively static compared to the numerous checks (of which there is an ever-increasing number) performed by traditional network security assessment tools. Data regarding network population (the identities and number of servers, workstations, routers, etc.), network services, and network topology would be gathered using a set of well known and established techniques; indeed, such techniques are already implemented within existing network security assessment tools. In the terminology of network information gathering, these techniques are: "ping sweep" (host detection), "port scan" (service detection), and "traceroute" (network topology detection).

The advantage in utilizing a set of static and well understood network information gathering techniques, as opposed to a large number of checks, is that the "value add" proposition shifts from the number of checks which can be performed to the quality of data analysis that can subsequently be performed (albeit on more generic data). The data collection stage, therefore, involves the collection of generic data ("I detected a host", "I detected a listening port"), not atomic results of the kind a traditional network security assessment tool may return ("I saw X vulnerability", "I saw Y vulnerability").

6 Data Analysis

Simply put, the data analysis stage involves querying the database. Below, three types of query are described: 'network inventory', 'host profiling', and 'vulnerability detection'; multiple other types of query exist.

Adjunct: Symbolic Network Representation

A mechanism must exist for a user to symbolically and intuitively conceptualise (model) their network environment, and subsequently utilize that model in the interface to a Network State Monitoring Tool (this is a key requirement for any enterprise security technology).
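Before expanding on that mechanism, it helps to make the database of section 5 concrete. The sketch below is purely illustrative (the table and column names are assumptions): a table of scheduled sweeps, a table of the generic observations each sweep produces, and a table associating user-defined symbols with member IP addresses, in anticipation of the mechanism described next.

    import sqlite3

    db = sqlite3.connect("nsm.db")
    db.executescript("""
    CREATE TABLE IF NOT EXISTS sweep (
        sweep_id   INTEGER PRIMARY KEY,
        started_at TEXT NOT NULL            -- when this scheduled collection ran
    );
    CREATE TABLE IF NOT EXISTS observation (
        sweep_id   INTEGER NOT NULL REFERENCES sweep(sweep_id),
        host       TEXT NOT NULL,           -- "I detected a host"
        port       INTEGER,                 -- "I detected a listening port" (NULL if none)
        os_guess   TEXT                     -- optional host-profiling result
    );
    CREATE TABLE IF NOT EXISTS symbol (
        name       TEXT NOT NULL,           -- e.g. "London firewall"
        host       TEXT NOT NULL            -- a member IP address (ranges expand to rows)
    );
    """)
    db.commit()

Each scheduled run of the collection techniques would simply add a new sweep row and its observations; nothing is interpreted at collection time.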
The process of data collection (network interrogation) is inextricably linked to the problem and study of network modelling and visualization. In any operational network environment, security prioritisation occurs, and this fact must be reflected in the data collection process. For example, it is logical to scan the set of hosts that comprise a firewall more regularly than a group of desktop workstations, and so on. The intrinsic weakness in host-based state monitoring software has always been that a security compromise could occur, and yet be hidden, in the time between the last and next "sweep" a tool performs; of course, this weakness also exists in Network State Monitoring.

Unless a mechanism exists to enable a user to categorize their network - "this IP address range is the London firewall", "this IP address is the PDC", "this IP address range is DHCP allocated", and so on - the process of data collection (network interrogation) cannot be tailored (i.e. prioritised) appropriately. It is proposed that the mechanism employed should allow a symbol (for example "London firewall") to be associated with an arbitrary, not necessarily contiguous, IP address range. In effect, a symbol provides an intuitive mapping (similar in ethos to the DNS). The set of symbols a user defines to represent their network can then be used by that user to specify the regularity and type of data collection to be employed, according to the user's notion of security criticality.

Because network interrogation techniques are by definition invasive (to varying degrees), a trade-off occurs between the "freshness" of the data in the database and the bandwidth and processing impact on the target networks and hosts caused by performing data collection itself. In a nutshell, the more regularly scanning is performed, the more efficiently (and the more often) change can be detected, but the heavier the impact on network bandwidth and host CPU.

The concept of symbolic representation can also be employed to allow a user to intuitively construct complex database queries, through the combination of the user's symbolic representation of their network and the symbolic representation of simple database queries. For example, a query might read: "hosts in London" AND "hosts in the Equities Division" AND "host OS is Solaris" AND "host is running a web server"; in this example, the first two symbols represent geographical and organisational groupings (user-defined), and the second two symbols represent simple queries to the database.

6.1 Network Inventory

Assuming symbolic groupings have been assigned, queries can be made to the database to ascertain the numbers of network entities (hosts, routers, switches, etc.) per symbolic group; for example, the number of hosts per region, per business unit, etc. The ability to determine the number of hosts on a network, and to track that population over time, is a fundamental requirement and foundation for network security. Detecting change in network inventory is invaluable for security - for example, when a new IP device "appears" on the network, or when an existing IP device disappears from the network (either sporadically or permanently - this could be due to a Denial of Service attack, a security compromise, a link outage, etc.). Note that a link outage still has a security implication - security concerns availability as much as integrity.
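Combining the symbolic groupings with the timestamped observations sketched earlier, queries of both kinds described above might look like the following (SQLite syntax; the schema, the symbol names, and the assumption of consecutive sweep identifiers are all illustrative):

    import sqlite3

    db = sqlite3.connect("nsm.db")

    # Inventory change (section 6.1): hosts in the "London firewall" group seen
    # in the latest sweep but absent from the previous one.
    appeared = db.execute("""
        SELECT DISTINCT o.host
        FROM observation o
        JOIN symbol s ON s.host = o.host AND s.name = 'London firewall'
        WHERE o.sweep_id = (SELECT MAX(sweep_id) FROM sweep)
          AND o.host NOT IN (SELECT host FROM observation
                             WHERE sweep_id = (SELECT MAX(sweep_id) - 1 FROM sweep))
    """).fetchall()

    # A symbolic query: "hosts in London" AND "host OS is Solaris" AND
    # "host is running a web server" (web server approximated here by port 80).
    solaris_web = db.execute("""
        SELECT DISTINCT o.host
        FROM observation o
        JOIN symbol s ON s.host = o.host AND s.name = 'London'
        WHERE o.sweep_id = (SELECT MAX(sweep_id) FROM sweep)
          AND o.os_guess = 'Solaris'
          AND o.port = 80
    """).fetchall()

The mirror-image of the first query (hosts present in the previous sweep but absent from the latest) covers the "disappearing device" case.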
6.2 Host Profiling

It is a relatively simple task to programmatically identify the OS of a remote host - it can be achieved through banner parsing, recognition of listening-port patterns, and TCP/IP stack fingerprinting (the complexity of each technique tends to be inversely proportional to the ease with which its results can be faked or obscured by the target host). Such techniques, in conjunction with symbolic network visualization, would enable metrics to be generated as to the number, ratio, location, usage, and deployment of OS types in the environment - for example, per business unit, per geographic region, etc. More importantly, these figures can be tracked over time.

Host profiling could be used to direct the deployment (per OS type, for example) of host-based security monitoring tools such as RealSecure System Agent or System Scanner. Because host-based security monitoring tools are almost ubiquitously network-enabled (i.e. can communicate across the network), a Network State Monitoring tool could generate metrics as to the number of installations, the rate at which installations are being performed, and so on.

Using a list of 'known port assignments' (the document which describes the mapping between a network-enabled application and the network port it conventionally uses), a Network State Monitoring Tool can monitor the usage, over time, of any network-enabled application within the environment. The security implication (manifesting perhaps as a numeric rating) associated with each detected network-enabled application could be used to feed a risk calculation model; for example, the presence of 'SSH' would rate differently to 'Telnet'.

6.3 Vulnerability Detection

The database can be parsed for any configuration which implies the existence of a vulnerability (a 'vulnerability' in this context relates to any observed state which may have a security implication). Note the phrasing: "which _implies_ the existence of a vulnerability"; for example, a host which is detected to have an active port known to be associated with a "back door" program should be flagged.

Network State Monitoring adds the detection of change to the contextual information available for the analysis of an event. For example: a port is determined to be "listening" on a host; Network State Monitoring can provide answers to the questions "how long has that port been open on that host?", "which hosts also have that port open?", "how has the use of the application which uses that port changed over the last year throughout the entire enterprise?", and so on.

7 Extensibility

Traditional network security assessment tools have always had "cross over" potential for their functionality to be extrapolated for non-security-related applications such as network inventory, network mapping, network management, etc. The design of Network State Monitoring - a separation of data collection and data analysis - allows data to be collected and queried in multiple ways, e.g. from a network security perspective, from a network management perspective, and so on. This is a crucial advantage of the design: data is collected without preconceptions about its eventual use, as opposed to the traditional approach of collecting data, immediately calculating inferences from it, and then discarding it. This approach avoids the problem of "information overload" that many network security assessment tools suffer from, because each query (and consequently the results of each query) is both defined and "built up" by the user.
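As a small illustration of this cross-over (again against the illustrative schema used earlier, with an assumed fragment of a 'known port assignments' mapping), the same stored observations can serve a security query of the kind described in section 6.3 and a purely operational inventory query:

    import sqlite3

    db = sqlite3.connect("nsm.db")

    # An assumed fragment of a 'known port assignments' mapping.
    KNOWN_PORTS = {22: "SSH", 23: "Telnet", 80: "HTTP", 31337: "possible back door"}

    # Security perspective (section 6.3): hosts with a suspect listening port.
    flagged = [host for (host,) in db.execute(
        "SELECT DISTINCT host FROM observation WHERE port = 31337")]

    # Management perspective: which applications are in use, and on how many
    # hosts - useful with or without any security interpretation applied.
    usage = {KNOWN_PORTS.get(port, "port %d" % port): count
             for port, count in db.execute(
                 "SELECT port, COUNT(DISTINCT host) FROM observation "
                 "WHERE port IS NOT NULL GROUP BY port")}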
A modular approach could be taken to extending the data analysis functionality of the tool, for example with a 'web crawler' plug-in, a 'trust relationship mapper' plug-in, a 'network map builder' plug-in, etc.

8 Conclusions

The network security assessment concept presented is a simple "first cut" which describes the core components of the model; no doubt many enhancements and extrapolations can be made.

It may be logical to "layer" traditional network security assessment techniques "on top" of Network State Monitoring. For example, a database query could harvest data relating to hosts which are of a particular OS type and which also have a "listening" web server port; an OS-specific web server vulnerability scan could then be performed against those hosts. Multiple other scenarios can be envisioned in which a database query is used to initiate further, more specific (i.e. less generic) network interrogation.

A hybrid model in which traditional network vulnerability assessment and Network State Monitoring are combined may well be the best way forward: Network State Monitoring techniques used to enable change detection, and traditional network vulnerability assessment techniques used for specific vulnerability identification.
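As a closing sketch of that layering (the query reuses the illustrative schema from the earlier sketches, and the scan step is only a placeholder, since the OS-specific checks themselves are precisely the intellectual capital of a traditional assessment tool):

    import sqlite3

    def solaris_web_hosts(db):
        # Harvest hosts of a particular OS type which also have a listening
        # web server port (schema and column names are assumptions).
        rows = db.execute("""
            SELECT DISTINCT host FROM observation
            WHERE os_guess = 'Solaris' AND port = 80
              AND sweep_id = (SELECT MAX(sweep_id) FROM sweep)""").fetchall()
        return [host for (host,) in rows]

    def run_targeted_scan(hosts):
        # Placeholder for an OS-specific web server vulnerability scan.
        for host in hosts:
            print("would launch Solaris web-server checks against %s" % host)

    run_targeted_scan(solaris_web_hosts(sqlite3.connect("nsm.db")))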