ORGANIZATIONS THAT START to address information security in a meaningful way will reach a point in their maturity when they have a lot of machine data. The challenge many CISOs face is how to leverage that data quickly and correlate events dynamically across the enterprise to track down advanced persistent threats (APTs). The Sony Pictures Entertainment hacking incident in 2014 underscored the importance of security monitoring and rapid incident response to clamp down on damage before disaster strikes. IT security managers cannot protect what they cannot see, and to “see” the associations or patterns that can help detect APTs, enterprises must have comprehensive logging in place across multiple layers of a network. The greater the visibility, the larger the volume of machine data, and the harder it is for cybersecurity incident response teams to “follow the thread” and correlate security events with threat intelligence in a meaningful way. The answers to many security questions about fraudulent activity, user behavior, communications, security risk and capacity consumption lie within these large data sets.

COMPREHENSIVE LOGGING

All of this logging can result in close to a million alerts a day about potential security events at larger enterprises, and terabytes of log data a month. While comprehensive logging is needed, several factors have to be considered when you increase logging across the enterprise. Infrastructure that is already heavily utilized might experience performance issues under the additional logging load. The network team should be involved in the design of the logging infrastructure to make sure the aggregation of enterprise-wide logging does not affect performance when all log sources are pointed at a few destinations. It’s important to involve key stakeholders in the design and to balance the need for logging with the function of the applications. To see across an enterprise, verbose logging should be enabled throughout, as follows (a coverage-check sketch appears after the list):
■ Layer 2 switching and choke points on enterprise distribution switches.

■ NetFlow enabled and logged where possible.

■ Critical services to send access and system logs.

■ AD to log user behaviors.

■ All Internet-exposed devices to log access and system events.

■ Endpoint protection systems to log alerts.

■ All firewall devices to log inbound access (accepts) and outbound access (accepts and denies).

■ Other security devices to log alerts and access.
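
One practical complement to this checklist is a periodic check that every expected source is still reporting. Below is a minimal sketch, assuming a hypothetical collector layout in which each source writes to its own file under /var/log/central; the source names, directory and one-hour threshold are all assumptions to adapt.

# log_source_coverage.py -- minimal sketch: flag expected log sources that
# have gone quiet. Assumes a hypothetical central layout where each source
# lands in /var/log/central/<source>.log; adjust paths for your collector.
import os
import time

# These names mirror the checklist above; they are illustrative, not a
# standard naming scheme.
EXPECTED_SOURCES = [
    "distribution-switches",   # Layer 2 / choke-point switching logs
    "netflow",                 # NetFlow export
    "critical-services",       # access and system logs
    "active-directory",        # user behavior / logon events
    "internet-exposed",        # Internet-facing devices
    "endpoint-protection",     # antimalware alerts
    "firewalls",               # inbound/outbound accepts and denies
    "security-devices",        # IDS and other alerts
]

LOG_DIR = "/var/log/central"   # hypothetical aggregation directory
MAX_SILENCE_SECONDS = 3600     # alert if a source is silent for an hour

def stale_sources(log_dir=LOG_DIR, max_silence=MAX_SILENCE_SECONDS):
    """Return sources whose log file is missing or has not grown recently."""
    now = time.time()
    stale = []
    for source in EXPECTED_SOURCES:
        path = os.path.join(log_dir, source + ".log")
        # A missing file or a stale modification time both mean a blind spot.
        if not os.path.exists(path) or now - os.path.getmtime(path) > max_silence:
            stale.append(source)
    return stale

if __name__ == "__main__":
    for source in stale_sources():
        print(f"WARNING: no recent logs from {source}")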
Why so much logging? Most advanced adversaries gain access to a victim’s network via malware, drive-by links or Web shells. Once the initial attack “phones home” (malware initiates an outbound connection to command-and-control, or C2, hosts to get around inbound firewall rules), rootkits are delivered, and the attackers quickly gain access to a user account and drive around the network as a fully credentialed user. It is difficult to lock down a Microsoft network in any meaningful way without destroying its functionality. A successful strategy to defeat this type of attack includes the following:
■ Detect the malware or drive-by links before users click on them. To do this, a cybersecurity incident response team has to be able to compare user behavior against threat intelligence. This requires full packet logging of all ingress and egress traffic at an enterprise’s edge.

■ Detect malware or rootkit delivery to the endpoint. To do this, the cybersecurity team needs verbose logging on antimalware and endpoint protection systems.

Beyond detection, the cybersecurity team needs to be able to analyze user behaviors and access across the entire enterprise. Security information and event management (SIEM) tools can alert you to unusual activity, such as account usage during off hours; this is only possible with comprehensive logging of Active Directory (AD) and host access events.
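
To make the off-hours example concrete, here is a minimal sketch, assuming AD logon events have already been exported to a CSV with user, timestamp and event_id columns (the file and column names are assumptions; a SIEM or PowerShell export could produce something similar). Windows records a successful logon as event ID 4624.

# off_hours_logons.py -- minimal sketch: flag AD logons outside business
# hours. Assumes a CSV export with columns "user", "timestamp" (ISO 8601)
# and "event_id"; the file name and column names are assumptions.
import csv
from datetime import datetime

BUSINESS_HOURS = range(7, 19)   # 07:00-18:59 local time; tune per site
LOGON_EVENT_ID = "4624"         # Windows: successful logon

def off_hours_logons(csv_path):
    """Yield (user, timestamp) for successful logons outside business hours."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["event_id"] != LOGON_EVENT_ID:
                continue
            ts = datetime.fromisoformat(row["timestamp"])
            # A weekend day or an hour outside the window counts as unusual.
            if ts.weekday() >= 5 or ts.hour not in BUSINESS_HOURS:
                yield row["user"], ts

if __name__ == "__main__":
    for user, ts in off_hours_logons("ad_logons.csv"):
        print(f"off-hours logon: {user} at {ts}")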

Most security programs begin with logs from the devices at the edge of the network, because those are usually easier to obtain. Firewall, network intrusion detection system and other network-based security products have robust and mature logging capabilities that most companies are already using. The level at which the logging is configured is paramount for visibility into APT traffic as it leaves or enters your environment. If there is an active intrusion, traffic coming and going from the network edge has to be correlated with the suspicious traffic to see the entire communications channel: malicious actors infiltrating the network, driving a compromised account, and then moving laterally across the enterprise. It’s critical to be able to see both successful and denied traffic at the network edge to get a profile of what is normal for your business.

NETWORK CONNECTIVITY AND COMMUNICATIONS

At the network edge, be sure that your logging doesn’t have blind spots for traffic that can be used to bypass your security controls. Encrypted traffic, such as SSL/HTTPS, and services that are traditionally used for communication and data transfer, such as IRC and FTP/SFTP/SSH, should also be logged in detail. Logging of services available to the public Internet is also of great interest, as these systems are the gateways to and from your infrastructure. Any Web server should log not only the connections into the server but also the actions and input within the applications, so you can understand if they are being used as a bridge into your network. This logging should include not only internally developed Web applications and services but also vendor-provided appliances and applications that reside on those systems. The logging needs to let you see what is behind all network communications to and from your environment.

Any security device or system software within your network should also create logs. These security systems usually include, but are not limited to, antivirus and other host intrusion detection software. You can review the host logs on these systems to gain an understanding of the network accounts and computer systems that are used within the scope of the threat. Host firewall logs can be critical to understanding how threats move around within the network after an initial compromise. Like host-based firewall logs, NetFlow can help monitor the traffic within your network and identify areas that require further investigation. NetFlow can alert your team to data-transfer activity within your network that might not be authorized, or to sensitive information being staged for transmission outside of your network.

CENTRALIZED SYSTEM

Network authentication logs from AD and other LDAP-based services used for central authentication of users and network systems enable you to trace access within your environment and begin to frame which systems are involved with the threat. Many of the applications and systems listed above can send logs to a centralized system, either through syslog or another facility. Having a central log collection and analysis system is crucial, because searching each of these systems individually, across multiple sources and locations, is tedious work. Left on the hosts, this log information is written to local system logs, which systems administrators will want to constrain so the data doesn’t consume usable disk space. Security logs kept on systems will usually contain data for a few days at most, and in many situations only a few hours. This is not sufficient time to allow for analysis and review. Most intrusions are not detected for months after the initial compromise (which may have been the case with Sony). If log data is not collected and retained during those months, identifying the source system or the persistence mechanism becomes impossible, and the threat may remain within your network for a very long time.
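
To illustrate the central-collection point, the sketch below is a bare-bones UDP syslog receiver that appends each message to a dated file, so retention becomes a question of how many daily files you keep rather than what each host can spare. It is a teaching sketch under those assumptions, not a production collector; a real deployment would use a hardened receiver such as rsyslog, syslog-ng or a SIEM ingest tier.

# central_syslog_sink.py -- bare-bones UDP syslog receiver for illustration.
# Writes each message to a per-day file so retention outlives the short
# on-host log windows described above. Not production-grade: no TLS, no
# rate limiting, no parsing of syslog priority or structured data.
import socket
from datetime import date

HOST, PORT = "0.0.0.0", 514   # syslog's standard UDP port (needs privileges)

def run():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    while True:
        data, (src_ip, _) = sock.recvfrom(8192)
        line = data.decode("utf-8", errors="replace").rstrip()
        # One file per day: retention is then a simple file-rotation policy.
        with open(f"syslog-{date.today().isoformat()}.log", "a") as f:
            f.write(f"{src_ip} {line}\n")

if __name__ == "__main__":
    run()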
BIG DATA PROBLEM

When the cybersecurity incident response team investigates an incident, its analysts must be able to follow the thread of events through logged data, and that path is interwoven through the Microsoft domain, security devices, edge devices, switches and routers. During a security event, time is essential in stopping the unauthorized exfiltration of data from a network. The window from the point of discovery to when an active defense is put in place and the adversary is stopped is critical. To be successful in seeing, stopping and investigating a cyberevent, an enterprise must have the ability to quickly query very large sets of machine data.

The notion of a commercial off-the-shelf tool that has all the answers programmed into its graphical user interface is a fallacy. There is no fixed solution. Queries against large sets of machine data must be dynamic, and results must be presented quickly; the streaming-filter sketch below illustrates the shape of such a query. For security analysts to be successful, they have to be able to manage big data. As the number of log sources grows, so does the volume of the log data being collected, and this growth never follows a linear path: each system generates more and more data, and each system added brings still more systems into scope. If all systems and devices are sending logs to a centralized system, which is the ultimate goal, the volume of data quickly becomes unmanageable. With systems now producing more log data than ever before, and diverse data sources required to search out and locate a threat within the network, a new way to perform data analysis and identify correlated events is needed. The commercial SIEM companies are trying hard to play catch-up, positioning their products to support the large volumes of data produced and collected.
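
The shape of such a dynamic query matters more than any particular product. The sketch below streams gzip-compressed log archives and pulls every line that mentions an indicator inside a time window, without loading anything fully into memory; the file layout, the timestamp-first line format and the example indicator are assumptions, and at real scale the same pattern would be pushed down into a distributed search platform.

# thread_query.py -- minimal sketch of a dynamic query over large machine
# data: stream gzip-compressed log archives and pull every line that
# mentions an indicator within a time window. The file layout, the
# "ISO 8601 timestamp first" line format and the indicator are assumptions.
import glob
import gzip

def search(indicator, start, end, pattern="logs/*.log.gz"):
    """Yield matching lines; generators keep memory flat over terabytes."""
    for path in sorted(glob.glob(pattern)):
        with gzip.open(path, "rt", errors="replace") as f:
            for line in f:
                # Assumed format: "<ISO-8601 timestamp> <rest of event>".
                # ISO timestamps compare correctly as plain strings.
                ts = line.split(" ", 1)[0]
                if start <= ts <= end and indicator in line:
                    yield path, line.rstrip()

if __name__ == "__main__":
    # Hypothetical C2 address (documentation range) and a one-day window.
    for path, line in search("203.0.113.7", "2015-04-01", "2015-04-02"):
        print(path, line)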
