Assessing and prioritizing your SOC log feeds can help reduce cost and ensure quality
Effective breach detection and threat hunting in the SOC depend on collecting logs and alerts from a wide variety of infrastructure and applications: clients and servers, networks, email, cloud services, and the many security solutions deployed in the enterprise. There are significant upfront costs to onboarding those logs into the SIEM and ensuring they are available for detections. Critical questions that SOC managers must answer include:
- Which logs should be onboarded next based on our threat detection priorities?
- Are we making adequate use of all the existing logs we have onboarded?
- Is the quality of the ingested logs adequate to ensure high-quality detections?
- Can we automate this analysis and prioritization of log feeds?
At Anvilogic, we have been working with SOCs to help them automate their detection programs. Here are our observations of how mature SOCs answer these questions and use automation to gain continuous, real-time visibility into their log ingestion quality and prioritization.
The Log Ingest Problem
Enterprises bear substantial upfront costs to collect logs and make them available in the SIEM for detections. These include:
1. Log Source Configuration.
a. There is an engineering cost to readying the sources for sending logs. For example, onboarding PowerShell logs from Windows client devices requires the following:
i. Requesting that the infrastructure engineering team turn on PowerShell logging at the endpoints of interest.
ii. Ensuring the proper audit policy has been configured by choosing one or more of the modules: script blocks, script executions, transcription logging.
iii. Configuring a log forwarding solution at each of those endpoints.
b. Endpoints have to be configured to generate the required sets of events. Do you need process command-line logging in Windows event logs? You have to configure the audit policy appropriately.
2. Log Transport. Network resources need to be allocated to allow high volume log traffic from the log sources (endpoints, servers, clouds, etc.) to your SIEM installation.
3. Log Field Normalization. Once the logs hit the SIEM, their fields should be normalized, which makes those logs easier to use for detection engineering and threat hunting.
4. Log Ingest Costs. Depending on your daily ingest volume, you may owe additional licensing costs to your SIEM vendor.
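To make the normalization cost in step 3 concrete, here is a minimal sketch of mapping raw Windows event field names to normalized, CIM-style names. The raw field names and the mapping itself are illustrative assumptions, not a complete or vendor-accurate schema:

```python
# Illustrative mapping from raw Windows event field names to
# CIM-style normalized names. This is a sketch, not a full schema.
RAW_TO_CIM = {
    "NewProcessName": "process",
    "ParentProcessName": "parent_process",
    "SubjectUserName": "user",
    "Computer": "dest",
    "CommandLine": "process_command_line",
}

def normalize(raw_event: dict) -> dict:
    """Rename raw fields to normalized names; pass unknown fields through."""
    return {RAW_TO_CIM.get(k, k): v for k, v in raw_event.items()}

raw = {"NewProcessName": "powershell.exe", "Computer": "WKS-042"}
print(normalize(raw))  # {'process': 'powershell.exe', 'dest': 'WKS-042'}
```

Once every feed is mapped into the same field names, a single detection rule can run unchanged across logs from different vendors.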
Prioritizing Logs For Onboarding
All of this cost is upfront, before you get any detection value out of these logs. The set of logs that needs onboarding is constantly expanding as your organization adopts new infrastructure (EKS containers in AWS?), new SaaS services, and new security solutions whose alerts also need to be onboarded. SOCs need to prioritize log onboarding by identifying their critical threat priorities across the attack surface, adversaries, and their TTPs.
Analyzing Log Quality
Now that the logs are onboarded, the next question is whether they are good enough to detect the TTPs used by your adversaries. These are some of the best practices we have seen mature SOCs apply when analyzing log data quality.
- Source Asset Coverage. How many of the expected assets are actually sending logs?
- Log Delivery Timeliness. What is the time lag from event occurrence to that event showing up in the SIEM?
- Log Field Availability. Are all the fields that are needed for detection available in the log line?
- Log Normalization. Have the fields been normalized using a standard such as CIM? For example, the raw view of a Windows event log can be much harder to use in detection engineering than a normalized view of the same log.
Knowing the quality of your log feeds helps you assess the quality of the TTP detections you can build on them. An automated solution can inspect a feed and produce a quality assessment score.
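A simple version of such a score can be sketched across the four dimensions above. The equal weighting and the 300-second lag threshold are illustrative assumptions, not a prescribed formula:

```python
from dataclasses import dataclass

@dataclass
class FeedStats:
    assets_logging: int        # assets actually seen in the feed
    assets_expected: int       # assets that should be logging
    median_lag_seconds: float  # event time -> SIEM arrival time
    fields_present: int        # detection-relevant fields found
    fields_required: int       # detection-relevant fields needed
    normalized: bool           # fields mapped to a standard (e.g. CIM)

def quality_score(s: FeedStats, max_lag: float = 300.0) -> float:
    """Score a feed 0..1 over coverage, timeliness, fields, normalization."""
    coverage = s.assets_logging / s.assets_expected
    timeliness = max(0.0, 1.0 - s.median_lag_seconds / max_lag)
    fields = s.fields_present / s.fields_required
    norm = 1.0 if s.normalized else 0.0
    return round((coverage + timeliness + fields + norm) / 4, 2)

stats = FeedStats(900, 1000, 60.0, 9, 10, True)
print(quality_score(stats))  # 0.9
```

Running such a score continuously, rather than once at onboarding, catches feeds that silently degrade, for example when an agent update stops sending a required field.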
From Logs To TTP Detections
With the logs onboarded and their quality verified, what can you detect with them?
Mapping to ATT&CK Data Sources
The folks at MITRE ATT&CK have come up with a helpful model of data sources, which are abstractions over log and alert feeds. For each ATT&CK technique, they have identified the data sources required to detect the underlying threats.
Identifying Threat Detections
You can use this mapping to go from your feeds to data sources to ATT&CK techniques that can be detected using those feeds.
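That chain of lookups can be automated with two mapping tables, as sketched below. The feed names, data source names, and technique IDs here are illustrative examples, not a complete mapping; real mappings come from ATT&CK and your own feed inventory:

```python
# Partial, illustrative mappings for the sketch.
FEED_TO_DATA_SOURCES = {
    "windows_security_events": ["Process Creation", "Logon Session"],
    "powershell_logs": ["Command Execution", "Script Execution"],
}
DATA_SOURCE_TO_TECHNIQUES = {
    "Process Creation": ["T1059", "T1053"],
    "Command Execution": ["T1059"],
    "Script Execution": ["T1059.001"],
    "Logon Session": ["T1078"],
}

def detectable_techniques(feed: str) -> set:
    """ATT&CK techniques detectable via the data sources a feed provides."""
    techniques = set()
    for source in FEED_TO_DATA_SOURCES.get(feed, []):
        techniques.update(DATA_SOURCE_TO_TECHNIQUES.get(source, []))
    return techniques

print(sorted(detectable_techniques("powershell_logs")))
# ['T1059', 'T1059.001']
```

The same tables also answer the inverse question from earlier: given a technique you care about, which feeds would you need to onboard to detect it?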
A Threat Detections First Approach
Mature SOCs address this problem by starting with the critical threats and attack-surface priorities in their environment. These are tracked against various frameworks, foremost among them ATT&CK, which lets us analyze techniques in terms of the data sources required to detect them. Every feed onboarding request is reviewed with the following steps:
- Identify the data sources that are mapped to the feed.
- Review the threats that can be detected with those data sources. Based on those threat priorities, the onboarding of that log feed is prioritized.
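Under those steps, a candidate feed's onboarding priority can be sketched as the total priority of the techniques it would unlock. The priority values and feed-to-technique sets below are illustrative assumptions, standing in for the output of your own threat-model review:

```python
# Illustrative threat priorities (higher = more important), assumed to
# come from the SOC's threat-model review.
TECHNIQUE_PRIORITY = {"T1059": 9, "T1078": 7, "T1053": 5}

# Techniques each candidate feed can help detect (via its data sources).
FEED_TECHNIQUES = {
    "windows_security_events": {"T1059", "T1053", "T1078"},
    "dns_logs": {"T1071"},
}

def onboarding_priority(feed: str) -> int:
    """Rank a candidate feed by the total priority of techniques it unlocks."""
    return sum(TECHNIQUE_PRIORITY.get(t, 0) for t in FEED_TECHNIQUES.get(feed, ()))

ranked = sorted(FEED_TECHNIQUES, key=onboarding_priority, reverse=True)
print(ranked[0])  # windows_security_events scores 9 + 5 + 7 = 21
```

A feed whose techniques already fall outside your threat priorities (like dns_logs here, whose only technique carries no priority weight) scores zero and can wait, deferring its onboarding cost.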
At Anvilogic, we help enterprise SOCs bring automation into their threat hunting and detection programs. One way we help our customers is by offering real-time recommendations on which feeds to onboard, based on the threat priorities they have identified.