Content Conundrums for the SOC: Part II


If you survived the first SOC conundrum for content development, then your organization is well on its way toward building new and exciting content without agonizing over basic questions when new threats emerge, such as whether a threat is even relevant to your organization. Instead, your organization should be able to quickly determine threat applicability and move forward with a well-defined, streamlined content development process using the build, attack, and defend methodology. If you aren't at this point yet, refer back to my previous post, Part 1 of this series, here. With this stream of new, applicable content flowing into your SIEM for consumption by your Incident Response team, you now have several new detection queries actively hunting for threats within your environment. But how are you going to manage this growing content repository when each piece of code was developed individually with no sense of reusability? And how are you planning to verify that previously developed content is still applicable now that time has passed? Herein lie the next content conundrums for the SOC.

Let's start with a few issues that need to be addressed:

  1. How is your organization going to handle a situation where a critical component of these detection queries changes?
  2. How is your organization going to ensure that logic blocks within your queries are repeatable and can continue to be refined over time?
  3. How is your organization going to validate that content previously produced remains applicable and working correctly over time?

The idea that your organization just produced a large quantity of content, at a painstaking cost in time and resources, only to have it rendered fully useless by a change in a data storage schema, or partially useless to newly onboarded business units whose data needs to be incorporated into the logic, is a nightmare. No one wants to go through each individual piece of content and update components one by one, especially when there is so much room for human error.

So how would you go about making sure that manually combing through all of your past developed content never becomes a reality? Easy: modularize your content to its fullest extent. This can be done via functions, macros, or whatever reusable logic grouping your organization's SIEM supports. The idea is to modularize every piece of logic that can be reused in another piece of content; even the way you call a particular data set should be modularized, just to account for data storage schema changes and new data storage objects created for new business units. Luckily, this approach resolves the second issue above as well: by encapsulating complex logic blocks in these functions or macros, you can ensure that any statistical aggregation, machine learning algorithm, or other shared logic can be continuously refined, with all of your old content automatically improving with each update.
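To make the idea concrete, here is a minimal sketch of that modularization pattern in Python. The function names (`base_search`, `suspicious_process_filter`, `build_detection`) and the index/sourcetype values are purely illustrative assumptions, not any vendor's API; in a real SIEM the same roles would be played by its native macros or saved functions.

```python
# Hypothetical sketch: reusable logic blocks composed into a detection query.
# All names and field values here are illustrative assumptions.

def base_search(index: str = "wineventlog") -> str:
    """The single place that knows the data storage schema.
    If the index or sourcetype changes, update only this function and
    every detection built on top of it inherits the fix."""
    return f'index={index} sourcetype="WinEventLog:Security"'

def suspicious_process_filter() -> str:
    """A reusable logic block shared across many detections."""
    return 'Image IN ("*\\\\powershell.exe", "*\\\\cmd.exe")'

def build_detection(extra_logic: str) -> str:
    """Compose a full detection query from the reusable blocks above."""
    return f"{base_search()} {suspicious_process_filter()} {extra_logic}"

query = build_detection("| stats count by host, Image")
```

When a schema change lands, only `base_search` is edited and every composed query picks up the new data call on its next run.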

Sounds great not having to go back and update everything one by one every time you change the way you perform a particular statistical analysis, or even something as simple as a data call, right? It is, and it is going to save your analysts an incredible amount of time while reassuring them that everything is running on the latest and greatest. With that, however, we have also introduced the third issue stated above: how exactly do you know that those great new updates to those code blocks didn't break anything? In a worst-case scenario, those modularized logic block updates would require you to go back and do exactly what you have been trying to remove from your workload, just to have peace of mind that everything is still functioning.

You just saved all of this time by making your updates propagate automatically, only to add that time right back due to a lack of validation and regression testing. Here, the best approach is to automate regression testing of your rules whenever updates are made. This can be done via custom scripts or via playbooks within a SOAR platform, as long as the SIEM you are testing against supports some form of API call to run remote searches. Of course, there are a couple of things you will need to incorporate into the B.A.D. processes referenced in Part I of this series: essentially, have your Red/Attack Team record the times of their exploit simulations so that those timestamps, in conjunction with your detection queries, can be passed via remote searches against the specific time frames where you know results should appear.
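A regression harness along those lines might look like the sketch below. This is an assumption-laden illustration: `run_remote_search` stands in for whatever remote-search API your SIEM exposes (it is stubbed here so the example runs), and the `Detection` record carries the exploit simulation window noted by the Red/Attack Team.

```python
# Hedged sketch of automated regression testing after a logic-block update.
# run_remote_search() is a stand-in for a real SIEM search API, stubbed
# out here; Detection and its fields are illustrative names.
from dataclasses import dataclass

@dataclass
class Detection:
    name: str
    query: str
    sim_start: str  # start of the Red/Attack Team's recorded simulation window
    sim_end: str    # end of that window

def run_remote_search(query: str, earliest: str, latest: str) -> list:
    """Stand-in for a remote search call (e.g. over your SIEM's REST API).
    Stubbed to return one hit so the harness is runnable as-is."""
    return [{"host": "victim-01", "query": query}]

def regression_test(detections, search=run_remote_search) -> list:
    """Re-run every detection over its known-positive time window.
    A query that returns nothing after an update has regressed."""
    failures = []
    for d in detections:
        if not search(d.query, d.sim_start, d.sim_end):
            failures.append(d.name)
    return failures
```

Because each detection is replayed against a window where the attack is known to have occurred, an empty result set is an unambiguous failure signal rather than a judgment call.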

If your automation comes back with certain detection queries returning null results after a modularized logic block update, then you know something went wrong. At the same time, you can use this same automation to check whether certain detection queries are no longer applicable by periodically reviewing the outputs and metadata associated with every search run in your automated regression tests. This helps keep system utilization as low as possible, making way for the detection content that is still applicable to your environment.

We hope these insights are helping your organization tackle its SOC content conundrums head-on. If there are other difficult problems your organization faces today, feel free to reach out; we would love to hear from you.
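That periodic applicability review can be a very small piece of logic layered on top of the regression runs. In this sketch, `run_history` is a hypothetical record of result counts from past automated runs; a detection that never fires across enough runs is flagged as a retirement candidate rather than deleted automatically.

```python
# Illustrative sketch: flag detections whose periodic regression runs keep
# returning nothing, as candidates for review and possible retirement.
# run_history and min_runs are assumed names, not any product's API.

def stale_detections(run_history: dict, min_runs: int = 3) -> list:
    """run_history maps detection name -> list of result counts, one per
    automated run; require min_runs of data before flagging anything."""
    return [
        name
        for name, counts in run_history.items()
        if len(counts) >= min_runs and all(count == 0 for count in counts)
    ]
```

Keeping the human in the loop for the final retirement decision matters here, since a quiet detection may simply cover a threat that has not recurred yet.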

In the meantime, stay tuned for Content Conundrums for the SOC: Part III. At Anvilogic, we take SOC content development seriously, putting every piece of content through several rigorous cycles to ensure it is as robust, efficient, and applicable as possible, and we hope to help your SOC's detection content reach the next level. Let us know how we can help you.

