My 15-year-old son followed in my footsteps and became a lifeguard last summer (proud dad moment). His job is at a large water park, and he had nine water saves in just his first summer. While I have two water saves after 4+ years as a guard, it’s safe to assume a larger pool means more people and more risk.

This lifeguard paradox has a parallel in today’s security analyst: as the volume of alerts and data available to them grows, they must take on more responsibility. More logs and alerts to triage, each taking time to analyze, means a bigger need for more lifeguards. However, as we in the security industry know, there are far more open jobs than applicants and recent graduates can fill.

There is a strong symbolic parallel between a lifeguard and a security analyst. A lifeguard’s duty is to keep watch over the pool and ensure everyone is safe, jumping in as needed to rescue. Security analysts may not be saving lives, but they monitor the alert landscape in much the same way, jumping into alerts as needed to rescue. If a security analyst tries to stand guard (or sit) and watch the entire environment, they quickly get overwhelmed, because unlike a pool’s lifeguard-to-swimmer ratio, there is no standard alert-to-analyst ratio.

Many studies estimate a security analyst can triage, escalate, or close about 8-12 security alerts per hour. This means that, on average, an analyst dedicates only 5-7 minutes to any one alert. Adding more log events and alerts then pushes teams to expand and orchestrate their activities. I’ve seen teams filter out events, saying “we are only going to watch ‘X’ level or ‘Y’ type of event,” essentially trying to make the analyst more efficient by looking only for the “assumed bad.” In this process, the context and story analysts see becomes fragmented. Worse yet, the assumed “good” or “unimportant” data that gets filtered out is typically where the malicious activity is hiding.
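The arithmetic behind that time budget is worth making explicit. This short sketch (using the 8-12 alerts/hour figure from the studies cited above; the shift length is an assumption for illustration) shows how quickly an analyst's minutes-per-alert allowance shrinks:

```python
# Per-alert time budget implied by an 8-12 alerts/hour triage rate.
alerts_per_hour_low, alerts_per_hour_high = 8, 12

minutes_per_alert_max = 60 / alerts_per_hour_low    # slower pace -> 7.5 min/alert
minutes_per_alert_min = 60 / alerts_per_hour_high   # faster pace -> 5.0 min/alert

print(f"Time budget per alert: {minutes_per_alert_min:.1f}-{minutes_per_alert_max:.1f} minutes")

# Assumed 8-hour shift: the ceiling on what one analyst can clear per day.
shift_hours = 8
print(f"Alerts per shift: {alerts_per_hour_low * shift_hours}-{alerts_per_hour_high * shift_hours}")
```

Any alert load beyond that per-shift ceiling either queues up or gets filtered out, which is exactly the pressure that leads teams to watch only the "assumed bad."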

What analysts need is an easy button: an asset-centric view of all data with autonomous correlation, enabling them to spend more time looking through and understanding the relationships of various data types down to the protocol level – i.e., what we call the “deep end.” The deep end, for the security analyst, is all the data output from security tools and enterprise systems. It comes in different forms and from many different point solutions, and the true purpose of collecting it all is the analyst’s ability to sift through it and determine what story it tells. Simply put, for an analyst to do their job effectively, they must work in the deep end – reviewing all the data – yet today’s systems were built around a 5-7 minute allowance per alert. Unfortunately, we all know this is not just impossible; it’s broken!

Why is this so? As an analyst, if I get an alert and search any SIEM or log management solution for that IP, I will inevitably find a tremendous amount of information. I may find heaps of proxy, endpoint, DHCP, AD, and countless other logs, but their haphazard organization in today’s SIEM makes it very challenging to understand how all of this information ties together in a meaningful way. For that reason, it’s hard to stay in the “deep end” – it’s hard to find the relevance necessary to complete the story of the alert and triage with an accurate measure of its “risk” to my organization.
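To make the asset-centric idea concrete, here is a minimal sketch of that IP pivot done right: the log records, field names, and values below are invented for illustration (not the JASK data model or any real SIEM schema), but they show how scattered per-source logs can be regrouped into one time-ordered story for a single asset:

```python
# Hypothetical log records from several point solutions, normalized to a
# common shape. In a real SIEM these would live in separate indexes/silos.
logs = [
    {"source": "proxy",    "ip": "10.0.0.5", "ts": 100, "detail": "GET bad-domain.example"},
    {"source": "dhcp",     "ip": "10.0.0.5", "ts": 90,  "detail": "lease -> LAPTOP-42"},
    {"source": "ad",       "ip": "10.0.0.5", "ts": 95,  "detail": "interactive login: jsmith"},
    {"source": "endpoint", "ip": "10.0.0.7", "ts": 101, "detail": "process powershell.exe"},
]

def asset_timeline(records, ip):
    """Collect every record for one asset and order it in time,
    so the analyst sees one coherent story instead of per-tool silos."""
    return sorted((r for r in records if r["ip"] == ip), key=lambda r: r["ts"])

for event in asset_timeline(logs, "10.0.0.5"):
    print(event["ts"], event["source"], event["detail"])
```

The output reads as a narrative (DHCP lease, then AD login, then the proxy hit), which is exactly the "complete the story" view that a flat keyword search across silos fails to give.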

I call this phenomenon “analyst blinders.” With those blinders on, many investigations end with: “I don’t have enough information to make a decision.” This happens because when an analyst has too much unorganized information and too little time, their triage capabilities are limited and they will often unintentionally miss or ignore the relevant data they need.

Security teams are building tools to make searching faster, but how does one build a system that copes with this volume of data and finds what is really relevant? I propose inserting a “pool skimmer.” In pools, skimmers continuously cycle through all the water in the enclosure and catch the dirt and debris. If pool skimmers operated like today’s security methodologies, it would be the equivalent of putting a bypass in the system for up to 80% of the water flow. As you can imagine, quite a bit of filth would remain. In the security industry, we need that skimmer – one that lets us see everything with no filter, yet still surfaces what is relevant – and for that, we need Artificial Intelligence.

Humans simply cannot scale. AI has been tasked with changing how we look at this data problem. Applying AI removes the scaling issue and saves teams from having to write manual correlation rules and detection logic for malicious behavior. AI can make sense of the patterns and behavior of each asset in your enterprise. An individual security or system log can be meaningless on its own; combined with other “meaningless” logs, the activity becomes more significant. The AI examines all the data – high-fidelity and seemingly meaningless alike – ultimately identifying the relationships and plotting them over time to quickly understand the relevance. AI applied in this manner becomes your lifeguard, watching the pool with an awareness of the history of each swimmer.
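One simple way to picture "awareness of the history of each swimmer" is a per-asset behavioral baseline. The sketch below is a deliberately tiny stand-in for real machine learning (the metric, numbers, and the 3-sigma threshold are all illustrative assumptions, not JASK's actual models): each asset is scored against its own past behavior instead of a hand-written rule:

```python
from statistics import mean, pstdev

# Hypothetical hourly outbound-connection counts for one asset: its "normal."
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
observed = 55  # current hour

def anomaly_score(history, value):
    """Z-score of a new observation against the asset's own history.
    A per-asset baseline replaces manually written correlation rules."""
    mu, sigma = mean(history), pstdev(history)
    return (value - mu) / sigma if sigma else 0.0

score = anomaly_score(baseline, observed)
if score > 3:  # assumed threshold: ~3 standard deviations above normal
    print(f"Flag asset: {score:.1f} sigmas above its own baseline")
```

The point is not the statistics but the framing: "abnormal" is defined relative to each swimmer's history, so a count that is routine for a build server can still flag a laptop.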

Using AI keeps the analyst in the deep end of the data pool. The analyst is able to review all of the relevant data the AI used to pull apart a possible compromise. The JASK timeline view is a great example of showing the story of anomalous activity. Each flag represents a data point that the AI associated with the asset. Analysts are then presented with all relevant information, organized in an easily understood format. The analyst can drill down on any of these points and get the specific details that support the context of the investigation.

Think of each data point above as an action which can lead to drowning, and it becomes very clear that multiple pieces of the puzzle are needed to make an informed decision. Individually, each event can be interesting but does not tell a full story. It is only when all the data comes together that a real risk can be identified. Splashing, screaming, and a head underwater are all indicators of drowning. Alone, however, they are all typical actions of kids in a pool. If a lifeguard can understand all these actions, in the order they happened, as a story, it is much easier to realize that someone is in danger before it is too late. Of course, this example oversimplifies an analyst’s job. It is not humanly possible to match all these events and times and create a story without AI.
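The "order matters" point can be sketched as a toy sequence match. The event names and pattern below are invented to mirror the pool analogy (a real detection would operate on log events, not strings), but the mechanic is the same: each event alone is normal, and only the ordered combination signals danger:

```python
# Individually benign events that, in this order, indicate drowning.
DROWNING_PATTERN = ["splash", "scream", "head_under"]

def matches_in_order(events, pattern):
    """Return True if `pattern` appears as an ordered subsequence of `events`."""
    idx = 0
    for event in events:
        if event == pattern[idx]:
            idx += 1
            if idx == len(pattern):
                return True
    return False

afternoon = ["splash", "laugh", "scream", "splash", "head_under"]
print(matches_in_order(afternoon, DROWNING_PATTERN))  # True: all three, in order
```

Reverse the order (a head under water, then a scream, then a splash) and the same three events no longer match, which is why correlating events over time, rather than counting them, is what turns noise into a story.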

This is exactly why I chose to come to JASK: to be a lifeguard for the security analyst. The JASK platform allows the analyst to watch more data without being overwhelmed by more alerts. Let the AI be the frontline analyst, and keep your analysts in the deep end. Remove the blinders and provide the freedom to investigate and understand.

Learn more about how JASK provides the right context to your security alerts by signing up for our webinar here.