In this blog post
Three types of data for anomaly detection
What are the three types of data?
To do anomaly detection right within your own IT environment, you need three types of data. The first type answers the question of which events are very likely to happen and which are less likely – in other words: what are the anomalies? The second type predicts the impact of an anomaly. The third and final type is ingested from the user (this could be you) so the platform learns which anomalies matter most to that user.
What is an anomaly?
The first type of data is general monitoring data: for example, CPU usage, the latency of a service or the disk usage of a database. This data is abundant, and all of it can contain anomalies. However, anomaly detection on its own is an underspecified task: what counts as an anomaly is subjective. For example, imagine that updating a certain application increases its resource usage. If this increase is anticipated AND it doesn't affect the performance of the system, then it is not an anomaly. But if the increase wasn't expected, or it has unexpected consequences, then it should be considered an anomaly.
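To make the idea concrete, here is a minimal sketch of detecting "less likely" values in a metric series. This is purely illustrative – a simple rolling z-score detector, not StackState's actual algorithm:

```python
# Illustrative only: flag points that deviate strongly from recent history.
# This is NOT StackState's algorithm, just the basic idea of spotting
# statistically unlikely values in monitoring data such as CPU usage.
from statistics import mean, stdev

def zscore_anomalies(series, window=10, threshold=3.0):
    """Return indices of points more than `threshold` standard deviations
    away from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

cpu_usage = [42, 41, 43, 40, 42, 44, 41, 43, 42, 41, 95, 42, 43]
print(zscore_anomalies(cpu_usage))  # the spike at index 10 is flagged
```

Note that this detector illustrates exactly the limitation described above: it flags the spike at index 10 whether or not that spike was an anticipated, harmless consequence of a deployment. Deciding whether a flagged point actually matters requires the second and third types of data.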
What is the predicted impact?
The second type of data predicts the impact of an anomaly. StackState's 4T Data Model allows StackState's Artificial Intelligence (AI) to reason about the significance of detected anomalies using a rich context of past and present events. For example, StackState knows when the topology of an environment changes, so it can reason about whether a particular type of anomaly might trigger a topology change and calculate the impact of such a change. StackState also detects version changes of an application, so it can tell whether a certain anomaly might trigger a version rollback. Using a multitude of these signals, StackState AI can learn which anomalies to surface and which to ignore. This technique is called weakly supervised learning, and it requires access to a rich set of signals – something only possible with a rich data model like StackState's.
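The core of weakly supervised learning is combining several noisy signals into a training label, rather than relying on hand-labeled data. The sketch below shows that pattern; all signal names and fields are invented for illustration and are not StackState's actual API:

```python
# Hypothetical sketch of weak supervision: several noisy "labeling functions"
# each vote on whether an anomaly was significant, and the majority vote
# becomes a training label. Field names are invented for illustration.

def topology_changed(anomaly):
    # Weak signal: did the environment's topology change afterwards?
    return anomaly.get("topology_delta", 0) > 0

def version_rolled_back(anomaly):
    # Weak signal: was the application version rolled back afterwards?
    return anomaly.get("rollback", False)

def resource_spike(anomaly):
    # Weak signal: did resource usage exceed a coarse threshold?
    return anomaly.get("cpu_peak", 0.0) > 0.9

LABELING_FUNCTIONS = [topology_changed, version_rolled_back, resource_spike]

def weak_label(anomaly):
    """Majority vote over the noisy signals: 1 = worth surfacing, 0 = ignore."""
    votes = sum(lf(anomaly) for lf in LABELING_FUNCTIONS)
    return 1 if votes >= 2 else 0

print(weak_label({"topology_delta": 3, "rollback": True, "cpu_peak": 0.5}))   # 1
print(weak_label({"topology_delta": 0, "rollback": False, "cpu_peak": 0.95}))  # 0
```

Because each individual signal is noisy, no single one decides the label; the value comes from having many of them, which is why a rich data model matters.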
The third type of data shows whether the algorithm is really detecting anomalies that matter to the user. Think of it as a 'clickbait' style of anomaly detection: based on which anomalies users click on most, the platform learns which anomalies are most important to them. If we also use click-through rate as a measure of the value the user assigns to an anomaly, then an algorithm that shows the user high-impact anomalies will appear very useful – even if those anomalies turn out to be false positives in the end. The richness of StackState's 4T Data Model allows us to apply user-based learning that combines user-, KPI- and IT-based signals.
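The click-through idea can be sketched as a simple ranking step: anomalies the user engages with most float to the top. All field names below are invented for illustration:

```python
# Illustrative sketch: rank anomalies by observed click-through rate (CTR).
# Field names ("times_shown", "times_clicked") are hypothetical.

def click_through_rate(anomaly):
    shown = anomaly["times_shown"]
    return anomaly["times_clicked"] / shown if shown else 0.0

def rank_by_ctr(anomalies):
    """Order anomalies so the most-clicked ones are surfaced first."""
    return sorted(anomalies, key=click_through_rate, reverse=True)

observed = [
    {"name": "disk_usage_spike", "times_shown": 50, "times_clicked": 5},
    {"name": "latency_jump", "times_shown": 40, "times_clicked": 20},
    {"name": "cpu_plateau", "times_shown": 100, "times_clicked": 2},
]
for a in rank_by_ctr(observed):
    print(a["name"], round(click_through_rate(a), 2))
```

Optimizing for clicks alone rewards exactly the 'clickbait' failure mode described above – dramatic-looking false positives get clicked too – which is why this signal is combined with KPI- and IT-based signals rather than used on its own.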
Anomaly Detection and StackState
So the first of the three types of data necessary for anomaly detection is knowing what the anomalies in your own data are – this is often just your general monitoring data. The second is the predicted impact of each anomaly, which StackState's 4T Data Model allows its AI to calculate. With these two types of data you're able to detect anomalies and act upon them, because you know their impact. While you act on the anomalies, the third and final type of data is collected: user-based learning, which makes sure the platform learns from your actions and presents the anomalies that matter most to you.
Do you want to learn more about how StackState can help with anomaly detection? Book a guided tour with one of our experts for more information about StackState, or play around for free with StackState GO and connect your own AWS and/or Kubernetes environment.