
Cybersecurity Alert Visibility: NOT Black Box Detection

Chris Ford, RVP of Product and Engineering for Threat Stack / Application Infrastructure Protection (part of F5), has heard from many customers who are frustrated with vendors that take a "black box," "it's magic" approach to finding vulnerabilities and reporting threats through alerts. That is, they can't (or won't) explain the logic, context, or reasons a behavior may qualify as a cybersecurity threat or alert. In this webinar snippet, Ford discusses how ThreatML with Supervised Learning is deliberately designed to explain each situation to DevSecOps teams and other cloud security managers who want and need an in-depth understanding of their high-efficacy alerts.

This webinar snippet comes from a larger DataBreachToday.com webinar on “Machine Learning Done Right.”

Full High-Efficacy Alert Visibility: More Than "Black Box" Detection ("Machine Learning Done Right" Video Transcript)

Chris Ford, Threat Stack / F5: The benefit of [Threat Stack’s Application Infrastructure Protection supervised machine learning] approach is we deliberately chose machine learning models that could be described to users of our platform. This isn’t what we would call black box detection.

With some other tools, machine learning makes a finding. And you look at it and go: “Okay. I guess I have to trust these models.”

We’re able to show our work here. We’re able to show exactly what behaviors and events leading up to a behavior of interest caused us to believe that it was not predictable. And we can show that to our users. And in that way, they’re able to get confidence in our machine learning to detect things of real value.
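To make the "show our work" idea concrete, here is a minimal sketch of the general pattern (in Python, with hypothetical feature names; this is an illustration, not Threat Stack's actual implementation): a supervised model scores whether a behavior was predictable, and an alert carries the preceding events that drove that judgment.

```python
# Minimal sketch (not Threat Stack's implementation): a supervised classifier
# scores whether a behavior is "predictable," and the alert it raises carries
# the events leading up to it so an analyst can see why it was flagged.
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier

@dataclass
class Event:
    # Hypothetical features describing a host/process behavior
    process_rarity: float   # how unusual this executable is for the host
    new_connection: int     # 1 if a new outbound connection was observed
    off_hours: int          # 1 if outside the host's normal activity window

def features(e: Event) -> list[float]:
    return [e.process_rarity, e.new_connection, e.off_hours]

# Historical behaviors labeled as expected (1) or not expected (0)
history = [
    (Event(0.1, 0, 0), 1), (Event(0.2, 0, 1), 1),
    (Event(0.9, 1, 1), 0), (Event(0.8, 1, 0), 0),
]
X = [features(e) for e, _ in history]
y = [label for _, label in history]
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def assess(event: Event, preceding: list[str]) -> dict:
    """Score a behavior; if it looks unpredictable, show the work:
    the probability and the events that led up to it."""
    p_expected = model.predict_proba([features(event)])[0][1]
    alert = p_expected < 0.5
    return {
        "alert": alert,
        "p_expected": round(p_expected, 2),
        "context": preceding if alert else [],
    }

print(assess(Event(0.95, 1, 1),
             ["curl fetched unknown binary", "binary chmod +x", "binary executed"]))
```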

The other thing about our machine learning models is that they are informed by the rules that are in place. And so our customers, at the end of the day, do have the ability to steer those models, based on the rules that they put in place. So they can say, “Hey, Threat Stack. This behavior matters to me. I’m going to create a rule. And then I’m going to enable supervised learning on that rule.”

So they’re saying, “Hey, this matters. And I want the machines to do the work for me.” And because we’re using supervised learning here, there is, in the future, every opportunity for our customers to help us in tuning their own models, if they choose, which is to say, reduce findings, and say: “Yes, this is important to me. No, that’s not important to me,” and update the models.

That is not something you can do with other techniques.
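For readers who want a concrete picture of the rule-driven steering and customer tuning Ford describes, here is a minimal sketch (hypothetical names, written in Python with scikit-learn; it is not the ThreatML API): a customer-authored rule gates which behaviors feed a supervised model, and analyst verdicts on findings become new labels that update it.

```python
# Minimal sketch (hypothetical, not the ThreatML API): a rule decides which
# behaviors feed a supervised model, and customer verdicts on findings become
# new labels that incrementally retrain it.
from sklearn.linear_model import SGDClassifier

class RuleBackedModel:
    def __init__(self, rule_name: str):
        self.rule_name = rule_name                   # the customer-authored rule
        self.model = SGDClassifier(loss="log_loss")  # supports incremental updates
        self.classes = [0, 1]                        # 0 = unexpected, 1 = expected

    def enable_supervised_learning(self, X, y):
        """Seed the model with behaviors this rule has matched historically."""
        self.model.partial_fit(X, y, classes=self.classes)

    def finding(self, x) -> bool:
        """True if a rule-matched behavior looks unexpected (worth an alert)."""
        return self.model.predict([x])[0] == 0

    def apply_verdict(self, x, important: bool):
        """Customer feedback: 'yes, this matters' keeps it unexpected (0);
        'no, this is noise' teaches the model it is expected (1)."""
        self.model.partial_fit([x], [0 if important else 1])

# Customer workflow: create a rule, enable supervised learning on it, then tune.
m = RuleBackedModel("alert on new outbound connections from db hosts")
m.enable_supervised_learning([[0.1, 0], [0.2, 0], [0.9, 1]], [1, 1, 0])
x = [0.85, 1]
if m.finding(x):
    m.apply_verdict(x, important=False)  # analyst marks this one as routine noise
```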

If you’re interested in learning more about ThreatML with Supervised Learning, you can visit our website at threatstack.com/ThreatML to get an overview of the open visibility ThreatML provides. Or you can reach out and let us know you’d like to have a deeper conversation by sending an email to: [email protected]

For More Information on Full Alert Visibility:

View the original full webinar here. To request a demonstration or a quote for ThreatML with Supervised Learning, use the chatbot or fill in the form above, or email us at [email protected]