Machine Learning Explained
Machine learning models are often dismissed on the grounds that they lack interpretability. There is a popular story about modern algorithms that goes as follows: simple linear statistical models such as logistic regression yield interpretable models. On the other hand, advanced models such as random forests or deep neural networks are black boxes, meaning it is nearly impossible to understand how the model arrives at a prediction.
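To illustrate the interpretability of a linear model, here is a minimal sketch that fits a logistic regression by gradient descent on toy data and reads off its weights. The feature names ("entropy", "size", "imports") and the data are illustrative assumptions, not taken from any real malware dataset:

```python
import numpy as np

# Toy data: 200 samples, 3 features; the label is driven mainly by feature 0.
# Feature names below are hypothetical, chosen for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

# Fit logistic regression by plain gradient descent on the log-loss.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)  # gradient step

# The fitted weights are directly readable: the sign and magnitude of each
# coefficient show the direction and strength of that feature's influence.
for name, coef in zip(["entropy", "size", "imports"], w):
    print(f"{name}: {coef:+.2f}")
```

Because the model is a single weighted sum passed through a sigmoid, each coefficient maps to one feature, which is exactly the property the "interpretable" side of the story relies on; a random forest or deep network has no such one-to-one reading.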
Why rely on complex models?
The infosec industry is accustomed to rules, blacklisting, fingerprints and indicators of compromise — so explaining why an alert triggered is the natural next step. In contrast, machine learning models are able to identify complex non-linear patterns in large data sets, extrapolate answers and make predictions based on non-trivial feature combinations, making it nearly impossible to grasp their inner workings.