In the past decade, we have witnessed a significant technological shift in data and computation, which has also led to the rise of Machine Learning. As we apply machine learning to our daily use cases, we need to understand the reasons behind its predictions. For some tasks, this is mission-critical. Questions arise like: “Why did the model predict that an employee is going to leave the company?” or “Why is the model saying this year’s stock prices are going down?”
Why should you trust your Machine Learning model?
Machine learning models usually start with exploratory data analysis. You find the main weak point that, when improved, might strengthen your business, and then build a model to predict that factor. However, you need to know whether the model actually adds value to your business.
Suppose employee turnover has been causing losses, and you created a model to predict turnover. Does it bring value to your business? It doesn’t prove useful until you can answer “Why is the turnover happening, and how do we stop it?” For that, you want an interpretable model.
A short Intro to Interpretable Models
Interpretable models refer to methods and techniques in the application of Artificial Intelligence (AI) such that human experts can understand the results of the solution. Machine Learning is notorious for producing results that are hard to explain, and it can even produce the right answers for the wrong reasons, like the famous case where a model predicted sheep in images just by looking at the green background.
If we rank machine learning models by complexity against interpretability, logistic regression is the easiest to interpret, whereas Support Vector Machines and Deep Learning lie higher up the chart.
As model complexity increases, we know less and less about how the model works. Model interpretability is so vital that, despite the availability of advanced models, data scientists often prefer Logistic Regression in their work, trading away a little accuracy. Yet the practice of interpreting models is still not the norm in the world of data, and people jump straight to deep learning, sometimes using CNNs where Bayesian inference would have worked.
Requirements of Model Interpretability
We now discuss the requirements of model interpretability.
- Answer the “Why?”
Suppose you have created a model to predict the risk factor before a bank provides loans to its customers. The bank is sometimes legally required to give a reason why it can’t provide a loan to a customer. One should be able to answer that question with reasoning beyond “because the model thinks so.”
- Validation of model
Let us suppose a model predicts the performance of individuals and recommends employees for promotion. But if only men were promoted due to gender inequality in the past, then the data we collected over the last 10 years is skewed. In such a condition, our model is biased, and we would need it to be interpretable to catch that.
- Finding features that matter
Different businesses have different KPIs defining the factors that are vital in predicting the future. It is crucial to share what the model has learned with the product team and the marketing team so they can take appropriate action. Suppose pensioners are more likely to buy travel offers than students; if we can relay this information to the marketing team, more effort can be focused on pensioners.
- Providing interventions
If a model predicts that an employee is going to leave, but we don’t know why, it is of no use. We have to provide information about why we think the employee is likely to leave so that a timely intervention can happen.
- Checking for AI Bias
Some years ago, an automated passport checker told a man of Asian descent that his eyes weren’t open even when they were. These kinds of faults not only decrease trust in Machine Learning but can also cause severe damage to a company’s brand. Bias in AI can creep in many ways: from the data collected, the way the model is trained, and the way it is used. Here is an excellent blog explaining AI Bias. An explainable model helps us remove bias. For example, when predicting which employee to promote, we can check whether the model is looking at gender or at factors like performance and ownership. Here is an example that favored gender more than other factors.
- Reducing features
We have all heard of curses, including the Curse of Dimensionality. As the number of input features increases, the approximation function becomes more complex and it becomes harder for the model to predict well. It is very wise to reduce the dimensionality by selecting the important features. This practice should persist in every problem we solve with Machine Learning: from image classification and sentiment analysis to customer ratings and stock price prediction, the interpretable model has its place.
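As a concrete starting point, feature reduction of this kind can be done with off-the-shelf tools. Here is a minimal sketch using scikit-learn’s `SelectKBest` on a bundled dataset (the dataset and `k=10` are just illustrative assumptions):

```python
# Minimal sketch (assumed setup): keep only the k most informative
# features before training, scored by mutual information with the target.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)

# Score every feature against the target and keep the top 10.
selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
X_reduced = selector.transform(X)  # 30 features -> 10

print(X_reduced.shape)
```

The surviving column indices are available via `selector.get_support()`, which also tells you which features the model will actually see.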
In general, there are two ways to interpret a model: global interpretation and local explanation.
Global (general) interpretation means that we get an overall sense of how our model works. Here are a few methods applicable to do so.
- Linear Regression/ Logistic Regression (Using simple ML models)
We can extract the weight of each feature in the model. A feature with a large positive weight pushes the prediction up, one with a large negative weight pushes it down, and a feature with a weight close to zero doesn’t have much importance. These weights are also called feature importances. But don’t forget to normalize your data first; otherwise, the weights aren’t comparable across features. (Normalization is a separate, huge topic that I recommend reading about.)
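The weight extraction described above can be sketched in a few lines of scikit-learn. The dataset here is just an illustrative assumption; the point is the standardize-then-inspect-coefficients pattern:

```python
# Minimal sketch (assumed setup): standardize features, fit a logistic
# regression, and read the learned weights as feature importances.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
feature_names = load_breast_cancer().feature_names

# Normalize first -- otherwise weights are not comparable across features.
X_scaled = StandardScaler().fit_transform(X)
model = LogisticRegression(max_iter=5000).fit(X_scaled, y)

weights = model.coef_[0]
order = np.argsort(np.abs(weights))[::-1]  # rank by magnitude
for i in order[:5]:
    print(f"{feature_names[i]}: {weights[i]:+.3f}")
```

The sign of each printed weight tells you the direction of the feature’s effect, and its magnitude tells you how strongly it matters.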
Decision trees are very intuitive, as we get a tree interpreting the decision. We also get feature importance based on Gain (by what factor the prediction loss decreases because of the node), Frequency (how many times the model uses the feature), and Coverage (how much data the node includes in its feature splits). For interpreting advanced ensemble trees like XGBoost, we use these measures.

For models that don’t have an inbuilt procedure to get feature importance, we can always remove a feature from the model and observe the drop in accuracy. Many methods build on this principle, and it is purely up to you to make a choice. Some of them are:

i. Recursive Feature Elimination (for models with feature importances):
We remove the least important feature and note down the accuracy, or do the opposite and remove the most important one. These methods give us insight into which information is most vital.

ii. Additive Feature Extraction (for models with feature importances):
We start with the most essential feature, then keep adding features in order of their importance and look at the results.
An example of using Additive Feature Extraction should look like this:
And from here, we can choose a set of features to use for our model. As in this example, choosing more than the 5 most essential features gives us no benefit.

iii. Taking all subsets (a brute-force option for all models):
Taking all subsets is a brute-force technique: provide every combination of every size of the feature set to the model and observe which subset performs best.

Note: Playing around with the regularization types (L1 and L2) in logistic and linear regression is a good idea, as L1 tends to squeeze the unimportant features to zero, and L2 tends to give the best model in evaluation.
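Method (i) above is available directly in scikit-learn. A minimal sketch, again on an illustrative dataset, assuming a logistic regression as the base model:

```python
# Minimal sketch (assumed setup): Recursive Feature Elimination --
# repeatedly drop the least important feature until 5 remain.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target

rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=5)
rfe.fit(X, y)

# support_ marks the surviving features; ranking_ > 1 were dropped earlier.
kept = [name for name, keep in zip(data.feature_names, rfe.support_) if keep]
print(kept)
```

Sweeping `n_features_to_select` and plotting accuracy against it gives exactly the kind of curve described for Additive Feature Extraction.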
With local explanations, we try to explain the model’s prediction for a single row of data. These are fairly new and exciting techniques, even promising to work on neural networks.
LIME (Local Interpretable Model-Agnostic Explanations)
LIME builds a separate explanation for each prediction. It fits a linear model around the point of interest, assuming that a small change in the input causes only a small change in the output. After perturbing each variable, we get a sense of which variables the output is sensitive to. It explains text and image predictions the same way.
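LIME ships as the `lime` package, but the core idea can be sketched by hand with just NumPy and scikit-learn: perturb the row, weight the perturbations by proximity, and fit a weighted linear surrogate. Everything below (the dataset, the random forest as the black box, the Gaussian proximity kernel) is an illustrative assumption, not the library’s actual implementation:

```python
# Minimal sketch of the LIME idea (not the lime library itself).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
row = X[0]  # the single prediction we want to explain

# 1. Sample perturbations around the row of interest.
noise = rng.normal(scale=X.std(axis=0), size=(500, X.shape[1]))
samples = row + noise

# 2. Weight each sample by its proximity to the original row.
distances = np.linalg.norm(noise / X.std(axis=0), axis=1)
weights = np.exp(-(distances ** 2) / 2)

# 3. Fit a simple surrogate on the black box's outputs for those samples.
targets = black_box.predict_proba(samples)[:, 1]
surrogate = Ridge(alpha=1.0).fit(samples, targets, sample_weight=weights)

# The surrogate's coefficients explain this one prediction locally.
top = np.argsort(np.abs(surrogate.coef_))[::-1][:5]
print(top, surrogate.coef_[top])
```

The surrogate is only trusted near `row`; a different row gets its own perturbations and its own linear explanation.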
You can read more about LIME explanation for prediction of a single record here.
SHAP (SHapley Additive exPlanations)
SHAP connects game theory with local explanations, uniting several previous methods, and representing the only possible consistent and locally accurate additive feature attribution method based on expectations.
You can read more about SHAP value explanations for all records, showing which features play what kind of role, here.
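To make the game-theory connection concrete, here is a brute-force computation of exact Shapley values for a tiny hypothetical model with three features (the `shap` library does this far more efficiently and for real models; this toy model and background point are assumptions for illustration):

```python
# Minimal sketch of exact Shapley values: average each feature's marginal
# contribution over every possible subset of the other features.
from itertools import combinations
from math import factorial

import numpy as np

def model(x):
    # Hypothetical black box: a simple linear rule.
    return 3 * x[0] + 2 * x[1] - x[2]

background = np.array([0.0, 0.0, 0.0])  # reference input
x = np.array([1.0, 1.0, 1.0])           # row to explain
n = len(x)

def value(subset):
    # Features in `subset` take the row's value, the rest the background's.
    z = background.copy()
    for i in subset:
        z[i] = x[i]
    return model(z)

phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for size in range(n):
        for subset in combinations(others, size):
            # Shapley weight for a coalition of this size.
            w = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi[i] += w * (value(subset + (i,)) - value(subset))

print(phi)  # for this linear model the values recover the coefficients: [3, 2, -1]
```

The additivity property is visible here: the `phi` values sum exactly to the gap between the model’s output for `x` and for the background, which is what makes SHAP attributions consistent.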
Code for this local interpretation can be found here for the titanic data set.
This isn’t a formula for solving such a complex problem, just a starting point from which we can explore more creative and domain-based techniques to interpret our Machine Learning models. If we start creating interpretable models early on, it will be easier to integrate them at scale. After all, as a Data Scientist, you have to know what is going on inside a model.
Happy “Machine Learning”
About the author
Bipin KC is a Data Scientist/ Machine Learning Engineer at Leapfrog. He has a keen interest in Computer Vision, Interpreting models, and delivering elegant analytic solutions.