A new technique developed at the Massachusetts Institute of Technology may make it feasible to understand whether a machine-learning model's decision was made for the right reasons, according to a study to be presented at the Conference on Human Factors in Computing Systems.
Generally, models are trained on millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so based on an unrelated blip in the clinical photo.
‘A new technique compares the reasoning of a machine-learning model to that of a human, allowing the user to see patterns in the model's behavior.’
The study team has created a method that enables a user to aggregate, sort, and rank these individual explanations of a model's decisions to rapidly analyze the model's behavior.
“In developing Shared Interest, our goal is to be able to scale up this analysis process so that you could understand on a more global level what your model’s behavior is,” says lead author Angie Boggust, a graduate student in the Visualization Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
“On one end of the spectrum, your model made the decision for the exact same reason a human did, and on the other end of the spectrum, your model and the human are making this decision for totally different reasons. By quantifying that for all the images in your dataset, you can use that quantification to sort through them,” says Boggust.
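To make this idea concrete, the sketch below shows one plausible way to quantify and sort such agreement: score how much a model's saliency region overlaps a human-annotated region for each image, then order the dataset by that score. The IoU metric, the threshold, and the function names are illustrative assumptions, not the authors' actual Shared Interest implementation.

```python
# Hypothetical sketch: compare model "evidence" (saliency) to human-annotated
# regions, score each image, and sort the dataset by agreement. Not the
# authors' implementation; metric and threshold are assumptions.
import numpy as np

def agreement_score(saliency, human_mask, threshold=0.5):
    """Intersection-over-union between the thresholded saliency map and the
    human annotation: 1.0 = same evidence, 0.0 = totally different evidence."""
    model_mask = saliency >= threshold
    intersection = np.logical_and(model_mask, human_mask).sum()
    union = np.logical_or(model_mask, human_mask).sum()
    return intersection / union if union > 0 else 0.0

def rank_by_agreement(saliency_maps, human_masks):
    """Return image indices sorted from least to most human-aligned reasoning,
    along with the per-image agreement scores."""
    scores = [agreement_score(s, h) for s, h in zip(saliency_maps, human_masks)]
    order = sorted(range(len(scores)), key=scores.__getitem__)
    return order, scores

# Toy usage with two fake 4x4 "images".
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    saliency_maps = [rng.random((4, 4)) for _ in range(2)]
    human_masks = [rng.random((4, 4)) > 0.5 for _ in range(2)]
    order, scores = rank_by_agreement(saliency_maps, human_masks)
    print("scores:", [round(s, 2) for s in scores])
    print("review order (lowest agreement first):", order)
```

Sorting by such a score lets a user jump straight to the cases where the model and the human relied on different evidence, which is the kind of global, dataset-wide analysis the quote above describes.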
Source: Medindia