Will DNNs Ever Really Be Explainable?

Several techniques are currently in use that can explain the output of DNN models or test them for potential bias and harm.
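
As one illustration of such a test, a demographic parity check compares a model's positive-prediction rate across groups defined by a sensitive attribute. The sketch below is hypothetical: `model`, `X`, and `sensitive` are assumed names for a fitted classifier, a feature matrix, and a per-row group label array, not identifiers from any particular library.

```python
# Minimal sketch of one common bias check: demographic parity.
# Compares a model's positive-prediction rate across groups.
# `model`, `X`, and `sensitive` are assumed to exist already;
# nothing here is DNN-specific.
import numpy as np

def demographic_parity_gap(model, X, sensitive):
    """Largest difference in positive-prediction rate between groups."""
    preds = model.predict(X)
    rates = {g: preds[sensitive == g].mean() for g in np.unique(sensitive)}
    return max(rates.values()) - min(rates.values()), rates

# A gap near 0 means the model flags all groups at similar rates;
# a large gap is a signal to investigate, not proof of harm.
```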

We might wish that every application of DNNs were a decision support system, in which the system makes a recommendation but a human must decide whether to take the action.

Military personnel taking action based on risks identified by DNNs may well be held accountable for those actions; the same applies to medical personnel acting on DNN-guided decision support systems, which, like all models, will produce some false positives and false negatives.

So even though DNN models might predict outcomes such as creditworthiness or the best risk-adjusted interest rates more accurately, that economic efficiency was deemed potentially unjust by regulators, and both modeling techniques like DNNs and certain specific types of data were proscribed.

These include ‘white box’ ML procedures that are inherently explainable, such as decision trees, trained on the inputs and outputs of the DNN model so that they mimic its behavior, as sketched below.
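
Here is a minimal sketch of that surrogate-model idea using scikit-learn, with an MLPClassifier standing in for the DNN; the choice of models, the synthetic dataset, and the fidelity measure are illustrative assumptions, not a prescribed recipe.

```python
# Global surrogate model: fit an inherently explainable decision
# tree to mimic a black-box classifier. MLPClassifier stands in
# for the DNN; any model with a predict() method would work.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train the opaque model on the true labels.
black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                          random_state=0).fit(X_train, y_train)

# 2. Train the surrogate on the *black box's predictions*, not the
#    true labels: the tree learns to imitate the DNN's behavior.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Fidelity: how often the surrogate agrees with the black box on
#    unseen data. High fidelity means the tree's human-readable
#    rules are a fair account of what the DNN is doing.
fidelity = accuracy_score(black_box.predict(X_test),
                          surrogate.predict(X_test))
print(f"surrogate fidelity vs. black box: {fidelity:.3f}")
print(export_text(surrogate))  # the explainable rules themselves
```

The trade-off is visible in the `max_depth` parameter: a shallower tree is easier to read but agrees with the DNN less often, so the explanation is only as trustworthy as its measured fidelity.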