Deep learning models have been criticized for their lack of easy interpretation, which undermines confidence in their use for important applications. Nevertheless, they are widely used in many applications consequential to humans{\textquoteright} lives, largely because of their superior performance. Therefore, there is a great need for computational methods that can explain, audit, and debug such models. Here, we use flip points to accomplish these goals for deep learning models with continuous output scores (e.g., computed by softmax), used in social applications. A flip point is any point that lies on the boundary between two output classes: e.g., for a model with a binary yes/no output, a flip point is any input that generates equal scores for \"yes\" and \"no\". The flip point closest to a given input is of particular importance because it reveals the smallest changes in the input that would change a model{\textquoteright}s classification, and we show that it is the solution to a well-posed optimization problem. Flip points also enable us to systematically study the decision boundaries of a deep learning classifier. The resulting insight into the decision boundaries of a deep model can clearly explain the model{\textquoteright}s output at the individual level, via an explanation report that is understandable by non-experts. We also develop a procedure to understand and audit model behavior towards groups of people. Flip points can also be used to alter the decision boundaries in order to improve undesirable behaviors. We demonstrate our methods by investigating several models trained on standard datasets used in social applications of machine learning. We also identify the features that are most responsible for particular classifications and misclassifications.

}, url = {https://arxiv.org/abs/2001.00682}, author = {Roozbeh Yousefzadeh and Dianne P. O{\textquoteright}Leary} } @article {2370, title = {Interpreting Neural Networks Using Flip Points}, year = {2019}, month = {03/20/2019}, abstract = {Neural networks have been criticized for their lack of easy interpretation, which undermines confidence in their use for important applications. Here, we introduce a novel technique for interpreting a trained neural network by investigating its flip points. A flip point is any point that lies on the boundary between two output classes: e.g., for a neural network with a binary yes/no output, a flip point is any input that generates equal scores for \"yes\" and \"no\". The flip point closest to a given input is of particular importance, and this point is the solution to a well-posed optimization problem. This paper gives an overview of the uses of flip points and how they are computed. Through results on standard datasets, we demonstrate how flip points can be used to provide detailed interpretation of the output produced by a neural network. Moreover, for a given input, flip points enable us to measure confidence in the correctness of outputs much more effectively than the softmax score. They also identify influential features of the inputs, identify bias, and find changes in the input that change the output of the model. We show that the distance between an input and its closest flip point identifies the most influential points in the training data. Using principal component analysis (PCA) and rank-revealing QR factorization (RR-QR), the set of directions from each training input to its closest flip point provides explanations of how a trained neural network processes an entire dataset: what features are most important for classification into a given class, which features are most responsible for particular misclassifications, how an adversary might fool the network, etc.
Although we investigate flip points for neural networks, their usefulness is actually model-agnostic.

}, url = {https://arxiv.org/abs/1903.08789}, author = {Roozbeh Yousefzadeh and Dianne P. O{\textquoteright}Leary} } @article {2454, title = {Investigating Decision Boundaries of Trained Neural Networks}, year = {2019}, month = {8/7/2019}, abstract = {Deep learning models have been the subject of study from various perspectives, for example, their training process, interpretation, generalization error, robustness to adversarial attacks, etc. A trained model is defined by its decision boundaries, and therefore, many of the studies about deep learning models speculate about the decision boundaries and sometimes make simplifying assumptions about them. So far, finding exact points on the decision boundaries of trained deep models has been considered an intractable problem. Here, we compute exact points on the decision boundaries of these models and provide mathematical tools to investigate the surfaces that define them. Through numerical results, we confirm that some of the speculations about the decision boundaries are accurate, some of the computational methods can be improved, and some of the simplifying assumptions may be unreliable for models with nonlinear activation functions. We advocate for verification of simplifying assumptions and approximation methods wherever they are used. Finally, we demonstrate that the computational practices used for finding adversarial examples can be improved, and that computing the closest point on the decision boundary reveals the weakest vulnerability of a model against adversarial attack.

}, url = {https://arxiv.org/abs/1908.02802}, author = {Roozbeh Yousefzadeh and Dianne P. O{\textquoteright}Leary} } @article {2489, title = {A Probabilistic Framework and a Homotopy Method for Real-time Hierarchical Freight Dispatch Decisions}, year = {2019}, month = {2019/12/8}, abstract = {We propose a real-time decision framework for multimodal freight dispatch through a system of hierarchical hubs, using a probabilistic model for transit times. Instead of assigning a fixed time to each transit, we advocate using historical records to identify characteristics of the probability density function for each transit time. We formulate a nonlinear optimization problem that defines dispatch decisions that minimize expected cost, using this probabilistic information. Finally, we propose an effective homotopy algorithm that (empirically) outperforms standard optimization algorithms on this problem by taking advantage of its structure, and we demonstrate its effectiveness on numerical examples.

}, url = {https://arxiv.org/abs/1912.03733}, author = {Roozbeh Yousefzadeh and Dianne P. O{\textquoteright}Leary} } @article {2455, title = {Refining the Structure of Neural Networks Using Matrix Conditioning}, year = {2019}, month = {8/6/2019}, abstract = {Deep learning models have proven to be exceptionally useful in performing many machine learning tasks. However, for each new dataset, choosing an effective size and structure of the model can be a time-consuming process of trial and error. While a small network with few neurons might not be able to capture the intricacies of a given task, having too many neurons can lead to overfitting and poor generalization. Here, we propose a practical method that employs matrix conditioning to automatically design the structure of the layers of a feed-forward network, by first adjusting the proportion of neurons among the layers and then scaling the size of the network up or down. Results on sample image and non-image datasets demonstrate that our method produces small networks with high accuracy. Finally, guided by matrix conditioning, we provide a method to effectively squeeze models that are already trained. Our techniques reduce the human cost of designing deep learning models and can also reduce training time and the expense of using neural networks for applications.

}, url = {https://arxiv.org/abs/1908.02400}, author = {Roozbeh Yousefzadeh and Dianne P. O{\textquoteright}Leary} }