Quantitative Analysis of Machine Learning Model Performance and the Need for Explainability
Virtual: https://events.vtools.ieee.org/m/442073

Free Registration (with a Zoom account; you can get one for free if you don't already have one): https://sjsu.zoom.us/meeting/register/tZcsc-CoqjwpG9aPDHfg6Axqvn90i4uQRmqr

Synopsis: For a long time, the AI/ML community has relied on traditional evaluation metrics such as the confusion matrix, accuracy, precision, and recall to assess the performance of machine learning models. However, the rapidly evolving field has been raising ethical concerns that call for a more comprehensive evaluation scheme. In easy-to-understand language, this talk will delve into the quantitative analysis of model performance, emphasizing the critical importance of explainability. As ML models become increasingly complex and pervasive, understanding their decision-making processes is paramount. We'll explore various performance metrics, their limitations, and the growing need for transparency. Topics covered include Cohen's kappa statistic, the Matthews correlation coefficient (MCC), the confusion matrix, precision, recall, G-measure, the ROC curve, Youden's J statistic, Type II adversarial attacks, R-squared, LIME, SHAP, and more.

Speaker(s): Dr. Vishnu S. Pendyala
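As a taste of the kind of quantitative analysis the talk covers, here is a minimal sketch, using made-up labels for a hypothetical binary classifier, of how several of the listed metrics derive from the four confusion-matrix cells:

```python
from math import sqrt

# Hypothetical ground-truth and predicted labels (illustration only).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Confusion-matrix cells for the positive class.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

precision = tp / (tp + fp)          # of predicted positives, how many were right
recall = tp / (tp + fn)             # of actual positives, how many were found
specificity = tn / (tn + fp)

# Matthews correlation coefficient: stays informative on imbalanced classes.
mcc = (tp * tn - fp * fn) / sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)

# Cohen's kappa: observed agreement corrected for chance agreement.
n = len(y_true)
po = (tp + tn) / n
pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2
kappa = (po - pe) / (1 - pe)

# Youden's J: sensitivity + specificity - 1, the height above the ROC diagonal.
youden_j = recall + specificity - 1
```

Accuracy here is 0.75, yet kappa and MCC both come out at 0.5, illustrating the talk's point that chance-corrected metrics tell a more honest story than raw accuracy.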