Abstract
The severity of COVID-19 has strained our healthcare resources, creating a growing need to advance our healthcare frameworks. Recently, with the rise of Explainable Artificial Intelligence (XAI) models in healthcare, which provide interpretability and address the black-box nature of AI, healthcare frameworks and designs in smart cities have improved considerably. To address this crisis in light of future COVID-19-like pandemics in smart cities, we propose a case study, X-Pand, which applies an XAI framework over the Random Forest (RF) algorithm for feature selection in COVID-19 detection. Unlike most recent schemes, which are non-interpretable, X-Pand emphasizes transparency and interpretability, which are crucial for gaining actionable insights and trust in automated health diagnostics. We present a feature ranking mechanism that integrates Shapley Additive Explanations (SHAP) for enhanced decision tree interpretability, optimizing feature selection and improving diagnostic accuracy. We evaluate X-Pand on a comprehensive COVID-19 clinical spectrum dataset, demonstrating superior accuracy, sensitivity, and specificity compared with traditional models such as XGBoost, Support Vector Machine (SVM), and Multi-Layer Perceptron (MLP). Our findings reveal that X-Pand not only achieves higher prediction accuracy but also offers clearer justifications for its decisions, enabling more informed and confident clinical decisions. © The Institution of Engineering & Technology 2024.