Recent advancements and applications in artificial intelligence (AI) and machine learning (ML) have highlighted the need for explainable, interpretable, and actionable AI and ML. Most work focuses on explaining deep artificial neural networks, e.g., for visual tasks such as image captioning. In recent work, we established a set of indices and processes for explainable AI (XAI) relative to information fusion. While informative, the result is information overload, and domain expertise is required to understand the results. Herein, we explore the extraction of a reduced set of higher-level linguistic summaries to inform and improve communication with non-fusion experts. Our contribution is a proposed structure for a fusion summary and a method to extract this information from a given set of indices. To demonstrate the usefulness of the proposed methodology, we provide a case study that uses the fuzzy integral to combine a heterogeneous set of deep learners in remote sensing for object detection and land cover classification. This case study shows the potential of our approach to inform users about important trends and anomalies in the models, data, and fusion results. This information is critical with respect to transparency, trustworthiness, and identifying limitations of fusion techniques, which may motivate future research and innovation.

Keywords: deep learning, machine learning, information fusion, information aggregation, fuzzy integral, explainable artificial intelligence, XAI, protoform, linguistic summary
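The abstract's case study aggregates heterogeneous deep learners with the fuzzy integral. As a minimal sketch of that aggregation step (not the paper's implementation), the following computes a discrete Choquet fuzzy integral of per-source confidence scores with respect to a fuzzy measure; the measure `g`, the weights, and the example scores are illustrative assumptions.

```python
from itertools import combinations

def choquet_integral(h, g):
    """Discrete Choquet integral of the inputs h w.r.t. fuzzy measure g.

    h: list of source outputs (e.g., per-model confidences) in [0, 1].
    g: dict mapping a frozenset of source indices to its measure value,
       with g(empty set) = 0 and g(all sources) = 1.
    """
    # Sort source indices by descending input value.
    order = sorted(range(len(h)), key=lambda i: h[i], reverse=True)
    total, prev = 0.0, 0.0
    subset = frozenset()
    for i in order:
        # Grow the coalition one source at a time; weight each input
        # by the marginal gain in the measure.
        subset = subset | {i}
        cur = g[subset]
        total += h[i] * (cur - prev)
        prev = cur
    return total

# Sanity check: with an additive, equally weighted measure over three
# sources, the Choquet integral reduces to the arithmetic mean.
w = [1 / 3, 1 / 3, 1 / 3]
g = {frozenset(s): sum(w[i] for i in s)
     for r in range(4) for s in combinations(range(3), r)}
print(choquet_integral([0.9, 0.5, 0.1], g))  # 0.5, the mean
```

Non-additive measures make the integral behave differently: for instance, a measure that is 0 on every proper subset and 1 on the full set turns the Choquet integral into the minimum of the inputs, which is one way such fusion can encode conjunctive agreement across models.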
Published: 2020
International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, Lisbon, Portugal, 15–19 June 2020