Binary Spiking Neural Networks as Causal Models
The paper presents a causal analysis of Binary Spiking Neural Networks (BSNNs) by representing their spiking activity as a binary causal model. The authors use logic-based methods, including SAT and SMT solvers, to compute abductive explanations for the network's outputs, demonstrating their approach on the MNIST dataset. Their method is shown to produce explanations free of irrelevant features, unlike SHAP, a widely used attribution method in explainable AI.
- The authors formally define a Binary Spiking Neural Network (BSNN) and model its behavior using a binary causal framework.
- They apply SAT and SMT solvers to generate abductive explanations for BSNN classifications based on pixel-level features.
- Explanations from their method are compared to SHAP, showing that their approach excludes irrelevant features, which SHAP may include.
- The study uses the MNIST dataset to train the BSNN and validate the explanation methods.
- The work contributes to explainable AI by integrating formal logic and causal modeling into neural network interpretability.
- The paper was presented at the Logics for New-Generation AI 2025 workshop in Luxembourg.
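To make the notion of an abductive explanation concrete, here is a minimal sketch in Python. The toy classifier, the feature values, and the brute-force entailment check are all illustrative assumptions, not the paper's setup: the authors work with a trained BSNN over MNIST pixels and delegate the entailment test to SAT/SMT solvers, whereas this sketch enumerates all completions of the unfixed features directly.

```python
from itertools import combinations, product

def classify(x):
    # Toy binary "neuron": fires (output 1) when at least two of the
    # three binary inputs are on. A hypothetical stand-in for a trained
    # BSNN, not the network from the paper.
    return int(x[0] + x[1] + x[2] >= 2)

def entails(instance, subset, label):
    """Check whether fixing the features in `subset` to their values in
    `instance` forces the classifier to output `label` for every
    assignment of the remaining (free) features."""
    free = [i for i in range(len(instance)) if i not in subset]
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if classify(x) != label:
            return False
    return True

def abductive_explanation(instance):
    """Return a cardinality-minimal subset of features sufficient for
    the prediction, found by brute force. In the paper this entailment
    query is answered by SAT/SMT solvers instead of enumeration."""
    label = classify(instance)
    n = len(instance)
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if entails(instance, set(subset), label):
                return set(subset)

print(abductive_explanation((1, 1, 0)))  # features 0 and 1 suffice for output 1
```

Because the explanation must hold for every completion of the free features, it cannot contain irrelevant features: dropping any feature from a minimal subset breaks entailment, which mirrors the guarantee the bullets above contrast with SHAP.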
Opening excerpt (first ~120 words)
Computer Science > Artificial Intelligence — arXiv:2604.27007 (cs)
[Submitted on 29 Apr 2026]
Title: Binary Spiking Neural Networks as Causal Models
Authors: Aditya Kar (CNRS, IRIT), Emiliano Lorini (CNRS, IRIT), Timothée Masquelier (CNRS, CERCO UMR5549)
Abstract: We provide a causal analysis of Binary Spiking Neural Networks (BSNNs) to explain their behavior. We formally define a BSNN and represent its spiking activity as a binary causal model. Thanks to this causal representation, we are able to explain the output of the network by leveraging logic-based methods.
…
Excerpt limited to ~120 words for fair-use compliance. The full article is at arXiv.org.