TY - JOUR
T1 - Exploring the Limitations of Layer Synchronization in Spiking Neural Networks
AU - Koopman, Roel
AU - Yousefzadeh, Amirreza
AU - Shahsavari, Mahyar
AU - Tang, Guangzhi
AU - Sifalakis, Manolis
N1 - Funding Information:
This research was partially sponsored and funded by Imec Netherlands and the EU's Horizon Europe Research and Innovation programme (under Grant Agreement 101070679). Guangzhi Tang is partially funded by the Dutch Research Council's AiNed XS Europe programme (under Grant Agreement NGF.1609.243.044, https://doi.org/10.61686/MYMVX53467). Roel Koopman is funded by the Dutch Research Council (under Grant Agreement KICH1.ST04.22.021).
Publisher Copyright:
© 2025, Transactions on Machine Learning Research. All rights reserved.
PY - 2025/1/1
Y1 - 2025/1/1
AB - Neural-network processing in machine learning applications relies on layer synchronization. This is practiced even in artificial Spiking Neural Networks (SNNs), which are touted as consistent with neurobiology, despite the fact that processing in the brain is asynchronous. A truly asynchronous system, however, would allow all neurons to evaluate their threshold concurrently and emit spikes upon receiving any presynaptic current. Omitting layer synchronization is potentially beneficial for latency and energy efficiency, but asynchronous execution of models previously trained with layer synchronization may entail a mismatch in network dynamics and performance. We present and quantify this problem, and show that models trained with layer synchronization either perform poorly in its absence, or fail to benefit from any energy and latency reductions when such a mechanism is in place. We then explore a potential solution direction, based on a generalization of backpropagation-based training that integrates knowledge about an asynchronous execution scheduling strategy, for learning models suitable for asynchronous processing. We experiment with two asynchronous neuron execution scheduling strategies on datasets that encode spatial and temporal information, and show the potential of asynchronous processing to use fewer spikes (up to 50% fewer), complete inference faster (up to 2x), and achieve competitive or even better accuracy (up to ~10% higher). Our exploration affirms that asynchronous event-based AI processing can indeed be more efficient, but we need to rethink how we train our SNN models to benefit from it. (Source code available at: https://github.com/RoelMK/asynctorch).
U2 - 10.48550/arXiv.2408.05098
DO - 10.48550/arXiv.2408.05098
M3 - Article
SN - 2835-8856
VL - 2025-September
JO - Transactions on Machine Learning Research
JF - Transactions on Machine Learning Research
ER -