International Conference on Mathematical Neuroscience (ICMNS)
CONTRIBUTED TALKS
Noise-induced pattern formation in networks of spatially-dependent neural networks | Daniele Avitabile
This talk presents a study of pattern formation in a class of high-dimensional neural networks defined on random graphs and subjected to spatio-temporal stochastic forcing. The connectivity matrices of these neural networks are randomly generated and can be excitatory or inhibitory, dense or sparse, and need not be symmetric. Under generic conditions on coupling and nodal dynamics, we prove that the network admits a rigorous mean-field limit, resembling a Wilson-Cowan neural field equation. The state variables of the limiting system are the mean and variance of neuronal activity. We select networks with tractable mean-field equations and perform a bifurcation analysis using the diffusivity strength of the afferent white noise on each neuron as the control parameter. We identify conditions for Turing-like bifurcations in a system where the cortex is modeled as a ring and provide numerical evidence of noise-induced spiral waves in models with a two-dimensional cortex. We present numerical evidence that solutions of the finite-size network converge weakly to those of the mean-field model.
The talk is based on the following preprint: https://arxiv.org/abs/2408.12540
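For illustration, a minimal Python sketch of the kind of simulation the abstract describes: a stochastic Wilson-Cowan-type field on a ring driven by spatio-temporal white noise, with the noise diffusivity as the control parameter. The kernel, gain, and all parameter values are illustrative assumptions, not the authors' choices.

import numpy as np

N, L = 256, 2 * np.pi                   # grid points, ring circumference
x = np.linspace(0, L, N, endpoint=False)
dx, dt, T = L / N, 1e-3, 5.0
sigma = 0.3                             # noise diffusivity (control parameter)

# Mexican-hat coupling on the ring: local excitation, lateral inhibition
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, L - d)
W = (np.exp(-d**2 / 0.5) - 0.5 * np.exp(-d**2 / 2.0)) * dx

f = lambda u: 1.0 / (1.0 + np.exp(-u))  # sigmoidal gain

u = 0.01 * np.random.randn(N)
for _ in range(int(T / dt)):
    noise = np.sqrt(2 * sigma * dt / dx) * np.random.randn(N)
    u += dt * (-u + W @ f(u)) + noise
# Sweep sigma and inspect u for the onset of Turing-like spatial structure.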
Authors:
* Daniele Avitabile (speaker), Centre for Dynamics and Computation, Department of Mathematics, Vrije Universiteit Amsterdam.
* James MacLaurin, Department of Mathematical Sciences, New Jersey Institute of Technology.
Fine-tuning of attractors on a ring underlies the learning of robust working memory in mice | Mahrach Alexandre
A. Mahrach1, X. Zhang2,3,4, D. Li2,3,4, C.T. Li2,3,4, A. Compte1
1 IDIBAPS, Barcelona, Spain
2 Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China
3 Lingang Laboratory, Shanghai, China
4 Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai, China
Working memory (WM) is a process that temporarily holds information when it is not accessible to the senses. It relies on storing stimulus features in neuronal activity. Despite its importance in cognition, WM is quite vulnerable due to its limited capacity and stability, especially when facing interference from internal and external sources. Recent studies have explored the neural mechanisms that protect WM and proposed that pre-/post-distraction neural activity decomposes into orthogonal subspaces, thus protecting information. However, whether orthogonalization is acquired through learning is unknown, and the mechanisms supporting it are unclear. Here, we study the learning of orthogonalization using calcium imaging data from the mouse prelimbic (PrL) and anterior cingulate (ACC) cortices as mice learn an olfactory dual task. The task combines an outer Delayed Paired-Association task (DPA) with an inner Go-NoGo task. We examined how neuronal activity reflected the process of shielding the DPA sample information against Go-NoGo interference. As mice learned the task, we measured the overlap between the activity and the low-dimensional subspaces that encode the sample or Go-NoGo odors. Early in training, pre-Go-NoGo activity overlapped with both the sample and Go-NoGo subspaces. Later in training, pre-Go-NoGo activity was confined to the sample subspace, resulting in a more robust sample code. We provide mechanistic insight into how these low-dimensional representations evolve with learning in a recurrent network model of excitatory and inhibitory neurons with trained low-rank connections. The model links learning to (1) the orthogonalization of the sample and Go-NoGo subspaces and (2) modifications of a double-well attractor on a one-dimensional ring. We validated (1) by measuring the angular distance between the sample and Go-NoGo subspaces in the data and (2) by estimating an energy landscape for the recorded neural dynamics. In sum, our study underscores the crucial role attractor dynamics play in shielding WM representations from concurrent tasks.
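As a pointer to the kind of subspace analysis described above, a minimal Python sketch (synthetic data; the dimensions and the use of principal angles are our assumptions, not the study's exact pipeline):

import numpy as np
from scipy.linalg import subspace_angles

n_neurons, k = 100, 2
A = np.linalg.qr(np.random.randn(n_neurons, k))[0]  # sample-subspace basis
B = np.linalg.qr(np.random.randn(n_neurons, k))[0]  # Go-NoGo-subspace basis

# Angular distance between coding subspaces (orthogonalization metric)
print("min principal angle (deg):", np.degrees(subspace_angles(A, B).min()))

# Overlap of one population activity vector with the sample subspace
activity = np.random.randn(n_neurons)
overlap = np.linalg.norm(A.T @ activity) / np.linalg.norm(activity)
print("fraction of activity in sample subspace:", overlap)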
Three-factor cortico-striatal plasticity shifts activity of cortico-basal ganglia-thalamic subnetworks towards optimal performance in decision-making tasks | Jyotika Bahuguna
Jyotika Bahuguna 1† , Timothy Verstynen 2 and Jonathan E. Rubin 3
1 Laboratoire des Neurosciences cognitive et adaptive (LNCA), Strasbourg, France
2 Department of Psychology, Neuroscience Institute, Carnegie Mellon University, Pittsburgh, USA
3 Department of Mathematics, University of Pittsburgh, Pittsburgh, USA
OVERVIEW: Understanding how cortico-basal ganglia-thalamic (CBGT) circuits influence decision making remains a challenge, especially considering the different decision policies a biological agent can adopt in response to environmental changes and the complexity of interacting pathways in CBGT networks. We deconstruct the process of value-based learning in a CBGT network into three aspects: (a) defining what a decision policy is, (b) identifying where in the CBGT network these decision policies are effectively generated, and (c) analyzing how the CBGT pathways encode and modulate different aspects of decision policies. We use an evidence accumulation model (the drift diffusion model; DDM) to map the behavioral features (e.g., decision times, choices) of the decision-making agent into a decision policy (what). Based on our prior work [1], we identified three low-dimensional CBGT subnetworks, called control ensembles (responsiveness, pliancy and choice), that represent control over distinct dimensions of the decision policy (where). We study how CBGT networks modulate decision policies by simulating learning via dopaminergic signals acting on the cortico-striatal projections in a model CBGT network performing a simple two-choice task with one optimal (i.e., rewarded) target.
RESULTS: While our naive model CBGT networks lay in an exploration regime, we observed that value-based learning breaks the speed-accuracy tradeoff and drives the CBGT networks in the direction of maximal increase in reward rate, such that they arrive at an exploitation regime and approach the optimal performance curve (OPC). The OPC is a theoretical estimate of the normalized decision times that maximize the reward rate as a function of the error rate in the context of the DDM [2]. This approach towards the OPC was also recently observed in rats performing a perceptual learning task, where decision times are slower than predicted by the OPC during initial phases of learning but move towards it as learning progresses [3]. Our use of a model network allowed us to generate predictions about the contributions of the CBGT control ensembles to this process: our results suggest that learning induces an increase in responsiveness (shorter evidence accumulation onset delays), an increase in choice (higher rate of evidence accumulation), and a decrease in activity of the pliancy components (corresponding to a heightened decision boundary). On the shorter timescale of consecutive trials, each possible set of reward outcomes induces a specific adjustment of the control ensembles. Interestingly, experiencing at least one unrewarded outcome within the two initial trials can lead to faster convergence towards the OPC than that which results from pairs of rewarded outcomes. Overall, our results suggest that dopamine-dependent plasticity in the corticostriatal projection may be a mechanism for achieving average reward rate maximization, by promoting changes in activity that ripple down through the CBGT network to achieve coordinated tuning of its decision policy control ensembles.
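To make the OPC logic concrete, a minimal Python sketch (our toy, with illustrative parameters): a drift-diffusion model whose reward rate is evaluated across decision thresholds, tracing out the speed-accuracy tradeoff that learning is proposed to optimize.

import numpy as np

def ddm_trial(drift=0.5, threshold=1.0, dt=1e-3, sigma=1.0, t_max=10.0):
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < t_max:
        x += drift * dt + sigma * np.sqrt(dt) * np.random.randn()
        t += dt
    return t, x > 0  # decision time, correct choice (for positive drift)

for a in (0.25, 0.5, 1.0, 2.0):
    trials = [ddm_trial(threshold=a) for _ in range(500)]
    rt = np.mean([t for t, _ in trials])
    acc = np.mean([c for _, c in trials])
    rr = acc / (rt + 1.0)  # reward rate with a 1 s inter-trial interval
    print(f"threshold={a:.2f}  accuracy={acc:.2f}  mean RT={rt:.2f}  reward rate={rr:.3f}")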
[1] Vich C, Clapp M, Rubin JE & Verstynen T (2022), PLoS Computational Biology.
[2] Bogacz R, Brown E, Moehlis J, Holmes P & Cohen J (2006), Psychological Review.
[3] Masis J, Chapman T, Rhee JY, Cox DD & Saxe AM (2023), eLife.
Modeling cyclic-sequential brain activity via biologically plausible dynamics | Virginia Bolelli
This study develops a biologically plausible model for cyclic-sequential brain activity, characterized by distinct neuronal populations activating in a defined, repetitive order. We tackle this problem by leveraging tools from dynamical systems theory, such as heteroclinic cycles, and by incorporating machine learning techniques. On one hand, heteroclinic cycles typically occur in Lotka-Volterra-type equations, but these do not adequately reflect the biological mechanisms underlying neural activity. On the other hand, we show that mean-field models, such as Wilson-Cowan equations, although better suited to describe neural dynamics, cannot exhibit heteroclinic cycles, especially when equilibria are confined to coordinate axes. To bridge this gap, we exploit the Universal Approximation Theorem to approximate Lotka-Volterra dynamics with Wilson-Cowan-type equations. Our neural network implementation successfully reproduces oscillatory dynamics while overcoming a key limitation of heteroclinic cycles, where residence times in each state grow indefinitely. Indeed, in our case, residence times stay constant, aligning more closely with biological mechanisms. Finally, we apply this model to investigate the cognitive processes underlying Focused Attention Meditation, showcasing its ability to capture sequential transitions and oscillatory dynamics within a biologically realistic framework.
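A minimal Python sketch of the dynamical phenomenon at issue (a standard May-Leonard Lotka-Volterra system, not the authors' trained network; parameters are illustrative): trajectories cycle through three saddles with residence times that lengthen over time, the limitation the Wilson-Cowan approximation is designed to remove.

import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.5, 0.8  # asymmetric competition; the cycle needs b < 1 < a and a + b > 2

def may_leonard(t, x):
    return x * (1.0 - x - a * np.roll(x, -1) - b * np.roll(x, 1))

sol = solve_ivp(may_leonard, (0, 300), [0.6, 0.3, 0.1], max_step=0.05)
# Plotting sol.y.T against sol.t shows sequential activations with
# plateaus (residence times) that grow at each pass around the cycle.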
Authors: Virginia Bolelli, Luca Greco, Hugues Mounier, Dario Prandi
Affiliation: L2S, CentraleSupélec, Université Paris-Saclay
An analysis of the temporal component of motor preparation and execution in High Frequency Local Field Potentials: A Theoretical Approach | Marc Burillo Garcia
Authors: Marc Burillo1,2, Michael Depass1,3 & Ignasi Cos1,4
1Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain
2Sorbonne Université, France
3Universitat Pompeu Fabra, Barcelona, Spain
4Institutes of Neuroscience & Mathematics, Universitat de Barcelona, Spain
Electrophysiological recordings have been the fundamental source of information about the brain's inner workings, contributing crucially to the current understanding of brain function and dynamics. Typically recorded in the context of controlled laboratory tasks, state-of-the-art multi-electrode arrays can nowadays provide simultaneously recorded high-dimensional time series from across several brain areas, providing unprecedented insight into brain dynamics. However, as their richness and complexity increase, obtaining reliable methods that yield interpretable characterizations of the underlying neural substrate remains a matter of vivid interest. Thus far, most neural time-series analyses are reduced to pairwise electrode statistics, such as cross-correlation and Granger causality. Furthermore, it is most often the case that gold-standard neural datasets recorded during specific tasks encompass only a few sessions, questioning the use of data-hungry techniques. For example, deep learning networks come at the expense of reduced interpretability and the requirement of prohibitively large datasets. Our purpose here is to provide exploratory techniques aimed at rich statistical characterizations of spatiotemporal brain dynamics within the constraints of modest multivariate time-series recordings. In brief, we describe the use of a vector autoregressive-machine learning pipeline to characterize the spatio-temporal dynamics of high-frequency local field potentials recorded, by means of Utah arrays implanted in the motor areas of a non-human primate, during a movement planning and execution task. By contrast to basic cross-correlation and Granger-causality analyses, our pipeline provides a principled analysis of multivariate time series while preserving the interpretability of the dataset and the results. Importantly, the classification accuracies from single-electrode analysis suggest a high degree of intra-region heterogeneity, while multi-electrode analyses achieved the highest accuracies, confirming network behaviour and the suitability of a richer description than simple paired time-series analysis. In summary, this technique provides a reliable method to characterize the multivariate spatiotemporal neurodynamics of motor-related brain states using a single-session dataset, while preserving high accuracy and interpretability.
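A minimal Python sketch of such a vector autoregressive-classification pipeline (hypothetical data shapes and a plain least-squares VAR fit; the authors' actual pipeline is richer):

import numpy as np
from sklearn.linear_model import LogisticRegression

def var_coeffs(X, p=2):
    # Least-squares VAR(p) fit; X has shape (time, channels).
    T, C = X.shape
    Y = X[p:]
    Z = np.hstack([X[p - k - 1:T - k - 1] for k in range(p)])
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return A.ravel()

trials = np.random.randn(60, 200, 8)   # (trials, time, electrodes)
labels = np.repeat([0, 1], 30)         # two task epochs
features = np.array([var_coeffs(tr) for tr in trials])
clf = LogisticRegression(max_iter=2000).fit(features, labels)
print("training accuracy:", clf.score(features, labels))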
A mesoscopic theory for coupled stochastic oscillators | Victor Buendía Ruiz-Azuaga
Abstract:
The celebrated Ott-Antonsen ansatz for coupled oscillators provides a useful framework to work with deterministic systems in the thermodynamic limit, but remains just an approximation for stochastic models. In this work, I construct a general mesoscopic description of finite-sized populations of stochastic coupled oscillators and apply it to study the stochastic Kuramoto model. From such a mesoscopic description it is possible to obtain the natural, multiplicative fluctuations of the oscillator ensemble. The analysis allows one to derive highly accurate, closed expressions for the stochastic Kuramoto model’s order parameter for the first time. Moreover, it is possible to get novel insights into the system’s fluctuations and the synchronization transition’s critical exponents which were inaccessible before. I will also show work-in-progress results for excitable oscillators, including theta-neurons.
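A minimal Python sketch of the model under study (illustrative parameters; the closed-form order-parameter expressions are in the talk, not here): the stochastic Kuramoto model with Lorentzian frequencies, with |Z| estimated from simulation.

import numpy as np

N, K, D, dt = 1000, 2.0, 0.5, 1e-2      # size, coupling, noise, time step
omega = 0.1 * np.random.standard_cauchy(N)
theta = 2 * np.pi * np.random.rand(N)

for _ in range(20000):
    z = np.mean(np.exp(1j * theta))      # complex Kuramoto order parameter
    theta += dt * (omega + K * np.imag(z * np.exp(-1j * theta))) \
             + np.sqrt(2 * D * dt) * np.random.randn(N)
print("long-time |Z| estimate:", abs(np.mean(np.exp(1j * theta))))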
Full author list and affiliations: Victor Buendía, Department of Computing Sciences, Bocconi University, 20136 Milano Italy.
Classification of Neural Mass Models based on codimension-2 bifurcations | Gabriele Casagrande
Gabriele Casagrande, Institut de Neurosciences des Systèmes
Viktor Jirsa, Institut de Neurosciences des Systèmes
Damien Depannemaecker*, Institut de Neurosciences des Systèmes
Maria Luisa Saggio*, Institut de Neurosciences des Systèmes
Neural mass models (NMMs) are mathematical frameworks widely used to describe the collective activity of neuronal populations, enabling the study of complex phenomena such as brain oscillations, pathological dynamics, and cognitive processes in a simplified yet biologically meaningful way.
Different qualitative regimes in the dynamics of these models are reached by changing parameter values. Performing bifurcation analysis allows us to delineate regions of parameter space characterized by distinct patterns of activity. This kind of study usually relies on the identification of codimension-1 bifurcations, namely bifurcations that require the tuning of a single parameter.
High-codimension bifurcations are less frequently discussed in the literature, both because they are more challenging to identify and because it is unlikely for a model to satisfy the conditions on the parameters required to encounter them. However, it has been highlighted that high-codimension bifurcations play the role of 'organizing centers' for the dynamics of a model, as the topology of the bifurcation diagram in their neighborhood is organized in a reliable way. The benefits of recognizing these bifurcations are manifold: they make it easier to identify the different dynamical regimes available in specific models, to recognize characteristic structures common to distinct models, and to define a mapping from the normal form of a specific bifurcation, when available.
In this work we systematically review several NMMs, both phenomenological and biophysically inspired, to develop a classification based on the correspondence between common behaviors and the presence of high-codimension bifurcations, thereby expanding on the approach proposed by Touboul and colleagues [1]. In particular, we start with planar codimension-2 bifurcations, for which previous studies have given interesting insights into the development of phenomenological models capable of reproducing behaviors observed in EEG recordings and different patterns of bursting activity [1, 2]. For instance, the co-existence of up states, corresponding to high firing rates, and down states, corresponding to low firing rates, in multiple neural mass models (e.g. Jansen-Rit, Montbrio [3]) can be related to the presence of a specific codimension-2 bifurcation, the cusp bifurcation, where two curves of saddle-node bifurcations intersect. Another dynamical feature common to various neural mass models is the co-existence of a silent and an active spiking state. Models showing this characteristic have been widely used in the literature to describe both healthy and pathological oscillatory activity in neural populations. Such behavior can be traced to the presence of specific codimension-2 bifurcations, in particular the Bautin singularity and the saddle-node loop bifurcation, which organize several neural models and play a key role in those targeting epileptic seizures (e.g. Epileptor [4], Zeta model, Wendling-Chauvel).
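For reference, the two organizing centers named above have standard textbook normal forms (a sketch, not model-specific equations). The cusp,
\[
\dot{x} = \beta_1 + \beta_2 x - x^3,
\]
has saddle-node curves \(\beta_1 = \pm 2(\beta_2/3)^{3/2}\) (for \(\beta_2 > 0\)) meeting at \(\beta_1 = \beta_2 = 0\) and bounding the wedge where up and down states coexist. The Bautin (generalized Hopf) singularity,
\[
\dot{z} = (\beta_1 + i)z + \beta_2 z|z|^2 - z|z|^4,
\]
separates supercritical from subcritical Hopf branches and organizes the coexistence of silent and oscillatory states.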
Balanced inhibition allows for robust learning of input-output associations in feedforward networks with Hebbian plasticity | Gloria Cecchini
Title:
The Impact of Correlated Patterns on Network Freezing in Hebbian Learning
Authors:
Gloria Cecchini, Alex Roxin
Affiliation:
Centre de Recerca Matemàtica
In neural networks within the brain, the activity of a post-synaptic neuron is determined by the combined influence of many pre-synaptic neurons. This distributed processing enables mechanisms like Hebbian plasticity to associate sensory inputs with specific internal states, as seen in feedforward structures such as the CA1 region of the hippocampus. By modifying synaptic weights through Hebbian rules, sensory inputs can subsequently elicit outputs that reliably correlate with their associated internal states. When input and output patterns are uncorrelated, this approach allows for the encoding of a large number of distinct associations, enabling efficient memory storage.
However, our study demonstrates a critical limitation when output patterns become weakly correlated with input patterns, such as through the intrinsic feedforward connectivity of the network. In these cases, the Hebbian rule preferentially strengthens synaptic weights shared across patterns, leading to a “freezing” of the network’s structure. This results in highly correlated output patterns over time, effectively reducing the network’s capacity to store diverse associations and limiting its flexibility in learning.
To address this challenge, we propose the introduction of balanced inhibition, a mechanism that counteracts the undesired correlations between inputs and outputs. By dynamically regulating inhibitory input, balanced inhibition prevents the over-strengthening of shared weights, restoring the network’s ability to maintain robust and flexible learning.
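A minimal Python toy of the mechanism (our construction, with illustrative sizes and a mean-subtracting stand-in for balanced inhibition): plain Hebbian updates accumulate the input component shared across patterns, correlating the outputs, while subtracting the mean input removes that common drive.

import numpy as np

n_in, n_out, n_pat, eta = 200, 50, 30, 0.05
X = (np.random.rand(n_pat, n_in) < 0.2).astype(float)   # input patterns
Y = (np.random.rand(n_pat, n_out) < 0.2).astype(float)  # target outputs

def train(balanced):
    W = np.zeros((n_out, n_in))
    for _ in range(100):
        for x, y in zip(X, Y):
            drive = x - x.mean() if balanced else x     # "balanced inhibition"
            W += eta * np.outer(y, drive)
    return W

for flag in (False, True):
    out = X @ train(flag).T
    corr = np.corrcoef(out)[np.triu_indices(n_pat, 1)].mean()
    print("balanced" if flag else "plain", "-> mean output correlation:", round(corr, 3))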
Dynamic Mean Field Theories for Correlated Strong Noise in Nonlinear Gain | Shoshana Chipman
Shoshana Chipman (University of Chicago) & Brent Doiron (University of Chicago)
Abstract: Recurrently coupled networks of excitatory and inhibitory neuron models capture a rich suite of dynamical behavior and pattern formation recorded in the brain. In many cases, analytic understanding of these dynamics relies on a linearization of the model dynamics, which rests on assumptions of weak noise, weak connectivity, or piecewise-linear gain functions. We develop an analytic technique for deriving dynamic mean-field theories in systems with a generic nonlinear transfer function and arbitrary connectivity strength and noise intensity. We demonstrate the technique in a recurrently coupled neural network with power-law transfer, strong couplings, and high-intensity noise with a variety of correlations. The dynamic mean-field theory is a better descriptor of the system than an associated linearized model, and is able to capture fluctuations and transients due to external input signals. We further demonstrate that our analytic technique is robust for the fairly generic case of Ornstein-Uhlenbeck noise with arbitrary correlations as input to an arbitrary gain, and provide upper and lower bounds for its efficacy as a model. We conclude with remarks on the use of this result in spatially extended systems.
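The basic averaging step can be sketched in a few lines of Python (our illustration, not the authors' derivation): average the power-law gain over Gaussian-distributed input, which is exactly where linearization fails at strong noise.

import numpy as np

phi = lambda v: np.maximum(v, 0.0) ** 2     # power-law transfer, exponent 2

def gaussian_mean(mu, sigma, n=64):
    # E[phi(mu + sigma*xi)], xi ~ N(0,1), via Gauss-Hermite quadrature
    xs, ws = np.polynomial.hermite_e.hermegauss(n)
    return np.dot(ws, phi(mu + sigma * xs)) / np.sqrt(2 * np.pi)

mu, sigma = 0.5, 2.0                        # strong-noise regime
print("Gaussian-averaged rate:", gaussian_mean(mu, sigma))
print("linearized guess      :", phi(mu))   # ignores fluctuations entirely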
A Neural Mass Model with Neuromodulation for Whole-Brain Modeling in Parkinson's Disease | Damien Depannemaecker
Uncertainty in stimulus dynamics drives asymmetries in evidence integration | Tahra Eissa
Tahra L Eissa^1 and Zachary P Kilpatrick^1
1. Dept. Applied Math, University of Colorado Boulder, Boulder, CO, USA
The timescale at which an environment changes is critical to informing our beliefs about the environment and its task-relevant statistics. In working memory, learning fairly stable features of the environment based on past experience supports more efficient coding schemes for maintaining estimates of current observations. In contrast, past experiences should be discounted in highly dynamic environments, since they will be irrelevant to the current estimate. Here, we derive a dynamical model which infers the optimal evidence discounting rate in a delayed estimation task. The true stimulus value (e.g., color) on each trial evolves according to a random walk with independently sampled Gaussian steps, assuming the variance is known by the observer. Then, breaking optimality, we determine how over/underestimations of the step variance contribute to working memory error. We find an asymmetry whereby assuming the walk variance evolves faster than it truly does generates far smaller response errors than assuming the walk evolves more slowly. Thus, an impoverished view of the world that discounts relevant past information is more beneficial than incorporating past irrelevant information. We then extend this model using Bayesian sequential analysis whereby the observer can learn the step variance parameter from observations. The inferential model can be approximately implemented in a neural field model which produces bump attractors (whose peak represents the stimulus value estimate) and whose connectivity evolves according to plasticity with a tunable timescale. We propose a metaplasticity mechanism that recursively tunes the timescale to implement hierarchical inference of the random walk variance based on mismatches between neural activity and synaptic potentiation profiles. These models provide a testbed for understanding how the brain can accomplish adaptive, multi-timescale inference using a combination of neural activity dynamics and metaplasticity.
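The reported asymmetry can be reproduced in a toy Kalman-filter version of the task (our sketch; parameter values are assumptions): an observer assuming too large a step variance discounts the past and loses little, while one assuming too small a variance clings to stale information and pays a much larger cost.

import numpy as np

def run(q_true, q_assumed, r=1.0, n=20000):
    s = shat = 0.0; p = 1.0; err = 0.0
    for _ in range(n):
        s += np.sqrt(q_true) * np.random.randn()   # world takes a step
        y = s + np.sqrt(r) * np.random.randn()     # noisy observation
        p += q_assumed                             # predict
        k = p / (p + r)                            # Kalman gain
        shat += k * (y - shat); p *= 1 - k         # update
        err += (shat - s) ** 2
    return err / n

q = 0.5
print("assume too fast:", run(q, 4 * q))
print("assume correct :", run(q, q))
print("assume too slow:", run(q, q / 4))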
Interspike Interval dynamics in neuronal populations | Luca Falorsi
Authors:
Luca Falorsi (PhD in Mathematics, “Sapienza” Univ. of Rome, Italy; Natl. Center for Radiation Protection and Computational Physics, Istituto Superiore di Sanità, Rome, Italy)
Gianni V. Vinci (Natl. Center for Radiation Protection and Computational Physics, Istituto Superiore di Sanità, Rome, Italy)
Maurizio Mattia (Natl. Center for Radiation Protection and Computational Physics, Istituto Superiore di Sanità, Rome, Italy)
Abstract: We investigate the joint evolution of time from last spike and membrane potential in a recurrent network of integrate-and-fire neurons using a population density dynamics framework. This approach leads to a two-dimensional partial differential equation describing the system’s dynamics, alongside a hierarchy of equations governing the moments of the time from last spike distribution. These results allow us to analyze the time evolution of the interspike interval distribution when the population significantly deviates from the stationary state, such as in the presence of limit cycles or time-varying external stimuli.
Glial ensheathment of inhibitory synapses drives hyperactivity and increases correlations | Gregory Handy
Authors:
Gregory Handy (ghandy@umn.edu; Assistant Professor at the University of Minnesota)
Nellie Garcia (garc0899@umn.edu; graduate student at the University of Minnesota)
Abstract:
Recent evidence highlights the active role of glial cells, such as microglia and astrocytes, in modulating neuronal dynamics through regulating neurotransmitter concentrations, ion buffering, and releasing neuroactive compounds. A notable recent study found that during and after anesthesia, microglia target inhibitory synapses for ensheathment, disrupting neurotransmitter flow between pre- and postsynaptic terminals. In this work, we develop computational models and mathematical frameworks to explore the effects this ensheathment has on neuronal dynamics.
We first extend a microscale synaptic cleft model to examine how varying strengths of synaptic ensheathment influence synaptic communication. Our findings align with prior work, showing that ensheathment accelerates synaptic transmission but reduces its strength. However, the previous model underestimates glial cells’ ability to switch off synaptic connections. We integrate our updated model into a large network of exponential integrate-and-fire neurons with highly heterogeneous synaptic parameters determined by glial proximity. We extend linear response theory to account for this heterogeneity and use it to analyze not only network firing rate distributions but also noise correlations across excitatory neurons. Despite significant heterogeneity in the system, we find that our mean-field approximation accurately captures network statistics found in the spiking simulations.
Our model reproduces a key experimental finding, namely that increases in glial ensheathment of inhibitory synapses can lead to hyperactivity. It also makes the testable prediction that this ensheathment leads to significant increases in the power spectrum of the excitatory population across a range of task-relevant frequencies. These results suggest that glial-driven synaptic plasticity is an underappreciated mechanism cortical circuits use to modulate recurrent dynamics.
Neural fields with auto-associative memories: collective activity, pattern formation, and memory dynamics | Akke Mats Houben
Modelling large populations of neurons as a homogeneous and isotropic continuous medium has been valuable for the investigation of collective dynamics of large neuronal networks. These neural field models have led to insights into the formation and stability of spatially inhomogeneous (Turing) patterns, travelling waves, and localised (bump) solutions, among others.
However, biological neuronal networks contain heterogeneous connections, which seem to be crucial for their dynamics and functioning. Existing results show that incorporating either simple (two-point) heterogeneous connections or spatial modulations in the connectivity strengths into neural field equations affects the formation and stability of the collective dynamics. However, the effects of numerous, structured and functionally relevant heterogeneous connections on neural field dynamics, as well as the potential novel dynamics arising from these heterogeneities, remain largely unexplored.
This work embarks on this exploration by endowing neural fields with auto-associative memories. After a simple derivation of the equations governing the neural field with auto-associative memories, it will first be shown that the system supports collective activity dynamics much like homogeneous neural fields. Secondly, by deriving a set of coupled amplitude equations for the memory patterns, pattern completion and competition dynamics are investigated and shown to be similar to the dynamics of auto-associative neural networks. Third, a novel spatio-temporal phenomenon will be demonstrated: the amplitude equations take the form of coupled parabolic diffusion equations; hence a travelling wavefront solution exists where a memory pattern invades a spatially homogeneous domain, for which the typical propagation speed can be determined analytically.
Finally, the work concludes with a demonstration of the application of the neural field with auto-associative memories to the investigation of the maturation of in-vitro cultured neurons, which constitute a biological model for large-scale neuronal networks organised on a plane without innate, yet with heterogeneous, connectivity structure.
Authors:
Akke Mats Houben & Jordi Soriano
(Departament de Física de la Matèria Condensada & Institute of Complex Systems, Universitat de Barcelona)
Koopman analysis of stochastic oscillator networks | Pierre Houzelstein
Pierre Houzelstein1,†, Boris Gutkin1, Alberto Pérez-Cervera2
1. Group for Neural Theory, LNC2 INSERM U960, DEC, École Normale Supérieure
PSL University, Paris, France.
2. Applied Mathematics Department, University of Alicante, Alicante, Spain.
† email: pierre.houzelstein@ens.psl.eu
Keywords: Networks, stochastic dynamics, oscillations
Abstract
Collective rhythms and node synchronicity are ubiquitous phenomena in neural circuits, and have been linked to important cognitive processes, such as speech and memory. As such, being able to determine from their activity whether a network of neurons is in a synchronous state is an important question.
The Kuramoto Order Parameter (KOP) has proven to be a fertile tool to study phase synchronization in networks. It maps the dynamics of the full network to the complex plane, allowing one to detect transitions into and out of full phase synchrony and providing a mean collective phase value.
However, to compute the KOP, one must have a phase function for each node. Getting this phase function might be difficult when the nodes are stochastic, as real-world systems often are – even more so when the oscillations are noise-induced.
In this contribution, building upon the results in [1], we suggest that Koopman theory can be leveraged to provide an alternative path into the analysis of network synchrony.
Using a variant of Dynamic Mode Decomposition called ResDMD [2], we map the network dynamics onto the so-called Q-function, the complex eigenfunction of the Koopman operator K corresponding to the dominant metastable oscillatory mode of the dynamics. This projects the collective network activity onto a linear, complex oscillator whose dynamics is well understood.
Our approach leads to a description of the mean phase dynamics similar to that provided by the KOP, but without requiring the knowledge of the individual node phase functions. Additionally, we recover the Quality Factor (QF) which provides a quantitative measure of the robustness of the collective oscillation: a high QF indicates strong coherence. Finally, using a theoretical framework for stochastic phase reduction that we have recently developed, see [3], we can construct a one-dimensional stochastic differential equation (SDE) which approximates the evolution of the mean collective phase.
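A minimal Python sketch of the idea using standard exact DMD on synthetic data (ResDMD and the Q-function machinery of [1-3] are not reproduced here): extract the dominant oscillatory mode and read a collective phase from its complex time coefficient, with no per-node phase functions required.

import numpy as np

dt = 0.1
t = np.arange(0, 200, dt)
base = np.cos(0.7 * t) + 0.2 * np.random.randn(t.size)
X = np.vstack([np.roll(base, k) for k in range(10)])    # 10 synthetic "nodes"

X0, X1 = X[:, :-1], X[:, 1:]
U, s, Vh = np.linalg.svd(X0, full_matrices=False)
r = 6                                                   # truncation rank
Atil = U[:, :r].T @ X1 @ Vh[:r].T @ np.diag(1.0 / s[:r])
evals, W = np.linalg.eig(Atil)

k = np.argmax(np.abs(evals) * (np.abs(np.angle(evals)) > 1e-2))  # oscillatory mode
coef = np.linalg.inv(W) @ (U[:, :r].T @ X)              # per-mode time coefficients
collective_phase = np.angle(coef[k])
print("mode frequency (Hz):", abs(np.angle(evals[k])) / (2 * np.pi * dt))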
We will illustrate our approach with networks whose individual nodes exhibit the canonical Hopf and SNIC bifurcations, which are of particular relevance in neuroscience.
This approach shows promise as a new way to investigate oscillatory activity in neural data, especially given that Koopman/DMD research is a very active field attracting considerable interest.
References
[1] Pérez-Cervera et al., PNAS (2023).
[2] Colbrook et al., Nonlinear Dynamics (2023).
[3] Houzelstein et al., Physical Review Research (2025), accepted for publication.
Bias-corrected synaptic plasticity is essential for capacity in mushroom body circuits | Zhanmiao Huang
In fruit flies, the Mushroom Body plays a crucial role in learning odors associated with positive or negative reinforcement signals. Despite extensive experimental and computational studies, the precise mechanisms of learning and plasticity remain open. In particular, the basic, experimentally established Hebbian-type learning rule introduces a bias in the readout of the Mushroom Body Output Neuron (MBON) in a simplified single-compartment circuit model. There are two solutions to this bias issue: removing the bias in the readout layer, versus implementing a bias correction at the level of the connectivity. Although the two appear similar in effect, we show that bias-corrected plasticity yields significantly greater and more stable memory capacity than readout-bias removal under pure Hebbian plasticity, for realistic parameters. We rigorously derive the signal-to-noise ratio (SNR) and identify the pivotal reduction, under corrected plasticity, of the second-moment crosstalk noise between patterns.
When the effective number of firing Kenyon cells (KCs) is sufficiently large, the conditions of the generalized central limit theorem hold for the readout distribution. This allows the SNR to be used directly to estimate circuit performance through the classification accuracy of the approximating Gaussian distribution.
The SNR theory also clarifies the parameter regime, determined by sparsity and the number of patterns, in which the capacity gap between the two approaches is largest.
Our findings extend to more biologically realistic models, including multi-compartment coding of opposite valences by linear or nonlinear decoders, correlated patterns, and bounded synapses. The predicted capacity numbers suggest that bias-corrected plasticity is likely necessary for adult and larval Drosophila, and likely implemented in the MB circuit. This in turn predicts a small increase in the synaptic strengths from non-activated KCs under dopamine activation, which supplements the currently established plasticity and can be tested experimentally. Our work offers theoretical insights into the computational benefits and biologically plausible implementations of adjusting for bias within circuit connectivity.
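A minimal Python sketch of the central comparison (a toy of our own design; sizes and coding level are assumptions): readout statistics under pure Hebbian depression versus updates made zero-mean across Kenyon cells.

import numpy as np

N, f, P = 2000, 0.05, 200                      # KCs, coding level, patterns
X = (np.random.rand(P, N) < f).astype(float)   # sparse odor patterns
y = np.random.choice([-1.0, 1.0], P)           # valences

w_hebb = -(y[:, None] * X).sum(0)              # pure Hebbian (biased)
w_corr = -(y[:, None] * (X - f)).sum(0)        # bias-corrected updates

for name, w in (("pure Hebbian  ", w_hebb), ("bias-corrected", w_corr)):
    r = X @ w
    m1, m0 = r[y < 0].mean(), r[y > 0].mean()
    noise = 0.5 * (r[y < 0].var() + r[y > 0].var())
    acc = np.mean((r > np.median(r)) == (y < 0))
    print(name, " SNR:", round((m1 - m0) ** 2 / noise, 2), " accuracy:", round(acc, 3))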
A Spiking Neural Network Model for Categorization Tasks | Sophie Jaffard
In cognition, response times and choices in decision-making tasks are commonly modeled using Drift Diffusion Models (DDMs), first introduced by Roger Ratcliff in the late 1970s. DDMs describe the accumulation of evidence for a decision as a stochastic process, specifically a Brownian motion, with the drift rate reflecting the strength of the evidence. However, DDMs lack a learning mechanism and are limited to tasks where participants have prior knowledge of the categories. To bridge the gap between cognitive and biological models, we propose a biologically plausible Spiking Neural Network (SNN) model for decision-making that incorporates a learning mechanism. This model is provably close to the DDM, which has repeatedly demonstrated its predictive accuracy for reaction times and choices in cognitive experiments. Our results show that the DDM, one of the most widely used models in cognitive science, can be derived from a network of spiking neurons governed by a local learning rule. Furthermore, we designed an online categorization task, accessible at https://3ia-demos.inria.fr/mel/en/, to evaluate the model’s predictions. By analyzing participants’ reaction times and choices in this task, we show the utility of our SNN model for studying decision-making processes. This work provides a significant step toward integrating biologically relevant neural mechanisms into cognitive models, fostering a deeper understanding of the relationship between neural activity and behavior.
Authors: Sophie Jaffard (Université Côte d’Azur), Giulia Mezzadri (Columbia University), Patricia Reynaud-Bouret (Université Côte d’Azur)
Towards translation of whole-brain neural mass models to clinical practice: finding the right level of model complexity | Xenia Kobeleva
Author list: Xenia Kobeleva, Computational Neurology Group, Ruhr University Bochum, Bochum, Germany;
Riccardo Leone, Computational Neurology Group, Ruhr University Bochum, Bochum, Germany
Gustavo Patow, Universitat de Girona, Girona, Spain,
Gustavo Deco, Universitat Pompeu Fabra, Spain
(if this abstract gets accepted as a talk, please just include the first author Xenia Kobeleva in the author list)
Whole-brain neural mass models can effectively simulate resting-state fMRI signals in subjects with cognitive impairment; however, the elevated model complexity of some implementations might hinder their translation to clinical practice. Here, we compared various Hopf models of increasing complexity (e.g., locally vs. globally fitting model parameters at edges and nodes) to a model with arbitrarily fixed model parameters, simulating brain dynamics of elderly subjects with and without cognitive impairment. Our aim was to assess which level of model complexity was needed for better descriptions of empirical rs-fMRI data and whether these models were able to recapitulate brain network properties. We found that all tau-dependent models performed significantly better than the fixed model, while increased model complexity provided no added value. We conclude that, at the spatial scale commonly used in whole-brain modeling studies, models with globally fitted parameters provide meaningful information on subjects' cognitive abilities and brain dynamics, diminishing the need for more sophisticated heterogeneous models. These results might facilitate the translation of simpler and less computationally complex models to clinical application.
Fluctuations in strongly coupled soft-threshold integrate-and-fire networks | Gabriel Koch Ocker
Authors: Gabriel Koch Ocker, Department of Mathematics and Statistics, Boston University; Michael A. Buice, Allen Institute.
Neuronal activity is striking for its variability. One potential source of this variability is the strong and heterogeneous synaptic connectivity of neurons. Here, we study strongly-coupled networks of integrate-and-fire neurons with stochastic spike emission. Using a statistical field-theoretic formalism, we calculate the Dyson-Schwinger equations for this model: an infinite hierarchy of equations governing moments of the neurons’ membrane potentials and/or spike trains and their input responses. We use these to derive a set of fluctuation-response relations that relate subthreshold and spiking variability to responses to sub- or suprathreshold perturbations. We then examine a transition to internally-generated fluctuating activity in strongly-coupled networks using both weakly and strongly nonlinear (one-loop and dynamical mean-field) Gaussian approximations, and compare the roles of stochastic spike emission and strong coupling in shaping the variability.
Altered slow inactivation of sodium channels carrying an epilepsy mutation promotes depolarization block | Louisiane Lemaire
Authors: Louisiane Lemaire (1), Joanna Danielewicz (2), Mathieu Desroches (1), Fabien Campillo (1), Juan-Manuel Encinas (3,4), Serafim Rodrigues (2,4)
(1) MathNeuro team, Inria Branch at the University of Montpellier, Montpellier, France
(2) MCEN research group, BCAM – Basque Center for Applied Mathematics, Bilbao, Basque Country, Spain
(3) Achucarro – Basque Center for Neuroscience, Leioa, Basque Country, Spain
(4) Ikerbasque – The Basque Foundation for Science, Bilbao, Basque Country, Spain
Dravet syndrome is a developmental and epileptic encephalopathy (DEE) that typically begins in the first year of life. This complex pathology is characterized by drug-resistant seizures, various comorbidities such as cognitive delay, and a risk of early death. Most cases are due to mutations of NaV1.1, a voltage-gated sodium channel expressed in fast-spiking (FS) inhibitory neurons. The pathological mechanism in the initial stage of the disease involves impaired function of those neurons, leading to network hyperexcitability. However, the details remain unclear.
Mutations of NaV1.1 may result in non-functional channels or channels with altered gating properties. We focus on the less studied case of altered gating, by investigating how it impairs neuronal activity in the case of a specific mutation (A1783V). Using recordings in cell lines, Layer et al. (2021) showed that A1783V alters the voltage dependence of channel activation, as well as the voltage dependence and kinetics of slow inactivation. Slow inactivation is a mechanism distinct from the fast inactivation of sodium channels at each spike, developing much more slowly, during prolonged trains of depolarization. Implementing the three effects of the mutation in a conductance-based model, Layer et al. predict that altered activation has the largest impact on channel function, as it causes the most severe reduction in firing rate.
Using conductance-based models tailored to the dynamics of FS inhibitory neurons, we examine how the three alterations affect susceptibility to depolarization block, another firing deficit aside from frequency reduction. We look deeper into slow inactivation, exploiting the timescale difference with the rest of the system. We find that slow inactivation of mutant channels at lower voltage values than wild type channels favors depolarization block upon sustained stimulation. More precisely, shifting the steady-state voltage dependence of slow inactivation destroys the stable limit cycle of the full system corresponding to tonic spiking, and creates a stable equilibrium corresponding to depolarization block (Figure 1). The accelerated kinetics of slow inactivation in mutant channels hastens the transition from tonic spiking to depolarization block. These findings suggest that alterations of NaV1.1 slow inactivation should not be neglected as they might play an important pathological role, adding to the conclusions of Layer et al. on the consequences of altered NaV1.1 activation. We test our predictions with classical ramped electrophysiology protocols.
Parkinsonian patients have a broader range of time-scales of EEG motor cortex activity than healthy subjects | Cheng Ly
TBP
The Hydrodynamic Limit of Neural Networks with Balanced Excitation and Inhibition | James Maclaurin
James MacLaurin (New Jersey Institute of Technology)
Pedro Vilanova (Stevens Institute of Technology)
The theory of 'balanced neural networks' is a very popular explanation for the high degree of variability and stochasticity in the brain's activity. We determine equations for the hydrodynamic limit of a balanced all-to-all network of 2n neurons for asymptotically large n. The neurons are divided into two classes (excitatory and inhibitory). Each excitatory neuron excites every other neuron, and each inhibitory neuron inhibits all of the other neurons. The model is of a stochastic hybrid nature, such that the synaptic response of each neuron is governed by an ordinary differential equation. The effect of neuron j on neuron k is dictated by a spiking Poisson process, with intensity given by a sigmoidal function of the synaptic potentiation of neuron j. The interactions are scaled by O(n^{-1/2}), which is much stronger than the O(n^{-1}) scaling of classical interacting particle systems (the most common scaling used in mathematical neuroscience). We demonstrate that, under suitable conditions, the system does not blow up as n tends to infinity because the network activity is balanced between excitatory and inhibitory inputs. The limiting population dynamics is proved to be Gaussian, with the mean determined by the balance between excitation and inhibition and the variance determined by the central limit theorem for inhomogeneous Poisson processes. The limiting equations can thus be expressed as autonomous ordinary differential equations for the means and variances. We finish by studying pattern formation in systems with spatial extension, focusing on conditions under which there are bump attractors in a neural field model of the visual cortex.
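The scaling argument at the heart of the balance can be illustrated numerically (a sketch, not the paper's hybrid model): with O(n^{-1/2}) synapses, each of the excitatory and inhibitory drives grows like sqrt(n), but their difference stays O(1) when rates balance, with O(1) fluctuations by the central limit theorem.

import numpy as np

rate, T = 5.0, 1.0                    # balanced E and I rates, time window
for n in (100, 1000, 10000):
    exc = np.random.poisson(rate * T, n).sum() / np.sqrt(n)
    inh = np.random.poisson(rate * T, n).sum() / np.sqrt(n)
    print(f"n={n:6d}  each drive ~ {exc:7.1f}   E-I difference = {exc - inh:+.2f}")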
Transitions in cartwheel cell electrical activity: bifurcations of super-slow equilibria explain effects of ion current blockers | Matteo Martin
Matteo Martin (1,a), Jonathan Rubin (2,a) and Morten Gram Pedersen (1,b)
(1) Department of Information Engineering, University of Padova, Italy
(a) matteo.martin.2@phd.unipd.it
(b) mortengram.pedersen@unipd.it
(2) Department of Mathematics, University of Pittsburgh, USA
(a) jonrubin@pitt.edu
Cartwheel cells (CWCs) are inhibitory interneurons of the dorsal cochlear nucleus (DCN), a brainstem region where parallel fibers and primary auditory afferents are integrated. CWCs exhibit different types of activity, which can be classified as bursting, spiking and complex spiking. Electro-pharmacological experiments [1] have shown that the L-type Ca²⁺ blocker nifedipine impairs complex spiking dynamics by promoting continuous spiking behaviour; on the other hand, iberiotoxin, a BK channel blocker, has the opposite effect on CWC dynamics.
In this work, we aim to explain these transitions by investigating a reduction of a novel conductance-based CWC model. The reduced model comprises a six-dimensional (6D) system of ODEs, and its dynamical variables evolve at different rates. For this reason, we exploit timescale separation along with bifurcation and averaging theories to understand how changes in the L-type Ca²⁺ and BK channel conductances affect the model dynamics.
The timescale hierarchy of the 6D model is complicated due to unclear distinctions between some of the time constants involved. Nevertheless, we find that the use of a three-timescale decomposition provides insights into the mechanisms mediating the transitions observed in the experiments.
Within this hierarchy, the slow-fast subsystem of the full model is studied through the calculation of one- and two-parameter bifurcation diagrams by using the super-slow variables as bifurcation parameters. To analyze the dynamics of these super-slow variables, we apply averaging theory where the slow-fast subsystem exhibits stable periodic orbits. For this computation, we customize numerical continuation techniques to track the super-slow averaged nullclines efficiently. We find that due to the intricate timescale structure of the 6D system, averaging theory predicts the behaviour of the full model only for certain ranges of CWC excitability, comprising the regimes of continuous and complex spiking. In these regimes, we exploit changes in the stability of the unique super-slow averaged equilibrium point to explain the changes in dynamics that occur as the parameters associated with L-type Ca²⁺ and BK channel conductances are varied.
In conclusion, through the development of a novel conductance-based model, analysis of the model dynamics based on a three-timescale hierarchy, and the study of super-slow dynamics based on averaging over slow-fast oscillations, this work proposes a possible mathematical interpretation for the transitions in CWC dynamics observed experimentally with calcium and potassium channel inactivation.
[1] Kim Y. and Trussell L.O., Ion Channels Generating Complex Spikes in Cartwheel Cells of the Dorsal Cochlear Nucleus, Journal of Neurophysiology, (2007), 97:2, 1705-1725.
Optimal control over damped oscillations via response curves | Kevin Martínez Anhom
Periodic sustained oscillations, such as neural spikes or cardiac rhythms, are represented in dynamical systems by limit cycles. Damped oscillations, on the other hand, find their dynamical counterparts in strong (hyperbolic) and weak (non-hyperbolic) stable foci. However, it is intuitive to think that, with the help of a carefully designed external input, one can force the spiraling trajectories around a stable focus to close onto a periodic orbit and thus, induce sustained oscillations in a damped oscillator.
In the work we present here, we introduce an Augmented Phase Reduction (APR) formalism around strong stable foci, analogous to the one defined for limit cycles (see [1] and [2]), that facilitates the study of the effects that external inputs may exert on the dynamics of this type of damped oscillations. Furthermore, based on this novel APR formalism, we successfully pose and solve an optimal control strategy (inspired by [3]) for the induction of a limit cycle around a strong stable focus using a minimum-energy external input.
We successfully applied this theory to the practical cases of a strong linear focus and the FitzHugh-Nagumo model (see [4] and [5]). Positive results in the first case (see Figure 1A), regarded as a simple academic model, allowed an exhaustive analysis that revealed interesting properties of the control algorithm. Positive results in the second case (see Figure 1B) proved our optimal control strategy to be effective in enhancing the oscillatory regime of more realistic neuron models, making them excitable even for low-intensity external stimuli. This opens the door to potential practical and therapeutic applications of this technique.
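For orientation, the augmented phase reduction around a limit cycle used in [1, 2] takes the form
\[
\dot{\theta} = \omega + \mathcal{Z}(\theta)\, u(t), \qquad \dot{\psi} = \kappa\, \psi + \mathcal{I}(\theta)\, u(t),
\]
with phase response curve \(\mathcal{Z}\), isostable response curve \(\mathcal{I}\), and contraction rate \(\kappa < 0\). The formalism presented here builds the analogous pair around a strong stable focus, where the free rotation and contraction rates come from the imaginary and real parts of the linearized eigenvalues, so that a minimum-energy input u(t) can be sought that holds the amplitude variable on a prescribed closed orbit.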
Authors & Affiliations:
Kevin Martínez-Anhom (a), Román Moreno (b), Antoni Guillamon (a,b,c)
(a) Centre de Recerca Matemàtica, Barcelona, Spain.
(b) Departament de Matemàtiques, Universitat Politècnica de Catalunya, Barcelona, Spain.
(c) Institut de Matemàtiques de la UPC – Barcelona Tech (IMTech), Barcelona, Spain.
A stochastic hierarchical model for low grade glioma evolution | Amira Meddah
A stochastic hierarchical model for the evolution of low grade gliomas is proposed. Starting with the description of cell motion using a piecewise diffusion Markov process (PDifMP) at the cellular level, we derive an equation for the density of the transition probability of this Markov process based on the generalised Fokker-Planck equation. Then, a macroscopic model is derived via parabolic limit and Hilbert expansions in the moment equations. After setting up the model, we perform several numerical tests to study the role of the local characteristics and the extended generator of the PDifMP in the process of tumour progression. The main aim is to understand how variations of the jump rate function of this process at the microscopic scale and of the diffusion coefficient at the macroscopic scale relate to the diffusive behaviour of the glioma cells and to the onset of malignancy, i.e., the transition from low-grade to high-grade gliomas.
Evelyn Buckwar, Martina Conte, Amira Meddah
tDCS montage optimization for the treatment of epilepsy using Neurotwins | Borja Mercadal
Movement and reward are encoded in the cerebellar signals to the substantia nigra dopamine neurons | Farzan Nadim
Farzan Nadim1, Kamran Khodakhah2, Germán M. Heim3, Horacio G. Rotstein1
1. New Jersey Institute of Technology, Newark, NJ, USA
2. Albert Einstein College of Medicine, NY, USA
3. Universidad Nacional del Sur, Argentina
Learning involves multiple brain regions that process sensory input, evaluate outcomes and generate movement. A recent in vivo study characterized functional monosynaptic projections from the cerebellum (Cb) to the substantia nigra (SNc) dopaminergic nucleus and demonstrated the involvement of these Cb-SNc projections in both movement generation and reward-based functions during learning (Washburn, Onate et al, Nat Neurosci 2024). However, the information content of these signals and how they relate to function is still unknown. Moreover, decomposing neural signals into components that encode different predictors and their kinetics, particularly during a learning process, remains mathematically and computationally challenging. Standard methods such as generalized linear models (GLMs) can provide link functions that estimate the contribution of each predictor to the total signal, but do not estimate the associated signal kinetics.
In this work, we develop and use an optimization technique to decompose the Cb-SNc activity, recorded in a Pavlovian conditioning task, into three components: (i) movement-related, (ii) sensory, and (iii) reward-related. This method produces the respective contributions (kernels) of each predictor component, based on a priori assumptions (such as linear decomposition), which can then be examined to estimate how each predictor changes during the learning process.
Because these kernels are time-dependent, they provide an estimate of both the amplitudes and the kinetics of the signal components contributed by each predictor. Notably, the time-dependent kernels can be interpreted as the output of dynamical systems (solutions to ordinary differential equations) in response to inputs. Therefore, these kernels are interpretable in terms of a systems’ building blocks. We build firing rate models of cerebellar output neurons, SNc dopamine neurons and inhibitory GABAergic neurons and simulate this activity using parameter estimation tools. These models are used to examine how the cerebellum may contribute to dopaminergic signaling in the process of conditioned learning.
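A minimal Python sketch of the decomposition step (our illustration under the linearity assumption stated above; data are synthetic): recover one time-dependent kernel per predictor by least squares, modeling the signal as a sum of predictor-kernel convolutions.

import numpy as np

T, L = 5000, 50                                 # samples, kernel length
preds = np.random.rand(3, T) < 0.01             # movement, sensory, reward events
true_k = [np.exp(-np.arange(L) / tau) for tau in (5, 15, 30)]
signal = sum(np.convolve(p, k)[:T] for p, k in zip(preds, true_k))
signal = signal + 0.05 * np.random.randn(T)

# Design matrix of lagged predictor copies, one block of L lags per predictor
cols = [np.roll(p, lag) * (np.arange(T) >= lag)
        for p in preds for lag in range(L)]
D = np.array(cols, dtype=float).T               # shape (T, 3*L)
k_hat, *_ = np.linalg.lstsq(D, signal, rcond=None)
kernels = k_hat.reshape(3, L)                   # amplitude and kinetics per predictor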
Beyond Synchrony: The Role of Electrical Synapses in Neural Pattern Formation | Bastian Pietras
Bastian Pietras (1), Pau Clusella (2), Daniele Avitabile (3,4), Ernest Montbrió (1)
1. Neuronal Dynamics Group, Department of Engineering, Universitat Pompeu Fabra, 08018 Barcelona, Spain
2. EPSEM, Departament de Matemàtiques, Universitat Politècnica de Catalunya, 08242 Manresa, Spain
3. Amsterdam Centre for Dynamics and Computation, Department of Mathematics, Vrije Universiteit Amsterdam, 1081 HV Amsterdam, The Netherlands
4. MathNeuro Team, Inria branch of the University of Montpellier, 34095 Montpellier Cedex 5, France
Electrical synapses, mediated by gap junctions, have historically played a minor role in neuroscience research, often overshadowed by their chemical counterparts. Yet, some hypothesize they may constitute the brain’s “dark matter”—essential but elusive. Gap junctions are notoriously difficult to investigate experimentally and theoretically, but recent advances in both domains illuminate their potential significance in neuronal networks. By integrating previously scattered neuroanatomical evidence into established mean-field approaches for networks of spiking neurons with chemical and electrical synapses, we uncover novel and counterintuitive functions of gap junctions in shaping the collective behavior of inhibitory neurons.
We propose an exactly reduced neural field model for quadratic integrate-and-fire neurons that incorporates the distinct spatial connectivity ranges of chemical and electrical synapses. This analytically tractable model not only reveals, through bifurcation analysis, unexpected roles of gap junctions beyond synchronization and collective oscillations, but also provides insight into how electrical synapses contribute to the formation and modulation of neural patterns. Numerical simulations demonstrate that these analytical results are robust and extend to other neuron models, including exponential integrate-and-fire neurons. Our findings suggest a paradigm shift in understanding neural pattern formation, extending Turing's foundational principles of reaction-diffusion systems to account for neural dynamics that emerge through the intricate interplay of electrical and chemical synapses.
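A minimal Python sketch of the kind of network such mean-field reductions describe (dimensionless, illustrative parameters reminiscent of standard QIF mean-field examples; the paper's neural field adds spatial structure): quadratic integrate-and-fire neurons with a shared synaptic drive (chemical coupling) and attraction to the mean voltage (electrical coupling).

import numpy as np

N, dt, steps = 1000, 1e-3, 50000
vpeak, tau_s = 100.0, 1.0
J, g = 15.0, 0.5                                        # chemical, electrical strength
eta = -5.0 + np.tan(np.pi * (np.random.rand(N) - 0.5))  # Lorentzian drive
v = np.random.randn(N); s = 0.0; nspikes = 0

for _ in range(steps):
    spikes = v >= vpeak
    v[spikes] = -vpeak                                  # reset
    nspikes += spikes.sum()
    s += dt * (-s / tau_s) + spikes.sum() / (N * tau_s)   # synaptic filter
    v += dt * (v**2 + eta + J * s + g * (v.mean() - v))   # QIF + both couplings
print("mean rate (per unit time):", nspikes / (N * steps * dt))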
Optimal signal transmission and timescale diversity in a model of human brain operating near criticality | Yang Qi
Yang Qi{1,2,3}
Jiexiang Wang {1}
Weiyang Ding {1,2,3}
Gustavo Deco {4,5}
Viktor Jirsa {6}
Wenlian Lu {7,1,2}
Jianfeng Feng {1,2,3}
{1} Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
{2} Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, China
{3} MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China
{4} Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain.
{5} Computational Neuroscience Group, Center for Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
{6} Aix Marseille University INSERM, INS, Institute for Systems Neuroscience, 13005 Marseille, France
{7} Center for Applied Mathematics, Fudan University, Shanghai, China
jffeng@fudan.edu.cn
These authors contributed equally
Abstract
Cortical neurons exhibit a hierarchy of timescales across brain regions in response to input stimuli, which is thought to be crucial for processing information on different temporal scales. Modeling studies suggest that both intra-regional circuit dynamics and the cross-regional connectome may contribute to this timescale diversity. Equally important to diverse timescales is the ability to transmit sensory signals reliably across the whole brain. Therefore, the brain must be able to generate diverse timescales while simultaneously minimizing signal attenuation. To understand the dynamical mechanism behind these phenomena, we develop a second-order mean-field model of the human brain by applying moment closure and coarse-graining to a digital twin brain model endowed with the whole-brain structural connectome. Cross-regional coupling strength is found to induce a phase transition from asynchronous activity to synchronous oscillation. By analyzing the input-response properties of the model, we reveal criticality as a unifying mechanism enabling simultaneously optimal signal transmission and timescale diversity. We show how the structural connectome and criticality jointly shape the intrinsic timescale hierarchy across the brain.
Metric Framework of Synchronous States Identification in Spiking Neural Networks | Daniil Radushev
Authors: Daniil Radushev (1), Olesia Dogonasheva (2), Boris Gutkin (3), Denis Zakharov (1)
Affiliations: (1) Higher School of Economics, Moscow; (2) Institut de l’Audition, Institut Pasteur, Paris; (3) École Normale Supérieure, Paris.
Abstract:
For many years, the investigation of the synchronization properties of neural networks has been one of the most important research directions in synchronization theory. Among the main issues arising in studies of neural networks' synchronous regimes is the need for automatic tools for coherent-state identification, since these are used to localize the regions of parameter space in which synchronous states exist.
The traditional approach to the problem has been to identify the state from a numeric estimate of the coherence of the network activity. While this strategy provides adequate tools for distinguishing full synchrony from full asynchrony, it lacks the power to describe partially synchronous states in detail, even though these are the regimes most relevant to neuronal network function. In this work, we propose a new approach to the identification of synchronous states in neuronal networks: the Metric Framework.
The Metric Framework interprets the network as a metric space and the activity parameters of its neurons as functions on that metric space. By identifying the regions of continuous change of the activity parameters, one locates the network's synchronous clusters. The sizes, locations, and internal characteristics of these clusters form an exhaustive high-level profile of the synchronous state, allowing the researcher to draw interpretable and accurate conclusions about the network's dynamics.
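A minimal sketch of the cluster-identification idea, assuming neurons ordered on a ring metric and using mean frequencies as the activity parameter; the tolerance-based boundary detection below is one simple instantiation, not the full framework.

import numpy as np

def synchronous_clusters(freqs, tol=1e-2):
    # Group neurons on a ring into clusters over which the frequency profile
    # changes "continuously": a boundary is declared wherever the profile
    # jumps by more than tol between neighbours. Returns index arrays.
    n = len(freqs)
    jumps = [i for i in range(n) if abs(freqs[(i + 1) % n] - freqs[i]) > tol]
    if not jumps:                      # fully synchronous: one global cluster
        return [np.arange(n)]
    clusters, start = [], (jumps[-1] + 1) % n
    for j in jumps:
        clusters.append(np.arange(start, start + (j - start) % n + 1) % n)
        start = (j + 1) % n
    return clusters

# Toy profile: two frequency-locked clusters plus an incoherent region
rng = np.random.default_rng(0)
freqs = np.r_[np.full(40, 1.0), np.full(30, 1.5), 1.0 + rng.uniform(-1, 1, 30)]
clusters = synchronous_clusters(freqs)
sizes = sorted(len(c) for c in clusters)
print(f"{len(clusters)} clusters found; two largest sizes: {sizes[-2:]}")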
Interaction of segregated resonant mechanisms along the dendritic axis in CA1 pyramidal cells: Interplay of cellular biophysics and spatial structure | Horacio G. Rotstein
Ulises Chialva¹ and Horacio G. Rotstein²
¹ Department of Mathematics, Universidad Nacional del Sur, Bahía Blanca, Argentina
² Federated Department of Biological Sciences, New Jersey Institute of Technology & Rutgers University, Newark, NJ, USA
Neuronal frequency filters play a crucial role in cognition, motor behavior, and the dynamics of information processing in neural networks in both health and disease. The filtering properties of neurons are shaped by several factors, including their intrinsic properties, their dendritic geometry, and the heterogeneous distribution of ionic currents [1]. Experimental results show the existence of two distinct theta (∼ 4 – 10 Hz) resonant mechanisms in CA1 pyramidal neurons: (i) a perisomatic resonance mediated by an M-type potassium current (IM) and amplified by a persistent sodium current (INap), and (ii) a dendritic resonance mediated by a hyperpolarization-activated current (Ih) [2].
While the propagation of (amplitude) resonances along dendritic trees has been investigated before [3], a number of biologically and mathematically relevant questions remain open. It is unclear how the two experimentally observed, biophysically different, and spatially segregated types of resonance interact in the presence of a heterogeneous distribution of ionic currents and membrane potential variations. It is also unknown what the interaction and propagation properties of the associated phasonances (phase-resonances) are. In this study, we address these issues using CA1 pyramidal neurons as a case study.
We use a multicompartmental model based on the Hodgkin-Huxley formalism. The model includes IM, INap, and Ih, distributed spatially and heterogeneously along the dendrites. We also use a linearized version of this model that allows for mathematical tractability. The model is minimal in the sense that it includes enough compartments to capture the filtering properties of CA1 pyramidal cells, yet few enough to allow for a conceptual understanding of the underlying mechanisms. In practice we use 20 compartments, which we found appropriate to preserve the experimentally observed segregation of the two resonant mechanisms while allowing for their interaction without creating unrealistic interference. We apply sinusoidal inputs at proximal, distal, and intermediate dendritic locations, compute the amplitude and phase profiles across all compartments, and describe them for a number of realistic scenarios.
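The linearized frequency-domain computation can be sketched compactly: below, a cable of compartments, each carrying one linearized resonant current, is driven by a sinusoidal current and solved as a complex linear system; all parameters are illustrative rather than fitted to CA1 data.

import numpy as np

def impedance_profile(freqs_hz, n_comp=20, C=1.0, gL=0.1, g_axial=0.5,
                      g_res=0.3, tau_res=100.0, inj=0):
    # Voltage amplitude in every compartment of a linearized cable driven by
    # a unit sinusoidal current in compartment `inj`. Each compartment has
    # admittance Y(w) = iwC + gL + g_res / (1 + iw * tau_res) (tau_res in ms),
    # and nearest-neighbour axial coupling g_axial with sealed ends.
    profiles = []
    for f in freqs_hz:
        w = 2 * np.pi * f / 1000.0                     # rad/ms
        Y = 1j * w * C + gL + g_res / (1 + 1j * w * tau_res)
        A = np.zeros((n_comp, n_comp), dtype=complex)
        for i in range(n_comp):
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n_comp]
            A[i, i] = Y + g_axial * len(nbrs)
            for j in nbrs:
                A[i, j] = -g_axial
        I = np.zeros(n_comp, dtype=complex)
        I[inj] = 1.0
        profiles.append(np.abs(np.linalg.solve(A, I)))
    return np.array(profiles)                          # (n_freq, n_comp)

freqs = np.linspace(0.5, 20, 40)
Z = impedance_profile(freqs, inj=0)                    # somatic injection
print(f"resonant frequency at injection site: {freqs[np.argmax(Z[:, 0])]:.1f} Hz")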
Our results reveal that voltage variations along the dendritic cable differentially activate ionic channels, creating a diverse range of resonant and phasonant responses. The spatial structure of the dendrites provides the neuron with remarkable flexibility to process these inputs and support a variety of scenarios of resonance interaction. Our findings highlight the complex relationship between dendritic structure, ionic mechanisms, and neuronal filtering properties. These flexible filtering capabilities not only enable individual neurons to adapt to spatially and frequency-specific inputs, but also significantly contribute to the generation of network rhythms and the regulation of neural activity at the network level.
References
[1] G. Buzsáki. Rhythms of the brain. Oxford University Press, 2006.
[2] H. Hu, K. Vervaeke, L. J. Graham, and J. F. Storm. Complementary theta resonance filtering by two spatially segregated mechanisms in CA1 hippocampal pyramidal neurons. The Journal of Neuroscience, 29:14472–14483, 2009.
[3] J. Laudanski, B. Torben-Nielsen, I. Segev, and S. Shamma. Spatially distributed dendritic resonance selectively filters synaptic input. PLoS Computational Biology, 10(8):e1003775, 2014.
Distinct dopaminergic spike-timing-dependent plasticity rules are suited to different functional roles | Jonathan Rubin
Authors: Baram Sosis & Jonathan E. Rubin, University of Pittsburgh, Department of Mathematics
Abstract: Inspired by experimental findings, various mathematical models have been formulated to describe the changes in synaptic strengths resulting from spike-timing-dependent plasticity (STDP). One site where STDP is believed to play a key role is at cortico-striatal synapses, which comprise the primary channel for cortical inputs to the basal ganglia. The neuromodulator dopamine is released by midbrain dopamine neurons when unexpected reward is received and interacts with the timing of pre- and postsynaptic spiking to modulate plasticity of cortico-striatal synapses. Experimental and theoretical analyses of cortico-basal ganglia-thalamic (CBGT) circuits suggest a key role for dopaminergic reward prediction error signals, through their impact on cortico-striatal synaptic strengths, both in updating value estimates associated with available choices and in altering the likelihood that a particular action will be selected in the future [1]. These distinct functions are likely performed by different neurons in different regions of the basal ganglia, however, which raises the possibility that distinct plasticity rules are involved. Unfortunately, despite some exciting experimental investigations of long-term plasticity properties in specific striatal regions and task settings, relatively little is known about the details of these plasticity mechanisms, especially in striatal regions thought to encode value. We sought to address this gap by analyzing, mathematically and with simulations, the performance of a set of three potential dopamine-dependent STDP models across several biologically relevant scenarios.
Two of the plasticity models considered comprise previously proposed STDP rules [2] with modifications to incorporate dopamine, while the third is a dopamine-dependent STDP rule tailored specifically to cortico-striatal synapses, based on experimental observations [3]. We tested the ability of each of the three models to complete simple reward prediction and action selection tasks and to maintain its weights in the face of noise, studying the learned weight distributions and corresponding task performance in each setting (see Figure 1 of the supplementary material). Mathematically, each model is a coupled system of ordinary differential equations for synaptic weights posed at the individual synapse level, driven by a Poisson cortical input, together with additional terms to implement postsynaptic firing and dopamine release and to track each synapse’s time-dependent eligibility for plasticity. Our analysis proceeds via the derivation and analysis of average weight drift equations for each model. Although technical, this step leads to equations for which we can assess the existence of critical points and, in some cases, prove that certain conditions are necessary and/or sufficient for their stability (see Figure 2 of the supplement). For example, in the reward prediction setting, for two of the models, the evolution equation for the average weight w_i is
dw_i/dt = (R* − N⟨w,r⟩) · r · τ_dop · τ_eli · [ τ Δf(w_i) r_i ⟨w,r⟩ + f_+(w_i) w_i r_i ]
where the various constants are model parameters, including a target firing rate R*, an input rate vector r, and various time constants (τ, τ_dop, τ_eli), and where the f terms capture the plasticity effects of different relative spike timings, which depend on the specific model being considered.
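A minimal numerical illustration of such a drift equation, with placeholder choices for f_+, Δf, and all constants (these are not the parameters of the three models in the abstract):

import numpy as np

N, R_star = 50, 100.0                  # synapses and target firing rate
tau, tau_dop, tau_eli = 0.02, 0.3, 0.5 # time constants (placeholders)
rng = np.random.default_rng(1)
r = rng.uniform(1.0, 10.0, N)          # cortical input rates

def f_plus(w):                         # potentiation amplitude vs. weight
    return 1.0 - w                     # soft upper bound at w = 1

def delta_f(w):                        # net timing-dependent term
    return 0.5 - w                     # changes sign with weight

def drift(w):
    wr = np.mean(w * r)                # <w, r>, read as an average here
    pref = (R_star - N * wr) * r.mean() * tau_dop * tau_eli
    return pref * (tau * delta_f(w) * r * wr + f_plus(w) * w * r)

w = rng.uniform(0.1, 0.9, N)
dt = 1e-5
for _ in range(50_000):                # Euler integration of the drift ODE
    w = np.clip(w + dt * drift(w), 0.0, 1.0)
print(f"N<w,r> = {N * np.mean(w * r):.1f} (target R* = {R_star})")

With these placeholder choices the prefactor vanishes when the average drive matches the target rate, so the weights settle at a critical point of the drift; the stability analysis described in the abstract asks exactly when such points exist and attract.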
Interestingly, we find that each of the three plasticity rules is well suited to a subset of the scenarios studied but falls short in others. We show that this result generalizes to more complex variants of these settings in which the reward contingencies or the task change periodically. Different tasks may therefore require different forms of synaptic plasticity, yielding the prediction that the precise form of the STDP mechanism may vary across regions of the striatum, and across other dopamine-recipient brain areas, involved in distinct computational functions.
[1] K. N. Gurney, M. D. Humphries, and P. Redgrave, PLoS Biol., 13(1):e1002034, 2015.
[2] R. Gütig, R. Aharonov, S. Rotter, and H. Sompolinsky, J. Neurosci., 23(9):3697–3714, 2003.
[3] C. Vich, K. Dunovan, T. Verstynen, and J. Rubin, Commun. Nonlin. Sci. Num. Sim., 82:105048, 2020.
One-shot normative modelling of whole-brain functional connectivity | Janus Rønn Lind Kobbersmed
Many brain diseases and disorders lack objective measures of brain function as indicators of pathology. The search for brain function biomarkers is complicated by the fact that these conditions are often heterogeneous and described as a spectrum from normal to abnormal rather than a sick-healthy dichotomy. As a response to this issue, normative modelling has emerged to characterize the normal variation of brain measurements given sex and age. Abnormalities are then identified as deviations from the distribution of normal brain measures. In fMRI studies, brain function is often assessed as functional connectivity (FC), which is calculated as the correlation matrix of activity between pairs of brain regions or networks. Normative modelling of FC requires a large, healthy population and a method to predict FC from sex and age. However, predicting FC is challenging because of its mathematical structure and high dimensionality. Current normative modelling studies have mainly focused on predicting pairwise FC (i.e. individual correlation coefficients) rather than the full FC (correlation) matrix, thereby ignoring its positive semi-definiteness and generating a large number of hypotheses. Here, motivated by the fact that brain diseases often affect the interplay between multiple brain regions, rather than properties of isolated pairs, we adapt a newly developed method from the statistics literature to the needs of normative modelling, so that we can find linear projections of FC matrices based on sex and age. Using this new approach, which we term Functional Connectivity Integrative Normative Modelling (FUNCOIN), and resting-state fMRI data from the UK Biobank, we propose a normative model based on whole-brain FC by identifying two sex- and age-dependent projections, which successfully characterize the normal range of functional connectivity. By modelling the entire brain at once, FUNCOIN allows for identifying network-level changes associated with sex and age which traditional elementwise methods cannot reveal. In this way, we found that subjects with Parkinson's disease were significantly, and substantially, more likely than healthy subjects to exhibit an abnormal pattern of FC even on scans up to 5.5 years before diagnosis.
Authors: Janus R. L. Kobbersmed (Aarhus University), Chetan Gohil (University of Oxford), Andre Marquand (Radboud University), Diego Vidaurre (Aarhus University)
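The projection-scoring idea can be sketched on synthetic data as follows; this is a simplified stand-in for the FUNCOIN estimator, scoring each subject's FC matrix along a fixed projection and fitting a normative age trend (the projection is assumed known here rather than estimated, and all data are synthetic).

import numpy as np

rng = np.random.default_rng(2)
n_sub, n_reg, T = 200, 10, 150
age = rng.uniform(40, 70, n_sub)

# Synthetic "resting-state" data whose leading covariance direction
# strengthens with age, standing in for real fMRI time series.
v = rng.standard_normal(n_reg)
v /= np.linalg.norm(v)
fc = []
for a in age:
    lat = rng.standard_normal(T) * (1.0 + 0.03 * (a - 40))  # age-scaled factor
    X = np.outer(lat, v) + rng.standard_normal((T, n_reg))
    fc.append(np.corrcoef(X.T))                             # subject FC matrix
fc = np.array(fc)

# Score each subject by the (log) variance captured along the projection,
# fit a normative age trend, and flag large deviations.
score = np.log(np.einsum('i,sij,j->s', v, fc, v))
coef = np.polyfit(age, score, 1)
resid = score - np.polyval(coef, age)
z = resid / resid.std()
print(f"subjects flagged as deviant (|z| > 2): {np.sum(np.abs(z) > 2)}")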
Paths to depolarization block: modeling neuron dynamics during spreading depolarization events | Marisa Saggio
Marisa Saggio 1, Roustem Khazipov 2,3, Daria Vinokurova 2, Azat Nasretdinov 2, Viktor Jirsa 1, Christophe Bernard 1
1 Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
2 Laboratory of Neurobiology, Institute of Fundamental Medicine and Biology, Kazan Federal University, Kazan, 420008, Russia
3 Aix-Marseille University, INMED, INSERM, Marseille, 13273, France
Spreading Depolarization (SD) is a pathological state of the brain involved in several brain diseases, including epilepsy and migraine. SD consists of a slowly propagating wave of nearly complete depolarization of neurons, classically associated with a depression of cortical activity. This homology between SD and spreading depression has recently been challenged [1]: during SD events, which only partially propagate from the cortical surface to depth, neuronal activity may be suppressed, unchanged, or elevated depending on the distance to the depth at which the SD stops. These patterns can be explained by analysing the activity of single neurons. In layers invaded by SD, neurons lose their ability to fire, entering Depolarization Block (DB), while neurons far from the SD maintain their membrane potential. However, neurons in between unexpectedly displayed patterns of prolonged sustained firing.
In the present work [2], we build a phenomenological model, incorporating key features observed during DB in this dataset (current-clamp patch-clamp recordings from 10 L5 pyramidal neurons in the rat somatosensory cortex during SDs evoked by distant application of 1M KCl), that is able to predict the newly observed patterns. We model the L5 neuron as an excitable system close to a SNIC bifurcation [3], using the normal form of the unfolding of the degenerate Takens-Bogdanov singularity for the fast dynamics [4], under the modulatory effect of two slow variables. The model's bifurcation diagram provides a map for neural activity that includes DB together with the patterns observed for intermediate levels of depolarization. We identify five qualitatively different scenarios for the transition from healthy activity to DB. A SNIC bifurcation accounts for the transition from the healthy state to sustained oscillations, and either a supercritical Hopf or a Fold Limit Cycle bifurcation accounts for the transition to DB at strong levels of depolarization. Both can occur with or without bistability between the healthy and pathological states, giving four possible scenarios. These scenarios encompass the mechanisms for DB present in the modeling literature and allow us to understand them from a unified perspective. We add another mechanism based instead on movement in state space. Time series in our dataset are consistent with these scenarios; however, the presence of bistability cannot be inferred from our analysis.
Understanding how brain circuits enter and exit SD is important for designing strategies aimed at preventing or stopping it. In this work we use modeling to gain mechanistic insight into the ways a neuron can transition to DB or to different patterns of sustained oscillatory activity during SD events, as observed in our dataset. While our work provides a unified perspective on the modeling of DB, ambiguities remain in the data analysis. These ambiguities could be resolved by scenario-dependent theoretical predictions, for example for the effect of stimulation, that could then be tested experimentally.
A Biologically Plausible Associative Memory Network | Mohadeseh Shafiei Kafraj
1. Mohadeseh Shafiei Kafraj (Gatsby Computational Neuroscience Unit, University College London)
2. Dmitry Krotov (MIT-IBM Watson AI Lab, IBM Research)
3. Brendan A. Bicknell (Gatsby Computational Neuroscience Unit, University College London)
4. Peter E. Latham (Gatsby Computational Neuroscience Unit, University College London)
The Hopfield network (Hopfield 1982) has been the leading model of associative memory for over four decades, culminating in the recent 2024 Nobel Prize. However, the vanilla version of the Hopfield network has a capacity that scales with the number of connections per neuron (Roudi 2007). In the mammalian brain that number is about 1,000, leading to a capacity of about 50 memories in a spiking network, regardless of its size. It therefore cannot account for the capacity of human memory.
To address these limitations, various modifications of the Hopfield network have been proposed. One promising variant, Dense Associative Memory, also known as the Modern Hopfield Network, incorporates a two-layer architecture with memory and feature neurons (Krotov 2021), which significantly increases storage and recall capacity. However, this model lacks biological plausibility in important ways. Its capacity is bounded by the number of memory neurons, and, critically, recalling a memory requires most neurons in the memory layer to remain silent. This behavior contradicts cortical dynamics, where neurons rarely remain silent for extended periods (Buzsáki 2014).
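For concreteness, a minimal sketch of retrieval in a two-layer Dense Associative Memory with a softmax memory layer (all parameters illustrative); note how recall drives the memory layer to be mostly near-silent, the property discussed above.

import numpy as np

rng = np.random.default_rng(3)
N, K = 100, 500                              # feature neurons, stored memories
Xi = rng.choice([-1.0, 1.0], size=(K, N))    # one memory per row

def softmax(u, beta=8.0):
    e = np.exp(beta * (u - u.max()))
    return e / e.sum()

def recall(x, steps=5):
    # Memory-layer activations h are a softmax over similarities; feature
    # neurons are then driven by Xi^T h. Most entries of h stay near zero.
    for _ in range(steps):
        h = softmax(Xi @ x / np.sqrt(N))
        x = np.sign(Xi.T @ h)
    return x

target = Xi[0]
probe = target.copy()
probe[rng.choice(N, 15, replace=False)] *= -1   # corrupt 15 of 100 bits
out = recall(probe)
print("overlap with stored memory:", (out @ target) / N)

Even with K = 500 memories and only N = 100 feature neurons, the corrupted pattern is recovered, illustrating the super-linear capacity of the two-layer architecture.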
This is not easy to fix: the memory layer contains a large number of neurons, and allowing silent neurons to exhibit even low firing rates can introduce an unacceptable level of noise, preventing the perfect recall of stored memories. To address these challenges, we propose a new biologically plausible model for associative memory. This model supports polynomial capacity while integrating dendritic computations, enabling non-recalled memory neurons to exhibit nonzero firing rates without compromising the perfect recall of a large number of memories. The proposed architecture adheres to key biological constraints, including the presence of both excitatory and inhibitory populations that obey Dale’s law and maintain non-saturated firing rates. These properties enhance the model’s biological plausibility while achieving polynomial capacity, bridging the gap between theoretical and biological constraints on associative memory.
Understanding neuronal responses to transient inputs: a dynamical systems approach | Justyna Signerska-Rynkowska
Experimental studies of neuronal dynamics involve recording of both spontaneous activity patterns and the responses to sustained and short-term inputs. Although spontaneous activity of neurons has received much theoretical attention, the dynamic processes that influence neuronal responses to transient inputs are less understood. We describe underlying dynamical mechanisms shaping these responses in a widely accepted class of nonlinear adaptive hybrid models and discuss related phenomena: post-inhibitory facilitation (PIF) and slope-detection. In PIF an otherwise subthreshold excitatory input can induce a spike if it is applied with proper timing after an inhibitory pulse, while neurons displaying slope-detection property spike to a transient input only when the input’s rate of change is in a specific, bounded range.
Concerning PIF, we provide a geometric characterization of the associated dynamical structures. For slope-detection, we give a complete analytical description for tent inputs. Moreover, although these phenomena have previously been associated with Type III neurons in Hodgkin's classification, we show that PIF and slope-detection extend beyond the Type III regime.
This is a joint work with Jonathan Rubin (University of Pittsburgh) and Jonathan Touboul (Brandeis University).
Rate-like dynamics of memory-dependent spiking neural networks | Kasper Smeets
Kasper Smeets(1, *), Valentin Schmutz(2), Wulfram Gerstner(1)
(1) Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Switzerland
(2) UCL Queen Square Institute of Neurology, University College London, United Kingdom
(*) Presenting author
Neuronal populations exhibit the ability to reliably perform complex computations and process signals despite significant trial-to-trial variability in the spiking activity of individual neurons. This observation suggests that the collective dynamics of neuronal populations are not only robust to the stochastic nature of single-neuron spiking but also approximately deterministic at the population level.
Spiking neuron models such as the Leaky Integrate-and-Fire (LIF) model and the Spike Response Model 0 (SRM0) with escape noise capture well the stochastic and discrete nature of spike transmission. In contrast, population dynamics are often modelled through rate-based models as a qualitative account of collective processing, as these lend themselves well to analytical treatment, for example with dynamical systems theory. The precise link between these two ends of the spectrum is still poorly understood in non-homogeneous networks of neurons, especially in the case of memory-dependent spiking neurons.
Here, we bridge this gap by showing how the dynamics of complex spiking neural networks can converge to deterministic solutions, even in the absence of neural duplicates. Further, we show that the dynamics of the limit spiking network equal those of a generalised rate network, which differs from conventional rate networks in that it, too, is memory-dependent.
Specifically, the activity of a generalised rate neuron is the expected value of the corresponding spiking neuron, given the input to the neuron. As the spiking neuron itself is memory-dependent, there exists no direct mapping between the current membrane potential and the expected firing rate. Instead, the generalised activity is itself memory-dependent, with an analytical form obtained by marginalising over all possible spike-time histories of the neuron.
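A brute-force illustration of this definition: the sketch below simulates an SRM0 neuron with an exponential escape rate and a refractory kernel, and estimates the generalised (rate-like) activity by averaging over trials, i.e., a Monte Carlo stand-in for the analytical marginalisation; kernel shapes and parameters are illustrative.

import numpy as np

rng = np.random.default_rng(4)
T, dt = 2000, 1.0                                # ms, 1 ms bins
t = np.arange(T) * dt
I = 0.8 + 0.4 * np.sin(2 * np.pi * t / 500.0)    # shared time-varying drive

def eta(s):                                       # refractory kernel (the memory)
    return -3.0 * np.exp(-s / 20.0)

def escape_rate(u, r0=0.01, du=0.5):              # exponential escape noise
    return r0 * np.exp(u / du)                    # spikes per ms

def run_trial():
    # One SRM0 trial: potential = drive + refractory kernel of the last spike.
    spikes, last = np.zeros(T), -np.inf
    for k in range(T):
        u = I[k] + (eta(k * dt - last) if np.isfinite(last) else 0.0)
        if rng.random() < 1.0 - np.exp(-escape_rate(u) * dt):
            spikes[k], last = 1.0, k * dt
    return spikes

# Generalised (rate-like) activity: the expectation over spike histories,
# estimated here by trial averaging instead of the analytical marginalisation.
trials = np.array([run_trial() for _ in range(400)])
rate = trials.mean(axis=0) / dt * 1000.0          # Hz
print(f"expected rate: peak {rate.max():.1f} Hz, trough {rate.min():.1f} Hz")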
The noise-robust dynamics arise in part from the mixed representation of a small number of population-level driving factors (Fig. 1B), characterised by connectivity matrices of reduced rank (Fig. 1D). This is described well by latent factors theory and the concentration of measure phenomenon, which together have also been used before to explain the emergence of rate-like dynamics in duplicate-free linear-nonlinear-Poisson spiking networks, holding as long as the number of latent factors grows sub-linearly with network size.
The setup used is a multilayer feedforward spiking network consisting of SRM0 neurons (Fig. 1C). The external input to the first layer is a random, duplicate-free encoding of a set of latent factors, each generated as the formal derivative of a Wiener process. The layer-to-layer connectivity is set as a decoding and a new duplicate-free encoding of the latent factors, apart from the connectivity to the output layer which is only a decoding.
The spiking network is shown to converge to the equivalent generalised rate network described above, and not to a conventional rate network, with convergence measured through the mean absolute difference between equivalent output membrane potentials (Fig. 1E-F).
These findings link complex and highly variable spiking behaviour to the empirically motivated assumption of approximately deterministic population-level dynamics, as observed in behavioural studies. Furthermore, they show that rate-coding principles can hold even when individual neurons exhibit intricate and diverse dynamics, enabling new analytical treatments of heterogeneous memory-dependent networks.
Activity-Dependent Homeostatic Plasticity Maintains Circuit-Level Dynamic Properties with Local Activity Information | Lindsay Stolting
Neural circuits are remarkably robust to perturbations that threaten their function, such as changing cellular environments and the constant turnover of the membrane proteins that dictate their electrical properties. Nowhere is this robustness more evident than in the motor circuits that direct rhythmic behaviors, such as the crustacean stomatogastric ganglion (STG). Not only must STG neurons burst rhythmically, but they must burst in a specific order for the animal's pyloric muscles to function properly in a digestive rhythm. Remarkably, they continue to do so despite various challenges, such as temperature change and pharmacological manipulation. Research has attributed this resilience to activity-dependent homeostatic plasticity (ADHP), which prevents the chronic over- or under-activation of individual neurons by up- or down-regulating their ionic currents. Previous work has suggested how such a mechanism, operating on information about single-neuron activity levels, might improve circuit-wide signal propagation and encourage rhythmic bursting by calibrating each neuron's activation function to the magnitude of the inputs it receives. But how could such a mechanism maintain other properties that are less directly connected to the average activity of individual neurons, such as burst order?
We explored this question in a computational model of the pyloric pattern generator. First, we developed a set of criteria to measure the pyloric character of a pattern generator. We used this measure to optimize a set of pyloric models. Finally, we optimized ADHP mechanisms for these models that could maintain their pyloric character in the face of various parameter perturbations. This proved a relatively easy task for many of our models, suggesting that local information about neural activity levels can indeed be used to maintain circuit-level dynamic properties.
We then used our model to examine more closely what makes this possible. Neural activity levels are not necessarily related to pyloric ordering characteristics: correctly ordered rhythms may have a variety of average activity levels, and a given average activity level may occur either for a pyloric rhythm or for non-pyloric dynamics. In restricted subsets of network parameter space, however, this degeneracy breaks down. In other words, when only some fraction of possible circuit configurations is considered, there exist average neural activity levels that occur only among pyloric pattern generators and never among non-pyloric ones. ADHP that targets these neural activity levels is therefore assured to restore pyloricness from a subset of possible perturbations, despite lacking any direct information about this circuit-level characteristic. This highlights the importance of considering which perturbations homeostatic mechanisms are expected to contend with, and may explain ADHP's success at maintaining functional properties for which individual neural activity is unlikely to be a direct proxy.
Authors:
Lindsay Stolting & Randall D Beer
Cognitive Science Department & Program in Neuroscience, Indiana University
Dynamics of synaptic weights under spike-timing-dependent plasticity | Jakob Stubenrauch
Jakob Stubenrauch (a,b) and Benjamin Lindner (a,b)
(a) BCCN Berlin, (b) Physics Department, HU Berlin
Spike-timing-dependent plasticity (STDP) has long been proposed as a phenomenological model class for synaptic learning [1]. Yet, in networks of spiking neurons, the stochastic process of synaptic weights implied by STDP rules has not been fully characterized. Here, we leverage recent advances in the theory of shot noise [2] to analytically compute the drift and diffusion of synapses through which Poisson processes feed into a recurrent network of leaky integrate-and-fire neurons. The theory subdivides the cause of synaptic drift into contributions relating to different properties of the postsynaptic neurons. Possible applications include the theory of learning: for instance, under a given training paradigm, one could compute the memory capacity and relate learning success to biophysical parameters.
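As a point of comparison for the analytical theory, the following sketch estimates the drift and diffusion of a single weight by Monte Carlo for the simplest pair-based STDP rule with independent Poisson pre- and postsynaptic trains; this baseline deliberately omits the pre-to-post correlations that the shot-noise theory accounts for.

import numpy as np

rng = np.random.default_rng(5)
r_pre, r_post = 10.0, 8.0                      # Hz, independent Poisson trains
A_plus, A_minus, tau_s = 0.005, 0.006, 0.02    # pair-based STDP rule
T = 2.0                                        # observation window (s)

def weight_increments(n_traj=2000):
    # Delta w over a window T for an all-to-all pair-based rule
    # (no back-coupling of the weight onto the spiking).
    dw = np.empty(n_traj)
    for i in range(n_traj):
        pre = np.sort(rng.uniform(0, T, rng.poisson(r_pre * T)))
        post = np.sort(rng.uniform(0, T, rng.poisson(r_post * T)))
        lags = post[:, None] - pre[None, :]    # t_post - t_pre
        pot = A_plus * np.exp(-lags[lags > 0] / tau_s).sum()
        dep = A_minus * np.exp(lags[lags < 0] / tau_s).sum()
        dw[i] = pot - dep
    return dw

dw = weight_increments()
print(f"drift     ~ {dw.mean() / T:+.2e} per s")
print(f"diffusion ~ {dw.var() / (2 * T):.2e} per s")
# For independent trains the drift has the closed form
# r_pre * r_post * tau_s * (A_plus - A_minus):
print(f"analytic drift: {r_pre * r_post * tau_s * (A_plus - A_minus):+.2e} per s")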
[1]: G. Bi and M. Poo, Journal of Neuroscience 18, 10464 (1998).
[2]: J. Stubenrauch and B. Lindner, Phys. Rev. X 14, 041047 (2024).
gPC-based robustness analysis of neural systems through probabilistic recurrence metrics | Uros Sutulovic
Authors: Uros Sutulovic, Daniele Proverbio, Rami Katz, and Giulia Giordano.
All the authors are with Department of Industrial Engineering, University of Trento, Italy.
Abstract:
Neuronal systems exhibit astounding complexity; yet they manage to preserve their characteristic function and signaling patterns despite large uncertainties, variability, and external disturbances. In fact, their associated models embed uncertain parameters that are hard to estimate and often impossible to measure directly. Robustness analysis is a powerful method for understanding how key characteristics of the model's network structure and sets of uncertain parameters enable the persistence of desired dynamical patterns and properties. In particular, probabilistic robustness analysis provides tools that quantify not only whether a property holds, but also with what likelihood, given uncertain parameters with a known probability distribution. Probabilistic analysis typically relies on Monte Carlo (MC) methods, which employ many simulations with sampled random variables and then compute summary statistics from the random realizations to quantify the likelihood of the emergence of a desired pattern. However, MC methods suffer from poor scalability: especially for complex systems in neuroscience, they would require prohibitive computational power to adequately conduct probabilistic analysis, thus making large parameter spaces or combinations inaccessible.
To make probabilistic robustness studies more efficient and scalable for complex neuroscience models, we explore the effectiveness of surrogate models as an alternative to MC approaches. We employ generalized polynomial chaos (gPC) methods, which represent stochastic processes as a series expansion with respect to an appropriate basis of orthogonal polynomials related to the distribution of the uncertain system parameters; these methods leverage the linearity associated with the spectral representation to directly compute the summary statistics of interest, thus allowing fast, efficient and accurate extraction of statistical moments for any system.
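A self-contained sketch of non-intrusive gPC for a scalar model with one uniformly distributed parameter (Legendre basis, Gauss-Legendre quadrature); the model function below is a stand-in for a summary statistic of a neural model such as HR, JR, or the Epileptor.

import numpy as np
from numpy.polynomial import legendre

def model(a):
    # Quantity of interest as a function of an uncertain parameter a in [-1, 1]
    # (a stand-in for, e.g., a bifurcation parameter of a neural model).
    return np.sin(2.0 + 1.5 * a) + 0.3 * a ** 2

# Project the model onto the Legendre basis using Gauss-Legendre quadrature.
order, nq = 8, 16
xq, wq = legendre.leggauss(nq)               # nodes/weights on [-1, 1]
fq = model(xq)
coeffs = []
for k in range(order + 1):
    Pk = legendre.Legendre.basis(k)(xq)
    norm = 2.0 / (2 * k + 1)                 # integral of P_k^2 over [-1, 1]
    coeffs.append(np.sum(wq * fq * Pk) / norm)
coeffs = np.array(coeffs)

# Statistical moments fall out of the spectral coefficients directly
# (uniform density 1/2 on [-1, 1]):
mean_gpc = coeffs[0]
var_gpc = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, order + 1) + 1))

# Monte Carlo reference for comparison
a = np.random.default_rng(6).uniform(-1, 1, 200_000)
print(f"mean: gPC {mean_gpc:.5f} vs MC {model(a).mean():.5f}")
print(f"var : gPC {var_gpc:.5f} vs MC {model(a).var():.5f}")

The 17 model evaluations of the quadrature match the 200,000-sample Monte Carlo estimate to several decimals, which is the efficiency gain the abstract refers to.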
We apply various gPC methods on widely used models of neural dynamics that can exhibit multiple dynamical regimes, at different scales: the Hindmarsh-Rose (HR) model for single-neuron, the Jansen-Rit (JR) model for neuron networks and the Epileptor model for whole brain regions. We assess the trade-off between efficiency and accuracy of different gPC approaches, selecting the most performing computational settings to study the effects of parametric uncertainty on the average signaling of these neural models.
To perform a comprehensive exploration of parameter spaces, we develop a novel pipeline. Since standard metrics in neuroscience, such as inter-spike intervals and firing rates, fall short for stochastic time series, we quantify probabilistic robustness with a novel methodology that combines recurrence plot analysis and automated persistency analysis. This new method has two main benefits: first, within the parameter space of interest, it quantifies the level of uncertainty for which a certain regime gets disrupted; second, it systematically identifies “regions of safe operation”, i.e., areas in the parameter space where a certain regime, quantified by the persistency of the associated pattern in the recurrence plot, holds despite stochasticity.
The results obtained for the HR and JR models enrich the biological insight generated with bifurcation analysis, by clarifying the effect of uncertainties and stochasticity and allowing to formulate new hypotheses and possibly falsify models. The proposed methodology enables new possibilities to unravel the robustness properties of complex systems in neuroscience and provides a powerful and versatile tool to all researchers in the field.
Neural Signal Prediction and Demixing via Multi-Time Delay Reservoir Computing | Kamyar Tavakoli
Brains process information through large-scale networks of interconnected neurons. Their collective activity offers a high-dimensional space for flexible input processing, a principle exploited in reservoir computing (RC). In classical RC, a randomly connected network of nonlinear units is employed and only the readout layer is trained, an approach that is well suited to classification and prediction tasks due to the reservoir's inherent memory and high-dimensional dynamics. Inspired by these properties, time-delay reservoir computing was developed as an alternative approach that replaces large-scale networks with a single nonlinear delay differential equation, which can be implemented in various physical and computational systems. Time delays are inherent components of the dynamics of the brain's feedback circuits, which constitute its multiple networks; in this sense, time-delay RC explores the signal-processing potential of such brain networks. In this work, we specifically study how introducing multiple delays influences prediction performance for tasks of varying complexity involving signals with different levels of correlation (1). The properties of this supervised learning depend on the eigenvalue spectrum at the fixed point around which the input to be learned induces high-dimensional transients. We furthermore exploit time-delay reservoirs for signal demixing, where a single input channel presents the RC with a mixture of two chaotic signals (2). This scenario is akin to challenges in the auditory system that require speech separation, as well as to the cancellation of redundant (i.e. predictable) signals, such as self-generated motion or the superposition of rhythms that occurs when weakly electric fish are in each other's proximity. Our findings reveal that tuning the delay-distribution parameters and the feedback gain can improve signal separation, providing insight into the broader applicability of delay-based reservoir models. We furthermore employed a multi-layer reservoir architecture, as described in the literature, to improve the demixing of chaotic signals, thereby demonstrating the potential of deeper reservoir designs for more complex separation tasks.
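An illustrative discrete-time toy of a multi-delay reservoir with a ridge-regression readout; this is not the delay-differential model of refs. (1)-(2), and the architecture and parameters are invented for demonstration.

import numpy as np

rng = np.random.default_rng(7)

# Input: a signal to be predicted one step ahead (a noisy sine for brevity)
T = 4000
u = np.sin(0.07 * np.arange(T)) + 0.1 * rng.standard_normal(T)

# Multi-delay reservoir: N units driven by the input and by delayed feedback
# from the reservoir's mean activity through several delay loops.
N, delays, gains = 200, [3, 17, 41], [0.4, 0.3, 0.2]
W_in = rng.uniform(-1, 1, N)
masks = [rng.uniform(-1, 1, N) for _ in delays]

x_hist = np.zeros((T, N))
for t in range(1, T):
    fb = sum(g * m * x_hist[t - d].mean()
             for d, g, m in zip(delays, gains, masks) if t - d >= 0)
    x_hist[t] = np.tanh(W_in * u[t] + fb)

# Ridge-regression readout trained for one-step-ahead prediction
X, y = x_hist[100:3000], u[101:3001]
w = np.linalg.solve(X.T @ X + 1e-4 * np.eye(N), X.T @ y)
pred = x_hist[3000:3999] @ w
err = np.sqrt(np.mean((pred - u[3001:4000]) ** 2)) / u.std()
print(f"normalized one-step prediction RMSE: {err:.3f}")

Varying the entries of `delays` and `gains` is the discrete analogue of tuning the delay-distribution parameters and feedback gain discussed above.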
1. Tavakoli S Kamyar, Longtin A (2024) Boosting reservoir computing performance with multiple delays. Phys. Rev. E 109, 054203. DOI: 10.1103/PhysRevE.109.054203 (see Viewpoint in APS Physics Magazine)
2. Tavakoli S Kamyar, Longtin A (2025) Signal de-mixing using multi-delay multi-layer reservoir computing. PLoS Complexity (in press)
Homeostatic gain modulation drives changes in heterogeneity expressed by neural populations | Daniel Trotter
We show that the diameter of the directed configuration model with
Human brain dynamics are shaped by rare long-range connections over and above cortical geometry | Jakub Vohryzek
A fundamental topological principle is that the container always shapes the content. In neuroscience, this translates into how the brain anatomy shapes brain dynamics. From neuroanatomy, the topology of the mammalian brain can be approximated by local connectivity, accurately described by an exponential distance rule (EDR). The compact, folded geometry of the cortex is shaped by this local connectivity and the geometric harmonic modes can reconstruct much of the functional dynamics. However, this omits the fundamental role of the rare long-range cortical connections, crucial for improving information processing in the mammalian brain, but not captured by local cortical folding and geometry. In this talk, we show the essential contribution of harmonic modes combining rare long-range connections with EDR (EDR+LR) in describing functional dynamics (specifically long-range functional connectivity and task-evoked brain activity) compared to geometry and EDR representations. Importantly, the orchestration of task dynamics is carried out by a more efficient manifold made up of a low number of fundamental EDR+LR modes. In summary, these results unify the different anatomical constraints by showing the importance of rare long-range connectivity together with EDR in capturing the complexity of functional brain activity.
Minimizing information loss reduces spiking neuronal networks to differential equations | Zhuo-Cheng Xiao
Spiking neuronal networks (SNNs) are widely used in computational neuroscience, from biologically realistic modeling of local cortical networks to phenomenological modeling of the whole brain. Despite their prevalence, a systematic mathematical theory for finite-sized SNNs remains elusive, even for idealized homogeneous networks. The primary challenges are twofold: 1) the rich, parameter-sensitive SNN dynamics, and 2) the singularity and irreversibility of spikes. These challenges pose significant difficulties when relating SNNs to systems of differential equations, leading previous studies to impose additional assumptions or to focus on individual dynamic regimes. In this study, we introduce a Markov approximation of homogeneous SNN dynamics to minimize information loss when translating SNNs into ordinary differential equations. Our only assumption for the Markov approximation is the fast self-decorrelation of synaptic conductances. The system of ordinary differential equations derived from the Markov model effectively captures high-frequency partial synchrony and the metastability of finite-neuron networks produced by interacting excitatory and inhibitory populations. Besides accurately predicting dynamical statistics, such as firing rates, our theory also quantitatively captures the geometry of attractors and bifurcation structures of SNNs. Thus, our work provides a comprehensive mathematical framework that can systematically map the parameters of single-neuron physiology, network coupling, and external stimuli to homogeneous SNN dynamics.
Authors:
Jie Chang (School of Life Sciences, Peking University)
Zhuoran Li (School of Life Sciences, Peking University)
Zhongyi Wang (Courant Institute of Mathematical Sciences, New York University)
Louis Tao* (School of Life Sciences, Peking University)
Zhuo-Cheng Xiao* (NYU-ECNU Institute of Mathematical Sciences and NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai)
Modeling disorders of consciousness at the patient level reveals the network's influence on the diagnosis vs the local node parameters role in prognosis | Lou Zonca
Disorders of Consciousness (DoC) encompass a wide spectrum of conditions, ranging from coma to more aware (awake) states of consciousness, in patients who remain largely unable to communicate. Although there are universal clinical procedures to assess the level of consciousness of a DoC patient, precise diagnosis and prognosis remain a challenge. In this talk, I will discuss my current work on the development of DoC mathematical models calibrated at the single-patient level. The ultimate goal is to use these models as digital twins to propose better biomarkers, enhance prognosis, and test potential therapeutic approaches using numerical simulations.
I will present my latest results on the construction of a modeling pipeline that takes DoC patients' resting-state fMRI data as input and automatically provides fitted mathematical models for each patient.
The pipeline is organized as follows. First, the data is projected, using auto-encoders, into a latent space of optimal reduced dimension, which I will describe. Second, in this latent space, I implement an automatic parameter-fitting procedure that can be applied to different mathematical models. I will present and describe two models: (1) the Hopf model, which can be seen as a network of noisy oscillators, and which is known to provide good results for fMRI modeling but whose biological interpretation is limited; and (2) a new model that indirectly accounts for the regulatory role of astrocytes (a type of glial cell) on neuronal activity: the main advantage of this model, despite its higher complexity, is its more straightforward biological interpretation. Finally, the fitted parameters of the models provide us with two types of biomarkers: (1) the connectivity matrices, revealing network interactions at the global brain scale, tend to give us information regarding the diagnosis of the patients, i.e. the severity of their condition; (2) the local node parameters, on the other hand, tend to correlate with other relevant clinical information such as age, etiology, and prognosis.
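A minimal sketch of the first of these models, the Hopf whole-brain model, with toy connectivity and illustrative parameters (the auto-encoder projection and the fitting procedure are not shown):

import numpy as np

rng = np.random.default_rng(8)
n, G, dt, steps = 10, 0.5, 0.05, 20_000
a = -0.02 * np.ones(n)                          # node bifurcation parameters
omega = 2 * np.pi * rng.uniform(0.04, 0.07, n)  # intrinsic frequencies (rad/s)
C = rng.uniform(0, 1, (n, n))                   # toy structural connectivity
np.fill_diagonal(C, 0)
C /= C.sum(1, keepdims=True)
z = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

traj = np.empty((steps, n))
for k in range(steps):                          # Euler-Maruyama integration
    coupling = G * (C @ z - C.sum(1) * z)       # diffusive coupling
    dz = (a + 1j * omega) * z - np.abs(z) ** 2 * z + coupling
    noise = 0.02 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    z = z + dt * dz + np.sqrt(dt) * noise
    traj[k] = z.real
fc = np.corrcoef(traj[steps // 2:].T)           # simulated FC, to compare to data
print("mean off-diagonal simulated FC:", fc[~np.eye(n, dtype=bool)].mean().round(3))

In the fitting stage, parameters such as a, G, and the entries of C would be adjusted so that statistics of the simulated signals match each patient's empirical data.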
co-authors:
Lou Zonca [1], Anira Escrichs [1], Gustavo Patow [1,2], Dragana Manasova [3,4], Yonathan Sanz-Perl [1], Jitka Annen [5], Olivia Gosseries [5], Jacobo Diego Sitt [3], Gustavo Deco [1].
[1] Center for Brain and Cognition, University Pompeu Fabra, Barcelona, Spain
[2] Girona University, Spain
[3] Sorbonne Université, Institut du Cerveau – Paris Brain Institute – ICM, Inserm, CNRS, Paris, France
[4] Université Paris Cité, Paris, France
[5] Centre du cerveau, University Hospital of Liege and GIGA Consciousness, Liege University, Belgium
posters
Intrinsic Dimension of Working Memory Functional Neuro-Imaging Networks in Neurofibromatosis Type-1 | Amir-Abbas Khanbeigi
Amir-Abbas Khanbeigi{1}, Marta Czime Litwińczuk{1}, Shruti Garg{1,2}, Laura M. Parkes{1,2}, Mojtaba Madadi Asl{3,4*}, and Caroline A. Lea-Carnall{1,2*}
{1}: School of Health Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester Academic Health Science Centre, UK
{2}: Geoffrey Jefferson Brain Research Centre, Manchester Academic Health Science Centre, UK
{3}: School of Biological Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
{4}: Pasargad Institute for Advanced Innovative Solutions (PIAIS), Tehran, Iran
{*}: these authors contributed equally to this work
Introduction
Working memory, the ability to store and manipulate information temporarily, plays a central role in human cognition. Neurofibromatosis type-1 (NF1) is a genetic condition associated with working memory deficits. Emerging evidence suggests that disrupted network dynamics in NF1 may underlie working memory dysfunction [1] [2].
Insights from the physics of complex systems and dynamical systems theory can shed light on these network dynamics. It has been shown that the states of a system evolve towards an "attractor manifold" [3]. Theoretically, alterations in the dynamics lead to changes in the intrinsic dimension (ID) of the attractor manifold [4].
ID estimators have been broadly studied in the context of neural recordings [5] but have not been systematically studied in fMRI time series. Here, we focus on two main ID estimators: Two-Nearest-Neighbors (2NN) [6], which can detect ID in a minimal neighbourhood, and Fisher-Separability (FS) [7], which focuses on the internal structures of manifolds. We apply these techniques to fMRI data collected from NF1 and control adolescents during a working memory task to understand differences in brain network dynamics.
Methods
Imaging was acquired on a 3 T Philips Achieva scanner (TR = 2 s, TE = 12 ms and 35 ms) for 28 NF1 and 16 control adolescents. fMRI time series were obtained during 6 minutes of a working memory test comprising 6 task blocks, each consisting of 30 s of a 0-back and then 30 s of a 2-back working memory test. Voxel time series were parcellated using the Schaefer-300 scheme [8], which includes 300 cortical ROIs; this was achieved by averaging the time series of all voxels in each parcel. The 300 ROIs were then assigned to 14 networks, 7 in each hemisphere [9] (see Fig. 1A-C).
Image processing was performed with SPM12 (www.fil.ion.ucl.ac.uk/spm/), steps included: dual-echo image extraction and averaging by DEToolbox [10], slice-time correction and realignment to the first image, estimation of motion parameters (ART toolbox), unified segmentation by DARTEL algorithm [11], normalization to MNI space, functional denoising using Conn toolbox [12] and band-pass filtering (0.009 to 0.08 Hz) [13].
Sections of the time series related to the 2-back task were extracted and concatenated per person, each consisting of 72 time points. The ID of the time series for each of the 14 networks (see Fig. 1 C) was calculated using 2NN and FS ID estimators, per subject. The distribution of IDs for each brain network in the two cohorts was compared (See Fig. 1 E,F,G and H). Fig. 1 shows the procedure for 2NN ID estimation. A similar procedure is performed for FS ID estimation. Importantly, we note that these estimators are based on different mathematical assumptions and would not necessarily be expected to agree.
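For reference, the 2NN estimator of ref. [6] is only a few lines; the sketch below applies it to synthetic data with a known manifold dimension, using illustrative sizes matched to the task data (72 time points, 300 ROIs).

import numpy as np
from scipy.spatial import cKDTree

def twonn_id(X):
    # Two-NN ID estimator (Facco et al. 2017, ref. [6]): for each point take
    # mu = r2 / r1, the ratio of distances to its two nearest neighbours; the
    # maximum-likelihood estimate of the ID is then N / sum(log mu).
    dist, _ = cKDTree(X).query(X, k=3)        # columns: self, 1st NN, 2nd NN
    mu = dist[:, 2] / dist[:, 1]
    return len(X) / np.sum(np.log(mu))

# Sanity check on data with a known ID: a 3D manifold embedded linearly in a
# 300-dimensional "ROI" space.
rng = np.random.default_rng(9)
latent = rng.standard_normal((72, 3))          # 72 time points
X = latent @ rng.standard_normal((3, 300))     # embedding into 300 ROIs
print(f"estimated ID: {twonn_id(X):.2f} (true manifold dimension: 3)")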
The Mann-Whitney U test was used to assess differences between the groups and the Bonferroni correction was applied.
Results
Using 2NN, we found that the 2-back working memory time series of the LH Ventral Attention Network and the LH Frontoparietal Network both possess higher ID in controls than in NF1 (see Fig. 2A).
Using the FS method, we find that RH Ventral Attention and RH Dorsal Attention Networks exhibit lower ID in controls than in NF1 (see Fig. 2 B).
Conclusion
Using two ID estimators, 2NN and FS, that have been previously studied in the context of neural recordings, we show that ID estimators can detect significant differences between fMRI activity in specific brain networks in NF1 and control adolescents (see Fig. 2).
The higher 2NN network ID of the neurotypical population compared to NF1 reflects more complex dynamics, with the capacity to represent more diverse patterns of network activation. In addition, it suggests a less constrained manifold, which potentially requires more parameters to generate or label the brain states.
The higher FS network ID in the NF1 population compared to the control cohort suggests that there is more overlap in the network states of NF1. In other words, the activation states of the networks are more mutually separated and distinct in the control cohort.
References:
[1] Shilyansky, Carrie, et al. “Neurofibromin regulates corticostriatal inhibitory networks during working memory performance.” Proceedings of the National Academy of Sciences 107.29 (2010): 13141-13146.
[2] Ibrahim, Amira FA, et al. “Spatial working memory in neurofibromatosis 1: Altered neural activity and functional connectivity.” NeuroImage: Clinical 15 (2017): 801-811.
[3] Dudkowski, Dawid, et al. “Hidden attractors in dynamical systems.” Physics Reports 637 (2016): 1-50.
[4] Kuznetsov N, Reitmann V. "Attractor Dimension Estimates for Dynamical Systems: Theory and Computation." Vol. 38 of Emergence, Complexity and Computation. Cham: Springer International Publishing, 2021.
[5] Altan, Ege, et al. “Estimating the dimensionality of the manifold underlying multi-electrode neural recordings.” PLoS computational biology 17.11 (2021): e1008591.
[6] Facco, Elena, et al. “Estimating the intrinsic dimension of datasets by a minimal neighborhood information.” Scientific reports 7.1 (2017): 12140.
[7] Albergante, Luca, Jonathan Bac, and Andrei Zinovyev. “Estimating the effective dimension of large biological datasets using Fisher separability analysis.” 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019.
[8] Schaefer, Alexander, et al. “Local-global parcellation of the human cerebral cortex from intrinsic functional connectivity MRI.” Cerebral cortex 28.9 (2018): 3095-3114.
[9] Yeo, BT Thomas, et al. “The organization of the human cerebral cortex estimated by intrinsic functional connectivity.” Journal of neurophysiology (2011).
[10] Halai, Ajay D., Laura M. Parkes, and Stephen R. Welbourne. "Dual-echo fMRI can detect activations in inferior temporal lobe during intelligible speech comprehension." Neuroimage 122 (2015): 214-221.
[11] Ashburner, John. “A fast diffeomorphic image registration algorithm.” Neuroimage 38.1 (2007): 95-113.
[12] Nieto-Castanon, Alfonso. Handbook of functional connectivity magnetic resonance imaging methods in CONN. Hilbert Press, 2020.
[13] Garg, Shruti, et al. "Non-invasive brain stimulation modulates GABAergic activity in Neurofibromatosis 1." Scientific Reports 12.1 (2022): 18297.
Learning and connectivity in heterogeneous recurrent neural networks | Martina Acevedo
Authors:
Martina Acevedo (1, 2), Soledad Gonzalo Cogno (3), Germán Mato (1,4)
Affiliations:
(1) Balseiro Institute, Argentina.
(2) Hospital Del Mar Medical Research Institute (IMIM), Barcelona, Spain.
(3) Kavli Institute for Systems Neuroscience, NTNU, Norway.
(4) Bariloche Atomic Center, National Atomic Energy Commission, Argentina.
Neurons with different firing properties are known to be interconnected within the same circuits in the brain. Moreover, these cells are also known to be modulated by global network dynamics, such as neural oscillations with different time scales. Yet, it remains unknown what the connections between neurons with different firing properties are, and how these connections and the global network dynamics, for example in terms of oscillations, are related.
To address these questions, we first modelled recurrent neural networks (RNNs) whose individual neurons followed two variants of the integrate-and-fire family of models: the leaky integrate-and-fire (LIF) and the quadratic integrate-and-fire (QIF) model. We studied networks with different proportions of these neurons. The connections were random and independent. We characterized the global dynamics by calculating the participation ratio, an estimate of the linear dimensionality of the network dynamics. Previous studies have explored the link between the participation ratio and connectivity in systems with homogeneous intrinsic dynamics characterized by a linear relationship between firing rate and input current. We extended these quantifications to heterogeneous networks and analyzed the relationship between the participation ratio and the connectivity strength for different network compositions. We found that the dimensionality approaches the functional form of that in purely linear models, although the dependence on the connectivity is rescaled due to the change of gain in the nonlinear models.
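The participation ratio itself is straightforward to compute from the activity covariance; a minimal sketch on synthetic low-dimensional activity:

import numpy as np

def participation_ratio(X):
    # PR = (sum lambda_i)^2 / sum lambda_i^2 for the eigenvalues lambda_i of
    # the covariance of activity X (time x neurons): a standard estimate of
    # the linear dimensionality of the dynamics.
    lam = np.linalg.eigvalsh(np.cov(X.T))
    return lam.sum() ** 2 / np.sum(lam ** 2)

rng = np.random.default_rng(10)
T, N, k = 5000, 200, 5
latent = rng.standard_normal((T, k))               # k shared latent signals
X = latent @ rng.standard_normal((k, N)) + 0.5 * rng.standard_normal((T, N))
print(f"participation ratio: {participation_ratio(X):.1f} "
      f"(k = {k} latents plus private noise)")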
Next we sought to enforce specific ordered population dynamics onto the network. We applied FORCE learning to train the connections so that neural inputs would reproduce oscillatory, sequential, and Ornstein-Uhlenbeck stochastic patterns. After training, we analyzed the resulting connectivity. We found that the strength of the connections depends on the composition of the network. These changes in connectivity were associated with outliers in the eigenspectrum of the connectivity matrix. In addition, reciprocal connectivity motifs, in which two neurons are reciprocally connected, emerged in networks trained with oscillatory and sequential patterns. The abundance of this type of motif depended both on the neural activity time constant and on the network composition. Sequential patterns also gave rise to divergent motifs, in which one neuron forms strong connections onto two other neurons. In contrast, networks trained with Ornstein-Uhlenbeck patterns exhibited no local connectivity motifs. We compare these results to statistical analyses of connectivity patterns in the human and mouse cortex.
Generation, Stability, and Robustness of Rhythmic Locomotion Patterns | Zahra Aminzare
Rhythmic activity in neuronal networks underlies a wide range of repetitive behaviors essential for survival, including locomotion, digestion, and breathing. While oscillatory patterns may be produced locally within circuits, their functional impact often depends on interactions across neural populations and their response to feedback from the behaviors they control. This work focuses on locomotion patterns generated in the metathoracic segment of the stick insect’s middle leg, modeled using a central pattern generator and antagonistic motoneuron pairs. Employing an 18-dimensional system of coupled ODEs, we identify the dynamic mechanisms responsible for generating specific stepping rhythms; analyze the robustness and adaptability of these patterns to parameter changes; and investigate the coordination of rhythmic outputs across limb segments. Due to its experimental accessibility and diverse stepping patterns, stick insect locomotion provides a valuable model for studying rhythm generation and control. This study not only deepens our understanding of rhythm generation in locomotion but also offers insights that extend to other rhythm-generating neuronal systems.
Zahra Aminzare, University of Iowa
Jonathan Rubin, University of Pittsburgh
Impact of age-related perturbations on a bump attractor model of working memory | Alexandra Antoniadou
Alexandra Antoniadou (1,2), Sara Ibañez (1), Klaus Wimmer (1,2)
1 – Computational Neuroscience Group, Centre de Recerca Matemàtica
2 – Dept. of Mathematics, Universitat Autònoma de Barcelona
Normal aging in humans and non-human primates is associated with progressive cognitive changes, which are especially evident in working memory tasks. Working memory is governed by the dorsolateral prefrontal cortex (dlPFC), which undergoes pronounced alterations during normal aging, including myelin loss, synapse loss, and neuronal hyperexcitability. Despite a wealth of experimental data, a coherent theoretical framework of how these age-related neuronal changes interact and alter network dynamics in normal aging is currently lacking. Here, we investigated how both the individual and shared contributions of aging factors can lead to working memory decline in bump attractor networks.
We developed a bump attractor network that models the dynamics of the dlPFC neural representations underlying spatial working memory, incorporating two key aging factors: myelination deficits and neuronal hyperexcitability. The simulations were carried out using an adaptation of a spiking neural network consisting of excitatory and inhibitory populations of leaky integrate-and-fire neurons with sparse connectivity, which also incorporates short-term synaptic depression and facilitation (Hansel and Mato, 2013). Demyelination was modeled as an increase in the action potential failure rate (Ibañez et al., 2023), and changes in excitability were introduced by modifying the f-I curve of the leaky integrate-and-fire neurons, fitted to empirical data (Ibañez et al., 2020).
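A toy illustration of how the two aging factors enter at the single-neuron level (analytical LIF f-I curve; the threshold shift and per-spike failure probability below are illustrative stand-ins for the empirically fitted changes):

import numpy as np

def lif_rate(I, tau=0.02, theta=1.0, t_ref=0.002):
    # Stationary f-I curve of a noise-free LIF neuron (rate in Hz).
    I = np.asarray(I, dtype=float)
    out = np.zeros_like(I)
    supra = I > theta
    out[supra] = 1.0 / (t_ref + tau * np.log(I[supra] / (I[supra] - theta)))
    return out

I = np.linspace(0.5, 3.0, 6)
young = lif_rate(I)

# Aging factor 1 (illustrative): hyperexcitability as a lowered threshold
hyper = lif_rate(I, theta=0.85)

# Aging factor 2 (illustrative): demyelination as a per-spike transmission
# failure on the axon, thinning the transmitted spike train
p_fail = 0.3
demyelinated = (1 - p_fail) * young     # expected transmitted rate

for row in zip(I, young, hyper, demyelinated):
    print("I={:.2f}  young={:6.1f} Hz  hyperexcitable={:6.1f} Hz  "
          "demyelinated={:6.1f} Hz".format(*row))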
Our models predict that biologically plausible levels of myelin loss and hyperexcitability can account for substantial working memory impairment with aging, although via distinct mechanisms. Hyperexcitability leads to a spread of activity among neurons and increased neuronal correlations within the network. Consistent with theory, these changes in the network dynamics lead to increased diffusion of the activity bump and therefore predict less precise working memory representations.
In contrast, myelin loss severely impacts the amplitude but not the width of the activity bump, reducing firing rates and bump stability over time and thereby primarily impairing memory duration.
These findings highlight the different impacts of age-related changes on working memory circuit functionality, providing insights into the mechanisms of cognitive decline along with potential pathways for prevention and treatment.
Spatiotemporal integration properties in MT neurons affect motion discrimination | Lucia Arancibia
Lucia Arancibia1, Jacob Yates2, Alexander Huk3, Klaus Wimmer1, Alexandre Hyafil1
1. Computational Neuroscience Group, Centre de Recerca Matemàtica, Barcelona, Spain
2. Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
3. Fuster Laboratory, Departments of Psychiatry & Biobehavioral Sciences and Ophthalmology, UCLA, Los Angeles, CA, USA
Perception requires integrating noisy dynamic visual information across the visual field to identify relevant stimuli and guide decisions. While temporal integration has been studied extensively in experiments with highly controlled visual stimuli and reverse-correlation techniques, the nonlinear mechanisms underlying spatial integration are often neglected. More concretely, neurons in the Middle Temporal area (MT) show antagonistic motion-direction selectivity within their receptive fields (RFs) [1-3], and these neurophysiological properties could mediate the spatial suppression effects observed in perceptual motion discrimination tasks [4]. However, such a link between neural dynamics and behavioral responses is yet to be established. Here, we show that the spatial structure of the stimulus modulates MT responses in monkeys performing a decision making experiment with rich spatiotemporal motion stimuli. Further, we propose a model to generate global motion perception from local direction-selective neurons and find that spatial integration impacts perceptual choices.
In particular, we found that monkeys integrate spatial evidence sublinearly in a motion discrimination task due to (i) weaker impact of motion further away from the fovea, and (ii) surround suppression effects causing an attenuation of the responses to motion in the center of the stimulus. To investigate the neural basis of these effects, we used nonlinear regression models and showed that the instantaneous firing rates of MT neurons can be predicted from both excitatory and suppressive contributions of the motion fields within the neurons’ RFs.
We propose to link the findings at the neural and behavioral levels in a hierarchical model of spatiotemporal decision making in which spatial context effects modulate spatial stimulus integration in neurons of sensory areas, and a decision area supports temporal integration to give rise to perception [5]. In the input layer, two populations of sensory neurons respond preferentially to motion in opposite directions, but the structure of the connectivity generates heterogeneous spatial modulation of the responses and nonlinear spatial integration of the motion stimuli. A second layer consisting of two decision-encoding populations, each associated with a possible choice, integrates sensory evidence across time until the network reaches an attractor state through winner-take-all dynamics. This model reproduced the spatial effects found in the monkeys’ choices, providing evidence that contextual modulation mechanisms at the level of sensory neurons could be responsible for the perception of spatially distributed motion signals.
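A minimal rate-based caricature of this two-stage architecture is sketched below (ours, with made-up parameters; divisive normalization stands in for the fitted surround-suppression model):

    import numpy as np

    rng = np.random.default_rng(1)
    n_patches = 8
    w = np.exp(-np.arange(n_patches) / 4.0)    # weaker impact of eccentric patches

    def pooled(drive):                         # divisive normalization -> sublinear summation
        return float((w * drive).sum() / (1.0 + 0.5 * drive.sum()))

    phi = lambda x: 50.0 * np.tanh(np.maximum(x, 0.0) / 50.0)   # saturating transfer function
    dt, tau, rL, rR = 1.0, 50.0, 0.1, 0.1
    for _ in range(1000):
        eL = pooled(rng.poisson(3.0, n_patches))   # local evidence for leftward motion
        eR = pooled(rng.poisson(2.5, n_patches))   # local evidence for rightward motion
        rL += dt / tau * (-rL + phi(eL + 1.5 * rL - 2.0 * rR))   # self-excitation
        rR += dt / tau * (-rR + phi(eR + 1.5 * rR - 2.0 * rL))   # cross-inhibition
    print("choice:", "left" if rL > rR else "right")

The mutual inhibition makes one decision population win and settle into an attractor state, as in the full model.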
Taken together, our results provide a deeper understanding of how the brain processes dynamic visual information, and how specific nonlinear properties of sensory perception in sensory neurons shape perceptual choices.
1. Allman, J., Miezin, F., & McGuinness, E. (1985). Direction- and velocity-specific responses from beyond the classical receptive field in the middle temporal visual area (MT). Perception, 14(2), 105-126, 10.1068/p140105
2. Raiguel, S., Van Hulle, M. M., Xiao, D. K., Marcar, V. L., & Orban, G. A. (1995). Shape and spatial distribution of receptive fields and antagonistic motion surrounds in the middle temporal area (V5) of the macaque. European journal of neuroscience, 7(10), 2064-2082, 10.1111/j.1460-9568.1995.tb00629.x
3. Pack, C. C., Hunter, J. N., & Born, R. T. (2005). Contrast dependence of suppressive influences in cortical area MT of alert macaque. Journal of neurophysiology, 93(3), 1809-1815, 10.1152/jn.00629.2004
4. Tadin, D., Lappin, J. S., Gilroy, L. A., & Blake, R. (2003). Perceptual consequences of centre–surround antagonism in visual motion processing. Nature, 424(6946), 312-315, 10.1038/nature01800
5. Wimmer, K., Compte, A., Roxin, A., Peixoto, D., Renart, A., & De La Rocha, J. (2015). Sensory integration dynamics in a hierarchical network explains choice probabilities in cortical area MT. Nature communications, 6(1), 6177, 10.1038/ncomms7177
Co-evolutionary dynamics of two adaptively coupled Theta neurons | Felix Augustsson
Recent interest in co-evolutionary networks has highlighted the intricate interplay between dynamics on and of oscillator networks with mixed time scales.
We explore the collective behavior of excitable oscillators in a simple network of two Theta neurons with adaptive coupling without self-interaction.
Through a combination of bifurcation analysis and numerical simulations, we seek to understand how the level of adaptivity in the coupling strength influences the dynamics.
We first investigate the dynamics possible in the non-adaptive limit; our bifurcation analysis reveals stability regions of quiescence and spiking behaviors, where the spiking frequencies mode-lock in a variety of configurations.
Second, as we increase the adaptivity, we observe a widening of the associated Arnol’d tongues, which may overlap, giving rise to multistable configurations.
For larger adaptivity, the mode-locked regions may further undergo a period-doubling cascade into chaos. Our findings contribute to the mathematical theory of adaptive networks and offer insights into the potential mechanisms underlying neuronal communication and synchronization.
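For concreteness, a minimal simulation of such a two-neuron adaptive system might look as follows (the pulse shape and the Hebbian-like adaptation rule are generic assumptions of this sketch, not necessarily those analysed in the talk):

    import numpy as np

    dt, steps = 1e-3, 200000
    eta = np.array([0.5, 0.3])             # excitabilities of the two theta neurons
    theta = np.array([0.0, 1.0])
    kappa = np.array([1.0, 1.0])           # kappa[i]: coupling from the other neuron into i
    eps = 0.05                             # adaptivity (slow time scale)

    pulse = lambda th: (1.0 - np.cos(th)) ** 2 / 4.0   # smooth spike-shaped interaction

    for _ in range(steps):
        inp = kappa * pulse(theta[::-1])   # cross-coupling only, no self-interaction
        theta = theta + dt * ((1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * (eta + inp))
        kappa = kappa + dt * eps * (np.cos(theta[::-1] - theta) - kappa)   # slow adaptation
    print("phases:", theta % (2 * np.pi), "couplings:", kappa)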
Joint work with Erik Martens, Centre for Mathematical Sciences, Lund University
Hodgkin-Huxley framework-based associative memory for neural adaptation in the human temporal lobe | Diletta Bartolini
A key role of the human brain is to adapt to novel situations by leveraging prior experiences. This adaptability manifests behaviorally as faster reaction times to repeated or similar stimuli, and neurophysiologically as reduced neural activity, which can be observed through bulk-tissue measurements like fMRI or EEG. Various single-neuron mechanisms have been proposed to account for this macroscopic reduction in activity. A study using human single-neuron recordings in chronic invasive epilepsy monitoring has recently investigated these mechanisms using visual stimuli with abstract semantic similarities within an adaptation paradigm.
We are developing a mathematical model to replicate these experimental results, designing an associative memory model based on a variant of the Hodgkin-Huxley framework with time-delayed couplings, capable of storing patterns in its synaptic weights.
Our model simulates key neuronal processes by incorporating Fast-Spiking neurons, essential for sensory information processing in the neocortex, and Regular-Spiking with Adaptation neurons, the predominant excitatory neurons in the cortex with adaptable firing patterns. These neurons are interconnected via either electrical or chemical synapses, as both coexist in all nervous systems. Each neuron receives inputs from multiple others, enabling complex network dynamics. Electrical synapse weights are fixed, while chemical synapse weights are chosen based on what the network is required to memorize.
Preliminary results suggest that a model consisting of only 300 neurons successfully reproduces behavioral priming, evidenced by reduced latency in reinstating a learned pattern. Future steps will focus on elucidating the underlying mechanisms that facilitate the reinstatement of firing patterns. Additionally, we aim to implement learning algorithms that enable the network to encode more meaningful stimulus representations and achieve accurate classification of stimuli.
Authors:
Diletta Bartolini
Department of Mathematics, University of Pavia, Pavia, Italy
Faculty of Mathematics and Computer Science, UniDistance Suisse, Brig, Switzerland
Thomas P. Reber
Faculty of Psychology, UniDistance Suisse, Brig, Switzerland
Department of Epileptology, University of Bonn Medical Centre, Bonn, Germany
Matthias Voigt
Faculty of Mathematics and Computer Science, UniDistance Suisse, Brig, Switzerland
Neuronal rhythmic activity induced by the electrogenic Na+/K+-ATPase | Mahraz Behbood
Mahraz Behbood(1, 2), Louisiane Lemaire(1, 2,3), Jan-Hendrik Schleimer(1, 2), Susanne Schreiber(1, 2)
1. Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Germany
2. Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Germany
3. Inria Branch at the University of Montpellier, France
Brain rhythms observed during slow-wave sleep or seizures exhibit frequencies significantly lower than typical neuronal firing rates. The mechanisms underlying these slow brain waves are still a subject of research. These rhythms are thought to arise from synaptic interactions, network delays, intrinsic neuronal properties, or a combination thereof. This study investigates a generic mechanism through which individual neurons can generate slow rhythms: the interplay with ionic concentration dynamics, without expressing dedicated ion channels with slow kinetics.
Previous theoretical studies have linked neuron-intrinsic slow bursting to slow ion channels. Here, we demonstrate that the electrogenic nature of the Na+/K+-ATPase, through its influence on extracellular potassium dynamics, could play a role in generating rhythmic activity in all neuron models with class I excitability. While this concept was first suggested by Ayrapetyan (1971) based on experimental observations, it lacked precision and received limited attention.
Specifically, we demonstrate that such pump-mediated slow rhythmic activity appears as square-wave bursting, a rhythmic activity pattern found in class I excitable neurons (Fig 1A), organised around a hysteresis loop in a bistable region resulting from a saddle-node loop codim-2 bifurcation. Using slow-fast analysis, we demonstrate that the hysteresis loop formation relies on shear in the bifurcation diagram induced by the electrogenicity of the pump (Fig 1B and C).
Accordingly, we find that, depending on the density of the sodium-potassium pump, the system can exhibit four distinct regimes: rest, tonic spiking, bursting, and depolarization block. Through a comprehensive bifurcation analysis of the entire system, we identify that the transition from tonic spiking to bursting occurs via a cascade of period doubling, ultimately leading to chaotic behaviour at the border of changing dynamics (Fig 1D).
Finally, we propose an approach to reduce our conductance-based model to a quadratic integrate-and-fire framework, a simple class I excitable system, capturing the interaction between extracellular potassium and voltage dynamics. To preserve the heterogeneity and rich dynamics of the original system, we fit the bifurcation structure rather than voltage traces. This simplification allows for the study of network behaviour during fluctuations in extracellular potassium.
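The coupling structure behind this reduction can be conveyed with a toy sketch (ours, with arbitrary parameters; it produces a single pump-terminated burst rather than the tuned square-wave bursting regime):

    dt, T = 1e-4, 20.0
    v, K = -0.5, 4.0                    # membrane variable; initially elevated [K+]
    I0, gK, tauK = -0.1, 4.0, 2.0       # baseline drive, K+ sensitivity, pump clearance time
    K_rest, dK = 3.0, 0.05              # resting [K+]; K+ released per spike
    spikes, t = [], 0.0
    while t < T:
        I = I0 + gK * (K - K_rest)      # extracellular K+ depolarizes the neuron
        v += dt * (v * v + I)           # class I (QIF) membrane dynamics
        K += dt * (K_rest - K) / tauK   # pump-mediated clearance towards rest
        if v >= 10.0:                   # spike: reset and release K+
            v = -10.0
            K += dK
            spikes.append(t)
        t += dt
    print(len(spikes), "spikes; last spike at t =", round(spikes[-1], 2) if spikes else None)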
Our analysis clarifies how the electrogenicity of the Na+/K+-ATPase not only drives slow bursting but also plays a fundamental role in shaping diverse neuronal dynamics in simple class I excitable neuron models, dynamics that have been linked to various healthy and pathological conditions. Moreover, to facilitate the analysis of these dynamics in a network framework, we propose a method for simplifying class I excitable systems by fitting the bifurcation structure.
Frequency-dependent communication of information in networks of neurons in response to oscillatory inputs | Andrea Bel
Neurons and networks of neurons exhibit preferred frequency responses to oscillatory inputs, referred to as resonance (Hutcheon & Yarom, Trends in Neurosci., 2000; Richardson, Brunel & Hakim, J. of Neurophysiology, 2003; Ledoux & Brunel, Front. Comp. Neurosci., 2011; Rotstein & Nadim, J. of Comp. Neurosci., 2013). Resonance can be inherited from one level of organization to another (e.g., from neurons to networks) or be created at different levels of organization in an independent manner by a variety of mechanisms (Stark, Levi & Rotstein, PLoS Comp. Biology, 2022). The frequency-dependence (FD) of the communication of oscillatory information between neurons in a network can be characterized by a (FD) communication coefficient K, defined as the ratio between the (FD) responses of the indirectly and directly perturbed neurons.
How the FD properties of K are related to those of the participating neurons is not known. It is also unclear whether K can exhibit resonance in networks of non-resonant neurons. In this work, we address these issues by using minimal biophysically plausible network models, numerical simulations, and dynamical systems analysis.
We consider a minimal two-cell network with graded synaptic connections where one of the neurons (C1) receives an external oscillatory input and an input from the other cell (C2), while the latter receives only an input from C1.
Each cell is described by a linearized biophysical (conductance-based) model and does not exhibit intrinsic oscillations. The synaptic connectivity is nonlinear. The cells can be low- or band-pass filters and can have different filtering properties; moreover, both the filtering properties and the filter type can change when the cells are connected in a network. This formalism can easily be adapted to rate models.
We extend the concept of the coupling coefficient K used in the gap-junction literature to chemical synaptic connections. Specifically, K is defined as the quotient between the network impedances of C2 and C1. We extend the methods introduced in (Rotstein, J. Math. Neurosci., 2014) to develop dynamical systems tools for the qualitative analysis of the communication coefficient K, which becomes a curve parametrized by frequency in the phase-space diagram. In particular, we investigate how the shape of K depends on the interplay of the input frequency, the intrinsic properties of the nodes, and the synaptic connectivity. We show that for linear networks, when C2 is a low-pass filter, so is K. However, when the connectivity is nonlinear, K can be a band-pass filter even if C1 and C2 are low-pass filters. In general, we show that the presence of nonlinearities contributes to the generation of more complex and qualitatively different shapes of K than those for linear networks, including multiple resonances and anti-resonances. Our results highlight the ability of certain neuronal networks to communicate information in a FD manner and to develop preferred communication responses at non-zero frequencies.
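The measurement itself is straightforward to illustrate numerically; the sketch below (ours, with made-up parameters) estimates the frequency-dependent K for a toy two-cell network with linearized cells and a graded nonlinear synapse:

    import numpy as np

    def comm_coeff(freq, T=40.0, dt=1e-3):
        v1 = v2 = 0.0
        hi, lo = np.array([-np.inf, -np.inf]), np.array([np.inf, np.inf])
        syn = lambda v: 1.0 / (1.0 + np.exp(-4.0 * v))   # graded (nonlinear) synapse
        for i in range(int(T / dt)):
            t = i * dt
            I = np.sin(2.0 * np.pi * freq * t)           # oscillatory input to C1 only
            v1 += dt * (-v1 + I + 0.5 * syn(v2))         # linearized cell, nonlinear input
            v2 += dt * (-v2 + 0.8 * syn(v1))
            if t > T / 2:                                # keep only the steady state
                hi = np.maximum(hi, [v1, v2])
                lo = np.minimum(lo, [v1, v2])
        amp = (hi - lo) / 2.0
        return amp[1] / amp[0]                           # K = response of C2 / response of C1

    for f in [0.05, 0.2, 0.5, 1.0, 2.0]:
        print(f, round(comm_coeff(f), 3))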
Authors:
Andrea Bel, Departamento de Matemática, Universidad Nacional del Sur (UNS), Bahía Blanca, Argentina and CONICET, Argentina
Horacio G. Rotstein, Federated Department of Biological Sciences, New Jersey Institute of Technology & Rutgers University, Newark, NJ, USA.
Graduate Faculty, Graduate Program in Neuroscience, Center for Molecular and Behavioral Neuroscience (CMBN), Rutgers University. Corresponding Investigator, CONICET, Argentina.
Estimating neural connection strengths from firing intervals | Maren Bråthen Kristoffersen
Joint work with Bjørn Fredrik Nielsen (Norwegian University of Life Sciences) and Susanne Solem (Norwegian University of Life Sciences)
The standard activity-based neuron network model and firing data are utilized to compute the effective connection strengths between neurons in a network. This approach assumes a Heaviside response function, given external input, and a known initial state of neural activity. The Heaviside response function results in a highly nonlinear forward operator, mapping given connection strengths to firing intervals. Despite this complexity, the inverse problem of determining the connection strengths from firing intervals can be solved in a transparent manner. In fact, the inverse problem reduces to solving a linear system of algebraic equations. Additionally, a series of numerical experiments are presented to investigate the nature of the inverse problem.
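The following toy sketch (ours, not the authors' algorithm) illustrates why the Heaviside nonlinearity makes the inverse problem linear: along any trajectory the model is linear in the unknown connection strengths W, so W can be recovered by solving a linear least-squares system.

    import numpy as np

    rng = np.random.default_rng(2)
    n, dt, steps, theta = 4, 1e-3, 6000, 0.4
    W_true = rng.normal(0.0, 1.0, (n, n))
    u = np.zeros(n)
    rows, rhs = [], []
    for k in range(steps):
        if k % 200 == 0:                       # known, piecewise-constant external input
            I = rng.normal(0.5, 1.0, n)
        s = (u > theta).astype(float)          # Heaviside firing indicator
        du = -u + W_true @ s + I
        rows.append(s)
        rhs.append(du + u - I)                 # du/dt + u - I = W s  (linear in W)
        u = u + dt * du
    A, B = np.array(rows), np.array(rhs)
    W_est = np.linalg.lstsq(A, B, rcond=None)[0].T   # solve A W^T = B
    print("max recovery error:", np.abs(W_est - W_true).max())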
Wavelet packets and graph neuronal signal processing | Iulia Martina Bulai
TBP
Secondary Bifurcations in a Next Generation Neural Field Model | Oliver Cattell
Epilepsy is a dynamic complex disease involving a paroxysmal change in the activity of millions of neurons, often resulting in seizures. Tonic-clonic seizures are a particularly important class of these and have previously been theorised to arise in systems with an instability from one temporal rhythm to another via a quasi-periodic transition. We show that a recently introduced class of next generation neural field models has a sufficiently rich bifurcation structure to support such behaviour. A linear stability analysis of the space-clamped model is used to uncover the conditions for a Hopf-Hopf bifurcation whereby two incommensurate frequencies can be excited. Since the neural field model is derived from a biophysically meaningful spiking tissue model we are able to highlight the neurobiological mechanisms that can underpin tonic-clonic seizures as they relate to levels of excitability, electrical and chemical synaptic coupling, and the speed of action potential propagation. We further show how spatio-temporal patterns of activity can evolve in the fully nonlinear regime using direct numerical simulations far from a Turing bifurcation.
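For readers unfamiliar with this model class: the space-clamped dynamics build on the exact QIF mean-field equations of Montbrió, Pazó and Roxin (2015) for the population rate r and mean voltage v. A minimal simulation sketch of that underlying system, with illustrative parameter values rather than those used in the talk:

    import numpy as np

    dt, T = 1e-5, 2.0
    tau, Delta, eta, J = 0.02, 1.0, -5.0, 15.0   # time constant, heterogeneity, excitability, coupling
    r, v = 0.0, -2.0
    for _ in range(int(T / dt)):
        dr = (Delta / (np.pi * tau) + 2.0 * r * v) / tau
        dv = (v * v + eta + J * tau * r - (np.pi * tau * r) ** 2) / tau
        r, v = r + dt * dr, v + dt * dv
    print("steady-state rate and mean voltage:", r, v)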
The effect of systemic ketamine on working memory history dependencies | Konstantinos Chatzimichail
Konstantinos Chatzimichail 1,2, Melanie Tschiersch 1,2, Junda Zhu 3, Zhengyang Wang 3, Alexandre Mahrach 1, Christos Constantinidis 3, Albert Compte 1
1 – Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
2 – Universitat de Barcelona, Spain
3 – Vanderbilt University, Nashville, Tennessee, United States
Working memory (WM) representations in humans and monkeys are attracted to the immediate prior stimuli (serial dependence) but repulsed away from the long-term distribution of stimuli (history dependence). Recently, it has been shown that WM attractive serial dependence is reduced in patients with schizophrenia or autoimmune anti-NMDA receptor encephalitis – two symptomatically related diseases linked to hypofunctional NMDA receptors[1,2]. In contrast, people with dyslexia show a reduction in history biases while serial dependence is unaffected[3]. These distinct patterns across disorders offer valuable insights into the neural and biophysical mechanisms underlying WM processes, which remain unclear.
Here, we study the mechanisms underlying WM serial and history dependence in four male and female macaque monkeys performing a biased oculomotor delayed response task (biased-ODR). In each session, stimuli followed a bimodal Gaussian distribution, with two diametrically opposed reference locations, which changed from session to session. We recorded the neural activity of prefrontal neurons in two monkeys using acute recordings (with 128-contact Diagnostic Biochips probes). In some sessions, monkeys were administered ketamine, an NMDA receptor antagonist.
We used linear models to assess the serial and history dependence of the monkeys’ saccadic responses, and their dependence on ketamine. Surprisingly, when these factors were combined, monkeys exhibited mostly repulsive rather than attractive serial and history biases. Ketamine reduced the repulsive serial dependence but increased the repulsive history bias. Moreover, we used neural population decoders to predict the stimulus location from prefrontal neural activity and analyzed the relationship between the decoded locations and the monkeys’ responses. Our analyses suggest that different mechanisms underlie serial dependence and history biases in the prefrontal cortex, based on their inverse modulation by systemic NMDAR disruption and the close correspondence between decoding errors from prefrontal populations and behavioral errors in the task. These results have strong implications for attractor model simulations that implement serial dependence[1] and history effects[4] based on biophysically plausible NMDAR-dependent mechanisms.
Refs:
1. Stein, H. et al. Reduced serial dependence suggests deficits in synaptic potentiation in anti-NMDAR encephalitis and schizophrenia. Nat. Commun. 11, 4250 (2020).
2. Bansal, S. et al. Qualitatively Different Delay-Dependent Working Memory Distortions in People With Schizophrenia and Healthy Control Participants. Biol. Psychiatry Cogn. Neurosci. Neuroimaging (2023)
3. Lieder, I. et al. Perceptual bias reveals slow-updating in autism and fast-forgetting in dyslexia. Nat. Neurosci. 22, 256–264 (2019)
4. Eissa, T. et al. Learning efficient representations of environmental priors in working memory. PLoS Comput Biol 19(11) (2023)
Kuramoto model for populations of quadratic integrate-and-fire neurons with chemical and electrical coupling | Pau Clusella
Authors: Pau Clusella (UPC), Bastian Pietras, Ernest Montbrió (UPF)
Abstract:
The Kuramoto model (KM) is a minimal mathematical model for investigating the emergence of collective oscillations in populations of heterogeneous, self-sustained oscillators, including large-scale neuronal oscillations. Yet, it remains unclear how the parameters of the KM relate to parameters—such as chemical or electrical synaptic strengths—critical for setting up synchronization in biophysically realistic neuronal models.
Here, we derive the Kuramoto model (KM) corresponding to a population of weakly coupled, nearly identical quadratic integrate-and-fire (QIF) neurons with both electrical and chemical coupling. The ratio of chemical to electrical coupling sets the phase lag of the characteristic sine coupling function of the KM and critically determines the synchronization properties of the network. We apply our results to uncover the presence of chimera states in two coupled populations of identical QIF neurons. We find that the presence of both electrical and chemical coupling is a necessary condition for chimera states to exist. Finally, we numerically demonstrate that chimera states gradually disappear as coupling strengths cease to be weak.
Altogether, these results support the use of the KM for modeling studies in computational neuroscience and introduce a powerful mathematical framework for the analysis of the dynamics of QIF networks.
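Schematically, the reduced dynamics can be simulated as a Kuramoto-Sakaguchi system; in the sketch below (ours), the mapping from the coupling ratio to the phase lag alpha and effective strength K only indicates the structure of the result, not the exact derived formulas:

    import numpy as np

    rng = np.random.default_rng(3)
    N, dt, T = 200, 1e-2, 200.0
    g_elec, g_chem = 1.0, 0.5
    alpha = np.arctan2(g_chem, g_elec)       # phase lag set by the coupling ratio (schematic)
    K = np.hypot(g_elec, g_chem)             # effective coupling strength (schematic)
    omega = rng.normal(0.0, 0.01, N)         # nearly identical oscillators
    phi = rng.uniform(0.0, 2.0 * np.pi, N)
    for _ in range(int(T / dt)):
        z = np.mean(np.exp(1j * phi))        # Kuramoto order parameter
        phi += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - phi - alpha))
    print("synchrony |z| =", abs(np.mean(np.exp(1j * phi))))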
The geometry of primary visual cortex representations is dynamically adapted to task performance | Julien Corbo
Leyla R. Çağlar 1*, Julien Corbo 2*, O. Batuhan Erkat 2,3, Pierre-Olivier Polack 2
1- Windreich Department of AI & Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA
2- Center for Molecular and Behavioral Neuroscience, Rutgers University—Newark, Newark, NJ, USA
3- Graduate Program in Neuroscience, Rutgers University—Newark, Newark, NJ, USA
* Contributed equally
Perceptual learning requires brains to change the representations of sensory inputs to optimize perception and facilitate discrimination and generalization. Although the biological implementation of these mechanisms remains elusive, recent advances in the analysis of large neuronal population recordings suggest that the geometry of sensory representations is key to this process, preparing population activity to be read out at the next stage. Notably, it has been suggested that learning a discrimination task lowers the dimensionality of the representations to facilitate their linear separation. To test this hypothesis, we investigated the changes in dimensionality and representational geometry of perceptual manifolds in mice performing an orientation-discrimination Go/NoGo task. The task’s difficulty was varied progressively by making the NoGo orientation closer to that of the Go. We used calcium imaging data recorded in the V1 of trained mice performing the task, as well as naïve mice passively viewing the same stimuli. This design allowed us to compare the representations of stimuli with variable separability in the contexts of task performance and passive viewing. We found that the dimensionality of the responses increased with the similarity of the stimuli in both trained and naive animals. Strikingly, the dimensionality was lower in animals performing the task, suggesting that the biological implementation of the task relies on reducing the representations’ dimensionality. Accordingly, classifiers (SVMs and ANNs) trained on the raw high-dimensional neuronal activity vastly outperformed the animals, whereas classifiers trained on dimensionally reduced responses captured the mice’s performance. However, while the dimensionality of responses to visual cues predicted task performance across difficulty levels, differences in dimensionality at the same difficulty level did not explain performance variability. Instead, we found that the separability of the representations in their embedding space is a better predictor of individual performance, as evidenced by measuring the neural manifolds’ capacity and geometric properties, including manifold dimension and manifold radius. These results confirm that learning changes the geometric properties of early sensory representations in a way that favors linear readout mechanisms.
Stochastic Optimal Control and Estimation with Multiplicative and Internal Noise | Francesco Damiani
Authors (with affiliations):
-Francesco Damiani, Center for Brain and Cognition, Department of Engineering, Pompeu Fabra University, Barcelona, ES.
-Akiyuki Anzai, Department of Brain and Cognitive Sciences, University of Rochester, Rochester, USA.
-Jan Drugowitsch, Department of Neurobiology, Harvard Medical School, Boston, USA.
-Gregory C. DeAngelis, Department of Brain and Cognitive Sciences, University of Rochester, Rochester, USA.
-Rubén Moreno-Bote, Center for Brain and Cognition, Department of Engineering, Pompeu Fabra University, Barcelona, ES.
Abstract:
A pivotal brain computation relies on the ability to sustain perception-action loops. Stochastic optimal control theory offers a mathematical framework to explain these processes at the algorithmic level through optimality principles. However, incorporating a realistic noise model of the sensorimotor system, accounting for multiplicative noise in feedback and motor output as well as internal noise in estimation, makes the problem challenging. The algorithm commonly used today is the one proposed in the seminal study of Todorov (2005). After identifying some pitfalls in the original derivation (the assumed unbiasedness of the state estimate does not hold), we improve the algorithm by proposing an efficient gradient-descent-based optimization that minimizes the cost-to-go while imposing only linearity of the control law. The optimal solution is obtained by iteratively propagating the sufficient statistics in closed form to compute the expected cost, and then minimizing this cost with respect to the filter and control gains. We demonstrate that this approach yields a significantly lower overall cost than current state-of-the-art solutions, particularly in the presence of internal noise, though the improvement is present in other settings as well, and we provide theoretical explanations for this enhanced performance. Providing the optimal control law is key for inverse control inference, especially in explaining behavioral data under rationality assumptions.
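To make the ingredients concrete, here is a scalar toy version (our construction, loosely following the setup described above; it is not the authors' algorithm): second moments of the true and estimated state are propagated in closed form under multiplicative and internal noise, and static control and filter gains are optimized by finite-difference gradient descent.

    import numpy as np

    a, b = 1.0, 0.5                  # scalar plant
    sw, sv, si = 0.1, 0.1, 0.1       # additive process / sensory / internal noise std
    cm, cs = 0.5, 0.5                # multiplicative motor and sensory noise levels
    q, r, Tn = 1.0, 0.1, 50          # state cost, control cost, horizon

    def expected_cost(L, Kf):
        S = np.array([[1.0, 0.0], [0.0, 0.0]])   # second moments of (x, x_hat)
        J = 0.0
        for _ in range(Tn):
            J += q * S[0, 0] + r * L ** 2 * S[1, 1]            # control law u = -L * x_hat
            A = np.array([[a, -b * L], [Kf, a - b * L - Kf]])
            Sn = A @ S @ A.T
            Sn[0, 0] += sw ** 2 + cm ** 2 * L ** 2 * S[1, 1]   # motor noise scales with |u|
            Sn[1, 1] += si ** 2 + Kf ** 2 * (sv ** 2 + cs ** 2 * S[0, 0])  # sensory noise scales with |x|
            S = Sn
        return J

    L, Kf, lr, h = 0.5, 0.5, 1e-3, 1e-5
    for _ in range(2000):            # crude gradient descent on the two gains
        gL = (expected_cost(L + h, Kf) - expected_cost(L - h, Kf)) / (2 * h)
        gK = (expected_cost(L, Kf + h) - expected_cost(L, Kf - h)) / (2 * h)
        L, Kf = L - lr * gL, Kf - lr * gK
    print("gains:", round(L, 3), round(Kf, 3), "cost:", round(expected_cost(L, Kf), 3))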
Delving into UP and DOWN States in Cortical Networks: Mechanisms Underlying Synaptic Plasticity | Rosa Maria Delicado Moll
Authors: R.M. Delicado-Moll (1), G. Huguet (1,2), C. Vich (3,4)
(1) Universitat Politècnica de Catalunya
(2) Centre de Recerca Matemàtica
(3) Universitat de les Illes Balears
(4) Institute of Applied Computing and Community Code
Abstract:
Cortical neuronal circuits undergo transitions between periods of sustained neural activity (UP state) and periods of silence (DOWN state), alternating in a rhythmic manner. This UP-DOWN dynamic is commonly observed in cortical activity, and the mechanisms responsible for generating this alternation have been widely explored in the literature (see [1], among others). However, when the network is influenced by short-term synaptic plasticity (STP), either through depression (STD) or facilitation (STF), the alternating behavior can be modified depending on the specific level of STP present in the network. Despite numerous mathematical models proposed to explain the underlying mechanisms of these plasticity-induced changes, the precise functional role of STP in shaping network dynamics is still not fully understood.
In this work, we present a rate model (see [1] in supporting information) that reproduces the neuronal dynamics observed when different levels of plasticity are applied to a bio-inspired computational model of an Excitatory-Inhibitory (EI) network of Hodgkin-Huxley type neurons, introduced in [2,3,4], which simulates cortical activity in V1. To conduct the study, we identify three distinct states of the rate model. Initially, a two-dimensional rate model is introduced, capturing the firing rate dynamics of both excitatory and inhibitory populations in the network. By incorporating neuronal adaptation into both populations, we next model the transition between UP and DOWN states, reflecting the dynamic changes between sustained neural activity and silence (see [2] in supporting information). Finally, by adding dynamics to the synaptic connections, we model the effects of short-term synaptic plasticity (STP), which alters the network’s behavior based on the type and strength of the involved plasticity (see [3] in supporting information). Specifically, we explore how depression (STD) drives the transition from UP and DOWN states to asynchronous activity, while facilitation (STF) pushes the network from a silent state to UP and DOWN states. Our simulations reveal the existence of three distinct activity states: (1) UP and DOWN states, (2) an asynchronous activity regime, and (3) a silent state, along with the underlying bifurcations driving these transitions. These states are determined by the level of depression and facilitation in the network, as well as the steady-state probability of facilitation release.
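The ingredients of such a rate model can be sketched compactly (ours, with illustrative rather than fitted parameters): an E-I rate model with spike-frequency adaptation and short-term depression of the excitatory synapses.

    import numpy as np

    dt, T = 0.1, 4000.0                          # ms
    tauE, tauI, tauA, tauD = 20.0, 10.0, 500.0, 800.0
    beta, U = 0.5, 0.2                           # adaptation strength, release probability
    wEE, wEI, wIE, wII = 5.0, 4.0, 4.0, 1.0
    f = lambda x: np.maximum(x, 0.0)             # threshold-linear transfer function
    rE = rI = 0.1; a = 0.0; x = 1.0              # rates, adaptation, synaptic resources
    rng = np.random.default_rng(4)
    for _ in range(int(T / dt)):
        noise = rng.normal(0.0, 0.5)
        rE += dt / tauE * (-rE + f(wEE * x * rE - wEI * rI - a + 1.0 + noise))
        rI += dt / tauI * (-rI + f(wIE * rE - wII * rI))
        a += dt / tauA * (-a + beta * rE)                     # adaptation builds with E activity
        x += dt * ((1.0 - x) / tauD - U * x * rE / 1000.0)    # depression of E synapses
    print(rE, rI, a, x)

Varying the plasticity parameters (only depression is included in this sketch) moves such a model between the regimes described above.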
References:
[1] Jercog, D., Roxin, A., Bartho, P., Luczak, A., Compte, A., & De La Rocha, J. (2017). UP-DOWN cortical dynamics reflect state transitions in a bistable network. Elife, 6, e22425.
[2] Compte, A., Sanchez-Vives, M. V., McCormick, D. A., & Wang, X. J. (2003). Cellular and network mechanisms of slow oscillatory activity (< 1 Hz) and wave propagations in a cortical network model. Journal of neurophysiology, 89(5), 2707-2725.
[3] Benita, J. M., Guillamon, A., Deco, G., & Sanchez-Vives, M. V. (2012). Synaptic depression and slow oscillatory activity in a biophysical network model of the cerebral cortex. Frontiers in computational neuroscience, 6, 64.
[4] Vich, C., Giossi, C., Massobrio, P., & Guillamon, A. (2023). Effects of short-term plasticity in UP-DOWN cortical dynamics. Communications in Nonlinear Science and Numerical Simulation, 121, 107207.
Brain rhythms based inference for energy-efficient speech processing | Olesia Dogonasheva
Human listeners seamlessly understand speech in real time, even under noisy conditions, diverse accents, or interruptions. This remarkable ability is achieved without prior exposure to these specific situations. Recent evidence suggests that speech processing in the brain operates as an approximate Bayesian inference system that uses rhythmic activity to segment and temporally structure processing. Hierarchically organized rhythms, such as theta and delta oscillations, align the perception and processing of speech units, such as syllables and phrases, with the natural rhythm of speech.
Neural Field Equations with Slowly Evolving Parameters | Dirk Doorakkers
Single-cell biophysical neural models are naturally written as systems of ordinary differential equations (ODEs) with time-scale separation, and this feature has a strong influence on their dynamical repertoire. A mathematical multiple time-scale theory for neural excitability, and for the generation of complex neural rhythms such as mixed-mode oscillations or bursting, has been developed for single-cell models, but little is known at the level of neural populations and for spatially extended neural networks.
A major obstacle towards exploring time-scale separation in networks of neurons is the lack of a comprehensive Singular Perturbation Theory for spatially-extended models and infinite-dimensional dynamical systems. In this poster, we present progress in this direction, with specific applications in neural field models. We study neural field equations in which a (possibly large) number of parameters are varying on a slow time scale compared to neural activity. For these systems, we develop from scratch an analytical theory analogous to Fenichel Theory for ODEs.
We use a Lyapunov-Perron type approach, grounded in functional analytical methods, which can be adapted to other infinite-dimensional problems, such as Partial Differential Equations subject to slowly-varying parameters.
The poster gives an overview of the theory, and provides examples using neural field models available in the literature.
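Schematically (our notation, compressing the class of systems studied), the setting is a neural field whose parameters drift slowly:

    \partial_t u(x,t) = -u(x,t) + \int_\Omega w(x,y;\lambda)\, f\big(u(y,t)\big)\,\mathrm{d}y,
    \qquad \frac{\mathrm{d}\lambda}{\mathrm{d}t} = \varepsilon\, g(u,\lambda), \qquad 0 < \varepsilon \ll 1,

where λ collects the slowly varying parameters and the aim is a Fenichel-type theory of attracting invariant manifolds that persist for small ε > 0.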
Presenter and main author: Dirk Doorakkers, co-workers: Daniele Avitabile and Jan Bouwe van den Berg (all affiliated to VU Amsterdam).
Collective Behaviour of Chaotic Hénon Particles with Short-range Spatial Interaction | Congcong Du
Discrete synaptic transmission impacts the onset of rhythmic network dynamics | Rainer Engelken
Sven Goedeke, Fred Wolf, Agostina Palmigiano, Rainer Engelken
Rhythmic population oscillations are a ubiquitous feature of brain dynamics, with inhibitory interactions often playing a key role in generating rhythms such as inhibitory network gamma (ING) oscillations. Both hippocampal and cortical circuits exhibit these oscillations, and their presence can selectively reshape the recruitment of different interneuron populations by input streams, e.g., in the prefrontal cortex (Merino et al. 2021). Mathematically, delays in synaptic transmission strongly impact the emergence of population oscillations. Quadratic integrate-and-fire (QIF) neurons represent the normal form of the firing type of most cortical and hippocampal neurons and are known to exhibit limits in the encoding of high-frequency inputs. Recently, it has been shown that networks of QIF neurons, even with undelayed synaptic interactions, are particularly prone to generating oscillations (Goldobin et al. 2024).
We explored the emergence and stability of oscillations in balanced inhibitory QIF-type networks with tunable spike onset dynamics. Surprisingly, the dynamic population response for biologically realistic spike onset could not be correctly described using standard Fokker-Planck theory due to the breakdown of the underlying diffusion approximation. By employing a novel shot noise-based approach, we analyzed the network’s stability systematically. Our results show that rapid spike onset stabilizes inhibitory network oscillations. Effectively, the slow spike onset in QIF neurons acts similarly to a synaptic delay, enhancing oscillatory dynamics. The results of our shot-noise theory show excellent agreement with spiking network simulations, particularly in predicting the firing rate (f-I curve), the stationary voltage distribution, and the frequency response of the neuron as well as the onset of network oscillations. These findings emphasize the importance of spike onset dynamics in shaping network oscillations. Our study opens a new avenue to dissect the local circuit basis of the distinct susceptibility of different neuron populations in the prefrontal cortex to oscillatory state-modulated information rerouting.
Synaptic Plasticity and Spatial Patterning in the Next-Generation Neural Field Model | Niamh Fennelly
Authors and affiliations:
Niamh Fennelly (School of Mathematics and Statistics, University College Dublin), Áine Byrne (School of Mathematics and Statistics, University College Dublin).
Abstract:
Synaptic plasticity, the mechanism underlying learning and memory, enables neural networks to dynamically rewire in response to activity, optimising their structure for efficient information processing. This adaptivity reshapes synaptic landscapes, influencing the emergence and organisation of spatial patterns in neural activity.
Neural field models are powerful tools for understanding how spatially organised patterns, such as bumps and waves of activity, develop and evolve in brain networks. However, traditional models often assume a high degree of neural synchrony, limiting their ability to capture changes in population-level synchrony. In our previous work, we incorporated adaptive coupling into a network of theta-neurons and derived a mean-field approximation for the system [1]. Inspired by the next-generation neural field model of Byrne, Avitabile, and Coombes (2019) [2], this study extends our model to include spatial dynamics, allowing us to explore biophysically realistic plasticity in a spatial framework.
We demonstrate that incorporating spatial interactions and adaptive plasticity leads to a rich repertoire of spatiotemporal dynamics, including Turing and Turing-Hopf bifurcations (Figure 1). Numerical simulations reveal the emergence of complex patterns such as localised bumps and travelling waves, which may encode information relevant to cognitive functions like working memory and attention. We investigate how adaptive connectivity governs the formation and stability of these patterns. This work advances our understanding of the interplay between plasticity, connectivity, and pattern formation, providing a more comprehensive framework for modelling neural self-organisation and its role in cognitive function.
References:
[1] Fennelly, N., Neff, A., Lambiotte, R., Keane, A., & Byrne, Á. (2024). Mean-field approximation for networks with synchrony-driven adaptive coupling. arXiv:2407.21393. doi:10.48550/arXiv.2407.21393
[2] Byrne, Á., Avitabile, D., & Coombes, S. (2019). Next-generation neural field model: The evolution of synchrony within patterns and waves. Physical Review E, 99, 012313. doi:10.1103/PhysRevE.99.012313
Spatiotemporal Dynamics in Networks of Stochastic Integrate-and-Fire Neurons | Lauren Forbes
Authors: Lauren Forbes, Jared Grossman, Montie Avery, Ryan Goh, Gabriel Koch Ocker
We study bifurcations in networks of integrate-and-fire neurons with stochastic spike emission, focusing on the effects of the spatial and temporal structure of the synaptic interactions. Performing bifurcation analysis of a deterministic mean-field approximation of the population dynamics, we characterize the spatial, temporal, and spatiotemporal transitions between patterns of macroscopic activity. In the mean-field theory, synaptic delays give rise to uniform oscillations across the population through a subcritical Hopf bifurcation of the stationary uniform equilibrium. We confirm bistability between the oscillatory and homogeneous states of the network in the stochastic spiking network near this bifurcation. Additionally, we show that with particular spatial coupling profiles across the network, such as global uniform inhibition or local inhibition with long-range excitation, the network undergoes a Turing bifurcation, resulting in a localized area of sustained activity, or stationary bump. We identify the locations of these instabilities in the phase diagram and the resulting regimes of different spatiotemporal dynamical patterns of network activity.
Dendritic excitability controls overdispersion | Zachary Friedenberger
A neuron’s input-output function is a central component of network dynamics and is commonly understood in terms of two fundamental operating regimes: 1) the mean-driven regime, where the mean input drives regular and frequent firing, and 2) the fluctuation-driven regime, where input fluctuations drive responses at relatively low firing frequency but with variable intervals. Active dendrites are expected to profoundly influence the input-output function, either by controlling gain modulation or through the additive modulation by dendritic inputs that have been transformed nonlinearly, as in an artificial neural network. However, both additive and gain modulations are thought to be weak in the presence of background fluctuations. Here we investigate how active dendrites, falling in either regime, shape the input-output function. Extending cable theory with features of generalized integrate-and-fire models, we develop a mean-field theory for neurons with active dendrites, capturing the integrative properties of dendrites and the soma in the presence of noise. We find that dendritic input controls interspike interval dispersion, reaching overdispersed states that cannot be accounted for by Poisson processes but are commonly observed in vivo. This effect appears in the fluctuation-driven regime, largely before dendritic input produces additive or multiplicative modulation of the firing frequency. We show that this mechanism implies that increasing the strength of somatic inputs can increase interval dispersion as long as dendritic spikes are not consistently above threshold. Consequently, neurons display not two but three fundamental operating regimes, depending on whether dendritic spikes or the somatic input reaches threshold. We validate our prediction that dendritic input controls overdispersion by re-analyzing previously published patch clamp recordings of cortical neurons. This perspective on neuronal input-output functions has implications for theories of neural coding, the credit-assignment problem, the control of trial-to-trial variability, and how attractor networks can reach highly dispersed firing states.
Zachary Friedenberger [1,2], Richard Naud [1,2,3]
1. Centre for Neural Dynamics and Artificial Intelligence, University of Ottawa, Ottawa, Ontario, Canada.
2. Department of Physics, University of Ottawa, Ottawa, Ontario, Canada.
3. Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada.
Bistable perception emerges from loopy inference in strongly coupled probabilistic graphs | Alexandre Garcia-Duran Castilla
Alexandre Garcia-Duran (1,2), Martijn Wokke (1), Neus Pou-Amengual (1), Manuel Molano-Mazón (2), Alexandre Hyafil (1)
1. Centre de Recerca Matemàtica (CRM), Bellaterra, Spain
2. Universitat Politècnica de Catalunya – BarcelonaTech (UPC), Barcelona, Spain
During perception, the brain constantly processes sensory information, accepting and rejecting alternative, competing interpretations of the environment. This probabilistic inference process also determines the certainty —confidence— associated with such interpretations, which typically correlates with the strength of sensory evidence. However, in bistable perception, one single interpretation is confidently perceived at a given time, yet alternates over time despite no change in the stimulus. Here, we investigate the properties of bistable stimuli that are responsible for such dissociation of stimulus strength and confidence. We propose that bistability emerges from an approximate probabilistic inference process running in a strongly coupled binary Markov Random Field. We illustrate our framework with the Necker cube, a classical bistable stimulus. Here, the nodes in the graph represent the perceived depth of the 2D shape vertices: their coupling indicates that, in nature, horizontal and vertical lines usually bind features at the same depth. We derived analytical expressions for the dynamics of perception in three popular classes of approximate inference: Belief Propagation, Mean Field, and sampling. In all three classes, bistability emerges due to strong probabilistic coupling between different stimulus features, producing reverberatory loops that lead to circular inference. This creates a double-well energy potential for perception dynamics, with intrinsic noise generating alternations between the two high-confidence percepts. We thus predict that decreasing the coupling leads to a shift from a bistable to a monostable potential, whereas sensory evidence modulates the asymmetry of the potential. We tested these results experimentally with a bistable stimulus (rotating cylinder), where the level of coupling between features and the stimulus strength are independently modulated. Preliminary results show that high coupling leads to overconfidence in participant reports, irrespective of stimulus strength. Our work shows that bistability emerges from loopy inference in stimuli composed of strongly coupled features.
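A toy simulation conveys the proposed mechanism (our illustration; a ring of couplings stands in for the cube's edge constraints): Gibbs sampling in a strongly coupled binary Markov Random Field yields two high-confidence states with noise-driven alternations between them.

    import numpy as np

    rng = np.random.default_rng(5)
    n, J, h = 8, 1.5, 0.0                  # vertices, coupling strength, sensory evidence
    nbrs = [((i - 1) % n, (i + 1) % n) for i in range(n)]
    s = rng.choice([-1, 1], n)             # perceived depth of each vertex (+1 front, -1 back)
    m = []
    for _ in range(40000):
        i = rng.integers(n)
        field = h + J * (s[nbrs[i][0]] + s[nbrs[i][1]])
        s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * field)) else -1   # Glauber update
        m.append(s.mean())
    m = np.array(m) > 0
    print("time in 'front' percept:", m.mean(), "| alternations:", (np.diff(m.astype(int)) != 0).sum())

Weakening the coupling J flattens the double-well structure, and the alternations give way to a single noisy, low-confidence interpretation.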
Hydra’s Neural Symphony: Tuning into the Rhythms and Connections of cnidarians | Sarah Gaubi
Authors: Gaubi Sarah, Manich Maria, Olivo-Marin Jean-Christophe and Lagache Thibault
Affiliations: Biological Image Analysis Unit, CNRS UMR 3691, Institut Pasteur and Sorbonne Université, Paris, France.
ABSTRACT:
Hydra vulgaris is a freshwater organism from the cnidarian family, representing a prototype of one of the earliest nervous systems while exhibiting a wide repertoire of behaviors. Since 1964, electrophysiological recordings by Passano and McCullough have revealed that Hydra vulgaris possesses identifiable neuronal ensembles that generate rhythmic patterns [1]. These biologically organized rhythmic neural circuits, known as Central Pattern Generators (CPGs), are responsible for producing stereotyped motor actions in humans and animals, particularly insects. Among its diverse behaviors, Hydra demonstrates somersaulting, a swaying locomotion composed of six steps related to the sequential activation of specific neuronal ensembles and muscles. The use of the calcium indicator GCaMP in specific transgenic lines (Figure 1) has enabled visualization of Hydra’s entire nervous system [2]. Such studies reveal that Hydra’s nervous system consists of non-overlapping nerve nets organized into distinct neuronal ensembles. Imaging during somersaulting has identified two key ensembles: contraction bursts (CB) and rhythmic potentials (RP1), which form a half-center oscillator regulated by neuropeptides associated with each neuronal group [3].
To uncover the fundamental neuroscience principles underlying this primitive CPG, we developed a multiscale mathematical model that connects neuronal activity and neuropeptide signaling to somersaulting behavior. First, using single-cell RNA sequencing data [4] and a set of selected ionic channels, we modeled the excitability of neurons within the CB and RP1 ensembles involved in somersaulting. We calibrated the primary ionic conductances in standard Hodgkin-Huxley models using [5] and successfully reproduced the observed neuronal rhythms, including the alternation between rhythmic RP1 firing and CB bursts, as well as the increase in RP1 tonic spiking frequency preceding somersaulting (Figure 2). Furthermore, leveraging the neuronal clustering t-SNE map from [4], we conducted a rapid gene expression analysis for hyperpolarization-activated ion channels (HCN channels) across neuronal groups, suggesting that pacemaker neurons, characterized by the presence of HCN channels, belong to the CB ensemble only.
Then, using high-speed calcium imaging (~50 Hz), together with an image-based simulation framework for neuron population placement and neurite connectivity (Figure 3), we found that synaptic connections likely play a crucial role in integrating spatial information, strongly impacting the temporal excitability of the system.
Finally, to address the challenge of neuropeptide modulation, we proposed an integrated approach to enhance the coupled oscillator model. Specifically, we extended the previous Hodgkin-Huxley conductance-based framework by coupling it with a reaction-diffusion model to account for neuropeptide secretion and transport across Hydra’s body.
The model successfully reproduces the neuronal activity of both neuronal ensembles during the distinct steps of the somersaulting behavior. It allowed us to propose various plausible scenarios, offering insights into the cellular pathways and basic neuroscience principles underlying a prototype CPG.
References
[1] Passano, L. M., & McCullough, C. B. (1964). Co-ordinating systems and behaviour in Hydra: I. Pacemaker system of the periodic contractions. Journal of Experimental Biology, 41(3), 643-664.
[2] Dupre, C., & Yuste, R. (2017). Non-overlapping neural networks in Hydra vulgaris. Current Biology, 27(8), 1085-1097.
[3] Yamamoto, W., & Yuste, R. (2023). Peptide-driven control of somersaulting in Hydra vulgaris. Current Biology, 33(10), 1893-1905.
[4] Siebert, S., Farrell, J. A., Cazet, J. F., Abeykoon, Y., Primack, A. S., Schnitzler, C. E., & Juliano, C. E. (2019). Stem cell differentiation trajectories in Hydra resolved at single-cell resolution. Science, 365(6451), eaav9314.
[5] Drion, G., Franci, A., Dethier, J., & Sepulchre, R. (2015). Dynamic Input Conductances Shape Neuronal Spiking. eNeuro, 2(1), ENEURO.0031-14.2015.
Low-Frequency Electrical Stimulation in Epilepsy: a Biophysical and Mathematical Representation | Guillaume Girier
Authors:
G. Girier 1, I. Dallmer-Zerbe 1,2, J. Chvojka 2, J. Kudlacek 2, P. Jiruska 2, J. Hlinka 1,3, H. Schmidt 1
Affiliations:
1 Department of Complex Systems, Institute of Computer Science of the Czech Academy of Sciences, Prague, Czech Republic
2 Department of Physiology, Second Faculty of Medicine, Charles University, Prague, Czech Republic
3 National Institute of Mental Health, Klecany, Czech Republic
Abstract:
The biological mechanisms underlying the recurring seizures of the epileptic brain remain poorly understood. As a consequence, a substantial proportion of epilepsy patients cannot be treated sufficiently with currently available treatment options. Newly developed brain stimulation protocols have been shown to successfully reduce the seizure rate (1,2). However, their success critically depends on the chosen stimulation parameters, such as the time point, amplitude, and frequency of stimulation. This study focuses on the seizure-delaying effect of 1 Hz stimulation in an animal model of epilepsy. We study this effect using a computational model, a modified version of the so-called Epileptor-2 model (3), in close comparison with a real dataset of local field potential recordings from four hippocampal rat brain slices under high-potassium conditions. The Epileptor-2 model describes the dynamics of the population potential, synaptic resources, and potassium and sodium ions.
First, we study the parameters that control the duration of seizures and inter-seizure intervals. These parameters are then optimized to fit the model to the data. In the experiments, potassium was added to the extracellular bath until spontaneous seizures appeared. We study this phenomenon through bifurcation analysis, based on the potassium bath parameter Kbath, and a slow ramp-up of Kbath (dKbath/dt = ε, with 0 < ε ≪ 1) in order to reproduce the transition between the different dynamical states. Thus, different thresholds (appearance of seizures, passage to status epilepticus) can be defined. In addition, corresponding to experimental observations, we show that seizures can be induced by external stimulation once a certain potassium concentration is exceeded. Next, we reproduce the experimental stimulation protocol and the seizure delay in the model. For instance, we demonstrate that seizures can be delayed indefinitely in the model for as long as it is stimulated, which can be explained by the emergence of a new stable attractor. Here, the effect of the stimulation is balanced by the activity of the Na-K pumps, which counteracts the accumulation of potassium in the extracellular medium and therefore prevents seizures. We also show that this phenomenon is preserved even in the presence of noise. Beyond confirming that the model can produce the delay effect, we establish the presence of a critical value for the amplitude of the stimulation, and for the moment at which to stimulate. Finally, we ensure that the stimulation itself does not cause small-scale seizures: based on slow-fast theory, we produce a 3D bifurcation diagram to break down the bifurcation diagram into the different components of the epileptic phenomenon (seizure and inter-seizure domains), and we show that the attractor induced by stimulation lies outside the seizure domain.
References:
(1) Elena Y. Smirnova, Anton V. Chizhov, and Aleksey V. Zaitsev. Presynaptic GABAB receptors underlie the antiepileptic effect of low-frequency electrical stimulation in the 4-aminopyridine model of epilepsy in brain slices of young rats. Brain Stimulation, 13(5):1387–1395, 2020.
(2) Mohamad Z. Koubeissi, Emine Kahriman, Tanvir U. Syed, Jonathan Miller, and Dominique M. Durand. Low-frequency electrical stimulation of a fiber tract in temporal lobe epilepsy. Annals of neurology, 74(2):223–231, 2013.
(3) Anton V. Chizhov, Artyom V. Zefirov, Dmitry V. Amakhin, Elena Y. Smirnova, and Aleksey V. Zaitsev. Minimal model of interictal and ictal discharges “Epileptor-2”. PLOS Computational Biology, 14(5):1–25, 2018.
Experimental and Modeling Insights into Neural Dynamics Under Alternating Current Stimulation (tACS) | Camille Godin
The model provides a probability theory paradigm to understand neuronal network observables such as Hebbian potentiation, why stronger synapses survive longer, the role of silent synapses, and the timescales required to obtain certain network structures/activity patterns under network plasticity. The model can naturally be extended to multiple plastic synapse types, such as excitatory and inhibitory synapses.
Exploring the role of age as a moderator in the relationship between brain structure and cognition | Ben Griffin
Understanding how brain structure influences cognition across the lifespan is fundamental for advancing clinical applications in neuroscience, such as addressing age-related cognitive decline. However, age, brain structure and cognition are intrinsically linked: age drives changes in the brain, which in turn result in changes in cognition. Consequently, if we predict cognition as a function of brain data, we might just be predicting age, which itself is strongly related to brain structure. Therefore, accounting for the influence of age is of utmost importance when studying brain-behaviour relationships. To avoid this trivial result, the usual approach is to regress age (and age-related confounds) out of both the brain data and the cognitive variables. This is a conservative approach, but at least any effects we discover are truly reflective of a brain-behaviour association.
In this study, we explored whether the nature of the brain-cognition relationship is consistent across different age groups, or if age moderates this relationship. We hypothesise that the differences in brain anatomy, function, and connectivity that are behind the differences in cognitive performance in older subjects (e.g. related to disease) may be different from those in younger subjects. To investigate this, we used data from the UK Biobank, focusing on structural imaging-derived phenotypes. Our findings suggest that, beyond the "raw" effect of age, the relationship between brain structure and cognitive performance varies across age groups (Figure 1), and that models trained on one age group do not generalise to another. Despite this, we did not find evidence that age-specific models outperformed the conventional approach of training across the entire population.
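The deconfounding step itself is simple; a minimal sketch on synthetic data (ours, purely illustrative of the approach described above):

    import numpy as np

    rng = np.random.default_rng(6)
    n = 2000
    age = rng.uniform(45, 80, n)
    brain = -0.02 * age + rng.normal(0, 1, n)              # brain feature declines with age
    cog = -0.03 * age + 0.3 * brain + rng.normal(0, 1, n)  # cognition depends on age and brain

    def residualise(y, conf):
        X = np.column_stack([np.ones_like(conf), conf])    # intercept + age
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        return y - X @ beta

    brain_r, cog_r = residualise(brain, age), residualise(cog, age)
    for name, mask in [("young", age < 62), ("old", age >= 62)]:
        r = np.corrcoef(brain_r[mask], cog_r[mask])[0, 1]
        print(name, "residual brain-cognition correlation:", round(r, 3))

An age-moderated relationship would show up as systematically different residual associations (or model weights) across the age groups.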
Ben Griffin – Oxford Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford
Mark Woolrich – Oxford Centre for Human Brain Activity (OHBA), Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
Stephen Smith – Oxford Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford
Diego Vidaurre – Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University; Oxford Centre for Human Brain Activity (OHBA), Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
Exploring Dynamics and Network Behavior of the Laminar Neural Mass Model: From Individual Units to Complex Networks | Maria Guasch-Morgades
Maria Guasch-Morgades1, Raul Palma2, Roser Sanchez-Todo1 and Giulio Ruffini1
1 Neuroelectrics Barcelona SLU, Barcelona, Spain
2 Universitat Pompeu Fabra, Barcelona, Spain
The Laminar Neural Mass Model (LaNMM) has emerged as a promising framework for simulating cortical dynamics, offering a more nuanced representation of neural activity across different cortical layers. This study presents an in-depth analysis of the LaNMM, focusing on both individual model behavior and the complex dynamics that arise in networks composed of these units.
We first present a detailed examination of the individual LaNMM, which integrates two well-established neural mass models: the Pyramidal Interneuron Network Gamma (PING) model and the Jansen-Rit model. These components represent distinct cortical layers and capture both fast (gamma) and slow (alpha) rhythms, respectively. We explore how the biologically-inspired coupling between these models gives rise to cross-frequency interactions, a key feature of cortical dynamics. Our analysis employs a range of mathematical techniques, including bifurcation analysis and spectral methods, to provide insight into the mechanisms underlying the emergence of coupled oscillations.
Building upon this foundation, we extend our study to networks of interconnected LaNMM units. This second part of our project investigates how network topology influences the collective dynamics of these systems. Our analysis focuses on identifying and characterizing different dynamical regimes and fixed points that emerge and disappear as the network size and complexity increase. We employ tools from network theory and dynamical systems analysis to map the relationship between network structure and functional outcomes.
In conclusion, this work represents a step towards unraveling the complex interplay between local neural mass dynamics and large-scale brain network behavior. While our research is still in its early stages, it promises to provide valuable insights into the mechanisms underlying brain function, and it may contribute to the development of more sophisticated computational models in neuroscience that better capture the different substates observed during the resting state.
Stochastic cubic models of EEG dynamics during sleep-onset | Zhenxing HU
Zhenxing Hu1, J. Nathan Kutz2, Xu Lei3, Jean-Julien Aucouturier1
1 Université de Franche-Comté, SUPMICROTECH, CNRS, institut FEMTO-ST, Besançon, France
2 Department of Applied Mathematics, University of Washington, Seattle, Washington, United States
3 Sleep and NeuroImaging Center, Faculty of Psychology, Southwest University, Chongqing, China
The sleep-onset period (SOP) exhibits dynamic and complex changes in the electroencephalogram (EEG), with high intra- and inter-individual variability (Lacaux, 2024). Biophysical models of the ascending arousal system have been used to empirically explain the power-spectrum changes during the SOP (Yang, 2016), but they do not consider the rate of change of the system parameters or the non-negligible variance of the input noise; the latter can cause jumps between bistable states and may play a role in narcolepsy and microsleeps (Roberts, 2017). Furthermore, validation on real EEG datasets is lacking. To address these limitations, we first applied dimension reduction to the EEG spectrogram to extract the dominant modes of the wake and sleep states, and modelled the resulting low-dimensional dynamics as the output of stochastic parametric cubic (SPC) systems, explaining the dynamic spectral changes during the transitional period. We also propose a method to estimate the model parameters, which may serve as a biomarker to distinguish subtypes of sleep-onset disorders.
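The abstract does not spell out the exact form of the SPC system; as a purely illustrative sketch, one may take a cubic double-well stochastic differential equation with a slowly ramped control parameter, whose noise-driven jumps between wells mimic the bistable transitions mentioned above. All forms and values below are assumptions for illustration.

```python
import numpy as np

# Hypothetical stochastic parametric cubic system (form assumed, not from the paper):
#   dx = (mu(t) * x - x**3) dt + sigma dW,
# with mu ramped slowly through the pitchfork at mu = 0 to mimic a drifting
# system parameter during sleep onset.
rng = np.random.default_rng(1)
dt, T = 1e-3, 60.0
n = int(T / dt)
sigma = 0.3                        # noise level; larger values allow jumps between wells
mu = np.linspace(-1.0, 1.0, n)     # slowly increasing control parameter
x = np.empty(n); x[0] = 0.0
for k in range(n - 1):
    drift = mu[k] * x[k] - x[k] ** 3
    x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
```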
Exploring Neural Communication via Phase-Amplitude Dynamics: Efficient numerical methods | Gemma Huguet
David Reyner-Parra (Universitat Politècnica de Catalunya)
Alberto Pérez-Cervera (Universitat Politècnica de Catalunya)
Gemma Huguet (Universitat Politècnica de Catalunya i Centre de Recerca Matemàtica)
Abstract: Macroscopic oscillations in the brain play a crucial role in cognitive tasks, yet their exact functions remain incompletely understood. One prominent hypothesis suggests that oscillations enable dynamic modulation of communication between neural circuits by rhythmically altering neuronal excitability. In this study, we use mean-field models to explore synchronization dynamics between connected Excitatory-Inhibitory (E-I) networks generating Gamma rhythms. These networks interact with one another while receiving external periodic inputs.
Our investigation employs the phase-amplitude framework, which extends classical phase reduction methods by incorporating amplitude coordinates (or isostables) to describe transient dynamics transverse to limit cycles. To simplify the system while maintaining accuracy, we focus on reducing the dynamics to a slow attracting invariant submanifold associated with the slowest contracting direction. In this work, we present an efficient numerical method to compute the parameterization of the attracting slow submanifold of hyperbolic limit cycles and the simplified dynamics in its induced coordinates. Additionally, we compute the infinitesimal Phase and Amplitude Response Functions restricted to this manifold, which characterize the effects of perturbations on phase and amplitude. These methods offer a powerful framework for understanding the interplay of phase, amplitude, and synchronization in neuronal network communication.
Stochastic random network dynamics describes endogenous fluctuations and Event-related Synchronisation and Desynchronisation | Axel Hutt
Experimental brain activity is known to show oscillations in specific frequency bands, which reflect neural information processing. Changes in power in these frequency bands indicate changes in information processing. For instance, it has been observed that strong activity at about 10 Hz and 2 Hz emerges in electroencephalographic (EEG) activity when a subject loses consciousness under general anaesthesia. We show numerical simulations of stochastic neural models that exhibit such a change when the variance of external additive uncorrelated Gaussian noise is changed [1]. An analytical description explains Additive Noise-Induced System Evolution (ANISE), the mechanism by which random endogenous fluctuations drive this change.
Next, this noise-supported mechanism is detailed mathematically by applying random matrix theory and mean-field theory [2,3]. It is shown, in unbalanced Erdős–Rényi networks including excitatory and inhibitory connections, that a modification of the extrinsic noise level at the microscopic neural level tunes the frequency of neural masses at the mesoscopic level. The work demonstrates the relation to coherence resonance.
The mesoscopic mean-field equation describes experimental observations, such as Event-Related Synchronization and Desynchronization (ERS/ERD) observed in EEG. An additional application of ANISE explains frequency switches between human occipital gamma- and alpha-oscillations when opening/closing the eyes [3]. As a last application, ANISE explains the effect of transcranial Direct Current Stimulation in a ketamine animal model of psychosis [4].
[1] A. Hutt, J. Lefebvre, D. Hight and J. Sleigh, Suppression of underlying neuronal fluctuations mediates EEG slowing during general anaesthesia,
Neuroimage 179: 414-428 (2018)
[2] J. Lefebvre and A. Hutt, Induced synchronisation by endogenous noise modulation in finite-size random neural networks: a stochastic mean-field study.
Chaos 33(12):123110 (2023)
[3] A. Hutt, S. Rich, T.A. Valiante and J. Lefebvre, Intrinsic neural diversity quenches the dynamic volatility of neural networks,
Proceedings of National Academy of Sciences USA 120 (28):e2218841120 (2023)
[4] A. Hutt and J. Lefebvre, Arousal fluctuations govern oscillatory transitions between dominant gamma- and alpha occipital activity during eyes open/closed conditions,
Brain Topography 35:108-120 (2021)
[5] T. Wahl, J. Riedinger, M. Duprez and A. Hutt, Delayed closed-loop neurostimulation for the treatment of pathological brain rhythms in mental disorder: a computational study,
Frontiers in Neuroscience 17:1183670 (2023)
Phase Oscillator Networks with Distance-Dependent Delays: How Does Conduction Speed Affect Large-Scale Brain Dynamics? | Grace Jolly
Conduction speed of neural signals between brain regions varies with age and degenerative diseases such as multiple sclerosis. In this work, we investigate how changes in conduction speed affect large-scale brain dynamics. We consider coupled phase oscillator models with time delays, where the delays are proportional to the distance between nodes and inversely related to conduction speed. Our study includes network configurations such as a two-node network with a single delay and a ring network with multiple delays. We examine phase-locked states, and find synchronous, asynchronous, splay, and cluster states. Using linear stability analysis, we determine the stability of these states and generate bifurcation diagrams where conduction speed acts as the bifurcation parameter. These diagrams provide insights into what states are stable for different conduction speeds. To further validate our results, we complement our analytical findings with direct simulations of the networks.
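A minimal sketch of the two-node case described above: two phase oscillators with a distance-dependent delay tau = d/v, integrated by the Euler method with a constant-history buffer. Parameter values are illustrative, not those of the study.

```python
import numpy as np

# Two phase oscillators coupled with a transmission delay tau = d / v,
# where d is the inter-node distance and v the conduction speed:
#   dtheta_i/dt = omega + K * sin(theta_j(t - tau) - theta_i(t))
omega, K = 2 * np.pi * 10.0, 5.0                 # 10 Hz intrinsic frequency (illustrative)
d, v = 0.1, 5.0                                  # distance (m) and conduction speed (m/s)
tau = d / v
dt, T = 1e-4, 2.0
n, lag = int(T / dt), int(tau / dt)
theta = np.zeros((n, 2))
theta[: lag + 1] = np.random.default_rng(2).uniform(0, 2 * np.pi, 2)  # constant history
for k in range(lag, n - 1):
    delayed = theta[k - lag][::-1]               # theta_j(t - tau), swapped for each i
    theta[k + 1] = theta[k] + dt * (omega + K * np.sin(delayed - theta[k]))
phase_diff = np.angle(np.exp(1j * (theta[:, 0] - theta[:, 1])))  # detect phase locking
```

Sweeping v (hence tau) and inspecting the asymptotic phase difference reproduces, in miniature, the kind of bifurcation diagram described in the abstract.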
Author list: Stephen Coombes, Rachel Nicks, Grace Jolly, Gulistan Iskenderoglu
Affiliations: School of Mathematical Science, University of Nottingham
Impact of meningioma and glioma on whole-brain dynamics | Albert Juncà Sabrià
Albert Juncà1, Anira Escrichs2, Ignacio Martín1, Gustavo Deco2,3, and Gustavo Patow1,2,*
1ViRVIG-UdG, Girona, Catalonia, Spain
2Computational Neuroscience Group, Center for Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Catalonia, Spain
3ICREA
*e-mail: gustavo.patow@udg.edu
Introduction: Brain tumors, such as meningiomas and gliomas, can significantly impair neural function. However, their specific effects on brain dynamics remain poorly understood. Understanding these impacts is crucial for advancing knowledge of tumor-related pathophysiological mechanisms and developing effective biomarkers.
Objective: This study aims to investigate alterations in brain dynamics caused by meningiomas and gliomas. Specifically, it seeks to assess these changes using the Intrinsic Ignition Framework, which quantifies dynamical complexity through metrics such as intrinsic ignition and metastability.
Methods: The study analyzed resting-state fMRI data from 34 participants, including both meningioma and glioma patients, as well as controls. Brain dynamics were quantified using intrinsic ignition and metastability metrics. Additionally, resting-state network analysis was performed to examine correlations between brain regions.
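As a rough illustration, metastability is often operationalized as the temporal standard deviation of the Kuramoto order parameter of regional phase signals; the sketch below (with assumed input shapes) follows that common convention and is not the study's exact pipeline, and the intrinsic ignition metric is not reproduced here.

```python
import numpy as np
from scipy.signal import hilbert

def metastability(bold):
    """Std over time of the Kuramoto order parameter of regional phases.
    `bold`: (n_regions x n_timepoints) band-passed BOLD signals (assumed shape)."""
    phases = np.angle(hilbert(bold, axis=1))       # instantaneous phase per region
    R = np.abs(np.exp(1j * phases).mean(axis=0))   # order parameter time series
    return R.std()
```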
Results: Distinct patterns of disruption were observed between the two tumor types. Glioma patients exhibited significant reductions in intrinsic ignition and metastability metrics compared to controls, indicating widespread network disturbances. Meningioma patients showed significant changes primarily in regions directly affected by the tumor. Resting-state network analysis revealed strong metastability and metastability/ignition correlations in controls. These correlations were slightly weakened in meningioma patients and severely disrupted in glioma patients.
Discussion: These findings demonstrate the differential impacts of gliomas and meningiomas on brain function, reflecting their distinct pathophysiological mechanisms. Gliomas are associated with more widespread disruptions in brain dynamics, while meningiomas primarily affect tumor-involved regions. Additionally, the study underscores the potential of brain dynamics metrics as biomarkers for identifying disruptions in brain information transmission caused by tumors.
This research was partially funded by:
GP: Grant PID2021-122136OB-C22 funded by MICIU/AEI/10.13039/501100011033 and ERDF A way of making Europe.
Dynamical analysis of the Chialvo model under a locally active memristor as electromagnetic radiation | Ajay Kumar
Ajay Kumar, V.V.M.S. Chandramouli
Affiliations: Indian Institute of Technology Jodhpur, India
Abstract: The study of neuron models under the influence of electromagnetic radiation is essential for understanding brain functions and developing treatments for neurological disorders. In this talk, we discuss the design of the discrete locally active memristor (DLAM) model and thoroughly analyze its characteristics. Further, this DLAM is utilized to introduce electromagnetic radiation into the reduced Chialvo neuron model (called the M-rChialvo model). We analyze the equilibrium points of the M-rChialvo model and discuss its dynamical characteristics, including phase portraits, stability analysis, forward and backward bifurcations, chaotic attractors, and multistability. This study reveals rich dynamical behaviors and diverse neuron firing patterns.
Additionally, we explore the behavior of a network of neurons governed by the proposed model within the ring-star network structure. Simulations highlight the emergence of various dynamical states, such as multi-chimera patterns, synchronization, and imperfect synchronization, under varying coupling strengths.
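For reference, a minimal simulation of the classical two-dimensional Chialvo map, on which the M-rChialvo model builds; the memristive electromagnetic-radiation coupling is omitted here, and the parameter values are commonly used excitable-regime choices rather than those of the talk.

```python
import numpy as np

# Classical Chialvo map (the DLAM radiation term of the M-rChialvo model is omitted):
#   x[n+1] = x[n]**2 * exp(y[n] - x[n]) + k
#   y[n+1] = a * y[n] - b * x[n] + c
a, b, c, k = 0.89, 0.6, 0.28, 0.03   # commonly used parameter values
n = 5000
x = np.empty(n); y = np.empty(n)
x[0], y[0] = 0.1, 0.1
for i in range(n - 1):
    x[i + 1] = x[i] ** 2 * np.exp(y[i] - x[i]) + k
    y[i + 1] = a * y[i] - b * x[i] + c
```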
Combining Genetic Algorithms and Bifurcation Analysis to Link Bifurcation Structure and Evolutionary Objectives | Ece Kuru
Ece Kuru(1), Jan-Hendrik Schleimer(1,2), and Susanne Schreiber(1,2)
(1): Institute for Theoretical Biology, Humboldt-Universität zu Berlin, 10115 Berlin, Germany
(2): Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Philippstr. 13, Haus 6, 10115 Berlin, Germany
One of the central questions in theoretical neurophysiology is to understand the versatile electrical signaling behavior of neurons under the different physiological conditions to which neurons in biological systems are subject, such as temperature fluctuations, neuromodulators, or fluctuations in ionic concentrations. An important computational method to this end is the use of genetic algorithms as a tool to explore optimal physiological parameter sets. While these can deliver optimal solutions for given objectives, additional analysis is required to understand the qualitative changes needed for optimization. Bifurcation analysis serves as a method to understand such qualitative switches in signaling behavior, such as the emergence of bistability or changes to the neuron's spike-onset bifurcation. Here we propose to combine genetic algorithms and bifurcation analysis in a systematic fashion to better understand qualitative commonalities of optimal points in the parameter space and the changes to the bifurcation structure leading to significant improvements during optimization. We illustrate our approach with an analysis of a temperature-sensitive conductance-based neuron model, which we optimize using a multi-objective evolutionary algorithm for energy-efficient action potential generation in the typical mammalian body temperature range and with firing rates robust to physiological fluctuations in brain temperature. We make use of the fundamental bifurcation structure of conductance-based neuron models, which allows us to judge the topological relation of different spike-onset mechanisms and how this evolves in the population of neurons. With this approach it is possible to pinpoint qualitative changes to models of biological systems that have been selected to fulfill functional objectives in the evolutionary process.
Model selection methods for estimating learning behavior in cuttlefish and octopuses | Louis Köhler
Learning, whether in animals or humans, is the process by which behaviors become better adapted to the environment (Rescorla, 1988). This process is highly individualized and is often only observable through the actions of the learner. By leveraging individual behavioral data, we can identify models that best explain this learning process (Aubert et al., 2023). To achieve this, we propose two model selection methods—a general hold-out procedure and an AIC-type criterion (Aubert et al., submitted)—both tailored for non-stationary, dependent data (Aubert et al., 2024). Theoretical error bounds for these methods are derived, showing performance close to that in the standard i.i.d. case. To demonstrate their effectiveness, we apply these methods to contextual bandit models (Auer et al., 2002; Lattimore and Szepesvári, 2020), which approximate the learning behavior of agents interacting with their environment. These contextual bandit algorithms capture how agents balance exploration and exploitation when seeking to maximize their rewards. By observing the learning process, we can estimate how an agent integrates contextual information to make decisions.
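The following sketch illustrates the shape of such a hold-out model-selection procedure on a simulated two-armed bandit learner; the learner (epsilon-greedy with a delta-rule update), the candidate grid, and the random hold-out mask are illustrative simplifications, not the estimators of Aubert et al., and a temporal hold-out may be preferable for dependent data.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_learner(alpha, eps, p_reward=(0.2, 0.8), T=400):
    """Epsilon-greedy Q-learner on a two-armed bandit; returns actions, rewards."""
    Q = np.zeros(2); actions = np.empty(T, int); rewards = np.empty(T)
    for t in range(T):
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q))
        r = float(rng.random() < p_reward[a])
        Q[a] += alpha * (r - Q[a])                 # delta-rule value update
        actions[t], rewards[t] = a, r
    return actions, rewards

def heldout_loglik(alpha, eps, actions, rewards, test_mask):
    """Log-likelihood of observed actions under candidate (alpha, eps), on test trials."""
    Q, ll = np.zeros(2), 0.0
    for t, (a, r) in enumerate(zip(actions, rewards)):
        p = np.full(2, eps / 2); p[np.argmax(Q)] += 1 - eps   # choice probabilities
        if test_mask[t]:
            ll += np.log(p[a] + 1e-12)
        Q[a] += alpha * (r - Q[a])
    return ll

actions, rewards = simulate_learner(alpha=0.3, eps=0.1)
test = rng.random(len(actions)) < 0.3              # random hold-out trials
candidates = [(a, e) for a in (0.05, 0.3, 0.8) for e in (0.05, 0.1, 0.3)]
best = max(candidates, key=lambda c: heldout_loglik(*c, actions, rewards, test))
```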
Extending these approaches, we explore their application to experimental learning data from cephalopods, specifically octopuses and cuttlefish (Jozet-Alves et al., 2013; Poncet et al., 2022). These animals are renowned for their remarkable problem-solving abilities and contextual learning in dynamic environments. By designing tasks that simulate adversarial conditions, we collect data on how these cephalopods adapt their behavior over time to achieve specific goals, such as obtaining food rewards (Jozet-Alves et al., 2013). Using our proposed model selection framework, we aim to identify the models that best capture their learning dynamics. This application not only validates our methods on non-human experimental data but also provides new insights into the cognitive processes underlying cephalopod learning. These methods and results are part of a joint collaboration with Julien Aubert (Univ. Côte d’Azur, France), Christelle Jozet-Alves (Univ. Caen Normandie, CNRS, France), Louis Köhler (Univ. Côte d’Azur, France), Luc Lehéricy (Univ. Côte d’Azur, CNRS, France) and Patricia Reynaud-Bouret (Univ. Côte d’Azur, CNRS, France).
Julien Aubert, Luc Lehéricy, and Patricia Reynaud-Bouret. On the convergence of the MLE as an estimator of the learning rate in the exp3 algorithm. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 1244–1275. PMLR, 7 2023.
Julien Aubert, Luc Lehéricy, and Patricia Reynaud-Bouret. General oracle inequalities for a penalized log-likelihood criterion based on non-stationary data. Working paper or preprint, 5 2024. URL https://hal.science/hal-04578260.
Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
Christelle Jozet-Alves, Marion Bertin, and Nicola Clayton. Evidence of episodic-like memory in cuttlefish. Current Biology, 23:R1033–R1035, 2013. doi:10.1016/j.cub.2013.10.021.
T. Lattimore and C. Szepesvári. Bandit Algorithms. Cambridge University Press, 2020.
Lisa Poncet, Coraline Desnous, Cécile Bellanger, and Christelle Jozet-Alves. Unruly octopuses are the rule: Octopus vulgaris use multiple and individually variable strategies in an episodic-like memory task. Journal of Experimental Biology, 225, 2022. doi:10.1242/jeb.244234.
Robert A. Rescorla. Behavioral studies of Pavlovian conditioning. Annual Review of Neuroscience, 1988.
Effects of local gain modulation on probabilistic selection of actions | Elif Köksal-Ersöz
Adaptation of behavior requires the brain to change goals in a changing environment. Synaptic learning has demonstrated its effectiveness in changing the probability of selecting actions based on their outcomes. In the extreme case, it is vital not to repeat an action toward a given goal that led to harmful punishment. Here, we propose a multiple-timescale model in which a simple neural mechanism of gain modulation makes possible immediate changes in the probability of selecting a goal after punishment of variable intensity. Results show how gain modulation determines the type of elementary navigation process within the state space of a network of neuronal populations of excitatory neurons regulated by inhibition. Immediately after punishment, the system can avoid the punished populations by going back or by jumping to unpunished populations. This does not require particular credit assignment at the ‘choice’ population, but only gain modulation of neurons active at the time of punishment. Gain modulation does not require statistical relearning, which may lead to further errors, but can encode memories of past experiences without modification of synaptic efficacies. Therefore, gain modulation can complement synaptic plasticity.
Elif Köksal-Ersöz (Inria, Villeurbanne, France; Cophy Team, Lyon Neuroscience Research Center, Bron, France), Pascal Chossat (Project Team MathNeuro, Inria-CNRS-UNS, Sophia Antipolis, France), Frédéric Lavigne (Université Côte d’Azur, CNRS, BCL, Nice, France).
Nonlinear plasticity models increase noise robustness and pattern retrieval capacity | Eddy Kwessi
Neural plasticity is fundamental to memory storage and retrieval in biological systems, yet existing models often fall short in addressing noise sensitivity and unbounded synaptic weight growth. This paper investigates the Allee-based nonlinear plasticity model, emphasizing its biologically inspired weight stabilization mechanisms, enhanced noise robustness, and critical thresholds for synaptic regulation. We analyze its performance in memory retention and pattern retrieval, demonstrating increased capacity and reliability compared to classical models like Hebbian and Oja's rules. To address temporal limitations, we extend the model by integrating time-dependent dynamics, including eligibility traces and oscillatory inputs, resulting in improved retrieval accuracy and resilience in dynamic environments. This work bridges theoretical insights with practical implications, offering a robust framework for modeling neural adaptation and informing advances in artificial intelligence and neuroscience.
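For orientation, a toy comparison of weight dynamics under a plain Hebbian rule, Oja's normalizing rule, and a hypothetical Allee-type stabilizing factor; the Allee term and its threshold below are assumed illustrative forms, not the paper's actual rule.

```python
import numpy as np

rng = np.random.default_rng(4)
eta, T, d = 1e-3, 1000, 10
X = rng.normal(size=(T, d)) @ np.diag(np.linspace(1.0, 0.2, d))  # anisotropic inputs
w_hebb = rng.normal(scale=0.1, size=d)
w_oja, w_allee = w_hebb.copy(), w_hebb.copy()
theta = 0.05                                 # assumed Allee threshold (illustrative)
for x in X:
    y_h, y_o, y_a = w_hebb @ x, w_oja @ x, w_allee @ x
    w_hebb += eta * y_h * x                  # plain Hebb: unbounded growth
    w_oja  += eta * y_o * (x - y_o * w_oja)  # Oja: implicit weight normalization
    # hypothetical Allee-type factor: suppresses growth below the threshold theta
    # and saturates as |w| approaches 1 (a sketch, not the paper's exact rule)
    w_allee += eta * y_a * x * (np.abs(w_allee) - theta) * (1.0 - np.abs(w_allee))
print(np.linalg.norm(w_hebb), np.linalg.norm(w_oja), np.linalg.norm(w_allee))
```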
Breaking the flow: How a temporal gap restructures decision-making mechanisms | Encarni Marcos
Alejandro Sospedra, Santiago Canals, Encarni Marcos*
Decision making involves assessing potential options and their expected outcomes. In laboratory studies, this process is often examined through perceptual discrimination tasks, where sensory streams are presented sequentially. Temporal gaps, or pauses, within such sensory inputs can significantly alter the decision-making process, yet the precise impact of such gaps remains underexplored. In this study, we designed a task based on the original tokens task [2]. In brief, fifteen tokens were presented on a central circle, each sequentially jumping to one of two peripheral circles (targets). Participants were required to report which target they believed would have the majority of tokens by the trial’s end. Our task introduced two key modifications: tokens disappeared after they jumped [1] and half of the trials included a temporal gap, during which no information was presented. This design allowed us to explore how a temporal gap within sequences of perceptual stimuli influences the information weighting and subsequent choices. We show that, although decisions are made with less information following a temporal gap, accuracy remains comparable to conditions without gaps when stimulus dynamics enhance the salience of the post-gap token. Critically, the token presented immediately after the gap exerts a disproportionately strong influence on decision making, a phenomenon that persists even when the post-gap token lacks inherent saliency. These findings suggest that the influence of post-gap evidence is not dependent on its saliency. Traditional decision-making models, including the urgency gating model, fail to account for this behavior, highlighting the need to extend current models to capture the uneven weighting of information, particularly in the presence of temporal disruptions. By highlighting the distinctive role of post-gap evidence, our results offer new insights into how temporal gaps and the sequence of sensory input shape the decision-making process under complex conditions.
[1] Ferrucci, Genovesio & Marcos (2021) PLoS Comp Biol
[2] Cisek, Puskas, El-Murr (2009) J Neurosci
ToMATo clustering algorithm for spike sorting | Louise Martineau
Recording and decoding the activity of multiple neurons is a major subject in contemporary neuroscience. Extracellular recording with multi-electrode arrays is one of the basic tools used to that end. The raw data produced by these recordings are almost systematically a mixture of activities from several neurons. In order to find the number of neurons that contributed to the recording and to identify which neuron generated each of the visible spikes, a pre-processing step called spike sorting is required. Spike sorting is nowadays a semi-automatic process that involves many steps. Indeed, following some initial steps (data normalization, spike detection, event construction), spike sorting boils down to a clustering problem in high dimension. It is therefore accompanied most of the time by a dimension reduction, since classic clustering algorithms do not perform well in high dimension. This dimension reduction step is sensitive to the presence of event superpositions (akin to outliers), which lead to poor clustering results. Neuroscientists are then usually led to perform an extra pre-processing step to remove these superpositions; this step is not completely automatized and does a much better job when supervised by an expert in spike sorting. The use of the ToMATo clustering algorithm helps to simplify and streamline this whole spike sorting procedure.
ToMATo (Topological Mode Analysis Tool) is a clustering method coming from the field of Topological Data Analysis. It was presented in 2013 in [Chazal et al., 2013, J.ACM, 60:1], but still seems little known in neuroscience. It is a mode-seeking algorithm, which consists in seeking peaks in the observation density $f$ and in assigning observations falling under the same peak to the same cluster. Two classic problems arise in mode-seeking. The first is that we only have access to an approximation $\hat{f}$ of the true density $f$, and the mode-seeking phase can be very sensitive to small perturbations of $f$: the peaks of $\hat{f}$ do not in general coincide with those of $f$. The second is that density estimation is practically impossible in high dimension. This second issue is addressed by using the estimate $\hat{f}$ only at the data points and not in the whole space. In fact, we do not need a proper estimate of $f$: a function that gives weights to points, even if it is not an actual density, is enough for our purpose. We then work on an auxiliary structure, a neighborhood graph, and perform graph-based mode-seeking instead of mode-seeking on a proper estimate of the density itself. The idea of graph-based mode-seeking was already introduced in the algorithm of [Koontz et al., 1976, IEEE Trans Comput., C-25:936] on which ToMATo is based. The ToMATo algorithm is thus usable even in high dimension. Addressing the first issue, methods such as the Mean-Shift try to smooth $\hat{f}$, but require the choice of a smoothing parameter. The innovative approach of ToMATo resides in the use of persistence theory. With persistence, a notion of prominence of peaks can be defined, and prominent peaks of $\hat{f}$ correspond to prominent peaks of $f$. Clusters found by mode-seeking are merged together so that the final clusters correspond only to prominent peaks of the true density $f$, and not to spurious, noise-induced peaks of $\hat{f}$. The correct number of clusters is therefore recovered, explaining the effectiveness of the ToMATo method.
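A compressed Python sketch of the graph-based mode-seeking-with-persistence idea described above (a simplification of the algorithm in [Chazal et al., 2013]); the kNN density surrogate, the neighborhood graph, and the merging threshold tau are illustrative choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def tomato(X, k=10, tau=0.2):
    """Simplified ToMATo: graph-based mode-seeking with persistence-based merging.
    Returns cluster labels; peaks with prominence below tau are merged away."""
    n = len(X)
    dist, nbrs = cKDTree(X).query(X, k=k + 1)     # first neighbor is the point itself
    f = 1.0 / (dist.mean(axis=1) + 1e-12)         # crude kNN density surrogate
    order = np.argsort(-f)                        # process in decreasing density
    rank = np.empty(n, int); rank[order] = np.arange(n)
    parent = -np.ones(n, int)                     # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]         # path compression
            i = parent[i]
        return i

    for v in order:
        up = [u for u in nbrs[v][1:] if rank[u] < rank[v]]   # denser neighbors
        if not up:
            parent[v] = v                         # v is a local peak: new cluster
            continue
        roots = {find(u) for u in up}
        r_star = max(roots, key=lambda r: f[r])   # cluster with the highest peak
        parent[v] = r_star
        for r in roots - {r_star}:
            if f[r] - f[v] < tau:                 # low prominence: merge into r_star
                parent[r] = r_star
    return np.array([find(i) for i in range(n)])

# Illustrative usage on a toy mixture of two Gaussian blobs:
rng = np.random.default_rng(5)
pts = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(2, 0.3, (200, 2))])
labels = tomato(pts, k=10, tau=0.5)
```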
We show that ToMATo can be easily applied to spike sorting, making it possible to reduce the number of steps typically involved in this procedure. Indeed, since it can be applied directly to high-dimensional data (in our application the data is $180$-dimensional), there is no need for dimension reduction before clustering, making the sensitivity to superpositions disappear. Moreover, ToMATo provides a very easy way of choosing the right number of clusters, thereby solving a significant problem in clustering. Finally, the algorithm is easy to use even for non-specialists of Topological Data Analysis. Our use case demonstrates the spectacular performance of this approach.
Supported by the ANR: project ANR-22-CE45-0027.
The Role of Synaptic Dynamics in the Dynamical Behavior of Mean-Field Models of Neural Populations | Ana Mayora-Cebollero
In recent years, the study of neural populations has attracted increasing interest. Mean-field models are widely used to study the macroscopic dynamics of large neural populations. In the literature, we can find two recent mean-field models that describe the dynamics of heterogeneous all-to-all coupled Quadratic Integrate-and-Fire spiking neural networks, with synaptic dynamics [1] and without it [2]. In this presentation, we show how these models are linked through a parameter related to the synapses, as well as the different dynamical regimes they exhibit [3]. Furthermore, we analyze in depth the dynamical changes induced when this parameter varies, and the bifurcations underlying these changes [4]. To perform these analyses, different techniques such as Lyapunov exponents, spike-counting sweeping and numerical continuation are applied.
This is joint work with Roberto Barrio, Jorge A. Jover-Galtier, Carmen Mayora-Cebollero, Sergio Serrano (Universidad de Zaragoza, Spain), and Lucía Pérez (Universidad de Oviedo, Spain)
[1] Dumont, G.; Gutkin, B.: Macroscopic phase resetting-curves determine oscillatory coherence and signal transfer in inter-coupled neural circuits. PLoS computational biology 15(5), e1007019 (2019).
[2] Montbrió, E.; Pazó, D.; Roxin, A.: Macroscopic description for networks of spiking neurons. Physical Review X 5(2), 021028 (2015).
[3] Barrio, R.; Jover-Galtier, J.A.; Mayora-Cebollero, A.; Mayora-Cebollero, C.; Serrano, S.: Synaptic dependence of dynamic regimes when coupling neural populations. Physical Review E 109, 014301 (2024).
[4] Mayora-Cebollero, A.; Barrio, R.; Li, L.; Mayora-Cebollero, C.; Pérez, L.: Dynamics of coupled neural populations: The role of synaptic dynamics. Preprint (2025).
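For reference, a minimal Euler integration of the synapse-free firing-rate equations of [2]; replacing the recurrent input J*r by J*s, with first-order synaptic kinetics tau_s ds/dt = -s + r, moves toward the synaptic model of [1], illustrating the linking parameter discussed above. Parameter values are illustrative.

```python
import numpy as np

# Firing-rate equations of the QIF mean-field model of [2] (time constant set to 1):
#   dr/dt = Delta/pi + 2*r*v
#   dv/dt = v**2 + eta_bar + J*r - (pi*r)**2
Delta, eta_bar, J = 1.0, -5.0, 15.0      # illustrative parameter values
dt, T = 1e-4, 20.0
n = int(T / dt)
r, v = np.empty(n), np.empty(n)
r[0], v[0] = 0.1, -2.0
for k in range(n - 1):
    r[k + 1] = r[k] + dt * (Delta / np.pi + 2 * r[k] * v[k])
    v[k + 1] = v[k] + dt * (v[k] ** 2 + eta_bar + J * r[k] - (np.pi * r[k]) ** 2)
```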
Deep Learning for Dynamical Behavior Analysis of Excitable Cells | Carmen Mayora-Cebollero
TBP
Analysis of a group of Hindmarsh-Rose neurons with directional connections | Noah Marko Mesić
Authors: Noah Marko Mesić (University of Zagreb, Faculty of Electrical Engineering and Computing)
Abstract
The Hindmarsh-Rose model is widely used to simulate neuronal dynamics and replicate diverse patterns of activity observed in biological neurons. Leveraging dynamical systems theory, this study investigates bifurcations and chaotic behavior in a system of two bidirectionally coupled Hindmarsh-Rose neurons, with input current serving as the bifurcation parameter. The analysis was conducted using the MatCont toolbox in MATLAB. The study identifies Andronov-Hopf bifurcations associated with a fixed point and explores Neimark-Sacker and saddle-node bifurcations of the resulting periodic orbits. Three-dimensional phase-space trajectories were examined to characterize synchronous and asynchronous neuronal activity. Furthermore, Lyapunov exponents were computed to detect chaotic behavior, which is characteristic of the Hindmarsh-Rose neuron. The findings validate the occurrence of Andronov-Hopf bifurcations leading to synchronized limit cycles and reveal previously unreported bifurcations that result in asynchronous cycles. Additionally, the emergence of chaos via the Afraimovich-Shilnikov scenario was visualized, with Lyapunov exponents providing further verification.
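A minimal simulation sketch of the system studied: two Hindmarsh-Rose neurons with standard parameter values and diffusive (electrical) bidirectional coupling; the coupling strength, input current, and initial conditions are illustrative.

```python
import numpy as np

# Two bidirectionally coupled Hindmarsh-Rose neurons; standard parameters, with
# the input current I as the bifurcation parameter of interest in the study.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
s, xR, r_ = 4.0, -1.6, 0.006
I, g = 3.0, 0.1                        # input current and coupling strength (illustrative)

def rhs(u):
    x, y, z = u.reshape(3, 2)
    coupling = g * (x[::-1] - x)       # gap-junction-like diffusive coupling
    dx = y - a * x**3 + b * x**2 - z + I + coupling
    dy = c - d * x**2 - y
    dz = r_ * (s * (x - xR) - z)
    return np.concatenate([dx, dy, dz])

dt, T = 0.01, 2000.0
n = int(T / dt)
u = np.array([0.1, 0.2, 0.0, 0.0, 3.0, 3.1])   # [x1, x2, y1, y2, z1, z2]
traj = np.empty((n, 6))
for k in range(n):                     # forward Euler for brevity; RK4 is preferable
    traj[k] = u
    u = u + dt * rhs(u)
```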
Learning synaptic properties from neural activity in a recurrent neural network model of insect olfaction | Maria Gabriela Navas Zuloaga
M. Gabriela Navas-Zuloaga 1, Shruti Joshi 1,2, Autumn McLane-Svoboda 3, Yoshimasa Kubo 1, Debajit Saha 3,4, Maksim Bazhenov 1,2
1 Department of Medicine, University of California, San Diego, La Jolla, CA
2 Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA
3 Department of Biomedical Engineering, Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI
4 Neuroscience Program, Michigan State University, East Lansing, MI
Abstract
A key part of insect olfactory processing is the recurrent computation occurring in the antennal lobe (AL), where complex interactions between excitatory projection neurons (PNs) and inhibitory local neurons (LNs) shape odor representations. To elucidate the underlying mechanisms of olfactory coding, we developed a biologically constrained continuous rate recurrent neural network (RNN) model of the locust AL, trained to reconstruct in vivo electrophysiological data. Our model, comprising 830 PNs and 300 LNs, accurately captured the temporal dynamics and diverse response patterns of AL neurons. The trained network revealed sparse connectivity with differential connection densities between excitatory and inhibitory neuron populations, and no connections between excitatory neurons, consistent with empirical observations. Learned time constants predicted slower LN dynamics and diverse PN response patterns, with low and high time constants corresponding to early and late odor-evoked activity as reported in vivo. This approach demonstrates the utility of data-driven RNN models in inferring circuit properties and uncovering key mechanisms of odor representation in the insect AL, offering insights beyond traditional hand-tuned computational models.
Delayed auto-feedback and the precision of a neural oscillator | Parisa Nazemi
Parisa Nazemi & John Lewis, Biology Dept, and Brain and Mind Research Institute, University of Ottawa
Precision and reliability of neural oscillations are critical for many brain functions. Among all known biological oscillators, the electric organ discharge (EOD) of wave-type electric fish is the most precise, with sub-microsecond variations in cycle periods and a coefficient of variation of CV ~ 10^(-4). The timing of the EOD is set by a medullary pacemaker network comprising 150 neurons with weak electrical coupling. How this pacemaker network achieves such high precision is not clear. One hypothesis is that pacemaker activity is regularized by electrical feedback from the EOD itself. To investigate this, we use a computational model of a pacemaker neuron and mimic the electric field effect as a delayed auto-feedback current stimulus. Our results show that the feedback either increases or decreases the CV of the period, depending on the phase of the delay. We identified two distinct regions of stability and instability based on the slope of the phase response curve (PRC), corresponding to low CV (regular oscillations) and high CV (variable oscillations). We also tested simpler neural models with pulse-type delayed feedback and observed similar results as long as the PRC was type II. Furthermore, for low-CV delays, the model neurons exhibit reduced sensitivity to perturbations and rapid recovery after transient disruptions. These findings provide insights into how time-delayed feedback influences the regularity and sensitivity of neural oscillations, offering a potential mechanism for the exceptional precision observed in weakly electric fish.
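A schematic phase-model version of the setup: each firing schedules a feedback pulse after a delay, and the pulse shifts the phase through an assumed type-II PRC; the CV of the inter-spike periods then quantifies precision. All functional forms and parameters are illustrative, not the study's pacemaker model.

```python
import numpy as np

# Phase oscillator with delayed pulsatile auto-feedback: each firing (phase
# crossing 2*pi) schedules a feedback pulse after delay tau; the pulse shifts
# the phase by an assumed type-II PRC, eps*sin(phase).
rng = np.random.default_rng(6)
omega, sigma, eps, tau = 2 * np.pi, 0.05, 0.3, 0.7   # illustrative values
dt, T = 1e-4, 200.0
n = int(T / dt)
phase = 0.0
spikes, pending = [], []                             # pending feedback arrival times
for k in range(n):
    t = k * dt
    phase += omega * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    while pending and pending[0] <= t:
        pending.pop(0)
        phase += eps * np.sin(phase)                 # type-II PRC kick (assumed form)
    if phase >= 2 * np.pi:
        phase -= 2 * np.pi
        spikes.append(t)
        pending.append(t + tau)
periods = np.diff(spikes)
cv = periods.std() / periods.mean()                  # precision of the oscillation
```

Sweeping tau and recording cv reproduces, schematically, the delay-dependent regions of regular and variable oscillations described above.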
Modelling dopamine dynamics: encoding predicted reward in the striatum enables adaptive decision-making within a spiking CBGT network | Alex O'Hare
Authors: Alex O’Hare (1, 2), Catalina Vich (1, 2), Jonathan E. Rubin (3, 4), Timothy Verstynen (3, 5)
Affiliations: (1) Dept. de Matemàtiques i Informàtica, Universitat de les Illes Balears, Palma, Illes Balears, Spain, (2) Institute of Applied Computing and Community Code, Palma, Illes Balears, Spain, (3) Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States of America, (4) Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America, (5) Department of Psychology & Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
Abstract:
The cortico-basal-ganglia-thalamic (CBGT) pathways are widely held to be responsible for reinforcement learning within vertebrates. Nigrostriatal dopaminergic projections have been shown to signal the value of reward prediction error, and facilitate learning via plastic changes in the synaptic weights of spiny projection neurons (SPNs) in the striatum. Recent studies have proposed that the predicted reward (Q-value) may be reflected in the difference between the corticostriatal synaptic weights of the D1 and D2 receptor-expressing SPNs. Building on this idea, in this work we develop a model of dopaminergic learning and incorporate it into a fully spiking neural network of the CBGT pathways.
We investigate the extent to which this model may enable adaptive decision-making in a multi-armed bandit task, and the resolution of the explore-exploit dilemma by incorporating an uncertainty bonus, which would align with experimental observations from humans and non-human primates. In this work, we develop a mathematical model representing the amount of DA made available to the SPNs, P, such that
$dP/dt = (DA_T - P)\,(s + (r - Q)) - bP$,
where ‘DA_T’ denotes the total pool of available DA, ‘s’ is a baseline level of DA, ‘b’ is a decay rate in the amount of available DA, ‘r’ represents the reward value, and ‘Q’ the predicted reward. The Q-value is determined by an effect size measure given by
$Q = 2\,(W^{dSPN} - W^{iSPN}) / (N^{dSPN} + N^{iSPN})$,
where W^dSPN and W^iSPN represent the synaptic weights of the D1-expressing direct pathway SPNs (dSPNs) and D2-expressing indirect pathway SPNs (iSPNs), respectively. N^dSPN and N^iSPN represent the number of dSPNs and iSPNs involved in the transaction, respectively. Preliminary simulations with a toy model (see supporting document) have yielded promising results, with the resulting Q-value aligning closely with the reward probability implemented in the trials.
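A minimal forward-Euler integration of the two equations above; the parameter values and the weight totals used to form Q are placeholders for illustration.

```python
import numpy as np

# Forward-Euler integration of the dopamine availability model above, with the
# Q-value formed from hypothetical corticostriatal weight sums.
DA_T, s, b = 1.0, 0.1, 0.5             # illustrative parameter values
dt = 1e-3

def simulate_trial(P0, r, Q, T=2.0):
    """Integrate dP/dt = (DA_T - P)*(s + (r - Q)) - b*P over one trial."""
    n = int(T / dt)
    P = np.empty(n); P[0] = P0
    for k in range(n - 1):
        P[k + 1] = P[k] + dt * ((DA_T - P[k]) * (s + (r - Q)) - b * P[k])
    return P

# Effect-size Q from assumed weight totals and population sizes:
W_d, W_i, N_d, N_i = 120.0, 90.0, 100, 100
Q = 2 * (W_d - W_i) / (N_d + N_i)
P = simulate_trial(P0=0.2, r=1.0, Q=Q)  # rewarded trial (r = 1)
```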
We explore how our model supports recent studies that question the distinction between tonic and phasic DA signals, producing instead a DA signal that aligns more closely with a value function. Through our investigation we aim to reconcile this conflict between the traditional and emerging views of the characteristic features of dopamine within the decision-making pathways of the basal ganglia.
In demonstrating the efficacy of this model in a spiking neural network, we will lend support to the idea that the Q-value may be encoded in the synaptic weights of the SPNs. Furthermore, by removing the need to explicitly code a Q-value update rule into the network, we enhance the automaticity and biological realism of the existing open-source framework, CBGTPy, which may be used to conduct computational experiments of decision-making tasks.
A preprocessing method of time series signals for the transfer entropy using the classical multi-dimensional scaling | Mayu Ohira
Mayu Ohira and Yutaka Shimada,
Graduate school of science and engineering, Saitama University, Japan
Real-world systems, including neural networks, can be described as coupled dynamical systems, which sometimes exhibit complex behavior due to the interactions between the elements of which the system is composed. Thus, estimating network structure is a fundamental research issue. The transfer entropy (TE) is one method for estimating connections among elements using only the state time series observed from the elements. In this study, focusing on the TE, we propose a pre-processing method for time series data that improves the estimation accuracy of connections by the TE, even when the time series length is insufficient to estimate connections by the TE directly. Using classical multidimensional scaling (CMDS), we change the basis of two observed time series simultaneously while preserving the inter-point distances of the time series data. We then extract a common basis of these time series to capture their common features, and show that the estimation accuracy of connections can be improved by our method.
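For reference, the classical MDS step itself (double centering of squared distances followed by an eigendecomposition) can be sketched as follows; how the common basis of the two time series is then extracted is specific to the proposed method and is not reproduced here. All data shapes are illustrative.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical multidimensional scaling from a distance matrix D (n x n)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]            # keep the largest eigenvalues
    L = np.sqrt(np.maximum(vals[idx], 0))
    return vecs[:, idx] * L                       # embedding coordinates

# Illustrative usage: embed a toy set of time-series segments in 2-D.
rng = np.random.default_rng(9)
ts = rng.normal(size=(30, 500))                   # 30 segments x 500 samples
D = np.linalg.norm(ts[:, None, :] - ts[None, :, :], axis=-1)
coords = classical_mds(D, dim=2)
```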
Off-Equilibrium Fluctuation-Dissipation Theorem Paves the Way in Alzheimer’s Disease Research | Gustavo Patow
Gustavo Patow1,2,*, Juan Monti3, Irene Acero-Pousa2, Sebastián Idesis2, Anira Escrichs2,
Yonatan Sanz Perl2,4, Petra Ritter5, Morten Kringelbach6,7, Ana Solodkin8, Gustavo Deco2,9, and for the Alzheimer’s Disease Neuroimaging Initiative
1ViRVIG, Universitat de Girona, Girona, Catalonia, Spain
2Computational Neuroscience Group, Center for Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Catalonia, Spain
3Instituto de Física Rosario CONICET-UNR, Laboratorio de Colisiones Atómicas, FCEIA, Universidad Nacional de Rosario, Rosario, Argentina
4Cognitive Neuroscience Center (CNC), Universidad de San Andrés, Buenos Aires, Argentina
5Berlin Institute of Health at Charité, Charité Universitätsmedizin Berlin, Robert-Koch-Platz 4, 10117 Berlin, Germany
6Department of Psychiatry, University of Oxford, Oxford, UK
7Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
8Neurosciences, School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
9Institució Catalana de la Recerca i Estudis Avançats (ICREA), Barcelona, Catalonia, Spain
*e-mail: gustavo.patow@udg.edu
INTRODUCTION: Alzheimer’s disease (AD) is a neurodegenerative disorder characterized by progressive cognitive decline. Although traditional methods have provided insights into brain dynamics in AD, they have limitations in capturing non-equilibrium dynamics across disease stages. Recent studies suggest that dynamic functional connectivity in resting-state networks (RSNs) may serve as a biomarker for AD, but the role of deviations from dynamical equilibrium remains underexplored.
OBJECTIVE: This study applies the off-equilibrium fluctuation-dissipation theorem (FDT) (Monti et al., 2024) to analyze brain dynamics in AD, aiming to compare deviations from equilibrium in healthy controls, patients with mild cognitive impairment (MCI), and those with AD. The goal is to identify potential biomarkers for early AD detection and to understand the mechanisms of disease progression.
METHODS: We employed a model-free approach based on the FDT to analyze functional magnetic resonance imaging (fMRI) data, including healthy controls, MCI patients, and AD patients. Deviations from equilibrium in resting-state brain activity were quantified using fMRI data. In addition, we performed model-based simulations incorporating Amyloid-Beta (Aβ), tau burdens, and Generative Effective Connectivity (GEC) for each subject.
RESULTS: Our findings show that deviations from equilibrium increase during the MCI stage, indicating hyperexcitability, followed by a significant decline in later stages of AD, reflecting neuronal damage. Model-based simulations incorporating Aβ and tau burdens closely replicated these dynamics, especially in AD patients, highlighting their role in disease progression. Healthy controls exhibited lower deviations, while AD patients showed the most significant disruptions in brain dynamics.
DISCUSSION: The study demonstrates that the off-equilibrium FDT framework can accurately characterize brain dynamics in AD, providing a potential biomarker for early detection. The increase in non-equilibrium deviations during the MCI stage followed by their decline in AD offers a mechanistic explanation for disease progression. Future research should explore how combining this framework with other dynamic brain measures could further refine diagnostic tools and therapeutic strategies for AD and other neurodegenerative diseases.
A dynamical system perspective on the mean-field limit of spatially structured recurrent neural networks | Louis Pezon
Authors: L. Pezon (1,*), W. Gerstner (1).
(1) Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Switzerland.
(*) Presenting author.
Abstract:
A wide class of computational neuroscience models involves large spatially structured recurrent neural networks, where interactions between neurons are described by a kernel function of individual “locations” of the neurons. These locations can represent spatial position on the cortical sheet, or neuronal tuning to task or stimulus variables [1-5]. Mean-field (i.e. large network) limits of such spatially structured networks can be represented as neural fields — an approach consistently used since the 1970s [1-4], that conforms naturally to the spatial structure of the connectivity. This approach has been justified using heuristic coarse-graining arguments [1,4]. Yet, the question of how well the dynamics of the limit neural field captures that of the finite-size network, and how it depends on various parameters of the model, remains incompletely understood.
We study a fully connected and spatially structured recurrent network of linear-nonlinear-Poisson spiking neurons. Neurons are assigned i.i.d. locations in an underlying space (called the “similarity space”), and synaptic weights are given by a kernel function of neuronal locations (as in [5,6]). Recently, the convergence of the empirical measure of neuronal trajectories to the solution of a neural field equation was proven rigorously [6] (see also [7] for networks of integrate-and-fire neurons). Yet, this approach has key limitations. First, it requires the convergence of the empirical measure of neuronal locations, which depends critically on the dimension of the similarity space and is not needed for the convergence of a “typical” trajectory. Second, focusing on the convergence of trajectories typically yields bounds (using Grönwall-like inequalities) that grow exponentially with time, revealing little about the system’s dynamical structure, such as its potential fixed points and their stability properties.
To address these concerns, we adopt a dynamical system perspective: we express the dynamics of both the finite-size network and the limit neural field as the flow of a vector field over a Hilbert space [5]. The finite-size network dynamics is characterised by a random vector field that depends on the empirical measure of neuronal locations. We obtain concentration inequalities for this vector field and its spatial derivatives, at every point of its domain. This allows us to quantify how potential fixed points of the finite-size dynamics, and their associated eigenvalues, differ from those of the limit dynamics. Critically, the obtained bounds depend on the spectral properties of the connectivity kernel and on the nonlinearity of the neuron model, but not on the dimension of the underlying similarity space.
Our approach offers a new perspective on studying the mean-field limit of large structured neural networks, shifting focus from the convergence of individual trajectories to the underlying dynamical structure. This approach could further enable obtaining quantitative guarantees on the similarity between the finite-size network dynamics and its limit, and aligns with the modern paradigm shift to population dynamics in systems neuroscience [8].
References:
[1] H.R. Wilson, J.D. Cowan. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik (1973).
DOI: 10.1007/BF00288786
[2] K. Zhang. Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory. The Journal of Neuroscience (1996).
DOI: 10.1523/JNEUROSCI.16-06-02112.1996
[3] R. Ben-Yishai, R.L. Bar-Or, H. Sompolinsky. Theory of orientation tuning in visual cortex. Proceedings of the National Academy of Sciences (1995).
DOI: 10.1073/pnas.92.9.3844
[4] W. Gerstner. Time structure of the activity in neural network models. Physical Review E (1995).
DOI: 10.1103/PhysRevE.51.738
[5] L. Pezon, V. Schmutz, W. Gerstner. Linking neural manifolds to circuit structure in recurrent networks. Preprint, bioRxiv (2024).
DOI: 10.1101/2024.02.28.582565
[6] M. Chevallier, et al. Mean field limits for nonlinear spatially extended Hawkes processes with exponential memory kernels. Stochastic Processes and their Applications (2019).
DOI: 10.1016/j.spa.2018.02.007
[7] P.-E. Jabin, V. Schmutz, D. Zhou. Non-exchangeable networks of integrate-and-fire neurons: spatially-extended mean-field limit of the empirical measure. Preprint, arXiv (2024).
DOI: 10.48550/arXiv.2409.06325
[8] S. Vyas, et al. Computation through Neural Population Dynamics. Annual Review of Neuroscience (2020).
DOI: 10.1146/annurev-neuro-092619-094115
The path the brain takes – a closer look at the temporal evolution of functional states in a network control theoretical framework | Alina Podschun
Authors:
Podschun, A., Humboldt-Universität zu Berlin, Germany
Betzel, R.F., University of Minnesota Twin Cities, United States of America
Markett, S., Humboldt-Universität zu Berlin, Germany
Abstract:
Network Control Theory (NCT) is a powerful methodological toolbox that models how a dynamical system might maintain and traverse between various functional states [1]. Applied to neuroscience, it allows investigations into how the brain’s structural connectome constrains function, how specific brain regions or networks might control dynamics via functional “control inputs”, and into how these characteristics change in clinical samples [1].
Among various other factors, the energy needed by the brain to traverse between a start and a target brain state is associated with specifics of the target state, i.e. some functional target states are much more costly to reach than others [2,3]. Much less is known about the temporal aspect of the dynamics – that is, about the path of intermediate brain states the brain takes from the start state to arrive at the target state.
We here analyse specifics of these trajectories, based on 123 meta-analytically derived brain states from the Neurosynth database. With dynamics constrained by a group-averaged structural connectome calculated from the Human Connectome Project subset of unrelated participants, we conduct full NCT analyses between all pairs of brain states. We then project all 123 states, as well as all trajectories onto a shared geometrical space determined by the full set of principal components (PCs) of the 123 Neurosynth states.
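Schematically, the projection step can be sketched as below: principal axes are derived from the matrix of brain states, and both states and trajectories are projected into the shared PC space. Shapes and values are illustrative placeholders, not the Neurosynth or HCP data.

```python
import numpy as np

# Project brain states and NCT trajectories onto a shared PC space derived
# from the set of states (shapes and names illustrative).
rng = np.random.default_rng(7)
states = rng.normal(size=(123, 200))              # 123 states x 200 brain regions
mean = states.mean(axis=0)
U, S, Vt = np.linalg.svd(states - mean, full_matrices=False)
pcs = Vt                                          # principal axes in region space
states_pc = (states - mean) @ pcs.T               # states in PC coordinates
trajectory = rng.normal(size=(50, 200))           # one state-to-state trajectory
trajectory_pc = (trajectory - mean) @ pcs.T       # same projection for trajectories
```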
Transitions between brain states might be optimized with regard to energetic cost, as well as with regard to the directness of the path between start and target [2]. We show that trajectories optimized for directness also closely correspond to straight line segments between states in PC space. In practice, these direct but highly costly transitions are not always feasible in biological systems geared towards energy minimisation (such as the brain) [3]. We show that a more balanced consideration of both cost and directness is indeed associated with more curved, that is, indirect, trajectories in PC space. Both the direction into which trajectories are deflected with respect to the direct path, and the magnitude of the deflection, are initially influenced by both the start and the target state. Over time, the influence of the target state increasingly diminishes, with trajectories ultimately being influenced only by the state from which they started. We show that this finding is persistent across NCT parameter settings and hypothesize that it is related to the brain's energy landscape, with energetically balanced trajectories explicitly avoiding traversal through energetically costly areas of the PC space (i.e. explicitly evolving via intermediate brain states that are energetically easier to reach, even if they might not be located on the direct path between start and target state).
Our results suggest that a model of the brain’s dynamics that is informed by biological plausibility needs to take into account not only target-state specifics and energetic cost of transitions, but necessarily also start-state specifics and characteristics of the temporal evolution of functional states – of the trajectories themselves.
Note: A descriptive figure is attached in the pdf document.
Thalamo-cortical Modelling to Advance Treatments of Tourette Syndrome | Angelica Pozzi
Angelica Pozzi, School of Mathematical Sciences, University of Nottingham, UK
Stephen Coombes, School of Mathematical Sciences, University of Nottingham, UK
Reuben O’Dea, School of Mathematical Sciences, University of Nottingham, UK
Stephen Jackson, School of Psychology, University of Nottingham, UK
Tourette syndrome is a neurological disorder characterised by motor and vocal tics, and it is particularly prevalent among children. Although tic severity often decreases with age, many adults continue to experience significant symptoms. Recent advances have shown that median nerve stimulation can effectively reduce the frequency and severity of tics, and a mechanistic understanding of its effectiveness can improve neurostimulation protocols. The thalamus is a body of neural cells that relays impulses from the sensory pathways to the cerebral cortex. Feedback from the cortex gives rise to thalamo-cortical loops that generate emergent brain rhythms from the interplay of single-cell ionic currents and network mechanisms; this circuit is believed to be dysfunctional in people with Tourette's. This work presents a mathematical model of the thalamo-cortical circuit built from neural mass models of the cortex coupled to mean-field models of the thalamus. Analysis of the system's response to external sensory drive in the form of median nerve stimulation can be performed, and its understanding will pave the way for the design of improved healthcare treatments utilising wearable devices.
Generalized dynamical phase reduction for stochastic oscillators | Alberto Pérez Cervera
Phase reduction is an important tool for studying coupled and driven oscillators.
The question of how to generalize phase reduction to stochastic oscillators remains actively debated. In this work, we propose a method to derive a self-contained stochastic phase equation of the form
$d\phi = a(\phi)\,dt + \sqrt{2D(\phi)}\,dW(t)$ that is valid not only for noise-perturbed limit cycles, but also for noise-induced oscillations. We show that our reduction captures the asymptotic statistics of qualitatively different stochastic oscillators, and use it to infer their phase-response properties.
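For illustration, the reduced equation can be integrated with the Euler-Maruyama scheme; the drift a(phi) and diffusion D(phi) below are assumed toy functions, since the method's inferred functions depend on the underlying oscillator.

```python
import numpy as np

# Euler-Maruyama integration of the reduced phase equation
#   d(phi) = a(phi) dt + sqrt(2 D(phi)) dW(t),
# with assumed illustrative choices for a(phi) and D(phi) >= 0.
rng = np.random.default_rng(8)
a = lambda phi: 1.0 + 0.2 * np.sin(phi)            # assumed drift
D = lambda phi: 0.05 * (1.0 + 0.5 * np.cos(phi))   # assumed phase-dependent diffusion
dt, T = 1e-3, 100.0
n = int(T / dt)
phi = np.empty(n); phi[0] = 0.0
for k in range(n - 1):
    noise = np.sqrt(2 * D(phi[k]) * dt) * rng.standard_normal()
    phi[k + 1] = phi[k] + a(phi[k]) * dt + noise
```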
P. Houzelstein, Group for Neural Theory, École Normale Supérieure. Paris, France
P. Thomas, Department of Mathematics, Applied Mathematics and Statistics, Case Western Reserve University, Cleveland, Ohio, USA
B. Lindner, Bernstein Center for Computational Neuroscience Berlin, Department of Physics, Humboldt Universität zu Berlin, Berlin, Germany
B. Gutkin, Group for Neural Theory, École Normale Supérieure. Paris, France
A. Pérez-Cervera, Department of Applied Mathematics, Universidad de Alicante, Spain
Brain-wide calcium imaging in zebrafish and generative network modelling reveal cell-level functional network properties of seizure susceptibility | Wei Qin
Wei Qin, Department of Anatomy and Physiology, University of Melbourne, VIC, Australia
Jessica Beevis, Queensland Brain Institute, University of Queensland, QLD, Australia
Maya Wilde, Queensland Brain Institute, University of Queensland, QLD, Australia
Sarah Stednitz, Department of Anatomy and Physiology, University of Melbourne, VIC, Australia
Josh Arnold, Queensland Brain Institute, University of Queensland, QLD, Australia
Itia Favre-Bulle, Queensland Brain Institute, University of Queensland, QLD, Australia
Ellen Hoffman, Department of Neuroscience, Yale School of Medicine, Yale University, New Haven, CT, USA
Ethan K. Scott, Department of Anatomy and Physiology, University of Melbourne, VIC, Australia
Epilepsy is a neurological disorder that causes recurrent seizures, but the underlying mechanisms are still unclear. Traditional methods, using data from humans, non-human primates, or rodents, have limitations in resolving the activity of single cells. An approach that captures the dynamics of individual neurons and their interactions within brain-wide networks could therefore be of great utility in understanding epilepsy. Zebrafish and calcium imaging offer such an approach, as they allow simultaneous in-vivo recording of neuronal activity across the brain at cellular resolution.
Zebrafish share genetic and physiological similarities with humans and can exhibit seizure-like behaviours in response to various drugs. One such drug is Pentylenetetrazol (PTZ), a pharmacological agent that blocks inhibitory GABAergic signalling, causing hyperexcitability and seizure-like activity. Additionally, mutations in the scn1lab gene, encoding a sodium channel, can also cause spontaneous seizures in zebrafish. In this study, we used in-vivo light-sheet calcium imaging, brain-wide and at cellular resolution, on wildtype and scn1lab-/- mutant zebrafish larvae under baseline and post-PTZ conditions.
We utilised network analyses and computational modelling to statistically quantify differences in network topology and dynamics between the two genotypes and conditions. Specifically, we examined the network of active neuronal cells involved in ictogenesis across microscopic to macroscopic scales. Our study reveals significant and consistent changes in brain network connectivity, indicating that scn1lab-/- mutations impact brain structure and function. Additionally, we developed a generative network model (GNM) at the cellular level to explain the wiring principles governing the development of both genotypes and the effects of PTZ on the brain-wide functional network. This novel model highlights brain regions associated with genotype differences, seizure severity, and overall network excitability and synchronisation. Combining experimental data and mathematical modelling, our approach provides a novel perspective on the mechanisms of epileptogenesis at a breadth and resolution that traditional epilepsy studies cannot achieve.
Structured Dynamics in The Algorithmic Agent | Giulio Ruffini
Giulio Ruffini (1), Francesca Castaldo (1), Jakub Vohryzek (2)
1 Neuroelectrics Barcelona SLU, Barcelona, Spain
2 Universitat Pompeu Fabra, Barcelona, Spain
In the Kolmogorov Theory of Consciousness, algorithmic agents utilize inferred compressive models to track coarse-grained data produced by simplified world models, capturing regularities that structure subjective experience and guide action planning. Here, we study the dynamical aspects of this framework by examining how the requirement of tracking natural data drives the structural and dynamical properties of the agent.
We first formalize the notion of a generative model using the language of symmetry from group theory, specifically employing Lie pseudogroups to describe the continuous transformations that characterize invariance in natural data. Then, adopting a generic neural network as a proxy for the agent's dynamical system and drawing parallels to Noether's theorem in physics, we demonstrate that data tracking forces the agent to mirror the symmetry properties of the generative world model. This dual constraint on the agent's constitutive parameters and dynamical repertoire enforces a hierarchical organization in the neural network consistent with the manifold hypothesis.
Our findings bridge perspectives from algorithmic information theory (Kolmogorov complexity, compressive modeling), symmetry (group theory), and dynamics (conservation laws, reduced manifolds), offering insights into the neural correlates of agenthood and structured experience in natural systems, as well as the design of artificial intelligence and computational models of the brain.
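A schematic way to state the tracking constraint, in our gloss rather than the authors' formalism: if natural data transform under a Lie (pseudo)group G and the agent must keep tracking the data, then its flow \phi_t must commute with the induced action of G on its state space,

\[
\phi_t(g \cdot z) = g \cdot \phi_t(z) \qquad \text{for all } g \in G,
\]

so that, by a Noether-type argument, each continuous symmetry generator yields a conserved quantity confining the dynamics to a reduced, lower-dimensional manifold.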
Laminar Neural Mass Model for Representing Alzheimer's Disease Electrophysiological Biomarkers | Roser Sanchez Todo
Roser Sanchez-Todo1,2, Borja Mercadal1, Edmundo Lopez-Sola1,2, Giulio Ruffini1
1 Neuroelectrics Barcelona SLU, Barcelona, Spain
2 Universitat Pompeu Fabra, Barcelona, Spain
Alzheimer’s disease (AD) is characterized by progressive cognitive decline associated with amyloid beta (Aβ) plaques and hyperphosphorylated tau (hp-τ) proteins. Despite advances in identifying its biomarkers, more research is needed to understand disease progression and to establish early, non-invasive, and cost-effective biomarkers from modalities such as MEG or EEG. Modeling the disease in a physically and physiologically realistic fashion is also crucial for building whole-brain models of patients for treatments such as noninvasive brain stimulation, including transcranial alternating current stimulation. Here, we employ the mesoscale Laminar Neural Mass Model (LaNMM) framework to model the impact of the disease on the electrophysiological biomarkers of AD, including alpha and gamma oscillations, which are critical for understanding disease mechanisms and progression.
Our computational model integrates our laminar framework and uses biologically informed parameters to represent the impact of Aβ and hp-τ on neural circuitry. The model incorporates physiologically realistic mechanisms, focusing on reducing parvalbumin-positive (PV+) interneuron connectivity to represent Aβ effects and the associated excitation-inhibition imbalance. The LaNMM offers a novel platform to replicate electrophysiological biomarkers observed in M/EEG across the disease continuum by simulating alpha and gamma oscillatory activity.
The model successfully replicates alpha-band slowing and gamma power reductions, consistent with clinical observations in AD. It demonstrates the interplay between neural hyperexcitability, spectral slowing, and oscillatory disruptions driven by Aβ and hp-τ dynamics. Importantly, this is the first biologically realistic model to replicate these biomarkers within a mesoscale framework.
This work advances our understanding of AD pathophysiology by linking molecular pathology to electrophysiological biomarkers using the LaNMM. It provides a foundation for developing computational tools to optimize diagnostic and therapeutic approaches in AD.
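As a schematic of where the Aβ effect enters, written in our own generic rate-model notation rather than the LaNMM's actual parameters, one can scale the PV+ inhibitory coupling by a factor that decreases with a hypothetical Aβ-load parameter \beta_{A\beta} \in [0,1]:

\[
\tau_{\mathrm{PV}}\,\dot{y}_{\mathrm{PV}}(t) = -y_{\mathrm{PV}}(t) + \big(1-\beta_{A\beta}\big)\,C_{\mathrm{PV}}\,S\big(v_{\mathrm{pre}}(t)\big),
\]

where S is the population sigmoid; increasing \beta_{A\beta} weakens PV+-mediated inhibition, shifting the excitation-inhibition balance and, in models of this class, the simulated gamma power.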
Neuronal field model analysis from a mathematical point of view | Lena Schadow
Stochastic dynamics play a fundamental role in modeling neuronal activity, capturing intrinsic noise and external variability in neural systems. In this talk, we analyze the stochastic neuronal model investigated by Kramer et al. [1] and originally proposed by Steyn-Ross et al. [2], to answer the questions of existence and uniqueness of solutions. We determine conditions and modifications under which the model, which consists of stochastic differential and wave equations, is well-posed, using methods from stochastic and functional analysis. These results provide insights into the interplay between noise and nonlinear dynamics, as well as into the model's applicability to studying phenomena such as seizure mechanisms and cortical activity.
[1] M. A. Kramer, H. E. Kirsch and A. J. Szeri; Pathological pattern formation and cortical propagation of epileptic seizures, J. R. Soc. Interface (2005)
[2] Moira L. Steyn-Ross, D. A. Steyn-Ross, J. W. Sleigh, and D. T. J. Liley; Theoretical electroencephalogram stationary spectrum for a white-noise-driven cortex: Evidence for a general anesthetic-induced phase transition, Phys. Rev. E , Vol. 60 (1999)
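In compressed form (our schematic notation, not the exact Steyn-Ross/Kramer equations), the model couples stochastic differential equations for the mean membrane potentials with damped wave equations for the long-range cortico-cortical fluxes:

\[
dh_{e,i} = f_{e,i}\big(h_e, h_i, \phi_{e,i}\big)\,dt + \sigma_{e,i}\,dW_t,
\qquad
\left(\frac{\partial}{\partial t} + v\Lambda\right)^{2}\phi_{e,i} = v^{2}\Lambda^{2}\,S_e(h_e),
\]

where S_e is a sigmoidal firing-rate function; well-posedness then hinges on the regularity of the noise and on Lipschitz-type conditions on the nonlinearities f_{e,i} and S_e.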
Multi-bump attractors in a neural field model with two firing thresholds | Helmut Schmidt
Bump attractors emerge in spatially extended models of the cortex, such as networks of spiking neurons or neural field models. They represent localised states of persistent activity that account for experimentally observed phenomena during the delay period of spatial working memory tasks. Solutions with multiple bump attractors exist in neural field models with smooth sigmoid firing rate functions, yet they are not analytically tractable. Neural field models with Heaviside step firing rate functions, on the other hand, allow one to obtain analytical solutions for such bump attractors. However, stable multi-bump solutions do not exist in this case due to the repelling behaviour between bumps that results from the lateral inhibition necessary to produce stable bumps.
Here we present a neural field model where the firing rate function is described by two Heaviside step functions. If the shape of the resulting firing rate function does not alter the fixed point structure, then this modification merely produces small quantitative changes to the solutions obtained in the one-threshold case. However, if the fixed point structure is altered, specifically if the firing rate of the resting state is non-zero, then we observe the emergence of stable multi-bump solutions. Here, the resting state itself produces inhibition at long distances, which counterbalances the repelling behaviour and equilibrates the bump solutions. Snaking bifurcations organise the number and stability of the emerging solutions, which can be obtained semi-analytically. The threshold conditions and the stability function can be computed analytically (as a function of the threshold crossings), and these threshold crossing conditions are then used to numerically produce the bifurcation diagrams. The relatively small number of threshold crossing conditions makes this scheme computationally efficient.
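One natural way to write such a two-threshold model, in our notation rather than necessarily the authors', replaces the usual single Heaviside nonlinearity with a sum of two steps:

\[
\partial_t u(x,t) = -u(x,t) + \int w(x-y)\Big[\kappa_1 H\big(u(y,t)-\theta_1\big) + \kappa_2 H\big(u(y,t)-\theta_2\big)\Big]dy,
\]

with thresholds \theta_1 < \theta_2, weights \kappa_1, \kappa_2, and a lateral-inhibition kernel w; when the resting state sits above the lower threshold it fires at a nonzero rate, and the long-range inhibition it generates is what counterbalances the mutual repulsion of neighbouring bumps.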
Stochastic gene expression drives correlated synaptic noise | Oleg Senkevich
Cian O’Donnell (Ulster University), Romain Veltz (Inria), Oleg Senkevich (Ulster University)
Recent experimental findings suggest that the sizes of individual synapses fluctuate substantially on time scales of hours to days, even in the absence of electrical activity. This may be caused by local translation bursts, since all mRNAs are stochastically delivered from the soma along highly elongated and branched dendrites. Here we obtain the exact statistics of the noise in tree-like compartmentalised neurons under the premise of the standard model of gene expression. For that, we develop a software library that computes expectations and spatiotemporal correlations of all variables in the model, involving gene activation/deactivation and mRNA and protein production, degradation and transport in arbitrary neurons, by solving moment equations. This approach is in perfect agreement with the corresponding Monte Carlo simulations while being much less computationally expensive than the Gillespie algorithm, allowing computation of mRNA/protein distributions across entire neurons. The results suggest that the noise in synaptic protein counts can be strongly super-Poissonian and highly correlated for dendritic neighbours, with autocorrelation time scales spanning from minutes to weeks. The correlations depend on many factors, including the neuron’s morphology and the ribosomes’ locations. The method also allows efficient quantification of the system’s response to parameter perturbations, and we consider heterosynaptic plasticity occurring due to resource sharing. We show that the model can exhibit both positive and negative heterosynaptic plasticity with long transients, which may lead to misinterpretation of experimental results obtained with observation times that are too short.
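As a minimal illustration of the moment-equation approach, consider the standard two-stage (mRNA-to-protein) expression model with no space, transport, or gene switching; the rates below are made-up illustrative values, and this toy is a sketch of the idea rather than the library itself. The first two moments obey a closed system of ODEs that can be integrated directly:

    import numpy as np
    from scipy.integrate import solve_ivp

    km, gm = 2.0, 1.0   # mRNA production / degradation rates (illustrative)
    kp, gp = 4.0, 0.1   # protein production (per mRNA) / degradation rates

    def moments(t, y):
        # y = [<m>, <p>, <m^2>, <mp>, <p^2>] for the linear birth-death network
        m, p, m2, mp, p2 = y
        dm  = km - gm * m
        dp  = kp * m - gp * p
        dm2 = km + gm * m + 2.0 * (km * m - gm * m2)
        dmp = km * p + kp * m2 - (gm + gp) * mp
        dp2 = kp * m + gp * p + 2.0 * (kp * mp - gp * p2)
        return [dm, dp, dm2, dmp, dp2]

    sol = solve_ivp(moments, (0.0, 200.0), [0.0] * 5, rtol=1e-8)
    m, p, m2, mp, p2 = sol.y[:, -1]
    print("protein Fano factor:", (p2 - p**2) / p)   # > 1: super-Poissonian
    print("analytic prediction:", 1.0 + kp / (gm + gp))

For these rates both numbers come out near 4.6, well above the Poisson value of 1; the library described above plays the same game for the full spatial model on a dendritic tree, where the moment system is larger but still closed.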
Effects of biophysical synaptic dynamics on a population of neurons | Brian Skelly
TBP
A biophysical model of AMPA receptor dynamics | Brian Skelly
The network activity under the model is inherently homeostatic as the DCM rewiring dynamics are bounded from above by a conserved pre- and post-synaptic strength degree distribution. Hebbian potentiation of synaptic strength emerges since the DCM preferentially rewires neurons with proportionally many unconnected nanomodule resources; that is, those neurons which were recently active. The coupling of the structural to the functional plasticity dynamics by limiting synapse deletion to silent synapses further produces the Hebbian potentiation of synapse number between similarly active neurons.
Dynamic Regulation of Synaptic Plasticity by Astrocytes: A Model of D-Serine and NMDAR Subtypes | Lorenzo Squadrani
Authors: Lorenzo Squadrani (1), Janko Petkovic (1), Pietro Verzelli (1), Tatjana Tchumatchenko (1)
Affiliations: (1) University Hospital Bonn
Learning is a fundamental brain function, allowing for experience-dependent adaptation to an ever-changing environment.
Previous research has linked learning to synaptic plasticity—the activity-dependent modification of neuronal connections—with the NMDA receptor (NMDAR) playing a key role due to its high calcium permeability and unique gating properties.
NMDAR activation requires (1) agonist binding (glutamate), (2) removal of channel-blocking magnesium ions, and (3) coagonist binding (D-serine).
Satisfying conditions (1) and (2) requires simultaneous presynaptic and postsynaptic activity, allowing the NMDAR to effectively act as a coincidence detector.
The function of D-serine gating is, however, far less clear.
Recent works highlighted how astrocytes dynamically regulate D-serine levels.
Astrocytes are known to play a crucial role in orchestrating synaptic plasticity, but the specific mechanisms by which their neuroactive transmitters, like D-serine, control these processes remain largely unexplored.
Here, we develop a computational model to investigate the molecular interaction between D-serine and NMDARs, and its functional role in modulating synaptic plasticity and enhancing learning.
We model the regulation of synaptic D-serine as an astrocyte-mediated feedback of the postsynaptic neural activity.
D-serine interaction with NMDARs takes into account the existence of multiple NMDAR subtypes with different kinetics.
In particular, we explore the hypothesis that two distinct groups of NMDARs control Long-Term Potentiation (LTP) and Depression (LTD).
We demonstrate that, in this case, the D-serine levels can dynamically modify the balance between LTD and LTP, providing at the same time a mechanism for weight stabilization and fast adaptation to changes.
We show how astrocyte-neuron interaction modulates the shape of spike-timing-dependent plasticity (STDP) and rate-based plasticity.
Despite being essentially phenomenological, our model retains a close analogy with known biophysical mechanisms, making it a powerful framework to further study the functional role of D-serine and other molecular mechanisms in synaptic plasticity.
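To make the stabilization argument concrete, here is a deliberately crude rate-based caricature; the pool affinities, the quasi-steady astrocyte feedback, and all numbers are our assumptions, not the authors' model. Two NMDAR pools with different D-serine affinities gate LTP and LTD, and D-serine tracks postsynaptic activity, so the balance tips toward depression as activity grows and the weight settles at an interior fixed point:

    import numpy as np

    def occ(s, kd):
        # D-serine co-agonist site occupancy (Hill-type, coefficient 1, assumed)
        return s / (s + kd)

    KD_LTP, KD_LTD, ALPHA = 0.2, 2.0, 1.5   # LTP pool saturates first; ALPHA biases LTD

    def serine(r_post, s_max=5.0):
        # astrocyte-mediated feedback, taken as fast (quasi-steady) for simplicity
        return s_max * r_post / (r_post + 1.0)

    w, r_pre, eta = 0.1, 5.0, 1e-3
    for _ in range(50_000):
        r_post = w * r_pre                               # linear postsynaptic rate model
        s = serine(r_post)
        gate = occ(s, KD_LTP) - ALPHA * occ(s, KD_LTD)   # net LTP-minus-LTD drive
        w = np.clip(w + eta * r_pre * r_post * gate, 0.0, 2.0)

    print(f"steady-state weight: {w:.3f}")

With these numbers the weight converges to about 0.43 from a wide range of initial conditions; shifting the pool affinities or the feedback gain moves the set point, illustrating how D-serine levels can retune the LTP/LTD balance while keeping weights bounded.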
Differences in spatial dynamics: effects of adaptation versus h-currents in a Wilson-Cowan field | Ronja Strömsdörfer
Abstract:
During non-REM sleep, the brain shows repetitive patterns of slow oscillations (SOs, <1 Hz) alternating between periods of active and silent neuronal activity. These activity patterns are assumed to play a crucial role in memory consolidation. Intracranial recordings during sleep measure activity at high spatial and temporal resolution and allow the propagation direction and velocity of traveling SO waves to be identified. The formation of these rhythms has been an ongoing subject of in-silico studies that utilize neural mass and field models to identify candidate mechanisms in support of empirical research. A working hypothesis is that SOs are driven by an adaptive process, with hyperpolarizing spike-frequency adaptation and hyperpolarization-activated currents both discussed as potential mechanisms. We aim to deepen the knowledge about both candidate mechanisms and quantify how they affect the dynamical landscape of an adaptive Wilson-Cowan neural field model in which the excitatory population is equipped with either of the two feedback currents. We show that the adaptive Wilson-Cowan neural field model is suitable for simulating traveling SO waves on the macroscale and for local regions of the cortex. It allows us to solve semi-analytically for components that dominate spatial features, such as the minimum wavenumber occurring in Turing-unstable states in which spatiotemporal patterns can emerge. In a state-space exploration, we find regimes of Hopf and Turing instability. The temporal dynamics of emerging traveling waves are qualitatively and quantitatively similarly affected by both mechanisms: the dominant temporal frequency increases when the feedback strength of either mechanism is increased, and decreases when the time scale of the mechanism is slowed.
The spatial dynamics, by contrast, are not similarly affected. For specific regimes in which Turing instability occurs, the differences in spatial dynamics are more pronounced; we see this explicitly in Turing-unstable states that do not occur without a spatial domain. Among other differences, h-currents generally promote destabilization of high-activity homogeneous steady states through an emerging Turing bifurcation, while adaptation facilitates Turing instability in low-activity states.
Authors: Ronja Strömsdörfer (Technische Universität Berlin), Klaus Obermayer (Technische Universität Berlin)
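In schematic form (our simplified notation for this class of models, not the exact equations of the study), the excitatory population carries one slow feedback variable; for spike-frequency adaptation,

\[
\tau_E\,\partial_t E = -E + S\!\left(\int K(x-y)\big[w_{EE}E(y,t) - w_{EI}I(y,t)\big]\,dy - b\,a(x,t) + I_{\mathrm{ext}}\right),
\qquad
\tau_a\,\partial_t a = -a + E,
\]

whereas an h-current is a depolarizing feedback that activates at low activity and therefore enters the input with the opposite sign; the feedback strength b and the time scale \tau_a are the two parameters whose variation is described above.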
Weighted sparsity regularization for solving the inverse EEG problem | Niranjana Sudheer
We study the potential of detecting brain activity in terms of dipoles using weighted sparsity regularization for a very specific choice of weighting matrix. The work builds on theoretical results proven in our previous studies, which require modifications to fit into the classical EEG framework. More precisely, to represent a dipole with an arbitrary orientation at a given position, we need three basis dipoles. Our previous results guarantee recovery of any one of these basis dipoles, but not of a linear combination of them. We will explain why this is the case before suggesting a remedy: introducing more than three dipoles at each position, i.e., a redundant basis (a frame). This provides a framework that is more in line with the theoretical assumptions needed to guarantee the recovery of a single dipole with arbitrary orientation. The performance of our method is demonstrated through several different experiments, and we illustrate that the method does not suffer from depth bias and has a low dipole localization error and low spatial dispersion. We will also show that the dipole localization error decreases with the addition of extra basis dipoles.
Authors:
Ole Løseth Elvetun (Faculty of Science and Technology, Norwegian University of Life Sciences, P.O. Box 5003, NO-1430 Ås, Norway)
Bjørn Fredrik Nielsen (Faculty of Science and Technology, Norwegian University of Life Sciences, P.O. Box 5003, NO-1430 Ås, Norway)
Niranjana Sudheer (Faculty of Science and Technology, Norwegian University of Life Sciences, P.O. Box 5003, NO-1430 Ås, Norway)
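In standard notation (ours, not necessarily the authors'), the reconstruction solves a weighted LASSO problem

\[
\min_{x}\;\tfrac{1}{2}\,\lVert A x - b \rVert_2^2 + \lambda\,\lVert W x \rVert_1,
\]

where A is the lead-field matrix, b the EEG measurements, and W the specific weighting matrix mentioned above; the proposed remedy amounts to letting each source position contribute more than three columns of A, i.e., a redundant frame of dipole orientations, so that a dipole with arbitrary orientation itself admits the sparse representation that the recovery theory requires.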
Phase-Locking Induced by Transcranial Alternating Current Stimulation in a Balanced Network of Adaptive Exponential Integrate-and-Fire Neurons | Saeed Taghavi
Hawkes AutoRegressive processes, a new model for functional connectivity estimation: theoretical analysis | Leblanc Théo
In this poster we present a new model for estimating functional connectivity while including all possible interactions between spikes and LFPs. This model mixes Hawkes processes and Autoregressive processes. We study the theoretical properties of this stochastic model and also propose a robust method for estimating functional connectivity using a LASSO procedure.
Authors: Théo Leblanc (PhD student, CEREMADE) and his PhD supervisors, Vincent Rivoirard (CEREMADE) and Patricia Reynaud-Bouret (Université Côte d’Azur).
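To give the flavour of the model in our own notation (the authors' exact formulation may differ), the spiking intensity of neuron i combines Hawkes-type interactions with past spikes and a regression on past LFP values,

\[
\lambda^{(i)}(t) = \mu_i + \sum_{j}\int_{0}^{t^-} h_{j\to i}(t-u)\,dN^{(j)}(u) + \sum_{k}\int_{0}^{t^-} g_{k\to i}(t-u)\,V^{(k)}(u)\,du,
\]

while each LFP channel V^{(k)} follows an autoregressive recursion driven by its own past and by past spikes. All interaction kernels are then estimated jointly by minimizing a least-squares contrast with an \ell_1 (LASSO) penalty, which sets most kernels to zero and reads functional connectivity off the surviving ones.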
A nonlocal variational framework for optimal neural representations | Gengshuo Tian
Balance between local and global connections enhances spatiotemporal complexity in a cortical network model | Lluc Tresseras
Balance between local and global connections enhances spatiotemporal complexity in a cortical network model
Lluc Tresserras Pujadas1, Leonardo Dalla Porta1, Maria V. Sanchez-Vives1,2
1Systems Neuroscience, Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
2ICREA, Barcelona, Spain
Abstract
The cerebral cortex exhibits a rich repertoire of spontaneous activity patterns. The spatiotemporal patterns of the spontaneous activity depend on the state of the brain, which ranges from highly synchronized (e.g., sleep) to asynchronous states (e.g., awake). During NREM sleep, brain activity is characterized by the presence of slow oscillations (SO), in which neuronal circuits spontaneously switch between periods of sustained neuronal firing (Up states) and periods of silence (Down states) at approximately 1 Hz [1]. Conversely, during wakefulness, the cerebral cortex is characterized by asynchronous activity in which the synchronized, low-frequency content is suppressed. An inherent property of the cerebral cortex is therefore to transition between different brain states with different activity patterns. This also occurs in pathology, as brain lesions can induce local synchrony, giving rise to different concurrent brain states [2]. A possible way to differentiate between these states is to study their spatiotemporal complexity. Several methods are available for estimating cortical complexity, but the perturbational complexity index (PCI) is the one commonly used in clinical studies [3].
While the change in complexity across brain states, whether physiological or pathological, is dynamic, complexity also relies on network connectivity. Theoretical studies have proposed that different structural connectivity patterns can give rise to different levels of emergent complexity. For example, connectivity patterns of the cerebral cortex such as a high density of connections and small-world connectivity are associated with high values of neural complexity [4]. These findings suggest that brain complexity depends on the network's functional structure, which determines how neuronal activity propagates along the cortex [5]. By examining the relationship between network structure and emergent patterns, we can gain a deeper understanding of brain function as well as develop new treatments for brain disorders. In the current study, we employed a two-dimensional network model of the SO observed in in vitro cortical slices to explore the link between network structure and spontaneous and perturbed cortical spatiotemporal activity. The model, consisting of pyramidal cells (excitatory) and interneurons (inhibitory), captures the dynamics of these cells using Hodgkin-Huxley equations. Neurons are randomly distributed on a 50×50 mesh and locally interconnected through biologically plausible synaptic dynamics [6]. By manipulating a parameter (LRCp) that governs the probability of the excitatory cells forming long-range connections with their postsynaptic contacts, we explored a large range of network structures, from segregated to fully integrated configurations, and their resulting impact on the emergent network complexity.
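A sketch of the wiring manipulation as we read it: the parameter name LRCp comes from the abstract, while the out-degree, local radius, and uniform long-range rule below are our assumptions for illustration. Each outgoing excitatory contact is kept local with probability 1 - LRCp and redirected to a uniformly random target with probability LRCp:

    import numpy as np

    rng = np.random.default_rng(1)
    N, K = 50, 8          # 50x50 mesh, K outgoing excitatory contacts per neuron
    LRCp = 0.05           # probability that a contact is long-range

    targets = np.empty((N * N, K), dtype=int)
    for n in range(N * N):
        i, j = divmod(n, N)
        for k in range(K):
            if rng.random() < LRCp:
                targets[n, k] = rng.integers(N * N)    # long-range: uniform over the mesh
            else:
                di, dj = rng.integers(-2, 3, size=2)   # local: small neighbourhood, periodic
                targets[n, k] = ((i + di) % N) * N + (j + dj) % N

Sweeping LRCp from 0 to 1 interpolates between a purely segregated, locally connected network and a fully integrated one, which is the range over which the emergent complexity is evaluated.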
References
[1] Steriade M, Nunez A, Amzica F. A novel slow (<1 Hz) oscillation of neocortical neurons in vivo: depolarizing and hyperpolarizing components. Journal of Neuroscience. 1993; 13(8):3252–3265.
[2] Massimini M, Corbetta M, Sanchez-Vives MV, Andrillon T, Deco G, Rosanova M, Sarasso S. Sleep-like cortical dynamics during wakefulness and their network effects following brain injury. Nature Communications. 2024; 15(1):7207.
[3] Casali AG, Gosseries O, Rosanova M, Boly M, Sarasso S, Casali KR, Casarotto S, Bruno M-A, Laureys S, Tononi G, Massimini M. A theoretically based index of consciousness independent of sensory processing and behavior. Science Translational Medicine. 2013; 5(198):198ra105.
[4] Tononi G, Sporns O, Edelman GM. A measure for brain complexity: relating functional segregation and integration in the nervous system. Proceedings of the National Academy of Sciences. 1994; 91:5033–5037.
[5] Sanchez-Vives MV, Compte A. Structural statistical properties of the connectivity could underlie the difference in activity propagation velocities in visual and olfactory cortices. International Work-Conference on the Interplay Between Natural and Artificial Computation. 2005; 133–142.
[6] Barbero-Castillo A, Mateos-Aparicio P, Dalla Porta L, Camassa A, Perez-Mendez L, Sanchez-Vives MV. Impact of GABAA and GABAB inhibition on cortical dynamics and perturbational complexity during synchronous and desynchronized states. Journal of Neuroscience. 2021; 41(23):5029–5044.
Short-Term Plasticity modulates UP and DOWN cortical dynamics | Catalina Vich
Generalized Tripod Gaits: A Topological Perspective on Neuron Networks | Rubén Vigara
Characterization of a bio-realistic cortical column during the generation of alpha rhythm | Pablo Vizcaíno García
Characterization of a bio-realistic cortical column during the generation of alpha rhythm
Pablo Vizcaíno (1,2,3), Fernando Maestú (1,3), Alireza Valizadeh (1), Gianluca Susi (1,2,3)
1.- Zapata-Briceño Institute for Human Intelligence
2.- Department of Structure of Matter, Thermal Physics and Electronics, School of Physics, Complutense University of Madrid, Spain
3.- Center for Cognitive and Computational Neuroscience, Complutense University of Madrid, Spain
The alpha rhythm is continuously investigated due to its prominence in the resting-state brain. Amongst the many endeavours that pursue its understanding, Shimoura et al. (2023) employed a spiking-neuron cortical column model, based on the previous work of Potjans and Diesmann (2014), to test two hypotheses for the generation of the alpha rhythm: (1) intrinsic generation by pyramidal cortical neurons of layer 5; (2) a thalamocortical loop delay. It was observed that both mechanisms could generate the alpha rhythm. In whole-brain models based on neural masses, such as the Jansen-Rit model, the alpha rhythm is intrinsically generated by each individual population (see Cabrera-Álvarez et al., 2023). This agrees with Shimoura et al.'s hypothesis of alpha being generated inside each column by cortical neurons in layer 5.
In this work, we aim to further explore the possible generation of the alpha rhythm using a simple motif of interconnected cortical columns and a thalamocortical layer. To this end, we build upon the design of a spiking cortical column by Potjans & Diesmann (2014) and implement an interconnected set of full-spiking columns encompassing 80,000 neurons and 0.3 billion synapses. The connections have been derived from experimental data, utilising diffusion magnetic resonance imaging and tractography techniques. We explore both hypotheses, including a more biologically comprehensive thalamic model, characterising its impact on column synchrony, irregularity, and power spectral density, and how these measures change in the presence or absence of the thalamus. An increase in alpha frequency in layer L4E is observed once the thalamus is connected to the column, but further research is needed to properly understand and characterise the underlying mechanisms. We also explore the effects of thalamocortical synaptic delays on the activity of the column and on the propagation of signals from the thalamus to the column.
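As a back-of-envelope consistency check on the loop-delay hypothesis (our arithmetic, not the paper's), a delayed inhibitory feedback loop tends to oscillate with a period of roughly twice the round-trip delay:

\[
f \approx \frac{1}{2\,\tau_{\mathrm{loop}}}, \qquad \tau_{\mathrm{loop}} \approx 50\ \mathrm{ms} \;\Rightarrow\; f \approx 10\ \mathrm{Hz},
\]

so physiologically plausible thalamocortical round-trip delays fall naturally in the alpha band, which is why varying the thalamocortical synaptic delay is an informative manipulation.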
Diversity-induced decoherence in a slow-fast neuron model | Marius Yamakou
Presenting Author: Marius Yamakou; Department of Data Science, Friedrich-Alexander-University Erlangen-Nuremberg, Germany
Other Authors: Els Heinsalu and Marco Patriarca; National Institute of Chemical Physics and Biophysics – Akadeemia tee 23, Estonia
Other Author: Stefano Scialla; Department of Engineering, Università Campus Bio-Medico di Roma – Via Á. del Portillo 21, 00128 Rome, Italy
—
Abstract: The effects of noise and heterogeneity (or diversity) on neural dynamics have been extensively studied. Some studies emphasized the possibility of amplifying the response of a network to an external signal driven by noise or diversity. Other studies highlighted that noise or diversity could generate significant resonance effects, even without an external signal, causing coherent oscillations. An example of the latter kind is self-induced stochastic resonance (SISR). On the other hand, only a few works have analyzed the combined effects of noise and diversity in neural dynamics. These studies mostly led to the conclusion that adding optimal diversity on top of noise further enhances the resonance effects caused by noise alone, i.e., that the role of optimal diversity is always constructive. However, in this talk, we use slow-fast analysis, a mean-field approach, and numerical simulations to demonstrate that, in contrast to previous literature showing that network diversity can always be optimized to enhance collective behaviors such as synchronization or coherence, the effect of diversity on SISR can only be antagonistic, a nontrivial effect which we call diversity-induced decoherence (DIDC). Our result indicates that whether diversity enhances or deteriorates noise-induced resonance phenomena in neurons strongly depends on the underlying mechanism.
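The setting can be summarized as a coupled population of noise-driven slow-fast units with heterogeneous parameters (our generic notation, not the specific model of the talk):

\[
\varepsilon\,dv_i = \Big[f(v_i, w_i) + \frac{\kappa}{N}\sum_{j=1}^{N}(v_j - v_i)\Big]dt + \sigma\,dW_i(t),
\qquad
dw_i = g(v_i, w_i;\,a_i)\,dt,
\]

with time-scale separation 0 < \varepsilon \ll 1 and diversity entering through the spread of the parameters a_i. SISR tunes the noise amplitude \sigma so that each unit produces coherent noise-induced spiking; DIDC is the finding that widening the distribution of the a_i can only degrade, never enhance, the coherence of the resulting collective oscillation.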
Coherent states in the interacting populations of interneurons and pyramidal cells with ring nonlocal connections | Denis Zakharov
Coherent states in the interacting populations of interneurons and pyramidal cells with ring nonlocal connections
Denis Zakharov (1), Daniil Radushev (1), Olesya Dogonasheva (2), and Boris Gutkin (3)
(1) Institute for Cognitive Neuroscience, HSE University
(2) Institut de l’Audition, The Institut Pasteur, Université de Paris Cité
(3) Département d’études cognitives, ENS
This work studies synchronization processes in a network consisting of populations of interneurons (IN) and pyramidal (PY) cells. In each of the populations, the interaction is carried out through connections with a nonlocal ring topology. In the absence of connections between populations, depending on the parameter values and initial conditions, various coherent modes can be realized in each subnetwork: full synchronization, cluster modes, traveling waves, static chimera states, traveling chimera states, etc. We have studied two cases of connections between populations:
1) The case of unidirectional nonlocal connections from interneurons to pyramidal cells;
2) The case of bidirectionally interacting populations.
Note that these cases can be considered as models of gamma rhythm generation: the first of interneuronal gamma (ING), the second of pyramidal-interneuronal gamma (PING).
We found that in the first case, the ring of PY cells qualitatively reproduces the dynamic state of the IN population. For example, if the initial state of the IN population is a traveling multichimera state, then a similar traveling multichimera state is observed in the PY population. Notably, the PY neurons at the same time showed burst rather than spike activity (typically 3-5 spikes per burst). In other words, in the ING model, the PY population acted as a passive amplifier of the signal from the IN subnetwork.
In the second case, with bidirectional interaction of the populations, the PY neurons demonstrated the ability to exert a stabilizing influence on the dynamics of the IN population. We showed that with sufficiently strong excitatory connections, the PY signal can significantly transform the state of the whole network. Examples of transitions between IN modes induced by the action of PY cells include: asynchronous state to synchronous state, traveling clusters of subthreshold oscillations to traveling wave, breathing chimera to stable multichimera, traveling multichimera to static multichimera, traveling multichimera to traveling wave, and static multichimera with two synchronous clusters to two-cluster (antiphase) synchronization. Regardless of the presence or absence of PY-IN connections, after a dynamic mode is established in the network, both populations demonstrate qualitatively similar dynamic regimes.
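The nonlocal ring topology used within each population has the standard form (our notation):

\[
\dot{x}_i = F(x_i) + \frac{\kappa}{2R}\sum_{j=i-R}^{i+R} G(x_j, x_i), \qquad i = 1,\dots,N \ (\text{indices mod } N),
\]

in which each of the N neurons interacts with its R nearest neighbours on either side; chimera and multichimera states are precisely the regimes in which coherent and incoherent segments of such a ring coexist.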