Break-up and Recovery of Harmony between Direct and Indirect Pathways in the Basal Ganglia; Huntington’s Disease and Treatment

arXiv:2310.11635v2 Announce Type: replace Abstract: The basal ganglia (BG) in the brain exhibit diverse functions for motor control, cognition, and emotion. These BG functions arise through competitive harmony between two competing pathways, the direct pathway (DP), which facilitates movement, and the indirect pathway (IP), which suppresses movement. When this harmony breaks down, pathological states appear, with disorders of movement, cognition, and psychiatric function. In this paper we consider Huntington's disease (HD), a genetic neurodegenerative disorder causing involuntary movements and severe cognitive and psychiatric symptoms. In HD, the number of D2 SPNs ($N_{\rm D2}$) is reduced by degenerative loss; hence, by decreasing $x_{\rm D2}$ (the fraction of $N_{\rm D2}$), we investigate the break-up of harmony between DP and IP in terms of their competition degree ${\cal C}_d$, given by the ratio of the strength of the DP (${\cal S}_{DP}$) to that of the IP (${\cal S}_{IP}$), i.e., ${\cal C}_d = {\cal S}_{DP} / {\cal S}_{IP}$. In HD the IP is under-active, in contrast to Parkinson's disease where it is over-active, and so ${\cal C}_d$ increases above its normal value. As a result, hyperkinetic dyskinesia such as chorea (involuntary jerky movement) occurs. We also investigate treatment of HD based on optogenetics and GP ablation, which increase the strength of the IP and thereby restore the harmony between DP and IP. Finally, we study the effect of the loss of healthy synapses in all BG cells: this loss further increases the disharmony between DP and IP and worsens the symptoms of HD.
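As a numerical illustration of the competition degree defined above, the minimal Python sketch below computes ${\cal C}_d = {\cal S}_{DP} / {\cal S}_{IP}$. It assumes, as one plausible reading rather than the authors' exact definition, that each pathway's strength is the time-averaged magnitude of the synaptic current it delivers to the BG output nucleus; all signal values are synthetic.

```python
import numpy as np

def competition_degree(I_dp, I_ip):
    """Competition degree C_d = S_DP / S_IP.

    I_dp, I_ip: arrays of synaptic currents delivered to the BG output
    nucleus by the direct and indirect pathways. Strengths are taken
    here as time-averaged magnitudes (an assumption for illustration).
    """
    return np.abs(I_dp).mean() / np.abs(I_ip).mean()

# Illustration: an under-active IP (as in HD) raises C_d above the
# healthy value, whereas an over-active IP (as in Parkinson's disease)
# would lower it.
rng = np.random.default_rng(0)
I_dp = rng.normal(-20.0, 2.0, 10_000)          # direct-pathway current (a.u.)
I_ip_healthy = rng.normal(-20.0, 2.0, 10_000)  # balanced IP -> C_d ~ 1
I_ip_hd = 0.5 * I_ip_healthy                   # degenerated D2 SPNs weaken IP
print(competition_degree(I_dp, I_ip_healthy))  # ~1.0 (harmony)
print(competition_degree(I_dp, I_ip_hd))       # ~2.0 (hyperkinetic regime)
```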

A Novel Method for Schizophrenia Classification Using Nonlinear Features and Neural Networks

arXiv:2402.14819v1 Announce Type: new Abstract: Electroencephalography (EEG) is a notable method for recording brainwaves to identify neurological problems, and a trained neurophysician can learn much about how the brain functions from EEG recordings. Conventionally, however, EEGs are used mainly to examine neurological problems such as seizures. Yet abnormal neural circuitry can also be present in psychiatric illnesses such as schizophrenia, so EEGs can serve as an alternative source of data for the detection and classification of psychological disorders. We conduct a study on the classification of EEG data obtained from healthy individuals and individuals with schizophrenia. The inherently nonlinear nature of brain waves is exploited for dimensionality reduction of the data: nonlinear parameters, namely the Lyapunov exponent (LE) and the Hurst exponent (HE), are selected as the essential features. The EEG data were obtained from the openly available EEG database of M.V. Lomonosov Moscow State University. For noise reduction, the recently developed tunable Q-factor wavelet transform (TQWT) is used. Finally, for classification, the 16-channel EEG time series are converted into spatial heat maps of the aforementioned features, and a convolutional neural network (CNN) is designed and trained on this modified data format.
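A minimal Python sketch of such a pipeline is given below; it is an illustration only, not the paper's code. It estimates one nonlinear feature per channel and time window (a rescaled-range Hurst exponent, standing in for the paper's LE/HE features) and assembles the values into a channel-by-window heat map of the kind that would be fed to the CNN; the TQWT denoising step and the Lyapunov-exponent estimator are omitted.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a 1-D series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        chunks = x[: n - n % size].reshape(-1, size)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        R = dev.max(axis=1) - dev.min(axis=1)
        S = chunks.std(axis=1) + 1e-12
        rs.append((R / S).mean())
        sizes.append(size)
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

def eeg_to_heatmap(eeg, win=256):
    """Convert a (16, T) EEG array into a (channels x windows) feature map.

    Each pixel is a nonlinear feature of one channel in one time window;
    a second map (e.g. Lyapunov exponents) could be stacked to give a
    two-channel image for the CNN.
    """
    n_ch, T = eeg.shape
    n_win = T // win
    heat = np.zeros((n_ch, n_win))
    for c in range(n_ch):
        for w in range(n_win):
            heat[c, w] = hurst_rs(eeg[c, w * win:(w + 1) * win])
    return heat

# Example with synthetic data standing in for denoised (TQWT) EEG.
eeg = np.random.default_rng(1).standard_normal((16, 16 * 256))
print(eeg_to_heatmap(eeg).shape)  # (16, 16) image to be fed to a small CNN
```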

Magnetic Nanoparticles for Neural Engineering

arXiv:2402.13260v1 Announce Type: new Abstract: Magnetic nanoparticles (MNPs) are the foundation of several new strategies for neural repair and neurological therapies. The fact that a remote force can act on MNPs within the cytoplasmic space constitutes the essence of many new neurotherapeutic concepts. MNPs with predesigned physicochemical characteristics can interact with external magnetic fields to apply mechanical forces to definite areas of the cell and thereby modulate cellular behaviour. Magnetic actuation to direct the outgrowth of neurons after nerve injury has already demonstrated therapeutic potential for neural repair. When these magnetic cores are functionalized with molecules such as nerve growth factors or neuroprotective agents, multifunctional devices can be developed. This chapter reviews some of these new nanotechnology-based solutions for neurological diseases, specifically those based on engineered MNPs for neuroprotection and neuroregeneration, including the use of MNPs as magnetic actuators to guide neural cells, modulate intracellular transport and stimulate axonal growth after nerve injury.

Learning optimal integration of spatial and temporal information in noisy chemotaxis

We investigate the boundary between chemotaxis driven by spatial estimation of gradients and chemotaxis driven by temporal estimation. While it is well known that spatial chemotaxis becomes disadvantageous for small organisms at high noise levels, it is unclear whether the optimal strategy switches discontinuously or varies continuously between the two. Here, we employ deep reinforcement learning to study the possible integration of spatial and temporal information in an a priori unconstrained manner. We parameterize such a combined chemotactic policy by a recurrent neural network and evaluate it using a minimal theoretical model of a chemotactic cell. By comparing with constrained variants of the policy, we show that it converges to purely temporal and purely spatial strategies at small and large cell sizes, respectively. We find that the transition between the regimes is continuous, with the combined strategy outperforming, in the transition region, both the constrained variants and models that explicitly integrate spatial and temporal information. Finally, using the attribution method of integrated gradients, we show that the policy relies on a non-trivial combination of spatially and temporally derived gradient information, in a ratio that varies dynamically along the chemotactic trajectories.
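The recurrent-policy idea can be sketched as follows; this is a minimal illustration with hypothetical names and sizes, not the authors' architecture. The network receives noisy receptor readings distributed around the cell perimeter, so spatial comparisons are available within a single step, while the recurrent hidden state allows temporal comparisons across steps; deep reinforcement learning would then train the policy to climb the gradient.

```python
import torch
import torch.nn as nn

class ChemotaxisPolicy(nn.Module):
    """Recurrent policy mapping receptor readings to a stochastic turn angle."""
    def __init__(self, n_receptors=8, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_receptors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # mean and log-std of the turn angle

    def forward(self, obs, h=None):
        # obs: (batch, time, n_receptors) noisy binding counts around the perimeter
        out, h = self.rnn(obs, h)
        mu, log_std = self.head(out).unbind(-1)
        return mu, log_std.exp(), h

policy = ChemotaxisPolicy()
obs = torch.randn(1, 10, 8)                 # 10 steps of 8 receptor readings
mu, std, h = policy(obs)
turn = torch.distributions.Normal(mu[:, -1], std[:, -1]).sample()
```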

Optimal input reverberation and homeostatic self-organization towards the edge of synchronization

Transient or partial synchronization can be used to perform computations, whereas a fully synchronized network is frequently associated with epileptic seizures. Here, we propose a homeostatic mechanism that is capable of maintaining a neuronal network at the edge of a synchronization transition, thereby avoiding the harmful consequences of full synchronization. We model neurons by maps, since these are dynamically richer than integrate-and-fire models and more computationally efficient than conductance-based approaches. We first describe the synchronization phase transition of a dense network of neurons with different tonic spiking frequencies coupled by gap junctions. We show that, at the critical point of this transition, inputs reverberate optimally through the network activity via transient synchronization. Then, we introduce a local homeostatic dynamics in the synaptic coupling and show that it produces a robust self-organization toward the edge of this phase transition. We discuss the potential biological consequences of this self-organization process, such as its relation to the Brain Criticality hypothesis, its input-processing capacity, and how its malfunction could lead to pathological synchronization.
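The self-organization mechanism can be illustrated with the toy sketch below; it is an assumption on my part, using the chaotic Rulkov map and a simple synchrony-based homeostatic rule rather than the paper's specific map and update. The coupling strength drifts slowly so that a synchrony measure settles between the fully asynchronous and fully synchronized extremes, i.e. near the transition.

```python
import numpy as np

# Rulkov-map neurons with heterogeneous spiking, diffusively coupled
# (gap-junction-like) with strength W; a slow homeostatic rule keeps the
# synchrony measure chi near a target between 0 (asynchronous) and 1
# (fully synchronized).
N, blocks, block_len = 100, 50, 200
rng = np.random.default_rng(2)
alpha = rng.uniform(4.1, 4.4, N)               # sets each neuron's firing pattern
x, y = rng.uniform(-1, 1, N), -2.8 * np.ones(N)
W, eta, chi_target = 0.02, 0.02, 0.5

for b in range(blocks):
    xs = np.empty((block_len, N))
    for t in range(block_len):
        m = x.mean()
        x_new = alpha / (1.0 + x**2) + y + W * (m - x)   # fast variable
        y = y - 0.001 * (x + 1.0)                        # slow adaptation
        x = x_new
        xs[t] = x
    # chi^2 = variance of the mean field / mean variance of single neurons
    chi = np.sqrt(xs.mean(axis=1).var() / xs.var(axis=0).mean())
    W = max(W + eta * (chi_target - chi) * W, 1e-4)      # homeostatic update

print(f"self-organized coupling W = {W:.4f}, synchrony chi = {chi:.2f}")
```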

Biomimetic hydrogel based on HET-s amyloid fibers for long-term culture of primary hippocampal neurons

Historically, amyloid fibers (AFs) have been linked in research to degenerative diseases. However, HET-s AFs, by their morphology and function, have little in common with pathogenic amyloid fibers such as α-synuclein or Aβ, and in recent years they have emerged as promising candidates for biocoating. Here we show that a HET-s amyloid-fiber hydrogel is an extremely versatile coating material for the in vitro culture of primary hippocampal neurons. First, its non-cytotoxicity is demonstrated in vitro using standardized ISO protocols. Then, it is shown that in vitro cultures of primary hippocampal neurons on HET-s AF hydrogels can last more than 45 days with clear signatures of spontaneous network activity, a feat that few other coatings have achieved. Finally, interactions between the cells, the dendrites and the hydrogels are highlighted, showing that dendrites may penetrate the hydrogels in depth, therefore allowing recordings even within micrometer-thick hydrogels. These properties, combined with the possibility of functional-group modification using standard biochemistry techniques, make HET-s hydrogels ideal candidates for the long-term growth of neurons as well as other cell types. Such versatility and ease of use remain unmatched, especially for a protein material. Because the material transforms from a dry film into a hydrogel on contact with the extracellular matrix (ECM), it could also be used for in vivo implants, avoiding the problem of hydrogel damage during implantation surgery.

An Operating Principle of the Cerebral Cortex, and a Cellular Mechanism for Attentional Trial-and-Error Pattern Learning and Useful Classification Extraction

A feature of the brains of intelligent animals is the ability to learn to respond to an ensemble of active neuronal inputs with a behaviorally appropriate ensemble of active neuronal outputs. Previously, a hypothesis was proposed on how this mechanism is implemented at the cellular level within the neocortical pyramidal neuron: apical-tuft or perisomatic inputs initiate "guess" neuron firings, while the basal dendrites identify input patterns based on excited synaptic clusters, with the cluster excitation strength adjusted according to reward feedback. This simple mechanism allows neurons to learn to classify their inputs in a surprisingly intelligent manner. Here, we revise and extend this hypothesis. We modify the synaptic plasticity rules to align with behavioral time scale synaptic plasticity (BTSP) observed in hippocampal area CA1, making the framework more biophysically and behaviorally plausible. The neurons for the guess firings are selected in a voluntary manner via feedback connections to apical tufts in neocortical layer 1, leading to dendritic Ca2+ spikes with burst firing, which are postulated to be neural correlates of attentional, aware processing. Once learned, the neuronal input classification is executed without voluntary or conscious control, enabling hierarchical incremental learning of classifications that is effective in our inherently classifiable world. In addition to voluntary bursts, we propose that pyramidal-neuron burst firing can be involuntary, also initiated via apical-tuft inputs, drawing attention towards important cues such as novelty and noxious stimuli. We classify the excitations of neocortical pyramidal neurons into four categories based on their excitation pathway: attentional versus automatic and voluntary/acquired versus involuntary. Additionally, we hypothesize that dendrites within pyramidal neuron minicolumn bundles are coupled via depolarization...
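The guess-and-reward idea from the first part of the abstract can be caricatured in a few lines of Python; this is my toy reading of the hypothesis, with hypothetical cluster sizes and thresholds, not the authors' model. A neuron holds several basal synaptic clusters; an apical "guess" drive makes it fire on some trials, and the strength of the cluster best matching the input is then nudged up or down by the reward signal, so a rewarded pattern later drives firing without the guess.

```python
import numpy as np

rng = np.random.default_rng(3)
n_clusters, n_inputs = 4, 20
cluster_syn = rng.integers(0, n_inputs, size=(n_clusters, 5))  # synapse ids per cluster
strength = np.ones(n_clusters)
lr, threshold = 0.2, 3.0

def trial(active_inputs, guess, reward):
    """One attentional trial-and-error step (toy version)."""
    overlap = np.array([np.isin(c, active_inputs).sum() for c in cluster_syn])
    best = overlap.argmax()
    fired = guess or strength[best] * overlap[best] > threshold
    if fired:
        strength[best] = max(strength[best] + lr * reward, 0.0)
    return fired

pattern = rng.integers(0, n_inputs, size=5)
trial(pattern, guess=True, reward=+1.0)   # a rewarded guess strengthens the matching cluster
```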

A minimal model of cognition based on oscillatory and reinforcement processes

Building mathematical models of brains is difficult because of the sheer complexity of the problem. One potential approach is to start by identifying models of basal cognition, which give an abstract representation of a range of organisms without central nervous systems, including fungi, slime moulds and bacteria. We propose one such model, demonstrating how a combination of oscillatory and current-based reinforcement processes can be used to couple resources efficiently. We first show that our model connects resources in an efficient manner when the environment is constant. We then show that, in an oscillatory environment, our model builds efficient solutions provided the environmental oscillations are sufficiently out of phase. We show that amplitude differences can promote efficient solutions and that the system is robust to frequency differences. We identify connections between our model and basal cognition in biological systems, slime moulds in particular, showing how the oscillatory and problem-solving properties of these systems are captured by our model.
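The "current-based reinforcement" component can be illustrated with a Physarum-style conductance-adaptation rule, which I assume is in the spirit of the paper's process rather than its exact equations. Two parallel tubes connect an oscillating source to a sink; the flux through each tube reinforces its conductance, so the shorter (lower-resistance) tube is selected over time.

```python
import numpy as np

length = np.array([1.0, 2.0])     # tube lengths (two parallel routes)
D = np.array([0.5, 0.5])          # tube conductances
dt, gamma, steps = 0.01, 1.1, 5000

for t in range(steps):
    drive = 1.0 + 0.5 * np.sin(0.05 * t)   # oscillatory source strength
    g = D / length                          # effective edge conductances
    dp = drive / g.sum()                    # pressure drop across the parallel tubes
    Q = g * dp                              # flux through each tube
    D += dt * (np.abs(Q) ** gamma - D)      # current-based reinforcement

print(D)   # conductance concentrates on the shorter tube
```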

A Lyapunov theory demonstrating a fundamental limit on the speed of systems consolidation

The nervous system reorganizes memories from an early site to a late site, a commonly observed feature of learning and memory systems known as systems consolidation. Previous work has suggested learning rules by which consolidation may occur. Here, we provide conditions under which such rules are guaranteed to lead to stable convergence of learning and consolidation. We use the theory of Lyapunov functions, which enforces stability by requiring learning rules to decrease an energy-like (Lyapunov) function. We present the theory in the context of a simple circuit architecture motivated by classic models of learning in systems consolidation mediated by the cerebellum. Stability is only guaranteed if the learning rate in the late stage is not faster than the learning rate in the early stage. Further, the slower the learning rate at the late stage, the larger the perturbation the system can tolerate with a guarantee of stability. We provide intuition for this result by mapping the consolidation model to a damped driven oscillator system, and showing that the ratio of early- to late-stage learning rates in the consolidation model can be directly identified with the (square of the) oscillator's damping ratio. This work suggests the power of the Lyapunov approach to provide constraints on nervous system function.
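As a hedged illustration of the damping-ratio correspondence (a toy linear model, not necessarily the circuit analyzed in the paper), let the early and late weights evolve as $\dot{w}_e = -a\,(w_e + w_l - w^*)$ (error-driven early learning at rate $a$) and $\dot{w}_l = b\, w_e$ (consolidation at rate $b$). Writing $e = w_e + w_l - w^*$ gives $\ddot{e} + a\,\dot{e} + a b\, e = 0$, a damped-oscillator form with natural frequency $\omega_0 = \sqrt{ab}$ and damping ratio $\zeta = \tfrac{1}{2}\sqrt{a/b}$. Thus $\zeta^2$ is proportional to the ratio of early- to late-stage learning rates, consistent with the identification stated above, and a slower late stage (smaller $b$) increases the damping and therefore the margin of tolerable perturbation.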

Exact minimax entropy models of large-scale neuronal activity

In the brain, fine-scale correlations combine to produce macroscopic patterns of activity. However, as experiments record from larger and larger populations, we approach a fundamental bottleneck: the number of correlations one would like to include in a model grows larger than the available data. In this undersampled regime, one must focus on a sparse subset of correlations; the optimal choice contains the maximum information about patterns of activity or, equivalently, minimizes the entropy of the inferred maximum entropy model. Applying this "minimax entropy" principle is generally intractable, but here we present an exact and scalable solution for pairwise correlations that combine to form a tree (a network without loops). Applying our method to over one thousand neurons in the mouse hippocampus, we find that the optimal tree of correlations reduces our uncertainty about the population activity by 14% (over 50 times more than a random tree). Despite containing only 0.1% of all pairwise correlations, this minimax entropy model accurately predicts the observed large-scale synchrony in neural activity and becomes even more accurate as the population grows. The inferred Ising model is almost entirely ferromagnetic (with positive interactions) and exhibits signatures of thermodynamic criticality. These results suggest that a sparse backbone of excitatory interactions may play an important role in driving collective neuronal activity.
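The tree-selection step has a classic exact structure, which the sketch below illustrates; this is my reading of the minimax-entropy construction, not the authors' code. For binary units with tree-structured pairwise constraints, the entropy reduction relative to the independent model equals the sum of the pairwise mutual informations along the tree's edges, so the optimal tree is the maximum-weight spanning tree of the mutual-information matrix (a Chow-Liu-style construction).

```python
import numpy as np

def pairwise_mi(spikes):
    """Mutual information (in nats) between all pairs of binary neurons.
    spikes: (T, N) array of 0/1 activity."""
    T, N = spikes.shape
    MI = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            joint = np.zeros((2, 2))
            for a in (0, 1):
                for b in (0, 1):
                    joint[a, b] = np.mean((spikes[:, i] == a) & (spikes[:, j] == b))
            pi, pj = joint.sum(axis=1), joint.sum(axis=0)
            mask = joint > 0
            MI[i, j] = MI[j, i] = np.sum(
                joint[mask] * np.log(joint[mask] / np.outer(pi, pj)[mask]))
    return MI

def max_spanning_tree(W):
    """Prim's algorithm: edge list of the maximum-weight spanning tree."""
    N = W.shape[0]
    in_tree, edges = [0], []
    while len(in_tree) < N:
        i, j, _ = max(((i, j, W[i, j]) for i in in_tree
                       for j in range(N) if j not in in_tree),
                      key=lambda e: e[2])
        edges.append((i, j))
        in_tree.append(j)
    return edges

# Synthetic binary activity standing in for recorded spike trains.
spikes = (np.random.default_rng(4).random((5000, 10)) < 0.1).astype(int)
tree = max_spanning_tree(pairwise_mi(spikes))
print(tree)   # N-1 edges; their summed MI is the model's entropy reduction
```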