Non-invasive estimation of brain microstructure features using diffusion MRI (dMRI)—known as Microstructure Imaging—has become an increasingly diverse and complicated field over the last decades. Multi-compartment (MC)-models, representing the measured diffusion signal as a linear combination of signal models of distinct tissue types, have been developed in many forms to estimate these features. However, a generalized implementation of MC-modeling as a whole, providing deeper insight into its capabilities, remains missing. To address this gap, we present Diffusion Microstructure Imaging in Python (Dmipy), an open-source toolbox implementing PGSE-based MC-modeling in its most general form. Dmipy allows on-the-fly implementation, signal modeling, and optimization of any user-defined MC-model, for any PGSE acquisition scheme. Dmipy follows a “building block”-based philosophy to Microstructure Imaging, meaning MC-models are modularly constructed to include any number and type of tissue models, allowing simultaneous representation of a tissue's diffusivity, orientation, volume fractions, axon orientation dispersion, and axon diameter distribution. In particular, Dmipy is geared toward facilitating reproducible, reliable MC-modeling pipelines, often allowing the whole process from model construction to parameter map recovery in fewer than 10 lines of code. To demonstrate Dmipy's ease of use and potential, we implement a wide range of well-known MC-models, including IVIM, AxCaliber, NODDI(x), Bingham-NODDI, the spherical mean-based SMT and MC-MDI, and spherical convolution-based single- and multi-tissue CSD. By allowing parameter cascading between MC-models, Dmipy also facilitates implementation of advanced approaches like CSD with voxel-varying kernels and single-shell 3-tissue CSD.
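The core idea of MC-modeling described above—the measured signal as a volume-fraction-weighted sum of per-compartment signal models—can be illustrated with a minimal, self-contained sketch. This is not Dmipy's actual API; the function names and the illustrative IVIM-like parameter values below are assumptions chosen only to show the "building block" composition of isotropic Gaussian ("Ball") compartments.

```python
import math

def ball_signal(bval, diffusivity):
    """Isotropic Gaussian ('Ball') compartment: S(b) = exp(-b * D)."""
    return math.exp(-bval * diffusivity)

def mc_signal(bval, fractions, diffusivities):
    """Multi-compartment signal: volume-fraction-weighted sum of compartments."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "volume fractions must sum to 1"
    return sum(f * ball_signal(bval, d)
               for f, d in zip(fractions, diffusivities))

# IVIM-like two-compartment example: fast (pseudo-perfusion) + slow (tissue)
# diffusion; b in s/mm^2, D in mm^2/s -- values are illustrative only.
signal = mc_signal(bval=1000.0, fractions=[0.1, 0.9],
                   diffusivities=[10e-3, 1e-3])
```

In a full MC-model the "blocks" would additionally carry orientation, dispersion, and diameter-distribution parameters, but the weighted-sum structure stays the same.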
By providing a well-tested, user-friendly toolbox that simplifies the interaction with the otherwise complicated field of dMRI-based Microstructure Imaging, Dmipy contributes to more reproducible, high-quality research.
Category: Frontiers in Neuroinformatics
Exploring EEG Effective Connectivity Network in Estimating Influence of Color on Emotion and Memory
Color is a perceptual stimulus that has a significant impact on improving human emotion and memory. Studies have revealed that colored multimedia learning materials (MLMs) have a positive effect on learners' emotion and learning, as assessed by subjective and objective measurements. This study aimed to quantitatively assess the influence of colored MLMs on emotion, cognitive processes during learning, and long-term memory (LTM) retention using electroencephalography (EEG). The dataset consisted of 45 healthy participants, and the MLMs were designed with colored or achromatic illustrations to elicit emotion and to assess its impact on LTM retention after 30-min and 1-month delays. The EEG signal analysis first estimated the effective connectivity network (ECN) using the phase slope index, and then characterized the ECN pattern using graph theoretical analysis. EEG results showed that colored MLMs had influences on theta and alpha networks, including (1) increased frontal-parietal connectivity (top–down processing), (2) a larger number of brain hubs, (3) a lower clustering coefficient, and (4) a higher local efficiency, indicating that color influences information processing in the brain, as reflected by the ECN, together with a significant improvement in learners' emotion and memory performance. This is evidenced by a more positive emotional valence and higher recall accuracy for groups who learned with colored MLMs than for those who learned with achromatic MLMs. In conclusion, this paper demonstrated how the EEG ECN parameters could help quantify the influences of colored MLMs on emotion and cognitive processes during learning.
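One of the graph theoretical measures used above, the clustering coefficient, can be sketched in a few lines. The sketch below is not the study's analysis code; the toy network, its electrode names, and the idea of PSI-thresholded edges are assumptions used only to illustrate how the measure is computed on an undirected connectivity graph.

```python
from itertools import combinations

def clustering_coefficient(adj, v):
    """Fraction of possible links among v's neighbors that actually exist."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (k * (k - 1))

# Toy undirected network: EEG channels as nodes, edges where the phase
# slope index exceeds some threshold (structure is illustrative only).
adj = {
    "Fz": {"Cz", "Pz"},
    "Cz": {"Fz", "Pz", "Oz"},
    "Pz": {"Fz", "Cz"},
    "Oz": {"Cz"},
}
avg_clustering = sum(clustering_coefficient(adj, v) for v in adj) / len(adj)
```

A lower average clustering coefficient, as reported for the colored-MLM groups, indicates that a node's neighbors are less densely interconnected.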
odMLtables: A User-Friendly Approach for Managing Metadata of Neurophysiological Experiments
An essential aspect of scientific reproducibility is a coherent and complete acquisition of metadata along with the actual data of an experiment. The high degree of complexity and heterogeneity of neuroscience experiments requires a rigorous management of the associated metadata. The odML framework represents a solution to organize and store complex metadata digitally in a hierarchical format that is both human and machine readable. However, this hierarchical representation of metadata is difficult to handle when metadata entries need to be collected and edited manually during the daily routines of a laboratory. With odMLtables, we present an open-source software solution that enables users to collect, manipulate, visualize, and store metadata in tabular representations (in xls or csv format) by providing functionality to convert these tabular collections to the hierarchically structured metadata format odML, and to either extract or merge subsets of a complex metadata collection. With this, odMLtables bridges the gap between handling metadata in an intuitive way that integrates well with daily lab routines and commonly used software products on the one hand, and the implementation of a complete, well-defined metadata collection for the experiment in a standardized format on the other hand. We demonstrate usage scenarios of the odMLtables tools in common lab routines in the context of metadata acquisition and management, and show how the tool can assist in exploring published datasets that provide metadata in the odML format.
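The central conversion described above—from a flat tabular collection to a hierarchically structured metadata format—can be sketched as follows. This is not odMLtables' actual implementation or the odML data model; the slash-separated path convention and the example rows are assumptions used only to illustrate the table-to-hierarchy mapping.

```python
def table_to_hierarchy(rows):
    """Convert flat (path, value) rows into a nested dict hierarchy.

    Each path is a '/'-separated chain of sections ending in a property
    name, so one spreadsheet row becomes one leaf in the tree.
    """
    tree = {}
    for path, value in rows:
        node = tree
        *sections, prop = path.split("/")
        for sec in sections:
            node = node.setdefault(sec, {})
        node[prop] = value
    return tree

# Illustrative rows, as they might appear in a csv/xls metadata table
rows = [
    ("Subject/species", "macaque"),
    ("Subject/weight_kg", 7.5),
    ("Recording/Electrode/impedance_kohm", 50),
]
metadata = table_to_hierarchy(rows)
```

The reverse direction (flattening the hierarchy back into rows for editing in a spreadsheet) is the mirror operation, which is what makes the tabular workflow round-trippable.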
A Performant Web-Based Visualization, Assessment, and Collaboration Tool for Multidimensional Biosignals
Biosignal-based research is often multidisciplinary and benefits greatly from multi-site collaboration. This requires appropriate tooling that supports collaboration, is easy to use, and is accessible. However, current software tools do not provide the necessary functionality, usability, and ubiquitous availability. The latter is particularly crucial in environments, such as hospitals, which often restrict users' permissions to install software. This paper introduces a new web-based application for interactive biosignal visualization and assessment. A focus has been placed on performance to allow for handling files of any size. The proposed solution can load local and remote files. It parses data locally on the client, and harmonizes channel labels. The data can then be scored, annotated, pseudonymized and uploaded to a clinical data management system for further analysis. The data and all actions can be interactively shared with a second party. This lowers the barrier to quickly visually examine data, collaborate and make informed decisions.
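One concrete step mentioned above, harmonizing channel labels across heterogeneously labeled recordings, can be sketched as a normalization pass. This is not the application's actual code (which runs client-side in the browser); the prefix and suffix lists below are illustrative assumptions about common labeling variants.

```python
import re

def harmonize_label(raw):
    """Normalize heterogeneous channel labels to one canonical form.

    Uppercases the label and strips common modality prefixes and
    reference suffixes, e.g. 'EEG Fp1-REF' -> 'FP1'. The prefix/suffix
    patterns are illustrative only.
    """
    label = raw.strip().upper()
    label = re.sub(r"^(EEG|ECG|EMG|EOG)\s+", "", label)
    label = re.sub(r"-(REF|LE|AVG)$", "", label)
    return label

# Three spellings of the same electrode collapse to one canonical label
labels = ["EEG Fp1-REF", "eeg fp1", "Fp1"]
harmonized = {harmonize_label(l) for l in labels}
```

Mapping all variants to one canonical name is what allows recordings from different sites and devices to be visualized and scored side by side.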
CoreNEURON: An Optimized Compute Engine for the NEURON Simulator
The NEURON simulator has been developed over the past three decades and is widely used by neuroscientists to model the electrical activity of neuronal networks. Large network simulation projects using NEURON have supercomputer allocations that individually measure in the millions of core hours. Supercomputer centers are transitioning to next-generation architectures, and the work accomplished per core hour for these simulations could be improved by an order of magnitude if NEURON were able to better utilize those new hardware capabilities. In order to adapt NEURON to evolving computer architectures, the compute engine of the NEURON simulator has been extracted and optimized as a library called CoreNEURON. This paper presents the design, implementation, and optimizations of CoreNEURON. We describe how CoreNEURON can be used as a library with NEURON and then compare the performance of different network models on multiple architectures including IBM BlueGene/Q, Intel Skylake, Intel MIC, and NVIDIA GPU. We show how CoreNEURON can simulate existing NEURON network models with 4–7x less memory usage and 2–7x less execution time while maintaining binary result compatibility with NEURON.