Over the past decades, the advance of neuroimaging technologies has sparked a rapid evolution of human brain mapping. Not only can we photograph the brain non-invasively – from the relative comfort of an MRI scanner – but there is also a constant production of brain maps describing diverse processes in the brain. This “brain mapping” is not unlike attempts at mapping the Earth: although it is easy to feel we have a good grasp of what the Earth looks like, there is always a new map to explore.
Early explorers of Earth started with the land masses: where are the continents and what does the geography of Earth look like? A natural next step might be to locate routes between landmasses that make travel as efficient as possible, not unlike the neural connections that link brain regions. However, this picture of Earth is just a foundation for more specific and niche layers of description. For example, superimposed on a geographic map of the Earth could be maps of forestation, energy consumption, 5G cell signal, or penguin density, depending on your interest. Each new representation of the Earth offers its own perspective and can be informative in isolation, but the aggregation of these maps results in a more complete picture that may reveal relationships that were previously unknown.
This is the inspiration for neuromaps, an open-source software toolbox for contextualizing human brain maps, introduced in our recent Nature Methods publication. Neuromaps currently hosts more than forty brain maps from the published literature, including data about energy consumption (metabolism), receptor densities, myelination, cortical expansion throughout development and evolution, electrophysiological neural dynamics, and cognitive functional activity. At this point I need to emphasize that we did not actually collect any of this data. Most of it was already publicly available, but hard to find (we did some scavenging of GitHub repositories), or was not publicly available simply because no one had asked (shout-out to the many researchers who shared receptor density data and contributed to the creation of a receptor/transporter atlas of the human brain, hosted on neuromaps). So, many thanks are due to the folks who did the data collection.
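For readers who want a sense of what is hosted, the Python toolbox lets you list and download annotations directly. Here is a minimal sketch based on my reading of the documentation; the descriptor tags shown are illustrative, so check the current release for the exact listing.

from neuromaps import datasets

# Print every annotation on offer as a (source, desc, space, density/resolution) tag
for annotation in datasets.available_annotations():
    print(annotation)

# Download a single annotation, e.g. the first principal component of gene
# expression derived from the Allen Human Brain Atlas (tag is illustrative)
genepc1 = datasets.fetch_annotation(source='abagen', desc='genepc1',
                                    space='fsaverage', den='10k')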
The purpose of putting together a library of human brain maps is to facilitate comparisons across brain maps from diverse disciplines, multiple spatial scales, and different neuroimaging modalities. However, there isn’t a single common “space” or “representation” of brain data that makes it easy to directly compare two maps. This is a bit like having one map of the Earth provided in the shape of a globe, another represented as a 2D flatmap, and a third represented as a 2D series of almond-shaped Earth cutouts. Neuromaps attempts to overcome this challenge by providing tools that transform brain maps between spaces (specifically, for the neuroimagers: CIVET, fsaverage, fsLR, and MNI-152). Lastly, neuromaps implements methods for assessing the statistical significance of map-to-map comparisons (called “spatial nulls”, including both “spin tests” and generative models).
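For the neuroimagers who like to see code, here is roughly what that transform-then-compare workflow looks like in Python. Treat it as a hedged sketch: the annotation tags and keyword arguments reflect the documented interface as I recall it, not a prescription from the paper.

from neuromaps import datasets, transforms, nulls, stats

# Fetch two annotations that live in different spaces (tags are illustrative)
cognitive = datasets.fetch_annotation(source='neurosynth', desc='cogpc1',
                                      space='MNI152', res='2mm')
myelin = datasets.fetch_annotation(source='hcps1200', desc='myelinmap',
                                   space='fsLR', den='32k')

# Transform the volumetric MNI-152 map onto the 32k fsLR surface
# (returns a pair of GIFTI images, one per hemisphere)
cognitive_fslr = transforms.mni152_to_fslr(cognitive, '32k')

# Generate spatially constrained nulls (a "spin test") for the comparison
rotated = nulls.alexander_bloch(myelin, atlas='fsLR', density='32k',
                                n_perm=1000, seed=1234)

# Correlate the maps and assess significance against the null distribution
r, p = stats.compare_images(cognitive_fslr, myelin, nulls=rotated)
print(f'correlation = {r:.3f}, spin-test p = {p:.3f}')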
Ultimately, we hope that neuromaps will add a spark to the analysis of human brain maps, and increase the accessibility of data and software tools to people with diverse research interests. There is also a slightly less obvious motive at play: we hope to standardize this increasingly common analysis pipeline of comparing brain maps to one another. Why is standardization important? Although many of us human-brain-map-analyzers spend the bulk of our working day in front of the computer, we don’t always follow the best coding practices. Code reviews and open code aren’t guaranteed, code commenting and documentation are hit-or-miss, and it’s not unlikely that multiple PhD students under the same supervisor are each coding their own version of the same analytic pipeline (often over and over again, when we forget where we stored a code snippet or never thought to put together a script in the first place). This is not only inefficient but also introduces mistakes and undermines reproducibility.
Thankfully, Ross Markello (the muscle behind neuromaps and co-first-author of the paper) is a software wizard with a knack for writing robust, well-commented, and easily understandable code (see his other projects, including abagen, a tool for processing human gene expression data; pyls, a package for applying Partial Least Squares analysis; and netneurotools, a collection of handy functionalities for analyzing neuroimaging data). It also helped that our supervisor, Bratislav Misic, was enamored with the idea of
“Google Maps but for The Brain – it’s going to be awesome!”
This, alongside some generous scientists and serious teamwork, resulted in the perfect storm for putting neuromaps together.
Altogether, we introduce a new software toolbox, neuromaps, to the human brain mapping community. Neuromaps is a step towards large-scale enrichment analysis for cortical data, not unlike tools that exist in adjacent fields such as bioinformatics. As the rate at which new brain maps are generated in the field continues to grow, we hope that neuromaps will provide researchers with a set of standardized workflows for better understanding what these data can tell us about the human brain.