Better community detection in brain connectomes with hybrid quantum computing?
Brain connectivity is a growing field in computational neuroscience. Indeed, it lets us "summarize" the complexity of the brain's wiring into more readable mesoscale connections. Current theories tend to agree that the brain is an entanglement of segregated regions. This raises the question: how do we partition those segregated regions?
Luckily for us, the graph-partitioning literature offers a series of tools. However, applying them remains challenging, as they rely on hard combinatorial optimizations.

For instance, the Louvain method maximizes modularity, a score that measures the strength of division of a network into communities. Modularity is calculated by comparing the fraction of edges that fall within communities to the fraction expected if edges were distributed at random. The aim is to maximize this score; a high value indicates a strong community structure within the network.
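To make the definition concrete, here is a minimal pure-Python sketch (an illustration, not the paper's code) that computes Newman's modularity Q = Σ_c (e_c/m − (d_c/2m)²) for a given partition of a toy graph:

```python
def modularity(edges, communities):
    """Newman modularity Q = sum_c (e_c/m - (d_c/(2m))^2).

    edges: list of undirected edges (u, v)
    communities: list of sets of nodes forming a partition
    """
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for c in communities:
        # observed fraction of edges falling inside community c
        e_c = sum(1 for u, v in edges if u in c and v in c) / m
        # expected fraction under the random (configuration) null model
        d_c = sum(degree[n] for n in c) / (2 * m)
        q += e_c - d_c ** 2
    return q

# Two triangles joined by a single bridge: an obvious 2-community structure
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = [{0, 1, 2}, {3, 4, 5}]
bad = [{0, 1, 2, 3, 4, 5}]
print(round(modularity(edges, good), 3))  # → 0.357
print(round(modularity(edges, bad), 3))   # → 0.0
```

Splitting at the bridge scores much higher than lumping everything together, which is exactly the behavior an optimizer such as Louvain exploits.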
Given access to quantum optimizers such as the D-Wave solver, we can explore community detection within brain connectomes using quantum annealers. The authors provide an in-depth analysis of the application of quantum annealing and its potential impact on computational neuroscience.
Adiabatic processes, in classical physics, refer to transformations that occur without the exchange of heat or matter with the surroundings. In quantum mechanics, adiabatic processes are governed by the adiabatic theorem, which states that a system remains in its instantaneous eigenstate if a perturbation is applied slowly enough. This means that if a quantum system starts in the ground state of an initial Hamiltonian and the Hamiltonian is changed slowly enough, the system will evolve to the ground state of the final Hamiltonian.
Quantum annealing and adiabatic quantum computation utilize this concept by starting with a simple, known quantum system (described by an initial Hamiltonian whose ground state is easy to prepare) and slowly evolving it into a final Hamiltonian whose ground state represents the solution to the computational problem being solved.
In quantum annealing, this process involves gradually reducing the quantum fluctuations of a system of qubits, initially prepared in a superposition of states, until it settles into the lowest-energy state, which corresponds to the solution of an optimization problem. The system is manipulated through a controlled annealing schedule that guides it toward the state representing the optimal solution.
Adiabatic quantum computation follows a similar principle. It starts with a set of qubits prepared in a known initial state (related to the problem) and gradually transforms the system's Hamiltonian to encode the problem. The system then evolves adiabatically to reach the ground state, representing the solution to the computational problem encoded in the final Hamiltonian.
Both quantum annealing and adiabatic quantum computation leverage the adiabatic theorem to find the optimal solution to a given problem. By transforming the system gradually, in a way that preserves its quantum coherence, they allow it to explore multiple candidate solutions and settle into the one with the lowest energy, which corresponds to the solution of the problem being addressed.
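The adiabatic picture above can be sketched numerically. In this minimal single-qubit toy example (an illustration only, not D-Wave's actual annealing schedule), the Hamiltonian is interpolated as H(s) = (1−s)·H_init + s·H_final, and the spectral gap between the ground and excited states stays open throughout, which is what permits a slow, ground-state-preserving sweep:

```python
import numpy as np

# Pauli matrices
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

H_init = -X    # easy-to-prepare ground state: the uniform superposition
H_final = -Z   # ground state encodes the "answer" (here, |0>)

def spectrum(s):
    """Eigenvalues (ascending) of H(s) = (1 - s) H_init + s H_final."""
    H = (1 - s) * H_init + s * H_final
    return np.linalg.eigvalsh(H)

# Track the gap between ground and excited state along the sweep s: 0 -> 1.
# It narrows mid-anneal but never closes, so a slow enough evolution keeps
# the system in its instantaneous ground state (adiabatic theorem).
gaps = [spectrum(s)[1] - spectrum(s)[0] for s in np.linspace(0, 1, 11)]
print(round(min(gaps), 3))  # → 1.414, the minimum gap, reached at s = 0.5
```

The smaller this minimum gap, the slower the anneal must be; hard optimization instances are precisely those whose gaps shrink rapidly with problem size.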
Quantum annealing on D-Wave hardware works with Hamiltonians expressed as Ising models, mapping problems onto the hardware's Chimera graph. It is possible to reduce network objects such as connectomes to Ising models and run them through the Leap hybrid solver. Living up to the promise, this approach shows statistically significant advantages for large networks: indeed, we see a statistically significant difference in the modularity index.
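As a sketch of the reduction (illustrative only; the paper's actual pipeline lives in the repository and uses D-Wave's hybrid solver), a two-community split maps naturally onto an Ising model: assign a spin s_i = ±1 to each node, and maximizing modularity becomes minimizing the Ising energy E = −Σ_ij B_ij s_i s_j, where B_ij = A_ij − k_i k_j/(2m) is the modularity matrix. On a toy graph an exhaustive search can stand in for the annealer:

```python
from itertools import product

# Two triangles joined by a bridge (a toy "connectome")
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n, m = 6, len(edges)

# Adjacency matrix and degrees
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1
k = [sum(row) for row in A]

# Modularity matrix B_ij = A_ij - k_i k_j / (2m). For a 2-way split with
# spins s_i in {-1, +1}, Q = (1 / 4m) * sum_ij B_ij s_i s_j, so maximizing
# Q is equivalent to minimizing the Ising energy below.
B = [[A[i][j] - k[i] * k[j] / (2 * m) for j in range(n)] for i in range(n)]

def energy(spins):
    return -sum(B[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(n))

# Brute-force ground-state search stands in for the quantum annealer here
best = min(product([-1, 1], repeat=n), key=energy)
Q = -energy(best) / (4 * m)
print(best, round(Q, 3))  # the ground state separates the two triangles
```

On real connectomes the spin configuration space is far too large to enumerate, which is exactly where the annealer (or the Leap hybrid solver) takes over the minimization.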
![Left: Percentage of relative increase of the QA result with respect to LCDA [MEAN±SEM]. Significance (p<0.05, one-sided Welch’s t-test) is indicated in red. See exact values in Table S2. Error bars were computed by propagating the SEM of the results with LCDA and QA. Dashed black line indicates the no-increase threshold. Right: Effect size measured by Cohen’s d. Horizontal dashed and colored lines separate effect sizes according to standard thresholds. Cohen’s d is a measure of the effect size present in the two populations without assessing the statistical significance.](https://media.springernature.com/full/springer-static/image/art%3A10.1038%2Fs41598-023-30579-y/MediaObjects/41598_2023_30579_Fig6_HTML.png?as=webp)
For further investigation, the code for this experiment, written in Python 3.9.7, is available at https://github.com/alecrimi/clustering-dwave. It comprises a series of custom scripts and relies on the NetworkX library (version 2.8.3) and the dwave-system library (version 1.10).
Scientific Reports
An open access journal publishing original research from across all areas of the natural sciences, psychology, medicine and engineering.