Disentangling visual concepts by embracing stochastic in-memory computing

Disentanglement of visual concepts is central to sensory perception. We present a compute engine capable of efficiently disentangling combinations of different visual concepts, represented by high-dimensional holographic vectors, by embracing the intrinsic stochasticity of memristive devices.

Deep neural networks (DNNs) are powerful AI tools that can extract useful representations from unstructured data for tasks like classification or object detection. One problem is that the representations a DNN has “learned” become entangled in complex ways that may suit a specific task but make it difficult to generalize to even slightly different situations. A promising alternative is disentangled representations, in which the various attributes of knowledge are represented separately and can be flexibly recombined to represent novel experiences.

A mechanism that can disentangle representations is therefore essential for generalization. The entanglement and disentanglement of neurally encoded information can be cast as the multiplication and factorization, respectively, of large holographic vectors representing neural activities.
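As a minimal sketch of what this means in practice (the dimensionality and attribute names below are illustrative, not taken from the paper), attributes can be encoded as random bipolar vectors and entangled by the element-wise Hadamard product; recovering the individual factors from the product vector alone is the factorization problem:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # vector dimensionality (illustrative choice)

# Randomly drawn bipolar "holographic" vectors for three attributes of an
# object, e.g. its shape, color, and position (names are illustrative).
shape, color, position = (rng.choice([-1, 1], size=D) for _ in range(3))

# Binding by the element-wise (Hadamard) product entangles the attributes
# into a single product vector of the same dimensionality.
s = shape * color * position

# Given the product vector and all but one factor, the missing factor is
# recovered by unbinding, since bipolar vectors are their own inverses.
assert np.array_equal(s * color * position, shape)
```

The hard part is the reverse direction: given only the product vector and codebooks of candidate shapes, colors, and positions, find which combination produced it.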

In a paper published in Nature Nanotechnology, we present an efficient compute engine for disentangling data-driven holographic representations by exploiting the intrinsic stochasticity associated with analog in-memory computing based on nanoscale memristive devices.

Our team has been researching emerging computing paradigms such as neuro-vector-symbolic architectures and implementing such models on analog in-memory computing (AIMC) hardware. Within the framework of vector-symbolic architectures, an elegant dynamical system dubbed the “resonator network” was proposed to iteratively solve this factorization problem, in which the factors assume holographic distributed representations. Although effective, the dynamics of resonator networks make them vulnerable to an endless search over a subset of incorrect estimates, a phenomenon known as limit cycles. Moreover, the linear activation of the attention values leads to slow convergence and low overall accuracy.
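For readers unfamiliar with resonator networks, the following is a minimal NumPy sketch of the baseline dynamics (dimensionality, codebook sizes, and iteration count are illustrative; the actual work uses far larger problem sizes and AIMC hardware rather than NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
D, M = 1024, 32          # dimensionality and codebook size (illustrative)

def bipolarize(v):
    return np.where(v >= 0, 1, -1)

# Three codebooks of random bipolar code vectors (one per column).
X, Y, Z = (rng.choice([-1, 1], size=(D, M)) for _ in range(3))

# A product vector built from one code vector of each codebook.
ix, iy, iz = rng.integers(M, size=3)
s = X[:, ix] * Y[:, iy] * Z[:, iz]

def resonator(s, X, Y, Z, iterations=200):
    # Initialize each estimate with the superposition of its whole codebook.
    x, y, z = (bipolarize(C.sum(axis=1)) for C in (X, Y, Z))
    for _ in range(iterations):
        # Unbind the other two estimates from s, project the result onto the
        # codebook (linear attention over the code vectors), and re-bipolarize.
        x = bipolarize(X @ (X.T @ (s * y * z)))
        y = bipolarize(Y @ (Y.T @ (s * x * z)))
        z = bipolarize(Z @ (Z.T @ (s * x * y)))
    return x, y, z

x, y, z = resonator(s, X, Y, Z)
print(np.argmax(X.T @ x) == ix, np.argmax(Y.T @ y) == iy, np.argmax(Z.T @ z) == iz)
```

Each estimate is refined by unbinding the other estimates from the product vector, comparing the result against the codebook, and superimposing the matching code vectors. It is exactly this deterministic, linear update that can settle into a limit cycle instead of converging.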

In this work, we proposed to enrich resonator networks by running them on AIMC hardware, which naturally harnesses the intrinsic device noise to prevent the networks from getting stuck in limit cycles, and to accelerate convergence by applying non-linear activation functions to the attention values. We found that the inevitable stochasticity present in AIMC hardware is not a curse but a blessing: it paves the way for solving combinatorial problems at least five orders of magnitude larger than those previously solvable within the given constraints. Our enhanced in-memory factorizer also reduces the spatial and time complexity associated with the factorization problem.
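A rough software emulation of these two ingredients, additive noise in the analog matrix-vector multiplication and a threshold non-linearity on the attention values, could look as follows (the noise level and threshold are illustrative placeholders, not the values realized on the chip):

```python
import numpy as np

def noisy_nonlinear_attention(C, query, noise_std=0.05, theta=0.1,
                              rng=np.random.default_rng(2)):
    # Similarities computed by an analog matrix-vector multiplication are
    # inherently noisy; additive Gaussian noise emulates that behavior here.
    a = C.T @ query / C.shape[0] + noise_std * rng.standard_normal(C.shape[1])
    # Threshold non-linearity: only sufficiently similar code vectors are
    # kept, which sparsifies the attention values.
    a = np.where(a > theta, a, 0.0)
    # Weighted superposition of the selected code vectors, re-bipolarized.
    return np.where(C @ a >= 0, 1, -1)

# The resonator update of the earlier sketch then becomes, e.g.:
#   x = noisy_nonlinear_attention(X, s * y * z)
# and likewise for y and z. Because the randomness differs from iteration
# to iteration, the dynamics cannot repeat a deterministic limit cycle.
```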

We demonstrated, on a real AIMC chip based on phase-change memory developed within the IBM Research AI Hardware Center, that a factorization problem with a search space of 16 million combinations can be solved in real time.

The proposed in-memory factorizer is capable of working with noisy product vectors. We have already demonstrated one application in which, coupled with a convolutional neural network, it disentangles the network's perceptual representations.

In general, factorizing product vectors constructed by binding (via the Hadamard product) randomly drawn vectors that exhibit no correlational structure is a hard combinatorial search problem. We have shown that our in-memory factorizer is an efficient engine for solving one such instance of this problem.
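To give a sense of scale (the codebook sizes here are illustrative, chosen only so that their product matches the demonstrated problem size), the number of candidate combinations grows as the product of the codebook sizes:

```python
# With F = 3 codebooks of M = 256 code vectors each, a brute-force search
# would have to test every possible combination of code vectors.
M, F = 256, 3
print(M ** F)   # 16777216, i.e. a ~16-million-entry search space
```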


In a follow-up study, we observed that resonator networks cannot accurately decode all the objects in a scene. Their iterative decoding relies on an “explaining away” procedure, which causes noise amplification. Intuitively, a decoding mistake becomes more probable as more objects are added to the scene, because each added object acts as noise from the standpoint of decoding the others. When the resonator network makes a wrong object estimate, it explains away an object representation that was not actually present, adding more noise to the query and making it even harder to decode the remaining objects. This noise amplification limits the number of objects that can be reliably decoded at large problem sizes.

To address the noise amplification issue, we first developed a new sequential decoding procedure enhanced by generating multiple queries through a sampling mechanism. We then combined it with a parallel decoding procedure, yielding a mixed decoding scheme that mitigates the risk of noise amplification. Our mixed decoding approach increases the number of objects that can be successfully decoded by up to 8× while maintaining the same vector dimensionality. This study was recently published at the International Workshop on Neural-Symbolic Learning and Reasoning (NeSy).
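For intuition, here is a schematic sketch of purely sequential decoding with explaining away, built on the illustrative resonator sketch above (it is not the mixed decoding procedure from the NeSy paper):

```python
import numpy as np

def sequential_decode(scene, codebooks, num_objects, factorize):
    """Schematic sequential decoding of a scene vector formed by summing
    several product vectors, with an explaining-away step after each object.
    `factorize` is assumed to behave like the resonator sketch above."""
    residual = scene.astype(float).copy()
    estimates = []
    for _ in range(num_objects):
        # Factorize the (bipolarized) residual into one object estimate.
        factors = factorize(np.where(residual >= 0, 1, -1), *codebooks)
        estimates.append(factors)
        # Explain away: rebind the estimated factors and subtract the result.
        # If the estimate is wrong, this subtraction does not remove an object
        # but injects extra noise: the noise-amplification problem above.
        residual -= np.prod(np.stack(factors), axis=0)
    return estimates
```

The mixed decoding scheme replaces this purely sequential loop with a combination of sampled queries and parallel decoding, which limits the damage a single wrong estimate can do.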
