The complex behaviours of humans and other animals are products of a biological intelligence that relies heavily on learned “good representations” of sensory inputs, which serve as a fundamental computational step toward data-efficient, generalizable and transferable skill acquisition. Neuroscientists and machine learning (ML) researchers therefore share an interest in what constitutes good representations of the often high-dimensional, non-linear and multiplexed sensory signals that support general intelligence.
In the new paper Symmetry-Based Representations for Artificial and Biological General Intelligence, a DeepMind research team argues that the mathematical description of symmetries in group theory, together with evidence of symmetry transformations in representation learning in the brain, points to symmetry as an important general framework: one that determines the structure of the universe, constrains the nature of natural tasks, and consequently shapes both biological and artificial intelligence. The team explores symmetry transformations as a fundamental principle for defining what makes a representation good.
Symmetries are sets of transformations of objects, and the same abstract set of symmetries can act on many different objects. In mathematics, symmetries are transformations that are invertible and can be composed, and these properties are abstracted into the concept of a group.
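To make the group properties concrete, here is a minimal NumPy sketch (our illustration, not code from the paper) using 2D rotations, which form the symmetry group SO(2):

```python
import numpy as np

def rotation(theta):
    """2D rotation by angle theta: one element of the symmetry group SO(2)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Closure under composition: two rotations compose to another rotation.
assert np.allclose(rotation(0.3) @ rotation(0.5), rotation(0.8))

# Identity element: rotating by zero changes nothing.
assert np.allclose(rotation(0.0), np.eye(2))

# Invertibility: every rotation is undone by the opposite rotation.
assert np.allclose(rotation(0.3) @ rotation(-0.3), np.eye(2))
```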
Some properties are preserved by symmetries: while a given group action defines how the elements of a set are transformed, mathematics can also be used to characterize what is preserved under that action. A Rubik’s cube, for example, can undergo a series of transformations, such as “rotate the left face clockwise” or “rotate the front face anti-clockwise”, yet the cube’s structure is preserved. This leads the researchers to the concepts of invariant and equivariant maps: an invariant map’s output remains unchanged when transformations of a certain type are applied to its input, while an equivariant map’s output transforms in the same way as its input.
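The distinction is easy to check numerically. Below is a minimal sketch (our illustration, not from the paper) using cyclic shifts of a 1D signal as the symmetry: summation is invariant to the shift, while circular convolution is equivariant to it.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
shift = lambda v: np.roll(v, 1)      # the symmetry: a cyclic shift of positions

# Invariant map: f(g(x)) == f(x). The sum ignores where the values sit.
assert np.isclose(np.sum(shift(x)), np.sum(x))

# Equivariant map: f(g(x)) == g(f(x)). Circular convolution commutes
# with cyclic shifts, so shifting the input simply shifts the output.
k = np.array([1.0, -1.0, 0.0, 0.0])  # an arbitrary convolution kernel
conv = lambda v: np.real(np.fft.ifft(np.fft.fft(v) * np.fft.fft(k)))
assert np.allclose(conv(shift(x)), shift(conv(x)))
```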
The team notes that this symmetry concept has been at the core of some of the most successful deep neural network architectures: convolutional neural networks (CNNs), which can outperform humans on image classification tasks, are built from convolutional layers that are equivariant to translation symmetries, while graph neural networks (GNNs) and the attention blocks used in transformer architectures are equivariant to the full group of permutations of their inputs.
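The permutation case can be verified in a few lines. Here is a minimal sketch (our illustration, with the learned projections and masking omitted) showing that a bare self-attention block is equivariant to permutations of its input tokens:

```python
import numpy as np

def attention(X):
    """Bare-bones self-attention: no projections or masking, for illustration."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))          # 5 tokens with 8 features each
perm = rng.permutation(5)

# Permutation equivariance: permuting the tokens permutes the outputs.
assert np.allclose(attention(X[perm]), attention(X)[perm])
```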
The team further examines the importance of symmetry in neuroscience. They cite a body of research showing a direct link between the coding of single neurons in the inferior temporal (IT) cortex and disentangled representations, which indicates that brains may learn representations reflecting the symmetries of the world, and showing that disentangled representations can also predict functional magnetic resonance imaging (fMRI) activation in the ventral visual stream.
The researchers conclude that the evolutionary development of biological intelligence is constrained by physics, and that physics often uses symmetry transformations to find the “joints” and “stable cores” of the world and thereby expose the invariants relevant to solving natural tasks. They are therefore confident that symmetry-based representations can deliver better data efficiency, generalization and transfer performance when incorporated into ML systems.
The team hopes their work will also provide the neuroscience community with the motivation and tools to adopt symmetry as a general organizing framework for understanding both biological and artificial intelligence.
The paper Symmetry-Based Representations for Artificial and Biological General Intelligence is on arXiv.
Author: Hecate He | Editor: Michael Sarazen