Synapse Scaling and Memory
Reproducing the Results
Design of Data Structure
The geometry is a grid of neurons with $M$ rows and $N$ columns. $M = N$ is probably easier, but it doesn't hurt to write more general code. I also assume periodic boundary conditions, for simplicity and to eliminate boundary artifacts.
Membrane potentials $\mathbf U$ and activities $\mathbf F$ are $M \times N$ matrices. The weight matrices $\mathbf W$, which couple pairs of neurons, are $MN \times MN$.
For convenience, we define a neuron id for the $m$-th row and $n$-th column of the activity matrix $\mathbf F$: $D(m,n) = m N + n$. We can then denote each element of the weight matrix as $\mathbf{W}[i][j] = \mathbf{W}[D(m_i,n_i)][D(m_j,n_j)]$, where $i = D(m_i,n_i) = m_i N + n_i$.
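A minimal sketch of this layout in NumPy (the grid size here is an illustrative placeholder, not a value from the paper):

```python
import numpy as np

M, N = 10, 10  # illustrative grid size

def neuron_id(m: int, n: int) -> int:
    """Flatten the (row, column) grid index into the neuron id D(m, n) = m * N + n."""
    return m * N + n

U = np.zeros((M, N))             # membrane potentials on the grid
F = np.zeros((M, N))             # activities on the grid
W_e = np.zeros((M * N, M * N))   # excitatory weights, indexed by neuron id
W_i = np.zeros((M * N, M * N))   # inhibitory weights, indexed by neuron id
```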
The network in Tetzlaff's paper contains only nearest-neighbour excitatory weights and next-nearest-neighbour inhibitory weights (Tetzlaff et al., 2013). So we create two weight matrices, $\mathbf{W}_e$ and $\mathbf{W}_i$. The non-zero elements of $\mathbf{W}_e$ couple neuron $i$ to neighbours with $m_j \in \{ m_i \pm 1 \}$, while the non-zero elements of $\mathbf{W}_i$ couple it to neighbours with $m_j \in \{ m_i \pm 1, m_i \pm 2 \}$ (and analogously for the column index $n$).
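One way to build the two connectivity masks, assuming "neighbour" means Chebyshev distance on the torus (the exact neighbourhood shape is my reading, not spelled out above):

```python
def connectivity_masks(M: int, N: int):
    """Boolean masks for W_e (nearest neighbours) and W_i (up to next-nearest
    neighbours) under periodic boundary conditions."""
    ids = np.arange(M * N)
    m, n = ids // N, ids % N
    # Pairwise distances along each axis, wrapped around the torus.
    dm = np.abs(m[:, None] - m[None, :]); dm = np.minimum(dm, M - dm)
    dn = np.abs(n[:, None] - n[None, :]); dn = np.minimum(dn, N - dn)
    dist = np.maximum(dm, dn)  # Chebyshev distance on the grid
    mask_e = dist == 1                   # nearest neighbours only
    mask_i = (dist >= 1) & (dist <= 2)   # nearest and next-nearest neighbours
    return mask_e, mask_i

mask_e, mask_i = connectivity_masks(M, N)
```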
Updating Equations
Weights are updated by combining Hebbian plasticity with synaptic scaling:
$$ \Delta w_{ij}^+ = \Delta t\, \mu \left( F_i F_j + \kappa^{-1} (F^T - F_i) (w_{ij}^+)^2 \right). $$

Activity $F_i$ is calculated from the potential $u_i$,
$$ F_i = \frac{ \alpha }{ 1 + \exp \left( \beta(\epsilon - u_i) \right) }. $$

The potential itself is dynamic, governed by
$$ \Delta u_i = \Delta t\left( - \frac{u_i}{\tau} + R\left( \sum_{j\in +} w_{ij}^+ F_j - \sum_{j\in -} w_{ij}^- F_j + w^E (F_i^E + v_i) \right) \right), $$

where $w^E$ is the external input weight, $F_i^E$ is the external input, and $v_i$ is noise.
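Put together, one Euler step of the coupled dynamics could look like this; `p` is the parameter dictionary defined in the next section, and masking the weight update is my assumption (only existing excitatory synapses are taken to be plastic):

```python
def activity(u, p):
    """Sigmoidal rate function F(u)."""
    return p["alpha"] / (1.0 + np.exp(p["beta"] * (p["eps"] - u)))

def step(u, w_e, w_i, mask_e, F_ext, noise, p, dt):
    """One Euler step; u, F_ext, noise are flattened (length M*N) vectors,
    w_e and w_i are M*N x M*N matrices. A sketch, not the authors' code."""
    F = activity(u, p)
    # Potential: leak plus excitatory, inhibitory, and external input.
    du = dt * (-u / p["tau"]
               + p["R"] * (w_e @ F - w_i @ F + p["w_ext"] * (F_ext + noise)))
    # Excitatory weights: Hebbian term plus synaptic scaling toward F^T.
    dw = dt * p["mu"] * (np.outer(F, F)
                         + (p["F_T"] - F)[:, None] * w_e**2 / p["kappa"])
    dw = dw * mask_e  # only existing excitatory synapses are plastic
    return u + du, w_e + dw
```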
Parameters
- $w^E = w_{\max}$
- $w_{\max} = \sqrt{ \alpha^2 \kappa / (\alpha - F^T) }$
- $\alpha = 100\,\mathrm{Hz}$
- $\kappa = 60$
- $F^T = 0\,\mathrm{Hz}$
- $\epsilon = 130\,\mathrm{Hz}$
- $R = 0.012\,\Omega$
- $\tau = 1\,\mathrm{s}$
- $\mu = 1/30000\,\mathrm{s^{-1}}$
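Collected as code (note that $\beta$ is not listed above, so its value below is a placeholder to be taken from the paper):

```python
import numpy as np

alpha, kappa, F_T = 100.0, 60.0, 0.0
p = {
    "alpha": alpha,        # Hz
    "kappa": kappa,
    "F_T": F_T,            # Hz, target rate of the scaling term
    "eps": 130.0,          # Hz
    "beta": 0.1,           # placeholder: not given in this note, take from the paper
    "R": 0.012,            # Ohm
    "tau": 1.0,            # s
    "mu": 1.0 / 30000.0,   # 1/s
}
p["w_max"] = np.sqrt(alpha**2 * kappa / (alpha - F_T))  # maximal weight
p["w_ext"] = p["w_max"]                                 # external input weight w^E
```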
Information Encoded in Memory
Memory encodes all kinds of information, some of which should show up as inhomogeneities in the spatial distributions of weights and activities.
At this point, we probably have no idea how the system encodes everything. However, we should be able to get a grip on how memories interact by assuming some very general spatial distributions of weights and activities.
In principle we should expand these distributions in Fourier series and work out the interactions. As a first step, we can simply work with $\cos$ and Gaussian profiles.
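For instance, a couple of hypothetical helpers for such spatial profiles (function names and defaults are mine):

```python
def cos_mode(M, N, k, axis):
    """Single Fourier mode cos(k * coordinate) along rows (axis=0) or columns
    (axis=1); k should be an integer multiple of 2*pi over that axis length
    so the mode respects the periodic boundary conditions."""
    y, x = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    return np.cos(k * (y if axis == 0 else x))

def gaussian_bump(M, N, m0, n0, sigma=2.0):
    """Gaussian bump centred at (m0, n0), using torus distance so the profile
    stays consistent with the periodic boundaries."""
    y, x = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    dy = np.minimum(np.abs(y - m0), M - np.abs(y - m0))
    dx = np.minimum(np.abs(x - n0), N - np.abs(x - n0))
    return np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))
```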
Interactions between Memories
Reuse of Previous Memories
Suppose we have remembered something that is recorded in the brain as a $\cos(k_y y)$ spatial distribution. The new task is to remember some new information, closely related to the previous one, which is mapped onto the network as $\cos(k_y y) + \cos(k_x x)$. This new memory should be easier to form than a completely new memory; a sketch of such input patterns follows below.
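As a concrete (hypothetical) setup, using the helpers from above:

```python
# Old memory: a single Fourier mode along the rows.
k_y = 2 * np.pi / M   # one full period across the rows
k_x = 2 * np.pi / N   # one full period across the columns
old_pattern = cos_mode(M, N, k_y, axis=0)

# New, related memory: the old mode plus an orthogonal one.
new_pattern = cos_mode(M, N, k_y, axis=0) + cos_mode(M, N, k_x, axis=1)

# One could train the network on old_pattern first, then compare how fast the
# weights adapt to new_pattern versus a completely unrelated pattern.
```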
References and Notes
Tetzlaff, C., Kolodziejski, C., Timme, M., Tsodyks, M., & Wörgötter, F. (2013). Synaptic Scaling Enables Dynamically Distinct Short- and Long-Term Memory Formation. PLoS Computational Biology, 9(10), e1003307.