Synapse Scaling and Memory

Questions

  1. The mechanism should be information independent. That is, changing the input should not change the memory mechanism itself.
  2. How do we build memory linking/loading from other chunks?

Ideas

  1. To check the information independence of the mechanism, I should test different inputs that carry actual information.
  2. To study memory linking/loading, I need to build areas holding different memories.

Reproduce the results

Design of Data Structure

The geometry is set to be a grid of neurons with $M$ rows and $N$ columns. I would imagine $M=N$ is easier, but it doesn’t hurt to write more general code. I also assume periodic boundary conditions, for simplicity and to eliminate boundary artifacts.

Membrane potentials $\mathbf U$ and activities $\mathbf F$ are stored as $M\times N$ matrices. The weight matrices $\mathbf W$ are $MN\times MN$ matrices, since they connect pairs of neurons.

For convenience, we define a neuron id for the neuron at row $m$ and column $n$ of the activity matrix $\mathbf F$: $D(m,n) = m N + n$. Each element of the weight matrix can then be written as $\mathbf{W}[i][j] = \mathbf{W}[D(m_i,n_i)][D(m_j,n_j)]$, where $i = D(m_i,n_i) = m_i N + n_i$.

The network in Tetzlaff’s paper contains only nearest-neighbour excitatory weights and next-nearest-neighbour inhibitory weights [1]. So we create two weight matrices, $\mathbf{W}_{e}$ and $\mathbf{W}_i$. The non-zero elements of $\mathbf{W}_e$ connect a neuron to its neighbours with $m \in \{ m_i \pm 1 \}$ (and analogously in the column direction), while the non-zero elements of $\mathbf{W}_i$ connect it to neighbours with $m \in \{ m_i \pm 1, m_i \pm 2 \}$.
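A minimal sketch of this indexing and connectivity in Python; the grid size, the helper name `neuron_id`, and the boolean-mask representation are my own illustrative choices, not taken from the paper:

```python
import numpy as np

M, N = 10, 10  # grid size; illustrative values only

def neuron_id(m, n):
    """Neuron id D(m, n) = m*N + n, with indices wrapped for the periodic boundary."""
    return (m % M) * N + (n % N)

# Boolean masks marking which entries of the (M*N) x (M*N) weight matrices may be non-zero:
# the four nearest neighbours for W_e, the eight nearest and next-nearest neighbours for W_i.
mask_e = np.zeros((M * N, M * N), dtype=bool)
mask_i = np.zeros((M * N, M * N), dtype=bool)
for m in range(M):
    for n in range(N):
        i = neuron_id(m, n)
        for dm, dn in [(-1, 0), (0, 1), (1, 0), (0, -1)]:
            mask_e[i, neuron_id(m + dm, n + dn)] = True
        for dm, dn in [(-1, 0), (0, 1), (1, 0), (0, -1),
                       (-2, 0), (0, 2), (2, 0), (0, -2)]:
            mask_i[i, neuron_id(m + dm, n + dn)] = True
```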

Waste of Memory

The weight matrix is sparse, so storing it as a full $MN\times MN$ array is a waste of memory. In principle, I could instead store multidimensional arrays $\mathbf{W}_e[M][N][4]$ and $\mathbf{W}_i[M][N][8]$. For example, $\mathbf{W}_e[m][n][0]$ is the weight between the neurons at row $m$, column $n$ and row $m-1$, column $n$. The last index enumerates the weights between the neuron at $[m][n]$ and the neurons around it in clockwise order:

  1. $\mathbf{W}_e[m][n][0]$: row $m$, column $n$ and row $m-1$, column $n$.
  2. $\mathbf{W}_e[m][n][1]$: row $m$, column $n$ and row $m$, column $n+1$.
  3. $\mathbf{W}_e[m][n][2]$: row $m$, column $n$ and row $m+1$, column $n$.
  4. $\mathbf{W}_e[m][n][3]$: row $m$, column $n$ and row $m$, column $n-1$.

Similarly, I can define $\mathbf{W}_i[m][n][8]$ for the eight inhibitory connections.

However, this notation gives overlapping entries for the same connection, stored once at each of its two endpoints. For now I have to use the sparse matrix notation, until I figure out how to do this more efficiently.
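As a sketch of that sparse route (continuing from the snippet above, with a placeholder initial weight value), the connectivity masks can be turned into SciPy sparse matrices so that only the allowed connections are stored:

```python
from scipy import sparse

# Keep only the allowed connections; 1.0 is a placeholder initial weight.
rows, cols = np.nonzero(mask_e)
W_e_sparse = sparse.csr_matrix((np.full(rows.size, 1.0), (rows, cols)),
                               shape=(M * N, M * N))

rows, cols = np.nonzero(mask_i)
W_i_sparse = sparse.csr_matrix((np.full(rows.size, 1.0), (rows, cols)),
                               shape=(M * N, M * N))

# Products such as W_e_sparse @ F only touch the stored (non-zero) entries.
```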

Updating Equations

Weights are updated by calculating the plasticity and synapse scaling contributions:

$$ \Delta w_{ij}^+ = \Delta t \mu \left( F_i F_j + \kappa^{-1} (F^T - F_i) (w_{ij}^+)^2 \right). $$

Activity $F_i$ is calculated from the potential $u_i$,

$$ F_i = \frac{ \alpha }{ 1 + \exp \left( \beta(\epsilon - u_i) \right) }. $$

The potential itself is dynamical, governed by

$$ \Delta u_i = \Delta t\left( - \frac{u_i}{\tau} + R\left( \sum_{j\in +} w_{ij}^+ F_j - \sum_{j\in -} w_{ij}^- F_j + w^E (F_i^E + v_i) \right) \right), $$

where $w^E$ is the external input weight, $F_i^E$ is the external input, and $v_i$ is noise.
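A sketch of these three update rules as explicit Euler steps, with $\mathbf U$ and $\mathbf F$ flattened to length-$MN$ vectors; the function names and the `mask` argument (which restricts the weight update to the allowed connections) are my own choices:

```python
import numpy as np

def activity(u, alpha, beta, eps):
    """F_i = alpha / (1 + exp(beta * (eps - u_i)))."""
    return alpha / (1.0 + np.exp(beta * (eps - u)))

def weight_update(W_e, F, dt, mu, kappa, F_T, mask):
    """Euler step of dw_ij = dt * mu * (F_i F_j + (F_T - F_i) w_ij^2 / kappa),
    restricted to the allowed excitatory connections via `mask`."""
    hebb = np.outer(F, F)                              # Hebbian term F_i F_j
    scaling = (F_T - F)[:, None] * W_e ** 2 / kappa    # synapse scaling term, row i uses F_i
    return W_e + dt * mu * (hebb + scaling) * mask

def potential_update(u, F, W_e, W_i, F_ext, noise, dt, tau, R, w_ext):
    """Euler step of
    du_i = dt * (-u_i/tau + R (sum_j w+_ij F_j - sum_j w-_ij F_j + w^E (F^E_i + v_i)))."""
    drive = W_e @ F - W_i @ F + w_ext * (F_ext + noise)
    return u + dt * (-u / tau + R * drive)
```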

Parameters

  1. $w^E = w_{max}$
  2. $w_{max} = \sqrt{ \alpha^2 \kappa / (\alpha - F^T) }$
  3. $\alpha = 100\,\mathrm{Hz}$
  4. $\kappa = 60$
  5. $F^T = 0\,\mathrm{Hz}$
  6. $\epsilon = 130\,\mathrm{Hz}$
  7. $R = 0.012\,\Omega$
  8. $\tau = 1\,\mathrm{sec}$
  9. $\mu = 1/30000\,\mathrm{sec^{-1}}$
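As a usage sketch, the parameters above can be plugged into the update functions from the previous snippet. Note that $\beta$ is not in the list, so its value below is a placeholder, as are the time step, the initial weights, and the noise amplitude:

```python
# Values from the parameter list; beta, dt, the initial weights, and the noise amplitude
# are NOT specified above, so the numbers used for them here are placeholders.
alpha, kappa, F_T = 100.0, 60.0, 0.0          # Hz, dimensionless, Hz
eps, R, tau, mu = 130.0, 0.012, 1.0, 1.0 / 30000.0
w_max = np.sqrt(alpha ** 2 * kappa / (alpha - F_T))
w_ext = w_max
beta = 0.1                                    # placeholder value

dt = 0.01                                     # placeholder time step (seconds)
u = np.zeros(M * N)                           # membrane potentials, flattened
W_e = np.where(mask_e, 0.5 * w_max, 0.0)      # placeholder initial excitatory weights
W_i = np.where(mask_i, 0.5 * w_max, 0.0)      # placeholder fixed inhibitory weights
F_ext = np.zeros(M * N)                       # external input F^E
noise = np.random.normal(0.0, 1.0, M * N)     # placeholder noise v_i

# One update step.
F = activity(u, alpha, beta, eps)
u = potential_update(u, F, W_e, W_i, F_ext, noise, dt, tau, R, w_ext)
W_e = weight_update(W_e, F, dt, mu, kappa, F_T, mask_e)
```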

Information Encoded in Memory

Memory encodes all kinds of information, some of which should show up as inhomogeneities in the spatial distributions of weights and activities.

At this point, we probably have no idea how the system encodes everything. However, we should be able to get a grip on how memories interact by assuming some very general spatial distributions of weights and activities.

In principle, we should expand these distributions in Fourier series and work out the interactions. As a first step, we could simply work with $\cos$ and Gaussian profiles.

Interactions between Memories

Reuse of previous memories

Suppose we have remembered something that is recorded in the brain as a $\cos(k_y y)$ spatial distribution. The new task is to remember some new information that is closely related to the previous one and is mapped onto the network as $\cos(k_y y) + \cos(k_x x)$. The new memory should be easier to form than a completely new memory.
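A sketch of how these two patterns could be written down as external inputs on the grid (the wave numbers and the shift/scaling to a non-negative rate are arbitrary illustrative choices, reusing the grid from the data-structure sketch):

```python
# Grid coordinates (same M, N and row-major flattening as in the data-structure sketch).
y, x = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
k_y = 2 * np.pi / M          # one full period across the grid keeps the pattern periodic
k_x = 2 * np.pi / N

old_memory = np.cos(k_y * y)                          # previously stored pattern
related_memory = np.cos(k_y * y) + np.cos(k_x * x)    # new, partially overlapping pattern

# Shift and scale to non-negative external rates F^E; the amplitudes are arbitrary here.
F_ext_old = 50.0 * (1.0 + old_memory).ravel()
F_ext_new = 25.0 * (2.0 + related_memory).ravel()
```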

References and Notes


  1. Tetzlaff, C., Kolodziejski, C., Timme, M., Tsodyks, M., & Wörgötter, F. (2013). Synaptic Scaling Enables Dynamically Distinct Short- and Long-Term Memory Formation. PLoS Computational Biology, 9(10), e1003307.

