sorn: A Python package for Self Organizing Recurrent Neural Network

The self-organizing recurrent neural (SORN) network is a class of neuro-inspired artificial networks. This class of networks has been shown to mimic the ability of neocortical circuits to learn and adapt through neuroplasticity mechanisms. Structurally, the SORN network consists of a pool of excitatory neurons and a small population of inhibitory neurons. To optimize its parameters, the network uses five basic plasticity mechanisms found in the neocortex, namely spike-timing-dependent plasticity, intrinsic plasticity, synaptic scaling, inhibitory spike-timing-dependent plasticity, and structural plasticity (Lazar et al., 2009; Papa et al., 2017; Zheng et al., 2013). Using mathematical tools, a SORN network simplifies the underlying structural and functional connectivity mechanisms responsible for learning and memory in the brain.


Figure 1: SORN network
sorn is a Python package designed for Self Organizing Recurrent Neural Networks. While it was originally developed for SORN networks, it can also serve as a research package for Liquid State Machines (Jaeger, 2002; Jaeger et al., 2007) in general. Detailed documentation can be found at https://self-organizing-recurrent-neural-networks.readthedocs.io/en/latest/. To demonstrate the potential applications of this network architecture, an example neuro-robotics experiment using OpenAI Gym (Brockman et al., 2016) is provided in the sorn package repository.

Statement of need
Reservoir computing (RC) models are neuro-inspired artificial neural networks. RC networks have either sparsely or densely connected units with fixed connection weights. Unlike other RC models, SORN has synaptic weights controlled by neuro-inspired plasticity mechanisms. The network has two distinct pools of excitatory and inhibitory reservoirs that compete to remain in a subcritical state suitable for learning. The subcritical state is a state between chaos and order, also called the "edge of chaos." In this state, the network has momentum with a strong affinity for order, but is sensitive to external perturbations. Through its plasticity mechanisms, the network can overcome such perturbations and return to its subcritical dynamics. This self-adaptive behavior is also referred to as self-organization. Building such a network with a synergistic combination of plasticity mechanisms from scratch requires a deep understanding of neurophysiology and soft computing. sorn reduces the cognitive load of theorists, experimentalists, and researchers by encapsulating all plasticity mechanisms with a high degree of reliability and flexibility.
A few other open-source implementations of SORN networks exist (e.g., sorn v1 and sorn v2), but they are application-specific and not general-purpose software packages. In contrast, sorn is a flexible package that allows researchers to develop the network of their interest, giving them the freedom to choose the combination of plasticity rules they need. Moreover, sorn is easy to integrate with machine learning frameworks such as PyTorch and reinforcement learning toolkits such as OpenAI Gym. Overall, sorn provides a research environment for computational neuroscientists to study self-organization, adaptation, learning, memory, and behavior of brain circuits by reverse-engineering neural plasticity mechanisms.

Library Overview
The package sorn depends heavily on numpy (Harris et al., 2020) for numerical computation and analysis methods, and on seaborn and matplotlib (Barrett et al., 2005) for visualization. The network is constructed in five classes: the object SORN encapsulates all the functions that instantiate network variables such as connection weights and thresholds; Plasticity inherits from SORN and implements the plasticity rules through the methods stdp(), ip(), ss(), sp(), and istdp(); NetworkState provides methods that evaluate the excitatory and inhibitory network states at each time step; and MatrixCollection objects behave like a memory cache, collecting network states and keeping track of variables such as weights and thresholds as the network evolves during simulation and training.
The network can be instantiated, simulated, and trained using two classes, Simulator and Trainer, which inherit from SORN.

SORN Network Model
As defined in (Lazar et al., 2009; Zheng et al., 2013), the activity of neurons in the excitatory and inhibitory pools is given by the following state equations:

$$x_i(t+1) = \Theta\left( \sum_{j=1}^{N^E} W_{ij}^{EE}(t)\, x_j(t) - \sum_{k=1}^{N^I} W_{ik}^{EI}(t)\, y_k(t) - T_i^E(t) \right)$$

$$y_k(t+1) = \Theta\left( \sum_{i=1}^{N^E} W_{ki}^{IE}(t)\, x_i(t) - T_k^I \right)$$

where $\Theta$ is the Heaviside step function and

$W_{ij}^{EE}$ - connection strength between excitatory neurons
$W_{ik}^{EI}$ - synaptic strength from the inhibitory to the excitatory network
$W_{ki}^{IE}$ - synaptic strength from the excitatory to the inhibitory network
$x_j(t)$ - presynaptic excitatory neuron state at time $t$
$y_k(t)$ - presynaptic inhibitory neuron state at time $t$
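These state equations can be sketched in a few lines of numpy. The snippet below is illustrative only: the function and variable names are assumptions for this sketch, not the package's internals, and the toy thresholds are arbitrary.

```python
import numpy as np

def heaviside(v):
    """Heaviside step function: 1.0 where v > 0, else 0.0."""
    return (v > 0).astype(float)

def update_states(W_ee, W_ei, W_ie, x, y, T_e, T_i):
    """One step of the SORN state equations (sketch).

    x, y   : binary excitatory / inhibitory states at time t
    T_e, T_i : excitatory / inhibitory firing thresholds
    Returns the binary states at time t + 1.
    """
    x_new = heaviside(W_ee @ x - W_ei @ y - T_e)  # excitatory update
    y_new = heaviside(W_ie @ x - T_i)             # inhibitory update
    return x_new, y_new

# Toy network: 4 excitatory and 2 inhibitory neurons
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 4).astype(float)
y = rng.integers(0, 2, 2).astype(float)
W_ee = rng.random((4, 4))
W_ei = rng.random((4, 2))
W_ie = rng.random((2, 4))
x1, y1 = update_states(W_ee, W_ei, W_ie, x, y, T_e=0.5, T_i=0.5)
```

Because the states pass through the Heaviside function, the network activity stays binary at every time step.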

Plasticity Rules

Spike Timing Dependent Plasticity
Spike Timing Dependent Plasticity (STDP) alters the synaptic efficacy between excitatory neurons based on the relative spike timing of the presynaptic neuron $j$ and the postsynaptic neuron $i$:

$$\Delta W_{ij}^{EE}(t) = \eta_{STDP}\left[ x_i(t)\, x_j(t-1) - x_i(t-1)\, x_j(t) \right]$$

where $\eta_{STDP}$ is the STDP learning rate.
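A minimal numpy sketch of this rule, assuming binary state vectors at consecutive time steps (names and the learning rate value are illustrative, not the package's internals):

```python
import numpy as np

def stdp(W_ee, x_prev, x_curr, eta=0.004):
    """STDP sketch: potentiate W[i, j] when the presynaptic neuron j
    fired at t-1 and the postsynaptic neuron i fires at t; depress it
    for the reverse ordering. Weights are kept in [0, 1]."""
    dW = eta * (np.outer(x_curr, x_prev) - np.outer(x_prev, x_curr))
    return np.clip(W_ee + dW, 0.0, 1.0)

# Neuron 0 fires at t-1, neuron 1 fires at t: strengthen W[1, 0]
W = stdp(np.full((2, 2), 0.5), np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

The outer-product form applies the pairwise timing rule to all synapses at once instead of looping over neuron pairs.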

Intrinsic Plasticity
Intrinsic Plasticity (IP) updates the firing threshold of excitatory neurons based on the state of each neuron at every time step: it increases the threshold if the neuron is firing and decreases it otherwise,

$$T_i(t+1) = T_i(t) + \eta_{IP}\left( x_i(t) - H_{IP} \right)$$

where
$T_i(t)$ - firing threshold of neuron $i$ at time $t$
$\eta_{IP}$ - intrinsic plasticity step size
$H_{IP}$ - target firing rate of the neuron
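The rule is a one-liner in numpy. This is an illustrative sketch with assumed parameter values, not the package's implementation:

```python
import numpy as np

def intrinsic_plasticity(T, x, eta_ip=0.01, h_ip=0.1):
    """IP sketch: raise the threshold of neurons that fired (x = 1)
    and lower it for silent neurons, driving each neuron's firing
    rate toward the target h_ip."""
    return T + eta_ip * (x - h_ip)

# Neuron 0 fired, neuron 1 stayed silent
T = intrinsic_plasticity(np.array([0.5, 0.5]), np.array([1.0, 0.0]))
```

At equilibrium the average of $x_i(t) - H_{IP}$ vanishes, which is exactly the homeostatic set point: each neuron fires at the target rate on average.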

Structural Plasticity
Structural Plasticity (SP) is responsible for creating new synapses between excitatory neurons, at a rate of roughly one new connection every tenth time step.
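One way to realize this rate is to attempt a single synapse creation with probability 0.1 per time step. The sketch below assumes a zero weight means "no connection" and uses an arbitrary initial strength; both are assumptions for illustration, not the package's internals:

```python
import numpy as np

def structural_plasticity(W_ee, rng, prob=0.1, init_strength=0.001):
    """SP sketch: with probability `prob`, create one new synapse
    between a randomly chosen pair of currently unconnected
    excitatory neurons (self-connections excluded)."""
    W = W_ee.copy()
    if rng.random() < prob:
        candidates = np.argwhere(W == 0)
        candidates = candidates[candidates[:, 0] != candidates[:, 1]]
        if len(candidates):
            i, j = candidates[rng.integers(len(candidates))]
            W[i, j] = init_strength
    return W

# Force a creation event (prob=1.0) on an empty 3x3 weight matrix
rng = np.random.default_rng(1)
W2 = structural_plasticity(np.zeros((3, 3)), rng, prob=1.0)
```

New synapses start weak; STDP and synaptic scaling then decide whether they are strengthened or pruned back toward zero.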

Synaptic Scaling
Synaptic Scaling (SS) normalizes each neuron's incoming excitatory synaptic strengths and prevents network activity from declining or exploding.
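A common form of this normalization divides each row of the excitatory weight matrix by its sum, so every neuron's total incoming excitatory drive stays constant. The sketch below assumes rows index postsynaptic neurons; this is illustrative, not the package's implementation:

```python
import numpy as np

def synaptic_scaling(W_ee):
    """SS sketch: rescale each neuron's incoming excitatory weights
    (one row per postsynaptic neuron) so they sum to 1. Rows that
    sum to zero (disconnected neurons) are left untouched."""
    row_sums = W_ee.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    return W_ee / row_sums

W = synaptic_scaling(np.array([[1.0, 3.0], [2.0, 2.0]]))
```

Because scaling is multiplicative, it preserves the relative weight differences that STDP has carved out while bounding the total input per neuron.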

Inhibitory Spike Timing Dependent Plasticity
Inhibitory Spike Timing Dependent Plasticity (iSTDP) is responsible for controlling synaptic strengths from the inhibitory to the excitatory network.
$$\Delta W_{ik}^{EI}(t) = -\eta_{iSTDP}\, y_k(t-1)\left[ 1 - x_i(t)\left(1 + \frac{1}{\mu_{IP}}\right) \right]$$

where
$W_{ik}^{EI}$ - synaptic strength from the inhibitory to the excitatory network
$\eta_{iSTDP}$ - inhibitory STDP learning rate
$\mu_{IP}$ - mean firing rate of the neuron

Note that the connection strengths from the excitatory to the inhibitory network ($W_{ki}^{IE}$) remain fixed at their initial values, and connections between inhibitory neurons are not allowed.
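The iSTDP rule can also be written compactly with an outer product. As before, names and parameter values in this sketch are assumptions for illustration, not the package's internals:

```python
import numpy as np

def istdp(W_ei, x_curr, y_prev, eta=0.001, mu_ip=0.1):
    """iSTDP sketch: update inhibitory-to-excitatory weights from the
    presynaptic inhibitory activity y(t-1) and the postsynaptic
    excitatory state x(t). Weights are kept in a small positive range."""
    post_term = 1.0 - x_curr * (1.0 + 1.0 / mu_ip)
    dW = -eta * np.outer(post_term, y_prev)  # shape (n_exc, n_inh)
    return np.clip(W_ei + dW, 0.001, 1.0)

# Inhibitory neuron active at t-1; excitatory neuron fires anyway at t
W_up = istdp(np.full((1, 1), 0.5), np.array([1.0]), np.array([1.0]))
# Same inhibitory activity, but the excitatory neuron stays silent
W_dn = istdp(np.full((1, 1), 0.5), np.array([0.0]), np.array([1.0]))
```

With this sign convention, inhibition onto a neuron that fires despite inhibitory input is strengthened, while inhibition onto silent neurons slowly weakens.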

Analysis functions
The sorn package also includes methods to investigate network properties. A few of the methods in the Statistics module are:

| Method | Description |
|---|---|
| autocorr | t-lagged autocorrelation of neural activity |
| fanofactor | Verify a Poissonian process in the spike generation of neuron(s) |
| spike_source_entropy | Measure the uncertainty about the origin of a spike from the network using entropy |
| firing_rate_neuron | Spike rate of a specific neuron |
| firing_rate_network | Spike rate of the entire network |
| avg_corr_coeff | Average Pearson correlation coefficient between neurons |
| spike_times | Time instants at which a neuron spikes |
| spike_time_intervals | Inter-spike intervals for each neuron |
| hamming_distance | Hamming distance between two network states |

More details about the statistical and plotting tools in the package can be found at https://self-organizing-recurrent-neural-networks.readthedocs.io/en/latest/.
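To illustrate the kind of quantities these methods compute, here are standalone numpy re-implementations of two of them, operating on a binary spike raster. These are illustrative sketches, not the package's own code; consult the Statistics module documentation for the actual API:

```python
import numpy as np

def firing_rate_network(raster):
    """Mean fraction of neurons active per time step.
    raster: (time_steps, n_neurons) binary array."""
    return raster.mean()

def fanofactor(spike_counts):
    """Fano factor = variance / mean of spike counts per window;
    close to 1 for a Poisson process."""
    m = spike_counts.mean()
    return spike_counts.var() / m if m > 0 else 0.0

rate = firing_rate_network(np.array([[1.0, 0.0], [0.0, 0.0]]))
ff = fanofactor(np.array([1.0, 2.0, 3.0, 2.0, 1.0, 3.0]))
```

A Fano factor well below 1 indicates more regular-than-Poisson spiking, while values above 1 indicate bursty, over-dispersed activity.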