MultiModel framework
Here we showcase the MultiModel framework, a standalone framework within neurolib for creating and simulating heterogeneous brain models. By heterogeneous, we mean that a brain network may consist of nodes with completely different dynamics, coupled through a single variable. Imagine having one population model for the thalamus, a different one for the hippocampus, and yet another for the cortex; their parameters, dynamics, and equations would all be different. All of this is possible, and even relatively easy, in MultiModel.
Implemented models
To facilitate your heterogeneous experiments, MultiModel comes with a few population models predefined for you, which can be mixed into a brain network in many ways. We provide the following models (a short import sketch follows the list):
* aln: the adaptive linear-nonlinear (ALN) population model, a mean-field approximation of a delay-coupled network of excitatory and inhibitory adaptive exponential integrate-and-fire (AdEx) neurons
* fitzhugh_nagumo: the FitzHugh-Nagumo model, a two-dimensional slow-fast system and a simplified version of the famous 4D Hodgkin-Huxley model
* hopf: the Hopf model (sometimes called the Stuart-Landau oscillator), a nonlinear oscillator that serves as the normal form of the Hopf bifurcation in dynamical systems
* thalamus: a conductance-based population rate model of the thalamus; it is a Jansen-Rit-like population model with current-based voltage evolution and includes adaptation (K-leak), calcium, and rectifying currents
* wilson_cowan: the Wilson-Cowan model, a simple model of interacting, interconnected excitatory and inhibitory neural populations
* wong_wang: the Wong-Wang model, an approximation of a biophysically based cortical network model. Our implementation comes in two flavors:
    - the original Wong-Wang model with excitatory and inhibitory subtypes
    - the reduced Wong-Wang model with simpler dynamics and no EXC/INH distinction
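Each model listed above ships as a family of builder classes in neurolib.models.multimodel. Below is a minimal import sketch; only ALNNode and ThalamicNode are actually used later in this notebook, and we assume (please check the module for the exact exports) that the remaining models expose analogously named *Node / *Network classes.
# the two node classes used later in this notebook
from neurolib.models.multimodel import ALNNode, ThalamicNode
print(ALNNode().name)
print(ThalamicNode().name)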
Moreover, the MultiModel framework is built in such a way that creating and connecting new models (e.g., Jansen-Rit) is easy and intuitive. An example of how to implement a brand-new model in MultiModel is provided in the following example notebook (example-4.1-create-new-model.ipynb).
Modeling hierarchy
The MultiModel framework relies on a modeling hierarchy that is typically implicit in whole-brain modeling. This hierarchy has three levels:
* NeuralMass: represents a single neural population (typically excitatory, inhibitory, or without a subtype) and is defined by a set of parameters and a set of (possibly delayed, possibly stochastic) differential equations
* Node: represents a single brain node; it is a set of connected neural masses (so, e.g., a single Wilson-Cowan node consists of one excitatory and one inhibitory Wilson-Cowan NeuralMass)
* Network: represents a brain network; it is a set of connected nodes (the nodes can be of any type, as long as their coupling variables are the same)
Although the magic happens at the level of a NeuralMass (by magic, we mean the dynamics), users can only simulate (integrate) a Node or a Network. In other words, even for models without excitatory/inhibitory subtyping (e.g., Hopf or FitzHugh-Nagumo), we create a Node consisting of a single NeuralMass. In the case of, e.g., the Wilson-Cowan, ALN, or original Wong-Wang model, the Node consists of one excitatory and one inhibitory mass. More information on the modeling hierarchy and how it actually works is provided in the following example notebook (example-4.1-create-new-model.ipynb), where we subclass the base classes of this hierarchy to build a new model.
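To make the hierarchy tangible, here is a minimal sketch at the Node level, using the same ALNNode and MultiModel classes that are imported in the next section:
# an ALN node is an iterable container of NeuralMass objects
from neurolib.models.multimodel import ALNNode, MultiModel
aln_node = ALNNode()
print(len(aln_node))  # 2 masses: one excitatory, one inhibitory
print([mass.name for mass in aln_node])
# masses are never integrated on their own - to simulate, wrap the Node
model = MultiModel.init_node(aln_node)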
Basic usage in neurolib
(In the following, we expect the reader to be mildly familiar with how neurolib works, e.g., how to run a model, how to change its parameters, and how to get the model results.)
# import stuff
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import display
# import ALN single node model and neurolib wrapper `MultiModel`
from neurolib.models.multimodel import ALNNetwork, ALNNode, MultiModel
Simulating the node
# create a model and wrap it to `MultiModel`
aln = MultiModel.init_node(ALNNode())
# 5 seconds run
aln.params["duration"] = 5.0 * 1000 # in ms
# MultiModel offers two integration backends; by default we use the so-called `jitcdde` backend
# `jitcdde` is a numerical backend employing an adaptive-dt scheme for DDEs, therefore we do not care
# about the actual dt (since it is adaptive), only about the sampling dt, which can be larger
# more about this in example-4.2
aln.params["sampling_dt"] = 1.0 # in ms
# parametrise the ALN model in the slow limit cycle regime
aln.params["*EXC*input*mu"] = 4.2
aln.params["*INH*input*mu"] = 1.8
# run
aln.run()
# plot - default output is firing rates in kHz
plt.plot(aln["t"], aln["r_mean_EXC"].T, lw=2, c="k")
plt.xlabel("t [ms]")
plt.ylabel("Rate [kHz]")
As you saw in the previous cell, the internal workings of MultiModel are very similar to the core neurolib. Therefore, for simple runs, you only need to care about the following (both initialisation patterns are shown in the short sketch after this list):
* MultiModel: a wrapper class for all models in the MultiModel framework which gives model objects their neurolib powers (meaning .params and .run()). The MultiModel class is initialized as follows:
    * when initialising with a Node: model = MultiModel.init_node(<init'd Node class>)
    * when initialising with a Network: model = MultiModel(<init'd Network class>) (see later)
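As a concrete instance of the two patterns above (using the ALNNode and ALNNetwork classes imported earlier; the connectivity matrix is just a dummy example):
# Node flavour - a single ALN node
single_node = MultiModel.init_node(ALNNode())
# Network flavour - two ALN nodes coupled by a dummy connectivity matrix, no delays
sc = np.array([[0.0, 1.0], [1.0, 0.0]])
two_nodes = MultiModel(ALNNetwork(connectivity_matrix=sc, delay_matrix=None))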
MultiModel parameters and other accessible attributes
Since MultiModel is able to simulate heterogeneous models, the internals of how parameters work are a bit more complex than in the core neurolib. Each mass has its own parameters, each node then gathers the parameters of all masses within that node, and finally, the network gathers all parameters from each node in the network. So, hierarchy again. To make it easier to navigate through MultiModel hierarchies, some attributes are implemented at all hierarchy levels.
dummy_sc = np.array([[0.0, 1.0], [1.0, 0.0]])
# init a MultiModel network with 2 ALN nodes, a dummy SC, and no delays
mm_net = ALNNetwork(connectivity_matrix=dummy_sc, delay_matrix=None)
print(mm_net)
# each network is a proper Python iterator, i.e. len() is defined
print(f"Nodes: {len(mm_net)}")
# as well as __getitem__
print(mm_net[0])
print(mm_net[1])
# similarly, each node is a Python iterator, i.e.
print(f"Masses in 1st node: {len(mm_net[0])}")
print(mm_net[0][0])
print(mm_net[0][1])
# in order to navigate through the hierarchy, each mass, node, and network
# has its own name, label, and index
# index of a node is relative within the network
# index of a mass is relative within the node
print(f"This network name: {mm_net.name}")
print(f"This network label: {mm_net.label}")
print(f"1st node name: {mm_net[0].name}")
print(f"1st node label: {mm_net[0].label}")
print(f"1st node index: {mm_net[0].index}")
print(f"1st mass in 1st node name: {mm_net[0][0].name}")
print(f"1st mass in 1st node label: {mm_net[0][0].label}")
print(f"1st mass in 1st node index: {mm_net[0][0].index}")
# you can also check number of variables etc at all levels of hierarchy
print(f"ALN EXC num. vars: {mm_net[0][0].num_state_variables}")
print(f"ALN INH num. vars: {mm_net[0][1].num_state_variables}")
print(f"ALN node num. vars: {mm_net[0].num_state_variables}")
print(f"This network num. vars: {mm_net.num_state_variables}")
# similarly you can check number of "noise variables", i.e. the number
# of stochastic variables entering the simulation
print(f"ALN EXC noise vars: {mm_net[0][0].num_noise_variables}")
# etc
# not sure what the state variables are? no problem!
print(f"ALN EXC state vars: {mm_net[0][0].state_variable_names}")
print(f"ALN node state vars: {mm_net[0].state_variable_names}")
print(f"This network state vars: {mm_net.state_variable_names}")
# if you are unsure what kind of monster you have built in MultiModel,
# a function `describe()` is available at all three levels -
# it returns a dictionary with basic info about the model object
# this is the describe() of a `NeuralMass`
print("")
print("Mass `describe`:")
display(mm_net[0][0].describe())
# describe of a `Node` recursively describes all masses and some more
print("")
print("Node `describe`:")
display(mm_net[0].describe())
# and finally, describe of a `Network` gives you everything
print("")
print("Network `describe`:")
display(mm_net.describe())
# PRO tip: imagine a highly heterogeneous network and a long simulation with it;
# apart from the results, you can dump the `net.describe()` dictionary into JSON and
# never forget what you've done!
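Following up on the tip above, here is a minimal sketch of dumping the describe() dictionary to a JSON file (the filename is arbitrary, and default=str is just a catch-all for entries that are not natively JSON-serialisable):
import json
with open("mm_net_description.json", "w") as fp:
    # store the full description of the model hierarchy next to your results
    json.dump(mm_net.describe(), fp, indent=2, default=str)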
# now let us check the parameters; for this, we initialise MultiModel in neurolib's fashion
aln_net = MultiModel(mm_net)
# parameters are accessible via .params
aln_net.params
# as you can see, the parameters are a flattened nested dictionary which follows this nomenclature:
# {"<network label>.<node label>_index.<mass label>_index.<param name>": param value}
# as you can see, there are a lot of parameters even for a simple 2-node network of ALN models
# typically you want to change the parameters of all nodes at the same time
# fortunately, model.params is not your basic dictionary, it's a special one - we call it a `star` dictionary,
# because you can do this:
display(aln_net.params["*tau"])
print("")
# so yes, the star works as a glob identifier: by selecting "*tau" I get all parameters named tau
# (I don't care which mass or node they come from)
# what if I want to change taus only in EXC masses? easy:
display(aln_net.params["*EXC*tau"])
print("")
# or maybe I want to change taus only in the first node?
display(aln_net.params["*Node_0*tau"])
print("")
# of course, you can change a param value with this
aln_net.params["*Node_0*tau"] = 13.2
display(aln_net.params["*Node_0*tau"])
aln_net.params["*Node_0*tau"] = 5.0
# case: I want to change all taus except the "noise" taus (the noise input parameters have "input" in their key)
# this gives all the taus, including the "noise" ones
display(aln_net.params["*tau*"])
print("")
# the pipe symbol filters out unwanted keys - here we keep only the taus whose key does NOT include "input"
display(aln_net.params["*tau*|input"])
max_rate_e = []
min_rate_e = []
# low number of points for a quick test run; use np.linspace(0, 2, 20) for the full diagram
mue_inputs = np.linspace(0, 2, 2)
# now let's match the ALN parameters to those in example-0-aln-minimal and recreate
# the 1D bifurcation diagram
aln.params["*INH*input*mu"] = 0.5
aln.params["*b"] = 0.0
aln.params["ALNNode_0.ALNMassEXC_0.a"] = 0.0
for mue in mue_inputs:
    aln.params["*EXC*input*mu"] = mue
    display(aln.params["*EXC*input*mu"])
    aln.run()
    max_rate_e.append(np.max(aln.output[0, -int(1000 / aln.params["sampling_dt"]) :]))
    min_rate_e.append(np.min(aln.output[0, -int(1000 / aln.params["sampling_dt"]) :]))
plt.plot(mue_inputs, max_rate_e, c="k", lw=2)
plt.plot(mue_inputs, min_rate_e, c="k", lw=2)
plt.title("Bifurcation diagram of the aln model")
plt.xlabel("Input to excitatory population")
plt.ylabel("Min / max firing rate [kHz]")
Connecting two models
So far, we have only shown how to use MultiModel with a single dynamical model (ALN), and that is no fun; all of this is already possible in the core neurolib, and there it is much faster. However, the real strength of MultiModel is combining different models into one network. Let us build a thalamocortical model using one node of the thalamic population model and one ALN node representing the cortex.
# first - imports
from neurolib.models.multimodel import ALNNode, ThalamicNode
from neurolib.models.multimodel.builder.base.constants import EXC, INH
from neurolib.models.multimodel.builder.base.network import Network
# let us start by subclassing the Network
class ALNThalamusMiniNetwork(Network):
    """
    Simple thalamocortical motif: 1 cortical ALN node + 1 NMM thalamus node.
    """

    # provide basic attributes such as name and label
    name = "ALN 1 node + Thalamus"
    label = "ALNThlmNet"

    # define which variables are used to sync, i.e. which coupling variables our nodes need
    sync_variables = [
        # both nodes are connected via excitatory synapses
        "network_exc_exc",
        # ALN also requires squared coupling
        "network_exc_exc_sq",
        # and the INH mass in the thalamus also receives excitatory coupling
        "network_inh_exc",
    ]

    # lastly, we need to define the default output of the network (this has to be a
    # variable present in all nodes)
    # for us it is the excitatory firing rate
    default_output = f"r_mean_{EXC}"
    # define all output vars of any interest to us - EXC and INH firing rates and the adaptation current in ALN
    output_vars = [f"r_mean_{EXC}", f"r_mean_{INH}", f"I_A_{EXC}"]

    def __init__(self, connectivity_matrix, delay_matrix):
        # self-connections are resolved within nodes, so we expect zeroes on the diagonal
        assert np.all(np.diag(connectivity_matrix) == 0.0)

        # init ALN node with index 0
        aln_node = ALNNode()
        aln_node.index = 0
        # index where the state variables start - for the first node it is always 0
        aln_node.idx_state_var = 0
        # set correct indices for noise input
        for mass in aln_node:
            mass.noise_input_idx = [mass.index]

        # init thalamus node with index 1
        thalamus = ThalamicNode()
        thalamus.index = 1
        # thalamic state variables start where the ALN state variables end - easy
        thalamus.idx_state_var = aln_node.num_state_variables
        # set correct indices of noise input - one per mass, after the ALN noise indices
        for mass in thalamus:
            mass.noise_input_idx = [aln_node.num_noise_variables + mass.index]

        # now call super().__init__ with these two nodes:
        super().__init__(
            nodes=[aln_node, thalamus],
            connectivity_matrix=connectivity_matrix,
            delay_matrix=delay_matrix,
        )

    # done! the only other thing we need to do is to set the coupling variables;
    # thalamus and ALN are coupled via their firing rates, and here we set up the
    # coupling matrices; the super class `Network` comes with some convenient
    # functions for this
    def _sync(self):
        """
        Set coupling variables - the ones we defined in `sync_variables`.
        _sync returns a list of tuples where the first element in each tuple is the coupling "symbol"
        and the second is the actual mathematical expression.
        For the ease of doing this, the `Network` class contains convenience functions:
            - _additive_coupling
            - _diffusive_coupling
            - _no_coupling
        Here we use additive coupling only.
        """
        # get the indices of the coupling variables from all nodes
        exc_indices = [
            next(
                iter(
                    node.all_couplings(mass_indices=node.excitatory_masses.tolist())
                )
            )
            for node in self
        ]
        assert len(exc_indices) == len(self)
        return (
            # basic EXC <-> EXC coupling
            # within_node_idx is a list of len 2 (because we have two nodes)
            # with the indices of the coupling variables within the respective state vectors
            self._additive_coupling(within_node_idx=exc_indices, symbol="network_exc_exc")
            # squared EXC <-> EXC coupling (only to ALN)
            + self._additive_coupling(
                within_node_idx=exc_indices,
                symbol="network_exc_exc_sq",
                # squared connectivity
                connectivity=self.connectivity * self.connectivity,
            )
            # EXC -> INH coupling (only to thalamus)
            + self._additive_coupling(
                within_node_idx=exc_indices,
                symbol="network_inh_exc",
                connectivity=self.connectivity,
            )
            + super()._sync()
        )
# let's check what we have
SC = np.array([[0.0, 0.15], [1.2, 0.0]])
delays = np.array([[0.0, 13.0], [13.0, 0.0]]) # thalamocortical delay = 13ms
thalamocortical = MultiModel(ALNThalamusMiniNetwork(connectivity_matrix=SC, delay_matrix=delays))
# the wrapped model instance is always accessible as `MultiModel.model_instance`
display(thalamocortical.model_instance.describe())
# fix parameters for interesting regime
thalamocortical.params["*g_LK"] = 0.032 # K-leak conductance in thalamus
thalamocortical.params["ALNThlmNet.ALNNode_0.ALNMassEXC_0.a"] = 0.0 # no firing rate adaptation
thalamocortical.params["*b"] = 15.0 # spike adaptation
thalamocortical.params["*tauA"] = 1000.0 # slow adaptation timescale
thalamocortical.params["*EXC*mu"] = 3.4 # background excitation to ALN
thalamocortical.params["*INH*mu"] = 3.5 # background inhibition to ALN
thalamocortical.params["*ALNMass*input*sigma"] = 0.05 # noise in ALN
thalamocortical.params["*TCR*input*sigma"] = 0.005 # noise in thalamus
thalamocortical.params["*input*tau"] = 5.0 # timescale of OU process
# short run for a quick test; use duration = 20000. for the full 20-second simulation
thalamocortical.params["duration"] = 2000.
thalamocortical.params["sampling_dt"] = 1.0
thalamocortical.run()
_, axs = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(12, 6))
axs[0].plot(thalamocortical.t, thalamocortical.r_mean_EXC[0, :].T)
axs[0].set_ylabel("ALN firing rate [kHz]")
axs[1].plot(thalamocortical.t, thalamocortical.r_mean_EXC[1, :].T, color="C1")
axs[1].set_ylabel("thalamus firing rate [kHz]")
axs[1].set_xlabel("time [ms]")
We can nicely see the interplay between the cortical UP and DOWN states (with the UP state being dominant and with irregular excursions to the DOWN state) and the thalamic spindles.
Combining different models might seem hard at first, but it is actually quite intuitive and works just as if you were connecting the models with pen and paper.
The only necessary things are to define and initialise the individual nodes of the network (done in the __init__ function) and then to specify the type of coupling between these nodes (in the _sync() function). That's it!
For more information on how to build a network, and for a deeper understanding of how exactly MultiModel works, please check out the following example, where we build the Jansen-Rit network from scratch!