EIANN.rules.hebbian#

Classes#

Ojas_rule
BCM_4
Supervised_BCM_4
Hebb_WeightNorm
Hebb_WeightNorm_4
Supervised_Hebb_WeightNorm_4
Hebbian_Temporal_Contrast
Top_Down_Hebbian_Temporal_Contrast_1
Top_Down_Hebbian_Temporal_Contrast_3

Module Contents#

class Ojas_rule(projection, learning_rate=None, forward_only=False)[source]#

Bases: EIANN.rules.base_classes.LearningRule

forward_only = False[source]#
step()[source]#

Perform one step of Oja’s rule weight update.

Updates weights according to Oja’s rule:

\[\Delta w = \eta (y \cdot x - y^2 \cdot w)\]

where the weight decay term \((y^2 \cdot w)\) provides automatic normalization.
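For orientation, a minimal standalone sketch of this update for a single postsynaptic unit (plain PyTorch, not the EIANN API; the function and parameter names here are illustrative):

```python
import torch

def oja_step(w, x, eta=0.01):
    """One Oja's-rule update for a single linear unit.

    dw = eta * (y * x - y**2 * w), where y = w . x.
    The decay term y**2 * w bounds the weights (w tends toward unit norm).
    """
    y = torch.dot(w, x)                      # postsynaptic activity
    return w + eta * (y * x - (y ** 2) * w)

# Repeated updates keep the weight norm bounded; with structured inputs,
# w converges toward the leading principal component of the input data.
w = torch.randn(8)
for _ in range(1000):
    x = torch.randn(8)
    w = oja_step(w, x)
```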

class BCM_4(projection, theta_tau, k, sign=1, learning_rate=None)[source]#

Bases: EIANN.rules.base_classes.LearningRule

theta_tau[source]#
k[source]#
sign = 1[source]#
reinit()[source]#
update()[source]#
step()[source]#
classmethod backward(network, output, target, store_history=False, store_dynamics=False)[source]#
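The BCM_4 attributes (theta_tau, k, sign) suggest a BCM-style rule with a sliding modification threshold. The class itself is not documented on this page; the sketch below is only a generic single-unit BCM update under that assumption, with the role of k omitted and all names illustrative rather than the EIANN implementation:

```python
import torch

def bcm_step(w, x, theta, eta=0.01, theta_tau=100.0, sign=1):
    """Generic BCM update with a sliding threshold (illustrative only).

    dw     = sign * eta * y * (y - theta) * x
    dtheta = (y**2 - theta) / theta_tau        # running average of y**2
    """
    y = torch.dot(w, x)
    w = w + sign * eta * y * (y - theta) * x
    theta = theta + (y ** 2 - theta) / theta_tau
    return w, theta
```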
class Supervised_BCM_4(projection, theta_tau, k, sign=1, max_pop_fraction=0.025, stochastic=False, learning_rate=None, relu_gate=False)[source]#

Bases: EIANN.rules.base_classes.LearningRule

theta_tau[source]#
k[source]#
sign = 1[source]#
max_pop_fraction = 0.025[source]#
stochastic = False[source]#
relu_gate = False[source]#
reinit()[source]#
update()[source]#
step()[source]#
classmethod backward_update_layer_activity(layer, store_dynamics=False)[source]#

Update somatic state and activity for all populations that receive projections with update_phase in ['B', 'backward', 'A', 'all'].

Parameters:
- layer
- store_dynamics (bool)
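As a rough illustration of the selection criterion described above, the sketch below filters populations by the update_phase of their incoming projections. The Population and Projection containers here are hypothetical stand-ins, not EIANN classes:

```python
from dataclasses import dataclass, field

@dataclass
class Projection:          # hypothetical stand-in, not EIANN's Projection
    update_phase: str      # e.g. 'F', 'B', 'backward', 'A', 'all'

@dataclass
class Population:          # hypothetical stand-in, not EIANN's Population
    incoming: list = field(default_factory=list)

def populations_to_update(layer_populations):
    """Return populations receiving at least one projection whose
    update_phase marks it for the backward (or all) update phase."""
    backward_phases = {'B', 'backward', 'A', 'all'}
    return [pop for pop in layer_populations
            if any(p.update_phase in backward_phases for p in pop.incoming)]
```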

classmethod backward_update_layer_dendritic_state(layer)[source]#

Update dendritic state for all populations that receive projections targeting the dendritic compartment.

classmethod backward(network, output, target, store_history=False, store_dynamics=False)[source]#

Integrate top-down inputs and update dendritic state variables.

Parameters:
- network
- output
- target
- store_history (bool)
- store_dynamics (bool)

class Hebb_WeightNorm(projection, sign=1, learning_rate=None, forward_only=False)[source]#

Bases: EIANN.rules.base_classes.LearningRule

sign = 1[source]#
forward_only = False[source]#
step()[source]#
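The normalization scheme used by Hebb_WeightNorm is not documented on this page. As a hedged illustration only, a common pattern is a plain Hebbian update followed by renormalizing each postsynaptic unit's weight vector; all names below are assumptions, not the EIANN implementation:

```python
import torch

def hebb_weight_norm_step(W, x, y, eta=0.01, sign=1):
    """Illustrative Hebbian update with weight normalization (a sketch,
    not necessarily the EIANN scheme).

    W: (n_post, n_pre) weights; x: presynaptic activity; y: postsynaptic activity.
    """
    W = W + sign * eta * torch.outer(y, x)    # Hebbian term: dW = eta * y x^T
    W = W / W.norm(dim=1, keepdim=True)       # renormalize each row to unit norm
    return W
```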
class Hebb_WeightNorm_4(projection, sign=1, learning_rate=None, forward_only=False)[source]#

Bases: EIANN.rules.base_classes.LearningRule

sign = 1[source]#
forward_only = False[source]#
step()[source]#
class Supervised_Hebb_WeightNorm_4(projection, sign=1, max_pop_fraction=0.025, stochastic=True, learning_rate=None, relu_gate=False)[source]#

Bases: EIANN.rules.base_classes.LearningRule

sign = 1[source]#
max_pop_fraction = 0.025[source]#
stochastic = True[source]#
relu_gate = False[source]#
step()[source]#
classmethod backward_update_layer_activity(layer, store_dynamics=False)[source]#

Update somatic state and activity for all populations that receive projections with update_phase in ['B', 'backward', 'A', 'all'].

Parameters:
- layer
- store_dynamics (bool)

classmethod backward_update_layer_dendritic_state(layer)[source]#

Update dendritic state for all populations that receive projections targeting the dendritic compartment.

classmethod backward(network, output, target, store_history=False, store_dynamics=False)[source]#

Integrate top-down inputs and update dendritic state variables.

Parameters:
- network
- output
- target
- store_history (bool)
- store_dynamics (bool)

class Hebbian_Temporal_Contrast(projection, max_pop_fraction=1.0, stochastic=False, learning_rate=None, relu_gate=True)[source]#

Bases: EIANN.rules.backprop_like.BP_like_2L

step()[source]#
class Top_Down_Hebbian_Temporal_Contrast_1(projection, learning_rate=None, forward_only=False)[source]#

Bases: EIANN.rules.base_classes.LearningRule

forward_only = False[source]#
step()[source]#
class Top_Down_Hebbian_Temporal_Contrast_3(projection, learning_rate=None)[source]#

Bases: EIANN.rules.base_classes.LearningRule

step()[source]#