Incremental-Learning-Rules-to-Hopfield-Type-Associative-Memories

Analysis, comparison, and construction of incremental learning rules for Hopfield-type neural networks in the real and complex number systems.

Getting Started

This repository contains the Julia source code of Hopfield-type incremental learning rules in the real and complex number systems, as described in the work "Incremental Learning Rules to Hopfield-Type Associative Memories in Real and Complex Domains" by Fidelis Zanetti de Castro, Rodolfo Anibal Lobo, and Marcos Eduardo Valle. The Jupyter notebooks of the computational experiments are also available in this repository.

In particular, we implemented the correlation, projection, and first- and second-order Storkey learning rules, applying Hopfield-type neural networks as associative memories that store and recall synthetic patterns. For the complex case we explore two models, using the splitsign and csign activation functions.

Usage

The main module for incremental learning in the real and complex cases is called ILearning.jl. The Storkey learning rules can be called in the real case by

    W1 = Storkey1(U,nothing)
    W2 = Storkey2(U,nothing)

where the function Storkey1 is the first-order method and Storkey2 the second-order method. The second argument lets us initialize with a previously stored non-null matrix, for example:

    ### Create a random bipolar matrix
    using Random
    rng = MersenneTwister(42);   # any seeded RNG works here
    N  = 200;
    P1 = 10;
    U1 = 2*rand(rng, Bool, (N, P1)) .- 1;

    ### Store U1 using the first-order Storkey rule (initializing Win = 0)
    W = Storkey1(U1, nothing);

    ### Create another random bipolar matrix
    P2 = 10;
    U2 = 2*rand(rng, Bool, (N, P2)) .- 1;

    ### Store U2 using the first-order Storkey rule (initializing Win = W)
    W1 = Storkey1(U2, W);
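For intuition, the first-order Storkey increment can be sketched as follows. This is the standard formula from the literature, written here as a hypothetical helper `storkey1_sketch`; the actual `Storkey1` implementation in ILearning.jl may differ in details.

```julia
using LinearAlgebra

# Sketch of the first-order Storkey rule (standard formula, not the
# package's internals): for each pattern x,
#   W[i,j] += (x[i]*x[j] - x[i]*h_ji - h_ij*x[j]) / N,
# where h_ij = sum over k ≠ i, j of W[i,k]*x[k] is the local field.
function storkey1_sketch(U, Win)
    N, P = size(U)
    W = Win === nothing ? zeros(N, N) : float(copy(Win))
    for p in 1:P
        x  = float(U[:, p])
        Wp = copy(W)               # weights before pattern p is stored
        h  = Wp * x                # full local fields: h[i] = Σ_k Wp[i,k]*x[k]
        for i in 1:N, j in 1:N
            hij = h[i] - Wp[i, i]*x[i] - Wp[i, j]*x[j]   # exclude k = i, j
            hji = h[j] - Wp[j, j]*x[j] - Wp[j, i]*x[i]
            W[i, j] += (x[i]*x[j] - x[i]*hji - hij*x[j]) / N
        end
    end
    W[diagind(W)] .= 0             # no self-connections
    return W
end
```

With a zero initial matrix and a single pattern, the increment reduces to the Hebbian outer product, and the stored pattern is a fixed point of the sign update.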

Analogously, in the complex case we have the Storkey learning rules

    W1 = Storkey1(U,nothing)
    W2 = Storkey2(U,nothing)

where the functions Storkey1 and Storkey2 are the complex versions of the real-valued learning rules, and the second argument lets us initialize with a previously stored non-null matrix built by the same storage rule. The projection and correlation rules, in both cases, are called by

    Wc = ILearning.Correlation(U,nothing);
    Wp = ILearning.Projection(U, Win, Ain)

In this case, the correlation and projection rules can also be initialized with a previously stored non-null matrix.
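For reference, the batch forms of these two rules can be sketched as below. The helper names are hypothetical, and the sketches omit the incremental state (`Win`, `Ain`) that `ILearning.Correlation` and `ILearning.Projection` carry.

```julia
using LinearAlgebra

# Hebbian (correlation) rule: sum of outer products scaled by 1/N,
# with self-connections removed.
function correlation_sketch(U)
    N = size(U, 1)
    W = U * U' / N
    W[diagind(W)] .= 0
    return W
end

# Projection rule: W = U * pinv(U) is the orthogonal projector onto the
# span of the stored patterns, so every stored pattern is a fixed point
# of the linear map (W*U == U).
projection_sketch(U) = U * pinv(U)
```

The projection sketch leaves the diagonal intact to keep the exact projector property; the package may additionally zero it, as described below.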

In the real case, the stored patterns have components in the bipolar set {-1, +1}. In the multistate complex case, you need to set the resolution factor in order to define the possible states of the neurons; when the splitsign activation function is used, the real and imaginary parts of the components belong to {-1, +1}. In the real case a symmetric weight matrix with diagonal terms equal to zero is obtained, and in the complex case both matrices are symmetric with diagonal terms equal to zero.
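The two complex activation functions can be sketched as follows. These are the usual textbook definitions; the module's own versions may differ in phase offsets and tie-breaking, and the function names here are illustrative.

```julia
# splitsign: apply the sign function to the real and imaginary parts
# separately (degenerate when a part is exactly zero, since sign(0) == 0).
splitsign(z) = sign(real(z)) + im * sign(imag(z))

# csign with resolution factor K: quantize the phase of z onto the
# K-th roots of unity (nearest-sector convention assumed here).
function csign(z, K)
    k = mod(floor(Int, angle(z) * K / (2π) + 0.5), K)
    return exp(im * 2π * k / K)
end
```

The resolution factor K fixes the number of admissible neuron states e^{i2πk/K}, k = 0, …, K-1, in the multistate model.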

  • Fidelis Zanetti de Castro, Rodolfo Anibal Lobo and Marcos Eduardo Valle - Federal Institute of Education, Science and Technology of Espírito Santo and University of Campinas, Brazil, 2020.
