EP3469521A1 - Neural network and method of neural network training - Google Patents

Neural network and method of neural network training

Info

Publication number
EP3469521A1
Authority
EP
European Patent Office
Prior art keywords
array
training
corrective
neuron
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP17811082.1A
Other languages
German (de)
French (fr)
Other versions
EP3469521A4 (en)
Inventor
Dmitri PESCIANSCHI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Progress Inc
Original Assignee
Progress Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 15/178,137 (US9619749B2)
Priority claimed from US 15/449,614 (US10423694B2)
Application filed by Progress Inc filed Critical Progress Inc
Publication of EP3469521A1
Publication of EP3469521A4

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/065 - Analogue means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Definitions

  • This application also claims the benefit of U.S. Provisional Application Serial No. 62/106,389 filed January 22, 2015, and of U.S. Provisional Application Serial No. 62/173,163 filed June 9, 2015, the entire content of which is similarly incorporated by reference.
  • neural network generally refers to software and/or computer architecture, i.e., the overall design or structure of a computer system or a microprocessor, including the hardware and software required to run it.
  • Artificial neural networks may be a family of statistical learning algorithms inspired by biological neural networks, a.k.a., the central nervous systems of animals, in particular the brain. Artificial neural networks are primarily used to estimate or approximate generally unknown functions that may depend on a large number of inputs. Such neural networks have been used for a wide variety of tasks that are difficult to resolve using ordinary rule-based programming, including computer vision and speech recognition.
  • a neural network includes a plurality of inputs to the neural network configured to receive training images.
  • the training images are either received by the plurality of inputs as a training input value array or codified as the training input value array during training of the neural network, i.e., after having been received by the plurality of inputs.
  • the neural network also includes a plurality of synapses. Each synapse is connected to one of the plurality of inputs and includes a plurality of corrective weights. Each corrective weight is defined by a weight value, and the corrective weights of the plurality of synapses are organized in a corrective weight array.
  • the neural network additionally includes a plurality of neurons.
  • Each neuron has at least one output and is connected with at least one of the plurality of inputs via at least one of the plurality of synapses.
  • Each neuron is configured to add up the weight values of the corrective weights corresponding to each synapse connected to the respective neuron, such that the plurality of neurons generate a neuron sum array.
  • the neural network also includes a controller configured to receive desired images organized as a desired output value array.
  • the plurality of inputs to the neural network may be configured to receive input images. Such input images may be either received as an input value array or codified as the input value array during recognition of the images by the neural network.
  • Each synapse may include a plurality of trained corrective weights of the trained corrective weight array. Additionally, each neuron may be configured to add up the weight values of the trained corrective weights corresponding to each synapse connected to the respective neuron, such that the plurality of neurons generate a recognized images array, thereby providing recognition of such input images.
  • the neural network may also include a set of distributors.
  • the set of distributors may be configured to codify each of the training images and input images as the respective training input value array and input value array.
  • Such a set of distributors may be operatively connected to the plurality of inputs for receiving the respective training images and input images.
  • the controller may additionally be programmed with an array of target deviation of the neuron sum array from the desired output value array. Furthermore, the controller may be configured to complete training of the neural network when the deviation of the neuron sum array from the desired output value array is within an acceptable range of the array of target deviation.
  • the training input value array, input value array, corrective weight array, neuron sum array, desired output value array, deviation array, trained corrective weight array, recognized image array, and target deviation array may be organized, respectively, as a training input value matrix, input value matrix, corrective weight matrix, neuron sum matrix, desired output value matrix, deviation matrix, trained corrective weight matrix, recognized image matrix, and target deviation matrix.
  • the neural network may additionally include a plurality of data processors.
  • the controller may be additionally configured to partition at least one of the respective input value, training input value, corrective weight, neuron sum, and desired output value matrices into respective sub-matrices and communicate a plurality of the resultant sub-matrices to the plurality of data processors for separate parallel mathematical operations therewith.
  • partitioning of any of the subject matrices into respective sub-matrices facilitates concurrent or parallel data processing and an increase in speed of either image recognition of the input value matrix or training of the neural network.
  • Such concurrent or parallel data processing also permits scalability of the neural network.
  • the controller may also be configured to subtract the neuron sum matrix from the desired output value matrix to generate a matrix of deviation of neuron sums. Additionally, the controller may be configured to divide the matrix of deviation of neuron sums by the number of inputs connected to the respective neuron to generate a matrix of deviation per neuron input.
  • the controller may be also configured to determine a number of times each corrective weight was used during one training epoch of the neural network.
  • the controller may additionally be configured to form an averaged deviation matrix for the one training epoch using the determined number of times each corrective weight was used during the one training epoch.
  • the controller may be configured to add the averaged deviation matrix for the one training epoch to the corrective weight matrix to thereby generate the trained corrective weight matrix and complete the one training epoch.
  • a method of operating such a neural network, i.e., for training and image recognition, is also disclosed.
  • in another embodiment, a neural network includes a plurality of network inputs, such that each input is configured to receive an input signal having an input value.
  • the neural network also includes a plurality of synapses, wherein each synapse is connected to one of the plurality of inputs and includes a plurality of corrective weights, wherein each corrective weight is established by a memory element that retains a respective weight value.
  • the neural network additionally includes a set of distributors. Each distributor is operatively connected to one of the plurality of inputs for receiving the respective input signal and is configured to select one or more corrective weights from the plurality of corrective weights in correlation with the input value.
  • the neural network also includes a set of neurons.
  • Each neuron has at least one output and is connected with at least one of the plurality of inputs via one of the plurality of synapses and is configured to add up the weight values of the corrective weights selected from each synapse connected to the respective neuron and thereby generate a neuron sum.
  • the output of each neuron provides the respective neuron sum to establish an operational output signal of the neural network.
  • the neural network may also include a weight correction calculator configured to receive a desired output signal having a value, determine a deviation of the neuron sum from the desired output signal value, and modify respective corrective weight values established by the corresponding memory elements using the determined deviation. In such a case, adding up the modified corrective weight values to determine the neuron sum is intended to minimize the deviation of the neuron sum from the desired output signal value to thereby generate a trained neural network.
  • the trained neural network may be configured to receive supplementary training using solely a supplementary input signal having a value and a corresponding supplementary desired output signal.
  • each of the plurality of synapses may be configured to accept one or more additional corrective weights established by the respective memory elements.
  • the neural network may be configured to remove from the respective synapses, during or after training of the neural network, one or more corrective weights established by the respective memory elements. Such removal of some corrective weights may permit the neural network to retain only a number of memory elements required to operate the neural network.
  • the neural network may be configured to accept at least one of an additional input, an additional synapse, and an additional neuron before or during training of the neural network to thereby expand operational parameters of the neural network.
  • the neural network may be configured to remove at least one of an input, a synapse, and a neuron before, during, or after training of the neural network. Such ability to remove neural network elements that are not being used by the network is intended to simplify structure and modify operational parameters of the neural network without loss of the network's output quality.
  • Each memory element may be established by an electrical device characterized by an electrical and/or a magnetic characteristic configured to define a respective weight value.
  • a characteristic may be resistance, impedance, capacity, magnetic field, induction, electric field intensity, etc.
  • the respective electrical and/or magnetic characteristic of each device may be configured to be varied during training of the neural network.
  • the weight correction calculator may modify the respective corrective weight values by varying the respective at least one of the electrical and the magnetic characteristic of the corresponding electrical devices.
  • the electrical device may be configured as one of a resistor, a memistor, a memristor, a transistor, a capacitor, a field-effect transistor, a photoresistor, such as a light-dependent resistor (LDR), or a magnetic dependent resistor (MDR).
  • Each memory element may be established by a block of electrical resistors and include a selector device configured to select one or more electrical resistors from the block using the determined deviation to establish each corrective weight.
  • the block of electrical resistors may additionally include electrical capacitors.
  • each memory element may be established by a block having both electrical resistors and electrical capacitors.
  • the selector device may then be additionally configured to select capacitors using the determined deviation to establish each corrective weight.
  • each neuron may be established by one of a series and a parallel communication channel, such as an electrical wire, or a series or parallel bus.
  • the weight correction calculator may be established as a set of differential amplifiers. Furthermore, each differential amplifier may be configured to generate a respective correction signal.
  • Each of the distributors may be a demultiplexer configured to select one or more corrective weights from the plurality of corrective weights in response to the received input signal.
  • Each distributor may be configured to convert the received input signal into a binary code and select one or more corrective weights from the plurality of corrective weights in correlation with the binary code.
  • a method of operating a utility neural network includes processing data via the utility neural network using modified corrective weight values established by a separate analogous neural network during training thereof.
  • the method also includes establishing an operational output signal of the utility neural network using the modified corrective weight values established by the separate analogous neural network.
  • the separate analogous neural network was trained via receiving, via an input to the neural network, a training input signal having a training input value; communicating the training input signal to a distributor operatively connected to the input; selecting, via the distributor, in correlation with the training input value, one or more corrective weights from a plurality of corrective weights, wherein each corrective weight is defined by a weight value and is positioned on a synapse connected to the input; adding up the weight values of the selected corrective weights, via a neuron connected with the input via the synapse and having at least one output, to generate a neuron sum; receiving, via a weight correction calculator, a desired output signal having a value; determining, via the weight correction calculator, a deviation of the neuron sum from the desired output signal value; and modifying, via the weight correction calculator, respective corrective weight values using the determined deviation to establish the modified corrective weight values, such that adding up the modified corrective weight values to determine the neuron sum minimizes the deviation of the neuron sum from the desired output signal value to thereby train the neural network.
  • the utility neural network and the trained separate analogous neural network may include a matching neural network structure including a number of inputs, corrective weights, distributors, neurons, and synapses.
  • each corrective weight may be established by a memory element that retains a respective weight value.
  • FIGURE 1 is a schematic illustration of a prior art, classical artificial neural network.
  • FIGURE 2 is a schematic illustration of a "progressive neural network" (p-net) having a plurality of synapses, a set of distributors, and a plurality of corrective weights associated with each synapse.
  • FIGURE 3A is a schematic illustration of a portion of the p-net shown in Figure 2, having a plurality of synapses and one synaptic weight positioned upstream of each distributor.
  • FIGURE 3B is a schematic illustration of a portion of the p-net shown in Figure 2, having a plurality of synapses and a set of synaptic weights positioned downstream of the respective plurality of corrective weights.
  • FIGURE 3C is a schematic illustration of a portion of the p-net shown in Figure 2, having a plurality of synapses and one synaptic weight positioned upstream of each distributor and a set of synaptic weights positioned downstream of the respective plurality of corrective weights.
  • FIGURE 4C is a schematic illustration of a portion of the p-net shown in Figure 2, having a single distributor for all synapses of a given input, and having one synaptic weight positioned upstream of each distributor and a set of synaptic weights positioned downstream of the respective plurality of corrective weights.
  • FIGURE 5 is a schematic illustration of division of input signal value range into individual intervals in the p-net shown in Figure 2.
  • FIGURE 6B is a schematic illustration of another embodiment of the distribution for values of coefficient of impact of corrective weights in the p-net shown in Figure 2.
  • FIGURE 6C is a schematic illustration of yet another embodiment of the distribution for values of coefficient of impact of corrective weights in the p-net shown in Figure 2.
  • FIGURE 7 is a schematic illustration of an input image for the p-net shown in Figure 2, as well as one corresponding table representing the image in the form of digital codes and another corresponding table representing the same image as a set of respective intervals.
  • FIGURE 9 is a schematic illustration of an embodiment of the p-net shown in Figure 2 with an example of distribution of synaptic weights around a "central" neuron.
  • FIGURE 10 is a schematic illustration of an embodiment of the p-net shown in Figure 2, depicting a uniform distribution of training deviation between corrective weights.
  • FIGURE 11 is a schematic illustration of an embodiment of the p-net shown in Figure 2, employing modification of the corrective weights during p-net training.
  • FIGURE 14 is a schematic illustration of a model for object oriented programming for the p-net shown in Figure 2 using Unified Modeling Language (UML).
  • FIGURE 20 is a schematic illustration of training the p-net shown in Figure 2.
  • FIGURE 32 is a schematic illustration of another embodiment of the memory element configured as variable impedance in the p-net.
  • FIGURE 37 is a flow diagram of a method for operating the neural network shown in Figures 34-36.
  • the p-net 100 also includes a plurality or a set of synapses 118. Each synapse 118 is connected to one of the plurality of inputs 102, includes a plurality of corrective weights 112, and may also include a synaptic weight 108, as shown in Figure 2. Each corrective weight 112 is defined by a respective weight value.
  • the p-net 100 also includes a set of distributors 114. Each distributor 114 is operatively connected to one of the plurality of inputs 102 for receiving the respective input signal 104. Additionally, each distributor 114 is configured to select one or more corrective weights from the plurality of corrective weights 112 in correlation with the input value.
  • the weight correction calculator 122 is also configured to determine a deviation 128 of the neuron sum 120 from the value of the desired output signal 124, a.k.a., training error, and modify respective corrective weight values using the determined deviation 128. Thereafter, summing the modified corrective weight values to determine the neuron sum 120 minimizes the deviation of the subject neuron sum from the value of the desired output signal 124 and, as a result, is effective for training the p-net 100.
  • the deviation 128 may also be described as the training error between the determined neuron sum 120 and the value of the desired output signal 124.
  • the input values of the input signal 104 only change in the process of general network setup, and are not changed during training of the p-net. Instead of changing the input value, training of the p-net 100 is provided by changing the values of the corrective weights 112.
  • Generating the neuron sum 120 may include initially assigning respective coefficients of impact 134 to each corrective weight 112 according to the input value and then multiplying the subject coefficients of impact by values of the respective employed corrective weights 112. Each neuron 116 then sums the individual products of the corrective weight 112 and the assigned coefficient of impact 134 for all the synapses 118 connected thereto.
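    For illustration, the summation described in this bullet can be sketched in Python as follows; the weight values and impact coefficients are hypothetical examples, not values from the patent:

        # Hypothetical sketch: each employed corrective weight is multiplied by its
        # assigned coefficient of impact and the products are summed over all
        # synapses connected to one neuron.
        def neuron_sum(corrective_weights, impact_coefficients):
            return sum(w * c for w, c in zip(corrective_weights, impact_coefficients))

        weights = [0.25, 0.5, 0.125]   # one selected corrective weight per synapse
        impacts = [1.0, 0.5, 0.0]      # assumed coefficients of impact
        print(neuron_sum(weights, impacts))   # 0.5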
  • Typically, formation of the p-net 100 will take place before the training of the p-net commences. However, in a separate embodiment, if during training the p-net 100 receives an input signal 104 for which initial corrective weights are absent, appropriate corrective weights 112 may be generated. In such a case, the specific distributor 114 will determine the appropriate interval "d" for the particular input signal 104, and a group of corrective weights 112 with initial values will be generated for the given input 102, the given interval "d", and all the respective neurons 116. Additionally, a corresponding coefficient of impact 134 may be assigned to each newly generated corrective weight 112.
  • input signal values above a predetermined highest level may be assigned to such highest level, as also shown in Figure 5.
  • the specific interval distribution may also be non-uniform or nonlinear, such as symmetrical, asymmetrical, or unlimited. Nonlinear distribution of intervals "d" may be useful when the range of the input signals 104 is considered to be impractically large, and a certain part of the range could include input signals considered to be most critical, such as in the beginning, in the middle, or at the end of the range.
  • the specific interval distribution may also be described by a random function. All the preceding examples are of the non-limiting nature, as other variants of intervals distribution are also possible.
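    As a purely illustrative sketch of how a distributor might map an input signal value to an interval number "d" under either a uniform or a nonlinear distribution, the following Python fragment may be considered; the signal range, interval count, and logarithmic law are assumptions, not values taken from the patent:

        import math

        SIGNAL_MIN, SIGNAL_MAX, NUM_INTERVALS = 0.0, 100.0, 10   # assumed range and interval count

        def interval_uniform(value):
            # uniform division of the input range into equal intervals
            value = min(max(value, SIGNAL_MIN), SIGNAL_MAX)
            d = int((value - SIGNAL_MIN) / (SIGNAL_MAX - SIGNAL_MIN) * NUM_INTERVALS)
            return min(d, NUM_INTERVALS - 1)

        def interval_log(value):
            # nonlinear (logarithmic) division: narrow intervals at the beginning
            # of the range, wider intervals toward the end
            value = min(max(value, SIGNAL_MIN), SIGNAL_MAX)
            frac = math.log1p(value - SIGNAL_MIN) / math.log1p(SIGNAL_MAX - SIGNAL_MIN)
            return min(int(frac * NUM_INTERVALS), NUM_INTERVALS - 1)

        print(interval_uniform(37.0), interval_log(37.0))   # 3 7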
  • determination of the deviation 128 of the neuron sum 120 from the desired output signal value may include determining the mathematical difference therebetween. Additionally, the modification of the respective corrective weights 112 may include apportioning the mathematical difference to each corrective weight used to generate the neuron sum 120. Alternatively, the apportionment of the mathematical difference may include dividing the determined difference equally between each corrective weight 112 used to generate the neuron sum 120. In a yet separate embodiment, the determination of the deviation 128 may also include dividing the value of the desired output signal 124 by the neuron sum 120 to thereby generate the deviation coefficient. Furthermore, in such a case, the modification of the respective corrective weights 112 may include multiplying each corrective weight 112 used to generate the neuron sum 120 by the generated deviation coefficient.
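    The two correction variants described in this bullet can be illustrated with the following minimal Python sketch; the function names and numeric values are assumptions used only for the example:

        def correct_by_difference(weights, neuron_sum, desired):
            # divide the mathematical difference equally between the weights used
            share = (desired - neuron_sum) / len(weights)
            return [w + share for w in weights]

        def correct_by_coefficient(weights, neuron_sum, desired):
            # multiply each used weight by the deviation coefficient
            coefficient = desired / neuron_sum
            return [w * coefficient for w in weights]

        used = [1.0, 2.0, 1.0]      # corrective weights used by one neuron
        s = sum(used)               # neuron sum = 4.0
        print(correct_by_difference(used, s, desired=7.0))    # [2.0, 3.0, 2.0]
        print(correct_by_coefficient(used, s, desired=7.0))   # [1.75, 3.5, 1.75]

    In both variants the corrected weights again sum to the desired output value of 7.0.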
  • the method proceeds to frame 214.
  • the method includes modifying, via the weight correction calculator 122, respective corrective weight values using the determined deviation 128.
  • the modified corrective weight values may subsequently be added or summed up and then used to determine a new neuron sum 120.
  • the summed modified corrective weight values may then serve to minimize the deviation of the neuron sum 120 from the value of the desired output signal 124 and thereby train the p-net 100.
  • method 200 may include returning to frame 202 to perform additional training epochs until the deviation of the neuron sum 120 from the value of the desired output signal 124 is sufficiently minimized.
  • additional training epochs may be performed to converge the neuron sum 120 on the desired output signal 124 to within the predetermined deviation or error value, such that the p-net 100 may be considered trained and ready for operation with new images.
  • a specific input image may be converted into an input image in interval format, that is, real signal values may be recorded as numbers of intervals to which the subject respective signals belong. This procedure may be carried out in each training epoch for the given image. However, the image may also be formed once as a set of interval numbers. For example, in Figure 7 the initial image is presented as a picture, while in the table "Image in digital format" the same image is presented in the form of digital codes, and in the table "Image in interval format" the image is presented as a set of interval numbers, where a separate interval is assigned for each 10 values of digital codes.
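    A minimal sketch of this conversion, assuming 8-bit digital codes and one interval per 10 code values as in the example above, could be:

        # Hypothetical sketch: replace each digital code with the number of the
        # interval to which it belongs (interval width of 10 code values assumed).
        def to_interval_format(image_codes, interval_width=10):
            return [[code // interval_width for code in row] for row in image_codes]

        image = [[  0,  37, 255],
                 [128,  64,  10]]
        print(to_interval_format(image))   # [[0, 3, 25], [12, 6, 1]]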
  • Desired output images 126 represent a field or table of digital or analog values, where each point corresponds to a specific numeric value from -∞ to +∞. Each point of the desired output image 126 may correspond to the output of one of the neurons of the p-net 100. Desired output images 126 may be encoded with digital or analog codes of images, tables, text, formulas, sets of symbols, such as barcodes, or sounds. In the simplest case, each input image 106 may correspond to an output image encoding the subject input image. One of the points of such an output image may be assigned a maximum possible value, for example 100%, whereas all other points may be assigned a minimum possible value, for example, zero.
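    The simplest desired-output encoding mentioned here can be sketched as follows (a hypothetical illustration; the function name and sizes are assumptions):

        # One point of the desired output image receives the maximum possible value
        # (here 100%) and every other point receives the minimum value (zero).
        def desired_output(image_index, num_outputs, max_value=100.0):
            out = [0.0] * num_outputs
            out[image_index] = max_value
            return out

        print(desired_output(image_index=2, num_outputs=5))   # [0.0, 0.0, 100.0, 0.0, 0.0]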
  • Figure 9 shows an embodiment of the p-net 100 in which the relationship between an input and respective neurons is reduced in accordance with statistical normal distribution. Uneven distribution of synaptic weights 108 may result in the entire input signal being communicated to a target or "central" neuron for the given input, thus assigning a value of zero to the subject synaptic weight.
  • Figure 9 shows an embodiment of the p-net 100 that is effective for recognition of local patterns.
  • 10-20% of strong connections, where the values of the synaptic weights 108 are small or zero, may be distributed throughout the entire p-net 100.
  • a program, for example written in an object-oriented programming language, that generates the main elements of the p-net, such as synapses, synaptic weights, distributors, corrective weights, neurons, etc., as software objects.
  • a program may assign relationships between the noted objects and algorithms specifying their actions.
  • synaptic and corrective weights may be formed in the beginning of formation of the p-net 100, along with setting their initial values.
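    By way of illustration only, an object-oriented skeleton of these elements might look like the sketch below; the class and attribute names are hypothetical and are not the classes of the patent's own UML model:

        # Hypothetical object-oriented skeleton of the main p-net elements.
        class CorrectiveWeight:
            def __init__(self, value=0.0):
                self.value = value                 # weight value set at formation

        class Synapse:
            def __init__(self, input_index, num_intervals):
                self.input_index = input_index
                self.synaptic_weight = 1.0         # optional synaptic weight
                # one corrective weight per interval "d" of the input signal range
                self.weights = [CorrectiveWeight() for _ in range(num_intervals)]

        class Neuron:
            def __init__(self, synapses):
                self.synapses = synapses           # synapses contributing to this neuron

        class PNet:
            def __init__(self, num_inputs, num_neurons, num_intervals):
                self.synapses = [[Synapse(i, num_intervals) for _ in range(num_neurons)]
                                 for i in range(num_inputs)]
                self.neurons = [Neuron([self.synapses[i][n] for i in range(num_inputs)])
                                for n in range(num_neurons)]

        net = PNet(num_inputs=4, num_neurons=2, num_intervals=10)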
  • Such evaluation of modifications may include changing, either increasing or reducing, the number of intervals; changing the type of distribution of the coefficients of corrective weight impact (Ci,d,n), testing variants with non-uniform distribution of intervals, such as using normal, power, logarithmic, or log-normal distribution; and changing values of synaptic weights 108, for example their transition to non-uniform distribution.
  • each synapse 118 includes a set of corrective weights Wi,d,n.
  • training with new images, while possibly increasing training error, does not delete the images for which the p-net 100 was previously trained.
  • the more synapses 118 contribute to each neuron 116 and the greater the number of corrective weights Wi,d,n at each synapse, the less the training for a specific image affects the training for other images.
  • the ability of the p-net 100 to easily shift from training mode to the recognition mode and vice versa allows for realization of a "learn from mistakes" mode, when the p-net 100 is trained by an external trainer.
  • the partially trained p-net 100 may generate its own actions, for example, to control a technological process.
  • the trainer could control the actions of the p-net 100 and correct those actions when necessary.
  • additional training of the p-net 100 could be provided.
  • p-net 100 will be trained to ensure its ability to recognize images, patterns, and correlations inherent to an image or to a set of images.
  • the recognition process in the simplest case repeats the first steps of the training process according to the basic algorithm disclosed as part of the method 200. In particular: • direct recognition starts with formatting of the image according to the same rules that are used to format images for training;
  • Formation of a NeuronUnit object class may include formation of:
  • the cycles may be formed, where:
  • An appropriate order and succession of reduction of the index "a" may be experimentally selected to identify strong patterns hidden in the sequence of images. For example, for every 100 images introduced into the p-net 100 during training, there may be a reduction of the index "a" by a count of one, until "a" reaches the zero value. In such a case, the value of "a" may grow correspondingly with the introduction of new images. The competition between growth and reduction of "a" may lead to a situation where random changes are gradually removed from memory, while the corrective weights Wi,n,d,a that have been used and confirmed many times may be saved.
  • the number of selected winner outputs may be predetermined, for example, in a range of 1 to 10, or winner outputs may be selected according to the rule "no less than N% of the maximum neuron sum", where "N" may be, for example, within 90-100%; and all other outputs may be set equal to zero.
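    A minimal sketch of this winner-selection rule (with an assumed value of N) is:

        # Keep only outputs whose neuron sum is at least N% of the maximum sum;
        # all other outputs are set equal to zero. N = 90 is an assumed example.
        def select_winners(neuron_sums, n_percent=90.0):
            threshold = max(neuron_sums) * n_percent / 100.0
            return [s if s >= threshold else 0.0 for s in neuron_sums]

        print(select_winners([12.0, 55.0, 51.0, 7.0]))   # [0.0, 55.0, 51.0, 0.0]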
  • Each corrective weight 112 may be implemented as a memristor-like device (memristor).
  • a memristor is a variable resistor whose resistance is controlled by an electrical current in a circuit, or by an electrical potential or an electric charge. Appropriate memristor functionality may be achieved via an actual memristor device, or via software or physical emulation thereof.
  • In operation of the p-net 100 at low voltage potential, the memristor may operate as a simple resistor.
  • the resistance of the memristor may be varied, for example, by a strong voltage pulse. Whether the memristor's resistance increases or decreases may depend on the polarity of the voltage, while the magnitude of the change may depend on the magnitude of the voltage pulse.
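    A simple software emulation of such a memristor-like memory element might look like the sketch below; the threshold, scaling constant, and sign convention (a positive pulse increasing resistance) are assumptions for illustration only:

        # Hypothetical memristor emulation: low-voltage reads leave the resistance
        # unchanged, while a strong voltage pulse shifts it by an amount and in a
        # direction set by the pulse magnitude and polarity.
        class MemristorEmulation:
            def __init__(self, resistance=1000.0, write_threshold=1.0, rate=50.0):
                self.resistance = resistance            # ohms, encodes the weight value
                self.write_threshold = write_threshold  # volts; below this it acts as a resistor
                self.rate = rate                        # ohms changed per volt of pulse

            def read_current(self, voltage):
                # low-voltage operation: behaves as a simple resistor (I = V / R)
                return voltage / self.resistance

            def pulse(self, voltage):
                # strong pulse: polarity sets direction, magnitude sets the change
                if abs(voltage) >= self.write_threshold:
                    self.resistance = max(1.0, self.resistance + self.rate * voltage)

        m = MemristorEmulation()
        m.pulse(+4.0)    # assumed convention: positive pulse increases resistance
        m.pulse(-2.0)    # negative pulse decreases resistance
        print(m.resistance, m.read_current(0.1))   # 1100.0 and roughly 9.1e-05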
  • each respective output 117 of every neuron 116 provides the respective neuron sum 120 to establish an operational output signal 152 of the p-net 100A.
  • the operational output signal 152 has a signal value that represents either a portion or the entirety of an operational output image 154.
  • the trained p-net 100A of any disclosed embodiments may be configured to receive supplementary training using solely a supplementary input signal 156 having a value along with a corresponding supplementary desired output signal 158.
  • the previously trained p-net 100A may receive supplementary training without being retrained with some or all of the original input signals 104 and desired output signals 124 that were employed to initially train the p-net 100A.
  • Each of the plurality of synapses 118 in the p-net 100A may be configured to accept one or more additional corrective weights 112 that were established by the respective memory elements 150.
  • Such additional corrective weights 112 may be added to the synapses either during training or before the supplementary training of the p-net 100A.
  • Such additional corrective weights 112 may be used to expand a number of memory elements 150 available to train and operate the p-net 100A.
  • each of the generated correction signals may be used to vary the electrical and/or magnetic characteristic of at least one electrical device 160, i.e., separate correction signals used for each device being modified.
  • the weight correction calculator 122 may also be configured to generate a single correction signal used to vary the electrical and/or magnetic characteristic of each electrical device 160, i.e., one correction signal may be used for all electrical devices being modified.
  • Each neuron 116 is similarly configured to sum up the values of the corrective weights 112 selected from each synapse 118 connected to the respective neuron 116 to thereby generate and output a neuron sum array 120A, otherwise designated as Σn.
  • a separate distributor 114 may similarly be used for each synapse 118 of a given input 102, as shown in Figures 34-36.
  • a single distributor may be used for all such synapses (not shown).
  • all corrective weights 112 are assigned initial values, which may change during the process of p-net training, as shown in Figure 35.
  • the initial value of the corrective weight 112 may be selected randomly, calculated with the help of a pre-determined mathematical function, selected from a predetermined template, etc.
  • Initial values of the corrective weights 112 may be either identical or distinct for each corrective weight 112, and may also be zero.
  • Non-volatile media for the controller 122A may include, for example, optical or magnetic disks and other persistent memory.
  • Volatile media may include, for example, dynamic random access memory (DRAM), which may constitute a main memory.
  • Such instructions may be transmitted by one or more transmission media, including coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to a processor of a computer.
  • adding up the modified corrective weight values to determine the neuron sum array 120A reduces, i.e., minimizes, the deviation 128 of the neuron sum array 120A from the desired output value array 126A to generate a trained corrective weight array 134A (shown in Figure 36).
  • the trained corrective weight array 134A includes all the corrective weights 112 within the dashed box 134A.
  • the trained corrective weight array 134A includes all the trained corrective weights 112A within the dashed box 119A and may include the distributors 114 associated therewith. Therefore, the minimized deviation 128 of the neuron sum array 120A compensates for errors generated by the p-net 100B.
  • the generated trained corrective weight array 134A facilitates concurrent or parallel training of the p-net 100B.
  • the distributors may be configured to codify the training and input images 106 as the respective training input value array 107 and input value array 107A. Accordingly, such a set of distributors 114 is operatively connected to the plurality of inputs 102 for receiving each of the respective training and input images 106.
  • the above operations may be performed using structured matrices, specifically a trained corrective weight matrix in place of the trained corrective weight array 134A, as will be described in detail below.
  • the training images may be received and/or organized in an input training matrix
  • the above training input images matrix may be converted via the controller 122A into the training input value matrix 141, which is represented as a matrix having a corresponding number of columns for the number of inputs "I", accounting for a specific number of intervals "i", and a corresponding number of rows for the number of images.
  • concurrent recognition of a batch of input images 106 may be provided using the matrix operations described above.
  • in the trained p-net 100C, the corrective weight array may be represented as a two-dimensional n x k matrix.
  • the controller 122A may be additionally configured to determine the number of times each corrective weight 112 was used during one training epoch of the p-net 100B.
  • the controller 122A may be configured to add the averaged deviation matrix 157 for the one training epoch to the corrective weight matrix 142 to thereby generate the trained corrective weight matrix 146.
  • Figure 37 depicts a method 400 for operating the p-net 100B, as described above with respect to Figures 34-36.
  • the method 400 is configured to improve operation of an apparatus, such as a computer, or a system of computers employed in implementing supervised training using one or more data processors, such as the processor 150.
  • the method 400 may be programmed into a non-transitory computer-readable storage device for operating the p-net 100B and encoded with instructions executable to perform the method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)

Abstract

A neural network includes inputs for receiving input signals, and synapses connected to the inputs and having corrective weights organized in an array. Training images are either received by the inputs as an array or codified as such during training of the network. The network also includes neurons, each having an output connected with at least one input via one synapse and generating a neuron sum array by summing corrective weights selected from each synapse connected to the respective neuron. Furthermore, the network includes a controller that receives desired images in an array, determines a deviation of the neuron sum array from the desired output value array, and generates a deviation array. The controller modifies the corrective weight array using the deviation array. Adding up the modified corrective weights to determine the neuron sum array reduces the subject deviation and generates a trained corrective weight array for concurrent network training.

Description

NEURAL NETWORK AND METHOD OF NEURAL NETWORK TRAINING
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This application claims the benefit of U.S. Utility Application 15/449,614 filed March 3, 2017 and U.S. Utility Application 15/178,137 filed June 9, 2016, each of which is a continuation-in-part of U.S. Utility Bypass Application 14/862,337 filed September 23, 2015, which is a continuation of International Application Serial No. PCT/US2015/19236 filed March 6, 2015, which claims the benefit of U.S.
Provisional Application Serial No. 61/949,210 filed March 06, 2014, and U.S.
Provisional Application Serial No. 62/106,389 filed January 22, 2015, and also claims the benefit of U.S. Provisional Application Serial No. 62/173,163 filed June 09, 2015, the entire content of which is similarly incorporated by reference.
TECHNICAL FIELD
[0002] The disclosure relates to an artificial neural network and a method of training the same.
BACKGROUND
[0003] In machine learning, the term "neural network" generally refers to software and/or computer architecture, i.e., the overall design or structure of a computer system or a microprocessor, including the hardware and software required to run it. Artificial neural networks may be a family of statistical learning algorithms inspired by biological neural networks, a.k.a., the central nervous systems of animals, in particular the brain. Artificial neural networks are primarily used to estimate or approximate generally unknown functions that may depend on a large number of inputs. Such neural networks have been used for a wide variety of tasks that are difficult to resolve using ordinary rule-based programming, including computer vision and speech recognition.
[0004] Artificial neural networks are generally presented as systems of "neurons" which may compute values from inputs, and, as a result of their adaptive nature, are capable of machine learning, as well as pattern recognition. Each neuron frequently connects with several inputs through synapses having synaptic weights. [0005] Neural networks are not programmed as typical software and hardware, but are trained. Such training is typically accomplished via analysis of a sufficient number of representative examples and by statistical or algorithmic selection of synaptic weights, so that a given set of input images corresponds to a given set of output images. A common criticism of classical neural networks is that significant time and other resources are frequently required for their training.
[0006] Various artificial neural networks are described in the following U.S. Patents: 4,979,124; 5,479,575; 5,493,688; 5,566,273; 5,682,503; 5,870,729; 7,577,631; and 7,814,038.
SUMMARY
[0007] In one embodiment, a neural network includes a plurality of inputs to the neural network configured to receive training images. The training images are either received by the plurality of inputs as a training input value array or codified as the training input value array during training of the neural network, i.e., after having been received by the plurality of inputs. The neural network also includes a plurality of synapses. Each synapse is connected to one of the plurality of inputs and includes a plurality of corrective weights. Each corrective weight is defined by a weight value, and the corrective weights of the plurality of synapses are organized in a corrective weight array.
[0008] The neural network additionally includes a plurality of neurons. Each neuron has at least one output and is connected with at least one of the plurality of inputs via at least one of the plurality of synapses. Each neuron is configured to add up the weight values of the corrective weights corresponding to each synapse connected to the respective neuron, such that the plurality of neurons generate a neuron sum array. The neural network also includes a controller configured to receive desired images organized as a desired output value array.
[0009] The controller is also configured to determine a deviation of the neuron sum array from the desired output value array and generate a deviation array. The controller is additionally configured to modify the corrective weight array using the determined deviation array. Adding up the modified corrective weight values to determine the neuron sum array reduces the deviation of the neuron sum array from the desired output value array, i.e., compensates for errors generated by the neural network during training, and generates a trained corrective weight array to thereby facilitate concurrent or parallel training of the neural network.
[0010] In a trained neural network, the plurality of inputs to the neural network may be configured to receive input images. Such input images may be either received as an input value array or codified as the input value array during recognition of the images by the neural network. Each synapse may include a plurality of trained corrective weights of the trained corrective weight array. Additionally, each neuron may be configured to add up the weight values of the trained corrective weights corresponding to each synapse connected to the respective neuron, such that the plurality of neurons generate a recognized images array, thereby providing recognition of such input images.
[0011] The neural network may also include a set of distributors. In such an embodiment, the set of distributors may be configured to codify each of the training images and input images as the respective training input value array and input value array. Such a set of distributors may be operatively connected to the plurality of inputs for receiving the respective training images and input images.
[0012] The controller may additionally be programmed with an array of target deviation of the neuron sum array from the desired output value array. Furthermore, the controller may be configured to complete training of the neural network when the deviation of the neuron sum array from the desired output value array is within an acceptable range of the array of target deviation.
[0013] The training input value array, input value array, corrective weight array, neuron sum array, desired output value array, deviation array, trained corrective weight array, recognized image array, and target deviation array may be organized, respectively, as a training input value matrix, input value matrix, corrective weight matrix, neuron sum matrix, desired output value matrix, deviation matrix, trained corrective weight matrix, recognized image matrix, and target deviation matrix.
[0014] The neural network may additionally include a plurality of data processors. In such an embodiment, the controller may be additionally configured to partition at least one of the respective input value, training input value, corrective weight, neuron sum, and desired output value matrices into respective sub-matrices and communicate a plurality of the resultant sub-matrices to the plurality of data processors for separate parallel mathematical operations therewith. Such partitioning of any of the subject matrices into respective sub-matrices facilitates concurrent or parallel data processing and an increase in speed of either image recognition of the input value matrix or training of the neural network. Such concurrent or parallel data processing also permits scalability of the neural network.
[0015] The controller may modify the corrective weight matrix by applying an algebraic matrix operation to the training input value matrix and the corrective weight matrix to thereby train the neural network.
[0016] The mathematical matrix operation may include a determination of a mathematical product of the training input value and corrective weight matrices to thereby form a current training epoch weight matrix.
[0017] The controller may also be configured to subtract the neuron sum matrix from the desired output value matrix to generate a matrix of deviation of neuron sums. Additionally, the controller may be configured to divide the matrix of deviation of neuron sums by the number of inputs connected to the respective neuron to generate a matrix of deviation per neuron input.
[0018] The controller may be also configured to determine a number of times each corrective weight was used during one training epoch of the neural network. The controller may additionally be configured to form an averaged deviation matrix for the one training epoch using the determined number of times each corrective weight was used during the one training epoch. Furthermore, the controller may be configured to add the averaged deviation matrix for the one training epoch to the corrective weight matrix to thereby generate the trained corrective weight matrix and complete the one training epoch.
[0019] A method of operating such a neural network, i.e., for training and image recognition, is also disclosed.
[0020] Additionally disclosed are a non-transitory computer-readable storage device for operating an artificial neural network and an apparatus for operating an artificial neural network.
[0021] In another embodiment, a neural network includes a plurality of network inputs, such that each input is configured to receive an input signal having an input value. The neural network also includes a plurality of synapses, wherein each synapse is connected to one of the plurality of inputs and includes a plurality of corrective weights, wherein each corrective weight is established by a memory element that retains a respective weight value. The neural network additionally includes a set of distributors. Each distributor is operatively connected to one of the plurality of inputs for receiving the respective input signal and is configured to select one or more corrective weights from the plurality of corrective weights in correlation with the input value. The neural network also includes a set of neurons. Each neuron has at least one output and is connected with at least one of the plurality of inputs via one of the plurality of synapses and is configured to add up the weight values of the corrective weights selected from each synapse connected to the respective neuron and thereby generate a neuron sum. The output of each neuron provides the respective neuron sum to establish an operational output signal of the neural network.
[0022] The neural network may also include a weight correction calculator configured to receive a desired output signal having a value, determine a deviation of the neuron sum from the desired output signal value, and modify respective corrective weight values established by the corresponding memory elements using the determined deviation. In such a case, adding up the modified corrective weight values to determine the neuron sum is intended to minimize the deviation of the neuron sum from the desired output signal value to thereby generate a trained neural network.
[0023] The trained neural network may be configured to receive supplementary training using solely a supplementary input signal having a value and a corresponding supplementary desired output signal.
[0024] Either during training or before the supplementary training of the neural network, each of the plurality of synapses may be configured to accept one or more additional corrective weights established by the respective memory elements.
[0025] The neural network may be configured to remove from the respective synapses, during or after training of the neural network, one or more corrective weights established by the respective memory elements. Such removal of some corrective weights may permit the neural network to retain only a number of memory elements required to operate the neural network.
[0026] The neural network may be configured to accept at least one of an additional input, an additional synapse, and an additional neuron before or during training of the neural network to thereby expand operational parameters of the neural network. [0027] The neural network may be configured to remove at least one of an input, a synapse, and a neuron before, during, or after training of the neural network. Such ability to remove neural network elements that are not being used by the network is intended to simplify structure and modify operational parameters of the neural network without loss of the network's output quality.
[0028] Each memory element may be established by an electrical device characterized by an electrical and/or a magnetic characteristic configured to define a respective weight value. Such a characteristic may be resistance, impedance, capacity, magnetic field, induction, electric field intensity, etc. The respective electrical and/or magnetic characteristic of each device may be configured to be varied during training of the neural network. Additionally, the weight correction calculator may modify the respective corrective weight values by varying the respective at least one of the electrical and the magnetic characteristic of the corresponding electrical devices.
[0029] The electrical device may be configured as one of a resistor, a memistor, a memristor, a transistor, a capacitor, a field-effect transistor, a photoresistor, such as a light-dependent resistor (LDR), or a magnetic dependent resistor (MDR).
[0030] Each memory element may be established by a block of electrical resistors and include a selector device configured to select one or more electrical resistors from the block using the determined deviation to establish each corrective weight.
[0031] The block of electrical resistors may additionally include electrical capacitors. In other words, each memory element may be established by a block having both electrical resistors and electrical capacitors. The selector device may then be additionally configured to select capacitors using the determined deviation to establish each corrective weight.
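As a purely illustrative software analogy (not the patent's hardware implementation), a selector that picks resistors from a block to approximate a target weight value might be sketched as follows; the parallel connection of the selected resistors and the resistor values are assumptions:

    from itertools import combinations

    def parallel_resistance(resistors):
        return 1.0 / sum(1.0 / r for r in resistors)

    def select_resistors(block, target_resistance):
        # exhaustively pick the subset of the block whose parallel combination
        # comes closest to the target resistance (i.e., the stored weight value)
        best, best_err = None, float("inf")
        for k in range(1, len(block) + 1):
            for subset in combinations(block, k):
                err = abs(parallel_resistance(subset) - target_resistance)
                if err < best_err:
                    best, best_err = subset, err
        return best

    block = [100.0, 220.0, 470.0, 1000.0]       # illustrative resistor values, in ohms
    print(select_resistors(block, target_resistance=150.0))   # (220.0, 470.0)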
[0032] The neural network may be configured as one of an analog, digital, and digital-analog network. In such a network, at least one of the plurality of inputs, the plurality of synapses, the memory elements, the set of distributors, the set of neurons, the weight correction calculator, and the desired output signal may be configured to operate in an analog, digital, and digital-analog format.
[0033] In the case where the neural network is configured as the analog network, each neuron may be established by one of a series and a parallel communication channel, such as an electrical wire, or a series or parallel bus. [0034] The weight correction calculator may be established as a set of differential amplifiers. Furthermore, each differential amplifier may be configured to generate a respective correction signal.
[0035] Each of the distributors may be a demultiplexer configured to select one or more corrective weights from the plurality of corrective weights in response to the received input signal.
[0036] Each distributor may be configured to convert the received input signal into a binary code and select one or more corrective weights from the plurality of corrective weights in correlation with the binary code.
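A minimal sketch of such a distributor is given below; the 4-bit quantisation, the assumed signal range, and the function names are illustrative assumptions:

    def distributor_select(signal_value, signal_max=100.0, bits=4):
        # quantise the received input signal into a binary code and return the
        # index of the corrective weight selected by that code
        levels = 2 ** bits
        index = min(int(signal_value / signal_max * levels), levels - 1)
        code = format(index, "0{}b".format(bits))
        return code, index

    weights = [0.0] * 16                        # one corrective weight per code value (assumed)
    code, idx = distributor_select(37.0)
    weights[idx] += 0.25                        # e.g., a training correction applied to the selected weight
    print(code, idx)                            # 0101 5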
[0037] The neural network may be programmed into an electronic device having a memory, and wherein each memory element is stored in the memory of the electronic device.
[0038] A method of operating a utility neural network is also disclosed. The method includes processing data via the utility neural network using modified corrective weight values established by a separate analogous neural network during training thereof. The method also includes establishing an operational output signal of the utility neural network using the modified corrective weight values established by the separate analogous neural network.
[0039] For use of modified corrective weight values by the utility neural network, the separate analogous neural network was trained via receiving, via an input to the neural network, a training input signal having a training input value; communicating the training input signal to a distributor operatively connected to the input; selecting, via the distributor, in correlation with the training input value, one or more corrective weights from a plurality of corrective weights, wherein each corrective weight is defined by a weight value and is positioned on a synapse connected to the input; adding up the weight values of the selected corrective weights, via a neuron connected with the input via the synapse and having at least one output, to generate a neuron sum; receiving, via a weight correction calculator, a desired output signal having a value; determining, via the weight correction calculator, a deviation of the neuron sum from the desired output signal value; and modifying, via the weight correction calculator, respective corrective weight values using the determined deviation to establish the modified corrective weight values, such that adding up the modified corrective weight values to determine the neuron sum minimizes the deviation of the neuron sum from the desired output signal value to thereby train the neural network.
[0040] The utility neural network and the trained separate analogous neural network may include a matching neural network structure including a number of inputs, corrective weights, distributors, neurons, and synapses.
[0041] In each of the utility neural network and the trained separate analogous neural network, each corrective weight may be established by a memory element that retains a respective weight value.
[0042] The above features and advantages, and other features and advantages of the present disclosure, will be readily apparent from the following detailed description of the embodiment(s) and best mode(s) for carrying out the described disclosure when taken in connection with the accompanying drawings and appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0043] FIGURE 1 is a schematic illustration of a prior art, classical artificial neural network.
[0044] FIGURE 2 is a schematic illustration of a "progressive neural network" (p-net) having a plurality of synapses, a set of distributors, and a plurality of corrective weights associated with each synapse.
[0045] FIGURE 3A is a schematic illustration of a portion of the p-net shown in Figure 2, having a plurality of synapses and one synaptic weight positioned upstream of each distributor.
[0046] FIGURE 3B is a schematic illustration of a portion of the p-net shown in Figure 2, having a plurality of synapses and a set of synaptic weights positioned downstream of the respective plurality of corrective weights.
[0047] FIGURE 3C is a schematic illustration of a portion of the p-net shown in Figure 2, having a plurality of synapses and one synaptic weight positioned upstream of each distributor and a set of synaptic weights positioned downstream of the respective plurality of corrective weights.
[0048] FIGURE 4A is a schematic illustration of a portion of the p-net shown in Figure 2, having a single distributor for all synapses of a given input and one synaptic weight positioned upstream of each distributor.
[0049] FIGURE 4B is a schematic illustration of a portion of the p-net shown in Figure 2, having a single distributor for all synapses of a given input and a set of synaptic weights positioned downstream of the respective plurality of corrective weights.
[0050] FIGURE 4C is a schematic illustration of a portion of the p-net shown in
Figure 2, having a single distributor for all synapses of a given input, and having one synaptic weight positioned upstream of each distributor and a set of synaptic weights positioned downstream of the respective plurality of corrective weights.
[0051] FIGURE 5 is a schematic illustration of division of input signal value range into individual intervals in the p-net shown in Figure 2.
[0052] FIGURE 6A is a schematic illustration of one embodiment of a distribution for values of coefficient of impact of corrective weights in the p-net shown in Figure 2.
[0053] FIGURE 6B is a schematic illustration of another embodiment of the distribution for values of coefficient of impact of corrective weights in the p-net shown in Figure 2.
[0054] FIGURE 6C is a schematic illustration of yet another embodiment of the distribution for values of coefficient of impact of corrective weights in the p-net shown in Figure 2.
[0055] FIGURE 7 is a schematic illustration of an input image for the p-net shown in Figure 2, as well as one corresponding table representing the image in the form of digital codes and another corresponding table representing the same image as a set of respective intervals.
[0056] FIGURE 8 is a schematic illustration of an embodiment of the p-net shown in Figure 2 trained for recognition of two distinct images, wherein the p-net is configured to recognize a picture that includes some features of each image.
[0057] FIGURE 9 is a schematic illustration of an embodiment of the p-net shown in Figure 2 with an example of distribution of synaptic weights around a "central" neuron.
[0058] FIGURE 10 is a schematic illustration of an embodiment of the p-net shown in Figure 2, depicting a uniform distribution of training deviation between corrective weights.
[0059] FIGURE 11 is a schematic illustration of an embodiment of the p-net shown in Figure 2, employing modification of the corrective weights during p-net training.
[0060] FIGURE 12 is a schematic illustration of an embodiment of the p-net shown in Figure 2, wherein the basic algorithm generates a primary set of output neuron sums, and wherein the generated set is used to generate several "winner" sums with either retained or increased values and the contribution of remaining sums is negated.
[0061] FIGURE 13 is a schematic illustration of an embodiment of the p-net shown in Figure 2 recognizing a complex image with elements of multiple images.
[0062] FIGURE 14 is a schematic illustration of a model for object oriented programming for the p-net shown in Figure 2 using Unified Modeling Language (UML).
[0063] FIGURE 15 is a schematic illustration of a general formation sequence of the p-net shown in Figure 2.
[0064] FIGURE 16 is a schematic illustration of representative analysis and preparation of data for formation of the p-net shown in Figure 2.
[0065] FIGURE 17 is a schematic illustration of representative input creation permitting interaction of the p-net shown in Figure 2 with input data during training and p-net application.
[0066] FIGURE 18 is a schematic illustration of representative creation of neuron units for the p-net shown in Figure 2.
[0067] FIGURE 19 is a schematic illustration of representative creation of each synapse connected with the neuron units.
[0068] FIGURE 20 is a schematic illustration of training the p-net shown in Figure 2.
[0069] FIGURE 21 is a schematic illustration of neuron unit training in the p-net shown in Figure 2.
[0070] FIGURE 22 is a schematic illustration of extending of neuron sums during training of the p-net shown in Figure 2.
[0071] FIGURE 23 is a flow diagram of a method used to train the p-net shown in Figures 2-22.
[0072] FIGURE 24 is a schematic illustration of a specific embodiment of the p-net having each of the plurality of corrective weights established by a memory element; the p-net being depicted in the process of network training.
[0073] FIGURE 25 is a schematic illustration of a specific embodiment of the p-net having each of the plurality of corrective weights established by the memory element; the p-net being depicted in the process of image recognition.
[0074] FIGURE 26 is a schematic illustration of a representative p-net using memristors during a first stage of training.
[0075] FIGURE 27 is a schematic illustration of the representative p-net using memristors during a second stage of training.
[0076] FIGURE 28 is a schematic illustration of twin parallel branches of memristors in the representative p-net.
[0077] FIGURE 29 is a schematic illustration of the representative p-net using resistors.
[0078] FIGURE 30 is a schematic illustration of one embodiment of the memory element configured as a resistor in the p-net.
[0079] FIGURE 31 is a schematic illustration of another embodiment of the memory element configured as a resistor in the p-net.
[0080] FIGURE 32 is a schematic illustration of another embodiment of the memory element configured as variable impedance in the p-net.
[0081] FIGURE 33 is a flow diagram of a method used to operate the neural network shown in Figures 2-22 and 24-32.
[0082] FIGURE 34 is an illustration of a "progressive neural network" (p-net) having a plurality of synapses and a plurality of corrective weights associated with each synapse, according to the disclosure.
[0083] FIGURE 35 is an illustration of the p-net in the process of being trained, according to the disclosure.
[0084] FIGURE 36 is an illustration of the p-net in the process of image recognition, according to the disclosure.
[0085] FIGURE 37 is a flow diagram of a method for operating the neural network shown in Figures 34-36.
DETAILED DESCRIPTION
[0086] A classical artificial neural network 10, as shown in Figure 1, typically includes input devices 12, synapses 14 with synaptic weights 16, neurons 18, including an adder 20 and activation function device 22, neuron outputs 24 and weight correction calculator 26. Each neuron 18 is connected through synapses 14 to two or more input devices 12. The values of synaptic weights 16 are commonly represented using electrical resistance, conductivity, voltage, electric charge, magnetic property, or other parameters.
[0087] Supervised training of the classical neural network 10 is generally based on an application of a set of training pairs 28. Each training pair 28 commonly consists of an input image 28-1 and a desired output image 28-2, a.k.a., a supervisory signal. Training of the classical neural network 10 is typically provided as follows. An input image in the form of a set of input signals (I1–Im) enters the input devices 12 and is transferred to the synaptic weights 16 with initial weights (W1). The value of the input signal is modified by the weights, typically by multiplying or dividing each signal (I1–Im) value by the respective weight. From the synaptic weights 16, the modified input signals are transferred to the respective neurons 18. Each neuron 18 receives a set of signals from a group of synapses 14 related to the subject neuron 18. The adder 20 included in the neuron 18 sums up all the input signals modified by the weights and received by the subject neuron. Activation function devices 22 receive the respective resultant neuron sums and modify the sums according to mathematical function(s), thus forming respective output images as sets of neuron output signals (∑F1...∑Fn).
[0088] The obtained neuron output image defined by the neuron output signals (∑F1...∑Fn) is compared by a weight correction calculator 26 with pre-determined desired output images (O1–On). Based on the determined difference between the obtained neuron output image ∑Fn and the desired output image On, correction signals for changing the synaptic weights 16 are formed using a pre-programmed algorithm. After corrections are made to all the synaptic weights 16, the set of input signals (I1–Im) is reintroduced to the neural network 10 and new corrections are made. The above cycle is repeated until the difference between the obtained neuron output image ∑Fn and the desired output image On is determined to be less than some predetermined error. One cycle of network training with all the individual images is typically identified as a "training epoch". Generally, with each training epoch, the magnitude of error is reduced. However, depending on the number of individual inputs (I1–Im), as well as the number of inputs and outputs, training of the classical neural network 10 may require a significant number of training epochs, which, in some cases, may be as great as hundreds of thousands.
[0089] A variety of classical neural networks exist, including the Hopfield network, Restricted Boltzmann Machine, Radial basis function network, and recurrent neural network. Specific tasks of classification and clustering require a specific type of neural network, Self-Organizing Maps, which use only input images as network input training information, whereas the desired output image corresponding to a certain input image is formed directly during the training process based on a single winning neuron having an output signal with the maximum value.
[0090] As noted above, one of the main concerns with existing, classical neural networks, such as the neural network 10, is that successful training thereof may require a significant duration of time. Some additional concerns with classical networks may be a large consumption of computing resources, which would in turn drive the need for powerful computers. Additional concerns are an inability to increase the size of the network without full retraining of the network, and a predisposition to such phenomena as "network paralysis" and "freezing at a local minimum", which make it impossible to predict whether a specific neural network would be capable of being trained with a given set of images in a given sequence. There may also be limitations related to the specific sequencing of images introduced during training, where changing the order of introduction of training images may lead to network freezes, as well as an inability to perform additional training of an already trained network.
[0091] Referring to the remaining drawings, wherein like reference numbers refer to like components, Figure 2 shows a schematic view of a progressive neural network, hereinafter "progressive network" or "p-net" 100. The p-net 100 includes a plurality or a set of inputs 102 of the p-net. Each input 102 is configured to receive an input signal 104, wherein the input signals are represented as I1, I2... Im in Figure 2. Each input signal I1, I2... Im represents a value of some characteristic(s) of an input image 106, for example, a magnitude, frequency, phase, signal polarization angle, or association with different parts of the input image 106. Each input signal 104 has an input value, wherein together the plurality of input signals 104 generally describes the input image 106.
[0092] Each input value may be within a value range that lies between -∞ and +∞ and may be set in digital and/or analog forms. The range of the input values may depend on a set of training images. In the simplest case, the range of input values could be the difference between the smallest and largest values of input signals for all training images. For practical reasons, the range of the input values may be limited by eliminating input values that are deemed too high. For example, such limiting of the range of the input values may be accomplished via known statistical methods for variance reduction, such as importance sampling. Another example of limiting the range of the input values may be designation of all signals that are lower than a predetermined minimum level to a specific minimum value and designation of all signals exceeding a predetermined maximum level to a specific maximum value.
[0093] The p-net 100 also includes a plurality or a set of synapses 118. Each synapse 118 is connected to one of the plurality of inputs 102, includes a plurality of corrective weights 112, and may also include a synaptic weight 108, as shown in Figure 2. Each corrective weight 112 is defined by a respective weight value. The p-net 100 also includes a set of distributors 114. Each distributor 114 is operatively connected to one of the plurality of inputs 102 for receiving the respective input signal 104. Additionally, each distributor 114 is configured to select one or more corrective weights from the plurality of corrective weights 112 in correlation with the input value.
[0094] The p-net 100 additionally includes a set of neurons 116. Each neuron 116 has at least one output 117 and is connected with at least one of the plurality of inputs 102 via one synapse 118. Each neuron 116 is configured to add up or sum the corrective weight values of the corrective weights 112 selected from each synapse 118 connected to the respective neuron 116 and thereby generate and output a neuron sum 120, otherwise designated as ∑n. A separate distributor 114 may be used for each synapse 118 of a given input 102, as shown in Figures 3A, 3B, and 3C, or a single distributor may be used for all such synapses, as shown in Figures 4A, 4B, and 4C. During formation or setup of the p-net 100, all corrective weights 112 are assigned initial values, which may change during the process of p-net training. The initial value of the corrective weight 112 may be assigned as in the classical neural network 10, for example, the weights may be selected randomly, calculated with the help of a pre-determined mathematical function, selected from a predetermined template, etc.
[0095] The p-net 100 also includes a weight correction calculator 122. The weight correction calculator 122 is configured to receive a desired, i.e.,
predetermined, output signal 124 having a signal value and representing a portion of an output image 126. The weight correction calculator 122 is also configured to determine a deviation 128 of the neuron sum 120 from the value of the desired output signal 124, a.k.a., training error, and modify respective corrective weight values using the determined deviation 128. Thereafter, summing the modified corrective weight values to determine the neuron sum 120 minimizes the deviation of the subject neuron sum from the value of the desired output signal 124 and, as a result, is effective for training the p-net 100.
[0096] For analogy with the classical network 10 discussed with respect to Figure 1, the deviation 128 may also be described as the training error between the determined neuron sum 120 and the value of the desired output signal 124. In comparison with the classical neural network 10 discussed with respect to Figure 1, in the p-net 100 the input values of the input signal 104 only change in the process of general network setup, and are not changed during training of the p-net. Instead of changing the input value, training of the p-net 100 is provided by changing the values of the corrective weights 112. Additionally, although each neuron 116 includes a summing function, where the neuron adds up the corrective weight values, the neuron 116 does not require, and, in fact, is characterized by the absence of, an activation function, such as provided by the activation function device 22 in the classical neural network 10.
[0097] In the classical neural network 10, weight correction during training is accomplished by changing synaptic weights 16, while in the p-net 100 the corresponding weight correction is provided by changing corrective weight values 112, as shown in Figure 2. The respective corrective weights 112 may be included in weight correction blocks 110 positioned on all or some of the synapses 118. In neural network computer emulations, each synaptic and corrective weight may be represented by a digital device, such as a memory cell, and/or by an analog device. In neural network software emulations, the values of the corrective weights 112 may be provided via an appropriate programmed algorithm, while in hardware emulations, known methods for memory control could be used.
[0098] In the p-net 100, the deviation 128 of the neuron sum 120 from the desired output signal 124 may be represented as a mathematically computed difference therebetween. Additionally, the generation of the respective modified corrective weights 112 may include apportionment of the computed difference to each corrective weight used to generate the neuron sum 120. In such an embodiment, the generation of the respective modified corrective weights 112 will permit the neuron sum 120 to be converged on the desired output signal value within a small number of epochs, in some cases needing only a single epoch, to rapidly train the p-net 100. In a specific case, the apportionment of the mathematical difference among the corrective weights 112 used to generate the neuron sum 120 may include dividing the determined difference equally between each corrective weight used to generate the respective neuron sum 120.
[0099] In a separate embodiment, the determination of the deviation 128 of the neuron sum 120 from the desired output signal value may include division of the desired output signal value by the neuron sum to thereby generate a deviation coefficient. In such a specific case, the modification of the respective modified corrective weights 112 includes multiplication of each corrective weight used to generate the neuron sum 120 by the deviation coefficient. Each distributor 114 may additionally be configured to assign a plurality of coefficients of impact 134 to the plurality of corrective weights 112. In the present embodiment, each coefficient of impact 134 may be assigned to one of the plurality of corrective weights 112 in some predetermined proportion to generate the respective neuron sum 120. For correspondence with each respective corrective weight 112, each coefficient of impact 134 may be assigned a "Ci,d,n" nomenclature, as shown in the Figures.
[00100] Each of the plurality of coefficients of impact 134 corresponding to the specific synapse 118 is defined by a respective impact distribution function 136. The impact distribution function 136 may be the same either for all coefficients of impact 134 or only for the plurality of coefficients of impact 134 corresponding to a specific synapse 118. Each of the plurality of input values may be received into a value range 137 divided into intervals or sub-divisions "d" according to an interval distribution function 140, such that each input value is received within a respective interval "d" and each corrective weight corresponds to one of such intervals. Each distributor 114 may use the respective received input value to select the respective interval "d", and to assign the respective plurality of coefficients of impact 134 to the corrective weight 112 corresponding to the selected respective interval "d" and to at least one corrective weight corresponding to an interval adjacent to the selected respective interval, such as Wi,d+1,n or Wi,d-1,n. In another non-limiting example, the predetermined proportion of the coefficients of impact 134 may be defined according to a statistical distribution.
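As an illustration of how a distributor might assign coefficients of impact to the corrective weight of the selected interval and to its neighbors, the following Python sketch uses a simple triangular spread over adjacent intervals; the function name, the specific spread, and the normalization step are assumptions made for demonstration only, not a required implementation.

```python
def impact_coefficients(input_value, value_min, value_max, num_intervals):
    """Sketch: map an input value to its interval "d" and assign coefficients
    of impact C[i,d,n] to that interval and its immediate neighbors so that
    the coefficients for the synapse sum to 1 (per equation [1])."""
    # Clamp the input value into the allowed range, then find its interval.
    span = (value_max - value_min) / num_intervals
    clamped = min(max(input_value, value_min), value_max - 1e-9)
    d = int((clamped - value_min) // span)

    # Assumed triangular spread: most impact on interval d, some on d-1 and d+1.
    raw = {d: 0.6}
    if d - 1 >= 0:
        raw[d - 1] = 0.2
    if d + 1 < num_intervals:
        raw[d + 1] = 0.2

    # Normalize so the coefficients for this synapse sum to 1.
    total = sum(raw.values())
    return d, {interval: c / total for interval, c in raw.items()}


if __name__ == "__main__":
    # Example: range 0..100 split into 10 intervals, input value 35.
    selected, coeffs = impact_coefficients(35.0, 0.0, 100.0, 10)
    print(selected, coeffs)  # e.g. 3 {3: 0.6, 2: 0.2, 4: 0.2}
```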
[00101] Generating the neuron sum 120 may include initially assigning respective coefficients of impact 134 to each corrective weight 112 according to the input value, then multiplying the subject coefficients of impact by the values of the respective employed corrective weights 112, and then summing, via each neuron 116, the individual products of the corrective weight 112 and the assigned coefficient of impact 134 for all the synapses 118 connected thereto.
[00102] The weight correction calculator 122 may be configured to apply the respective coefficients of impact 134 to generate the respective modified corrective weights 112. Specifically, the weight correction calculator 122 may apply a portion of the computed mathematical difference between the neuron sum 120 and the desired output signal 124 to each corrective weight 112 used to generate the neuron sum 120 according to the proportion established by the respective coefficients of impact 134. Additionally, the portion of the mathematical difference apportioned to each corrective weight 112 used to generate the neuron sum 120 may be further divided by the respective coefficient of impact 134. Subsequently, the result of that division may be added to the corrective weight 112 in order to converge the neuron sum 120 on the desired output signal value.
[00103] Typically formation of the p-net 100 will take place before the training of the p-net commences. However, in a separate embodiment, if during training the p-net 100 receives an input signal 104 for which initial corrective weights are absent, appropriate corrective weights 112 may be generated. In such a case, the specific distributor 114 will determine the appropriate interval "d" for the particular input signal 104, and a group of corrective weights 112 with initial values will be generated for the given input 102, the given interval "d", and all the respective neurons 116. Additionally, a corresponding coefficient of impact 134 may be assigned to each newly generated corrective weight 112. [00104] Each corrective weight 112 may be defined by a set of indexes configured to identify a position of each respective corrective weight on the p-net 100. The set of indexes may specifically include an input index "i" configured to identify the corrective weight 112 corresponding to the specific input 102, an interval index "d" configured to specify the above-discussed selected interval for the respective corrective weight, and a neuron index "n" configured to specify the corrective weight 112 corresponding to the specific neuron 116 with nomenclature "Wi,d,n". Thus, each corrective weight 112 corresponding to a specific input 102 is assigned the specific index "i" in the subscript to denote the subject position. Similarly, each corrective weight "W" corresponding to a specific neuron 116 and a respective synapse 118 is assigned the specific indexes "n" and "d" in the subscript to denote the subject position of the corrective weight on the p-net 100. The set of indexes may also include an access index "a" configured to tally a number of times the respective corrective weight 112 is accessed by the input signal 104 during training of the p-net 100. In other words, each time a specific interval "d" and the respective corrective weight 112 are selected for training from the plurality of corrective weights in correlation with the input value, the access index "a" is incremented to count the input signal. The access index "a" may be used to further specify or define a present status of each corrective weight by adopting a nomenclature "Wi,d,n,a". Each of the indexes "i", "d", "n", and "a" may be numerical values in the range of 0 to +∞.
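A minimal sketch of how the index nomenclature above might be represented in software is given below; the dictionary-based store, the class name, and the method names are assumptions used purely to illustrate the (i, d, n) addressing and the access counter "a".

```python
class CorrectiveWeightStore:
    """Sketch: corrective weights addressed by (input index i, interval index d,
    neuron index n), each with an access counter "a" incremented whenever the
    weight is selected during training (nomenclature W[i,d,n,a])."""

    def __init__(self, initial_value=0.0):
        self.weights = {}        # (i, d, n) -> weight value
        self.access_counts = {}  # (i, d, n) -> number of times accessed
        self.initial_value = initial_value

    def select(self, i, d, n):
        """Return the corrective weight for (i, d, n), creating it with the
        initial value if it does not yet exist, and count the access."""
        key = (i, d, n)
        if key not in self.weights:
            self.weights[key] = self.initial_value
        self.access_counts[key] = self.access_counts.get(key, 0) + 1
        return self.weights[key]

    def update(self, i, d, n, new_value):
        """Store a modified corrective weight value."""
        self.weights[(i, d, n)] = new_value


# Example usage: access W[0,3,1] twice; its access index "a" becomes 2.
store = CorrectiveWeightStore()
store.select(0, 3, 1)
store.select(0, 3, 1)
print(store.access_counts[(0, 3, 1)])  # 2
```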
[00105] Various possibilities of dividing the range of input signals 104 into intervals d0, d1 ... dm are shown in Figure 5. The specific interval distribution may be uniform or linear, which, for example, may be achieved by specifying all intervals "d" with the same size. All input signals 104 having their respective input signal value lower than a predetermined lowest level may be considered to have zero value, while all input signals having their respective input signal value greater than a
predetermined highest level may be assigned to such highest level, as also shown in Figure 5. The specific interval distribution may also be non-uniform or nonlinear, such as symmetrical, asymmetrical, or unlimited. Nonlinear distribution of intervals "d" may be useful when the range of the input signals 104 is considered to be impractically large, and a certain part of the range could include input signals considered to be most critical, such as in the beginning, in the middle, or at end of the range. The specific interval distribution may also be described by a random function. All the preceding examples are of the non-limiting nature, as other variants of intervals distribution are also possible.
[00106] The number of intervals "d" within the selected range of input signals
104 may be increased to optimize the p-net 100. Such optimization of the p-net 100 may be desirable, for example, with an increase in the complexity of the training input images 106. For example, a greater number of intervals may be needed for multicolor images as compared with mono-color images, and a greater number of intervals may be needed for complex ornaments than for simple graphics. An increased number of intervals may be needed for precise recognition of images with complex color gradients as compared with images described by contours, as well as for a larger overall number of training images. A reduction in the number of intervals "d" may also be needed in cases with a high magnitude of noise, a high variance in training images, and excessive consumption of computing resources.
[00107] Depending on the task or type of information handled by the p-net 100, for example, visual or textual data, or data from sensors of various nature, a different number of intervals and a different type of distribution thereof may be assigned. For each input signal value interval "d", a corresponding corrective weight of the given synapse with the index "d" may be assigned. Thus, a certain interval "d" will include all corrective weights 112 with the index "i" relevant to the given input, the index "d" relevant to the given interval, and all values for the index "n" from 0 to n. In the process of training the p-net 100, the distributor 114 defines each input signal value and thus relates the subject input signal 104 to the corresponding interval "d". For example, if there are 10 equal intervals "d" within the range of input signals from 0 to 100, an input signal having a value between 30 and 40 will be related to interval 3, i.e., "d" = 3.
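The interval selection in the worked example above can be expressed as a short Python sketch; the function name and the clamping behavior at the range limits are illustrative assumptions.

```python
def select_interval(signal_value, range_min=0.0, range_max=100.0, num_intervals=10):
    """Sketch: relate an input signal value to its interval index "d".
    With 10 equal intervals over 0..100, a value between 30 and 40 maps to d = 3."""
    # Values below the lowest level are treated as the minimum; values above
    # the highest level are assigned to the highest interval (see Figure 5).
    clamped = min(max(signal_value, range_min), range_max)
    width = (range_max - range_min) / num_intervals
    d = int((clamped - range_min) // width)
    return min(d, num_intervals - 1)  # keep range_max inside the last interval


print(select_interval(35))   # 3
print(select_interval(100))  # 9
print(select_interval(-5))   # 0
```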
[00108] For all corrective weights 112 of each synapse 118 connected with the given input 102, the distributor 114 may assign values of the coefficient of impact 134 in accordance with the interval "d" related to the particular input signal. The distributor 114 may also assign values of the coefficient of impact 134 in accordance with a pre-determined distribution of values of the coefficient of impact 134 (shown in Figure 6), such as a sinusoidal, normal, logarithmic distribution curve, or a random distribution function. In many cases, the sum or integral of the coefficients of impact 134 or Ci,d,n for a specific input signal 102 related to each synapse 118 will have a value of 1 (one).
∑synapse Ci,d,n = 1 or ∫synapse Ci,d,n = 1 [1]
In the simplest case, the corrective weight 112 that corresponds most closely to the input signal value may be assigned a coefficient of impact 134 (Ci,d,n) value of 1 (one), while the corrective weights for other intervals may receive a coefficient of impact value of 0 (zero).
[00109] The p-net 100 is focused on reduction of the time duration and usage of other resources during training of the p-net, as compared with the classical neural network 10. Although some of the elements disclosed herein as part of the p-net 100 are designated by certain names or identifiers known to those familiar with classical neural networks, the specific names are used for simplicity and may be employed differently from their counterparts in classical neural networks. For example, synaptic weights 16 controlling magnitudes of the input signals (I1–Im) are instituted during the process of general setup of the classical neural network 10 and are changed during training of the classical network. On the other hand, training of the p-net 100 is accomplished by changing the corrective weights 112, while the synaptic weights 108 do not change during training. Additionally, as discussed above, each of the neurons 116 includes a summing or adding component, but does not include an activation function device 22 that is typical of the classical neural network 10.
[00110] In general, the p-net 100 is trained by training each neuron unit 119, which includes a respective neuron 116 together with all the connecting synapses 118 and corrective weights 112 connected with the subject neuron. Accordingly, training of the p-net 100 includes changing the corrective weights 112 contributing to the respective neuron 116. Changes to the corrective weights 112 take place based on a group-training algorithm included in a method 200 disclosed in detail below. In the disclosed algorithm, a training error, i.e., deviation 128, is determined for each neuron, based on which correction values are determined and assigned to each of the weights 112 used in determining the sum obtained by each respective neuron 116. Introduction of such correction values during training is intended to reduce the deviation 128 for the subject neuron 116 to zero. During training with additional images, new errors related to images utilized earlier may again appear. To eliminate such additional errors, after completion of one training epoch, errors for all training images of the entire p-net 100 may be calculated, and if such errors are greater than pre-determined values, one or more additional training epochs may be conducted until the errors become less than a target or predetermined value.
[00111] Figure 23 depicts the method 200 of training the p-net 100, as described above with respect to Figures 2-22. The method 200 commences in frame 202 where the method includes receiving, via the input 102, the input signal 104 having the input value. Following frame 202, the method advances to frame 204. In frame 204, the method includes communicating the input signal 104 to the distributor 114 operatively connected to the input 102. Either in frame 202 or frame 204, the method 200 may include defining each corrective weight 112 by the set of indexes. As described above with respect to the structure of the p-net 100, the set of indexes may include the input index "i" configured to identify the corrective weight 112 corresponding to the input 102. The set of indexes may also include the interval index "d" configured to specify the selected interval for the respective corrective weight 112, and the neuron index "n" configured to specify the corrective weight 112 corresponding to the specific neuron 116 as "Wi,d,n". The set of indexes may additionally include the access index "a" configured to tally a number of times the respective corrective weight 112 is accessed by the input signal 104 during training of the p-net 100. Accordingly, the present status of each corrective weight may adopt the nomenclature "Wi,d,n,a".
[00112] After frame 204, the method proceeds to frame 206, in which the method includes selecting, via the distributor 114, in correlation with the input value, one or more corrective weights 112 from the plurality of corrective weights located on the synapse 118 connected to the subject input 102. As described above, each corrective weight 112 is defined by its respective weight value. In frame 206 the method may additionally include assigning, via the distributor 114, the plurality of coefficients of impact 134 to the plurality of corrective weights 112. In frame 206 the method may also include assigning each coefficient of impact 134 to one of the plurality of corrective weights 112 in a predetermined proportion to generate the neuron sum 120. Also, in frame 206 the method may include adding up, via the neuron 116, a product of the corrective weight 112 and the assigned coefficient of impact 134 for all the synapses 118 connected thereto. Additionally, in frame 206 the method may include applying, via the weight correction calculator 122, a portion of the determined difference to each corrective weight 112 used to generate the neuron sum 120 according to the proportion established by the respective coefficient of impact 134.
[00113] As described above with respect to the structure of the p-net 100, the plurality of coefficients of impact 134 may be defined by an impact distribution function 136. In such a case, the method may additionally include receiving the input value into the value range 137 divided into intervals "d" according to the interval distribution function 140, such that the input value is received within a respective interval, and each corrective weight 112 corresponds to one of the intervals. Also, the method may include using, via the distributor 114, the received input value to select the respective interval "d" and assign the plurality of coefficients of impact 134 to the corrective weight 112 corresponding to the selected respective interval "d" and to at least one corrective weight corresponding to an interval adjacent to the selected respective interval "d". As described above with respect to the structure of the p-net 100, corrective weights 112 corresponding to an interval adjacent to the selected respective interval "d" may be identified, for example, as Wi,d+1,n or Wi,d-1,n.
[00114] Following frame 206, the method advances to frame 208. In frame 208, the method includes adding up the weight values of the selected corrective weights 112 by the specific neuron 116 connected with the input 102 via the synapse 118 to generate the neuron sum 120. As described above with respect to the structure of the p-net 100, each neuron 116 includes at least one output 117. After frame 208, the method proceeds to frame 210, in which the method includes receiving, via the weight correction calculator 122, the desired output signal 124 having the signal value. Following frame 210, the method advances to frame 212 in which the method includes determining, via the weight correction calculator 122, the deviation 128 of the neuron sum 120 from the value of the desired output signal 124.
[00115] As disclosed above in the description of the p-net 100, the
determination of the deviation 128 of the neuron sum 120 from the desired output signal value may include determining the mathematical difference therebetween. Additionally, the modification of the respective corrective weights 112 may include apportioning the mathematical difference to each corrective weight used to generate the neuron sum 120. Alternatively, the apportionment of the mathematical difference may include dividing the determined difference equally between each corrective weight 112 used to generate the neuron sum 120. In a yet separate embodiment, the determination of the deviation 128 may also include dividing the value of the desired output signal 124 by the neuron sum 120 to thereby generate the deviation coefficient. Furthermore, in such a case, the modification of the respective corrective weights 112 may include multiplying each corrective weight 112 used to generate the neuron sum 120 by the generated deviation coefficient.
[00116] After frame 212, the method proceeds to frame 214. In frame 214 the method includes modifying, via the weight correction calculator 122, respective corrective weight values using the determined deviation 128. The modified corrective weight values may subsequently be added or summed up and then used to determine a new neuron sum 120. The summed modified corrective weight values may then serve to minimize the deviation of the neuron sum 120 from the value of the desired output signal 124 and thereby train the p-net 100. Following frame 214, method 200 may include returning to frame 202 to perform additional training epochs until the deviation of the neuron sum 120 from the value of the desired output signal 124 is sufficiently minimized. In other words, additional training epochs may be performed to converge the neuron sum 120 on the desired output signal 124 to within the predetermined deviation or error value, such that the p-net 100 may be considered trained and ready for operation with new images.
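To summarize frames 202-214 in executable form, the following Python sketch walks one training image through weight selection, summation, deviation determination, and weight modification; all names, the flat dictionary data layout, the equal-apportionment choice, and the simplification that only the corrective weight of the selected interval contributes per input are assumptions made for illustration, not a definitive implementation of the method 200.

```python
def train_on_image(weights, impact, input_intervals, desired_outputs, num_neurons):
    """Sketch of one pass of the training method for a single training image.

    weights          : dict (i, d, n) -> corrective weight value
    impact           : dict (i, d, n) -> coefficient of impact C[i,d,n] (assumed nonzero)
    input_intervals  : interval index "d" selected for each input "i" (frames 202-206)
    desired_outputs  : desired output signal values O[n] (frame 210)
    """
    for n in range(num_neurons):
        # Frame 208: add up the selected corrective weights, scaled by their impact.
        contributing = [(i, d, n) for i, d in enumerate(input_intervals)]
        neuron_sum = sum(weights.setdefault(k, 0.0) * impact.get(k, 1.0)
                         for k in contributing)

        # Frame 212: deviation of the neuron sum from the desired output value.
        deviation = desired_outputs[n] - neuron_sum

        # Frame 214: apportion the deviation equally among contributing weights
        # (one of the modification options described above).
        correction = deviation / len(contributing)
        for k in contributing:
            weights[k] += correction / impact.get(k, 1.0)
    return weights


# Example: 2 inputs (selecting intervals 3 and 1), 1 neuron, desired output 1.0.
w = {}
train_on_image(w, {}, input_intervals=[3, 1], desired_outputs=[1.0], num_neurons=1)
print(w)  # {(0, 3, 0): 0.5, (1, 1, 0): 0.5}
```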
[00117] Generally, the input images 106 need to be prepared for training of the p-net 100. Preparation of the p-net 100 for training generally begins with formation of a set of training images, including the input images 106 and, in the majority of cases, desired output images 126 corresponding to the subject input images. The input images 106 (shown in Figure 2) defined by the input signals I1, I2... Im for training of the p-net 100 are selected in accordance with tasks that the p-net is assigned to handle, for example, recognition of human images or other objects, recognition of certain activities, clustering or data classification, analysis of statistical data, pattern recognition, forecasting, or controlling certain processes. Accordingly, the input images 106 may be presented in any format suitable for introduction into a computer, for example, using formats jpeg, gif, or pptx, in the form of tables, charts, diagrams and graphics, various document formats, or a set of symbols. [00118] Preparation for training of the p-net 100 may also include conversion of the selected input images 106 into a unified format convenient for processing of the subject images by the p-net 100, for example, transforming all images to a format having the same number of signals, or, in the case of pictures, the same number of pixels. Color images could be, for example, presented as a combination of three basic colors. Image conversion could also include modification of characteristics, for example, shifting an image in space, changing visual characteristics of the image, such as resolution, brightness, contrast, colors, viewpoint, perspective, focal length and focal point, as well as adding symbols, numbers, or notes.
[00119] After selection of the number of intervals, a specific input image may be converted into an input image in interval format, that is, real signal values may be recorded as numbers of intervals to which the respective signals belong. This procedure may be carried out in each training epoch for the given image. However, the image may also be formed once as a set of interval numbers. For example, in Figure 7 the initial image is presented as a picture, while in the table "Image in digital format" the same image is presented in the form of digital codes, and in the table "Image in interval format" the image is presented as a set of interval numbers, where a separate interval is assigned for each 10 values of digital codes.
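Using the assignment described above (a separate interval for each 10 values of digital codes), a conversion to interval format might look like the following sketch; the function name and the assumption of non-negative integer digital codes are illustrative only.

```python
def image_to_interval_format(digital_codes, codes_per_interval=10):
    """Sketch: convert an image given as digital codes (e.g. pixel values) into
    interval format, assigning one interval per 10 values of digital code."""
    return [code // codes_per_interval for code in digital_codes]


# Example: digital codes 0, 37, 125 and 250 fall into intervals 0, 3, 12 and 25.
print(image_to_interval_format([0, 37, 125, 250]))  # [0, 3, 12, 25]
```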
[00120] The described structure of the p-net 100 and the training algorithm of the method 200 permit continued or iterative training of the p-net; thus, there is no requirement to form a complete set of training input images 106 at the start of the training process. It is possible to form a relatively small starting set of training images, and such a starting set could be expanded as necessary. The input images 106 may be divided into distinct categories, for example, a set of pictures of one person, a set of photos of cats, or a set of photographs of cars, such that each category corresponds to a single output image, such as a person's name or a specific label.
Desired output images 126 represent a field or table of digital or analog values, where each point corresponds to a specific numeric value from -∞ to +∞. Each point of the desired output image 126 may correspond to the output of one of the neurons of the p-net 100. Desired output images 126 may be encoded with digital or analog codes of images, tables, text, formulas, sets of symbols, such as barcodes, or sounds. [00121] In the simplest case, each input image 106 may correspond to an output image encoding the subject input image. One of the points of such output image may be assigned a maximum possible value, for example 100%, whereas all other points may be assigned a minimum possible value, for example, zero. In such a case, following training, probabilistic recognition of various images in the form of a percentage of similarity with training images will be enabled. Figure 8 shows an example of how the p-net 100 trained for recognition of two images, a square and a circle, may recognize a picture that contains some features of each figure, expressed in percentages, with the sum not necessarily equal to 100%. Such a process of pattern recognition by defining the percentage of similarity between different images used for training may be used to classify specific images.
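A brief sketch of the simplest output coding described above, in which the point corresponding to the image's category receives the maximum value and all other points the minimum, is given below; the 100/0 values and the function name are assumptions used for illustration.

```python
def one_hot_desired_output(category_index, num_outputs, max_value=100.0, min_value=0.0):
    """Sketch: desired output image encoding one input image, with the point for
    its category at the maximum possible value and all other points at the minimum."""
    outputs = [min_value] * num_outputs
    outputs[category_index] = max_value
    return outputs


# Example: three categories (square, circle, triangle); a "circle" training image.
print(one_hot_desired_output(1, 3))  # [0.0, 100.0, 0.0]
```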
[00122] To improve the accuracy and exclude errors, coding may be accomplished using a set of several neural outputs rather than one output (see below). In the simplest case, output images may be prepared in advance of training. However, it is also possible to have the output images formed by the p-net 100 during training.
[00123] In the p-net 100, there is also a possibility of inverting the input and output images. In other words, input images 106 may be in the form of a field or table of digital or analog values, where each point corresponds to one input of the p-net, while output images may be presented in any format suitable for introduction into the computer, for example using formats jpeg, gif, pptx, in the form of tables, charts, diagrams and graphics, various document formats, or a set of symbols. The resultant p-net 100 may be quite suitable for archiving systems, as well as an associative search of images, musical expressions, equations, or data sets.
[00124] Following preparation of the input images 106, typically the p-net 100 needs to be formed and/or parameters of an existing p-net must be set for handling given task(s). Formation of the p-net 100 may include the following designations:
• dimensions of the p-net 100, as defined by the number of inputs and outputs;
• synaptic weights 108 for all inputs;
• number of corrective weights 112;
• distribution of coefficients of corrective weight impact (Ci,d,n) for different values of input signals 104; and
• desired accuracy of training.
The number of inputs is determined based on the sizes of input images 106. For example, a number of pixels may be used for pictures, while the selected number of outputs may depend on the size of desired output images 126. In some cases, the selected number of outputs may depend on the number of categories of training images.
[00125] Values of individual synaptic weights 108 may be in the range of -∞ to +∞. Values of synaptic weights 108 that are less than 0 (zero) may denote signal amplification, which may be used to enhance the impact of signals from specific inputs, or from specific images, for example, for a more effective recognition of human faces in photos containing a large number of different individuals or objects. On the other hand, values of synaptic weights 108 that are greater than 0 (zero) may be used to denote signal attenuation, which may be used to reduce the number of required calculations and increase operational speed of the p-net 100. Generally, the greater the value of the synaptic weight, the more attenuated is the signal transmitted to the corresponding neuron. If all synaptic weights 108 corresponding to all inputs are equal and all neurons are equally connected with all inputs, the neural network will become universal and will be most effective for common tasks, such as when very little is known about the nature of the images in advance. However, such a structure will generally increase the number of required calculations during training and operation.
[00126] Figure 9 shows an embodiment of the p-net 100 in which the relationship between an input and respective neurons is reduced in accordance with statistical normal distribution. Uneven distribution of synaptic weights 108 may result in the entire input signal being communicated to a target or "central" neuron for the given input, thus assigning a value of zero to the subject synaptic weight.
Additionally, uneven distribution of synaptic weights may result in other neurons receiving reduced input signal values, for example, using a normal, log-normal, sinusoidal, or other distribution. Values of the synaptic weights 108 for the neurons 116 receiving reduced input signal values may increase along with the increase of their distance from the "central" neuron. In such a case, the number of calculations may be reduced and operation of the p-net may speed up. Such networks, which are a combination of known fully connected and non-fully connected neural networks, may be exceedingly effective for analysis of images with strong internal patterns, for example, human faces or consecutive frames of a movie film.
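One way the distance-dependent synaptic weights described above might be generated is sketched below, assuming (purely for illustration) a Gaussian-shaped fall-off of the signal in which the synaptic weight is 0 at the "central" neuron and grows with distance, consistent with the earlier convention that a larger synaptic weight value denotes stronger attenuation; the function name and parameters are assumptions.

```python
import math


def synaptic_weights_around_central(num_neurons, central_index, sigma=2.0, max_weight=10.0):
    """Sketch: synaptic weights 108 for one input, equal to 0 at the "central"
    neuron (full signal) and increasing with distance from it, so that more
    distant neurons receive a progressively attenuated signal."""
    weights = []
    for n in range(num_neurons):
        distance = abs(n - central_index)
        # Gaussian fall-off of the signal; the weight grows as the signal falls off.
        signal_fraction = math.exp(-(distance ** 2) / (2.0 * sigma ** 2))
        weights.append(max_weight * (1.0 - signal_fraction))
    return weights


# Example: 7 neurons with the central neuron at index 3.
print([round(w, 2) for w in synaptic_weights_around_central(7, 3)])
```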
[00127] Figure 9 also shows an embodiment of the p-net 100 that is effective for recognition of local patterns. In order to improve the identification of common patterns, 10-20% of strong connections, where the values of the synaptic weights 108 are small or zero, may be distributed throughout the entire p-net 100 in a deterministic approach, such as in the form of a grid, or in a random approach. The actual formation of the p-net 100 intended for handling a particular task is performed using a program, for example, written in an object-oriented programming language, that generates main elements of the p-net, such as synapses, synaptic weights, distributors, corrective weights, neurons, etc., as software objects. Such a program may assign relationships between the noted objects and algorithms specifying their actions. In particular, synaptic and corrective weights may be formed in the beginning of formation of the p-net 100, along with setting their initial values. The p-net 100 may be fully formed before the start of its training, and be modified or added-on at a later time, as necessary, for example, when the information capacity of the network becomes exhausted, or in case of a fatal error. Completion of the p-net 100 is also possible while training continues.
[00128] If the p-net 100 is formed in advance, the number of selected corrective weights on a particular synapse may be equal to the number of intervals within the range of input signals. Additionally, corrective weights may be generated after the formation of the p-net 100, in response to the appearance of signals within individual intervals. Similar to the classical neural network 10, selection of parameters and settings of the p-net 100 is provided with a series of targeted experiments. Such experiments may include (1) formation of the p-net with the same synaptic weights 108 at all inputs, and (2) assessment of input signal values for the selected images and initial selection of the number of intervals. For example, for recognition of binary (one-color) images, it may be sufficient to have only 2 intervals; for qualitative recognition of 8 bit images, up to 256 intervals may be used; approximation of complex statistical dependencies may require dozens or even hundreds of intervals; for large databases, the number of intervals could be in the thousands.
[00129] In the process of training the p-net 100, the values of input signals may be rounded as they are distributed between the specific intervals. Thus, accuracy of input signals greater than the width of the range divided by the number of intervals may not be required. For example, if the input value range is set for 100 units and the number of intervals is 10, accuracy better than ± 5 will not be required. Such experiments may also include (3) selection of a uniform distribution of intervals throughout the entire range of values of the input signals, along with the simplest distribution for coefficients of corrective weight impact, wherein Ci,d,n may be set equal to 1 for the corrective weight corresponding to the interval of the particular input signal, while the coefficient of impact for all remaining corrective weights may be set to 0 (zero). Such experiments may additionally include (4) training the p-net 100 with one, more, or all prepared training images to a pre-determined accuracy.
[00130] Training time of the p-net 100 for a predetermined accuracy may be established by experimentation. If the accuracy and training time of the p-net 100 are satisfactory, the selected settings could be either maintained or changed, while a search is continued for a more effective variant. If the required accuracy is not achieved, the influence of specific modifications may be evaluated for optimization purposes, which may be performed either one at a time or in groups. Such evaluation of modifications may include changing, either increasing or reducing, the number of intervals; changing the type of distribution of the coefficients of corrective weight impact (Ci,d,n); testing variants with non-uniform distribution of intervals, such as using normal, power, logarithmic, or log-normal distribution; and changing values of synaptic weights 108, for example their transition to non-uniform distribution.
[00131] If the required training time for an accurate result is deemed excessive, training with an increased number of intervals may be evaluated for its effect on training time. If, as a result, the training time is reduced, the increase in the number of intervals may be repeated until the desired training time is obtained without a loss of required accuracy. If the training time grows with an increasing number of intervals instead of being reduced, additional training may be performed with a reduced number of intervals. If the reduced number of intervals results in reduced training time, the number of intervals could be further reduced until the desired training time is obtained.
[00132] Formation of the p-net 100 settings may also be accomplished via training with a predetermined training time and experimental determination of training accuracy. Parameters could be improved via experimental changes similar to those described above. Actual practice with various p-nets has shown that the procedure of setting selection is generally straightforward and not time-consuming.
[00133] Actual training of the p-net 100 as part of the method 200, shown in Figure 23, starts with feeding the input image signals I1, I2... Im to the network input devices 102, from where they are transmitted to the synapses 118, pass through the synaptic weight 108, and enter the distributor (or a group of distributors) 114. Based on the input signal value, the distributor 114 sets the number of the interval "d" that the given input signal 104 corresponds to, and assigns coefficients of corrective weight impact Ci,d,n for all the corrective weights 112 of the weight correction blocks 110 of all the synapses 118 connected with the respective input 102. For example, if the interval "d" is set to 3 for the first input, then for all weights W1,3,n the coefficient C1,3,n is set to 1, while for all other weights with i ≠ 1 and d ≠ 3, Ci,d,n may be set to 0 (zero).
[00134] For each neuron 116, identified as "n" in the relationship below, neuron output sums ∑1, ∑2...∑n are formed by multiplying each corrective weight 112, identified as Wi,d,n in the relationship below, by a corresponding coefficient of corrective weight impact Ci,d,n for all synapses 118 contributing into the particular neuron and by adding all the obtained values:
∑n = Σ Wi,d,n × Ci,d,n [2]
Multiplication of Wi,d,n × Ci,d,n may be performed by various devices, for example by distributors 114, devices with stored weights, or directly by neurons 116. The sums are transferred via the neuron output 117 to the weight correction calculator 122. The desired output signals O1, O2... On describing the desired output image 126 are also fed to the calculator 122.
[00135] As discussed above, the weight correction calculator 122 is a computation device for calculating the modified values for the corrective weights by comparison of the neuron output sums ∑1, ∑2...∑n with the desired output signals O1, O2... On. Figure 11 shows a set of corrective weights Wi,d,1 contributing into the neuron output sum ∑1, which are multiplied by the corresponding coefficients of corrective weight impact Ci,d,1, with these products subsequently added to form the neuron output sum ∑1:
∑1 = W1,0,1 × C1,0,1 + W1,1,1 × C1,1,1 + W1,2,1 × C1,2,1 + ... [3]
As the training commences, i.e., during the first epoch, the corrective weights Wi,d,1 do not correspond to the input image 106 used for training; thus, the neuron output sum ∑1 is not equal to the corresponding desired output image 126. Based on the initial corrective weights Wi,d,1, the weight correction system calculates the correction value Δ1, which is used for changing all the corrective weights contributing to the neuron output sum ∑1 (Wi,d,1). The p-net 100 permits various options or variants for the formation and utilization of collective corrective signals for all corrective weights Wi,d,n contributing to a specified neuron 116.
[00136] Below are two exemplary and non-limiting variants for the formation and utilization of the collective corrective signals. Variant 1 - formation and utilization of corrective signals based on the difference between desired output signals and obtained output sums as follows:
• calculation of the equal correction value Δn for all corrective weights contributing into the neuron "n" according to the equation:
Δn = (On − ∑n) / S [4]
Where:
On - desired output signal corresponding to the neuron output sum ∑n; S - number of synapses connected to the neuron "n".
• modification of all corrective weights Wi,d,n contributing into the neuron "n" according to the equation:
Wi,d,n modified = Wi,d,n + Δn / Ci,d,n [5]
Variant 2 - formation and utilization of corrective signals based on the ratio of desired output signals versus obtained output sums as follows:
• calculation of the equal correction value Δn for all corrective weights contributing into the neuron "n" according to the equation:
Δn = On / ∑n [6]
• modification of all corrective weights Wi,d,n contributing into the neuron "n" according to the equation:
Wi,d,n modified = Wi,d,n × Δn [7]
Modification of the corrective weights Wi,d,n by any available variant is intended to reduce the training error for each neuron 116 by converging its output sum ∑n on the value of the desired output signal. In such a way, the training error for a given image may be reduced until it becomes equal, or close, to zero.
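Both correction variants can be expressed compactly in code; the following Python sketch implements equations [4]-[7] for a single neuron, with the list-based data layout and function names assumed purely for illustration.

```python
def correct_variant_1(weights, impacts, desired_output):
    """Variant 1 (equations [4] and [5]): equal correction value based on the
    difference between the desired output signal and the obtained neuron sum."""
    neuron_sum = sum(w * c for w, c in zip(weights, impacts))
    delta_n = (desired_output - neuron_sum) / len(weights)        # [4]
    return [w + delta_n / c for w, c in zip(weights, impacts)]    # [5]


def correct_variant_2(weights, impacts, desired_output):
    """Variant 2 (equations [6] and [7]): correction based on the ratio of the
    desired output signal to the obtained neuron sum."""
    neuron_sum = sum(w * c for w, c in zip(weights, impacts))
    delta_n = desired_output / neuron_sum                         # [6]
    return [w * delta_n for w in weights]                         # [7]


# Example: three contributing corrective weights, impact coefficients of 1;
# either variant converges the new neuron sum exactly onto the desired output.
w = [0.1, -0.2, 0.4]
print(correct_variant_1(w, [1.0, 1.0, 1.0], desired_output=1.0))
print(correct_variant_2(w, [1.0, 1.0, 1.0], desired_output=1.0))
```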
[00137] An example of modification of corrective weights Wi,d,n during training is shown in Figure 11. The values of corrective weights Wi,d,n are set before the training starts in the form of a random weight distribution, with the weight values being set to 0 ± 10% of the corrective weight range, and reach their final weight distribution after training. The described calculation of collective signals is conducted for all neurons 116 in the p-net 100. The described training procedure for one training image may be repeated for all other training images. Such a procedure may lead to the appearance of training errors for some of the previously trained images, as some corrective weights Wi,d,n may participate in several images. Accordingly, training with another image may partially disrupt the distribution of corrective weights Wi,d,n formed for the previous images. However, due to the fact that each synapse 118 includes a set of corrective weights Wi,d,n, training with new images, while possibly increasing training error, does not delete the images for which the p-net 100 was previously trained. Moreover, the more synapses 118 contribute to each neuron 116 and the greater the number of corrective weights Wi,d,n at each synapse, the less training for a specific image affects the training for other images.
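The random initialization described above (weight values set to 0 ± 10% of the corrective weight range) might be sketched as follows; the range argument and function name are assumptions for illustration.

```python
import random


def initial_corrective_weights(num_weights, weight_range=1.0):
    """Sketch: initialize corrective weights as a random distribution around 0,
    within +/- 10% of the corrective weight range, before training starts."""
    spread = 0.10 * weight_range
    return [random.uniform(-spread, spread) for _ in range(num_weights)]


print(initial_corrective_weights(5))  # e.g. [0.03, -0.08, 0.01, 0.09, -0.05]
```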
[00138] Each training epoch generally ends with the substantial convergence of the total training error and/or the local training errors for all training images. Errors may be evaluated using known statistical methods, such as the Mean Squared Error (MSE), the Mean Absolute Error (MAE), or the Standard Error of the Mean (SEM). If the total error or some of the local errors are too high, an additional training epoch may be conducted until the error is reduced to less than a predetermined error value.
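A minimal sketch of such an epoch-level check, assuming the Mean Squared Error as the chosen statistic and a hypothetical predetermined error value, may read:

    # Compute the MSE over all desired/obtained output pairs of one training epoch.
    def epoch_mse(desired_outputs, obtained_sums):
        n = len(desired_outputs)
        return sum((o - s) ** 2 for o, s in zip(desired_outputs, obtained_sums)) / n

    # Another epoch is run only while the error exceeds the predetermined error value.
    def needs_another_epoch(desired_outputs, obtained_sums, predetermined_error=0.01):
        return epoch_mse(desired_outputs, obtained_sums) > predetermined_error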
The earlier-described process of image recognition, with determination of the percentage of similarity to the different images used for training (shown in Figure 8), is in itself a process of classifying images into previously defined categories.
[00139] For clustering, i.e., dividing images into natural classes or groups that were not previously specified, the basic training algorithm of the method 200 may be modified following a modified Self-Organizing Maps (SOM) approach. The desired output image 126 corresponding to a given input image may be formed directly in the process of training the p-net 100, based on a set of winning neurons with the maximum values of the output neuron sums 120. Figure 22 shows how the use of the basic algorithm of the method 200 may generate a primary set of output neuron sums, where the set is further converted such that several of the greatest sums retain their value, or increase, while all other sums are set equal to zero. This transformed set of output neuron sums may be accepted as the desired output image 126.
[00140] Formed as described above, the set of desired output images 126 includes clusters or groups. As such, the set of desired output images 126 allows for clustering of linearly inseparable images, which is distinct from the classical network 10. Figure 13 shows how the described approach may assist with clustering of a complex hypothetical image "cat-car", where different features of the image are assigned to different clusters - cats and cars. A set of desired output images 126 created as described may be used, for example, for creating different classifications, for statistical analysis, or for image selection based on criteria formed as a result of clustering. Also, the desired output images 126 generated by the p-net 100 may be used as input images for another or additional p-net, which may also be formed along the lines described for the subject p-net 100. Thus formed, the desired output images 126 may be used for a subsequent layer of a multi-layer p-net.
[00141] Classical neural network 10 training is generally provided via a supervised training method that is based on preliminary prepared pairs of an input image and a desired output image. The same general method is also used for training of the p-net 100, however, the increased training speed of the p-net 100 also allows for training with an external trainer. The role of the external trainer may be performed, for example, by an individual or by a computer program. Acting as an external trainer, the individual may be involved in performing a physical task or operate in a gaming environment. The p-net 100 receives input signals in the form of data regarding a particular situation and changes thereto. The signals reflecting actions of the trainer may be introduced as desired output images 126 and permit the p-net 100 to be trained according to the basic algorithm. In such a way, modeling of various processes may be generated by the p-net 100 in real-time.
[00142] For example, the p-net 100 may be trained to drive a vehicle by receiving information regarding road conditions and actions of the driver. Through modeling a large variety of critical situations, the same p-net 100 may be trained by many different drivers and accumulate more driving skills than is generally possible by any single driver. The p-net 100 is capable of evaluating a specific road condition in 0.1 seconds or faster and amassing substantial "driving experience" that may enhance traffic safety in a variety of situations. The p-net 100 may also be trained to cooperate with a computer, for example, with a chess-playing machine. The ability of the p-net 100 to easily shift from training mode to the recognition mode and vice versa allows for realization of a "learn from mistakes" mode, when the p-net 100 is trained by an external trainer. In such a case, the partially trained p-net 100 may generate its own actions, for example, to control a technological process. The trainer could control the actions of the p-net 100 and correct those actions when necessary. Thus, additional training of the p-net 100 could be provided.
[00143] The informational capacity of the p-net 100 is very large, but not unlimited. With the set dimensions of the p-net 100, such as the number of inputs, outputs, and intervals, and with an increase in the number of images that the p-net is trained with, after a certain number of images the number and magnitude of training errors may also increase. When such an increase in error generation is detected, the number and/or magnitude of errors may be reduced by increasing the size of the p-net 100, since the p-net permits increasing the number of neurons 116 and/or the number of signal intervals "d" across the p-net or in its components between training epochs. P-net 100 expansion may be provided by adding new neurons 116, adding new inputs 102 and synapses 118, changing the distribution of the coefficients of corrective weight impact Ci,d,n, and dividing existing intervals "d".
[00144] In most cases, the p-net 100 will be trained to ensure its ability to recognize images, patterns, and correlations inherent to an image or to sets of images. The recognition process in the simplest case repeats the first steps of the training process according to the basic algorithm disclosed as part of the method 200. In particular: • direct recognition starts with formatting of the image according to the same rules that are used to format images for training;
• the image is sent to the inputs of the trained p-net 100, distributors assign the corrective weights Wi,d,n corresponding to the values of input signals that were set during training, and the neurons generate the respective neuron sums, as shown in Figure 8;
• if the resulting output sums representing the output image 126 fully comply with one of the images that the p-net 100 was trained with, there is an exact recognition of the object; and
• if the output image 126 partially complies with several images the p-net 100 was trained with, the result shows the matching rate with the different images as a percentage. Figure 13 demonstrates that, during recognition of a complex image made from a combination of images of a cat and a vehicle, the output image 126 represents the given image combination and indicates the percentage of each initial image's contribution to the combination.
[00145] For example, if several pictures of a specific person were used for training, the recognized image may correspond 90% to the first picture, 60% to the second picture, and 35% to the third picture. It may be that the recognized image corresponds with a certain probability to the pictures of other people or even of animals, which means that there is some resemblance between the pictures. However, the probability of such resemblance is likely to be lower. Based on such probabilities, the reliability of recognition may be determined, for example, based on Bayes' theorem.
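Assuming, for illustration only, that each output neuron corresponds to one trained image and that the desired output value reached for a trained image is a known constant, the matching rates described above could be read off the neuron output sums as in the following sketch; the function name and the 100.0 scale are assumptions:

    # Express each neuron output sum as a percentage of the value it reaches for its own
    # trained image; the list order follows the order of the output neurons.
    def matching_percentages(neuron_sums, trained_value=100.0):
        return [100.0 * s / trained_value for s in neuron_sums]

For the example above, such a list might read [90.0, 60.0, 35.0, ...], and the reliability of recognition could then be assessed over these values, for example with Bayes' theorem.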
[00146] With the p-net 100 it is also possible to implement multi-stage recognition that combines the advantages of algorithmic and neural network recognition methods. Such multi-stage recognition may include:
• initial recognition of an image by a pre-trained network using not all, but only 1% - 10%, of the inputs, which are herein designated as "basic inputs". Such a portion of the inputs may be distributed within the p-net 100 either uniformly, randomly, or by any other distribution function, as sketched after this list. An example is the recognition of a person in a photograph that includes a plurality of other objects; • selecting the most informative objects or parts of objects for further detailed recognition. Such selection may be provided according to structures of specific objects that are pre-set in memory, as in the algorithmic method, or according to a gradient of colors, brightness, and/or depth of the image. For example, in recognition of portraits the following recognition zones may be selected: eyes, corners of the mouth, and nose shape; certain specific features, such as tattoos, vehicle plate numbers, or house numbers, may also be selected and recognized using a similar approach; and
• detailed recognition of selected images, if necessary, is also possible.
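A random uniform choice of such basic inputs may be sketched as follows; the 5% fraction and the function name are illustrative assumptions only:

    import random

    # Select a small fraction (1% - 10%) of the p-net inputs to act as "basic inputs"
    # for the initial, coarse recognition pass.
    def select_basic_inputs(num_inputs, fraction=0.05, seed=None):
        rng = random.Random(seed)
        count = max(1, int(num_inputs * fraction))
        return sorted(rng.sample(range(num_inputs), count))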
[00147] Formation of a computer emulation of the p-net 100 and its training may be provided, based on the above description, using any programming language. For example, object-oriented programming may be used, wherein the synaptic weights 108, corrective weights 112, distributors 114, and neurons 116 represent programming objects or classes of objects, relations are established between object classes via links or messages, and algorithms of interaction are set between objects and between object classes.
[00148] Formation and training of the p-net 100 software emulation may include the following:
1. Preparation for the formation and training of the p-net 100, in particular:
• conversion of sets of training input images into digital form in accordance with a given task;
• analysis of the resulting digital images, including selection of parameters of the input signals to be used for training, for example, frequencies, magnitudes, phases, or coordinates; and
• setting a range for the training signals, a number of intervals within the subject range, and a distribution of coefficients of corrective weight impact Ci,d,n.
2. Formation of the p-net software emulation, including:
• formation of a set of inputs to the p-net 100. For example, the number of inputs may be equal to the number of signals in the training input image;
• formation of a set of neurons, where each neuron represents an adding device;
• formation of a set of synapses with synaptic weights, where each synapse is connected to one p-net input and one neuron;
• formation of weight correction blocks in each synapse, where the weight correction blocks include distributors and corrective weights, and where each corrective weight has the following characteristics:
o Corrective weight input index (i);
o Corrective weight neuron index (n);
o Corrective weight interval index (d); and
o Corrective weight initial value (Wi,d,n ).
• designating a correlation between intervals and corrective weights.
3. Training each neuron with one input image, including:
• designating coefficients of corrective weight impact Ci,d,n, including:
o determining an interval corresponding to the input signal of the
training input image received by each input; and
o designating magnitudes of the coefficients of corrective weight impact
Ci,d,n to all corrective weights for all synapses.
• calculating the neuron output sum (∑n) for each neuron "n" by adding the corrective weight values Wi,d,n of all synapses contributing to the neuron, multiplied by the corresponding coefficients of corrective weight impact Ci,d,n:
∑n = Σi,d (Wi,d,n × Ci,d,n);
• calculating the deviation or training error (Tn) via subtraction of the neuron output sum ∑n from the corresponding desired output signal On:
Tn = On - ∑n;
• calculating the equal correction value (Δn) for all corrective weights
contributing to the neuron "n" via dividing the training error by the number of synapses "S" connected to the neuron "n":
Δn = Tn / S;
• modifying all corrective weights Wi,d,n contributing to the respective neuron by adding to each corrective weight the correction value Δn divided by the corresponding coefficient of corrective weight impact Ci,d,n:
Wi,d,n modified = Wi,d,n + Δn / Ci,d,n.
Another method of calculating the equal correction value (Δn) and modifying the corrective weights Wi,d,n for all corrective weights contributing to the neuron "n" may include the following:
• dividing the signal of the desired output image On by the neuron output sum ∑n:
Δn = On / ∑n;
• modifying the corrective weights Wi,d,n contributing to the neuron by
multiplying the corrective weights by the correction value Δn:
Wi,d,n modified = Wi,d,n × Δn.
4. Training the p-net 100 using all training images, including:
• repeating the process described above for all selected training images that are included in one training epoch; and
• determining an error or errors of the specific training epoch, comparing those error(s) with a predetermined acceptable error level, and repeating training epochs until the training errors become less than the predetermined acceptable error level.
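The steps 1 - 4 above may be gathered, purely as an illustrative sketch, into the following procedural Python routine; the equal-interval distributor, the impact coefficients fixed at 1 for the selected interval, the use of correction Variant 1, and all names are assumptions rather than requirements of the method:

    def interval_index(signal, minimum, maximum, intervals):
        # Distributor step: map an input signal onto one of the "d" equal intervals.
        d = int((signal - minimum) / (maximum - minimum) * intervals)
        return min(max(d, 0), intervals - 1)

    def train_pnet(images, desired_outputs, num_neurons, intervals=10,
                   minimum=0.0, maximum=100.0, max_error=0.01, max_epochs=100):
        num_inputs = len(images[0])
        # Corrective weights W[i][d][n]; here they all start at zero for simplicity.
        W = [[[0.0] * num_neurons for _ in range(intervals)] for _ in range(num_inputs)]
        for epoch in range(max_epochs):
            squared_errors = []
            for image, desired in zip(images, desired_outputs):
                d_of = [interval_index(s, minimum, maximum, intervals) for s in image]
                for n in range(num_neurons):
                    # Coefficients C_i,d,n are taken as 1 for the selected interval, 0 otherwise.
                    neuron_sum = sum(W[i][d_of[i]][n] for i in range(num_inputs))
                    delta_n = (desired[n] - neuron_sum) / num_inputs    # equation [4]
                    for i in range(num_inputs):
                        W[i][d_of[i]][n] += delta_n                     # equation [5] with C = 1
                    squared_errors.append((desired[n] - neuron_sum) ** 2)
            if sum(squared_errors) / len(squared_errors) < max_error:
                break
        return W

A call such as train_pnet([[10.0, 55.0], [80.0, 20.0]], [[100.0], [0.0]], num_neurons=1) would, under these assumptions, train a two-input, one-neuron sketch on two images.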
[00149] An actual example of software emulation of the p-net 100 using object-oriented programming is described below and shown in Figures 14-21.
Formation of a NeuronUnit object class may include formation of:
• set of objects of the Synapse class;
• neuron 116 presenting a variable, wherein adding is performed during
training; and
• calculator 122 presenting a variable, wherein the value of the desired neuron sum 120 is stored and the calculation of correction values Δn is performed during the training process. Class NeuronUnit provides for p-net 100 training, which may include:
• formation of neuron sums 120;
• setting desired sums;
• calculation of correction value Δn; and
• adding the calculated correction value Δn to the corrective weights Wi,n,d. Formation of the object class Synapse may include:
• set of corrective weights Wi,n,d; and
• pointer indicating the input connected to synapse 118.
Class Synapse may perform the following functions:
• initialization of corrective weights Wi,n,d;
• multiplying the weights Wi,n,d by the coefficients Ci,d,n; and
• correction of weights Wi,n,d.
Formation of the object class InputSignal may include:
• set of indexes on synapses 118 connected to a given input 102;
• variable that includes the value of the input signal 104;
• values of possible minimum and maximum input signal;
• number of intervals "d"; and
• interval length.
Class InputSignal may provide the following functions:
• formation of the p-net 100 structure, including:
o Adding and removal of links between an input 102 and synapses 118; and
o Setting the number of intervals "d" for synapses 118 of a particular input 102.
• setting parameters of minimum and maximum input signals 104;
• contribution into the operation of p-net 100:
o setting an input signal 104; and
o setting coefficients of corrective weight impact Ci,d,n.
Formation of the object class PNet includes a set of object classes:
• NeuronUnit; and
• InputSignal.
Class PNet provides the following functions:
• setting the number of objects of the InputSignal class; • setting the number of objects of the NeuronUnit class; and
• group request of functions of the objects NeuronUnit and InputSignal.
During the training process the cycles may be formed, where:
• neuron output sum that is equal to zero is formed before the cycle starts;
• all synapses contributing to the given NeuronUnit are reviewed. For each synapse 118:
o Based on the input signal 104, the distributor forms a set of coefficients of corrective weight impact Ci,d,n;
o All weights Wi,n,d of the said synapse 118 are reviewed, and for each weight:
The value of weight Wi,n,d is multiplied by the corresponding coefficient of corrective weight impact Ci,d,n;
The result of multiplication is added to the forming neuron output sum;
• correction value Δn is calculated;
• correction value Δn is divided by the coefficient of corrective weight impact Ci,d,n, i.e., Δn / Ci,d,n; and
• all synapses 118 contributing to the given NeuronUnit are reviewed. For each synapse 118, all weights Wi,n,d of the subject synapse are reviewed, and for each weight its value is modified by the corresponding correction value Δn.
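The object classes outlined above might be skeletonized in Python as follows; the method bodies, the equal-interval distributor inside InputSignal, and the unit impact coefficients are simplifying assumptions, not the only possible arrangement:

    class Synapse:
        def __init__(self, input_index, num_weights):
            self.input_index = input_index            # pointer to the connected input 102
            self.weights = [0.0] * num_weights        # corrective weights W_i,n,d

        def weighted_sum(self, impacts):
            # Multiply the weights by the coefficients C_i,d,n and add the products.
            return sum(w * c for w, c in zip(self.weights, impacts))

        def correct(self, delta, impacts):
            # Add delta / C_i,d,n to every weight whose coefficient is non-zero.
            self.weights = [w + (delta / c if c else 0.0)
                            for w, c in zip(self.weights, impacts)]

    class NeuronUnit:
        def __init__(self, synapses):
            self.synapses = synapses
            self.neuron_sum = 0.0
            self.desired_sum = 0.0                    # set before the correction is calculated

        def form_sum(self, impacts_per_synapse):
            self.neuron_sum = sum(s.weighted_sum(c)
                                  for s, c in zip(self.synapses, impacts_per_synapse))
            return self.neuron_sum

        def correction_value(self):
            # Equal correction value for all weights contributing to this neuron.
            return (self.desired_sum - self.neuron_sum) / len(self.synapses)

    class InputSignal:
        def __init__(self, minimum, maximum, intervals):
            self.minimum, self.maximum, self.intervals = minimum, maximum, intervals

        def impacts(self, signal):
            # Distributor: coefficient 1 for the interval the signal falls into, 0 elsewhere.
            span = (self.maximum - self.minimum) or 1.0
            d = int((signal - self.minimum) / span * self.intervals)
            d = min(max(d, 0), self.intervals - 1)
            return [1.0 if k == d else 0.0 for k in range(self.intervals)]

    class PNet:
        def __init__(self, input_signals, neuron_units):
            self.input_signals = input_signals        # objects of the InputSignal class
            self.neuron_units = neuron_units          # objects of the NeuronUnit class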
[00150] The previously noted possibility of additional training of the p-net 100 allows a combination of training with recognition of the image, which enables the training process to be sped up and its accuracy to be improved. When training the p-net 100 on a set of sequentially changing images, such as training on consecutive frames of a film that are only slightly different from each other, additional training may include:
• training with the first image;
• recognition of the next image and identifying a percentage of similarity
between the new image and the image the network was initially trained with. Additional training is not required if the recognition error is less than its predetermined value; and
• if the recognition error exceeds the predetermined value, additional training is provided.
[00151] Training of the p-net 100 by the above basic training algorithm is effective for solving problems of image recognition, but does not exclude the loss or corruption of data due to overlapping images. Therefore, the use of the p-net 100 for memory purposes, though possible, may not be entirely reliable. The present embodiment describes training of the p-net 100 that provides protection against loss or corruption of information. An additional restriction may be introduced into the basic network training algorithm which requires that every corrective weight Wi,n,d be trained only once. After the first training cycle, the value of the weight Wi,n,d remains fixed or constant. This may be achieved by entering an additional access index "a" for each corrective weight, the index "a" representing the number of accesses to the subject corrective weight Wi,n,d during the training process.
[00152] As described above, each corrective weight may take on the nomenclature Wi,n,d,a, wherein "a" is the number of accesses to the subject weight during the training process. In the simplest case, for non-modified, i.e., not fixed, weights, a = 0, while for weights that have been modified or fixed by the described basic algorithm, a = 1. Moreover, while applying the basic algorithm, the corrective weights Wi,n,d,a with the fixed value a = 1 may be excluded from the weights to which corrections are being made, and equations [5], [6], and [7] may be transformed accordingly.
[00153] The above restriction may be partially applied to the correction of previously trained corrective weights Wi,n,d,a, but only to the weights that form the most important images. For example, within training on a set of portraits of a single person, one specific image may be declared primary and assigned priority. After training on such a priority image, all corrective weights Wi,n,d,a that were changed in the process of training may be fixed, i.e., the index set to a = 1, thus designating the weight as Wi,n,d,1, while other images of the same person may remain changeable. Such priority may include other images, for example those that are used as encryption keys and/or contain critical numeric data.
[00154] Changes to the corrective weights Wi,n,d,a may also not be completely prohibited, but instead limited with the growth of the index "a". That is, each subsequent use of the weight Wi,n,d,a may reduce its ability to change. The more often a particular corrective weight Wi,n,d,a is used, the less the weight changes with each access, and thus, during training on subsequent images, the previously stored images are changed less and experience reduced corruption. For example, if a = 0, any change in the weight Wi,n,d,a is possible; when a = 1, the possibility of change for the weight may be decreased to ± 50% of the weight's value; with a = 2, the possibility of change may be reduced to ± 25% of the weight's value.
[00155] After reaching the predetermined number of accesses, as signified by the index "a", for example when a = 5, further change of the weight Wi,n,d,a may be prohibited. Such an approach may provide a combination of high intelligence and information safety within a single p-net 100. Using the network error calculating mechanism, levels of permissible errors may be set such that information with losses within a predetermined accuracy range may be saved, wherein the accuracy range may be assigned according to a particular task. In other words, for a p-net 100 operating with visual images, the error may be set at a level that cannot be captured by the naked eye, which would provide a significant factor of increase in storage capacity. The above can enable creation of highly effective storage of visual information, for example movies.
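A sketch of such access-limited weight modification, assuming the halving schedule of the example above (full change at a = 0, ±50% at a = 1, ±25% at a = 2) and a cut-off at a = 5, might read:

    # Apply a proposed change to a corrective weight while honoring its access index "a".
    def apply_limited_change(weight_value, access_count, proposed_change, max_accesses=5):
        if access_count >= max_accesses:
            return weight_value, access_count                       # further change is prohibited
        if access_count == 0:
            allowed = abs(proposed_change)                          # any change is possible
        else:
            allowed = abs(weight_value) * (0.5 ** access_count)     # +/-50%, +/-25%, ...
        change = max(-allowed, min(allowed, proposed_change))
        return weight_value + change, access_count + 1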
[00156] The ability to selectively clean computer memory may be valuable for continued high-level functioning of the p-net 100. Such selective cleaning of memory may be done by removing certain images without loss of or corruption of the rest of the stored information. Such cleaning may be provided as follows: • identification of all corrective weights Wi,n,d,a that participate in the image formation, for example, by introducing the image to the network or by compiling the list of used corrective weights for each image;
• reduction of index "a" for the respective corrective weights Wi,n,d,a; and
• replacement of corrective weights Wi,n,d,a either with zero or with a random value close to the middle of the range of possible values for the subject weight when the index "a" is reduced to zero.
[00157] An appropriate order and succession of reduction of the index "a" may be experimentally selected to identify strong patterns hidden in the sequence of images. For example, for every 100 images introduced into the p-net 100 during training, there may be a reduction of the index "a" by a count of one, until "a" reaches the zero value. In such a case, the value of "a" may grow correspondingly with the introduction of new images. The competition between growth and reduction of "a" may lead to a situation where random changes are gradually removed from memory, while the corrective weights Wi,n,d,a that have been used and confirmed many times may be saved. When the p-net 100 is trained on a large number of images with similar attributes, for example, of the same subject or similar environment, the often-used corrective weights Wi,n,d,a constantly confirm their value and information in these areas becomes very stable. Furthermore, random noise will gradually disappear. In other words, the p-net 100 with a gradual decrease in the index "a" may serve as an effective noise filter.
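The decrement-and-reset scheme described above may be sketched as follows; the bookkeeping by weight identifiers and the reset to zero are illustrative assumptions:

    # Reduce the access index "a" of the weights participating in one image; when an index
    # reaches zero, the weight is replaced with zero (or a value near the middle of its range).
    def clean_image(weight_records, image_weight_ids):
        for wid in image_weight_ids:
            record = weight_records[wid]              # e.g. {"value": 3.7, "a": 2}
            record["a"] = max(0, record["a"] - 1)
            if record["a"] == 0:
                record["value"] = 0.0
        return weight_records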
[00158] The described embodiments of the p-net 100 training without loss of information allow creating a p-net memory with high capacity and reliability. Such memory may be used as a high-speed computer memory of large capacity, providing greater speed than even a "cache memory" system, but without increasing computer cost and complexity as is typical with a "cache memory" system. According to published data, in general, while recording a movie with neural networks, memory may be compressed tens or hundreds of times without significant loss of recording quality. In other words, a neural network is able to operate as a very effective archiving program. Combining this ability of neural networks with the high-speed training ability of the p-net 100 may permit creation of a high-speed data transmission system, a memory with high storage capacity, and a high-speed decryption program for multimedia files, i.e., a codec.
[00159] Due to the fact that in the p-net 100 data is stored as a set of corrective weights Wi,n,d,a, which is a type of code recording, decoding or unauthorized access to the p-net via existing methods and without the use of an identical network and key is unlikely. Thus, the p-net 100 may offer a considerable degree of data protection. Also, unlike conventional computer memory, damage to individual storage elements of the p-net 100 presents an insignificant detrimental effect, since other elements significantly compensate for lost functions. In the image recognition process, inherent patterns of the image being used are practically not distorted as a result of damage to one or more elements. The above may dramatically improve the reliability of computers and allow using certain memory blocks which under normal conditions would be considered defective. In addition, this type of memory is less vulnerable to hacker attacks due to the absence of a permanent address for critical bytes in the p-net 100, making such a system impervious to attack by a variety of computer viruses.
[00160] The previously noted process of image recognition with determination of the percentage of similarity between different images used in training may also be employed as a process of image classification according to previously defined categories, as noted above. For clustering, which is a division of the images into natural classes or groups that are not predefined, the basic training process may be modified. The present embodiment may include:
• preparation of a set of input images for training, without including prepared output images;
• formation and training the network with the formation of the neuron output sums as it is done according to the basic algorithm;
• selection, in the resulting output image, of the output with the maximum output sum, i.e., the winner output, or of a group of winner outputs, which may be organized similarly to a Kohonen network;
• creation of a desired output image, in which the winner output or the group of winner outputs receive maximum values. At the same time:
o The number of selected winner outputs may be predetermined, for example, in a range of 1 to 10, or winner outputs may be selected according to the rule "no less than N% of the maximum neuron sum", where "N" may be, for example, within 90 - 100%; and o All other outputs may be set equal to zero.
• training according to the basic algorithm using the created desired output image (Figure 13); and
• repeating all procedures for other images with formation for each image of different winners or winner groups.
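The winner-selection rule in the list above ("no less than N% of the maximum neuron sum") may be sketched as follows; the 95% threshold is only an example within the stated 90 - 100% range:

    # Form a desired output image: keep the winner outputs, set all other outputs to zero.
    def desired_output_from_winners(neuron_sums, percent_of_max=95.0):
        threshold = max(neuron_sums) * percent_of_max / 100.0
        return [s if s >= threshold else 0.0 for s in neuron_sums]

Alternatively, the retained winners could be raised to a common maximum value, as the description above also permits.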
[00161] The set of desired output images formed in the above manner may be used to describe clusters or groups into which the plurality of input images may naturally separate. Such a set of desired output images may be used to produce different classifications, such as for selection of images according to the established criteria and in statistical analysis. The above may also be used for the aforementioned inversion of input and output images. In other words, the desired output images may be used as the input images for another, i.e., additional, network, and the output of the additional network may be images presented in any form suitable for computer input.
[00162] In the p-net 100, after a single cycle of training with the above-described algorithm, desired output images may be generated with small output sum variation, which may slow down the training process and may also reduce its accuracy. To improve training of the p-net 100, the initial variation of points may be artificially increased or extended, so that the variation of the magnitude of the points covers the entire range of possible output values, for example -50 to +50, as shown in Figure 21. Such an extension of the initial variation of points may be either linear or nonlinear.
[00163] A situation may develop where the maximum value of a certain output is an outlier or a mistake, for example, a manifestation of noise. Such may be manifested by the appearance of a maximum value surrounded by a multitude of small signals. When winning outputs are selected, the small signal values may be disregarded by selecting, as the winners, the greatest signals surrounded by other large signals. For this purpose, known statistical techniques of variance reduction may be used, such as importance sampling. Such an approach may permit removing noise while maintaining basic valuable patterns. Creation of winner groups enables clustering of linearly inseparable images, i.e., images that relate to more than one cluster, as shown in Figure 13. The above may provide a significant improvement in accuracy and decrease the number of clustering errors.
[00164] Typical errors arising in the process of p-net 100 training may also be corrected; such error correction is possible with the help of the above-described algorithm in training with an outside trainer.
[00165] Hardware portion of the p-net 100 may be provided in a digital, analog or combined digital-analog microchip. A representative p-net 100 microchip may be employed for both storage and processing of information. The p-net 100 microchip may be based on various variable resistors, field-effect transistors, memristors, capacitors, switching elements, voltage generators, non-linear photo-cells, etc.
Variable resistors may be used as synaptic weights 108 and/or corrective weights 112. A plurality of such resistors may be connected in parallel, series, or series-parallel. In case of parallel connection of the respective resistors, signals may be coded by current values, which may in turn facilitate automated analog summation of the currents. In order to obtain positive or negative signals, two sets of resistors, excitatory and inhibitory, may be provided on each synapse. In such a hardware structure, inhibitory signals may be subtracted from excitatory signals.
[00166] Each corrective weight 112 may be implemented as a memristor-like device (memristor). As understood by those skilled in the art, a memristor is a variable resistor with resistance controlled by an electrical current in a circuit, or by an electrical potential or an electric charge. Appropriate memristor functionality may be achieved via an actual memristor device, or a software or physical emulation thereof. In operation of the p-net 100 at low voltage potential, a memristor may operate as a simple resistor. During training mode, the resistance of the memristor may be varied, for example, by a strong voltage pulse. Whether the memristor's value changes by increasing or decreasing its resistance may depend on the polarity of the voltage, while the magnitude of the value change may depend on the magnitude of the voltage pulse.
[00167] Figure 24 illustrates an embodiment of the p-net 100 labeled as p-net 100A, which is similar in all ways to the previously described p-net 100, other than having specific elements that will be discussed below. In the p-net 100A each corrective weight 112 is established by a memory element 150 that retains the respective weight value of the particular corrective weight. In the p-net 100A, the weight correction calculator 122 may be configured to modify the respective corrective weight values of the corrective weights 112 being established by the corresponding memory elements 150. Consistent with other embodiments of the p-net 100A, the corrective weights 112 are established in the corresponding memory elements 150 using the determined deviation 128. During operation of the p-net 100A shown in Figure 25, each respective output 117 of every neuron 116 provides the respective neuron sum 120 to establish an operational output signal 152 of the p-net 100A. The operational output signal 152 has a signal value that represents either a portion or the entirety of an operational output image 154.
[00168] During training of the p-net 100A, the weight correction calculator 122 may receive the desired output signal 124 representing a portion or the entirety of the output image 126, determine the deviation 128 of the neuron sum 120 from the value of the desired output signal 124, and modify respective corrective weight values established by the corresponding memory elements using the determined deviation. Furthermore, adding up the modified corrective weight values of the corrective weights 112 established by the corresponding memory elements 150 to determine the neuron sum will minimize the deviation of the neuron sum 120 from the desired output signal value 124. Minimizing the deviation of the neuron sum 120 from the desired output signal value 124 is used to train the p-net 100A.
[00169] As shown in phantom in Figure 24, the trained p-net 100A of any disclosed embodiments may be configured to receive supplementary training using solely a supplementary input signal 156 having a value along with a corresponding supplementary desired output signal 158. In other words, the previously trained p-net 100A may receive supplementary training without being retrained with some or all of the original input signals 104 and desired output signals 124 that were employed to initially train the p-net 100A. Each of the plurality of synapses 118 in the p-net 100A may be configured to accept one or more additional corrective weights 112 that were established by the respective memory elements 150. Such additional corrective weights 112 may be added to the synapses either during training or before the supplementary training of the p-net 100A. Such additional corrective weights 112 may be used to expand a number of memory elements 150 available to train and operate the p-net 100A.
[00170] The p-net 100A may also be configured to remove from the respective synapses 118, either during or after training of the p-net 100A, one or more corrective weights 112 established by the respective memory elements 150. The removal of some corrective weights 112 may permit the neural network to retain only a number of memory elements required to operate the neural network. Such ability to remove corrective weights 112 is intended to make the p-net more compact, and thus more efficient for training and subsequent operation. The p-net 100A may also be configured to accept additional inputs 102, additional neurons 116, along with respective additional neuron outputs 117, and additional synapses 118, either before or during training of the p-net, to thereby expand the p-net's operational parameters. Such additions to the p-net 100A may enhance capability, such as capacity, precision of output, and a number of tasks that may be handled by the p-net.
[00171] The p-net 100A may be additionally configured to remove any number of unused inputs 102, neurons 116, along with respective additional neuron outputs 117, and synapses 118 before, during, or after either initial training or supplementary training of the p-net. Such ability to remove elements of the p-net 100A that are not being used is intended to simplify structure and modify operational parameters of the p-net, i.e., condense the p-net, without loss of the p-net's output quality.
[00172] As shown in Figure 26, each memory element 150 may be established by an electrical component or device 160 characterized by an electrical and/or magnetic characteristic configured to define the weight value of the respective corrective weight 112, such as resistance, impedance, capacity, magnetic field, induction, or electric field intensity. Such an electrical device 160 may, for example, be configured as a memristor (shown in Figures 26-28), a resistor (shown in Figures 29-32), a transistor, a capacitor (shown in Figures 29-32), a field-effect transistor, a photoresistor or light-dependent resistor (LDR), a magnetic dependent resistor (MDR), or a memistor. As understood by those skilled in the art, a memistor is a resistor with memory able to perform logic operations and store information, and is generally a three-terminal implementation of the memristor.
[00173] In such an embodiment of the memory element 150, the respective electrical and/or magnetic characteristic of each electrical device 160 may be configured to be varied during training of the p-net 100A. Additionally, in the p-net 100A using the electrical device 160 embodiment of the memory element 150, the weight correction calculator 122 may modify the values of the respective corrective weights 112 by varying the respective electrical and/or magnetic characteristic of the corresponding devices employed by the p-net 100A. Each electrical device 160 may also be configured to maintain or retain the electrical and/or magnetic characteristic that corresponds to the value of the respective corrective weight 112 modified during training of the p-net 100A, as discussed above, and used during operation of the p-net after the training.
[00174] Specific embodiments of an appropriate memristor may be either the known physical expression of the device or software or electrical-circuit functional representations or equivalents thereof. Figures 26-28 depict embodiments of the representative p-net 100A employing such physical memristors. As shown in Figure 26, each input 102 is configured to receive an analog input signal 104, wherein the input signals from an external source, such as an image sensor, arrays of light-sensitive elements, microphones, a digital-to-analog converter, etc., are represented as voltages V1, V2... Vm. All the input signals 104 together generally describe the corresponding input image 106.
[00175] Each memory element 150 may also be established by a block 162 having electrical resistors 164. Such a block 162 with electrical resistors 164 may include a selector device 166. The selector device 166 is configured to select one or more electrical resistors 164 from the block 162 using the determined deviation 128 of the neuron sum 120 from the value of the desired output signal 124 to establish each corrective weight 112, as discussed above. Furthermore, each memory element 150 established by the block 162 having electrical resistors 164 may also include electrical capacitors 168. In other words, each memory element 150 may be established by the block 162 having electrical resistors 164, as well as electrical capacitors 168. In such a case, the selector device 166 may be additionally configured to select capacitors 168, as well as electrical resistors 164, using the determined deviation 128 to establish each corrective weight 112.
[00176] Each of the embodiments of the p-net 100A shown in Figures 24 and 25 may be configured as either an analog, digital, or digital-analog neural network. In such an embodiment of the p-net 100A, any of the plurality of inputs 102, plurality of synapses 118, memory elements 150, set of distributors 114, set of neurons 116, weight correction calculator 122, and desired output signals 124 may be configured to operate in an analog, digital, or digital-analog format. Figure 26 illustrates the p-net 100A in the first stage of training, which concludes with determination of the deviation 128 of the neuron sum 120 from the value of the desired output signal 124, while Figure 27 illustrates the p-net 100A in the second stage of training, which concludes with formation of corrective signals 170 for the corrective weights 112 established by the electrical devices 160. As shown in Figure 26, each synapse 118 is connected to one of the plurality of inputs 102 and includes a plurality of electrical devices 160, shown as memristors, that function as corrective weights 112. Each corrective weight 112 is defined by a value of electrical resistance.
[00177] The p-net 100A shown in Figure 26 also includes a set of distributors 114. Each distributor 114 is operatively connected to one of the plurality of inputs 102 for receiving the respective input signal 104 via a set of electrical devices 160, configured as memristors, establishing the appropriate corrective weight 112.
Additionally, each distributor 114 is configured to select one or more corrective weights 112 embodied by the memristors from the available plurality of corrective weights in correlation with the input voltage. Figure 28 illustrates the p-net 100A using the electrical devices 160 configured as memristors arranged in twin parallel branches. With respect to other solutions for the construction of the p-net 100A noted above, Figures 29-31 illustrate electrical devices 160 configured as common resistors to define appropriate resistances in the p-net 100A, while Figure 32 illustrates electrical devices 160 configured to define impedances in the p-net 100A.
[00178] In the p-net 100A configured as an analog or analog-digital network, each input 102 is configured to receive an analog or digital input signal 104, wherein the input signals are represented as I1, I2... Im in Figures 24 and 25. Each input signal I1, I2... Im represents a value of some analog characteristic(s) of an input image 106, for example, a magnitude, frequency, phase, signal polarization angle, etc. In the analog p-net 100A, each input signal 104 has an input analog value, wherein together the plurality of input signals 104 generally describes the analog input image 106. In the p-net 100A configured as an analog network, each neuron 116 may be established by either a series or a parallel communication channel 172, for example, an electrical wire, or a series or a parallel bus. A representative bus embodiment of the communication channel 172 may use both parallel and bit-serial connections, as understood by those skilled in the art. For example, if the corresponding analog signals are provided via electrical current, the communication channel 172 may be a series current bus, while if the corresponding analog signals are provided via electrical potential on the respective corrective weights 112, a representative communication channel may be a parallel bus.
[00179] Analog embodiments of the corrective weight 112 are shown in Figures 26-32. Each analog corrective weight 112 is defined by the memory element 150 that retains the respective weight value of the particular corrective weight and acts on and modifies the forward signals (from input to output) or reverse signals (from output to input), or an additional control signal, that flow therethrough.
Furthermore, each analog corrective weight 112 may retain its respective weight value during actual operation of the p-net 100A, for example, during image recognition. Additionally, the value of each analog corrective weight 112 may be modified either via the forward signals or via the reverse signals, or via an additional control signal, during training of the p-net 100A. As a result, an analog embodiment of the p-net 100A may enable generation of microchips permitting support of various computer applications. Moreover, the entire p-net 100A may be programmed into an electronic device having a memory, such as a microchip, video card, etc. Accordingly, an appropriate embodiment of each memory element 150 will then also be stored in the memory of the subject electronic device.
[00180] Consistent with the above, each electrical device 160 in the analog p-net 100A may be configured to restore the electrical and/or magnetic characteristic corresponding to the modified value of the respective corrective weight 112 following the training of the p-net. The weight correction calculator 122 may be configured to generate one or more correction signals representative of the determined deviation 128 of the neuron sum 120 from the value of the desired output signal 124.
Furthermore, each of the generated correction signals may be used to vary the electrical and/or magnetic characteristic of at least one electrical device 160, i.e., separate correction signals may be used for each device being modified. The weight correction calculator 122 may also be configured to generate a single correction signal used to vary the electrical and/or magnetic characteristic of each electrical device 160, i.e., one correction signal may be used for all electrical devices being modified.
[00181] Specific embodiments of the weight correction calculator 122 may be embedded into the p-net 100A as programmed software or be established via external devices or accessible computer programs. For example, the weight correction calculator 122 may be established as a set of differential amplifiers 174. Consistent with the overall disclosure, each such differential amplifier 174 may be configured to generate a respective correction signal representative of the determined deviation 128 of the neuron sum 120 from the value of the desired output signal 124. Each electrical device 160 may be configured to maintain the electrical and/or magnetic characteristic corresponding to the modified value of the respective corrective weight 112 after training of the p-net 100A is completed. The p-net 100A may also be configured to use the maintained electrical and/or magnetic characteristic during the p-net's operation, i.e., after the training has been completed. Such structure of the p-net 100A facilitates parallel or batch training of the corrective weights 112 and thereby permits a significant reduction in the amount of time required to train the p-net, as compared with traditional neural networks described above.
[00182] Each distributor 114 in the p-net 100A may be configured as either an analog, digital, or analog-digital device that takes a single input signal 104 and selects one or more of multiple data-output lines connected to the single input, i.e., a demultiplexer 176. Such a demultiplexer 176 may be configured to select one or more corrective weights 112 from the plurality of corrective weights in response to the received input signal 104. Each distributor 114 may be configured to convert the received input signal 104 into a binary code and select one or more corrective weights 112 from the plurality of corrective weights in correlation with the binary code.
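A distributor acting as such a demultiplexer 176 may be sketched as below; the equal-interval encoding and the signal range are assumptions:

    # Convert a received input signal into an interval code and select the corresponding
    # corrective weight from the plurality of weights on the synapse.
    def select_weight(signal, weights, minimum=0.0, maximum=100.0):
        span = (maximum - minimum) or 1.0
        index = int((signal - minimum) / span * len(weights))
        index = min(max(index, 0), len(weights) - 1)   # the index may equally be kept as a binary code
        return weights[index]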
[00183] Figure 33 depicts a method 300 of operating a utility neural network 100B (shown in Figure 25). The method 300 operates in accordance with the above disclosure with respect to Figures 2-22 and 24-32. The method 300 commences in frame 302 where the method includes providing the utility neural network 100B. Following frame 302 the method advances to frame 304. In frame 304 the method includes processing data via the utility neural network 100B using modified values of the corrective weights 112 established by a separate analogous neural network, such as the p-net 100A, during training thereof. The training of the analogous p-net 100A may include supplementary training, as described above with respect to Figure 24.
[00184] The utility neural network 100B and the trained separate p-net 100A may be made analogous by having a matching neural network structure, for example, as represented in Figure 25, such that the utility neural network 100B may exclude the weight correction calculator 122 and the corresponding ability to train the utility neural network. Accordingly, the matching neural network structure may include an identical number of inputs 102, corrective weights 112, distributors 114, neurons 116, neuron outputs 117, and synapses 118. In other words, the p-net 100A and the utility neural network 100B may be substantially identical with respect to all the features establishing the networks' operating parameters, with the largest functional difference being the ability of the p-net 100A to be trained by establishing the modified corrective weights 112. Each of the analogous p-net 100A and the utility neural network 100B may be implemented in distinct configurations, for example, in various forms of hardware and/or software, as well as in analog and/or digital format, thereby permitting the utility neural network and the analogous p-net to be represented by dissimilar carriers. In such a case, a translator (not shown) may be employed to convert or interpret the data with the modified corrective weights 112.
[00185] In each of the utility neural network 100B and the trained separate p-net 100A, each corrective weight 112 may be established by the memory element 150. Specifically, in the p-net 100A, the memory element 150 may retain the respective modified weight value corresponding to the corrective weight 112 following the training of the p-net. Each input image 106 provided to the utility neural network 100B may be represented by the combined input signals 104 represented by I1, I2... Im, identical to the description with respect to Figure 2. As additionally discussed with respect to Figure 2, each input signal I1, I2... Im represents a value of some
characteristic(s) of the corresponding input image 106.
[00186] From frame 304 the method proceeds to frame 306. In frame 306 the method includes establishing an operational output signal 152 using the modified corrective weights 112, as discussed with respect to Figure 25. Accordingly, the processing of the input data, such as the input image 106, via the utility neural network 100B may culminate with recognition of such data and as the operational output signal 152 of the utility neural network. The operational output signal 152 of the utility neural network 100B may then be interpreted or decoded either by the utility network itself or by an operator of the utility network as part of frame 308 to complete the method. The establishment of the modified corrective weights 112 in the p-net 100A may be accomplished according to the method 200 of training the p- net 100, as described with respect to Figure 23.
[00187] Figures 34 and 35 illustrate an embodiment of the p-net 100 labeled as p-net 100B, which is similar in all ways to the previously described p-net 100, other than having specific elements that will be discussed below. Furthermore, the p-net 100B shown in Figures 34 and 35 is configured to operate, i.e., be trained with selected images for subsequent recognition of other images, using an array structure. The term "image" as employed herein is intended to denote any type of information or data received for processing or generated by the neural network. In Figure 36, a trained p-net 100B is designated via numeral 100C. When the p-net 100B is being trained, the input image 106 is defined as a training image, while in the trained p-net 100C the input image 106 is intended to undergo recognition. The training images
106 are either received by the plurality of inputs 102 as a training input value array
107 or codified as a training input value array 107 during training of the p-net 100B, i.e., after having been received by the plurality of inputs.
[00188] Similar to other embodiments, each of the p-net 100B and the p-net 100C shown in Figures 34-36 also includes synapses 118, wherein each synapse 118 is connected to one of the plurality of inputs 102, includes a plurality of corrective weights 112, and may also include the synaptic weight 108. The corrective weights 112 of all the synapses 118 are organized as, i.e., in the form of, a corrective weight array 119A. Accordingly, in Figures 34-36, the corrective weight array 119A includes all the corrective weights 112 within the dashed box 119A. The p-net 100B may also include a set of distributors 114. In such an embodiment, each distributor 114 is operatively connected to one of the plurality of inputs 102 for receiving the respective input signal 104. The embodiment of the p-net 100B that includes the corrective weight array 119A may also be characterized by an absence of a distinct neuron unit 119, while retaining some of the neuron unit's constituent elements.
[00189] Analogous to the previously described p-net 100 and p-net 100A, the p-net 100B additionally includes a set of neurons 116, and is a means for executing the actions described in detail below. Each neuron 116 also has at least one output 117 and is connected with at least one of the plurality of inputs 102 via one synapse 118. Each neuron 116 is similarly configured to sum up the values of the corrective weights 112 selected from each synapse 118 connected to the respective neuron 116 to thereby generate and output a neuron sum array 120A, otherwise designated as ∑n. In the present embodiment, a separate distributor 114 may similarly be used for each synapse 118 of a given input 102, as shown in Figures 34-36. Alternatively, a single distributor may be used for all such synapses (not shown). During formation or setup of the p-net 100B, all corrective weights 112 are assigned initial values, which may change during the process of p-net training, as shown in Figure 35. The initial value of the corrective weight 112 may be selected randomly, calculated with the help of a pre-determined mathematical function, selected from a predetermined template, etc. Initial values of the corrective weights 112 may be either identical or distinct for each corrective weight 112, and may also be zero.
[00190] As shown in Figures 34 and 35, the p-net 100B also includes a controller 122A configured to regulate training of the p-net 100B, and as such is a means for executing the actions described in detail below. The controller 122A may include the weight correction calculator 122 described above with respect to other embodiments. In order to appropriately perform the tasks described in detail below, the controller 122A includes a memory, at least some of which is tangible and non-transitory. The memory of the controller 122A may be a recordable medium that participates in providing computer-readable data or process instructions. Such a medium may take many forms, including but not limited to non-volatile media and volatile media. Non-volatile media for the controller 122A may include, for example, optical or magnetic disks and other persistent memory. Volatile media may include, for example, dynamic random access memory (DRAM), which may constitute a main memory. Such instructions may be transmitted by one or more transmission medium, including coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to a processor of a computer.
[00191] Memory of the controller 122A may also include an appropriate medium, for example a magnetic or an optical medium. The controller 122A may be configured or equipped with other required computer hardware, such as a high-speed clock, requisite Analog-to-Digital (A/D) and/or Digital-to-Analog (D/A) circuitry, necessary input/output circuitry and devices (I/O), as well as appropriate signal conditioning and/or buffer circuitry. Algorithms required by the controller 122A or accessible thereby may be stored in the memory and automatically executed to provide the required functionality described in detail below.
[00192] The controller 122A may be programmed to organize the corrective weights 112 into the corrective weight array 119A. The controller 122A is also configured to receive desired images or output signals 124 organized as a desired output value array 126A. The controller 122A is additionally configured to determine the deviation 128 of the neuron sum array 120A from the desired output value array 126A and generate a deviation array 132. The controller 122A is further configured to modify the corrective weight array 119A using the determined deviation array 132. In such a case, adding up the modified corrective weight values to determine the neuron sum array 120A reduces, i.e., minimizes, the deviation 128 of the neuron sum array 120A from the desired output value array 126A to generate a trained corrective weight array 134A (shown in Figure 36). Analogous to the corrective weight array 119A shown in Figures 34 and 35, the trained corrective weight array 134A includes all the corrective weights 112 within the dashed box 134A. Also, as shown in Figure 36, and analogous to the corrective weight array 119A in Figures 34 and 35, the trained corrective weight array 134A includes all the trained corrective weights 112A within the dashed box 119A and may include the distributors 114 associated therewith. Therefore, the minimized deviation 128 of the neuron sum array 120A compensates for errors generated by the p-net 100B. Furthermore, the generated trained corrective weight array 134A facilitates concurrent or parallel training of the p-net 100B.
[00193] In the trained p-net 100C, shown in Figure 35, the plurality of inputs 102 to the p-net may be configured to receive input images 106. Such input images 106 may be either received as an input value array 107A or codified as an input value array 107A during recognition of the images by the p-net 100B. Each synapse 118 may include a plurality of trained corrective weights 112A. Additionally, each neuron 116 may be configured to add up the weight values of the trained corrective weights 112A corresponding to each synapse 118 connected to the respective neuron, such that the plurality of neurons generate a recognized images array 136, thereby providing recognition of the input images 106. In the embodiment of the p-net 100B and the trained p-net 100C that includes the set of distributors 114, the distributors may be configured to codify the training and input images 106 as the respective training input value array 107 and input value array 107A. Accordingly, such a set of distributors 114 is operatively connected to the plurality of inputs 102 for receiving each of the respective training and input images 106. The above operations may be performed using structured matrices, specifically a trained corrective weight matrix in place of the trained corrective weight array 134A, as will be described in detail below.
[00194] The controller 122A may additionally be programmed with an array of target deviation or target deviation array 138 of the neuron sum array 120A from the desired output value array 126A. Furthermore, the controller 122A may be configured to complete training of the p-net 100B when the deviation 128 of the neuron sum array 120A from the desired output value array 126A is within an acceptable range 139 of the target deviation array 138. The acceptable range 139 may be referenced against a maximum or a minimum value in, or an average value of, the target deviation array 138. Alternatively, the controller 122A may be configured to complete training of the p-net 100B when the speed of reduction of the deviation 128, or of convergence of the training input value array 107 and the desired output value array 126A, falls to a predetermined speed value 140. The acceptable range 139 and/or the predetermined speed value 140 may be programmed into the controller 122A.
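By way of a non-limiting illustration, the following Python sketch shows one way the two training-completion checks described above might be expressed; the function name, argument names, and threshold values are assumptions introduced for illustration only and are not taken from the present disclosure.

import numpy as np

def training_complete(deviation, target_deviation, acceptable_range,
                      previous_deviation=None, predetermined_speed=None):
    # Check whether the deviation is within the acceptable range of the target
    # deviation array; here the range is referenced against the array's average value.
    if np.abs(deviation - target_deviation).mean() <= acceptable_range:
        return True
    # Alternatively, complete training when the speed of reduction of the deviation
    # between consecutive epochs falls to the predetermined speed value.
    if previous_deviation is not None and predetermined_speed is not None:
        speed = np.abs(previous_deviation).mean() - np.abs(deviation).mean()
        return speed <= predetermined_speed
    return False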
[00195] The training input value array 107, input value array 107A, the corrective weight array 119A, neuron sum array 120A, desired output value array 126A, deviation array 132, trained corrective weight array 134A, recognized images array 136, and target deviation array 138, i.e., the parameter values therein, may be organized, respectively, as a training input value matrix 141, input value matrix 141A, corrective weight matrix 142, neuron sum matrix 143, desired output value matrix 144, deviation matrix 145, trained corrective weight matrix 146, recognized images matrix 147, and target deviation matrix 148. Whereas in each respective array 107, 107A, 119A, 120A, 126A, 132, 134A, 136, and 138 the values of the respective parameters may be organized, for example, in the form of a processor-accessible data table, the values in the respective matrices 141, 141A, 142, 143, 144, 145, 146, 147, and 148 are specifically organized to enable application of algebraic matrix operations to each respective matrix individually, as well as to combinations thereof. The matrices 141, 141A, 142, 143, 144, 145, 146, 147, and 148 are not specifically shown in the figures, but, when organized as such, are to be understood as taking the place of the respective arrays 107, 107A, 119A, 120A, 126A, 132, 134A, 136, and 138.
[00196] In the examples below, for illustration purposes, particular matrices are depicted with an arbitrary number of columns and rows. For example, the training images may be received and/or organized in a training input images matrix |I|.
Subsequently, the above training input images matrix may be converted via the controller 122A into the training input value matrix 141, which is represented as matrix |C|. Each matrix |C| will have a number of columns corresponding to the number of inputs "I", expanded to account for a specific number of intervals "i", and a number of rows corresponding to the number of images.
In matrix |C|, the intervals "i" are identified with the specific corrective weights 112 that will be used during training. In the columns corresponding to intervals "i", the values of signals may be replaced with ones (1) to signify that the particular signal will be used in the particular interval, while in other intervals for the subject signal, the values of signals may be replaced with zeros (0) to signify that the particular interval will not be considered.
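By way of a non-limiting illustration, the following Python sketch shows one way a distributor might codify raw input signals into the interval-coded matrix |C| described above; the number of intervals, the signal range, and the array shapes are assumptions introduced for illustration only.

import numpy as np

def codify(images, num_intervals, lo=0.0, hi=255.0):
    # images: (num_images, num_inputs) array of raw signal values.
    # Returns a (num_images, num_inputs * num_intervals) matrix of ones and zeros,
    # where a one marks the interval (and hence the corrective weight) engaged by a signal.
    num_images, num_inputs = images.shape
    idx = np.floor((images - lo) / (hi - lo) * num_intervals).astype(int)
    idx = np.clip(idx, 0, num_intervals - 1)          # interval index for every signal
    C = np.zeros((num_images, num_inputs * num_intervals))
    rows = np.repeat(np.arange(num_images), num_inputs)
    cols = (np.arange(num_inputs) * num_intervals + idx).ravel()
    C[rows, cols] = 1.0
    return C

# Example: three 4-pixel training images codified with 5 intervals per input.
C = codify(np.random.randint(0, 256, size=(3, 4)).astype(float), num_intervals=5)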
An exemplary corrective weight matrix 142 may be formed as matrix |W|.
The neuron sum matrix 143 may be represented as matrix |∑| shown below:
|∑| = |C| x |W| =

∑11 ∑12 ∑13
∑21 ∑22 ∑23
∑31 ∑32 ∑33

wherein:
∑11 = C111 x W111 + C121 x W121 + C131 x W131 ...
∑21 = C211 x W211 + C221 x W221 + C231 x W231 ...
∑31 = C311 x W311 + C321 x W321 + C331 x W331 ...
∑12 = C112 x W112 + C122 x W122 + C132 x W132 ...
∑22 = C212 x W212 + C222 x W222 + C232 x W232 ..., etc.
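By way of a non-limiting illustration, the neuron sum matrix may be computed as a single matrix product; in the Python sketch below the shapes and random placeholder values are assumptions chosen only to make the example concrete.

import numpy as np

num_images, num_inputs, num_intervals, num_neurons = 3, 4, 5, 3
k = num_inputs * num_intervals                                     # corrective weights per neuron
C = np.random.randint(0, 2, size=(num_images, k)).astype(float)   # coded training inputs |C|
W = np.random.rand(k, num_neurons)                                 # corrective weight matrix |W|

sigma = C @ W    # neuron sum matrix |∑|, one row per image and one column per neuron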
The desired output value matrix 144 may be formed as matrix |O|.
The deviation 128 of the neuron sum matrix 143 from the desired output value matrix 144 may be determined to generate the deviation matrix 145, represented as matrix |E| below:
|E| = |O| - |∑| =

E11 E12 E13
E21 E22 E23
E31 E32 E33

wherein:
E11 = O11 - ∑11
E12 = O12 - ∑12, etc.
The corrective weight matrix 142, represented as matrix |W|, may be modified using the determined deviation matrix 145, which permits adding up the modified corrective weight 112 values to determine the neuron sum matrix 143 such that the deviation of the neuron sum matrix 143 from the desired output value matrix 144 is minimized, to thereby generate a trained corrective weight matrix 146, represented as matrix |W trained|. The matrix |W trained| is derived according to the expression |W trained| = |W| + |VW| (wherein the factor |VW| will be described in detail below).
As discussed above, the formation of the trained corrective weight array 134A and the trained corrective weight matrix 146 facilitates concurrent training of the p-net 100B.
[00197] In the embodiment of image recognition (shown in Figure 36) using the trained p-net 100C, concurrent recognition of a batch of input images 106 may be provided using the matrix operations described above. Specifically, in the trained p-net 100C the trained corrective weights may be represented as a two-dimensional n x k matrix |W|, where "n" is the number of neurons 116 and "k" is the number of corrective weights 112 in a particular neuron.
For concurrent recognition of a batch of input images 106, the input images to be recognized may be presented as a v x k matrix |Ir|, where "v" is the number of recognizable images and "k" is the number of corrective weights 112 in a particular neuron 116. The matrix |Ir| of input images 106 for recognition may be generally represented as follows:

Ir11 Ir12 Ir13 ... Ir1k
Ir21 Ir22 Ir23 ... Ir2k
Ir31 Ir32 Ir33 ... Ir3k
...
Irv1 Irv2 Irv3 ... Irvk
In the above matrix |Ir|, each row of the matrix is a single image subjected to recognition.
[00198] Concurrent recognition of a batch of input images 106 may be provided by multiplication of the matrix |W| by the transposed matrix |Ir|T, to generate the recognized images matrix 147, represented by the symbol "|Y|", as follows:
|Y| = |W| x |Ir|T
The matrix |Y| has dimensions n x v. Each column of the matrix |Y| is a single output, i.e., a recognized image, obtained by the trained p-net 100C.
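By way of a non-limiting illustration, the Python sketch below carries out the batch recognition step |Y| = |W| x |Ir|T; the dimensions and the random placeholder values stand in for a trained corrective weight matrix and a batch of coded input images, and are assumptions made for the example only.

import numpy as np

n, k, v = 3, 20, 5                        # neurons, corrective weights per neuron, images
W_trained = np.random.rand(n, k)          # placeholder for the trained corrective weight matrix
Ir = np.random.randint(0, 2, size=(v, k)).astype(float)   # one coded input image per row

Y = W_trained @ Ir.T                      # recognized images matrix |Y|, dimensions n x v
# Each column Y[:, j] holds the neuron sums produced for input image j.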
[00199] Each of the p-nets 100B and 100C may additionally include a data processor 150, which may be a sub-unit of the controller 122A. In such embodiments, the controller 122A may be additionally configured to partition or cut up at least one of the respective training input value matrix 141, input value matrix 141A, corrective weight matrix 142, neuron sum matrix 143, and desired output value matrix 144 into respective sub-matrices. The controller 122A may also be configured to communicate a plurality of the resultant sub-matrices to the data processor 150 for separate mathematical operations therewith. Such partitioning of any of the subject matrices 141, 141A, 142, 143, and 144 into respective sub-matrices facilitates concurrent or parallel data processing and an increase in speed of either image recognition of the input value matrix 141A or training of the p-net 100B. Such concurrent or parallel data processing also permits scalability of the p-net 100B or 100C, i.e., provides the ability to vary the size of the p-net by limiting the size of the respective matrices being subjected to algebraic manipulations on a particular processor and/or breaking up the matrices between multiple processors, such as the illustrated processor 150. As shown in Figures 34-36, in such an embodiment of the p-net 100B and 100C, multiple data processors 150 in communication with the controller 122A may be employed, whether as part of the controller 122A or arranged distally therefrom, and configured to operate separately and in parallel.
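By way of a non-limiting illustration, the following Python sketch shows one way a matrix might be partitioned into sub-matrices and dispatched to several worker processes for parallel computation of neuron sums; the use of the multiprocessing module, the block count, and the shapes are assumptions standing in for the data processors 150 rather than a prescribed implementation.

import numpy as np
from multiprocessing import Pool

def block_neuron_sums(args):
    # Compute the neuron sums for one sub-matrix of corrective weights (one block of neurons).
    C_block, W_block = args
    return C_block @ W_block

if __name__ == "__main__":
    C = np.random.randint(0, 2, size=(8, 20)).astype(float)   # coded input value matrix
    W = np.random.rand(20, 12)                                 # corrective weights for 12 neurons
    W_blocks = np.array_split(W, 4, axis=1)                    # four neuron-wise sub-matrices
    with Pool(processes=4) as pool:
        partial_sums = pool.map(block_neuron_sums, [(C, Wb) for Wb in W_blocks])
    sigma = np.hstack(partial_sums)                            # reassembled neuron sum matrix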
[00200] The controller 122A may modify the corrective weight matrix 142 by applying an algebraic matrix operation to the training input value matrix 141 and the corrective weight matrix to thereby train the p-net 100B. Such a mathematical matrix operation may include a determination of a mathematical product of the training input value matrix 141 and the corrective weight matrix 142 to thereby form a current training epoch weight matrix 151. The controller 122A may also be configured to subtract the neuron sum matrix 143 from the desired output value matrix 144 to generate a matrix of deviation of neuron sums 153, depicted as matrix |E| described above.
Additionally, the controller 122A may be configured to divide the matrix of deviation of neuron sums 153 by the number of synapses 118, identified below with the letter "m", connected to the respective neuron 116 to generate a matrix of deviation per neuron input 155, represented below by the symbol "|AW|", as follows:
|AW| = |E| / m
[00201] The controller 122A may be additionally configured to determine a number of times each corrective weight 112 was used during one training epoch of the p-net 100B, represented in the expression below by the symbol "|S|". As shown below, the matrix |S| is obtained via multiplication of the training input value matrix 141 by a unit vector:
       C11 C12 C13     1
|S| =  C21 C22 C23  x  1
       C31 C32 C33     1
The controller 122A may be further configured to form an averaged deviation matrix
157, represented below by the symbol "|VW|", for the one training epoch using the determined number of times each corrective weight was used during the one training epoch.
|VW| = |AW| / |S|
Furthermore, the controller 122A may be configured to add the averaged deviation matrix 157 for the one training epoch to the corrective weight matrix 142 to thereby generate the trained corrective weight matrix 146, represented below as |W trained|, and complete the one training epoch as shown below:
|W trained| = |W| + |VW|

wherein each element of the trained corrective weight matrix is formed element-by-element, for example:

W331 + VW331   W332 + VW332   W333 + VW333
W341 + VW341   W342 + VW342   W343 + VW343
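By way of a non-limiting illustration, the Python sketch below strings the above matrix operations into a single training epoch; the shapes are assumed, and the step that maps the per-image deviations back onto the individual corrective weights (the product with the transposed matrix |C|) is an assumption introduced to keep the dimensions consistent rather than a literal formula from the present disclosure.

import numpy as np

def train_epoch(C, W, O, m):
    # C: (images, weights) coded training inputs; W: (weights, neurons) corrective weights;
    # O: (images, neurons) desired output values; m: number of synapses per neuron.
    sigma = C @ W                       # neuron sum matrix |∑|
    E = O - sigma                       # matrix of deviation of neuron sums |E|
    AW = E / m                          # matrix of deviation per neuron input |AW|
    S = C.sum(axis=0)                   # times each corrective weight was used, |S|
    S = np.where(S == 0, 1.0, S)        # guard against division by zero for unused weights
    VW = (C.T @ AW) / S[:, None]        # averaged deviation matrix |VW| for the epoch
    return W + VW                       # trained corrective weight matrix |W trained|

# Toy usage: 3 images, 4 inputs with 5 intervals each (20 weights), 2 neurons.
C = np.random.randint(0, 2, size=(3, 20)).astype(float)
W_trained = train_epoch(C, np.zeros((20, 2)), np.random.rand(3, 2), m=4)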
[00202] Figure 37 depicts a method 400 for operating the p-net 100B, as described above with respect to Figures 34-36. The method 400 is configured to improve operation of an apparatus, such as a computer, or a system of computers employed in implementing supervised training using one or more data processors, such as the processor 150. The method 400 may be programmed into a non-transitory computer-readable storage device for operating the p-net 100B and encoded with instructions executable to perform the method.
[00203] The method 400 commences in frame 402 where the method includes receiving, via the plurality of inputs 102, the training images 106. As described above with respect to the structure of the p-net 100B depicted in Figures 34 and 35, the training images 106 may either be received as the training input value array 107 prior to commencement of the subject training phase or codified as the training input value array during the actual training phase. Following frame 402, the method advances to frame 404. In frame 404, the method includes organizing the corrective weights 112 of the plurality of synapses 118 in the corrective weight array 119A. As described above with respect to the structure of the p-net 100B, each synapse 118 is connected to one of the plurality of inputs 102 and includes a plurality of corrective weights 112.
[00204] After frame 404, the method proceeds to frame 406, in which the method includes generating the neuron sum array 120A via the plurality of neurons 116. As described above with respect to the structure of the p-net 100B, each neuron 116 has at least one output 117 and is connected with at least one of the plurality of inputs 102 via one of the plurality of synapses 118. Furthermore, each neuron 116 is configured to add up the weight values of the corrective weights 112 corresponding to each synapse 118 connected to the respective neuron. Following frame 406, in frame 408, the method includes receiving, via the controller 122A, desired images 124 organized as the desired output value array 126A. After frame 408, the method proceeds to frame 410, in which the method includes determining, via the controller 122A, the deviation 128 of the neuron sum array 120A from the desired output value array 126A and thereby generating the deviation array 132. [00205] Following frame 410, the method advances to frame 412. In frame 412, the method includes modifying, via the controller 122A, the corrective weight array 119A using the determined deviation array 132. The modified corrective weight values of the modified corrective weight array 119A may subsequently be added or summed up and then used to determine a new neuron sum array 120A. The summed modified corrective weight values of the modified corrective weight array 119A may then serve to reduce or minimize the deviation of the neuron sum array 120A from the desired output value array 126A and generate the trained corrective weight array 134A. The deviation array 132 may be determined as sufficiently minimized when the deviation 128 of the neuron sum array 120A from the desired output value array 126A is within the acceptable range 139 of the array of target deviation 138, as described above with respect to the structure of the p-net 100C. The trained corrective weight array 134A includes the trained corrective weights 112A determined using the deviation array 132 and thereby trains the p-net 100B.
[00206] As described above with respect to the structure of the p-net 100B, each of the training input value array 107, the corrective weight array 119A, neuron sum array 120A, desired output value array 126A, deviation array 132, trained corrective weight array 134A, and target deviation array 138 may be organized, respectively, as the training input value matrix 141, corrective weight matrix 142, neuron sum matrix 143, desired output value matrix 144, deviation matrix 145, trained corrective weight matrix 146, and target deviation matrix 148. In frame 412, the method may further include partitioning, via the controller 122A, at least one of the respective training input value matrix 141, input value matrix 141A, corrective weight matrix 142, neuron sum matrix 143, and desired output value matrix 144 into respective sub-matrices. Such resultant sub-matrices may be communicated to the data processor 150 for separate mathematical operations therewith to thereby facilitate concurrent data processing and an increase in speed of training of the p-net 100B.
[00207] In frame 412 the method may also include modifying, via the controller 122A, the corrective weight matrix 142 by applying an algebraic matrix operation to the training input value matrix 141 and the corrective weight matrix to thereby train the p-net 100B. Such a mathematical matrix operation may include determining a mathematical product of the training input value matrix 141 and the corrective weight matrix 142 to thereby form the current training epoch weight matrix 151. In frame 412 the method may additionally include subtracting, via the controller 122A, the neuron sum matrix 143 from the desired output value matrix 144 to generate the matrix of deviation of neuron sums 153. Also, in frame 412 the method may include dividing, via the controller 122A, the matrix of deviation of neuron sums 153 by the number of inputs connected to the respective neuron 116 to generate the matrix of deviation per neuron input 155.
[00208] Furthermore, in frame 412 the method may include determining, via the controller 122A, the number of times each corrective weight 112 was used during one training epoch of the p-net 100B. Moreover, the method may include forming, via the controller 122A, the averaged deviation matrix 157 for the one training epoch using the determined number of times each corrective weight 112 was used during the particular training epoch. For example, such an operation may include dividing, element-by-element, the matrix of deviation per neuron input by the determined number of times each corrective weight was used during the particular training epoch to obtain an averaged deviation for each corrective weight 112 used during the one training epoch, thereby forming the averaged deviation matrix 157 for the one training epoch.
[00209] Additionally, other matrix-based operations may be employed in frame 412 to form the averaged deviation matrix 157 for the one training epoch using, for example, the arithmetic mean, geometric mean, harmonic mean, root mean square, etc. Also, in frame 412 the method may include adding, via the controller 122A, the averaged deviation matrix 157 for the one training epoch to the corrective weight matrix 142 to thereby generate the trained corrective weight matrix 146 and complete the particular training epoch. Accordingly, by permitting matrix operations to be applied to all the corrective weights 112 in parallel, the method 400 facilitates concurrent, and therefore faster, training of the p-net 100B in generating the trained p-net 100C.
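By way of a non-limiting illustration, the Python sketch below shows how the alternative averaging operations named above might be applied to the deviations accumulated for a single corrective weight over one epoch; the sign handling for the geometric, harmonic, and root-mean-square variants is an assumption introduced so that signed deviations can be averaged, and is not prescribed by the present disclosure.

import numpy as np

def average_deviation(dev_per_use, kind="arithmetic"):
    # dev_per_use: deviations recorded each time one corrective weight was used in an epoch.
    sign = np.sign(dev_per_use.mean())           # restore the sign for magnitude-based means
    mag = np.abs(dev_per_use) + 1e-12            # avoid log or division trouble at zero
    if kind == "arithmetic":
        return dev_per_use.mean()
    if kind == "rms":
        return sign * np.sqrt((dev_per_use ** 2).mean())
    if kind == "geometric":
        return sign * np.exp(np.log(mag).mean())
    if kind == "harmonic":
        return sign * mag.size / (1.0 / mag).sum()
    raise ValueError(f"unknown averaging kind: {kind}")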
[00210] Following frame 412, method 400 may include returning to frame 402 to perform additional training epochs until the deviation array 132 is sufficiently minimized. In other words, additional training epochs may be performed to converge the neuron sum array 120A on the desired output value array 126A to within the predetermined deviation or error value, such that the p-net 100B may be considered trained and ready for operation with new input images 106. Accordingly, after frame 412, the method may proceed to frame 414 for image recognition using the trained p-net 100C (shown in Figure 36).
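By way of a non-limiting illustration, the Python sketch below repeats training epochs until the deviation of the neuron sums from the desired outputs falls within an acceptable range or an epoch limit is reached; the shapes, thresholds, and per-weight update are illustrative assumptions consistent with the sketches above rather than the literal procedure of the present disclosure.

import numpy as np

def train(C, W, O, m, acceptable_range=1e-3, max_epochs=1000):
    # C: (images, weights) coded training inputs; W: (weights, neurons); O: desired outputs.
    S = np.maximum(C.sum(axis=0), 1.0)            # usage count of each corrective weight
    for _ in range(max_epochs):
        E = O - C @ W                             # deviation of the neuron sums
        if np.abs(E).max() <= acceptable_range:   # deviation within the acceptable range
            break                                 # the p-net is considered trained
        W = W + (C.T @ (E / m)) / S[:, None]      # add the averaged deviation to the weights
    return W

C = np.random.randint(0, 2, size=(3, 20)).astype(float)
W_trained = train(C, np.zeros((20, 2)), np.random.rand(3, 2), m=4)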
[00211] In the embodiment of image recognition using the trained p-net 100C, in frame 414, the method 400 includes receiving the input images 106 via the plurality of inputs 102. As described above with respect to the structure of the p-net 100C, the input images 106 may be either received as the input value array 107A or codified as the input value array during recognition of the images by the p-net 100C. Following frame 414, in frame 416, the method includes attributing to each synapse 118 a plurality of trained corrective weights 112A of the trained corrective weight array 134A. After frame 416, the method advances to frame 418.
[00212] In frame 418, the method includes adding up the weight values of the trained corrective weights 112A corresponding to each synapse 118 connected to the respective neuron 116. As described above with respect to the structure of the p-net 100B, such summing of the weight values of the trained corrective weights 112A enables the plurality of neurons 116 to generate the recognized images array 136, thereby providing recognition of the input images 106. As described above with respect to the structure of the p-net 100C, in addition to the matrices 141, 142, 143, 144, 145, 146, and 148 used for training, the input value array 107A and the recognized images array 136 may be organized, respectively, as the input value matrix 141A and the recognized images matrix 147.
[00213] In frame 418, the method may also include partitioning, via the controller 122A, any of the employed matrices, such as the input value matrix 141A, into respective sub-matrices. Such resultant sub-matrices may be communicated to the data processor 150 for separate mathematical operations therewith to thereby facilitate concurrent data processing and an increase in speed of image recognition by the p-net 100C. Analogous to the effect matrix operations impart to the training portion of the method 400 in frames 402-412, the image recognition portion in frames 414-418 benefits from enhanced speed when algebraic matrix operations are applied in parallel to the matrices or sub-matrices of the trained p-net 100C. Accordingly, by permitting matrix operations to be applied to all the trained corrective weights 112A in parallel, the method 400 facilitates concurrent, and therefore faster, image recognition using the p-net 100C. Following frame 418, the method may return to frame 402 for additional training, as described with respect to Figures 34-36, if the achieved image recognition is deemed insufficiently precise, or the method may conclude in frame 420.
[00214] The detailed description and the drawings or figures are supportive and descriptive of the disclosure, but the scope of the disclosure is defined solely by the claims. While some of the best modes and other embodiments for carrying out the claimed disclosure have been described in detail, various alternative designs and embodiments exist for practicing the disclosure defined in the appended claims. Furthermore, the embodiments shown in the drawings or the characteristics of various embodiments mentioned in the present description are not necessarily to be understood as embodiments independent of each other. Rather, it is possible that each of the characteristics described in one of the examples of an embodiment may be combined with one or a plurality of other desired characteristics from other embodiments, resulting in other embodiments not described in words or by reference to the drawings. Accordingly, such other embodiments fall within the framework of the scope of the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A neural network comprising:
a plurality of inputs to the neural network configured to receive training images, wherein the training images are one of received as a training input value array and codified as the training input value array during training of the neural network;
a plurality of synapses, wherein each synapse is connected to one of the plurality of inputs and includes a plurality of corrective weights, wherein each corrective weight is defined by a weight value, and wherein the corrective weights of the plurality of synapses are organized in a corrective weight array;
a plurality of neurons, wherein each neuron has at least one output and is connected with at least one of the plurality of inputs via at least one of the plurality of synapses, and wherein each neuron is configured to add up the weight values of the corrective weights corresponding to each synapse connected to the respective neuron, such that the plurality of neurons generate a neuron sum array; and
a controller configured to:
receive desired images organized as a desired output value array;
determine a deviation of the neuron sum array from the desired output value array and generate a deviation array; and
modify the corrective weight array using the determined deviation array, such that adding up the modified corrective weight values to determine the neuron sum array reduces the deviation of the neuron sum array from the desired output value array to generate a trained corrective weight array and thereby facilitate concurrent training of the neural network.
2. The neural network of claim 1, wherein in a trained neural network: the plurality of inputs to the neural network are configured to receive input images, wherein the input images are one of received as an input value array and codified as the input value array during recognition of the images by the neural network; each synapse includes a plurality of trained corrective weights of the trained corrective weight array; and
each neuron is configured to add up the weight values of the trained corrective weights corresponding to each synapse connected to the respective neuron, such that the plurality of neurons generate a recognized images array, thereby providing recognition of the input images.
3. The neural network of claim 2, further comprising a set of distributors, wherein the set of distributors is configured to codify each of the training images and input images as the respective training input value array and input value array, and wherein the set of distributors is operatively connected to the plurality of inputs for receiving the respective training images and input images.
4. The neural network of claim 1, wherein the controller is additionally programmed with an array of target deviation of the neuron sum array from the desired output value array, and wherein the controller is additionally configured to complete training of the neural network when the deviation of the neuron sum array from the desired output value array is within an acceptable range of the array of target deviation.
5. The neural network of claim 2, wherein the training input value array, input value array, corrective weight array, neuron sum array, desired output value array, deviation array, trained corrective weight array, recognized image array, and target deviation array is organized, respectively, as a training input value matrix, input value matrix, corrective weight matrix, neuron sum matrix, desired output value matrix, deviation matrix, trained corrective weight matrix, recognized image matrix, and target deviation matrix.
6. The neural network of claim 5, further comprising a plurality of data processors, wherein the controller is additionally configured to partition at least one of the respective input value, training input value, corrective weight, neuron sum, and desired output value matrices into respective sub-matrices and communicate a plurality of the resultant sub-matrices to the plurality of data processors for separate parallel mathematical operations therewith to thereby facilitate concurrent data processing and an increase in speed of one of image recognition of the input value matrix and training of the neural network.
7. The neural network of claim 5, wherein the controller modifies the corrective weight matrix by applying an algebraic matrix operation to the training input value matrix and the corrective weight matrix to thereby train the neural network.
8. The neural network of claim 7, wherein the mathematical matrix operation includes a determination of a mathematical product of the training input value and corrective weight matrices to thereby form a current training epoch weight matrix.
9. The neural network of claim 8, wherein the controller is additionally configured to:
subtract the neuron sum matrix from the desired output value matrix to generate a matrix of deviation of neuron sums; and
divide the matrix of deviation of neuron sums by the number of inputs connected to the respective neuron to generate a matrix of deviation per neuron input.
10. The neural network of claim 9, wherein the controller is additionally configured to:
determine a number of times each corrective weight was used during one training epoch of the neural network;
form an averaged deviation matrix for the one training epoch using the determined number of times each corrective weight was used during the one training epoch; and
add the averaged deviation matrix for the one training epoch to the corrective weight matrix to thereby generate the trained corrective weight matrix and complete the one training epoch.
11. A method of operating a neural network, comprising: receiving training images via a plurality of inputs to the neural network, wherein the training images are one of received as a training input value array and codified as the training input value array during training of the neural network;
organizing corrective weights of a plurality of synapses in a corrective weight array, wherein each synapse is connected to one of the plurality of inputs and includes a plurality of corrective weights, and wherein each corrective weight is defined by a weight value;
generating a neuron sum array via a plurality of neurons, wherein each neuron has at least one output and is connected with at least one of the plurality of inputs via one of the plurality of synapses, and wherein each neuron is configured to add up the weight values of the corrective weights corresponding to each synapse connected to the respective neuron;
receiving, via a controller, desired images organized as a desired output value array;
determining, via the controller, a deviation of the neuron sum array from the desired output value array and generate a deviation array; and
modifying, via the controller, the corrective weight array using the determined deviation array, such that adding up the modified corrective weight values to determine the neuron sum array reduces the deviation of the neuron sum array from the desired output value array to generate a trained corrective weight array and thereby facilitate concurrent training of the neural network.
12. The method of claim 11, wherein in a trained neural network:
receiving input images via the plurality of inputs to the neural network, wherein the input images are one of received as an input value array and codified as the input value array during recognition of the images by the neural network;
attributing to each synapse a plurality of trained corrective weights of the trained corrective weight array, wherein each trained corrective weight is defined by a weight value; and
adding up the weight values of the trained corrective weights corresponding to each synapse connected to the respective neuron, such that the plurality of neurons generate a recognized images array, thereby providing recognition of the input images.
13. The method of claim 12, further comprising codifying, via a set of distributors, each of the training images and input images as the respective training input value array and input value array, wherein the set of distributors is operatively connected to the plurality of inputs for receiving the respective training images and input images.
14. The method of claim 11, wherein the controller is additionally programmed with an array of target deviation of the neuron sum array from the desired output value array, the method further comprising completing, via the controller, training of the neural network when the deviation of the neuron sum array from the desired output value array is within an acceptable range of the array of target deviation.
15. The method of claim 12, further comprising organizing the training input value array, input value array, corrective weight array, neuron sum array, desired output value array, deviation array, trained corrective weight array, recognized image array, and target deviation array, respectively, as a training input value matrix, input value matrix, corrective weight matrix, neuron sum matrix, desired output value matrix, deviation matrix, trained corrective weight matrix, recognized image matrix, and target deviation matrix.
16. The method of claim 15, wherein the neural network additionally includes a plurality of data processors, the method further comprising partitioning, via the controller, at least one of the respective input value, training input value, corrective weight, neuron sum, and desired output value matrices into respective sub-matrices and communicating a plurality of the resultant sub-matrices to the plurality of data processors for separate parallel mathematical operations therewith to thereby facilitate concurrent data processing and an increase in speed of one of image recognition of the input value matrix and training of the neural network.
17. The method of claim 15, further comprising modifying, via the controller, the corrective weight matrix by applying an algebraic matrix operation to the training input value matrix and the corrective weight matrix to thereby train the neural network.
18. The method of claim 17, wherein applying the mathematical matrix operation includes determining a mathematical product of the training input value and corrective weight matrices to thereby form a current training epoch weight matrix.
19. The method of claim 18, further comprising:
subtracting, via the controller, the neuron sum matrix from the desired output value matrix to generate a matrix of deviation of neuron sums; and
dividing, via the controller, the matrix of deviation of neuron sums by the number of inputs connected to the respective neuron to generate a matrix of deviation per neuron input.
20. The method of claim 19, further comprising:
determining, via the controller, a number of times each corrective weight was used during one training epoch of the neural network;
forming, via the controller, an averaged deviation matrix for the one training epoch using the determined number of times each corrective weight was used during the one training epoch; and
adding, via the controller, the averaged deviation matrix for the one training epoch to the corrective weight matrix to thereby generate the trained corrective weight matrix and complete the one training epoch.
21. A non-transitory computer-readable storage device for operating an artificial neural network, the storage device encoded with instructions executable to:
receive training images via a plurality of inputs to the neural network, wherein the training images are one of received as a training input value array and codified as the training input value array during training of the neural network;
organize corrective weights of a plurality of synapses in a corrective weight array, wherein each synapse is connected to one of the plurality of inputs and includes a plurality of corrective weights, and wherein each corrective weight is defined by a weight value;
generate a neuron sum array via a plurality of neurons, wherein each neuron has at least one output and is connected with at least one of the plurality of inputs via one of the plurality of synapses, and wherein each neuron is configured to add up the weight values of the corrective weights corresponding to each synapse connected to the respective neuron;
receive desired images organized as a desired output value array; determine a deviation of the neuron sum array from the desired output value array and generate a deviation array; and
modify the corrective weight array using the determined deviation array, such that adding up the modified corrective weight values to determine the neuron sum array reduces the deviation of the neuron sum array from the desired output value array to generate a trained corrective weight array and thereby facilitate concurrent training of the neural network.
22. The storage device of claim 21, further encoded with instructions executable to:
receive input images via the plurality of inputs to the neural network, wherein the input images are one of received as an input value array and codified as the input value array during recognition of the images by the neural network;
attribute to each synapse a plurality of trained corrective weights of the trained corrective weight array, wherein each trained corrective weight is defined by a weight value; and
add up the weight values of the trained corrective weights corresponding to each synapse connected to the respective neuron, such that the plurality of neurons generate a recognized images array, thereby providing recognition of the input images.
23. An apparatus for operating an artificial neural network, comprising: a means for receiving training images via a plurality of inputs to the neural network, wherein the training images are one of received as a training input value array and codified as the training input value array during training of the neural network;
a means for organizing corrective weights of a plurality of synapses in a corrective weight array, wherein each synapse is connected to one of the plurality of inputs and includes a plurality of corrective weights, and wherein each corrective weight is defined by a weight value;
a means for generating a neuron sum array via a plurality of neurons, wherein each neuron has at least one output and is connected with at least one of the plurality of inputs via one of the plurality of synapses, and wherein each neuron is configured to add up the weight values of the corrective weights corresponding to each synapse connected to the respective neuron;
a means for receiving desired images organized as a desired output value array;
a means for determining a deviation of the neuron sum array from the desired output value array and generate a deviation array; and
a means for modifying the corrective weight array using the determined deviation array, such that adding up the modified corrective weight values to determine the neuron sum array reduces the deviation of the neuron sum array from the desired output value array to generate a trained corrective weight array and thereby facilitate concurrent training of the neural network.
24. The apparatus of claim 23, wherein in a trained neural network:
a means for receiving input images via the plurality of inputs to the neural network, wherein the input images are one of received as an input value array and codified as the input value array during recognition of the images by the neural network;
a means for attributing to each synapse a plurality of trained corrective weights of the trained corrective weight array, wherein each trained corrective weight is defined by a weight value; and
a means for adding up the weight values of the trained corrective weights corresponding to each synapse connected to the respective neuron, such that the plurality of neurons generate a recognized images array, thereby providing recognition of the input images.
EP17811082.1A 2016-06-09 2017-06-09 Neural network and method of neural network training Withdrawn EP3469521A4 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US15/178,137 US9619749B2 (en) 2014-03-06 2016-06-09 Neural network and method of neural network training
US15/449,614 US10423694B2 (en) 2014-03-06 2017-03-03 Neural network and method of neural network training
PCT/US2017/036758 WO2017214507A1 (en) 2016-06-09 2017-06-09 Neural network and method of neural network training

Publications (2)

Publication Number Publication Date
EP3469521A1 true EP3469521A1 (en) 2019-04-17
EP3469521A4 EP3469521A4 (en) 2020-02-26

Family

ID=60579026

Family Applications (1)

Application Number Title Priority Date Filing Date
EP17811082.1A Withdrawn EP3469521A4 (en) 2016-06-09 2017-06-09 Neural network and method of neural network training

Country Status (5)

Country Link
EP (1) EP3469521A4 (en)
JP (1) JP7041078B2 (en)
KR (1) KR102558300B1 (en)
CN (1) CN109416758A (en)
WO (1) WO2017214507A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102018202095A1 (en) * 2018-02-12 2019-08-14 Robert Bosch Gmbh Method and apparatus for checking neuron function in a neural network
US11454968B2 (en) * 2018-02-28 2022-09-27 Micron Technology, Inc. Artificial neural network integrity verification
US11562231B2 (en) * 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11640522B2 (en) * 2018-12-13 2023-05-02 Tybalt, Llc Computational efficiency improvements for artificial neural networks
CN109799439B (en) * 2019-03-29 2021-04-13 云南电网有限责任公司电力科学研究院 Insulating multi-angle inclined scratch cable wetting experiment evaluation method and device
CN110046513B (en) * 2019-04-11 2023-01-03 长安大学 Plaintext associated image encryption method based on Hopfield chaotic neural network
CN110111234B (en) * 2019-04-11 2023-12-15 上海集成电路研发中心有限公司 Image processing system architecture based on neural network
CN110135557B (en) * 2019-04-11 2023-06-02 上海集成电路研发中心有限公司 Neural network topology architecture of image processing system
KR102675806B1 (en) * 2019-05-03 2024-06-18 삼성전자주식회사 Image processing apparatus and image processing method thereof
US11361218B2 (en) * 2019-05-31 2022-06-14 International Business Machines Corporation Noise and signal management for RPU array
US11714999B2 (en) * 2019-11-15 2023-08-01 International Business Machines Corporation Neuromorphic device with crossbar array structure storing both weights and neuronal states of neural networks
KR102410166B1 (en) * 2019-11-27 2022-06-20 고려대학교 산학협력단 Deep neural network accelerator using heterogeneous multiply-accumulate unit
CN111461308B (en) * 2020-04-14 2023-06-30 中国人民解放军国防科技大学 Memristor neural network and weight training method
JP7493398B2 (en) 2020-07-03 2024-05-31 日本放送協会 Conversion device, learning device, and program
CN111815640B (en) * 2020-07-21 2022-05-03 江苏经贸职业技术学院 Memristor-based RBF neural network medical image segmentation algorithm
CN112215344A (en) * 2020-09-30 2021-01-12 清华大学 Correction method and design method of neural network circuit
US20220138579A1 (en) * 2020-11-02 2022-05-05 International Business Machines Corporation Weight repetition on rpu crossbar arrays
CN113570048B (en) * 2021-06-17 2022-05-31 南方科技大学 Circuit simulation-based memristor array neural network construction and optimization method
KR102514652B1 (en) * 2021-11-19 2023-03-29 서울대학교산학협력단 Weight transfer apparatus for neuromorphic devices and weight transfer method using the same
CN115358389B (en) * 2022-09-01 2024-08-20 清华大学 Training error reduction method and device for neural network, electronic equipment and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0421639B1 (en) 1989-09-20 1998-04-22 Fujitsu Limited Parallel data processing system
CN101980290B (en) * 2010-10-29 2012-06-20 西安电子科技大学 Method for fusing multi-focus images in anti-noise environment
US10248675B2 (en) * 2013-10-16 2019-04-02 University Of Tennessee Research Foundation Method and apparatus for providing real-time monitoring of an artifical neural network
CA2941352C (en) * 2014-03-06 2022-09-20 Progress, Inc. Neural network and method of neural network training

Also Published As

Publication number Publication date
JP7041078B2 (en) 2022-03-23
KR102558300B1 (en) 2023-07-21
KR20190016539A (en) 2019-02-18
WO2017214507A1 (en) 2017-12-14
EP3469521A4 (en) 2020-02-26
CN109416758A (en) 2019-03-01
JP2019519045A (en) 2019-07-04

Similar Documents

Publication Publication Date Title
US9619749B2 (en) Neural network and method of neural network training
EP3469521A1 (en) Neural network and method of neural network training
US9390373B2 (en) Neural network and method of neural network training
TWI655587B (en) Neural network and method of neural network training
KR102545128B1 (en) Client device with neural network and system including the same
WO2019091020A1 (en) Weight data storage method, and neural network processor based on method
CN109919183B (en) Image identification method, device and equipment based on small samples and storage medium
JP7247878B2 (en) Answer learning device, answer learning method, answer generation device, answer generation method, and program
JP2019032808A (en) Mechanical learning method and device
CN114387486A (en) Image classification method and device based on continuous learning
CN112446888B (en) Image segmentation model processing method and processing device
CN113963165B (en) Small sample image classification method and system based on self-supervision learning
Hossain et al. Detecting tomato leaf diseases by image processing through deep convolutional neural networks
CN111602145A (en) Optimization method of convolutional neural network and related product
CN115063374A (en) Model training method, face image quality scoring method, electronic device and storage medium
TWI722383B (en) Pre feature extraction method applied on deep learning
US20220343162A1 (en) Method for structure learning and model compression for deep neural network
CN115796235B (en) Method and system for training generator model for supplementing missing data
JP2019095894A (en) Estimating device, learning device, learned model, estimation method, learning method, and program
WO2023084759A1 (en) Image processing device, image processing method, and program
Takanashi et al. Image Classification Using l 1-fidelity Multi-layer Convolutional Sparse Representation
Daniels et al. Efficient Model Adaptation for Continual Learning at the Edge
Hauser Training capsules as a routing-weighted product of expert neurons
Vodianyk et al. Evolving Node Transfer Functions in Deep Neural Networks for Pattern Recognition
Zazo et al. Examples of convolutional neural networks

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20181112

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20200124

RIC1 Information provided on ipc code assigned before grant

Ipc: G06N 3/02 20060101AFI20200120BHEP

Ipc: G06F 17/16 20060101ALI20200120BHEP

Ipc: G06N 3/04 20060101ALI20200120BHEP

Ipc: G06N 3/063 20060101ALI20200120BHEP

Ipc: G06F 15/00 20060101ALI20200120BHEP

Ipc: G06N 3/08 20060101ALI20200120BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20211208

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20220621