WO2017214507A1 - Neural network and method of neural network training - Google Patents

Neural network and method of neural network training

Info

Publication number
WO2017214507A1
Authority
WO
WIPO (PCT)
Prior art keywords
array
training
corrective
neuron
matrix
Prior art date
Application number
PCT/US2017/036758
Other languages
English (en)
Inventor
Dmitri PESCIANSCHI
Original Assignee
Progress, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/178,137 external-priority patent/US9619749B2/en
Priority claimed from US15/449,614 external-priority patent/US10423694B2/en
Application filed by Progress, Inc. filed Critical Progress, Inc.
Priority to CN201780035716.7A priority Critical patent/CN109416758A/zh
Priority to KR1020197000226A priority patent/KR102558300B1/ko
Priority to JP2018564317A priority patent/JP7041078B2/ja
Priority to EP17811082.1A priority patent/EP3469521A4/fr
Publication of WO2017214507A1 publication Critical patent/WO2017214507A1/fr

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/065Analogue means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Definitions

  • The present application claims the benefit of U.S. Provisional Application Serial No. 62/106,389 filed January 22, 2015, and also claims the benefit of U.S. Provisional Application Serial No. 62/173,163 filed June 9, 2015, the entire content of each of which is incorporated by reference.
  • a neural network includes a plurality of inputs to the neural network configured to receive training images.
  • the training images are either received by the plurality of inputs as a training input value array or codified as the training input value array during training of the neural network, i.e., after having been received by the plurality of inputs.
  • the neural network also includes a plurality of synapses. Each synapse is connected to one of the plurality of inputs and includes a plurality of corrective weights. Each corrective weight is defined by a weight value, and the corrective weights of the plurality of synapses are organized in a corrective weight array.
  • the controller is also configured to determine a deviation of the neuron sum array from the desired output value array and generate a deviation array.
  • the controller is additionally configured to modify the corrective weight array using the determined deviation array. Adding up the modified corrective weight values to determine the neuron sum array reduces the deviation of the neuron sum array from the desired output value array, i.e., compensates for errors generated by the neuron network during training, and generates a trained corrective weight array to thereby facilitate concurrent or parallel training of the neural network.
  • the controller may additionally be programmed with an array of target deviation of the neuron sum array from the desired output value array. Furthermore, the controller may be configured to complete training of the neural network when the deviation of the neuron sum array from the desired output value array is within an acceptable range of the array of target deviation.
  • the neural network may additionally include a plurality of data processors.
  • the controller may be additionally configured to partition at least one of the respective input value, training input value, corrective weight, neuron sum, and desired output value matrices into respective sub-matrices and communicate a plurality of the resultant sub-matrices to the plurality of data processors for separate parallel mathematical operations therewith.
  • partitioning of any of the subject matrices into respective sub-matrices facilitates concurrent or parallel data processing and an increase in speed of either image recognition of the input value matrix or training of the neural network.
  • Such concurrent or parallel data processing also permits scalability of the neural network.
  • the controller may modify the corrective weight matrix by applying an algebraic matrix operation to the training input value matrix and the corrective weight matrix to thereby train the neural network.
  • the mathematical matrix operation may include a determination of a mathematical product of the training input value and corrective weight matrices to thereby form a current training epoch weight matrix.
  • Each neuron has at least one output and is connected with at least one of the plurality of inputs via one of the plurality of synapses, and is configured to add up the weight values of the corrective weights selected from each synapse connected to the respective neuron and thereby generate a neuron sum.
  • the output of each neuron provides the respective neuron sum to establish an operational output signal of the neural network.
  • the neural network may also include a weight correction calculator configured to receive a desired output signal having a value, determine a deviation of the neuron sum from the desired output signal value, and modify respective corrective weight values established by the corresponding memory elements using the determined deviation. In such a case, adding up the modified corrective weight values to determine the neuron sum is intended to minimize the deviation of the neuron sum from the desired output signal value to thereby generate a trained neural network.
  • each of the plurality of synapses may be configured to accept one or more additional corrective weights established by the respective memory elements.
  • the neural network may be configured to remove from the respective synapses, during or after training of the neural network, one or more corrective weights established by the respective memory elements. Such removal of some corrective weights may permit the neural network to retain only a number of memory elements required to operate the neural network.
  • the electrical device may be configured as one of a resistor, a memistor, a memristor, a transistor, a capacitor, a field-effect transistor, a photoresistor, such as a light-dependent resistor (LDR), or a magnetic dependent resistor (MDR).
  • Each memory element may be established by a block of electrical resistors and include a selector device configured to select one or more electrical resistors from the block using the determined deviation to establish each corrective weight.
  • the block of electrical resistors may additionally include electrical capacitors.
  • each memory element may be established by a block having both, electrical resistors and electrical capacitors.
  • the selector device may then be additionally configured to select capacitors using the determined deviation to establish each corrective weight.
  • the neural network may be configured as one of an analog, digital, and digital-analog network.
  • at least one of the plurality of inputs, the plurality of synapses, the memory elements, the set of distributors, the set of neurons, the weight correction calculator, and the desired output signal may be configured to operate in an analog, digital, and digital-analog format.
  • each neuron may be established by one of a series and a parallel communication channel, such as an electrical wire, or a series or parallel bus.
  • the weight correction calculator may be established as a set of differential amplifiers. Furthermore, each differential amplifier may be configured to generate a respective correction signal.
  • Each of the distributors may be a demultiplexer configured to select one or more corrective weights from the plurality of corrective weights in response to the received input signal.
  • Each distributor may be configured to convert the received input signal into a binary code and select one or more corrective weights from the plurality of corrective weights in correlation with the binary code.
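
As a hedged illustration only (the function and parameter names below are ours, not the patent's), the selection a distributor performs can be sketched as a simple quantizer: the input value is converted into an interval index, its "binary code", and that index picks the stored corrective weight:

```python
def select_interval(value, lo, hi, num_intervals):
    """Quantize an input value into an interval index (its 'binary code')."""
    value = min(max(value, lo), hi)              # clamp to the signal range
    width = (hi - lo) / num_intervals
    return min(int((value - lo) / width), num_intervals - 1)

# weights[d] holds the corrective weight stored for interval d on one synapse.
weights = [0.0] * 10
d = select_interval(0.37, lo=0.0, hi=1.0, num_intervals=10)
selected_weight = weights[d]                     # the distributor's selection
```
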
  • the neural network may be programmed into an electronic device having a memory, and wherein each memory element is stored in the memory of the electronic device.
  • a method of operating a utility neural network includes processing data via the utility neural network using modified corrective weight values established by a separate analogous neural network during training thereof.
  • the method also includes establishing an operational output signal of the utility neural network using the modified corrective weight values established by the separate analogous neural network.
  • the separate analogous neural network was trained via receiving, via an input to the neural network, a training input signal having a training input value; communicating the training input signal to a distributor operatively connected to the input; selecting, via the distributor, in correlation with the training input value, one or more corrective weights from a plurality of corrective weights, wherein each corrective weight is defined by a weight value and is positioned on a synapse connected to the input; adding up the weight values of the selected corrective weights, via a neuron connected with the input via the synapse and having at least one output, to generate a neuron sum; receiving, via a weight correction calculator, a desired output signal having a value; determining, via the weight correction calculator, a deviation of the neuron sum from the desired output signal value; and modifying, via the weight correction calculator, respective corrective weight values using the determined deviation to establish the modified corrective weight values, such that adding up the modified corrective weight values to determine the neuron sum minimizes the deviation of the neuron sum from the desired output signal value.
  • the utility neural network and the trained separate analogous neural network may include a matching neural network structure including a number of inputs, corrective weights, distributors, neurons, and synapses.
  • each corrective weight may be established by a memory element that retains a respective weight value.
  • FIGURE 1 is a schematic illustration of a prior art, classical artificial neural network.
  • FIGURE 2 is a schematic illustration of a "progressive neural network” (p- net) having a plurality of synapses, a set of distributors, and a plurality of corrective weights associated with each synapse.
  • FIGURE 3B is a schematic illustration of a portion of the p-net shown in Figure 2, having a plurality of synapses and a set of synaptic weights positioned downstream of the respective plurality of corrective weights.
  • FIGURE 4C is a schematic illustration of a portion of the p-net shown in Figure 2 having a single distributor for all synapses of a given input, and having one synaptic weight positioned upstream of each distributor and a set of synaptic weights positioned downstream of the respective plurality of corrective weights.
  • FIGURE 5 is a schematic illustration of division of input signal value range into individual intervals in the p-net shown in Figure 2.
  • FIGURE 6A is a schematic illustration of one embodiment of a distribution for values of coefficient of impact of corrective weights in the p-net shown in Figure 2.
  • FIGURE 6B is a schematic illustration of another embodiment of the distribution for values of coefficient of impact of corrective weights in the p-net shown in Figure 2.
  • FIGURE 8 is a schematic illustration of an embodiment of the p-net shown in Figure 2 trained for recognition of two distinct images, wherein the p-net is configured to recognize a picture that includes some features of each image;
  • FIGURE 15 is a schematic illustration of a general formation sequence of the p-net shown in Figure 2.
  • FIGURE 16 is a schematic illustration of representative analysis and preparation of data for formation of the p-net shown in Figure 2.
  • FIGURE 22 is a schematic illustration of extending of neuron sums during training of the p-net shown in Figure 2.
  • FIGURE 25 is a schematic illustration of a specific embodiment of the p-net having each of the plurality of corrective weights established by the memory element; the p-net being depicted in the process of image recognition.
  • FIGURE 28 is a schematic illustration of twin parallel branches of memristors in the representative p-net.
  • FIGURE 29 is a schematic illustration of the representative p-net using resistors.
  • FIGURE 30 is a schematic illustration of one embodiment of the memory element configured as a resistor in the p-net.
  • FIGURE 31 is a schematic illustration of another embodiment of the memory element configured as a resistor in the p-net.
  • FIGURE 32 is a schematic illustration of another embodiment of the memory element configured as variable impedance in the p-net.
  • FIGURE 36 is an illustration of the p-net in the process of image recognition, according to the disclosure.
  • a classical artificial neural network 10 typically includes input devices 12, synapses 14 with synaptic weights 16, neurons 18, including an adder 20 and activation function device 22, neuron outputs 24 and weight correction calculator 26. Each neuron 18 is connected through synapses 14 to two or more input devices 12.
  • the values of synaptic weights 16 are commonly represented using electrical resistance, conductivity, voltage, electric charge, magnetic property, or other parameters.
  • Supervised training of the classical neural network 10 is generally based on an application of a set of training pairs 28.
  • Each training pair 28 commonly consists of an input image 28-1 and a desired output image 28-2, a.k.a., a supervisory signal.
  • Training of the classical neural network 10 is typically provided as follows.
  • An input image in the form of a set of input signals (I_1-I_m) enters the input devices 12 and is transferred to the synaptic weights 16 with initial weights (W_i).
  • the value of the input signal is modified by the weights, typically by multiplying or dividing each signal (I_1-I_m) value by the respective weight. From the synaptic weights 16, the modified input signals are transferred to the respective neurons 18.
  • Each neuron 18 receives a set of signals from a group of synapses 14 related to the subject neuron 18.
  • the adder 20 included in the neuron 18 sums up all the input signals modified by the weights and received by the subject neuron.
  • Activation function devices 22 receive the respective resultant neuron sums and modify the sums according to mathematical function(s), thus forming respective output images as sets of neuron output signals.
  • Figure 2 shows a schematic view of a progressive neural network, hereinafter "progressive network" or "p-net" 100.
  • the p-net 100 includes a plurality or a set of inputs 102 of the p-net.
  • Each input 102 is configured to receive an input signal 104, wherein the input signals are represented as I_1, I_2 ... I_m in Figure 2.
  • Each input signal I_1, I_2 ... I_m represents a value of some characteristic(s) of an input image 106, for example, a magnitude, frequency, phase, signal polarization angle, or association with different parts of the input image 106.
  • Each input signal 104 has an input value, wherein together the plurality of input signals 104 generally describes the input image 106.
  • Each input value may be within a value range that lies between −∞ and +∞ and may be set in digital and/or analog forms.
  • the range of the input values may depend on a set of training images. In the simplest case, the range of the input values could be the difference between the smallest and largest values of input signals for all training images.
  • the range of the input values may be limited by eliminating input values that are deemed too high. For example, such limiting of the range of the input values may be accomplished via known statistical methods for variance reduction, such as importance sampling.
  • Another example of limiting the range of the input values may be designation of all signals that are lower than a predetermined minimum level to a specific minimum value and designation of all signals exceeding a predetermined maximum level to a specific maximum value.
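
A minimal sketch of the min/max designation just described, with illustrative limit values of our choosing:

```python
def limit_signal(value, min_level=0.0, max_level=255.0):
    """Designate signals below min_level to min_level and signals above
    max_level to max_level, thereby limiting the range of input values."""
    return min(max(value, min_level), max_level)

print(limit_signal(-12.0))   # 0.0
print(limit_signal(300.0))   # 255.0
```
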
  • the p-net 100 also includes a plurality or a set of synapses 118. Each synapse 118 is connected to one of the plurality of inputs 102, includes a plurality of corrective weights 112, and may also include a synaptic weight 108, as shown in Figure 2. Each corrective weight 112 is defined by a respective weight value.
  • the p-net 100 also includes a set of distributors 114. Each distributor 114 is operatively connected to one of the plurality of inputs 102 for receiving the respective input signal 104. Additionally, each distributor 114 is configured to select one or more corrective weights from the plurality of corrective weights 112 in correlation with the input value.
  • the p-net 100 additionally includes a set of neurons 116.
  • Each neuron 116 has at least one output 117 and is connected with at least one of the plurality of inputs 102 via one synapse 118.
  • Each neuron 116 is configured to add up or sum the corrective weight values of the corrective weights 112 selected from each synapse 118 connected to the respective neuron 116 and thereby generate and output a neuron sum 120, otherwise designated as Σ_n.
  • a separate distributor 114 may be used for each synapse 118 of a given input 102, as shown in Figures 3A, 3B, and 3C, or a single distributor may be used for all such synapses, as shown in Figures 4A, 4B, and 4C.
  • all corrective weights 112 are assigned initial values, which may change during the process of p-net training.
  • the initial value of the corrective weight 112 may be assigned as in the classical neural network 10, for example, the weights may be selected randomly, calculated with the help of a pre-determined mathematical function, selected from a predetermined template, etc.
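
Putting the elements described so far together, a forward pass of the p-net can be sketched in software as follows. The nested-list layout indexed by input, interval, and neuron, the uniform intervals, and the random ±0.1 initialization are illustrative assumptions, not the patent's specification:

```python
import random

num_inputs, num_intervals, num_neurons = 4, 10, 2

# W[i][d][n] approximates the corrective weight W_{i,d,n}; initial values
# are assigned randomly, as noted above, and change only during training.
W = [[[random.uniform(-0.1, 0.1) for _ in range(num_neurons)]
      for _ in range(num_intervals)]
     for _ in range(num_inputs)]

def interval_of(value, lo=0.0, hi=1.0):
    """Distributor: quantize an input value into an interval index."""
    value = min(max(value, lo), hi)
    return min(int((value - lo) / ((hi - lo) / num_intervals)),
               num_intervals - 1)

def neuron_sums(input_values):
    """Each neuron adds up the corrective weights selected on its synapses."""
    sums = [0.0] * num_neurons
    for i, v in enumerate(input_values):
        d = interval_of(v)
        for n in range(num_neurons):
            sums[n] += W[i][d][n]
    return sums

print(neuron_sums([0.1, 0.5, 0.7, 0.9]))
```
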
  • the p-net 100 also includes a weight correction calculator 122.
  • the weight correction calculator 122 is configured to receive a desired, i.e., predetermined, output signal 124.
  • the weight correction calculator 122 is also configured to determine a deviation 128 of the neuron sum 120 from the value of the desired output signal 124, a.k.a., training error, and modify respective corrective weight values using the determined deviation 128. Thereafter, summing the modified corrective weight values to determine the neuron sum 120 minimizes the deviation of the subject neuron sum from the value of the desired output signal 124 and, as a result, is effective for training the p-net 100.
  • the deviation 128 may also be described as the training error between the determined neuron sum 120 and the value of the desired output signal 124.
  • the input values of the input signal 104 only change in the process of general network setup, and are not changed during training of the p-net. Instead of changing the input value, training of the p-net 100 is provided by changing the values of the corrective weights 112.
  • each neuron 116 includes a summing function, where the neuron adds up the corrective weight values; the neuron 116 does not require, and, in fact, is characterized by the absence of, an activation function, such as provided by the activation function device 22 in the classical neural network 10.
  • weight correction during training is accomplished by changing synaptic weights 16, while in the p-net 100 the corresponding weight correction is provided by changing the values of the corrective weights 112, as shown in Figure 2.
  • the respective corrective weights 112 may be included in weight correction blocks 110 positioned on all or some of the synapses 118.
  • each synaptic and corrective weight may be represented either by a digital device, such as a memory cell, and/or by an analog device.
  • the values of the corrective weights 112 may be provided via an appropriate programmed algorithm, while in hardware emulations, known methods for memory control could be used.
  • the deviation 128 of the neuron sum 120 from the desired output signal 124 may be represented as a mathematically computed difference therebetween.
  • the generation of the respective modified corrective weights 112 may include apportionment of the computed difference to each corrective weight used to generate the neuron sum 120.
  • the generation of the respective modified corrective weights 112 will permit the neuron sum 120 to converge on the desired output signal value within a small number of epochs, in some cases needing only a single epoch, to rapidly train the p-net 100.
  • the apportionment of the mathematical difference among the corrective weights 112 used to generate the neuron sum 120 may include dividing the determined difference equally between each corrective weight used to generate the respective neuron sum 120.
  • the determination of the deviation 128 of the neuron sum 120 from the desired output signal value may include division of the desired output signal value by the neuron sum to thereby generate a deviation coefficient.
  • the modification of the respective corrective weights 112 may then include multiplication of each corrective weight used to generate the neuron sum 120 by the deviation coefficient.
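
The two deviation-handling variants in the preceding bullets can be sketched as follows; the function names are ours, and the example values are arbitrary:

```python
def correct_by_difference(selected_weights, desired, neuron_sum):
    """Apportion the computed difference equally among the contributing
    weights, as in the equal-division variant above."""
    share = (desired - neuron_sum) / len(selected_weights)
    return [w + share for w in selected_weights]

def correct_by_coefficient(selected_weights, desired, neuron_sum):
    """Multiply each contributing weight by the deviation coefficient
    (desired output divided by the neuron sum; assumes a nonzero sum)."""
    coefficient = desired / neuron_sum
    return [w * coefficient for w in selected_weights]

ws = [0.2, 0.5, 0.3]                             # weights that formed the sum
print(correct_by_difference(ws, 1.6, sum(ws)))   # each weight + 0.2
print(correct_by_coefficient(ws, 1.6, sum(ws)))  # each weight x 1.6
```
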
  • Each distributor 114 may additionally be configured to assign a plurality of coefficients of impact 134 to the plurality of corrective weights 112.
  • each coefficient of impact 134 may be assigned to one of the plurality of corrective weights 112 in some predetermined proportion to generate the respective neuron sum 120.
  • each coefficient of impact 134 may be assigned a "C_{i,d,n}" nomenclature, as shown in the Figures.
  • Each of the plurality of coefficients of impact 134 corresponding to the specific synapse 118 is defined by a respective impact distribution function 136.
  • the impact distribution function 136 may be the same either for all coefficients of impact 134 or only for the plurality of coefficients of impact 134 corresponding to a specific synapse 118.
  • Each of the plurality of input values may be received into a value range 137 divided into intervals or sub-divisions "d" according to an interval distribution function 140, such that each input value is received within a respective interval "d" and each corrective weight corresponds to one of such intervals.
  • Each distributor 114 may use the respective received input value to select the respective interval "d", and to assign the respective plurality of coefficients of impact 134 to the corrective weight 112 corresponding to the selected respective interval "d" and to at least one corrective weight corresponding to an interval adjacent to the selected respective interval, such as W_{i,d+1,n} or W_{i,d−1,n}.
  • the predetermined proportion of the coefficients of impact 134 may be defined according to a statistical distribution.
  • Generating the neuron sum 120 may include initially assigning respective coefficients of impact 134 to each corrective weight 112 according to the input value 102 and then multiplying the subject coefficients of impact by the values of the respective employed corrective weights 112. Each neuron 116 then sums the individual products of the corrective weight 112 and the assigned coefficient of impact 134 for all the synapses 118 connected thereto, as sketched below.
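
As a hedged sketch of this impact-weighted summation, the snippet below assumes a simple distribution in which the selected interval receives most of the impact and its immediate neighbors share the remainder; the 0.8/0.1 split is an illustrative choice, not taken from the patent:

```python
def impact_coefficients(d, num_intervals, main=0.8, side=0.1):
    """Assign C_{i,d,n}-style coefficients: most impact on interval d,
    the rest on the adjacent intervals (illustrative distribution)."""
    C = [0.0] * num_intervals
    C[d] = main
    if d > 0:
        C[d - 1] = side
    if d < num_intervals - 1:
        C[d + 1] = side
    return C

def weighted_neuron_contribution(weights_per_interval, d):
    """Sum of W * C over one synapse's intervals for one neuron."""
    C = impact_coefficients(d, len(weights_per_interval))
    return sum(w * c for w, c in zip(weights_per_interval, C))

print(weighted_neuron_contribution([0.1, 0.4, 0.3, 0.2], d=1))
# 0.1*0.1 + 0.4*0.8 + 0.3*0.1 = 0.36
```
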
  • Typically, formation of the p-net 100 will take place before the training of the p-net commences. However, in a separate embodiment, if during training the p-net 100 receives an input signal 104 for which initial corrective weights are absent, appropriate corrective weights 112 may be generated. In such a case, the specific distributor 114 will determine the appropriate interval "d" for the particular input signal 104, and a group of corrective weights 112 with initial values will be generated for the given input 102, the given interval "d", and all the respective neurons 116. Additionally, a corresponding coefficient of impact 134 may be assigned to each newly generated corrective weight 112.
  • Each corrective weight 112 may be defined by a set of indexes configured to identify a position of each respective corrective weight on the p-net 100.
  • the set of indexes may specifically include an input index "i" configured to identify the corrective weight 112 corresponding to the specific input 102, an interval index "d" configured to specify the above-discussed selected interval for the respective corrective weight, and a neuron index "n" configured to specify the corrective weight 112 corresponding to the specific neuron 116, with nomenclature "W_{i,d,n}".
  • each corrective weight 112 corresponding to a specific input 102 is assigned the specific index "i" in the subscript to denote the subject position.
  • each corrective weight "W" corresponding to a specific neuron 116 and a respective synapse 118 is assigned the specific indexes "n" and "d” in the subscript to denote the subject position of the corrective weight on the p-net 100.
  • the set of indexes may also include an access index "a" configured to tally a number of times the respective corrective weight 112 is accessed by the input signal 104 during training of the p-net 100. In other words, each time a specific interval "d" and the respective corrective weight 112 is selected for training from the plurality of corrective weights in correlation with the input value, the access index "a" is incremented to count the input signal.
  • the access index "a" may be used to further specify or define a present status of each corrective weight by adopting a nomenclature "W_{i,d,n,a}".
  • Each of the indexes "i", "d", "n", and "a" may be numerical values in the range of 0 to +∞.
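
One way to realize this indexing scheme in software is a mapping keyed by the (i, d, n) tuple, with a separate counter for the access index "a"; a minimal sketch with names of our choosing:

```python
from collections import defaultdict

weight_value = defaultdict(float)   # (i, d, n) -> weight value W_{i,d,n}
access_count = defaultdict(int)     # (i, d, n) -> access index "a"

def select_weight(i, d, n):
    """Select W_{i,d,n} for training and tally the access in its "a" index."""
    access_count[(i, d, n)] += 1
    return weight_value[(i, d, n)]

select_weight(0, 3, 1)
print(access_count[(0, 3, 1)])      # 1
```
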
  • Figure 23 depicts the method 200 of training the p-net 100, as described above with respect to Figures 2-22.
  • the method 200 commences in frame 202 where the method includes receiving, via the input 102, the input signal 104 having the input value. Following frame 202, the method advances to frame 204. In frame 204, the method includes communicating the input signal 104 to the distributor 114 operatively connected to the input 102. Either in frame 202 or frame 204, the method 200 may include defining each corrective weight 112 by the set of indexes. As described above with respect to the structure of the p-net 100, the set of indexes may include the input index "i" configured to identify the corrective weight 112 corresponding to the input 102.
  • the plurality of coefficients of impact 134 may be defined by an impact distribution function 136.
  • the method may additionally include receiving the input value into the value range 137 divided into intervals "d" according to the interval distribution function 140, such that the input value is received within a respective interval, and each corrective weight 112 corresponds to one of the intervals.
  • the method may include using, via the distributor 114, the received input value to select the respective interval "d" and assign the plurality of coefficients of impact 134 to the corrective weight 112 corresponding to the selected respective interval "d" and to at least one corrective weight corresponding to an interval adjacent to the selected respective interval "d".
  • corrective weights 112 corresponding to an interval adjacent to the selected respective interval "d" may be identified, for example, as W_{i,d+1,n} or W_{i,d−1,n}.
  • a specific input image may be converted into an input image in interval format, that is, real signal values may be recorded as the numbers of the intervals to which the respective signals belong. This procedure may be carried out in each training epoch for the given image. However, the image may also be formed once as a set of interval numbers. For example, in Figure 7 the initial image is presented as a picture, in the table "Image in digital format" the same image is presented in the form of digital codes, and in the table "Image in interval format" the image is presented as a set of interval numbers, where a separate interval is assigned for every 10 values of digital codes.
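
The digital-to-interval conversion in the Figure 7 example, one interval per 10 values of digital code, can be sketched in a few lines (the sample codes are illustrative):

```python
def to_interval_format(digital_codes, codes_per_interval=10):
    """Record each digital code as the number of the interval it belongs to,
    e.g., codes 0-9 -> interval 0, codes 10-19 -> interval 1, and so on."""
    return [code // codes_per_interval for code in digital_codes]

print(to_interval_format([0, 9, 10, 37, 255]))   # [0, 0, 1, 3, 25]
```
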
  • the described structure of the p-net 100 and the training algorithm or method 200 as described permit continued or iterative training of the p-net, thus there is no requirement to form a complete set of training input images 106 at the start of the training process. It is possible to form a relatively small starting set of training images, and such a starting set could be expanded as necessary.
  • the input images 106 may be divided into distinct categories, for example, a set of pictures of one person, a set of photos of cats, or a set of photographs of cars, such that each category corresponds to a single output image, such as a person's name or a specific label.
  • Desired output images 126 represent a field or table of digital or analog values, where each point corresponds to a specific numeric value from −∞ to +∞. Each point of the desired output image 126 may correspond to the output of one of the neurons of the p-net 100. Desired output images 126 may be encoded with digital or analog codes of images, tables, text, formulas, sets of symbols, such as barcodes, or sounds. In the simplest case, each input image 106 may correspond to an output image encoding the subject input image. One of the points of such an output image may be assigned a maximum possible value, for example 100%, whereas all other points may be assigned a minimum possible value, for example, zero.
  • Figure 8 shows an example of how the p-net 100, trained for recognition of two images, a square and a circle, may recognize a picture that contains some features of each figure, expressed in percentages, with the sum not necessarily equal to 100%.
  • Such a process of pattern recognition by defining the percentage of similarity between different images used for training may be used to classify specific images.
  • coding may be accomplished using a set of several neural outputs rather than one output (see below).
  • output images may be prepared in advance of training. However, it is also possible to have the output images formed by the p-net 100 during training.
  • input images 106 may be in the form of a field or table of digital or analog values, where each point corresponds to one input of the p-net, while output images may be presented in any format suitable for introduction into the computer, for example using formats jpeg, gif, pptx, in the form of tables, charts, diagrams and graphics, various document formats, or a set of symbols.
  • the resultant p-net 100 may be quite suitable for archiving systems, as well as an associative search of images, musical expressions, equations, or data sets.
  • the p-net 100 needs to be formed and/or parameters of an existing p-net must be set for handling given task(s). Formation of the p-net 100 may include the following designations:
  • the number of inputs is determined based on the sizes of input images 106. For example, a number of pixels may be used for pictures, while the selected number of outputs may depend on the size of desired output images 126. In some cases, the selected number of outputs may depend on the number of categories of training images.
  • Values of individual synaptic weights 108 may be in the range of - ⁇ to + ⁇ . Values of synaptic weights 108 that are less than 0 (zero) may denote signal amplification, which may be used to enhance the impact of signals from specific inputs, or from specific images, for example, for a more effective recognition of human faces in photos containing a large number of different individuals or objects. On the other hand, values of synaptic weights 108 that are greater than 0 (zero) may be used to denote signal attenuation, which may be used to reduce the number of required calculations and increase operational speed of the p-net 100. Generally, the greater the value of the synaptic weight, the more attenuated is the signal transmitted to the corresponding neuron.
  • Figure 9 shows an embodiment of the p-net 100 in which the relationship between an input and respective neurons is reduced in accordance with statistical normal distribution. Uneven distribution of synaptic weights 108 may result in the entire input signal being communicated to a target or "central" neuron for the given input, thus assigning a value of zero to the subject synaptic weight.
  • the distribution of synaptic weights may result in other neurons receiving reduced input signal values, for example, according to a normal, log-normal, sinusoidal, or other distribution.
  • Values of the synaptic weights 108 for the neurons 116 receiving reduced input signal values may increase along with the increase of their distance from the "central" neuron. In such a case, the number of calculations may be reduced and operation of the p-net may speed up.
  • Such networks, which are a combination of known fully connected and non-fully connected neural networks, may be exceedingly effective for analysis of images with strong internal patterns, for example, human faces or consecutive frames of a movie film.
  • a program, for example written in an object-oriented programming language, may generate the main elements of the p-net, such as synapses, synaptic weights, distributors, corrective weights, neurons, etc., as software objects.
  • a program may assign relationships between the noted objects and algorithms specifying their actions.
  • synaptic and corrective weights may be formed in the beginning of formation of the p-net 100, along with setting their initial values.
  • the p-net 100 may be fully formed before the start of its training, and be modified or added to at a later time, as necessary, for example, when the information capacity of the network becomes exhausted, or in case of a fatal error. Completion of the p-net 100 is also possible while training continues.
  • the number of selected corrective weights on a particular synapse may be equal to the number of intervals within the range of input signals. Additionally, corrective weights may be generated after the formation of the p-net 100, as signals in response to appearance of individual intervals. Similar to the classical neural network 10, selection of parameters and settings of the p-net 100 is provided with a series of targeted experiments. Such experiments may include (1) formation of the p-net with the same synaptic weights 108 at all inputs, and (2) assessment of input signal values for the selected images and initial selection of the number of intervals.
  • the values of input signals may be rounded as they are distributed between the specific intervals.
  • accuracy of input signals greater than the width of the range divided by the number of intervals may not be required.
  • Such experiments may also include (3) selection of a uniform distribution of intervals throughout the entire range of values of the input signals, together with the simplest distribution of the coefficients of corrective weight impact: C_{i,d,n} may be set equal to 1 for the corrective weight corresponding to the interval for the particular input signal, while the corrective weight impact for all remaining corrective weights may be set to 0 (zero).
  • Such experiments may additionally include (4) training the p-net 100 with one, more, or all prepared training images to a pre-determined accuracy.
  • Such evaluation of modifications may include changing, either increasing or reducing, the number of intervals; changing the type of distribution of the coefficients of corrective weight impact C_{i,d,n}; testing variants with non-uniform distribution of intervals, such as using normal, power, logarithmic, or log-normal distribution; and changing values of synaptic weights 108, for example their transition to non-uniform distribution.
  • Formation of the p-net 100 settings may be via training with predetermined training time and experimental determination of training accuracy.
  • Multiplication of W_{i,d,n} × C_{i,d,n} may be performed by various devices, for example by the distributors 114, by devices with stored weights, or directly by the neurons 116. The sums are transferred via the neuron outputs 117 to the weight correction calculator 122. The desired output signals O_1, O_2 ... O_n describing the desired output image 126 are also fed to the calculator 122.
  • the weight correction calculator 122 is a computation device for calculating the modified values of the corrective weights by comparison of the neuron output sums Σ_1, Σ_2 ... Σ_n with the desired output signals O_1, O_2 ... O_n.
  • Figure 11 shows a set of corrective weights W_{i,d,1} contributing to the neuron output sum Σ_1, which are multiplied by the corresponding coefficients of corrective weight impact C_{i,d,1}; these products are subsequently added to form the neuron output sum Σ_1:
    Σ_1 = W_{1,0,1} × C_{1,0,1} + W_{1,1,1} × C_{1,1,1} + W_{1,2,1} × C_{1,2,1} + … [3]
  • the corrective weights W_{i,d,1} do not correspond to the input image 106 used for training; thus, the neuron output sums Σ_1 are not equal to the corresponding desired output image 126.
  • the weight correction system calculates the correction value Δ_1, which is used for changing all the corrective weights W_{i,d,1} contributing to the neuron output sum Σ_1.
  • the p-net 100 permits various options or variants for the formation and utilization of collective corrective signals for all corrective weights W_{i,d,n} contributing to a specified neuron 116.
  • W_{i,d,n}^{modified} = W_{i,d,n} + Δ_n / C_{i,d,n} [5]
  • W_{i,d,n}^{modified} = W_{i,d,n} × Δ_n [7]
  • Modification of the corrective weights W_{i,d,n} by any available variant is intended to reduce the training error for each neuron 116 by converging its output sum Σ_n on the value of the desired output signal. In this way, the training error for a given image may be reduced until it becomes equal, or close, to zero.
  • An example of modification of the corrective weights W_{i,d,n} during training is shown in Figure 11.
  • the values of the corrective weights W_{i,d,n} are set before training starts in the form of a random weight distribution, with the weight values set to 0 ± 10% of the corrective weight range, and reach a final weight distribution after training.
  • the described calculation of collective signals is conducted for all neurons 116 in the p-net 100.
  • the described training procedure for one training image may be repeated for all other training images, as sketched below. Such a procedure may lead to the appearance of training errors for some of the previously trained images, as some corrective weights W_{i,d,n} may participate in several images. Accordingly, training with another image may partially disrupt the distribution of corrective weights W_{i,d,n} formed for the previous images.
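
A compact sketch of this per-image procedure repeated over a set of training pairs might look as follows; the equal-apportionment correction and the data layout are illustrative assumptions consistent with the variants described above, not a transcription of the patent's formulas:

```python
import random

num_inputs, num_intervals, num_neurons = 3, 8, 2
W = [[[random.uniform(-0.05, 0.05) for _ in range(num_neurons)]
      for _ in range(num_intervals)] for _ in range(num_inputs)]

def interval_of(v):                     # uniform intervals on [0, 1)
    return min(int(v * num_intervals), num_intervals - 1)

def train_on_image(image, desired):
    """One correction pass for one training pair (image, desired outputs)."""
    ds = [interval_of(v) for v in image]
    for n in range(num_neurons):
        s = sum(W[i][ds[i]][n] for i in range(num_inputs))
        delta = (desired[n] - s) / num_inputs   # equal apportionment
        for i in range(num_inputs):
            W[i][ds[i]][n] += delta             # only selected weights change

# Repeating the pass over all training pairs forms one training epoch.
training_pairs = [([0.1, 0.5, 0.9], [1.0, 0.0]),
                  ([0.9, 0.5, 0.1], [0.0, 1.0])]
for image, desired in training_pairs:
    train_on_image(image, desired)
```
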
  • each synapse 118 includes a set of corrective weights W_{i,d,n}.
  • training with new images, while possibly increasing training error, does not delete the images for which the p-net 100 was previously trained.
  • the more synapses 118 contribute to each neuron 116 and the greater the number of corrective weights W_{i,d,n} at each synapse, the less training for a specific image affects the training for other images.
  • The informational capacity of the p-net 100 is very large, but is not unlimited. With the set dimensions of the p-net 100, such as the number of inputs, outputs, and intervals, and with an increase in the number of images that the p-net is trained with, after a certain number of images the number and magnitude of training errors may also increase. When such an increase in error generation is detected, the number and/or magnitude of errors may be reduced by increasing the size of the p-net 100, since the p-net permits increasing the number of neurons 116 and/or the number of the signal intervals "d" across the p-net or in its components between training epochs. P-net 100 expansion may be provided by adding new neurons 116, adding new inputs 102 and synapses 118, changing the distribution of the coefficients of corrective weight impact C_{i,d,n}, and dividing existing intervals "d".
  • multi-stage recognition may include:
  • Formation of a computer emulation of the p-net 100 and its training may be provided based on the above description by using any programming language.
  • object-oriented programming may be used, wherein the synaptic weights 108, corrective weights 112, distributors 114, and neurons 116 represent programming objects or classes of objects, relations are established between object classes via links or messages, and algorithms of interaction are set between objects and between object classes.
  • Formation and training of the p-net 100 software emulation may include the following:
  • W_{i,d,n}^{modified} = W_{i,d,n} + Δ_n / C_{i,d,n}
  • W_{i,d,n}^{modified} = W_{i,d,n} × Δ_n
  • Formation of the object class Synapse may include:
  • Class Synapse may perform the following functions:
  • Formation of the object class InputSignal may include:
  • Class InputSignal may provide the following functions:
  • the cycles may be formed, where:
  • Based on the input signal 102, the distributor forms a set of coefficients of corrective weight impact C_{i,d,n};
  • the described embodiments of the p-net 100 training without loss of information allow creating a p-net memory with high capacity and reliability.
  • Such memory may be used as a high-speed computer memory of large capacity, providing greater speed than even the "cache memory" system, but will not increase computer cost and complexity as is typical with the "cache memory" system.
  • memory may be compressed tens or hundreds of times without significant loss of recording quality. In other words, a neural network is able to operate as a very effective archiving program.
  • each neuron 116 may be established by either a series or a parallel communication channel 172, for example, an electrical wire, or a series or a parallel bus.
  • a representative bus embodiment of the communication channel 172 may use both parallel and bit-serial connections, as understood by those skilled in the art. For example, if the corresponding analog signals are provided via electrical current, the communication channel 172 may be a serial current bus, while if the corresponding analog signals are provided via electrical potential on the respective corrective weights 112, a representative communication channel may be a parallel bus.
  • each corrective weight 112 may be established by the memory element 150. Specifically, in the p-net 100A, the memory element 150 may retain the respective modified weight value corresponding to the corrective weight 112 following the training of the p-net.
  • Each input image 106 provided to the utility neural network 100B may be represented by the combined input signals 104 represented by I_1, I_2 ... I_m, identical to the description with respect to Figure 2. As additionally discussed with respect to Figure 2, each input signal I_1, I_2 ... I_m represents a value of some characteristic(s) of the input image 106.
  • Each neuron 116 is similarly configured to sum up the values of the corrective weights 112 selected from each synapse 118 connected to the respective neuron 116 to thereby generate and output a neuron sum array 120A, otherwise designated as Σ_n.
  • a separate distributor 114 may similarly be used for each synapse 118 of a given input 102, as shown in Figures 34-36.
  • a single distributor may be used for all such synapses (not shown).
  • all corrective weights 112 are assigned initial values, which may change during the process of p-net training, as shown in Figure 35.
  • the plurality of inputs 102 to the p-net may be configured to receive input images 106.
  • Such input images 106 may be either received as an input value array 107 A or codified as an input value array 107A during recognition of the images by the p-net 100B.
  • Each synapse 118 may include a plurality of trained corrective weights 112A.
  • each neuron 116 may be configured to add up the weight values of the trained corrective weights 112A corresponding to each synapse 118 connected to the respective neuron, such that the plurality of neurons generate a recognized images array 136, thereby providing recognition of the input images 106.
  • the controller 122A may additionally be programmed with an array of target deviation or target deviation array 138 of the neuron sum array 120A from the desired output value array 126A. Furthermore, the controller 122A may be configured to complete training of the p-net 100B when the deviation 128 of the neuron sum array 120A from the desired output value array 126A is within an acceptable range 139 of the target deviation array 138.
  • the acceptable range 139 may be referenced against a maximum or a minimum value in, or an average value of, the target deviation array 138, as sketched below.
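
A minimal sketch of such a completion check, reading the "acceptable range" as a per-entry absolute tolerance (one possible interpretation among those the bullet lists):

```python
def training_complete(deviation_array, target_deviation_array, tolerance):
    """True when every deviation is within the acceptable range of its
    target deviation (illustrative per-entry interpretation)."""
    return all(abs(d - t) <= tolerance
               for d, t in zip(deviation_array, target_deviation_array))

print(training_complete([0.02, -0.01], [0.0, 0.0], tolerance=0.05))  # True
```
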
  • values of the respective parameters may be organized, for example, in the form of a processor-accessible data table; the values in the respective matrices 141, 141A, 142, 143, 144, 145, 146, 147, and 148 are specifically organized to enable application of algebraic matrix operations to each respective matrix individually, as well as to combinations thereof.
  • the above training input images matrix may be converted via the controller 122A into the training input value matrix 141 (depicted as a matrix in the patent figures), which will have a corresponding number of columns for the number of inputs "I", accounting for a specific number of intervals "i", and a corresponding number of rows for the number of images.
  • Σ_{11} = C_{111} × W_{111} + C_{121} × W_{121} + C_{131} × W_{131} + …
  • Σ_{21} = C_{211} × W_{211} + C_{221} × W_{221} + C_{231} × W_{231} + …
  • the corrective weight matrix 142 may be modified using the determined deviation matrix 145, which permits adding up the modified corrective weight 112 values to determine the neuron sum matrix 143, minimizing the deviation of the neuron sum matrix 143 from the desired output value matrix 144 and generating a trained corrective weight matrix 146, W_trained (depicted as a matrix in the patent figures).
  • the input images to be recognized may be presented as a v × k matrix; a representative column of the matrix of input images 106 for recognition contains the values I_{r11}, I_{r21}, I_{r31} ... I_{rv1}. The resulting matrix has dimensions n × v, and each column of the matrix corresponds to one image (the full matrix depictions appear in the patent figures).
  • Each of the p-nets 100B and 100C may additionally include a data processor 150, which may be a sub-unit of the controller 122A.
  • the controller 122A may be additionally configured to partition or cut up at least one of the respective training input value matrix 141, input value matrix 141A, corrective weight matrix 142, neuron sum matrix 143, and desired output value matrix 144 into respective sub-matrices.
  • the controller 122A may also be configured to communicate the resultant sub-matrices to the data processor 150 for separate mathematical operations therewith.
  • Such partitioning of any of the subject matrices 141, 142, 143, and 144 into respective sub-matrices facilitates concurrent or parallel data processing and an increase in speed of either image recognition of the input value matrix 141A or training of the p-net 100B.
  • Such concurrent or parallel data processing also permits scalability of the p-net 100B or 100C, i.e., provides the ability to vary the size of the p-net by limiting the size of the respective matrices being subjected to algebraic manipulations on a particular processor and/or breaking up the matrices between multiple processors, such as the illustrated processor 150.
  • multiple data processors 150 in communication with the controller 122A may be employed, whether as part of the controller 122A or arranged distally therefrom, and configured to operate separately and in parallel.
  • the controller 122A may modify the corrective weight matrix 142 by applying an algebraic matrix operation to the training input value matrix 141A and the corrective weight matrix to thereby train the p-net 100B.
  • a mathematical matrix operation may include a determination of a mathematical product of the input value matrix 141A and the corrective weight matrix 146 to thereby form a current training epoch weight matrix 151.
  • the controller 122A may also be configured to subtract the neuron sum matrix 143 from the desired output value matrix 144 to generate a matrix of deviation of neuron sums 153.
  • the controller 122A may be configured to divide the matrix of deviation of neuron sums 153 by the number of synapses 118, identified below with the letter "m", connected to the respective neuron 116, to generate a matrix of deviation per neuron input 155.
  • the controller 122A may be additionally configured to determine a number of times each corrective weight 112 was used during one training epoch of the p-net 100B.
  • the controller 122A may be further configured to form an averaged deviation matrix 157 for the one training epoch using the determined number of times each corrective weight 112 was used during the epoch.
  • the controller 122A may be configured to add the averaged deviation matrix 157 for the one training epoch to the corrective weight matrix 142 to thereby generate the trained corrective weight matrix 146.
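
Reading the last several bullets together, one whole training epoch in matrix form might be sketched as below. The exact averaging rule, subtract, divide by the synapse count m, average each weight's accumulated deviation over its usage count, and add the result back, is our hedged reconstruction rather than a verbatim transcription of the patent's expressions:

```python
import numpy as np

def training_epoch(C, W, O, m):
    """C: impact coefficients (images x positions), W: corrective weights
    (positions x neurons), O: desired outputs (images x neurons),
    m: number of synapses connected to each neuron."""
    Sigma = C @ W                                # neuron sum matrix (143)
    E = O - Sigma                                # deviation of neuron sums (153)
    E_per_input = E / m                          # deviation per neuron input (155)
    used = np.maximum((C != 0).sum(axis=0), 1)   # times each weight was used
    avg = (C.T @ E_per_input) / used[:, None]    # averaged deviation (157)
    return W + avg                               # trained weight matrix (146)

rng = np.random.default_rng(1)
C = rng.random((5, 8))
W = rng.random((8, 2))
O = rng.random((5, 2))
W_trained = training_epoch(C, W, O, m=4)
```
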
  • Figure 37 depicts a method 400 for operating the p-net 100B, as described above with respect to Figures 34-36.
  • the method 400 is configured to improve operation of an apparatus, such as a computer, or a system of computers employed in implementing supervised training using one or more data processors, such as the processor 150.
  • the method 400 may be programmed into a non-transitory computer-readable storage device for operating the p-net 100B and encoded with instructions executable to perform the method.
  • the method 400 commences in frame 402 where the method includes receiving, via the plurality of inputs 102, the training images 106. As described above with respect to the structure of the p-net 100B depicted in Figures 34 and 35, the training images 106 may either be received as the training input value array 107 prior to commencement of the subject training phase or codified as the training input value array during the actual training phase. Following frame 402, the method advances to frame 404. In frame 404, the method includes organizing the corrective weights 112 of the plurality of synapses 118 in the corrective weight array 119A. As described above with respect to the structure of the p-net 100B, each synapse 118 is connected to one of the plurality of inputs 102 and includes a plurality of corrective weights 112.
  • each neuron 116 has at least one output 117 and is connected with at least one of the plurality of inputs 102 via one of the plurality of synapses 118. Furthermore, each neuron 116 is configured to add up the weight values of the corrective weights 112 corresponding to each synapse 118 connected to the respective neuron.
  • the method includes receiving, via the controller 122A, desired images 124 organized as the desired output value array 126 A.
  • the method proceeds to frame 410, in which the method includes determining, via the controller 122A, the deviation 128 of the neuron sum array 120A from the desired output value array 126A, thereby generating the deviation array 132. Following frame 410, the method advances to frame 412. In frame 412, the method includes modifying, via the controller 122A, the corrective weight array 119A using the determined deviation array 132. The modified corrective weight values of the modified corrective weight array 119A may subsequently be added or summed up and then used to determine a new neuron sum array 120A.
  • the summed modified corrective weight values of the modified corrective weight array 119A may then serve to reduce or minimize the deviation of the neuron sum array 120A from the desired output value array 126A and generate the trained corrective weight array 134A.
  • the deviation array 132 may be determined as sufficiently minimized when the deviation 128 of the neuron sum array 120A from the desired output value array 126A is within an acceptable range 139 of the array of target deviation 138, as described above with respect to the structure of the p-net 100C.
  • the trained corrective weight array 134A includes the trained corrective weights 112A determined using the deviation array 132 and thereby trains the p-net 100B.
  • each of the training input value array 107, the corrective weight array 119A, neuron sum array 120A, desired output value array 126A, deviation array 132, trained corrective weight array 134A, and target deviation array 138 may be organized, respectively, as the training input value matrix 141, corrective weight matrix 142, neuron sum matrix 143, desired output value matrix 144, deviation matrix 145, trained corrective weight matrix 146, and target deviation matrix 148.
  • the method may also include modifying, via the controller 122A, the corrective weight matrix 142 by applying an algebraic matrix operation to the training input value matrix 141 and the corrective weight matrix to thereby train the p-net 100B.
  • Such a mathematical matrix operation may include determining a mathematical product of the training input value matrix 141 and corrective weight matrix 142 to thereby form the current training epoch weight matrix 151.
  • the method may additionally include subtracting, via the controller 122A, the neuron sum matrix 143 from the desired output value matrix 144 to generate the matrix of deviation of neuron sums 153.
  • the method may include dividing, via the controller 122A, the matrix of deviation of neuron sums 153 by the number of inputs connected to the respective neuron 116 to generate the matrix of deviation per neuron input 155.
  • the method may include determining, via the controller 122A, the number of times each corrective weight 112 was used during one training epoch of the p-net 100B. Moreover, the method may include forming, via the controller 122A, the averaged deviation matrix 157 for the one training epoch using the determined number of times each corrective weight 112 was used during the particular training epoch.
  • such an operation may include dividing, element-by-element, the matrix of deviation per neuron input by the determined number of times each corrective weight was used during the particular training epoch to obtain an averaged deviation for each corrective weight 112 used during the one training epoch, thereby forming the averaged deviation matrix 157 for the one training epoch.
  • the method may include adding, via the controller 122A, the averaged deviation matrix 157 for the one training epoch to the corrective weight matrix 142 to thereby generate the trained corrective weight matrix 146 and complete the particular training epoch. Accordingly, by permitting matrix operations to be applied to all the corrective weights 112 in parallel, the method 400 facilitates concurrent, and therefore enhanced speed, training of the p-net 100B in generating the trained p-net 100C.
  • method 400 may include returning to frame 402 to perform additional training epochs until the deviation array 132 is sufficiently minimized.
  • additional training epochs may be performed to converge the neuron sum array 120A on the desired output value array 126A to within the predetermined deviation or error value, such that the p-net 100B may be considered trained and ready for operation with new input images 106.
  • the method may proceed to frame 414 for image recognition using the trained p-net 100C (shown in Figure 36).
  • the method 400 includes receiving the input images 106 via the plurality of inputs 102.
  • the input images 106 may be either received as the input value array 107A or codified as the input value array during recognition of the images by the p-net 100C.
  • the method includes attributing to each synapse 118 a plurality of trained corrective weights 112A of the trained corrective weight array 134A. After frame 416, the method advances to frame 418.
  • the method includes adding up the weight values of the trained corrective weights 112A corresponding to each synapse 118 connected to the respective neuron 116. As described above with respect to the structure of the p-net 100B, such summing of the weight values of the trained corrective weights 112A enables the plurality of neurons 116 to generate a recognized images array 136, thereby providing recognition of the input images 106.
  • the input value array 107A and the recognized images array 136 may be organized, respectively, as the input value matrix 141A and the recognized images matrix 147.
  • the method may also include partitioning, via the controller 122A, any of the employed matrices, such as the input value matrix 141A, into respective sub-matrices.
  • Such resultant sub-matrices may be communicated to the data processor 150 for separate mathematical operations therewith to thereby facilitate concurrent data processing and an increase in speed of image recognition of the p-net 100C.
  • the image recognition portion in frames 414-418 benefits from enhanced speed when algebraic matrix operations are applied in parallel to the matrices or sub-matrices of the trained p-net 100C.
  • the method 400 facilitates concurrent, and therefore enhanced speed, image recognition using the p-net 100C.
  • the method may return to frame 402 for additional training, as described with respect to Figures 34-36, if the achieved image recognition is deemed insufficiently precise, or the method may conclude in frame 420.
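To make the matrix formulation above concrete, the following minimal NumPy sketch illustrates one training epoch and the recognition pass (frames 414-418). It is an illustrative reconstruction under simplifying assumptions, not the patented implementation: the training images are taken as already codified into a 0/1 selection matrix X whose entry X[i, j] is 1 when image i engages corrective weight j, each corrective weight feeds one neuron through the weight matrix W, and all names (train_epoch, recognize, n_inputs, counts) are introduced here for exposition.

```python
# Minimal sketch of the matrix-based training epoch and recognition pass
# described above; shapes, names, and the toy data are assumptions.
import numpy as np

def train_epoch(X, W, T, n_inputs):
    """One training epoch: returns the trained corrective weight matrix."""
    S = X @ W                      # neuron sum matrix (input matrix x weight matrix)
    D = T - S                      # matrix of deviation of neuron sums
    D_per_input = D / n_inputs     # matrix of deviation per neuron input
    counts = X.sum(axis=0)         # times each corrective weight was used this epoch
    counts = np.where(counts == 0, 1.0, counts)  # leave unused weights unchanged
    # Averaged deviation matrix: spread the per-input deviations back onto the
    # weights that produced them, averaged over each weight's usage count.
    delta = (X.T @ D_per_input) / counts[:, None]
    return W + delta               # all corrective weights corrected in parallel

def recognize(X_new, W_trained):
    """Recognition pass: sum the trained corrective weights per neuron."""
    return X_new @ W_trained

# Toy run: 4 training images, 6 corrective weights, 2 neurons.
X = np.array([[1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [1, 1, 0, 0, 1, 1],
              [0, 0, 1, 1, 1, 1]], dtype=float)               # training input value matrix
T = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.2, 0.8]])  # desired output value matrix
W = np.zeros((6, 2))               # corrective weight matrix, initially zero
for _ in range(500):               # repeat epochs until the deviation is minimized
    W = train_epoch(X, W, T, n_inputs=6)
print(np.round(recognize(X, W), 3))  # neuron sums converge toward T
```

Because every step is a whole-matrix operation, the epoch can be dispatched to parallel hardware or partitioned into sub-matrices for concurrent processing, which is the speed benefit the method attributes to the matrix formulation.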

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)

Abstract

A neural network includes inputs for receiving input signals, and synapses connected to the inputs and having corrective weights organized in an array. Training images are either received by the inputs as an array or codified as such during training of the network. The network also includes neurons, each of which has an output connected to at least one of the inputs via a synapse and generates a neuron sum array by summing the corrective weights selected from each synapse connected to the respective neuron. Furthermore, the network includes a controller that receives desired images in an array, determines a deviation of the neuron sum array from the desired output value array, and generates a deviation array. The controller modifies the corrective weight array using the deviation array. Summing the modified corrective weights to determine the neuron sum array reduces the subject deviation and generates a trained corrective weight array for concurrent network training.
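For exposition only, the training cycle summarized in the abstract can be condensed into matrix notation; the symbols below (X for the training input value array, W for the corrective weight array, T for the desired output value array, S for the neuron sum array, D for the deviation array, n for the number of inputs per neuron, and c_j for the number of times corrective weight j is used in an epoch) are introduced here and are not the patent's own labels:

```latex
S = XW, \qquad
D = T - S, \qquad
W'_{jk} = W_{jk} + \frac{\left(X^{\top} D\right)_{jk}}{n \, c_j}
```

Under these assumed symbols, repeating the update until D is sufficiently small yields the trained corrective weight array W'.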
PCT/US2017/036758 2016-06-09 2017-06-09 Neural network and method of neural network training WO2017214507A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201780035716.7A CN109416758A (zh) 2016-06-09 2017-06-09 Neural network and method of neural network training
KR1020197000226A KR102558300B1 (ko) 2016-06-09 2017-06-09 Neural network and method of neural network training
JP2018564317A JP7041078B2 (ja) 2016-06-09 2017-06-09 Neural network and method of neural network training
EP17811082.1A EP3469521A4 (fr) 2016-06-09 2017-06-09 Neural network and method of neural network training

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US15/178,137 US9619749B2 (en) 2014-03-06 2016-06-09 Neural network and method of neural network training
US15/178,137 2016-06-09
US15/449,614 US10423694B2 (en) 2014-03-06 2017-03-03 Neural network and method of neural network training
US15/449,614 2017-03-03

Publications (1)

Publication Number Publication Date
WO2017214507A1 true WO2017214507A1 (fr) 2017-12-14

Family

ID=60579026

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/036758 WO2017214507A1 (fr) Neural network and method of neural network training

Country Status (5)

Country Link
EP (1) EP3469521A4 (fr)
JP (1) JP7041078B2 (fr)
KR (1) KR102558300B1 (fr)
CN (1) CN109416758A (fr)
WO (1) WO2017214507A1 (fr)


Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046513B (zh) * 2019-04-11 2023-01-03 Chang'an University Plaintext-correlated image encryption method based on a Hopfield chaotic neural network
CN110111234B (zh) * 2019-04-11 2023-12-15 Shanghai Integrated Circuit R&D Center Co., Ltd. Neural-network-based image processing system architecture
CN110135557B (zh) * 2019-04-11 2023-06-02 Shanghai Integrated Circuit R&D Center Co., Ltd. Neural network topology architecture for an image processing system
US11361218B2 (en) * 2019-05-31 2022-06-14 International Business Machines Corporation Noise and signal management for RPU array
US11714999B2 (en) * 2019-11-15 2023-08-01 International Business Machines Corporation Neuromorphic device with crossbar array structure storing both weights and neuronal states of neural networks
KR102410166B1 (ko) * 2019-11-27 2022-06-20 Korea University Research and Business Foundation Accelerator for deep neural networks using heterogeneous multiply-accumulate units
CN111461308B (zh) * 2020-04-14 2023-06-30 National University of Defense Technology Memristive neural network and weight training method
JP7493398B2 (ja) 2020-07-03 2024-05-31 Japan Broadcasting Corporation Conversion device, learning device, and program
CN111815640B (zh) * 2020-07-21 2022-05-03 Jiangsu Vocational Institute of Commerce Memristor-based RBF neural network algorithm for medical image segmentation
CN112215344A (zh) * 2020-09-30 2021-01-12 Tsinghua University Correction method and design method for neural network circuits
CN113570048B (zh) * 2021-06-17 2022-05-31 Southern University of Science and Technology Construction and optimization method for memristor-array neural networks based on circuit simulation
KR102514652B1 (ko) * 2021-11-19 2023-03-29 Seoul National University R&DB Foundation Weight transfer apparatus for neuromorphic devices and weight transfer method using the same
CN115358389B (zh) * 2022-09-01 2024-08-20 Tsinghua University Method, apparatus, electronic device and medium for reducing neural network training error


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0421639B1 (fr) 1989-09-20 1998-04-22 Fujitsu Limited Parallel data processing system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101980290B (zh) * 2010-10-29 2012-06-20 Xidian University Multi-focus image fusion method for noisy environments
US20150106316A1 (en) * 2013-10-16 2015-04-16 University Of Tennessee Research Foundation Method and apparatus for providing real-time monitoring of an artifical neural network
US20160012330A1 (en) * 2014-03-06 2016-01-14 Progress, Inc. Neural network and method of neural network training

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3469521A4 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110162426A (zh) * 2018-02-12 2019-08-23 Robert Bosch GmbH Method and device for checking neuron functions in a neural network
CN111788586B (zh) * 2018-02-28 2024-04-16 Micron Technology, Inc. Artificial neural network integrity verification
CN111788586A (zh) * 2018-02-28 2020-10-16 Micron Technology, Inc. Artificial neural network integrity verification
US11562231B2 (en) * 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11983630B2 (en) 2018-09-03 2024-05-14 Tesla, Inc. Neural networks for embedded devices
WO2021118651A1 (fr) * 2018-12-13 2021-06-17 Genghiscomm Holdings, LLC Artificial neural networks
US11640522B2 (en) 2018-12-13 2023-05-02 Tybalt, Llc Computational efficiency improvements for artificial neural networks
CN109799439B (zh) * 2019-03-29 2021-04-13 Electric Power Research Institute of Yunnan Power Grid Co., Ltd. Method and device for moisture experimental evaluation of insulated cables with multi-angle oblique scratches
CN109799439A (zh) * 2019-03-29 2019-05-24 Electric Power Research Institute of Yunnan Power Grid Co., Ltd. Method and device for moisture experimental evaluation of insulated cables with multi-angle oblique scratches
US11315222B2 (en) 2019-05-03 2022-04-26 Samsung Electronics Co., Ltd. Image processing apparatus and image processing method thereof
WO2022090980A1 (fr) * 2020-11-02 2022-05-05 International Business Machines Corporation Weight repetition on RPU crossbar arrays
GB2614687B (en) * 2020-11-02 2024-02-21 Ibm Weight repetition on RPU crossbar arrays
GB2614687A (en) * 2020-11-02 2023-07-12 Ibm Weight repetition on RPU crossbar arrays

Also Published As

Publication number Publication date
JP7041078B2 (ja) 2022-03-23
KR102558300B1 (ko) 2023-07-21
KR20190016539A (ko) 2019-02-18
EP3469521A1 (fr) 2019-04-17
EP3469521A4 (fr) 2020-02-26
CN109416758A (zh) 2019-03-01
JP2019519045A (ja) 2019-07-04

Similar Documents

Publication Publication Date Title
US9619749B2 (en) Neural network and method of neural network training
WO2017214507A1 (fr) Neural network and method of neural network training
US9390373B2 (en) Neural network and method of neural network training
TWI655587B (zh) Neural network and method of neural network training
KR102545128B1 (ko) Client device with neural network and system including the same
WO2019091020A1 (fr) Weight data storage method, and neural network processor based on the method
JP7247878B2 (ja) Answer learning device, answer learning method, answer generation device, answer generation method, and program
JP2019032808A (ja) Machine learning method and apparatus
CN114387486A (zh) Image classification method and apparatus based on continual learning
CN109919183A (zh) Small-sample-based image recognition method, apparatus, device, and storage medium
CN112446888A (zh) Processing method and processing apparatus for an image segmentation model
CN113963165B (zh) 一种基于自监督学习的小样本图像分类方法及系统
Hossain et al. Detecting tomato leaf diseases by image processing through deep convolutional neural networks
CN111602145A (zh) Optimization method for convolutional neural networks and related products
CN115063374A (zh) Model training and face image quality scoring methods, electronic device, and storage medium
TWI722383B (zh) Pre-feature extraction method applied to deep learning
Eghbali et al. Deep Convolutional Neural Network (CNN) for Large-Scale Images Classification
US20220343162A1 (en) Method for structure learning and model compression for deep neural network
JP2019095894A (ja) Estimation device, learning device, trained model, estimation method, learning method, and program
CN115796235B (zh) Generator model training method and system for supplementing missing data
WO2023084759A1 (fr) Image processing device, image processing method, and program
Takanashi et al. Image Classification Using l1-fidelity Multi-layer Convolutional Sparse Representation
Daniels et al. Efficient Model Adaptation for Continual Learning at the Edge
Hauser Training capsules as a routing-weighted product of expert neurons
Vodianyk et al. Evolving Node Transfer Functions in Deep Neural Networks for Pattern Recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17811082; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2018564317; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 20197000226; Country of ref document: KR; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2017811082; Country of ref document: EP; Effective date: 20190109)