EP0474747A1 - Parallel distributed processing network characterized by an information storage matrix - Google Patents
Parallel distributed processing network characterized by an information storage matrix
- Publication number
- EP0474747A1 (Application EP90909020A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- matrix
- distributed processing
- parallel distributed
- processing network
- information storage
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
- G06N3/065—Analogue means
Definitions
- the present invention relates to a parallel distributed processing network
- connection weights are defined by an [N x N] information storage matrix [A] that satisfies the matrix equation [A][T] = [T][Λ] (1), where [Λ] is an [N x N] diagonal matrix the elements of which are the eigenvalues of the matrix [A], and [T] is an [N x N] similarity transformation matrix
- Parallel distributed processing networks have been shown to be useful for solving large classes of complex problems in analog fashion. They are a class of highly parallel computational circuits with a plurality of linear and non-linear amplifiers, having transfer functions that define input-output relations, arranged in a network that connects the output of each amplifier to the input of some or all of the amplifiers. Such networks may be implemented in hardware (either in discrete or integrated form) or by simulation using a traditional von Neumann architecture digital computer. Such networks are believed to be more suitable for certain types of problems than a traditional von Neumann architecture digital computer. Exemplary of the classes of problems with which parallel distributed processing networks have been used are associative memory, classification applications, feature extraction, pattern recognition, and logic circuit realization.
- network operation is such that for some (or any) input code (input vector) the network will produce one of the target vectors;
- the network may be characterized by a linear operator [A], which is a matrix with constant coefficients, and a nonlinear thresholding device denoted by σ(.).
- the coefficients of the matrix [A] determine the connection weights between the amplifiers in the network, and σ(.) represents a synaptic action at the output or input of each amplifier.
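The recall dynamics implied by [A] and σ(.) can be sketched in a few lines. Below is a minimal NumPy sketch (the patent's own listing is Fortran), assuming σ(.) is modeled as a hard saturation of each node output to the range [-1, +1], matching the squasher described later in the text; the function name `recall` and the iteration cap are illustrative.

```python
import numpy as np

def recall(A, x0, max_iter=100):
    """Iterate x <- sigma([A] x) until the network settles to a fixed point.

    A  : (N, N) information storage matrix (connection weights)
    x0 : (N,) input code, elements usually in {+1, -1}
    sigma is modeled here as a hard saturation to [-1, +1].
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = np.clip(A @ x, -1.0, 1.0)
        if np.allclose(x_next, x):          # converged to an equilibrium
            return x_next
        x = x_next
    return x                                # may not have converged
```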
- the Hopfield and back propagation models derive the matrix operator [A] by using different techniques.
- the associative memory network disclosed in Hopfield, United States Patent 4,660,166, uses an interconnection scheme that connects each amplifier output to the input of all other amplifiers except itself.
- the matrix of connection weights has to be symmetric, and the diagonal elements need to be equal to zero.
- Figure 2 of the Hopfield paper referenced above is believed to contain an error in which the output of an amplifier is connected back to its input.
- Figure 2 of the last- referenced patent is believed to correctly depict the interconnection scheme of the Hopfield paper.
- the Hopfield network has provided the basis for various
- Patents 4,731,747 and 4,737,929 (both to Denker) improve the Hopfield network by adjusting the time constants of the amplifiers to control the speed of convergence, by using negative gain amplifiers that possess a single output, and by using a clipped connection matrix having only two values, which permits the construction of the network with fewer leads.
- United States Patent 4,752,906 overcomes the deficiency of the Hopfield network of not being able to provide temporal association by using delay elements in the output which are fed back to an input interconnection network.
- United States Patent 4,755,963 extends the range of problems solvable by the Hopfield network.
- the back propagation algorithm results in a multi-layer feed-forward network that uses a performance criterion in order to evaluate [A] (minimizing error at the output by adjusting the coefficients in [A]).
- This technique produces good results but, unfortunately, is computationally intensive. This implies a long time for learning to converge.
- the back propagation network requires considerable time for training, or learning the information to be stored. Many techniques have been developed to reduce the training time. See, for example, copending application Serial Number 07/285,534, filed December 16, 1988 (ED-0367) and assigned to the assignee of the present invention, which relates to the use of stiff differential equations in training the back propagation network.
- the present invention relates to a parallel distributed processing network comprising a plurality of amplifiers, or nodes, connected in a single layer, with each amplifier having an input and an output.
- the output of each of the nodes is connected to the inputs of some or of all of the other nodes in the network (including being fed back into itself) by a respective line having a predetermined connection weight.
- connection weights are defined by an [N x N] matrix [A], termed the "information storage matrix", wherein the element A_ij of the information storage matrix [A] is the connection weight between the j-th input node and the i-th output node.
- the information storage matrix [A] satisfies the matrix equation [A][T] = [T][Λ] (1).
- the matrix [T] is an [N x N] matrix, termed the "similarity transformation matrix", the columns of which are formed from a predetermined number (M) of [N x 1] target vectors plus a predetermined number (Q = N - M) of [N x 1] slack vectors.
- each target vector represents one of the outputs of the parallel distributed processing network.
- each of the vectors in the similarity transformation matrix is linearly independent of all other of the vectors in that matrix.
- each of the vectors in the similarity transformation matrix may or may not be orthogonal to all other of the vectors in that matrix.
- the matrix [Λ] is an [N x N] diagonal matrix, each element along the diagonal corresponding to a predetermined one of the target or the slack vectors.
- the relative value of each element along the diagonal of the [Λ] matrix corresponds to the rate of convergence of the outputs of the parallel distributed processing network toward the corresponding target vector.
- the values of the elements of the diagonal matrix corresponding to the target vectors are preferably larger than the values of the elements of the diagonal matrix corresponding to the slack vectors.
- the elements of the diagonal matrix corresponding to the target vectors have an absolute value greater than one while the values of the elements of the diagonal matrix corresponding to the slack vectors have an absolute value less than one.
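The construction above can be made concrete with a small NumPy sketch. The target and slack vectors below are those printed in the Appendix output (N = 4, M = 2); the eigenvalues 1.5 and 0.5 are illustrative assumptions chosen only to satisfy the stated rule (absolute value greater than one for targets, less than one for slacks). The matrix product is Equation (5), discussed further below.

```python
import numpy as np

X1 = [ 1,  1, -1, -1]                 # target vectors: valid +/-1 codes
X2 = [ 1, -1,  1, -1]
Z3 = [ 1,  0,  0,  0]                 # slack vectors: deliberately not codes
Z4 = [ 0,  1,  0,  0]

T   = np.array([X1, X2, Z3, Z4], dtype=float).T   # columns are eigenvectors
Lam = np.diag([1.5, 1.5, 0.5, 0.5])               # the [Lambda] matrix

A = T @ Lam @ np.linalg.inv(T)        # information storage matrix, Eq. (5)

assert np.allclose(A @ T, T @ Lam)    # Equation (1): [A][T] = [T][Lambda]
```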
- the information storage matrix [A] is more general, i.e., it does not have to be symmetric, or closely symmetric, and it does not require the diagonal elements to be equal to zero as in the Hopfield network. This means that the hardware realization is also more general.
- the cognitive behavior of the information storage matrix is more easily understood than the prior art. When an input vector is presented to the network and the network converges to a solution which is not a desired or targeted vector, a cognitive solution has been reached, which is, in general, a linear combination of target vectors.
- the inclusion of the matrix [Λ] as one of the factors in forming the information storage matrix is also a feature not present in the Hopfield network.
- the speed of convergence to a target solution is controllable by the selection of the values of the [Λ] matrix.
- Figure 1 is a generalized schematic diagram of a portion of a parallel distributed processing network the connection weights of which are characterized by the components of an information storage matrix in accordance with the present invention
- Figure 2A is a schematic diagram of a given amplifier, including the feedback and biasing resistors, corresponding to an element in the information storage matrix having a value greater than zero;
- Figure 2B is a schematic diagram of a given amplifier, including the feedback and biasing resistors, corresponding to an element in the information storage matrix having a value less than zero;
- Figure 3 is a schematic diagram of a nonlinear thresholding amplifier implementing the synaptic action of the function σ(.);
- Figure 4 is a schematic diagram of a parallel distributed processing network where N equals 4, used in Example II.
- the parallel distributed processing network in accordance with the present invention will first be discussed in terms of its underlying theory and mathematical basis, after which schematic diagrams of various implementations thereof will be presented. Thereafter, several examples of the operation of the parallel distributed processing network in accordance with the present invention will be given.
- the network defines an N-dimensional vector space. Such a space has a topology comprising one or more localized equilibrium points, each surrounded by a basin toward which the network operation will gravitate when presented with an unknown input.
- the input is usually presented to the network in the form of a digital code, comprised of N binary digits, usually with values of 1 and -1.
- the network in accordance with the present invention may be characterized using an [N x N] matrix, hereafter termed the information storage matrix, that specifies the connection weights between the amplifiers implementing the parallel distributed processing network. Because of the operational symmetry inherent when using the information storage matrix, only one-half of the 2^N possible input codes are distinct; the other codes (2^(N-1) in number) are complementary.
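The complement symmetry can be checked directly: [A] is linear and the squasher is an odd function, so negating an input code negates the entire trajectory, and a code and its complement settle in complementary basins. A minimal sketch, reusing the example matrix and assumed eigenvalues from above:

```python
import numpy as np

T = np.array([[1, 1, 1, 0],
              [1, -1, 0, 1],
              [-1, 1, 0, 0],
              [-1, -1, 0, 0]], dtype=float)       # columns X1, X2, Z3, Z4
A = T @ np.diag([1.5, 1.5, 0.5, 0.5]) @ np.linalg.inv(T)

def recall(x):
    x = np.asarray(x, dtype=float)
    for _ in range(100):
        x_new = np.clip(A @ x, -1.0, 1.0)
        if np.allclose(x_new, x):
            break
        x = x_new
    return x

x = np.array([1, 1, -1, 1])
# [A](-x) = -[A]x and the squasher is odd, so a code and its complement
# trace mirror-image trajectories:
assert np.allclose(recall(-x), -recall(x))
```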
- the information storage matrix [A] is the [N x N] matrix that satisfies the matrix equation [A][T] = [T][Λ] (1).
- Equation (1) defines an eigenvalue problem in which each λ, that is, each element in the [Λ] matrix, is an eigenvalue, and each column vector in the similarity transformation matrix [T] is the associated eigenvector.
- Equation (1) can have up to N distinct solution pairs.
- [A] = [T][Λ][T]^-1 (5).
- the matrix [T] is termed the "similarity transformation matrix" and is an [N x N] matrix the columns of which are formed from a predetermined number (M) of [N x 1] target vectors.
- Each target vector takes the form of one of the 2^N possible codes able to be accommodated by the N-dimensional space representing the network.
- Each target vector represents one of the desired outputs, or targets, of the parallel distributed processing network.
- Each target vector contains information that is desired to be stored in some fashion and retrieved at some time in the future.
- each target vector in the set is linearly independent from the other target vectors, and any vector X_i in N-dimensional space can thus be expressed as a linear combination of the set of target vectors.
- the inverse of the similarity transformation matrix [T] exists.
- Some or all of the M target vectors may, if desired, be orthogonal to each other.
- the number M of target vectors may be less than the number N, the dimension of the information storage matrix [A]. If fewer than N target vectors are specified (that is, M < N), the remainder of the similarity transformation matrix [T] is completed by a predetermined number (Q = N - M) of slack vectors.
- the slack vectors are fictitious from the storage point of view since they do not require the data format characteristic of target vectors. However, it turns out that in most applications the slack vectors are important.
- the elements of the slack vectors should be selected such that the slack vectors do not describe one of the possible codes of the network. For example, if in a typical case the target vectors are each represented as a digital string N bits long (that is, composed of the binary digits 1 and -1, e.g., [1 -1 1 . . . -1 -1 1]), forming a slack vector from the same binary digits would suppress the corresponding code.
- a slack vector should be formed of digits that clearly distinguish it from any of the 2^N possible target vectors.
- a slack vector may be formed with one (or more) of its elements having a fractional value, a zero value, and/or positive or negative integer values.
- the slack vectors are important in that they assist in contouring the topology and shaping the basins of the N-dimensional space corresponding to the network.
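The shaping role of the slack vectors is easiest to see in the eigenbasis: writing an input as x = [T]c, each coordinate of c scales by its eigenvalue on every application of [A], so slack directions (assumed here to carry eigenvalues of magnitude 0.5) decay while target directions grow. A small sketch of this, before any squashing is applied:

```python
import numpy as np

T = np.array([[1, 1, 1, 0],
              [1, -1, 0, 1],
              [-1, 1, 0, 0],
              [-1, -1, 0, 0]], dtype=float)       # columns X1, X2, Z3, Z4
A = T @ np.diag([1.5, 1.5, 0.5, 0.5]) @ np.linalg.inv(T)

x = np.array([1.0, 1.0, 1.0, -1.0])               # an arbitrary probe input
for k in range(5):
    c = np.linalg.solve(T, np.linalg.matrix_power(A, k) @ x)
    print(k, np.round(c, 3))   # target coordinates scale by 1.5**k,
                               # slack coordinates decay by 0.5**k
```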
- the target vectors may form all, or part, of the information storage spectrum of the matrix [A]. If fewer than N target vectors are specified, then the remaining vectors in the transformation matrix are arbitrary, or slack, vectors. In each instance the vectors in the similarity transformation matrix [T] form the geometric spectrum of the information storage matrix [A] (i.e., they are the eigenvectors of [A]).
- the [Λ] matrix is an [N x N] diagonal matrix that represents the collection of all eigenvalues of the information storage matrix [A] and is known as the algebraic spectrum of [A]. Each element of the [Λ] matrix corresponds to a respective one of the target or slack vectors.
- the values assigned to the elements of the [Λ] matrix determine the convergence properties of the network.
- the freedom in selecting the values of the [Λ] matrix implies that the speed of the network can be controlled.
- the time required for the network to reach a decision or to converge to a target after initialization can be controlled by the appropriate selection of the values of the [Λ] matrix.
- the values assigned to the elements of the [Λ] matrix have an impact on the behavior of the network of the present invention. If a preassigned λ_i > 1, then the corresponding eigenvector T_i (which contains desired output information) will determine an asymptote in the N-dimensional information space that will motivate the occurrence of the desired event.
- if a preassigned λ_i < 1, the corresponding eigenvector T_i will determine an asymptote in the N-dimensional information space that will suppress the occurrence of the event. If a preassigned λ_i >> 1, then the network will converge quickly to the corresponding target vector, approximating the feed-forward action of a back propagation network.
- the values assigned to the elements of the [Λ] matrix corresponding to the target vectors are greater than the values of the elements of the [Λ] matrix corresponding to the slack vectors. More specifically, the elements of the diagonal matrix corresponding to the target vectors have an absolute value greater than one, while the values of the elements of the diagonal matrix corresponding to the slack vectors have an absolute value less than one.
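The speed claim can be demonstrated numerically. In the sketch below (assumed eigenvalues, saturating squasher as before), a weak input along target X_1 is amplified by λ on every iteration until it saturates at the code, so a larger target eigenvalue settles in fewer iterations; with λ >> 1 the network reaches the target in a single pass, consistent with the feed-forward analogy above.

```python
import numpy as np

T = np.array([[1, 1, 1, 0],
              [1, -1, 0, 1],
              [-1, 1, 0, 0],
              [-1, -1, 0, 0]], dtype=float)       # columns X1, X2, Z3, Z4
X1 = T[:, 0]

for lam in (1.2, 2.0, 10.0):                      # assumed target eigenvalues
    A = T @ np.diag([lam, lam, 0.5, 0.5]) @ np.linalg.inv(T)
    x, steps = 0.1 * X1, 0                        # weak input along target X1
    while not np.allclose(x, X1) and steps < 100:
        x = np.clip(A @ x, -1.0, 1.0)             # one network iteration
        steps += 1
    print(f"lambda = {lam:5.1f}: settled on X1 after {steps} iterations")
```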
- let X_1, . . ., X_M, Z_{M+1}, . . ., Z_N be a basis in R^N (i.e., the X_i's represent target vectors and the Z_i's are slack vectors), where R^N represents an N-dimensional real vector space. Using this basis, construct a similarity transformation matrix [T] = [X_1, X_2, . . ., X_M, Z_{M+1}, . . ., Z_N]. To it associate the diagonal matrix [Λ] that contains the eigenvalue predetermined for each element in the basis.
- Equation (1) or Equation (6) produces N^2 linearly coupled equations which determine the N^2 coefficients of [A].
- a second method is the Delta-rule method.
- a set of linear equations is formed:
[A] X_1 = λ_1 X_1
. . .
[A] X_M = λ_M X_M
[A] Z_{M+1} = λ_{M+1} Z_{M+1}
. . .
[A] Z_N = λ_N Z_N
in which the λ_i's are predetermined eigenvalues.
- the Gaussian elimination technique is faster than the Delta rule. If the inverse of the [T] matrix exists, the information storage matrix may be found by forming the matrix product of Equation (5).
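For a numerical realization, the Gaussian elimination route amounts to solving the linear system [A][T] = [T][Λ] directly, while Equation (5) forms the explicit inverse. A minimal sketch comparing the two, using the example basis and assumed eigenvalues from above (`np.linalg.solve` performs LU-based elimination internally):

```python
import numpy as np

T = np.array([[1, 1, 1, 0],
              [1, -1, 0, 1],
              [-1, 1, 0, 0],
              [-1, -1, 0, 0]], dtype=float)
Lam = np.diag([1.5, 1.5, 0.5, 0.5])

# Gaussian elimination: solve T' A' = (T Lam)' for A' (so that A T = T Lam)
# without ever forming [T]^-1 explicitly.
A_ge = np.linalg.solve(T.T, (T @ Lam).T).T

# Equation (5): the matrix product [T][Lambda][T]^-1.
A_mp = T @ Lam @ np.linalg.inv(T)

assert np.allclose(A_ge, A_mp)
```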
- FIG. 1 is a generalized schematic diagram of a portion of a parallel distributed processing network in accordance with the present invention.
- the network, generally indicated by the reference character 10, includes a plurality of amplifiers, or nodes, 12.
- the network 10 includes N amplifiers 12-1 through 12-N, where N corresponds to the dimension of the information storage matrix [A] derived as discussed earlier.
- in Figure 1 only four of the amplifiers 12 are shown, namely the first amplifier 12-1, the i-th and the j-th amplifiers 12-i and 12-j, respectively, and the last amplifier 12-N.
- the interconnection of the other of the N amplifiers comprising the network 10 is readily apparent from the drawing of Figure 1.
- Figure 4 is a schematic diagram that illustrates a specific parallel distributed processing network 10 used in Example II to follow where N is equal to 4, and is provided only to illustrate a fully interconnected network 10. The specific values of the resistors used in the network shown in Figure 4 are also shown.
- Each amplifier 12 has an inverting input port 16, a noninverting input port 18, and an output port 20.
- the output port 20 of each amplifier 12 is connected to the inverting input port 16 thereof by a line containing a feedback resistor 22.
- the output port 20 of each amplifier 12 is applied to a squasher 26 which implements the thresholding nonlinearity, or synaptic squashing, σ(.) discussed earlier.
- the detailed diagram of the squasher 26 is shown in Figure 3.
- the interconnection of the output of any given amplifier to the input of another amplifier is determined by the value of the connectivity resistor 34 in the corresponding connection line 30.
- each connection line 30 contains a connectivity resistor 34, which is also subscripted by the same variables i, j, denoting that the given subscripted connectivity resistor 34 is connected in the line 30 that connects the j-th input to the i-th output amplifier.
- the connectivity resistor 34 defines the connection weight of the line 30 between the j-th input and the i-th output amplifier.
- the value of the connectivity resistor 34 is related to the corresponding subscripted variable in the information storage matrix, as will be understood from the discussion that follows.
- Each of the lines 30 also includes a delay element 38 having a predetermined signal delay time associated therewith, provided to permit the time sequencing of each iteration needed to implement the iterative action (mathematically defined in Equation (4)) by which the output state of the given amplifier 12 is reached.
- the same subscripted variable scheme as applied to the connection lines and their resistors applies to the delay lines.
- the values assigned to the eigenvalues λ in the [Λ] matrix correspond to the time (or the number of iterations) required for the network 10 to settle to a decision.
- An input vector applied to the network 10 takes the form:
- the information storage matrix [A] when evaluated in the manner earlier discussed, takes the following form:
- each element A_ij of the information storage matrix [A] is either a positive or a negative real constant, or zero.
- the values of the connectivity resistors R_ij may be readily determined from Equation (9). It should be noted that if the element A_ij of the information storage matrix is between zero and 1, then one can use hardware or software techniques to eliminate difficulties in its realization. For example, a software technique would require adjusting coefficients so that the value of A_ij becomes greater than 1; a hardware technique would cascade two inverting amplifiers to provide a positive value in the region specified.
- Figure 3 shows a schematic diagram of the nonlinear thresholding amplifier, or squasher, 26.
- the squasher 26 defines a network that limits the value of the output of the node 12 to a range defined between a predetermined upper limit and a predetermined lower limit.
- the upper and the lower limits are, typically, +1 and -1, respectively.
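In software, the squasher reduces to a one-line saturating nonlinearity. A minimal sketch; the gain parameter is an assumption (the text specifies only the output limits):

```python
import numpy as np

def squasher(v, gain=1.0, lo=-1.0, hi=1.0):
    """Saturating node nonlinearity: linear near zero, hard-limited
    to the range [lo, hi] (here +/-1, per the text)."""
    return np.clip(gain * np.asarray(v, dtype=float), lo, hi)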
- the network 10 may be realized in an electronic hardware implementation, an optical hardware implementation, or in software.
- the electronic hardware implementation may be effected by interconnecting the components thereof using discrete analog devices such as amplifiers, resistors, and delay elements (e.g., capacitors or RC networks), or by the use of integrated circuits.
- the network may be realized using a general purpose digital computer, such as a Hewlett Packard Vectra, a Digital Equipment VAX or a Cray X-MP, operating in accordance with a program.
- the Appendix contains a listing, in Fortran language, whereby the network 10 may be realized on a Digital Equipment VAX computer.
- the listing implements the network shown in Figure 4.
- Example I is an example of the use of the parallel distributed processing network as a Classifier.
- a large corporation has a need to collect and process a personal data file of its constituents.
- the data collected reflects the following personal profile:
- each member has a 6-bit code that describes the personal profile associated with her/his name.
- the name and code are entered jointly into the data file.
- the "member" entry is included to account for the symmetric operation of the network 10 characterized by the information storage matrix.
- This corporation has thousands of constituents and requires a fast parallel distributed processing network that will classify members according to the information given in a profile code.
- T_1 = [X_1, X_2, X_3, Z_4, Z_5, Z_6]
- T_1 and [Λ] will produce the information storage matrix.
- This matrix, when executed against all possible codes (i.e., the 32 distinct codes considered), will produce the four basins illustrated in Table 1.
- the first basin in Table 1 shows all elements of the code that converge to target X_1.
- the third and fourth basins are responsible for X_2 and X_3.
- Each code falling in these target basins will increment a suitable counter.
- T_1 and [Λ] are used to design an information storage matrix for a parallel distributed processing network that executes the function of CLASSIFICATION.
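The classification loop described here (recall each code, then increment a counter for the basin it lands in) can be sketched directly. Below is a minimal 4-bit stand-in for the 6-bit example, since the 6-bit target profiles are not reproduced in this text; it reuses the example matrix and assumed eigenvalues, folds each code together with its complement, and counts anything that settles away from a stored target as a "cognitive" solution in the sense described earlier.

```python
import numpy as np
from itertools import product

T = np.array([[1, 1, 1, 0],
              [1, -1, 0, 1],
              [-1, 1, 0, 0],
              [-1, -1, 0, 0]], dtype=float)
A = T @ np.diag([1.5, 1.5, 0.5, 0.5]) @ np.linalg.inv(T)
targets = {"X1": T[:, 0], "X2": T[:, 1]}

def recall(x):
    x = np.asarray(x, dtype=float)
    for _ in range(100):
        x_new = np.clip(A @ x, -1.0, 1.0)
        if np.allclose(x_new, x):
            break
        x = x_new
    return x

counters = {name: 0 for name in targets}
counters["cognitive"] = 0              # settled, but not on a stored target
for code in product([1, -1], repeat=4):
    y = recall(code)
    for name, t in targets.items():
        if np.allclose(y, t) or np.allclose(y, -t):   # fold complements
            counters[name] += 1
            break
    else:
        counters["cognitive"] += 1
print(counters)
```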
- the codes, when iterated and "squashed" as prescribed in the Figures, produce the basins for targets X_1 and X_2 given below (the tabulated basin entries for X_1 and X_2 are not reproduced in this extract).
- the values of the resistors are derived using Equations (8) and (9).
- the Appendix, containing pages A-1 through A-6, is a Fortran listing implementing the network shown in Figure 4 on a Digital Equipment VAX computer.
- ISM: Information Storage Matrix
- c Array lamb(n,n) corresponds to matrix Lambda of the text.
- c Array xcode(n) corresponds to matrix X of the text.
- c Array temp(n,n) is a temporary storage array that is used in the program but has no corresponding matrix in the text.
- ncode = 2**(n-1)
- xsum = xsum + a(j,k1)*xcode(k1)
- 160 continue
- OUTPUT FILE
- Target vector 1: 1.00 1.00 -1.00 -1.00
- Target vector 2: 1.00 -1.00 1.00 -1.00
- Slack vector 1: 1.00 0.00 0.00 0.00
- Slack vector 2: 0.00 1.00 0.00 0.00
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Neurology (AREA)
- Complex Calculations (AREA)
- Multi Processors (AREA)
Abstract
A single-layer parallel distributed processing network (10) is characterized in that the connection weights between the nodes are defined by an [N x N] information storage matrix [A] which satisfies the matrix equation [A][T] = [T][Λ], where [Λ] is an [N x N] diagonal matrix whose elements are the eigenvalues of the matrix [A], and [T] is an [N x N] similarity transformation matrix whose columns are formed from a predetermined number M of target vectors (where M <= N) and whose remaining columns are formed from a predetermined number Q of slack vectors (where Q = N - M), the two together constituting the eigenvectors of [A].
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US36080489A | 1989-06-02 | 1989-06-02 | |
US360804 | 1989-06-02 |
Publications (2)
Publication Number | Publication Date |
---|---|
EP0474747A1 true EP0474747A1 (en) | 1992-03-18 |
EP0474747A4 EP0474747A4 (en) | 1993-06-02 |
Family
ID=23419470
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19900909020 Withdrawn EP0474747A4 (en) | 1989-06-02 | 1990-05-21 | Parallel distributed processing network characterized by an information storage matrix |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP0474747A4 (en) |
JP (1) | JPH04505678A (en) |
CA (1) | CA2017835A1 (en) |
WO (1) | WO1990015390A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5517667A (en) * | 1993-06-14 | 1996-05-14 | Motorola, Inc. | Neural network that does not require repetitive training |
US6054710A (en) * | 1997-12-18 | 2000-04-25 | Cypress Semiconductor Corp. | Method and apparatus for obtaining two- or three-dimensional information from scanning electron microscopy |
JP6183980B1 (en) * | 2016-12-02 | 2017-08-23 | 国立大学法人東京工業大学 | Neural network circuit device, neural network, neural network processing method, and neural network execution program |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4731747A (en) * | 1986-04-14 | 1988-03-15 | American Telephone And Telegraph Company, At&T Bell Laboratories | Highly parallel computation network with normalized speed of response |
US4752906A (en) * | 1986-12-16 | 1988-06-21 | American Telephone & Telegraph Company, At&T Bell Laboratories | Temporal sequences with neural networks |
US4809193A (en) * | 1987-03-16 | 1989-02-28 | Jourjine Alexander N | Microprocessor assemblies forming adaptive neural networks |
-
1990
- 1990-05-21 EP EP19900909020 patent/EP0474747A4/en not_active Withdrawn
- 1990-05-21 WO PCT/US1990/002699 patent/WO1990015390A1/en not_active Application Discontinuation
- 1990-05-21 JP JP2508502A patent/JPH04505678A/en active Pending
- 1990-05-30 CA CA002017835A patent/CA2017835A1/en not_active Abandoned
Non-Patent Citations (3)
Title |
---|
AIP CONFERENCE PROCEEDINGS 151 : NEURAL NETWORKS FOR COMPUTING 1986, SNOWBIRD , USA pages 386 - 391 SASIELA 'Forgetting as a way to improve neural-net behavior' * |
IEEE FIRST INTERNATIONAL CONFERENCE ON NEURAL NETWORKS vol. 3, 21 June 1987, SAN DIEGO , USA pages 191 - 198 SOMANI 'Compact neural network' * |
See also references of WO9015390A1 * |
Also Published As
Publication number | Publication date |
---|---|
WO1990015390A1 (en) | 1990-12-13 |
CA2017835A1 (en) | 1990-12-02 |
JPH04505678A (en) | 1992-10-01 |
EP0474747A4 (en) | 1993-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Specht | Probabilistic neural networks for classification, mapping, or associative memory | |
Kohonen et al. | Fast adaptive formation of orthogonalizing filters and associative memory in recurrent networks of neuron-like elements | |
Amari | Natural gradient works efficiently in learning | |
US5479579A (en) | Cascaded VLSI neural network architecture for on-line learning | |
Sakar et al. | Growing and pruning neural tree networks | |
Kung et al. | A unified systolic architecture for artificial neural networks | |
US4874963A (en) | Neuromorphic learning networks | |
US4719591A (en) | Optimization network for the decomposition of signals | |
EP0378158A2 (en) | Neural network image processing system | |
Tao | A closer look at the radial basis function (RBF) networks | |
Jha et al. | Direction of arrival estimation using artificial neural networks | |
Cybenko | Neural networks in computational science and engineering | |
Shynk et al. | Convergence properties and stationary points of a perceptron learning algorithm | |
Han et al. | Convergence and limit points of neural network and its application to pattern recognition | |
Fukuda et al. | Structure organization of hierarchical fuzzy model using by genetic algorithm | |
EP0474747A1 (en) | Parallel distributed processing network characterized by an information storage matrix | |
Ramacher | Guide lines to VLSI design of neural nets | |
Malinowski et al. | Capabilities and limitations of feedforward neural networks with multilevel neurons | |
Bang et al. | A hardware annealing method for optimal solutions on cellular neural networks | |
US5426721A (en) | Neural networks and methods for training neural networks | |
Paielli | Simulation tests of the optimization method of Hopfield and Tank using neural networks | |
Shynk et al. | Analysis of a perceptron learning algorithm with momentum updating | |
Youse'zadeh et al. | Neural Networks Modeling of Discrete Time Chaotic Maps | |
Otto et al. | Application of fuzzy neural network to spectrum identification | |
Kuh | Performance of analog neural networks subject to drifting targets and noise |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 19911224 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE CH DE DK ES FR GB IT LI LU NL SE |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 19930415 |
|
AK | Designated contracting states |
Kind code of ref document: A4 Designated state(s): AT BE CH DE DK ES FR GB IT LI LU NL SE |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
|
18W | Application withdrawn |
Withdrawal date: 19930630 |