WO1993000653A1 - Neural network architecture - Google Patents
- Publication number
- WO1993000653A1 WO1993000653A1 PCT/GB1992/001077 GB9201077W WO9300653A1 WO 1993000653 A1 WO1993000653 A1 WO 1993000653A1 GB 9201077 W GB9201077 W GB 9201077W WO 9300653 A1 WO9300653 A1 WO 9300653A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neurons
- unit according
- unit
- outputs
- memory
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
Definitions
- This invention relates to an architecture for use in constructing artificial neural networks.
- Such networks comprise a plurality of artificial neuron-like devices, hereinafter referred to simply as "neurons".
- The invention is particularly intended for use with a particular type of neuron known as a pRAM, and by way of introduction a brief discussion of the construction and operation of a pRAM is given below.
- pRAM a neuron-like device
- The invention is of general application to the architecture of neural networks, and is not restricted to those where the neurons are pRAMs.
- RAM random access memory
- pRAM an abbreviation for "probabilistic RAM"
- pRAM a RAM in which a given output is produced with a given probability between 0 and 1 when a particular storage location in the RAM is addressed, rather than with a probability of either 0 or 1 as in a conventional RAM.
- a device for use in a neural processing network comprising a memory having a plurality of storage locations at each of which a number representing a probability is stored; means for selectively addressing each of the storage locations to cause the contents of the location to be read to an input of a comparator; a noise generator for inputting to the comparator a random number representing noise; and means for causing to appear at an output of the comparator an output signal having a first or second value depending on the values of the numbers received from the addressed storage location and the noise generator, the probability of the output signal having a given one of the first and second values being determined by the number at the addressed location.
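The comparator mechanism just described can be sketched in a few lines of Python (an illustrative sketch, not the patented circuit): the number at the addressed storage location is treated as the probability that the output takes the first value.

```python
import random

def pram_output(memory, address, rng=random.random):
    """Sketch of one pRAM evaluation: read the addressed storage location,
    compare its contents with a random number from the noise generator,
    and emit 1 with probability equal to the stored value."""
    alpha = memory[address]   # stored number representing a probability in [0, 1]
    return 1 if rng() < alpha else 0
```

A location holding 0.7 would thus emit a 1 on roughly 70% of evaluations, which is the probabilistic behaviour that distinguishes a pRAM from a conventional RAM.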
- VLSI chip One way in which a pRAM may be constructed is using a VLSI chip.
- VLSI chips are relatively expensive, and it is presently impractical to fit more than one pRAM, or at most a few pRAMs, on to a single chip, given the substantial chip area which is required for the memory storage of each pRAM, its random number generator and the comparator.
- Neural networks of practical interest generally comprise a large number of neurons, so that using this approach a large number of VLSI chips would be required, with consequent high cost. The problem is accentuated when the neurons are provided with a learning capability, since that further increases the size of the neuron.
- The architecture of the present invention also has the potential for a high degree of flexibility in the connectivity of the neurons.
- a neural network unit having a plurality of neurons, which network comprises a memory providing a plurality of storage locations for each of the neurons, and, in an integrated circuit, means for defining an algorithm for the operation of the neurons and a control unit for causing the neurons to produce outputs on the basis of data stored in the said storage locations and the said algorithm.
- the said integrated circuit is distinct from the said memory.
- the integrated circuit further defines a learning algorithm by which the neurons can undergo a learning process.
- the integrated circuit contains an output list which holds the current outputs of the neurons, and a further output list which holds the previous outputs of the neurons and the previous outputs of the neurons in any other neural network units to which it may be connected.
- the output list could alternatively be in the said memory, though it is preferably in the said integrated circuit.
- a connection pointers table is held either on the said integrated circuit or in the said memory, defining which neuron outputs or external inputs are connected to which neuron inputs.
- the neural network unit has at least one expansion port, and more preferably a plurality of expansion ports, for example four such ports, permitting the unit to be connected to at least one, and preferably a plurality, of other such units.
- the neurons are in the form of pRAMs.
- the means defining the neuron algorithm then preferably comprises a random number generator and a comparator for comparing the contents of addressed storage locations with random numbers produced by the random number generator.
- FIG. 1 shows an embodiment of the invention in block diagram form
- Figure 3 shows how a number of modules according to the invention may be connected to one another
- FIG. 4 shows the way in which the RAM may be organised
- Figure 4a shows the forms of external RAM address used for the RAM shown in Figure 4;
- Figure 5 shows a pRAM configuration which provides for local learning
- Figure 6 shows the way in which the RAM may be organised to allow for the local learning of Figure 5;
- Figure 6a shows a modification of the external RAM address to cater for the RAM organisation of Figure 6.
- the embodiment which will now be described with reference to Figures 1 and 2 is a module which provides 128 pRAMs, though some other number, say 256, could be provided instead.
- the hardware for the module takes the form of a single VLSI chip and a conventional RAM, with appropriate connection between the two and provision for connection to other modules and/or to external inputs.
- the VLSI chip 10 comprises a control unit 11, a pseudo random number generator 12, a comparator 13, a learning block 14 with connections to receive reward and penalty signals r and p from the environment, a memory 15, and address and data latches 16 and 17 via which the chip 10 is connected to a RAM 20.
- the pRAMs, three of the 128 of which are shown in Figure 2, are shown in that Figure as though each were a discrete physical entity, but, as will be apparent from the ensuing description, each is in fact a virtual pRAM.
- the storage locations for all the pRAMs are in the RAM 20, and the pseudo random number generator, comparator and learning block held on the VLSI chip serve successively as those components of each pRAM.
- the current output (0 or 1) of each pRAM is stored in column 0 of what is referred to here as an output list which is part of the VLSI memory 15.
- Column 1 of the output list holds the previous values of the outputs, and columns 2 to 5 hold the previous values of the outputs of four other modules 2 to 5 to which the module in question (regarded as module 1) is connected (see Figure 3).
- the description of this embodiment is on the basis that the module in question is connected to four other modules, but it must be understood that it might be connected to a greater or lesser number of modules, or to no other modules.
- the module has a single output port connected to all of its neighbours, and four serial input ports each connected to a respective neighbour.
- each pRAM The connectivity of each pRAM is specified by associating each of the four address inputs of the pRAM with a respective one of the four connection pointers 0 to 3. These pointers indicate from which pRAM in which module the input concerned is to be taken. Thus, in the illustrated example the connection pointers denote the fact that the inputs to the pRAM 0 are taken from pRAM 4 in module 1, pRAM 3 in module 4, pRAM 6 in module 5 and pRAM 1 in module 4, respectively. If it is intended that some of the inputs to the pRAMs should come from external sources, rather than from other pRAMs, the output list can contain a further column in addition to those shown.
- connection pointer is a 10-bit binary number.
- To identify the number of the pRAM requires seven bits, and a further three bits are required to identify the module number. As far as the latter is concerned, 000 to 100 may be used for example to identify the module number, and 111, say, used to identify an external input.
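As a hedged illustration of the 10-bit pointer layout described above (7 bits of pRAM number, 3 bits of module number; the exact bit ordering is an assumption), the packing and unpacking might look like this:

```python
EXTERNAL_INPUT = 0b111  # module code reserved for an external input

def pack_pointer(module, pram):
    """Pack a 10-bit connection pointer: 3 bits of module number in the
    high bits, 7 bits of pRAM number (0-127) in the low bits.
    The bit ordering is illustrative, not taken from the patent."""
    assert 0 <= module < 8 and 0 <= pram < 128
    return (module << 7) | pram

def unpack_pointer(ptr):
    """Recover the (module, pram) pair from a packed 10-bit pointer."""
    return ptr >> 7, ptr & 0x7F
```

With this layout the whole connection pointers table for 128 four-input pRAMs fits in 512 ten-bit entries, which is why it can live either on the VLSI chip or in the external RAM.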
- VLSI memory The amount of VLSI memory required for each pRAM which shares it is very small, and can be made still smaller by shifting the table of connection pointers to the RAM. All the other requirements of the VLSI are substantially the same independent of the number of pRAMs, and in practice the number of pRAMs which can share a single VLSI chip is limited only by the update rate required (i.e. the frequency with which the memory contents of the pRAMs can be updated). By way of example, it has been found that using a single module of 128 pRAMs it is possible to update all the pRAMs at least every 50 µs, which is faster than the response time of many biological neurons.
- the steps which take place in one way of operating the module described above are as follows: 1) Generate an input vector u for the first of the pRAMs on the basis of the connection pointers for that pRAM, as stored in the connection pointers table.
- the control unit 11 transforms the vector u into the corresponding address for the RAM 20, at which address the contents of the storage location are denoted as α_u.
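These first two steps can be sketched as follows. The address layout is an assumption for illustration, with each 4-input pRAM owning a block of 2^4 = 16 storage locations:

```python
def input_vector(pointers, output_lists):
    """Step (1): assemble the input vector u for one pRAM by following its
    four connection pointers ((module, pram) pairs) into the output lists."""
    return [output_lists[module][pram] for module, pram in pointers]

def ram_address(pram_index, u):
    """Step (2): map (pRAM number, input vector) to the RAM address holding
    alpha_u.  A 4-input pRAM addresses one of 16 locations in its block."""
    offset = sum(bit << i for i, bit in enumerate(u))
    return (pram_index << 4) | offset
```

Using the connectivity of the illustrated example, pRAM 0's pointers would select the outputs of pRAM 4 in module 1, pRAM 3 in module 4, pRAM 6 in module 5 and pRAM 1 in module 4.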
- each module is not only sending column 0 of its output list to its neighbouring modules, but is also receiving from them, via its input ports, column 0 of their output lists.
- serial output ports can be used to communicate not only with neighbouring modules but also with a host computer, or with an interface to a system which the network controls or to which it provides information.
- Step (6) above refers to a form of reinforcement training known as global reward/penalty learning, in which the contents of at least one of the storage locations in the neuron (in this case a pRAM) are altered on the basis of signals from the environment signifying success or failure of the network as a whole (the reward and penalty signals, r and p).
- Suitable algorithms for carrying out such reinforcement training are described in the International Patent Applications mentioned above and in a paper entitled "Reinforcement training strategies for probabilistic RAMs" by D. Gorse and J.G. Taylor in: Proceedings of Euronet '90, Moscow, September 1990, pp. 98-100.
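One common shape of such a reward/penalty rule (a hedged sketch after the Gorse and Taylor reference; the constants rho and lam are illustrative, not values from the patent) moves the addressed location toward the emitted output on reward and toward its complement on penalty:

```python
def reward_penalty_update(alpha_u, a, r, p, rho=0.1, lam=0.05):
    """Update the addressed memory content alpha_u given the pRAM output a
    (0 or 1) and the environment's reward and penalty signals r and p
    (each 0 or 1).  rho is a learning rate; lam weights the penalty term."""
    a_bar = 1 - a
    return alpha_u + rho * (r * (a - alpha_u) + lam * p * (a_bar - alpha_u))
```

Because both correction terms are convex combinations toward values in {0, 1}, the updated content remains a valid probability.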
- connection pointer table is in the RAM.
- the external RAM address is then defined as shown in Fig 4a.
- the learning block 14 required for this is identical to that required for global learning, and the difference derives from the fact that whereas in global learning the inputs r and p are the same for all neurons, in local learning r and p are locally and separately generated for each neuron.
- auxiliary pRAMs One way of doing this is by using auxiliary pRAMs, and this is shown in Figure 5.
- a pair of 5-pRAMs, i.e. pRAMs with 5 inputs
- the auxiliary pRAMs have fixed memory contents which, like the memory contents of the 4-pRAMs, are held in external RAM.
- the way in which the memory contents of the auxiliary pRAMs are chosen is similar to the basis used for the i-pRAMs discussed in the above mentioned International Patent Applications and in the Euronet '90 paper mentioned above.
- the outputs of the auxiliary pRAMs are the signals r and p.
- step (6) The process of updating the pRAM memory using this form of learning closely resembles that for the global reward-penalty learning pRAM except for step (6). Since local learning does not depend on the performance of the whole net, the learning procedure can be performed locally as each pRAM is processed. In this case, step (6) will normally be performed between steps (4) and (5) as this is a more efficient method. Thus the pRAM memory is updated whilst the current α_u and a (the pRAM output) are valid.
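Under local learning, the per-pRAM processing loop can be sketched as follows, with the update performed while α_u and a are both still valid (the constants and the address layout are illustrative assumptions, not taken from the patent):

```python
import random

def process_pram_locally(memory, pram_index, u, r, p, rho=0.1, lam=0.05):
    """Evaluate one pRAM (step 4) and immediately apply the local
    reward/penalty update (step 6) to the still-addressed location,
    before moving on to the next pRAM."""
    addr = (pram_index << 4) | sum(bit << i for i, bit in enumerate(u))
    alpha_u = memory[addr]
    a = 1 if random.random() < alpha_u else 0   # fire with probability alpha_u
    # step (6), applied while alpha_u and a are both valid:
    memory[addr] = alpha_u + rho * (r * (a - alpha_u)
                                    + lam * p * ((1 - a) - alpha_u))
    return a
```

Interleaving the update this way avoids re-addressing the RAM in a separate learning pass, which is the efficiency gain the text describes.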
- the use of local learning requires some alteration to the organisation of the RAM compared to that used for global learning.
- extra bits are required on the address bus to select 'r' and 'p' memory, e.g. as shown in Figure 6a.
- the block select bits may for example be:
- Other forms of learning may be used instead of reward/penalty learning (either global or local).
- use may be made of gradient descent, back-propagation, Kohonen topographic maps and Hebbian learning, all of which are established techniques in the field of neural networks.
- the output of each pRAM can be transmitted to the neighbouring pRAM modules as soon as it is generated, rather than (as in step 7 described above) having a separate step in which the outputs of all the pRAMs in a module are transmitted. If this is done, however, extra RAM storage for copies of column 0 in the output list must be provided in this and neighbouring modules.
- the architecture of the present invention is of general application in neural networks, and is not restricted to those where the neurons are pRAMs. This architecture is also applicable when a learning module is not present.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE69218941T DE69218941T2 (en) | 1991-06-21 | 1992-06-16 | NEURONAL NETWORK ARCHITECTURE |
US08/167,883 US5564115A (en) | 1991-06-21 | 1992-06-16 | Neural network architecture with connection pointers |
JP5501382A JPH07500198A (en) | 1991-06-21 | 1992-06-16 | neural network architecture |
EP92912519A EP0591286B1 (en) | 1991-06-21 | 1992-06-16 | Neural network architecture |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB919113553A GB9113553D0 (en) | 1991-06-21 | 1991-06-21 | Neural network architecture |
GB9113553.3 | 1991-06-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1993000653A1 true WO1993000653A1 (en) | 1993-01-07 |
Family
ID=10697193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB1992/001077 WO1993000653A1 (en) | 1991-06-21 | 1992-06-16 | Neural network architecture |
Country Status (9)
Country | Link |
---|---|
US (1) | US5564115A (en) |
EP (1) | EP0591286B1 (en) |
JP (1) | JPH07500198A (en) |
AT (1) | ATE151545T1 (en) |
AU (1) | AU2020292A (en) |
CA (1) | CA2112113A1 (en) |
DE (1) | DE69218941T2 (en) |
GB (1) | GB9113553D0 (en) |
WO (1) | WO1993000653A1 (en) |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7818212B1 (en) | 1999-10-22 | 2010-10-19 | Ewinwin, Inc. | Multiple criteria buying and selling model |
US7693748B1 (en) | 1991-06-03 | 2010-04-06 | Ewinwin, Inc. | Method and system for configuring a set of information including a price and volume schedule for a product |
US7089218B1 (en) * | 2004-01-06 | 2006-08-08 | Neuric Technologies, Llc | Method for inclusion of psychological temperament in an electronic emulation of the human brain |
US7925492B2 (en) | 2004-01-06 | 2011-04-12 | Neuric Technologies, L.L.C. | Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain |
GB9623919D0 (en) * | 1996-11-18 | 1997-01-08 | Ericsson Telefon Ab L M | ATM Switch |
FI105428B (en) * | 1998-05-13 | 2000-08-15 | Nokia Mobile Phones Ltd | Procedure for correcting errors of parallel A / D converter, a corrector and a parallel A / D converter |
AU4981400A (en) | 1999-05-12 | 2000-12-05 | Ewinwin, Inc. | Multiple criteria buying and selling model, and system for managing open offer sheets |
US8311896B2 (en) | 1999-05-12 | 2012-11-13 | Ewinwin, Inc. | Multiple criteria buying and selling model |
US7689469B1 (en) | 1999-05-12 | 2010-03-30 | Ewinwin, Inc. | E-commerce volume pricing |
US7124099B2 (en) * | 1999-05-12 | 2006-10-17 | Ewinwin, Inc. | E-commerce volume pricing |
US8290824B1 (en) | 1999-05-12 | 2012-10-16 | Ewinwin, Inc. | Identifying incentives for a qualified buyer |
US7181419B1 (en) | 2001-09-13 | 2007-02-20 | Ewinwin, Inc. | Demand aggregation system |
US8626605B2 (en) | 1999-05-12 | 2014-01-07 | Ewinwin, Inc. | Multiple criteria buying and selling model |
US20110213648A1 (en) | 1999-05-12 | 2011-09-01 | Ewinwin, Inc. | e-COMMERCE VOLUME PRICING |
US8732018B2 (en) | 1999-05-12 | 2014-05-20 | Ewinwin, Inc. | Real-time offers and dynamic price adjustments presented to mobile devices |
US7593871B1 (en) | 2004-06-14 | 2009-09-22 | Ewinwin, Inc. | Multiple price curves and attributes |
US8140402B1 (en) | 2001-08-06 | 2012-03-20 | Ewinwin, Inc. | Social pricing |
US7899707B1 (en) | 2002-06-18 | 2011-03-01 | Ewinwin, Inc. | DAS predictive modeling and reporting function |
US7689463B1 (en) | 2002-08-28 | 2010-03-30 | Ewinwin, Inc. | Multiple supplier system and method for transacting business |
US8590785B1 (en) | 2004-06-15 | 2013-11-26 | Ewinwin, Inc. | Discounts in a mobile device |
US7364086B2 (en) * | 2003-06-16 | 2008-04-29 | Ewinwin, Inc. | Dynamic discount card tied to price curves and group discounts |
US8473449B2 (en) * | 2005-01-06 | 2013-06-25 | Neuric Technologies, Llc | Process of dialogue and discussion |
TWI392404B (en) * | 2009-04-02 | 2013-04-01 | Unimicron Technology Corp | Circuit board and manufacturing method thereof |
US9189729B2 (en) * | 2012-07-30 | 2015-11-17 | International Business Machines Corporation | Scalable neural hardware for the noisy-OR model of Bayesian networks |
US11062229B1 (en) * | 2016-02-18 | 2021-07-13 | Deepmind Technologies Limited | Training latent variable machine learning models using multi-sample objectives |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0375054A1 (en) * | 1988-12-23 | 1990-06-27 | Laboratoires D'electronique Philips | Artificial neural integrated circuit with learning means |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4989256A (en) * | 1981-08-06 | 1991-01-29 | Buckley Bruce S | Self-organizing circuits |
FR2639461A1 (en) * | 1988-11-18 | 1990-05-25 | Labo Electronique Physique | BIDIMENSIONAL ARRANGEMENT OF MEMORY POINTS AND STRUCTURE OF NEURON NETWORKS USING SUCH ARRANGEMENT |
US5293459A (en) * | 1988-12-23 | 1994-03-08 | U.S. Philips Corporation | Neural integrated circuit comprising learning means |
US5063521A (en) * | 1989-11-03 | 1991-11-05 | Motorola, Inc. | Neuram: neural network with ram |
JP3260357B2 (en) * | 1990-01-24 | 2002-02-25 | 株式会社日立製作所 | Information processing device |
GB9014569D0 (en) * | 1990-06-29 | 1990-08-22 | Univ London | Devices for use in neural processing |
US5197114A (en) * | 1990-08-03 | 1993-03-23 | E. I. Du Pont De Nemours & Co., Inc. | Computer neural network regulatory process control system and method |
US5167009A (en) * | 1990-08-03 | 1992-11-24 | E. I. Du Pont De Nemours & Co. (Inc.) | On-line process control neural network using data pointers |
-
1991
- 1991-06-21 GB GB919113553A patent/GB9113553D0/en active Pending
-
1992
- 1992-06-16 EP EP92912519A patent/EP0591286B1/en not_active Expired - Lifetime
- 1992-06-16 DE DE69218941T patent/DE69218941T2/en not_active Expired - Fee Related
- 1992-06-16 CA CA002112113A patent/CA2112113A1/en not_active Abandoned
- 1992-06-16 AU AU20202/92A patent/AU2020292A/en not_active Abandoned
- 1992-06-16 US US08/167,883 patent/US5564115A/en not_active Expired - Fee Related
- 1992-06-16 WO PCT/GB1992/001077 patent/WO1993000653A1/en active IP Right Grant
- 1992-06-16 JP JP5501382A patent/JPH07500198A/en active Pending
- 1992-06-16 AT AT92912519T patent/ATE151545T1/en not_active IP Right Cessation
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0375054A1 (en) * | 1988-12-23 | 1990-06-27 | Laboratoires D'electronique Philips | Artificial neural integrated circuit with learning means |
Non-Patent Citations (4)
Title |
---|
IEEE FIRST INTERNATIONAL CONFERENCE ON NEURAL NETWORKS vol. 3, 21 June 1987, SAN DIEGO,CA, USA pages 443 - 452 GARTH 'A chipset for high speed simulation of neural network systems' * |
INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS vol. 2, 17 June 1990, SAN DIEGO, CA ,USA pages 593 - 598 WIKE 'The VLSI implementation of STONN' * |
SIMULATION vol. 31, no. 5, November 1978, LA JOLLA, CAL US pages 145 - 153 WITTIE 'MICRONET : a reconfigurable microcomputer network for distributed systems research' * |
THE 15TH ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE 30 May 1988, HONOLULU, HAWAII,USA pages 3 - 11 GHOSH 'Critical issues in mapping neural networks on message -passing multicomputers' * |
Also Published As
Publication number | Publication date |
---|---|
EP0591286B1 (en) | 1997-04-09 |
DE69218941D1 (en) | 1997-05-15 |
EP0591286A1 (en) | 1994-04-13 |
AU2020292A (en) | 1993-01-25 |
CA2112113A1 (en) | 1993-01-07 |
US5564115A (en) | 1996-10-08 |
JPH07500198A (en) | 1995-01-05 |
GB9113553D0 (en) | 1991-08-14 |
DE69218941T2 (en) | 1997-11-06 |
ATE151545T1 (en) | 1997-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5564115A (en) | Neural network architecture with connection pointers | |
US5175798A (en) | Digital artificial neuron based on a probabilistic ram | |
US5613044A (en) | Learning machine synapse processor system apparatus | |
Clarkson et al. | The pRAM: An adaptive VLSI chip | |
US5799134A (en) | One dimensional systolic array architecture for neural network | |
US5131073A (en) | Neuron unit and neuron unit network | |
US5621862A (en) | Information processing apparatus for implementing neural network | |
US5634063A (en) | Neural network and method for operating the same | |
US5751913A (en) | Reconfigurable neural network and difference-square neuron | |
WO1991019259A1 (en) | Distributive, digital maximization function architecture and method | |
US5481646A (en) | Neuron unit and neuron unit network | |
Cinque et al. | Fast pyramidal algorithms for image thresholding | |
JPH05165987A (en) | Signal processor | |
EP0674792B1 (en) | Neural network architecture | |
US5274747A (en) | Neuron unit for processing digital information | |
US5440671A (en) | Neural net model and neural net optimizing method and system for realizing the same | |
CN112949834A (en) | Probability calculation pulse type neural network calculation unit and architecture | |
Lee et al. | Parallel digital image restoration using adaptive VLSI neural chips | |
WO2020186364A1 (en) | Multiport memory with analog port | |
US5185851A (en) | Neuron unit and neuron unit network | |
Hendrich | A scalable architecture for binary couplings attractor neural networks | |
Jutten et al. | Simulation machine and integrated implementation of neural networks: A review of methods, problems and realizations | |
Alhalabi et al. | Hybrid Chip Set for Artificial Neural Network Systems | |
Serrano et al. | A CMOS VLSI analog current-mode high-speed ART1 chip | |
Lu | Synthesis of neural networks for associative memories |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AT AU BB BG BR CA CH CS DE DK ES FI GB HU JP KP KR LK LU MG MN MW NL NO PL RO RU SD SE US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH DE DK ES FR GB GR IT LU MC NL SE BF BJ CF CG CI CM GA GN ML MR SN TD TG |
|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
EX32 | Extension under rule 32 effected after completion of technical preparation for international publication | ||
LE32 | Later election for international application filed prior to expiration of 19th month from priority date or according to rule 32.2 (b) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2112113 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1992912519 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 08167883 Country of ref document: US |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
WWP | Wipo information: published in national office |
Ref document number: 1992912519 Country of ref document: EP |
|
WWG | Wipo information: grant in national office |
Ref document number: 1992912519 Country of ref document: EP |