WO1992000572A1 - Neural processing devices with learning capability - Google Patents
Neural processing devices with learning capability
- Publication number
- WO1992000572A1 (PCT/GB1991/001053)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- location
- value
- addressed
- probability
- output
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
Definitions
- This invention relates to artificial neuron-like devices (hereinafter referred to simply as “neurons”) for use in neural processing.
- RAM: random access memory
- the pRAM is a hardware device with intrinsically neuron-like behaviour (Figure 1). It maps binary inputs [5] (representing the presence or absence of a pulse on each of N input lines) to a binary output [4].
- α_u represents a probability.
- α_u is represented as an M-bit integer in the memory locations [3], having a value in the range 0 to 2^M − 1, and these values represent probabilities in the range 0 to 1, values which have a neuro-biological interpretation: it is this feature which allows networks of pRAMs, with suitably chosen memory contents, to closely mimic the behaviour of living neural systems.
- all 2^N memory components are independent random variables.
- a deterministic (α_u ∈ {0,1}) pRAM can realise any of the possible binary functions of its inputs. pRAMs differ from units more conventionally used in neural network applications in that noise is introduced at the synaptic rather than the threshold level; it is well known that synaptic noise is the dominant source of stochastic behaviour in biological neurons.
- This noise, v, is introduced by the noise generator [1].
- v is an M-bit integer which varies over time and is generated by a random number generator.
- the comparator [2] compares the value stored at the memory location being addressed with v. One way of doing this is to add the value stored at the addressed location to v.
- if the addition results in a carry bit, a spike representing a 1 is generated on arrival of the clock pulse [7]. If there is no carry bit, no such spike is generated, and this represents a 0. It can be seen that the probability of a 1 being generated is equal to the probability represented by the number stored at the addressed location, and it is for this reason that the latter is referred to as a probability. It should be noted that the same result could be achieved in other ways, for example by generating a 1 if the value of the probability was greater than v.
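To make the carry-out mechanism concrete, the following minimal sketch simulates one pRAM clock period in software (the function name `pram_step` and the use of Python are illustrative assumptions, not part of the patent):

```python
import random

def pram_step(memory, address, M=8):
    """One clock period: add the addressed M-bit content to an M-bit
    random number; the carry-out bit is the output spike."""
    alpha = memory[address]           # M-bit integer, 0 .. 2^M - 1
    v = random.randrange(2 ** M)      # noise generator [1]
    return int(alpha + v >= 2 ** M)   # comparator [2]: carry => spike

# The carry occurs with probability alpha / 2^M, i.e. exactly the
# probability represented by the stored number.
```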
- pRAM networks operate in terms of 'spike trains' (streams of binary digits produced by the addressing of successive memory locations), so information about the timing of firing events is retained; this potentially allows phenomena such as the observed phase-locking of visual neurons to be reproduced by pRAM nets, with the possibility of using such nets as part of an effective 'vision machine'.
- Figure 9 shows a simple neural network comprising two pRAMs denoted as RAM 1 and RAM 2. It will be understood that for practical applications much more extensive networks are required, the nature of which depends on the application concerned. Nevertheless, the network shown in Figure 9 illustrates the basic principles. It will be seen that each pRAM has an output OUT and a pair of inputs denoted IN1 and IN2. Each output corresponds to the output [4] shown in Figure 1.
- the output from RAM 1 is applied to the input IN1 of RAM 1, and the output from RAM 2 is applied to the input IN2 of RAM 1.
- the output from RAM 1 is also applied to the input IN2 of RAM 2, and the output of RAM 2 is applied to the input IN1 of RAM 2.
- the network operates in response to clock signals received from the circuit labelled TIMING & CONTROL.
- The circuitry of RAM 1 is shown in detail in Figure 10.
- RAM 2 is identical, except that for each reference in Figure 10 to RAM 1 there should be substituted a reference to RAM 2 and vice versa.
- RAM 1 comprises a random number generator. This is of conventional construction and will therefore not be described here in detail.
- the embodiment shown here employs shift registers, and 127 stages are used to give a sequence length of 2^127 − 1.
- the random number generator has an array of three EXOR gates having inputs 2, 3 and 4 which can be connected to selected ones of the taps T of the shift registers.
- the taps selected in RAM 1 will be different to those selected in RAM 2 and appropriate selection, according to criteria well known to those in the art, avoids undesired correlation between the random numbers generated by the two generators.
- the output of the random number generator is an 8-bit random number which is fed as two 4-bit segments to two adders which make up a comparator.
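As a software illustration of such a generator, here is a sketch of a 127-stage Fibonacci LFSR whose feedback bit is the EXOR of selected taps; the tap positions used below are placeholders, since the patent leaves the actual selection to the designer:

```python
def make_lfsr(taps, seed, nbits=127):
    """127-stage shift register; the feedback bit is the EXOR of the taps."""
    state = seed & ((1 << nbits) - 1)
    def next_bit():
        nonlocal state
        fb = 0
        for t in taps:                      # EXOR of the tapped stages
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        return fb
    return next_bit

# Hypothetical taps; a real design chooses taps giving the maximal
# period 2^127 - 1, and different taps in RAM 1 and RAM 2 so that the
# two random number streams are uncorrelated.
rng = make_lfsr(taps=(126, 125, 124, 120), seed=1)
random_byte = sum(rng() << i for i in range(8))   # one 8-bit number
```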
- the illustrated embodiment has a memory which holds four 8-bit numbers held at four addresses.
- the memory is thus addressed by 2-bit addresses.
- the contents of the addressed storage location in the memory are fed to the comparator where they are added to the random number generated at that time.
- the output of the comparator is a '1' if the addition results in a carry bit and is a '0' otherwise.
- the output of the comparator is fed to the output of the RAM (which is labelled OUT in Figure 9) and also to a latch. Here it is held ready to form one bit of the next address to be supplied to the address decoder via which the memory is addressed. As can be seen by taking Figures 9 and 10 together, the other bit of the address (i.e. that supplied to input IN2 of RAM 1) is the output of RAM 2.
- Figure 10 also shows inputs labelled Rl_LOAD and MEMORY DATA which enable the system to be initialised by loading data into the memory at the outset, and an input SCLK by means of which clock pulses are supplied to RAM 1 from a clock generator (see below).
- in Figure 10 there is also an input denoted GENERATE, which is connected to the latch via an inverter gate; it serves to initiate the production of a new output from the pRAM and allows a set of 8 SCLK pulses to occur.
- the clock generator shown in Figure 11 is of conventional construction and will therefore not be described in detail, its construction and operation being self-evident to a man skilled in the art from the Figure.
- each time a GENERATE pulse occurs, each of RAM 1 and RAM 2 generates a new 8-bit random number (one bit for each SCLK pulse), addresses a given one of the four storage locations in its memory, compares the random number with the contents of the addressed location, and generates an output accordingly.
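The behaviour of the Figure 9 network over successive GENERATE pulses can be sketched in software as follows (the initial memory contents, and the choice of IN1 as the high-order address bit, are illustrative assumptions):

```python
import random

M = 8                                    # 8-bit probabilities

def fire(mem, addr):
    """Carry-out comparison, as in the single-pRAM sketch above."""
    return int(mem[addr] + random.randrange(2 ** M) >= 2 ** M)

mem1 = [200, 55, 128, 10]                # four 8-bit numbers per RAM
mem2 = [30, 240, 90, 170]                # (arbitrary initial contents)
out1 = out2 = 0                          # latched outputs

for t in range(16):                      # sixteen GENERATE pulses
    addr1 = (out1 << 1) | out2           # RAM 1: IN1 = own output
    addr2 = (out2 << 1) | out1           # RAM 2: IN1 = own output
    out1, out2 = fire(mem1, addr1), fire(mem2, addr2)
    print(t, out1, out2)
```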
- Reinforcement training is a strategy used in problems of adaptive control in which individual behavioural units (here to be identified with pRAMs) only receive information about the quality of the performance of the system as a whole, and have to discover for themselves how to change their behaviour so as to improve this. Because it relies only on a global success/failure signal, reinforcement training is likely to be the method of choice for 'on-line' neural network applications.
- a form of reinforcement training for pRAMs has been devised which is fast and efficient (and which is capable, in an embodiment thereof, of being realised entirely with pRAM technology).
- This training algorithm may be implemented using digital or analogue hardware thus making possible the manufacture of self-contained 'learning pRAMs'. Networks of such units are likely to find wide application, for example in the control of autonomous robots. Control need not be centralised; small nets of learning pRAMs could for example be located in the individual joints of a robot limb. Such a control arrangement would in many ways be akin to the semi-autonomous neural ganglia found in insects.
- a device for use in a neural processing network comprising a memory having a plurality of storage locations at each of which a number representing a probability is stored; means for selectively addressing each of the storage locations to cause the contents of the location to be read to an input of a comparator; a noise generator for inputting to the comparator a random number representing noise; means for causing to appear at an output of the comparator an output signal having a first or second value depending on the values of the numbers received from the addressed storage location and the noise generator, the probability of the output signal having a given one of the first and second values being determined by the number at the addressed location; means for receiving from the environment signals representing success or failure of the network; means for changing the value of the number stored at the addressed location if a success signal is received in such a way as to increase the probability of the successful action; and means for changing the value of the number stored at the addressed location if a failure signal is received in such a way as to decrease the probability of the unsuccessful action
- r(t), p(t) are global success and failure signals ∈ {0,1} received from the environment at time t (the environmental response might itself be produced by a pRAM, though it might be produced by many other things).
- a(t) is the unit's binary output, and ρ, λ are constants ∈ [0,1]. The delta function is included to make it clear that only the location which is actually addressed at time t is available to be modified, the contents of the other locations being unconnected with the behaviour that led to reward or punishment at time t.
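Rule (2) itself is not reproduced in this extract; from the definitions just given, its effect is that reward moves the addressed content towards the action taken and punishment moves it away. A hedged sketch of an update with that form (working on probabilities in [0, 1] rather than M-bit integers):

```python
def update_rule2(memory, i, a, r, p, rho=0.05, lam=0.05):
    """Hedged reconstruction of rule (2): only the addressed location i
    changes (the role of the delta function); r reinforces the action a,
    p reinforces its complement."""
    alpha = memory[i]
    delta = rho * ((a - alpha) * r + lam * ((1 - a) - alpha) * p)
    memory[i] = alpha + delta
```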
- Figure 1 shows diagrammatically a pRAM, as described above
- Figure 2 shows diagrammatically an embodiment of a pRAM having learning characteristics according to the present invention
- Figure 3 shows an alternative embodiment of a pRAM having learning characteristics
- Figure 4 shows diagrammatically a pRAM adapted to handle a real-valued input
- Figure 5 shows diagrammatically a pRAM having the ability to implement a more generalised learning rule than that employed in Figure 2;
- Figure 6 shows diagrammatically a pRAM in which eligibility traces (explained below) are added to each memory location;
- Figure 7 shows how a pRAM with eligibility traces can be used to implement Equation 9(a) (for which see below);
- Figure 8 shows the further modifications needed to implement Equation 10 (for which see below);
- Figure 9 shows a simple neural network using two pRAMs
- Figure 10 is a circuit diagram showing one of the pRAMs of Figure 9 in detail.
- Figure 11 is a circuit diagram showing the timing and control circuitry used in Figure 9.
- Figure 2 shows one way in which rule (2) can be implemented in hardware.
- the memory contents α_i(t+1) are updated each clock period according to rule (2).
- the pRAM [8] is identical to the unit shown in Figure 1 and described in the text above. For a given address on the address inputs [5], an output spike is generated as described above.
- the learning rule (2) achieves a close approximation to the theoretically expected final values of the memory contents for a suitably small value of the learning rate constant ρ.
- this may lead to a lengthy time for training.
- ρ may initially be set to a large value and subsequently decremented at each successive time step by a factor which vanishes suitably fast as the number of steps increases.
- the rule (2) may also be realised in hardware using pRAM technology (Figure 3).
- the advantage of this method is that multiplier circuits are not required. However, it requires 2^M cycles to generate α_i(t+1), where M is the number of bits used to represent α_u. It is implementable, in this example, by an auxiliary 4-input pRAM [16] (Figure 3) with input lines carrying α_i(t), a(t), r(t) and p(t) (the order of significance of the bits carried by the lines going from α_i to p), and with memory contents given by
- since α_i(t) ∈ [0,1] and pRAMs are neuron-like objects which communicate via discrete pulses, it is necessary to use time-averaging (over a number of cycles, here denoted by R) to implement the update.
- the output [17] of the auxiliary pRAM [16] in each step consists of the contents of one of two locations in pRAM [16], since a, r and p remain the same and only α_i alters between 0 and 1.
- the output of the pRAM [16], accumulated over R time steps using the integrator [19], is the updated memory content α_i(t+1) = α_i(t) + Δα_i(t), where Δα_i(t) is given by (2).
- the steps used in the update are
- in the analogue case, [19] becomes an integrator which is first cleared and then integrates over R time steps. The output after this period is then written into the pRAM at address i. This is functionally identical to the digital device described above.
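As an illustration of the time-averaging just described (an assumption-laden sketch rather than the circuit of Figure 3): on each of R cycles a spike encoding α_i(t) selects one of two auxiliary contents c0, c1, and the integrator's mean converges on c0 + α_i(c1 − c0), which with suitably chosen contents equals α_i(t) + Δα_i(t):

```python
import random

def time_averaged_update(alpha_i, c0, c1, R=256):
    """Accumulate the auxiliary output [17] over R cycles in the
    integrator [19]; the average estimates c0 + alpha_i * (c1 - c0)."""
    acc = 0.0
    for _ in range(R):
        spike = random.random() < alpha_i   # spike train encoding alpha_i
        acc += c1 if spike else c0          # one of the two locations
    return acc / R                          # becomes alpha_i(t+1)
```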
- An object of a further aspect of the invention is to provide a modified pRAM which enables such inputs to be handled.
- a neuron for use in a neural processing network, comprising a memory having a plurality of storage locations at each of which a number representing a probability is stored; a real number-to-digital converter which receives a plurality of real-valued numbers each in the range 0 to 1 and produces at its output a corresponding plurality of synchronised parallel pulse trains which are applied to the respective lines of the memory to define a succession of storage location addresses, the probability of a pulse representing a 1 being present in an address on a given address line being equal to the value of the real-valued number from which the pulse train applied to that address line was derived; a comparator connected to receive as an input the contents of each of the successively addressed locations, a noise generator for inputting to the comparator a succession of random numbers representing noise; means for causing to appear at an output of the comparator a succession of output signals each having a first or second value depending on the values of the numbers received from the addressed storage locations and the noise generator, the
- the device provided by the invention of the copending application performs mappings from [0,1]^N to {0,1} using ideas of time-averaging similar to those used above to implement the reinforcement training rule (2). It is referred to herein as an integrating pRAM or i-pRAM, and is shown in Figure 4.
- a real-valued input vector [26] x ∈ [0,1]^N is approximated by the time-average (over some period R) of successive binary input patterns i ∈ {0,1}^N produced by the real-to-spike-frequency translator [28]:
- each of the lines [26] which makes up the vector carries a real value in the range 0 to 1.
- the time average of the pulse train carried by a given line [5] is equal to the value on the corresponding line [26].
- the pulse trains on the lines [25] are synchronised with one another.
- the translator [28] might take various forms, and one possibility is for the translator [28] to be a pRAM itself.
- i(r) selects a particular location in the pRAM [8] using the address inputs [5], resulting in a binary output at [4] denoted herein as â(r).
- These outputs are accumulated in a spike integrator [19] (see Figure 4) whose contents were reset at the start of this cycle.
- the integrator [19] comprises a counter which counts the number of 1's received over a fixed interval, and, if there is no lookup table [27], for which see below, a device for generating a binary output [21] in dependence on the number counted. This device may itself operate in the manner of a pRAM with a single storage location, i.e.
- a moving average could be used with the output [21] being generated after the formation of each average.
- f might for example be a sigmoid (with threshold θ and
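Putting the pieces together, a compact sketch of the i-pRAM forward pass (translator [28], pRAM [8], integrator [19], and an assumed sigmoid output stage; the parameter names are illustrative):

```python
import math
import random

def ipram_forward(x, memory, R=128, theta=0.5, beta=10.0):
    """x: real inputs in [0,1]^N; memory: 2^N firing probabilities."""
    count = 0
    for _ in range(R):
        bits = [random.random() < xi for xi in x]     # translator [28]
        addr = sum(int(b) << k for k, b in enumerate(bits))
        count += int(random.random() < memory[addr])  # pRAM [8] spike
    s = count / R                                     # integrator [19]
    prob = 1.0 / (1.0 + math.exp(-beta * (s - theta)))  # assumed f
    return int(random.random() < prob)                # output [21]
```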
- the i-pRAM just described can be developed further to implement a generalised form of the training rule (2).
- rule (2) the input of a single binary address results in the contents of the single addressed location being modified.
- the i-pRAM can be used to implement a generalised form of the training rule (2) in which the input of a real-valued number causes the contents of a plurality of locations to be modified. This is achieved by using an address counter for counting the number of times each of the storage locations is addressed, thus providing what will be referred to herein as a learning i-pRAM.
- This generalised training rule (8) applies the modification of rule (2) to each location u in proportion to X_u.
- the X_u's record the frequency with which addresses have been accessed.
- a simple modification to the memory section of the pRAM (Figure 1) allows the number of times each address is accessed to be recorded using counters or integrators [22] as shown in Figure 5.
- the X_u's could also be recorded in an auxiliary N-input pRAM, and used to modify the memory contents in a similar manner to Figure 3. However, this method takes 2^N times longer than that using the architecture of Figure 5.
- training may be accelerated by letting the learning rate constant, ρ, have an initially high value and tend to zero with time, this being achieved in a similar manner to that described above.
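A hedged sketch of the learning i-pRAM update, taking rule (8) to be rule (2) applied to every location in proportion to its recorded access frequency X_u (the exact displayed form of (8) did not survive extraction, so this is an assumption consistent with the surrounding description):

```python
def update_rule8(memory, X, a, r, p, rho=0.05, lam=0.05, R=128):
    """X[u]: access counts from the counters [22] over R cycles."""
    for u, alpha in enumerate(memory):
        delta = rho * ((a - alpha) * r + lam * ((1 - a) - alpha) * p)
        memory[u] = alpha + delta * (X[u] / R)   # weight by access rate
```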
- Rule (8) may be further generalised in order to deal with situations in which reward or punishment may arrive an indefinite number of time steps after the critical action which caused the environmental response. In such delayed reinforcement tasks it is necessary to learn path-action, rather than position-action associations. This can be done by adding eligibility traces to each memory location as shown in Figure 6. These decay exponentially where a location is not accessed, but otherwise are incremented to reflect both access frequency and the resulting i-pRAM action.
- access means that a storage location with a given address has been accessed
- activity means that when the storage location was accessed it resulted in the pRAM firing (i.e. a(t) = 1).
- Figure 7 shows the mechanism whereby the eligibility trace e_u is updated according to equation (9a), showing that this feature is hardware-realisable.
- the current value of e_u is read from the port [26] and multiplied by the eligibility trace decay rate δ [28] using a multiplier [13].
- this product is combined, using an adder [12], with the product of the pRAM output a(t) [4], the access count data X_u [25], and the complement of the decay rate (1 − δ) [29], before being written back as e_u.
- Updating the f_u term is identical to that above except that it is the complement of the output, a(t), which is used to implement equation (9b).
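A sketch of the trace updates (9a) and (9b) as just described, with δ the decay rate and X_u the access count over the last R cycles (the pairing of a(t) with e_u and of its complement with f_u follows the text; the normalisation by R is an assumption):

```python
def update_traces(e, f, X, a, delta=0.9, R=128):
    """e[u]: 'activity' trace, f[u]: complementary trace."""
    for u in range(len(e)):
        access = X[u] / R                                     # access rate
        e[u] = delta * e[u] + (1 - delta) * a * access        # (9a)
        f[u] = delta * f[u] + (1 - delta) * (1 - a) * access  # (9b)
```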
- Figure 8 shows the operations required in addition to those of Figure 7 in order to implement equation 10.
- Multiplier [31] forms the product of e_u and ᾱ_u, and multiplier [32] forms the product of f_u and ᾱ_u, where ᾱ_u is the complement of the memory content α_u.
- Multiplier [33] forms the product of e_u and α_u, and multiplier [34] forms the product of f_u and α_u.
- the product formed by multiplier [33] is subtracted from the product formed by multiplier [32] in the subtractor [35].
- the product formed by multiplier [34] is subtracted from the product formed by multiplier [31] in the subtractor [36].
- the output of the subtractor [35] is multiplied by a penalty factor p which is an input from the environment to the multiplier [37] at [39].
- the output of the subtractor [36] is multiplied by a reward factor r which is an input from the environment to the multiplier [38] at [40].
- the outputs of the multipliers [37] and [38] are added to the original memory contents at [19] using the adder [12].
- the output from the adder [12] is written back into the memory using the write port [10] and the memory is thereby updated.
- the operations described implement the training rule described in equation 10.
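The datapath just described translates directly into the following sketch (memory contents taken as probabilities in [0, 1], with ᾱ_u = 1 − α_u; any learning-rate scaling and the clipping are assumptions, since the text only specifies the multiplications, subtractions and additions):

```python
def update_rule10(memory, e, f, r, p):
    """Sketch of equation 10 via the Figure 8 datapath."""
    for u, alpha in enumerate(memory):
        reward_term = e[u] * (1 - alpha) - f[u] * alpha    # [36] = [31] - [34]
        penalty_term = f[u] * (1 - alpha) - e[u] * alpha   # [35] = [32] - [33]
        new = alpha + r * reward_term + p * penalty_term   # adder [12]
        memory[u] = min(1.0, max(0.0, new))                # write port [10]
```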
- An alternative to the training rule of equation (8) is a rule which may take more realistic account of the behaviour of the whole i-pRAM. This alternative is expressed by
- g is a suitable function of such as, for
- the devices are described as being realised in dedicated hardware. It will be appreciated that the invention can alternatively be realised in software, using a conventional digital computer to simulate the hardware described, and the present application is intended to encompass that possibility. However, software simulation is unlikely to be practical except for very small networks and the hardware approach is much more practical for larger and therefore more interesting networks.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB909014569A GB9014569D0 (en) | 1990-06-29 | 1990-06-29 | Devices for use in neural processing |
GB9014569.9 | 1990-06-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1992000572A1 true WO1992000572A1 (en) | 1992-01-09 |
Family
ID=10678468
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB1991/001053 WO1992000572A1 (en) | 1990-06-29 | 1991-06-28 | Neural processing devices with learning capability |
PCT/GB1991/001054 WO1992000573A1 (en) | 1990-06-29 | 1991-06-28 | Neural processing devices for handling real-valued inputs |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB1991/001054 WO1992000573A1 (en) | 1990-06-29 | 1991-06-28 | Neural processing devices for handling real-valued inputs |
Country Status (10)
Country | Link |
---|---|
US (2) | US5175798A (de) |
EP (1) | EP0537208B1 (de) |
JP (1) | JPH05508041A (de) |
AT (1) | ATE131642T1 (de) |
AU (2) | AU8214591A (de) |
BR (1) | BR9106607A (de) |
CA (1) | CA2085896A1 (de) |
DE (1) | DE69115488T2 (de) |
GB (1) | GB9014569D0 (de) |
WO (2) | WO1992000572A1 (de) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5324991A (en) * | 1989-07-12 | 1994-06-28 | Ricoh Company, Ltd. | Neuron unit and neuron unit network |
GB9014569D0 (en) * | 1990-06-29 | 1990-08-22 | Univ London | Devices for use in neural processing |
US5613039A (en) * | 1991-01-31 | 1997-03-18 | Ail Systems, Inc. | Apparatus and method for motion detection and tracking of objects in a region for collision avoidance utilizing a real-time adaptive probabilistic neural network |
US5563982A (en) * | 1991-01-31 | 1996-10-08 | Ail Systems, Inc. | Apparatus and method for detection of molecular vapors in an atmospheric region |
US5276772A (en) * | 1991-01-31 | 1994-01-04 | Ail Systems, Inc. | Real time adaptive probabilistic neural network system and method for data sorting |
JPH06511096A (ja) * | 1991-06-21 | 1994-12-08 | ユニバーシティー、カレッジ、ロンドン | ニューロン処理に使用されるデバイス |
GB9113553D0 (en) * | 1991-06-21 | 1991-08-14 | Univ London | Neural network architecture |
JPH05210649A (ja) * | 1992-01-24 | 1993-08-20 | Mitsubishi Electric Corp | 神経回路網表現装置 |
JPH06203005A (ja) * | 1992-10-27 | 1994-07-22 | Eastman Kodak Co | 高速区分化ニューラルネットワーク及びその構築方法 |
EP0636991A3 (de) * | 1993-07-29 | 1997-01-08 | Matsushita Electric Ind Co Ltd | Informationsverarbeitungsgerät zur Durchführung eines neuronalen Netzwerkes. |
US5542054A (en) * | 1993-12-22 | 1996-07-30 | Batten, Jr.; George W. | Artificial neurons using delta-sigma modulation |
GB2292239B (en) * | 1994-07-30 | 1998-07-01 | British Nuclear Fuels Plc | Random pulse generation |
AU8996198A (en) * | 1997-09-04 | 1999-03-22 | Camelot Information Technologies Ltd. | Heterogeneous neural networks |
US6256618B1 (en) | 1998-04-23 | 2001-07-03 | Christopher Spooner | Computer architecture using self-manipulating trees |
US6917443B1 (en) * | 1998-11-18 | 2005-07-12 | Xerox Corporation | Composite halftone screens with stochastically distributed clusters or lines |
US6980956B1 (en) * | 1999-01-07 | 2005-12-27 | Sony Corporation | Machine apparatus and its driving method, and recorded medium |
RU2445668C2 (ru) * | 2009-12-22 | 2012-03-20 | Государственное образовательное учреждение высшего профессионального образования "Санкт-Петербургский государственный горный институт имени Г.В. Плеханова (технический университет)" | Нейросетевой регулятор для управления процессом обжига известняка в печах шахтного типа |
US9628517B2 (en) * | 2010-03-30 | 2017-04-18 | Lenovo (Singapore) Pte. Ltd. | Noise reduction during voice over IP sessions |
US9189729B2 (en) * | 2012-07-30 | 2015-11-17 | International Business Machines Corporation | Scalable neural hardware for the noisy-OR model of Bayesian networks |
US9417845B2 (en) * | 2013-10-02 | 2016-08-16 | Qualcomm Incorporated | Method and apparatus for producing programmable probability distribution function of pseudo-random numbers |
US10839302B2 (en) | 2015-11-24 | 2020-11-17 | The Research Foundation For The State University Of New York | Approximate value iteration with complex returns by bounding |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3327291A (en) * | 1961-09-14 | 1967-06-20 | Robert J Lee | Self-synthesizing machine |
US3613084A (en) * | 1968-09-24 | 1971-10-12 | Bell Telephone Labor Inc | Trainable digital apparatus |
US4040094A (en) * | 1973-02-13 | 1977-08-02 | International Publishing Corporation Ltd. | Electronic screening |
US4031501A (en) * | 1975-02-04 | 1977-06-21 | The United States Of America As Represented By The Secretary Of The Army | Apparatus for electronically locating analog signals |
JPS51116255A (en) * | 1975-04-07 | 1976-10-13 | Asahi Chemical Ind | Tester for yarn quality |
US4518866A (en) * | 1982-09-28 | 1985-05-21 | Psychologics, Inc. | Method of and circuit for simulating neurons |
US4809222A (en) * | 1986-06-20 | 1989-02-28 | Den Heuvel Raymond C Van | Associative and organic memory circuits and methods |
US5148385A (en) * | 1987-02-04 | 1992-09-15 | Texas Instruments Incorporated | Serial systolic processor |
US4996648A (en) * | 1987-10-27 | 1991-02-26 | Jourjine Alexander N | Neural network using random binary code |
US4807168A (en) * | 1987-06-10 | 1989-02-21 | The United States Of America As Represented By The Administrator, National Aeronautics And Space Administration | Hybrid analog-digital associative neural network |
US4972363A (en) * | 1989-02-01 | 1990-11-20 | The Boeing Company | Neural network using stochastic processing |
US5063521A (en) * | 1989-11-03 | 1991-11-05 | Motorola, Inc. | Neuram: neural network with ram |
GB9014569D0 (en) * | 1990-06-29 | 1990-08-22 | Univ London | Devices for use in neural processing |
1990
- 1990-06-29 GB GB909014569A patent/GB9014569D0/en active Pending
- 1990-10-15 US US07/597,827 patent/US5175798A/en not_active Expired - Fee Related
1991
- 1991-06-28 US US07/966,028 patent/US5475795A/en not_active Expired - Fee Related
- 1991-06-28 WO PCT/GB1991/001053 patent/WO1992000572A1/en unknown
- 1991-06-28 DE DE69115488T patent/DE69115488T2/de not_active Expired - Fee Related
- 1991-06-28 JP JP91511408A patent/JPH05508041A/ja active Pending
- 1991-06-28 AU AU82145/91A patent/AU8214591A/en not_active Abandoned
- 1991-06-28 WO PCT/GB1991/001054 patent/WO1992000573A1/en active IP Right Grant
- 1991-06-28 EP EP91911739A patent/EP0537208B1/de not_active Expired - Lifetime
- 1991-06-28 BR BR919106607A patent/BR9106607A/pt unknown
- 1991-06-28 AT AT91911739T patent/ATE131642T1/de not_active IP Right Cessation
- 1991-06-28 CA CA002085896A patent/CA2085896A1/en not_active Abandoned
- 1991-06-28 AU AU81927/91A patent/AU8192791A/en not_active Abandoned
Non-Patent Citations (4)
Title |
---|
NEW ELECTRONICS INCORPORATING ELECTRONICS TODAY, vol. 23, no. 1, January 1990, London, GB, pages 16-18; BOOTHROYD: 'RAM chips build simple quick-thinking networks'; see page 17, left column, lines 2-17 |
PHYSICA D, vol. 34, 1989, Amsterdam, NL, pages 90-114; GORSE: 'An analysis of noisy RAM and neural nets' |
PROCEEDINGS OF THE FIRST IEE INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS, 1989, pages 242-246; CLARKSON: 'Hardware realisable models of neural processing'; cited in the application; see page 243, right column, line 18 to page 244, right column, line 28; figures 3, 4 |
PROCEEDINGS PARALLEL PROCESSING IN NEURAL SYSTEMS AND COMPUTERS, 19 March 1990, Düsseldorf, FRG, pages 161-164; GORSE: 'Training strategies for probabilistic RAMs'; see page 161, line 1 to page 164, line 23 |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1993018474A1 (en) * | 1992-03-11 | 1993-09-16 | University College London | Devices for use in neural processing |
WO1999040521A1 (en) * | 1998-02-05 | 1999-08-12 | Intellix A/S | N-tuple or ram based neural network classification system and method |
US6393413B1 (en) | 1998-02-05 | 2002-05-21 | Intellix A/S | N-tuple or RAM based neural network classification system and method |
AU756987B2 (en) * | 1998-02-05 | 2003-01-30 | Intellix A/S | N-tuple or ram based neural network classification system and method |
WO1999067694A2 (en) * | 1998-06-23 | 1999-12-29 | Intellix A/S | N-tuple or ram based neural network classification system and method |
WO1999067694A3 (en) * | 1998-06-23 | 2000-02-10 | Risoe | N-tuple or ram based neural network classification system and method |
US6999950B1 (en) | 1998-06-23 | 2006-02-14 | Intellix A/S | N-tuple or RAM based neural network classification system and method |
Also Published As
Publication number | Publication date |
---|---|
DE69115488D1 (de) | 1996-01-25 |
EP0537208B1 (de) | 1995-12-13 |
AU8214591A (en) | 1992-01-23 |
AU8192791A (en) | 1992-01-23 |
WO1992000573A1 (en) | 1992-01-09 |
US5175798A (en) | 1992-12-29 |
DE69115488T2 (de) | 1996-05-09 |
GB9014569D0 (en) | 1990-08-22 |
BR9106607A (pt) | 1993-06-01 |
JPH05508041A (ja) | 1993-11-11 |
ATE131642T1 (de) | 1995-12-15 |
CA2085896A1 (en) | 1991-12-30 |
EP0537208A1 (de) | 1993-04-21 |
US5475795A (en) | 1995-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5475795A (en) | Neural processing devices for handling real-valued inputs | |
Schmidhuber | Learning to control fast-weight memories: An alternative to dynamic recurrent networks | |
EP0591286B1 (de) | Neuronale netzwerk architektur | |
US5517597A (en) | Convolutional expert neural system (ConExNS) | |
US5283855A (en) | Neural network and method for training the neural network | |
US5131073A (en) | Neuron unit and neuron unit network | |
Bugmann | Biologically plausible neural computation | |
Siegelmann | The simple dynamics of super Turing theories | |
US4996648A (en) | Neural network using random binary code | |
EP0385436A2 (de) | Fehler absorbierendes System in einem neuronalen Rechner | |
WO1993018474A1 (en) | Devices for use in neural processing | |
Magoulas et al. | A training method for discrete multilayer neural networks | |
US5426721A (en) | Neural networks and methods for training neural networks | |
JP3523325B2 (ja) | ニューラルネットワーク及びこれを用いた信号処理装置、自律システム、自律型ロボット並びに移動システム | |
Liang | Problem decomposition and subgoaling in artificial neural networks | |
JP3256553B2 (ja) | 信号処理装置の学習方法 | |
Gorse et al. | Encoding temporal structure in probabilistic RAM nets | |
Markova et al. | Deep Learning Approach for Identification of Non-linear Dynamic Systems | |
JP2549454B2 (ja) | 神経細胞模倣回路網及び神経細胞模倣ユニット | |
US20230014185A1 (en) | Method and device for binary coding of signals in order to implement digital mac operations with dynamic precision | |
Rossmann et al. | short-and long-term dynamics in a stochastic pulse stream neuron implemented in FPGA | |
JP3338713B2 (ja) | 信号処理装置 | |
CLEE | Neural networks with memory for intelligent computations | |
Tang et al. | A model of neurons with unidirectional linear response | |
JP3463890B2 (ja) | 神経回路模倣素子 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AT AU BB BG BR CA CH CS DE DK ES FI GB HU JP KP KR LK LU MC MG MN MW NL NO PL RO SD SE SU US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE BF BJ CF CG CH CI CM DE DK ES FR GA GB GN GR IT LU ML MR NL SE SN TD TG |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
NENP | Non-entry into the national phase |
Ref country code: CA |