WO2002044926A1 - Neural cortex - Google Patents

Neural cortex

Info

Publication number
WO2002044926A1
WO2002044926A1
Authority
WO
WIPO (PCT)
Prior art keywords
pattern
recognised
index
neural network
input
Prior art date
Application number
PCT/SG2000/000182
Other languages
French (fr)
Other versions
WO2002044926A8 (en)
Inventor
Yang Ming Pok
Alexei Mikhailov
Original Assignee
Yang Ming Pok
Alexei Mikhailov
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yang Ming Pok, Alexei Mikhailov
Priority to MXPA03005942A
Priority to KR10-2003-7007184A
Priority to CA002433999A
Priority to EP00978191A
Priority to NZ526795A
Priority to CNA008200017A
Priority to JP2002547024A
Priority to US10/398,279
Priority to AU2001215675A
Priority to PCT/SG2000/000182
Publication of WO2002044926A1
Publication of WO2002044926A8

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques


Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Neurology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Character Discrimination (AREA)
  • Thermistors And Varistors (AREA)
  • Programmable Controllers (AREA)
  • Control By Computers (AREA)

Abstract

A neural network system includes a random access memory (RAM); and an index-based weightless neural network with a columnar topography; wherein patterns of binary connections and values of output nodes' activities are stored in the RAM. Information is processed by pattern recognition using the neural network by storing a plurality of output patterns to be recognised in a pattern index; accepting an input pattern and dividing the input pattern into a plurality of components; and processing each component according to the pattern index to identify a recognised output pattern corresponding to the input pattern.

Description

NEURAL CORTEX
Field of Invention
The invention relates to an index-based neural network and to a method of processing information by pattern recognition using a neural network. It relates particularly but not exclusively to a neural network computer system which has an index-based weightless neural network with a columnar topography, and to a method whereby an input pattern is divided into a plurality of components and each component is processed according to a single pattern index to identify a recognised output pattern corresponding to the input pattern.
Background to the Invention
An artificial neural network is a structure composed of a number of interconnected units typically referred to as artificial neurons. Each unit has an input/output characteristic and implements a local computation or function. The output of any unit can be determined by its input/output characteristic and its interconnection with other units. Typically the unit input/output characteristics are relatively simple.
There are three major problems associated with artificial neural networks, namely: (a) scaling and hardware size practical limits; (b) network topology; and (c) training. The scaling and hardware size problem arises because there is a relationship between application complexity and artificial neural network size, such that scaling to accommodate a high resolution image may require hardware resources which exceed practical limits.
The network topology problem arises because, although the overall function or functionality achieved is determined by the network topology, there are no clear rules or design guidelines for arbitrary applications.
The training problem arises because training is difficult to accomplish. The n-Tuple Classifier has been proposed in an attempt to address these problems. This classifier was the first suggested RAM-based neural network concept. The first hardware implementation of the n-Tuple concept was the WISARD system developed at Brunel University around 1979 (see "Computer Vision Systems for Industry: Comparisons", appearing as Chapter 10 in "Computer Vision Systems for Industry", I Aleksander, T Stonham and B Wilkie, 1982). The WISARD system belongs to the class of RAM-based weightless neural networks. This style of neural network addresses the problem of computationally intensive training by writing the data into a RAM network, and the problem of topology by suggesting a universal RAM-based network structure. However, the network topology of the WISARD-type universal structure does not simulate the higher levels of neuronal organization found in biological neural networks. This leads to inefficient use of memory, with the consequence that the problem of scaling remains acute within RAM-based neural networks and the application range of the WISARD technology is limited.
Another example of a neural network that overcomes the problem of training by reducing it to a simple memorization task is the Sparse Distributed Memory (P Kanerva, 1988, "Sparse Distributed Memory", Cambridge, MA: MIT Press). However, a problem with the Sparse Distributed Memory, as with the WISARD system, is its large memory size. Another disadvantage of the Sparse Distributed Memory is its computational complexity: for this type of memory, an input word must be compared to all memory locations.
N-Tuple classification systems use a method of recognition whereby an input to the neural network is divided into a number of components (n-tuples), with each component compared to a series of component look-up tables. Normally there is an individual look-up table for each component. The network then processes each component against a large number of look-up tables to determine whether there has been a match. Where a match occurs for a component, that component has been recognised. Recognition of each of the components of an input leads to recognition of the input. The presence of a number of look-up tables results in a potentially large memory size. The memory size required is proportional to the number of components which the network may identify. This can result in a substantial increase in memory where the pattern size increases. For example, an artificial neural network might be designed for an image processing application, initially using an n x n image, where n = 128. This is a relatively low-resolution image by today's standards. Where the image to be processed increases from n = 128 to n = 2048, the number of neurons (the size of the network) increases by a factor of 256. This increase in memory results in a requirement for network expansion, potentially requiring additional hardware modular blocks. As the resolution of the image increases, a point is quickly reached where scaling to accommodate a high-resolution image is beyond a practically achievable memory limit.
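The 256-fold figure quoted above follows directly from the growth in the number of pixels, and hence in the number of n-tuples; as a quick check:

```latex
\frac{2048 \times 2048}{128 \times 128} = 16 \times 16 = 256
```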
An object of the present invention is to address, overcome or alleviate some or all of the disadvantages present in the prior art.
Summary of the Invention
According to a first aspect of the invention, there is provided a neural network system including:
(a) a random access memory (RAM); and
(b) an index-based weightless neural network with a columnar topography; wherein patterns of binary connections and values of output nodes' activities are stored in the RAM.
Preferably, the neural network system comprises a computer hardware component.
In a preferred form the neural network system has potential for scaling. Scaling may be achieved in any suitable manner. It is preferred that systematic expansion is achieved by increasing the size of the RAM. The neural network may be trained in any suitable manner. It is preferred that the neural network is trained by writing of data into the RAM, and network topology emerges during the training.
It is preferred that performance of the neural network is adjustable by changing decomposition style of input data, and thereby changing dynamic range of input components.
It is preferred that input components to the neural network address a single common index.
According to a second aspect of the invention, there is provided a method of processing information by pattern recognition using a neural network including the steps of - (a) storing a plurality of output patterns to be recognised in a pattern index;
(b) accepting an input pattern and dividing the input pattern into a plurality of components;
(c) processing each component according to the pattern index to identify a recognised output pattern corresponding to the input pattern.
Preferably each output pattern is divided into a plurality of recognised components with each recognised component being stored in the pattern index for recognition. The index preferably consists of columns with each column corresponding to one or more recognised components. Preferably the index is divided into a number of columns which is equal to or less than the number of recognised components. More preferably, the index is divided into a number of columns which is equal to the number of recognised components.
The method may further include the steps of each input component being compared to the corresponding recognised component column, and a score being allocated to one or more recognised components. Preferably the score for each recognised component of a pattern is added and the recognised pattern with the highest score is identified as the output pattern.
Brief Description of the Drawings
The invention will now be described in further detail by reference to the attached drawings which show example forms of the invention. It is to be understood that the specificity of the following description does not limit the generality of the foregoing disclosure.
Figure 1 is an index table illustrating processing of an input according to one embodiment of the invention.
Figure 2 is a schematic block diagram illustrating processing of an input according to an embodiment of the invention.
Figure 3 is a schematic block diagram illustrating processing of an output according to an embodiment of the invention.
Detailed Description
The invention can be implemented through the use of a neural card built with standard digital chips. The invention is an index-based weightless neural network with a columnar topology that stores in RAM the patterns of binary connections and the values of the activities of output nodes. The network offers:
(a) Scaling potential: Systematic expansion of the neural network can be achieved not by adding extra modular building blocks as in previous artificial neural networks, but by increasing the RAM size to include additional columns or by increasing the height of the index. For example, 16 million connections can be implemented with a 64 MB RAM (see the sizing sketch after this list).
(b) The required memory size is reduced by a factor of N, when compared with previous n-Tuple systems such as the WISARD system, with N being the number of input components (n-Tuples). This is because the n-Tuple Classifier requires N look-up tables, whereas the present invention requires only one common storage.
(c) The network topology emerges automatically during the training.
(d) Training is reduced to writing of data into RAM.
(e) The performance can easily be adjusted by changing the dynamic range of input components, which can be achieved by changing the decomposition style of input data.
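As a rough check of the 64 MB figure in item (a) above, and assuming four bytes of RAM per stored connection (an assumed cell width, not a figure taken from this specification):

```latex
\frac{64\,\text{MB}}{4\,\text{bytes per connection}}
  = \frac{64 \times 2^{20}\,\text{bytes}}{4\,\text{bytes}}
  = 16 \times 2^{20} \approx 16\ \text{million connections}
```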
A device made according to the present invention is hereinafter referred to as a Neural Cortex. Both traditional artificial neural networks and traditional RAM-based artificial neural networks are networks of neuron-like computing units. However, the computing units of the human brain are multi-neuron cortical columns. A general view of the single common index on which the present invention is based can best be described as a collection of vertical columns, wherein the signals propagate in a bottom-to-top fashion.
Unlike traditional RAM-based neural networks, the Neural Cortex operates not by memorizing the names of classes in component look-up tables but by creating and memorizing an index (a linked data representation) of input components. This index contains the names of classes (class reference numbers) and is created during training.
On retrieval, the Neural Cortex, like the n-Tuple Classifier, sums up the names activated by input components. The summing operation provides the generalizing ability typical of neural networks. However, unlike the n-Tuple Classifier, where a "winner-takes-all" strategy is employed, the Neural Cortex employs a "winners-take-all" strategy. This is not a matter of preference but a necessity brought about by using a single common storage. In the case of the n-Tuple Classifier, each input component (n-tuple) addresses its own look-up table. In the case of the Neural Cortex, all input components address a single common index. This brings about a dramatic decrease in memory size. The absence of a single common index in both the n-Tuple Classifier and the Sparse Distributed Memory systems explains why previous RAM-based neural networks had difficulties in terms of memory requirements, whose large size significantly limited the application range. Further, a single common index is an efficient solution to the neural network expansion problem. As has been indicated above, both traditional artificial neural networks and traditional RAM-based artificial neural networks have scaling difficulties when the application size grows. For instance, if the image size grows from 128x128 pixels to 2048x2048, then a traditional artificial neural network will need a 256-fold increase in memory because the number of n-tuples increases by a factor of 256 = (2048 x 2048)/(128 x 128). Paradoxically, however, in the same situation the Neural Cortex size according to the present invention may remain unchanged because only one common index is still used.
The present invention creates a single pattern index of input components. The index contains the output components and is created by storing the output pattern and training the neurons to recognise the pattern stored within the pattern index.
An output pattern S is decomposed into N components S1, S2, ..., SN such that each component Sj is interpreted as the address of a column of the index. Each column stores the reference numbers of those patterns which have the value Sj in one or more of their components; each column does not contain more than one sample of each reference number. When an input I is received, it is divided into a number of components I1, I2, ..., IN. Each input component I1 to IN is processed by the network by comparing the component with the pattern index. Where a component of the input Ii matches a component of the output Sj, each reference number listed in the column of Sj has a score of one added to its total. This process is repeated for each of the input components. The scores are then added to determine the winner. The winner is the reference number with the greatest score. The reference number, corresponding to a recognised output pattern, is recognised by the network.
An example of the pattern index is illustrated in Figure 1. This figure illustrates the case where the index has been trained, or programmed, to recognise the three words "background", "variable" and "mouse". In this figure the words are assigned the reference numbers 1, 2 and 3 respectively. The components of the output patterns are the letters "a" to "z", and these are included as columns within the index. When an input is received by the network, each of the components of the input is processed by reference to this single pattern index. In this example the input is the word "mouse". This input is broken down into five letters. Each letter is processed in the network by using the index. The simultaneous nature of processing undertaken in the network can ensure that processing of each component is undertaken virtually simultaneously. The following processing is undertaken -
(a) the component of the input "m" is processed and one point is added to the score attributable to variable 3;
(b) the component of the input "o" is processed and one point is added to variables 1 and 3;
(c) the component of the input "u" is processed and one point is added to variables 1 and 3;
(d) the component of the input "s" is processed and one point is added to variable 3;
(e) the component of the input "e" is processed and one point is added to variables 2 and 3.
The network then sums up the points attributable to each variable. In this instance variable 1 has a score of 2, variable 2 a score of 1 and variable 3 a score of 5. The variable with the highest score is determined to be the winner and hence identified. Variable 3, which has a score of 5 and corresponds to the word "mouse", is therefore considered to be identified.
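The indexing and scoring procedure described above can be modelled in a few lines of ordinary software. The sketch below is illustrative only and is not the patented hardware or its driver: the common index is represented as a Python dictionary whose keys are column addresses (letters) and whose values are sets of class reference numbers, and the `train` and `recall` helpers are names invented for this sketch rather than terms from the specification. Running it reproduces the Figure 1 example, giving scores of 2, 1 and 5 for reference numbers 1, 2 and 3.

```python
# Illustrative software model of the single common pattern index (Figure 1).
# Columns are addressed by component values (letters); each column stores at
# most one copy of each class reference number.
from collections import defaultdict

def train(index, pattern, ref_number):
    """Write a pattern into the common index: every component addresses one
    column, and the class reference number is stored in that column."""
    for component in pattern:
        index[component].add(ref_number)

def recall(index, input_pattern):
    """Score every class by counting the input components whose column lists
    it, then return the winning reference number and all scores."""
    scores = defaultdict(int)
    for component in input_pattern:
        for ref_number in index.get(component, set()):
            scores[ref_number] += 1
    winner = max(scores, key=scores.get)
    return winner, dict(scores)

index = defaultdict(set)
for ref_number, word in enumerate(["background", "variable", "mouse"], start=1):
    train(index, word, ref_number)

winner, scores = recall(index, "mouse")
print(scores)   # reference 1 scores 2, reference 2 scores 1, reference 3 scores 5
print(winner)   # 3, i.e. the word "mouse"
```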
In the case of standard RAM, two different address words always point to two different memory locations. This is no longer true in the case of the Neural Cortex. For example, if the input pattern has three components (a, b, c) and the component dynamic range is 1 byte, then the patterns (a,c,b), (b,a,c), (b,c,a), (c,a,b) and (c,b,a) will all produce the same score of 3, because the Neural Cortex is invariant with respect to permutations. The invariance is caused by the fact that all components (n-tuples) address a single common storage. The common storage collapses the N-dimensional space into a one-dimensional space, thus creating the permutational invariance, which is the price to be paid for the dramatic reduction in memory size as compared to traditional RAM-based neural networks. This invariance is the key to the Neural Cortex. At the same time, it is the beauty of the approach, because the invariance becomes practically harmless when the component dynamic range is increased. For the above example, by using a 2-byte dynamic range, where the pattern (a,b,c) is converted into the two-component pattern (ab, bc), the following scores will be obtained: (a,b,c) => 2, (a,c,b) => 0, (b,a,c) => 0, (b,c,a) => 1, (c,a,b) => 1, (c,b,a) => 0, so that the pattern (a,b,c) will be identified correctly.
In the general case, the conversion of the N-component input pattern (s1, s2, ..., sN) into a new pattern (c1, c2, ..., cM), whose components have a greater dynamic range and where M < N, is preferably done by the software driver of the Neural Cortex card.
This conversion can be referred to as the C(haos)-transform if it converts the sequence of all input patterns into a one-dimensional chaotic iterated map. A sufficient condition for the absence of identification ambiguity is that the sequence of all C-transformed input patterns is a chaotic iterated map. This is true because in this case all pattern components will be different, which leaves no room for identification ambiguity. In fact, this condition is too strong, because it is sufficient that any two patterns differ in at least one component. For practical purposes a good approximation of the C-transform can be achieved by increasing the components' dynamic range to 2 bytes, 3 bytes, etc., when 2, 3 or more components are joined together; e.g., (a,b,c) is converted into the two-component pattern (ab, bc).
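A minimal sketch of this joining scheme follows, assuming (as in the example above) that neighbouring components are simply concatenated to widen the dynamic range. The `c_transform` name and the `width` parameter are inventions of this sketch rather than terms from the specification; the scores it prints reproduce the permutation example given earlier.

```python
# Sketch of the practical C-transform approximation: join neighbouring
# components into wider ones, e.g. ('a', 'b', 'c') -> ('ab', 'bc').
from collections import defaultdict
from itertools import permutations

def c_transform(components, width=2):
    """Concatenate `width` neighbouring components into one wider component."""
    return tuple("".join(components[i:i + width])
                 for i in range(len(components) - width + 1))

# Train the common index on the single pattern (a, b, c), reference number 1.
index = defaultdict(set)
for component in c_transform(("a", "b", "c")):
    index[component].add(1)

# Score every permutation of (a, b, c) against the index.
for p in permutations(("a", "b", "c")):
    score = sum(1 for component in c_transform(p) if 1 in index.get(component, set()))
    print("".join(p), "=>", score)
# abc => 2, acb => 0, bac => 0, bca => 1, cab => 1, cba => 0
```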
A Neural Cortex read-cycle block diagram is shown in Figure 2. The blocks 'Roots', 'Links', 'Names' and 'Score' are RAM devices. Σ is a summer. T-logic is a terminating logical device.
1. Each pattern component (A-Word) is passed to the address bus of the 'Roots' RAM.
2. The output value R of the 'Roots' RAM is passed to the address bus of the 'Links' RAM.
3. The output value L of the 'Links' RAM is passed to the address bus of the 'Names' RAM.
4. Finally, the output value N of the 'Names' RAM is passed to the address bus of the 'Score' RAM.
If L is 0 then the T-logic terminates the process. Otherwise, the 'Score' RAM content at address N, as determined by the output of the 'Names' RAM, is incremented by 1. Next, the 'Links' RAM output is fed back to the 'Links' RAM address bus and the process repeats itself from point 3.
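In software terms the four blocks behave like arrays indexed by their address buses, with cell 0 acting as the null link. The following sketch is one interpretation of the read cycle above; the function name and the use of plain Python dictionaries are assumptions of this sketch, not features of the card. A full recall would call it once for every component of the input pattern and then pick the reference number with the highest score.

```python
# Software model of the Figure 2 read cycle; roots, links, names and score are
# dictionaries (or sufficiently large arrays) standing in for the RAM blocks.
def read_cycle(component, roots, links, names, score):
    """Walk the index column addressed by one pattern component and add one
    point to the score of every class reference number found in it."""
    r = roots.get(component, 0)           # 1. the component addresses the 'Roots' RAM
    l = links.get(r, 0)                   # 2. R addresses the 'Links' RAM
    while l != 0:                         # T-logic: a zero link terminates the walk
        n = names[l]                      # 3. L addresses the 'Names' RAM
        score[n] = score.get(n, 0) + 1    # 4. N addresses the 'Score' RAM (the summer)
        l = links.get(l, 0)               #    feed the 'Links' output back to its bus
    return score
```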
A Neural Cortex write-cycle block diagram is shown in Figure 3. The blocks 'Roots', 'Links' and 'Names' are RAM devices. CU is the control unit.
1. Each pattern component A is passed to the address bus of the 'Roots' RAM.
2. The output value R of the 'Roots' RAM is passed to the address bus of the 'Links' RAM.
3. The output value L of the 'Links' RAM is passed to the address bus of the 'Names' RAM. The output value of the 'Names' RAM is denoted by N, and the current pattern name by P.
4. The values R, L, N and P are passed to the control unit, which applies the following logic: if L is 0 then the control unit makes a decision (point 5) on updating the 'Roots', 'Links' and 'Names' RAM; otherwise, L is fed back to the 'Links' RAM address bus and the process repeats itself from point 3.
5. Decision logic: (a) if N = P, terminate the process; if R = 0, increment the counter value C by 1, write C to the 'Roots' RAM at address A, write C to the 'Links' RAM at address R, and write P to the 'Names' RAM at address L; if R > 0 and L = 0, increment the counter value C by 1, write C to the 'Links' RAM at address R, and write P to the 'Names' RAM at address L; (b) terminate the process.
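The write cycle and its decision logic can likewise be modelled in software. The sketch below is an interpretation intended to be consistent with the read-cycle model above (a root cell whose 'Links' entry points to a chain of name cells): the control-unit micro-steps of point 5 are compressed into ordinary list-walking code, `counter` stands in for the cell counter C, and the dictionaries again stand in for the RAM blocks, so none of these names should be read as part of the specification.

```python
# Software interpretation of the Figure 3 write cycle: store pattern name P in
# the column addressed by component A, unless P is already listed there.
def write_cycle(component, pattern_name, roots, links, names, counter):
    """Return the updated cell counter C after writing one component."""
    r = roots.get(component, 0)
    if r == 0:                          # empty column: allocate its root cell
        counter += 1
        r = counter
        roots[component] = r
        links[r] = 0
    prev, l = r, links.get(r, 0)
    while l != 0:                       # walk the existing chain of name cells
        if names[l] == pattern_name:    # N == P: already stored, terminate
            return counter
        prev, l = l, links.get(l, 0)
    counter += 1                        # append a new name cell at the chain end
    links[prev] = counter
    links[counter] = 0
    names[counter] = pattern_name
    return counter
```

Training then amounts to calling this once per component of every output pattern, consistent with the statement above that training is reduced to writing of data into RAM.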
Performance of the Neural Cortex can be adjusted in terms of memory size and read/write times. Normally, storage and recall times increase as the number of classes grows while training continues. Additional classes increase the number of reference numbers that are stored in index columns and, therefore, the number of index cells that have to be accessed. As a remedy, one can increase the dynamic range D of the input pattern components. This increases the number of index columns, because the index address space is equal to D. As a result, the same number of reference numbers will be spread over a greater area, which, in turn, decreases the average index height H.
The processing time on storage and recall is proportional to the number of accessed memory cells, which is proportional to HN. Here, N is the number of pattern components. As D increases, the processing time approaches O(N). This follows from the fact that H is inversely proportional to D.
The memory size is proportional to HD. However, H grows or decreases faster than D does; hence, adjusting the dynamic range D can efficiently control the memory size. In the worst case, the Neural Cortex size does not exceed CD, where C is the number of pattern classes; this is because the Neural Cortex has only one "look-up table". On the other hand, the memory size of a traditional RAM-based artificial neural network is CDN, because for this type of artificial neural network the number of input look-up tables is equal to the number N of input pattern components.
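In the notation used above (H: average index height, D: component dynamic range, N: number of pattern components, C: number of pattern classes), these scaling statements can be summarised as follows; the access-time symbol T is introduced here for convenience only:

```latex
T \propto H N, \qquad H \propto \tfrac{1}{D}
  \;\Rightarrow\; T \to O(N) \ \text{as } D \ \text{grows};
\qquad
\text{Mem}_{\text{Neural Cortex}} \propto H D \le C D,
\qquad
\text{Mem}_{\text{RAM-based ANN}} = C D N
```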
It is to be understood that various modifications, alterations and/or additions may be made to the parts previously described without departing from the ambit of the present invention.

Claims

The claims defining the invention are as follows:
1. A neural network system including: (a) a random access memory (RAM); and (b) an index-based weightless neural network with a columnar topography; wherein patterns of binary connections and values of output nodes' activities are stored in the RAM.
2. A neural network system according to claim 1 wherein the system comprises a computer hardware component.
3. A neural network system according to claim 1 or claim 2 wherein systematic expansion is achieved by increasing the size of the RAM.
4. A neural network system according to any one of claims 1 to 3 wherein the neural network is trained by writing of data into the RAM, and network topology emerges during the training.
5. A neural network system according to any one of claims 1 to 4 wherein performance is adjustable by changing decomposition style of input data, and thereby changing dynamic range of input components.
6. A neural network system according to any one of claims 1 to 5 wherein input components address a single common index.
7. A method of processing information by pattern recognition using a neural network including the steps of-
(a) storing a plurality of output patterns to be recognised in a pattern index;
(b) accepting an input pattern and dividing the input pattern into a plurality of components;
(c) processing each component according to the pattern index to identify a recognised output pattern corresponding to the input pattern.
8. A method according to claim 7 wherein each output pattern is divided into a plurality of recognised components with each recognised component being stored in the pattern index for recognition.
9. A method according to claim 8 wherein the index consists of columns with each column corresponding to one or more recognised components.
10. A method according to claim 9 wherein the index is divided into a number of columns which is equal to or less than the number of recognised components.
11. A method according to claim 9 wherein the index is divided into a number of columns which is equal to the number of recognised components.
12. A method according to any one of claims 8 to 10 wherein each input component is compared to the corresponding recognised component column and a score is allocated to one or more recognised components.
13. A method according to claim 12 wherein the score for each recognised component of a pattern is added and the recognised pattern with the highest score is identified as the output pattern.
PCT/SG2000/000182 2000-11-30 2000-11-30 Neural cortex WO2002044926A1 (en)

Priority Applications (10)

Application Number Priority Date Filing Date Title
MXPA03005942A MXPA03005942A (en) 2000-11-30 2000-11-30 Neural cortex.
KR10-2003-7007184A KR20030057562A (en) 2000-11-30 2000-11-30 Neural Cortex
CA002433999A CA2433999A1 (en) 2000-11-30 2000-11-30 Neural cortex
EP00978191A EP1340160A4 (en) 2000-11-30 2000-11-30 Neural cortex
NZ526795A NZ526795A (en) 2000-11-30 2000-11-30 Hardware component of an index-based weightless neural cortex using a single common index with a systematic expansion
CNA008200017A CN1470022A (en) 2000-11-30 2000-11-30 Neural cortex
JP2002547024A JP2004515002A (en) 2000-11-30 2000-11-30 Neural cortex
US10/398,279 US7305370B2 (en) 2000-11-30 2000-11-30 Neural cortex
AU2001215675A AU2001215675A1 (en) 2000-11-30 2000-11-30 Neural cortex
PCT/SG2000/000182 WO2002044926A1 (en) 2000-11-30 2000-11-30 Neural cortex

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SG2000/000182 WO2002044926A1 (en) 2000-11-30 2000-11-30 Neural cortex

Publications (2)

Publication Number Publication Date
WO2002044926A1 true WO2002044926A1 (en) 2002-06-06
WO2002044926A8 WO2002044926A8 (en) 2003-10-02

Family

ID=20428883

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2000/000182 WO2002044926A1 (en) 2000-11-30 2000-11-30 Neural cortex

Country Status (9)

Country Link
US (1) US7305370B2 (en)
EP (1) EP1340160A4 (en)
JP (1) JP2004515002A (en)
KR (1) KR20030057562A (en)
CN (1) CN1470022A (en)
AU (1) AU2001215675A1 (en)
CA (1) CA2433999A1 (en)
MX (1) MXPA03005942A (en)
WO (1) WO2002044926A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1331092C (en) * 2004-05-17 2007-08-08 中国科学院半导体研究所 Special purpose neural net computer system for pattern recognition and application method
US20060271342A1 (en) * 2005-05-27 2006-11-30 Trustees Of The University Of Pennsylvania Cort_x: a dynamic brain model
SG133421A1 (en) * 2005-12-13 2007-07-30 Singapore Tech Dynamics Pte Method and apparatus for an algorithm development environment for solving a class of real-life combinatorial optimization problems
KR101037885B1 (en) * 2011-01-18 2011-05-31 주식회사 청오에이치앤씨 Apparatus for movable vacuum cleaning
CA2975251C (en) * 2015-01-28 2021-01-26 Google Inc. Batch normalization layers
EP3295381B1 (en) * 2016-02-05 2022-08-24 DeepMind Technologies Limited Augmenting neural networks with sparsely-accessed external memory
JP2017182710A (en) * 2016-03-31 2017-10-05 ソニー株式会社 Information processing device, information processing method, and information providing method
US10140574B2 (en) * 2016-12-31 2018-11-27 Via Alliance Semiconductor Co., Ltd Neural network unit with segmentable array width rotator and re-shapeable weight memory to match segment width to provide common weights to multiple rotator segments
KR102499396B1 (en) 2017-03-03 2023-02-13 삼성전자 주식회사 Neural network device and operating method of neural network device
CN108109694B (en) * 2018-01-05 2023-06-30 李向坤 Event judging method and device, storage medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5285522A (en) * 1987-12-03 1994-02-08 The Trustees Of The University Of Pennsylvania Neural networks for acoustical pattern recognition
EP0684576A2 (en) * 1994-05-24 1995-11-29 International Business Machines Corporation Improvements in image processing
US5621848A (en) * 1994-06-06 1997-04-15 Motorola, Inc. Method of partitioning a sequence of data frames
US6058206A (en) * 1997-12-01 2000-05-02 Kortge; Chris Alan Pattern recognizer with independent feature learning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02195400A (en) * 1989-01-24 1990-08-01 Canon Inc Speech recognition device
WO1990016036A1 (en) 1989-06-14 1990-12-27 Hitachi, Ltd. Hierarchical presearch-type document retrieval method, apparatus therefor, and magnetic disc device for this apparatus
US6408402B1 (en) * 1994-03-22 2002-06-18 Hyperchip Inc. Efficient direct replacement cell fault tolerant architecture
EP0709801B1 (en) * 1994-10-28 1999-12-29 Hewlett-Packard Company Method for performing string matching
US5995868A (en) * 1996-01-23 1999-11-30 University Of Kansas System for the prediction, rapid detection, warning, prevention, or control of changes in activity states in the brain of a subject
DK0935212T3 (en) * 1998-02-05 2002-02-25 Intellix As n-tuple or RAM based neural network classification system and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5285522A (en) * 1987-12-03 1994-02-08 The Trustees Of The University Of Pennsylvania Neural networks for acoustical pattern recognition
EP0684576A2 (en) * 1994-05-24 1995-11-29 International Business Machines Corporation Improvements in image processing
US5621848A (en) * 1994-06-06 1997-04-15 Motorola, Inc. Method of partitioning a sequence of data frames
US6058206A (en) * 1997-12-01 2000-05-02 Kortge; Chris Alan Pattern recognizer with independent feature learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1340160A4 *

Also Published As

Publication number Publication date
JP2004515002A (en) 2004-05-20
US20040034609A1 (en) 2004-02-19
CA2433999A1 (en) 2002-06-06
CN1470022A (en) 2004-01-21
MXPA03005942A (en) 2005-02-14
EP1340160A4 (en) 2005-03-23
WO2002044926A8 (en) 2003-10-02
EP1340160A1 (en) 2003-09-03
AU2001215675A1 (en) 2002-06-11
KR20030057562A (en) 2003-07-04
US7305370B2 (en) 2007-12-04

Similar Documents

Publication Publication Date Title
Gowda et al. Divisive clustering of symbolic objects using the concepts of both similarity and dissimilarity
Kohonen et al. Very large two-level SOM for the browsing of newsgroups
EP0591286B1 (en) Neural network architecture
US20040034609A1 (en) Neural cortex
NZ526795A (en) Hardware component of an index-based weightless neural cortex using a single common index with a systematic expansion
WO1992004687A1 (en) Process and device for the boolean realization of adaline-type neural networks
Bishop et al. Evolutionary learning to optimise mapping in n-Tuple networks
Guoqing et al. Multilayer parallel distributed pattern recognition system model using sparse RAM nets
Rohwer et al. An exploration of the effect of super large n-tuples on single layer ramnets
Muñoz-Gutiérrez et al. Shared memory nodes and discriminant's reduced storage in two dimensional modified Kanerva's sparse distributed memory model
Kukich Backpropagation topologies for sequence generation
Tambouratzis Applying logic neural networks to hand-written character recognition tasks
Chung et al. Characteristics of Hebbian-type associative memories with quantized interconnections
Macek et al. A transputer implementation of the ADAM neural network
Howard et al. Optical character recognition: A technology driver for neural networks
Valafar et al. Parallel, self organizing, consensus neural networks
Gurney 8: Alternative node types
Weeks et al. Mapping correlation matrix memory applications onto a Beowulf cluster
Gough Associative list memory
Chen et al. Recursive neural networks with high capacity
Gera Learning with Mappings and Input-Orderings using Random Access Memory—based Neural Networks
EP0748479A1 (en) Pattern recognition with n processors
de Carvalho et al. A modular boolean architecture for pattern recognition
Tambouratzis et al. Optimal topology-preservation using self-organising logical neural networks
De Carvalho et al. Combining Boolean neural architectures for image recognition

Legal Events

Date Code Title Description
AK Designated states (Kind code of ref document: A1; Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW)
AL Designated countries for regional patents (Kind code of ref document: A1; Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG)
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase (Ref document number: 10398279; Country of ref document: US)
WWE Wipo information: entry into national phase (Ref document number: 2001215675; Country of ref document: AU)
WWE Wipo information: entry into national phase (Ref document number: 2000978191; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 2002547024; Country of ref document: JP)
WWE Wipo information: entry into national phase (Ref document number: 008200017; Country of ref document: CN)
WWE Wipo information: entry into national phase (Ref document number: 1020037007184; Country of ref document: KR)
WWE Wipo information: entry into national phase (Ref document number: PA/a/2003/005942; Country of ref document: MX; Ref document number: 1011/DELNP/2003; Country of ref document: IN)
ENP Entry into the national phase (Ref document number: 2003120451; Country of ref document: RU; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: 526795; Country of ref document: NZ)
WWE Wipo information: entry into national phase (Ref document number: 2433999; Country of ref document: CA)
WWP Wipo information: published in national office (Ref document number: 1020037007184; Country of ref document: KR)
WWP Wipo information: published in national office (Ref document number: 2000978191; Country of ref document: EP)
REG Reference to national code (Ref country code: DE; Ref legal event code: 8642)
WWP Wipo information: published in national office (Ref document number: 526795; Country of ref document: NZ)
WWG Wipo information: grant in national office (Ref document number: 526795; Country of ref document: NZ)
WWW Wipo information: withdrawn in national office (Ref document number: 2000978191; Country of ref document: EP)