EP0438569A1 - Adaptive network for classifying time-varying data - Google Patents

Adaptive network for classifying time-varying data

Info

Publication number
EP0438569A1
Authority
EP
European Patent Office
Prior art keywords
domain
input
neurons
network
information processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP90912233A
Other languages
German (de)
English (en)
French (fr)
Inventor
Patrick F. Castelaz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Raytheon Co
Original Assignee
Hughes Aircraft Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hughes Aircraft Co filed Critical Hughes Aircraft Co
Publication of EP0438569A1 publication Critical patent/EP0438569A1/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/415Identification of targets based on measurements of movement associated with the target
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks

Definitions

  • This invention relates to information processors and, more particularly, to a method and apparatus for classifying time-varying data.
  • Classification of complex time-varying data poses a number of difficult problems for conventional information processors.
  • The task of classification typically involves recognizing patterns typical of known classes from large amounts of two-dimensional data. Where the patterns to be recognized have subtle variations between the known classes, traditional classifiers often fail to distinguish correctly between the classes. This is due, in part, to the strong assumptions which must be made concerning the underlying distributions of the input data. Algorithms must then be developed to extract the characteristic features and to match known features with the input features for classification.
  • Classification problems include classifying time-varying signals from various sources such as speech, image data, radar, sonar, etc.
  • Conventional information processors are generally not fault tolerant and cannot handle certain variations in the input signals, such as changes in the orientation of a visual pattern or, in the case of speech recognition, differences in speakers.
  • Neural nets are capable of recognizing a pattern and producing a desired output even where the input is incomplete or hidden in background noise. Also, neural nets exhibit greater robustness, or fault tolerance, than Von Neumann sequential computers because there are many more processing nodes, each with primarily local connections. Damage to a few nodes or links need not impair overall performance significantly.
  • Many neural net models have been developed, utilizing various topologies, neuron characteristics, and training, or learning, algorithms. Learning algorithms specify an internal set of weights and indicate how the weights should be adapted during use, or training, to improve performance.
  • Some of these neural net models include the Perceptron, described in U.S. Patent No. 3,287,649 issued to F. Rosenblatt; the Hopfield Net, described in U.S. Patent Nos. 4,660,166 and 4,719,591 issued to J. Hopfield; and the Hamming Net and Kohonen self-organizing maps, described in R.
  • Where the time-varying data is complex and involves large quantities of data, a major problem is developing a technique for representing the data to the neural network for processing.
  • The minimum amount of data required to adequately represent the classifications may involve, for example, fifty time slices of sixteen frequency bands of the doppler data.
  • One way to present this data to a neural net processor would be to utilize a neural network with 800 (50 × 16) input neurons and to present each of the 800 input neurons with one sample of doppler data.
  • The disadvantage of this approach is that such a large number of input neurons, and the corresponding large number of total neurons and interconnections, would result in a neural network that is very complex and expensive. Further, such a complex network takes a greater period of time to process information and to learn.
  • It would be desirable to provide a processor for classifying time-varying data with a minimum of preprocessing and a minimum of algorithm and software development. It would also be desirable to provide a classification processor that is not based on explicit assumptions but instead can adapt by training to recognize patterns. It would also be desirable to provide a means for representing time-varying data to an adaptive processor in a simplified manner which reduces the total number of input values presented to the processor.
  • In accordance with the present invention, an adaptive network is provided with at least N + 1 input neurons, where N equals the number of values in a first domain associated with a given value in a second domain.
  • The processor receives each of the N values in the first domain in N of the input neurons, and receives the single associated value from the second domain in the remaining input neuron.
  • The network is trained using known training data to produce an output that serves to classify the known data.
  • The network training is repeated for each value in the second domain by presenting that value, together with each of the N values in the first domain, as input. Once trained, the adaptive network will produce an output which classifies an unknown input when that input is from a class the adaptive network was trained to recognize.
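As a concrete sketch of this input scheme, the following Python fragment assembles, for each value in the second domain (here, time), the N first-domain values plus that single second-domain value into one (N + 1)-element input vector. The function name and array layout are illustrative, not from the patent:

```python
import numpy as np

def make_training_inputs(doppler, times):
    """Build (N + 1)-element input vectors from a 2-D Doppler signature.

    doppler : shape (T, N) -- T time slices, N frequency bins
    times   : shape (T,)   -- the time value associated with each slice
    Returns shape (T, N + 1): each row is one slice's N frequency values
    plus its single time value, ready for the N + 1 input neurons.
    """
    doppler = np.asarray(doppler, dtype=float)
    times = np.asarray(times, dtype=float).reshape(-1, 1)
    return np.hstack([doppler, times])

# A toy signature: 3 time slices of 4 frequency bins each.
sig = [[0.1, 0.9, 0.2, 0.0],
       [0.0, 0.8, 0.3, 0.1],
       [0.1, 0.7, 0.4, 0.1]]
inputs = make_training_inputs(sig, [0.0, 0.5, 1.0])
print(inputs.shape)  # (3, 5): N + 1 = 5 input values per presentation,
                     # rather than N * T = 12 presented all at once
```

Each row is presented to the network separately, which is how the scheme avoids the 800-input-neuron network described above.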
  • FIGS. 1(A-D) are representative doppler signatures from four classes of multiple moving objects;
  • FIG. 2 is a diagram of the adaptive network in accordance with the teachings of the present invention;
  • FIG. 3 is a representation of doppler data for four classes of objects; and
  • FIG. 4 is a drawing of an additional embodiment of the present invention.
  • The two-dimensional data can be derived from a variety of signal sources such as infrared, optical, radar, sonar, etc.
  • The data may be raw, that is, unprocessed, or it may be processed.
  • One example of such processing is doppler processing, wherein the difference in frequency between an outgoing and an incoming signal is analyzed.
  • In FIGS. 1(A-D), four doppler signatures from four different classes of objects are shown.
  • The doppler frequency, that is, the shift in frequency of the signal returning from the object or objects, is represented along the horizontal axis.
  • Time is represented along the vertical axis.
  • The signatures in FIGS. 1(A-D) each have a characteristic shape or pattern. The fact that the pattern changes from the lower portion of each figure to the upper portion indicates changes in the detected doppler frequencies over time. This, in turn, indicates changes in the motion of the multiple objects in each of the four classes.
  • In FIG. 3 there is shown a representative simplified doppler signature for each of four different classes of objects.
  • As in FIGS. 1(A-D), the horizontal axis represents the doppler frequency and the vertical axis represents time.
  • Each horizontal line 10 in FIG. 3 represents the doppler frequencies received at a given time.
  • The doppler signals in FIG. 3 are divided by vertical lines into four classes: a first class 12, a second class 14, a third class 16 and a fourth class 18.
  • The signatures in FIG. 3 represent doppler signals from four different types of objects, and each has a pattern that is characteristic of that object or objects. Even though FIG. 3 represents much more simplified doppler data than that shown in FIGS. 1(A-D), presenting the four patterns in FIG. 3 to a neural network would still involve a large amount of data.
  • Each time slice 10 in each class is drawn from doppler frequencies in 16 frequency bins, and there are 32 time slices 10 for each class. Consequently, there would be 512 individual pieces of information for each class.
  • A neural network having 512 input neurons might therefore be required to process all of the information in each class shown in FIG. 3.
  • The data shown in FIG. 3 may instead be represented as indicated by FIG. 2.
  • In FIG. 2, an adaptive network 20 in accordance with the preferred embodiment of the present invention is shown.
  • the adaptive network 20 utilizes a conventional neural network architecture known as a multilayer perceptron. It will be appreciated by those skilled in the art that a multilayer perceptron utilizes a layer of input neurons 22, one or more layers of inner neurons 24 and a layer of output neurons 26. Each neuron in each layer is connected to every neuron in the adjacent layer by means of synaptic connections 27, but neurons in the same layer are not typically connected to each other.
  • Each neuron accepts either binary or continuous-valued inputs and produces an output which is some transfer function of the inputs to that neuron.
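A minimal sketch of such a multilayer perceptron, assuming a sigmoid transfer function and illustrative layer sizes (the patent fixes neither), might look like:

```python
import numpy as np

def sigmoid(x):
    # A common continuous transfer function; the patent leaves the choice open.
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, biases):
    """Propagate an input vector layer by layer: every neuron combines the
    outputs of all neurons in the previous layer through its synaptic
    weights and applies the transfer function to the weighted sum."""
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

rng = np.random.default_rng(0)
sizes = [8, 5, 4]  # 8 input neurons (7 frequency bins + 1 time), inner, output
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
out = forward(np.ones(8), weights, biases)
print(out.shape)  # (4,): one activation per output neuron
```

Because every layer is fully connected to the next, the weight matrix between layers of sizes n and m has m × n entries, which is why total interconnections grow quickly with input size.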
  • The multilayer perceptron shown in FIG. 2 may be trained by the conventional back-propagation technique, as is known in the art. This technique is described in detail in the above-mentioned article by D. E. Rumelhart and J. L. McClelland, which is incorporated herein by reference.
  • The adaptive network 20 is configured so that it has a particular number of input neurons 22 determined by the input data.
  • In the example of FIG. 2, the doppler data contains seven frequency bins. It will be appreciated that in FIG. 3, for example, there are 16 frequency bins, and that the number of doppler frequency bins will depend on the particular data to be analyzed and the desired complexity of the adaptive network 20.
  • The doppler frequency curve 28, like the doppler frequency curves in FIG. 3, represents one time slice of doppler data; that is, it represents the doppler frequencies received at a given time. It is preferred that the range of frequencies be normalized so that they may be represented by a signal within a range that is acceptable to the input neurons 22. For example, the doppler frequencies may be normalized to have relative values between zero and one. As shown in FIG. 2, seven input neurons each receive a single doppler frequency value from the doppler frequency curve 28. An eighth input neuron 30 receives a signal which is representative of the time at which the doppler frequency curve 28 was received.
  • The magnitude of the signal used for the time input neuron 30 may likewise be normalized so that the entire range of time values falls within the acceptable range for the input neurons 22.
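This normalization can be sketched as a simple linear rescaling into [0, 1]; the patent does not specify the exact mapping, so the function below is only one plausible choice:

```python
import numpy as np

def normalize(values, lo=None, hi=None):
    """Scale values linearly into [0, 1], the range assumed acceptable to
    the input neurons. lo/hi default to the observed extremes."""
    values = np.asarray(values, dtype=float)
    lo = values.min() if lo is None else lo
    hi = values.max() if hi is None else hi
    return (values - lo) / (hi - lo)

# Normalize 32 time-slice indices and one slice of raw Doppler magnitudes.
t = normalize(np.arange(32))           # times 0..31 -> 0..1
f = normalize([120.0, 480.0, 300.0])   # raw magnitudes -> relative values
print(t[0], t[-1])  # 0.0 1.0
```

Using the same fixed lo/hi for every signature (rather than per-signature extremes) would keep the scaling consistent between training and classification.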
  • When the doppler frequency curve 28, together with the time, is transmitted to the input neurons 22 and 30, the adaptive network 20 will produce some output state at its output neurons 26.
  • To train the network to produce the desired output state, the learning algorithm known as backward error propagation may be used.
  • The adaptive network 20 will be trained to produce an output corresponding to the class of the doppler frequency curve. For example, assuming that the training input is from a first class, the desired output may be to have the first two output neurons 26 produce binary ones and all the other output neurons produce binary zeros. After repeated training procedures, the adaptive network 20 will adapt the weights of the synaptic connections 27 until it produces the desired output state. Once the adaptive network 20 is trained with the first doppler frequency curve 28 at a first time slice, it may then be trained for all the successive time slices. For example, the adaptive network 20 may be trained for each of the 32 doppler frequency curves 10 in FIG. 3 to produce an output indicating the first class.
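The training step can be sketched as follows. This is a generic backward-error-propagation update for a two-layer sigmoid network; the layer sizes, learning rate, and initialization are illustrative, not the patent's implementation, and only the class code (first two output neurons on) follows the example in the text:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, target, W1, W2, lr=0.5):
    """One backward-error-propagation update for a two-layer sigmoid
    network. Updates W1 and W2 in place; returns the current output."""
    h = sigmoid(W1 @ x)                 # inner-neuron outputs
    y = sigmoid(W2 @ h)                 # output-neuron outputs
    d2 = (y - target) * y * (1.0 - y)   # output-layer error term
    d1 = (W2.T @ d2) * h * (1.0 - h)    # error propagated back one layer
    W2 -= lr * np.outer(d2, h)          # adapt the synaptic weights
    W1 -= lr * np.outer(d1, x)
    return y

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(5, 8))    # 8 inputs -> 5 inner neurons
W2 = rng.normal(scale=0.5, size=(4, 5))    # 5 inner  -> 4 output neurons
x = rng.random(8)                          # one normalized slice + its time
target = np.array([1.0, 1.0, 0.0, 0.0])    # class code: first two neurons on

err0 = np.sum((train_step(x, target, W1, W2) - target) ** 2)
for _ in range(500):
    y = train_step(x, target, W1, W2)
err1 = np.sum((y - target) ** 2)
print(err1 < err0)
```

Repeating such steps over every time slice of every known signature is the "repeated training procedures" the text describes.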
  • Once trained, the adaptive network 20 may be presented with an unknown set of doppler frequency curves and times. If the unknown doppler signature has the general characteristics of the first class, the adaptive network 20 will produce an output state for each time slice corresponding to the first class.
  • The adaptive network 20 may be trained to recognize multiple classes of doppler signatures. To accomplish this, the steps used to train the adaptive network 20 to recognize the first class of doppler frequency curves are simply repeated for the second, third and fourth classes. The adaptive network 20 may be trained to indicate the second, third and fourth classes by producing binary ones in the output neurons 26 associated with those classes, as indicated in FIG. 2. The number of classes which the adaptive network 20 may be trained to recognize will depend on a number of variables, such as the complexity of the doppler signals and the number of neurons, layers and interconnections in the adaptive network 20.
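One plausible way to turn the per-slice output states into a single classification is to match each slice's output to the nearest class code and take a majority over all slices. The codes for the classes other than the first are purely illustrative here; the patent only gives the first class's code explicitly:

```python
import numpy as np

# Hypothetical binary class codes on four output neurons, one per class.
CLASS_CODES = np.array([
    [1, 1, 0, 0],   # first class (the coding given in the text)
    [0, 1, 1, 0],   # second class (illustrative)
    [0, 0, 1, 1],   # third class  (illustrative)
    [1, 0, 0, 1],   # fourth class (illustrative)
], dtype=float)

def classify_signature(outputs):
    """Assign each time slice's output state to the nearest class code,
    then report the class chosen most often across all slices."""
    outputs = np.atleast_2d(outputs)
    dists = ((outputs[:, None, :] - CLASS_CODES[None, :, :]) ** 2).sum(axis=2)
    per_slice = dists.argmin(axis=1)            # nearest code per slice
    return int(np.bincount(per_slice).argmax()) # majority vote over slices

# Three slices whose outputs all lie nearest the first class's code.
outs = [[0.9, 0.8, 0.1, 0.2],
        [1.0, 0.9, 0.0, 0.1],
        [0.7, 0.9, 0.2, 0.0]]
print(classify_signature(outs))  # 0 -> the first class
```

Voting over slices also gives the robustness the text describes: a few noisy or damaged slice outputs need not change the overall classification.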
  • In FIG. 4, an adaptive network 20 in accordance with another embodiment of the present invention is shown. This embodiment is similar to the one shown in FIG. 2, except that it utilizes 18 input neurons 22, 24 inner neurons 24, and 26 output neurons 26. It will be appreciated that, with a larger number of neurons and synaptic connections 27, time-varying data of greater complexity can be classified.
  • Once the adaptive network 20 has been trained, it could be reproduced an unlimited number of times by making copies of the adaptive network 20.
  • The copies may have identical but fixed weight values for the synaptic connections 27. In this way, mass production of adaptive networks 20 is possible without repeating the training process.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)
EP90912233A 1989-08-11 1990-08-09 Adaptive network for classifying time-varying data Withdrawn EP0438569A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US39267489A 1989-08-11 1989-08-11
US392674 1989-08-11

Publications (1)

Publication Number Publication Date
EP0438569A1 true EP0438569A1 (en) 1991-07-31

Family

ID=23551548

Family Applications (1)

Application Number Title Priority Date Filing Date
EP90912233A Withdrawn EP0438569A1 (en) 1989-08-11 1990-08-09 Adaptive network for classifying time-varying data

Country Status (3)

Country Link
EP (1) EP0438569A1 (ja)
JP (1) JPH04501328A (ja)
WO (1) WO1991002323A1 (ja)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9023909D0 (en) * 1990-11-02 1990-12-12 Secr Defence Radar apparatus
FR2669489B1 (fr) * 1990-11-16 1994-09-09 Thomson Csf Procede et dispositif de reconnaissance de modulations.
GB2261511A (en) * 1991-11-16 1993-05-19 British Nuclear Fuels Plc Ultrasonic ranging device
US6894639B1 (en) * 1991-12-18 2005-05-17 Raytheon Company Generalized hebbian learning for principal component analysis and automatic target recognition, systems and method
DE4223346C2 (de) * 1992-07-16 1996-05-02 Grieshaber Vega Kg Anordnung und Verfahren zur berührungslosen Füllstandmessung
WO1994008258A1 (en) * 1992-10-07 1994-04-14 Octrooibureau Kisch N.V. Apparatus and a method for classifying movement of objects along a passage
DE19515666A1 (de) * 1995-04-28 1996-10-31 Daimler Benz Ag Verfahren zur Detektion und Klassifizierung vergrabener Objekte mittels eines Radarverfahrens
DE19518993A1 (de) * 1995-05-29 1996-12-05 Sel Alcatel Ag Vorrichtung und Verfahren zur automatischen Detektion oder Klassifikation von Objekten
EP1643264A1 (en) * 1996-09-18 2006-04-05 MacAleese Companies, Inc. Concealed weapons detection system
DE19649563A1 (de) * 1996-11-29 1998-06-04 Alsthom Cge Alcatel Vorrichtung und Verfahren zur automatischen Klassifikation von Objekten
DE19649618A1 (de) * 1996-11-29 1998-06-04 Alsthom Cge Alcatel Verfahren und Vorrichtung zur automatischen Klassifikation von Objekten
DE19706576A1 (de) * 1997-02-20 1998-08-27 Alsthom Cge Alcatel Vorrichtung und Verfahren zur umgebungsadaptiven Klassifikation von Objekten
US7167123B2 (en) 1999-05-25 2007-01-23 Safe Zone Systems, Inc. Object detection method and apparatus
US7450052B2 (en) 1999-05-25 2008-11-11 The Macaleese Companies, Inc. Object detection method and apparatus
US9449272B2 (en) * 2013-10-29 2016-09-20 Qualcomm Incorporated Doppler effect processing in a neural network model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9102323A1 *

Also Published As

Publication number Publication date
JPH04501328A (ja) 1992-03-05
WO1991002323A1 (en) 1991-02-21

Similar Documents

Publication Publication Date Title
US5003490A (en) Neural network signal processor
Fukumi et al. Rotation-invariant neural pattern recognition system with application to coin recognition
Pandya et al. Pattern recognition with neural networks in C++
Pal et al. Multilayer perceptron, fuzzy sets, classification
Bohte et al. Unsupervised clustering with spiking neurons by sparse temporal coding and multilayer RBF networks
JP4083469B2 (ja) 階層ネットワークを用いたパターン認識方法
EP0439592A1 (en) Adaptive network for in-band signal separation
Worden et al. Artificial neural networks
EP0438569A1 (en) Adaptive network for classifying time-varying data
EP0591415A1 (en) Sparse comparison neural network
US5790758A (en) Neural network architecture for gaussian components of a mixture density function
GB2245401A (en) Neural network signal processor
WO1995000920A1 (en) An artificial network for temporal processing
Barnard et al. Image processing for image understanding with neural nets
Jay et al. First break picking using a neural network
CA2002681A1 (en) Neural network signal processor
Webb et al. Deformation-specific and deformation-invariant visual object recognition: pose vs. identity recognition of people and deforming objects
Hampson et al. Representing and learning boolean functions of multivalued features
Soulie Multi-modular neural network-hybrid architectures: a review
WO1991002322A1 (en) Pattern propagation neural network
US5712959A (en) Neural network architecture for non-Gaussian components of a mixture density function
Korablyov et al. Hybrid Neuro-Fuzzy Model with Immune Training for Recognition of Objects in an Image.
Hannibal On the possibility of using artificial neural networks in seismic monitoring tasks
Kijsirikul et al. Approximate ilp rules by backpropagation neural network: A result on thai character recognition
Apte et al. Development of back propagation neural network model for extracting the feature from a satellite image using curvelet transform

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19910406

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB NL

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Withdrawal date: 19920708