EP0438577A1 - Adaptive processor for multi-source data fusion - Google Patents

Adaptive processor for multi-source data fusion

Info

Publication number
EP0438577A1
Authority
EP
European Patent Office
Prior art keywords
neurons
inputs
output
processor
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP90912737A
Other languages
German (de)
French (fr)
Inventor
Patrick F. Castelaz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Raytheon Co
Original Assignee
Hughes Aircraft Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hughes Aircraft Co filed Critical Hughes Aircraft Co
Publication of EP0438577A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology

Abstract

An adaptive processor for multi-source data fusion (30) comprises a neural network adapted to receive at least three sets of inputs simultaneously. Three or more sets of inputs are directed to three or more sets of input neurons (44, 46, 48) respectively. The processor (30) is trained to produce an output (64) which is a rearrangement of the inputs in one of the three sets. By correlating the output with one of the other sets of inputs, a solution to an assignment problem is given.

Description

ADAPTIVE PROCESSOR FOR MULTI-SOURCE DATA FUSION
BACKGROUND OF THE INVENTION
1. Technical Field
This invention relates to information processors, and more particularly, to a neural network processor for solving assignment problems.
2. Discussion
Optimization problems, such as constrained assignment problems, are among the most difficult for conventional digital computers to solve. This is because assignment problems are generally not solvable with a single solution; instead there may be a range of possible solutions of which the best solution is sought. Often, the processing technique requires one entity to be selected from among many and assigned to one and only one other entity in such a way as to force the entire assignment over all entities to be optimal in some sense. For example, where individual "costs" are assigned to each entity-to-entity mapping, the problem becomes one of minimizing the total cost. Examples of assignment problems include optimal plot-to-track correlation processing, the Traveling Salesman Problem, optimal resource allocation, computerized tomography, multi-beam acoustic and ultrasound tracking, nuclear particle tracking, multi-sensor data fusion (deghosting) for angle-only (passive) objects detected by multiple sensors, etc. Of particular interest is the deghosting problem. This problem arises whenever objects are to be detected from angle-only data originating from multiple sensors. For example, the sensors may be radar, infrared, optical or other types of sensors. In such cases a single sensor provides a measurement that consists of the angle (azimuth) of the line-of-bearing on which a target lies. With two or more sensors, the location can be determined as the intersection of the two lines-of-bearing. However, with multiple targets, multiple lines-of-bearing will be seen at both sensors. Lines will cross and intersections will be formed at points where no target actually exists. These intersections are called ghosts.
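By way of an illustrative aside (not part of the original specification), a single line-of-bearing fixes only a direction, and a position estimate falls out of intersecting two of them. The sketch below assumes known 2-D sensor positions and azimuths measured from the x-axis; both conventions are hypothetical, since the patent fixes no coordinate frame.

```python
import numpy as np

def bearing_intersection(p1, theta1, p2, theta2):
    """Intersect two lines of bearing from sensors at p1 and p2.

    theta1, theta2 are azimuths in radians measured from the x-axis
    (an assumed convention).  Returns the (x, y) intersection, or None
    when the bearings are parallel and never cross.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 == p2 + t2*d2 for the range parameters t1, t2.
    A = np.column_stack([d1, -d2])
    b = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    if abs(np.linalg.det(A)) < 1e-12:
        return None
    t1, _ = np.linalg.solve(A, b)
    return np.asarray(p1, dtype=float) + t1 * d1

# Two sensors both sighting a target at (3, 4):
print(bearing_intersection((0, 0), np.arctan2(4, 3), (6, 0), np.arctan2(4, -3)))
# -> [3. 4.]
```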
To illustrate the severity of the problem, if ten targets are observed by two sensors, up to 100 intersections can be formed. Since there are only 10 targets, 90 of the intersections will be ghosts. With 50 targets, 2,500 intersections and 2,450 ghosts could be formed. Since the sensors have no other information available, no further discrimination of targets can be made. The addition of a third sensor might help to resolve the ambiguities, since one would find targets at the intersection of three lines-of-bearing, or triple intersections. However, with measurement inaccuracies, three lines-of-bearing corresponding to a true target will not intersect at a single point but will define a triangular region. The problem then is to first determine which triangular regions have small enough areas that they might be targets, and then to sort out the true targets from the ghosts in a group where there are many more intersections than targets. While true targets will generally have smaller areas, merely taking the smallest areas will not ensure that no ghosts will be chosen.
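The counting above generalizes directly. In the assumed worst case, where every line of bearing from one sensor crosses every line from the other, n targets give:

```latex
\[
  \text{intersections} \le n^{2}, \qquad
  \text{ghosts} \le n^{2} - n .
\]
% n = 10: up to 100 intersections and 90 ghosts;
% n = 50: up to 2500 intersections and 2450 ghosts, matching the text.
```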
Some previous approaches to assignment problems, such as the deghosting problem, have emphasized solutions in software on general purpose computers. One disadvantage with software solutions to assignment problems is that they require extensive algorithm development and massive computational power, and are exceedingly slow for real-time or near-real-time problems such as angle-only target location problems. This is because the deghosting problem is a non-deterministic polynomial complete (NP-complete) class problem, for which the computational requirements of conventional techniques increase exponentially as a function of the number of targets. As a result, the problem involves a "combinatorial explosion", or exponential blowup, in the number of possible answers. Thus, even conventional solutions using advanced state-of-the-art array and parallel processors have difficulty handling real-time deghosting problems of realistic sizes. For example, conventional solutions of the deghosting problem for three sensors are sufficiently fast up to about 15 targets, but become exponentially computation-bound beyond that. For numbers of targets in the range of 30 or so, typical software approaches using integer programming techniques could require virtually years of VAX-equivalent CPU time.
Others have suggested approaches for solving assignment problems utilizing neural networks. Such systems are called neural networks because of their similarity to biological networks in their highly interconnected structure and in their ability to adapt to data and exhibit self-learning. A key advantage of a neural network approach is that the network can be trained to solve the problem and an explicit algorithm does not have to be developed. For example, see U.S. Patent No. 4,660,166, issued to J. Hopfield, where a type of neural network is used to solve the Traveling Salesman Problem. Others have suggested the use of a neural network technique known as simulated annealing. See S. Kirkpatrick, Gelatt, and Vecchi, "Optimization by Simulated Annealing", 220 Science, pp. 671-680 (1983). However, while algorithms using this approach have been developed, to the applicant's knowledge, practical working architectures have not been implemented. Also, neural nets such as the one described in U.S. Patent No. 4,660,166 are generally not fast enough for real-time applications of reasonable complexity. Recent results suggest severe limitations to the size of problems addressable by Hopfield nets. For example, the Traveling Salesman Problem has been found to fail for more than thirty cities. Also, some neural network approaches use a network that embodies certain application constraints or use certain optimizing techniques. This results in greater complexity and other limitations, since the constraints must first be known and then the network configured to learn these constraints.
Thus it would be desirable to provide an information processor that reduces the computation time required to solve constrained assignment problems of realistic sizes in real-time. It would also be desirable to provide an information processor that can solve such problems without requiring extensive algorithm development.
It would further be desirable to provide an information processor for solving assignment problems that requires neither explicitly known constraints nor optimization techniques.
SUMMARY OF THE INVENTION
In accordance with the teachings of the present invention, a neural network is adapted to receive at least three sets of inputs simultaneously. The three or more sets of inputs are directed to three or more sets of input neurons respectively. The neural network is adapted to produce an output which is a rearrangement of the inputs in one of the three or more sets. By correlating the output with one of the sets of inputs, a solution to the assignment problem is given. This solution also permits the correlation of one or more additional groups of inputs. The network is trained by presenting a number of known solutions, consisting of a rearrangement of one of the three or more sets, presented at the output neurons.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram of a typical deghosting problem for angle-only data from three sensors;
FIG. 2 is a diagram of a neural network in accordance with the present invention adapted to receive inputs from three sensors;
FIG. 3 is a diagram of a neural net adapted to receive inputs from three sensors where the number of targets is less than the number of inputs; and
FIG. 4 is a diagram of a neural network in accordance with the prior art.
DESCRIPTION OF THE PREFERRED EMBODIMENT
In accordance with the teachings of the present invention, a method and apparatus is provided for solving constrained assignment problems using a neural network. Using examples of solved assignment problems, the neural network is trained to solve novel problems. The constraints of the particular problem to be solved are not articulated, but instead are deduced by the network through its training. The present invention presumes that there exists a structure inherent to each particular problem which is not obvious to the analyst or algorithm developer. The present invention uses the ability of a trainable neural network to discern and extract this underlying structure, and subsequently use this structure, represented in its inner connections, to generalize and solve the assignment problem for arbitrary new configurations presented to the network. In accordance with the preferred embodiment, the techniques of the present invention will be applied to the problem of deghosting angle-only data from sensors. It will be appreciated that the deghosting problem is presented as one example of the utilization of the techniques of the present invention, but that a variety of other types of problems can be solved using the present invention. In FIG. 1 a scenario is depicted wherein angle-only data is obtained from three sensors. These sensors, 10, 12 and 14 respectively, are capable of detecting the angle at which an object lies with respect to the sensor, but are not capable of giving information regarding the distance of the object. Thus it is known that a first object 16 lies at some angle with respect to a given reference point from sensor one 10. Likewise it is also known that a second object 18 and a third object 20 lie at second and third angles respectively from sensor one 10. By employing additional sensors 12 and 14, it is desired to locate the position of the three objects 16, 18 and 20 by finding points at which scan lines 22, connecting sensors and detected objects, intersect. In particular, it is assumed that objects 16,
18 and 20 will lie at points where a scan line 22 from each of sensors one 10, two 12 and three 14 intersect. In most cases, however, scan lines 22 will have more than one triple intersection. For example, two triple intersections occur along the scan line 22 connecting sensor three 14 and object 16. The extra triple intersection 24 is called a ghost, since it is known that no object exists at that point. The deghosting problem is the process of distinguishing ghosts from true objects. Referring now to FIG. 2, an Adaptive Processor for Multi-Source Data Fusion 30 is shown that is capable of solving the deghosting problem for the three sensors depicted in FIG. 1. The adaptive processor for multi-source data fusion 30, in accordance with the present invention, comprises a plurality of individual processors, or neurons 32, arranged in a manner known generally as a neural network. In particular, the individual neurons 32 are interconnected in such a way that the interconnection strength can be altered during a training procedure.
The particular interconnection scheme and training algorithm employed may be according to any number of neural network techniques, including, but not limited to, the Multilayer Perceptron, the Boltzmann machine, counterpropagation, the Hopfield Net, the Hamming Net, etc. While it is preferable that the neural network architecture and training algorithm employed belong to the class of supervised, as opposed to unsupervised, nets, unsupervised nets may also be applicable.
By way of a nonlimiting example, the adaptive processor 30 in accordance with the preferred embodiment utilizes a neural network known as the Multilayer Perceptron, as shown in FIG. 4. The functioning of the multilayer perceptron 34, as well as its associated learning algorithm, known as backward error propagation, are described in detail in Rumelhart, Hinton, and Williams, "Learning Internal Representations by Error Propagation," in D. E. Rumelhart and J. L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, M.I.T. Press (1986), which is incorporated herein by reference. In general, the multilayer perceptron 34 comprises an arrangement of processing units, or neurons, of three types: input neurons 36, inner neurons 38 and output neurons 40. Each of the neurons 36, 38 and 40 comprises a similar processor which is capable of receiving an input and generating an output that is some function, for example a sigmoid logistic nonlinearity, of the input. In addition, each neuron 36, 38 and 40 is connected to every neuron in the adjacent layer by means of synaptic connections 42. The synaptic connections 42 are weighted connections and are capable of increasing or decreasing the connection strength between individual neurons 36, 38 and 40. During a training procedure in accordance with the backward error propagation technique, a training input is presented to the input neurons 36 and an output is produced at the output neurons 40. A desired output 44 is then presented to the output neurons 40 and the difference between the desired and the actual output is found. An error signal based on this difference is used by the neurons 36, 38 and 40 to change the weighted synaptic connections 42 in such a way as to reduce the error. This error signal is then propagated to the next layer of neurons, for example, the inner layer 38. After repeated training sessions, the desired output will be produced in response to the training input. Once trained, the neural network 34 can be used to produce the desired output even where the input contains only incomplete or altered forms of the desired input.
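To make the mechanics concrete, here is a minimal numeric sketch of a multilayer perceptron trained by backward error propagation. It is not the patent's implementation: the layer sizes, learning rate, weight initialization, and the tanh output nonlinearity (chosen so outputs can span the -1 to +1 range used below) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyMLP:
    """One inner layer of sigmoid neurons, tanh output neurons."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.x = np.asarray(x, dtype=float)
        self.h = sigmoid(self.x @ self.W1 + self.b1)   # inner-neuron outputs
        self.y = np.tanh(self.h @ self.W2 + self.b2)   # outputs in (-1, +1)
        return self.y

    def train_step(self, x, target):
        y = self.forward(x)
        err = y - target
        # Error signal at the output layer (tanh' = 1 - y^2) ...
        delta2 = err * (1.0 - y**2)
        # ... propagated back to the inner layer (sigmoid' = h(1 - h)).
        delta1 = (delta2 @ self.W2.T) * self.h * (1.0 - self.h)
        # Adjust the weighted synaptic connections to reduce the error.
        self.W2 -= self.lr * np.outer(self.h, delta2)
        self.b2 -= self.lr * delta2
        self.W1 -= self.lr * np.outer(self.x, delta1)
        self.b1 -= self.lr * delta1
        return 0.5 * float(err @ err)                  # squared-error loss
```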
Referring now to FIG. 2, the adaptive processor 30 is shown incorporating a neural network which may comprise a multilayer perceptron such as the one shown in FIG. 4, having certain modifications as will be discussed below. In the adaptive processor 30, the input neurons are divided into three groups. The first group of neurons 44 corresponds to inputs from sensor one 10. The second group of neurons 46 accepts inputs from sensor two 12. The third group of neurons 48 accepts inputs from sensor three 14. It should be noted that, accordingly, the number of neurons in each group 44, 46 and 48 is the same as the number of input angles from the three sensors 10, 12 and 14 respectively. The inner neurons 50 in the adaptive processor 30 may comprise any number of neurons, and may comprise multiple inner layers. It should be noted that synaptic connections 42 such as those shown in FIG. 4 are not shown in FIG. 2, for simplicity of illustration. However, as is the case in a conventional multilayer perceptron, each neuron in each layer of the adaptive processor 30 is connected to every neuron in every adjacent layer.
The adaptive processor 30 also includes a layer of output neurons 52 which will be used to produce an output that corresponds to the sensor inputs from sensor two 12. Thus, the number of output neurons 52 should match the number of input neurons in the second group 46, in accordance with the preferred embodiment. It should be noted that while in the preferred embodiment of the adaptive processor 30 the output neurons 52 are correlated with the sensor two neurons 46, a different group of sensor inputs could be chosen to correlate with the output neurons, in accordance with the present invention.
FIG. 2 also includes a series of graphs depicting the sensor angles from each sensor. In particular, the sensor one graph 54 is a graph of the sensor one angles arranged in increasing order. The ramped line 56 is derived by simply connecting the points at which the angles are plotted on the graph 54. Line 56 is shown as a straight line for simplicity, but it will be appreciated that it may be of any arbitrary shape depending on the arrangement of angles. It will also be noted that the angles from sensor one 10 are defined as X[1,j], which denotes the measured angle from sensor one 10 to the j-th object detected by sensor one. Likewise, X[2,j] identifies the j-th angle detected by sensor two. It will be appreciated that the five angles and five input neurons 44 in group one indicate that sensor one has detected five objects. Likewise, sensor two 12 and sensor three 14 have also detected five objects, and the angles are depicted in graphs 55 and 59 respectively. The adaptive processor 30 can be used to process a larger or smaller number of sensor angles, as well as to process inputs from more than three sensors; the only limit is the number of input neurons employed. In the graph for sensor one 54, the vertical axis has a range of -1 to +1. While sensor angles may have any arbitrary range, they have been normalized, using a conventional normalizing technique, to relative values lying between -1 and +1. This limits the range of values which the input neurons 44 must accept. Likewise, the sensor two 12 and sensor three 14 angles have also been normalized.
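The patent says only that a "conventional normalizing technique" maps each sensor's angles into -1 to +1, so the min/max rescaling below is one assumed choice, not the disclosed method.

```python
import numpy as np

def normalize_angles(angles_deg):
    """Sort a sensor's azimuths and rescale them linearly into [-1, +1]."""
    a = np.sort(np.asarray(angles_deg, dtype=float))
    return 2.0 * (a - a.min()) / (a.max() - a.min()) - 1.0

print(normalize_angles([12.0, 47.5, 23.1, 88.0, 61.4]))
# -> five values rising from -1.0 to +1.0, i.e. the ramped line of graph 54
```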
To begin training the adaptive processor 30, all of the sensor angles are presented to the input neurons 44, 46 and 48. While the sensor angles in FIG. 2 have been arranged in order from smallest to largest angle, creating the ramped curve 56, the sensor angles could be in any arbitrary order. The input neurons 44, 46 and 48 perform some transfer function on the inputs and transmit signals to the inner neurons 50 through interconnections such as the synaptic connections 42 shown in FIG. 4. Likewise, the inner neurons 50 perform a transfer function on the inputs received from the input layer and transmit a signal to the output neurons 52 through a synaptic connection 42. The output neurons 52 will then produce some output. In accordance with the back propagation training algorithm, the input neurons 44 are presented with a training input consisting of angles from the three sensors for known detected objects. The output neurons 52 are presented with a training input A[2,j]. A[2,j] comprises normalized angles from sensor two 12 which are rearranged. That is, the training input A[2,j] contains the same angles as the training input from sensor two that is presented to the input neurons 46, except that the angles have been rearranged into an order which matches the order of the sensor one angles. It is recalled that the sensor one input angles are arranged in increasing order from the smallest to the largest angle. Thus, the first angle 58 in A[2,j] represents the angle at which a particular object is detected from sensor two. The first sensor one angle, which was presented to input neuron 60, represents the angle at which the same object was detected from sensor one 10. In similar fashion, the second angle 62 in A[2,j] represents the angle at which a second object was detected by sensor two 12. The second sensor one angle, presented to the second input neuron 64, represents the angle at which the second object was detected by sensor one. The remaining angles in A[2,j] likewise correspond to the next consecutive sensor one 10 angles.
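Expressed as code, the desired output A[2,j] is just a permutation of the normalized sensor-two angles. In the hypothetical sketch below, `pairing` stands for the ground-truth object correspondence known during training; all names and values are illustrative.

```python
import numpy as np

def make_target(sensor2_norm, pairing):
    """A[2,j]: sensor-two angles reordered so that position j holds the
    angle of the same object seen at the j-th (sorted) sensor-one angle."""
    return np.asarray(sensor2_norm, dtype=float)[np.asarray(pairing)]

x1 = np.array([-1.0, -0.4, 0.1, 0.6, 1.0])   # sensor one angles, sorted
x2 = np.array([0.3, -1.0, 1.0, -0.2, 0.7])   # sensor two angles, as presented
pairing = [1, 3, 0, 4, 2]                     # known object correspondence
A2 = make_target(x2, pairing)                 # -> [-1.0, -0.2, 0.3, 0.7, 1.0]
```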
The adaptive processor 30 is trained with this particular rearrangement of the sensor two angles A[2,j] because such an output permits the identification of the true targets and the elimination of ghosts. That is, by correlating the corresponding angles from just two of the sensors, the deghosting problem is solved, as will be appreciated by those skilled in the art.
In accordance with the preferred embodiment, the adaptive processor 30 utilizes the backward error propagation technique to train the network. In particular, the output neurons 52 will find the difference between the desired output A[2,j] and the actual output of each neuron 52. This difference will be used to adapt the weights between the output neurons 52 and the next layer of neurons 50. Likewise, this error signal is used to adapt weights through the network up to the last layer of synaptic connections reaching the input neurons 44, 46 and 48. This training procedure is repeated by again presenting the same inputs from the three sensors and again training the network with the desired output A[2,j]. The training procedure is repeated until the actual output falls within a desired range of the desired output A[2,j].
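A sketch of this stopping rule, reusing the hypothetical TinyMLP, normalize_angles, x1, x2 and A2 examples above; the tolerance and iteration cap are arbitrary choices, not values from the patent.

```python
x3 = normalize_angles([18.0, 64.0, 9.0, 42.0, 77.0])  # sensor three (made up)
x = np.concatenate([x1, x2, x3])                       # 15 input neurons
net = TinyMLP(n_in=15, n_hidden=20, n_out=5)
for _ in range(200_000):
    net.train_step(x, A2)
    if np.max(np.abs(net.forward(x) - A2)) < 0.05:     # within desired range
        break
```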
Once trained, the adaptive processor 30 will produce an output y[2,j], shown in graph 65, in response to a new set of input angles x[1,j], x[2,j], x[3,j] for which it is not known which angles in a given sensor correlate with which angles from another sensor. The actual output y[2,j] of the trained adaptive processor 30, like the desired output A[2,j] used to train the net, consists of a rearrangement of the sensor two 12 angles, such that the first angle 66 in the output y[2,j] represents the angle at which an object is detected by sensor two, with that same object being detected by sensor one at the angle presented to input neuron 60. In this way, all the angles from two of the sensors are correlated so that the location of objects can be determined. The continuous-valued outputs of the output neurons may either be used exactly to correlate with sensor one angles, or may be best-matched to the actual sensor two angles, which would then be used for correlating with sensor one.
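The patent does not specify how the best-matching is done; a greedy nearest-neighbour assignment, as sketched below, is one plausible reading.

```python
def best_match(y_out, sensor2_norm):
    """Snap each continuous output to the nearest unused sensor-two angle,
    so each actual angle is assigned to exactly one output position."""
    remaining = list(sensor2_norm)
    matched = []
    for y in y_out:
        nearest = min(remaining, key=lambda a: abs(a - y))
        matched.append(nearest)
        remaining.remove(nearest)
    return matched
```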
In addition, the adaptive processor 30 may be trained further with new scenarios. That is, new sets of angles for the same number and location of sensors can again be presented to the input neurons 44, 46, 48, and the adaptive processor 30 can be trained with a new set of desired outputs A[2,j] for a different set of known detected objects. The number of different scenarios that the adaptive processor 30 is trained with will depend on a number of factors, such as the nature of the problem, the number of sensors and targets, the desired accuracy, etc. In any event, once all of the training is complete, all of the information has been made available to the adaptive processor 30, and the adaptive processor, through its interconnections, will embody an algorithm that can generalize to a new case that it was not trained with. It is presumed that there exists a structure inherent to the problem for a given sensor arrangement which is now embodied in the adaptive processor 30. Thus, the adaptive processor can now recognize true targets and thereby eliminate ghosts.
It will be appreciated that, while FIG. 2 depicts the adaptive processor 30 being trained to yield a set of angles that are correlated with sensor one 10, it could just as easily be trained to yield a set of angles that are correlated with sensor three 14, or with other additional sensors as may be employed.
Once trained, the adaptive processor 30 can be presented with a new set of sensor angles from the three sensors 10, 12 and 14. As during the training procedure, the sensor angles must be arranged in the same order, such as from smallest to largest, or whatever other order was used during training. The output of the adaptive processor 30 will comprise a set of sensor two 12 angles y[2,j] which are correlated with the sensor one 10 angles. That is, the first output angle can be matched with the first sensor one angle, the second output angle can be matched with the second sensor one angle, and so on. This matching or correlation will yield intersecting points which correspond to actual locations of objects, and the assignment problem is thereby solved. As mentioned previously, the continuous-valued outputs of the output neurons may either be used exactly to correlate with sensor one angles, or may be best-matched to the actual sensor two angles, which would then be used for correlating with sensor one.
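Putting the hypothetical pieces above together: the j-th output angle pairs with the j-th sensor-one angle, and intersecting the corresponding lines of bearing (bearing_intersection above, after undoing the normalization) localizes the j-th true object. Assumed glue code, reusing net, x, x1, x2 and best_match from the earlier sketches.

```python
y2 = best_match(net.forward(x), x2)   # sensor-two angles, reordered by the net
for j, (a1, a2) in enumerate(zip(x1, y2)):
    print(f"object {j}: sensor-one angle {a1:+.2f} <-> sensor-two angle {a2:+.2f}")
# Mapping each normalized pair back to raw azimuths and intersecting the two
# lines of bearing yields the object positions, eliminating the ghosts.
```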
In the above example, the number of input neurons in each of the three groups 44, 46 and 48 matched the number of angles and detected objects for each sensor. FIG. 3 shows the case where the number of input neurons 44, 46 and 48 respectively is larger than the number of input angles from the three sensors. By using the techniques shown in FIG. 3, the adaptive processor 30 can be used to solve problems having different numbers of objects and sensor angles. As shown in FIG. 3, the adaptive processor 30, having five input neurons for each sensor, can be used to solve a problem where there are only three angles for each sensor. To accomplish this, the first two inputs are set to -1, and the remaining angles are normalized to be between zero and 1. Likewise, the training input A[2,j] has the first two angles set to -1. The adaptive processor is then trained as described in connection with FIG. 2. In this way, the same adaptive processor for Multi-Source Data Fusion 30 can be used for a variety of problems where the number of angles from each sensor is equal to or less than the number of input neurons in each input group 44, 46 and 48.
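A sketch of the FIG. 3 padding scheme; the text says the remaining angles are normalized "to be between zero and 1", so the min/max rescaling below is again an assumption.

```python
import numpy as np

def pad_inputs(angles_deg, n_neurons):
    """Fill unused leading input neurons with -1 and map the real angles
    into [0, 1], per the FIG. 3 convention."""
    a = np.sort(np.asarray(angles_deg, dtype=float))
    a = (a - a.min()) / (a.max() - a.min())
    pad = np.full(n_neurons - len(a), -1.0)
    return np.concatenate([pad, a])

print(pad_inputs([20.0, 55.0, 80.0], 5))
# -> two unused neurons at -1; the real angles map to 0.0, 0.583, 1.0
```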
Once a given adaptive processor 30 is trained to solve a particular class of problems, the values of the weights in the synaptic connections 42 may then be transferred to additional processors with the weights set to these fixed values. In this way, an unlimited number of trained processors can be reproduced without repeating the training procedure. In view of the foregoing, those skilled in the art should appreciate that the present invention provides an adaptive processor for multi-source data fusion that can be used in a wide variety of applications, including, but not limited to, the class of constrained assignment problems. The various advantages should become apparent to those skilled in the art after having the benefit of studying the specification, drawings and following claims.
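Weight transfer amounts to copying the trained connection values into clones whose weights are then held fixed; a minimal sketch, reusing the hypothetical TinyMLP class above.

```python
import copy

def clone_trained(net, n_copies):
    """Reproduce a trained processor by copying its synaptic weights."""
    clones = [copy.deepcopy(net) for _ in range(n_copies)]
    for c in clones:
        c.lr = 0.0   # freeze the weights: clones are never retrained
    return clones

fielded_units = clone_trained(net, 10)   # ten identical trained processors
```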

Claims

What is Claimed is:
1. An information processor (30) for producing an output in response to a particular set of inputs, having a plurality of neurons (32) adapted to receive data signals and adapted to produce an output, including a plurality of input neurons (44, 46, 48), each being adapted to receive one of said inputs, and a plurality of output neurons (52), each adapted to produce a set of outputs, said processor also having a plurality of synaptic connections (42) providing a weighted coupling between said neurons, characterized by: means for training said processor (32) to produce a desired set of outputs (64) during a training procedure; said inputs including at least three groups of inputs corresponding to at least three groups of data (54, 55, 59); said desired output (64) used during training consisting of a known rearrangement of one of the three groups of inputs (54, 55, 59), said known rearrangement creating an association between said desired output (64) and one of the other of said groups of inputs, whereby, once trained, said processor (30) will, in response to a new set of inputs, produce a new set of outputs which are a new rearrangement of one of said groups of inputs, thereby creating an association between said one group of data and said other group of data.

2. The information processor of Claim 1 wherein said means for training further comprises: means for computing the difference between said desired output and the actual output of output neurons and inner neurons during training (32); and means for adjusting said weights (32) so as to minimize the difference between said desired output (64) and the actual output.
3. The processor of Claim 1 wherein said neurons (32) are arranged in an architecture characteristic of a multilayer perceptron.

4. The processor of Claim 1 wherein each of said groups of input neurons (44, 46, 48) has an equal number of neurons.

5. The processor of Claim 1 wherein there are the same number of output neurons (52) as there are neurons in each of said groups.

6. The processor of Claim 1 wherein said inputs represent angular data from sensors (10, 12, 14).

7. The processor of Claim 1 wherein said inputs in each group are arranged in an order of increasing magnitude.
EP90912737A 1989-08-11 1990-08-09 Adaptive processor for multi-source data fusion Withdrawn EP0438577A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US39268389A 1989-08-11 1989-08-11
US392683 1989-08-11

Publications (1)

Publication Number Publication Date
EP0438577A1 (en)

Family

ID=23551592

Family Applications (1)

Application Number Title Priority Date Filing Date
EP90912737A Withdrawn EP0438577A1 (en) 1989-08-11 1990-08-09 Adaptive processor for multi-source data fusion

Country Status (3)

Country Link
EP (1) EP0438577A1 (en)
JP (1) JP2635443B2 (en)
WO (1) WO1991002321A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063752B (en) * 2018-07-17 2022-06-17 华北水利水电大学 Multi-source high-dimensional multi-scale real-time data stream sorting method based on neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9102321A1 *

Also Published As

Publication number Publication date
JP2635443B2 (en) 1997-07-30
WO1991002321A1 (en) 1991-02-21
JPH04501329A (en) 1992-03-05

Similar Documents

Publication Publication Date Title
US5276770A (en) Training of neural network for multi-source data fusion
Sharkawy Principle of neural network and its main types
Fish et al. Artificial neural networks: a new methodology for industrial market segmentation
Fukumi et al. Rotation-invariant neural pattern recognition system with application to coin recognition
Abdella et al. The use of genetic algorithms and neural networks to approximate missing data in database
Holt et al. Finite precision error analysis of neural network hardware implementations
Croall et al. Industrial applications of neural networks: project ANNIE handbook
Hussain et al. Application of neural computing in pharmaceutical product development: computer aided formulation design
WO1991002323A1 (en) Adaptive network for classifying time-varying data
Santos et al. Perception maps for the local navigation of a mobile robot: a neural network approach
Grant et al. The use of neural techniques in PIV and PTV
US5454064A (en) System for correlating object reports utilizing connectionist architecture
Steinbach et al. Neural networks–a model of boolean functions
EP0438577A1 (en) Adaptive processor for multi-source data fusion
Miller et al. Office of Naval Research contributions to neural networks and signal processing in oceanic engineering
Burke Competitive learning based approaches to tool-wear identification
Barshan et al. Comparative analysis of different approaches to target differentiation and localization with sonar
EP0362876A2 (en) Cellular network assignment processor using minimum/maximum convergence technique
Yacoub et al. Features selection and architecture optimization in connectionist systems
Ma et al. LVQ neural network based target differentiation method for mobile robot
Siemiatkowska A highly parallel method for mapping and navigation of an autonomous mobile robot
PANDYA et al. A stochastic parallel algorithm for supervised learning in neural networks
Ohkubo et al. Requirements for the learning of multiple dynamics
Beck et al. A self-training visual inspection system with a neural network classifier
US20220309326A1 (en) Learning method of neural network and neural processor

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19910406

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB NL

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Withdrawal date: 19930318