GB2330028A - Target tracking method - Google Patents

Target tracking method

Info

Publication number
GB2330028A
GB2330028A GB9820922A
Authority
GB
United Kingdom
Prior art keywords
cluster
pixel
frame
frames
intensity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9820922A
Other versions
GB9820922D0 (en)
GB2330028B (en)
Inventor
Ian Mackieson Parker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thales Optronics Ltd
Original Assignee
Thales Optronics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thales Optronics Ltd filed Critical Thales Optronics Ltd
Publication of GB9820922D0 publication Critical patent/GB9820922D0/en
Publication of GB2330028A publication Critical patent/GB2330028A/en
Application granted granted Critical
Publication of GB2330028B publication Critical patent/GB2330028B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/782Systems for determining direction or deviation from predetermined direction
    • G01S3/785Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system
    • G01S3/786Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system the desired condition being maintained automatically
    • G01S3/7864T.V. type tracking systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A tracking method uses noisy imaging data detected at certain time intervals by a sensor 10 (which may be an infra-red camera) and converts it into a digitised frame consisting of a number of pixels (by digitiser 12) prior to storing in memory 14. Successive digitised frames are stored in the memory and are analysed by processor 16 for pixel clusters. Analysis is achieved by shifting at least a portion of adjacent image frames by a first predetermined shift value, and integrating pixel intensities to produce a combination frame for the first shift value. The step is repeated with a changed shift value for the received image frames, thereby generating a plurality of combination frames. The highest intensity pixel at each location from all the combination frames is used to generate a results frame which is analysed utilising an association algorithm to identify the track of a moving object which is displayed on output device 18.

Description

Method and Apparatus for Target Track Identification The present invention relates to a method and apparatus for tracking a moving object as located in noisy image data produced by an imaging detector, where it is impossible to distinguish the object, or the track of the object, from the noisy background without performing signal processing operations. The invention is particularly suitable for use with image data having an object which is represented by only a relatively small amount of the data, or for use with a diffuse object or objects occupying a relatively large amount of the data.
In one known system for detecting a moving target by signal processing operations, the image data is in the form of sequentially received image frames each composed of an array of pixels where each pixel represents the intensity of the image at the location of the pixel in the array. For each image frame which is received by the signal processor, only those pixels are stored which have an intensity value greater than a threshold level. Each stored pixel is stored in a computer memory as a bit word, having bits representing the x position of the pixel relative to its frame, bits representing the y position of the pixel relative to its frame, and bits representing the magnitude of intensity of the pixel.
The image frames are provided by a 360 degree scanning sensor head and are produced at a frame rate of about one every 1.5 seconds.
A first form of signal processing is performed on the limited number of stored pixels emanating from a single frame in order to identify those stored pixels which have adjacent locations in the frame and which can therefore be taken as forming part of a cluster. There may be several clusters identified in the frame, any one of which may be representative of the target object. The central position, intensity, and size of each cluster in the single frame is stored for subsequent signal processing. Each stored cluster is stored as a bit word having bits representing a number of parameters relating to the cluster, for example, the position, intensity and size of the cluster.
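The per-frame clustering described above (grouping stored pixels that occupy adjacent locations) can be sketched in Python as follows. The 8-connectivity and the exact summary fields are illustrative assumptions, since the patent does not specify them:

```python
from collections import deque

def find_clusters(pixels):
    """Group stored (x, y, intensity) pixels into clusters of
    8-connected neighbours, as in the known single-frame processing."""
    coords = {(x, y): i for (x, y, i) in pixels}
    seen, clusters = set(), []
    for start in coords:
        if start in seen:
            continue
        queue, members = deque([start]), []
        seen.add(start)
        while queue:
            x, y = queue.popleft()
            members.append((x, y))
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (x + dx, y + dy)
                    if nb in coords and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        # summarise the cluster: central position, intensity, size
        total = sum(coords[p] for p in members)
        cx = sum(p[0] for p in members) / len(members)
        cy = sum(p[1] for p in members) / len(members)
        clusters.append({"centre": (cx, cy),
                         "intensity": total,
                         "size": len(members)})
    return clusters
```

Each returned dictionary corresponds to one stored cluster bit word (position, intensity and size).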
Once the first and second image frames are received, and the clusters in each frame have been identified and stored, then a second form of signal processing is performed on the two sets of stored clusters utilising an "association" algorithm. This signal processing is used to identify one cluster from the second frame which is associated with one cluster from the first frame.
Associated clusters may represent a target object occupying a different location on each of the frames because of the movement of the object.
When two associated clusters are identified then an association vector is generated which stores the displacement between the associated clusters. The association vector is used to predict the expected position of the associated cluster on the third frame.
The signal processor continues to predict expected cluster locations for a series of n frames (where n is a fixed value for the system, typically 4, 6, or 8). These frames are received, pixels are stored, and clusters are identified and stored, as before. If an associated cluster is correctly predicted for a predetermined number m of the n frames (for example, 3 out of 4 frames, or 4 out of 6 frames, or 6 out of 8 frames) then the associated clusters determined by the association vector are "declared" to be a track of the target object. The target object location for any particular frame or sequence of frames within the n frames is then calculated from the association vector and output either in its bit word form for further processing or visually in a display.
If no associated cluster is correctly predicted for the predetermined number m of the n frames then no target track is identified. When the existing declared association vector fails the system goes back to provide a second association vector. There may be, for example, 3 or 4 possible associated clusters from the first and second frames, so 3 or 4 different association vectors may be stored and used to predict associated clusters in the third frame. One associated cluster is used initially, if that fails to produce a track in m out of the n frames then the second associated cluster is used, and so on until all of the possible association clusters have been used within the n frames.
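The m-out-of-n confirmation applied to one association vector might be sketched as below. The position tolerance used to decide whether a prediction was "correct" is an assumption, as the known system's matching criterion is not specified here:

```python
def confirm_track(initial_pos, vector, cluster_frames, m, tol=1.0):
    """Predict the cluster position on each later frame from the
    association vector and count correct predictions; declare a track
    when at least m of the n frames contain a matching cluster."""
    hits = 0
    pos = initial_pos
    for clusters in cluster_frames:       # one list of cluster centres per frame
        pos = (pos[0] + vector[0], pos[1] + vector[1])   # predicted centre
        if any(abs(cx - pos[0]) <= tol and abs(cy - pos[1]) <= tol
               for (cx, cy) in clusters):
            hits += 1
    return hits >= m
```

With, say, 3 hits out of 4 frames and m = 3 the track is declared; with m = 4 it is not, and the processor would move on to the next candidate association vector.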
This process is continually updated as subsequent image frames are received so that when the (n+1)th frame is received clusters from the 1st frame are rejected.
Selected pixels from the (n+1)th frame are stored, clusters are identified and stored, and the processor determines whether the predicted location of the associated cluster is correct.
A disadvantage of this known system is that no account is taken of the relative intensity of the clusters in determining the target object, only whether the intensity is greater than a predetermined value.
Therefore, a probability of a cluster being a target cannot be assigned based on the relative intensity of clusters. Another disadvantage of this system is that there is always a delay in identifying a track because at least m frames are required before the m out of n criterion can be satisfied. The system is also slow because the sensing head scans 360 degrees. A further disadvantage of the known system is that it is not suitable for detecting targets which have a low pixel intensity (below that of the intensity threshold).
In contradistinction, the system in accordance with the present invention (which will be set out in claim 1 hereafter) receives frames much more rapidly than hitherto, for example at video compatible rates, stores in a framestore each pixel of each received frame and performs no signal processing until a group of n frames have been received and stored, where n is at least 20 and preferably is of the order of 80.
Pixels are stored in the store in an ordered and predetermined manner corresponding to the ordering of the pixels in the array (for example, adjacent pixels are stored in adjacent memory locations). Signal processing is thereafter performed in three stages. Firstly, m different combinations from the pixels of the n frames are formed. Each combination consists of an array of combination pixels, where the intensity of each combination pixel is obtained by summing the intensities of a single pixel from each of the n frames. For each combination, the single pixel from each frame is not chosen at random, but is derived from a common (to all frames) displacement vector which determines the x and y displacement of the pixels between adjacent frames. This displacement vector corresponds to a velocity because a velocity can be stated as being a displacement of so many pixels per frame. A combination pixel is thereby generated which is a data word having bits representing the displacement vector and bits representing the summed intensity.
Edge effects, however, limit the number of pixels that may be used (because all of the frames are the same size) so a subset of the pixels from each frame is used.
This subset is defined by the overlap of the first frame and the nth frame. It will be understood that the number of pixels in the x and y directions of each frame is very much larger than the number of pixels that are represented by the displacement vector.
The m different combinations arise because m different predetermined displacement vectors are selected, each effectively representing a different assumed velocity of the target object because the frames are received at a constant frame rate.
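The shift-and-integrate step for a single displacement vector can be sketched in Python with NumPy. The overlap cropping follows the edge-effect subset described above; the function signature and the cropping arithmetic are illustrative assumptions:

```python
import numpy as np

def combine(frames, vx, vy):
    """Sum n frames along one displacement hypothesis of (vx, vy)
    pixels per frame.  Frame k is read through a window offset by
    (k*vx, k*vy), so a target moving at that velocity stays registered
    and accumulates.  Only the overlap of the first and nth shifted
    frames is summed, which avoids the edge effects."""
    n = len(frames)
    h, w = frames[0].shape
    oy, ox = abs(vy) * (n - 1), abs(vx) * (n - 1)
    out = np.zeros((h - oy, w - ox))
    for k, f in enumerate(frames):
        y0 = k * vy if vy >= 0 else oy + k * vy
        x0 = k * vx if vx >= 0 else ox + k * vx
        out += f[y0:y0 + h - oy, x0:x0 + w - ox]
    return out
```

A point target drifting at exactly (vx, vy) pixels per frame sums coherently to n times its single-frame intensity, while uncorrelated noise grows only as the square root of n.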
Secondly, pixels at the same location in the m different combinations are compared on an intensity basis and the most intense pixel for each location is stored as a data word having bits representing the displacement vector and the intensity of the most intense pixel. Thus a single composite combination called the results framestore is produced in an area of memory containing data words. Each data word has a memory address which corresponds to a pixel location and contains information about the displacement vector and the intensity of the pixel. Therefore, the addresses of the data words stored in memory can be used to determine the actual location of the pixels in the results framestore.
It will be understood that in embodiments of the present invention the m different combinations will be generated but will not be stored in a framestore: only the results framestore will be stored. The results framestore contains the highest intensity pixel for each pixel location from all of the combinations. This results framestore is then output.
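Selecting, per pixel location, the brightest of the m combinations together with its displacement vector might be sketched as follows, assuming the combination frames have already been cropped to a common size:

```python
import numpy as np

def merge_combinations(combos):
    """Given the m combination frames as a list of ((vx, vy), frame)
    pairs, keep per pixel location the highest integrated intensity
    and the displacement vector it came from (the results framestore
    data word content)."""
    stack = np.stack([frame for _, frame in combos])     # m x H x W
    best = stack.argmax(axis=0)                          # winning hypothesis
    intensity = np.take_along_axis(stack, best[None], axis=0)[0]
    velocity = np.array([v for v, _ in combos], dtype=int)[best]
    return intensity, velocity                           # H x W, H x W x 2
```

Only this merged pair need be stored; the m individual combinations can be discarded as they are generated, matching the single results framestore described above.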
This process is conducted during the accumulation of a second group of n frames and is then repeated for the second group of n image frames which are received. The process is repeated thereafter on a group by group basis.
The system continually captures groups of n frames and outputs the respective results framestores at a rate which is n times slower than the original frame rate.
Thirdly, the system operates on the series of output results framestores. The system determines for each results framestore possible clusters of pixels by using a "tournament sort" technique.
The tournament sort algorithm identifies the pixel of highest intensity in the results framestore then examines the neighbouring data words (representing neighbouring pixels) to determine whether there is a cluster. In the results framestore, all of the pixels in a cluster have the same velocity. All of the pixels in the cluster also have values of intensity higher than the surrounding pixels, and also higher than a noise value calculated for those pixels. If there is a cluster, a probability of the track being genuine is then evaluated as follows.
Firstly, an intensity value due to noise is calculated assuming that each pixel in the possible cluster produces Gaussian noise. This is done using standard statistical methods, such as an error function erfc(x) where x is the intensity. Secondly, the actual intensity of the cluster is compared with the calculated Gaussian noise value, and if the cluster intensity is higher than the Gaussian noise calculation by a predetermined amount then the cluster is accepted as a cluster, otherwise the pixels in the "cluster" are discarded as noise. The fact that a cluster has been accepted does not mean that it is a genuine cluster, merely that it is not Gaussian noise.
The cluster is stored together with a probability of the cluster being genuine. This probability is determined by comparing the cluster intensity with the Gaussian noise calculation and assigning a probability which is related to the difference between the two values.
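The Gaussian noise test and probability assignment might be sketched as below. The specific model (independent zero-mean pixels of known standard deviation, acceptance at a fixed number of noise sigmas) is an assumption; the text only says standard statistical methods such as the error function erfc are used:

```python
import math

def cluster_probability(cluster_sum, npix, sigma, accept_sigmas=4.0):
    """Assume each of the npix pixels in the possible cluster is
    zero-mean Gaussian noise of standard deviation sigma, so their sum
    has standard deviation sigma*sqrt(npix).  The chance that noise
    alone reaches cluster_sum is 0.5*erfc(z/sqrt(2)), where z is the
    score in noise sigmas; the cluster is accepted when z exceeds a
    predetermined margin."""
    z = cluster_sum / (sigma * math.sqrt(npix))
    p_noise = 0.5 * math.erfc(z / math.sqrt(2))
    accepted = z >= accept_sigmas
    return accepted, 1.0 - p_noise   # probability the cluster is not noise
```

As the text notes, acceptance only means the cluster is unlikely to be Gaussian noise, not that it is certainly a genuine target.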
If the cluster is accepted, the pixels from genuine clusters are then ignored, or deleted from the results frame.
The next highest intensity pixel (disregarding all of the pixels in the first cluster) is then located and the same procedure is followed to produce another possible cluster. This procedure is repeated until a predetermined criterion is fulfilled, for example, the procedure may be repeated a predetermined number of times, for a predetermined length of time, or until the intensity of the highest intensity pixel remaining is below a certain level.
These possible clusters are then stored as output fields, comprising data words which contain bits representing the centroid of the cluster, the spread of the cluster, the intensity of the cluster, the individual pixel locations in the cluster, the displacement vector of the pixels which form the cluster, and the probability of the cluster being genuine. The other pixels (not forming a cluster) are discarded. This process is repeated for each results framestore that is output.
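The tournament sort loop (brightest remaining pixel, same-velocity neighbour growth, deletion, repeat) might be sketched as follows. The 8-connectivity and the particular stopping criterion (an intensity floor plus a cluster count limit) are illustrative choices from the alternatives listed above:

```python
import numpy as np

def extract_clusters(intensity, velocity, max_clusters=5, floor=0.0):
    """Repeatedly take the brightest remaining pixel in the results
    frame, grow a cluster of 8-connected neighbours that share its
    velocity, record the output-field summary, then delete the
    cluster's pixels and continue."""
    img = intensity.astype(float).copy()
    clusters = []
    for _ in range(max_clusters):
        peak = np.unravel_index(np.argmax(img), img.shape)
        if img[peak] <= floor:
            break
        v = tuple(velocity[peak])
        stack, members, seen = [peak], [], {peak}
        while stack:
            y, x = stack.pop()
            members.append((y, x))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                            and (ny, nx) not in seen
                            and img[ny, nx] > floor
                            and tuple(velocity[ny, nx]) == v):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
        total = sum(img[p] for p in members)
        cy = sum(p[0] for p in members) / len(members)
        cx = sum(p[1] for p in members) / len(members)
        clusters.append({"centroid": (cy, cx), "velocity": v,
                         "intensity": total, "pixels": members})
        for p in members:                 # delete the cluster from the frame
            img[p] = floor
    return clusters
```

The noise test of the previous step would be applied to each candidate before it is kept as an output field.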
The output fields for the most recent results framestore are then compared with the output fields of previous framestores to locate associated tracks. The individual probabilities which are associated with each cluster are evaluated in determining the most probable tracks. Tracks which are successfully associated are declared as target tracks.
In other embodiments of the invention the system initially stores and operates on a group of n frames and then this group is shifted to include each frame that is received thereafter, with the first frame in the group being replaced by the new frame that is received.
In other embodiments of the invention the system may perform some pre-processing (or pre-treatment) of the data received from the sequentially received image frames. For example, if a large diffuse object is present in the received image frames then the system may perform some additional processing on the diffuse object to transform the region at the perimeter of the object into a few pixels which lie on the perimeter of the diffuse object. One method of performing this transformation is to differentiate the intensity of each pixel in the diffuse image with respect to the x and y position of the pixel. This single differentiation records the edges of the diffuse object. If the pixels are differentiated twice then this would produce a few points on the perimeter of the diffuse object corresponding to sharp kinks in the edges. Hence, the transformation process converts large diffuse objects to lines of pixels in the case of a single differentiation or to individual pixels in the case of differentiating twice.
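The single-differentiation pre-treatment can be sketched as a gradient magnitude, which leaves a line of pixels along the perimeter of a diffuse object while suppressing its flat interior; using np.gradient for the differentiation with respect to x and y is an implementation assumption:

```python
import numpy as np

def edge_transform(img):
    """Differentiate the pixel intensity with respect to the x and y
    positions and keep the gradient magnitude, so a large diffuse blob
    collapses to the pixels on its perimeter."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)
```

Applying the same operation a second time would further reduce the perimeter lines toward isolated points at sharp kinks, as described above.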
One advantage of the present invention is that it ignores noise, cyclical motion and chaotic movement (where a small change produces a disproportionately large change), but detects deterministic or systematic motion (where an object moves in a predictable way, for example, in a unidirectional manner).

According to a first aspect of the invention there is provided a signal processing method of tracking a moving object, located in noisy image data produced by an imaging detector, the method comprising the steps of: receiving a first sequence of n number of image frames consisting of a number of point pixels, applying each of m sets of possible shift values, each set of shift values corresponding to a velocity, to at least a subset of the pixels in each of the n image frames received, so that for each original frame in the sequence, m number of additional image frames are generated, thus producing m sequences of n image frames, integrating each of the m sequences of n image frames to generate m combination frames, generating a results frame comprising at each pixel location a representation of the highest intensity pixel from the m combination frames and the corresponding shift value, analysing the results frame to determine one or more clusters of pixels, and repeating the above steps on at least a second subsequent sequence of n number of image frames and assessing clusters of pixels from successive results frames utilising an association algorithm and in the event of association declaring the sequence of clusters to form one or more possible tracks.
Preferably, the signal processing method further comprises the step of storing the first sequence of n number of received image frames in an ordered and predetermined order such that for each image frame, each storage location stores the intensity of each pixel and the location of the storage location represents the position of the pixel in the image frame.
Preferably, the analysing step further comprises the step of assigning a probability to each possible cluster based on the intensity and extent of the cluster.
Preferably, the step of analysing the results frame utilises a sorting algorithm to identify the pixel of highest intensity in the results frame then to examine the neighbouring pixels to determine whether there is a cluster, and to determine the pixel of highest intensity among the remaining pixels whilst ignoring the pixels in any cluster previously identified.
Preferably, the analysing step includes the further steps of calculating an intensity value due to noise and comparing the calculated noise value with the cluster intensity, and if the cluster intensity is higher than the calculated noise value by a predetermined amount then accepting the cluster, otherwise discarding the cluster.
Preferably, the step of assigning a probability to each possible cluster includes the step of comparing the intensity of the cluster with the intensity of the calculated noise value and assigning a probability related to the difference between the two values.
Preferably, the step of assessing possible clusters from successive results frames includes the step of using the individual probabilities associated with each cluster in determining the most probable tracks.
According to a second aspect of the invention there is provided apparatus for tracking a moving object, located in noisy image data produced by an imaging detector, the apparatus comprising: an image detector for generating digitised images at video compatible rates, a memory for storing a sequence of digitised images, a data processor for shifting at least a portion of adjacent image frames by a first predetermined shift value from a range of such predetermined shift values, integrating pixel intensities at pixel locations to produce a combination frame for the first predetermined shift value, adding a shift component to the combination frame, the shift component representing the first predetermined value, changing the first predetermined value and repeating the shifting, integrating, and adding steps for the received image frames until all of the predetermined values have been used, thereby generating a plurality of combination frames, generating a results frame comprising the highest intensity pixel at each pixel location from all of the combination frames and the corresponding shift value for each pixel in the results frame, storing the results frame in the memory, analysing the results frame to determine possible clusters, assigning a probability to each possible cluster based on the intensity of the cluster, repeating the above stages continuously and assessing possible clusters from successive results frames to determine possible tracks, and an output display for displaying track information to the user.
The present invention will now be described in greater detail with reference to the drawings in which: Fig 1 shows a block diagram of apparatus for target identification according to one embodiment of the present invention; Figs 2A to 2D show a flow chart illustrating the steps of a method of receiving a sequence of captured image frames and producing an output track of an image in the captured image frames; and Fig 3 illustrates a cascade pipeline for use in implementing the method of Figs 2A to 2D.
In the following it will be assumed that the image frames contain a feature generated by a small, fast moving target object, such as an aeroplane, which it is desired to track. It is also assumed that the background is relatively noisy so that it is difficult or impossible to distinguish the object from the background without carrying out signal processing operations.
Referring to Fig 1 there is an image sensor 10 which is an infra-red camera. The image sensor 10 is connected to a digitiser 12 which produces a digitised image (or frame) corresponding to the image detected by the image sensor 10 at certain time intervals (t1).
Thus, the digitiser samples the image detected by the image sensor 10 and outputs a digitised frame to a memory 14 at each time interval.
The memory 14 stores the successive digitised frames received from the digitiser 12 at adjacent memory blocks.
The first memory block 14aa receives the first digitised sampled frame, the second memory block 14ab receives the second digitised sampled frame, and so on until a predetermined number of frames (in this embodiment 80 frames) have been received and stored in corresponding memory blocks.
Each digitised frame is composed of a matrix of pixels, where each pixel has an associated pixel intensity. Each memory block 14aa, 14ab, etc. has a 32bit word corresponding to each pixel in the frame which is stored in that memory block. Thus, each memory block has the same number of 32bit words as there are pixels in each digitised frame.
Each 32bit word is composed of an 8bit x component which represents the relative displacement (with reference to the frame) in the x direction, an 8bit y component which represents the relative displacement (with reference to the frame) in the y direction, and a 16bit magnitude component which represents the pixel intensity.
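The 32bit word might be packed and unpacked as follows; the ordering of the fields within the word is an assumption, as only their widths (8bit x, 8bit y, 16bit magnitude) are specified:

```python
def pack(x, y, intensity):
    """Pack one pixel into a 32bit word: 8bit x displacement,
    8bit y displacement, 16bit intensity magnitude (field order
    assumed)."""
    assert 0 <= x < 256 and 0 <= y < 256 and 0 <= intensity < 65536
    return (x << 24) | (y << 16) | intensity

def unpack(word):
    """Recover the (x, y, intensity) components from a packed word."""
    return (word >> 24) & 0xFF, (word >> 16) & 0xFF, word & 0xFFFF
```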
A data processor 16 is connected to the memory 14 for performing operations on the sampled digitised frames stored in the memory blocks 14aa, 14ab, etc. The data processor 16 is connected to an output 18 which displays the target location, velocity and trajectory to a user.
The operation of the apparatus of Fig 1 will now be described with reference to Figs 2A to 2D, which show flowcharts illustrating the steps involved in identifying a target track from a sequence of captured images.
At any given moment in time, and for any short period of time (e.g. a few seconds), the target object will be moving with a relatively constant velocity. It will be appreciated that if the image frames containing the feature of interest are successively spatially displaced at the same rate as, and in a direction opposite to, the target object, then the target object will effectively remain stationary relative to a common frame origin, assuming that the field of view has remained stationary.
If the frame velocity (the apparent speed at which the frame is moving) is Vx in the x direction, and n is the frame number, the shift in the x direction for the nth frame is Vxn. Similarly, the displacement for the nth frame in the y direction is Vyn. If all the displaced frames are subsequently added together, the resulting summed image frame will contain pixels of high intensity near the location of the target (provided the frame velocity is approximately equal to the target object velocity) whilst the remaining areas will usually contain pixels of only low intensity (unless there is another feature within the image frame moving at the same velocity as the target object).
The velocity of a target object is generally unknown prior to conducting signal processing operations. It is, therefore, necessary to conduct the operation described above for all possible velocities which the target object may reasonably be expected to have.
The summed frames resulting from each of the operations can then be compared on a pixel by pixel basis to identify that feature within all of the summed frames (representing all of the possible velocities) which has the greatest pixel intensity. The greatest pixel intensity should occur for the frame velocity closest to the target velocity.
Identification of the pixel with the highest intensity is performed by the data processor 16. The data processor 16 stores the summed image frames in unused blocks of memory. The data processor 16 then compares all of the summed image frames on a pixel by pixel basis to produce a new composite frame which has, at each pixel location, the highest intensity pixel for that location.
The new composite frame stores each pixel as a 32bit word which comprises a magnitude component (16bits) and the corresponding (8bit) x and (8bit) y components for each pixel. The x and y components associated with each pixel are used to indicate which summed image frame each pixel came from.
Thus, the new composite frame contains the brightest pixels selected from all of the summed frames and the corresponding x and y components for each pixel to indicate which summed frame each pixel came from.
Figure 3 illustrates schematically a cascade pipeline system 20 for carrying out the process of producing a new composite frame (results framestore).
This procedure is also detailed in Figs 2A and 2B, which show the steps involved in producing a single combination for each velocity (Vx and Vy) and the steps involved in producing a new composite frame (results framestore).

For each image frame in the collected sequence, a framestore 22 is provided. In this case, a total of eight framestores 22 are used for a sequence of eight frames (n=8). Each framestore 22 has associated with it a pair of registers 24a,b (shown only for the first framestore to aid clarity) which store the cumulative displacements in the x and y directions and the basic x and y displacement vector. The pair of registers associated with the first framestore store zero values for the x and y cumulative displacements and the x and y values for the basic displacement vector. The pair of registers associated with the second framestore store the x and y displacement values for both the cumulative and basic displacement vectors. The pair of registers associated with the third framestore store values equal to 2x and 2y for the cumulative displacement vector and values of x and y for the basic displacement vector. The pair of registers associated with the fourth framestore store cumulative displacement values equal to 3x and 3y and basic displacement vector values of x and y; and so on for the eight framestores. The cumulative displacement vector values are used to displace every frame relative to the first frame. In the embodiment of Fig 1, the framestore and the x and y registers are both stored in the memory 14. Each framestore 22 stores the magnitude of the pixel intensity for each pixel in the corresponding image and the basic (not the cumulative) x and y displacement values from the associated pair of registers 24a,b.
An image frame is clocked into the associated framestore 22 with the required cumulative displacement from the associated pair of registers 24a,24b. Pairs of framestores 22 are input to an adder 24 which adds the corresponding pixel intensities (and retains the basic x and y displacement values) from each pair of framestores 22. Thus, after each summation, the intensities increase but the basic x and y displacement values are unchanged.
Pairs of adders 24 are input to a second adder 26 which adds the corresponding pixel intensities from each pair of adders 24. The pair of second adders 26 is input to a summed framestore 28.
A results framestore 30 is provided which is initially blanked, i.e. all pixel intensities set to zero. A comparator 32 compares the intensities of corresponding pixels in the summed framestore 28 and the results framestore 30. The pixels in the results framestore 30 are replaced by those of the summed framestore 28 where the intensity of the pixels in the latter are greater.
The process is iteratively repeated for the remaining velocities such that, when the process is complete, the results framestore 30 will contain at each position the highest pixel intensities, together with the basic x and y displacement values for that pixel, from all of the summed framestores 28.
The results framestore 30 is then output to a cluster processor 34 for further processing.
The cluster processor 34 performs the steps shown in Figs 2C and 2D. The cluster processor 34 receives data from the results framestore 30 representing a sequence of frames which have been displaced and integrated. As each group of frames is received and processed a new results framestore 30 (a results framestore with different values) is produced. Thus the cluster processor 34 receives from the results framestore 30 a series of results frames at a constant rate.
This is similar to the raw data which is produced by known systems, although there is one important difference, namely, in the results framestore 30 there is x and y displacement information (representing velocity) for each pixel in the framestore 30. The output of the results framestore 30 can be used in a number of ways.
It can be used for tracking a moving target, for detecting motion, and for super-resolution techniques.
Tracking is performed by the cluster processor 34 as follows. Each output frame from the results framestore 30 is analysed and the system determines possible clusters of pixels by using a "tournament sort" technique.
The tournament sort technique identifies the pixel of highest intensity in the results framestore then examines the neighbouring data words (representing neighbouring pixels) to determine whether there is a cluster. In the results framestore, all of the pixels in a cluster have the same velocity. All of the pixels in the cluster also have values of intensity higher than the surrounding pixels, and also higher than a noise value calculated for those pixels. If there is a cluster, a probability of the track being genuine is then evaluated as follows.
Firstly, an intensity value due to noise is calculated assuming that each pixel in the possible cluster produces Gaussian noise. This is done using standard statistical methods, such as the complementary error function erfc(x), where x is the intensity. Secondly, the actual intensity of the cluster is compared with the calculated Gaussian noise value, and if the cluster intensity is higher than the Gaussian noise calculation by a predetermined amount then the cluster is accepted as a cluster; otherwise the pixels in the "cluster" are discarded as noise. The fact that a cluster has been accepted does not mean that it is a genuine cluster, merely that it is not Gaussian noise.
The cluster is stored together with a probability of the cluster being genuine. This probability is determined by comparing the cluster intensity with the Gaussian noise calculation and assigning a probability which is related to the difference between the two values.
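The noise test above can be sketched numerically. Assuming the per-pixel noise is Gaussian with a known mean and standard deviation (the parameters `noise_mean`, `noise_sigma` and the acceptance threshold are illustrative; the patent does not specify them), the tail probability follows from the complementary error function:

```python
import math

def false_alarm_probability(cluster_intensity, noise_mean, noise_sigma, n_pixels):
    """Probability that a summed cluster intensity this high arises from
    Gaussian noise alone (one-sided tail via erfc).  The sum of n_pixels
    independent Gaussian samples has mean n*mu and std sqrt(n)*sigma."""
    mu = n_pixels * noise_mean
    sigma = math.sqrt(n_pixels) * noise_sigma
    z = (cluster_intensity - mu) / (sigma * math.sqrt(2))
    return 0.5 * math.erfc(z)

def accept_cluster(cluster_intensity, noise_mean, noise_sigma, n_pixels,
                   p_threshold=1e-3):
    """Accept the cluster when the chance of it being pure noise is small;
    return (accepted, probability_of_being_genuine)."""
    p_noise = false_alarm_probability(cluster_intensity, noise_mean,
                                      noise_sigma, n_pixels)
    return p_noise < p_threshold, 1.0 - p_noise
```

The returned probability plays the role of the stored "probability of the cluster being genuine": the further the cluster intensity sits above the expected noise sum, the closer it is to one.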
If a cluster is accepted, its pixels are then ignored, or deleted from the results frame.
The next highest intensity pixel (disregarding all of the pixels in the first cluster) is then located and the same procedure is followed to produce another possible cluster. This procedure is repeated until a predetermined criterion is fulfilled; for example, the procedure may be repeated a predetermined number of times, for a predetermined length of time, or until the intensity of the highest intensity pixel remaining is below a certain level.
These possible clusters are then stored as output fields, comprising data words which contain bits representing the centroid of the cluster, the spread of the cluster, the intensity of the cluster, the individual pixel locations in the cluster, the displacement vector of the pixels which form the cluster, and the probability of the cluster being genuine. The other pixels (not forming a cluster) are discarded. This process is repeated for each results framestore that is output.
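The extraction loop above might be sketched as follows. The 4-connected flood fill, the dictionary field names and the `max_clusters` stopping criterion are assumptions standing in for the "tournament sort" technique, which the patent does not specify in detail:

```python
import numpy as np

def extract_clusters(intensity, velocity, noise_floor, max_clusters=10):
    """Repeatedly take the brightest remaining pixel, grow a cluster of
    connected neighbours that share its velocity and exceed the noise
    floor, record its output field, then delete those pixels."""
    work = intensity.copy()
    clusters = []
    for _ in range(max_clusters):
        peak = np.unravel_index(np.argmax(work), work.shape)
        if work[peak] <= noise_floor:
            break                       # stopping criterion: below noise level
        v = tuple(velocity[peak])
        stack, members = [peak], set()  # collect 4-connected same-velocity pixels
        while stack:
            y, x = stack.pop()
            if (y, x) in members:
                continue
            if not (0 <= y < work.shape[0] and 0 <= x < work.shape[1]):
                continue
            if work[y, x] <= noise_floor or tuple(velocity[y, x]) != v:
                continue
            members.add((y, x))
            stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
        ys, xs = zip(*members)
        clusters.append({
            "centroid": (sum(ys) / len(ys), sum(xs) / len(xs)),
            "intensity": float(sum(work[y, x] for y, x in members)),
            "pixels": sorted(members),
            "velocity": v,
        })
        for y, x in members:            # ignore these pixels from now on
            work[y, x] = 0.0
    return clusters
```

The returned records correspond to the output fields described above (centroid, intensity, pixel locations, displacement vector); spread and probability are omitted for brevity.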
The output fields for the most recent results framestore are then compared with the output fields of previous framestores to locate associated clusters: that is, a cluster in one of a set of output fields which has a corresponding cluster in one of another set of output fields. The probability assigned to each cluster is used to assist in determining cluster associations.
Associated clusters may represent the track of a moving target. Clusters which are successfully associated may be declared as a target, or if a number of clusters from each frame are associated then a number of tracks may be declared. This is different from the m out of n method of declaring tracks which is the state of the art.
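The association step might be sketched as a greedy nearest-centroid match, advancing each previous cluster by its own displacement vector before comparison. The gate distance and the record layout (`centroid`, `velocity` keys) are illustrative assumptions; the patent does not disclose a specific association algorithm:

```python
import math

def associate_clusters(previous, current, gate=2.0):
    """Greedily associate clusters from successive results frames.
    A previous cluster is advanced by its displacement vector (vx, vy)
    and matched to the nearest unused current cluster lying within
    `gate` pixels of the predicted centroid.  Returns a list of
    (prev_index, curr_index) pairs."""
    tracks, used = [], set()
    for i, p in enumerate(previous):
        cy, cx = p["centroid"]
        vx, vy = p["velocity"]
        predicted = (cy + vy, cx + vx)   # centroid advanced by one frame step
        best, best_d = None, gate
        for j, c in enumerate(current):
            if j in used:
                continue
            d = math.hypot(c["centroid"][0] - predicted[0],
                           c["centroid"][1] - predicted[1])
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            tracks.append((i, best))
    return tracks
```

A fuller implementation would weight candidate matches by each cluster's stored probability of being genuine, as the text suggests.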
Motion detection may be carried out by analysing the results framestore to determine whether there are any clusters. If there are clusters then a threshold is applied to the magnitude of the velocity √(vx² + vy²) and the pixel intensity. If the magnitude of the velocity is above a predetermined level and the intensity is above a predetermined (noise) level, then the system declares that the target is moving.
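The two-threshold motion test reduces to a one-line predicate; the particular threshold values below are placeholders, not values taken from the patent:

```python
import math

def is_moving(vx, vy, intensity, v_min=0.5, noise_level=3.0):
    """Declare motion when the velocity magnitude sqrt(vx^2 + vy^2)
    exceeds the velocity threshold and the pixel intensity exceeds
    the noise level.  Threshold defaults are illustrative only."""
    return math.hypot(vx, vy) > v_min and intensity > noise_level
```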
Super-resolution processing may be conducted using the original frames by dividing each pixel into, for example, four or more pixels using interpolation formulae and producing a higher-resolution frame from these divided pixels. At this stage, pixels which were adjacent are now (in the dividing-into-four example) separated by two new pixels whose values lie between those of the original two adjacent pixels. A displacement vector can now be used which has a displacement of less than one (previous) pixel because of the new intermediate pixels.
This provides a more accurate estimate of the velocity of the target as the "resolution" of the displacement vector is increased.
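The pixel-subdivision step can be sketched as a simple bilinear upsampling that inserts interpolated pixels between the original samples, so that sub-pixel displacement vectors become expressible. The averaging formula below is one possible choice of interpolation, the patent leaving the formulae unspecified:

```python
import numpy as np

def upsample2x(frame):
    """Insert interpolated pixels between the original samples:
    originals land on even indices, midpoints between rows/columns
    on odd indices, and diagonal midpoints average four neighbours."""
    h, w = frame.shape
    out = np.zeros((2 * h - 1, 2 * w - 1))
    out[::2, ::2] = frame
    out[1::2, ::2] = (frame[:-1] + frame[1:]) / 2        # midpoints between rows
    out[::2, 1::2] = (frame[:, :-1] + frame[:, 1:]) / 2  # midpoints between columns
    out[1::2, 1::2] = (frame[:-1, :-1] + frame[1:, 1:]
                       + frame[:-1, 1:] + frame[1:, :-1]) / 4
    return out
```

Running the velocity filter on such upsampled frames allows displacement vectors with half-pixel resolution, which is the accuracy gain the text describes.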
When performing super-resolution and/or motion detection it may be desirable to perform edge detection on the input (group of n) images if the target object is not small (that is, if the target object does not represent only a few pixels in the image frame). Edge detection is a process which operates on an object covering a number of pixels and produces an outline image of that object where the outline is narrow (that is, it occupies only a few pixels) and where the bright points represent boundaries.
Thus, it will be apparent that the present invention has advantages including improved detection of targets which are at extreme ranges and obscured by noise, and improved detection of moving targets and point targets.
Various modifications may be made to the above described embodiments within the scope of the present invention. For example, in other embodiments, other criteria may be applied to determine the target object than merely detecting the pixel with the highest intensity. These other criteria may include assigning a probability to each cluster based on the intensity of the points in the cluster after area growth techniques have been applied. In the embodiment of Fig 1 a value of 32 bits was used for each pixel; in other embodiments, more or fewer than 32 bits may be used to store each pixel in memory. In other embodiments the data corresponding to the location, velocity and intensity of the target object may be output to another signal processor for further processing.

Claims (8)

  1. A signal processing method of tracking a moving object, located in noisy image data produced by an imaging detector, the method comprising the steps of: receiving a first sequence of n number of image frames consisting of a number of point pixels, applying each of m sets of possible shift values, each set of shift values corresponding to a velocity, to at least a subset of the pixels in each of the n image frames received, so that for each original frame in the sequence, m number of additional image frames are generated, thus producing m sequences of n image frames, integrating each of the m sequences of n image frames to generate m combination frames, generating a results frame comprising at each pixel location a representation of the highest intensity pixel from the m combination frames and the corresponding shift value, analysing the results frame to determine one or more clusters of pixels, and repeating the above steps on at least a second subsequent sequence of n number of image frames and assessing clusters of pixels from successive results frames utilising an association algorithm and, in the event of association, declaring the sequence of clusters to form one or more possible tracks.
  2. A signal processing method according to claim 1 further comprising the step of storing the first sequence of n number of received image frames in a predetermined order such that, for each image frame, each storage location stores the intensity of a pixel and the position of the storage location represents the position of that pixel in the image frame.
  3. A signal processing method according to claim 1 or 2, wherein the analysing step further comprises the step of assigning a probability to each possible cluster based on the intensity and extent of the cluster.
  4. A signal processing method according to any preceding claim, wherein the step of analysing the results frame utilises a sorting algorithm to identify the pixel of highest intensity in the results frame, then to examine the neighbouring pixels to determine whether there is a cluster, and to determine the pixel of highest intensity among the remaining pixels whilst ignoring the pixels in any cluster previously identified.
  5. A signal processing method according to any preceding claim, wherein the analysing step includes the further steps of calculating an intensity value due to noise and comparing the calculated noise value with the cluster intensity, and if the cluster intensity is higher than the calculated noise value by a predetermined amount then accepting the cluster, otherwise discarding the cluster.
  6. A signal processing method according to claim 5, wherein the step of assigning a probability to each possible cluster includes the step of comparing the intensity of the cluster with the intensity of the calculated noise value and assigning a probability related to the difference between the two values.
  7. A signal processing method according to claim 6, wherein the step of assessing possible clusters from successive results frames includes the step of using the individual probabilities associated with each cluster in determining the most probable tracks.
  8. Apparatus for tracking a moving object, located in noisy image data produced by an imaging detector, the apparatus comprising: an image detector for generating digitised images at video compatible rates, a memory for storing a sequence of digitised images, a data processor for shifting at least a portion of adjacent image frames by a first predetermined shift value from a range of such predetermined shift values, integrating pixel intensities at pixel locations to produce a combination frame for the first predetermined shift value, adding a shift component to the combination frame, the shift component representing the first predetermined value, changing the first predetermined value and repeating the shifting, integrating, and adding steps for the received image frames until all of the predetermined values have been used, thereby generating a plurality of combination frames, generating a results frame comprising the highest intensity pixel at each pixel location from all of the combination frames and the corresponding shift value for each pixel in the results frame, storing the results frame in the memory, analysing the results frame to determine possible clusters, assigning a probability to each possible cluster based on the intensity of the cluster, repeating the above stages continuously and assessing possible clusters from successive results frames to determine possible tracks, and an output display for displaying track information to the user.
GB9820922A 1997-10-02 1998-09-28 Method and apparatus for target track identification Expired - Fee Related GB2330028B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GBGB9720852.4A GB9720852D0 (en) 1997-10-02 1997-10-02 Method and apparatus for target track identification

Publications (3)

Publication Number Publication Date
GB9820922D0 GB9820922D0 (en) 1998-11-18
GB2330028A true GB2330028A (en) 1999-04-07
GB2330028B GB2330028B (en) 1999-12-22

Family

ID=10819898

Family Applications (2)

Application Number Title Priority Date Filing Date
GBGB9720852.4A Pending GB9720852D0 (en) 1997-10-02 1997-10-02 Method and apparatus for target track identification
GB9820922A Expired - Fee Related GB2330028B (en) 1997-10-02 1998-09-28 Method and apparatus for target track identification

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GBGB9720852.4A Pending GB9720852D0 (en) 1997-10-02 1997-10-02 Method and apparatus for target track identification

Country Status (2)

Country Link
FR (1) FR2771835B1 (en)
GB (2) GB9720852D0 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2932278A1 (en) * 2008-06-06 2009-12-11 Thales Sa METHOD FOR DETECTING AN OBJECT IN A SCENE COMPRISING ARTIFACTS
AT507764B1 (en) * 2008-12-12 2013-06-15 Arc Austrian Res Centers Gmbh METHOD FOR DETECTING OBJECTS

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113808162B (en) * 2021-08-26 2024-01-23 中国人民解放军军事科学院军事医学研究院 Target tracking method, device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0537048A1 (en) * 1991-10-08 1993-04-14 Thomson-Csf Infrared detector with high sensitivity and infrared camera using the same

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5210798A (en) * 1990-07-19 1993-05-11 Litton Systems, Inc. Vector neural network for low signal-to-noise ratio detection of a target
US5311305A (en) * 1992-06-30 1994-05-10 At&T Bell Laboratories Technique for edge/corner detection/tracking in image frames
FR2741499B1 (en) * 1995-11-20 1997-12-12 Commissariat Energie Atomique METHOD FOR STRUCTURING A SCENE IN THE MEANING OF APPARENT MOVEMENT AND DEPTH


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2932278A1 (en) * 2008-06-06 2009-12-11 Thales Sa METHOD FOR DETECTING AN OBJECT IN A SCENE COMPRISING ARTIFACTS
WO2010003742A1 (en) 2008-06-06 2010-01-14 Thales Method of detecting an object in a scene comprising artefacts
US8558891B2 (en) 2008-06-06 2013-10-15 Thales Method of detecting an object in a scene comprising artifacts
AT507764B1 (en) * 2008-12-12 2013-06-15 Arc Austrian Res Centers Gmbh METHOD FOR DETECTING OBJECTS

Also Published As

Publication number Publication date
GB9720852D0 (en) 1998-02-11
FR2771835A1 (en) 1999-06-04
GB9820922D0 (en) 1998-11-18
GB2330028B (en) 1999-12-22
FR2771835B1 (en) 2001-02-16

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20130928