GB2520243A - Image processor

Image processor

Info

Publication number
GB2520243A
Authority
GB
United Kingdom
Prior art keywords
features
track
image
frame
image processor
Prior art date
Legal status
Granted
Application number
GB1319610.0A
Other versions
GB2520243B (en)
GB201319610D0 (en)
Inventor
Charles Offer
Glen Davidson
Graham Knight
Current Assignee
Thales Holdings UK PLC
Original Assignee
Thales Holdings UK PLC
Priority date
Filing date
Publication date
Application filed by Thales Holdings UK PLC filed Critical Thales Holdings UK PLC
Priority to GB1319610.0A
Publication of GB201319610D0
Publication of GB2520243A
Application granted
Publication of GB2520243B
Status: Active
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/04Anti-collision systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/147Scene change detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/04Anti-collision systems
    • G08G5/045Navigation or guidance aids, e.g. determination of anti-collision manoeuvers

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Image Analysis (AREA)

Abstract

An image processor for processing a temporal sequence of images obtained from one or more image sensors onboard a platform, e.g. a UAV, in order to detect the threat of a collision between the platform and an inbound object. For each image, the image processor identifies features posited to belong to an object and associates the features with ones identified in the preceding image to form pairs of associated features. The pairs of associated features are in turn used for generating and updating tracks that reflect the posited movement of object parts throughout the image sequence. The tracks are evaluated over time to determine whether an impact between the object and the platform is imminent.

Description

Image processor
Field
Embodiments described herein relate to an image processor and a method for processing a temporal sequence of images.
Background
In recent years, the concept of developing vehicles and aircraft capable of operation with minimal human interaction has come to the fore. Obvious applications include so-called 'Dirty, Dangerous or Dull' tasks carried out by unmanned aerial vehicles.
Besides typical observation and reconnaissance tasks, such technology could present significant opportunities in civilian life. For example, removing the need for a pilot could see dramatic falls in the costs of operating commercial aircraft. Similarly, motorists would benefit from having vehicles capable of driving themselves.
At present, one of the barriers to implementing such automated systems is the ability to identify threats of collision from other objects in the surrounding environment. For an unmanned aerial vehicle, such threats could arise from e.g. other UAVs, or manned aircraft whose pilots have themselves failed to realise they are on a collision course.
Any unmanned vehicle must, therefore, have a means for determining whether such a collision is imminent, in order that it can take appropriate evasive action.
One possibility for detecting such threats is to use radar or lidar systems. These systems can determine the distance between the vehicle and any object that poses a threat of collision, as well as the velocity at which that object is approaching. The time-to-collision can then easily be calculated as the ratio of the distance and velocity.
However such an approach requires an active system with spectrum utilisation and power issues.
Passive optical systems may be preferred due to the wide availability and low cost of COTS cameras. There are numerous methods for optical collision warning. Typically, these methods comprise the steps of:
a) identifying an object against a background;
b) determining if the object's relative angular motion is low;
c) testing the motion and size of the object against some threshold.
Typically, the step of identifying an object against the background is performed by use of 'template matching', whereby an object (e.g. an aircraft) is segmented out of the image (using a somewhat arbitrary intensity threshold and template size), and its size is then determined by the pixel area of the 'aircraft shape'.
The primary problem with such systems is that they do not discriminate between a large object at long range and a small object at short range. In terms of collision avoidance, it is the time to go that is important, i.e. how much time is available to manoeuvre out of collision risk. A secondary problem is that they are subject to a high false alarm rate.
It follows that there is a need to develop a reliable collision warning system with an estimate of time to collision for objects. There is also a need to ensure that such a system has a low false alarm rate in the presence of background clutter.
Summary
According to a first embodiment, there is provided a method for processing a temporal sequence of images obtained from one or more image sensors onboard a platform in order to detect the threat of a collision between the platform and an inbound object, the method comprising: receiving a first image frame in the sequence of images; identifying a plurality of features in the first image frame, the features corresponding to parts of one or more objects present in the field of view of the sensor that captured the first image frame; receiving a second image frame in the sequence; identifying a plurality of features in the second image frame, the features corresponding to parts of one or more objects present in the field of view of the sensor that captured the second image frame; associating respective ones of the features identified in the second image frame with features identified in the first image frame to thereby form pairs of associated features, using each pair of associated features to posit the movement of a respective object part between the first image frame and the second image frame; generating one or more tracks based on the posited movement of the object parts, the tracks being characterised by parameters that reflect the posited movement of object parts between the first image frame and the second image frame; storing the parameters for each track in memory; and for each of one or more subsequent image frames in the sequence: identifying a plurality of features in the current image frame, the features corresponding to parts of one or more objects present in the field of view of the sensor that captured the current image frame; associating respective ones of the features identified in the current image frame with the features identified in the previous image frame to form pairs of associated features; for each pair of associated features, determining whether to assign the pair of associated features to an existing track or to a new track; where pairs of features are assigned to an existing track, updating the track parameters stored in memory for that track, and where pairs of features have been assigned to a new track, storing track parameters for that new track in memory; and evaluating the tracks to determine whether a collision between an object and the platform is imminent.
The features identified in a given image frame may be associated with ones identified in the preceding frame by comparing the appearance of features in the two image frames. Each identified feature may comprise a group of pixels having a spatial distribution of values for one or more of intensity, hue and saturation. The features identified in a given frame may be associated with ones identified in the preceding frame by comparing the spatial distribution of values in the respective groups of pixels.
The features identified in the images may be ones for which the spatial distribution of values indicate the corner of an object.
In some embodiments, in each one of the subsequent image frames, each pair of associated features is assigned to a different track. The decision to assign each one of the associated pairs of features to a different track may be taken when the density of features identified in the image frames is below a threshold.
In some embodiments, the step of evaluating the tracks comprises assessing the spatial separation between selected features in the current frame, wherein each selected feature belongs to a respective pair of associated features that has been assigned to a different respective track. The step of evaluating the tracks may further comprise determining the reciprocal of the spatial separation between features associated with respective tracks and determining if the reciprocal of that separation follows a linear trend downwards when compared with previous image frames in the sequence. In some embodiments, a straight line function may be used to model the trend. The method may comprise extrapolating from the straight line function to estimate a point in time at which the reciprocal will reach zero, said point in time being used to estimate a time to collision. The method may further comprise determining the goodness-of-fit of the straight line function, and assigning a likelihood of collision based on the goodness-of-fit.
In some embodiments, for each subsequent image frame, more than one of the pairs of associated features is assigned to the same track. The decision to assign more than one of the associated pairs of features to the same track may be taken when the density of features identified in the image frames exceeds a threshold.
In some embodiments, for a given track, the track parameters may define the movement of a group of object parts that are posited to belong to the same object. The track parameters may include a measure of the current position of the centroid of the group of object parts. The track parameters may include a measure of the current velocity of the centroid of the group of object parts. The track parameters may include a measure of the current spatial extent of the object. The track parameters may include a measure of the current rate of change of the spatial extent of the object. The spatial extent of the object may be determined as being proportional to the standard deviation in the distance between each feature in the current image that is considered as being associated with the object and the centroid of those features.
In some embodiments, the step of evaluating the tracks comprises comparing the spatial extent of the object as determined for the current frame with the spatial extent of the object as determined for at least one previous frame in the sequence. Evaluating a track may further comprise comparing the reciprocal of the spatial extent of the object as determined for the current frame with the reciprocal of the spatial extent of the object as determined for at least one previous frame in the sequence. The method may comprise using a straight line function to model the change in the reciprocal of the spatial extent of the object between the frames and extrapolating from that function to estimate a point in time at which the reciprocal will reach zero, said point in time being used to estimate a time to collision. The method may further comprise determining the goodness-of-fit of the straight line function, and assigning a likelihood of collision based on the goodness-of-fit.
In some embodiments, for each one of the image frames from the second image frame onwards, a prediction is made of how the track parameters in the respective tracks will evolve during the interval between receiving that frame and the next frame in the sequence. The step of determining whether to assign a pair of associated features to an existing track may comprise determining if the posited movement of the object part associated with that pair of features is consistent with the predicted evolution of the track parameters.
In some embodiments, each track parameter is modelled as having a respective probability distribution. For each parameter, the value of the parameter that is stored in memory may reflect the current mean of the respective distribution. The probability distributions may be approximated as being Gaussian.
In some embodiments, the mean and covariance of each track parameter is stored for each image frame, the mean and covariance being updated each time the track parameter is itself updated.
In some embodiments, predicting how the track parameters will evolve in the interval between image frames comprises defining a new probability distribution for the values of the respective parameters in the next frame.
In some embodiments, the track parameters include the number of pairs of associated features presently assigned to the track. For each received image, the number and position of features identified in that image may be assumed to be statistically independent of each other and of the number and position of features detected in previous images.
In some embodiments, for each received image, the number of features that will be identified in the image is modelled as being drawn from a Poisson distribution.
In some embodiments, when identifying pairs of associated features as candidates for assigning to an existing track, features that belong to the track in the current frame are modelled as having positions drawn from an isotropic Gaussian distribution.
In some embodiments, when predicting the evolution of track parameters for a given track, the object to which the respective group of object parts is posited to belong is modelled as undergoing a rigid translation between successive frames. When predicting the evolution of track parameters for a given track, the object to which the respective group of object parts is posited to belong may be modelled as undergoing a combination of a rigid translation and a uniform expansion between successive frames.
In some embodiments, the motion of the object to which the respective group of object parts is posited to belong is modelled in 3D using a statistical model, in which the object's acceleration is assumed to be a white noise process.
In some embodiments, the prediction of how the track parameters in a respective track will evolve is made by taking into account the systematic dependence of the velocity of features assigned to that track and the rate of change of the spatial extent on each other and on the position of the object in the image due to the projection of the object from 3D onto the 2D image. The acceleration white noise process may be assumed to have a power spectral density proportional to the square of the object's 3D velocity.
In some embodiments, for each pair of successively captured images, measurements of optical flow are used to identify background features in the image, the background features being excluded from analysis when identifying pairs of associated features.
For each image frame, the number of features initially identified in the image may be determined and the measurements of optical flow may be used if the number is above a threshold.
In some embodiments, in the event that no new pairs of associated features have been assigned to a track after a predetermined number of frames, the track is deleted from memory.
In some embodiments, for each subsequent frame, in the event that more than one pair of associated features is not assigned to an existing track, a decision is made as to whether to assign those pairs to the same new track. The decision may be based at least in part on whether the features in the current frame that belong to those pairs lie within a certain proximity of one another. The decision may be based at least in part on whether the velocities of the features in the current frame that belong to those pairs lie within a certain range of one another.
In some embodiments, an alert is issued when a collision with an object is deemed to be imminent.
The one or more image sensors may be video cameras.
According to a second embodiment, there is provided an image processor for receiving and processing a temporal sequence of image frames captured by one or more image sensors onboard a platform, the image processor comprising: a feature identification module configured to identify a plurality of features in the first image frame in the sequence, wherein the features in the first image frame correspond to parts of one or more objects present in the field of view of the sensor that captured the first image frame; the feature identification module being further configured to identify a plurality of features in the second image frame in the sequence, wherein the features in the second image frame correspond to parts of one or more objects present in the field of view of the sensor that captured the second image frame; a feature association module configured to associate respective ones of the features identified in the second image frame with features identified in the first image frame to thereby form pairs of associated features, each pair of associated features being used to posit the movement of a respective object part between the first image frame and second image frame; a tracking module configured to generate one or more tracks based on the posited movement of the object parts, the tracks being characterised by parameters that reflect the posited movement of the object parts between the first image frame and the second image frame; and a memory module for storing the track parameters in memory; wherein for each of one or more subsequent image frames in the sequence: the feature identification module is configured to identify a plurality of features in the current image frame, the features corresponding to parts of one or more objects present in the field of view of the sensor that captured the current image frame; the feature associating module is configured to associate respective ones of the features identified in the current image frame with the features identified in the previous image frame to form pairs of associated features; and the tracking module is configured to determine whether to assign the pair of associated features to an existing track or to a new track; wherein, in the event that a pair of associated features is assigned to an existing track, the tracking module is configured to update the track parameters stored in the memory module for that track and in the event that a pair of features is assigned to a new track, the tracking module is configured to store track parameters for that new track in the memory module; the processor further comprising a track evaluating module for evaluating the state of the tracks to determine whether a collision between an object and the platform is imminent.
According to a third embodiment, there is provided an imaging system comprising an image sensor and an image processor according to the second embodiment.
According to a fourth embodiment, there is provided an aircraft comprising an imaging system according to the third embodiment.
According to a fifth embodiment, there is provided a non-transitory computer readable storage medium comprising computer executable instructions that, when executed by a computer, will cause the computer to carry out a method according to the first embodiment.
Brief description of Figures
Embodiments will now be described by way of reference to the accompanying figures in which: Figure 1 shows a graph that relates the time to impact of an object approaching a platform with the angle subtended by the object in the field of view of a sensor onboard the platform; Figure 2 shows a flow-chart for selecting a method to use in determining whether object(s) in the field of view pose a collision risk; Figure 3 shows a flow-chart of steps for detecting the threat of a collision between a platform and an inbound object according to an embodiment; Figure 4 shows an example of how a collision threat is identified using the method of Figure 3; Figure 5 shows images of an aircraft captured at different times from one another, wherein features corresponding to the same parts of the aircraft are identified in each image; Figure 6 shows extracts from a sequence of images captured at an airport, in which the motion of aircraft in the field of view is tracked using the method shown in Figure 3; Figure 7 shows a plot of how the normalised distance between pairs of features that are identified as corresponding to parts of the aircraft in Figure 6 varies over the course of the image sequence; Figure 8 shows a flow-chart of steps for detecting the threat of a collision between a platform and an inbound object according to an embodiment; Figure 9 shows an example of flow vectors established between features identified in successive images when implementing the method of Figure 8; Figure 10 shows a sequence of images for explaining the process of initialising a track when implementing the method of Figure 8; Figure 11 shows a further sequence of images for explaining the process of initialising a track when implementing the method of Figure 8; Figure 12 shows results obtained from in-flight recordings, in which the reciprocal of the spatial extent of an object in the field of view is plotted over time by using the method of Figure 8; Figure 13 shows an example in which the time before collision with the object being monitored in Figure 12 is estimated by determining the intercept of the graph with the time axis; Figure 14 shows an example of how the uncertainty in estimating the time before collision reduces as the number of frames in which the object is tracked increases; and Figure 15 shows a schematic of an image processor in accordance with an embodiment.
Detailed description
If an object travelling on a straight-line trajectory appears to be at a constant angle, and yet the angle it subtends increases (termed 'blooming'), there is a risk of collision (see Figure 1). The specific problem is to identify such targets as they 'bloom' against a potentially complex background. The rate of blooming is a function of object size, speed and trajectory.
Embodiments described herein do not rely on particular object shapes (e.g. aircraft).
Instead, embodiments are based on the hypothesis that any complex object will consist of several localised and distinct 'features' such as corners, edges and points. Thus, embodiments avoid the need for 'template libraries' to identify complex shapes and have inherent scale invariance.
It is accepted that the number and position of these features might change over time (e.g. due to illumination), but that the persistence of features over a short duration will be sufficient to identify the increasing angular separation of adjacent features required to indicate 'blooming'. Embodiments described herein use the separation rate of many features belonging to the same complex object, rather than trying to 'template match' to a) identify entire objects as a whole or b) measure single object sizes over time.
Embodiments may include the following features: 1. The ability to predict the time-to-collision by explicitly fitting the measured object expansion curve over time.
2. The use of a simple method of reducing measurement noise in an estimate of time to collision using a sliding time window. This also yields a method of testing the validity of this time-to-collision, thus false alarms are decreased.
3. To track object expansion, performance is improved via robust algorithms that take advantage of 'feature descriptors' present in common image processing algorithms. A reliable frame-to-frame assignment method is used.
4. Where identification of consistent features over time is unreliable due to a cluttered/changing background, or changes in illumination, the separation rate of individual feature pairs is no longer reliable. Embodiments use the expansion rate of the multiple features (i.e. the 'group') to estimate time to go.
5. The system uses complementary techniques for low and high clutter backgrounds. In areas of high ground clutter, it is postulated and demonstrated that the image can be stabilised between frames due to these fixed clutter points; this allows the number of features that must be considered to be greatly reduced. In areas of low ground clutter, there are few features, and image stabilisation is not required to test the expansion rate. A system with these complementary approaches may be more robust across the field of view, particularly both above and below the horizon.
Embodiments provide warning of potential colliding objects within video imagery, providing an accurate time-to-collision for each object with a low rate of false alarm.
For airborne applications, embodiments may include algorithms to handle detection of collision threats both below and above the horizon.
Embodiments use passive optical systems to identify collision threats and determine the time-to-collision for each object with a low rate of false alarm. The type of image processing algorithm used will depend on the extent of clutter in the field of view. Here, the term "clutter" refers to extraneous objects in the field of view, that are detected by the feature extractor, but which do not themselves pose a collision threat, such as stationary objects on the ground, for example.
Using one or more high-resolution cameras, image frames are captured at a relatively low frame rate, for example 3 Hz. In each frame, discrete features are identified (corners, points, angles, etc.). For a colliding object, the one-dimensional angular extent of the object's constituent image features will follow a characteristic expansion curve over time. The reciprocal of extent follows a straight line (subject to measurement noise), such that without any prior knowledge of object size, it is possible to form a simple estimate of time-to-collision accompanied by a 'goodness of fit' measure for that line fit. In turn, it becomes possible to threshold likely colliders whilst rejecting false alarms over a physically reasonable time window.
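As an illustration of this line-fitting step, the sketch below (with assumed function and variable names, not taken from the patent) fits a straight line to the reciprocal of a measured angular extent over a window of frames, extrapolates to the zero crossing to estimate time-to-collision, and returns a residual-based goodness-of-fit measure that could be thresholded to reject false alarms.

```python
import numpy as np

def estimate_time_to_collision(times, extents):
    """Fit a straight line to 1/extent over a window of frames and
    extrapolate to its zero crossing to estimate time-to-collision.

    times   : 1-D array of frame timestamps (seconds)
    extents : 1-D array of measured angular extents (radians), same length
    Returns (ttc_seconds, goodness_of_fit); ttc_seconds is None if the
    object is not closing.
    """
    t = np.asarray(times, dtype=float)
    recip = 1.0 / np.asarray(extents, dtype=float)

    # Least-squares straight line: recip ~ slope * t + intercept
    slope, intercept = np.polyfit(t, recip, 1)

    # Goodness of fit: coefficient of determination of the line fit
    fitted = slope * t + intercept
    ss_res = np.sum((recip - fitted) ** 2)
    ss_tot = np.sum((recip - recip.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0

    if slope >= 0:
        # Reciprocal extent is not decreasing, so the object is not closing
        return None, r_squared

    # Time at which the reciprocal extent reaches zero (predicted impact)
    t_zero = -intercept / slope
    return t_zero - t[-1], r_squared
```

A warning would then only be raised when the estimated time-to-collision is short and the fit quality is high.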
Embodiments described herein rely on feature correlation between frames to determine motion of objects in the field of view of the image sensor(s). Feature correlation is achieved by the use of positional proximity (e.g. via regular Kalman filtering) or proximity in a 'feature description space'. Both measures are used to determine a scalar cost of assignment between candidate feature matches across frames, and a cost-minimisation algorithm can then be used to identify the same feature in successive frames.
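A minimal sketch of such a frame-to-frame assignment is given below, assuming feature positions and descriptor vectors are available for both frames; the cost weights and the gating distance are illustrative assumptions, and SciPy's Hungarian-algorithm solver stands in for the cost-minimisation algorithm mentioned above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_features(prev_pos, prev_desc, curr_pos, curr_desc,
                   w_pos=1.0, w_desc=0.05, gate_px=40.0):
    """Assign features in the current frame to features in the previous
    frame by minimising a combined positional / descriptor cost.

    prev_pos, curr_pos   : (N, 2) and (M, 2) arrays of pixel coordinates
    prev_desc, curr_desc : (N, D) and (M, D) descriptor arrays
    Returns a list of (prev_index, curr_index) pairs.
    """
    # Positional proximity term (Euclidean pixel distance)
    d_pos = np.linalg.norm(prev_pos[:, None, :] - curr_pos[None, :, :], axis=2)

    # Proximity in 'feature description space' (Euclidean descriptor distance)
    d_desc = np.linalg.norm(
        prev_desc[:, None, :].astype(float) - curr_desc[None, :, :].astype(float),
        axis=2)

    cost = w_pos * d_pos + w_desc * d_desc
    cost[d_pos > gate_px] = 1e6   # gate out implausibly large displacements

    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e6]
```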
Using uncompressed frames from a high-resolution digital camera as input, algorithms are coded on a processor (CPU or graphics card). A combination of Open Source image processing libraries and bespoke algorithms can be used to implement the processing chain described here.
Depending on the image background, the colliders are identified using one of two complementary methods: 'pairing' and 'grouping'. The 'pairing' method is suited, for example, to regions in which clutter is low. In such cases, against an uncluttered background, there are relatively few test features within a candidate area, and it is expected that object features will be quite distinct over time. Thus it is computationally possible to test all potential feature-pairs within each frame. The changing extent of the object is determined by 'tracking' the separation distance between feature-pairs present in successive frames. Thus the expansion rate of the object can be determined from the expansion of any constituent pairs of features upon that object.
Where the density of clutter in the background is high, it is expected that the feature extraction routines may be confused such that the object features will be inconsistent over time. The second 'grouping' method described herein provides a means for determining collision threats against such a high background of clutter. Features are collected together into groups with a consistent motion, and with a certain estimated extent within the image. The expansion rate of the object is found from the change in the group extent over time.
In general, image processing from moving cameras benefits from 'image stabilisation', where the bulk changes between frames caused by camera rotation and translation are accounted for prior to more detailed analysis. The choice of algorithm will depend on the ability to perform accurate image stabilisation.
Figure 2 shows a flowchart for determining which of the two methods is to be applied in a given case. In this instance, the example of an unmanned aerial vehicle (aircraft) is considered. However, it will be appreciated that the method is equally applicable in e.g. land vehicles and sea-going vessels.
Beginning at step S21, an image is captured by the image sensor onboard the aircraft and sent to the image processor. Next, image processing techniques are used to identify features (e.g. corners) within the image (Step S22). Discrete features can be determined by extracting 'corner-like' or 'point-like' pixel groups in an image; these algorithms are routinely available from Image Processing libraries such as OpenCV. In step S23, the image processor assesses the amount of clutter in the image. If the clutter density is found to be below a threshold, the method proceeds to 1 (for which see Figure 3), whereas if the clutter density is above the threshold, the method proceeds to 2 (for which see Figure 8).
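By way of illustration only, corner-like features of the kind referred to in step S22 could be extracted with OpenCV as sketched below; the detector parameters are assumptions rather than values taken from the patent.

```python
import cv2

def extract_corners(gray_frame, max_corners=40):
    """Detect the strongest corner-like features in a greyscale frame
    and refine their locations to sub-pixel accuracy."""
    corners = cv2.goodFeaturesToTrack(gray_frame,
                                      maxCorners=max_corners,
                                      qualityLevel=0.01,
                                      minDistance=5)
    if corners is None:
        return []
    # Sub-pixel refinement (cf. the sub-pixel localisation in step S31)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    corners = cv2.cornerSubPix(gray_frame, corners, (5, 5), (-1, -1), criteria)
    return corners.reshape(-1, 2)
```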
Figure 3 shows a flow chart of steps carried out in the first image processing method (i.e. that which is implemented when the clutter in the image is found to be below the threshold). Particularly in the case of an object above the horizon, if the clutter is low, stabilisation will be unreliable as there will be too few background features to allow the bulk motion to be estimated; however, in this case there is also less need to exclude background features from the processing as there are fewer features in total. Thus in this case a measure of potential target extent is made by measuring the pairwise pixel distance between all possible features and storing these over frames. Candidate colliding objects are declared by searching for sequences of reciprocal extent over a fixed time window (for example, 10 seconds), and testing their adherence to a straight line fit.
Referring now to Figure 3, after the clutter is determined to be beneath the threshold, the method continues at Step S31 by performing subpixel localisation of features (in this example, corners) in the image. Having extracted those features, the method proceeds to step S32, in which the image processor determines whether any of the features identified in the image can be associated with an existing track. The association comprises two tests: a) an Extended Kalman Filter (EKF) is used to limit the association tests to features that are close in proximity to the underlying track; and b) feature descriptors are then used to associate features with tracks most similar in appearance (such feature descriptors are available from Image Processing libraries such as OpenCV).
In this embodiment, a track defines the posited motion of a particular object (or part of an object) across the field(s) of view of the one or more image sensors. As discussed below, tracks are generated and updated from information contained in previously captured frames; essentially, the image processor determines whether the position of a feature in the present image correlates with that of a feature having a similar appearance in the preceding frame(s) of the image sequence.
Where feature(s) are determined to be associated with an existing track, the method proceeds to step S33. At this stage, the track parameters, which include the position of the feature and its velocity in the image (i.e. the rate of change in the spatial coordinates of the feature), are updated and the new track information is saved to memory.
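A sketch of one way the per-feature track state (image position and velocity) referred to in steps S32-S33 and S36 might be maintained is given below, using a plain constant-velocity Kalman filter; the noise settings are placeholders, and the patent's own filter (an EKF combined with feature descriptors) is more elaborate.

```python
import numpy as np

class FeatureTrack:
    """Constant-velocity Kalman filter over a feature's image position.
    State: [x, y, vx, vy]. Noise magnitudes are illustrative only."""

    def __init__(self, x, y, dt, q=1.0, r=1.0):
        self.x = np.array([x, y, 0.0, 0.0])
        self.P = np.diag([r, r, 100.0, 100.0])
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)
        self.frames_since_update = 0

    def predict(self):
        # Generate the expectation for the feature position in the next frame
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        self.frames_since_update += 1
        return self.x[:2], self.P[:2, :2]

    def update(self, z):
        # Update the track parameters with an associated feature position
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        self.frames_since_update = 0
```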
Features that are not found to be associated with an existing track may be used to initialise new tracks (step S34). It is possible that such features correspond to an object that has only now become visible in the platform's surroundings.
Having decided which of the features can be associated with an existing track that is stored in memory, and which features should instead be used as the basis for initialising new tracks, the image processor performs a review of existing tracks and deletes those that have not been recently updated (step S35). For example, the image processor may be programmed to only retain those tracks that have been updated within the past n frames (i.e. tracks to which a pair of associated features has been assigned within those last n frames).
After the track parameters have been fully updated for each track, the updated information is used to generate a range of expectation values for the location of feature(s) in the next image (step S36). By comparing the location of features identified in that next image with this range of expectation values, it will in turn become possible to determine whether those features should be associated with a particular track.
Next, the image processor calculates the spacing between pairs of features in the image, where within each pair the two features are associated with different respective tracks (step S37). Computationally, separation distance is measured in terms of sub-pixel distance between feature pairs within each frame, subject to improvement by comparing feature-pairs between subsequent frames to reduce noise.
In effect, the image processor seeks to map the increase (or decrease) in separation between two features that are deemed to belong to the same object. The separation between these features, as well as the rate of change of separation over time, contains information about the probability of collision with the object and the time until such a collision will occur if the aircraft maintains its present bearing. Where the image processor determines that the track spacing is consistent with a collision course, an alert is issued to the user (Step S39).
Over time the separation distance will have a characteristic form for incoming objects whose features are visible; visually this will appear as a set of diverging lines that appear 'focused' upon the object.
Examples of how the method shown in the flow chart of Figure 3 may be implemented in practice will now be described by reference to Figures 4 to 7. Note this implementation is designed to minimise processing load, whereby N individual corner features are tracked over time using N Kalman filters in Steps S32-S37, and candidate track pairs are then used to test sub-pixel distances in S38. However, in principle, approximately N²/2 filters operating directly upon the sub-pixel pair distances could be created for all candidate paired corner features over time if the processing capacity were available.
Figure 4 shows an example of how a collision threat (in this case an aeroplane) is identified using the method of Figure 3. Figures 4A, 4B and 4C show, respectively, successive images captured by the image sensor on board the aircraft. As can be seen, an aeroplane labelled X is present throughout the image sequence and is increasing in size over time at the same relative position, suggesting that a collision with that aeroplane is imminent. Also shown in the image sequence are various features present on the ground.
Figures 4D, 4E and 4F show, respectively, features (corners) that are extracted from the images shown in Figures 4A, 4B and 4C, using image processing techniques previously described herein. For the purpose of explanation, the features identified in each image are shown by different symbols (cross, circle and triangle). Figure 4G shows the overlay of the three sets of identified features.
As shown in Figure 4D, the image processor is able to identify (among others) features labelled 41a, 41b, 41c and 41d. These features are, to the human eye, associated with the aeroplane X, although of course the image processor does not know this. As shown in Figure 4E, in the next captured image, the image processor is able to identify features 42a, 42b, 42c and 42d. These features are deemed to be associated with respective features 41a, 41b, 41c and 41d by virtue of their appearance; for example, the spatial distribution of intensity in the pixels that comprise the features 42a, 42b, 42c and 42d is similar to that in the features 41a, 41b, 41c and 41d. Consequently, it is possible to initialise tracks that connect these respective feature matches.
In the third image 4F, features 43a, 43b, 43c and 43d are identified, which are now associated with the previously generated tracks. Figure 4H shows the series of trajectories 44a, 44b, 44c and 44d that are present following processing of the third image 4C. Each of these tracks corresponds to a respective part of the aeroplane X. Also shown in Figure 4H are other tracks that have been generated based on the identification of features relating to objects on the ground. It is possible to identify two 'focal points' of the tracks; F0 is the horizon point caused by stationary objects on the ground, while F1 corresponds to the aeroplane X. Thus, the 'blooming' of these objects in the image as they approach the aircraft manifests itself as a set of lines 'focused' around the centre of the incoming object, determined by tracking the angular position of individual features over time. If the object has relative angular motion, these lines will form around the angular trajectory of the object centre. These 'focus lines' will tend to be quite distinct against a background of many stationary objects whose relative angular motion is a consequence of parallax alone.
By measuring the separation between respective pairs of features assigned to the tracks 44a, 44b, 44c and 44d, it is possible to ascertain that these tracks are associated with the same single object in the field of view and that the object is nearing the aircraft. This test may be performed between every one of the (N²-N)/2 possible pairs of tracks, using the sub-pixel distances between the constituent features within each frame over time.
An example of discrete aircraft features measured in a 'low clutter' environment against sky as part of the present 'pairing' tracking system is shown in Figure 5, which shows two images of the same aircraft getting closer. Here, 'blooming' is assessed by the increased distance between the feature pair 51, 53 formed by the wing tips of the aircraft. At subsequent times, 'blooming' may also be assessed between any pair of features between the wing tips and engines.
Figure 6 shows results of a test implementation of the first embodiment, used to monitor the motion of aircraft in the vicinity of a runway at London's Gatwick airport. In this test, a sequence of images was taken of the runway, at a spatial resolution of 512x512 pixels. In each image, the 40 most 'corner-like' features were identified (a limit having been imposed to reduce processing time), with the identified corners relating to 'complex' objects such as the planes and runway lights. In this image sequence, a first aircraft 61 is seen approaching the runway on its descent, whilst a second aircraft 63 taxis back and forth in the vicinity of the runway.
Figure 6A shows an initial frame captured in the sequence, in which the first aeroplane 61 (an Airbus A320) is at higher altitude. Both aircraft are identifiable in the images as respective groups of individual corner-like objects, each object being shown in Figure 6A by a respective square. At the time this image was captured, the aircraft in the sky was at a range of 5923 m.
Figure 6B shows the 80th image in the same image sequence, which was captured 35 seconds after the image captured in Figure 6A. Also shown in Figure 6B are the tracks 65, 67 of the groups of features identified as being associated with each aircraft. In this image, the current range of the airborne plane is 3500 m.
Figure 7 shows a plot of the normalised distance between i) pairs of features located on the airborne craft 61 in Figure 6, ii) pairs of features on the aircraft 63 taxiing on the ground, and iii) the background. Note the increase in size of the A320 to 1.6 times its original size over the 35-second sequence based on the feature expansion. Features on the taxiing aircraft were tracked while on the runway (t = 0-15 seconds). The taxiing aircraft was occluded from view at around t = 22 s by the bushes in the foreground of the image; when the taxiing aircraft appeared from behind the occluding bushes new tracks 69 were initialised for that aircraft and these were maintained until the aircraft disappeared from the field of view. Note also the small variation in the background.
The examples discussed above each relate to the case where there is little clutter in the field of view. A method will now be described for scenarios in which the clutter is above the threshold identified in step S23 of Figure 2. In this embodiment, the image processor again seeks to establish tracks describing the movement of an object that is present in the aircraft's surroundings. However, in contrast to the previous embodiment, the tracks are established by considering the movement of groups of features within the image, rather than forming tracks based on individual features.
Figure 8 shows a sequence of steps as used in the second embodiment. The method continues from step S23 of Figure 2, where the density of features in the image is determined to be above the threshold. The image processor begins by performing a process of image stabilisation whereby the image is filtered to remove (stationary) features on the ground.
In step S81, the image processor stores the set of features (corners) that were extracted from the present image frame in step S22 of Figure 2. Next, the image processor receives the set of corners that were saved in the previous frame, and performs an association between these two sets of features (step S82). In step S83, the image processor uses the association between the features in the respective frames to determine optical flow vectors for features on the ground. The optical flow vectors can be used to establish a background to the image i.e. a collection of features that are identified in each image, but which can be excluded from further analysis.
Steps S82 and S83 are routinely available from Image Processing libraries such as OpenCV within 'Optical Flow' algorithms.
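For illustration, steps S82-S83 could be realised with OpenCV's pyramidal Lucas-Kanade optical flow, as sketched below; the window size and pyramid depth are assumptions, not values disclosed in the patent.

```python
import cv2
import numpy as np

def track_corners_between_frames(prev_gray, curr_gray, prev_corners):
    """Track the previous frame's corners into the current frame with
    pyramidal Lucas-Kanade optical flow, returning the matched pairs
    (used here to characterise the bulk background motion)."""
    prev_pts = np.asarray(prev_corners, dtype=np.float32).reshape(-1, 1, 2)
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return prev_pts.reshape(-1, 2)[ok], curr_pts.reshape(-1, 2)[ok]
```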
To avoid potential data incest and improve performance, a new set of features is now determined for the image in question, by re-running the image feature extractor (step S84). Thus features used subsequently are not necessarily the same ones used for the 'background' optical flow, and may be more appropriate to detecting the 'foreground' aircraft. The newly extracted features are stored in memory until the next frame (step S85). At the same time, the image processor recovers the set of features that were saved at the same stage in the previous frame and carries out an association between those features and those of the present frame (Step S86). This association is performed using feature descriptors, between candidate feature matches that are determined to be spatially close after image stabilisation.
The result of performing the associations in steps S82 and S86 is to identify pairs of features in the respective frames that bear resemblance to one another, and which may therefore correspond to the same object (or part of an object) in the field of view.
The optical flow vectors for stationary features that are present on the ground will have a characteristic form that can be used to distinguish them from features in the field of view moving at different velocities (and which may pose a collision threat to the aircraft). Thus, by considering the optical flow vectors, it is possible to eliminate from further processing features that do not pose a collision threat.
The process of identifying a collision threat against the background of stationary features by use of optical flow vectors can be explained by reference to Figure 9.
Figure 9A shows an aircraft 9 that is airborne above an urban landscape. Figure 9B shows flow vectors established between features (corners) that have been identified in the present image and the previous image received by the image processor. Flow vectors 9a and 9b comprise part of a group of vectors that correspond to features of the aircraft 9, whilst flow vectors 9c, 9d and 9e correspond to elements (houses) on the ground. It is clear in Figure 9B that the flow vectors for features on the ground are all similar in their magnitude and direction, whilst the size and direction of the vectors 9a and 9b identifies these as possibly relating to an object(s) in the local airspace. The vectors 9f and 9g are generated through an erroneous association of features identified in the two images; as will be seen below, such erroneous associations will themselves be filtered out from further analysis as the processor continues to update the associations over successive image frames.
Returning to Figure 8, the image processor determines which of the identified associations correspond to features present on the ground and filters these from further processing (Step S87).
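One possible way to perform this filtering (an assumption on my part rather than the patent's stated implementation) is to fit a global background motion model, for example a homography estimated with RANSAC, to the matched feature pairs; inliers of that model move consistently with the stationary ground and can be discarded, leaving candidate moving-object features.

```python
import cv2
import numpy as np

def remove_ground_features(prev_pts, curr_pts, reproj_thresh=3.0):
    """Fit a background motion model (homography, RANSAC) to matched
    feature pairs; inliers behave like stationary ground clutter and are
    removed, leaving candidate moving-object features."""
    prev_pts = np.asarray(prev_pts, dtype=np.float32).reshape(-1, 1, 2)
    curr_pts = np.asarray(curr_pts, dtype=np.float32).reshape(-1, 1, 2)
    if len(prev_pts) < 4:
        # Too few matches to fit a homography; keep everything
        return prev_pts.reshape(-1, 2), curr_pts.reshape(-1, 2), None
    H, mask = cv2.findHomography(prev_pts, curr_pts, cv2.RANSAC, reproj_thresh)
    if mask is None:
        return prev_pts.reshape(-1, 2), curr_pts.reshape(-1, 2), None
    moving = mask.ravel() == 0   # outliers to the background motion model
    return (prev_pts.reshape(-1, 2)[moving],
            curr_pts.reshape(-1, 2)[moving], H)
```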
Next, in Step S88, the image processor determines whether any of the remaining flow vectors can be associated with an existing track stored in memory. In the present embodiment, a track defines the centralised position and velocity of a group of identified features in the image (as well as a measure of extent and its rate of change) where that group is posited to correspond to a single object in the aircraft surroundings.
As discussed in more detail below, the decision to associate an identified feature with an existing track is made by use of a Diffuse Target Filter (an Extended Kalman Filter). The Diffuse Target Filter analyses the position of the identified features and the velocity found from the difference in position of the feature associated with it in the previous stabilised frame. The position and/or the determined velocity of the features are compared with expectation values that are generated by considering the track parameters determined for the previous frame.
Where features are determined not to be associated with an existing track, a decision is made as to whether to initialise a new track based on these features (Step S810).
New tracks are initiated, for example, by searching for sets of unassociated features that are in close proximity in both position and velocity.
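A possible sketch of this initiation step is shown below; the position and velocity thresholds, and the requirement of at least two consistent features, are illustrative assumptions.

```python
import numpy as np

def initiate_tracks(positions, velocities, pos_thresh=30.0, vel_thresh=2.0):
    """Greedily group unassociated features that are close in both position
    (pixels) and velocity (pixels/frame) into candidate new tracks.
    Returns a list of index lists, one per candidate track."""
    n = len(positions)
    unused = set(range(n))
    groups = []
    while unused:
        seed = unused.pop()
        group = [seed]
        for j in list(unused):
            if (np.linalg.norm(positions[j] - positions[seed]) < pos_thresh and
                    np.linalg.norm(velocities[j] - velocities[seed]) < vel_thresh):
                group.append(j)
                unused.discard(j)
        if len(group) >= 2:        # require at least two consistent features
            groups.append(group)
    return groups
```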
Having decided which of the features can be associated with an existing track, and which features should instead be used as the basis for initialising new tracks, the image processor performs a review of existing track information and deletes those tracks that have not been recently updated (step S812). Where features in the present image frame have been determined to be associated with a particular track, the position and/or velocity of those features is used to update the track parameters in step S89 and to generate new expected values for the position/velocity of features identified in the next image frame (Step S813). By review of the present track parameters, the image processor is able to determine whether there are objects present in the image frame that pose a collision threat and to issue a warning accordingly (steps S814 and S815).
The process of initialising a track in the present embodiment (see step S811 of Figure 8) is explained by reference to Figure 10. For simplicity, the steps of image stabilisation outlined in steps S81-S87 of Figure 8 are omitted from this discussion.
Referring to Figure 10A, the process begins when the image processor receives a first image from an image sensor on board the aircraft. In the example shown in Figure 10A, the first image comprises features 101a, 101b, 101c and 101d, shown by a cross, a triangle, a circle and a square, respectively. Each feature is located at a different (x, y) coordinate position, with the (x, y) coordinates of each feature being saved to memory.
Figure 10B shows the next image 102 received by the image processor. As in the first image, image processing techniques are used to identify features within this second image 102, after which a comparison is made between the first and second images.
Specifically, the features identified in the second image are compared with those of the first image to find pairs of features that have a high degree of similarity in appearance and which may, therefore, correspond to the same object or part of an object in the aircraft surroundings.
In the present example, the image processor is able to identify features 102a, 102b, 102c and 102d that share a high degree of similarity with the features 101a, 101b, 101c and 101d of the first image 101, respectively. To illustrate this, the features 102a, 102b, 102c and 102d in Figure 10B are also represented by a cross, a triangle, a circle and a square, respectively. The shift in position of the respective features between the two images can be seen clearly in Figure 10C, which shows an overlay between the first 101 and second 102 images.
The image processor next uses the position of the features 102a, 102b, 102c and 102d detected in the second image 102 to define possible velocity vectors for the features 101a, 101b, 101c and 101d detected in the first image 101. For each feature, the velocity vector is calculated by considering the change in spatial coordinates of the feature between the first and second image, and the time interval between capturing the two images. Figure 10D shows respective velocity vectors 104a, 104b, 104c and 104d for each one of the features 101a, 101b, 101c and 101d identified in the first image. In each case, the respective velocity vector defines the trajectory of the feature based on its identified position in the second image. For clarity, Figure 10E shows the velocity vectors 104a, 104b, 104c and 104d in isolation.
The image processor next groups the features of the first image, based on their respective position and/or velocity vectors. The purpose of this grouping is to identify features that potentially correspond to parts of the same body or object in the aircraft surroundings, and whose trajectory it will be necessary to monitor in order to avoid a collision. For example, one of the grouped features may correspond to an aeroplane's wing, whilst another one of the features in the same group may correspond to part of the aeroplane's fuselage.
In the present example, features 101a, 101b and 101c are located close together in the image, whilst feature 101d is located some distance away. Moreover, the respective velocity vectors 104a, 104b and 104c are similar in terms of both direction and magnitude, whilst feature 101d has a velocity vector whose direction is quite distinct from those of the other three. Based on this, the image processor determines that the features 101a, 101b and 101c (and by extension, features 102a, 102b and 102c) are to be grouped together.
Having identified the groups of features, the image processor next defines a region of the first image 101 that is centred on the group of features 101a, 101b and 101c and which expands to cover each one of those features. Figure 11A shows the region 11a that is defined around the image features 101a, 101b and 101c. Similarly, Figure 11B shows the region 11b that is defined around the image features 102a, 102b and 102c in the second image 102. In this example, each region is defined to be a circle, but other shapes are possible, depending on the spatial distribution of the features in the particular group.
Figure 11C shows the overlay between the two regions 11a, 11b that are defined in the first and second image, respectively. Together, these two regions define a track, i.e. a posited trajectory of an object that is present in the aircraft's surroundings.
In general, depending on the number of features in the two images, multiple tracks may be initialised. The state of each track in a given image can be represented by a state vector x, where

x = (r_1, r_2, \dot r_1, \dot r_2, a, \dot a)^T

Here, r_1 and r_2 are the two components of the position of the centroid of the track in the image. Referring to Figures 10 and 11, the centroid of the track in the first image 101 is the centre of the circle 11a. The centroid then moves to the centre of the circle 11b in the second image 102.
The parameters $\dot{r}_1$ and $\dot{r}_2$ are the corresponding rates of change of position of the centroid. The parameter $a$ is the logarithm of the extent of the track (i.e. the average 1D angle in radians subtended by the target at the sensor is $\exp a$). The size of the target in the image in each direction is $r_0 \exp a$, where $r_0$ is a constant, neglecting distortions in the mapping from angle to position in the image when the angle from the sensor boresight is not small. The parameter $\dot{a}$ is the rate of change of $a$ with respect to time. Assuming small angles, the circle 'radius' $r_0 \exp a$ is proportional to the rms of the feature distances from the centroid, in pixels. For small angles, this is proportional to the angle subtended. The constant of proportionality $r_0$ can be found from the angular field of view and resolution of the camera. For non-small angles a non-linear relationship applies, but this is neglected.
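To illustrate the extent parameter, the sketch below estimates $a$ from the rms distance of a group of features to their centroid, taking the constant of proportionality to be exactly $r_0$; both that choice and the numerical value of $r_0$ (pixels per radian) are assumptions for the example.

```python
import numpy as np

def log_extent(feature_positions, r0):
    """Estimate the log-extent parameter a for a group of features.

    The circle 'radius' r0 * exp(a) is taken to equal the rms distance of the
    features from their centroid, so a = log(rms_distance / r0).
    feature_positions: (N, 2) array of pixel coordinates.
    r0: pixels-per-radian constant of the camera (assumed known).
    """
    p = np.asarray(feature_positions, dtype=float)
    centroid = p.mean(axis=0)
    rms = np.sqrt(np.mean(np.sum((p - centroid) ** 2, axis=1)))
    return np.log(rms / r0)

features = [[120, 80], [130, 85], [125, 95]]
print(log_extent(features, r0=8000.0))  # exp(a) is the subtended angle in radians
```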
The reciprocal of $\dot{a}$ is an estimate of the time-to-collision (i.e. the time until the target crosses the plane orthogonal to the current line of sight from the sensor to the target).
An alternative estimate of time-to-collision may be obtained by fitting a straight line to the reciprocal of exp a as it varies with time.
Once a track has been defined, features that are identified in subsequent images can be tested to determine whether they are associated with that track, by using a Diffuse Target Filter (see step S89 of Figure 8). The inputs to the Diffuse Target Filter in a particular timestep are the pairs of image features in the present frame and the adjacent (preceding) frame, where each pair consists of a respective feature in the first of the two frames and a feature in the second of the two frames that are correlated with a high degree of confidence. Pairs of features where the correlation confidence is low, or where the apparent motion between frames is similar to that of features on the ground, can be excluded to reduce false alarms and the processing load requirements.
The mechanics of the Diffuse Target Filter are now described in more detail.
First, the mean number of features belonging to the track is estimated, separately from the filter for $x$. For example, this can be done by maintaining quantities $u$ and $v$ for a given track, where if $J$ features are associated with the track in a given frame, $u$ and $v$ are updated by

$$u \rightarrow u + J, \qquad v \rightarrow v + 1$$

$u$ represents a count of the number of features, and $v$ represents a count of the number of frames, so the ratio is the number of features per frame. Both are given an exponential decay over time to gradually remove old information. They can be initialised to zero.
If the time interval between frames is $\Delta t$, $u$ and $v$ can be predicted between frames by

$$u \rightarrow u\,e^{-\Delta t/\tau}, \qquad v \rightarrow v\,e^{-\Delta t/\tau}$$

where $\tau$ is the time scale over which the number of detections is allowed to change (here, the term "detection" refers to the identification of a pair of features in the two frames that correspond to one another). Then an estimate of the mean number of detections per frame is $\bar{n} = u/v$.
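A minimal sketch of this decayed-count estimator is given below; the decay time scale tau and the frame interval are assumed tuning values.

```python
import numpy as np

class FeatureRateEstimator:
    """Exponentially decayed estimate of the mean number of features per frame."""

    def __init__(self, tau=2.0):
        self.tau = tau  # time scale (s) over which the rate may change (assumed)
        self.u = 0.0    # decayed count of associated features
        self.v = 0.0    # decayed count of frames

    def predict(self, dt):
        decay = np.exp(-dt / self.tau)
        self.u *= decay
        self.v *= decay

    def update(self, num_features):
        self.u += num_features
        self.v += 1.0

    @property
    def mean_per_frame(self):
        return self.u / self.v if self.v > 0 else 0.0

est = FeatureRateEstimator(tau=2.0)
for J in [3, 4, 3, 5]:          # features associated in successive frames
    est.predict(dt=0.04)        # assuming 25 fps
    est.update(J)
print(round(est.mean_per_frame, 2))  # roughly the recent feature rate
```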
Once features have been identified in an image, the association of features to a track can be carried out, for example, based on the differences between the features and the track state estimate

$$\hat{x} = (\hat{r}_1,\; \hat{r}_2,\; \hat{\dot{r}}_1,\; \hat{\dot{r}}_2,\; \hat{a},\; \hat{\dot{a}})^T$$

with corresponding covariance (or 'uncertainty') matrix $P$. Since each feature is matched with one in an adjacent stabilised frame, an effective velocity measurement can be found from the difference in position between frames divided by $\Delta t$. If the size of the region that defines the object in the image (see circles 11a and 11b of Figure 11C) is uniformly increasing at a fractional rate $\dot{a}$, then the $i$-th component of the expected velocity measurement, given the track state, is

$$\dot{z}_i = \hat{\dot{r}}_i + \hat{\dot{a}}\,(z_i - \hat{r}_i)$$

Here, the index $i$ denotes the component ($x$ or $y$) of position or velocity in the image and $z_i$ denotes the position coordinate ($x$ or $y$) of the feature in the present image. Then a decision about whether to associate a given detection $j$ with the track can be based on

$$\Lambda_j = \frac{\exp\left(-\tfrac{1}{2}\,(z_{\mathrm{assoc}} - H_{\mathrm{assoc}}\hat{x})^T (R_{\mathrm{assoc}} + H_{\mathrm{assoc}} P H_{\mathrm{assoc}}^T)^{-1} (z_{\mathrm{assoc}} - H_{\mathrm{assoc}}\hat{x})\right)}{\sqrt{\det 2\pi (R_{\mathrm{assoc}} + H_{\mathrm{assoc}} P H_{\mathrm{assoc}}^T)}}$$

where the effective measurement, for the purposes of association, is

$$z_{\mathrm{assoc}} = (z_1,\; z_2,\; \dot{z}_1,\; \dot{z}_2)^T$$

which couples to the state by

$$H_{\mathrm{assoc}} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ -\hat{\dot{a}} & 0 & 1 & 0 & 0 & z_1 - \hat{r}_1 \\ 0 & -\hat{\dot{a}} & 0 & 1 & 0 & z_2 - \hat{r}_2 \end{pmatrix}$$

and where $P$ is the track covariance matrix. The parameter $R_{\mathrm{assoc}}$ is the effective measurement covariance matrix for the purposes of association and is found by assuming that detections are distributed in position according to a 2D Gaussian distribution with an error proportional to the estimated target extent $\exp a$, increased to include a margin of error based on the track variance for $a$, namely $P_{aa}$. The error in the velocity 'measurement' is assumed to have a Gaussian distribution with fixed standard deviation $\sigma_v$, i.e.

$$R_{\mathrm{assoc}} = \begin{pmatrix} r_0^2 \exp 2(\hat{a} + k_a\sqrt{P_{aa}}) & 0 & 0 & 0 \\ 0 & r_0^2 \exp 2(\hat{a} + k_a\sqrt{P_{aa}}) & 0 & 0 \\ 0 & 0 & \sigma_v^2 & 0 \\ 0 & 0 & 0 & \sigma_v^2 \end{pmatrix}$$

where $k_a$ is a tuning parameter to account for potential underestimation of $a$ due to random errors; for example, the in-flight implementation chose $k_a = 2$. Correlations between the velocity 'measurements' and the feature position measurements are neglected here. It is desired to associate roughly $\bar{n}$ features with the track; this can be done by listing the features in decreasing order of $\Lambda_j$, and allocating the first $J$ features to the track, where for all of the allocated features

$$\bar{n}\,\Lambda_j \geq T_{\mathrm{assoc}}\,\rho_c$$

where $T_{\mathrm{assoc}}$ is a threshold and $\rho_c$ is a nominal 'clutter' density.
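The association step might be sketched as follows. This is an illustrative reconstruction of the gating test described above, not the flight implementation; the values of r0, sigma_v, k_a, T_assoc and rho_c are assumptions.

```python
import numpy as np

def association_scores(x, P, features, features_prev, dt,
                       r0=8000.0, sigma_v=20.0, k_a=2.0):
    """Score candidate feature pairs against a track state.

    x: track state (r1, r2, r1dot, r2dot, a, adot); P: 6x6 covariance.
    features, features_prev: (N, 2) matched positions in current/previous frame.
    Returns an array of likelihoods Lambda_j, one per feature pair.
    """
    x = np.asarray(x, dtype=float)
    r1, r2, r1d, r2d, a, adot = x
    pos = np.asarray(features, dtype=float)
    vel = (pos - np.asarray(features_prev, dtype=float)) / dt
    sigma_p2 = r0**2 * np.exp(2.0 * (a + k_a * np.sqrt(P[4, 4])))
    R_assoc = np.diag([sigma_p2, sigma_p2, sigma_v**2, sigma_v**2])
    scores = np.empty(len(pos))
    for j, (z, zdot) in enumerate(zip(pos, vel)):
        H = np.array([[1.0, 0, 0, 0, 0, 0],
                      [0, 1.0, 0, 0, 0, 0],
                      [-adot, 0, 1.0, 0, 0, z[0] - r1],
                      [0, -adot, 0, 1.0, 0, z[1] - r2]])
        innovation = np.concatenate([z, zdot]) - H @ x
        S = R_assoc + H @ P @ H.T
        scores[j] = (np.exp(-0.5 * innovation @ np.linalg.solve(S, innovation))
                     / np.sqrt(np.linalg.det(2.0 * np.pi * S)))
    return scores

def select_associations(scores, n_bar, T_assoc=1.0, rho_c=1e-9):
    """Indices passing n_bar * Lambda_j >= T_assoc * rho_c, best first."""
    order = np.argsort(scores)[::-1]
    return [j for j in order if n_bar * scores[j] >= T_assoc * rho_c]
```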
The $J$ associated features are then used to update the track state and covariance, using a variant of the Kalman filter update equations. The feature velocity 'measurements' should not be used to update the track, as they are statistically correlated with the position measurements, and from one frame to another. Assuming that the feature detections occur randomly, with a number $J$ drawn from a Poisson distribution with mean $\bar{n}$, and with each feature position independently drawn from a 2D Gaussian distribution with mean $(r_1, r_2)$ and standard deviation $r_0 e^{a}$, the likelihood function for the set of detections is as follows:

$$p(\{z_j\}\,|\,x, \bar{n}) = e^{-\bar{n}} \prod_{j=1}^{J} \frac{\bar{n}}{2\pi r_0^2 e^{2a}} \exp\!\left(-\frac{(z_{j1} - r_1)^2 + (z_{j2} - r_2)^2}{2 r_0^2 e^{2a}}\right)$$

(cf. K Gilholm, S Godsill, S Maskell, D Salmond, "Poisson models for extended target and group tracking", Signal and Data Processing of Small Targets 2005.) The dependence on $a$ is not in the usual form for an Extended Kalman Filter. If the likelihood is expanded about the track state estimate $\hat{x}$ to give a Gaussian approximation, then there is no certainty that the resulting 'covariance' matrix would be positive definite. Therefore the likelihood is instead expanded about the point

$$\tilde{r}_1 = \frac{1}{J}\sum_j z_{j1}, \qquad \tilde{r}_2 = \frac{1}{J}\sum_j z_{j2}, \qquad \tilde{a} = \hat{a}$$

i.e. in position about the centroid of the set of associated features. The resulting Gaussian approximation to the likelihood is

$$p(\{z_j\}\,|\,x) \propto \exp\!\left(-\tfrac{1}{2}\,(\tilde{z} - Hx)^T \tilde{R}^{-1} (\tilde{z} - Hx)\right)$$

with effective measurement

$$\tilde{z} = \begin{pmatrix} \tilde{r}_1 \\ \tilde{r}_2 \\ \hat{a} + \tfrac{1}{2} - J r_0^2 e^{2\hat{a}}/\delta^2 \end{pmatrix}, \qquad \delta^2 = \sum_j \left[(z_{j1} - \tilde{r}_1)^2 + (z_{j2} - \tilde{r}_2)^2\right]$$

with coupling matrix

$$H = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}$$

and with effective measurement covariance

$$\tilde{R} = \begin{pmatrix} r_0^2 e^{2\hat{a}}/J & 0 & 0 \\ 0 & r_0^2 e^{2\hat{a}}/J & 0 \\ 0 & 0 & r_0^2 e^{2\hat{a}}/(2\delta^2) \end{pmatrix}$$

It is necessary that $J \geq 2$, so that $\tilde{z}$ and $\tilde{R}$ are finite. Then the track state and covariance can be updated using the usual Kalman filter equations:

$$K = P H^T (H P H^T + \tilde{R})^{-1}, \qquad \hat{x} \rightarrow \hat{x} + K(\tilde{z} - H\hat{x}), \qquad P \rightarrow (I - K H) P$$

(It is also possible to iterate the above solution, using the new value of $\hat{a}$ to repeat the update starting from the beginning.)
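A compact sketch of this update step, using the effective measurement and covariance derived above, is given below; it is a reconstruction under the stated Poisson/Gaussian assumptions rather than the exact implementation, and the value of r0 is assumed.

```python
import numpy as np

def diffuse_target_update(x, P, associated_features, r0=8000.0):
    """Kalman-style update of the track state from J >= 2 associated features.

    x: state (r1, r2, r1dot, r2dot, a, adot); P: 6x6 covariance matrix.
    associated_features: (J, 2) array of feature positions in the current frame.
    """
    x = np.asarray(x, dtype=float)
    z = np.asarray(associated_features, dtype=float)
    J = len(z)
    if J < 2:
        return x, P  # delta^2 would be zero; skip the update
    centroid = z.mean(axis=0)
    delta2 = np.sum((z - centroid) ** 2)
    a_hat = x[4]
    s2 = r0**2 * np.exp(2.0 * a_hat)          # (r0 * exp(a))^2
    z_eff = np.array([centroid[0], centroid[1],
                      a_hat + 0.5 - J * s2 / delta2])
    H = np.zeros((3, 6))
    H[0, 0] = H[1, 1] = H[2, 4] = 1.0         # picks out (r1, r2, a)
    R_eff = np.diag([s2 / J, s2 / J, s2 / (2.0 * delta2)])
    S = H @ P @ H.T + R_eff
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z_eff - H @ x)
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new
```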
The prediction of the track state between frames is carried out using the Extended Kalman Filter method. The model of target dynamics in 3D is assumed to be

$$\frac{d^2 \mathbf{R}}{dt^2} = \mathbf{w}(t)$$

where $\mathbf{R}(t)$ is the target position in 3D and $\mathbf{w}(t)$ is a Gaussian white-noise process with mean zero and autocorrelation

$$\langle \mathbf{w}(t)\,\mathbf{w}^T(t') \rangle = \sigma^2\,\delta(t - t')$$

It is assumed that

$$\sigma^2 = \gamma \left|\frac{d\mathbf{R}}{dt}\right|^2$$

where $\gamma$ is a parameter. The mapping from 3D position to position in the image is taken to be

$$\sin\!\left(\frac{r_1 - c_1}{r_0}\right) = \frac{R_1}{|\mathbf{R}|}, \qquad \sin\!\left(\frac{r_2 - c_2}{r_0}\right) = \frac{R_2}{|\mathbf{R}|}$$

where $(c_1, c_2)$ is the centre of the image; this should be a valid approximation at angles close to the sensor boresight. Neglecting small quantities, the equation of motion of the state is

$$\frac{dx}{dt} = f(x) + w(t)$$

where

$$f(x) = \begin{pmatrix} \dot{r}_1 \\ \dot{r}_2 \\ 2\dot{a}\dot{r}_1 - (r_1 - c_1)(\dot{r}_1^2 + \dot{r}_2^2)/r_0^2 \\ 2\dot{a}\dot{r}_2 - (r_2 - c_2)(\dot{r}_1^2 + \dot{r}_2^2)/r_0^2 \\ \dot{a} \\ \dot{a}^2 - (\dot{r}_1^2 + \dot{r}_2^2)/r_0^2 \end{pmatrix}$$

and $w(t)$ is a Gaussian white-noise process with zero mean and autocorrelation

$$\langle w(t)\,w^T(t')\rangle = \mathrm{diag}\!\left(0,\; 0,\; \sigma_a^2,\; \sigma_a^2,\; 0,\; \sigma_q^2\right)\delta(t - t')$$

As usual for the Extended Kalman Filter, $f(x)$ is expanded about the track state estimate $\hat{x}$, giving the prediction equations over a timestep $\Delta t$:

$$\hat{x} \rightarrow \hat{x} + \int_0^{\Delta t} dt'\, e^{(\Delta t - t')F} f(\hat{x}) = \hat{x} + \left(\Delta t + \tfrac{1}{2}\Delta t^2 F + \tfrac{1}{6}\Delta t^3 F^2 + \dots\right) f(\hat{x})$$

$$P \rightarrow e^{\Delta t F} P\, e^{\Delta t F^T} + Q(\Delta t)$$

where

$$F = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ -(\dot{r}_1^2 + \dot{r}_2^2)/r_0^2 & 0 & 2\dot{a} - 2\dot{r}_1(r_1 - c_1)/r_0^2 & -2\dot{r}_2(r_1 - c_1)/r_0^2 & 0 & 2\dot{r}_1 \\ 0 & -(\dot{r}_1^2 + \dot{r}_2^2)/r_0^2 & -2\dot{r}_1(r_2 - c_2)/r_0^2 & 2\dot{a} - 2\dot{r}_2(r_2 - c_2)/r_0^2 & 0 & 2\dot{r}_2 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & -2\dot{r}_1/r_0^2 & -2\dot{r}_2/r_0^2 & 0 & 2\dot{a} \end{pmatrix}$$

evaluated at $\hat{x}$, and the system noise matrix is approximately

$$Q(\Delta t) = \begin{pmatrix} \tfrac{1}{3}\Delta t^3 \sigma_a^2 & 0 & \tfrac{1}{2}\Delta t^2 \sigma_a^2 & 0 & 0 & 0 \\ 0 & \tfrac{1}{3}\Delta t^3 \sigma_a^2 & 0 & \tfrac{1}{2}\Delta t^2 \sigma_a^2 & 0 & 0 \\ \tfrac{1}{2}\Delta t^2 \sigma_a^2 & 0 & \Delta t\, \sigma_a^2 & 0 & 0 & 0 \\ 0 & \tfrac{1}{2}\Delta t^2 \sigma_a^2 & 0 & \Delta t\, \sigma_a^2 & 0 & 0 \\ 0 & 0 & 0 & 0 & \tfrac{1}{3}\Delta t^3 \sigma_q^2 & \tfrac{1}{2}\Delta t^2 \sigma_q^2 \\ 0 & 0 & 0 & 0 & \tfrac{1}{2}\Delta t^2 \sigma_q^2 & \Delta t\, \sigma_q^2 \end{pmatrix}$$

with

$$\sigma_a^2 = r_0^2\,\gamma \left(\frac{\dot{r}_1^2}{r_0^2} + \frac{\dot{r}_2^2}{r_0^2} + \dot{a}^2 + \delta_a^2\right), \qquad \sigma_q^2 = \gamma \left(\frac{\dot{r}_1^2}{r_0^2} + \frac{\dot{r}_2^2}{r_0^2} + \dot{a}^2 + \delta_q^2\right)$$

where $\delta_a^2$ and $\delta_q^2$ are extra terms to ensure that the filter remains stable, even if the state estimates of rate of change happen to be zero.
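The prediction step can be sketched as below, using a truncated series for the matrix exponential as in the expansion above; the camera constant r0, the image centre, the process-noise parameter gamma and the stabilising floor terms are assumed values.

```python
import math
import numpy as np

def ekf_predict(x, P, dt, r0=8000.0, c=(640.0, 512.0),
                gamma=1e-3, floor_a=1e-4, floor_q=1e-6, terms=6):
    """Predict the diffuse-target track state and covariance over dt seconds."""
    x = np.asarray(x, dtype=float)
    r1, r2, r1d, r2d, a, ad = x
    v2 = (r1d**2 + r2d**2) / r0**2              # squared angular speed in the image
    f = np.array([r1d, r2d,
                  2*ad*r1d - (r1 - c[0]) * v2,
                  2*ad*r2d - (r2 - c[1]) * v2,
                  ad,
                  ad**2 - v2])
    F = np.array([
        [0, 0, 1, 0, 0, 0],
        [0, 0, 0, 1, 0, 0],
        [-v2, 0, 2*ad - 2*r1d*(r1 - c[0])/r0**2, -2*r2d*(r1 - c[0])/r0**2, 0, 2*r1d],
        [0, -v2, -2*r1d*(r2 - c[1])/r0**2, 2*ad - 2*r2d*(r2 - c[1])/r0**2, 0, 2*r2d],
        [0, 0, 0, 0, 0, 1],
        [0, 0, -2*r1d/r0**2, -2*r2d/r0**2, 0, 2*ad]], dtype=float)
    # Truncated series for exp(dt*F) and for the integral of exp((dt - t)F) dt
    Phi = np.eye(6)
    G = np.zeros((6, 6))
    term = dt * np.eye(6)                       # k = 0 term of the integral
    for k in range(1, terms + 1):
        G += term
        Phi += np.linalg.matrix_power(F, k) * dt**k / math.factorial(k)
        term = term @ F * dt / (k + 1)
    # White-noise acceleration on image position and on a, with stabilising floors
    sa2 = r0**2 * gamma * (v2 + ad**2 + floor_a)
    sq2 = gamma * (v2 + ad**2 + floor_q)
    Q = np.zeros((6, 6))
    for i, j, s2 in [(0, 2, sa2), (1, 3, sa2), (4, 5, sq2)]:
        Q[i, i] = dt**3 * s2 / 3.0
        Q[i, j] = Q[j, i] = dt**2 * s2 / 2.0
        Q[j, j] = dt * s2
    return x + G @ f, Phi @ P @ Phi.T + Q
```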
The process of associating features with an established track is explained by reference to Figures 11D - 11F. Figure 11D shows an example of a third image 103 captured after the second image 102 by the image sensor. As before, image processing techniques are used to identify features within the third image and to compare those features with ones identified in the previously captured image (in this case, the second image 102 shown in Figure 10B). In the third image 103, the image processor is able to identify features 103a, 103b and 103c whose appearance in the image bears close correspondence to the features 102a, 102b and 102c of the second image, respectively.
The image processor now performs the same steps as shown in Figures 10D and 10E: the image processor determines new velocity vectors for the features 102a, 102b and 102c, based on the locations of the features 103a, 103b and 103c shown in the third image. The respective velocity vectors 105a, 105b and 105c are shown in Figure 11E.
Based on the positions of the features 103a, 103b and 103c in the third image, and the velocity vectors 105a, 105b and 105c that connect the features 102a, 102b and 102c to the features 103a, 103b and 103c in the third image, the Diffuse Target Filter will determine whether the newly identified features 103a, 103b and 103c are associated with the track that was previously established in the preceding two frames in the series (i.e. the first image 101 and the second image 102). If so, the Diffuse Target Filter will use the features identified in the third image 103 to update the track parameters.
In the present case, the output of the Diffuse Target Filter is such that the features 103a, 103b and 103c are deemed to be associated with the track represented by circle 11b in Figure 11C, predicted forward to the third frame. Following this, the track parameters are updated to reflect the new position of the features. Thus, the track state is now represented by circle 11c shown in Figure 11F. The processing then continues with subsequent Predict -> Associate -> Update -> Predict... steps.
Thus, from the sets of features in successive frames, the Diffuse Target Filter estimates the position and velocity of an intruder in the image, and also its extent and the rate of change of the extent. The change in the extent can then be used to estimate a time-to-collision, and decide whether the intruder is a threat.
The initial state can be taken from the mean position and velocity of the features in the group, and the covariance can be taken from the standard deviation of the features in the group. In the case of the target extent state $a$ and its rate of change $\dot{a}$, these can be initialised using the position standard deviation of the group and its change between the two adjacent frames.
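A possible initialisation from the group statistics is sketched below (the track covariance would be seeded analogously from the spread of the group); the value of r0 is again an assumed camera constant.

```python
import numpy as np

def initialise_track(feat_prev, feat_curr, dt, r0=8000.0):
    """Initialise a diffuse-target track state from a grouped set of feature pairs.

    feat_prev, feat_curr: (N, 2) matched positions in two adjacent frames.
    Returns the initial state (r1, r2, r1dot, r2dot, a, adot).
    """
    p0 = np.asarray(feat_prev, dtype=float)
    p1 = np.asarray(feat_curr, dtype=float)
    centroid0, centroid1 = p0.mean(axis=0), p1.mean(axis=0)
    velocity = (centroid1 - centroid0) / dt
    # Extent from the rms spread of the group about its centroid
    rms0 = np.sqrt(np.mean(np.sum((p0 - centroid0) ** 2, axis=1)))
    rms1 = np.sqrt(np.mean(np.sum((p1 - centroid1) ** 2, axis=1)))
    a0, a1 = np.log(rms0 / r0), np.log(rms1 / r0)
    return np.array([centroid1[0], centroid1[1],
                     velocity[0], velocity[1],
                     a1, (a1 - a0) / dt])
```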
In both of the embodiments shown in Figures 3 and 8, the image processor outputs a sequence of angular estimates s of a potential colliding object over time. In Figure 3, s is proportional to the feature pair separation, and in Figure 8, s = exp(a). Assuming that the object has a true size L and an unknown range R that closes at a constant unknown range rate V, then it is possible to write:

$$s = \frac{cL}{R} = \frac{cL}{R_0 + Vt}, \qquad \frac{1}{s} = \frac{R_0}{cL} + \frac{V}{cL}\,t$$

where c is a constant of proportionality, t is time, R_0 is the unknown object range at time t = 0 and R is the range at subsequent time t.
Plotting time on the x-axis and the reciprocal of extent on the y-axis therefore forms a straight line, from which the intercept on the x-axis can be read as the time of collision; an example from in-flight recordings is shown in Figure 12. To balance the smoothing of noisy data against potential changes in geometry and speed over time, the estimate for time of collision is made using a suitable time window (e.g. 10 s, as in Figure 13).
Applying a threshold to the quality of the windowed line fit then reduces subsequent false alarms in the system. An example line-fit quality measure from the current system is the 'coefficient of determination' r² of the linear regression, shown in Figure 14.
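The windowed line fit and its quality measure might be implemented as in the sketch below; the window length, the r² threshold and the synthetic data are illustrative assumptions.

```python
import numpy as np

def time_of_collision(times, extents, window=10.0, r2_threshold=0.9):
    """Estimate time of collision from a windowed straight-line fit to 1/extent.

    times: sample times (s); extents: angular extent estimates s (e.g. exp(a)).
    Returns (t_collision, r_squared), with t_collision = None if the fit is poor
    or the target is not closing.
    """
    t = np.asarray(times, dtype=float)
    y = 1.0 / np.asarray(extents, dtype=float)
    keep = t >= t[-1] - window            # restrict to the most recent window
    t, y = t[keep], y[keep]
    slope, intercept = np.polyfit(t, y, 1)
    y_fit = slope * t + intercept
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    if slope >= 0 or r_squared < r2_threshold:
        return None, r_squared
    return -intercept / slope, r_squared  # x-axis intercept of the fitted line

# Synthetic closing target: s = cL / (R0 + V t) with collision at t = 20 s
t = np.linspace(0.0, 12.0, 60)
s = 1.0 / (20.0 - t)                      # cL = 1, R0 = 20, V = -1
print(time_of_collision(t, s))            # approximately (20.0, 1.0)
```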
Based on the confidence level for the line fit, a processor-intensive algorithm can optionally be applied to a small area of the image to improve the time-to-collision estimate. One such processor-intensive algorithm is pixel-based dense optical flow, for which numerous examples exist in the literature, such as "Real Time Optical Flow", PhD Thesis, Ted Camus, 1994.
Where approximate image stabilisation is possible, or where an external navigation solution is available, the lateral angular motion of the intruder emerges either directly from the Diffuse Target filter or can be retrieved using standard Kalman filter techniques.
Thus, embodiments provide the angular translation rate in addition to the rate of angular expansion. If the angular translation rate and the expansion rate combine to give an angular rate relative to the own aircraft that is close to zero, a collision is possible. Specifically, a collision is possible if the absolute value of the angular translation rate is less than half of the rate of increase of the angle subtended by the object. This test can be subject to standard thresholds on uncertainty, and combined with the estimated time to collision.
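A minimal form of this test is shown below; the optional uncertainty margin is an assumed tuning parameter.

```python
def collision_possible(translation_rate, expansion_rate, margin=0.0):
    """Return True if the target's bearing is sufficiently constant for a collision.

    translation_rate: lateral angular rate of the intruder (rad/s).
    expansion_rate: rate of increase of the subtended angle (rad/s).
    margin: optional allowance for uncertainty in the two rates (assumed).
    """
    return abs(translation_rate) < 0.5 * expansion_rate + margin

print(collision_possible(translation_rate=0.001, expansion_rate=0.004))  # True
print(collision_possible(translation_rate=0.01, expansion_rate=0.004))   # False
```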
It will be understood that although the examples shown in Figures 4, 10 and 11 each relate to a case in which successive images are received from the same, individual image sensor, embodiments described herein are also applicable to cases in which the successive image frames are received from two or more different image sensors.
An example is now provided for determining the requisite image resolution for an image processor according to an embodiment. Suppose aircraft (both the own aircraft and potentially colliding aircraft) have a speed of order of magnitude v, so that the relative speed might be as much as 2v. Suppose that to be useful a warning of collision must be given a minimum time T1 before collision. Suppose that the lateral size of a potentially colliding aircraft (i.e. orthogonal to the line of sight) is a in each direction.
Suppose that, in order to provide a warning of collision, the extent of the colliding aircraft in the image must be of order n pixels in each direction.
Then the required image resolution, in pixels per radian, is of order of magnitude

$$\frac{2 v T_1 n}{a}$$

E.g. with v ≈ 200 m/s, T_1 ≈ 10 s, n ≈ 10 pixels and a ≈ 5 m, this suggests a required resolution of about 140 pixels per degree.
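The quoted figure can be checked directly (values as in the example above):

```python
import math

v, T1, n, a = 200.0, 10.0, 10.0, 5.0             # m/s, s, pixels, m
pixels_per_radian = 2 * v * T1 * n / a            # = 8000 pixels per radian
print(round(pixels_per_radian * math.pi / 180))   # ~140 pixels per degree
```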
Embodiments meet a currently unmet need of providing robust warnings of airborne collisions with an estimate of the time until such collision, such that a suitable collision avoidance manoeuvre can be taken by the pilot. This does not require an assessment of target size, only the expansion rate, and the estimate is obtained by complementary methods, subject to the background clutter density.
The person skilled in the art will understand that embodiments described herein may be implemented in software, on a CPU or GPU as part of a soft real-time processing system, operating on uncompressed frames of data streamed from cameras via Ethernet, for example. Embodiments may be designed to operate with a small number of COTS cameras and be compatible with 1 Gbit Ethernet.
Figure 15 shows a schematic of an image processor 1501 according to an embodiment. The image processor is arranged to receive a stream of temporal images 1503 from an imaging system 1505 that comprises one or more image sensors mounted on a platform.
The image processor includes a feature identification module 1507 that performs the role of identifying the features present in each image, with those features then being stored in a memory module 1509. A feature association module 1511 performs the role of comparing features present in the preceding frame (and which have been stored in the memory module) with features present in the current frame to form pairs of associated features, where the features in each pair are posited to belong to the same object part. The tracking module 1513 is in turn configured to receive the output from the feature association module and to assign the various pairs to existing tracks and/or generate new tracks, in accordance with the algorithms described above. For each track, the tracking module 1513 updates the respective track parameters and saves these to the memory module 1509; the saved parameters will then form the basis for the tracking module's computation when the next image frame arrives at the image processor.
The track evaluating module 1515 is configured to analyse the current state of the track parameters to determine if there is a threat of collision with an object, again in accordance with the algorithms previously described. The track evaluating module is arranged to communicate with the memory module, allowing the track evaluating module to retrieve data obtained from the analysis of previous image frames and to compare that data with the current frame; specifically, the track evaluating module compares the status of the track parameters as calculated for previous frames with the most recent set of track parameters output by the tracking module 1513 to determine if the evolution in those parameters is consistent with an expected collision. In the event that a collision is deemed to be imminent, the track evaluating module 1515 is configured to forward instructions to the alert module 1517 to issue an alert. The alert may be, for example, a visual alert displayed on a screen or an audio alert.
The person skilled in the art will recognise that the various software modules shown in Figure 15 can be embedded in original equipment, or can be provided, as a whole or in part, after manufacture. For instance, the image processor software can be introduced, as a whole, as a computer program product, which may be in the form of a download, or to be introduced via a computer program storage medium, such as an optical disk.
Alternatively, modifications to an existing image processor can be made by an update, or plug-in, to provide features of the above described embodiment.
In summary, embodiments described herein provide a means whereby potential colliding objects, e.g. airborne intruders, can be declared with a low probability of false alarm, with significant warning time of collision, and an accurate estimate of the time to collision.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the invention. Indeed, the novel methods, devices and systems described herein may be embodied in a variety of forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the invention. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims (93)

  1. 1. A method for processing a temporal sequence of images obtained from one or more image sensors onboard a platform in order to detect the threat of a collision between the platform and an inbound object, the method comprising: receiving a first image frame in the sequence of images; identifying a plurality of features in the first image frame, the features corresponding to parts of one or more objects present in the field of view of the sensor that captured the first image frame; receiving a second image frame in the sequence; identifying a plurality of features in the second image frame, the features corresponding to parts of one or more objects present in the field of view of the sensor that captured the second image frame; associating respective ones of the features identified in the second image frame with features identified in the first image frame to thereby form pairs of associated features; using each pair of associated features to posit the movement of a respective object part between the first image frame and second image frame; generating one or more tracks based on the posited movement of the object parts, the tracks being characterised by parameters that reflect the posited movement of object parts between the first image frame and the second image frame; storing the parameters for each track in memory; and for each of one or more subsequent image frames in the sequence: identifying a plurality of features in the current image frame, the features corresponding to parts of one or more objects present in the field of view of the sensor that captured the current image frame; associating respective ones of the features identified in the current image frame with the features identified in the previous image frame to form pairs of associated features; for each pair of associated features, determining whether to assign the pair of associated features to an existing track or to a new track; where pairs of features are assigned to an existing track, updating the track parameters stored in memory for that track and where pairs of features have been assigned to a new track, storing track parameters for that new track in memory; and evaluating the tracks to determine whether a collision between an object and the platform is imminent.
  2. 2. A method according to claim 1, wherein features identified in a given image frame are associated with ones identified in the preceding frame by comparing the appearance of features in the two image frames.
  3. 3. A method according to claim 2, wherein each identified feature comprises a group of pixels having a spatial distribution of values for one or more of intensity, hue and saturation and the features identified in a given frame are associated with ones identified in the preceding frame by comparing the spatial distribution of values in the respective groups of pixels.
  4. 4. A method according to claim 3 wherein the features identified in the images are ones for which the spatial distribution of values indicate the corner of an object.
  5. 5. A method according to any one of the preceding claims, wherein in each one of the subsequent image frames, each pair of associated features is assigned to a different track.
  6. 6. A method according to claim 5, wherein the decision to assign each one of the associated pairs of features to a different track is taken when the density of features identified in the image frames is below a threshold.
  7. 7. A method according to claim 5 or 6, wherein the step of evaluating the tracks comprises assessing the spatial separation between selected features in the current frame, wherein each selected feature belongs to a respective pair of associated features that has been assigned to a different respective track.
  8. 8. A method according to claim 7, wherein the step of evaluating the tracks further comprises determining the reciprocal of the spatial separation between features associated with respective tracks and determining if the reciprocal of that separation follows a linear trend downwards when compared with previous image frames in the sequence.
  9. 9. A method according to claim 8, comprising using a straight line function to model the trend and extrapolating from that function to estimate a point in time at which the reciprocal will reach zero, said point in time being used to estimate a time to collision.
  10. 10. A method according to claim 9, further comprising determining the goodness-of-fit of the straight line function, and assigning a likelihood of collision based on the goodness-of-fit.
  11. 11. A method according to any one of claims 1 to 4, wherein for each subsequent image frame, more than one of the pairs of associated features is assigned to the same track.
  12. 12. A method according to claim 11, wherein the decision to assign more than one of the associated pairs of features to the same track is taken when the density of features identified in the image frames exceeds a threshold.
  13. 13. A method according to claim 11 or 12, wherein for a given track, the track parameters define the movement of a group of object parts that are posited to belong to the same object.
  14. 14. A method according to claim 13, wherein the track parameters include a measure of the current position of the centroid of the group of object parts.
  15. 15. A method according to claim 13 or 14, wherein the track parameters include a measure of the current velocity of the centroid of the group of object parts.
  16. 16. A method according to any one of claims 13 to 15, wherein the track parameters include a measure of the current spatial extent of the object.
  17. 17. A method according to any one of claims 13 to 16, wherein the track parameters include a measure of the current rate of change of the spatial extent of the object.
  18. 18. A method according to claim 16 or 17, wherein the spatial extent of the object is determined as being proportional to the standard deviation in the distance between each feature in the current image that is considered as being associated with the object and the centroid of those features.
  19. 19. A method according to any one of claims 16 to 18, wherein the step of evaluating the tracks comprises comparing the spatial extent of the object as determined for the current frame with the spatial extent of the object as determined for at least one previous frame in the sequence.
  20. 20. A method according to claim 19, wherein evaluating a track further comprises comparing the reciprocal of the spatial extent of the object as determined for the current frame with the reciprocal of the spatial extent of the object as determined for at least one previous frame in the sequence.
  21. 21. A method according to claim 20, comprising using a straight line function to model the change in the reciprocal of the spatial extent of the object between the frames and extrapolating from that function to estimate a point in time at which the reciprocal will reach zero, said point in time being used to estimate a time to collision.
  22. 22. A method according to claim 21, further comprising determining the goodness-of-fit of the straight line function, and assigning a likelihood of collision based on the goodness-of-fit.
  23. 23. A method according to any one of the preceding claims, wherein for each one of the image frames from the second image frame onwards, a prediction is made of how the track parameters in the respective tracks will evolve during the interval between receiving that frame and the next frame in the sequence.
  24. 24. A method according to claim 23, wherein determining whether to assign a pair of associated features to an existing track comprises determining if the posited movement of the object part associated with that pair of features is consistent with the predicted evolution of the track parameters.
  25. 25. A method according to any one of claims 23 to 24, wherein each track parameter is modelled as having a respective probability distribution and for each parameter, the value of the parameter that is stored in memory reflects the current mean of the respective distribution.
  26. 26. A method according to claim 25, wherein the probability distributions are approximated as being Gaussian.
  27. 27. A method according to claim 25 or 26, wherein the mean and covariance of each track parameter is stored for each image frame, the mean and covariance being updated each time the track parameter is itself updated.
  28. 28. A method according to any one of claims 25 to 27, wherein predicting how the track parameters will evolve in the interval between image frames comprises defining a new probability distribution for the values of the respective parameters in the next frame.
  29. 29. A method according to any one of claims 11 to 28, wherein the track parameters include the number of pairs of associated features presently assigned to the track.
  30. 30. A method according to claim 29 as dependent on claim 11, wherein for each received image, the number and position of features identified in that image is assumed to be statistically independent of each other and of the number and position of features detected in previous images.
  31. 31. A method according to claim 30, wherein for each received image, the number of features that will be identified in the image is modelled as being drawn from a Poisson distribution.
  32. 32. A method according to any one of claims 23 to 31 as dependent on claim 11, wherein in identifying pairs of associated features as candidates for assigning to an existing track, features that belong to the track in the current frame are modelled as having positions drawn from an isotropic Gaussian distribution.
  33. 33. A method according to any one of claims 23 to 32 as dependent on claim 13, wherein when predicting the evolution of track parameters for a given track, the object to which the respective group of object parts is posited to belong is modelled as undergoing a rigid translation between successive frames.
  34. 34. A method according to claim 33 wherein when predicting the evolution of track parameters for a given track, the object to which the respective group of object parts is posited to belong is modelled as undergoing a combination of a rigid translation and a uniform expansion between successive frames.
  35. 35. A method according to any one of claims 14 to 34 as dependent on claim 13, wherein the motion of the object to which the respective group of object parts is posited to belong is modelled in 3D using a statistical model, in which the object's acceleration is assumed to be a white noise process.
  36. 36. A method according to claim 35 as dependent on claim 23, wherein the prediction of how the track parameters in a respective track will evolve is made by taking into account the systematic dependence of the velocity of features assigned to that track and the rate of change of the spatial extent on each other and on the position of the object in the image due to the projection of the object from 3D onto the 2D image
  37. 37. A method according to claim 36, wherein the acceleration white noise process is assumed to have a power spectral density proportional to the square of the object's 3D velocity.
  38. 38. A method according to any one of claims 14 to 34, wherein for each pair of successively captured images, measurements of optical flow are used to identify background features in the image, the background features being excluded from analysis when identifying pairs of associated features.
  39. 39. A method according to claim 38 wherein for each image frame, the number of features initially identified in the image is determined and the measurements of optical flow are used if the number is above a threshold.
  40. 40. A method according to any one of the preceding claims, wherein in the event that no new pairs of associated features have been assigned to a track after a predetermined number of frames, the track is deleted from memory.
  41. 41. A method according to any one of claims 12 to 40 as dependent on claim 11, wherein for each subsequent frame, in the event that more than one pair of associated features is not assigned to an existing track, a decision is made as to whether to assign those pairs to the same new track.
  42. 42. A method according to claim 41, wherein the decision is based at least in part on whether the features in the current frame that belong to those pairs lie within a certain proximity of one another.
  43. 43. A method according to claim 41, wherein the decision is based at least in part on whether the velocities of the features in the current frame that belong to those pairs lie within a certain range of one another.
  44. 44. A method according to any one of the preceding claims, wherein an alert is issued when a collision with an object is deemed to be imminent.
  45. 45. A method according to any one of the preceding claims wherein the one or more image sensors are video cameras.
  46. 46. An image processor for receiving and processing a temporal sequence of image frames captured by one or more image sensors onboard a platform, the image processor comprising: a feature identification module configured to identify a plurality of features in the first image frame in the sequence, wherein the features in the first image frame correspond to parts of one or more objects present in the field of view of the sensor that captured the first image frame; the feature identification module being further configured to identify a plurality of features in the second image frame in the sequence, wherein the features in the second image frame correspond to parts of one or more objects present in the field of view of the sensor that captured the second image frame; a feature association module configured to associate respective ones of the features identified in the second image frame with features identified in the first image frame to thereby form pairs of associated features, each pair of associated features being used to posit the movement of a respective object part between the first image frame and second image frame; a tracking module configured to generate one or more tracks based on the posited movement of the object parts, the tracks being characterised by parameters that reflect the posited movement of the object parts between the first image frame and the second image frame; and a memory module for storing the track parameters in memory; wherein for each of one or more subsequent image frames in the sequence: the feature identification module is configured to identify a plurality of features in the current image frame, the features corresponding to parts of one or more objects present in the field of view of the sensor that captured the current image frame; the feature associating module is configured to associate respective ones of the features identified in the current image frame with the features identified in the previous image frame to form pairs of associated features; and the tracking module is configured to determine whether to assign the pair of associated features to an existing track or to a new track; wherein, in the event that a pair of associated features is assigned to an existing track, the tracking module is configured to update the track parameters stored in the memory module for that track and in the event that a pair of features is assigned to a new track, the tracking module is configured to store track parameters for that new track in the memory module; the processor further comprising a track evaluating module for evaluating the state of the tracks to determine whether a collision between an object and the platform is imminent.
  47. 47. An image processor according to claim 46, wherein the feature association module is configured to associate features identified in a given frame with features identified in a preceding frame by comparing the appearance of the features in those image frames.
  48. 48. An image processor according to claim 47, wherein each identified feature comprises a group of pixels having a spatial distribution of values for one or more of intensity, hue and saturation and the feature association module is configured to associate features identified in a given frame with features identified in the preceding frame by comparing the spatial distribution of values in the respective groups of pixels.
  49. 49. An image processor according to claim 48 wherein the feature identification module is configured to identify features for which the spatial distribution of values indicate the corner of an object.
  50. 50. An image processor according to any one of claims 46 to 49, wherein for each one of the subsequent image frames, when assigning pairs of associated features to tracks, the tracking module is configured to assign each pair of associated features to a different track.
  51. 51. An image processor according to claim 50, wherein the decision to assign each one of the associated pairs of features to a different track is taken when the density of features identified in the image frames is below a threshold.
  52. 52. An image processor according to claim 50 or 51, wherein the track evaluating module is configured to assess the spatial separation between selected features in the current frame, wherein each selected feature belongs to a respective pair of associated features that has been assigned to a different respective track.
  53. 53. An image processor according to claim 52, wherein the track evaluating module is configured to determine the reciprocal of the spatial separation between features associated with respective tracks and to determine if the reciprocal of that separation follows a linear trend downwards when compared with previous image frames in the sequence.
  54. 54. An image processor according to claim 53, wherein the track evaluating module is configured to model the trend using a straight line function and to extrapolate from that function to estimate a point in time at which the reciprocal will reach zero, said point in time being used to estimate a time to collision.
  55. 55. An image processor according to claim 54, wherein the track evaluating module is configured to determine the goodness-of-fit of the straight line function, and to assign a likelihood of collision based on the goodness-of-fit.
  56. 56. An image processor according to any one of claims 46 to 49, wherein for each subsequent image frame, the tracking module is configured to assign more than one of the pairs of associated features to the same track.
  57. 57. An image processor according to claim 56, wherein the decision to assign more than one of the associated pairs of features to the same track is taken when the density of features identified in the image frames exceeds a threshold.
  58. 58. An image processor according to claim 56 or 57, wherein for a given track, the track parameters define the movement of a group of object parts that are posited to belong to the same object.
  59. 59. An image processor according to claim 58, wherein the track parameters include a measure of the current position of the centroid of the group of object parts.
  60. 60. An image processor according to claim 58 or 59, wherein the track parameters include a measure of the current velocity of the centroid of the group of object parts.
  61. 61. An image processor according to any one of claims 58 to 60, wherein the track parameters include a measure of the current spatial extent of the object.
  62. 62. An image processor according to any one of claims 58 to 61, wherein the track parameters include a measure of the current rate of change of the spatial extent of the object.
  63. 63. An image processor according to claim 61 or 62, wherein the spatial extent of the object is determined as being proportional to the standard deviation in the distance between each feature in the current image that is considered as being associated with the object and the centroid of those features.
  64. 64. An image processor according to any one of claims 61 to 63, wherein the track evaluating module is configured to compare the spatial extent of the object as determined for the current frame with the spatial extent of the object as determined for at least one previous frame in the sequence.
  65. 65. An image processor according to claim 64, wherein the track evaluating module is configured to compare the reciprocal of the spatial extent of the object as determined for the current frame with the reciprocal of the spatial extent of the object as determined for at least one previous frame in the sequence.
  66. 66. An image processor according to claim 65, wherein the track evaluating module is configured to model the change in the reciprocal of the spatial extent of the object between the frames by using a straight line function and to extrapolate from that function to estimate a point in time at which the reciprocal will reach zero, said point in time being used to estimate a time to collision.
  67. 67. An image processor according to claim 66, wherein the track evaluating module is configured to determine the goodness-of-fit of the straight line function, and to assign a likelihood of collision based on the goodness-of-fit.
  68. 68. An image processor according to any one of claims 46 to 66, wherein for each one of the image frames from the second image frame onwards, the tracking module is configured to predict how the track parameters in the respective tracks will evolve during the interval between receiving that frame and the next frame in the sequence.
  69. 69. An image processor according to claim 68, wherein, in determining whether to assign a pair of associated features to an existing track, the tracking module is configured to determine if the posited movement of the object part associated with that pair of features is consistent with the predicted evolution of the track parameters.
  70. 70. An image processor according to any one of claims 68 to 69, wherein each track parameter is modelled as having a respective probability distribution and for each parameter, the value of the parameter that is stored in memory reflects the current mean of the respective distribution.
  71. 71. An image processor according to claim 70, wherein the probability distributions are approximated as being Gaussian.
  72. 72. An image processor according to claim 70 or 71, wherein the mean and covariance of each track parameter is stored for each image frame, the mean and covariance being updated each time the track parameter is itself updated.
  73. 73. An image processor according to any one of claims 70 to 72, wherein, in predicting how the track parameters will evolve in the interval between image frames, the tracking module is configured to define a new probability distribution for the values of the respective parameters in the next frame.
  74. 74. An image processor according to any one of claims 56 to 73, wherein the track parameters include the number of pairs of associated features presently assigned to the track.
  75. 75. An image processor according to claim 74 as dependent on claim 56, wherein for each received image, the number and position of features identified in that image is assumed to be statistically independent of each other and of the number and position of features detected in previous images.
  76. 76. An image processor according to claim 75, wherein for each received image, the number of features that will be identified in the image is modelled as being drawn from a Poisson distribution.
  77. 77. An image processor according to any one of claims 68 to 76 as dependent on claim 56, wherein in identifying pairs of associated features as candidates for assigning to an existing track, the tracking module is configured to model features that belong to the track in the current frame as having positions drawn from an isotropic Gaussian distribution.
  78. 78. An image processor according to any one of claims 68 to 77 as dependent on claim 58, wherein when predicting the evolution of track parameters for a given track, the tracking module is configured to model the object to which the respective group of object parts is posited to belong as undergoing a rigid translation between successive frames.
  79. 79. An image processor according to claim 78 wherein when predicting the evolution of track parameters for a given track, the tracking module is configured to model the object to which the respective group of object parts is posited to belong as undergoing a combination of a rigid translation and a uniform expansion between successive frames.
  80. 80. An image processor according to any one of claims 59 to 79 as dependent on claim 58, wherein the tracking module is configured to model the motion of the object to which the respective group of object parts is posited to belong in 3D using a statistical model, in which the object's acceleration is assumed to be a white noise process.
  81. 81. An image processor according to claim 80 as dependent on claim 68, wherein the tracking module is configured to predict how the track parameters in a respective track will evolve by taking into account the systematic dependence of the velocity of features assigned to that track and the rate of change of the spatial extent on each other and on the position of the object in the image due to the projection of the object from 3D onto the 2D image
  82. 82. An image processor according to claim 81, wherein the acceleration white noise process is assumed to have a power spectral density proportional to the square of the object's 3D velocity.
  83. 83. An image processor according to any one of claims 59 to 79, further comprising an optical flow module for performing measurements of optical flow for each pair of successively captured images to identify background features in the image, the feature association module being configured to exclude the background features from analysis when identifying pairs of associated features.
  84. 84. An image processor according to claim 83 wherein for each image frame, the feature identification module is configured to determine the number of features in the image and the optical flow module is configured to perform the measurements of optical flow if the number is above a threshold.
  85. 85. An image processor according to any one of claims 45 to 84, wherein in the event that no new pairs of associated features have been assigned to a track after a predetermined number of frames, the tracking module is configured to delete the track from memory.
  86. 86. An image processor according to any one of claims 57 to 85 as dependent on claim 56, wherein for each subsequent frame, in the event that more than one pair of associated features is not assigned to an existing track, the tracking module is configured to decide whether to assign those pairs to the same new track.
  87. 87. An image processor according to claim 86, wherein the decision is based at least in part on whether the features in the current frame that belong to those pairs lie within a certain proximity of one another.
  88. 88. An image processor according to claim 87, wherein the decision is based at least in part on whether the velocities of the features in the current frame that belong to those pairs lie within a certain range of one another.
  89. 89. An image processor according to any one of claims 45 to 88, further comprising an alert module for issuing an alert when a collision with an object is deemed to be imminent.
  90. 90. An image processor according to any one of claims 45 to 89 wherein the one or more image sensors are video cameras.
  91. 91. An imaging system comprising an image sensor and an image processor according to any one of claims 45 to 90.
  92. 92. An aircraft comprising an imaging system according to claim 91.
  93. 93. A non-transitory computer readable storage medium comprising computer executable instructions that, when executed by a computer, will cause the computer to carry out a method according to any one of claims 1 to 44.
GB1319610.0A 2013-11-06 2013-11-06 Image processor Active GB2520243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1319610.0A GB2520243B (en) 2013-11-06 2013-11-06 Image processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1319610.0A GB2520243B (en) 2013-11-06 2013-11-06 Image processor

Publications (3)

Publication Number Publication Date
GB201319610D0 GB201319610D0 (en) 2013-12-18
GB2520243A true GB2520243A (en) 2015-05-20
GB2520243B GB2520243B (en) 2017-12-13

Family

ID=49767761

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1319610.0A Active GB2520243B (en) 2013-11-06 2013-11-06 Image processor

Country Status (1)

Country Link
GB (1) GB2520243B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034075A (en) * 2018-07-31 2018-12-18 中国人民解放军61646部队 The method of face battle array gazing type remote sensing satellite tracking Ship Target
CN111784760B (en) * 2020-06-22 2023-06-20 太极计算机股份有限公司 Method for correcting radar machine learning extrapolation result by radar linear optical flow extrapolation result

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100256909A1 (en) * 2004-06-18 2010-10-07 Geneva Aerospace, Inc. Collision avoidance for vehicle control systems
US20070210953A1 (en) * 2006-03-13 2007-09-13 Abraham Michael R Aircraft collision sense and avoidance system and method
US20100191391A1 (en) * 2009-01-26 2010-07-29 Gm Global Technology Operations, Inc. multiobject fusion module for collision preparation system
US20100322474A1 (en) * 2009-06-23 2010-12-23 Ut-Battelle, Llc Detecting multiple moving objects in crowded environments with coherent motion regions

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017041303A1 (en) 2015-09-11 2017-03-16 SZ DJI Technology Co., Ltd. Systems and methods for detecting and tracking movable objects
EP3347789A4 (en) * 2015-09-11 2018-08-29 SZ DJI Technology Co., Ltd. Systems and methods for detecting and tracking movable objects
US10198634B2 (en) 2015-09-11 2019-02-05 SZ DJI Technology Co., Ltd. Systems and methods for detecting and tracking movable objects
US10650235B2 (en) 2015-09-11 2020-05-12 SZ DJI Technology Co., Ltd. Systems and methods for detecting and tracking movable objects
EP3685352B1 (en) * 2017-09-22 2024-03-06 Robert Bosch GmbH Method and device for evaluating images, operating assistance method, and operating device
RU2668539C1 (en) * 2017-10-26 2018-10-01 Общество с ограниченной ответственностью "ГРАТОН-СК" Method and video system for prevention of collision of aircraft with obstacles

Also Published As

Publication number Publication date
GB2520243B (en) 2017-12-13
GB201319610D0 (en) 2013-12-18
