WO2008142680A2 - Tracking and imaging with image fusion - Google Patents


Publication number
WO2008142680A2
Authority
WO
WIPO (PCT)
Prior art keywords: target, tracking, image, imaging, modality
Application number
PCT/IL2008/000682
Other languages
English (en)
Other versions
WO2008142680A3 (fr)
Inventor
Ron Zohar
David Reinitz
Original Assignee
Rafael Advanced Defense Systems Ltd
Application filed by Rafael Advanced Defense Systems Ltd
Priority to US12/600,839 (published as US20100157056A1)
Publication of WO2008142680A2
Publication of WO2008142680A3


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods involving reference images or patches
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30236 Traffic on road, railway or crossing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • the present invention relates to real-time tracking of one or more targets and, more particularly, to a data fusion method that combines results of a tracking modality, such as ground moving target indicator (GMTI) radar, that lacks sufficient resolution to identify the targets it tracks, with results of an imaging modality, such as video motion detection (VMD), that has sufficient resolution to identify the targets.
  • tracking means producing an estimate of a target's coordinates as a function of time. The estimate so produced is the "track" of the target.
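Although the patent does not give one, the "track" defined above can be sketched as a simple time-indexed coordinate series. The `Track` class, its field names, and the linear-interpolation scheme below are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """A target track: estimated ground coordinates as a function of time."""
    target_id: int
    samples: list = field(default_factory=list)  # (time_s, x_m, y_m) tuples

    def add(self, t, x, y):
        self.samples.append((t, x, y))

    def position_at(self, t):
        """Linearly interpolate the track at time t (assumes sorted samples)."""
        pts = self.samples
        if t <= pts[0][0]:
            return pts[0][1:]
        for (t0, x0, y0), (t1, x1, y1) in zip(pts, pts[1:]):
            if t0 <= t <= t1:
                w = (t - t0) / (t1 - t0)
                return (x0 + w * (x1 - x0), y0 + w * (y1 - y0))
        return pts[-1][1:]

track = Track(target_id=66)
track.add(0.0, 100.0, 200.0)
track.add(10.0, 150.0, 200.0)
print(track.position_at(5.0))  # midpoint of the two samples: (125.0, 200.0)
```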
  • GMTI is a known modality for tracking vehicles moving on the ground from an airborne platform, using Doppler radar.
  • GMTI is an all-weather modality that can monitor vehicular movement in a region that spans tens of kilometers. Nevertheless, GMTI has several limitations.
  • One limitation is that the resolution and the accuracy of GMTI are limited, so that GMTI cannot resolve several closely-spaced targets and cannot identify even isolated targets.
  • Another limitation is inherent in Doppler radar.
  • GMTI senses only a target's velocity component in-line with the Doppler radar apparatus. Therefore, GMTI loses track of a target that halts, or of a target that moves only transverse to the line from the Doppler radar apparatus to the target. GMTI also may lose track of a target that moves behind an obstacle.
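This Doppler blind spot follows directly from the geometry: the sensed quantity is the projection of the target's velocity onto the sensor-to-target line of sight. A minimal sketch (the function name and flat 2-D geometry are assumptions for illustration):

```python
import math

def radial_speed(sensor_xy, target_xy, target_vxy):
    """Component of target velocity along the sensor-to-target line of sight.
    Doppler radar such as GMTI senses only this component."""
    dx = target_xy[0] - sensor_xy[0]
    dy = target_xy[1] - sensor_xy[1]
    r = math.hypot(dx, dy)
    return (target_vxy[0] * dx + target_vxy[1] * dy) / r

sensor = (0.0, 0.0)
target = (1000.0, 0.0)

print(radial_speed(sensor, target, (20.0, 0.0)))  # head-on: full 20 m/s seen
print(radial_speed(sensor, target, (0.0, 20.0)))  # transverse: 0.0 -- track lost
print(radial_speed(sensor, target, (0.0, 0.0)))   # halted: 0.0 -- track lost
```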
  • VMD is another known modality for tracking vehicles moving on the ground from an airborne platform.
  • a digital video camera is used to acquire many successive frames that image the region being monitored. Moving targets are identified by comparing successive frames.
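The frame-comparison step can be sketched as thresholded frame differencing. Real VMD systems first register successive frames to cancel platform motion; this toy version omits that and uses tiny synthetic frames:

```python
def moving_pixels(frame_a, frame_b, threshold=10):
    """Flag pixels whose intensity changed between successive frames.
    A crude sketch of video motion detection (VMD); real systems first
    register the frames to cancel platform motion."""
    return [
        [abs(a - b) > threshold for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

# 4x4 synthetic frames: a bright "vehicle" moves one pixel to the right.
frame1 = [[0, 0, 0, 0],
          [0, 200, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
frame2 = [[0, 0, 0, 0],
          [0, 0, 200, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]

mask = moving_pixels(frame1, frame2)
print([(r, c) for r in range(4) for c in range(4) if mask[r][c]])
# the old and the new vehicle positions both register as motion
```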
  • the airborne platform carries a navigation mechanism, typically based on one or more GPS receivers and on an inertial measurement unit, for determining the aircraft's absolute position and absolute orientation in real time. This information, combined with elevation information in the form of a digital terrain map, is used to orient the video camera relative to the aircraft so that the video camera points at a desired position on the ground.
  • This information also is combined with the orientation of the video camera relative to the aircraft when each frame is acquired and with the digital terrain map to associate a corresponding absolute position with the pixels of the frame that correspond to a moving target and so to determine the absolute position of the moving target.
  • the frame is registered to an appropriate digital description of the region being monitored, for example to the digital terrain map or to a digital orthophotograph of the region being monitored, in order to determine the absolute position of the moving target.
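The geolocation step can be illustrated with a heavily simplified model: a flat terrain plane instead of a digital terrain map, and a camera described only by yaw, depression angle, and focal length. All names and the per-pixel angle treatment are illustrative assumptions:

```python
import math

def pixel_to_ground(cam_xyz, yaw, pitch, focal_px, px, py):
    """Project an image pixel to a ground coordinate, assuming flat terrain
    at elevation 0 and a camera described by yaw (heading) and pitch
    (depression angle below horizontal). A real system would instead
    intersect the pixel's ray with a digital terrain map."""
    # Per-pixel angular offsets from the boresight, from pixel position
    # and focal length (both in pixels).
    dpitch = math.atan2(py, focal_px)
    dyaw = math.atan2(px, focal_px)
    total_pitch = pitch + dpitch
    total_yaw = yaw + dyaw
    # Horizontal distance to where the ray meets the z = 0 plane.
    ground_range = cam_xyz[2] / math.tan(total_pitch)
    x = cam_xyz[0] + ground_range * math.cos(total_yaw)
    y = cam_xyz[1] + ground_range * math.sin(total_yaw)
    return (x, y)

# Camera 1000 m up, heading along +x (yaw 0), 45 deg depression:
# the boresight pixel (0, 0) lands about 1000 m ahead of the aircraft.
print(pixel_to_ground((0.0, 0.0, 1000.0), 0.0, math.radians(45), 2000.0, 0.0, 0.0))
```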
  • the size of the region imaged in a video frame is adjustable, using a zoom lens of the video camera, from (at typical aerial platform altitudes) several kilometers at the lowest zoom setting down to on the order of several meters at the highest zoom setting.
  • the zoom lens typically is set so that several image pixels correspond to each target being tracked.
  • VMD resolves individual vehicles and locates the vehicles with an accuracy of a few meters. The vehicles then can be identified according to their visual signatures. This high resolution comes at the expense of very limited areal coverage as compared with the areal coverage available using GMTI.
  • VMD needs a clear line of sight to the targets being tracked. So, for example, an aircraft that uses GMTI to track vehicles can fly above cloud cover, whereas an aircraft that uses VMD to track vehicles must fly below cloud cover.
  • a method of monitoring a target including the steps of: (a) tracking the target using a tracking modality, thereby obtaining an estimated track of the target; (b) imaging the target using an imaging modality, thereby obtaining an image of the target; (c) associating the estimated track with the image; and (d) displaying at least one datum related to the image along with the estimated track.
  • a system for monitoring a target including: (a) a tracking subsystem for tracking the target, thereby obtaining an estimated track of the target; (b) an imaging subsystem, separate from the tracking subsystem, for imaging the target, thereby obtaining an image of the target; (c) an association mechanism for associating the estimated track with the image; and (d) a display mechanism for displaying at least one datum related to the image along with the estimated track.
  • a method of monitoring a plurality of targets including the steps of: (a) tracking the targets using a tracking modality, thereby obtaining, for each of at least one target group that includes a respective at least one of the targets, a respective estimated track; (b) for each target group: (i) based at least in part on the respective estimated track, imaging and tracking each respective target of the target group using a combined imaging and tracking modality, thereby obtaining a respective image of each respective target, and (ii) associating the respective estimated track with the respective at least one image; and (c) selectively displaying at least one estimated track along with, for at least one image associated therewith, at least one datum related to that image.
  • a system for monitoring a plurality of targets including: (a) a tracking subsystem for tracking the targets, thereby obtaining, for each of at least one target group that includes a respective at least one of the targets, a respective estimated track; (b) a plurality of imaging subsystems for imaging the targets, each imaging subsystem for imaging a respective one of the targets, thereby obtaining a respective image that depicts that target; (c) an association mechanism for associating each image with the respective estimated track of the target group that includes the target depicted by that image; and (d) a display mechanism for selectively displaying at least one estimated track along with at least one datum related to at least one image that is associated therewith.
  • a method of monitoring a target including the steps of: (a) tracking the target, using a tracking modality; and (b) in response to a degradation of the tracking of the target: (i) acquiring an image of a region estimated, based at least in part on the tracking of the target, to include the target, and (ii) locating the target in the region, based at least in part on at least a portion of the image.
  • a method of monitoring at least one target including the steps of: (a) tracking a first target, thereby obtaining an estimated track of the first target; (b) acquiring an image of a first region that includes the first target; (c) associating the estimated track with the image of the first target; (d) acquiring an image of a second region; and (e) comparing at least one datum related to the image of the first region to at least one datum related to the image of the second region to determine whether the first target is depicted in the image of the second region.
  • a method of imaging a target including the steps of: (a) obtaining an image of a region that includes the target; (b) determining coordinates of the target; (c) based at least in part on the coordinates, aiming a combined imaging and tracking modality at the target; (d) tracking and imaging the target, using the combined imaging and tracking modality; and (e) based at least in part on the tracking by the combined imaging and tracking modality, extracting, from the image, a portion of the image that depicts the target.
  • a system for imaging a target including: (a) a tracking subsystem for tracking the target, thereby obtaining a first estimated track of the target, the first estimated track including coordinates of the target; (b) a combined imaging and tracking subsystem for imaging, according to the coordinates, a region that includes the target, and for then tracking the target, thereby providing a second estimated track of the target; and (c) an extraction mechanism for extracting, from the image, a portion of the image that depicts the target, the extracting being based at least in part on the second estimated track.
  • a method of selectively monitoring a plurality of targets including the steps of: (a) imaging the targets, substantially simultaneously, thereby providing a respective image of each target; (b) displaying the images collectively; (c) based at least in part on visual inspection of the images, selecting one of the targets; and (d) devoting a resource to the one target.
  • a system for selectively monitoring a plurality of targets including: (a) at least one imaging modality for substantially simultaneously imaging the targets to provide a respective image of each target; (b) a display mechanism for displaying the images collectively; (c) a selection mechanism for selecting one of the images on the display mechanism; and (d) a tracking modality for tracking the respective target of the selected image.
  • a tracking modality provides an estimated track of the target and an imaging modality provides an image of the target.
  • by "image" is meant herein either a single video frame of a region that includes the target or a plurality of such frames (e.g. a video clip).
  • the track of the target that is estimated by the tracking modality is associated with the image of the target that is acquired by the imaging modality, and at least one datum that is related to the image is displayed along with the estimated track.
  • the datum or data that is/are displayed could be, for example, the image itself, a portion of the image, results of Automatic Target Recognition (ATR) processing of the image, or some other output of image processing such as the color of the target.
  • the data that are displayed typically are a portion of the image, possibly accompanied by textual results of the ATR processing.
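The association step described above can be sketched as gated nearest-neighbour matching between the tracker's predicted position and the imaging modality's detections. The gate size, function name, and data layout below are illustrative assumptions:

```python
def associate(track_pos, detections, gate_m=50.0):
    """Associate a GMTI track with the nearest VMD detection inside a gate.
    track_pos: (x, y) estimated by the tracker at the frame's timestamp.
    detections: dict of detection_id -> (x, y) from the imaging modality.
    Returns the matched detection id, or None if nothing falls in the gate."""
    best_id, best_d2 = None, gate_m * gate_m
    for det_id, (x, y) in detections.items():
        d2 = (x - track_pos[0]) ** 2 + (y - track_pos[1]) ** 2
        if d2 <= best_d2:
            best_id, best_d2 = det_id, d2
    return best_id

dets = {"A": (102.0, 198.0), "B": (400.0, 400.0)}
print(associate((100.0, 200.0), dets))  # detection A is within the 50 m gate
print(associate((0.0, 0.0), dets))      # no detection nearby -> None
```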
  • the imaging modality is a combined imaging and tracking modality.
  • the imaging modality includes video motion detection.
  • the target is tracked by the combined imaging and tracking modality, and the tracking of the target by the tracking modality is corrected according to the tracking of the target by the combined imaging and tracking modality.
  • the target is identified, based at least in part on at least a portion of the image of the target and on the estimated track.
  • the identifying of the target provides the datum or data that is/are displayed along with the estimated track.
  • the tracking of the target by the tracking modality is corrected in accordance with the identifying of the target.
  • the associating includes, optionally, either mapping coordinates used by the tracking modality into coordinates used by the imaging modality or mapping coordinates used by the imaging modality into coordinates used by the tracking modality.
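One plausible instance of such a coordinate mapping, sketched under the assumption that the tracking modality reports geodetic coordinates and the imaging modality works in a local east/north frame. The equirectangular approximation is an illustrative choice, not the patent's method:

```python
import math

EARTH_RADIUS_M = 6371000.0

def geodetic_to_local(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    """Map geodetic coordinates (as a GMTI tracker might report) into a
    local east/north frame centred on a reference point (as an imaging
    subsystem might use). Equirectangular approximation -- adequate over
    the few tens of kilometres of a GMTI arena."""
    east = math.radians(lon_deg - ref_lon_deg) * EARTH_RADIUS_M \
        * math.cos(math.radians(ref_lat_deg))
    north = math.radians(lat_deg - ref_lat_deg) * EARTH_RADIUS_M
    return (east, north)

# One degree of latitude north of the reference is roughly 111 km north.
east, north = geodetic_to_local(33.0, 35.0, 32.0, 35.0)
print(round(east, 1), round(north, 1))
```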
  • the displaying of at least a portion of the image along with the estimated track is effected either substantially simultaneously with the associating, in real time, or subsequent to the associating.
  • the delayed displaying is of archived versions of the estimated track and the datum or data.
  • the estimated track and the image are archived, and the datum or data that is/are displayed are provided by identifying the target, based at least in part on the archived estimated track and on at least a portion of the archived image.
  • the display of the archived estimated track and of the (portion of the) archived image is effected upon the request of the operator of the system at which these data are archived.
  • the imaging includes pointing the imaging modality at the target in accordance with the estimated track.
  • the imaging modality is moved (as opposed to just pointed) to an appropriate vantage point, in accordance with the estimated track.
  • the imaging modality is moved by moving a platform on which the imaging modality is mounted. It often is wise to select the vantage point with reference to the target's direction of motion. Therefore, most preferably, the selection of the vantage point is performed at least in part according to the estimated track. Also most preferably, the selection of the vantage point is performed at least in part in accordance with the location of an object, for example a terrain feature such as a cliff, as determined e.g. from a digital terrain map, or an artificial structure such as a building, as determined e.g. from a digital structure map, that partly hides the target.
  • the displaying is effected at a location different from the location at which the associating is effected.
  • the estimated track to be displayed and its associated image-related data are transmitted to the location at which the estimated track and its associated image-related data are displayed.
  • the image-related data include at least a portion of the image that depicts the target.
  • a corresponding system of the present invention includes a tracking subsystem that obtains an estimated track of the target, an imaging subsystem that is separate from the tracking subsystem and that obtains an image of the target, an associating mechanism for associating the tracking with the imaging and a display mechanism for displaying at least one datum that is related to the image along with the estimated track.
  • the system includes at least one vehicle, such as an aircraft, on which the tracking subsystem and the imaging subsystem are mounted. More preferably, the system includes a plurality of such vehicles, and the tracking subsystem and the imaging subsystem are mounted on different vehicles. Most preferably, the system includes a wireless mechanism for exchanging, between or among the vehicles, the results of the tracking and the imaging. Also most preferably, the mechanism for associating the tracking with the imaging is distributed between or among the vehicles.
  • a plurality of targets are tracked by a tracking modality, thereby obtaining, for each of one or more target groups, each of which includes a respective one or more of the targets, a respective estimated track of the target group.
  • each target is imaged and tracked by a combined imaging and tracking modality to provide a respective image of that target, and the respective estimated track of that target's group is associated with at least a portion of that target's respective image.
  • At least one of the estimated tracks is selected for display and is displayed along with data that are related to its associated image(s).
  • the displaying is effected at a location different from the location at which the associating is effected.
  • the estimated track(s) to be displayed and its/their associated image-related data are transmitted to the location at which the estimated track(s) and its/their associated image-related data are displayed.
  • the data that are displayed include only portions of the images.
  • each image includes a plurality of frames and the data that are displayed include, for each image, only a portion of each frame of the image.
  • the data that are displayed along with the selected estimated track(s) include textual information about at least a portion of at least one of the images to which the data are related.
  • textual information include the output of ATR processing, the color(s) of the target(s) and the size(s) of the target(s). Displaying textual information about the target(s) facilitates seeking information about the target(s) in a database.
  • a corresponding system of the present invention includes a tracking subsystem that obtains respective estimated tracks of the target groups; a plurality of imaging subsystems, each of which obtains an image of a respective one of the targets; an associating mechanism for associating the estimated tracks with the corresponding images; and a display mechanism for displaying each estimated track along with data that are related to the corresponding image(s).
  • the system also includes a plurality of vehicles, such as aircraft, and the tracking subsystem and each of the imaging subsystems is mounted on a respective vehicle.
  • a target is tracked by a tracking modality until the tracking modality senses or predicts degradation (up to and including cessation) of its tracking of the target. Then, an image of a region that is estimated, based on the tracking, to include the target is acquired, and, based at least in part on at least a portion of the image, the target is located in the region.
  • the imaging modality is a combined imaging and tracking modality, and tracking of the target is resumed using the combined imaging and tracking modality.
  • the combined imaging and tracking modality includes video motion detection.
  • acquiring the image of the region includes pointing an imaging modality, which is used to acquire the image, at the region.
  • an imaging modality is moved, in response to the degradation of the tracking by the tracking modality, to an appropriate vantage point.
  • the image of the region then is acquired using that imaging modality.
  • the imaging modality is moved by moving a platform on which the imaging modality is mounted.
  • the tracking by the tracking modality provides an estimated track of the target, and the selection of the vantage point is performed at least in part according to the estimated track.
  • the selection of the vantage point is performed at least in part in accordance with the location of an object that partly hides the target.
  • locating the target in the region is based at least in part on a thermal contrast between the target and the region.
  • acquiring the image of the region and locating the target in the region are effected using a combined imaging and tracking modality such as video motion detection.
  • the tracking modality is a combined imaging and tracking modality, and, as part of locating the target in the region, at least one datum related to one or more images acquired by the combined imaging and tracking modality is compared to the image of the region that is estimated to include the target.
  • the target is both tracked and imaged.
  • the image of the target that is acquired during the tracking is cross-correlated with at least a portion of the image of the region.
  • a plurality of images of the target is acquired.
  • the images are combined, for example by averaging, and the target is located in the region by cross-correlating the combined image with at least a portion of the image of the region.
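The combine-and-correlate step can be sketched as pixel averaging followed by a zero-mean cross-correlation search over the region image. The toy 2x2 template and the scoring function are illustrative assumptions:

```python
def average_images(images):
    """Combine several small images of the same target by pixel averaging."""
    h, w = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / len(images)
             for c in range(w)] for r in range(h)]

def best_match(template, region):
    """Slide the template over the region and return the offset with the
    highest (zero-mean) cross-correlation score -- a toy sketch of locating
    a previously imaged target inside a new region image."""
    th, tw = len(template), len(template[0])
    tmean = sum(map(sum, template)) / (th * tw)
    best, best_score = None, float("-inf")
    for r in range(len(region) - th + 1):
        for c in range(len(region[0]) - tw + 1):
            patch = [row[c:c + tw] for row in region[r:r + th]]
            pmean = sum(map(sum, patch)) / (th * tw)
            score = sum((template[i][j] - tmean) * (patch[i][j] - pmean)
                        for i in range(th) for j in range(tw))
            if score > best_score:
                best, best_score = (r, c), score
    return best

looks = [[[9, 9], [9, 1]], [[11, 11], [11, 3]]]  # two noisy looks at a target
template = average_images(looks)                  # -> [[10, 10], [10, 2]]
region = [[0, 0, 0, 0],
          [0, 10, 10, 0],
          [0, 10, 2, 0],
          [0, 0, 0, 0]]
print(best_match(template, region))  # target found at offset (1, 1)
```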
  • the tracking and the imaging are done together, using a combined imaging and tracking modality such as VMD.
  • a second target is tracked by a combined imaging and tracking modality (either the same imaging and tracking modality or a different imaging and tracking modality that also preferably includes VMD)
  • at least one datum related to an image acquired by one of the modalities is compared to at least one datum related to an image acquired by the other modality to determine whether the second target is actually the same as the first target.
  • the image of the first target need not be acquired by a combined tracking and imaging modality, but may be acquired by an imaging modality, with the purpose of the image comparison being to determine whether the target presently being tracked had been previously imaged in a different context.
  • the at least one datum that is related to that image is at least a portion of the image that depicts the respective target of that image.
  • the comparing of the two images includes cross-correlating those two image portions.
  • the preferred method used by the present invention to obtain portions of images of targets for display along with estimated target tracks constitutes an invention in its own right.
  • an image of a region that includes a target is obtained, and the coordinates of the target are determined.
  • a combined imaging and tracking modality is aimed at the target and is used to track and image the target.
  • a portion of the image that depicts the target is extracted from the image.
  • the image of the region that includes the target is obtained as part of the tracking and imaging of the target by the combined imaging and tracking modality.
  • the image of the region that includes the target is obtained, as in the variant described above of the "resumed tracking" aspect of the present invention, separately from the tracking and imaging of the target by the combined imaging and tracking modality.
  • the coordinates of the target are determined using a tracking modality.
  • the tracking modality is a modality such as GMTI that has a wider field of view than the combined imaging and tracking modality, so that the coordinates determined by the tracking modality are only approximate coordinates.
  • the reason for using two different tracking modalities is that the wide-FOV modality can monitor a relatively large arena, within which the narrower-FOV combined imaging and tracking modality focuses on a target of interest.
  • the tracking modality and the combined imaging and tracking modality produce respective estimated tracks of the target.
  • the two tracks are associated with each other to confirm that the target being tracked by the combined imaging and tracking modality is in fact the target of interest that is tracked by the tracking modality.
  • associating the two tracks includes transforming the coordinates from the coordinate system used by the tracking modality to the coordinate system used by the combined imaging and tracking modality, or alternatively transforming the coordinates from the coordinate system used by the combined imaging and tracking modality to the coordinate system used by the tracking modality.
  • the steps of aiming the combined imaging and tracking modality at the target, tracking and imaging the target using the combined imaging and tracking modality, and extracting the portion of the image that depicts the target are effected only if it is first determined that the target is moving.
  • the combined imaging and tracking modality includes video motion detection.
  • the combined imaging and tracking modality is moved to an appropriate vantage point.
  • the imaging modality is moved by moving a platform on which the imaging modality is mounted.
  • the tracking by the combined imaging and tracking modality provides an estimated track of the target, and the selection of the vantage point is performed at least in part according to the estimated track.
  • the selection of the vantage point is performed at least in part in accordance with the location of an object that partly hides the target.
  • a corresponding system includes a tracking system for tracking the target, thereby obtaining a first estimated track of the target that includes coordinates of the target; a combined imaging and tracking subsystem for imaging, according to the coordinates, a region that includes the target and then tracking the target, thereby providing a second estimated track of the target; and an extraction mechanism for extracting from the image a portion that depicts the target, with the extraction being based at least in part on the second estimated track.
  • the system also includes an association mechanism for associating the two estimated tracks.
  • the system also includes at least one vehicle, such as an aircraft, on which the tracking subsystem and the combined imaging and tracking subsystem are mounted. More preferably, the system includes a plurality of such vehicles, and the tracking subsystem and the combined imaging and tracking subsystem are mounted on different vehicles. Most preferably, the system includes a wireless mechanism for sending the first estimated track from the tracking subsystem to the combined imaging and tracking subsystem.
  • the combined imaging and tracking subsystem uses video motion detection to image and track the target.
  • the targets are imaged substantially simultaneously to provide a respective image of each target.
  • the purpose of the substantially simultaneous imaging is to allow the selection for intensive monitoring, in real time, of the most interesting target from among a large collection of targets.
  • the images are displayed collectively, for example together on a video display screen, to allow visual inspection of all the images together. Based at least in part on this visual inspection, one of the targets is selected and a resource is devoted to the selected target.
  • the targets also are ranked, and the displaying is effected in accordance with that ranking.
  • the ranking is effected at least in part using automatic target recognition. Note that automatic target recognition is only a most preferred feature of this embodiment of the present invention, unlike visual inspection of the images, which is obligatory.
  • the imaging is effected using at least one combined imaging and tracking modality, as part of tracking of the targets.
  • Examples of a resource that is devoted to the selected target include a tracking modality for tracking the selected target, an imaging modality for further imaging of the selected target, a weapon for attacking the selected target, a display device for dedicated display of a location of the selected target and a mechanism for warning of the presence of the selected target.
  • the imaging of the targets is effected by acquiring at least one image of at least a portion of an arena that includes the targets and extracting each target's respective image from the arena image(s) as a respective subportion of (one of) the arena image(s) that depicts that target.
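The extraction of a per-target subimage from the arena image can be sketched as a border-clamped crop around the target's pixel position; the chip size and names are illustrative assumptions:

```python
def extract_subimage(arena, centre_rc, half_size):
    """Crop a square subimage ("chip") around a target's pixel position,
    clamped to the arena image borders -- the kind of per-target chip that
    the display step shows alongside each estimated track."""
    r0 = max(0, centre_rc[0] - half_size)
    c0 = max(0, centre_rc[1] - half_size)
    r1 = min(len(arena), centre_rc[0] + half_size + 1)
    c1 = min(len(arena[0]), centre_rc[1] + half_size + 1)
    return [row[c0:c1] for row in arena[r0:r1]]

# 6x6 synthetic arena image whose pixel value encodes its position.
arena = [[r * 10 + c for c in range(6)] for r in range(6)]
chip = extract_subimage(arena, (2, 3), 1)
print(chip)  # 3x3 chip centred on pixel (2, 3)
```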
  • a corresponding system of the present invention includes at least one imaging modality for substantially simultaneously imaging the targets to provide a respective image of each target, a display mechanism for displaying the images collectively, a selection mechanism for selecting one of the images on the display mechanism, and a resource that can be devoted to the respective target of the selected image.
  • FIG. 1 is a high level block diagram of a GMTI subsystem of the present invention;
  • FIG. 2 is a high level block diagram of a VMD subsystem of the present invention;
  • FIG. 3 illustrates a typical deployment of the subsystems of FIGs. 1 and 2;
  • FIG. 4 illustrates a method of associating VMD tracks with GMTI tracks;
  • FIG. 5 is a portion of an image acquired by the subsystem of FIG. 2, with subimages of targets outlined;
  • FIG. 6 is a data flow diagram of one embodiment of the present invention;
  • FIG. 7 shows a display of subframes that depict targets;
  • FIG. 8 shows a display of some of the subframes of FIG. 7 along with the associated estimated tracks.
  • the present invention is of a data fusion method and system which can be used to track and identify moving targets. Specifically, the present invention can be used to track and identify enemy vehicles on a battlefield.
  • FIGS. 1 and 2 are high level block diagrams of, respectively, a GMTI subsystem 10 and a VMD subsystem 30 of the present invention.
  • GMTI subsystem 10 includes components found in a prior art GMTI system: a radar transceiver 12 and a control unit 14.
  • Control unit 14 includes, among other subcomponents, a processor 16 and a memory 18.
  • Memory 18 is used to store conventional GMTI software for aiming radar transceiver 12 at regions of interest and for processing data received from radar transceiver 12 to track targets. This GMTI software is executed by processor 16 and the resulting tracks are stored in memory 18.
  • Memory 18 also is used to store software that, when executed by processor 16, implements the method of the present invention as described below.
  • VMD subsystem 30 includes components found in a prior art VMD system: a gimbal-mounted digital video camera 32 and a control unit 34.
  • Control unit 34 includes, among other subcomponents, a processor 36 and a memory 38.
  • Memory 38 is used to store conventional VMD software for aiming video camera 32 at regions of interest and for processing data received from video camera 32 to track and identify targets. This VMD software is executed by processor 36 and the resulting tracks and target identities are stored in memory 38.
  • Memory 38 also is used to store software that, when executed by processor 36, implements the method of the present invention as described below.
  • subsystems 10 and 30 include respective RF communication transceivers 20 and 40 for exchanging data.
  • control unit 14 of subsystem 10 uses communication transceiver 20 to transmit GMTI tracks to subsystem 30;
  • control unit 34 of subsystem 30 uses communication transceiver 40 to receive these GMTI tracks and aims video camera 32 accordingly as described below.
  • FIG. 3 shows a typical method of deploying the combined system 10 and 30 of the present invention.
  • the combined system is deployed above a battlefield 50 on which enemy vehicles 66, 68, 70 and 72 move.
  • One subsystem 10 is mounted on an aircraft 54.
  • Two subsystems 30 are mounted on respective aircraft 56 and 58.
  • subsystems 30 are shown mounted in pods underneath the fuselages of their respective aircraft 56 and 58. More commonly, subsystems 30 are mounted within the fuselages of their respective aircraft 56 and 58, just as subsystem 10 is mounted within the fuselage of aircraft 54 as shown.
  • Aircraft 54, with GMTI subsystem 10, flies above a cloud cover 52.
  • aircraft 56 and 58 are unmanned.
  • Aircraft 54 may be either manned or unmanned.
  • Zigzag lines 74, 76 and 78 represent RF signals exchanged by communication transceivers 20 and 40 of subsystems 10 and 30. These RF signals represent the data derived by subsystems 10 and 30 in the course of tracking and identifying enemy vehicles 66, 68, 70 and 72. These RF signals also represent periodic transmissions of the respective locations of aircraft 54, 56 and 58, as determined by navigation systems (not shown) on board aircraft 54, 56 and 58.
  • a first preferred embodiment of the present invention now will be described.
  • the primary purpose of this preferred embodiment is to exploit the high resolution of narrow-FOV VMD, relative to GMTI, to facilitate the identification of moving targets tracked by GMTI.
  • GMTI subsystem 10 of aircraft 54 monitors vehicular movement at relatively low resolution over a relatively wide portion of battlefield 50.
  • the field of view of GMTI subsystem 10 of aircraft 54 is indicated in Figure 3 by two bounding dashed lines 60.
  • GMTI subsystem 10 of aircraft 54 acquires and tracks enemy vehicles 66, 68, 70 and 72 and transmits the GMTI tracks that its control unit 14 estimates for enemy vehicles 66, 68, 70 and 72 to VMD subsystems 30 of aircraft 56 and 58.
  • VMD subsystem 30 of aircraft 56, recognizing that aircraft 56 is closer than aircraft 58 to enemy vehicles 66, 68 and 70, aims its video camera 32 at enemy vehicles 66, 68 and 70 according to their coordinates as estimated by GMTI subsystem 10 and then tracks enemy vehicles 66, 68 and 70.
  • the field of view of VMD subsystem 30 of aircraft 56 is indicated in Figure 3 by two bounding dashed lines 62. This field of view is considerably narrower than the field of view of GMTI subsystem 10 of aircraft 54.
  • VMD subsystem 30 of aircraft 56 associates the VMD tracks that it estimates for enemy vehicles 66, 68 and 70 with the corresponding estimated GMTI tracks that it received from GMTI subsystem 10 of aircraft 54.
  • VMD subsystem 30 of aircraft 58, recognizing that aircraft 58 is closer than aircraft 56 to enemy vehicle 72, aims its video camera 32 at enemy vehicle 72 according to its coordinates as estimated by GMTI subsystem 10 and then tracks enemy vehicle 72.
  • the field of view of VMD subsystem 30 of aircraft 58 is indicated in Figure 3 by two bounding dashed lines 64. This field of view also is considerably narrower than the field of view of GMTI subsystem 10 of aircraft 54.
  • VMD subsystem 30 of aircraft 58 associates the VMD track that it estimates for enemy vehicle 72 with the corresponding estimated GMTI track that it received from GMTI subsystem 10 of aircraft 54.
  • Figure 4 illustrates one method that VMD subsystem 30 of aircraft 56 uses to associate the estimated VMD tracks it derives with the estimated GMTI tracks received from GMTI subsystem 10 of aircraft 54.
  • Figure 4 is a plan view of the portion of battlefield 50 that lies within the field of view of VMD subsystem 30 of aircraft 56.
  • "+"s 80A through 80F mark the coordinates of enemy vehicles 66 and 68 as estimated by GMTI subsystem 10 of aircraft 54 at successive times tA through tF. Note that the resolution of GMTI subsystem 10 is too coarse to distinguish enemy vehicle 66 from enemy vehicle 68, so both enemy vehicles are assigned the same coordinates 80. Coordinates 80 constitute an estimated GMTI track of enemy vehicles 66 and 68.
  • VMD subsystem 30 of aircraft 56 does not know which of its estimated VMD tracks 84, 86 and 88 to associate with estimated GMTI track 80 and which of its estimated VMD tracks 84, 86 and 88 to associate with estimated GMTI track 82. So VMD subsystem 30 of aircraft 56 uses known algorithms to compare estimated VMD tracks 84, 86 and 88 to estimated GMTI tracks 80 and 82 on the basis of mutual similarities. In this example, tracks 80, 84 and 86 all represent vehicles turning to the left and tracks 82 and 88 both represent vehicles turning to the right, so VMD subsystem 30 of aircraft 56 associates estimated VMD tracks 84 and 86 with estimated GMTI track 80 and associates estimated VMD track 88 with estimated GMTI track 82.
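The turn-direction comparison described above can be sketched as follows. This is an illustrative association heuristic only, not the patent's actual algorithm; the track representation (time-ordered lists of (x, y) points) is assumed for the sketch.

```python
import math

def headings(track):
    """Headings (radians) between successive track points."""
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(track, track[1:])]

def net_turn(track):
    """Total signed heading change; positive = turning left, negative = right."""
    h = headings(track)
    total = 0.0
    for a, b in zip(h, h[1:]):
        # Wrap each heading change to [-pi, pi) so turns accumulate correctly.
        total += (b - a + math.pi) % (2 * math.pi) - math.pi
    return total

def associate(vmd_tracks, gmti_tracks):
    """Assign each VMD track to the GMTI track with the most similar net turn."""
    return {vid: min(gmti_tracks,
                     key=lambda gid: abs(net_turn(vt) - net_turn(gmti_tracks[gid])))
            for vid, vt in vmd_tracks.items()}
```

Under this heuristic, two left-turning VMD tracks both map to a left-turning GMTI track, and a right-turning VMD track maps to a right-turning one, as in the example above.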
  • VMD subsystem 30 of aircraft 56 also uses cluster association algorithms such as the algorithms taught in co-pending IL Patent Application No. 162852, entitled “DATA FUSION BY CLUSTER ASSOCIATION", to associate two enemy vehicles 66 and 68 with a single estimated GMTI track 80.
  • FIG. 5 shows a portion of the video frame acquired by VMD subsystem 30 of aircraft 56 at time tF.
  • VMD subsystem 30 of aircraft 56 extracts from this frame three subframes: a subframe 90 that includes the pixels of the larger frame that represent enemy vehicle 66, a subframe 92 that includes the pixels of the larger frame that represent enemy vehicle 68 and a subframe 94 that includes the pixels of the larger frame that represent enemy vehicle 70.
  • VMD subsystem 30 of aircraft 56 flags subframes 90 and 92 as being associated with estimated GMTI track 80 and flags subframe 94 as being associated with estimated GMTI track 82.
  • VMD subsystem 30 of aircraft 56 then transmits subframes 90, 92 and 94 to GMTI subsystem 10 of aircraft 54.
  • VMD subsystem 30 of aircraft 56 zooms in on enemy vehicles 66, 68 and 70, flags the resulting frames as being associated with the corresponding GMTI tracks 80 or 82, and transmits the resulting frames to GMTI subsystem 10 of aircraft 54.
  • Control unit 14 of GMTI subsystem 10 of aircraft 54 compares subframes 90, 92 and 94 to a template library that is stored in its memory 18 and also compares estimated GMTI tracks 80 and 82 to a database of enemy vehicle properties that is stored in memory 18 to tentatively identify enemy vehicles 66, 68 and 70. Based on these tentative identifications, control unit 14 of GMTI subsystem 10 of aircraft 54 adjusts the parameters (e.g., Kalman filter parameters) of the algorithms that control unit 14 of GMTI subsystem 10 of aircraft 54 uses to estimate GMTI tracks 80 and 82.
  • GMTI subsystem 10 of aircraft 54 also transmits estimated GMTI tracks 80 and 82 along with the associated subframes 90, 92 and 94 to a command and control center, where estimated GMTI tracks 80 and 82 are displayed to a field commander along with subframes 90, 92 and 94 and the associated tentative identifications of enemy vehicles 66, 68 and 70.
  • the command and control center is on the ground, but optionally the command and control center is on board aircraft 54.
  • the command and control center is on board aircraft 54, and is illustrated schematically as a command and control computer 55 operationally connected to GMTI subsystem 10, a display terminal 57, operationally connected to command and control computer 55, on which estimated GMTI tracks 80 and 82 and subframes 90, 92 and 94 are displayed, and an input unit 59 such as a computer mouse for selecting subframes that are displayed on display terminal 57, as described below in the context of a second preferred embodiment of the present invention.
  • the field commander performs visual identifications of enemy vehicles 66, 68 and 70 as depicted in subframes 90, 92 and 94. If any of those visual identifications differ from the corresponding tentative identifications, the field commander transmits the correct identification(s) back to GMTI subsystem of aircraft 54.
  • the field commander also optionally prioritizes enemy vehicles to be tracked. More system resources are devoted to tracking high priority enemy vehicles than to tracking low priority enemy vehicles.
  • it is not necessary for VMD subsystems 30 to perform tracking for more than two frames. (At least two frames are required because a VMD subsystem 30 recognizes targets within its field of view by comparing successive images. This also implies that only moving targets can be visually identified.) For example, if a VMD subsystem 30 recognizes a single moving object in its field of view, then the corresponding subframe is associated with the GMTI track without tracking the target. If the contrast between the target tracked by GMTI and its background is sufficiently great in the spectral band used by the VMD subsystem 30 (e.g., if video camera 32 acquires images in the thermal infrared and the target of interest is known to be significantly warmer than its background), it often is not even necessary for the VMD subsystem 30 to acquire more than a single frame to capture a subframe of the target.
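The two-frame recognition step can be illustrated with a minimal frame-differencing sketch. The frame representation (grayscale images as nested lists) and the change threshold are assumed for illustration; a real VMD system would also register the frames and suppress noise.

```python
def moving_object_bbox(frame_a, frame_b, thresh):
    """Bounding box (x0, y0, x1, y1) of pixels that changed between two
    registered grayscale frames; None if nothing moved."""
    xs, ys = [], []
    for y, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
        for x, (pa, pb) in enumerate(zip(row_a, row_b)):
            if abs(pa - pb) > thresh:  # pixel changed significantly
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))
```

The returned box delimits the subframe to associate with the GMTI track when only a single moving object is present.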
  • VMD subsystem 30 of aircraft 56 also transmits to GMTI subsystem 10 of aircraft 54 estimated VMD tracks 84, 86 and 88 and their associations with estimated GMTI tracks 80 and 82.
  • GMTI subsystem 10 of aircraft 54 uses estimated VMD tracks 84, 86 and 88 to correct the estimation of GMTI tracks 80 and 82.
  • estimated GMTI track 80 is biased to the left of estimated VMD tracks 84 and 86
  • estimated GMTI track 82 is biased to the left of estimated VMD track 88.
  • GMTI subsystem 10 of aircraft 54 assumes that estimated VMD tracks 84, 86 and 88 are closer to the truth than estimated GMTI tracks 80 and 82, and moves estimated GMTI tracks 80 and 82 rightward accordingly.
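The rightward shift can be sketched as a constant-bias correction: average the offset from the GMTI positions to the trusted VMD positions and apply it. The time-aligned list-of-(x, y) track format is an assumed simplification of the actual estimator update.

```python
def estimate_bias(gmti_track, vmd_tracks):
    """Mean offset from GMTI positions to the (trusted) VMD positions,
    averaged over all associated VMD tracks and common time indices."""
    dx = dy = 0.0
    n = 0
    for vt in vmd_tracks:
        for (gx, gy), (vx, vy) in zip(gmti_track, vt):
            dx += vx - gx
            dy += vy - gy
            n += 1
    return (dx / n, dy / n)

def correct(gmti_track, bias):
    """Shift every GMTI track point by the estimated bias."""
    bx, by = bias
    return [(x + bx, y + by) for x, y in gmti_track]
```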
  • Even after VMD subsystem 30 of aircraft 56 has transmitted subframes 90, 92 and 94 to GMTI subsystem 10 of aircraft 54, VMD subsystem 30 of aircraft 56 continues to track enemy vehicles 66, 68 and 70 and to transmit its estimated coordinates of enemy vehicles 66, 68 and 70 to GMTI subsystem 10 of aircraft 54, so that GMTI subsystem 10 of aircraft 54 can continue to correct random and systematic errors in its GMTI estimation algorithms.
  • VMD subsystem 30 of aircraft 58 tracks enemy vehicle 72 and sends the associated subframe and estimated VMD track to GMTI subsystem 10 of aircraft 54.
  • the associated data processing and data exchanges are as described above for VMD subsystem 30 of aircraft 56, except that with only one enemy vehicle 72 to track, the association of a GMTI track with a VMD track is trivial.
  • if VMD subsystem 30 of aircraft 56 determines, based on the location of aircraft 56 and the estimated tracks of enemy vehicles 66, 68 and 70, and optionally based also on other information such as a digital terrain map stored in memory 38 of VMD subsystem 30 of aircraft 56, that a different location of aircraft 56 would provide a better vantage point than the present location of aircraft 56 for capturing images of vehicles 66, 68 and 70, then VMD subsystem 30 instructs aircraft 56 to fly to the location with the superior vantage point.
  • the command and control center assigns more than one VMD subsystem 30 to track one or more targets.
  • the aircraft bearing those VMD subsystems 30 fly to suitable vantage points for capturing images of the target(s) from several points of view.
  • Figure 6 is a data flow diagram of this embodiment of the present invention.
  • the functional modules represented by boxes in Figure 6 are labeled with the reference numerals of the corresponding components of subsystems 10 and 30.
  • the data fusion function of the present invention is distributed between subsystems 10 and 30, and could be performed by control unit 14 of GMTI subsystem 10 or by control unit 34 of VMD subsystem 30 or cooperatively by both control units.
  • the data fusion function of the present invention is performed at the command and control center rather than by subsystems 10 and 30.
  • GMTI subsystem 10 provides GMTI tracks, in absolute coordinates, to the data fusion module, which sends the absolute coordinates to control unit 34.
  • Control unit 34 sends control signals to video camera 32 to aim video camera 32 at the targets according to the absolute coordinates that control unit 34 received from the data fusion module.
  • Video camera 32 outputs video frames to control unit 34.
  • Control unit 34 processes these video frames to produce VMD tracks in pixel coordinates, along with associated subframes.
  • Processor 36 of control unit 34 transforms the pixel coordinates to absolute coordinates and sends the transformed tracks and the associated subframes back to the data fusion module.
  • the data fusion module associates the two kinds of tracks and sends them, along with the associated subframes, to command and control computer 55 for display.
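The pixel-to-absolute transformation performed by processor 36 can be sketched as a planar homography, assuming the ground is locally flat and that the 3x3 matrix H (derived from camera calibration and the platform's navigation data) is given. This is an illustrative model, not the patent's stated method.

```python
def apply_homography(H, u, v):
    """Map pixel coordinates (u, v) to ground-plane coordinates via a
    3x3 homography H given as nested lists."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)  # perspective division
```

The same matrix, inverted, would map absolute coordinates back to pixel coordinates when the association is instead done in pixel space.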
  • a second preferred embodiment of the present invention is directed at enabling the identification, as desired, of the most interesting of a large number of moving targets that are being tracked.
  • in the corresponding prior art method, a target tracked by a low-resolution tracking system (for example a GMTI system on an airborne platform) is imaged by a separate, independent imaging system (for example a video system on another airborne platform).
  • the video system sends a video stream to the operator, who identifies the target visually from a real-time display of the video stream.
  • This prior art method is feasible when a relatively small number of targets are tracked by the low resolution tracking system, but not when a relatively large number of targets are tracked simultaneously by the low-resolution tracking system.
  • one of the problems encountered in the selective tracking of a large number of targets by the prior art method is the lack of sufficient bandwidth to transmit all the video streams of all the targets of interest.
  • each GMTI track may have several VMD tracks associated with it, reflecting the fact that a VMD subsystem 30 may resolve several targets where GMTI subsystem 10 sees only one combined target.
  • Each VMD subsystem 30 tracks its target(s) and sends the following, from each video frame of each target that VMD subsystem 30 acquires, to GMTI subsystem 10:
  • the subframe of the target within the larger frame.
  • the emphasis in this second embodiment of the present invention is on reducing the resources needed to monitor a large number of targets and to select relatively high-interest targets for intensive tracking.
  • the resources that are conserved by the present invention include the number of imaging systems needed, the bandwidth needed for transmitting video streams and the number of operators needed at the command and control center (which, like the command and control center of the first embodiment, typically is on the ground but optionally is on board aircraft 54).
  • each VMD subsystem 30 performs its tracking in pixel coordinates and transmits the pixel coordinates to GMTI subsystem 10.
  • GMTI subsystem 10 transforms the absolute target coordinates of its own GMTI track of the target(s) to the equivalent pixel coordinates and then associates the transformed GMTI track with the VMD track(s) in pixel coordinates as described above for the first preferred embodiment. If the command and control center is on the ground, GMTI subsystem 10 transmits the GMTI track of the target(s) to the command and control center along with the subframes of the target(s). Because only subframes are transmitted, this embodiment of the present invention requires much less bandwidth than the corresponding prior art method.
  • the subframes thus acquired are displayed collectively to an operator of the system of Figure 3 using, e.g., display terminal 57.
  • Figure 7 shows an example of one such display of subframes 100.
  • An operator of the system of Figure 3 selects a target of interest by selecting the associated subframe 100, using, e.g., input unit 59, and commands the system of Figure 3 to display that subframe 100 along with its associated estimated track on a map of the arena being monitored.
  • Figure 8 shows an example of such a display.
  • targets of interest are selected visually, rather than via automatic target recognition, although automatic target recognition optionally is used to rank subframes 100 for display in the display of Figure 7.
  • each VMD subsystem 30 is multiplexed among several targets until an operator selects a target of interest.
  • thereafter, one of VMD subsystems 30 is dedicated to tracking that target.
  • each VMD subsystem 30 devotes 0.5 seconds to tracking each of its assigned targets, and five VMD subsystems 30 contribute to two collective displays of 50 subframes 100. Only thirteen VMD subsystems 30 then are needed to feed video streams to five operators, for a total video bandwidth of only 2.5 Mbits/second if full frames are transmitted, and only 50 Kbits/second if only subframes 100 are transmitted as described above.
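The saving from transmitting subframes instead of full frames can be illustrated with raw (uncompressed) bit rates. The frame sizes, bit depth, frame rate, and stream count below are assumed for illustration only and are not the patent's figures; only the ratio between the two cases matters.

```python
def raw_bandwidth_bps(width, height, bits_per_pixel, frames_per_second, streams):
    """Raw (uncompressed) bit rate of several parallel video streams."""
    return width * height * bits_per_pixel * frames_per_second * streams

# Assumed illustrative parameters: five 640x480 8-bit streams at 25 fps,
# versus five 64x48 subframe streams (1/100 the pixel area).
full_frames = raw_bandwidth_bps(640, 480, 8, 25, 5)
subframes = raw_bandwidth_bps(64, 48, 8, 25, 5)
reduction = full_frames // subframes  # 100-fold bandwidth reduction
```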
  • a third preferred embodiment of the present invention addresses a problem inherent in many tracking systems: a single tracking modality often loses track of the targets that it tracks. For example, for the reasons described in the Field and Background section (targets halting, moving transversely or becoming obscured), both a prior art GMTI system and a GMTI subsystem 10 of the present invention typically track any particular moving target for no more than a few minutes.
  • one of VMD subsystems 30 (typically the closest VMD subsystem 30) points its video camera 32 at the last known position of the missing enemy vehicle, or alternatively at a position predicted by extrapolating the last several known positions of the missing enemy vehicle. That VMD subsystem 30 acquires a video frame of its field of view of battlefield 50 and, based on the subframes of the enemy vehicles that are shared among VMD subsystems 30, attempts to locate the missing enemy vehicle in the field of view, for example by seeking pixels in the video frame that resemble the pixels of the subframe of the missing enemy vehicle.
  • the subframe is cross-correlated with the video frame, and a sufficiently high cross-correlation peak is presumed to identify the missing enemy vehicle in the video frame. If that VMD subsystem's video camera 32 is a thermal infrared camera, then the identification of the missing enemy vehicle in the video frame is made easier by the fact that a recently mobile vehicle tends to be hotter than its surroundings and so has a high contrast against its background in an infrared image. If that VMD subsystem 30 succeeds in locating the missing enemy vehicle in its field of view, then that VMD subsystem 30 tracks the missing vehicle.
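The cross-correlation search described above can be sketched with a brute-force normalized cross-correlation, where a peak near 1.0 marks the best candidate position of the missing vehicle. The nested-list image format is assumed; a real system would use an optimized (e.g., FFT-based) implementation.

```python
def best_match(frame, template):
    """Return ((x, y), score): top-left corner of the window in `frame` with
    the highest normalized cross-correlation against `template`."""
    th, tw = len(template), len(template[0])
    fh, fw = len(frame), len(frame[0])
    tvals = [p for row in template for p in row]
    tmean = sum(tvals) / len(tvals)
    tnorm = sum((p - tmean) ** 2 for p in tvals) ** 0.5
    best, best_xy = -2.0, None
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            win = [frame[y + j][x + i] for j in range(th) for i in range(tw)]
            wmean = sum(win) / len(win)
            wnorm = sum((p - wmean) ** 2 for p in win) ** 0.5
            if wnorm == 0 or tnorm == 0:
                continue  # flat window: correlation undefined
            cc = sum((w - wmean) * (t - tmean)
                     for w, t in zip(win, tvals)) / (wnorm * tnorm)
            if cc > best:
                best, best_xy = cc, (x, y)
    return best_xy, best
```

A threshold on the returned score ("a sufficiently high cross-correlation peak") decides whether the match is accepted as the missing vehicle.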
  • VMD subsystem 30 also transmits the new estimated VMD track to GMTI subsystem 10 of aircraft 54. If, according to the new estimated VMD track, the missing vehicle is still moving, GMTI subsystem 10 of aircraft 54 attempts to acquire a new target at the transmitted VMD locations. When GMTI subsystem 10 of aircraft 54 succeeds in re-acquiring and tracking the missing vehicle, joint tracking resumes as described above. Continued joint tracking is useful e.g. for verifying that the target now being tracked is indeed the target that GMTI subsystem 10 lost track of.
  • the track recovery procedure of the third preferred embodiment need not wait for GMTI subsystem 10 of aircraft 54 to actually lose track of one of enemy vehicles 66, 68 or 70.
  • GMTI subsystem 10 invites one of VMD subsystems 30 to join in tracking one or more of vehicles 66, 68 or 70 when GMTI subsystem 10 recognizes existing or imminent degradation in the quality of the tracking performed by GMTI subsystem 10.
  • such an invitation may be triggered by the error bounds computed by the track estimation algorithm of GMTI subsystem 10 exceeding predetermined thresholds, or by GMTI subsystem 10 determining that one of enemy vehicles 66, 68 or 70 is coming to a halt, or by GMTI subsystem 10 determining, with reference to a digital terrain map stored in memory 18 of GMTI subsystem 10, that one of enemy vehicles 66, 68 or 70 is about to enter a topographic feature such as a ravine that obscures that enemy vehicle 66, 68 or 70 from GMTI subsystem 10.
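The three triggers listed above can be sketched as a simple predicate. The threshold values and the inputs (estimator error bound, target speed, terrain-obscuration flag from the digital terrain map) are assumed for illustration.

```python
def should_invite_vmd(error_bound, speed, near_obscuring_terrain,
                      max_error=50.0, min_speed=1.0):
    """True if GMTI tracking quality is degraded or about to degrade
    (illustrative thresholds in meters and meters/second)."""
    return (error_bound > max_error        # estimator uncertainty too large
            or speed < min_speed           # target coming to a halt
            or near_obscuring_terrain)     # e.g. about to enter a ravine
```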
  • initial tracking is performed by GMTI subsystem 10.
  • the scope of the third embodiment of the present invention includes initial tracking by a combined imaging and tracking modality such as VMD subsystem 30.
  • the VMD subsystem 30 that seeks to resume tracking of the lost target acquires a video frame of its field of view of battlefield 50 and compares that video frame, as described above, to the relevant subframes of the target that were acquired, before track of the target was lost, by the VMD subsystem 30 that lost the target.
  • a fourth preferred embodiment of the present invention also reduces the bandwidth needed for joint tracking of targets by a tracking modality and an imaging modality, particularly if the command and control center is based on the ground.
  • the subframes of the targets are not displayed along with the estimated GMTI tracks in real time. Instead, the subframes are archived in memories 38 of VMD subsystems 30, along with appropriate metadata, such as time stamps, that allow the command and control computer to subsequently display the estimated GMTI tracks along with the associated subframes. Later, the subframes are transmitted to the command and control center for display. Because the subframes need not be transmitted in real time, they can be transmitted at a slower rate, and hence in a lower bandwidth channel, than is required in the real time embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention concerns a method of surveilling a target, comprising the steps of: tracking the target with a tracking modality to obtain an estimated location of the target; imaging the target with an imaging modality to obtain an image of the target; associating the estimated location with the image; and displaying at least a datum associated with the image together with the estimated location.
PCT/IL2008/000682 2007-05-20 2008-05-20 Poursuite et imagerie de fusion d'images WO2008142680A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/600,839 US20100157056A1 (en) 2007-05-20 2008-05-20 Tracking and imaging data fusion

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IL183320A IL183320A (en) 2007-05-20 2007-05-20 Merge information from tracking and imaging
IL183320 2007-05-20

Publications (2)

Publication Number Publication Date
WO2008142680A2 true WO2008142680A2 (fr) 2008-11-27
WO2008142680A3 WO2008142680A3 (fr) 2010-02-25

Family

ID=40032261

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2008/000682 WO2008142680A2 (fr) 2007-05-20 2008-05-20 Poursuite et imagerie de fusion d'images

Country Status (2)

Country Link
IL (1) IL183320A (fr)
WO (1) WO2008142680A2 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100245587A1 (en) * 2009-03-31 2010-09-30 Kabushiki Kaisha Topcon Automatic tracking method and surveying device
CN106408940A (zh) * 2016-11-02 2017-02-15 南京慧尔视智能科技有限公司 基于微波和视频数据融合的交通检测方法及装置
US10102259B2 (en) 2014-03-31 2018-10-16 International Business Machines Corporation Track reconciliation from multiple data sources
CN114900672A (zh) * 2022-07-14 2022-08-12 杭州舜立光电科技有限公司 一种变倍跟踪方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5466158A (en) * 1994-02-14 1995-11-14 Smith, Iii; Jay Interactive book device
US6771205B1 (en) * 1977-07-28 2004-08-03 Raytheon Company Shipboard point defense system and elements therefor
US20050012817A1 (en) * 2003-07-15 2005-01-20 International Business Machines Corporation Selective surveillance system with active sensor management policies
US20060210169A1 (en) * 2005-03-03 2006-09-21 General Dynamics Advanced Information Systems, Inc. Apparatus and method for simulated sensor imagery using fast geometric transformations

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6771205B1 (en) * 1977-07-28 2004-08-03 Raytheon Company Shipboard point defense system and elements therefor
US5466158A (en) * 1994-02-14 1995-11-14 Smith, Iii; Jay Interactive book device
US20050012817A1 (en) * 2003-07-15 2005-01-20 International Business Machines Corporation Selective surveillance system with active sensor management policies
US20060210169A1 (en) * 2005-03-03 2006-09-21 General Dynamics Advanced Information Systems, Inc. Apparatus and method for simulated sensor imagery using fast geometric transformations

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100245587A1 (en) * 2009-03-31 2010-09-30 Kabushiki Kaisha Topcon Automatic tracking method and surveying device
US8395665B2 (en) * 2009-03-31 2013-03-12 Kabushiki Kaisha Topcon Automatic tracking method and surveying device
US10102259B2 (en) 2014-03-31 2018-10-16 International Business Machines Corporation Track reconciliation from multiple data sources
CN106408940A (zh) * 2016-11-02 2017-02-15 南京慧尔视智能科技有限公司 基于微波和视频数据融合的交通检测方法及装置
CN106408940B (zh) * 2016-11-02 2023-04-14 南京慧尔视智能科技有限公司 基于微波和视频数据融合的交通检测方法及装置
CN114900672A (zh) * 2022-07-14 2022-08-12 杭州舜立光电科技有限公司 一种变倍跟踪方法

Also Published As

Publication number Publication date
WO2008142680A3 (fr) 2010-02-25
IL183320A (en) 2011-12-29
IL183320A0 (en) 2007-12-03

Similar Documents

Publication Publication Date Title
US20100157056A1 (en) Tracking and imaging data fusion
US8416298B2 (en) Method and system to perform optical moving object detection and tracking over a wide area
Kanade et al. Advances in cooperative multi-sensor video surveillance
EP2847739B1 (fr) Suivi d'objets à distance
US9070289B2 (en) System and method for detecting, tracking and estimating the speed of vehicles from a mobile platform
CA2767312C (fr) Systeme et procede de surveillance video automatique
KR101634878B1 (ko) 무인 비행체의 군집 비행을 이용한 항공 영상 정합 장치 및 방법
CN111145545A (zh) 基于深度学习的道路交通行为无人机监测系统及方法
Yahyanejad et al. Incremental mosaicking of images from autonomous, small-scale uavs
US8428344B2 (en) System and method for providing mobile range sensing
US7773116B1 (en) Digital imaging stabilization
CN111679695B (zh) 一种基于深度学习技术的无人机巡航及追踪系统和方法
CN112699839B (zh) 一种动态背景下视频目标自动锁定及跟踪方法
US11430199B2 (en) Feature recognition assisted super-resolution method
AU2012215184B2 (en) Image capturing
KR20160062880A (ko) 카메라 및 레이더를 이용한 교통정보 관리시스템
US7466343B2 (en) General line of sight stabilization system
CN107783555B (zh) 一种基于无人机的目标定位方法、装置及系统
CN117950422B (zh) 一种无人机巡查系统及巡查方法
WO2008142680A2 (fr) Poursuite et imagerie de fusion d'images
CN112950671A (zh) 一种无人机对运动目标实时高精度参数测量方法
Schleiss et al. VPAIR--Aerial Visual Place Recognition and Localization in Large-scale Outdoor Environments
Tevyashev et al. Laser opto-electronic airspace monitoring system in the visible and infrared ranges
Blaser et al. System design, calibration and performance analysis of a novel 360 stereo panoramic mobile mapping system
Arambel et al. Multiple-hypothesis tracking of multiple ground targets from aerial video with dynamic sensor control

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08751370

Country of ref document: EP

Kind code of ref document: A2

WWE Wipo information: entry into national phase

Ref document number: 12600839

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08751370

Country of ref document: EP

Kind code of ref document: A2