CN112184772B - Target tracking method and device

Target tracking method and device

Info

Publication number
CN112184772B
CN112184772B
Authority
CN
China
Prior art keywords
target
tracking
frames
current frame
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011061084.1A
Other languages
Chinese (zh)
Other versions
CN112184772A (en)
Inventor
陈海波
武玉琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenlan Robot Shanghai Co ltd
Original Assignee
Shenlan Robot Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenlan Robot Shanghai Co ltd filed Critical Shenlan Robot Shanghai Co ltd
Priority to CN202011061084.1A priority Critical patent/CN112184772B/en
Publication of CN112184772A publication Critical patent/CN112184772A/en
Application granted granted Critical
Publication of CN112184772B publication Critical patent/CN112184772B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a target tracking method and a target tracking device, wherein the method comprises the following steps: performing target detection on the 3D target image of the current frame to obtain detection frames of N target objects in the 3D target image of the current frame; acquiring tracking frames of the N target objects of the current frame; judging whether the detection frames and the tracking frames of the N target objects in the current frame are successfully matched; if the tracking frames of m target objects in the current frame fail to match, calculating a homography matrix between the tracking frames of the m target objects in the 3D target image of the current frame and the detection frames of the m target objects in the corresponding previous frame; acquiring detection frames of the m target objects in the current frame according to the homography matrix, and judging whether the tracking frames of the m target objects in the current frame match the detection frames; and if the tracking frames of the m target objects in the current frame are successfully matched with the detection frames, outputting corresponding 3D tracking tracks. Therefore, missed detection and false detection of the target object can be effectively avoided, and the reliability of the system is improved.

Description

Target tracking method and device
Technical Field
The invention relates to the technical field of target tracking, in particular to a target tracking method and a target tracking device.
Background
In the related art, 3D target tracking algorithms are based on the Kalman algorithm and the Hungarian algorithm in the 3D coordinate system of the lidar: a tracking track is established and the target is tracked by IOU (Intersection-over-Union) matching between the predicted and the actual target positions across consecutive frames.
However, in practical applications, small target objects are sensitive to position deviation: even small-range position jitter lowers the IOU between the actual and the predicted target. For example, a 0.5 m cube shifted by only 0.2 m along one axis retains an IOU of about 0.43, while a 5 m box under the same shift still retains about 0.92. This easily causes missed detection and false detection of small target objects and greatly reduces the reliability of the system.
Disclosure of Invention
To solve the above technical problem, the invention provides a target tracking method that can effectively avoid missed detection and false detection of the target object, thereby greatly improving the reliability of the system.
The technical scheme adopted by the invention is as follows:
The target tracking method comprises the steps of carrying out target detection on a 3D target image of a current frame to obtain detection frames of N target objects in the 3D target image of the current frame, wherein N is a positive integer greater than or equal to 1; acquiring tracking frames of N target objects in a 3D target image of a current frame; judging whether the detection frames and the tracking frames of N target objects in the 3D target image of the current frame are successfully matched; if the tracking frames of m target objects in the 3D target image of the current frame fail to match, calculating a homography matrix between the tracking frames of m target objects in the 3D target image of the current frame and the detection frames of m target objects in the 3D target image of the corresponding previous frame, wherein m is a positive integer which is greater than or equal to 1 and less than or equal to N; acquiring detection frames of m target objects in a 3D target image of a current frame according to the homography matrix, and judging whether tracking frames of m target objects in the 3D target image of the current frame are matched with the detection frames or not; and if the tracking frames of m target objects in the 3D target image of the current frame are successfully matched with the detection frames, outputting corresponding 3D tracking tracks.
Obtaining the tracking frames of the N target objects in the 3D target image of the current frame comprises the following steps: acquiring the tracking frames of the N target objects in the 3D target image of the previous frame; and predicting the tracking frames of the N target objects in the 3D target image of the current frame by adopting a target tracking algorithm according to the tracking frames of the N target objects in the 3D target image of the previous frame.
The target tracking method further comprises the following steps: if the detection frames of n target objects in the 3D target image of the current frame fail to match, a new 3D tracking track is established, and the categories of the detection frames of the n target objects in the 3D target image of the current frame are recorded in a category attribute list of the new 3D tracking track, wherein n is a positive integer which is greater than or equal to 1 and less than or equal to N.
The target tracking method further comprises the following steps: acquiring detection frames and tracking frames of the N target objects in the 2D target image of the current frame; judging whether the detection frames and the tracking frames of the N target objects in the 2D target image of the current frame are successfully matched; and if the matching is successful, outputting a corresponding 2D tracking track, and combining the 2D tracking track with the 3D tracking track.
The target tracking method further comprises the following steps: if the tracking frames of k target objects in the 2D target image of the current frame fail to match, predicting the tracking frames of the k target objects in the 2D target image of the next frame according to the tracking frames of the k target objects in the 2D target image of the current frame, wherein k is a positive integer greater than or equal to 1 and less than or equal to N; acquiring detection frames of the k target objects in the next 2D target image; judging whether the detection frames and the tracking frames of the k target objects in the next 2D target image are successfully matched; and if the matching is successful, outputting a corresponding 2D tracking track, and combining the 2D tracking track with the 3D tracking track.
The target tracking method further comprises the following steps: if the detection frames of p target objects in the 2D target image of the current frame fail to match, a new 2D tracking track is established, and the categories of the detection frames of the p target objects in the 2D target image of the current frame are recorded in a category attribute list of the new 2D tracking track, wherein p is a positive integer which is greater than or equal to 1 and less than or equal to N.
An object tracking device comprising: the first acquisition module is used for carrying out target detection on the 3D target image of the current frame so as to acquire detection frames of N target objects in the 3D target image of the current frame, wherein N is a positive integer greater than or equal to 1; the second acquisition module is used for acquiring tracking frames of N target objects in the 3D target image of the current frame; the first judging module is used for judging whether the detection frames and the tracking frames of N target objects in the 3D target image of the current frame are successfully matched; the computing module is used for computing a homography matrix between the tracking frames of m target objects in the 3D target image of the current frame and the detection frames of m target objects in the 3D target image of the corresponding previous frame when the matching of the tracking frames of m target objects in the 3D target image of the current frame fails, wherein m is a positive integer which is greater than or equal to 1 and less than or equal to N; the second judging module is used for acquiring detection frames of the m target objects in the 3D target image of the current frame according to the homography matrix and judging whether tracking frames of the m target objects in the 3D target image of the current frame are matched with the detection frames or not; the output module is used for outputting corresponding 3D tracking tracks when the tracking frames of the m target objects in the 3D target image of the current frame are successfully matched with the detection frames.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above-described target tracking method when executing the computer program.
A non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the target tracking method described above.
A computer program product which, when executed by a processor, performs the object tracking method described above.
The invention has the beneficial effects that:
the invention can effectively avoid the condition of missed detection and false detection of the target object, thereby greatly improving the reliability of the system.
Drawings
FIG. 1 is a flow chart of a target tracking method according to an embodiment of the invention;
FIG. 2 is a block schematic diagram of a target tracking device according to an embodiment of the present invention;
FIG. 3 is a block schematic diagram of a target tracking device according to another embodiment of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of the present invention.
Fig. 1 is a flowchart of a target tracking method according to an embodiment of the present invention.
As shown in fig. 1, the target tracking method according to the embodiment of the present invention may include the following steps:
s1, performing target detection on a 3D target image of a current frame to obtain detection frames of N target objects in the 3D target image of the current frame. Wherein N is a positive integer greater than or equal to 1.
Specifically, a point cloud clustering algorithm and a 3D detection network may be used to perform target detection on the 3D target image of the current frame, so as to obtain the maximum circumscribed rectangular frame of each of the N target objects (for example, small target objects) in the 3D target image of the current frame. It is then determined whether the length, width and height of the maximum circumscribed rectangular frame of each target object meet preset conditions, for example, whether they are smaller than a set threshold. If so, a puffing (expansion) operation is performed, that is, the three-dimensional coordinates of the point cloud corresponding to the maximum circumscribed rectangular frame are multiplied by preset expansion coefficients, and the detection frame of each target object in the 3D target image of the current frame is thereby obtained.
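The following is a minimal sketch of the puffing operation described above; the function name, the size threshold and the expansion coefficient are illustrative assumptions, not values from the patent.

```python
import numpy as np

def puff_points(points: np.ndarray,
                size_threshold: float = 0.5,
                expansion: float = 1.2) -> np.ndarray:
    """Expand ("puff") the point cloud of one maximum circumscribed box.

    points: (M, 3) coordinates of the point cloud belonging to one
    maximum circumscribed rectangular frame.
    """
    mins, maxs = points.min(axis=0), points.max(axis=0)
    # Puff only small targets, i.e. when every dimension is below the threshold.
    if np.any(maxs - mins >= size_threshold):
        return points
    # Literal reading of the description: multiply the 3D coordinates by the
    # preset expansion coefficient (undone later by the corresponding division).
    return points * expansion
```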
S2, acquiring tracking frames of N target objects in the 3D target image of the current frame.
According to one embodiment of the present invention, acquiring the tracking frames of N target objects in the 3D target image of the current frame includes: acquiring the tracking frames of the N target objects in the 3D target image of the previous frame; and predicting the tracking frames of the N target objects in the 3D target image of the current frame by adopting a target tracking algorithm according to the tracking frames of the N target objects in the 3D target image of the previous frame.
Specifically, the time interval between frames may be preset, and the tracking frames of the N target objects in the 3D target image of the current frame may then be predicted from the tracking frames of the N target objects in the 3D target image of the previous frame using a target tracking algorithm, for example, the Kalman algorithm combined with the Hungarian algorithm.
It should be noted that, if the current frame is the initial frame, the detection frames of the N target objects in the 3D target image of the current frame may be used directly as the tracking frames of the N target objects in that 3D target image.
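A minimal sketch of such a prediction step, assuming a constant-velocity Kalman model; the state layout, time step and noise values below are illustrative assumptions.

```python
import numpy as np

class KalmanBoxPredictor:
    """Constant-velocity Kalman prediction for one tracking frame."""

    def __init__(self, box: np.ndarray, dt: float = 0.1):
        n = box.size                                 # e.g. (x, y, z, l, w, h, yaw)
        self.n = n
        self.x = np.concatenate([box, np.zeros(n)])  # state: box + velocities
        self.P = np.eye(2 * n)                       # state covariance
        self.F = np.eye(2 * n)                       # transition matrix
        self.F[:n, n:] = dt * np.eye(n)              # position += velocity * dt
        self.Q = 0.01 * np.eye(2 * n)                # process noise (assumed)

    def predict(self) -> np.ndarray:
        """Predict the tracking frame of the current frame."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:self.n]
```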
S3, judging whether the detection frames and the tracking frames of N target objects in the 3D target image of the current frame are successfully matched.
Specifically, after the detection frames of the N target objects in the 3D target image of the current frame and the corresponding tracking frames are obtained, the intersection ratio (IOU) between the detection frames and the corresponding tracking frames may be calculated, and it may be judged whether the intersection ratio is greater than a preset threshold. If the intersection ratio is greater than or equal to the preset threshold, it is judged that the detection frames and the tracking frames of the N target objects in the 3D target image of the current frame are successfully matched; in this case, the categories of the detection frames are recorded in the category attribute lists of the tracking tracks to which the corresponding tracking frames belong, and the corresponding 3D tracking tracks are output. If the intersection ratio is smaller than the preset threshold, the matching is judged to have failed, and the matching result is analyzed.
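A minimal sketch of this matching step, using axis-aligned 3D boxes and the Hungarian algorithm via SciPy; the corner-pair box format and the threshold value are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_3d(a: np.ndarray, b: np.ndarray) -> float:
    """3D IOU of two axis-aligned boxes [x1, y1, z1, x2, y2, z2]."""
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter + 1e-9)

def match_frames(dets, trks, iou_threshold: float = 0.3):
    """Hungarian matching of detection frames to tracking frames by IOU."""
    cost = np.array([[1.0 - iou_3d(d, t) for t in trks] for d in dets])
    rows, cols = linear_sum_assignment(cost)
    matched = [(r, c) for r, c in zip(rows, cols)
               if 1.0 - cost[r, c] >= iou_threshold]
    unmatched_dets = set(range(len(dets))) - {r for r, _ in matched}
    unmatched_trks = set(range(len(trks))) - {c for _, c in matched}
    return matched, unmatched_dets, unmatched_trks
```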
S4, if the tracking frames of m target objects in the 3D target image of the current frame fail to match, calculating a homography matrix between the tracking frames of the m target objects in the 3D target image of the current frame and the detection frames of the m target objects in the 3D target image of the corresponding previous frame. Wherein m is a positive integer greater than or equal to 1 and less than or equal to N.
S5, acquiring detection frames of m target objects in the 3D target image of the current frame according to the homography matrix, and judging whether tracking frames of m target objects in the 3D target image of the current frame are matched with the detection frames or not.
S6, if the tracking frames of m target objects in the 3D target image of the current frame are successfully matched with the detection frames, outputting corresponding 3D tracking tracks.
Specifically, if the analysis shows that the tracking frames of m target objects in the 3D target image of the current frame fail to match, the findHomography function in the OpenCV library may be used to calculate a homography matrix between the tracking frames of the m target objects in the 3D target image of the current frame and the detection frames of the m target objects in the 3D target image of the corresponding previous frame. According to the homography matrix, the point clouds in the detection frames of the m target objects in the 3D target image of the previous frame are mapped into the 3D target image of the current frame and divided by the corresponding expansion coefficients, so as to predict the detection frames of the m target objects in the 3D target image of the current frame.
Then, it is judged whether the predicted detection frames of the m target objects in the 3D target image of the current frame are successfully matched with the tracking frames of the m target objects in the 3D target image of the current frame. If the matching succeeds, the corresponding 3D tracking tracks are output; if the matching fails, the above steps are continued, that is, the matching result is analyzed and the corresponding tracking strategy is executed until the matching succeeds, as described in the above embodiments; details are not repeated here to avoid redundancy.
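A minimal sketch of this recovery step using OpenCV's findHomography. A homography acts on planar points, so the boxes are assumed here to be represented by matched 2D points (for example a ground-plane or image-plane projection); the function and parameter names are illustrative assumptions.

```python
import cv2
import numpy as np

def recover_detection_frame(trk_pts_cur: np.ndarray,
                            det_pts_prev: np.ndarray,
                            prev_cloud_2d: np.ndarray,
                            expansion: float) -> np.ndarray:
    """Predict the current frame's detection frame for an unmatched track.

    trk_pts_cur / det_pts_prev: (K, 2) corresponding planar points (K >= 4)
    of the current frame's tracking frame and the previous frame's detection
    frame; prev_cloud_2d: (M, 2) planar points of the previous frame's
    detection frame to be mapped forward.
    """
    H, _ = cv2.findHomography(det_pts_prev.astype(np.float32),
                              trk_pts_cur.astype(np.float32), cv2.RANSAC)
    mapped = cv2.perspectiveTransform(
        prev_cloud_2d.reshape(-1, 1, 2).astype(np.float32), H)
    # Divide by the expansion coefficient applied during the puffing step.
    return mapped.reshape(-1, 2) / expansion
```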
It should be noted that, if the tracking frame of the same ID fails to match for a preset number of consecutive frames (for example, ten consecutive frames), the 3D tracking track may be deleted.
Therefore, in the embodiment of the invention, when tracking frame matching fails during target tracking and detection, the homography matrix is used to predict the detection frame of the current frame so that matching can continue, which effectively avoids missed detection and false detection of the target object and greatly improves the reliability of the system.
According to one embodiment of the present invention, the target tracking method further includes: if the detection frames of n target objects in the 3D target image of the current frame fail to match, establishing a new 3D tracking track, and recording the categories of the detection frames of the n target objects in the 3D target image of the current frame in a category attribute list of the new 3D tracking track, wherein n is a positive integer which is greater than or equal to 1 and less than or equal to N.
Specifically, as another possible embodiment, there is a case where a detection frame fails to match when the matching result is analyzed. If it is determined that the detection frames of n target objects in the 3D target image of the current frame fail to match, a new 3D tracking track (composed of at least two tracking frames with the same ID) may be established, and the categories of the detection frames of the n target objects in the 3D target image of the current frame may be recorded in the category attribute list of the new 3D tracking track. In the new 3D tracking track, the detection frames of the n target objects in the 3D target image of the current frame may be used as the tracking frames of the n target objects in the 3D target image of the current frame; the tracking frames of the n target objects in the 3D target image of the next frame are then predicted by using a target tracking algorithm and matched with the detection frames of the n target objects in the 3D target image of the next frame, and after the matching succeeds, the corresponding 3D tracking track is output.
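A minimal sketch of this track bookkeeping, combining the creation rule above with the deletion rule mentioned earlier; the class layout and the ten-frame limit are illustrative assumptions.

```python
from itertools import count

_track_ids = count()

class Track3D:
    """A 3D tracking track with a category attribute list."""

    MAX_CONSECUTIVE_MISSES = 10  # e.g. ten consecutive failed matches

    def __init__(self, det_box, det_category):
        self.track_id = next(_track_ids)
        # The unmatched detection frame serves as the initial tracking frame.
        self.frames = [det_box]
        self.categories = [det_category]  # category attribute list
        self.misses = 0

    def update(self, det_box, det_category):
        """Record a successful match for the current frame."""
        self.frames.append(det_box)
        self.categories.append(det_category)
        self.misses = 0

    def mark_missed(self) -> bool:
        """Record a failed match; returns True if the track should be deleted."""
        self.misses += 1
        return self.misses >= self.MAX_CONSECUTIVE_MISSES
```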
According to one embodiment of the present invention, the target tracking method further includes: acquiring detection frames and tracking frames of the N target objects in the 2D target image of the current frame; judging whether the detection frames and the tracking frames of the N target objects in the 2D target image of the current frame are successfully matched; and if the matching is successful, outputting a corresponding 2D tracking track and combining the 2D tracking track with the 3D tracking track.
Specifically, in practical application, the detection frames of the N target objects in the 3D target image of the current frame may be projected into the 2D target image, and the detection frames and the tracking frames of the N target objects in the 2D target image of the current frame may then be acquired; for details, refer to the process of acquiring the detection frames and the tracking frames of the N target objects in the 3D target image.
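A minimal sketch of projecting a 3D detection frame into the 2D image, assuming a pinhole camera with intrinsics K and lidar-to-camera extrinsics (R, t); all parameter names are illustrative assumptions.

```python
import numpy as np

def project_box(corners_3d: np.ndarray, K: np.ndarray,
                R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """corners_3d: (8, 3) box corners in lidar coordinates -> 2D box (x1, y1, x2, y2).

    Assumes all corners lie in front of the camera (positive depth).
    """
    cam = R @ corners_3d.T + t.reshape(3, 1)  # lidar frame -> camera frame
    uv = (K @ cam)[:2] / cam[2]               # perspective division
    x1, y1 = uv.min(axis=1)
    x2, y2 = uv.max(axis=1)
    return np.array([x1, y1, x2, y2])         # axis-aligned 2D detection frame
```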
Then, the intersection ratio between the detection frames of the N target objects in the 2D target image of the current frame and the corresponding tracking frames is calculated, and it is judged whether the intersection ratio is greater than a preset threshold. If the intersection ratio is greater than the set threshold, it is judged that the detection frames and the tracking frames of the N target objects in the 2D target image of the current frame are successfully matched; in this case, the categories of the detection frames are recorded in the category attribute lists of the tracking tracks to which the corresponding tracking frames belong, and the corresponding 2D tracking tracks are output.
Finally, the 2D tracking track and the 3D tracking track may be combined, that is, the 2D tracking track and the 3D tracking track with the same ID are merged, and the category of the detection frame that occurs most often within a preset number of frames (for example, ten frames) across the 2D and 3D tracking tracks with the same ID is selected as the category of the track.
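A minimal sketch of this merge rule; the ten-frame window follows the example in the text, and the function name is an assumption.

```python
from collections import Counter

def merged_track_category(cats_2d, cats_3d, window: int = 10):
    """Majority vote over the category attribute lists of the 2D and 3D
    tracking tracks that share the same ID, within the last `window` frames."""
    votes = Counter(cats_2d[-window:]) + Counter(cats_3d[-window:])
    return votes.most_common(1)[0][0]
```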
Therefore, in one embodiment of the invention, a 3D tracking algorithm is used to track the target and a 2D tracking method is fused with it, so that not only the point cloud information of the lidar but also the image information of the accompanying camera is used; the method can thus adapt to scenes containing many small target objects, further improving the overall accuracy of target tracking.
According to one embodiment of the present invention, the target tracking method further includes: if the tracking frames of k target objects in the 2D target image of the current frame fail to match, predicting the tracking frames of k target objects in the 2D target image of the next frame according to the tracking frames of k target objects in the 2D target image of the current frame, wherein k is a positive integer which is greater than or equal to 1 and less than or equal to N; obtaining detection frames of k target objects in a next 2D target image; judging whether the detection frames and the tracking frames of k target objects in the next 2D target image are successfully matched; and if the matching is successful, outputting a corresponding 2D tracking track, and combining the 2D tracking track with the 3D tracking track.
Specifically, as a possible implementation manner, if the analysis shows that the tracking frames of k target objects in the 2D target image of the current frame fail to match, the tracking frames of the k target objects in the 2D target image of the next frame are predicted according to the tracking frames of the k target objects in the 2D target image of the current frame, and the detection frames of the k target objects in the 2D target image of the next frame are obtained. The tracking frames of the k target objects in the 2D target image of the next frame are matched with the detection frames of the k target objects in the 2D target image of the next frame; after the matching succeeds, the corresponding 2D tracking track is output and combined with the 3D tracking track, as described in the above embodiments; details are not repeated here to avoid redundancy.
It should be noted that, if the tracking frame of the same ID fails to match for a preset number of consecutive frames (for example, ten consecutive frames), the 2D tracking track may be deleted.
According to another embodiment of the present invention, the target tracking method further includes: if the detection frames of p target objects in the 2D target image of the current frame fail to match, a new 2D tracking track is established, and the categories of the detection frames of the p target objects in the 2D target image of the current frame are recorded in a category attribute list of the new 2D tracking track, wherein p is a positive integer which is greater than or equal to 1 and less than or equal to N.
As another possible implementation manner, if it is determined that the detection frames of p target objects in the 2D target image of the current frame fail to match, a new 2D tracking track (composed of at least two tracking frames with the same ID) may be established, and the categories of the detection frames of the p target objects in the 2D target image of the current frame are recorded in the category attribute list of the new 2D tracking track. In the new 2D tracking track, the detection frames of the p target objects in the 2D target image of the current frame may be used as the tracking frames of the p target objects in the 2D target image of the current frame; the tracking frames of the p target objects in the 2D target image of the next frame are then predicted by using a target tracking algorithm and matched with the detection frames of the p target objects in the 2D target image of the next frame, and after the matching succeeds, the corresponding 2D tracking track is output.
In summary, according to the target tracking method of the embodiment of the present invention, target detection is performed on the 3D target image of the current frame to obtain the detection frames of N target objects in the 3D target image of the current frame, and the tracking frames of the N target objects in the 3D target image of the current frame are acquired. It is then judged whether the detection frames and the tracking frames of the N target objects in the 3D target image of the current frame are successfully matched. When the tracking frames of m target objects in the 3D target image of the current frame fail to match, a homography matrix between the tracking frames of the m target objects in the 3D target image of the current frame and the detection frames of the m target objects in the corresponding 3D target image of the previous frame is calculated; the detection frames of the m target objects in the 3D target image of the current frame are obtained according to the homography matrix, and it is judged whether the tracking frames of the m target objects in the 3D target image of the current frame are successfully matched with the detection frames. When the matching succeeds, the corresponding 3D tracking track is output. Therefore, missed detection and false detection of the target object can be effectively avoided, and the reliability of the system is greatly improved.
Corresponding to the target tracking method in the above embodiment, the present invention further provides a target tracking device.
As shown in fig. 2, the object tracking device according to an embodiment of the present invention may include: the first acquisition module 100, the second acquisition module 200, the first judgment module 300, the calculation module 400, the second judgment module 500, and the output module 600.
The first obtaining module 100 is configured to perform target detection on a 3D target image of a current frame to obtain detection frames of N target objects in the 3D target image of the current frame, where N is a positive integer greater than or equal to 1; the second obtaining module 200 is configured to obtain tracking frames of N target objects in the 3D target image of the current frame; the first judging module 300 is configured to judge whether the detection frames and the tracking frames of the N target objects in the 3D target image of the current frame are successfully matched; the calculation module 400 is configured to calculate a homography matrix between a tracking frame of m target objects in a 3D target image of a current frame and a detection frame of m target objects in a corresponding 3D target image of a previous frame when matching of the tracking frame of m target objects in the 3D target image of the current frame fails, where m is a positive integer greater than or equal to 1 and less than or equal to N; the second judging module 500 is configured to obtain detection frames of m target objects in the 3D target image of the current frame according to the homography matrix, and judge whether tracking frames of m target objects in the 3D target image of the current frame are matched with the detection frames; the output module 600 is configured to output a corresponding 3D tracking track when the tracking frames of m target objects in the 3D target image of the current frame are successfully matched with the detection frames.
According to one embodiment of the present invention, the second obtaining module 200 is specifically configured to: acquire the tracking frames of the N target objects in the 3D target image of the previous frame; and predict the tracking frames of the N target objects in the 3D target image of the current frame by adopting a target tracking algorithm according to the tracking frames of the N target objects in the 3D target image of the previous frame.
According to one embodiment of the invention, the output module 600 is further configured to: if the detection frames of n target objects in the 3D target image of the current frame fail to match, establish a new 3D tracking track, and record the categories of the detection frames of the n target objects in the 3D target image of the current frame in a category attribute list of the new 3D tracking track, wherein n is a positive integer which is greater than or equal to 1 and less than or equal to N.
According to one embodiment of the present invention, as shown in fig. 3, the target tracking apparatus further includes a track merging module 700. The track merging module 700 is configured to: acquiring detection frames and tracking frames of N target objects in a 2D target image of a current frame; judging whether the detection frames and the tracking frames of N target objects in the 2D target image of the current frame are successfully matched; and if the matching is successful, outputting a corresponding 2D tracking track, and combining the 2D tracking track with the 3D tracking track.
According to one embodiment of the invention, the track merging module 700 is further configured to: if the tracking frames of k target objects in the 2D target image of the current frame fail to match, predict the tracking frames of the k target objects in the 2D target image of the next frame according to the tracking frames of the k target objects in the 2D target image of the current frame, wherein k is a positive integer which is greater than or equal to 1 and less than or equal to N; obtain the detection frames of the k target objects in the 2D target image of the next frame; judge whether the detection frames and the tracking frames of the k target objects in the 2D target image of the next frame are successfully matched; and if the matching is successful, output a corresponding 2D tracking track and combine the 2D tracking track with the 3D tracking track.
According to one embodiment of the invention, the track merging module 700 is further configured to: if the detection frames of p target objects in the 2D target image of the current frame fail to match, a new 2D tracking track is established, and the categories of the detection frames of the p target objects in the 2D target image of the current frame are recorded in a category attribute list of the new 2D tracking track, wherein p is a positive integer which is greater than or equal to 1 and less than or equal to N.
It should be noted that, for more specific implementation of the target tracking apparatus according to the embodiment of the present invention, reference may be made to the above-mentioned embodiment of the target tracking method, which is not described herein.
According to the target tracking device of the embodiment of the invention, the first acquisition module performs target detection on the 3D target image of the current frame to acquire the detection frames of N target objects in the 3D target image of the current frame, and the second acquisition module acquires the tracking frames of the N target objects in the 3D target image of the current frame. The first judgment module judges whether the detection frames and the tracking frames of the N target objects in the 3D target image of the current frame are successfully matched. When the tracking frames of m target objects in the 3D target image of the current frame fail to match, the calculation module calculates a homography matrix between the tracking frames of the m target objects in the 3D target image of the current frame and the detection frames of the m target objects in the corresponding 3D target image of the previous frame; the second judgment module acquires the detection frames of the m target objects in the 3D target image of the current frame according to the homography matrix and judges whether the tracking frames of the m target objects in the 3D target image of the current frame are successfully matched with the detection frames; and when the matching succeeds, the output module outputs the corresponding 3D tracking track. Therefore, missed detection and false detection of the target object can be effectively avoided, and the reliability of the system is greatly improved.
Corresponding to the embodiment, the invention also provides a computer device.
The computer device of the embodiment of the invention comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, and the target tracking method of the embodiment is realized when the processor executes the program.
According to the computer equipment provided by the embodiment of the invention, the condition of missed detection and false detection of the target object can be effectively avoided, so that the reliability of the system is greatly improved.
The present invention also proposes a non-transitory computer-readable storage medium corresponding to the above-described embodiments.
The non-transitory computer-readable storage medium of the embodiment of the present invention stores thereon a computer program that, when executed by a processor, implements the target tracking method described above.
According to the non-transitory computer readable storage medium, the condition of missed detection and false detection of the target object can be effectively avoided, so that the reliability of the system is greatly improved.
The invention also provides a computer program product corresponding to the above embodiment.
The object tracking method of the above-described embodiments may be performed when instructions in a computer program product are executed by a processor.
According to the computer program product provided by the embodiment of the invention, the condition of missing detection and false detection of the target object can be effectively avoided, so that the reliability of the system is greatly improved.
In the description of the present invention, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. The meaning of "a plurality of" is two or more, unless specifically defined otherwise.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communicated with the inside of two elements or the interaction relationship of the two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the present invention, unless expressly stated or limited otherwise, a first feature "up" or "down" a second feature may be the first and second features in direct contact, or the first and second features in indirect contact via an intervening medium. Moreover, a first feature being "above," "over" and "on" a second feature may be a first feature being directly above or obliquely above the second feature, or simply indicating that the first feature is level higher than the second feature. The first feature being "under", "below" and "beneath" the second feature may be the first feature being directly under or obliquely below the second feature, or simply indicating that the first feature is less level than the second feature.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily for the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like. While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.

Claims (9)

1. A method of target tracking comprising the steps of:
performing target detection on the 3D target image of the current frame to obtain detection frames of N target objects in the 3D target image of the current frame, wherein N is a positive integer greater than or equal to 1;
acquiring tracking frames of N target objects in the 3D target image of the current frame;
judging whether the detection frames and the tracking frames of N target objects in the 3D target image of the current frame are successfully matched;
if the tracking frames of m target objects in the 3D target image of the current frame fail to match, calculating a homography matrix between the tracking frames of the m target objects in the 3D target image of the current frame and the detection frames of the m target objects in the 3D target image of the corresponding previous frame, wherein m is a positive integer which is greater than or equal to 1 and less than or equal to N;
Acquiring detection frames of the m target objects in the 3D target image of the current frame according to the homography matrix, and judging whether tracking frames of the m target objects in the 3D target image of the current frame are matched with the detection frames, wherein point clouds in the detection frames of the m target objects in the 3D target image of the previous frame are mapped into the 3D target image of the current frame according to the homography matrix and divided by corresponding expansion coefficients to predict the detection frames of the m target objects in the 3D target image of the current frame;
And if the tracking frames of the m target objects in the 3D target image of the current frame are successfully matched with the detection frames, outputting corresponding 3D tracking tracks.
2. The target tracking method according to claim 1, wherein the acquiring the tracking frames of the N target objects in the 3D target image of the current frame includes:
acquiring tracking frames of the N target objects in the 3D target image of the previous frame;
and predicting the tracking frames of the N target objects in the 3D target image of the current frame by adopting a target tracking algorithm according to the tracking frames of the N target objects in the 3D target image of the previous frame.
3. The target tracking method of claim 1, further comprising:
If the detection frames of n target objects in the 3D target image of the current frame fail to match, a new 3D tracking track is established, and the categories of the detection frames of the n target objects in the 3D target image of the current frame are recorded in a category attribute list of the new 3D tracking track, wherein n is a positive integer which is greater than or equal to 1 and less than or equal to N.
4. A target tracking method according to any one of claims 1-3, further comprising:
acquiring detection frames and tracking frames of the N target objects in the 2D target image of the current frame;
Judging whether the detection frames and the tracking frames of the N target objects in the 2D target image of the current frame are successfully matched;
and if the matching is successful, outputting a corresponding 2D tracking track, and combining the 2D tracking track with the 3D tracking track.
5. The target tracking method of claim 4, further comprising:
If the tracking frames of k target objects in the 2D target image of the current frame fail to match, predicting the tracking frames of the k target objects in the 2D target image of the next frame according to the tracking frames of the k target objects in the 2D target image of the current frame, wherein k is a positive integer greater than or equal to 1 and less than or equal to N;
Acquiring detection frames of the k target objects in the next 2D target image;
judging whether the detection frames and the tracking frames of the k target objects in the next 2D target image are successfully matched;
and if the matching is successful, outputting a corresponding 2D tracking track, and combining the 2D tracking track with the 3D tracking track.
6. The target tracking method of claim 4, further comprising:
If the detection frames of p target objects in the 2D target image of the current frame fail to match, a new 2D tracking track is established, and the categories of the detection frames of the p target objects in the 2D target image of the current frame are recorded in a category attribute list of the new 2D tracking track, wherein p is a positive integer which is greater than or equal to 1 and less than or equal to N.
7. An object tracking device, comprising:
the first acquisition module is used for carrying out target detection on the 3D target image of the current frame so as to acquire detection frames of N target objects in the 3D target image of the current frame, wherein N is a positive integer greater than or equal to 1;
The second acquisition module is used for acquiring tracking frames of N target objects in the 3D target image of the current frame;
the first judging module is used for judging whether the detection frames and the tracking frames of N target objects in the 3D target image of the current frame are successfully matched;
the computing module is used for computing a homography matrix between the tracking frames of m target objects in the 3D target image of the current frame and the detection frames of m target objects in the 3D target image of the corresponding previous frame when the matching of the tracking frames of m target objects in the 3D target image of the current frame fails, wherein m is a positive integer which is greater than or equal to 1 and less than or equal to N;
The second judging module is used for acquiring detection frames of the m target objects in the 3D target image of the current frame according to the homography matrix and judging whether tracking frames of the m target objects in the 3D target image of the current frame are matched with the detection frames or not, wherein point clouds in the detection frames of the m target objects in the 3D target image of the previous frame are mapped into the 3D target image of the current frame according to the homography matrix and divided by corresponding expansion coefficients so as to predict the detection frames of the m target objects in the 3D target image of the current frame;
the output module is used for outputting corresponding 3D tracking tracks when the tracking frames of the m target objects in the 3D target image of the current frame are successfully matched with the detection frames.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the object tracking method according to any of claims 1-6 when executing the computer program.
9. A non-transitory computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the object tracking method according to any one of claims 1-6.
CN202011061084.1A 2020-09-30 2020-09-30 Target tracking method and device Active CN112184772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011061084.1A CN112184772B (en) 2020-09-30 2020-09-30 Target tracking method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011061084.1A CN112184772B (en) 2020-09-30 2020-09-30 Target tracking method and device

Publications (2)

Publication Number Publication Date
CN112184772A CN112184772A (en) 2021-01-05
CN112184772B true CN112184772B (en) 2024-07-09

Family

ID=73947385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011061084.1A Active CN112184772B (en) 2020-09-30 2020-09-30 Target tracking method and device

Country Status (1)

Country Link
CN (1) CN112184772B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298852A (en) * 2021-07-27 2021-08-24 第六镜科技(北京)有限公司 Target tracking method and device, electronic equipment and computer readable storage medium
CN113723311A (en) * 2021-08-31 2021-11-30 浙江大华技术股份有限公司 Target tracking method
CN114549578A (en) * 2021-11-05 2022-05-27 北京小米移动软件有限公司 Target tracking method, device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053427A (en) * 2017-10-31 2018-05-18 深圳大学 A kind of modified multi-object tracking method, system and device based on KCF and Kalman
CN111563469A (en) * 2020-05-13 2020-08-21 南京师范大学 Method and device for identifying irregular parking behaviors

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9373174B2 (en) * 2014-10-21 2016-06-21 The United States Of America As Represented By The Secretary Of The Air Force Cloud based video detection and tracking system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053427A (en) * 2017-10-31 2018-05-18 深圳大学 A kind of modified multi-object tracking method, system and device based on KCF and Kalman
CN111563469A (en) * 2020-05-13 2020-08-21 南京师范大学 Method and device for identifying irregular parking behaviors

Also Published As

Publication number Publication date
CN112184772A (en) 2021-01-05

Similar Documents

Publication Publication Date Title
CN112184772B (en) Target tracking method and device
US10782688B2 (en) Method, control apparatus, and system for tracking and shooting target
US10803364B2 (en) Control method, non-transitory computer-readable storage medium for storing control program, and control apparatus
WO2021072696A1 (en) Target detection and tracking method and system, and movable platform, camera and medium
CN109344899B (en) Multi-target detection method and device and electronic equipment
CN109949347B (en) Human body tracking method, device, system, electronic equipment and storage medium
US11170510B2 (en) Method for detecting flying spot on edge of depth image, electronic device, and computer readable storage medium
CN109859240B (en) Video object tracking method and device and vehicle
US20200175377A1 (en) Training apparatus, processing apparatus, neural network, training method, and medium
KR20110046904A (en) Apparatus and method for inpainting image by restricting reference image region
US20110091074A1 (en) Moving object detection method and moving object detection apparatus
KR101130963B1 (en) Apparatus and method for tracking non-rigid object based on shape and feature information
KR102082254B1 (en) a vehicle recognizing system
KR101406334B1 (en) System and method for tracking multiple object using reliability and delayed decision
JP6507843B2 (en) Image analysis method and image analysis apparatus
US11948312B2 (en) Object detection/tracking device, method, and program recording medium
CN110426714B (en) Obstacle identification method
WO2022147655A1 (en) Positioning method and apparatus, spatial information acquisition method and apparatus, and photographing device
CN115861315B (en) Defect detection method and device
CN112052823A (en) Target detection method and device
CN115512281A (en) Invader monitoring method and system combining video camera and laser radar
KR102581154B1 (en) Method and Apparatus for Object Detection Using Model Ensemble
JP4262297B2 (en) Cluster generation device, defect classification device, cluster generation method and program
JP6055307B2 (en) Corresponding point search device, camera posture estimation device, and programs thereof
JP2006078261A (en) Object detector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240519

Address after: Room 6227, No. 999, Changning District, Shanghai 200050

Applicant after: Shenlan robot (Shanghai) Co.,Ltd.

Country or region after: China

Address before: 518131 room 115, building 8, 1970 Science Park, Minzhi community, Minzhi street, Longhua District, Shenzhen City, Guangdong Province

Applicant before: Shenlan artificial intelligence (Shenzhen) Co.,Ltd.

Country or region before: China

GR01 Patent grant