CN112990072A - Target detection and tracking method based on high and low dual thresholds - Google Patents

Target detection and tracking method based on high and low dual thresholds

Info

Publication number
CN112990072A
CN112990072A (application CN202110350480.4A)
Authority
CN
China
Prior art keywords
target
tracking
detection
position information
preset
Prior art date
Legal status
Pending
Application number
CN202110350480.4A
Other languages
Chinese (zh)
Inventor
刘湛基
王玲
石锡敏
Current Assignee
Sharpvision Co ltd
Original Assignee
Sharpvision Co ltd
Priority date
Filing date
Publication date
Application filed by Sharpvision Co ltd filed Critical Sharpvision Co ltd
Priority to CN202110350480.4A
Publication of CN112990072A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles


Abstract

The invention discloses a target detection and tracking method based on high and low dual thresholds. By setting a dual-threshold cascade logic of a low detection confidence threshold and a high detection confidence threshold, and combining this with a calculation of the overlap between detection targets and tracking targets, correct low-confidence detections that would otherwise be filtered out prematurely by the detector can be recalled. This resolves the performance contradiction that arises when a single threshold is chosen poorly or when target detection is not sufficiently robust, namely that a high threshold causes more missed detections but fewer false detections, while a low threshold causes fewer missed detections but more false detections. The method achieves a low missed-detection rate and a low false-detection rate while maintaining real-time performance and generality, thereby improving the accuracy of tracking targets such as pedestrians and vehicles.

Description

Target detection and tracking method based on high and low dual thresholds
Technical Field
The invention belongs to the technical field of target detection and tracking of objects in video by a vehicle-mounted system, and in particular relates to a target detection and tracking method based on high and low dual thresholds.
Background
In a traditional target detection and tracking method for a vehicle-mounted system, the target detector and the tracker are two independent units arranged in a cascade. The target detector screens its results against a detection threshold and passes only results above that threshold to the tracker. The tracker then extracts feature information from these inputs. After the feature information of the current target and of targets in several previous frames has been obtained, a similarity function is used to judge how similar they are; if the similarity exceeds a preset threshold, the target is considered one that should be tracked, and the result is taken as the final output. However, because the target detector screens detection results with only a single fixed threshold, a performance contradiction arises when the threshold is chosen poorly or when target detection is not sufficiently robust: a high threshold causes more missed detections but fewer false detections, while a low threshold causes fewer missed detections but more false detections.
Disclosure of Invention
The invention aims to provide a target detection and tracking method based on high and low dual thresholds, which can improve the target tracking accuracy.
The invention relates to a target detection and tracking method based on high and low dual thresholds, which comprises the following steps:
the target detector detects an input two-dimensional video image and acquires information corresponding to each detection target;
the confidence value of the spatial position information corresponding to each detection target is filtered and screened against a preset low detection confidence threshold;
if the confidence is lower than the preset low detection confidence threshold, the detection target is deleted;
if the confidence is higher than or equal to the preset low detection confidence threshold, the spatial position information corresponding to the detection target is output;
according to the target category and the target tracking identification code corresponding to each detection target, overlap calculation is performed between the output spatial position information of the detection target and the spatial position information of the tracking targets in the previous frame, overlap values between one detection target and all tracking targets of the same category are computed, and the overlap values are matched against a preset overlap threshold;
if the overlap is greater than or equal to the preset overlap threshold, the detection target is output to a matched detection result set, the corresponding tracking target is output to a matched tracking result set, and the spatial position information of the tracking target in the matched tracking result set is replaced with the spatial position information of the detection target in the matched detection result set to obtain the corresponding current tracking target;
if the overlap is smaller than the preset overlap threshold, the detection target is output to an unmatched detection result set;
the confidence value of the spatial position information of each detection target in the unmatched detection result set is filtered and screened against a preset high detection confidence threshold;
if the confidence is lower than the preset high detection confidence threshold, the detection target is judged not to be a corresponding tracking target and is deleted;
and if the confidence is higher than or equal to the preset high detection confidence threshold, the detection target is judged to be a current tracking target.
According to the target detection and tracking method based on high and low dual thresholds, a dual-threshold cascade logic of a low detection confidence threshold and a high detection confidence threshold is set, and, combined with the calculation of the overlap between detection targets and tracking targets, correct low-confidence detections that would otherwise be filtered out prematurely by the detector can be recalled. This resolves the performance contradiction that arises when a single threshold is chosen poorly or when target detection is not sufficiently robust, namely that a high threshold causes more missed detections but fewer false detections, while a low threshold causes fewer missed detections but more false detections. The method achieves a low missed-detection rate and a low false-detection rate while maintaining real-time performance and generality, thereby improving the accuracy of tracking targets such as pedestrians and vehicles.
Detailed Description
A target detection and tracking method based on high and low dual thresholds comprises the following steps:
the target detector detects an input two-dimensional video image and acquires information corresponding to each detection target;
the confidence value of the spatial position information corresponding to each detection target is filtered and screened against a preset low detection confidence threshold;
if the confidence is lower than the preset low detection confidence threshold, the detection target is deleted;
if the confidence is higher than or equal to the preset low detection confidence threshold, the spatial position information corresponding to the detection target is output;
according to the target category and the target tracking identification code corresponding to each detection target, overlap calculation is performed between the output spatial position information of the detection target and the spatial position information of the tracking targets in the previous frame, overlap values between one detection target and all tracking targets of the same category are computed, and the overlap values are matched against a preset overlap threshold;
if the overlap is greater than or equal to the preset overlap threshold, the detection target is output to a matched detection result set, the corresponding tracking target is output to a matched tracking result set, and the spatial position information of the tracking target in the matched tracking result set is replaced with the spatial position information of the detection target in the matched detection result set to obtain the corresponding current tracking target;
if the overlap is smaller than the preset overlap threshold, the detection target is output to an unmatched detection result set;
the confidence value of the spatial position information of each detection target in the unmatched detection result set is filtered and screened against a preset high detection confidence threshold;
if the confidence is lower than the preset high detection confidence threshold, the detection target is judged not to be a corresponding tracking target and is deleted;
and if the confidence is higher than or equal to the preset high detection confidence threshold, the detection target is judged to be a current tracking target.
Because the tracker needs to extract high-dimensional feature information from targets in consecutive frames, it requires a large amount of computation. If the overlap is smaller than the preset overlap threshold, the corresponding tracking target is output to an unmatched tracking result set; the existence count of the spatial position information corresponding to each tracking target in the unmatched tracking result set is compared with a preset existence-count threshold; if the existence count is greater than the preset existence-count threshold, the existence count of the spatial position information corresponding to the tracking target is accumulated and output; if the existence count is smaller than or equal to the preset existence-count threshold, position prediction is performed on the spatial position information corresponding to the tracking target together with the spatial position information of the tracking target in the previous n frames of the two-dimensional video image to obtain the spatial position information of the tracking target in the next frame, this is taken as the spatial position information of the tracking target, and the existence count of the spatial position information corresponding to the tracking target is accumulated and output; the duration for which the spatial position information corresponding to the tracking target, with its accumulated existence count, has remained in the tracking queue is compared with a threshold on the duration for which a tracking target is allowed to remain in the tracking queue; if the duration is greater than that threshold, the tracking target is deleted; if the duration is smaller than or equal to that threshold, the tracking target is output as a current tracking target. By analyzing the spatio-temporal information of several ordered frames, multiple different targets can be judged and tracked quickly and comprehensively, the amount of computation spent on tracked targets can be reduced, and the target tracking capability is improved.
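The maintenance of unmatched tracking targets described in the preceding paragraph can likewise be sketched. The PendingTrack structure, the counter and duration thresholds, and the predict_next_box helper (a position-prediction function such as the one sketched further below) are assumptions for illustration, not definitions from the patent.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PendingTrack:
    boxes: List[tuple]        # spatial position information over the most recent frames
    exist_count: int = 0      # accumulated existence count of the spatial position information
    frames_in_queue: int = 0  # how long the track has stayed in the tracking queue

def maintain_unmatched_tracks(unmatched_tracks: List[PendingTrack],
                              predict_next_box: Callable[[List[tuple]], tuple],
                              exist_count_thr: int = 3,       # preset existence-count threshold (illustrative)
                              max_frames_in_queue: int = 30   # allowed duration in the tracking queue (illustrative)
                              ) -> List[PendingTrack]:
    """Keep, extrapolate, or delete tracking targets that found no matching detection (sketch)."""
    survivors: List[PendingTrack] = []
    for trk in unmatched_tracks:
        if trk.exist_count > exist_count_thr:
            # Existence count already above the threshold: only accumulate the counter.
            trk.exist_count += 1
        else:
            # At or below the threshold: predict the next-frame box from the recent
            # boxes and take it as the track's spatial position information.
            trk.boxes.append(predict_next_box(trk.boxes))
            trk.exist_count += 1
        trk.frames_in_queue += 1
        # A track that has lingered in the tracking queue beyond the allowed
        # duration is deleted; otherwise it is output as a current tracking target.
        if trk.frames_in_queue <= max_frames_in_queue:
            survivors.append(trk)
    return survivors
```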
Overlap calculation is performed between the output bounding box of each detection target and the bounding boxes of the previous-frame tracking targets according to the target category and the target tracking identification code corresponding to each detection target; overlap values between one detection target and all tracking targets of the same category are computed; and the overlap is calculated from the square of the distance between the detection target bounding box and the tracking target bounding box and the area of the tracking target bounding box. The bounding box of a target is a rectangular frame determined by its upper-left corner point (x1, y1) and its lower-right corner point (x2, y2). Commonly used descriptions of the spatial position of a detection result also include a rotation angle, a contour, and the like. When the overlap based on the squared center distance and the area of the tracking target bounding box is used, the coordinates of the center point of a target bounding box are

(cx, cy) = ((x1 + x2) / 2, (y1 + y2) / 2)

and the square of the distance between the center points of two target bounding boxes is

d^2 = (cx_det - cx_trk)^2 + (cy_det - cy_trk)^2

From these, the overlap of the squared distance between the detection target bounding box and the tracking target bounding box with the area of the tracking target bounding box can be calculated.
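As a concrete illustration of the center-distance measure, the sketch below computes the squared distance between the two box centers and relates it to the area of the tracking target's bounding box. Because the patent's exact formula is given only as equation images, the normalization used here (area divided by area plus squared distance, so that a larger value means greater overlap) is an assumption chosen purely for illustration.

```python
def center_distance_overlap(det_box: tuple, trk_box: tuple) -> float:
    """Overlap measure built from the squared center distance and the tracked box area (sketch).

    det_box and trk_box are (x1, y1, x2, y2), with (x1, y1) the upper-left
    and (x2, y2) the lower-right corner of the bounding box.
    """
    # Center points of the detection and tracking bounding boxes.
    det_cx, det_cy = (det_box[0] + det_box[2]) / 2.0, (det_box[1] + det_box[3]) / 2.0
    trk_cx, trk_cy = (trk_box[0] + trk_box[2]) / 2.0, (trk_box[1] + trk_box[3]) / 2.0

    # Square of the distance between the two center points.
    d2 = (det_cx - trk_cx) ** 2 + (det_cy - trk_cy) ** 2

    # Area of the tracking target's bounding box.
    area = max((trk_box[2] - trk_box[0]) * (trk_box[3] - trk_box[1]), 1e-6)

    # Hypothetical normalization: 1.0 when the centers coincide, decreasing as the
    # squared distance grows relative to the tracked box area.
    return area / (area + d2)
```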
Position prediction is performed on the target bounding box corresponding to the tracking target and the bounding boxes of the tracking target in the previous n frames of the two-dimensional video image, in the following manner (the prediction equations appear in the published text only as images and are not reproduced here):
for the same target, the center coordinates of its bounding boxes in the previous n frames are (cx_1, cy_1), ..., (cx_n, cy_n);
for the same target, the widths and heights of its bounding boxes in the previous n frames are (w_1, h_1), ..., (w_n, h_n);
the center coordinates of the bounding box of the tracking target in the next frame are predicted from these center coordinates;
the width and height of the bounding box of the tracking target in the next frame are predicted from these widths and heights;
and the bounding box of the tracking target in the next frame is obtained from the predicted center coordinates, width and height.
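Since the prediction equations are available only as images in the published text, the sketch below substitutes a simple constant-velocity extrapolation over the bounding boxes of the last n frames; the averaging scheme, the helper name predict_next_box, and the box format are assumptions, not the patent's formulas.

```python
from typing import List, Tuple

def predict_next_box(boxes: List[Tuple[float, float, float, float]],
                     n: int = 5) -> Tuple[float, float, float, float]:
    """Predict the next-frame bounding box from the boxes of the previous n frames (sketch).

    boxes are (x1, y1, x2, y2) tuples, oldest first; at least one box is required.
    A constant-velocity assumption is used: the average per-frame change of the
    box center and size over the last n frames is applied one more time.
    """
    hist = boxes[-n:]
    if len(hist) < 2:
        # Not enough history to estimate motion: repeat the most recent box.
        return hist[-1]

    centers = [((x1 + x2) / 2.0, (y1 + y2) / 2.0) for x1, y1, x2, y2 in hist]
    sizes = [(x2 - x1, y2 - y1) for x1, y1, x2, y2 in hist]

    # Average per-frame change of the center coordinates and of the width/height.
    k = len(hist) - 1
    dcx = (centers[-1][0] - centers[0][0]) / k
    dcy = (centers[-1][1] - centers[0][1]) / k
    dw = (sizes[-1][0] - sizes[0][0]) / k
    dh = (sizes[-1][1] - sizes[0][1]) / k

    # Extrapolate the center and size one frame ahead.
    cx, cy = centers[-1][0] + dcx, centers[-1][1] + dcy
    w, h = max(sizes[-1][0] + dw, 1.0), max(sizes[-1][1] + dh, 1.0)

    # Convert the predicted center/width/height back to corner form.
    return (cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0)
```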
the foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (4)

1. A target detection and tracking method based on high and low dual thresholds, characterized by comprising the following steps:
a target detector detects an input two-dimensional video image and acquires information corresponding to each detection target;
the confidence value of the spatial position information corresponding to each detection target is filtered and screened against a preset low detection confidence threshold;
if the confidence is lower than the preset low detection confidence threshold, the detection target is deleted;
if the confidence is higher than or equal to the preset low detection confidence threshold, the spatial position information corresponding to the detection target is output;
according to the target category and the target tracking identification code corresponding to each detection target, overlap calculation is performed between the output spatial position information of the detection target and the spatial position information of the tracking targets in the previous frame, overlap values between one detection target and all tracking targets of the same category are computed, and the overlap values are matched against a preset overlap threshold;
if the overlap is greater than or equal to the preset overlap threshold, the detection target is output to a matched detection result set, the corresponding tracking target is output to a matched tracking result set, and the spatial position information of the tracking target in the matched tracking result set is replaced with the spatial position information of the detection target in the matched detection result set to obtain the corresponding current tracking target;
if the overlap is smaller than the preset overlap threshold, the detection target is output to an unmatched detection result set;
the confidence value of the spatial position information of each detection target in the unmatched detection result set is filtered and screened against a preset high detection confidence threshold;
if the confidence is lower than the preset high detection confidence threshold, the detection target is judged not to be a corresponding tracking target and is deleted;
and if the confidence is higher than or equal to the preset high detection confidence threshold, the detection target is judged to be a current tracking target.
2. The target detection and tracking method based on high and low dual thresholds according to claim 1, characterized by further comprising: if the overlap is smaller than the preset overlap threshold, outputting the corresponding tracking target to an unmatched tracking result set;
comparing the existence count of the spatial position information corresponding to each tracking target in the unmatched tracking result set with a preset existence-count threshold;
if the existence count is greater than the preset existence-count threshold, accumulating and outputting the existence count of the spatial position information corresponding to the tracking target;
if the existence count is smaller than or equal to the preset existence-count threshold, performing position prediction on the spatial position information corresponding to the tracking target and the spatial position information of the tracking target in the previous n frames of the two-dimensional video image to obtain the spatial position information of the tracking target in the next frame of the two-dimensional video image, taking it as the spatial position information of the tracking target, and accumulating and outputting the existence count of the spatial position information corresponding to the tracking target;
comparing the duration for which the spatial position information corresponding to the tracking target, with its accumulated existence count, has remained in the tracking queue with a threshold on the duration for which a tracking target is allowed to remain in the tracking queue;
if the duration is greater than the threshold on the duration for which a tracking target is allowed to remain in the tracking queue, deleting the tracking target;
and if the duration is smaller than or equal to the threshold on the duration for which a tracking target is allowed to remain in the tracking queue, outputting the tracking target as a current tracking target.
3. The target detection and tracking method based on high and low dual thresholds according to claim 2, characterized in that the spatial position information is the bounding box of the corresponding target; overlap calculation is performed between the output bounding box of each detection target and the bounding boxes of the previous-frame tracking targets according to the target category and target tracking identification code corresponding to each detection target, overlap values between one detection target and all tracking targets of the same category are computed, and the overlap is calculated from the square of the distance between the detection target bounding box and the tracking target bounding box and the area of the tracking target bounding box.
4. The target detection and tracking method based on high and low dual thresholds according to claim 3, characterized in that position prediction is performed on the target bounding box corresponding to the tracking target and the bounding boxes of the tracking target in the previous n frames of the two-dimensional video image, in the following manner (the prediction equations appear in the published text only as images and are not reproduced here):
for the same target, the center coordinates of its bounding boxes in the previous n frames are (cx_1, cy_1), ..., (cx_n, cy_n);
for the same target, the widths and heights of its bounding boxes in the previous n frames are (w_1, h_1), ..., (w_n, h_n);
the center coordinates of the bounding box of the tracking target in the next frame are predicted from these center coordinates;
the width and height of the bounding box of the tracking target in the next frame are predicted from these widths and heights;
and the bounding box of the tracking target in the next frame is obtained from the predicted center coordinates, width and height.
CN202110350480.4A 2021-03-31 2021-03-31 Target detection and tracking method based on high and low dual thresholds Pending CN112990072A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110350480.4A CN112990072A (en) 2021-03-31 2021-03-31 Target detection and tracking method based on high and low dual thresholds

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110350480.4A CN112990072A (en) 2021-03-31 2021-03-31 Target detection and tracking method based on high and low dual thresholds

Publications (1)

Publication Number Publication Date
CN112990072A true CN112990072A (en) 2021-06-18

Family

ID=76338781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110350480.4A Pending CN112990072A (en) 2021-03-31 2021-03-31 Target detection and tracking method based on high and low dual thresholds

Country Status (1)

Country Link
CN (1) CN112990072A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706555A (en) * 2021-08-12 2021-11-26 北京达佳互联信息技术有限公司 Video frame processing method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080673A (en) * 2019-12-10 2020-04-28 清华大学深圳国际研究生院 Anti-occlusion target tracking method
CN111383246A (en) * 2018-12-29 2020-07-07 杭州海康威视数字技术股份有限公司 Scroll detection method, device and equipment
CN112215155A (en) * 2020-10-13 2021-01-12 北京中电兴发科技有限公司 Face tracking method and system based on multi-feature fusion

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111383246A (en) * 2018-12-29 2020-07-07 杭州海康威视数字技术股份有限公司 Scroll detection method, device and equipment
CN111080673A (en) * 2019-12-10 2020-04-28 清华大学深圳国际研究生院 Anti-occlusion target tracking method
CN112215155A (en) * 2020-10-13 2021-01-12 北京中电兴发科技有限公司 Face tracking method and system based on multi-feature fusion

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706555A (en) * 2021-08-12 2021-11-26 北京达佳互联信息技术有限公司 Video frame processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN106951879B (en) Multi-feature fusion vehicle detection method based on camera and millimeter wave radar
CN106934817B (en) Multi-attribute-based multi-target tracking method and device
Luvizon et al. Vehicle speed estimation by license plate detection and tracking
CN103226891B (en) Video-based vehicle collision accident detection method and system
WO2019076187A1 (en) Video blocking region selection method and apparatus, electronic device, and system
EP2709066A1 (en) Concept for detecting a motion of a moving object
GB2392033A (en) Video motion anomaly detector
CN111626275B (en) Abnormal parking detection method based on intelligent video analysis
JP6679858B2 (en) Method and apparatus for detecting occlusion of an object
Denman et al. Multi-spectral fusion for surveillance systems
Huang et al. A real-time and color-based computer vision for traffic monitoring system
CN112990072A (en) Target detection and tracking method based on high and low dual thresholds
WO2022142416A1 (en) Target tracking method and related device
CN110660225A (en) Red light running behavior detection method, device and equipment
TWI517100B (en) Method for tracking moving object and electronic apparatus using the same
EP2709065A1 (en) Concept for counting moving objects passing a plurality of different areas within a region of interest
Płaczek A real time vehicle detection algorithm for vision-based sensors
Tsai et al. Multi-lane detection and road traffic congestion classification for intelligent transportation system
CN114882709A (en) Vehicle congestion detection method and device and computer storage medium
CN114419531A (en) Object detection method, object detection system, and computer-readable storage medium
CN114445786A (en) Road congestion detection method and device, electronic equipment and storage medium
CN103714552A (en) Method and device for elimination of motion shadows and intelligent video analysis system
Del Carmen et al. Assessment of vision-based vehicle tracking for traffic monitoring applications
Lashkov et al. Computing‐efficient video analytics for nighttime traffic sensing
Bachtiar et al. Parking management by means of computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210618)