CN113160274A - Improved deep sort target detection tracking method based on YOLOv4 - Google Patents


Publication number
CN113160274A
Authority
CN
China
Legal status
Pending
Application number
CN202110417776.3A
Other languages
Chinese (zh)
Inventor
陈紫强
张雅琼
晋良念
谢跃雷
Current Assignee
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology

Classifications

    • G06T 7/246 — Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06F 18/22 — Pattern recognition; matching criteria, e.g. proximity measures
    • G06T 7/277 — Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T 2207/10016 — Image acquisition modality: video; image sequence
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06V 10/25 — Image preprocessing; determination of region of interest [ROI] or volume of interest [VOI]
    • G06V 20/584 — Recognition of moving objects or obstacles exterior to a vehicle, e.g. vehicles or pedestrians, or of traffic objects, e.g. vehicle lights or traffic lights
    • G06V 2201/07 — Target detection
    • G06V 2201/08 — Detecting or categorising vehicles

Abstract

The invention discloses an improved DeepSORT target detection and tracking method based on YOLOv4, comprising: inputting data and obtaining the detection boxes of the current frame; predicting the track with a Kalman filtering algorithm to obtain prediction boxes; cascade-matching the prediction boxes against the detection boxes of the next frame using the Hungarian algorithm; performing GIoU association matching on tracks for which cascade matching failed; updating the tracks with the Kalman filtering algorithm, incrementing the tracking count by 1 on success and leaving it unchanged on failure; and repeating these steps, with tracking deemed successful once the detection-tracking count equals a set number. Under weak illumination and occlusion the method tracks more reliably, reduces missed detections, and improves the robustness of the system.

Description

Improved deep sort target detection tracking method based on YOLOv4
Technical Field
The invention relates to the technical field of computer vision, and in particular to an improved DeepSORT target detection and tracking method based on YOLOv4.
Background
The main purpose of vehicle detection and tracking is to identify vehicles in a forward-facing region of interest, determine their state, and then track them.
The core idea of YOLO is to cast target detection as a regression problem: the whole image is fed into a neural network, which directly outputs bounding-box positions and class labels, yielding an end-to-end detector. YOLOv1 divides the picture into an S×S grid, with each cell responsible for predicting targets whose center falls inside it; detection is fast and transfers well, but small targets are handled poorly. YOLOv2 adopts Darknet-19 as its feature-extraction network and connects shallow features with deep ones, which benefits small-target detection and improves accuracy. YOLOv3 replaces Darknet-19 with Darknet-53 and performs multi-scale detection with a feature-pyramid network structure, greatly improving both real-time performance and detection accuracy. YOLOv4 uses CSPDarknet53 as its backbone feature-extraction network and, by adding SPP and PANet structures, improves accuracy by nearly 10 points over YOLOv3.
Multi-target tracking is dominated by the SORT and DeepSORT algorithms. SORT predicts and updates targets with a Kalman filter and matches prediction boxes to detection boxes with the Hungarian algorithm, but its tracking degrades under occlusion. DeepSORT adds cascade matching on top of SORT: within the cascade, prediction boxes of target tracks are associated with detection boxes through the Hungarian algorithm, which reduces false and missed detections; tracking nevertheless remains unreliable under low illumination and occlusion.
Disclosure of Invention
The invention aims to provide an improved DeepSORT target detection and tracking method based on YOLOv4, in order to solve the problem that target tracking performs poorly under weak illumination and occlusion.
To achieve the above object, the present invention provides an improved DeepSORT target detection and tracking method based on YOLOv4, which comprises:
S101, inputting data and obtaining the detection boxes of the current frame;
S102, predicting the track with a Kalman filtering algorithm based on the detection boxes of the current frame to obtain prediction boxes;
S103, cascade-matching the prediction boxes against the detection boxes of the next frame using the Hungarian algorithm;
S104, performing GIoU association matching on tracks for which cascade matching failed;
S105, updating the tracks with the Kalman filtering algorithm, incrementing the tracking count by 1 if the target was tracked successfully and leaving it unchanged otherwise;
S106, repeating steps S101 to S105; when the detection-tracking count equals the set number, tracking is deemed successful.
Wherein, after the data is input and the detection boxes are obtained, and before track prediction is performed with the Kalman filtering algorithm to obtain the prediction boxes, the method further comprises: screening the detection boxes.
The specific steps of screening the detection boxes are: removing detection boxes with a confidence score below 0.7; removing overlapping detection boxes by non-maximum suppression.
The specific steps of inputting data and obtaining the detection boxes are: inputting a target picture; resizing the picture without distortion, normalizing it, and feeding it into the convolutional neural network for one forward pass; adding a batch dimension to the pictures; obtaining the prediction result of the picture; and decoding the prior boxes with the prediction result to obtain the final prediction boxes, judging whether each prior box contains an object and, if so, its class.
The specific steps of cascade matching between the prediction boxes and the detection boxes of the next frame based on the Hungarian algorithm are: obtaining the target detection boxes of the next frame; matching according to the Hungarian algorithm: if a prediction box appears among the target detection boxes, the matched detection box is traversed, the match succeeds and the method jumps to step S105; if the prediction box does not appear among the target detection boxes, the method jumps to step S104; and if a new target appears among the target detection boxes, the method jumps to step S102.
The specific steps of GIoU association matching for tracks whose cascade matching failed are:
if the prediction box appears among the target detection boxes, the match succeeds and the method jumps to step S105; if a new target appears among the target detection boxes, it is initialized and the method returns to S102; and if the prediction box does not appear among the target detection boxes for more than the maximum number of cyclically detected frames, the tracked object is deleted.
In the improved DeepSORT target detection and tracking method based on YOLOv4, matching is performed with the Hungarian algorithm: first, prediction boxes and detection boxes are matched by the Hungarian algorithm within cascade matching, and then GIoU association matching, replacing IoU association matching, is applied to targets that failed to match. IoU matching only compares overlap areas and cannot measure the distance between two boxes or the way they intersect; GIoU adds a measure of the enclosing scale that accounts for the space outside the boxes' intersection, improving the matching of prediction boxes to detection boxes. Under weak illumination and occlusion the method tracks more reliably, reduces missed detections, and improves the robustness of the system.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of an improved deep sort target detection tracking method based on YOLOv4 in accordance with the present invention;
FIG. 2 is a flow chart of inputting data to obtain the detection boxes of the current frame according to the present invention;
FIG. 3 is a flow chart of screening detection boxes according to the present invention;
FIG. 4 is a flow chart of cascade matching between the prediction boxes and the detection boxes of the next frame based on the Hungarian algorithm according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
Referring to fig. 1 to 4, the present invention provides an improved deep sort target detection and tracking method based on YOLOv4, including:
s101, inputting data and acquiring a detection frame of a current frame;
After the data is input, detection is performed by the YOLOv4 algorithm, yielding the detection boxes and the deep features of the image;
the method comprises the following specific steps:
s201, inputting a target picture;
s202, performing undistorted operation and normalization on the target picture, and inputting the target picture into a convolutional neural network for one-time forward propagation;
s203, adding dimensions to the pictures in batches;
s204, obtaining a prediction result of the picture;
obtaining the prediction result of the picture through the self_out(feed_fact) function;
s205, decoding the prior frame by using the prediction result to obtain a final prediction frame, and judging whether the prior frame contains an object or not and the type of the object in the prior frame.
After the detection boxes of the current frame are obtained, they can be screened. The specific screening steps are:
s301, removing detection frames with confidence coefficient less than 0.7;
s302 suppresses the removal of the overlapped detection frame based on the non-maximum value.
S102, predicting the track with a Kalman filtering algorithm based on the detection boxes of the current frame to obtain prediction boxes;
the algorithm uses an X ═ 8-dimensional state vector as a direct observation model of a target, wherein (u, v) represents the central position coordinates of a Bounding box, r and h represent the aspect ratio and height of the Bounding box respectively, and the other 4 vectors represent the corresponding speed information;
the prediction equation of kalman filtering is:
Figure BDA0003026635870000041
Figure BDA0003026635870000042
wherein the content of the first and second substances,
Figure BDA0003026635870000043
is a priori predicted value, P, of the system state at time t, predicted from the system parameters at time t-1t|t-1Is a covariance matrix of prior prediction errors, E is a mathematical expectation, FtAssociating the state of the system at the time t-1 with the state at the time t as a state transition matrix, wherein the size of the matrix is n multiplied by n; u. oftFor input control of the system, the matrix size is kX 1; b istFor inputting control matrix, input is controlled by utAnd system state XtIn association, the matrix size is n × k.
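A minimal numerical sketch of the prediction step for the 8-dimensional state, assuming a constant-velocity model with unit time step; the process-noise covariance Q below is an illustrative value, not the patent's parameter:

```python
import numpy as np

# State x = [u, v, r, h, du, dv, dr, dh]; constant-velocity transition.
DT = 1.0
F = np.eye(8)
F[:4, 4:] = DT * np.eye(4)      # position components += velocity * DT
Q = 1e-2 * np.eye(8)            # process-noise covariance (assumed value)

def kf_predict(x, P):
    """Prior estimate: x' = F x,  P' = F P F^T + Q."""
    x_prior = F @ x
    P_prior = F @ P @ F.T + Q
    return x_prior, P_prior
```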
S103, cascade-matching the prediction boxes against the detection boxes of the next frame using the Hungarian algorithm;
the method comprises the following specific steps:
s401, acquiring a target detection frame of a next frame;
the target detection frame is acquired in the same manner as step S101.
S402, matching according to the Hungarian algorithm: if a prediction box appears among the target detection boxes, the matched detection box is traversed, the match succeeds and the method jumps to step S105; if the prediction box does not appear among the target detection boxes, the method jumps to step S104; and if a new target appears among the target detection boxes, the method jumps to step S102.
The Hungarian algorithm obtains its cost matrix by combining motion matching and appearance matching. The motion distance between a track and a detection is measured by the squared Mahalanobis distance:

d⁽¹⁾(i, j) = (d_j − y_i)ᵀ S_i⁻¹ (d_j − y_i)    (3)

b⁽¹⁾_{i,j} = 𝟙[d⁽¹⁾(i, j) ≤ t⁽¹⁾]    (4)

where d_j denotes the j-th detection box, y_i the i-th predicted track, and S_i the covariance matrix between the predicted track obtained by the Kalman filter and the detection box obtained by target detection. Equation (4) is an indicator that compares the Mahalanobis distance against a threshold drawn from the chi-squared distribution: when d⁽¹⁾(i, j) ≤ t⁽¹⁾, the j-th detection box is associated with the i-th predicted track, the indicator equals 1 and the match succeeds. The threshold is set to t⁽¹⁾ = 9.4877.
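The squared Mahalanobis distance and its chi-squared gate (threshold 9.4877) can be computed as follows for a 4-dimensional measurement; the vectorized form is a sketch, not the patent's code:

```python
import numpy as np

CHI2_GATE = 9.4877   # threshold t(1) from the text

def mahalanobis_gate(track_mean, track_cov, detections):
    """Squared Mahalanobis distance (d_j - y_i)^T S^-1 (d_j - y_i) between
    one predicted track and each detection in 4-d measurement space."""
    S_inv = np.linalg.inv(track_cov)
    diff = detections - track_mean                    # shape (m, 4)
    d1 = np.einsum('mi,ij,mj->m', diff, S_inv, diff)  # per-row quadratic form
    return d1, d1 <= CHI2_GATE
```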
The distance between appearance features is measured by the cosine distance:

d⁽²⁾(i, j) = min{ 1 − r_jᵀ r_k⁽ⁱ⁾ | r_k⁽ⁱ⁾ ∈ R_i }    (5)

b⁽²⁾_{i,j} = 𝟙[d⁽²⁾(i, j) ≤ t⁽²⁾]

where r_jᵀ r_k⁽ⁱ⁾ is the cosine similarity between the appearance feature r_j of the j-th detection and the k-th feature r_k⁽ⁱ⁾ stored for the i-th predicted track, so the cosine distance equals 1 minus the cosine similarity. Measuring the appearance features of tracks and detections by the cosine distance makes the predicted target ID more accurate and reduces the number of ID switches. When d⁽²⁾(i, j) ≤ t⁽²⁾, i.e. the cosine distance does not exceed the specified threshold, the appearance features are considered matched.
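The cosine distance between a detection's appearance feature and a track's stored feature gallery can be sketched as follows (features are L2-normalized so a dot product gives the cosine similarity):

```python
import numpy as np

def cosine_distance(det_feature, track_gallery):
    """Minimum over the track's stored features r_k of 1 - cos(r_j, r_k)."""
    r_j = det_feature / np.linalg.norm(det_feature)
    gallery = track_gallery / np.linalg.norm(track_gallery, axis=1, keepdims=True)
    return float(np.min(1.0 - gallery @ r_j))   # smallest distance to the gallery
```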
The overall matching degree is a weighted combination of the motion model and the appearance model:

c_{i,j} = λ d⁽¹⁾(i, j) + (1 − λ) d⁽²⁾(i, j)    (6)

b_{i,j} = ∏_{m=1}^{2} b⁽ᵐ⁾_{i,j}    (7)

where λ is a hyper-parameter and b_{i,j} is an indicator: only when b_{i,j} = 1 are the j-th detection box and the i-th predicted track preliminarily matched.
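Combining the two distances into the weighted cost and solving the assignment with the Hungarian algorithm can be sketched with SciPy's `linear_sum_assignment`; the gating thresholds and the value of λ below are illustrative, not the patent's parameters:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

def associate(d1, d2, lam=0.5, gate1=9.4877, gate2=0.3):
    """Weighted cost matrix with gating: pairs whose motion or appearance
    distance exceeds its gate are made infeasible before assignment."""
    cost = lam * d1 + (1.0 - lam) * d2
    infeasible = (d1 > gate1) | (d2 > gate2)
    cost = np.where(infeasible, 1e5, cost)          # large penalty cost
    rows, cols = linear_sum_assignment(cost)
    # drop pairs the solver was forced to take despite being gated
    return [(r, c) for r, c in zip(rows, cols) if not infeasible[r, c]]
```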
S104, performing GIoU association matching on tracks for which cascade matching failed;
if the prediction frame appears in the target detection frame, matching is successful and the step S105 is skipped; if a new target appears in the target detection frame, initializing, and then returning to S102; if the prediction frame does not appear in the target detection frame and exceeds the maximum cycle detection frame number, deleting the tracked object, specifically, if the state of one target is a stable state and three continuous frames are successfully tracked, skipping to the step S105, wherein the first frame is successfully tracked, but the corresponding target detection frame cannot be successfully matched in the next 2 frames and exceeds the maximum cycle detection number, failing to track, considering that the target moves away from the tracking range, and deleting the tracked object; and if the state of one target is an uncertain state, directly deleting the state.
S105, updating the tracks with the Kalman filtering algorithm, incrementing the tracking count by 1 if the target was tracked successfully and leaving it unchanged otherwise;
the update equation of kalman filtering is:
Figure BDA0003026635870000061
Pt|t=Pt|t-1-KtHtPt|t-1 (13)
Figure BDA0003026635870000062
wherein, Xt|tIs the posterior predicted value of the system state after updating at time t, ZtFor the observed value of the system at time t,
Figure BDA0003026635870000063
Htfor the state transition matrix, the real state of the system at time t is associated with an observed value, Pt|tIs the posterior prediction error covariance matrix, K, of the system updated at time ttIs the gain of the kalman gain (in),
Figure BDA0003026635870000064
Figure BDA0003026635870000065
and
Figure BDA0003026635870000066
variance, R, of system predicted and observed values, respectivelytIs a covariance matrix.
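A matching sketch of the update step, assuming the observation picks out the first four state components (u, v, r, h); the measurement-noise covariance R below is an illustrative value:

```python
import numpy as np

H = np.hstack([np.eye(4), np.zeros((4, 4))])   # 4x8 observation matrix
R = 0.1 * np.eye(4)                            # measurement noise (assumed)

def kf_update(x_prior, P_prior, z):
    """K = P' H^T (H P' H^T + R)^-1;  x = x' + K (z - H x');  P = P' - K H P'."""
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = P_prior - K @ H @ P_prior
    return x_post, P_post
```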
As shown in fig. 2, the improved DeepSORT network structure based on YOLOv4 of the present invention replaces IoU matching with GIoU matching after cascade matching. Specifically:

1) The IoU is:

IoU = |A ∩ B| / |A ∪ B|    (15)

LOSS_IoU = 1 − IoU    (16)

2) The GIoU is:

GIoU = IoU − |C \ (A ∪ B)| / |C|    (17)

LOSS_GIoU = 1 − GIoU    (18)

where A is the area of the target detection box, B is the area of the prediction box, and C is the area of the smallest rectangular box enclosing both A and B.
Table 1 presents the cascaded matching procedure of the detection-and-tracking algorithm.
At the input, the indices of the tracks T and the detections D, together with the maximum number of cyclically detected frames A_max, are provided;
1) the first and second rows compute the combined cost matrix and the gating matrix;
2) the matched set and the unmatched detection set are initialized;
3) the track age n is iterated over to solve the linear assignment problem for the target tracks;
4) the subset T_n of tracks that have not been matched to a detection in the latest n frames is selected;
5) the linear assignment problem between the tracks in T_n and the unmatched detections U is solved;
6) the matched set and the unmatched detection set are updated;
7) the tracking match is completed and the result is returned.
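The steps above can be sketched as a matching cascade in which recently updated tracks get priority; `greedy_min_cost_matching` below is a toy stand-in for the gated Hungarian association of step S103, not the real solver:

```python
# Sketch of the cascaded matching of Table 1.
def greedy_min_cost_matching(tracks, detections, track_ids, det_ids):
    """Toy solver: pair each candidate track with the first free detection."""
    pairs, rest = [], list(det_ids)
    for t in track_ids:
        if rest:
            pairs.append((t, rest.pop(0)))
    return pairs, [], rest

def matching_cascade(tracks, detections, max_age,
                     min_cost_matching=greedy_min_cost_matching):
    unmatched_dets = list(range(len(detections)))
    matches = []
    for n in range(max_age):                 # iterate over track age n
        if not unmatched_dets:
            break
        tracks_n = [i for i, t in enumerate(tracks)
                    if t['time_since_update'] == n]   # subset T_n
        if not tracks_n:
            continue
        m, _, unmatched_dets = min_cost_matching(
            tracks, detections, tracks_n, unmatched_dets)
        matches += m
    unmatched_tracks = [i for i in range(len(tracks))
                        if i not in {ti for ti, _ in matches}]
    return matches, unmatched_tracks, unmatched_dets
```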
S106, repeating steps S101 to S105; when the detection-tracking count equals the set number, tracking is deemed successful.
If the set number is 3, a target must be tracked for 3 consecutive frames before tracking is considered successful. If the target is tracked successfully in the first frame but cannot be matched to a corresponding target detection box in the following 2 frames, exceeding the maximum number of cyclic detections, tracking fails: the target is considered to have left the tracking range, is deleted, and is no longer tracked.
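The confirmation and deletion rules described above (3 consecutive hits to confirm; an uncertain track deleted on its first miss; a confirmed track deleted once its misses exceed the maximum) can be sketched as a small state machine; the class and attribute names here are illustrative, not from the patent:

```python
class TrackState:
    """Minimal track life-cycle: tentative -> confirmed -> deleted."""
    def __init__(self, n_init=3, max_age=2):
        self.hits, self.misses = 0, 0
        self.n_init, self.max_age = n_init, max_age
        self.status = 'tentative'

    def mark_hit(self):
        self.hits += 1
        self.misses = 0
        if self.status == 'tentative' and self.hits >= self.n_init:
            self.status = 'confirmed'        # tracked n_init consecutive frames

    def mark_miss(self):
        self.misses += 1
        if self.status == 'tentative' or self.misses > self.max_age:
            self.status = 'deleted'          # left the tracking range
```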
In the improved DeepSORT target detection and tracking method based on YOLOv4, matching is performed with the Hungarian algorithm: first, prediction boxes and detection boxes are matched by the Hungarian algorithm within cascade matching, and then GIoU association matching, replacing IoU association matching, is applied to targets that failed to match. IoU matching only compares overlap areas and cannot measure the distance between two boxes or the way they intersect; GIoU adds a measure of the enclosing scale that accounts for the space outside the boxes' intersection, improving the matching of prediction boxes to detection boxes. Under weak illumination and occlusion the method tracks more reliably, reduces missed detections, and improves the robustness of the system.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. An improved DeepSORT target detection and tracking method based on YOLOv4, characterized in that
the method comprises the following steps: S101, inputting data and obtaining the detection boxes of the current frame;
S102, predicting the track with a Kalman filtering algorithm based on the detection boxes of the current frame to obtain prediction boxes;
S103, cascade-matching the prediction boxes against the detection boxes of the next frame using the Hungarian algorithm;
S104, performing GIoU association matching on tracks for which cascade matching failed;
S105, updating the tracks with the Kalman filtering algorithm, incrementing the tracking count by 1 if the target was tracked successfully and leaving it unchanged otherwise;
S106, repeating steps S101 to S105; when the detection-tracking count equals the set number, tracking is deemed successful.
2. The improved DeepSORT target detection and tracking method based on YOLOv4 according to claim 1, characterized in that
after the data is input and the detection boxes are obtained, and before track prediction is performed with the Kalman filtering algorithm to obtain the prediction boxes, the method further comprises: screening the detection boxes.
3. The improved DeepSORT target detection and tracking method based on YOLOv4 according to claim 2, characterized in that
the specific steps of screening the detection boxes are:
removing detection boxes with a confidence score below 0.7;
removing overlapping detection boxes by non-maximum suppression.
4. The improved DeepSORT target detection and tracking method based on YOLOv4 according to claim 1, characterized in that
the specific steps of inputting data and obtaining the detection boxes are:
inputting a target picture;
resizing the picture without distortion, normalizing it, and feeding it into the convolutional neural network for one forward pass;
adding a batch dimension to the pictures;
obtaining the prediction result of the picture;
and decoding the prior boxes with the prediction result to obtain the final prediction boxes, judging whether each prior box contains an object and, if so, its class.
5. The improved DeepSORT target detection and tracking method based on YOLOv4 according to claim 1, characterized in that
the specific steps of cascade matching between the prediction boxes and the detection boxes of the next frame based on the Hungarian algorithm are:
obtaining the target detection boxes of the next frame;
matching according to the Hungarian algorithm: if a prediction box appears among the target detection boxes, the matched detection box is traversed, the match succeeds and the method jumps to step S105; if the prediction box does not appear among the target detection boxes, the method jumps to step S104; and if a new target appears among the target detection boxes, the method jumps to step S102.
6. The improved DeepSORT target detection and tracking method based on YOLOv4 according to claim 1, characterized in that
the specific steps of GIoU association matching for tracks whose cascade matching failed are:
if the prediction box appears among the target detection boxes, the match succeeds and the method jumps to step S105; if a new target appears among the target detection boxes, it is initialized and the method returns to S102; and if the prediction box does not appear among the target detection boxes for more than the maximum number of cyclically detected frames, the tracked object is deleted.
CN202110417776.3A 2021-04-19 2021-04-19 Improved deep sort target detection tracking method based on YOLOv4 Pending CN113160274A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110417776.3A CN113160274A (en) 2021-04-19 2021-04-19 Improved deep sort target detection tracking method based on YOLOv4


Publications (1)

Publication Number Publication Date
CN113160274A (en) 2021-07-23

Family

ID=76868554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110417776.3A Pending CN113160274A (en) 2021-04-19 2021-04-19 Improved deep sort target detection tracking method based on YOLOv4

Country Status (1)

Country Link
CN (1) CN113160274A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537170A (en) * 2021-09-16 2021-10-22 北京理工大学深圳汽车研究院(电动车辆国家工程实验室深圳研究院) Intelligent traffic road condition monitoring method and computer readable storage medium
CN113674317A (en) * 2021-08-10 2021-11-19 深圳市捷顺科技实业股份有限公司 Vehicle tracking method and device of high-order video
CN113723311A (en) * 2021-08-31 2021-11-30 浙江大华技术股份有限公司 Target tracking method
CN113791140A (en) * 2021-11-18 2021-12-14 湖南大学 Bridge bottom interior nondestructive testing method and system based on local vibration response
CN113888825A (en) * 2021-09-16 2022-01-04 无锡湖山智能科技有限公司 Monitoring system and method for driving safety
CN113962282A (en) * 2021-08-19 2022-01-21 大连海事大学 Improved YOLOv5L + Deepsort-based real-time detection system and method for ship engine room fire
CN113983737A (en) * 2021-10-18 2022-01-28 海信(山东)冰箱有限公司 Refrigerator and food material positioning method thereof
CN114067564A (en) * 2021-11-15 2022-02-18 武汉理工大学 Traffic condition comprehensive monitoring method based on YOLO
CN114255434A (en) * 2022-03-01 2022-03-29 深圳金三立视频科技股份有限公司 Multi-target tracking method and device
CN114782495A (en) * 2022-06-16 2022-07-22 西安中科立德红外科技有限公司 Multi-target tracking method, system and computer storage medium
CN114897944A (en) * 2021-11-10 2022-08-12 北京中电兴发科技有限公司 Multi-target continuous tracking method based on DeepSORT
WO2023065395A1 (en) * 2021-10-18 2023-04-27 中车株洲电力机车研究所有限公司 Work vehicle detection and tracking method and system
CN116309696A (en) * 2022-12-23 2023-06-23 苏州驾驶宝智能科技有限公司 Multi-category multi-target tracking method and device based on improved generalized cross-over ratio
CN116580066A (en) * 2023-07-04 2023-08-11 广州英码信息科技有限公司 Pedestrian target tracking method under low frame rate scene and readable storage medium
CN116740753A (en) * 2023-04-20 2023-09-12 安徽大学 Target detection and tracking method and system based on improved YOLOv5 and deep SORT

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816690A (en) * 2018-12-25 2019-05-28 北京飞搜科技有限公司 Multi-target tracking method and system based on depth characteristic
CN109903312A (en) * 2019-01-25 2019-06-18 北京工业大学 A kind of football sportsman based on video multi-target tracking runs distance statistics method
CN110889324A (en) * 2019-10-12 2020-03-17 南京航空航天大学 Thermal infrared image target identification method based on YOLO V3 terminal-oriented guidance
CN111369590A (en) * 2020-02-27 2020-07-03 北京三快在线科技有限公司 Multi-target tracking method and device, storage medium and electronic equipment
CN111488795A (en) * 2020-03-09 2020-08-04 天津大学 Real-time pedestrian tracking method applied to unmanned vehicle
CN112528730A (en) * 2020-10-20 2021-03-19 福州大学 Cost matrix optimization method based on space constraint under Hungary algorithm


Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113674317A (en) * 2021-08-10 2021-11-19 深圳市捷顺科技实业股份有限公司 Vehicle tracking method and device for high-position video
CN113674317B (en) * 2021-08-10 2024-04-26 深圳市捷顺科技实业股份有限公司 Vehicle tracking method and device for high-position video
CN113962282A (en) * 2021-08-19 2022-01-21 大连海事大学 Improved YOLOv5L + Deepsort-based real-time detection system and method for ship engine room fire
CN113962282B (en) * 2021-08-19 2024-04-16 大连海事大学 Improved YOLOv5L + Deepsort-based real-time detection system and method for ship engine room fire
CN113723311A (en) * 2021-08-31 2021-11-30 浙江大华技术股份有限公司 Target tracking method
CN113888825A (en) * 2021-09-16 2022-01-04 无锡湖山智能科技有限公司 Monitoring system and method for driving safety
CN113537170A (en) * 2021-09-16 2021-10-22 北京理工大学深圳汽车研究院(电动车辆国家工程实验室深圳研究院) Intelligent traffic road condition monitoring method and computer readable storage medium
CN113983737A (en) * 2021-10-18 2022-01-28 海信(山东)冰箱有限公司 Refrigerator and food material positioning method thereof
WO2023065395A1 (en) * 2021-10-18 2023-04-27 中车株洲电力机车研究所有限公司 Work vehicle detection and tracking method and system
CN114897944A (en) * 2021-11-10 2022-08-12 北京中电兴发科技有限公司 Multi-target continuous tracking method based on DeepSORT
CN114897944B (en) * 2021-11-10 2022-10-25 北京中电兴发科技有限公司 Multi-target continuous tracking method based on DeepSORT
CN114067564A (en) * 2021-11-15 2022-02-18 武汉理工大学 Traffic condition comprehensive monitoring method based on YOLO
CN114067564B (en) * 2021-11-15 2023-08-29 武汉理工大学 Traffic condition comprehensive monitoring method based on YOLO
CN113791140A (en) * 2021-11-18 2021-12-14 湖南大学 Bridge bottom interior nondestructive testing method and system based on local vibration response
CN113791140B (en) * 2021-11-18 2022-02-25 湖南大学 Bridge bottom interior nondestructive testing method and system based on local vibration response
CN114255434A (en) * 2022-03-01 2022-03-29 深圳金三立视频科技股份有限公司 Multi-target tracking method and device
CN114782495A (en) * 2022-06-16 2022-07-22 西安中科立德红外科技有限公司 Multi-target tracking method, system and computer storage medium
CN114782495B (en) * 2022-06-16 2022-10-18 西安中科立德红外科技有限公司 Multi-target tracking method, system and computer storage medium
CN116309696B (en) * 2022-12-23 2023-12-01 苏州驾驶宝智能科技有限公司 Multi-category multi-target tracking method and device based on improved generalized cross-over ratio
CN116309696A (en) * 2022-12-23 2023-06-23 苏州驾驶宝智能科技有限公司 Multi-category multi-target tracking method and device based on improved generalized cross-over ratio
CN116740753A (en) * 2023-04-20 2023-09-12 安徽大学 Target detection and tracking method and system based on improved YOLOv5 and DeepSORT
CN116580066A (en) * 2023-07-04 2023-08-11 广州英码信息科技有限公司 Pedestrian target tracking method under low frame rate scene and readable storage medium
CN116580066B (en) * 2023-07-04 2023-10-03 广州英码信息科技有限公司 Pedestrian target tracking method under low frame rate scene and readable storage medium

Similar Documents

Publication Publication Date Title
CN113160274A (en) Improved deep sort target detection tracking method based on YOLOv4
CN112526513B (en) Millimeter wave radar environment map construction method and device based on clustering algorithm
CN113012203B (en) High-precision multi-target tracking method under complex background
CN112101430B (en) Anchor frame generation method for image target detection processing and lightweight target detection method
Sznitman et al. Active testing for face detection and localization
US8405540B2 (en) Method for detecting small targets in radar images using needle based hypotheses verification
CN110349187B (en) Target tracking method and device based on TSK fuzzy classifier and storage medium
CN110363165B (en) Multi-target tracking method and device based on TSK fuzzy system and storage medium
JP2018022475A (en) Method and apparatus for updating background model
CN110599489A (en) Target space positioning method
CN110349188B (en) Multi-target tracking method, device and storage medium based on TSK fuzzy model
CN115063454B (en) Multi-target tracking matching method, device, terminal and storage medium
CN116012364B (en) SAR image change detection method and device
Koksal et al. Effect of annotation errors on drone detection with YOLOv3
CN111259332B (en) Fuzzy data association method and multi-target tracking method in clutter environment
CN115546705A (en) Target identification method, terminal device and storage medium
Li et al. Insect detection and counting based on YOLOv3 model
CN110222585B (en) Moving target tracking method based on cascade detector
Kropfreiter et al. A scalable track-before-detect method with Poisson/multi-Bernoulli model
CN115932913B (en) Satellite positioning pseudo-range correction method and device
CN109697474B (en) Synthetic aperture radar image change detection method based on iterative Bayes
CN115657008A (en) Multi-target tracking method and device for airborne terahertz radar
Altundogan et al. Multiple object tracking with dynamic fuzzy cognitive maps using deep learning
JP2020052475A (en) Sorter building method, image classification method, sorter building device, and image classification device
CN113126052A (en) High-resolution range profile target identification online library building method based on stage-by-stage segmentation training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210723