CN113052877A - Multi-target tracking method based on multi-camera fusion - Google Patents


Info

Publication number
CN113052877A
CN113052877A (application CN202110299952.8A)
Authority
CN
China
Prior art keywords
target
matching
camera
target tracking
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110299952.8A
Other languages
Chinese (zh)
Inventor
刘玉杰
孙奉钰
张玉鹏
张敏杰
李宗民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China filed Critical China University of Petroleum East China
Priority to CN202110299952.8A priority Critical patent/CN113052877A/en
Publication of CN113052877A publication Critical patent/CN113052877A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention combines deep learning with computer vision algorithms and discloses a multi-target tracking method based on multi-camera fusion, comprising the following steps: S1, detecting with a detector to obtain target detection boxes; S2, predicting the position of each target at the next moment through Kalman filtering; S3, associating the prediction boxes with the detection boxes through cascade matching and IoU matching; S4, matching the predicted tracks with the detection boxes at the current moment using the Hungarian algorithm; S5, correcting the matches through auxiliary views; S6, updating the Kalman filter. Through multi-camera fusion, the method mitigates the ID-switch problem caused by occlusion in multi-target detection.

Description

Multi-target tracking method based on multi-camera fusion
Technical Field
The invention combines deep learning with computer vision algorithms and discloses a multi-target tracking method based on multi-camera fusion.
Background
Multi-target tracking is gaining increasing attention in computer vision due to its academic and commercial potential. Although a wide variety of approaches exist today, issues such as target overlap and dramatic appearance changes remain significant challenges. Solving problems such as target overlap more effectively is of great significance for the application of multi-target tracking technology, and a wide range of solutions has been proposed over the past decades.
The main task of Multi-Object Tracking (MOT or MTT) is to locate multiple objects of interest simultaneously in a given video, maintain their IDs, and record their trajectories. These objects may be pedestrians, vehicles on the road, players on a playing field, groups of animals (birds, bats, ants, fish, cells, etc.), or even different parts of a single object. Besides the challenges of single-target tracking, such as scale change, out-of-plane rotation, and illumination change, multi-target tracking must also deal with more complex key problems, including: 1) frequent occlusion; 2) track initialization and termination; 3) similar appearance; 4) interaction among multiple targets.
The multi-target tracking algorithms receiving the most attention in industry are SORT and DeepSORT. Both realize multi-target tracking by matching detections against Kalman predictions and then updating the filter, but neither handles occlusion and ID switches well.
The present method combines deep learning and computer vision techniques to construct a multi-target tracking algorithm based on DeepSORT that fuses multiple cameras, which alleviates the occlusion and ID-switch problems.
Disclosure of Invention
The invention aims to provide a multi-target tracking method based on multi-camera fusion, which adopts the following scheme:
A multi-target tracking method based on multi-camera fusion comprises the following steps:
s1, detecting with a detector to obtain target detection boxes;
s2, predicting the position of each target at the next moment through Kalman filtering;
s3, associating the prediction boxes with the detection boxes through cascade matching and IoU matching;
s4, matching the predicted tracks with the detection boxes at the current moment using the Hungarian algorithm;
s5, correcting the matches through auxiliary views;
s6, updating the Kalman filter.
further, in the step s1, a detection frame of the target to be detected is obtained through yolov4 target detection;
further, in step s2, the next time prediction is performed on the detected target by using the kalman filtering technique;
further, in step s3, the prediction block and the detection block are associated by cascade matching and IOU matching
Further, the specific association process is as follows:
s31, distance measurement is carried out on the detection box and the prediction box through the mahalanobis distance;
s32 measuring the appearance characteristics by cosine distance
s33 measuring degree of coincidence by IOU cross-correlation
Further, in the above step s4, the Hungarian algorithm is used to find the optimal match between the detection box and the prediction box.
Further, in the step s5, the matching is corrected by the auxiliary view angle
Further, the correction process is as follows:
s51, restoring the real scene of the matching box by camera calibration algorithm
s52, correcting the matching result by the auxiliary camera
s53, modifying and restoring the corrective result
Further, in step s6, the kalman filter is updated.
The invention has the following advantages:
The method tracks multiple targets in real time in a detect-and-predict manner using a deep neural network and computer vision methods. For inherent problems of multi-target tracking such as occlusion, it introduces a scheme that uses multiple auxiliary cameras: the occluded regions of the main camera are corrected through auxiliary views, which alleviates problems such as ID switches caused by occlusion and improves the accuracy of multi-target detection.
Drawings
FIG. 1 is a block diagram of a multi-target tracking method based on multi-camera fusion according to the present invention;
Detailed Description of the Invention
The invention will be described in further detail with reference to the accompanying figure 1 and the following detailed description:
Referring to FIG. 1, a multi-target tracking method based on multi-camera fusion includes the following steps:
s1, detecting with the detector to obtain target detection boxes.
To realize robust multi-target tracking, the method adopts a detect-and-predict scheme and uses the faster and more effective YOLOv4 to detect the persons to be tracked in each video frame.
s2, predicting the position of each target at the next moment through Kalman filtering.
The detection algorithm only finds the positions of persons in a frame and cannot by itself associate the same person across frames. Kalman filtering predicts, from the state at the current moment, each person's state at the next moment, providing a reference for matching and association. The specific calculation process is as follows:
The track state at the current time is predicted from the previous state:
x′ = F x
where x is the state vector of the track at time t−1 and F is the state transition matrix; the formula predicts the state vector x′ at time t. The state vector is
x = [cx, cy, r, h, vcx, vcy, vr, vh]^T
where cx and cy are the horizontal and vertical coordinates of the target center point, r is the aspect ratio, h is the height, and the remaining four components are the corresponding derivatives (velocities).
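The prediction step above can be sketched as follows. This is an illustrative constant-velocity sketch, not code from the patent; the time step dt = 1 frame and the state values are assumptions.

```python
import numpy as np

# Constant-velocity prediction x' = F x over the 8-D state
# [cx, cy, r, h] plus their derivatives.
dt = 1.0
F = np.eye(8)
F[:4, 4:] = dt * np.eye(4)  # each position component advances by its velocity

x = np.array([100.0, 50.0, 0.5, 80.0, 2.0, -1.0, 0.0, 0.0])  # state at t-1
x_next = F @ x  # predicted state at t: cx = 102, cy = 49, r and h unchanged
```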
s3, cascade matching and IOU matching associating the prediction box with the detection box;
s31, measuring the distance between detection boxes and prediction boxes through the Mahalanobis distance:
d^(1)(i, j) = (d_j − y_i)^T S_i^(−1) (d_j − y_i)
where d_j is the position of the j-th detection box, y_i is the position predicted for the target by the i-th tracker, and S_i is the covariance matrix between the two. The distance between the boxes is used to associate detection boxes with prediction boxes; in addition to this positional constraint, the similarity of the appearance features inside the boxes must also be considered.
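The Mahalanobis gate above can be computed as in the following minimal sketch; the box parameterization and the diagonal covariance values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def mahalanobis_sq(d_j, y_i, S_i):
    """Squared Mahalanobis distance between detection d_j and prediction y_i
    under covariance S_i, matching d^(1)(i, j) above."""
    diff = d_j - y_i
    return float(diff.T @ np.linalg.inv(S_i) @ diff)

d = np.array([10.0, 20.0, 0.5, 80.0])   # hypothetical detection (cx, cy, r, h)
y = np.array([11.0, 20.0, 0.5, 78.0])   # hypothetical prediction
S = np.diag([4.0, 4.0, 0.01, 25.0])     # assumed diagonal covariance
dist2 = mahalanobis_sq(d, y, S)          # 1/4 + 4/25 = 0.41
```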
s32, measuring appearance features through the cosine distance:
d^(2)(i, j) = min{ 1 − r_j^T r_k^(i) | r_k^(i) ∈ R_i }
A 128-dimensional feature vector is extracted from each detection box and prediction box by a neural network and normalized onto the unit hypersphere by its l2 norm. The cosine distance between two feature vectors is then computed as above, where r_j is the feature vector of the j-th detection and r_k^(i) is a feature vector stored for the i-th track; R_i is the set that retains the feature vectors of the last k successful associations of that track.
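The cosine-distance measure can be sketched as below; the 2-D toy features stand in for the 128-dimensional vectors described above.

```python
import numpy as np

def cosine_distance(r_j, R_i):
    """Smallest cosine distance between detection feature r_j and the set R_i
    of stored track features; all vectors are l2-normalized first."""
    r_j = r_j / np.linalg.norm(r_j)
    R = R_i / np.linalg.norm(R_i, axis=1, keepdims=True)
    return float(np.min(1.0 - R @ r_j))

gallery = np.array([[1.0, 0.0], [0.0, 1.0]])  # toy track feature set R_i
det = np.array([2.0, 0.0])                    # aligned with the first feature
d2 = cosine_distance(det, gallery)            # 0.0: perfect appearance match
```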
s33, measuring the degree of overlap through the IoU (intersection over union):
IoU(A, B) = area(A ∩ B) / area(A ∪ B)
where A and B are the detection box and the prediction box, area(A ∩ B) is the area of their intersection, and area(A ∪ B) is the area of their union.
s4, finding the optimal match between detection boxes and prediction boxes using the Hungarian algorithm.
The Hungarian algorithm, also known as the Kuhn-Munkres (KM) algorithm, solves the assignment problem: it finds an optimal assignment that minimizes the total cost of completing all tasks. The specific calculation process is as follows:
1. For each row of the cost matrix, subtract the smallest element of that row.
2. For each column of the matrix, subtract the smallest element of that column.
3. Cover all zeros in the matrix with a minimum number of horizontal or vertical lines.
4. If the number of lines equals N, an optimal assignment has been found and the algorithm ends; otherwise go to step 5.
5. Find the smallest element not covered by any line, subtract it from every uncovered row, add it to every covered column, and return to step 3.
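The objective the steps above optimize can be shown with a brute-force stand-in. Note this enumeration is not the Hungarian algorithm itself (which runs in O(n^3)); it merely illustrates the minimal-total-cost assignment on a tiny matrix.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Brute-force solution of the assignment problem; usable only for
    tiny matrices, shown here to make the objective concrete."""
    n = len(cost)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_cost, best_perm

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
total, assignment = min_cost_assignment(cost)  # total = 5, rows -> cols (1, 0, 2)
```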
s5, correcting the matches through auxiliary views.
To better mitigate the ID-switch problem caused by occlusion, several auxiliary cameras are used to correct the matches in the occluded regions.
s51, restoring the matched boxes to the real scene through a camera calibration algorithm.
Key points are selected in the camera view and in the real scene, and the transformation matrix from the camera view to the real scene is computed. The calculation process is as follows:
1. Select four marker points P_1, P_2, P_3, P_4 in the real scene; their corresponding points in the camera view are Q_1, Q_2, Q_3, Q_4. Assemble
P = [P_1, P_2, P_3]^T    Q = [Q_1, Q_2, Q_3]^T
2. Calculating weights
V = (P^(−1))^T · P_4^T
R = (Q^(−1))^T · Q_4^T
W = diag(r_1/v_1, r_2/v_2, r_3/v_3)
where v_i and r_i denote the components of V and R, respectively.
3. Computing transformation matrices
T′ = (Q^T · W · P)^T
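One standard way to realize the camera-to-scene mapping of step s51 is to solve directly for the 3x3 perspective transform from the four point pairs. The sketch below uses that direct linear formulation, which is equivalent in effect but not the patent's exact W-matrix construction; the marker coordinates are hypothetical.

```python
import numpy as np

def four_point_transform(src, dst):
    """Solve for the 3x3 perspective matrix H (h33 = 1) mapping the four
    src points (camera view) onto the four dst points (real scene)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_transform(H, pt):
    """Apply H to a 2-D point in homogeneous coordinates."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

# Hypothetical markers: camera-view unit square to a 2 m x 2 m floor patch
Q_pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
P_pts = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
H = four_point_transform(Q_pts, P_pts)
center = apply_transform(H, (0.5, 0.5))  # center of the square maps to (1, 1)
```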
s52, correcting the matching results through the auxiliary cameras.
The positions seen by each camera are restored to the real scene through the calibration matrix, and the matching result of the main camera is corrected against the matching results of the auxiliary cameras.
s53, revising and restoring the corrected results.
The matching results are revised again according to the auxiliary cameras.
s6, Kalman filter update.
1. The measurement matrix H maps the mean vector x′ of the track into the detection space:
H = [I_4  0_4]
i.e., a 4×8 matrix that keeps cx, cy, r, h and discards the velocity components.
2. The covariance matrix P′ is mapped into the detection space, and the measurement noise matrix R is added:
S = H P′ H^T + R
3. The Kalman gain K, which weighs the importance of the estimation error, is computed:
K = P′ H^T S^(−1)
4. The updated mean vector x and covariance matrix P are computed:
x = x′ + K y
P = (I − K H) P′
where y = z − H x′ is the innovation between the measurement z and the prediction.
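Steps 1 through 4 above amount to the standard Kalman update; the sketch below uses a toy 2-D state (position and velocity) instead of the full 8-D track state, and the noise values are assumptions for illustration.

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    """Standard Kalman update matching steps 1-4 above."""
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x_pred + K @ y                      # updated mean
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred  # updated covariance
    return x, P

x_pred = np.array([0.0, 1.0])   # predicted position and velocity
P_pred = np.eye(2)
H = np.array([[1.0, 0.0]])      # observe position only
R = np.eye(1)                   # assumed measurement noise
x, P = kalman_update(x_pred, P_pred, np.array([2.0]), H, R)
# position moves halfway toward the measurement: x[0] = 1.0
```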
It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

Claims (3)

1. A multi-target tracking method based on multi-camera fusion, characterized by comprising the following steps:
s1, detecting with a detector to obtain target detection boxes;
s2, predicting the position of each target at the next moment through Kalman filtering;
s3, associating the prediction boxes with the detection boxes through cascade matching and IoU matching;
s4, matching the predicted tracks with the detection boxes at the current moment using the Hungarian algorithm;
s5, correcting the matches through auxiliary views;
s6, updating the Kalman filter.
2. The multi-camera fusion-based multi-target tracking method according to claim 1, wherein in step s5, the matched targets in multi-target tracking are corrected and improved through a multi-view fusion method.
3. The multi-target tracking method based on multi-camera fusion according to claim 1, wherein in step s5, the multiple camera views are restored to the real scene for correction through a camera calibration technique.
CN202110299952.8A 2021-03-22 2021-03-22 Multi-target tracking method based on multi-camera fusion Pending CN113052877A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110299952.8A CN113052877A (en) 2021-03-22 2021-03-22 Multi-target tracking method based on multi-camera fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110299952.8A CN113052877A (en) 2021-03-22 2021-03-22 Multi-target tracking method based on multi-camera fusion

Publications (1)

Publication Number Publication Date
CN113052877A true CN113052877A (en) 2021-06-29

Family

ID=76513963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110299952.8A Pending CN113052877A (en) 2021-03-22 2021-03-22 Multi-target tracking method based on multi-camera fusion

Country Status (1)

Country Link
CN (1) CN113052877A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108109162A (en) * 2018-01-08 2018-06-01 中国石油大学(华东) A kind of multiscale target tracking merged using self-adaptive features
CN109816690A (en) * 2018-12-25 2019-05-28 北京飞搜科技有限公司 Multi-target tracking method and system based on depth characteristic
CN109919981A (en) * 2019-03-11 2019-06-21 南京邮电大学 A kind of multi-object tracking method of the multiple features fusion based on Kalman filtering auxiliary
CN110490911A (en) * 2019-08-14 2019-11-22 西安宏规电子科技有限公司 Multi-cam multi-target tracking method based on Non-negative Matrix Factorization under constraint condition
CN110533687A (en) * 2018-05-11 2019-12-03 深眸科技(深圳)有限公司 Multiple target three-dimensional track tracking and device
CN111163290A (en) * 2019-11-22 2020-05-15 东南大学 Device and method for detecting and tracking night navigation ship
CN111914664A (en) * 2020-07-06 2020-11-10 同济大学 Vehicle multi-target detection and track tracking method based on re-identification
CN112016445A (en) * 2020-08-27 2020-12-01 重庆科技学院 Monitoring video-based remnant detection method
CN112070807A (en) * 2020-11-11 2020-12-11 湖北亿咖通科技有限公司 Multi-target tracking method and electronic device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李志华; 陈耀武: "Continuous target tracking based on multiple cameras" (基于多摄像头的目标连续跟踪), Journal of Electronic Measurement and Instrumentation (电子测量与仪器学报), no. 02, 15 February 2009 (2009-02-15) *
马敬奇; 钟震宇; 雷欢; 吴亮生: "Target tracking method fusing structured human-body features and kernelized correlation filtering" (人体结构化特征与核相关滤波器算法融合的目标跟踪方法), Journal of Computer Applications (计算机应用), no. 1, 10 July 2020 (2020-07-10) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114049477A (en) * 2021-11-16 2022-02-15 中国水利水电科学研究院 Fish passing fishway system and dynamic identification and tracking method for fish quantity and fish type
CN114049477B (en) * 2021-11-16 2023-04-07 中国水利水电科学研究院 Fish passing fishway system and dynamic identification and tracking method for fish quantity and fish type
CN115601402A (en) * 2022-12-12 2023-01-13 知行汽车科技(苏州)有限公司 Target post-processing method, device and equipment for cylindrical image detection frame and storage medium
CN115601402B (en) * 2022-12-12 2023-03-28 知行汽车科技(苏州)股份有限公司 Target post-processing method, device and equipment for cylindrical image detection frame and storage medium

Similar Documents

Publication Publication Date Title
CN113269098B (en) Multi-target tracking positioning and motion state estimation method based on unmanned aerial vehicle
Nabati et al. Rrpn: Radar region proposal network for object detection in autonomous vehicles
Wolf et al. Robust vision-based localization for mobile robots using an image retrieval system based on invariant features
Kitt et al. Monocular visual odometry using a planar road model to solve scale ambiguity
CN108573496B (en) Multi-target tracking method based on LSTM network and deep reinforcement learning
CN111488795A (en) Real-time pedestrian tracking method applied to unmanned vehicle
CN102609953A (en) Multi-object appearance-enhanced fusion of camera and range sensor data
CN110782494A (en) Visual SLAM method based on point-line fusion
CN113052877A (en) Multi-target tracking method based on multi-camera fusion
CN112052802B (en) Machine vision-based front vehicle behavior recognition method
CN112991391A (en) Vehicle detection and tracking method based on radar signal and vision fusion
Wolf et al. Using an image retrieval system for vision-based mobile robot localization
CN114088081A (en) Map construction method for accurate positioning based on multi-segment joint optimization
Cho et al. Distance-based camera network topology inference for person re-identification
CN114332158A (en) 3D real-time multi-target tracking method based on camera and laser radar fusion
Sharma Feature-based efficient vehicle tracking for a traffic surveillance system
CN113689502B (en) Multi-information fusion obstacle measurement method
CN114581678A (en) Automatic tracking and re-identifying method for template feature matching
Kang et al. Robust visual tracking framework in the presence of blurring by arbitrating appearance-and feature-based detection
Walia et al. A novel approach of multi-stage tracking for precise localization of target in video sequences
Zhang et al. Target tracking for mobile robot platforms via object matching and background anti-matching
Chenchen et al. A camera calibration method for obstacle distance measurement based on monocular vision
CN115761693A (en) Method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic image
CN115144828B (en) Automatic online calibration method for intelligent automobile multi-sensor space-time fusion
CN107610154B (en) Spatial histogram representation and tracking method of multi-source target

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination