CN109961461B - Multi-moving-object tracking method based on three-dimensional layered graph model - Google Patents
Multi-moving-object tracking method based on three-dimensional layered graph model Download PDFInfo
- Publication number
- CN109961461B CN109961461B CN201910205734.6A CN201910205734A CN109961461B CN 109961461 B CN109961461 B CN 109961461B CN 201910205734 A CN201910205734 A CN 201910205734A CN 109961461 B CN109961461 B CN 109961461B
- Authority
- CN
- China
- Prior art keywords
- dimensional
- graph model
- target
- detection
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/251—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a multi-moving target tracking method based on a three-dimensional layered graph model, which comprises the following steps: s1, analyzing appearance characteristics and motion characteristics of the detected target and three-dimensional space structural characteristics among the characteristics, and establishing a target three-dimensional layered graph model of each target area; s2, establishing a detection three-dimensional layered graph model of the detection target in the detection area of the current frame; s3, establishing a prediction three-dimensional layered graph model of the tracking target in the prediction area of the current frame; s4, calculating the matching degree of the node, the edge and the space structure between the detection target in the detection three-dimensional layered graph model and the prediction target in the prediction three-dimensional layered graph model; and S5, tracking and matching the target between the detection area and the prediction area according to the calculation result of the matching degree. The invention can effectively realize multi-target real-time tracking, can be applied to most Kinect monitoring scenes, can also be popularized and applied to the fields of robot target identification obstacle avoidance, intelligent traffic and the like, and has better application prospect.
Description
Technical Field
The invention relates to the technical field of three-dimensional layered graph models, in particular to a multi-moving-target tracking method based on a three-dimensional layered graph model.
Background
In the tracking of multiple moving targets in a two-dimensional scene, the targets cannot be accurately tracked due to the fact that the targets mutually shield and lose information, and therefore attention is increasingly paid to the realization of multi-target tracking under complex conditions by using three-dimensional vision.
The three-dimensional visual system is mainly divided into the following three types: the system comprises a monocular vision system, a binocular or multi-eye stereoscopic vision system and a three-dimensional depth RGB-D vision system, wherein the monocular vision system only adopts a two-dimensional camera and obtains three-dimensional vision information through three-dimensional calibration; the binocular or multi-view stereo vision system jointly images and calibrates and reconstructs three-dimensional information of a scene through two or more cameras, the two systems have high calculation complexity and low real-time performance, and the third system adopts a three-dimensional depth RGB-D camera to directly and simultaneously provide a two-dimensional RGB imageImage and three-dimensional depth information[13]Therefore, three-dimensional depth cameras have been increasingly applied to three-dimensional vision systems in recent years.
At present, the application of a three-dimensional depth camera becomes the hotspot field of three-dimensional target tracking, and generally, the target tracking of the three-dimensional camera is adopted, RGB two-dimensional image information and depth information are firstly registered, and then a method of tracking after detection is adopted, or the target tracking is directly carried out on three-dimensional point cloud data. For example, removing the ground from the 3D point cloud data, identifying by using interested information and depth, firstly estimating the depth value of the point cloud, extracting an interested area comprising a human body and other targets, then classifying the interested area into a non-human body area, then tracking based on a target detection result, performing data association according to the consistency of the depth and the appearance, selecting the interested points of RGB and a depth map, then combining the features based on the RGB and the depth map, and generating a target tracking 3D track after matching; or optimizing and matching by adopting a graph model algorithm to obtain a tracking result,
in the prior art, a layered graph model in the RGB and depth fields is provided for real-time robust multi-person tracking. Obtaining the optimal association and tracking result of the multi-human body target and directly adopting three-dimensional point cloud information to track according to the multi-target three-dimensional characteristics by RGB-D data association and optimization of the track; however, most of the kinect-based three-dimensional visual analysis is focused on three-dimensional reconstruction of a scene, navigation of a mobile robot and recognition tracking at present, multi-target tracking used for visual monitoring is in a starting stage, three-dimensional point cloud is obtained by registering RGB and depth information, target recognition tracking is carried out on the basis, the calculation complexity is high, and the kinect-based three-dimensional visual analysis cannot be directly applied to tracking under the complex condition of multiple moving targets in video monitoring.
Disclosure of Invention
The invention aims to provide a multi-moving target tracking method based on a three-dimensional layered graph model, which is used for tracking a multi-moving target by adopting a two-dimensional image and a three-dimensional depth image provided by a kinect three-dimensional camera.
In order to solve the technical problem, the invention provides a multi-moving target tracking method based on a three-dimensional layered graph model, which comprises the following steps:
s1, firstly, detecting a foreground connected region of a target by adopting a background subtraction method, then labeling the detection region by using an external rectangular frame, and establishing a target three-dimensional hierarchical graph model consisting of nodes, edges and a space structure for each target region by analyzing appearance characteristics, motion characteristics and three-dimensional space structure characteristics among characteristics of the detected target;
s2, according to the target three-dimensional layered graph model established in the step S1, establishing a detection three-dimensional layered graph model R of a detection target in the detection area of the current frame;
s3, according to the target three-dimensional layered graph model established in the step S1, establishing a prediction three-dimensional layered graph model of a tracking target in a prediction area of a current frame for the tracking target of the previous frame
S4, respectively calculating the detection three-dimensional layered graph model R of the detection target in the current frame and the prediction three-dimensional layered graph model of the tracking target in the previous frame in the current frame prediction areaMatching degree of nodes, edges and space structures among the nodes;
and S5, tracking and matching the target between the detection area and the prediction area of the current frame according to the calculation result of the step S4.
Preferably, the nodes are cluster blocks in the target region formed by color features, shape features and three-dimensional space features.
Preferably, the edge is a three-dimensional Euclidean distance between center points of different clustering blocks of the same target.
Preferably, the spatial structure is a three-dimensional euclidean distance between different target center points.
Preferably, the detecting three-dimensional layered graph model R in the step S2 is represented as:
R={V,E,S} (1)
in the formula (1), V denotes a node for detecting a three-dimensional hierarchical graph, E denotes an edge for detecting a three-dimensional hierarchical graph, and S denotes a spatial structure for detecting a three-dimensional hierarchical graph.
in the formula (2), the reaction mixture is,a node representing a predictive three-dimensional hierarchical graph model,representing the edges of a predictive three-dimensional hierarchical graph model,representing the spatial structure of the predictive three-dimensional hierarchical graph model.
Preferably, the step S4 is specifically implemented as follows:
s401, calculating a detection three-dimensional layered graph model R of a detection target in a current frame and a prediction three-dimensional layered graph model of a tracking target in a previous frame in a current frame prediction areaDegree of matching m of nodes between1And is expressed by the formula:
in the formula (3), i is a reference numeral and Vi={ll,gi,ciDenotes detecting nodes in the three-dimensional hierarchical graph model,representing nodes in a predictive three-dimensional hierarchical graph model, ll、Respectively represent nodes ViAnd nodeMarker of (a), (b), giAnd ciRepresents a node ViThe component (b) of (a) is,andrepresenting nodesComponent of (a)gRepresenting a node component giAndweight value of λcRepresenting a node component ciAndweight value of (wherein λ)g+λc=1),δgRepresenting a node component giAnddegree of matching of position vector and shape model, deltacRepresenting a node component ciAndmatching degree of the color histogram;
s402, calculating a detection three-dimensional layered graph model R of a detection target in a current frame and a prediction three-dimensional layered graph model of a tracking target in a previous frame in a current frame prediction areaDegree of matching m between edges2And is expressed by the formula:
in the formula (9), ebRepresents the length of the edge of the detection target in the detection three-dimensional layered graph model R of the current frame,three-dimensional layered graph model for predicting tracking target in current frameLength of middle edge, δbIndicates the length ebAnd lengthLength matching degree between eaRepresents the cosine angle of the edge of the detection target in the detection three-dimensional layered graph model R of the current frame,three-dimensional layered graph model for predicting tracking target in current frameCosine angle of middle edge, deltaaRepresents an angle eaAnd angle
Angle of between, λbAnd λaWeight values respectively representing the length matching degree and the angle matching degree;
s403, calculating a detection three-dimensional layered graph model R of a detection target in the current frame and a prediction three-dimensional layered graph model of a tracking target in the previous frame in the current frame prediction regionThe degree of matching m of the space structure3And is expressed by the formula:
in the formula (12), dcRepresenting the length of a three-dimensional straight line segment in the three-dimensional layered graph model R,representing a predictive three-dimensional hierarchical graph modelLength of straight line segment in middle three dimensions, deltaecRepresenting three-dimensional straight line segments dcAnddegree of matching between, dhRepresenting the detection of the angle in the three-dimensional space of the straight line segment in the three-dimensional layered graph model R,representing a predictive three-dimensional hierarchical graph modelAngle in three-dimensional space of middle straight line segment, deltaehRepresenting the angle d in three-dimensional space of a straight line segmenthAnddegree of match between, λecAnd λehRespectively representing the length matching degree delta of the three-dimensional straight line segmentecAngle matching degree delta in three-dimensional space of linear segmentehThe weight value of (1);
s404, obtaining the node matching degree m according to the step S401, the step S402 and the step S4031Degree of matching m2Matching with three-dimensional space structure m3Calculating and detecting three-dimensional layered graph modelModel R and prediction three-dimensional layered graph modelThe matching degree M between the two is expressed by the formula:
in the formula (15), j represents a reference numeral, f1、f2And f3Weight coefficients respectively representing the node matching degree, the edge matching degree and the three-dimensional space structure matching degree, and f1+f2+f3=1。
Preferably, the step S5 is specifically implemented as follows:
s501, establishing a detection three-dimensional layered graph model R and a prediction three-dimensional layered graph model according to the calculation result of the step S4Judging whether the tracking target is shielded or not according to the matching table;
s502, if the tracking target is judged to be not shielded in the step S501, obtaining an optimal matching result by utilizing a Hungarian method for solving an assignment problem;
and S503, if the tracking target is judged to be blocked in the step S501, matching the clustering block in the foreground detection area of the current frame with the tracking target, and then matching with the prediction area of the tracking target of the previous frame, so as to obtain the best matching result of the blocked area of the current frame.
Preferably, the step S503 is specifically implemented as follows:
s5031, calculating a three-dimensional hierarchical graph model of the cluster block and the tracking target in the prediction of the current framePreliminarily determining the relation between the clustering block and the tracking target according to the node matching degree;
s5032, calculating the predicted three-dimension of the clustering block and the tracking target in the current frameLayered graph modelMatching degree of the middle edge;
s5033, adding the node matching degree in the step S5031 and the edge matching degree in the step S5032, and establishing a detection three-dimensional layered graph model R and a prediction three-dimensional layered graph model R againThe best matching result of the current frame shielding area can be obtained.
Compared with the prior art, the method and the device respectively establish three-dimensional layered graph models aiming at the two-dimensional RGB image and the three-dimensional depth image, calculate the matching degree of nodes, edges and spaces between the detected three-dimensional layered graph model and the predicted three-dimensional layered graph model, further calculate the matching degree between the detected three-dimensional layered graph model and the predicted three-dimensional layered graph model, further establish a matching table of the detected three-dimensional layered graph model and the predicted three-dimensional layered graph model according to the calculated matching degree, and then accurately judge whether the tracking target is blocked according to the result of the matching table, namely, the optimal matching result of the current frame can be obtained, multiple targets in the current frame can be effectively identified, and the tracking of multiple moving targets is realized. The method can be applied to most Kinect monitoring scenes, can also be popularized and applied to the fields of robot target identification obstacle avoidance, intelligent traffic and the like, and has a good application prospect.
Drawings
FIG. 1 is a flow chart of a multi-moving-target tracking method based on a three-dimensional layered graph model according to the present invention,
FIG. 2 is a flow chart of a method for calculating a degree of matching between a detected three-dimensional hierarchical map model and a predicted three-dimensional hierarchical map model according to the present invention,
FIG. 3 is a flow chart of matching between a detected target and a tracked target in the present invention,
FIG. 4 is a matching flow chart of the present invention under the condition of the occlusion of the detected target and the tracked target.
Detailed Description
In order to make the technical solutions of the present invention better understood, the present invention is further described in detail below with reference to the accompanying drawings.
As shown in fig. 1, a method for tracking multiple moving objects based on a three-dimensional hierarchical graph model includes the following steps:
s1, firstly, detecting a foreground connected region of a target by adopting a background subtraction method, then labeling the detection region by using an external rectangular frame, and establishing a target three-dimensional hierarchical graph model consisting of nodes, edges and a space structure for each target region by analyzing appearance characteristics, motion characteristics and three-dimensional space structure characteristics among characteristics of the detected target;
s2, according to the target three-dimensional layered graph model established in the step S1, establishing a detection three-dimensional layered graph model R of a detection target in the detection area of the current frame;
s3, according to the target three-dimensional layered graph model established in the step S1, establishing a prediction three-dimensional layered graph model of a tracking target in a prediction area of a current frame for the tracking target of the previous frame
S4, respectively calculating the detection three-dimensional layered graph model R of the detection target in the current frame and the prediction three-dimensional layered graph model of the tracking target in the previous frame in the current frame prediction areaMatching degree of nodes, edges and space structures among the nodes;
and S5, tracking and matching the target between the detection area and the prediction area of the current frame according to the calculation result of the step S4.
In the embodiment of the multi-moving-target tracking method based on the three-dimensional layered image model, the RGB image and the depth information do not need to be fused, the three-dimensional layered image model is established respectively for the two-dimensional RGB image and the three-dimensional depth image, the multi-target real-time tracking can be effectively realized by predicting the information of the tracking target of the previous frame in the current frame and matching the tracking target with the current frame to obtain the multi-target tracking result, the method can be applied to most Kinect monitoring scenes, can also be popularized and applied to the fields of robot target identification obstacle avoidance, intelligent transportation and the like, and has a good application prospect.
As shown in the figure, the nodes are cluster blocks in the target region formed by color features, shape features and three-dimensional space features. In this embodiment, the nodes are each clustering block in a target region formed by color features, shape features, and three-dimensional spatial features, where the color features are target color histograms H established in HSV spaces in a plurality of detected target regionst(t is the t moment of the current frame), the shape characteristic is the length and width value of a target circumscribed rectangle frame formed by the two-dimensional shape model and the three-dimensional depth shape model, and the three-dimensional space characteristic is the two-dimensional coordinate of the center point of each clustering block in the target area and the depth coordinate corresponding to the center point of each clustering block. It should be noted that, by analyzing local adjacent HSV color characteristics of the foreground block, a cluster block can be formed with a certain similarity field.
As shown in the figure, the edge is a three-dimensional euclidean distance between center points of different cluster blocks of the same target. In this embodiment, the motion characteristics of the target are reflected by the change of the three-dimensional euclidean distance between the central points of the different clustering blocks.
As shown, the spatial structure is a three-dimensional euclidean distance between different target center points.
As shown, the three-dimensional layered graph model R in the step S2 is represented as:
R={V,E,S} (1)
in the formula (1), V denotes a node for detecting a three-dimensional hierarchical graph, E denotes an edge for detecting a three-dimensional hierarchical graph, and S denotes a spatial structure for detecting a three-dimensional hierarchical graph.
In this embodiment, node ViDenotes the clustering block labeled i, edge E (V)i,Vj) Indicates the edge connecting the clustering block i and the clustering block j (where i ≠ j), E (V)i,Vj)={dijIs then dijExpressing the three-dimensional Euclidean distance between the central points of the clustering block i and the clustering block j, and S expressing the tracking target t in the three-dimensional layered graph detection modeliCenter point and other tracking target tjThe three-dimensional euclidean distance between the center points.
in the formula (2), the reaction mixture is,a node representing a predictive three-dimensional hierarchical graph model,representing the edges of a predictive three-dimensional hierarchical graph model,representing the spatial structure of a predictive three-dimensional hierarchical graph model,
in this embodiment, a nodeRepresenting a cluster block, edge, numbered iIndicating the edge where cluster block i and cluster block j connect (where i ≠ j),representing the three-dimensional Euclidean distance between the central points of the clustering block i and the clustering block j,representing a tracked target t in a predictive three-dimensional hierarchical graph modeliCenter point and other tracking target tjThe three-dimensional euclidean distance between the center points. Three-dimensional hierarchy of detection in current frame by establishing detection targetThe graph model and the predicted three-dimensional layered graph model of the previous frame tracking target in the current frame can further calculate the matching degree of the nodes, edges and space structures among the models for matching.
As shown in fig. 2, the specific implementation manner of step S4 is:
s401, calculating a detection three-dimensional layered graph model R of a detection target in a current frame and a prediction three-dimensional layered graph model of a tracking target in a previous frame in a current frame prediction areaDegree of matching m of nodes between1And is expressed by the formula:
in the formula (3), i is a reference numeral and Vi={ll,gi,ciDenotes detecting nodes in the three-dimensional hierarchical graph model,representing nodes in a predictive three-dimensional hierarchical graph model, ll、Respectively represent nodes ViAnd nodeMarker of (a), (b), giAnd ciRepresents a node ViThe component (b) of (a) is,andrepresenting nodesComponent of (a)gRepresenting a node component giAndweight value of λcRepresenting a node component ciAndweight value of (wherein λ)g+λc=1),δgRepresenting a node component giAnddegree of matching of position vector and shape model, deltacRepresenting a node component ciAnddegree of matching of color histogram, wheregIs expressed by the formula:
in the formula (4), piA position vector representing the detected object in the current frame,represents the predicted position vector, size, of the tracked object in the current frameiRepresenting a node component giIn the shape model of the current frame,representing node componentsIn the shape model of the current frame, λpAnd λsRepresents a weight value, δpRepresenting a detected target position vector piAnd tracking the target position vectorDegree of matching therebetween, δsRepresenting the size of the detected object shape modeliAnd tracking the shape model of the objectA degree of matching therebetween, wherein δpIs expressed as:
in equation (5), ∈ is a constant, ∈ 3.841, and σ denotes a mahalanobis distance (mahalanobis distance) measure for detecting the motion state of the target, where the calculation formula of σ is expressed as:
in formula (6), p andrepresents the position vector, P andrespectively represent p andusing a kalman estimator to obtain the p,P andthe value of (c) can be obtained;
degree of match δ between shape modelssIs formulated as:
in the formula (8), hist represents the node component ciThe color histogram of (a) is calculated,representing node componentsβ represents the number of bits of the color histogram having 72 bits of HSV space;
s402, calculating a detection three-dimensional layered graph model R of a detection target in a current frame and a prediction three-dimensional layered graph model of a tracking target in a previous frame in a current frame prediction areaDegree of matching m between edges2And is expressed by the formula:
in the formula (9), ebRepresents the length of the edge of the detection target in the detection three-dimensional layered graph model R of the current frame,three-dimensional layered graph model for predicting tracking target in current frameLength of middle edge, δbIndicates the length ebAnd lengthLength match betweenDegree eaRepresents the cosine angle of the edge of the detection target in the detection three-dimensional layered graph model R of the current frame,three-dimensional layered graph model for predicting tracking target in current frameCosine angle of middle edge, deltaaRepresents an angle eaAnd angleAngle of between, λbAnd λaWeight values respectively representing length matching degree and angle matching degree (where λb+λa1) where the length matching degree δ isbIs expressed by the formula:
angle of matching deltaaCan be expressed by the following formula:
s403, calculating a detection three-dimensional layered graph model R of a detection target in the current frame and a prediction three-dimensional layered graph model of a tracking target in the previous frame in the current frame prediction regionThe degree of matching m of the space structure3And is expressed by the formula:
in the formula (12), dcRepresenting the length of a three-dimensional straight line segment in the three-dimensional layered graph model R,representing a predictive three-dimensional hierarchical graph modelLength of straight line segment in middle three dimensions, deltaecRepresenting three-dimensional straight line segments dcAnddegree of matching between, dhRepresenting the detection of the angle in the three-dimensional space of the straight line segment in the three-dimensional layered graph model R,representing a predictive three-dimensional hierarchical graph modelAngle in three-dimensional space of middle straight line segment, deltaehRepresenting the angle d in three-dimensional space of a straight line segmenthAnddegree of match between, λecAnd λehRespectively representing the length matching degree delta of the three-dimensional straight line segmentecAngle matching degree delta in three-dimensional space of linear segmentehWeight value of (wherein λ)ec+λeh1) where the length of the three-dimensional straight line segment matches the degree δecIs expressed by the formula:
angle matching degree delta in three-dimensional space of straight-line segmentehIs expressed by the formula:
s404, obtaining the node matching degree m according to the step S401, the step S402 and the step S4031Degree of matching m2Matching with three-dimensional space structure m3Calculating and detecting three-dimensional layered graph model R and predicting three-dimensional layered graph modelThe matching degree M between the two is expressed by the formula:
in the formula (15), j represents a reference numeral, f1、f2And f3Weight coefficients respectively representing the node matching degree, the edge matching degree and the three-dimensional space structure matching degree, and f1+f2+f3=1。
In this embodiment, the three-dimensional hierarchical graph model R is detected and predicted by calculationThe matching degree M provides a theoretical basis for the matching of multiple targets in the current frame.
As shown in fig. 3, the specific implementation manner of step S5 is:
s501, establishing a detection three-dimensional layered graph model R and a prediction three-dimensional layered graph model according to the calculation result of the step S4Judging whether the tracking target is shielded or not according to the matching table;
s502, if the tracking target is judged to be not shielded in the step S501, obtaining an optimal matching result by utilizing a Hungarian method for solving an assignment problem;
and S503, if the tracking target is judged to be blocked in the step S501, matching the clustering block in the foreground detection area of the current frame with the tracking target, and then matching with the prediction area of the tracking target of the previous frame, so as to obtain the best matching result of the blocked area of the current frame.
As shown in fig. 4, the specific implementation manner of step S503 is:
s5031, calculating a three-dimensional hierarchical graph model of the cluster block and the tracking target in the prediction of the current framePreliminarily determining the relation between the clustering block and the tracking target according to the node matching degree;
s5032, calculating a three-dimensional hierarchical graph model of the cluster block and the tracking target in the prediction of the current frameMatching degree of the middle edge;
s5033, adding the node matching degree in the step S5031 and the edge matching degree in the step S5032, and establishing a detection three-dimensional layered graph model R and a prediction three-dimensional layered graph model R againThe best matching result of the current frame shielding area can be obtained.
In this embodiment, the three-dimensional hierarchical graph model R is detected and predictedAccording to the matching degree M and the result of the matching table, whether the tracking target is shielded or not can be accurately judged, then the multiple targets in the current frame can be effectively identified according to the judgment result of the tracking target, and the effective tracking of the multiple moving targets is realized.
The method for tracking multiple moving objects based on the three-dimensional layered graph model provided by the invention is described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the core concepts of the present invention. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
Claims (8)
1. A multi-moving target tracking method based on a three-dimensional layered graph model is characterized by comprising the following steps:
s1, firstly, detecting a foreground connected region of a target by adopting a background subtraction method, then labeling the detection region by using an external rectangular frame, and establishing a target three-dimensional hierarchical graph model consisting of nodes, edges and a space structure for each target region by analyzing appearance characteristics, motion characteristics and three-dimensional space structure characteristics among characteristics of the detected target;
s2, according to the target three-dimensional layered graph model established in the step S1, establishing a detection three-dimensional layered graph model R of a detection target in the detection area of the current frame;
s3, according to the target three-dimensional layered graph model established in the step S1, establishing a prediction three-dimensional layered graph model of a tracking target in a prediction area of a current frame for the tracking target of the previous frame
S4, respectively calculating the detection three-dimensional layered graph model R of the detection target in the current frame and the prediction three-dimensional layered graph model of the tracking target in the previous frame in the current frame prediction areaThe matching degree of the nodes, the edges and the space structure between the nodes and the edges is realized in a specific way as follows:
s401, calculating a detection three-dimensional layered graph model R of a detection target in a current frame and a prediction three-dimensional layered graph model of a tracking target in a previous frame in a current frame prediction areaDegree of matching m of nodes between1And is expressed by the formula:
in the formula (3), i represents a reference numeral, Vi={ll,gi,ciDenotes detecting nodes in the three-dimensional hierarchical graph model,representing nodes in a predictive three-dimensional hierarchical graph model, ll、Respectively represent nodes ViAnd nodeMarker of (a), (b), giAnd ciRepresents a node ViThe component (b) of (a) is,andrepresenting nodesComponent of (a)gRepresenting a node component giAndweight value of λcRepresenting a node component ciAndweight value of δgRepresenting a node component giAnddegree of matching of position vector and shape model, deltacRepresenting a node component ciAndmatching degree of the color histogram;
s402, calculating a detection three-dimensional layered graph model R of a detection target in a current frame and a prediction three-dimensional layered graph model of a tracking target in a previous frame in a current frame prediction areaDegree of matching m between edges2And is expressed by the formula:
in the formula (9), ebRepresents the length of the edge of the detection target in the detection three-dimensional layered graph model R of the current frame,three-dimensional layered graph model for predicting tracking target in current frameLength of middle edge, δbIndicates the length ebAnd lengthLength matching degree between eaRepresents the cosine angle of the edge of the detection target in the detection three-dimensional layered graph model R of the current frame,three-dimensional layered graph model for predicting tracking target in current frameCosine angle of middle edge, deltaaRepresents an angle eaAnd angleBetweenAngle of (d) matching degree, λbAnd λaWeight values respectively representing the length matching degree and the angle matching degree;
s403, calculating a detection three-dimensional layered graph model R of a detection target in the current frame and a prediction three-dimensional layered graph model of a tracking target in the previous frame in the current frame prediction regionThe degree of matching m of the space structure3And is expressed by the formula:
in the formula (12), dcRepresenting the length of a three-dimensional straight line segment in the three-dimensional layered graph model R,representing a predictive three-dimensional hierarchical graph modelLength of straight line segment in middle three dimensions, deltaecRepresenting three-dimensional straight line segments dcAnddegree of matching between, dhRepresenting the detection of the angle in the three-dimensional space of the straight line segment in the three-dimensional layered graph model R,representing a predictive three-dimensional hierarchical graph modelAngle in three-dimensional space of middle straight line segment, deltaehRepresenting the angle d in three-dimensional space of a straight line segmenthAnddegree of match between, λecAnd λehRespectively representing the length matching degree delta of the three-dimensional straight line segmentecAngle matching degree delta in three-dimensional space of linear segmentehThe weight value of (1);
s404, obtaining the node matching degree m according to the step S401, the step S402 and the step S4031Degree of matching m2Matching with three-dimensional space structure m3Calculating and detecting three-dimensional layered graph model R and predicting three-dimensional layered graph modelThe matching degree M between the two is expressed by the formula:
in the formula (15), f1、f2And f3Weight coefficients respectively representing the node matching degree, the edge matching degree and the three-dimensional space structure matching degree, and f1+f2+f3=1;
And S5, tracking and matching the target between the detection area and the prediction area of the current frame according to the calculation result of the step S4.
2. The method as claimed in claim 1, wherein the nodes are clustered blocks in the target region composed of color features, shape features and three-dimensional space features.
3. The method according to claim 2, wherein the edge is a three-dimensional Euclidean distance between center points of different cluster blocks of the same target.
4. The method according to claim 3, wherein the spatial structure is a three-dimensional Euclidean distance between different target center points.
5. The method for tracking multiple moving objects based on three-dimensional layered graphics model according to claim 4, wherein said detecting three-dimensional layered graphics model R in step S2 is represented as:
R={V,E,S} (1)
in the formula (1), V denotes a node for detecting a three-dimensional hierarchical graph, E denotes an edge for detecting a three-dimensional hierarchical graph, and S denotes a spatial structure for detecting a three-dimensional hierarchical graph.
6. The method for tracking multiple moving objects based on three-dimensional hierarchical graph model as claimed in claim 5, wherein said predicting three-dimensional hierarchical graph model in step S3Expressed as:
7. The method for tracking multiple moving objects based on the three-dimensional layered graph model as claimed in claim 6, wherein the step S5 is specifically implemented as follows:
s501, establishing a three-dimensional detection hierarchical graph according to the calculation result of the step S4Model R and predictive three-dimensional hierarchical graph modelJudging whether the tracking target is shielded or not according to the matching table;
s502, if the tracking target is judged to be not shielded in the step S501, obtaining an optimal matching result by utilizing a Hungarian method for solving an assignment problem;
and S503, if the tracking target is judged to be blocked in the step S501, matching the clustering block in the foreground detection area of the current frame with the tracking target, and then matching with the prediction area of the tracking target of the previous frame, so as to obtain the best matching result of the blocked area of the current frame.
8. The method for tracking multiple moving objects based on the three-dimensional layered graph model as claimed in claim 7, wherein the step S503 is specifically implemented as follows:
s5031, calculating a three-dimensional hierarchical graph model of the cluster block and the tracking target in the prediction of the current framePreliminarily determining the relation between the clustering block and the tracking target according to the node matching degree;
s5032, calculating a three-dimensional hierarchical graph model of the cluster block and the tracking target in the prediction of the current frameMatching degree of the middle edge;
s5033, adding the node matching degree in the step S5031 and the edge matching degree in the step S5032, and establishing a detection three-dimensional layered graph model R and a prediction three-dimensional layered graph model R againThe best matching result of the current frame shielding area can be obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910205734.6A CN109961461B (en) | 2019-03-18 | 2019-03-18 | Multi-moving-object tracking method based on three-dimensional layered graph model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910205734.6A CN109961461B (en) | 2019-03-18 | 2019-03-18 | Multi-moving-object tracking method based on three-dimensional layered graph model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109961461A CN109961461A (en) | 2019-07-02 |
CN109961461B true CN109961461B (en) | 2021-04-23 |
Family
ID=67024559
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910205734.6A Active CN109961461B (en) | 2019-03-18 | 2019-03-18 | Multi-moving-object tracking method based on three-dimensional layered graph model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109961461B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112162550B (en) * | 2020-09-02 | 2021-07-16 | 北京航空航天大学 | Three-dimensional target tracking method for active safety collision avoidance of automobile |
CN113344980A (en) * | 2021-06-29 | 2021-09-03 | 北京搜狗科技发展有限公司 | Target tracking method and device for target tracking |
CN115063789B (en) * | 2022-05-24 | 2023-08-04 | 中国科学院自动化研究所 | 3D target detection method and device based on key point matching |
CN116523970B (en) * | 2023-07-05 | 2023-10-20 | 之江实验室 | Dynamic three-dimensional target tracking method and device based on secondary implicit matching |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106067179A (en) * | 2016-05-31 | 2016-11-02 | 广东顺德中山大学卡内基梅隆大学国际联合研究院 | A kind of multi-object tracking method based on hierarchical network flow graph |
CN107292911A (en) * | 2017-05-23 | 2017-10-24 | 南京邮电大学 | A kind of multi-object tracking method merged based on multi-model with data correlation |
CN107992827A (en) * | 2017-12-03 | 2018-05-04 | 湖南工程学院 | A kind of method and device of the multiple mobile object tracking based on threedimensional model |
CN108986151A (en) * | 2017-05-31 | 2018-12-11 | 华为技术有限公司 | A kind of multiple target tracking processing method and equipment |
WO2019005291A1 (en) * | 2017-06-27 | 2019-01-03 | Qualcomm Incorporated | Using object re-identification in video surveillance |
-
2019
- 2019-03-18 CN CN201910205734.6A patent/CN109961461B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106067179A (en) * | 2016-05-31 | 2016-11-02 | 广东顺德中山大学卡内基梅隆大学国际联合研究院 | A kind of multi-object tracking method based on hierarchical network flow graph |
CN107292911A (en) * | 2017-05-23 | 2017-10-24 | 南京邮电大学 | A kind of multi-object tracking method merged based on multi-model with data correlation |
CN108986151A (en) * | 2017-05-31 | 2018-12-11 | 华为技术有限公司 | A kind of multiple target tracking processing method and equipment |
WO2019005291A1 (en) * | 2017-06-27 | 2019-01-03 | Qualcomm Incorporated | Using object re-identification in video surveillance |
CN107992827A (en) * | 2017-12-03 | 2018-05-04 | 湖南工程学院 | A kind of method and device of the multiple mobile object tracking based on threedimensional model |
Also Published As
Publication number | Publication date |
---|---|
CN109961461A (en) | 2019-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109961461B (en) | Multi-moving-object tracking method based on three-dimensional layered graph model | |
CN110246159B (en) | 3D target motion analysis method based on vision and radar information fusion | |
US8199977B2 (en) | System and method for extraction of features from a 3-D point cloud | |
US9846812B2 (en) | Image recognition system for a vehicle and corresponding method | |
CN111693972A (en) | Vehicle position and speed estimation method based on binocular sequence images | |
Zhou et al. | Moving object detection and segmentation in urban environments from a moving platform | |
CN105069804B (en) | Threedimensional model scan rebuilding method based on smart mobile phone | |
US20140064624A1 (en) | Systems and methods for estimating the geographic location at which image data was captured | |
CN101344965A (en) | Tracking system based on binocular camera shooting | |
CN108346160A (en) | The multiple mobile object tracking combined based on disparity map Background difference and Meanshift | |
CN111611853A (en) | Sensing information fusion method and device and storage medium | |
CN111723778B (en) | Vehicle distance measuring system and method based on MobileNet-SSD | |
CN105160649A (en) | Multi-target tracking method and system based on kernel function unsupervised clustering | |
KR20150074544A (en) | Method of tracking vehicle | |
WO2024114119A1 (en) | Sensor fusion method based on binocular camera guidance | |
Hu et al. | Robust object tracking via multi-cue fusion | |
CN112947419A (en) | Obstacle avoidance method, device and equipment | |
WO2023131203A1 (en) | Semantic map updating method, path planning method, and related apparatuses | |
CN114170535A (en) | Target detection positioning method, device, controller, storage medium and unmanned aerial vehicle | |
Murmu et al. | Relative velocity measurement using low cost single camera-based stereo vision system | |
EP2677462B1 (en) | Method and apparatus for segmenting object area | |
CN115457086A (en) | Multi-target tracking algorithm based on binocular vision and Kalman filtering | |
CN115308732A (en) | Multi-target detection and tracking method integrating millimeter wave radar and depth vision | |
Fan et al. | Human-m3: A multi-view multi-modal dataset for 3d human pose estimation in outdoor scenes | |
van de Wouw et al. | Hierarchical 2.5-d scene alignment for change detection with large viewpoint differences |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |