CN109961461B - Multi-moving-object tracking method based on three-dimensional layered graph model - Google Patents

Multi-moving-object tracking method based on three-dimensional layered graph model

Info

Publication number
CN109961461B
CN109961461B
Authority
CN
China
Prior art keywords
dimensional
graph model
target
detection
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910205734.6A
Other languages
Chinese (zh)
Other versions
CN109961461A (en
Inventor
万琴
肖岳平
朱晓林
吴迪
安希旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Institute of Engineering
Original Assignee
Hunan Institute of Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Institute of Engineering filed Critical Hunan Institute of Engineering
Priority to CN201910205734.6A
Publication of CN109961461A
Application granted
Publication of CN109961461B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-moving-target tracking method based on a three-dimensional layered graph model, which comprises the following steps: S1, analyzing the appearance features and motion features of each detected target and the three-dimensional spatial structural relations among the features, and establishing a target three-dimensional layered graph model for each target region; S2, establishing a detection three-dimensional layered graph model for the detected targets in the detection area of the current frame; S3, establishing a prediction three-dimensional layered graph model for the tracked targets in the prediction area of the current frame; S4, calculating the matching degrees of the nodes, edges and spatial structures between the detected targets in the detection three-dimensional layered graph model and the predicted targets in the prediction three-dimensional layered graph model; and S5, tracking and matching targets between the detection area and the prediction area according to the matching-degree results. The invention effectively realizes real-time multi-target tracking, can be applied to most Kinect monitoring scenes, and can also be extended to fields such as robot target recognition and obstacle avoidance and intelligent transportation, so it has good application prospects.

Description

Multi-moving-object tracking method based on three-dimensional layered graph model
Technical Field
The invention relates to the technical field of three-dimensional layered graph models, in particular to a multi-moving-target tracking method based on a three-dimensional layered graph model.
Background
In multi-moving-target tracking in two-dimensional scenes, targets cannot be tracked accurately because they occlude one another and image information is lost; increasing attention has therefore been paid to realizing multi-target tracking under such complex conditions with three-dimensional vision.
Three-dimensional vision systems fall mainly into three types: monocular vision systems, binocular or multi-view stereo vision systems, and three-dimensional depth (RGB-D) vision systems. A monocular system uses only a two-dimensional camera and obtains three-dimensional information through three-dimensional calibration. A binocular or multi-view stereo system images a scene jointly with two or more cameras, calibrates them, and reconstructs the scene's three-dimensional information; both of these approaches have high computational complexity and poor real-time performance. The third type uses a three-dimensional depth RGB-D camera that directly and simultaneously provides a two-dimensional RGB image and three-dimensional depth information [13], which is why three-dimensional depth cameras have been increasingly applied in three-dimensional vision systems in recent years.
At present, the three-dimensional depth camera has become a hot topic in three-dimensional target tracking. Typically, RGB two-dimensional image information and depth information are first registered and a detect-then-track method is applied, or tracking is performed directly on three-dimensional point cloud data. For example, the ground is removed from the 3D point cloud data and targets are identified using regions of interest and depth: the depth values of the point cloud are estimated first, regions of interest containing human bodies and other targets are extracted and classified into human and non-human regions, and tracking then proceeds from the detection results, with data association based on consistency of depth and appearance; points of interest are selected from the RGB image and the depth map, features based on both are combined, and a 3D target trajectory is generated after matching. Alternatively, a graph model algorithm is used for optimization and matching to obtain the tracking result.
in the prior art, a layered graph model in the RGB and depth fields is provided for real-time robust multi-person tracking. Obtaining the optimal association and tracking result of the multi-human body target and directly adopting three-dimensional point cloud information to track according to the multi-target three-dimensional characteristics by RGB-D data association and optimization of the track; however, most of the kinect-based three-dimensional visual analysis is focused on three-dimensional reconstruction of a scene, navigation of a mobile robot and recognition tracking at present, multi-target tracking used for visual monitoring is in a starting stage, three-dimensional point cloud is obtained by registering RGB and depth information, target recognition tracking is carried out on the basis, the calculation complexity is high, and the kinect-based three-dimensional visual analysis cannot be directly applied to tracking under the complex condition of multiple moving targets in video monitoring.
Disclosure of Invention
The invention aims to provide a multi-moving-target tracking method based on a three-dimensional layered graph model that tracks multiple moving targets using the two-dimensional image and the three-dimensional depth image provided by a Kinect three-dimensional camera.
In order to solve the technical problem, the invention provides a multi-moving target tracking method based on a three-dimensional layered graph model, which comprises the following steps:
S1, first detecting the foreground connected regions of targets by a background subtraction method, then labeling each detection region with a circumscribed rectangular frame, and, by analyzing the appearance features and motion features of the detected targets and the three-dimensional spatial structural relations among the features, establishing for each target region a target three-dimensional layered graph model consisting of nodes, edges and a spatial structure;
s2, according to the target three-dimensional layered graph model established in the step S1, establishing a detection three-dimensional layered graph model R of a detection target in the detection area of the current frame;
S3, according to the target three-dimensional layered graph model established in step S1, establishing for the tracked target of the previous frame a prediction three-dimensional layered graph model $\hat{R}$ of the tracked target in the prediction area of the current frame;
S4, respectively calculating the detection three-dimensional layered graph model R of the detection target in the current frame and the prediction three-dimensional layered graph model of the tracking target in the previous frame in the current frame prediction area
Figure GDA0002964043460000022
Matching degree of nodes, edges and space structures among the nodes;
and S5, tracking and matching the target between the detection area and the prediction area of the current frame according to the calculation result of the step S4.
Preferably, the nodes are the cluster blocks in the target region, formed from color features, shape features and three-dimensional spatial features.
Preferably, an edge is the three-dimensional Euclidean distance between the center points of different cluster blocks of the same target.
Preferably, the spatial structure is the three-dimensional Euclidean distance between the center points of different targets.
Preferably, the detection three-dimensional layered graph model R in step S2 is expressed as:

$R = \{V, E, S\} \quad (1)$

In formula (1), V denotes the nodes of the detection three-dimensional layered graph, E its edges, and S its spatial structure.
Preferably, the prediction three-dimensional layered graph model $\hat{R}$ in step S3 is expressed as:

$\hat{R} = \{\hat{V}, \hat{E}, \hat{S}\} \quad (2)$

In formula (2), $\hat{V}$ denotes the nodes of the prediction three-dimensional layered graph model, $\hat{E}$ its edges, and $\hat{S}$ its spatial structure.
Preferably, step S4 is specifically implemented as follows:

S401, calculating the matching degree $m_1$ of the nodes between the detection three-dimensional layered graph model R of the detected target in the current frame and the prediction three-dimensional layered graph model $\hat{R}$ of the previous frame's tracked target in the current-frame prediction area, expressed as:

$m_1(V_i, \hat{V}_i) = \lambda_g \delta_g + \lambda_c \delta_c \quad (3)$

In formula (3), $i$ is an index, $V_i = \{l_i, g_i, c_i\}$ denotes a node of the detection three-dimensional layered graph model, $\hat{V}_i = \{\hat{l}_i, \hat{g}_i, \hat{c}_i\}$ denotes a node of the prediction three-dimensional layered graph model, $l_i$ and $\hat{l}_i$ denote the labels of nodes $V_i$ and $\hat{V}_i$ respectively, $g_i$ and $c_i$ denote the components of node $V_i$, $\hat{g}_i$ and $\hat{c}_i$ denote the components of node $\hat{V}_i$, $\lambda_g$ denotes the weight of node components $g_i$ and $\hat{g}_i$, $\lambda_c$ denotes the weight of node components $c_i$ and $\hat{c}_i$ (where $\lambda_g + \lambda_c = 1$), $\delta_g$ denotes the matching degree of the position vectors and shape models of $g_i$ and $\hat{g}_i$, and $\delta_c$ denotes the matching degree of the color histograms of $c_i$ and $\hat{c}_i$;
S402, calculating the matching degree $m_2$ of the edges between the detection three-dimensional layered graph model R of the detected target in the current frame and the prediction three-dimensional layered graph model $\hat{R}$ of the previous frame's tracked target in the current-frame prediction area, expressed as:

$m_2 = \lambda_b \delta_b + \lambda_a \delta_a \quad (9)$

In formula (9), $e_b$ denotes the length of an edge of the detected target in the detection three-dimensional layered graph model R of the current frame, $\hat{e}_b$ denotes the length of the corresponding edge of the tracked target in the prediction three-dimensional layered graph model $\hat{R}$ of the current frame, $\delta_b$ denotes the length matching degree between $e_b$ and $\hat{e}_b$, $e_a$ denotes the cosine angle of an edge of the detected target in the detection model R of the current frame, $\hat{e}_a$ denotes the cosine angle of the corresponding edge of the tracked target in the prediction model $\hat{R}$ of the current frame, $\delta_a$ denotes the angle matching degree between $e_a$ and $\hat{e}_a$, and $\lambda_b$ and $\lambda_a$ denote the weights of the length matching degree and the angle matching degree respectively;
S403, calculating the matching degree $m_3$ of the spatial structure between the detection three-dimensional layered graph model R of the detected target in the current frame and the prediction three-dimensional layered graph model $\hat{R}$ of the previous frame's tracked target in the current-frame prediction area, expressed as:

$m_3 = \lambda_{ec} \delta_{ec} + \lambda_{eh} \delta_{eh} \quad (12)$

In formula (12), $d_c$ denotes the length of a three-dimensional straight-line segment in the detection three-dimensional layered graph model R, $\hat{d}_c$ denotes the length of the corresponding three-dimensional straight-line segment in the prediction three-dimensional layered graph model $\hat{R}$, $\delta_{ec}$ denotes the matching degree between $d_c$ and $\hat{d}_c$, $d_h$ denotes the angle in three-dimensional space of a straight-line segment in the detection model R, $\hat{d}_h$ denotes the angle in three-dimensional space of the corresponding segment in the prediction model $\hat{R}$, $\delta_{eh}$ denotes the matching degree between $d_h$ and $\hat{d}_h$, and $\lambda_{ec}$ and $\lambda_{eh}$ denote the weights of the segment-length matching degree $\delta_{ec}$ and the segment-angle matching degree $\delta_{eh}$ respectively;

S404, from the node matching degree $m_1$, edge matching degree $m_2$ and three-dimensional spatial-structure matching degree $m_3$ obtained in steps S401 to S403, calculating the matching degree M between the detection three-dimensional layered graph model R and the prediction three-dimensional layered graph model $\hat{R}$, expressed as:

$M = \sum_{j=1}^{3} f_j m_j \quad (15)$

In formula (15), $j$ is an index, and $f_1$, $f_2$ and $f_3$ denote the weight coefficients of the node, edge and three-dimensional spatial-structure matching degrees respectively, with $f_1 + f_2 + f_3 = 1$.
Preferably, step S5 is specifically implemented as follows:

S501, establishing a matching table between the detection three-dimensional layered graph model R and the prediction three-dimensional layered graph model $\hat{R}$ according to the calculation result of step S4, and judging from it whether a tracked target is occluded;

S502, if step S501 judges that the tracked target is not occluded, obtaining the optimal matching result with the Hungarian method for solving the assignment problem;

S503, if step S501 judges that the tracked target is occluded, first matching the cluster blocks in the foreground detection area of the current frame against the tracked targets and then matching against the prediction areas of the previous frame's tracked targets, thereby obtaining the best matching result for the occluded area of the current frame.

Preferably, step S503 is specifically implemented as follows:

S5031, calculating the node matching degree between the cluster blocks and the prediction three-dimensional layered graph model $\hat{R}$ of the tracked target in the current frame, and preliminarily determining the relation between each cluster block and the tracked target from it;

S5032, calculating the matching degree of the edges between the cluster blocks and the prediction three-dimensional layered graph model $\hat{R}$ of the tracked target in the current frame;

S5033, adding the node matching degree of step S5031 and the edge matching degree of step S5032 and re-establishing the matching table between the detection three-dimensional layered graph model R and the prediction three-dimensional layered graph model $\hat{R}$, from which the best matching result for the occluded area of the current frame is obtained.
Compared with the prior art, the invention establishes three-dimensional layered graph models separately for the two-dimensional RGB image and the three-dimensional depth image, calculates the node, edge and spatial-structure matching degrees between the detection and prediction three-dimensional layered graph models, and from these the overall matching degree between the two models. A matching table of the detection and prediction models is then built from the calculated matching degrees, and from its result it can be accurately judged whether a tracked target is occluded; the optimal matching result of the current frame is thus obtained, the multiple targets in the current frame are effectively identified, and tracking of multiple moving targets is realized. The method can be applied to most Kinect monitoring scenes, can also be extended to fields such as robot target recognition and obstacle avoidance and intelligent transportation, and has good application prospects.
Drawings
FIG. 1 is a flow chart of the multi-moving-target tracking method based on a three-dimensional layered graph model according to the present invention.
FIG. 2 is a flow chart of the method for calculating the matching degree between the detection and prediction three-dimensional layered graph models according to the present invention.
FIG. 3 is a flow chart of matching between detected targets and tracked targets in the present invention.
FIG. 4 is a flow chart of matching in the present invention when detected and tracked targets are occluded.
Detailed Description
In order to make the technical solutions of the present invention better understood, the present invention is further described in detail below with reference to the accompanying drawings.
As shown in fig. 1, a method for tracking multiple moving objects based on a three-dimensional hierarchical graph model includes the following steps:
S1, first detecting the foreground connected regions of targets by a background subtraction method, then labeling each detection region with a circumscribed rectangular frame, and, by analyzing the appearance features and motion features of the detected targets and the three-dimensional spatial structural relations among the features, establishing for each target region a target three-dimensional layered graph model consisting of nodes, edges and a spatial structure;
s2, according to the target three-dimensional layered graph model established in the step S1, establishing a detection three-dimensional layered graph model R of a detection target in the detection area of the current frame;
S3, according to the target three-dimensional layered graph model established in step S1, establishing for the tracked target of the previous frame a prediction three-dimensional layered graph model $\hat{R}$ of the tracked target in the prediction area of the current frame;

S4, respectively calculating the matching degrees of the nodes, edges and spatial structures between the detection three-dimensional layered graph model R of the detected target in the current frame and the prediction three-dimensional layered graph model $\hat{R}$ of the previous frame's tracked target in the current-frame prediction area;
and S5, tracking and matching the target between the detection area and the prediction area of the current frame according to the calculation result of the step S4.
In this embodiment of the multi-moving-target tracking method based on a three-dimensional layered graph model, the RGB image and the depth information do not need to be fused: three-dimensional layered graph models are established separately for the two-dimensional RGB image and the three-dimensional depth image, the previous frame's tracked targets are predicted into the current frame, and matching against the current frame yields the multi-target tracking result. Real-time multi-target tracking is thus realized effectively; the method can be applied to most Kinect monitoring scenes, can also be extended to fields such as robot target recognition and obstacle avoidance and intelligent transportation, and has good application prospects.
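Step S1 above can be prototyped with standard tools. The following is a minimal sketch, assuming OpenCV's MOG2 background subtractor and a connected-component pass as the foreground detection; the patent specifies neither a particular subtraction algorithm nor any thresholds, so `min_area` and the morphology kernel here are illustrative.

```python
import cv2
import numpy as np

# Assumed background-subtraction stage for step S1 (MOG2 is one choice).
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

def detect_foreground_regions(frame_bgr, min_area=500):
    """Return circumscribed rectangles of foreground connected regions."""
    mask = subtractor.apply(frame_bgr)
    # MOG2 marks shadows as 127; keep only confident foreground (255).
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, num):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes
```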
As shown in the figure, the nodes are the cluster blocks in the target region, formed from color features, shape features and three-dimensional spatial features. In this embodiment, the nodes are the individual cluster blocks of a target region: the color feature is the target color histogram $H_t$ established in HSV space for each detected target region (t denotes the current frame time); the shape feature is the length and width of the target's circumscribed rectangular frame, formed from the two-dimensional shape model and the three-dimensional depth shape model; and the three-dimensional spatial feature is the two-dimensional coordinate of each cluster block's center point in the target region together with the corresponding depth coordinate. It should be noted that, by analyzing the local adjacent HSV color characteristics of the foreground blocks, regions within a certain similarity range can be clustered into blocks.
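As one illustration of the node features just described, the sketch below builds the color, shape and three-dimensional spatial components for a single cluster block. The 8×3×3 HSV quantization (giving the 72 bins mentioned later) and the median-depth estimate are assumptions, since the text does not fix the bin layout or the depth statistic.

```python
import cv2
import numpy as np

def make_node(rgb_patch, depth_patch, box):
    """Build the (g, c) node components for one cluster block."""
    x, y, w, h = box
    hsv = cv2.cvtColor(rgb_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 3, 3],
                        [0, 180, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()       # c: 72-bin color histogram
    centroid = np.array([x + w / 2.0, y + h / 2.0])  # 2-D center of the block
    depth = float(np.median(depth_patch))            # assumed depth at the center
    g = {"position": np.append(centroid, depth),     # 3-D position vector
         "size": np.array([w, h], dtype=float)}      # shape model (box size)
    return {"g": g, "c": hist}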
As shown in the figure, an edge is the three-dimensional Euclidean distance between the center points of different cluster blocks of the same target. In this embodiment, the motion characteristics of the target are reflected by the change of these three-dimensional Euclidean distances between cluster-block center points.
As shown, the spatial structure is the three-dimensional Euclidean distance between the center points of different targets.
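Since both the edges and the spatial structure reduce to three-dimensional Euclidean distances between center points, a single trivial helper covers both; a short sketch:

```python
import numpy as np

def euclid3d(p, q):
    """Distance between two (x, y, depth) center points."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))
```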
As shown, the detection three-dimensional layered graph model R in step S2 is expressed as:

$R = \{V, E, S\} \quad (1)$

In formula (1), V denotes the nodes of the detection three-dimensional layered graph, E its edges, and S its spatial structure.

In this embodiment, node $V_i$ denotes the cluster block labeled $i$; edge $E(V_i, V_j)$ denotes the edge connecting cluster blocks $i$ and $j$ (where $i \neq j$), with $E(V_i, V_j) = \{d_{ij}\}$, where $d_{ij}$ is the three-dimensional Euclidean distance between the center points of cluster blocks $i$ and $j$; and S denotes the three-dimensional Euclidean distance between the center point of tracked target $t_i$ and the center points of the other tracked targets $t_j$ in the detection three-dimensional layered graph model.
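One possible container for the R = {V, E, S} triple is sketched below; the field names and the keying of E and S by index pairs are illustrative choices, not taken from the patent.

```python
import numpy as np
from dataclasses import dataclass, field
from itertools import combinations

@dataclass
class LayeredGraph:
    """R = {V, E, S}: nodes, intra-target edges, inter-target structure."""
    nodes: list                                     # V: cluster-block feature dicts
    edges: dict = field(default_factory=dict)       # E[(i, j)] = d_ij within a target
    structure: dict = field(default_factory=dict)   # S[(ti, tj)] between target centers

def build_edges(centers):
    """E: 3-D Euclidean distances between all cluster-block centers of one target."""
    return {(i, j): float(np.linalg.norm(np.asarray(p) - np.asarray(q)))
            for (i, p), (j, q) in combinations(enumerate(centers), 2)}
```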
As shown, the prediction three-dimensional layered graph model $\hat{R}$ in step S3 is expressed as:

$\hat{R} = \{\hat{V}, \hat{E}, \hat{S}\} \quad (2)$

In formula (2), $\hat{V}$ denotes the nodes of the prediction three-dimensional layered graph model, $\hat{E}$ its edges, and $\hat{S}$ its spatial structure.

In this embodiment, node $\hat{V}_i$ denotes the cluster block labeled $i$; edge $\hat{E}(\hat{V}_i, \hat{V}_j)$ denotes the edge connecting cluster blocks $i$ and $j$ (where $i \neq j$), with $\hat{E}(\hat{V}_i, \hat{V}_j) = \{\hat{d}_{ij}\}$, where $\hat{d}_{ij}$ is the three-dimensional Euclidean distance between the center points of cluster blocks $i$ and $j$; and $\hat{S}$ denotes the three-dimensional Euclidean distance between the center point of tracked target $t_i$ and the center points of the other tracked targets $t_j$ in the prediction three-dimensional layered graph model. By establishing the detection three-dimensional layered graph model of the detected targets in the current frame and the prediction three-dimensional layered graph model of the previous frame's tracked targets in the current frame, the matching degrees of the nodes, edges and spatial structures between the models can then be calculated for matching.
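Building $\hat{R}$ requires placing each node of the previous frame into the current frame. A constant-velocity Kalman filter per node center is one straightforward way to do this, and it also yields the covariances $P$ and $\hat{P}$ used later in formula (6); the constant-velocity model and all noise values below are assumptions, as the patent only states that a Kalman estimator is used.

```python
import numpy as np

class ConstantVelocityKF:
    """Per-node-center Kalman predictor for the current-frame prediction area."""

    def __init__(self, pos3d, dt=1.0):
        self.x = np.hstack([pos3d, np.zeros(3)])           # [x, y, z, vx, vy, vz]
        self.P = np.eye(6) * 10.0                          # state covariance
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3) * dt                    # position += velocity * dt
        self.Q = np.eye(6) * 0.01                          # process noise (assumed)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.R = np.eye(3)                                 # measurement noise (assumed)

    def predict(self):
        """Return predicted position p-hat and its covariance P-hat."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3].copy(), self.H @ self.P @ self.H.T

    def update(self, z):
        """Correct with the matched detection's 3-D center z."""
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```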
As shown in fig. 2, step S4 is specifically implemented as follows:

S401, calculating the matching degree $m_1$ of the nodes between the detection three-dimensional layered graph model R of the detected target in the current frame and the prediction three-dimensional layered graph model $\hat{R}$ of the previous frame's tracked target in the current-frame prediction area, expressed as:

$m_1(V_i, \hat{V}_i) = \lambda_g \delta_g + \lambda_c \delta_c \quad (3)$

In formula (3), $i$ is an index, $V_i = \{l_i, g_i, c_i\}$ denotes a node of the detection three-dimensional layered graph model, $\hat{V}_i = \{\hat{l}_i, \hat{g}_i, \hat{c}_i\}$ denotes a node of the prediction three-dimensional layered graph model, $l_i$ and $\hat{l}_i$ denote the labels of nodes $V_i$ and $\hat{V}_i$ respectively, $g_i$ and $c_i$ denote the components of node $V_i$, $\hat{g}_i$ and $\hat{c}_i$ denote the components of node $\hat{V}_i$, $\lambda_g$ denotes the weight of node components $g_i$ and $\hat{g}_i$, $\lambda_c$ denotes the weight of node components $c_i$ and $\hat{c}_i$ (where $\lambda_g + \lambda_c = 1$), $\delta_g$ denotes the matching degree of the position vectors and shape models of $g_i$ and $\hat{g}_i$, and $\delta_c$ denotes the matching degree of the color histograms of $c_i$ and $\hat{c}_i$, where $\delta_g$ is expressed as:

$\delta_g(g_i, \hat{g}_i) = \lambda_p \delta_p + \lambda_s \delta_s \quad (4)$

In formula (4), $p_i$ denotes the position vector of the detected target in the current frame, $\hat{p}_i$ denotes the predicted position vector of the tracked target in the current frame, $size_i$ denotes the shape model of node component $g_i$ in the current frame, $\widehat{size}_i$ denotes the shape model of node component $\hat{g}_i$ in the current frame, $\lambda_p$ and $\lambda_s$ denote weight values, $\delta_p$ denotes the matching degree between the detected-target position vector $p_i$ and the tracked-target position vector $\hat{p}_i$, and $\delta_s$ denotes the matching degree between the detected-target shape model $size_i$ and the tracked-target shape model $\widehat{size}_i$, where $\delta_p$ is expressed as:
$\delta_p = \begin{cases} 1, & \sigma \le \epsilon \\ 0, & \sigma > \epsilon \end{cases} \quad (5)$

In formula (5), $\epsilon$ is a constant, $\epsilon = 3.841$, and $\sigma$ denotes the Mahalanobis-distance measure of the detected target's motion state, calculated as:

$\sigma = (p - \hat{p})^{\mathrm{T}} (P + \hat{P})^{-1} (p - \hat{p}) \quad (6)$

In formula (6), $p$ and $\hat{p}$ denote the position vectors, and $P$ and $\hat{P}$ denote the covariances of $p$ and $\hat{p}$ respectively; using a Kalman estimator to obtain $p$ and $\hat{p}$, the values of $P$ and $\hat{P}$ are obtained.

The matching degree $\delta_s$ between the shape models is formulated as:

$\delta_s = \frac{\min(size_i, \widehat{size}_i)}{\max(size_i, \widehat{size}_i)} \quad (7)$

The matching degree $\delta_c$ of the color histograms of node components $c_i$ and $\hat{c}_i$ is formulated as:

$\delta_c = \sum_{\beta=1}^{72} \sqrt{hist(\beta)\,\widehat{hist}(\beta)} \quad (8)$

In formula (8), $hist$ denotes the color histogram of node component $c_i$, $\widehat{hist}$ denotes the color histogram of node component $\hat{c}_i$, and $\beta$ indexes the 72 bins of the HSV-space color histogram;
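Putting formulas (3)–(8) together, a hedged sketch of the node matching degree $m_1$ follows. The gate form of $\delta_p$, the per-dimension ratio for $\delta_s$ and the equal default weights mirror the reconstructions above and are assumptions, and each node is assumed to carry a position covariance `cov` from the Kalman estimator.

```python
import numpy as np

EPSILON = 3.841  # chi-square gate constant from formula (5)

def node_match(node, pred_node, lam_g=0.5, lam_c=0.5, lam_p=0.5, lam_s=0.5):
    """m1 from formula (3), under the reconstructed leaf forms (5)-(8)."""
    p, p_hat = node["g"]["position"], pred_node["g"]["position"]
    P, P_hat = node["g"]["cov"], pred_node["g"]["cov"]
    d = p - p_hat
    sigma = float(d @ np.linalg.inv(P + P_hat) @ d)               # formula (6)
    delta_p = 1.0 if sigma <= EPSILON else 0.0                    # formula (5)
    s, s_hat = node["g"]["size"], pred_node["g"]["size"]
    delta_s = float((np.minimum(s, s_hat) / np.maximum(s, s_hat)).mean())  # (7)
    delta_g = lam_p * delta_p + lam_s * delta_s                   # formula (4)
    delta_c = float(np.sum(np.sqrt(node["c"] * pred_node["c"])))  # formula (8)
    return lam_g * delta_g + lam_c * delta_c                      # formula (3)
```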
S402, calculating the matching degree $m_2$ of the edges between the detection three-dimensional layered graph model R of the detected target in the current frame and the prediction three-dimensional layered graph model $\hat{R}$ of the previous frame's tracked target in the current-frame prediction area, expressed as:

$m_2 = \lambda_b \delta_b + \lambda_a \delta_a \quad (9)$

In formula (9), $e_b$ denotes the length of an edge of the detected target in the detection three-dimensional layered graph model R of the current frame, $\hat{e}_b$ denotes the length of the corresponding edge of the tracked target in the prediction three-dimensional layered graph model $\hat{R}$ of the current frame, $\delta_b$ denotes the length matching degree between $e_b$ and $\hat{e}_b$, $e_a$ denotes the cosine angle of an edge of the detected target in the detection model R of the current frame, $\hat{e}_a$ denotes the cosine angle of the corresponding edge of the tracked target in the prediction model $\hat{R}$ of the current frame, $\delta_a$ denotes the angle matching degree between $e_a$ and $\hat{e}_a$, and $\lambda_b$ and $\lambda_a$ denote the weights of the length matching degree and the angle matching degree respectively (where $\lambda_b + \lambda_a = 1$), where the length matching degree $\delta_b$ is expressed as:
$\delta_b = \frac{\min(e_b, \hat{e}_b)}{\max(e_b, \hat{e}_b)} \quad (10)$

and the angle matching degree $\delta_a$ is expressed as:

$\delta_a = 1 - \frac{|e_a - \hat{e}_a|}{2} \quad (11)$
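A correspondingly hedged sketch of the edge matching degree $m_2$ from formulas (9)–(11); the leaf forms for $\delta_b$ and $\delta_a$ are the reconstructions used above and are assumptions (cosine angles lie in [-1, 1], hence the division by 2).

```python
def edge_match(e_b, e_b_hat, e_a, e_a_hat, lam_b=0.5, lam_a=0.5):
    """m2 from formula (9) under the reconstructed leaf forms."""
    delta_b = min(e_b, e_b_hat) / max(e_b, e_b_hat)   # length matching, formula (10)
    delta_a = 1.0 - abs(e_a - e_a_hat) / 2.0          # cosine-angle matching, formula (11)
    return lam_b * delta_b + lam_a * delta_a
```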
S403, calculating the matching degree $m_3$ of the spatial structure between the detection three-dimensional layered graph model R of the detected target in the current frame and the prediction three-dimensional layered graph model $\hat{R}$ of the previous frame's tracked target in the current-frame prediction area, expressed as:

$m_3 = \lambda_{ec} \delta_{ec} + \lambda_{eh} \delta_{eh} \quad (12)$

In formula (12), $d_c$ denotes the length of a three-dimensional straight-line segment in the detection three-dimensional layered graph model R, $\hat{d}_c$ denotes the length of the corresponding three-dimensional straight-line segment in the prediction three-dimensional layered graph model $\hat{R}$, $\delta_{ec}$ denotes the matching degree between $d_c$ and $\hat{d}_c$, $d_h$ denotes the angle in three-dimensional space of a straight-line segment in the detection model R, $\hat{d}_h$ denotes the angle in three-dimensional space of the corresponding segment in the prediction model $\hat{R}$, $\delta_{eh}$ denotes the matching degree between $d_h$ and $\hat{d}_h$, and $\lambda_{ec}$ and $\lambda_{eh}$ denote the weights of the segment-length matching degree $\delta_{ec}$ and the segment-angle matching degree $\delta_{eh}$ respectively (where $\lambda_{ec} + \lambda_{eh} = 1$), where the three-dimensional segment-length matching degree $\delta_{ec}$ is expressed as:
$\delta_{ec} = \frac{\min(d_c, \hat{d}_c)}{\max(d_c, \hat{d}_c)} \quad (13)$

and the segment-angle matching degree $\delta_{eh}$ in three-dimensional space is expressed as:

$\delta_{eh} = 1 - \frac{|d_h - \hat{d}_h|}{\pi} \quad (14)$
S404, from the node matching degree $m_1$, edge matching degree $m_2$ and three-dimensional spatial-structure matching degree $m_3$ obtained in steps S401 to S403, calculating the matching degree M between the detection three-dimensional layered graph model R and the prediction three-dimensional layered graph model $\hat{R}$, expressed as:

$M = \sum_{j=1}^{3} f_j m_j \quad (15)$

In formula (15), $j$ is an index, and $f_1$, $f_2$ and $f_3$ denote the weight coefficients of the node, edge and three-dimensional spatial-structure matching degrees respectively, with $f_1 + f_2 + f_3 = 1$.
In this embodiment, calculating the matching degree M between the detection three-dimensional layered graph model R and the prediction three-dimensional layered graph model $\hat{R}$ provides the theoretical basis for matching the multiple targets in the current frame.
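The spatial-structure matching degree $m_3$ and the overall matching degree M combine in the same way; a short sketch under the same reconstructed leaf forms, with illustrative weights:

```python
import math

def structure_match(d_c, d_c_hat, d_h, d_h_hat, lam_ec=0.5, lam_eh=0.5):
    """m3 from formula (12); leaf forms (13)-(14) are the reconstructions above."""
    delta_ec = min(d_c, d_c_hat) / max(d_c, d_c_hat)   # segment-length matching (13)
    delta_eh = 1.0 - abs(d_h - d_h_hat) / math.pi      # segment-angle matching (14)
    return lam_ec * delta_ec + lam_eh * delta_eh

def overall_match(m1, m2, m3, f=(0.4, 0.3, 0.3)):
    """M from formula (15); the weights f1+f2+f3=1 here are illustrative."""
    return f[0] * m1 + f[1] * m2 + f[2] * m3
```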
As shown in fig. 3, the specific implementation manner of step S5 is:
S501, establishing a matching table between the detection three-dimensional layered graph model R and the prediction three-dimensional layered graph model $\hat{R}$ according to the calculation result of step S4, and judging from it whether a tracked target is occluded;

S502, if step S501 judges that the tracked target is not occluded, obtaining the optimal matching result with the Hungarian method for solving the assignment problem (see the sketch after this list);

S503, if step S501 judges that the tracked target is occluded, first matching the cluster blocks in the foreground detection area of the current frame against the tracked targets and then matching against the prediction areas of the previous frame's tracked targets, thereby obtaining the best matching result for the occluded area of the current frame.
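For the unoccluded case of step S502, the Hungarian method is available off the shelf; a minimal sketch using SciPy's `linear_sum_assignment`, where `match_matrix[d][t]` holds the matching degree M between detection d and track t and the acceptance threshold `min_match` is an assumption:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_tracks(match_matrix, min_match=0.5):
    """Optimal detection-to-track assignment (step S502). Hungarian solves
    a cost minimization, so the match degrees M are negated."""
    m = np.asarray(match_matrix, dtype=float)
    rows, cols = linear_sum_assignment(-m)
    return [(r, c) for r, c in zip(rows, cols) if m[r, c] >= min_match]
```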
As shown in fig. 4, the specific implementation manner of step S503 is:
S5031, calculating the node matching degree between the cluster blocks and the prediction three-dimensional layered graph model $\hat{R}$ of the tracked target in the current frame, and preliminarily determining the relation between each cluster block and the tracked target from it;

S5032, calculating the matching degree of the edges between the cluster blocks and the prediction three-dimensional layered graph model $\hat{R}$ of the tracked target in the current frame;

S5033, adding the node matching degree of step S5031 and the edge matching degree of step S5032 and re-establishing the matching table between the detection three-dimensional layered graph model R and the prediction three-dimensional layered graph model $\hat{R}$, from which the best matching result for the occluded area of the current frame is obtained.
In this embodiment, whether a tracked target is occluded can be accurately judged from the matching degree M between the detection three-dimensional layered graph model R and the prediction three-dimensional layered graph model $\hat{R}$ together with the result of the matching table; the multiple targets in the current frame can then be effectively identified according to this judgment, realizing effective tracking of multiple moving targets.
The multi-moving-target tracking method based on a three-dimensional layered graph model provided by the invention has been described in detail above. The principles and embodiments of the invention are explained herein using specific examples, which are presented only to assist in understanding its core concepts. It should be noted that those skilled in the art can make various improvements and modifications to the invention without departing from its principle, and such improvements and modifications also fall within the scope of the claims of the invention.

Claims (8)

1. A multi-moving target tracking method based on a three-dimensional layered graph model is characterized by comprising the following steps:
S1, first detecting the foreground connected regions of targets by a background subtraction method, then labeling each detection region with a circumscribed rectangular frame, and, by analyzing the appearance features and motion features of the detected targets and the three-dimensional spatial structural relations among the features, establishing for each target region a target three-dimensional layered graph model consisting of nodes, edges and a spatial structure;
s2, according to the target three-dimensional layered graph model established in the step S1, establishing a detection three-dimensional layered graph model R of a detection target in the detection area of the current frame;
S3, according to the target three-dimensional layered graph model established in step S1, establishing for the tracked target of the previous frame a prediction three-dimensional layered graph model $\hat{R}$ of the tracked target in the prediction area of the current frame;

S4, respectively calculating the matching degrees of the nodes, edges and spatial structures between the detection three-dimensional layered graph model R of the detected target in the current frame and the prediction three-dimensional layered graph model $\hat{R}$ of the previous frame's tracked target in the current-frame prediction area, implemented specifically as follows:
S401, calculating the matching degree $m_1$ of the nodes between the detection three-dimensional layered graph model R of the detected target in the current frame and the prediction three-dimensional layered graph model $\hat{R}$ of the previous frame's tracked target in the current-frame prediction area, expressed as:

$m_1(V_i, \hat{V}_i) = \lambda_g \delta_g + \lambda_c \delta_c \quad (3)$

In formula (3), $i$ is an index, $V_i = \{l_i, g_i, c_i\}$ denotes a node of the detection three-dimensional layered graph model, $\hat{V}_i = \{\hat{l}_i, \hat{g}_i, \hat{c}_i\}$ denotes a node of the prediction three-dimensional layered graph model, $l_i$ and $\hat{l}_i$ denote the labels of nodes $V_i$ and $\hat{V}_i$ respectively, $g_i$ and $c_i$ denote the components of node $V_i$, $\hat{g}_i$ and $\hat{c}_i$ denote the components of node $\hat{V}_i$, $\lambda_g$ denotes the weight of node components $g_i$ and $\hat{g}_i$, $\lambda_c$ denotes the weight of node components $c_i$ and $\hat{c}_i$, $\delta_g$ denotes the matching degree of the position vectors and shape models of $g_i$ and $\hat{g}_i$, and $\delta_c$ denotes the matching degree of the color histograms of $c_i$ and $\hat{c}_i$;
S402, calculating the matching degree $m_2$ of the edges between the detection three-dimensional layered graph model R of the detected target in the current frame and the prediction three-dimensional layered graph model $\hat{R}$ of the previous frame's tracked target in the current-frame prediction area, expressed as:

$m_2 = \lambda_b \delta_b + \lambda_a \delta_a \quad (9)$

In formula (9), $e_b$ denotes the length of an edge of the detected target in the detection three-dimensional layered graph model R of the current frame, $\hat{e}_b$ denotes the length of the corresponding edge of the tracked target in the prediction three-dimensional layered graph model $\hat{R}$ of the current frame, $\delta_b$ denotes the length matching degree between $e_b$ and $\hat{e}_b$, $e_a$ denotes the cosine angle of an edge of the detected target in the detection model R of the current frame, $\hat{e}_a$ denotes the cosine angle of the corresponding edge of the tracked target in the prediction model $\hat{R}$ of the current frame, $\delta_a$ denotes the angle matching degree between $e_a$ and $\hat{e}_a$, and $\lambda_b$ and $\lambda_a$ denote the weights of the length matching degree and the angle matching degree respectively;
S403, calculating the matching degree $m_3$ of the spatial structure between the detection three-dimensional layered graph model R of the detected target in the current frame and the prediction three-dimensional layered graph model $\hat{R}$ of the previous frame's tracked target in the current-frame prediction area, expressed as:

$m_3 = \lambda_{ec} \delta_{ec} + \lambda_{eh} \delta_{eh} \quad (12)$

In formula (12), $d_c$ denotes the length of a three-dimensional straight-line segment in the detection three-dimensional layered graph model R, $\hat{d}_c$ denotes the length of the corresponding three-dimensional straight-line segment in the prediction three-dimensional layered graph model $\hat{R}$, $\delta_{ec}$ denotes the matching degree between $d_c$ and $\hat{d}_c$, $d_h$ denotes the angle in three-dimensional space of a straight-line segment in the detection model R, $\hat{d}_h$ denotes the angle in three-dimensional space of the corresponding segment in the prediction model $\hat{R}$, $\delta_{eh}$ denotes the matching degree between $d_h$ and $\hat{d}_h$, and $\lambda_{ec}$ and $\lambda_{eh}$ denote the weights of the segment-length matching degree $\delta_{ec}$ and the segment-angle matching degree $\delta_{eh}$ respectively;
S404, from the node matching degree $m_1$, edge matching degree $m_2$ and three-dimensional spatial-structure matching degree $m_3$ obtained in steps S401 to S403, calculating the matching degree M between the detection three-dimensional layered graph model R and the prediction three-dimensional layered graph model $\hat{R}$, expressed as:

$M = \sum_{j=1}^{3} f_j m_j \quad (15)$

In formula (15), $f_1$, $f_2$ and $f_3$ denote the weight coefficients of the node, edge and three-dimensional spatial-structure matching degrees respectively, with $f_1 + f_2 + f_3 = 1$;
And S5, tracking and matching the target between the detection area and the prediction area of the current frame according to the calculation result of the step S4.
2. The method as claimed in claim 1, wherein the nodes are clustered blocks in the target region composed of color features, shape features and three-dimensional space features.
3. The method according to claim 2, wherein the edge is a three-dimensional Euclidean distance between center points of different cluster blocks of the same target.
4. The method according to claim 3, wherein the spatial structure is a three-dimensional Euclidean distance between different target center points.
5. The method for tracking multiple moving targets based on a three-dimensional layered graph model according to claim 4, wherein the detection three-dimensional layered graph model R in step S2 is expressed as:

$R = \{V, E, S\} \quad (1)$

In formula (1), V denotes the nodes of the detection three-dimensional layered graph, E its edges, and S its spatial structure.
6. The method for tracking multiple moving targets based on a three-dimensional layered graph model according to claim 5, wherein the prediction three-dimensional layered graph model $\hat{R}$ in step S3 is expressed as:

$\hat{R} = \{\hat{V}, \hat{E}, \hat{S}\} \quad (2)$

In formula (2), $\hat{V}$ denotes the nodes of the prediction three-dimensional layered graph model, $\hat{E}$ its edges, and $\hat{S}$ its spatial structure.
7. The method for tracking multiple moving targets based on a three-dimensional layered graph model according to claim 6, wherein step S5 is specifically implemented as follows:

S501, establishing a matching table between the detection three-dimensional layered graph model R and the prediction three-dimensional layered graph model $\hat{R}$ according to the calculation result of step S4, and judging from it whether a tracked target is occluded;

S502, if step S501 judges that the tracked target is not occluded, obtaining the optimal matching result with the Hungarian method for solving the assignment problem;

S503, if step S501 judges that the tracked target is occluded, first matching the cluster blocks in the foreground detection area of the current frame against the tracked targets and then matching against the prediction areas of the previous frame's tracked targets, thereby obtaining the best matching result for the occluded area of the current frame.
8. The method for tracking multiple moving targets based on a three-dimensional layered graph model according to claim 7, wherein step S503 is specifically implemented as follows:

S5031, calculating the node matching degree between the cluster blocks and the prediction three-dimensional layered graph model $\hat{R}$ of the tracked target in the current frame, and preliminarily determining the relation between each cluster block and the tracked target from it;

S5032, calculating the matching degree of the edges between the cluster blocks and the prediction three-dimensional layered graph model $\hat{R}$ of the tracked target in the current frame;

S5033, adding the node matching degree of step S5031 and the edge matching degree of step S5032 and re-establishing the matching table between the detection three-dimensional layered graph model R and the prediction three-dimensional layered graph model $\hat{R}$, from which the best matching result for the occluded area of the current frame is obtained.
CN201910205734.6A 2019-03-18 2019-03-18 Multi-moving-object tracking method based on three-dimensional layered graph model Active CN109961461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910205734.6A CN109961461B (en) 2019-03-18 2019-03-18 Multi-moving-object tracking method based on three-dimensional layered graph model


Publications (2)

Publication Number Publication Date
CN109961461A CN109961461A (en) 2019-07-02
CN109961461B true CN109961461B (en) 2021-04-23

Family

ID=67024559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910205734.6A Active CN109961461B (en) 2019-03-18 2019-03-18 Multi-moving-object tracking method based on three-dimensional layered graph model

Country Status (1)

Country Link
CN (1) CN109961461B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112162550B (en) * 2020-09-02 2021-07-16 北京航空航天大学 Three-dimensional target tracking method for active safety collision avoidance of automobile
CN115063789B (en) * 2022-05-24 2023-08-04 中国科学院自动化研究所 3D target detection method and device based on key point matching
CN116523970B (en) * 2023-07-05 2023-10-20 之江实验室 Dynamic three-dimensional target tracking method and device based on secondary implicit matching

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106067179A (en) * 2016-05-31 2016-11-02 广东顺德中山大学卡内基梅隆大学国际联合研究院 A kind of multi-object tracking method based on hierarchical network flow graph
CN107292911A (en) * 2017-05-23 2017-10-24 南京邮电大学 A kind of multi-object tracking method merged based on multi-model with data correlation
CN107992827A (en) * 2017-12-03 2018-05-04 湖南工程学院 A kind of method and device of the multiple mobile object tracking based on threedimensional model
CN108986151A (en) * 2017-05-31 2018-12-11 华为技术有限公司 A kind of multiple target tracking processing method and equipment
WO2019005291A1 (en) * 2017-06-27 2019-01-03 Qualcomm Incorporated Using object re-identification in video surveillance


Also Published As

Publication number Publication date
CN109961461A (en) 2019-07-02

Similar Documents

Publication Publication Date Title
CN110246159B (en) 3D target motion analysis method based on vision and radar information fusion
US8199977B2 (en) System and method for extraction of features from a 3-D point cloud
US9846812B2 (en) Image recognition system for a vehicle and corresponding method
CN111693972A (en) Vehicle position and speed estimation method based on binocular sequence images
Zhou et al. Moving object detection and segmentation in urban environments from a moving platform
CN109961461B (en) Multi-moving-object tracking method based on three-dimensional layered graph model
CN105069804B (en) Threedimensional model scan rebuilding method based on smart mobile phone
US20140064624A1 (en) Systems and methods for estimating the geographic location at which image data was captured
CN101344965A (en) Tracking system based on binocular camera shooting
CN115049700A (en) Target detection method and device
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
KR20150074544A (en) Method of tracking vehicle
Hu et al. Robust object tracking via multi-cue fusion
CN113223045A (en) Vision and IMU sensor fusion positioning system based on dynamic object semantic segmentation
CN111723778B (en) Vehicle distance measuring system and method based on MobileNet-SSD
CN112947419A (en) Obstacle avoidance method, device and equipment
CN114170535A (en) Target detection positioning method, device, controller, storage medium and unmanned aerial vehicle
Murmu et al. Relative velocity measurement using low cost single camera-based stereo vision system
EP2677462B1 (en) Method and apparatus for segmenting object area
CN115457086A (en) Multi-target tracking algorithm based on binocular vision and Kalman filtering
CN115308732A (en) Multi-target detection and tracking method integrating millimeter wave radar and depth vision
WO2023131203A1 (en) Semantic map updating method, path planning method, and related apparatuses
van de Wouw et al. Hierarchical 2.5-d scene alignment for change detection with large viewpoint differences
CN111815667B (en) Method for detecting moving target with high precision under camera moving condition
Halperin et al. An epipolar line from a single pixel

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant