CN107992827A - Method and device for multi-moving-object tracking based on a three-dimensional model - Google Patents

Method and device for multi-moving-object tracking based on a three-dimensional model

Info

Publication number
CN107992827A
CN107992827A (application CN201711255705.8A)
Authority
CN
China
Prior art keywords
model
target
frame
value
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711255705.8A
Other languages
Chinese (zh)
Inventor
万琴
唐勇奇
胡惠
李亚
吴迪
余洪山
林国汉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Institute of Engineering
Original Assignee
Hunan Institute of Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Institute of Engineering filed Critical Hunan Institute of Engineering
Priority to CN201711255705.8A priority Critical patent/CN107992827A/en
Publication of CN107992827A publication Critical patent/CN107992827A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/647 Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758 Involving statistics of pixels or of feature values, e.g. histogram matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The invention discloses a method and device for multi-moving-object tracking based on a three-dimensional model, comprising: acquiring a two-dimensional image and a depth image at the current time as the current frame image; screening connected target regions from the current frame image and establishing a corresponding motion-tracking target for each region; establishing the color model, shape model and spatial model of each motion-tracking target in the current frame, and recording their characteristic parameter values in the current frame; predicting the spatial-model characteristic parameter values of each motion-tracking target in the next frame, and obtaining the color-model characteristic parameter prediction by combining it with the shape-model characteristic parameter values; calculating the matching degree between each motion-tracking target's spatial-model and color-model characteristic parameter predictions carried from the previous frame into the current frame and the measured characteristic parameter values of the current frame; and calculating the tracking result from the inter-frame matching degrees. With a fixed monitoring scene, the method achieves accurate, effective and real-time tracking of multiple targets.

Description

Method and device for multi-moving-object tracking based on a three-dimensional model
Technical field
The present invention relates to the technical field of target tracking, and more particularly to a method and device for multi-moving-object tracking based on a three-dimensional model.
Background technology
A three-dimensional vision system provides information about three-dimensional space, and tracking based on such a system can rely on three-dimensional information to establish three-dimensional object models, so as to achieve accurate tracking in complex situations (for example, when multiple targets occlude one another) and greatly improve tracking accuracy. It is therefore applicable to visual surveillance in security fields such as traffic, airports and banks; for example, a surveillance system with three-dimensional video fusion that integrates spatial position information can enhance location awareness and assist emergency decision-making, and has great application potential. However, scene-understanding mechanisms based on continuous three-dimensional visual information are very complex and constitute a brand-new research area; research on tracking multiple, complex target motions based on three-dimensional vision is still at an early, exploratory stage, and tracking of complex multiple moving objects based on three-dimensional vision remains a research direction full of both promise and challenge.
Summary of the invention
An object of the present invention is to provide a method and device for multi-moving-object tracking based on a three-dimensional model constructed from the two-dimensional images and depth images provided by a Kinect camera, so as to solve the technical problem of tracking multiple targets accurately, effectively and in real time when the monitoring scene is fixed.
To solve the above technical problem, the present invention provides a method for multi-moving-object tracking based on a three-dimensional model, comprising:
Step S100: acquiring the two-dimensional image and the depth image at the current time as the current frame image;
Step S200: screening the connected target regions of the current frame image, and establishing a corresponding motion-tracking target for each region;
Step S300: establishing the color model, shape model and spatial model of each motion-tracking target in the current frame, and recording their characteristic parameter values;
Step S400: predicting the spatial-model characteristic parameter values and color-model characteristic parameter values of each motion-tracking target in the next frame;
Step S500: calculating the matching degree between each motion-tracking target's spatial-model and color-model characteristic parameter predictions carried from the previous frame into the current frame and the measured characteristic parameter values of the current frame;
Step S600: calculating the tracking result from the inter-frame matching degrees.
Optionally, the color model is the color histogram of the tracked moving-target region, and its characteristic parameters are the HSV data of the color histogram.
Optionally, the shape model is the bounding rectangle of the tracked moving-target region, and its characteristic parameters are the length and width of that rectangle.
Optionally, the spatial model is the three-dimensional model of the tracked moving-target region; its characteristic parameters are described by the three-dimensional coordinates of two feature points, each three-dimensional coordinate consisting of the two-dimensional coordinates of the feature point and the corresponding depth value of the depth image, and the two feature points being the centre points of the upper and lower halves obtained by dividing the target's shape model into two parts.
Optionally, the spatial-model characteristic parameter values of the motion-tracking target in the next frame are obtained through the following steps:
S401: predicting the position: using a uniform-motion model, the predicted two-dimensional coordinates of each motion-tracking target's spatial-model feature points in the next frame are calculated from their two-dimensional coordinates in the current frame;
S402: predicting the image depth: using the uniform-motion model and the image depth values of each motion-tracking target's spatial-model feature points in the current frame, their predicted image depths in the next frame are obtained;
S403: obtaining the spatial-model characteristic parameter prediction: the predicted two-dimensional coordinates and predicted image depths of each motion-tracking target's spatial-model feature points are combined to obtain the spatial-model characteristic parameter prediction.
Optionally, the color-model characteristic parameter prediction for the next frame is the HSV data of the color histogram established from the motion-tracking target's spatial-model characteristic parameter prediction and shape-model characteristic parameter prediction for the next frame, and the target's shape-model characteristic parameter values for the next frame are calculated from the motion-tracking target's shape-model characteristic parameter values in the current frame using the uniform-motion model.
Optionally, the matching degree between the spatial-model and color-model characteristic parameter predictions of a motion-tracking target tracked in the previous frame and the measured characteristic parameter values of the current frame is obtained as a weighted average of the inter-frame motion-model matching degrees and the color-model similarity, i.e.:
M(i, j) = w1 * M_up(i, j) + w2 * M_down(i, j) + w3 * S(i, j)
Wherein: M(i, j) denotes the matching degree between previous-frame target i and current-frame target j, and w1, w2 and w3 are weighting coefficients;
M_up(i, j) denotes the motion-model matching degree from the centre point of the upper half of the shape model of motion-tracking target i to the centre point of the upper half of the shape model of motion-tracking target j;
M_down(i, j) denotes the motion-model matching degree from the centre point of the lower half of the shape model of motion-tracking target i to the centre point of the lower half of the shape model of motion-tracking target j;
S(i, j) denotes the similarity between the color model of previous-frame target i and the color model of current-frame target j.
Optionally, the motion-model matching degrees M_up(i, j) and M_down(i, j) are calculated from the inter-frame Mahalanobis distance of the motion-tracking target's spatial-model characteristic parameters, i.e. the three-dimensional coordinates.
Optionally, the color-histogram similarity is calculated from the one-dimensional HSV histograms of the two color models as a sum over the histogram bins, where:
h_i^(t-1)(k) denotes the value of bin k of the one-dimensional HSV histogram of the color model of previous-frame target i;
h_j^t(k) denotes the value of bin k of the one-dimensional HSV histogram of the color model of current-frame target j;
t denotes the current time;
k is the summation index, ranging from 1 to 72.
Optionally, the tracking result is obtained according to the following steps:
S601: establish the matching matrix: each row represents a current-frame target, each column represents a previous-frame target, and each matrix element is the matching degree between the current-frame target represented by its row and the previous-frame target represented by its column; three cases are distinguished:
1) if the numbers of rows and columns are equal, tracking is normal and no rows or columns need to be changed;
2) if the number of rows is smaller than the number of columns, the previous-frame target corresponding to the surplus column has disappeared in the current frame, and rows of zeros are added so that the numbers of rows and columns are equal;
3) if the number of columns is smaller than the number of rows, the current-frame target corresponding to the surplus row is a newly appearing target, and columns of zeros are added so that the numbers of rows and columns are equal;
S602: find the best match of the matching matrix, i.e. the assignment that maximizes the total matching degree of the matrix;
S603: solve with the Hungarian algorithm for the assignment problem to obtain the row-column assignment with the maximum matching degree; each assigned element then matches the current-frame target represented by its row with the previous-frame target represented by its column, and if a row or column in the matching result consists entirely of zeros, it corresponds to the target-disappearance or newly-appearing-target tracking situations listed above.
The present invention also provides a device for multi-moving-object tracking based on a three-dimensional model, comprising:
an acquisition module: used to simultaneously acquire, with a Kinect camera, the two-dimensional image and the depth image at the current time as the current frame image;
a detection module: used to detect the foreground connected regions of the current frame image with a conventional background-subtraction method, and to establish a corresponding motion-tracking target for each region;
an obtaining module: used to obtain the color model, spatial model and shape model of each connected region in the current frame, together with the characteristic parameter values of each model;
a prediction module: used to predict the characteristic parameter values of each motion-tracking target's color model, spatial model and shape model in the next frame;
a matching module: used to calculate the matching degree between each motion-tracking target's color-model and spatial-model characteristic parameter predictions carried from the previous frame into the current frame and the measured characteristic parameter values of the current frame, to further establish the matching matrix, and to finally determine the tracked characteristic parameter values of each motion-tracking target's color model, spatial model and shape model by solving for the best match of the matching matrix;
a storage module: used to store the characteristic parameter values involved in the acquisition module, detection module, obtaining module, prediction module and matching module, specifically the measured values, predicted values and tracked values, which further serve as the data basis for the operation of the obtaining module, prediction module and matching module.
By comprehensively using the color model, the three-dimensional spatial model and the shape-model characteristic parameters of each motion-tracking target, matching the measured spatial-model and color-model characteristic parameter values against their predictions, establishing a matching matrix, and finally reducing the multi-moving-target tracking problem to the mathematical problem of solving for the best match of the matching matrix, the solution becomes feasible to implement as a computer program. Moreover, because the stored data features are simple and the amount of computation is small, real-time tracking efficiency can be further improved. This is an effective scheme for achieving accurate, effective and real-time tracking of multiple moving targets when the monitoring scene is fixed.
Brief description of the drawings
Fig. 1 is a flow chart of a specific embodiment of the method for multi-moving-object tracking based on a three-dimensional model provided by the present invention.
Fig. 2 is a flow chart of an embodiment of the three-dimensional-model state prediction in the method for multi-moving-object tracking based on a three-dimensional model provided by the present invention.
Fig. 3 is a flow chart of an embodiment of obtaining the motion-tracking result from the inter-frame matching degrees in the method for multi-moving-object tracking based on a three-dimensional model provided by the present invention.
Fig. 4 is a diagram of the functional modules of a specific embodiment of the device for multi-moving-object tracking based on a three-dimensional model provided by the present invention.
Detailed description of the embodiments
In order to make those skilled in the art better understand the technical solution of the present invention, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the present invention.
Referring to Fig. 1 to Fig. 3: Fig. 1 is a flow chart of an embodiment of the method for multi-moving-object tracking based on a three-dimensional model provided by the present invention, Fig. 2 is a flow chart of an embodiment of the three-dimensional-model state prediction in that method, and Fig. 3 is a flow chart of an embodiment of obtaining the motion-tracking result from the inter-frame matching degrees in that method.
The specific steps of the method for multi-moving-object tracking based on a three-dimensional model provided by the present invention comprise:
Step S100: Acquire the two-dimensional image and the depth image at the current time as the current frame image.
Step S200: Screen the connected target regions of the current frame image, and establish a corresponding motion-tracking target for each region; the tracking index of a target is denoted m, and m has the same meaning wherever it appears below.
Screening the connected target regions of the current frame image and establishing the corresponding motion-tracking targets is carried out as follows: targets are detected in the two-dimensional image with a background-subtraction method, the connected regions remaining after the background has been removed from the two-dimensional image being the target detection regions; the three-dimensional target models and the motion-tracking targets are then established from the depth image.
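As an illustration only, a minimal sketch of how this detection step could be implemented with OpenCV and NumPy is given below; the background-subtraction variant (MOG2), the morphological clean-up, the minimum-area threshold and all function names are assumptions rather than part of the patent.

```python
import cv2
import numpy as np

# Hypothetical sketch of step S200: detect connected foreground regions in the
# two-dimensional image with a conventional background-subtraction method and
# keep each region's depth pixels so that a 3-D target model can be built later.
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def detect_targets(frame_bgr, frame_depth, min_area=500):
    """Return one (mask, bbox, depth_patch) tuple per candidate tracking target."""
    fg_mask = bg_subtractor.apply(frame_bgr)                  # background subtraction
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN,       # remove speckle noise
                               np.ones((3, 3), np.uint8))
    num, labels, stats, _ = cv2.connectedComponentsWithStats(
        (fg_mask > 0).astype(np.uint8))
    targets = []
    for label in range(1, num):                               # label 0 is the background
        x, y, w, h, area = stats[label]
        if area < min_area:                                   # screen out small blobs
            continue
        mask = labels == label
        targets.append((mask, (x, y, w, h), np.where(mask, frame_depth, 0)))
    return targets
```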
Step S300: Establish the color model, shape model and spatial model of each motion-tracking target in the current frame, and record their characteristic parameter values.
Wherein:
The color model is the color histogram of the connected target region; its characteristic parameters are the HSV data of the color histogram.
The shape model is the bounding rectangle of the tracked moving-target region; its characteristic parameters are the length and width of the rectangle.
The spatial model is the three-dimensional model of the tracked moving-target region; its characteristic parameters are described by the three-dimensional coordinates of two feature points. Each three-dimensional coordinate consists of the two-dimensional coordinates (x, y) of the feature point and the depth value d of its depth image, so that a feature point is written (x, y, d). The two feature points are the centre points of the upper and lower halves obtained by dividing the corresponding target shape model into two parts.
In the notation above, the subscript t denotes the current time; the corresponding models are updated and replaced after the tracking result of each frame is obtained, and t keeps this meaning in the notation that follows. Accordingly, t-1 denotes the previous sampling instant of the current time, i.e. the instant corresponding to the previous frame, and t+1 denotes the next sampling instant of the current time, i.e. the instant corresponding to the next frame.
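A minimal sketch of step S300 under stated assumptions: the 72 HSV bins are taken here as an 8x3x3 quantisation of H, S and V (the patent does not specify the bin layout), and the two spatial feature points are read from the depth image at the centres of the upper and lower halves of the bounding rectangle; all names are illustrative.

```python
import cv2
import numpy as np

def build_models(frame_bgr, frame_depth, mask, bbox):
    """Sketch of step S300: color, shape and spatial models of one target."""
    x, y, w, h = bbox

    # Color model: one-dimensional HSV histogram of the target region
    # (72 bins, assumed here to be an 8x3x3 quantisation of H, S and V).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], mask.astype(np.uint8),
                        [8, 3, 3], [0, 180, 0, 256, 0, 256]).flatten()
    color_model = hist / (hist.sum() + 1e-12)           # normalise to sum to 1

    # Shape model: width and height of the bounding rectangle.
    shape_model = (w, h)

    # Spatial model: (x, y, d) coordinates of the centre points of the upper
    # and lower halves of the bounding rectangle, with d read from the depth image.
    def centre_point(y0, y1):
        cx, cy = x + w // 2, (y0 + y1) // 2
        return np.array([cx, cy, float(frame_depth[cy, cx])])

    spatial_model = (centre_point(y, y + h // 2), centre_point(y + h // 2, y + h))
    return color_model, shape_model, spatial_model
```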
Step S400: Predict the spatial-model characteristic parameter values and color-model characteristic parameter values of each motion-tracking target in the next frame.
Referring to Fig. 2, the spatial model of a motion-tracking target in the next frame, and its characteristic parameters, are obtained as follows:
S401: Predict the position: using a uniform-motion model, the predicted two-dimensional coordinates of each motion-tracking target's spatial-model feature points in the next frame are calculated from their two-dimensional coordinates in the current frame.
The uniform-motion model reflects the fact that, for a real-time video sequence, the interval between two frames is small and a target's motion between adjacent frames changes slowly, so the motion can be approximated as uniform. Using the dynamics formula shown in formula (1), the predicted two-dimensional coordinates (x_(t+1), y_(t+1)) of a feature point of the moving target in the next frame are calculated from its two-dimensional coordinates (x_t, y_t) in the current frame:
x_(t+1) = x_t + v_x * Δt,   y_(t+1) = y_t + v_y * Δt        (1)
where Δt is the interval between two frames and v_x, v_y are the horizontal and vertical velocities of the two-dimensional coordinates.
S402: Predict the image depth: combining the image depth value of each motion-tracking target's spatial-model feature points in the current frame, the predicted image depth of each spatial-model feature point in the next frame is obtained from formula (2):
d_(t+1) = d_t + v_d * Δt        (2)
where d_t and d_(t+1) are the image depth values corresponding to the two feature points of the spatial state in the current frame and in the next frame respectively, and v_d is the depth velocity of the feature point.
S403: Obtain the spatial-model characteristic parameter prediction: the predicted two-dimensional coordinates and predicted image depths of each motion-tracking target's spatial-model feature points are combined to obtain the spatial-model characteristic parameter prediction, i.e. the predicted three-dimensional coordinates.
The spatial-model characteristic parameter prediction for the next frame is thus represented by the predicted three-dimensional coordinate values of the two feature points.
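A sketch of the S401-S403 prediction; estimating the velocities by finite differences between the previous and current frames is an assumption, since the patent only states that a uniform-motion model is used.

```python
import numpy as np

def predict_feature_point(p_prev, p_curr, dt=1.0):
    """Constant-velocity prediction of one spatial-model feature point (x, y, d):
    the velocity is the finite difference between the previous and current frames."""
    p_prev, p_curr = np.asarray(p_prev, float), np.asarray(p_curr, float)
    v = (p_curr - p_prev) / dt                 # (v_x, v_y, v_d)
    return p_curr + v * dt                     # predicted (x, y, d) at t+1

def predict_spatial_model(model_prev, model_curr, dt=1.0):
    """Predict both feature points (upper and lower centre points) of a target."""
    return tuple(predict_feature_point(a, b, dt)
                 for a, b in zip(model_prev, model_curr))
```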
Further, the color-model state prediction for the next frame described in S400 is obtained from the spatial-model state prediction for the next frame combined with the shape-model parameters, specifically:
The color-model state value of a motion-tracking target in the next frame is obtained as follows.
First, the predicted shape-model characteristic parameter values for the next frame are obtained from the shape-model characteristic parameter values of each motion-tracking target in the current frame, using the prediction model shown in formula (3):
W_(t+1) = W_t + v_W * Δt,   H_(t+1) = H_t + v_H * Δt        (3)
where W and H are the width and height of the bounding rectangle and v_W, v_H are their rates of change.
Then, combining the predicted two-dimensional spatial coordinates of each motion-tracking target's feature points in the next frame with the predicted shape-model characteristic parameter values, the color model for the next frame is predicted.
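A sketch of this shape-model and color-model prediction under the same assumptions (finite-difference velocities, an 8x3x3 HSV quantisation for the 72 bins) and the reading that the predicted box is sampled in the frame being processed; the helper names are hypothetical.

```python
import cv2
import numpy as np

def predict_shape_model(shape_prev, shape_curr):
    """Constant-velocity (finite-difference) prediction of box width and height."""
    (w0, h0), (w1, h1) = shape_prev, shape_curr
    return 2 * w1 - w0, 2 * h1 - h0

def predict_color_model(frame_bgr, spatial_pred, shape_pred):
    """Sample a 72-bin HSV histogram inside the predicted box, whose centre is
    taken midway between the predicted upper and lower feature points."""
    (x1, y1, _), (x2, y2, _) = spatial_pred
    w, h = max(int(shape_pred[0]), 1), max(int(shape_pred[1]), 1)
    cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
    x0, y0 = max(cx - w // 2, 0), max(cy - h // 2, 0)
    roi = frame_bgr[y0:y0 + h, x0:x0 + w]
    if roi.size == 0:                           # predicted box fell outside the image
        return np.zeros(72)
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None,
                        [8, 3, 3], [0, 180, 0, 256, 0, 256]).flatten()
    return hist / (hist.sum() + 1e-12)
```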
Step S500: Calculate the matching degree M(i, j) between the spatial-model characteristic parameter prediction and color-model characteristic parameter prediction of motion-tracking target i carried from the previous frame into the current frame and the spatial-model and color-model characteristic parameter measurements of moving target j in the current frame.
The inter-frame matching degree M(i, j) of the motion-tracking target is obtained as a weighted average of the inter-frame spatial-model matching degrees M_up(i, j) and M_down(i, j) and the color-model similarity S(i, j), as shown in formula (4):
M(i, j) = w1 * M_up(i, j) + w2 * M_down(i, j) + w3 * S(i, j)        (4)
where w1, w2 and w3 are weighting coefficients;
M_up(i, j) denotes the motion-model matching degree from the centre point of the upper half of the shape model of motion-tracking target i to the centre point of the upper half of the shape model of motion-tracking target j;
M_down(i, j) denotes the motion-model matching degree from the centre point of the lower half of the shape model of motion-tracking target i to the centre point of the lower half of the shape model of motion-tracking target j;
S(i, j) denotes the similarity between the color model of previous-frame target i and the color model of current-frame target j.
The spatial matching degrees M_up(i, j) and M_down(i, j) are assessed by the Mahalanobis distance between the three-dimensional coordinate of the target's spatial-model prediction and the three-dimensional coordinate measured in the current frame, calculated with formulas (5) and (6) for the upper and lower feature points respectively:
d_up(i, j) = sqrt( (z_up,j - ẑ_up,i)^T (S_up,j + Ŝ_up,i)^(-1) (z_up,j - ẑ_up,i) )        (5)
d_down(i, j) = sqrt( (z_down,j - ẑ_down,i)^T (S_down,j + Ŝ_down,i)^(-1) (z_down,j - ẑ_down,i) )        (6)
where z_up,j and z_down,j are the measured values of tracked moving target j, S_up,j and S_down,j are the covariance matrices of those measurements, ẑ_up,i and ẑ_down,i are the predicted values of tracked moving target i, and Ŝ_up,i and Ŝ_down,i are the covariance matrices of those predictions. If the Mahalanobis distance is less than or equal to a predetermined threshold, the target prediction and the current measurement are considered to have no significant difference and to match each other; otherwise they differ significantly and do not match.
The predetermined threshold is selected by looking up the chi-square (χ²) distribution table with n degrees of freedom, where n is the dimension of the measurement vector (when its components are mutually independent) or the rank of its covariance matrix. With the acceptance probability set to 95%, looking up the chi-square distribution table gives a threshold of 3.841.
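A sketch of this Mahalanobis gating for one feature point; treating the gate covariance as the sum of the prediction and measurement covariances, and using one degree of freedom for the chi-square threshold (which gives 3.841 at 95%, as in the text), are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_match(z_meas, z_pred, cov_meas, cov_pred, accept_prob=0.95):
    """Gate a predicted feature point against a measured one (formulas (5)/(6))."""
    diff = np.asarray(z_meas, float) - np.asarray(z_pred, float)
    S = np.asarray(cov_meas, float) + np.asarray(cov_pred, float)  # combined covariance (assumed)
    d2 = float(diff @ np.linalg.inv(S) @ diff)     # squared Mahalanobis distance
    threshold = chi2.ppf(accept_prob, df=1)        # 3.841 for 95% and 1 degree of freedom
    return d2, d2 <= threshold                     # (distance, matched?)
```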
The color-model similarity S(i, j) is calculated from the one-dimensional HSV histograms of the two color models as a sum over the histogram bins, where:
h_i^(t-1)(k) denotes the value of bin k of the one-dimensional HSV histogram of the color model of previous-frame target i;
h_j^t(k) denotes the value of bin k of the one-dimensional HSV histogram of the color model of current-frame target j;
k is the summation index, ranging from 1 to 72.
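The exact similarity formula is reproduced only as an image in the published text; the Bhattacharyya coefficient over the 72 normalised HSV bins is assumed below purely for illustration.

```python
import numpy as np

def color_similarity(hist_prev, hist_curr):
    """Assumed Bhattacharyya coefficient between two normalised 72-bin HSV histograms;
    it equals 1.0 for identical histograms and approaches 0.0 for disjoint ones."""
    h1 = np.asarray(hist_prev, dtype=float)
    h2 = np.asarray(hist_curr, dtype=float)
    return float(np.sum(np.sqrt(h1 * h2)))
```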
Step S600: Calculate the tracking result from the inter-frame matching degrees.
Specifically, referring to Fig. 3, the implementation comprises the following steps:
S601: Establish the matching matrix: each row represents a current-frame target, each column represents a previous-frame target, and each matrix element is the matching degree between the current-frame target represented by its row and the previous-frame target represented by its column. Three cases are distinguished (assuming the previous frame contains the tracked targets and the current frame contains the detected targets to be matched):
1) If the numbers of rows and columns are equal, tracking is normal and no rows or columns need to be changed. Assuming the previous frame tracks 3 targets and the current frame detects 3 targets, the matching matrix takes the form shown in Table 1:
Table 1
2) If the number of rows is smaller than the number of columns, the previous-frame target corresponding to the surplus column has disappeared in the current frame, and rows of zeros are added so that the numbers of rows and columns are equal. Assuming the previous frame tracks 3 targets and the current frame detects only 2, a target has disappeared in the current frame; after padding with a row of zeros the matching is performed again, and the matching matrix takes the form shown in Table 2:
Table 2
3) If the number of columns is smaller than the number of rows, the current-frame target corresponding to the surplus row is a newly appearing target, and columns of zeros are added so that the numbers of rows and columns are equal. Assuming the previous frame tracks 2 targets and the current frame detects 3, a new target has appeared in the current frame; after padding with a column of zeros the matching is performed again, and the matching matrix takes the form shown in Table 3:
Table 3
S602: Find the best match of the matching matrix, i.e. the assignment that maximizes the total matching degree of the matrix;
S603: Solve with the Hungarian algorithm for the assignment problem to obtain the row-column assignment with the maximum matching degree; each assigned element then matches the current-frame target represented by its row with the previous-frame target represented by its column.
The optimal matching result of the target matching matrix is obtained with the Hungarian algorithm; taking the first matching case as an example, the Hungarian algorithm yields the assignment with the maximum total matching degree.
In the other two cases (a target disappears, or a new target appears), the best match can be obtained in the same way after the rows or columns have been padded; if a row or column in the matching result consists entirely of zeros, it corresponds to the target-disappearance or newly-appearing-target tracking situations listed above.
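A sketch of S601-S603 using SciPy's solver for the assignment problem (a Hungarian-algorithm variant); the zero padding and the maximize=True formulation follow the description above, while the return format is an illustrative choice.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_targets(match_degrees):
    """Rows = current-frame targets, columns = previous-frame targets.
    Pad the matching matrix to square with zeros and solve the
    maximum-weight assignment."""
    m = np.asarray(match_degrees, dtype=float)
    n = max(m.shape)
    padded = np.zeros((n, n))
    padded[:m.shape[0], :m.shape[1]] = m           # pad missing rows/columns with 0
    rows, cols = linear_sum_assignment(padded, maximize=True)
    results = []
    for r, c in zip(rows, cols):
        if r >= m.shape[0]:        # padded row: previous-frame target c disappeared
            results.append(("disappeared", None, int(c)))
        elif c >= m.shape[1]:      # padded column: current-frame target r is new
            results.append(("new", int(r), None))
        else:
            results.append(("matched", int(r), int(c)))
    return results
```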
In view of the technical problems that tracking schemes based directly on the three-dimensional spatial information provided by a three-dimensional vision system require a very complex scene-understanding mechanism and that such schemes are still immature, the technical solution of the present invention comprehensively uses the color model, the three-dimensional spatial model and the shape-model characteristic parameters of each motion-tracking target, matches the measured spatial-model and color-model characteristic parameter values against their predictions, establishes a matching matrix, and finally reduces the multi-target tracking problem to the mathematical problem of solving for the best match of the matching matrix. This makes the solution feasible to implement as a computer program; moreover, because the stored data features are simple and the amount of computation is small, real-time tracking efficiency can be further improved. The scheme is therefore an effective way of achieving accurate, effective and real-time tracking of multiple moving targets when the monitoring scene is fixed.
Fig. 4 shows a block diagram of a specific embodiment of the device for multi-moving-object tracking based on a three-dimensional model provided by the present invention. The device comprises:
Acquisition module 100: used to simultaneously acquire, with a Kinect camera, the two-dimensional image and the depth image at the current time as the current frame image;
Detection module 200: used to detect the foreground connected regions of the current frame image with a conventional background-subtraction method, and to establish a corresponding motion-tracking target for each region;
Obtaining module 300: used to obtain the color model, spatial model and shape model of each connected region in the current frame, together with the characteristic parameter values of each model;
Prediction module 400: used to predict the characteristic parameter values of each motion-tracking target's color model, spatial model and shape model in the next frame;
Matching module 500: used to calculate the matching degree between each motion-tracking target's color-model and spatial-model characteristic parameter predictions carried from the previous frame into the current frame and the measured characteristic parameter values of the current frame, to further establish the matching matrix, and to finally determine the tracked characteristic parameter values of each motion-tracking target's color model, spatial model and shape model by solving for the best match of the matching matrix;
Storage module 600: used to store the characteristic parameter values involved in the acquisition module 100, detection module 200, obtaining module 300, prediction module 400 and matching module 500, specifically the measured values, predicted values and tracked values, which further serve as the data basis for the operation of the obtaining module, prediction module and matching module.
For the device for multi-moving-object tracking based on a three-dimensional model provided by the present invention, the remaining arrangements correspond to the method described above and are not repeated here.
By comprehensively using the color model, the three-dimensional spatial model and the shape-model characteristic parameters of each motion-tracking target, matching the measured spatial-model and color-model characteristic parameter values against their predictions, establishing a matching matrix, and finally reducing the multi-moving-target tracking problem to the mathematical problem of solving for the best match of the matching matrix, the solution becomes feasible to implement as a computer program. Moreover, because the stored data features are simple and the amount of computation is small, real-time tracking efficiency can be further improved. This is an effective scheme for achieving accurate, effective and real-time tracking of multiple moving targets when the monitoring scene is fixed.
Many details are set forth in the above description to facilitate a thorough understanding of the present invention; however, the present invention can also be implemented in ways other than those described here, and the above description therefore cannot be interpreted as limiting the scope of the invention.
In short, although the present invention lists the above preferred embodiments, it should be noted that those skilled in the art can make various changes and modifications; unless such changes and modifications depart from the scope of the present invention, they should all be included within the scope of the present invention.

Claims (9)

  1. A method for multi-moving-object tracking based on a three-dimensional model, characterized in that it comprises:
    Step S100: acquiring the two-dimensional image and the depth image at the current time as the current frame image;
    Step S200: screening the connected target regions of the current frame image, and establishing a corresponding motion-tracking target for each region;
    Step S300: establishing the color model, shape model and spatial model of each motion-tracking target in the current frame, and recording their characteristic parameter values;
    Step S400: predicting the spatial-model characteristic parameter values and color-model characteristic parameter values of each motion-tracking target in the next frame;
    Step S500: calculating the matching degree between each motion-tracking target's spatial-model and color-model characteristic parameter predictions carried from the previous frame into the current frame and the measured characteristic parameter values of the current frame;
    Step S600: calculating the tracking result from the inter-frame matching degrees;
    wherein:
    the color model is the color histogram of the tracked moving-target region, and its characteristic parameters are the HSV data of the color histogram;
    the shape model is the bounding rectangle of the tracked moving-target region, and its characteristic parameters are the length and width of that rectangle;
    the spatial model is the three-dimensional model of the tracked moving-target region, and its characteristic parameters are described by the three-dimensional coordinates of two feature points, each three-dimensional coordinate consisting of the two-dimensional coordinates of the feature point and the corresponding depth value of the depth image, the two feature points being the centre points of the upper and lower halves obtained by dividing the target's shape model into two parts.
  2. The method for multi-moving-object tracking based on a three-dimensional model as claimed in claim 1, characterized in that the spatial-model characteristic parameter values of the motion-tracking target in the next frame are obtained through the following steps:
    S401: predicting the position: using a uniform-motion model, the predicted two-dimensional coordinates of each motion-tracking target's spatial-model feature points in the next frame are calculated from their two-dimensional coordinates in the current frame;
    S402: predicting the image depth: using the uniform-motion model and the image depth values of each motion-tracking target's spatial-model feature points in the current frame, their predicted image depths in the next frame are obtained;
    S403: obtaining the spatial-model characteristic parameter prediction: the predicted two-dimensional coordinates and predicted image depths of each motion-tracking target's spatial-model feature points are combined to obtain the spatial-model characteristic parameter prediction.
  3. The method for multi-moving-object tracking based on a three-dimensional model as claimed in claim 1 or 2, characterized in that the color-model characteristic parameter prediction for the next frame is the HSV data of the color histogram established from the motion-tracking target's spatial-model characteristic parameter prediction and shape-model characteristic parameter prediction for the next frame.
  4. The method for multi-moving-object tracking based on a three-dimensional model as claimed in claim 3, characterized in that the target's shape-model characteristic parameter prediction for the next frame is calculated from the motion-tracking target's shape-model characteristic parameter values in the current frame using the uniform-motion model.
  5. The method for multi-moving-object tracking based on a three-dimensional model as claimed in claim 4, characterized in that the matching degree between the spatial-model and color-model characteristic parameter predictions of a motion-tracking target tracked in the previous frame and the measured characteristic parameter values of the current frame is obtained as a weighted average of the inter-frame motion-model matching degrees and the color-model similarity, i.e.:
    M(i, j) = w1 * M_up(i, j) + w2 * M_down(i, j) + w3 * S(i, j)
    wherein: M(i, j) denotes the matching degree between previous-frame target i and current-frame target j, and w1, w2 and w3 are weighting coefficients;
    M_up(i, j) denotes the motion-model matching degree from the centre point of the upper half of the shape model of motion-tracking target i to the centre point of the upper half of the shape model of motion-tracking target j;
    M_down(i, j) denotes the motion-model matching degree from the centre point of the lower half of the shape model of motion-tracking target i to the centre point of the lower half of the shape model of motion-tracking target j;
    S(i, j) denotes the similarity between the color model of previous-frame target i and the color model of current-frame target j.
  6. The method for multi-moving-object tracking based on a three-dimensional model as claimed in claim 5, characterized in that the motion-model matching degrees M_up(i, j) and M_down(i, j) are calculated from the inter-frame Mahalanobis distance of the motion-tracking target's spatial-model characteristic parameters, i.e. the three-dimensional coordinates.
  7. The method for multi-moving-object tracking based on a three-dimensional model as claimed in claim 6, characterized in that the color-model similarity is calculated from the one-dimensional HSV histograms of the two color models as a sum over the histogram bins, wherein:
    h_i^(t-1)(k) denotes the value of bin k of the one-dimensional HSV histogram of the color model of previous-frame target i;
    h_j^t(k) denotes the value of bin k of the one-dimensional HSV histogram of the color model of current-frame target j;
    t denotes the current time;
    k is the summation index, ranging from 1 to 72.
  8. The method for multi-moving-object tracking based on a three-dimensional model as claimed in claim 7, characterized in that the tracking result is obtained according to the following steps:
    S601: establishing the matching matrix, in which each row represents a current-frame target, each column represents a previous-frame target, and each matrix element is the matching degree between the current-frame target represented by its row and the previous-frame target represented by its column, three cases being distinguished:
    1) if the numbers of rows and columns are equal, tracking is normal and no rows or columns need to be changed;
    2) if the number of rows is smaller than the number of columns, the previous-frame target corresponding to the surplus column has disappeared in the current frame, and rows of zeros are added so that the numbers of rows and columns are equal;
    3) if the number of columns is smaller than the number of rows, the current-frame target corresponding to the surplus row is a newly appearing target, and columns of zeros are added so that the numbers of rows and columns are equal;
    S602: finding the best match of the matching matrix, i.e. the assignment that maximizes the total matching degree of the matrix;
    S603: solving with the Hungarian algorithm for the assignment problem to obtain the row-column assignment with the maximum matching degree, each assigned element matching the current-frame target represented by its row with the previous-frame target represented by its column; if a row or column in the matching result consists entirely of zeros, it corresponds to the target-disappearance or newly-appearing-target tracking situations listed above.
  9. A device for multi-moving-object tracking based on a three-dimensional model, comprising:
    an acquisition module: used to simultaneously acquire, with a Kinect camera, the two-dimensional image and the depth image at the current time as the current frame image;
    a detection module: used to detect the foreground connected regions of the current frame image with a conventional background-subtraction method, and to establish a corresponding motion-tracking target for each region;
    an obtaining module: used to obtain the color model, spatial model and shape model of each connected region in the current frame, together with the characteristic parameter values of each model;
    a prediction module: used to predict the characteristic parameter values of each motion-tracking target's color model, spatial model and shape model in the next frame;
    a matching module: used to calculate the matching degree between each motion-tracking target's color-model and spatial-model characteristic parameter predictions carried from the previous frame into the current frame and the measured characteristic parameter values of the current frame, to further establish the matching matrix, and to finally determine the tracked characteristic parameter values of each motion-tracking target's color model, spatial model and shape model by solving for the best match of the matching matrix;
    a storage module: used to store the characteristic parameter values involved in the acquisition module, detection module, obtaining module, prediction module and matching module, specifically the measured values, predicted values and tracked values, which further serve as the data basis for the operation of the obtaining module, prediction module and matching module.
CN201711255705.8A 2017-12-03 2017-12-03 Method and device for multi-moving-object tracking based on a three-dimensional model Pending CN107992827A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711255705.8A CN107992827A (en) 2017-12-03 2017-12-03 A kind of method and device of the multiple mobile object tracking based on threedimensional model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711255705.8A CN107992827A (en) 2017-12-03 2017-12-03 A kind of method and device of the multiple mobile object tracking based on threedimensional model

Publications (1)

Publication Number Publication Date
CN107992827A true CN107992827A (en) 2018-05-04

Family

ID=62035266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711255705.8A Pending CN107992827A (en) 2017-12-03 2017-12-03 A kind of method and device of the multiple mobile object tracking based on threedimensional model

Country Status (1)

Country Link
CN (1) CN107992827A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109961461A (en) * 2019-03-18 2019-07-02 湖南工程学院 A kind of multiple mobile object tracking based on three-dimensional layered graph model
CN110047090A (en) * 2019-03-28 2019-07-23 淮阴工学院 RGB-D method for tracking target based on evolution Feature study
CN110288051A (en) * 2019-07-03 2019-09-27 电子科技大学 A kind of polyphaser multiple target matching process based on distance
CN110517293A (en) * 2019-08-29 2019-11-29 京东方科技集团股份有限公司 Method for tracking target, device, system and computer readable storage medium
CN110544273A (en) * 2018-05-29 2019-12-06 杭州海康机器人技术有限公司 motion capture method, device and system
CN110580707A (en) * 2018-06-08 2019-12-17 杭州海康威视数字技术股份有限公司 object tracking method and system
CN111127521A (en) * 2019-10-25 2020-05-08 上海联影智能医疗科技有限公司 System and method for generating and tracking the shape of an object
CN111274943A (en) * 2020-01-19 2020-06-12 深圳市商汤科技有限公司 Detection method, detection device, electronic equipment and storage medium
CN111932588A (en) * 2020-08-07 2020-11-13 浙江大学 Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning
CN113689365A (en) * 2021-08-23 2021-11-23 南通大学 Target tracking and positioning method based on Azure Kinect

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101141633A (en) * 2007-08-28 2008-03-12 湖南大学 Moving object detecting and tracing method in complex scene
CN101561928A (en) * 2009-05-27 2009-10-21 湖南大学 Multi-human body tracking method based on attribute relational graph appearance model
US20100021009A1 (en) * 2007-01-25 2010-01-28 Wei Yao Method for moving targets tracking and number counting
CN102521840A (en) * 2011-11-18 2012-06-27 深圳市宝捷信科技有限公司 Moving target tracking method, system and terminal

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100021009A1 (en) * 2007-01-25 2010-01-28 Wei Yao Method for moving targets tracking and number counting
CN101141633A (en) * 2007-08-28 2008-03-12 湖南大学 Moving object detecting and tracing method in complex scene
CN101561928A (en) * 2009-05-27 2009-10-21 湖南大学 Multi-human body tracking method based on attribute relational graph appearance model
CN102521840A (en) * 2011-11-18 2012-06-27 深圳市宝捷信科技有限公司 Moving target tracking method, system and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
万琴 (Wan Qin) et al.: "基于三维视觉系统的多运动目标跟踪方法综述" [A survey of multi-moving-target tracking methods based on three-dimensional vision systems], 《计算机工程与应用》 [Computer Engineering and Applications] *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110544273A (en) * 2018-05-29 2019-12-06 杭州海康机器人技术有限公司 motion capture method, device and system
CN110580707A (en) * 2018-06-08 2019-12-17 杭州海康威视数字技术股份有限公司 object tracking method and system
CN109961461B (en) * 2019-03-18 2021-04-23 湖南工程学院 Multi-moving-object tracking method based on three-dimensional layered graph model
CN109961461A (en) * 2019-03-18 2019-07-02 湖南工程学院 A kind of multiple mobile object tracking based on three-dimensional layered graph model
CN110047090A (en) * 2019-03-28 2019-07-23 淮阴工学院 RGB-D method for tracking target based on evolution Feature study
CN110047090B (en) * 2019-03-28 2022-10-14 淮阴工学院 RGB-D target tracking method based on evolution feature learning
CN110288051B (en) * 2019-07-03 2022-04-22 电子科技大学 Multi-camera multi-target matching method based on distance
CN110288051A (en) * 2019-07-03 2019-09-27 电子科技大学 A kind of polyphaser multiple target matching process based on distance
CN110517293A (en) * 2019-08-29 2019-11-29 京东方科技集团股份有限公司 Method for tracking target, device, system and computer readable storage medium
US11393103B2 (en) 2019-08-29 2022-07-19 Boe Technology Group Co., Ltd. Target tracking method, device, system and non-transitory computer readable medium
CN111127521A (en) * 2019-10-25 2020-05-08 上海联影智能医疗科技有限公司 System and method for generating and tracking the shape of an object
CN111127521B (en) * 2019-10-25 2024-03-01 上海联影智能医疗科技有限公司 System and method for generating and tracking shape of target
CN111274943A (en) * 2020-01-19 2020-06-12 深圳市商汤科技有限公司 Detection method, detection device, electronic equipment and storage medium
WO2021143935A1 (en) * 2020-01-19 2021-07-22 深圳市商汤科技有限公司 Detection method, device, electronic apparatus, and storage medium
CN111274943B (en) * 2020-01-19 2023-06-23 深圳市商汤科技有限公司 Detection method, detection device, electronic equipment and storage medium
CN111932588A (en) * 2020-08-07 2020-11-13 浙江大学 Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning
CN111932588B (en) * 2020-08-07 2024-01-30 浙江大学 Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning
CN113689365A (en) * 2021-08-23 2021-11-23 南通大学 Target tracking and positioning method based on Azure Kinect

Similar Documents

Publication Publication Date Title
CN107992827A (en) Method and device for multi-moving-object tracking based on a three-dimensional model
CN108241849B (en) Human body interaction action recognition method based on video
CN109190508B (en) Multi-camera data fusion method based on space coordinate system
CN104282020B (en) A kind of vehicle speed detection method based on target trajectory
CN105678288B (en) Method for tracking target and device
CN101833791B (en) Scene modeling method under single camera and system
US8437549B2 (en) Image processing method and image processing apparatus
US8289392B2 (en) Automatic multiscale image acquisition from a steerable camera
CN102646275B (en) The method of virtual three-dimensional superposition is realized by tracking and location algorithm
US20150178571A1 (en) Methods, devices and systems for detecting objects in a video
CN102243765A (en) Multi-camera-based multi-objective positioning tracking method and system
CN111127559B (en) Calibration rod detection method, device, equipment and storage medium in optical dynamic capture system
CN110006444B (en) Anti-interference visual odometer construction method based on optimized Gaussian mixture model
Krinidis et al. A robust and real-time multi-space occupancy extraction system exploiting privacy-preserving sensors
CN109636828A (en) Object tracking methods and device based on video image
CN110555377A (en) pedestrian detection and tracking method based on fisheye camera overlook shooting
CN110909625A (en) Computer vision basic network training, identifying and constructing method and device
CN100496122C (en) Method for tracking principal and subordinate videos by using single video camera
Xue et al. A fast visual map building method using video stream for visual-based indoor localization
CN107066975A (en) Video identification and tracking system and its method based on depth transducer
WO2020015501A1 (en) Map construction method, apparatus, storage medium and electronic device
CN105631900B (en) A kind of wireless vehicle tracking and device
CN114155278A (en) Target tracking and related model training method, related device, equipment and medium
CN116778094B (en) Building deformation monitoring method and device based on optimal viewing angle shooting
CN102005052A (en) Occluded human body tracking method based on kernel density estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination