CN106384359A - Moving target tracking method and television set - Google Patents

Info

Publication number
CN106384359A
CN106384359A (application CN201610848424.2A; granted as CN106384359B)
Authority
CN
China
Prior art keywords
moving target
video image
target
moving
frame video
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN201610848424.2A
Other languages
Chinese (zh)
Other versions
CN106384359B (en)
Inventor
薛婷婷
王继东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Qingdao Hisense Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Electronics Co Ltd filed Critical Qingdao Hisense Electronics Co Ltd
Priority to CN201610848424.2A priority Critical patent/CN106384359B/en
Publication of CN106384359A publication Critical patent/CN106384359A/en
Application granted granted Critical
Publication of CN106384359B publication Critical patent/CN106384359B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10024 Color image
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06T 2207/30232 Surveillance

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides a moving target tracking method and a television set. The method includes: removing the shadows of the moving targets in video images with a shadow removal method based on improved Gaussian mixture background modeling; establishing model information for each moving target; adaptively updating the tracking window of each moving target according to its model information; determining the weights between the moving targets from the model information; determining the association probabilities between the moving targets from those weights; and maintaining tracking of the moving targets within the tracking windows according to the association probabilities. Because association probabilities are established between the moving targets, occluded or merged moving targets can be separated from one another, tracking of the moving targets in the video images can be maintained, and moving target tracking is achieved reliably.

Description

Moving target tracking method and television set
Technical field
The present invention relates to the technical field of image processing, and in particular to a moving target tracking method and a television set.
Background technology
With the development of image technology, intelligent monitoring systems have become widely used in daily life. An intelligent monitoring system applies image processing and pattern recognition techniques: useless information in the scene is filtered out by data processing, and moving or stationary targets of interest are then rapidly examined and analyzed so that they can be detected, described, identified and their behavior understood. This enables accurate, real-time intelligent monitoring of targets in the surveillance scene.
At present, moving targets are commonly detected with an intelligent monitoring system by analyzing the images of the moving targets against a background image.
In the prior art, however, when moving targets in a video image occlude or merge with one another, the multiple moving targets cannot be separated, and the goal of moving target tracking therefore cannot be achieved well.
Content of the invention
The present invention provides a moving target tracking method and a television set, in order to solve the prior-art problem that multiple moving targets in a video image cannot be separated when they occlude or merge with one another, so that moving target tracking cannot be achieved well.
One aspect of the present invention provides a moving target tracking method, including:
removing the shadows of the moving targets in a video image using a shadow removal method based on improved Gaussian mixture background modeling;
establishing model information for each moving target;
adaptively updating the tracking window of each moving target according to its model information;
determining the weights between the moving targets according to the model information of each moving target;
determining the association probabilities between the moving targets according to those weights; and
maintaining tracking of the moving targets within the tracking windows according to the association probabilities.
Another aspect of the present invention provides a television set, characterized by including:
a display board, a field-programmable gate array (Field-Programmable Gate Array, FPGA) chip and a liquid crystal screen, the FPGA chip being connected to the display board and to the liquid crystal screen respectively;
wherein the FPGA chip is used to implement the moving target tracking method described in any of the above.
The invention has the following beneficial effects. The shadows of the moving targets in the video image are removed using a shadow removal method based on improved Gaussian mixture background modeling; model information is established for each moving target; the tracking window of each moving target is adaptively updated according to its model information; the weights between the moving targets are determined from the model information; the association probabilities between the moving targets are determined from those weights; and tracking of the moving targets within the tracking windows is maintained according to the association probabilities. Because association probabilities are established between the moving targets, occluded or merged moving targets can be separated from one another, tracking of the moving targets in the video image can be maintained, and the purpose of moving target tracking is well achieved.
Brief description
Fig. 1 is a flowchart of the moving target tracking method provided by Embodiment 1 of the present invention;
Fig. 2 is a flowchart of the moving target tracking method provided by Embodiment 2 of the present invention;
Fig. 3 is a schematic structural diagram of the moving target tracking device provided by Embodiment 3 of the present invention;
Fig. 4 is a schematic structural diagram of the moving target tracking device provided by Embodiment 4 of the present invention;
Fig. 5 is a schematic structural diagram of the television set provided by Embodiment 5 of the present invention.
Specific embodiment
To make the purpose, technical solution and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present invention without creative work fall within the scope of protection of the present invention.
Fig. 1 is a flowchart of the moving target tracking method provided by Embodiment 1 of the present invention. As shown in Fig. 1, the method provided by this embodiment includes:
Step 101: using a shadow removal method based on improved Gaussian mixture background modeling, remove the shadows of the moving targets in the video image.
A specific implementation of step 101 is as follows. For each moving target in the video image, determine the color angle between the foreground image and the background region of the target, α = arccos( ⟨X_d, X_b⟩ / (‖X_d‖ ‖X_b‖) ), where X_d is the color vector of the j-th pixel of the foreground image of the moving target, X_b is the color vector of the j-th pixel of the background region of the moving target, and j is a positive integer.
For each moving target in the video image, if α < τ, the j-th pixel of the moving target is determined to be a suspected shadow, where τ is a constant.
For each moving target in the video image, if the j-th pixel of the moving target further satisfies a component-wise HSV condition against the background (the value ratio I_V(x,y)/B_V(x,y) lies within a fixed band while the saturation and hue differences |I_S(x,y) − B_S(x,y)| and |I_H(x,y) − B_H(x,y)| stay below fixed thresholds), the j-th pixel of the moving target is determined to be a shadow. Here (x, y) are the coordinates of the j-th pixel, I_H(x,y), I_S(x,y), I_V(x,y) are respectively the H, S, V components of the foreground image of the moving target at the j-th pixel, and B_H(x,y), B_S(x,y), B_V(x,y) are respectively the H, S, V components of the background region of the moving target at the j-th pixel.
In this embodiment, specifically, a background region can first be established for each frame of the video image; the background region may be fixed or set dynamically. By comparing each frame of the video image with the background region, each moving target in the video image is determined.
First, the shadow of each moving target in the video image must be removed; specifically, a shadow removal method based on improved Gaussian mixture background modeling can be used.
In a video image, the chromaticity change caused by the shadow of a moving target is very small, so the color angle of a shadow pixel is also very small; whether each pixel is a suspected shadow can therefore be judged by the size of its color angle. Specifically, let the color vector of the j-th pixel of the foreground image of the moving target be X_d = [H_j, S_j, V_j] and the color vector of the j-th pixel of the background region be X_b = [H'_j, S'_j, V'_j]; for each moving target in the video image, the color angle α between the two vectors is then computed. A constant τ, a small value chosen in advance, is set; if α < τ, the j-th pixel can first be determined to be a possible shadow.
Afterwards, to judge whether a suspected-shadow pixel really is a shadow, a second decision method is introduced: for each moving target in the video image, if the j-th pixel of the moving target satisfies the component-wise HSV condition against the background (value ratio within a band, hue and saturation differences below thresholds), the j-th pixel of the moving target is determined to be a shadow. Here (x, y) are the coordinates of the j-th pixel, and I_H(x,y), I_S(x,y), I_V(x,y) and B_H(x,y), B_S(x,y), B_V(x,y) denote the H, S, V components of the input pixel value I(x,y) at (x, y) and of the background pixel value, respectively.
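The two-stage shadow test described above can be sketched as follows. This is a hypothetical illustration, not the patent's exact implementation: the original formulas are not fully legible, so the standard angle-between-vectors test and a common HSV component check are used, and all threshold values (`TAU_ANGLE`, `V_LO`, `V_HI`, `TAU_S`, `TAU_H`) are illustrative assumptions.

```python
import math

TAU_ANGLE = 0.4          # color-angle threshold tau in radians (assumed value)
V_LO, V_HI = 0.4, 0.95   # allowed band for the V ratio (assumed values)
TAU_S, TAU_H = 0.1, 0.2  # S and H difference thresholds (assumed values)

def color_angle(xd, xb):
    """Angle between the foreground HSV vector xd and background HSV vector xb."""
    dot = sum(a * b for a, b in zip(xd, xb))
    norm = math.sqrt(sum(a * a for a in xd)) * math.sqrt(sum(b * b for b in xb))
    return math.acos(max(-1.0, min(1.0, dot / (norm + 1e-12))))

def is_shadow(fg_hsv, bg_hsv):
    """Stage 1: a small color angle marks a suspected shadow (alpha < tau);
    stage 2: a component-wise HSV check against the background confirms it."""
    if color_angle(fg_hsv, bg_hsv) >= TAU_ANGLE:
        return False                      # not even a suspected shadow
    ih, isat, iv = fg_hsv
    bh, bs, bv = bg_hsv
    return (V_LO <= iv / (bv + 1e-12) <= V_HI
            and abs(isat - bs) <= TAU_S
            and abs(ih - bh) <= TAU_H)
```

A darkened pixel with unchanged hue and saturation passes both stages, while a pixel whose color differs strongly from the background fails the angle test immediately.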
Step 102: establish model information for each moving target.
In this embodiment, specifically, a piece of model information needs to be established for each moving target; this model information is used to build the tracking window of the moving target.
Step 103: adaptively update the tracking window of each moving target according to its model information.
In this embodiment, specifically, according to the model information of each moving target, the size of the tracking window is automatically adjusted to the size of the moving target, thereby adaptively updating the tracking window of the moving target.
Step 104: determine the weights between the moving targets according to the model information of each moving target.
In this embodiment, specifically, the weights between the moving targets are computed from the model information established for each moving target.
Step 105: determine the association probabilities between the moving targets according to the weights between them.
In this embodiment, specifically, the association probabilities between the moving targets are determined from the weights between them; these association probabilities comprise both the association probability between different moving targets and the association probability of the same moving target across different frames.
Step 106: maintain tracking of the moving targets within the tracking windows according to the association probabilities between them.
In this embodiment, specifically, after the association probabilities between the moving targets are computed, the state prediction value of each moving target at the current moment can be calculated, and tracking of the moving targets within the tracking windows is then maintained according to these state prediction values.
In this embodiment, the shadows of the moving targets in the video image are removed using a shadow removal method based on improved Gaussian mixture background modeling; model information is established for each moving target; the tracking window of each moving target is adaptively updated according to its model information; the weights between the moving targets are determined from the model information; the association probabilities between the moving targets are determined from those weights; and tracking of the moving targets within the tracking windows is maintained according to the association probabilities. Because association probabilities are established between the moving targets, occluded or merged moving targets can be separated from one another, tracking of the moving targets in the video image can be maintained, and the purpose of moving target tracking is well achieved.
Fig. 2 is a flowchart of the moving target tracking method provided by Embodiment 2 of the present invention, built on the basis of Embodiment 1. As shown in Fig. 2, in the method provided by this embodiment, step 102 specifically includes:
for each moving target, determining the observed-target probability distribution function of the moving target in the i-th frame of video image, an exponential kernel of the distance between the observation and the target mean, where β is a control parameter, X_{m,i} is the observation of the m-th moving target in the i-th frame of video image, and μ_{m,i} is the mean vector of the m-th target in the i-th frame of video image;
for each moving target, determining the J background probability functions of the moving target in the i-th frame of video image, where the j-th background probability function of the m-th moving target is a Gaussian kernel in which s is the dimension of the state space, Y_i is the pixel value of the pixel in the i-th frame of video image, and μ̂_{j,i} is the mean vector of the j-th Gaussian model in the i-th frame of video image; i, j, m, J are positive integers;
for each moving target, determining the similarity function R_m(i) of the moving target from the observed-target probability distribution function and the J background probability functions of the moving target in the i-th frame of video image, where δ is a constant;
for each moving target, determining the information weight λ_{m,i} of the moving target in the i-th frame of video image from its similarity function, where N is the total number of moving targets; and
for each moving target, determining the model information of the moving target from its observed-target probability distribution function in the i-th frame of video image.
In this embodiment, specifically, for each moving target it is assumed that the observed-target probability distribution function of the m-th target in the i-th frame is an exponential kernel of the distance between the observation X_{m,i} and the mean vector μ_{m,i}, where β is a control parameter, typically β = 20.
Meanwhile, a background model is established by Gaussian mixture background modeling; then, for each moving target, the J background probability functions of the moving target in the i-th frame of video image are determined. The j-th background probability function of the m-th moving target is a Gaussian kernel in which s is the dimension of the state space, Y_i is the pixel value of the pixel in the i-th frame of video image, and μ̂_{j,i} is the mean vector of the j-th Gaussian model in the i-th frame of video image; i, j, m, J are positive integers.
Because modeling statistics based on weights alone can only express the state features of the moving-target foreground and the background, they cannot depict the discrimination between the two. To address this, the present invention proposes a feature-similarity-function method: the similarity function maps the statistical information of a moving target into a positive-valued region, while the information of the background is mapped into a negative-valued region. For each moving target, the similarity function R_m(i) of the moving target is determined from the observed-target probability distribution function and the J background probability functions of the moving target in the i-th frame of video image; δ is a constant regarded as a very small value, usually δ = 0.0001. The similarity function reflects the similarity of the state characteristics of the foreground image of the moving target and the background region.
Then, the similarity-function values in the positive region, i.e. those with R_m(i) > 0, are substituted into the normalization formula, yielding the information weight λ_{m,i} of the m-th target in the i-th frame of image, where N is the total number of moving targets.
Afterwards, for each moving target, the model information of the moving target is determined from its observed-target probability distribution function in the i-th frame of video image.
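The probability, similarity and information-weight computations can be sketched as follows. This is a minimal illustration under stated assumptions: the patent's exact kernel forms are not legible, so an exponential distance kernel for the target probability and a log-ratio for the similarity function are assumed; only the normalization of positive similarities into weights follows directly from the description.

```python
import math

BETA = 20.0    # control parameter beta (the description suggests beta = 20)
DELTA = 1e-4   # small constant delta from the similarity function

def target_probability(x, mu, beta=BETA):
    """Assumed exponential-kernel form of the observed-target probability:
    close to 1 when the observation x is near the target mean mu."""
    return math.exp(-beta * sum((a - b) ** 2 for a, b in zip(x, mu)))

def similarity(p_target, p_background, delta=DELTA):
    """Log-ratio similarity R: positive where the target model dominates
    the background model, negative where the background dominates."""
    return math.log((p_target + delta) / (p_background + delta))

def information_weights(r_values):
    """Keep only the positive similarities R_m(i) > 0 and normalize them
    into the information weights lambda_{m,i} over the N targets."""
    r = [max(v, 0.0) for v in r_values]
    total = sum(r)
    n = len(r)
    return [v / total for v in r] if total > 0 else [1.0 / n] * n
```

A target whose similarity is negative (background-dominated) contributes a zero weight, so the weights always sum to one over the targets that remain.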
Step 103 specifically includes:
Step 1031: for each moving target, determine the initial tracking window of the moving target.
A specific implementation of step 1031 is: if the moving target grows, the length and width of the tracking window are multiplied by 1 + ζ, where ζ is a constant; if the moving target shrinks, the length and width of the tracking window are multiplied by 1 − ζ.
In this embodiment, specifically, the initial tracking window of each moving target must first be determined, i.e. the initial model information G_1, G_2, G_3, G_4, G_5 corresponding to the first five tracking windows of the moving target.
Suppose the model information of the i-th frame of video image is G_1. If the moving target grows, the length and width of the tracking window are multiplied by 1 + ζ; if the moving target shrinks, they are multiplied by 1 − ζ, so that the model information of the moving target becomes G_2, where ζ is a constant, 0 ≤ ζ ≤ 0.6. Continuing by analogy over the subsequent frames of the video image yields the model information G_3, G_4, G_5. Since each piece of model information corresponds to a tracking window, the initial tracking window of the moving target can thereby be determined.
Step 1032: for each moving target, determine the size-change ratio q of the tracking window from the initial tracking window of the moving target, where G_1, G_2, G_3, G_4, G_5 are the model information corresponding to the first five initial tracking windows of the moving target.
In this embodiment, specifically, for each moving target, the size-change ratio q of the tracking window is determined from the initial tracking window of the moving target.
The role of the size-change ratio q of the tracking window is to reduce the influence of the background on the target foreground: the larger the difference between q and 1, the more complex the background region; when the target scale grows, q ≥ 1, and when the scale shrinks, q ≤ 1.
Step 1033: for each moving target, according to the size-change ratio of the tracking window and the information weight of the moving target in the i-th frame of video image, determine the length H_{i+1} = λ_{m,i} H_i (1 + q) and the width W_{i+1} = λ_{m,i} W_i (1 + q) of the tracking window of the m-th target in the i-th frame of video image, where λ_{m,i} is the information weight of moving target m in the i-th frame of video image.
In this embodiment, specifically, for each moving target, according to the size-change ratio of the tracking window and the information weight of the moving target in the i-th frame of video image, the length of the tracking window of the m-th target in the i-th frame of video image is determined as H_{i+1} = λ_{m,i} H_i (1 + q) and its width as W_{i+1} = λ_{m,i} W_i (1 + q), where λ_{m,i} is the information weight of moving target m in the i-th frame of video image.
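The window-update steps above can be sketched as follows. The update equations H_{i+1} = λ·H_i·(1+q) and W_{i+1} = λ·W_i·(1+q) are taken directly from the text; the formula for q itself is not legible in the source, so `window_size_ratio` uses a simple newest-to-oldest ratio of the model values G_1..G_5 purely as an illustrative stand-in.

```python
def rescale_initial_window(h, w, grew, zeta=0.1):
    """Initial window (step 1031): multiply length and width by 1+zeta when
    the target grows, by 1-zeta when it shrinks (0 <= zeta <= 0.6)."""
    f = (1 + zeta) if grew else (1 - zeta)
    return h * f, w * f

def window_size_ratio(g):
    """Size-change ratio q from the five initial model values G1..G5
    (step 1032). The exact formula is illegible in the source; the ratio of
    the newest to the oldest value is an assumed stand-in with the stated
    property q >= 1 when the scale grows and q <= 1 when it shrinks."""
    return g[-1] / g[0]

def update_window(h_i, w_i, lam, q):
    """Step 1033: H_{i+1} = lam*H_i*(1+q), W_{i+1} = lam*W_i*(1+q)."""
    return lam * h_i * (1 + q), lam * w_i * (1 + q)
```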
Step 104 specifically includes:
for each moving target, determining the observed-target model state vector X = [x, y, g, l] of the moving target, where x, y, g, l are respectively the length and width of the tracking window, the color histogram, and the model information of the moving target;
for each moving target, determining the association probability function between two adjacent frames of the moving target from its observed-target model state vector;
determining the first association probability function between the m-th moving target and the n-th moving target with respect to size and color histogram in the i-th frame of video image, and determining the second association probability function between the m-th moving target and the n-th moving target with respect to model information and LBP texture value, where i is the i-th frame of video image and i, m, n are positive integers;
determining the Bayes (Bhattacharyya) coefficient between the m-th moving target and the n-th moving target from the first association probability function and the second association probability function; and
determining the weight ω_i of the i-th frame of video image from the Bhattacharyya coefficient, where σ is the variance of the Gaussian function.
In this embodiment, specifically, the models of the multiple moving targets are established first. To increase the stability and accuracy of target tracking, for each moving target the observed-target model state vector X = [x, y, g, l] is determined, where x, y, g, l denote the length and width of the tracking window, the color histogram, and the model information of the moving target.
Then, for each moving target, the association probability function between two adjacent frames of the moving target is determined.
Next, the first association probability function between the m-th moving target and the n-th moving target with respect to size and color histogram in the i-th frame of video image is set, and the second association probability function between the m-th moving target and the n-th moving target with respect to model information and LBP texture value is set, where i is the i-th frame of video image and i, m, n are positive integers.
Then, assuming each hypothesis region is s^(n), the Bhattacharyya coefficient is computed for each state of s^(n); from the Bhattacharyya coefficient, the weight ω_i of the i-th frame of video image is determined, where σ is the variance of the Gaussian function.
This provides the correlation probability function between adjacent frames of the same target as well as the correlation probability function between different targets within the same frame, thereby guaranteeing accurate data association.
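The Bhattacharyya coefficient and the frame weight derived from it can be sketched as follows. The coefficient over normalized histograms is standard; the exact form of the weight's exponent is not legible in the source, so the Gaussian kernel commonly paired with this coefficient is assumed, with `sigma` an illustrative parameter.

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms:
    1.0 for identical distributions, smaller as they diverge."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

def frame_weight(rho, sigma=0.2):
    """Weight of the i-th frame from the coefficient rho. Assumed kernel:
    exp(-(1 - rho) / (2*sigma**2)), the usual Gaussian weighting used with
    Bhattacharyya similarity; sigma is the Gaussian variance parameter."""
    return math.exp(-(1.0 - rho) / (2.0 * sigma ** 2))
```

Identical histograms give a coefficient of 1 and hence the maximal weight, while disjoint histograms give a coefficient of 0 and a weight near zero.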
Step 105 specifically includes:
for each moving target, computing the tracking-window matrix Ω;
for each moving target, determining the state update equation of the joint probabilistic data association (JPDA) of the moving target, where X_m(i) denotes the state vector of moving target m in the i-th frame of video image in the data association, V_m(i+1) denotes the new joint information after particle filtering, X̂_m(i+1) denotes the state-vector prediction value of moving target m in the i-th frame of video image, and p_m(i+1) is the association probability function between two adjacent frames of moving target m;
for each moving target, determining the association probability from the tracking-window matrix, where ω_i is the weight of the i-th frame of video image; and
determining the number of unassociated observation samples in the video image.
Computing the tracking-window matrix for each moving target includes:
for each moving target, setting the matrix value to 1 if observation datum u falls within the tracking window of moving target m, and to 0 if observation datum u does not fall within the tracking window of moving target m; and
for each moving target, determining the tracking-window matrix Ω from these matrix values, where N is the total number of moving targets and M is the total number of tracking-window updates.
In this embodiment, specifically, the association matrix and the association probabilities are determined next. First, the tracking-window matrix is computed for each moving target; its values indicate whether a measured value falls into a tracking window. When the value is 0 no target is indicated and all elements are 1; when it is nonzero, the measured value falls in a tracking window, and the probability that each measured value is associated with each possible source target is then computed. The confirmation matrix is determined from the relationship between the particle distribution positions in the association matrix and the tracking windows, and once the confirmation matrix is obtained, the various association probabilities of the moving targets can be inferred. Here u denotes an observation datum, m a moving target, N the total number of moving targets, and M the total number of tracking-window updates. For each moving target, the matrix value is 1 if observation datum u falls within the tracking window of moving target m and 0 otherwise, from which the tracking-window matrix Ω is determined.
Then, for each moving target, the state update equation of the JPDA of the moving target is determined, where X_m(i) denotes the state vector of moving target m in the i-th frame of video image in the data association, V_m(i+1) denotes the new joint information after particle filtering, X̂_m(i+1) denotes the state-vector prediction value of moving target m in the i-th frame of video image, and p_m(i+1) is the association probability function between two adjacent frames of moving target m. The joint probabilistic data association algorithm takes the association between all observation data and the moving targets into account.
For each moving target, the association probability is determined from the tracking-window matrix, where ω_i is the weight of the i-th frame of video image.
Then, from the tracking-window matrix Ω, the number of unassociated observation samples in the video image is determined.
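The matrix-value rule and the unassociated-observation count can be sketched as follows. This is an illustrative sketch: representing the tracking windows as axis-aligned boxes `(x0, y0, x1, y1)` and the observations as points `(x, y)` is an assumption, since the source does not specify the window geometry.

```python
def validation_matrix(observations, windows):
    """Omega[u][m] = 1 if observation u = (x, y) falls inside tracking
    window m = (x0, y0, x1, y1), else 0 (the matrix-value rule above)."""
    return [[1 if x0 <= x <= x1 and y0 <= y <= y1 else 0
             for (x0, y0, x1, y1) in windows]
            for (x, y) in observations]

def unassociated_count(omega):
    """An observation whose whole row is zero fell in no tracking window,
    i.e. it is an unassociated observation sample."""
    return sum(1 for row in omega if not any(row))
```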
Step 106 specifically includes:
generating an initial particle-point sequence x^{*(1)}(m), x^{*(2)}(m), ..., x^{*(z)}(m) according to the prior probability distribution, where m denotes the m-th moving target and z is the total number of generated particle points, z a positive integer;
sampling the posterior probability distribution to determine the sample value, and computing the acceptance probability α(X, X^{*(z)}(m));
if α(X, X^{*(z)}(m)) ≥ 1, accepting the sampled value X^{(z)}(m) = X^{*(z)}(m) and setting the particle weight to 1/N; if α(X, X^{*(z)}(m)) < 1, ignoring the sampled value;
determining the importance weight from the measured value, where P(X^{(z)}(m) | X^{*(z)}(m)) is the association probability computed in the data association and q(X^{(z)}(m), X^{*(z)}(m)) denotes a known probability density function for simple sampling, i being the i-th frame of video image;
determining the state estimation equation and the covariance matrix from the updated weights; and
sampling the video image once every N frames, and using the M particles obtained from MN iterations of sampling as the state prediction value of the moving target at the current moment, so as to maintain tracking of the moving targets within the tracking windows according to the state prediction values of the moving targets.
In this embodiment, specifically, tracking maintenance of the moving targets is then carried out by means of filtering and prediction. The present invention filters the moving targets with a particle-filter method; to increase the diversity of the particles, an MCMC method is introduced after resampling to improve the estimation accuracy of the filter and to adjust the number of particles.
First, the Markov chain and the MCMC filter are initialized, constructing a Markov chain, and the particle aging period B in sampling and the sampling frame interval N are set; importance sampling is then performed on the Markov chain, and the initial particle-point sequence x^{*(1)}(m), x^{*(2)}(m), ..., x^{*(z)}(m) is generated according to the prior probability distribution, z being the total number of generated particle points.
The proposal distribution is defined as a normal distribution. After sampling the posterior probability distribution, the sample value is obtained and the acceptance probability α(X, X^{*(z)}(m)) is computed. If α(X, X^{*(z)}(m)) ≥ 1, the sampled value X^{*(z)}(m) is accepted, i.e. X^{(z)}(m) = X^{*(z)}(m), and the particle weight is set to 1/N; if α(X, X^{*(z)}(m)) < 1, the sampled value is ignored and the original sampling point X^{(z)}(m) is kept unchanged.
Importance sampling is then carried out; the importance weight can be computed from the measured value, where P(X^{(z)}(m) | X^{*(z)}(m)) is the association probability computed in the data association and q(X^{(z)}(m), X^{*(z)}(m)) denotes a known probability density function for simple sampling, i being the i-th frame of video image.
From the updated weights, the state estimation equation and the covariance matrix are computed.
Finally, resampling is performed: the video image is sampled once every N frames, and the M particles obtained from MN iterations of sampling serve as the prediction value of the target state at the current moment; tracking of the moving targets within the tracking windows is then maintained according to the state prediction values of the moving targets.
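The MCMC rejuvenation step and the weighted state estimate can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the text accepts a proposal only when the acceptance probability reaches 1, which is a greedy simplification; the sketch below shows standard Metropolis-Hastings, which also accepts proposals with probability alpha, and `log_post`, `step` and the seed are assumed names and values.

```python
import math
import random

random.seed(0)

def mh_move(x, log_post, step=0.1):
    """One Metropolis-Hastings move with a normal proposal, used to
    rejuvenate a particle after resampling. Unlike the greedy rule in the
    text, a proposal with alpha < 1 is still accepted with probability
    alpha; otherwise the original sampling point is kept unchanged."""
    x_new = x + random.gauss(0.0, step)
    alpha = math.exp(min(0.0, log_post(x_new) - log_post(x)))
    if random.random() < alpha:
        return x_new, True     # proposal accepted
    return x, False            # proposal rejected, keep old particle

def state_estimate(particles, weights):
    """Weighted mean of the particle set: the state prediction value."""
    total = sum(weights)
    return sum(p * w for p, w in zip(particles, weights)) / total
```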
In this embodiment, the shadows of the moving targets in the video image are removed using a shadow removal method based on improved Gaussian mixture background modeling; model information is established for each moving target; the tracking window of each moving target is adaptively updated according to its model information; the weights between the moving targets are determined from the model information; the association probabilities between the moving targets are determined from those weights; and tracking of the moving targets within the tracking windows is maintained according to the association probabilities. Moreover, by incorporating the Bhattacharyya coefficient, the tracking window adapts as the size of the moving target changes, so that targets with large scale changes can be tracked effectively and the real-time performance of moving target tracking is improved. Because association probabilities are established between the moving targets, occluded or merged moving targets can be separated from one another, tracking of the moving targets in the video image can be maintained, and the purpose of moving target tracking is well achieved.
Fig. 3 is a schematic structural diagram of the moving target tracking apparatus provided by Embodiment 3 of the present invention. As shown in Fig. 3, the apparatus provided by this embodiment includes:
a shadow removal module 31, configured to remove the shadow of each moving target in the video image using a shadow removal method based on improved Gaussian mixture background modeling;
a model building module 32, configured to establish model information for each moving target;
a window update module 33, configured to adaptively update the tracking window of each moving target according to the model information of the moving target;
a weight determining module 34, configured to determine the weights between the moving targets according to the model information of each moving target;
an association module 35, configured to determine the association probabilities between the moving targets according to the weights between the moving targets;
a tracking module 36, configured to track and maintain the moving targets within the tracking windows according to the association probabilities between the moving targets.
The moving target tracking apparatus of this embodiment can execute the moving target tracking method provided by Embodiment 1 of the present invention; its implementation and principle are similar and are not repeated here.
In this embodiment, the shadow of each moving target in the video image is removed using a shadow removal method based on improved Gaussian mixture background modeling; model information is established for each moving target; the tracking window of each moving target is adaptively updated according to its model information; the weights between the moving targets are determined according to the model information; the association probabilities between the moving targets are determined according to these weights; and the moving targets within the tracking windows are tracked and maintained according to the association probabilities. Because association probabilities are established between the moving targets, targets that occlude or merge with one another can be separated and multiple moving targets can be distinguished, so that the moving targets in the video image can be tracked and maintained, achieving the purpose of robust moving target tracking.
Fig. 4 is a schematic structural diagram of the moving target tracking apparatus provided by Embodiment 4 of the present invention, built on the basis of Embodiment 3. As shown in Fig. 4, in the apparatus provided by this embodiment, the shadow removal module 31 is specifically configured to:
for each moving target in the video image, determine the color angle α between the foreground image and the background region of the moving target, where Xd is the color vector of the j-th pixel of the foreground image of the moving target, Xb is the color vector of the j-th pixel of the background region of the moving target, and j is a positive integer;
for each moving target in the video image, if α < τ, determine that the j-th pixel of the moving target is a suspected shadow, where τ is a constant;
for each moving target in the video image, if the j-th pixel of the moving target satisfies the HSV shadow condition, determine that the j-th pixel of the moving target is shadow, where (x, y) is the coordinate of the j-th pixel, IH(x, y), IS(x, y), IV(x, y) are respectively the H, S, V components of the foreground image of the moving target at the j-th pixel, and BH(x, y), BS(x, y), BV(x, y) are respectively the H, S, V components of the background region of the moving target at the j-th pixel.
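The exact threshold formulas in the patent are rendered as figures and are not reproduced in the text; as a sketch, the two-stage test above can be written as follows, using the color angle between foreground and background vectors for the suspected-shadow stage and a commonly used HSV ratio/difference condition for confirmation (the thresholds beta1, beta2, tau_s, tau_h are hypothetical, not the patent's values):

```python
import numpy as np

def color_angle(fg, bg):
    """Angle between the foreground and background colour vectors of one pixel.
    A small angle (below tau) marks the pixel as a suspected shadow."""
    cos = np.dot(fg, bg) / (np.linalg.norm(fg) * np.linalg.norm(bg) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def is_shadow(i_hsv, b_hsv, beta1=0.4, beta2=0.9, tau_s=0.15, tau_h=0.1):
    """HSV shadow test for one pixel (hypothetical thresholds): a shadow
    darkens V by a bounded ratio while H and S stay close to the background."""
    ih, is_, iv = i_hsv
    bh, bs, bv = b_hsv
    ratio = iv / (bv + 1e-12)
    return (beta1 <= ratio <= beta2) and abs(is_ - bs) <= tau_s and abs(ih - bh) <= tau_h
```

A pixel passing both stages is removed from the foreground mask before the target model is built.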
The model building module 32 is specifically configured to:
for each moving target, determine the observed target probability distribution function of the moving target in the i-th frame image, where β is a control parameter, Xm,i is the observation of the m-th moving target in the i-th frame video image, and μm,i denotes the mean vector of the m-th target in the i-th frame video image;
for each moving target, determine the J background probability functions of the moving target in the i-th frame video image, where, in the j-th background probability function of the m-th moving target, S is the dimension of the state space, Yi denotes the pixel value of a pixel in the i-th frame video image, the corresponding mean vector is that of the j-th Gaussian model in the i-th frame video image, and i, j, m, J are positive integers;
for each moving target, determine the similarity function of the moving target according to the observed target probability distribution function of the moving target in the i-th frame video image and the J background probability functions of the moving target in the i-th frame video image, where δ is a constant;
for each moving target, determine the information weight of the moving target in the i-th frame video image according to the similarity function of the moving target, where N denotes the total number of moving targets;
for each moving target, determine the model information of the moving target according to the observed target probability distribution function of the moving target in the i-th frame video image.
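The probability distribution functions above appear only as figures in the patent text. As a minimal sketch under the stated definitions, one can assume a Gaussian-kernel observation likelihood controlled by β and normalize the per-target similarity scores into information weights that sum to one over the N targets (both forms are assumptions, not the patent's exact formulas):

```python
import numpy as np

def observation_likelihood(x_obs, mu, beta=1.0):
    """Hypothetical Gaussian-kernel observation model: P ~ exp(-beta * ||x - mu||^2),
    where x_obs is the observation X_{m,i} and mu the mean vector mu_{m,i}."""
    d = np.asarray(x_obs, dtype=float) - np.asarray(mu, dtype=float)
    return float(np.exp(-beta * np.dot(d, d)))

def information_weights(similarities):
    """Normalize N per-target similarity scores into information weights
    lambda_m = s_m / sum_n s_n, so the weights over all targets sum to 1."""
    s = np.asarray(similarities, dtype=float)
    return s / s.sum()
```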
The window update module 33 includes:
an initialization submodule 331, configured to determine, for each moving target, the initial tracking window of the moving target;
a ratio determination submodule 332, configured to determine, for each moving target, the size change ratio q of the tracking window according to the initial tracking window of the moving target, where G1, G2, G3, G4, G5 are respectively the model information corresponding to the first five tracking windows of the moving target;
a window determination submodule 333, configured to determine, for each moving target, the length Hi+1 = λm,iHi(1+q) and the width Wi+1 = λm,iWi(1+q) of the tracking window of the m-th target in the i-th frame video image according to the size change ratio of the tracking window and the information weight of the moving target in the i-th frame video image, where λm,i is the information weight of moving target m in the i-th frame video image.
The initialization submodule 331 is specifically configured to:
if the moving target becomes larger, multiply the length and width of the tracking window by 1 + ζ, where ζ is a constant;
if the moving target becomes smaller, multiply the length and width of the tracking window by 1 − ζ.
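As a sketch, the adaptive window update above — length and width scaled by the information weight λ and the change ratio q, with the initial window grown or shrunk by the constant ζ — can be written directly from the stated relations:

```python
def update_window(h, w, lam, q):
    """Adaptive window update from the text:
    H_{i+1} = lam * H_i * (1 + q), W_{i+1} = lam * W_i * (1 + q)."""
    return lam * h * (1.0 + q), lam * w * (1.0 + q)

def init_window(h, w, target_grew, zeta=0.05):
    """Initial window: scale length and width by (1 + zeta) when the target
    grows and by (1 - zeta) when it shrinks (zeta is a constant)."""
    factor = 1.0 + zeta if target_grew else 1.0 - zeta
    return h * factor, w * factor
```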
The weight determining module 34 is specifically configured to:
for each moving target, determine the observed target model state vector X = [x, y, g, l] of the moving target, where x, y, g, l are respectively the length and width of the tracking window, the color histogram, and the model information of the moving target;
for each moving target, determine the association probability function of the moving target between two adjacent frames according to the observed target model state vector of the moving target;
determine the first association probability function between the m-th moving target and the n-th moving target with respect to size and color histogram in the i-th frame video image, and determine the second association probability function between the model information and the LBP texture value of the m-th moving target and the n-th moving target, where i denotes the i-th frame video image and i, m, n are positive integers;
determine the Bayes coefficient between the m-th moving target and the n-th moving target according to the first association probability function and the second association probability function;
determine the weight of the i-th frame video image according to the Bayes coefficient, where σ is the variance of the Gaussian function.
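The coefficient and weight formulas are figures in the patent. As a hedged sketch, one can assume the two association scores are fused multiplicatively into a Bayes coefficient, and that the frame weight is a Gaussian function of that coefficient with variance σ (both combination rules are assumptions):

```python
import math

def bayes_coefficient(p_size_color, p_model_lbp):
    """Hypothetical fusion of the two association probability scores
    (size/color-histogram and model-information/LBP-texture) by product."""
    return p_size_color * p_model_lbp

def frame_weight(b, sigma=1.0):
    """Gaussian weighting of the Bayes coefficient:
    omega = exp(-b^2 / (2 * sigma^2))."""
    return math.exp(-(b * b) / (2.0 * sigma * sigma))
```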
The association module 35 includes:
a calculation submodule 351, configured to calculate, for each moving target, the tracking window matrix Ω;
an update determination submodule 352, configured to determine, for each moving target, the state update equation of the joint probabilistic data association of the moving target, where Xm(i) denotes the state vector of moving target m in the i-th frame video image in data association, Vm(i+1) denotes the new innovation information after particle filtering, the predicted state vector of moving target m in the i-th frame video image is used, and pm(i+1) is the association probability function of moving target m between two adjacent frames;
a probability determination submodule 353, configured to determine, for each moving target, the association probability according to the tracking window matrix, where ωi is the weight of the i-th frame video image;
a sample determination submodule 354, configured to determine the number of unassociated observation samples in the video image.
The calculation submodule 351 is specifically configured to:
for each moving target, set the matrix value according to whether observation data u falls within the tracking window of moving target m;
for each moving target, determine the tracking window matrix Ω according to the matrix values, where N denotes the total number of moving targets and M denotes the total number of tracking window updates.
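The entries of Ω are given only as figures in the patent; standard joint probabilistic data association uses a binary validation matrix, so the calculation submodule can be sketched as follows (the binary 1/0 convention and the axis-aligned rectangular-window test are assumptions):

```python
import numpy as np

def validation_matrix(observations, windows):
    """Build the tracking-window (validation) matrix Omega:
    Omega[u, m] = 1 if observation u falls inside the tracking window of
    target m, else 0. Windows are (x0, y0, x1, y1); observations are (x, y)."""
    omega = np.zeros((len(observations), len(windows)), dtype=int)
    for u, (ox, oy) in enumerate(observations):
        for m, (x0, y0, x1, y1) in enumerate(windows):
            if x0 <= ox <= x1 and y0 <= oy <= y1:
                omega[u, m] = 1
    return omega

def unassociated_count(omega):
    """Number of observations that fall in no target's tracking window."""
    return int(np.sum(omega.sum(axis=1) == 0))
```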
The tracking module 36 is specifically configured to:
generate an initial particle point sequence x*(1)(m), x*(2)(m), ..., x*(z)(m) according to the prior probability distribution, where m denotes the m-th moving target, z denotes the total number of generated particle point sequences, and z is a positive integer;
sample from the posterior probability distribution to determine the sampled value, and calculate the acceptance probability;
if α(X, X*(z)(m)) ≥ 1, accept the sampled value X(z)(m) = X*(z)(m) and determine the particle weight to be 1/N; if α(X, X*(z)(m)) < 1, discard the sampled value;
determine the importance weight from the measured value, where P(X(z)(m)|X*(z)(m)) is the association probability computed in data association, q(X(z)(m), X*(z)(m)) denotes the known probability density function used for simple sampling, and i denotes the i-th frame video image;
determine the state estimation equation and the covariance matrix from the updated weights;
sample the video image once every N frames, take the M particles obtained from M·N sampling iterations as the state prediction value of the moving target at the current time, and track and maintain the moving target within the tracking window according to the state prediction value of the moving target.
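The sample/weight/resample cycle above follows the general sampling importance resampling (SIR) pattern. The patent's exact transition model and likelihood are figures, so the sketch below substitutes a generic one-dimensional Gaussian random-walk transition and Gaussian measurement likelihood (both assumptions):

```python
import numpy as np

def particle_filter_step(particles, weights, z_meas, sigma_q=1.0, sigma_r=1.0, rng=None):
    """One predict / importance-weight / resample cycle of an SIR particle filter.
    particles: (M,) array of 1-D states; returns (resampled particles,
    uniform weights, weighted-mean state estimate)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # predict: propagate each particle with Gaussian process noise
    particles = particles + rng.normal(0.0, sigma_q, size=particles.shape)
    # update: importance weights from the measurement likelihood
    lik = np.exp(-0.5 * ((z_meas - particles) / sigma_r) ** 2)
    weights = weights * lik
    weights = weights / weights.sum()
    # state estimate (weighted mean) before resampling
    estimate = float(np.sum(weights * particles))
    # resample back to uniform weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles)), estimate
```

Each call yields the state estimate used as the prediction for the tracking window; repeated calls with the same measurement pull the particle cloud onto the measured state.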
The moving target that the motion target tracking device of the present embodiment can perform the embodiment of the present invention one, embodiment two provides Tracking, it is realized, and principle is similar, and here is omitted.
The present embodiment passes through, using the shadow removal method improving mixed Gaussian background modeling, to remove the fortune in video image The shade of moving-target;Set up model information for each moving target;According to the model information of each moving target, adaptive The tracking window of moving target should be updated;According to the model information of each moving target, determine the power between each moving target Value;According to the weights between each moving target, determine the association probability between each moving target;Between each moving target Association probability, is tracked to the moving target under tracking window maintaining.And in conjunction with Bayes's coefficient, tracking window can be made Both it had been suitable for the size of moving target and had been changed, and then target that can be effectively larger to dimensional variation had been tracked, Improve the real-time of the tracking of moving target;Due to establishing the association probability between each moving target, such that it is able to right The moving target respectively block, merging carries out separating, and multiple moving targets can be carried out separate, and then can be in video image Moving target be tracked maintain, go the purpose of real motion target tracking well.
Fig. 5 is a schematic structural diagram of the television provided by Embodiment 5 of the present invention. As shown in Fig. 5, the television provided by this embodiment includes:
a display mainboard 51, an FPGA chip 52, and a liquid crystal display 53, the FPGA chip 52 being connected to the display mainboard 51 and the liquid crystal display 53 respectively;
wherein the moving target tracking apparatus provided in the above embodiments is arranged in the FPGA chip 52.
Specifically, in this embodiment, the television is provided with a display mainboard, an FPGA chip, and a liquid crystal display, the FPGA chip being connected to the display mainboard and the liquid crystal display respectively. The moving target tracking apparatus provided by Embodiment 3 and Embodiment 4 may be arranged in the FPGA chip.
The method may be implemented in FPGA hardware, for example on the DE2-70 multimedia processing platform based on the EP2C70 chip. The selected multimedia processing platform provides large-scale, high-speed FPGA resources, the large-capacity memory resources needed to store high-resolution image data, and the high-speed data transmission channels needed to carry high-bandwidth code streams, and it supports various video input/output interfaces.
The television of this embodiment incorporates the moving target tracking apparatus provided by Embodiment 3 and Embodiment 4 of the present invention; its implementation and principle are similar and are not repeated here.
In this embodiment, the moving target tracking apparatus provided by Embodiment 3 and Embodiment 4 is arranged in the television. The shadow of each moving target in the video image is removed using a shadow removal method based on improved Gaussian mixture background modeling; model information is established for each moving target; the tracking window of each moving target is adaptively updated according to its model information; the weights between the moving targets are determined according to the model information; the association probabilities between the moving targets are determined according to these weights; and the moving targets within the tracking windows are tracked and maintained according to the association probabilities. By further incorporating the Bayes coefficient, the tracking window both fits the size of the moving target and changes with it, so that targets with large scale variation can be tracked effectively and the real-time performance of tracking is improved. Because association probabilities are established between the moving targets, targets that occlude or merge with one another can be separated and multiple moving targets can be distinguished, so that the moving targets in the video image can be tracked and maintained, achieving the purpose of robust moving target tracking.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by program instructions executed on related hardware. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The storage medium includes various media capable of storing program code, such as ROM, RAM, magnetic disk, or optical disc.
Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalents may be substituted for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A moving target tracking method, characterized by comprising:
removing the shadow of each moving target in a video image using a shadow removal method based on improved Gaussian mixture background modeling;
establishing model information for each moving target;
adaptively updating the tracking window of each moving target according to the model information of the moving target;
determining the weights between the moving targets according to the model information of each moving target;
determining the association probabilities between the moving targets according to the weights between the moving targets;
tracking and maintaining the moving targets within the tracking windows according to the association probabilities between the moving targets.
2. The method according to claim 1, characterized in that removing the shadow of each moving target in the video image using the shadow removal method based on improved Gaussian mixture background modeling comprises:
for each moving target in the video image, determining the color angle α between the foreground image and the background region of the moving target, where Xd is the color vector of the j-th pixel of the foreground image of the moving target, Xb is the color vector of the j-th pixel of the background region of the moving target, and j is a positive integer;
for each moving target in the video image, if α < τ, determining that the j-th pixel of the moving target is a suspected shadow, where τ is a constant;
for each moving target in the video image, if the j-th pixel of the moving target satisfies the HSV shadow condition, determining that the j-th pixel of the moving target is shadow, where (x, y) is the coordinate of the j-th pixel, IH(x, y), IS(x, y), IV(x, y) are respectively the H, S, V components of the foreground image of the moving target at the j-th pixel, and BH(x, y), BS(x, y), BV(x, y) are respectively the H, S, V components of the background region of the moving target at the j-th pixel.
3. The method according to claim 1, characterized in that establishing model information for each moving target comprises:
for each moving target, determining the observed target probability distribution function of the moving target in the i-th frame image, where β is a control parameter, Xm,i is the observation of the m-th moving target in the i-th frame video image, and μm,i denotes the mean vector of the m-th target in the i-th frame video image;
for each moving target, determining the J background probability functions of the moving target in the i-th frame video image, where, in the j-th background probability function of the m-th moving target, S is the dimension of the state space, Yi denotes the pixel value of a pixel in the i-th frame video image, the corresponding mean vector is that of the j-th Gaussian model in the i-th frame video image, and i, j, m, J are positive integers;
for each moving target, determining the similarity function of the moving target according to the observed target probability distribution function of the moving target in the i-th frame video image and the J background probability functions of the moving target in the i-th frame video image, where δ is a constant;
for each moving target, determining the information weight of the moving target in the i-th frame video image according to the similarity function of the moving target, where N denotes the total number of moving targets;
for each moving target, determining the model information of the moving target according to the observed target probability distribution function of the moving target in the i-th frame video image.
4. The method according to claim 3, characterized in that adaptively updating the tracking window of each moving target according to the model information of the moving target comprises:
for each moving target, determining the initial tracking window of the moving target;
for each moving target, determining the size change ratio q of the tracking window according to the initial tracking window of the moving target, where G1, G2, G3, G4, G5 are respectively the model information corresponding to the first five tracking windows of the moving target;
for each moving target, determining the length Hi+1 = λm,iHi(1+q) and the width Wi+1 = λm,iWi(1+q) of the tracking window of the m-th target in the i-th frame video image according to the size change ratio of the tracking window and the information weight of the moving target in the i-th frame video image, where λm,i is the information weight of moving target m in the i-th frame video image.
5. The method according to claim 4, characterized in that determining, for each moving target, the initial tracking window of the moving target comprises:
if the moving target becomes larger, multiplying the length and width of the tracking window by 1 + ζ, where ζ is a constant;
if the moving target becomes smaller, multiplying the length and width of the tracking window by 1 − ζ.
6. The method according to claim 1, characterized in that determining the weights between the moving targets according to the model information of each moving target comprises:
for each moving target, determining the observed target model state vector X = [x, y, g, l] of the moving target, where x, y, g, l are respectively the length and width of the tracking window, the color histogram, and the model information of the moving target;
for each moving target, determining the association probability function of the moving target between two adjacent frames according to the observed target model state vector of the moving target;
determining the first association probability function between the m-th moving target and the n-th moving target with respect to size and color histogram in the i-th frame video image, and determining the second association probability function between the model information and the LBP texture value of the m-th moving target and the n-th moving target, where i denotes the i-th frame video image and i, m, n are positive integers;
determining the Bayes coefficient between the m-th moving target and the n-th moving target according to the first association probability function and the second association probability function;
determining the weight of the i-th frame video image according to the Bayes coefficient, where σ is the variance of the Gaussian function.
7. The method according to claim 1, characterized in that determining the association probabilities between the moving targets according to the weights between the moving targets comprises:
for each moving target, calculating the tracking window matrix Ω;
for each moving target, determining the state update equation of the joint probabilistic data association of the moving target, where Xm(i) denotes the state vector of moving target m in the i-th frame video image in data association, Vm(i+1) denotes the new innovation information after particle filtering, the predicted state vector of moving target m in the i-th frame video image is used, and pm(i+1) is the association probability function of moving target m between two adjacent frames;
for each moving target, determining the association probability according to the tracking window matrix, where ωi is the weight of the i-th frame video image;
determining the number of unassociated observation samples in the video image.
8. The method according to claim 7, characterized in that calculating, for each moving target, the tracking window matrix comprises:
for each moving target, setting the matrix value according to whether observation data u falls within the tracking window of moving target m;
for each moving target, determining the tracking window matrix Ω according to the matrix values, where N denotes the total number of moving targets and M denotes the total number of tracking window updates.
9. The method according to any one of claims 1-8, characterized in that tracking and maintaining the moving targets within the tracking windows according to the association probabilities between the moving targets comprises:
generating an initial particle point sequence x*(1)(m), x*(2)(m), ..., x*(z)(m) according to the prior probability distribution, where m denotes the m-th moving target, z denotes the total number of generated particle point sequences, and z is a positive integer;
sampling from the posterior probability distribution to determine the sampled value, and calculating the acceptance probability;
if α(X, X*(z)(m)) ≥ 1, accepting the sampled value X(z)(m) = X*(z)(m) and determining the particle weight to be 1/N; if α(X, X*(z)(m)) < 1, discarding the sampled value;
determining the importance weight from the measured value, where P(X(z)(m)|X*(z)(m)) is the association probability computed in data association, q(X(z)(m), X*(z)(m)) denotes the known probability density function used for simple sampling, and i denotes the i-th frame video image;
determining the state estimation equation and the covariance matrix from the updated weights;
sampling the video image once every N frames, taking the M particles obtained from M·N sampling iterations as the state prediction value of the moving target at the current time, and tracking and maintaining the moving target within the tracking window according to the state prediction value of the moving target.
10. A television, characterized by comprising:
a display mainboard, a field-programmable gate array (FPGA) chip, and a liquid crystal display, the FPGA chip being connected to the display mainboard and the liquid crystal display respectively;
wherein the FPGA chip is configured to implement the moving target tracking method according to any one of claims 1-9.
CN201610848424.2A 2016-09-23 2016-09-23 Motion target tracking method and TV Active CN106384359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610848424.2A CN106384359B (en) 2016-09-23 2016-09-23 Motion target tracking method and TV

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610848424.2A CN106384359B (en) 2016-09-23 2016-09-23 Motion target tracking method and TV

Publications (2)

Publication Number Publication Date
CN106384359A true CN106384359A (en) 2017-02-08
CN106384359B CN106384359B (en) 2019-06-25

Family

ID=57936913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610848424.2A Active CN106384359B (en) 2016-09-23 2016-09-23 Motion target tracking method and TV

Country Status (1)

Country Link
CN (1) CN106384359B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257148A (en) * 2018-01-17 2018-07-06 厦门大学 The target of special object suggests window generation method and its application in target following
CN108711164A (en) * 2018-06-08 2018-10-26 广州大学 A kind of method for testing motion based on LBP and Color features
CN108931773A (en) * 2017-05-17 2018-12-04 通用汽车环球科技运作有限责任公司 Automobile-used sextuple point cloud system
CN110009665A (en) * 2019-03-12 2019-07-12 华中科技大学 A kind of target detection tracking method blocked under environment
CN111052753A (en) * 2017-08-30 2020-04-21 Vid拓展公司 Tracking video scaling

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101355692A (en) * 2008-07-30 2009-01-28 浙江大学 Intelligent monitoring apparatus for real time tracking motion target area
US8401239B2 (en) * 2009-03-30 2013-03-19 Mitsubishi Electric Research Laboratories, Inc. Object tracking with regressing particles
US20140071286A1 (en) * 2012-09-13 2014-03-13 Xerox Corporation Method for stop sign law enforcement using motion vectors in video streams
CN103914853A (en) * 2014-03-19 2014-07-09 华南理工大学 Method for processing target adhesion and splitting conditions in multi-vehicle tracking process
CN104299210A (en) * 2014-09-23 2015-01-21 同济大学 Vehicle shadow eliminating method based on multi-feature fusion
CN105931269A (en) * 2016-04-22 2016-09-07 海信集团有限公司 Tracking method for target in video and tracking device thereof


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Xue: "Research on Moving Object Detection and Tracking Algorithms Based on Image Sequences", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108931773A (en) * 2017-05-17 2018-12-04 通用汽车环球科技运作有限责任公司 Automobile-used sextuple point cloud system
CN108931773B (en) * 2017-05-17 2023-01-13 通用汽车环球科技运作有限责任公司 Six-dimensional point cloud system for vehicle
CN111052753A (en) * 2017-08-30 2020-04-21 Vid拓展公司 Tracking video scaling
CN108257148A (en) * 2018-01-17 2018-07-06 厦门大学 The target of special object suggests window generation method and its application in target following
CN108257148B (en) * 2018-01-17 2020-09-25 厦门大学 Target suggestion window generation method of specific object and application of target suggestion window generation method in target tracking
CN108711164A (en) * 2018-06-08 2018-10-26 广州大学 A kind of method for testing motion based on LBP and Color features
CN108711164B (en) * 2018-06-08 2020-07-31 广州大学 Motion detection method based on LBP and Color characteristics
CN110009665A (en) * 2019-03-12 2019-07-12 华中科技大学 A kind of target detection tracking method blocked under environment

Also Published As

Publication number Publication date
CN106384359B (en) 2019-06-25

Similar Documents

Publication Publication Date Title
CN110111335B (en) Urban traffic scene semantic segmentation method and system for adaptive countermeasure learning
CN112396027B (en) Vehicle re-identification method based on graph convolution neural network
CN101141633B (en) Moving object detecting and tracing method in complex scene
CN106384359B (en) Motion target tracking method and TV
CN101339655B (en) Visual sense tracking method based on target characteristic and bayesian filtering
CN112257609B (en) Vehicle detection method and device based on self-adaptive key point heat map
CN110188597B (en) Crowd counting and positioning method and system based on attention mechanism cyclic scaling
CN107408303A (en) System and method for Object tracking
CN101470809B (en) Moving object detection method based on expansion mixed gauss model
CN102592138B (en) Object tracking method for intensive scene based on multi-module sparse projection
CN107784663A (en) Correlation filtering tracking and device based on depth information
CN104378582A (en) Intelligent video analysis system and method based on PTZ video camera cruising
CN108133172A (en) Method, the analysis method of vehicle flowrate and the device that Moving Objects are classified in video
CN114023062B (en) Traffic flow information monitoring method based on deep learning and edge calculation
CN103577875A (en) CAD (computer-aided design) people counting method based on FAST (features from accelerated segment test)
CN111179608A (en) Intersection overflow detection method, system and storage medium
CN112884742A (en) Multi-algorithm fusion-based multi-target real-time detection, identification and tracking method
CN106097383A (en) A kind of method for tracking target for occlusion issue and equipment
CN114970321A (en) Scene flow digital twinning method and system based on dynamic trajectory flow
CN110688905A (en) Three-dimensional object detection and tracking method based on key frame
CN110633678A (en) Rapid and efficient traffic flow calculation method based on video images
CN110009675A (en) Generate method, apparatus, medium and the equipment of disparity map
CN106780567B (en) Immune particle filter extension target tracking method fusing color histogram and gradient histogram
CN108596032B (en) Detection method, device, equipment and medium for fighting behavior in video
CN116245949A (en) High-precision visual SLAM method based on improved quadtree feature point extraction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218

Patentee after: Hisense Video Technology Co.,Ltd.

Address before: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218

Patentee before: HISENSE ELECTRIC Co.,Ltd.