CN105654513A - Moving target detection method based on sampling strategy - Google Patents
Moving target detection method based on sampling strategy

- Publication number: CN105654513A (application number CN201511026542.7A)
- Authority: CN (China)
- Prior art keywords: pixel, moment, motion vector, point, sample frame
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications

- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence

Landscapes

- Image Analysis (AREA)
Abstract
The invention belongs to the field of pattern recognition and computer vision and particularly relates to a moving target detection method based on an oversampling strategy. The method comprises the following steps: for each foreground frame F_{t-s}, calculating a resampling moment r(n); calculating the sparse point tracks from F_{t-s} to F_t; calculating the motion vectors from moment t-s to moment r(n); obtaining and merging synthesized sample frames; post-processing; and classification. The method makes full use of an oversampling technique on the minority-class (foreground) samples to achieve complete segmentation of the moving target.
Description
Technical field
The invention belongs to the field of pattern recognition and computer vision, and particularly relates to a moving target detection method based on an oversampling strategy.
Background art
Video sequences generally contain a large amount of information, of which people are usually interested in only a small fraction, such as moving persons and vehicles. Moving target detection is a two-class classification problem: its objective is to divide the video content into two classes, foreground and background, so that the moving targets are accurately and completely detected in the video sequence while the background of no interest is removed; the foreground targets obtained are then used for subsequent target tracking and recognition. Moving target detection is therefore a crucial preprocessing step in video surveillance systems, and it has great value both in the field of computer vision and in real life.
The most widely used approach in moving target detection is background subtraction. Its basic idea is to divide foreground from background by comparing the difference between the current image and a background image against a preset threshold. Common background subtraction algorithms include Gaussian mixture modeling and kernel density estimation. In real video sequences the numbers of foreground and background samples differ greatly, but traditional modeling methods often ignore this point; they therefore tend to misclassify foreground as background, so that the detection accuracy often fails to meet the requirements of subsequent processing.
The class-imbalance problem means that the numbers of samples of the different classes in a training set are unequal; in a two-class problem it means that the probability distributions of the two classes of sample points are unbalanced. In a video sequence the foreground samples belong to the minority class, and their number is far smaller than the number of background samples; nevertheless, the class-imbalance problem has received little attention in background subtraction methods. Building on class-imbalance theory from data mining, the present invention introduces an oversampling strategy at the data level to solve the class-imbalance problem in background subtraction. The oversampling strategy first replicates chosen minority-class (foreground) samples, then adds the synthesized sample set to the minority class to obtain a new foreground sample set, and finally makes the foreground samples (minority class) and the background samples (majority class) equal in number, i.e. produces a balanced data set. The advantage of introducing the oversampling strategy into background subtraction is that the balanced data set is used for classification, which greatly improves the classification accuracy.
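As a minimal illustration of the data-plane oversampling idea described above (plain replication of minority-class samples; the invention instead synthesizes new foreground frames from motion vectors), the following sketch balances a toy foreground/background sample set. All names and data are hypothetical:

```python
import random

def oversample_foreground(foreground, background, rng=random.Random(0)):
    """Replicate minority-class (foreground) samples until the two
    classes contain the same number of samples."""
    balanced = list(foreground)
    while len(balanced) < len(background):
        balanced.append(rng.choice(foreground))
    return balanced

fg = [(10, 10), (12, 11)]                       # few foreground samples (minority class)
bg = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 2)]   # many background samples (majority class)
fg_balanced = oversample_foreground(fg, bg)
print(len(fg_balanced) == len(bg))              # True: the data set is now balanced
```

After balancing, a classifier trained on `fg_balanced` and `bg` no longer sees the foreground as a rare class.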
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by providing a moving target detection algorithm based on an oversampling strategy. The present invention makes full use of an oversampling technique on the minority-class samples so that the foreground and background data sets become balanced, thereby achieving complete segmentation of the moving target.
The moving target detection method based on the oversampling strategy specifically comprises the following steps:
S1, for each foreground frame F_{t-s}, calculate the resampling moment r(n), specifically as follows:
S11, take the last τ_f foreground frames as reference; all synthesized sample frames will then be produced in the interval (t-2, t+2]. Decompose this interval uniformly into 8 subintervals, distribute the 8 subintervals evenly to the reference frames {F_{t-s} | s = 1, ..., 4}, and initialize the number N of synthesized sample frames;
S12, assume the N synthesized sample frames take F_{t-s} as reference; in order to prevent overfitting, stipulate that the synthesized sample frames are inserted uniformly into the oversampling interval, and calculate r(n), where r(n) is the moment of the synthesized sample frame F_{r(n)};
S2, calculate the sparse point tracks from F_{t-s} to F_t;
S3, divide F_{t-s} into two subsets F1 and F2, where F1 contains the good pixels obtained by the KLT tracking algorithm and F2 contains the remaining pixels; the good pixels are determined empirically;
S4, calculate the motion vectors from moment t-s to moment r(n), specifically as follows:
S41, calculate, for each pixel in F1 described in S3, the motion vector from moment t-s to moment r(n), specifically:
S411, for each foreground pixel in F1 described in S3, obtain from the sparse point track described in S2 the motion vector from moment t-s to the integer moment closest to r(n);
S412, extend the motion vector described in S411 linearly to r(n) to obtain the motion vector from moment t-s to r(n);
S42, calculate, for each pixel in F2 described in S3, the motion vector from moment t-s to moment r(n), specifically as follows:
S421, for a pixel z_i in F2 described in S3, let z_k be the pixel in the subset F1 described in S3 that is the nearest neighbor of z_i in the x direction; keep the relative position of z_i and z_k unchanged from moment t-s to moment r(n), i.e. z_i and z_k have parallel motion vectors;
S422, assume the motion vector of pixel z_k points to a new pixel z_j at moment r(n); calculate the coordinates of z_j in the x and y directions respectively;
S423, for a pixel z_i in F2 described in S3, let z_j be its corresponding pixel at moment r(n), where z_k is in F1 described in S3 and is the nearest neighbor of z_i in the x direction; calculate the coordinates of z_j in the x and y directions respectively;
S5, the two pixels connected by each motion vector obtained in S41 have the same color information, so a new sample, i.e. a synthesized sample frame, is finally obtained with F1 described in S3 as reference; likewise, the two pixels connected by each motion vector obtained in S42 have the same color information, so a new sample, i.e. a synthesized sample frame, is finally obtained with F2 described in S3 as reference;
S6, merge the two synthesized sample frames described in S5 into one synthesized sample frame;
S7, post-process the synthesized sample frame described in S6, distinguishing two cases:
Case A, for a foreground pixel, if all of its 8 neighborhood points are empty, remove the pixel;
Case B, for an empty pixel, if more than 6 of its 8 neighborhood points are foreground points, set the pixel as a foreground point and set its color information to the average color information of its 8 neighborhood points;
S8, calculate for each pixel the probability that it belongs to the foreground or the background;
S9, classify by combinatorial optimization.
Further, τ_f described in S11 equals 4.
The beneficial effects of the invention are as follows:
The present invention makes full use of an oversampling technique on the minority-class samples to achieve complete segmentation of the moving target.
Description of the drawings
Fig. 1 is the overall framework flow chart of the moving target detection algorithm based on the oversampling strategy.
Fig. 2 comprises 2a, 2b and 2c: 2a shows the division of the time interval (t-2, t+2]; 2b and 2c show the oversampling moments r(n) of the synthesized sample frames (marked in the figures) for N = 2 and N = 3 respectively.
Fig. 3 comprises 3a, 3b, 3c and 3d: 3a shows the tracks of the foreground points in F1; 3b, 3c and 3d show the motion vectors from moment t-4 to r(1), r(2) and r(3) respectively.
Fig. 4 comprises 4a, 4b, 4c, 4d, 4e and 4f: 4a shows the sparse tracking result of the video sequence from t-4 to t; 4b and 4c show the segmentation results of the video sequence at moments t and t-4 respectively; 4d and 4e show the oversampling results of the video sequence at moments r(1) and r(3) respectively; and 4f shows the segmentation result of the video sequence based on oversampling.
Detailed description of the invention
The technical scheme of the present invention is described in detail below in conjunction with an embodiment and the accompanying drawings.
As shown in Fig. 1, the present invention is specifically described in conjunction with a particular video sequence.
Step 1: for each F_{t-s}, calculate the resampling moment r(n). Take the last τ_f = 4 foreground frames as reference; all synthesized sample frames will then be produced in the interval (t-2, t+2]. Divide this interval uniformly into 8 subintervals and initialize the number N of synthesized sample frames. The concrete steps are:
Step 1.1: divide the interval (t-2, t+2] into 8 subintervals U_k = (t + k/2 - 5/2, t + k/2 - 2], k = 1, ..., 8, as shown in Fig. 2a. Distribute the 8 subintervals evenly to the reference frames {F_{t-s} | s = 1, ..., 4}, namely: F_{t-4}: U_1 ∪ U_8, F_{t-3}: U_2 ∪ U_7, F_{t-2}: U_3 ∪ U_6, F_{t-1}: U_4 ∪ U_5.
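The subinterval construction of step 1.1 can be sketched as follows. The general assignment pattern F_{t-s}: U_{5-s} ∪ U_{4+s} is read off from the four cases listed above and is an inference, not a formula stated in the patent; function names are hypothetical:

```python
from fractions import Fraction

def subintervals(t):
    """U_k = (t + k/2 - 5/2, t + k/2 - 2], k = 1..8 (step 1.1).
    Each interval is represented as an (open lower, closed upper) pair."""
    return {k: (t + Fraction(k, 2) - Fraction(5, 2),
                t + Fraction(k, 2) - 2) for k in range(1, 9)}

def assigned(t, s):
    """Subintervals assigned to reference frame F_{t-s}: U_{5-s} and U_{4+s}."""
    u = subintervals(t)
    return u[5 - s], u[4 + s]

t = 0
assert subintervals(t)[1] == (Fraction(-2), Fraction(-3, 2))       # U_1 = (t-2, t-3/2]
assert assigned(t, 4) == (subintervals(t)[1], subintervals(t)[8])  # F_{t-4}: U_1, U_8
```

Exact rational arithmetic (`Fraction`) keeps the half-integer interval bounds free of floating-point error.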
Step 1.2: assume the N synthesized sample frames take F_{t-s} as reference; in order to prevent overfitting, stipulate that the synthesized sample frames are inserted uniformly into the oversampling interval. r(n) is the moment of the synthesized sample frame F_{r(n)}: for synthesized sample frames formed with F_{t-1} as reference, the N moments r(n) are spaced uniformly over the assigned interval U_4 ∪ U_5; for synthesized sample frames formed with F_{t-s} (s ≠ 1) as reference, the N moments r(n) are spaced uniformly over the assigned subintervals U_{5-s} and U_{4+s}.
Fig. 2b and Fig. 2c show the moments r(n) for N = 2 and N = 3 synthesized sample frames respectively.
Step 2: calculate the sparse point tracks from F_{t-s} to F_t. The sparse tracking of the video sequence from moment t-4 to t is shown in Fig. 4a.
Step 3: divide F_{t-s} into two subsets F1 and F2. F1 contains the pixels obtained by the KLT tracking algorithm, which are generally considered to be good feature pixels; F2 contains the remaining pixels.
Step 4: calculate the motion vectors from moment t-s to moment r(n). The concrete steps are:
Step 4.1: calculate, for each good pixel in F1, the motion vector from moment t-s to moment r(n).
For each foreground pixel in F1, a tracked point track is obtained (e.g. Fig. 3a). From the point track, obtain the motion vector from moment t-s to the integer moment closest to r(n), then extend this motion vector linearly to r(n) to obtain the motion vector from moment t-s to r(n). With F_{t-4} as reference frame and N = 3, Figs. 3b, 3c and 3d show the motion vectors from t-4 to r(1), r(2) and r(3) respectively.
The calculation of the motion vectors from t-4 to r(1) and r(2) is explained below. First, the motion vector from t-4 to t-2 is obtained from step 2, i.e. Fig. 3a. The integer moment closest to r(1) is t-2, so extending the motion vector from t-4 to t-2 linearly yields the motion vector from moment t-4 to r(1), as shown in Fig. 3b. For r(2), the close integer moments are t-2 and t-1; since t-2 has already been used, the motion vector from t-4 to t-1 is used to predict the motion vector from t-4 to r(2), as shown in Fig. 3c.
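The linear extension of step 4.1 can be sketched as follows. The function name and toy coordinates are hypothetical, and the track is assumed to give pixel positions at integer moments:

```python
def extrapolate_vector(p_start, p_int, t_start, t_int, r):
    """Linearly extend the motion vector observed from moment t_start to the
    integer moment t_int so that it reaches the (fractional) moment r.
    Positions are (x, y) pairs; the result is the extended motion vector."""
    scale = (r - t_start) / (t_int - t_start)
    return ((p_int[0] - p_start[0]) * scale,
            (p_int[1] - p_start[1]) * scale)

# A tracked point moves from (0, 0) at t-4 to (4, 2) at t-2 (two frames later).
# Extrapolating to r = t-4 + 2.5 scales the vector by 2.5 / 2 = 1.25.
v = extrapolate_vector((0.0, 0.0), (4.0, 2.0), 0.0, 2.0, 2.5)
print(v)  # (5.0, 2.5)
```

The same call handles r(1), r(2) and r(3) by passing the integer moment actually used for each, as the text above describes.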
Step 4.2: calculate, for each pixel in F2, the motion vector from moment t-s to moment r(n). The concrete steps are:
Step 4.2.1: for a pixel z_i in F2, let z_k be the pixel in F1 that is the nearest neighbor of z_i in the x direction; keep the relative position of z_i and z_k unchanged from moment t-s to moment r(n), i.e. z_i and z_k have parallel motion vectors;
Step 4.2.2: assume the motion vector of pixel z_k is v = (v_x, v_y); this motion vector points to a new pixel z_j at moment r(n). Then the x coordinate of z_j is z_j^x = z_k^x + v_x, and the y coordinate is calculated similarly;
Step 4.2.3: for a pixel z_i in F2, its corresponding pixel at moment r(n) is z_j, where z_k is in F1 and is the nearest neighbor of z_i in the x direction. Since z_i and z_k have parallel motion vectors, the x coordinate of z_j is z_j^x = z_i^x + v_x, and the y coordinate is calculated similarly.
Step 5: produce a new sample with F1 as reference. The two pixels connected by each motion vector obtained in step 4.1 have the same color information, so a new sample, i.e. a synthesized sample frame, is finally obtained with F1 as reference.
Step 6: produce a new sample with F2 as reference. The two pixels connected by each motion vector obtained in step 4.2 have the same color information, so a new sample, i.e. a synthesized sample frame, is finally obtained with F2 as reference.
Step 7: post-process the synthesized sample frame, distinguishing two cases: (1) for a foreground pixel, if all of its 8 neighborhood points are empty, remove the pixel; (2) for an empty pixel, if more than 6 of its 8 neighborhood points are foreground points, set the pixel as a foreground point and set its color information to the average color information of its 8 neighborhood points. Figs. 4d and 4e are the binary maps of the synthesized frames at moments r(1) and r(2) respectively.
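The two post-processing rules of step 7 can be sketched as follows. The sparse frame representation (a dict from pixel coordinates to an RGB color, with empty pixels absent) is a hypothetical choice:

```python
def postprocess(frame, width, height):
    """Post-process a synthesized frame (step 7): remove isolated foreground
    pixels; fill empty pixels with more than 6 foreground neighbors."""
    nbrs = lambda x, y: [(x + dx, y + dy) for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    out = dict(frame)
    for x in range(width):
        for y in range(height):
            fg = [n for n in nbrs(x, y) if n in frame]
            if (x, y) in frame and not fg:
                del out[(x, y)]                      # case 1: isolated point, remove
            elif (x, y) not in frame and len(fg) > 6:
                # case 2: fill the hole with the average color of its neighbors
                out[(x, y)] = tuple(sum(frame[n][c] for n in fg) / len(fg)
                                    for c in range(3))
    return out

# A hole surrounded by 8 foreground pixels is filled; a lone pixel is removed.
frame = {(x, y): (100.0, 0.0, 0.0) for x in range(3) for y in range(3)}
del frame[(1, 1)]                      # hole in the center
frame[(10, 10)] = (50.0, 50.0, 50.0)   # isolated pixel
result = postprocess(frame, 12, 12)
print((1, 1) in result, (10, 10) in result)  # True False
```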
Step 8: the above steps have produced a balanced sample set. For each pixel z_i of every frame Z_t, formula (5) is used to calculate the probability that it belongs to the foreground:

p(z_i | l_i = 1) = (1/J) Σ_{j=1..J} K_H(z_i - f_j),   (5)

where f_j is a sample in the foreground training set, J is the total number of samples participating in the calculation, K_H is the kernel function, and H is the kernel width. The probability p(z_i | l_i = 0) that pixel z_i of frame Z_t belongs to the background is calculated analogously to formula (5).
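A sketch of the kernel density estimate of formula (5). The Gaussian kernel is an assumption, since the patent only specifies a kernel K_H of width H; sample values are hypothetical:

```python
import math

def p_foreground(z, fg_samples, h):
    """Kernel density estimate of p(z | l = 1) over the balanced foreground
    training set, using an isotropic Gaussian kernel of width h.  The
    background probability would be computed the same way over background
    samples."""
    dim = len(z)
    norm = (2 * math.pi * h * h) ** (dim / 2)     # Gaussian normalization
    return sum(math.exp(-sum((a - b) ** 2 for a, b in zip(z, f)) / (2 * h * h))
               / norm for f in fg_samples) / len(fg_samples)

fg = [(100.0, 100.0, 100.0), (102.0, 101.0, 99.0)]   # RGB foreground samples
p_near = p_foreground((101.0, 100.0, 100.0), fg, 5.0)
p_far = p_foreground((10.0, 20.0, 30.0), fg, 5.0)
print(p_near > p_far)  # True: pixels close to foreground samples score higher
```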
Step 9: the final classification result is obtained by minimizing the energy function of formula (6):

E(l) = Σ_i ψ_i(z_i, l_i) + λ Σ_{(i,j)∈ε} ψ_{i,j}(z_i, z_j),   (6)

where λ is the weight between the smoothness term ψ_{i,j} and the data term ψ_i, and ψ_{i,j}(z_i, z_j) = l_i l_j + (1 - l_i)(1 - l_j) if (z_i, z_j) ∈ ε, with ε the set of 8-neighborhood pixel pairs. Fig. 4f is the final binary segmentation result of the video sequence produced by the statistical background subtraction algorithm based on the oversampling strategy.
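A sketch of evaluating the energy of formula (6) for a candidate labelling. Taking the data term as -log p(z_i | l_i) is an assumption (the patent gives only the smoothness term explicitly), and a real implementation would minimize this energy over all labellings, e.g. with graph cuts:

```python
import math

def energy(labels, p_fg, p_bg, edges, lam):
    """Evaluate formula (6) for a binary labelling (1 = foreground).
    Data term: -log p(z_i | l_i) (an assumption); smoothness term as
    stated in the patent: psi_ij = l_i*l_j + (1-l_i)*(1-l_j)."""
    data = sum(-math.log(p_fg[i] if l else p_bg[i])
               for i, l in enumerate(labels))
    smooth = sum(labels[i] * labels[j] + (1 - labels[i]) * (1 - labels[j])
                 for i, j in edges)
    return data + lam * smooth

p_fg = [0.9, 0.8, 0.1]       # per-pixel foreground probabilities from step 8
p_bg = [0.1, 0.2, 0.9]       # per-pixel background probabilities
edges = [(0, 1), (1, 2)]     # neighborhood pixel pairs
e = energy([1, 1, 0], p_fg, p_bg, edges, lam=0.5)
print(e)
```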
Claims (2)
1. A moving target detection method based on an oversampling strategy, characterized by comprising the following steps:
S1, for each foreground frame F_{t-s}, calculating the resampling moment r(n), specifically as follows:
S11, taking the last τ_f foreground frames as reference, whereby all synthesized sample frames are produced in the interval (t-2, t+2]; decomposing this interval uniformly into 8 subintervals; distributing the 8 subintervals evenly to the reference frames {F_{t-s} | s = 1, ..., 4}; and initializing the number N of synthesized sample frames;
S12, assuming the N synthesized sample frames take F_{t-s} as reference and, in order to prevent overfitting, stipulating that the synthesized sample frames are inserted uniformly into the oversampling interval; calculating r(n), where r(n) is the moment of the synthesized sample frame F_{r(n)};
S2, calculating the sparse point tracks from F_{t-s} to F_t;
S3, dividing F_{t-s} into two subsets F1 and F2, where F1 contains the good pixels obtained by the KLT tracking algorithm and F2 contains the remaining pixels, the good pixels being determined empirically;
S4, calculating the motion vectors from moment t-s to moment r(n), specifically as follows:
S41, calculating, for each pixel in F1 described in S3, the motion vector from moment t-s to moment r(n), specifically:
S411, for each foreground pixel in F1 described in S3, obtaining from the sparse point track described in S2 the motion vector from moment t-s to the integer moment closest to r(n);
S412, extending the motion vector described in S411 linearly to r(n) to obtain the motion vector from moment t-s to r(n);
S42, calculating, for each pixel in F2 described in S3, the motion vector from moment t-s to moment r(n), specifically as follows:
S421, for a pixel z_i in F2 described in S3, letting z_k be the pixel in the subset F1 described in S3 that is the nearest neighbor of z_i in the x direction, and keeping the relative position of z_i and z_k unchanged from moment t-s to moment r(n), i.e. z_i and z_k have parallel motion vectors;
S422, assuming the motion vector of pixel z_k points to a new pixel z_j at moment r(n), and calculating the coordinates of z_j in the x and y directions respectively;
S423, for a pixel z_i in F2 described in S3 whose corresponding pixel at moment r(n) is z_j, where z_k is in F1 described in S3 and is the nearest neighbor of z_i in the x direction, calculating the coordinates of z_j in the x and y directions respectively;
S5, the two pixels connected by each motion vector obtained in S41 having the same color information, finally obtaining a new sample, i.e. a synthesized sample frame, with F1 described in S3 as reference; and the two pixels connected by each motion vector obtained in S42 having the same color information, finally obtaining a new sample, i.e. a synthesized sample frame, with F2 described in S3 as reference;
S6, merging the two synthesized sample frames described in S5 into one synthesized sample frame;
S7, post-processing the synthesized sample frame described in S6, distinguishing two cases:
Case A, for a foreground pixel, if all of its 8 neighborhood points are empty, removing the pixel;
Case B, for an empty pixel, if more than 6 of its 8 neighborhood points are foreground points, setting the pixel as a foreground point and setting its color information to the average color information of its 8 neighborhood points;
S8, calculating for each pixel the probability that it belongs to the foreground or the background;
S9, classifying by combinatorial optimization.
2. The moving target detection method based on the oversampling strategy according to claim 1, characterized in that τ_f described in S11 equals 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201511026542.7A CN105654513A (en) | 2015-12-30 | 2015-12-30 | Moving target detection method based on sampling strategy |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105654513A true CN105654513A (en) | 2016-06-08 |
Family
ID=56490371
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201511026542.7A Pending CN105654513A (en) | 2015-12-30 | 2015-12-30 | Moving target detection method based on sampling strategy |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105654513A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109726821A (en) * | 2018-11-27 | 2019-05-07 | 东软集团股份有限公司 | Data balancing method, device, computer readable storage medium and electronic equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101777180A (en) * | 2009-12-23 | 2010-07-14 | 中国科学院自动化研究所 | Complex background real-time alternating method based on background modeling and energy minimization |
CN102663409A (en) * | 2012-02-28 | 2012-09-12 | 西安电子科技大学 | Pedestrian tracking method based on HOG-LBP |
CN102663429A (en) * | 2012-04-11 | 2012-09-12 | 上海交通大学 | Method for motion pattern classification and action recognition of moving target |
US20130129205A1 (en) * | 2010-11-24 | 2013-05-23 | Jue Wang | Methods and Apparatus for Dynamic Color Flow Modeling |
CN103218829A (en) * | 2013-04-01 | 2013-07-24 | 上海交通大学 | Foreground extracting method suitable for dynamic background |
Non-Patent Citations (3)

- Carlos Cuevas et al.: "Efficient Moving Object Detection for Lightweight Applications on Smart Cameras", IEEE Transactions on Circuits and Systems for Video Technology
- Ross Messing et al.: "Activity recognition using the velocity histories of tracked keypoints", Proceedings of the 2009 IEEE 12th International Conference on Computer Vision
- Xiang Zhang et al.: "Statistical Background Subtraction Based on Imbalanced Learning", Proceedings of the 2014 IEEE International Conference on Multimedia and Expo
Legal Events

- C06 / PB01: Publication
- C10 / SE01: Entry into substantive examination (entry into force of request for substantive examination)
- WD01: Invention patent application deemed withdrawn after publication (application publication date: 20160608)