CN102163334B - Method for extracting video object under dynamic background based on Fisher linear discriminant analysis

Method for extracting video object under dynamic background based on Fisher linear discriminant analysis

Info

Publication number
CN102163334B
Authority
CN
China
Prior art keywords
frame
point
motion
parameter
prime
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110052400
Other languages
Chinese (zh)
Other versions
CN102163334A (en)
Inventor
祝世平 (Zhu Shiping)
马丽 (Ma Li)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN 201110052400 priority Critical patent/CN102163334B/en
Publication of CN102163334A publication Critical patent/CN102163334A/en
Application granted granted Critical
Publication of CN102163334B publication Critical patent/CN102163334B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for extracting a video object under a dynamic background based on Fisher linear discriminant analysis. The method comprises the following steps: dividing frame K (the current frame) into 8*8 blocks; obtaining the motion vector field of frame K by block-matching motion estimation between frame K and frame K-1; selecting probable background blocks in frame K as feature blocks; estimating the global motion model parameters from the motion vectors of the feature blocks by the least-squares method; rejecting outliers with Fisher linear discriminant analysis; performing global motion compensation on the current frame according to the global motion parameters; and taking the inter-frame difference between the reconstructed frame K and frame K-1 to extract the moving object. Experiments show that the method extracts video objects from dynamic-background video sequences, and that segmentation accuracy after compensation is markedly improved.

Description

A method for extracting video objects under a dynamic background based on Fisher linear discriminant analysis
Technical field
The present invention relates to a video segmentation method, and in particular to a method for extracting video objects under a dynamic background based on Fisher linear discriminant analysis.
Background technology
Video motion generally contains two kinds of motion information: global motion and local motion. Global motion is the motion shared by the pixels occupying the larger proportion of a video sequence; it is mainly caused by camera motion. In most cases the background itself does not move: its apparent change is caused by the motion of the camera. The motion exhibited by foreground objects relative to the camera is called local motion. In a static-background sequence the camera is still, there is no global motion, and only the foreground objects move locally; frame differencing or background differencing can then remove the static background fairly easily. In a dynamic-background sequence, however, the global motion interferes with background removal. To segment such a sequence, the global motion must first be eliminated while the local motion of the foreground objects is retained, which requires global motion estimation and compensation.
Global motion estimation estimates the motion law of the background region caused by camera motion, i.e. it solves for the parameters of a corresponding mathematical motion model. Global motion compensation then uses the estimated global motion parameters to apply a mapping transform that aligns the backgrounds of the current frame and the previous frame. After accurate compensation, frame differencing or background subtraction can remove the background and highlight the foreground regions of interest that carry local motion (see Yang Wenming. Video object segmentation with spatio-temporal fusion [D]. Zhejiang: Zhejiang University, 2006).
The purpose of global motion estimation is to recover, from the video sequence, the law of the camera motion that causes the global motion. Methods for estimating the global motion parameters fall mainly into differential methods and feature-point correspondence methods, which derive the parameters from the velocity field over the image pixel domain or from feature-point correspondences, respectively (see Wu Si. Research on video motion information analysis [D]. Beijing: Institute of Computing Technology, CAS, 2005).
In the prior art, global motion estimation often uses the sub-blocks of the entire image, so the computation is heavy and the speed is slow; moreover, sub-blocks undergoing local motion inevitably disturb the estimate, reducing the precision of global motion compensation.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art and provide a method for extracting video objects under a dynamic background based on Fisher linear discriminant analysis, which reduces the influence of local motion on global motion estimation, improves the precision and speed of global motion compensation, and finally realizes the extraction of video objects under a dynamic background.
The technical solution of the present invention, a method for extracting video objects under a dynamic background based on Fisher linear discriminant analysis, comprises the following steps:
(1) Block matching between the current frame and the previous frame to obtain block motion vectors: each frame is divided into 8 * 8 sub-blocks, and frame K is matched against frame K-1 using the SAD matching criterion and the NTSS search strategy.
(2) Fisher-discriminant estimation of the global camera motion parameters: the sub-blocks on both sides of frame K selected in step (1) serve as feature blocks; the six-parameter camera model is solved from their motion vectors by the least-squares method; outliers are rejected with the Fisher linear decision rule and the model is re-solved by least squares, until the set number of iterations is reached.
(3) Rebuilding the current frame by global motion compensation: the current frame is globally motion-compensated with bilinear interpolation to obtain its reconstructed frame.
(4) Inter-frame difference to extract the video object: the current frame K and its reconstructed frame K' are differenced, and post-processing yields the video object segmentation plane (Alpha plane) and the video object.
The block matching between frame K and frame K-1 in step (1) uses the sum of absolute differences (SAD) matching criterion and the new three-step search (NTSS) strategy.
The SAD block matching criterion is computed as:

SAD(i, j) = \sum_{m=1}^{M} \sum_{n=1}^{N} | f_k(m, n) - f_{k-1}(m+i, n+j) |

where (i, j) is the displacement, f_k and f_{k-1} are the grey values of the current and previous frames, and M * N is the macroblock size. If SAD(i, j) reaches its minimum at some point, that point is the best match.
The Fisher-discriminant estimation of the global camera motion parameters in step (2) is implemented as follows:
(i) Feature block selection:
The outer boundary is placed at 1/15 of the image width and height from the two sides of the image, and the inner boundary at 4/15; the sub-blocks lying between these boundaries are taken as feature blocks.
(ii) Least-squares solution of the six-parameter camera affine model:
The sub-blocks on both sides of frame K selected in step (i) serve as feature blocks. Their block-matched motion vectors are substituted into the six-parameter camera model, and the parameters m_0, m_1, m_2, n_0, n_1, n_2 are estimated by the least-squares method. The six-parameter affine transform model can describe translation, rotation and zoom, and is defined as:

x' = m_0 + m_1 x + m_2 y
y' = n_0 + n_1 x + n_2 y

where m_0 and n_0 are the translation amplitudes in the x and y directions, m_1, n_1, m_2, n_2 describe zoom and rotation, and (x', y') is the point corresponding to (x, y) after translation, rotation and zoom.
(iii) Outlier rejection with the Fisher linear decision rule:
Because the selection range of the feature blocks is fixed, it is hard to guarantee that none of them lies on a moving foreground object; errors in block matching are also unavoidable. Rejecting outliers is therefore essential.
The Fisher linear criterion rejects outliers as follows.
Let U_k = (x_k, y_k)^T be the position of a pixel in the current frame K, U_{kBlockMatch} its corresponding position in the previous frame given by block-matching motion estimation, and U'_k its corresponding position in the previous frame predicted by the least-squares model. The residual R_k is defined as:

R_k = \vec{r}_k(x_k, y_k) = U_{kBlockMatch} - U'_k
U'_k = H_k A'
H_k = \begin{pmatrix} 1 & x_k & y_k & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & x_k & y_k \end{pmatrix}
A = [m_0, m_1, m_2, n_0, n_1, n_2]^T
A' = [m'_0, m'_1, m'_2, n'_0, n'_1, n'_2]^T

where A is the true global motion parameter vector, A' is its estimate obtained by the least-squares method, and H_k is a 2 * 6 matrix.
The theoretical basis for rejecting outliers with the Fisher linear criterion is: although the selected feature blocks (points) may contain outliers, most feature points still follow the global motion, so the parameters from the first solution roughly reflect the trend of the global motion. The residuals of outliers are then larger than those of inliers, so the feature points can be divided into an outlier class and an inlier class by residual magnitude, and the outliers rejected.
Suppose there are N feature-point pairs {(U_{kBlockMatch,1}, U'_{k1}), ..., (U_{kBlockMatch,N}, U'_{kN})}, with estimated residual set R = {||r_1||, ||r_2||, ..., ||r_N||}. The goal of classification is to find the optimal threshold T_r that divides the N points into inliers and outliers according to R. A threshold T ∈ {||r|| : ||r|| ∈ R} divides R into the inlier residual set R_IN = {||r_i|| : ||r_i|| < T} and the outlier residual set R_OUT = {||r_i|| : ||r_i|| >= T}, with R_IN ∪ R_OUT = R and R_IN ∩ R_OUT = Φ, where Φ is the empty set. Let R_IN have mean μ_in and variance σ²_in, with inlier probability p_in = |R_IN| / N; let R_OUT have mean μ_out and variance σ²_out, with outlier probability p_out = 1 − p_in. By the Fisher linear decision rule, the between-class variance of the two classes is:

σ²_{InAndOut} = p_in p_out (μ_in − μ_out)²

The classification is optimal when σ²_{InAndOut} is maximal, and the threshold at that point is the optimal threshold: the T that maximizes the between-class variance is taken as T_r. If the residual of a pair (U_{kBlockMatch,i}, U'_{ki}) is less than T_r, the corresponding feature point is an inlier; if it is greater than or equal to T_r, the point is an outlier and is filtered out so that it does not enter the next least-squares solution of the camera parameter model.
Compared with the prior art, the present invention has the following advantages:
(1) To speed up global motion compensation and raise its precision, the invention pre-selects as feature blocks only the sub-blocks of the current frame lying between an outer boundary at 1/15 and an inner boundary at 4/15 of the image width and height from the two sides, and performs least-squares parameter estimation on them; in addition, the Fisher linear criterion removes outliers as soon as they are found during the iteration, greatly reducing the estimation error.
(2) The invention combines block matching, the six-parameter camera affine model, the least-squares method and Fisher-criterion outlier rejection into a global motion estimation and compensation method. Experiments show that video objects are extracted from dynamic-background video sequences, and that extraction accuracy after compensation is clearly improved.
Description of the drawings:
Fig. 1 is the flowchart of the method of the present invention for extracting video objects under a dynamic background based on Fisher linear discriminant analysis;
Fig. 2 compares the video object extraction results for the 4th frame of the Foreman sequence without compensation and with compensation by the method of the invention: Fig. 2a shows the 3rd frame of the Foreman sequence; Fig. 2b the 4th frame; Fig. 2c the preprocessed 3rd frame; Fig. 2d the preprocessed 4th frame; Fig. 2e the direct difference of the 3rd and 4th frames; Fig. 2f the difference of the 3rd and 4th frames after compensation by the method of the invention; Fig. 2g the binary video object plane extracted from the 4th frame without compensation; Fig. 2h the binary video object plane extracted after compensation; Fig. 2i the video object plane extracted from the 4th frame without compensation; Fig. 2j the video object plane extracted after compensation;
Fig. 3 compares the video object extraction results for the 130th frame of the Coastguard sequence without compensation and with compensation by the method of the invention: Fig. 3a shows the 129th frame of the Coastguard sequence; Fig. 3b the 130th frame; Fig. 3c the preprocessed 129th frame; Fig. 3d the preprocessed 130th frame; Fig. 3e the direct difference of the 129th and 130th frames; Fig. 3f the difference of the 129th and 130th frames after compensation by the method of the invention; Fig. 3g the binary video object plane extracted from the 130th frame without compensation; Fig. 3h the binary video object plane extracted after compensation; Fig. 3i the video object plane extracted from the 130th frame without compensation; Fig. 3j the video object plane extracted after compensation.
Embodiment
As shown in Fig. 1, the method of the present invention for extracting video objects under a dynamic background based on Fisher linear discriminant analysis is implemented as follows:
Step 1. Greyscale conversion and morphological processing.
Because the grey-level information of an image carries most of its content, colour frames are usually converted to grey-level frames for spatial-domain processing; this speeds up processing and saves memory. The input video format in this experiment is YUV, so only the Y component needs to be extracted and processed. In addition, each frame is processed with morphological opening and closing reconstruction to remove noise and smooth away small edges, simplifying the image. The preprocessed results are shown in Fig. 2c, Fig. 2d and Fig. 3c, Fig. 3d.
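The preprocessing step can be sketched in numpy as below. The function names, the BT.601 luma weights, and the 3 * 3 structuring element are illustrative assumptions; the text only states that the Y plane is used and that morphological opening and closing are applied.

```python
import numpy as np

def rgb_to_y(frame):
    """Luma (Y) plane from an RGB frame; BT.601 weights assumed."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def _filter3x3(img, reduce_fn):
    """3x3 grey-scale erosion (np.min) or dilation (np.max)."""
    pad = np.pad(img, 1, mode="edge")
    windows = [pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)]
    return reduce_fn(np.stack(windows), axis=0)

def open_close(img):
    """Morphological opening followed by closing (noise removal)."""
    opened = _filter3x3(_filter3x3(img, np.min), np.max)   # erode, dilate
    return _filter3x3(_filter3x3(opened, np.max), np.min)  # dilate, erode
```

Opening removes small bright specks (salt noise) and closing removes small dark ones, which is the smoothing effect the text describes.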
Step 2. Block matching between the current frame K and the previous frame K-1.
The method divides each frame into blocks of size 8 * 8. Experiments show that a larger block size (e.g. 16 * 16) speeds up processing but degrades matching precision considerably, while a smaller block size raises precision but is clearly slower. Balancing speed against precision, the block size is chosen as 8 * 8, consistent with most motion estimation schemes.
Commonly used block matching criteria include the mean absolute difference MAD (Mean Absolute Difference), the mean square error MSE (Mean Square Error) and the sum of absolute differences SAD (Sum of Absolute Differences).
The expression for MAD is:

MAD(i, j) = (1 / MN) \sum_{m=1}^{M} \sum_{n=1}^{N} | f_k(m, n) - f_{k-1}(m+i, n+j) |

where (i, j) is the displacement, f_k and f_{k-1} are the grey values of the current and previous frames, M * N is the macroblock size, and (m, n) are the pixel coordinates within the macroblock. If MAD(i, j) reaches its minimum at some point, that point is the best match.

SAD(i, j) = \sum_{m=1}^{M} \sum_{n=1}^{N} | f_k(m, n) - f_{k-1}(m+i, n+j) |

After the SAD criterion appeared it quickly replaced MAD in various motion estimation methods: its matching effect is equivalent to MAD's while its computation is much cheaper, because for blocks of identical size M and N are the same and the division is unnecessary. Based on this analysis of the matching criteria, the experiments use the SAD criterion, which is simpler to compute than the other criteria and performs well.
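As a concrete illustration, here is a minimal pure-Python sketch of SAD block matching with an exhaustive search; the function names and the frame-as-list-of-lists representation are assumptions for illustration, not the patent's implementation, which uses the NTSS search strategy.

```python
def sad(cur, prev, bx, by, dx, dy, B=8):
    """SAD between the BxB block of `cur` at (bx, by) and the block of
    `prev` displaced by (dx, dy)."""
    total = 0
    for m in range(B):
        for n in range(B):
            total += abs(cur[by + m][bx + n] - prev[by + dy + m][bx + dx + n])
    return total

def best_match(cur, prev, bx, by, search=2, B=8):
    """Exhaustive search: (SAD, dx, dy) minimizing SAD within +/-search."""
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # skip displacements that push the block outside the frame
            if (0 <= bx + dx and bx + dx + B <= len(prev[0])
                    and 0 <= by + dy and by + dy + B <= len(prev)):
                s = sad(cur, prev, bx, by, dx, dy, B)
                if best is None or s < best[0]:
                    best = (s, dx, dy)
    return best
```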
The search strategy chosen is the new three-step search (NTSS). NTSS differs from the three-step search TSS in two respects: it improves on the fixed 9-point first search step of the original TSS by adopting a center-biased search pattern, and it introduces a halfway-stop technique for static or near-static sub-blocks.
The NTSS search steps are as follows:
Step 1: in addition to the 9 points searched by TSS, 8 pixels adjacent to the central pixel are added;
Step 2: the halfway-stop technique quickly decides the motion vector of static or near-static sub-blocks:
If in the first search step the minimum-SAD pixel is the center of the search window, the search stops ("first-step-stop");
If in the first search step the minimum-SAD pixel is one of the 8 points adjacent to the window center, the second step searches only the 8 points adjacent to that minimum-SAD point and then stops ("second-step-stop");
If in the first search step the minimum-SAD pixel is one of the remaining points, the search continues as in the three-step method.
Because motion vectors are usually highly concentrated near the center of the search window, the center-biased pattern of NTSS not only raises matching speed but also reduces the risk of falling into a local optimum; the halfway-stop technique greatly reduces search complexity and improves efficiency. NTSS remedies the over-large first search step of TSS and its tendency to get trapped in local optima: the 8 added search points cover slow motion, while the large first step still accommodates fast motion.
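The NTSS control flow above can be sketched as follows; `cost(dx, dy)` stands in for the per-displacement SAD, and the default first step of 4 is an assumption (the value commonly used with a +/-7 search window).

```python
def ntss(cost, max_step=4):
    """New Three-Step Search around (0, 0).

    `cost(dx, dy)` returns the block-matching error (e.g. SAD) at
    displacement (dx, dy).  The first step examines the 9 coarse TSS
    points plus the 8 neighbours of the centre; the halfway-stop rules
    end the search early for static or near-static blocks.
    """
    def ring(s):
        return [(dx, dy) for dx in (-s, 0, s) for dy in (-s, 0, s)]

    first = sorted(set(ring(max_step)) | set(ring(1)))  # 17 points
    best = min(first, key=lambda p: cost(*p))
    if best == (0, 0):                          # first-step-stop
        return best
    if max(abs(best[0]), abs(best[1])) == 1:    # second-step-stop
        cand = [(best[0] + dx, best[1] + dy) for dx, dy in ring(1)]
        return min(cand, key=lambda p: cost(*p))
    step = max_step // 2                        # continue like TSS
    while step >= 1:
        cand = [(best[0] + dx, best[1] + dy) for dx, dy in ring(step)]
        best = min(cand, key=lambda p: cost(*p))
        step //= 2
    return best
```

The three branches correspond directly to the first-step-stop, second-step-stop and TSS-continuation cases described above.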
Step 3. Fisher-discriminant estimation of the six-parameter camera affine model.
To obtain an accurate camera motion model, as many probable background blocks as possible should be chosen as feature blocks for parameter estimation. At the same time, most video objects occupy the middle of the frame, and the two sides are mostly static background. In this embodiment, therefore, the outer boundary is placed at 1/15 of the image width and height from the two sides and the inner boundary at 4/15, and the sub-blocks between these boundaries are taken as feature blocks. Note that in the camera parameter computation, the first pixel in the upper-left corner of each feature block is actually used as the feature point.
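One possible reading of the selection rule is to keep the 8 * 8 blocks whose distance from the nearest image edge lies between 1/15 and 4/15 of the frame size; the sketch below encodes that reading (the exact geometry is ambiguous in the translated text, so treat the band test as an assumption).

```python
def feature_blocks(width, height, B=8):
    """Top-left corners of BxB blocks in the border band between 1/15
    and 4/15 of the frame width/height from the nearest edge (assumed
    reading of the outer/inner boundary rule)."""
    lo_x, hi_x = width / 15, 4 * width / 15
    lo_y, hi_y = height / 15, 4 * height / 15

    def in_band(c, lo, hi, size):
        d = min(c, size - B - c)   # block's distance to the nearest edge
        return lo <= d <= hi

    return [(x, y)
            for y in range(0, height - B + 1, B)
            for x in range(0, width - B + 1, B)
            if in_band(x, lo_x, hi_x, width) or in_band(y, lo_y, hi_y, height)]
```

This keeps a ring of blocks near, but not on, the frame border, excluding both the extreme edge and the central region where foreground objects usually sit.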
Analysis shows that both the four-parameter and six-parameter models can describe translation, rotation and zoom. Experiments with both show that the four-parameter model is fast but has large error, while the six-parameter model, although somewhat slower, is much more precise; the six-parameter model is therefore selected. The six-parameter affine transform model is defined as:

x' = m_0 + m_1 x + m_2 y
y' = n_0 + n_1 x + n_2 y

where m_0 and n_0 are the translation amplitudes in the x and y directions, and m_1, n_1, m_2, n_2 describe zoom and rotation.
After the motion vectors obtained by block matching for the chosen feature blocks are substituted into the six-parameter camera model, the parameters m_0, m_1, m_2, n_0, n_1, n_2 are estimated by the least-squares method.
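The least-squares step can be sketched with numpy: stacking two rows per feature point gives an overdetermined linear system that `numpy.linalg.lstsq` solves for the six parameters (the function and variable names here are illustrative).

```python
import numpy as np

def estimate_affine(points, matches):
    """Least-squares fit of the six-parameter model
    x' = m0 + m1*x + m2*y,  y' = n0 + n1*x + n2*y
    from feature points (x, y) and their block-matched positions (x', y')."""
    H, b = [], []
    for (x, y), (xp, yp) in zip(points, matches):
        H.append([1, x, y, 0, 0, 0])   # row predicting x'
        H.append([0, 0, 0, 1, x, y])   # row predicting y'
        b.extend([xp, yp])
    A, *_ = np.linalg.lstsq(np.array(H, float), np.array(b, float), rcond=None)
    return A  # [m0, m1, m2, n0, n1, n2]
```

At least three non-collinear feature points are needed; in practice the many feature blocks give a heavily overdetermined system that averages out block-matching noise.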
Because the selection range of the feature blocks is fixed, it is hard to guarantee that none of them lies on a moving foreground object, and errors in block matching are also unavoidable; it is therefore essential to reject outliers after each least-squares solution of the parameters. The invention adopts an outlier rejection method based on the Fisher linear discriminant.
Let U_k = (x_k, y_k)^T be the position of a pixel in the current frame K, U_{kBlockMatch} its corresponding position in the previous frame given by block-matching motion estimation, and U'_k its corresponding position in the previous frame predicted by the least-squares model. The residual R_k is defined as:

R_k = \vec{r}_k(x_k, y_k) = U_{kBlockMatch} - U'_k
U'_k = H_k A'
H_k = \begin{pmatrix} 1 & x_k & y_k & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & x_k & y_k \end{pmatrix}
A = [m_0, m_1, m_2, n_0, n_1, n_2]^T
A' = [m'_0, m'_1, m'_2, n'_0, n'_1, n'_2]^T

where A is the true global motion parameter vector, A' is its estimate obtained by the least-squares method, and H_k is a 2 * 6 matrix.
The theoretical basis for rejecting outliers with the Fisher linear criterion is: although the selected feature blocks (points) may contain outliers, most feature points still follow the global motion, so the parameter vector A' from the first solution roughly reflects the trend of the global motion. The residual R_k of an outlier is then larger than that of an inlier, so the feature points can be divided into an outlier class and an inlier class by residual magnitude, and the outliers rejected.
The Fisher linear criterion rejects outliers as follows.
Suppose there are N feature-point pairs {(U_{kBlockMatch,1}, U'_{k1}), ..., (U_{kBlockMatch,N}, U'_{kN})}, with estimated residual set R = {||r_1||, ||r_2||, ..., ||r_N||}. The goal of classification is to find the optimal threshold T_r that divides the N points into inliers and outliers according to R. A threshold T ∈ {||r|| : ||r|| ∈ R} divides R into the inlier residual set R_IN = {||r_i|| : ||r_i|| < T} and the outlier residual set R_OUT = {||r_i|| : ||r_i|| >= T}, with R_IN ∪ R_OUT = R and R_IN ∩ R_OUT = Φ (Φ is the empty set). Let R_IN have mean μ_in and variance σ²_in, with inlier probability p_in = |R_IN| / N; let R_OUT have mean μ_out and variance σ²_out, with outlier probability p_out = 1 − p_in. By the Fisher linear decision rule, the between-class variance of the two classes is:

σ²_{InAndOut} = p_in p_out (μ_in − μ_out)²

The classification is optimal when σ²_{InAndOut} is maximal, and the threshold at that point is the optimal threshold; the T for which the between-class variance is maximal is taken as T_r. If the residual of a pair (U_{kBlockMatch,i}, U'_{ki}) is less than T_r, the corresponding feature point is an inlier; if it is greater than or equal to T_r, the point is an outlier and is filtered out so that it does not enter the next least-squares solution of the camera parameter model.
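The threshold search can be sketched in Python as below, assuming the between-class variance takes the Otsu-style form p_in * p_out * (μ_in − μ_out)² (an assumption where the source notation is unclear); the function name and the exhaustive scan over candidate thresholds are illustrative.

```python
def fisher_threshold(residuals):
    """Threshold T maximizing p_in * p_out * (mu_in - mu_out)**2 over
    candidate splits of the residual magnitudes; points with residual
    >= T are treated as outliers."""
    N = len(residuals)
    best_t, best_var = None, -1.0
    for t in sorted(set(residuals)):
        r_in = [r for r in residuals if r < t]
        r_out = [r for r in residuals if r >= t]
        if not r_in or not r_out:
            continue  # a split must leave both classes non-empty
        p_in = len(r_in) / N
        mu_in = sum(r_in) / len(r_in)
        mu_out = sum(r_out) / len(r_out)
        var = p_in * (1 - p_in) * (mu_in - mu_out) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

In the full method this split is recomputed after every least-squares pass, and only the inliers enter the next pass.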
Step 4. Rebuild the current frame by global motion compensation, and extract the video object.
Global motion compensation applies, according to the estimated global motion parameters, a mapping transform that aligns the backgrounds of the current frame and the previous frame. Concretely: for each point of the current frame, the previously determined camera parameter model gives the corresponding position in the previous frame, and the value there is assigned by bilinear interpolation. This yields the reconstructed current frame and completes the global motion compensation. The current frame and its reconstructed frame are then differenced; binarization and horizontal-vertical filling produce the binary Alpha plane, and mapping finally yields the video object plane.
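A minimal numpy sketch of the compensation and differencing step; the per-pixel loop, the zero fill outside the valid sampling area, and the fixed threshold are illustrative simplifications, not the patent's post-processing chain.

```python
import numpy as np

def compensate(prev, A):
    """Rebuild the current frame from the previous one: map each pixel
    through the six-parameter model and sample `prev` bilinearly."""
    m0, m1, m2, n0, n1, n2 = A
    h, w = prev.shape
    rec = np.zeros_like(prev, dtype=float)
    for y in range(h):
        for x in range(w):
            xp = m0 + m1 * x + m2 * y      # mapped position in prev
            yp = n0 + n1 * x + n2 * y
            x0, y0 = int(np.floor(xp)), int(np.floor(yp))
            if 0 <= x0 < w - 1 and 0 <= y0 < h - 1:
                ax, ay = xp - x0, yp - y0  # bilinear weights
                rec[y, x] = ((1 - ax) * (1 - ay) * prev[y0, x0]
                             + ax * (1 - ay) * prev[y0, x0 + 1]
                             + (1 - ax) * ay * prev[y0 + 1, x0]
                             + ax * ay * prev[y0 + 1, x0 + 1])
    return rec

def frame_difference(cur, rec, thresh=10):
    """Binary Alpha plane: 1 where |cur - rec| exceeds `thresh`."""
    return (np.abs(cur.astype(float) - rec) > thresh).astype(np.uint8)
```

After accurate compensation the background cancels in the difference, so the surviving 1-pixels mark the locally moving foreground.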
In the experiments, the Foreman and Coastguard MPEG-4 standard test sequences were used. Fig. 2 compares the extraction results for the 4th frame of Foreman without compensation and after compensation by the method of the invention; Fig. 3 does the same for the 130th frame of Coastguard. From Fig. 2e, Fig. 2f and Fig. 3e, Fig. 3f it is evident that the direct difference without compensation retains a great deal of background texture detail, while after compensation by the method much of the background texture is weakened or even eliminated. The final extraction results also show that the redundant background is suppressed after compensation (as shown in Fig. 2g-2j and Fig. 3g-3j), and the precision is higher than before.

Claims (1)

1. A method for extracting video objects under a dynamic background based on Fisher linear discriminant analysis, characterized in that it is implemented as follows:
(1) Block matching between the current frame and the previous frame to obtain block motion vectors: frame K, the current frame, is divided into 8 * 8 sub-blocks, and block matching between frame K and frame K-1 yields the block motion vectors of frame K;
(2) Fisher-discriminant estimation of the global camera motion parameters: the sub-blocks on both sides of frame K selected in step (1) serve as feature blocks; the six-parameter camera model is solved from the motion vectors of these sub-blocks by the least-squares method; outliers are rejected with the Fisher linear decision rule and the model is re-solved by least squares, until the set number of iterations is reached;
(3) Rebuilding the current frame by global motion compensation: the current frame is globally motion-compensated with bilinear interpolation to obtain its reconstructed frame;
(4) Inter-frame difference to extract the video object: the current frame K and its reconstructed frame K' are differenced, and post-processing yields the video object segmentation plane and the video object;
The block matching between frame K and frame K-1 in step (1) uses the sum of absolute differences (SAD) matching criterion and the new three-step search (NTSS) strategy;
The SAD block matching criterion is computed as:

SAD(i, j) = \sum_{m=1}^{M} \sum_{n=1}^{N} | f_k(m, n) - f_{k-1}(m+i, n+j) |

where (i, j) is the displacement, f_k and f_{k-1} are the grey values of the current and previous frames, and M * N is the macroblock size; if SAD(i, j) reaches its minimum at some point, that point is the best match;
It is as follows that described step (2) utilizes the Fisher differentiation to ask for global camera motion parametric technique performing step:
(i) selected characteristic piece:
Choosing wide and high position, image two lateral extent borders 1/15 is outer boundary, and 4/15 wide and high position is that sub-block in inner boundary is as characteristic block;
(ii) Solve the six-parameter camera affine model by least squares:
Take the sub-blocks of the K frame selected in step (i) as feature blocks, substitute the motion vectors obtained by block matching into the six-parameter camera model, and estimate the parameters m0, m1, m2, n0, n1, n2 by the least-squares method. The six-parameter affine transform model can model translation, rotation and scaling motion, and is defined as follows:
x′ = m0 + m1·x + m2·y
y′ = n0 + n1·x + n2·y
where m0 and n0 denote the translation of a pixel in the x and y directions respectively; the four parameters m1, n1, m2, n2 describe scaling and rotation; (x′, y′) is the point corresponding to (x, y) after translation, rotation and scaling;
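Because x′ and y′ each depend linearly on (1, x, y), the least-squares fit of the six parameters decouples into two small linear systems. A minimal sketch, assuming matched point pairs from block matching (the function name is illustrative):

```python
import numpy as np

def fit_affine6(src, dst):
    """Least-squares estimate of (m0, m1, m2, n0, n1, n2) from matched points
    src -> dst under x' = m0 + m1*x + m2*y, y' = n0 + n1*x + n2*y."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    G = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])  # rows [1 x y]
    mx, *_ = np.linalg.lstsq(G, dst[:, 0], rcond=None)  # m0, m1, m2
    ny, *_ = np.linalg.lstsq(G, dst[:, 1], rcond=None)  # n0, n1, n2
    return np.concatenate([mx, ny])
```

With at least three non-collinear feature points the system is determined; with many feature blocks the overdetermined solve averages out local noise, which is exactly why the outlier rejection of step (iii) matters.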
(iii) Reject outliers using the Fisher linear discriminant criterion; the concrete implementation is as follows:
Let U_k = (x_k, y_k)^T be the position of a pixel in the current frame K, let U_kBlockMatch = (x_kBlockMatch, y_kBlockMatch)^T be the position in the previous frame that this pixel corresponds to after block-matching motion estimation, and let U′_k be the position in the previous frame estimated for this pixel by the least-squares method. The residual R_k is defined as follows:
R_k = r_k(x_k, y_k) = U_kBlockMatch − U′_k
U′_k = H_k A′
H_k = [ 1  x_k  y_k  0  0  0
        0  0  0  1  x_k  y_k ]
A = (m0, m1, m2, n0, n1, n2)^T
A′ = (m0′, m1′, m2′, n0′, n1′, n2′)^T
where A is the actual value of the global motion parameters, A′ is the estimate of A obtained by least squares, and H_k is a 2 × 6 matrix;
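The residual norm ‖r_k‖ per feature point follows directly from the definitions above; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def residual_norms(points, matched, A_est):
    """||r_k|| = ||U_kBlockMatch - H_k A'|| for each feature point, with
    H_k = [[1, x, y, 0, 0, 0], [0, 0, 0, 1, x, y]]."""
    norms = []
    for (x, y), u in zip(points, matched):
        Hk = np.array([[1.0, x, y, 0.0, 0.0, 0.0],
                       [0.0, 0.0, 0.0, 1.0, x, y]])
        norms.append(np.linalg.norm(np.asarray(u, dtype=float) - Hk @ A_est))
    return np.array(norms)
```

Background points that obey the global motion model give small norms; points on the moving object give large ones, and the threshold separating the two classes is found next.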
Suppose there are N feature-point pairs in total: {(U_kBlockMatch1, U′_k1), ..., (U_kBlockMatchN, U′_kN)}, and the set of estimation residuals of these N pairs is R = {‖r_1‖, ‖r_2‖, ..., ‖r_N‖}. The goal of classification is to find the optimal threshold T_R that, according to R, divides the N points into the two classes of inliers and outliers. A threshold T ∈ {‖r‖ : ‖r‖ ∈ R} divides R into two parts: the inlier residual set R_IN = {‖r_i‖ : ‖r_i‖ < T} and the outlier residual set R_OUT = {‖r_i‖ : ‖r_i‖ ≥ T}, where R_IN and R_OUT satisfy R_IN ∪ R_OUT = R and R_IN ∩ R_OUT = Φ, with Φ denoting the empty set. Let R_IN have mean μ_in, variance σ_in² and inlier probability p_in = |R_IN| / N, and let R_OUT have mean μ_out, variance σ_out² and outlier probability p_out = 1 − p_in. According to the Fisher linear discriminant criterion, the between-class variance of the inlier and outlier classes is:
σ_InAndOut² = p_in · p_out · (μ_in − μ_out)²
The optimal classification is obtained when the between-class variance is maximized, and the threshold at that moment is the optimal threshold; that is, T_R is the value of T for which σ_InAndOut² is maximum:
T_R = argmax_T σ_InAndOut²(T)
If the residual of a pair (U_kBlockMatchi, U′_ki) is less than T_R, the corresponding feature point is considered an inlier; if it is greater than or equal to T_R, the corresponding feature point is considered an outlier and should be filtered out, so that it does not enter the next least-squares solution of the camera parameter model.
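The threshold search above scans every candidate T in the residual set and keeps the one maximizing the between-class variance. A minimal sketch, assuming the p_in · p_out · (μ_in − μ_out)² form of the criterion reconstructed above (the function name is illustrative):

```python
import numpy as np

def fisher_threshold(res):
    """Pick the threshold T_R in the residual set that maximizes the
    between-class variance p_in * p_out * (mu_in - mu_out)^2."""
    res = np.sort(np.asarray(res, dtype=float))
    best_t, best_v = res[-1], -1.0
    for t in res[1:]:                       # both classes must be non-empty
        r_in, r_out = res[res < t], res[res >= t]
        if len(r_in) == 0 or len(r_out) == 0:
            continue
        p_in = len(r_in) / len(res)
        v = p_in * (1 - p_in) * (r_in.mean() - r_out.mean()) ** 2
        if v > best_v:
            best_t, best_v = t, v
    return best_t
```

Points with residual below the returned threshold are kept as inliers for the next least-squares solve; the fit-reject loop repeats for the set number of iterations as in step (2).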
CN 201110052400 2011-03-04 2011-03-04 Method for extracting video object under dynamic background based on fisher linear discriminant analysis Expired - Fee Related CN102163334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110052400 CN102163334B (en) 2011-03-04 2011-03-04 Method for extracting video object under dynamic background based on fisher linear discriminant analysis

Publications (2)

Publication Number Publication Date
CN102163334A CN102163334A (en) 2011-08-24
CN102163334B true CN102163334B (en) 2013-12-25

Family

ID=44464545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110052400 Expired - Fee Related CN102163334B (en) 2011-03-04 2011-03-04 Method for extracting video object under dynamic background based on fisher linear discriminant analysis

Country Status (1)

Country Link
CN (1) CN102163334B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509307A (en) * 2011-10-12 2012-06-20 西安理工大学 Method for searching moving target based on longitude and latitude location and image registration
CN102737387B (en) * 2012-06-20 2014-11-19 天津大学 Method for estimating motion parameter of camera based on support vector machine (SVM)
CN102970527B (en) * 2012-10-18 2015-04-08 北京航空航天大学 Video object extraction method based on hexagon search under five-frame-background aligned dynamic background
CN102917219B (en) * 2012-10-18 2015-11-04 北京航空航天大学 Based on the dynamic background video object extraction of enhancement mode diamond search and five frame background alignment
CN102917224B (en) * 2012-10-18 2015-06-17 北京航空航天大学 Mobile background video object extraction method based on novel crossed diamond search and five-frame background alignment
CN102917221B (en) * 2012-10-18 2015-11-11 北京航空航天大学 Based on the dynamic background video object extraction of the search of novel cross rhombic and three frame background alignment
CN104123733B (en) * 2014-07-15 2018-05-04 合肥工业大学 A kind of method of motion detection and reduction error rate based on Block- matching
CN107340711A (en) * 2017-06-23 2017-11-10 中国人民解放军陆军军官学院 A kind of minute vehicle attitude angle automatic testing method based on video image
CN107767403A (en) * 2017-11-06 2018-03-06 佛山市章扬科技有限公司 A kind of method and apparatus based on moving object detection under dynamic background
CN108460630B (en) * 2018-02-12 2021-11-02 广州虎牙信息科技有限公司 Method and device for carrying out classification analysis based on user data
CN108537212B (en) * 2018-07-04 2022-10-14 南京邮电大学 Student behavior detection method based on motion estimation
CN109547789B (en) * 2019-01-11 2022-11-04 重庆理工大学 Global motion compensation algorithm
CN110782477A (en) * 2019-10-10 2020-02-11 重庆第二师范学院 Moving target rapid detection method based on sequence image and computer vision system
CN112632426B (en) * 2020-12-22 2022-08-30 新华三大数据技术有限公司 Webpage processing method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TR199700058A3 (en) * 1997-01-29 1998-08-21 Onural Levent Moving object segmentation based on rules.
CN101344968A (en) * 2008-09-02 2009-01-14 西北工业大学 Movement compensation method for star sky background image

Similar Documents

Publication Publication Date Title
CN102163334B (en) Method for extracting video object under dynamic background based on fisher linear discriminant analysis
Yang et al. Color-guided depth recovery from RGB-D data using an adaptive autoregressive model
CN104869421B (en) Saliency detection method based on overall motion estimation
CN103871076A (en) Moving object extraction method based on optical flow method and superpixel division
CN102917220B (en) Dynamic background video object extraction based on hexagon search and three-frame background alignment
CN103514608B (en) Moving object detection based on movement attention fusion model and extracting method
CN106952286A (en) Dynamic background Target Segmentation method based on motion notable figure and light stream vector analysis
CN103051857B (en) Motion compensation-based 1/4 pixel precision video image deinterlacing method
CN104065946B (en) Based on the gap filling method of image sequence
CN106210449A (en) The frame rate up-conversion method for estimating of a kind of Multi-information acquisition and system
CN105872345A (en) Full-frame electronic image stabilization method based on feature matching
CN102131058A (en) Speed conversion processing module and method of high definition digital video frame
CN105913407A (en) Method for performing fusion optimization on multi-focusing-degree image base on difference image
CN103514610B (en) A kind of moving Object Segmentation method of stationary background
CN104270624B (en) A kind of subregional 3D video mapping method
CN102917217B (en) Movable background video object extraction method based on pentagonal search and three-frame background alignment
CN104980726B (en) A kind of binocular video solid matching method of associated movement vector
CN102447870A (en) Detection method for static objects and motion compensation device
CN104778673A (en) Improved depth image enhancing algorithm based on Gaussian mixed model
CN103310482B (en) A kind of three-dimensional rebuilding method and system
CN102917222B (en) Mobile background video object extraction method based on self-adaptive hexagonal search and five-frame background alignment
CN103051893B (en) Dynamic background video object extraction based on pentagonal search and five-frame background alignment
CN102970527B (en) Video object extraction method based on hexagon search under five-frame-background aligned dynamic background
CN102917221B (en) Based on the dynamic background video object extraction of the search of novel cross rhombic and three frame background alignment
CN102917218B (en) Movable background video object extraction method based on self-adaptive hexagonal search and three-frame background alignment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20131225

Termination date: 20150304

EXPY Termination of patent right or utility model