CN107368784A - Novel background-subtraction moving-target detection method based on wavelet blocking - Google Patents

Novel background-subtraction moving-target detection method based on wavelet blocking

Info

Publication number
CN107368784A
CN107368784A
Authority
CN
China
Prior art keywords
pixel
background
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710452358.1A
Other languages
Chinese (zh)
Inventor
鲁晓锋
徐彩迪
杨夙
王磊
黑新宏
藤琳
高桥友彰
谢国
辛菁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology
Priority to CN201710452358.1A priority Critical patent/CN107368784A/en
Publication of CN107368784A publication Critical patent/CN107368784A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/285 Analysis of motion using a sequence of stereo image pairs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The invention discloses a novel background-subtraction moving-target detection method based on wavelet blocking. Each frame of the input image sequence is first pre-processed by converting the colour image to a grey-scale image. The grey-scale image is then partitioned into blocks, and the median of each block is taken as the pixel value of that block, yielding a new image. The new image is modelled with a Gaussian-mixture background model, and a foreground detection map is obtained by segmenting the foreground targets from the model using background differencing. The noisy foreground detection map is denoised with a wavelet-threshold denoising method, and finally an adaptive background-maintenance algorithm is applied to the result so that the background is updated dynamically in real time. The invention solves the problem in the prior art that moving targets cannot be detected reliably and in real time because of environmental influences such as dynamic backgrounds, illumination changes, noise and shadows.

Description

Novel background-subtraction moving-target detection method based on wavelet blocking
Technical field
The invention belongs to the field of video detection technology, and in particular relates to a novel background-subtraction moving-target detection method based on wavelet blocking.
Background technology
Moving-object detection is an important component of intelligent detection; its purpose is to extract regions of change from the background of an image sequence. Moving-object detection algorithms are numerous and can broadly be divided into three classes: inter-frame differencing, background subtraction and optical flow. The targets detected by inter-frame differencing contain internal holes and have poor contours; background subtraction can detect moving targets quickly but is very sensitive to illumination changes and shadows, so its detection results are unsatisfactory; optical flow has high computational complexity, is easily affected by noise, illumination changes and background perturbation, and has difficulty detecting the complete contour of a target. Moving-object detection is a popular direction of computer vision and is widely used in robot navigation, intelligent video surveillance, industrial inspection, aerospace and many other fields. It is therefore a focus of research and application in computer vision and the core of intelligent surveillance systems. Its purpose is to detect the moving targets in a surveillance video quickly and accurately, i.e. to extract the moving targets from the image sequence. One of the commonly used methods of moving-object detection is background subtraction, which is used to segment the moving targets in an image sequence; however, this method faces challenges such as dynamic backgrounds, illumination changes, noise and shadows.
Summary of the invention
It is an object of the invention to provide a novel background-subtraction moving-target detection method based on wavelet blocking, which solves the problem in the prior art that moving targets cannot be detected reliably and in real time because of environmental influences such as dynamic backgrounds, illumination changes, noise and shadows.
The technical solution adopted by the invention is a novel background-subtraction moving-target detection method based on wavelet blocking, which is specifically implemented according to the following steps:
Step 1: input each frame of the image sequence and pre-process the input image by converting the colour image to a grey-scale image;
Step 2: partition the grey-scale image obtained in step 1 into blocks. Let each frame be M*N and each block be m*n, where M is the height of each frame, N is the width of each frame, m is the number of pixels of each block in the horizontal direction and n is the number of pixels of each block in the vertical direction; the median of each block is taken as the pixel value of that block, thereby obtaining a new image, where M, N, m and n are positive integers (an illustrative sketch of this pre-processing is given after step 6);
Step 3: model the new image obtained in step 2 with the Gaussian-mixture background-modelling method;
Step 4: segment the foreground targets from the model obtained in step 3 by background differencing, obtaining a foreground detection map;
Step 5: denoise the noisy foreground detection map obtained in step 4 with a wavelet-threshold denoising method, obtaining the denoised foreground targets;
Step 6: perform background maintenance on the image obtained in step 5 with an adaptive background-maintenance algorithm, updating the background dynamically in real time.
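As a purely illustrative aid to steps 1 and 2 (not part of the patent text), the following Python sketch shows one possible implementation of the grey-scale conversion and block-median down-sampling; the function name block_median_image, the use of OpenCV for the colour conversion, and the cropping of incomplete border blocks are assumptions.

import numpy as np
import cv2  # assumed here only for the colour-to-grey conversion

def block_median_image(frame_bgr, m=3, n=3):
    """Steps 1-2: convert a colour frame to grey-scale and replace each
    m*n block by its median, yielding the reduced image that is fed to
    the background model."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    M, N = gray.shape
    # crop so the image is an exact multiple of the block size (assumption)
    Mc, Nc = (M // m) * m, (N // n) * n
    blocks = gray[:Mc, :Nc].reshape(Mc // m, m, Nc // n, n)
    # one output pixel per block: the block median
    return np.median(blocks, axis=(1, 3)).astype(np.uint8)

With the 240*320 frames and 3*3 blocks of the embodiments this yields an image of roughly 80 by 107 blocks; the exact width depends on how incomplete border blocks are handled.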
The features of the invention also reside in the following.
Step 3 is specifically implemented according to the following steps:
Step (3.1): initialise the Gaussian mixture model by defining k Gaussian distribution models for each pixel;
Step (3.2): update the parameters of the Gaussian mixture model obtained in step (3.1). When a new image is captured, its current pixel values are checked against the existing Gaussian distributions; if the absolute distance between a new pixel and one of the k Gaussian models is within D times the standard deviation, the new pixel is considered to match one or more of the Gaussian models, which is expressed as follows:
abs(u_diff(i,j,k)) <= D*sd(i,j,k)   (1)
u_diff(i,j,k) = abs(double(fr_bw(i,j)) - double(mean(i,j,k)))   (2)
where u_diff(i,j,k) is the absolute distance between the new pixel and the mean of the k-th Gaussian model, D is the deviation threshold with D = 2.5, fr_bw(i,j) is the pixel of the current image frame, mean(i,j,k) is the mean of the current image-frame pixel, i is the row index of the current image, j is the column index, k is the index of the Gaussian distribution, i and j are positive integers, and k ∈ [1,5];
If the new pixel matches the k-th Gaussian distribution, the Gaussian distribution parameters are updated with equations (3), (4) and (5):
w(i,j,k) = (1-α)*w(i,j,k) + α   (3)
mean(i,j,k) = (1-p)*mean(i,j,k) + p*double(fr_bw(i,j))   (4)
sd(i,j,k) = sqrt((1-p)*sd(i,j,k)^2 + p*(double(fr_bw(i,j)) - mean(i,j,k))^2)   (5)
where α ∈ [0,1] is the learning rate that determines the background-update speed, w(i,j,k) is the weight of the current image-frame pixel, sd(i,j,k) is the standard deviation of the current image-frame pixel, and p is the update rate, related to the other parameters by p = α/w(i,j,k); i is the row index of the current image, j is the column index, k is the index of the Gaussian distribution, w(i,j,k) is a real number, i and j are positive integers, p ∈ [0,1] and k ∈ [1,5];
If the new pixel matches none of the Gaussian distributions, a new Gaussian distribution is created to replace the existing distribution with the smallest weight; the mean of the newly created Gaussian distribution is the value of the currently observed pixel, its standard deviation is set to the initialisation maximum, its weight is set to the initialisation minimum, and the weights of the other Gaussian distributions are updated with equation (6):
w(i,j,k) = (1-α)*w(i,j,k)   (6)
where w(i,j,k) is the weight of the current image-frame pixel, α ∈ [0,1] is the learning rate that determines the background-update speed, i is the row index of the current image, j is the column index, k is the index of the Gaussian distribution, w(i,j,k) is a real number, i and j are positive integers, and k ∈ [1,5];
Step (3.3): calculate the sum of the weights; when the weight sum exceeds the empirically chosen value (0.35), the number of Gaussian distribution models is taken as the number of initialised background models;
Step (3.4): establish the background model. The k Gaussian distributions are ranked in descending order of the value of w/sd; the higher the priority, the more stable the Gaussian distribution and the better it represents the true background, and the first C Gaussian distributions establish the background model:
C = arg min_c ( Σ_{k=1}^{c} w_k > T )   (7)
where w/sd is the model ranking priority, k ∈ [1,5] is the index of the Gaussian distribution, C ∈ [1,5] is the number of Gaussian distributions at which the cumulative weight sum first exceeds the threshold, c is the maximum number of Gaussian distributions and its value is 5, T ∈ (0,1) is the threshold, and the range of w/sd is [0,1] (an illustrative sketch of steps (3.2)-(3.4) is given after step (3.5));
Step (3.5): compute the background by applying the weight sum obtained in step (3.3) to the background model obtained in step (3.4), so as to obtain a clearer background.
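The following is a minimal, hedged Python/NumPy sketch of the per-pixel mixture update of equations (1)-(7), for illustration only. The array layout, the initial standard deviation and weight given to a replacement Gaussian, the clipping of p to [0,1], the weight renormalisation, the value T = 0.7, and the updating of every matched Gaussian (rather than a single best match) are assumptions, not statements of the patent.

import numpy as np

D, ALPHA = 2.5, 0.01            # deviation threshold and learning rate from the text
INIT_SD, INIT_W = 30.0, 0.01    # assumed initial std / weight for a replaced Gaussian

def update_gmm(frame, mean, sd, w):
    """One frame of the step-3 update.  frame: (H, W) grey image;
    mean, sd, w: (H, W, K) arrays holding K Gaussians per pixel."""
    x = frame[..., None].astype(np.float64)
    match = np.abs(x - mean) <= D * sd                       # eqs. (1)-(2)
    p = np.clip(ALPHA / np.clip(w, 1e-6, None), 0.0, 1.0)    # p = alpha / w, kept in [0,1]
    w = np.where(match, (1 - ALPHA) * w + ALPHA,             # eq. (3) for matched Gaussians
                 (1 - ALPHA) * w)                            # eq. (6) for the others
    mean = np.where(match, (1 - p) * mean + p * x, mean)                     # eq. (4)
    sd = np.where(match, np.sqrt((1 - p) * sd**2 + p * (x - mean)**2), sd)   # eq. (5)
    # pixels matching no Gaussian: replace their lowest-weight Gaussian
    lost = ~match.any(axis=-1)
    k_min = w.argmin(axis=-1)
    ii, jj = np.nonzero(lost)
    mean[ii, jj, k_min[ii, jj]] = x[ii, jj, 0]
    sd[ii, jj, k_min[ii, jj]] = INIT_SD
    w[ii, jj, k_min[ii, jj]] = INIT_W
    w /= w.sum(axis=-1, keepdims=True)                       # keep the mixture weights normalised
    # eq. (7): smallest prefix of Gaussians, ordered by w/sd, whose weights exceed T
    order = np.argsort(-(w / sd), axis=-1)
    cum_w = np.cumsum(np.take_along_axis(w, order, axis=-1), axis=-1)
    C = 1 + (cum_w < 0.7).sum(axis=-1)                       # per-pixel count of background Gaussians (T = 0.7 assumed)
    return mean, sd, w, C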
Step 4 is specifically as follows:
if the absolute difference between the pixel of the current image frame and the mean of every Gaussian model is larger than D times the standard deviation of that Gaussian model, the pixel is a foreground pixel; otherwise it is a background pixel. A foreground pixel satisfies the following, as illustrated by the sketch below:
abs(u_diff(i,j,k)) > D*sd(i,j,k)   (8)
where i is the row index of the current image, j is the column index, k is the index of the Gaussian distribution, D is the deviation threshold with D = 2.5, i and j are positive integers, and k ∈ [1,5].
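Purely as an illustrative sketch (the function name and the array layout are assumptions), the step-4 test of equation (8) can be vectorised as follows:

import numpy as np

def foreground_mask(frame, mean, sd, D=2.5):
    """Step 4: a pixel is foreground when it lies more than D standard
    deviations from the mean of every Gaussian (eq. (8)).  Returns a
    binary map with 1 = moving pixel, 0 = background pixel."""
    x = frame[..., None].astype(np.float64)
    far = np.abs(x - mean) > D * sd      # per-Gaussian test of eq. (8)
    return far.all(axis=-1).astype(np.uint8)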
Step 5 is specifically implemented according to the following steps (an illustrative sketch follows):
Step (5.1), wavelet decomposition of the image: a wavelet is selected and an N-level wavelet decomposition is applied to the signal obtained in step 4, where N ∈ [1,6];
Step (5.2), thresholding: the coefficients of each decomposition level of the signal from step (5.1) are quantified with the selected threshold function using the obtained threshold;
Step (5.3), wavelet reconstruction: wavelet reconstruction is performed on the coefficients thresholded in step (5.2), yielding the denoised signal.
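The Python sketch below illustrates steps (5.1)-(5.3) with the PyWavelets package, using the sym5 wavelet and 3 decomposition levels of the embodiments. The patent's "improved semi-soft threshold function" is not given in closed form, so ordinary soft thresholding with the universal threshold sigma*sqrt(2 ln n) is substituted here as an assumption.

import numpy as np
import pywt

def wavelet_denoise(img, wavelet="sym5", level=3):
    """Steps (5.1)-(5.3): decompose, threshold the detail coefficients,
    reconstruct.  Soft thresholding stands in for the patent's improved
    semi-soft threshold function."""
    img = img.astype(np.float64)
    coeffs = pywt.wavedec2(img, wavelet, level=level)            # (5.1) decomposition
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745           # noise estimate from the finest diagonal band
    thr = sigma * np.sqrt(2 * np.log(img.size))                  # universal threshold (assumed choice)
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)   # (5.2) thresholding
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)                      # (5.3) reconstruction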
Step 6 is specifically implemented according to the following steps (an illustrative sketch of equations (9)-(14) follows):
Step (6.1), the adaptive background-maintenance algorithm is defined as follows:
CB_{n+1} = a_n × IB_n + (1 - a_n) × CB_n   (9)
where a_n is the background-update coefficient with range [0,1], CB_n is the current background-model frame, IB_n is the instant background frame, CB_{n+1} is the background-model frame of the (n+1)-th frame, and n is a positive integer;
Step (6.2), to cope with rapid scene changes in the image sequence, an instant background frame is introduced and calculated as follows:
IB_n(i,j) = F_n(i,j) if MP(i,j) = 0; IB_n(i,j) = CB_n(i,j) if MP(i,j) = 1   (10)
where MP(i,j) is the detected binary map, in which moving-region pixels have value 1 and non-moving pixels have value 0;
Step (6.3), the background-update coefficient a_n is determined by the illumination change and the moving-target situation of the current frame and the background frame, and is calculated by formula (11):
a_n = 0.9 × a_{n-1} + 0.1 × a_inst_n   (11)
where a_inst_n is the adaptive instant weight between the adjacent image frames F_n and F_{n-1} of the image sequence, defined as follows:
a_inst_n = sum_unmov_{n,n-1} / area_unmov_{n,n-1}   (12)
where sum_unmov_{n,n-1} is the grey-level change between the two corresponding successive image frames F_n and F_{n-1}, expressed as follows:
sum_unmov_{n,n-1} = Σ_{(i,j)} |F_n(i,j) - F_{n-1}(i,j)| / 256   (13)
and area_unmov_{n,n-1} is the number of pixels of the non-moving region of the current image, expressed as follows:
area_unmov_{n,n-1} = Σ_{(i,j)} (1 - MP(i,j))   (14)
where MP(i,j) ∈ MP_n ∪ MP_{n-1}, and MP_n and MP_{n-1} are the motion pixels of image F_n and image F_{n-1}, respectively.
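One possible reading of equations (9)-(14) is sketched below in Python for illustration. Restricting the sums of (13)-(14) to the non-moving pixels of the current motion map, and using only the current map MP rather than MP_n ∪ MP_{n-1}, are simplifying assumptions; the function name maintain_background is likewise not from the patent.

import numpy as np

def maintain_background(F_n, F_prev, CB_n, MP, a_prev):
    """One step-6 update.  F_n, F_prev: current and previous frames;
    CB_n: current background model; MP: binary motion map (1 = moving);
    a_prev: the previous update coefficient a_{n-1}."""
    F_n, F_prev, CB_n = (x.astype(np.float64) for x in (F_n, F_prev, CB_n))
    IB_n = np.where(MP == 0, F_n, CB_n)                       # eq. (10): instant background
    static = (MP == 0)
    sum_unmov = np.abs(F_n - F_prev)[static].sum() / 256.0    # eq. (13), over static pixels
    area_unmov = max(int(static.sum()), 1)                    # eq. (14)
    a_inst = sum_unmov / area_unmov                           # eq. (12)
    a_n = 0.9 * a_prev + 0.1 * a_inst                         # eq. (11)
    CB_next = a_n * IB_n + (1.0 - a_n) * CB_n                 # eq. (9)
    return CB_next, a_n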
The invention has the following advantage: in the novel background-subtraction moving-target detection method based on wavelet blocking, the input image is pre-processed by converting the colour image to a grey-scale image, the grey-scale image is then partitioned into blocks and the median of each block is taken as the pixel value of that block to obtain a new image; the new image is modelled with the Gaussian-mixture background-modelling method, the foreground targets are segmented from the model by background differencing to obtain a foreground detection map, the noisy foreground detection map is denoised with a wavelet-threshold denoising method, and finally an adaptive background-maintenance algorithm is applied to the result so that the background is updated dynamically in real time. This not only reduces the computational complexity of the algorithm, but also improves its detection accuracy and adaptability.
Brief description of the drawings
Fig. 1(a) is the current image of the 90th frame of scene 1 in the novel background-subtraction moving-target detection method based on wavelet blocking of the invention;
Fig. 1(b) is the binarised result of the foreground targets separated from the 90th frame of scene 1 by the method of the invention;
Fig. 2(a) is the current image of the 90th frame of scene 2;
Fig. 2(b) is the binarised result of the foreground targets separated from the 90th frame of scene 2 by the method of the invention;
Fig. 3 is the binarised foreground result obtained for the 90th frame of scene 2 with the Gaussian mixture model;
Fig. 4 is the binarised foreground result obtained for the 90th frame of scene 2 with the wavelet-based method;
Fig. 5 is the binarised foreground result obtained for the 90th frame of scene 2 with the proposed method.
Detailed description of the embodiments
The invention is described in detail below with reference to the accompanying drawings and specific embodiments.
In the novel background-subtraction moving-target detection method based on wavelet blocking of the invention, the input image sequence is processed in units of blocks: after each frame is partitioned, the median of each block is used as a pixel of a new image, and the new image is thus obtained. It should be noted that the images processed by the method of the invention are the first frame, the second frame, the third frame, ..., the n-th frame of the sequence in forward temporal order (n is a positive integer).
The novel background-subtraction moving-target detection method based on wavelet blocking of the invention is specifically implemented according to the following steps:
Step 1: input each frame of the image sequence and pre-process the input image by converting the colour image to a grey-scale image;
Step 2: partition the grey-scale image obtained in step 1 into blocks. Let each frame be M*N and each block be m*n, where M is the height of each frame, N is the width of each frame, m is the number of pixels of each block in the horizontal direction and n is the number of pixels of each block in the vertical direction; the median of each block is taken as the pixel value of that block, thereby obtaining a new image, where M, N, m and n are positive integers;
Step 3: model the new image obtained in step 2 with the Gaussian-mixture background-modelling method, specifically according to the following steps:
Step (3.1): initialise the Gaussian mixture model by defining k Gaussian distribution models for each pixel;
Step (3.2): update the parameters of the Gaussian mixture model obtained in step (3.1). When a new image is captured, its current pixel values are checked against the existing Gaussian distributions; if the absolute distance between a new pixel and one of the k Gaussian models is within D times the standard deviation, the new pixel is considered to match one or more of the Gaussian models, which is expressed as follows:
abs(u_diff(i,j,k)) <= D*sd(i,j,k)   (1)
u_diff(i,j,k) = abs(double(fr_bw(i,j)) - double(mean(i,j,k)))   (2)
where u_diff(i,j,k) is the absolute distance between the new pixel and the mean of the k-th Gaussian model, D is the deviation threshold with D = 2.5, fr_bw(i,j) is the pixel of the current image frame, mean(i,j,k) is the mean of the current image-frame pixel, i is the row index of the current image, j is the column index, k is the index of the Gaussian distribution, i and j are positive integers, and k ∈ [1,5];
If the new pixel matches the k-th Gaussian distribution, the Gaussian distribution parameters are updated with equations (3), (4) and (5):
w(i,j,k) = (1-α)*w(i,j,k) + α   (3)
mean(i,j,k) = (1-p)*mean(i,j,k) + p*double(fr_bw(i,j))   (4)
sd(i,j,k) = sqrt((1-p)*sd(i,j,k)^2 + p*(double(fr_bw(i,j)) - mean(i,j,k))^2)   (5)
where α ∈ [0,1] is the learning rate that determines the background-update speed, w(i,j,k) is the weight of the current image-frame pixel, sd(i,j,k) is the standard deviation of the current image-frame pixel, and p is the update rate, related to the other parameters by p = α/w(i,j,k); i is the row index of the current image, j is the column index, k is the index of the Gaussian distribution, w(i,j,k) is a real number, i and j are positive integers, p ∈ [0,1] and k ∈ [1,5];
If the new pixel matches none of the Gaussian distributions, a new Gaussian distribution is created to replace the existing distribution with the smallest weight; the mean of the newly created Gaussian distribution is the value of the currently observed pixel, its standard deviation is set to the initialisation maximum, its weight is set to the initialisation minimum, and the weights of the other Gaussian distributions are updated with equation (6):
w(i,j,k) = (1-α)*w(i,j,k)   (6)
where w(i,j,k) is the weight of the current image-frame pixel, α ∈ [0,1] is the learning rate that determines the background-update speed, i is the row index of the current image, j is the column index, k is the index of the Gaussian distribution, w(i,j,k) is a real number, i and j are positive integers, and k ∈ [1,5];
Step (3.3): calculate the sum of the weights; when the weight sum exceeds the empirically chosen value 0.35, the number of Gaussian distribution models is taken as the number of initialised background models;
Step (3.4): establish the background model. The k Gaussian distributions are ranked in descending order of the value of w/sd; the higher the priority, the more stable the Gaussian distribution and the better it represents the true background, and the first C Gaussian distributions establish the background model:
C = arg min_c ( Σ_{k=1}^{c} w_k > T )   (7)
where w/sd is the model ranking priority, k ∈ [1,5] is the index of the Gaussian distribution, C ∈ [1,5] is the number of Gaussian distributions at which the cumulative weight sum first exceeds the threshold, c is the maximum number of Gaussian distributions and its value is 5, T ∈ (0,1) is the threshold, and the range of w/sd is [0,1];
Step (3.5): compute the background by applying the weight sum obtained in step (3.3) to the background model obtained in step (3.4), so as to obtain a clearer background;
Step 4: segment the foreground targets from the model obtained in step 3 by background differencing to obtain the foreground detection map, specifically:
if the absolute difference between the pixel of the current image frame and the mean of every Gaussian model is larger than D times the standard deviation of that Gaussian model, the pixel is a foreground pixel; otherwise it is a background pixel. A foreground pixel satisfies:
abs(u_diff(i,j,k)) > D*sd(i,j,k)   (8)
where i is the row index of the current image, j is the column index, k is the index of the Gaussian distribution, D is the deviation threshold with D = 2.5, i and j are positive integers, and k ∈ [1,5];
Step 5: denoise the noisy foreground detection map obtained in step 4 with the wavelet-threshold denoising method to obtain the denoised foreground targets, specifically according to the following steps:
Step (5.1), wavelet decomposition of the image: a wavelet is selected and an N-level wavelet decomposition is applied to the signal obtained in step 4, where N ∈ [1,6];
Step (5.2), thresholding: the coefficients of each decomposition level of the signal from step (5.1) are quantified with the selected threshold function using the obtained threshold;
Step (5.3), wavelet reconstruction: wavelet reconstruction is performed on the coefficients thresholded in step (5.2), yielding the denoised signal;
Step 6: perform background maintenance on the image obtained in step 5 with the adaptive background-maintenance algorithm, updating the background dynamically in real time, specifically according to the following steps (a sketch chaining all six steps is given after this step):
Step (6.1), the adaptive background-maintenance algorithm is defined as follows:
CB_{n+1} = a_n × IB_n + (1 - a_n) × CB_n   (9)
where a_n is the background-update coefficient with range [0,1], CB_n is the current background-model frame, IB_n is the instant background frame, CB_{n+1} is the background-model frame of the (n+1)-th frame, and n is a positive integer;
Step (6.2), to cope with rapid scene changes in the image sequence, an instant background frame is introduced and calculated as follows:
IB_n(i,j) = F_n(i,j) if MP(i,j) = 0; IB_n(i,j) = CB_n(i,j) if MP(i,j) = 1   (10)
where MP(i,j) is the detected binary map, in which moving-region pixels have value 1 and non-moving pixels have value 0;
Step (6.3), the background-update coefficient a_n is determined by the illumination change and the moving-target situation of the current frame and the background frame, and is calculated by formula (11):
a_n = 0.9 × a_{n-1} + 0.1 × a_inst_n   (11)
where a_inst_n is the adaptive instant weight between the adjacent image frames F_n and F_{n-1} of the image sequence, defined as follows:
a_inst_n = sum_unmov_{n,n-1} / area_unmov_{n,n-1}   (12)
where sum_unmov_{n,n-1} is the grey-level change between the two corresponding successive image frames F_n and F_{n-1}, expressed as follows:
sum_unmov_{n,n-1} = Σ_{(i,j)} |F_n(i,j) - F_{n-1}(i,j)| / 256   (13)
and area_unmov_{n,n-1} is the number of pixels of the non-moving region of the current image, expressed as follows:
area_unmov_{n,n-1} = Σ_{(i,j)} (1 - MP(i,j))   (14)
where MP(i,j) ∈ MP_n ∪ MP_{n-1}, and MP_n and MP_{n-1} are the motion pixels of image F_n and image F_{n-1}, respectively.
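To show how steps 1-6 fit together, the following Python driver loop chains the illustrative helper functions sketched earlier (block_median_image, update_gmm, foreground_mask, wavelet_denoise, maintain_background). All of these names, the first-frame initialisation, and the 0.5 binarisation threshold after denoising are assumptions introduced in this description rather than elements of the patent.

import numpy as np
import cv2

def detect_moving_targets(video_path, K=3):
    """Illustrative end-to-end loop for steps 1-6, yielding one binary
    foreground map per frame; parameter choices follow the embodiments
    (3*3 blocks, 3 Gaussians, sym5 / level-3 denoising)."""
    cap = cv2.VideoCapture(video_path)
    mean = sd = w = CB = prev = None
    a_prev = 0.01                                                     # assumed initial update coefficient
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        small = block_median_image(frame, 3, 3).astype(np.float64)    # steps 1-2
        if mean is None:                                              # first-frame initialisation (assumed)
            H, W = small.shape
            mean = np.repeat(small[..., None], K, axis=-1)
            sd = np.full((H, W, K), 30.0)
            w = np.full((H, W, K), 1.0 / K)
            CB, prev = small.copy(), small.copy()
            continue
        mean, sd, w, _ = update_gmm(small, mean, sd, w)               # step 3
        MP = foreground_mask(small, mean, sd)                         # step 4
        den = wavelet_denoise(MP.astype(np.float64))                  # step 5
        MP = (den[:MP.shape[0], :MP.shape[1]] > 0.5).astype(np.uint8) # re-binarise (threshold assumed)
        CB, a_prev = maintain_background(small, prev, CB, MP, a_prev) # step 6
        prev = small
        yield MP
    cap.release()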
A quantitative assessment based on a similarity measure is carried out to verify the effectiveness of the proposed method, as follows:
let M be the detected region and N the corresponding ground truth; the similarity between M and N is then defined by formula (15).
The result is 1 when the two regions coincide and 0 when they are completely dissimilar. Formula (15) compares the detected moving targets with the ground truth and thereby verifies the effectiveness of the proposed method, as sketched below.
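Formula (15) is not reproduced in the text; the sketch below assumes the usual region-based similarity S(M, N) = |M ∩ N| / |M ∪ N| between the detected mask and the ground truth, which matches the stated behaviour (1 for identical regions, 0 for disjoint ones).

import numpy as np

def similarity(M, N):
    """Assumed form of formula (15): intersection-over-union of two
    binary masks; returns 1.0 for identical masks, 0.0 for disjoint ones."""
    M, N = M.astype(bool), N.astype(bool)
    union = np.logical_or(M, N).sum()
    return float(np.logical_and(M, N).sum()) / union if union else 1.0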
The accompanying drawings of the application show only some embodiments of the invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
The novel background-subtraction moving-target detection method based on wavelet blocking of the invention not only reduces the computational complexity of the algorithm but also improves its detection accuracy and adaptability. In addition, it is highly competitive with prior-art algorithms for moving-target detection.
Embodiment 1
The novel background-subtraction moving-target detection method based on wavelet blocking of the invention is specifically implemented according to the following steps:
Step 1: input each frame of the image sequence and pre-process the input image by converting the colour image to a grey-scale image;
Step 2: partition the grey-scale image obtained in step 1 into blocks; with a frame size of 240*320 and a block size of 3*3, each frame is divided into 80*107 blocks;
Step 3: model the new image obtained in step 2 with the Gaussian-mixture background-modelling method, specifically according to the following steps:
Step (3.1): initialise the Gaussian mixture model by defining 3 Gaussian distribution models for each pixel;
Step (3.2): update the parameters of the Gaussian mixture model obtained in step (3.1). When a new image such as Fig. 1(a) is captured, its current pixel values are checked against the existing Gaussian distributions; if the absolute distance between a new pixel and the 3rd Gaussian model is within 2.5 times the standard deviation, the new pixel is considered to match the 3rd Gaussian model, otherwise it does not match. For the pixel considered here, the test of equations (1) and (2) gives:
abs(u_diff(80,107,3)) > 2.5*sd(80,107,3)   (1)
u_diff(80,107,3) = abs(double(fr_bw(80,107)) - double(mean(80,107,3)))   (2)
where u_diff(80,107,3) is the absolute distance between the pixel in row 80, column 107 of the current image and the mean of the 3rd Gaussian model, fr_bw(80,107) is the pixel in row 80, column 107 of the current image, mean(80,107,3) is the mean of that pixel and sd(80,107,3) is its standard deviation; u_diff(80,107,3) = 139.1551, fr_bw(80,107) = 63, mean(80,107,3) = 202.1551 and sd(80,107,3) = 6.0000.
Because the absolute distance between the pixel in row 80, column 107 of the current image and the 3rd Gaussian model is outside 2.5 times the standard deviation, the pixel does not match the 3rd Gaussian distribution, so a new Gaussian distribution is created to replace the existing distribution with the smallest weight; the mean of the newly created Gaussian distribution is the value of the currently observed pixel, its standard deviation is set to the initialisation maximum, its weight is set to the initialisation minimum, and the weights of the other Gaussian distributions are updated with equation (6):
w_new(80,107,3) = (1-0.01)*w(80,107,3)   (6)
where w(80,107,3) is the weight of the pixel in row 80, column 107 of the current image, w(80,107,3) = 0.1051 and w_new(80,107,3) = 0.1040.
Step (3.3): calculate the sum of the weights; when the weight sum exceeds the empirically chosen value 0.35, the number of Gaussian distribution models is taken as the number of initialised background models;
Step (3.4): establish the background model. The 3 Gaussian distributions are ranked in descending order of their w/sd values (0.1499, 0.0319, 0.0175); the higher the priority, the more stable the Gaussian distribution and the better it represents the true background, and the first C Gaussian distributions establish the background model according to formula (7), where C is the number of Gaussian distributions at which the cumulative weight sum exceeds the threshold; here C = 2;
Step (3.5): compute the background by applying the weight sum obtained in step (3.3) to the background model obtained in step (3.4), so as to obtain a clearer background;
Step 4: segment the foreground targets from the model obtained in step 3 by background differencing to obtain the foreground detection map, specifically:
the absolute difference between the pixel of the current image frame and the mean of the 3rd Gaussian model is larger than 2.5 times the standard deviation of that Gaussian model, so the pixel is a foreground pixel, which is expressed as:
abs(u_diff(80,107,3)) > 2.5*sd(80,107,3)   (8)
where u_diff(80,107,3) is the absolute distance between the pixel in row 80, column 107 of the current image and the mean of the 3rd Gaussian model and sd(80,107,3) is the standard deviation of that pixel; u_diff(80,107,3) = 139.1551 and sd(80,107,3) = 6.0000;
Step 5: denoise the noisy foreground detection map obtained in step 4 with the wavelet-threshold denoising method to obtain the denoised foreground targets shown in Fig. 1(b), specifically according to the following steps:
Step (5.1), wavelet decomposition of the image: an N-level sym5 wavelet decomposition is applied to the signal obtained in step 4, where N = 3;
Step (5.2), thresholding: the coefficients of each decomposition level of the signal from step (5.1) are quantified with an improved semi-soft threshold function using the obtained threshold;
Step (5.3), wavelet reconstruction: wavelet reconstruction is performed on the coefficients thresholded in step (5.2), yielding the denoised signal;
Step 6: perform background maintenance on the image obtained in step 5 with the adaptive background-maintenance algorithm, updating the background dynamically in real time, specifically according to the following steps:
Step (6.1), the adaptive background-maintenance algorithm is defined as follows:
CB_91 = a_90 × IB_90 + (1 - a_90) × CB_90   (9)
where a_90 is the background-update coefficient of the 90th frame, a_90 = 0.0122; CB_90 is the background-model frame of the 90th frame, CB_90 = 60; IB_90 is the instant background frame of the 90th frame, IB_90 = 60; and CB_91 is the background-model frame of the 91st frame, CB_91 = 60;
Step (6.2), to cope with rapid scene changes in the image sequence, an instant background frame is introduced and calculated according to formula (10), where MP(80,107) is the value of the detected binary map, in which moving-region pixels have value 1 and non-moving pixels have value 0; MP(80,107) = 1 and IB_90(80,107) = 60;
Step (6.3), the background-update coefficient a_90 is determined by the illumination change and the moving-target situation of the current frame and the background frame, and is calculated by formula (11):
a_90 = 0.9 × a_89 + 0.1 × a_inst_89   (11)
where a_90 = 0.0122, a_89 = 0.0124 and a_inst_89 = 0.0101; a_inst_90 is the adaptive instant weight between the adjacent image frames F_90 and F_89 of the image sequence, defined by formula (12),
where sum_unmov_{90,89} is the grey-level change between the two corresponding successive image frames F_90 and F_89 and area_unmov_{90,89} is the number of pixels of the non-moving region of the current image, given by formulas (13) and (14),
with F_90(80,107) = 60, F_89(80,107) = 66, sum_unmov_{90,89} = 83.6289, MP(80,107) = 1, area_unmov_{90,89} = 8104 and a_inst_90 = 0.0102.
Embodiment 2
The novel background-subtraction moving-target detection method based on wavelet blocking of the invention is specifically implemented according to the following steps:
Step 1: input each frame of the image sequence and pre-process the input image by converting the colour image to a grey-scale image;
Step 2: partition the grey-scale image obtained in step 1 into blocks; with a frame size of 240*320 and a block size of 3*3, each frame is divided into 80*107 blocks;
Step 3: model the new image obtained in step 2 with the Gaussian-mixture background-modelling method, specifically according to the following steps:
Step (3.1): initialise the Gaussian mixture model by defining 3 Gaussian distribution models for each pixel;
Step (3.2): update the parameters of the Gaussian mixture model obtained in step (3.1). When a new image such as Fig. 2(a) is captured, its current pixel values are checked against the existing Gaussian distributions; if the absolute distance between a new pixel and the 3rd Gaussian model is within 2.5 times the standard deviation, the new pixel is considered to match the 3rd Gaussian model, otherwise it does not match. For the pixel considered here, the test of equations (1) and (2) gives:
abs(u_diff(80,107,3)) > 2.5*sd(80,107,3)   (1)
u_diff(80,107,3) = abs(double(fr_bw(80,107)) - double(mean(80,107,3)))   (2)
where u_diff(80,107,3) is the absolute distance between the pixel in row 80, column 107 of the current image and the mean of the 3rd Gaussian model, fr_bw(80,107) is the pixel of the current image frame, mean(80,107,3) is its mean and sd(80,107,3) is its standard deviation; u_diff(80,107,3) = 94.0196, fr_bw(80,107) = 29, mean(80,107,3) = 123.0196 and sd(80,107,3) = 6.0000.
Because the absolute distance between the pixel in row 80, column 107 of the current image and the 3rd Gaussian model is outside 2.5 times the standard deviation, the pixel does not match the 3rd Gaussian distribution, so a new Gaussian distribution is created to replace the existing distribution with the smallest weight; the mean of the newly created Gaussian distribution is the value of the currently observed pixel, its standard deviation is set to the initialisation maximum, its weight is set to the initialisation minimum, and the weights of the other Gaussian distributions are updated with equation (6):
w_new(80,107,3) = (1-0.01)*w(80,107,3)   (6)
where w(80,107,3) is the weight of the pixel in row 80, column 107 of the current image, w(80,107,3) = 0.1232 and w_new(80,107,3) = 0.1219.
Step (3.3): calculate the sum of the weights; when the weight sum exceeds the empirically chosen value (0.35), the number of Gaussian distribution models is taken as the number of initialised background models;
Step (3.4): establish the background model. The 3 Gaussian distributions are ranked in descending order of their w/sd values (0.2854, 0.0205, 0.0205); the higher the priority, the more stable the Gaussian distribution and the better it represents the true background, and the first C Gaussian distributions establish the background model according to formula (7), where C is the number of Gaussian distributions at which the cumulative weight sum exceeds the threshold; here C = 2;
Step (3.5): compute the background by applying the weight sum obtained in step (3.3) to the background model obtained in step (3.4), so as to obtain a clearer background;
Step 4: segment the foreground targets from the model obtained in step 3 by background differencing to obtain the foreground detection map, specifically:
the absolute difference between the pixel of the current image frame and the mean of the 3rd Gaussian model is larger than 2.5 times the standard deviation of that Gaussian model, so the pixel is a foreground pixel, which is expressed as:
abs(u_diff(80,107,3)) > 2.5*sd(80,107,3)   (8)
where u_diff(80,107,3) is the absolute distance between the pixel in row 80, column 107 of the current image and the mean of the 3rd Gaussian model and sd(80,107,3) is the standard deviation of that pixel; u_diff(80,107,3) = 94.0196 and sd(80,107,3) = 6.0000;
Step 5: denoise the noisy foreground detection map obtained in step 4 with the wavelet-threshold denoising method to obtain the foreground targets shown in Fig. 2(b), specifically according to the following steps:
Step (5.1), wavelet decomposition of the image: an N-level sym5 wavelet decomposition is applied to the signal obtained in step 4, where N = 3;
Step (5.2), thresholding: the coefficients of each decomposition level of the signal from step (5.1) are quantified with an improved semi-soft threshold function using the obtained threshold;
Step (5.3), wavelet reconstruction: wavelet reconstruction is performed on the coefficients thresholded in step (5.2), yielding the denoised signal;
Step 6: perform background maintenance on the image obtained in step 5 with the adaptive background-maintenance algorithm, updating the background dynamically in real time, specifically according to the following steps:
Step (6.1), the adaptive background-maintenance algorithm is defined as follows:
CB_91 = a_90 × IB_90 + (1 - a_90) × CB_90   (9)
where a_90 is the background-update coefficient of the 90th frame, a_90 = 0.0020; CB_90 is the background-model frame of the 90th frame, CB_90 = 31; IB_90 is the instant background frame of the 90th frame, IB_90 = 31; and CB_91 is the background-model frame of the 91st frame, CB_91 = 31;
Step (6.2), to cope with rapid scene changes in the image sequence, an instant background frame is introduced and calculated according to formula (10), where MP(80,107) is the value of the detected binary map, in which moving-region pixels have value 1 and non-moving pixels have value 0; MP(80,107) = 1 and IB_90(80,107) = 31;
Step (6.3), the background-update coefficient a_90 is determined by the illumination change and the moving-target situation of the current frame and the background frame, and is calculated by formula (11):
a_90 = 0.9 × a_89 + 0.1 × a_inst_89   (11)
where a_90 = 0.0020, a_89 = 0.0020 and a_inst_89 = 0.0017; a_inst_90 is the adaptive instant weight between the adjacent image frames F_90 and F_89 of the image sequence, defined by formula (12),
where sum_unmov_{90,89} is the grey-level change between the two corresponding successive image frames F_90 and F_89 and area_unmov_{90,89} is the number of pixels of the non-moving region of the current image, given by formulas (13) and (14),
with F_90(80,107) = 31, F_89(80,107) = 30, sum_unmov_{90,89} = 14.8320, MP(80,107) = 1, area_unmov_{90,89} = 8180 and a_inst_90 = 0.0018.
A quantitative assessment based on the similarity measure is carried out to verify the effectiveness of the proposed method, as follows:
let M be the detected region and N the corresponding ground truth; the similarity between M and N is then defined by formula (15).
The result is 1 when the two regions coincide and 0 when they are completely dissimilar. Formula (15) compares the detected moving targets with the ground truth and thereby verifies the effectiveness of the proposed method.
A quantitative assessment based on the similarity measure is carried out for the image sequence of scene 2 with the Gaussian mixture model (Fig. 3), the wavelet-based method (Fig. 4) and the proposed method (Fig. 5); the corresponding results are shown in the following table:
Method                      S(M, N) (%)
Proposed method             71.08
Gaussian mixture model      59.22
Wavelet-based               66.56
From the results obtained above, the proposed method is effective.
The novel background-subtraction moving-target detection method based on wavelet blocking of the invention not only reduces the computational complexity of the algorithm but also improves its adaptability and performance. In addition, it is competitive with prior-art algorithms for moving-target detection.

Claims (5)

  1. A novel background-subtraction moving-target detection method based on wavelet blocking, characterised in that it is specifically implemented according to the following steps:
    Step 1: input each frame of the image sequence and pre-process the input image by converting the colour image to a grey-scale image;
    Step 2: partition the grey-scale image obtained in step 1 into blocks. Let each frame be M*N and each block be m*n, where M is the height of each frame, N is the width of each frame, m is the number of pixels of each block in the horizontal direction and n is the number of pixels of each block in the vertical direction; the median of each block is taken as the pixel value of that block, thereby obtaining a new image, where M, N, m and n are positive integers;
    Step 3: model the new image obtained in step 2 with the Gaussian-mixture background-modelling method;
    Step 4: segment the foreground targets from the model obtained in step 3 by background differencing, obtaining a foreground detection map;
    Step 5: denoise the noisy foreground detection map obtained in step 4 with a wavelet-threshold denoising method;
    Step 6: perform background maintenance on the image obtained in step 5 with an adaptive background-maintenance algorithm, updating the background dynamically in real time.
  2. The novel background-subtraction moving-target detection method based on wavelet blocking according to claim 1, characterised in that step 3 is specifically implemented according to the following steps:
    Step (3.1): initialise the Gaussian mixture model by defining k Gaussian distribution models for each pixel;
    Step (3.2): update the parameters of the Gaussian mixture model obtained in step (3.1). When a new image is captured, its current pixel values are checked against the existing Gaussian distributions; if the absolute distance between a new pixel and one of the k Gaussian models is within D times the standard deviation, the new pixel is considered to match one or more of the Gaussian models, which is expressed as follows:
    abs(u_diff(i,j,k)) <= D*sd(i,j,k)   (1)
    u_diff(i,j,k) = abs(double(fr_bw(i,j)) - double(mean(i,j,k)))   (2)
    where u_diff(i,j,k) is the absolute distance between the new pixel and the mean of the k-th Gaussian model, D is the deviation threshold with D = 2.5, fr_bw(i,j) is the pixel of the current image frame, mean(i,j,k) is the mean of the current image-frame pixel, i is the row index of the current image, j is the column index, k is the index of the Gaussian distribution, i and j are positive integers, and k ∈ [1,5];
    If the new pixel matches the k-th Gaussian distribution, the Gaussian distribution parameters are updated with equations (3), (4) and (5):
    w(i,j,k) = (1-α)*w(i,j,k) + α   (3)
    mean(i,j,k) = (1-p)*mean(i,j,k) + p*double(fr_bw(i,j))   (4)
    sd(i,j,k) = sqrt((1-p)*sd(i,j,k)^2 + p*(double(fr_bw(i,j)) - mean(i,j,k))^2)   (5)
    where α ∈ [0,1] is the learning rate that determines the background-update speed, w(i,j,k) is the weight of the current image-frame pixel, sd(i,j,k) is the standard deviation of the current image-frame pixel, and p is the update rate, related to the other parameters by p = α/w(i,j,k); w(i,j,k) is a real number, i and j are positive integers, p ∈ [0,1] and k ∈ [1,5];
    If the new pixel matches none of the Gaussian distributions, a new Gaussian distribution is created to replace the existing distribution with the smallest weight; the mean of the newly created Gaussian distribution is the value of the currently observed pixel, its standard deviation is set to the initialisation maximum, its weight is set to the initialisation minimum, and the weights of the other Gaussian distributions are updated with equation (6):
    w(i,j,k) = (1-α)*w(i,j,k)   (6)
    where w(i,j,k) is the weight of the current image-frame pixel, α ∈ [0,1] is the learning rate that determines the background-update speed, i is the row index of the current image, j is the column index, k is the index of the Gaussian distribution, w(i,j,k) is a real number, i and j are positive integers, and k ∈ [1,5];
    Step (3.3): calculate the sum of the weights; when the weight sum exceeds the empirically chosen value 0.35, the number of Gaussian distribution models is taken as the number of initialised background models;
    Step (3.4): establish the background model. The k Gaussian distributions are ranked in descending order of the value of w/sd; the higher the priority, the more stable the Gaussian distribution and the better it represents the true background, and the first C Gaussian distributions establish the background model:
    C = arg min_c ( Σ_{k=1}^{c} w_k > T )   (7)
    where w/sd is the model ranking priority, k ∈ [1,5] is the index of the Gaussian distribution, C ∈ [1,5] is the number of Gaussian distributions at which the cumulative weight sum first exceeds the threshold, c is the maximum number of Gaussian distributions and its value is 5, T ∈ (0,1) is the threshold, and the range of w/sd is [0,1];
    Step (3.5): compute the background by applying the weight sum obtained in step (3.3) to the background model obtained in step (3.4), so as to obtain a clearer background.
  3. The novel background-subtraction moving-target detection method based on wavelet blocking according to claim 1, characterised in that step 4 is specifically:
    if the absolute difference between the pixel of the current image frame and the mean of every Gaussian model is larger than D times the standard deviation of that Gaussian model, the pixel is a foreground pixel; otherwise it is a background pixel. A foreground pixel satisfies:
    abs(u_diff(i,j,k)) > D*sd(i,j,k)   (8)
    where i is the row index of the current image, j is the column index, k is the index of the Gaussian distribution, D is the deviation threshold with D = 2.5, i and j are positive integers, and k ∈ [1,5].
  4. The novel background-subtraction moving-target detection method based on wavelet blocking according to claim 1, characterised in that step 5 is specifically implemented according to the following steps:
    Step (5.1), wavelet decomposition of the image: a wavelet is selected and an N-level wavelet decomposition is applied to the signal obtained in step 4, where N ∈ [1,6];
    Step (5.2), thresholding: the coefficients of each decomposition level of the signal from step (5.1) are quantified with an improved semi-soft threshold function using the obtained threshold;
    Step (5.3), wavelet reconstruction: wavelet reconstruction is performed on the coefficients thresholded in step (5.2), yielding the denoised signal.
  5. The novel background-subtraction moving-target detection method based on wavelet blocking according to claim 1, characterised in that step 6 is specifically implemented according to the following steps:
    Step (6.1), the adaptive background-maintenance algorithm is defined as follows:
    CB_{n+1} = a_n × IB_n + (1 - a_n) × CB_n   (9)
    where a_n is the background-update coefficient with range [0,1], CB_n is the current background-model frame, IB_n is the instant background frame, CB_{n+1} is the background-model frame of the (n+1)-th frame, and n is a positive integer;
    Step (6.2), to cope with rapid scene changes in the image sequence, an instant background frame is introduced and calculated as follows:
    IB_n(i,j) = F_n(i,j) if MP(i,j) = 0; IB_n(i,j) = CB_n(i,j) if MP(i,j) = 1   (10)
    where MP(i,j) is the detected binary map, in which moving-region pixels have value 1 and non-moving pixels have value 0;
    Step (6.3), the background-update coefficient a_n is determined by the illumination change and the moving-target situation of the current frame and the background frame, and is calculated by formula (11):
    a_n = 0.9 × a_{n-1} + 0.1 × a_inst_n   (11)
    where a_inst_n is the adaptive instant weight between the adjacent image frames F_n and F_{n-1} of the image sequence, defined as follows:
    a_inst_n = sum_unmov_{n,n-1} / area_unmov_{n,n-1}   (12)
    where sum_unmov_{n,n-1} is the grey-level change between the two corresponding successive image frames F_n and F_{n-1}, expressed as follows:
    sum_unmov_{n,n-1} = Σ_{(i,j)} |F_n(i,j) - F_{n-1}(i,j)| / 256   (13)
    area_unmov_{n,n-1} is the number of pixels of the non-moving region of the current image, expressed as follows:
    area_unmov_{n,n-1} = Σ_{(i,j)} (1 - MP(i,j))   (14)
    where MP(i,j) ∈ MP_n ∪ MP_{n-1}, and MP_n and MP_{n-1} are the motion pixels of image F_n and image F_{n-1}, respectively.
CN201710452358.1A 2017-06-15 2017-06-15 A kind of novel background subtraction moving target detecting method based on wavelet blocks Pending CN107368784A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710452358.1A CN107368784A (en) 2017-06-15 2017-06-15 A kind of novel background subtraction moving target detecting method based on wavelet blocks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710452358.1A CN107368784A (en) 2017-06-15 2017-06-15 A kind of novel background subtraction moving target detecting method based on wavelet blocks

Publications (1)

Publication Number Publication Date
CN107368784A true CN107368784A (en) 2017-11-21

Family

ID=60306388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710452358.1A Pending CN107368784A (en) 2017-06-15 2017-06-15 A kind of novel background subtraction moving target detecting method based on wavelet blocks

Country Status (1)

Country Link
CN (1) CN107368784A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008008045A1 (en) * 2006-07-11 2008-01-17 Agency For Science, Technology And Research Method and system for context-controlled background updating
CN101621615A (en) * 2009-07-24 2010-01-06 南京邮电大学 Self-adaptive background modeling and moving target detecting method
CN101719216A (en) * 2009-12-21 2010-06-02 西安电子科技大学 Movement human abnormal behavior identification method based on template matching
CN102147861A (en) * 2011-05-17 2011-08-10 北京邮电大学 Moving target detection method for carrying out Bayes judgment based on color-texture dual characteristic vectors
CN103559725A (en) * 2013-08-09 2014-02-05 中国地质大学(武汉) Wireless sensor node optimization selection method orientated at visual tracking
CN105046683A (en) * 2014-12-31 2015-11-11 北京航空航天大学 Object detection method based on adaptive-parameter-adjustment Gaussian mixture model

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108320299A (en) * 2017-12-28 2018-07-24 广州万威伟创网络科技有限公司 A kind of target tracking algorism based on motor behavior analysis
CN109993767A (en) * 2017-12-28 2019-07-09 北京京东尚科信息技术有限公司 Image processing method and system
CN108550163A (en) * 2018-04-19 2018-09-18 湖南理工学院 Moving target detecting method in a kind of complex background scene
CN112655197A (en) * 2018-08-31 2021-04-13 佳能株式会社 Image pickup apparatus using motion-dependent pixel combination
CN109949337A (en) * 2019-04-11 2019-06-28 新疆大学 Moving target detecting method and device based on Gaussian mixture model-universal background model

Similar Documents

Publication Publication Date Title
CN107368784A (en) A kind of novel background subtraction moving target detecting method based on wavelet blocks
CN104408460B (en) A kind of lane detection and tracking detection method
CN106875395B (en) Super-pixel-level SAR image change detection method based on deep neural network
CN102324030B (en) Target tracking method and system based on image block characteristics
CN105809715B (en) A kind of visual movement object detection method adding up transformation matrices based on interframe
CN107194924A (en) Expressway foggy-dog visibility detecting method based on dark channel prior and deep learning
CN105787482A (en) Specific target outline image segmentation method based on depth convolution neural network
CN105261037A (en) Moving object detection method capable of automatically adapting to complex scenes
CN104574439A (en) Kalman filtering and TLD (tracking-learning-detection) algorithm integrated target tracking method
CN105931220A (en) Dark channel experience and minimal image entropy based traffic smog visibility detection method
CN108665485A (en) A kind of method for tracking target merged with twin convolutional network based on correlation filtering
CN103077539A (en) Moving object tracking method under complicated background and sheltering condition
CN108876820B (en) Moving target tracking method under shielding condition based on mean shift
CN108460481B (en) Unmanned aerial vehicle reconnaissance target evolution rule prediction method based on recurrent neural network
CN107229920B (en) Behavior identification method based on integration depth typical time warping and related correction
CN103488993A (en) Crowd abnormal behavior identification method based on FAST
CN113409267B (en) Pavement crack detection and segmentation method based on deep learning
CN106709933B (en) Motion estimation method based on unsupervised learning
CN103258332A (en) Moving object detection method resisting illumination variation
CN106447674A (en) Video background removing method
CN110335294A (en) Mine water pump house leakage detection method based on frame difference method Yu 3D convolutional neural networks
CN104796582A (en) Video image denoising and enhancing method and device based on random ejection retinex
CN103456030A (en) Target tracking method based on scattering descriptor
CN109916388A (en) Fiber Optic Gyroscope Temperature Drift compensation method based on wavelet de-noising and neural network
CN104318559A (en) Quick feature point detecting method for video image matching

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20171121