CN105139426A - Video moving object detection method based on non-down-sampling wavelet transformation and LBP - Google Patents


Info

Publication number
CN105139426A
CN105139426A (application CN201510574247.9A)
Authority
CN
China
Prior art keywords: lbp, pixel, pixels, block, image
Prior art date
Legal status: Granted
Application number
CN201510574247.9A
Other languages: Chinese (zh)
Other versions: CN105139426B (en)
Inventor
赵亚琴
陈越
Current Assignee
Jiangsu Zhongzheng Huitest Technology Co ltd
Original Assignee
Nanjing Forestry University
Priority date
Filing date
Publication date
Application filed by Nanjing Forestry University
Priority to CN201510574247.9A
Publication of CN105139426A
Application granted
Publication of CN105139426B
Status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides a video moving object detection method based on non-down-sampling (undecimated) wavelet transform and LBP. The method comprises the steps of: establishing an initial background image; applying the non-down-sampling wavelet transform to obtain wavelet sub-images; computing the LBP binary feature vector of every pixel in every wavelet sub-image; dividing each wavelet sub-image into non-overlapping blocks of N×N pixels and computing the texture difference between corresponding pixel blocks of the background image and the current frame; judging, against set thresholds, whether each block is a foreground block or a background block; and using the pixels marked as background blocks to update the background, generating the background image for moving object detection in the next frame. By combining the non-down-sampling wavelet transform with LBP, the method avoids the shift variance introduced by decimated wavelet transforms; the computation is simple, the running speed is high, and the method has good application prospects.

Description

Video moving object detection method based on undecimated wavelet transform and LBP
Technical field
The present invention relates to a method for detecting moving objects in video, and in particular to a video moving object detection method that combines the undecimated wavelet transform with LBP.
Background art
Video moving object detection refers to segmenting and extracting the regions of motion from a video sequence captured by a stationary camera, i.e. detecting the foreground moving objects against the background of the video frames. Video moving object segmentation is a key link in many computer vision systems and is widely used, for example in object tracking, three-dimensional reconstruction for free-viewpoint video, and complex behavior recognition. Many researchers have therefore studied foreground moving object detection. Most algorithms detect foreground objects from the color and intensity of pixels, but such algorithms are helpless for videos in which the foreground object and the background have similar colors. Some researchers therefore extract texture features such as LBP (Local Binary Pattern) to improve detection performance. However, when the foreground object and the background region around it are both uniformly distributed, LBP cannot detect the foreground well, and LBP cannot capture fine, smooth textures. Although existing algorithms use color, intensity, texture, gradient and edge information to detect moving targets, they cannot accurately detect foreground objects that are similar in color to the background and have fine, smooth textures, because the color and texture features (such as LBP) of the original image alone cannot distinguish such foreground objects from the background.
LBP texture description algorithms based on the wavelet transform have been applied to X-ray image classification, face recognition and palmprint recognition. These algorithms all extract LBP histograms from the frequency sub-images of a down-sampled (decimated) wavelet transform as texture features. Because the decimated wavelet transform lacks translation invariance, small shifts of the image content change the wavelet coefficients across the decomposition levels.
Summary of the invention
The technical problem to be solved by the present invention is that existing video foreground moving object detection based on the down-sampled (decimated) wavelet transform suffers from shift variance, and its computation is complex and slow.
In order to solve the above technical problem, the invention provides a video moving object detection method based on the undecimated wavelet transform and LBP, characterized in that it comprises the following steps:
Step 1, establish an initial background image from the first few or first few tens of video frames, which contain no foreground moving object;
Step 2, apply an L-level non-down-sampling (undecimated) wavelet transform to the initial background image and to the current image, obtaining the wavelet sub-images of the background image and of the current image;
Step 3, compute the LBP binary feature vector of every pixel in every wavelet sub-image;
Step 4, divide each wavelet sub-image into non-overlapping blocks of N×N pixels and compute the texture difference between the corresponding pixel blocks of the background image and the current frame, as follows:
Step 4.1, for the background image and the current frame, compute the Hamming distance between the LBP binary feature vectors of the background pixel and the current-frame pixel at the same coordinate position in the same-direction frequency sub-image of the same wavelet decomposition level;
Step 4.2, for the pixel at each coordinate position, take the mean of its Hamming distances over the M frequency sub-images as the Hamming distance of that pixel, thereby obtaining a matrix whose elements are the per-pixel Hamming distances;
Step 4.3, divide the matrix obtained in step 4.2 into non-overlapping blocks of N×N pixels and take the mean of all elements in each block as the Hamming distance of that block;
Step 5, judge whether each non-overlapping pixel block satisfies the foreground-block condition: if the Hamming distance of the block is greater than or equal to the set distance threshold and the number of pixels judged as background in it is greater than the set quantity threshold, mark the block as a foreground block; otherwise mark it as a background block;
Step 6, use the pixels marked as background blocks to update the background, generating the background image for moving object detection in the next frame.
The invention adopts the undecimated wavelet transform in place of the existing down-sampled wavelet transform, so the transform is shift-invariant and better suited to the accurate detection of local texture features. It also adopts an LBP binary texture description in place of the existing LBP histogram description: the LBP binary number itself is used as the texture feature, the fine local texture difference at pixel level is measured with the Hamming distance, the image is then divided into pixel blocks and block-level texture statistics are computed, so that the difference between background texture and foreground texture is measured step by step from fine granularity to coarse granularity. This not only reduces the computational complexity but also greatly improves the running speed and accuracy.
As a further limitation of the present invention, the initial background image in step 1 is established by taking the gray-level mean of each pixel over the first few or first few tens of frames as the pixel value of the initial background image. Using the gray-level mean as the pixel value makes it quick and easy to obtain a suitable background image.
As a further limitation of the present invention, in step 2 an L-level non-down-sampling wavelet transform is applied to the initial background image and to the current image, yielding the wavelet sub-images of the background image and of the current image. Let $I_t$ be the current frame image at time $t$ and $B_t$ its background image; the undecimated wavelet transform of $I_t$ yields its low-frequency sub-image and high-frequency sub-images, indexed by the decomposition level $s = 1, 2, \ldots, S$ (the number of wavelet decomposition levels; $S = 3$ in this method) and by $o = 1, 2, 3, 4$, which denote the low-frequency channel and the horizontal, vertical and diagonal high-frequency channels respectively.
As a further limitation of the present invention, in step 2 the value of L ranges from 2 to 4, and in step 4.2 the value of M ranges from 7 to 13: M is 7 when L is 2, 10 when L is 3, and 13 when L is 4 (i.e. M = 3L + 1, the single low-frequency sub-image plus three high-frequency sub-images per decomposition level). Using the mean Hamming distance over these 7 to 13 frequency sub-images as the Hamming distance of the pixel at each coordinate position improves accuracy while preserving efficiency, avoids accidental errors, and gives higher reliability.
As a further limitation of the present invention, in step 4 the value of N is 16: each wavelet sub-image is divided into non-overlapping blocks of 16×16 pixels and the texture difference between the corresponding pixel blocks of the background image and the current frame is computed as follows.
Step 4.1, for the background image and the current frame, compute the Hamming distance between the LBP binary feature vectors of the background pixel and the current-frame pixel at the same coordinate position in the same-direction frequency sub-image of the same wavelet decomposition level.
In step 4.2, with M = 10, for the pixel at each coordinate position take the mean of its Hamming distances over the 10 frequency sub-images as the Hamming distance of that pixel, thereby obtaining a matrix whose elements are the per-pixel Hamming distances. Let $LBP_m(i,j,I_t)$ be the LBP feature vector of the pixel at coordinates $(i,j)$ in the $m$-th wavelet sub-image of the current frame $I_t$, and $LBP_m(i,j,B_t)$ the LBP feature vector of the pixel at $(i,j)$ in the $m$-th wavelet sub-image of the current background image $B_t$. The LBP is computed over the 8-neighborhood of each pixel, so the LBP binary feature vector is 8-dimensional. Let $LBP_m(i,j,I_t) = \{LBP_m^I(1), LBP_m^I(2), \ldots, LBP_m^I(8)\}$ and likewise $LBP_m(i,j,B_t) = \{LBP_m^B(1), LBP_m^B(2), \ldots, LBP_m^B(8)\}$; then the Hamming distance between the LBP binary feature vectors $LBP(i,j,I_t)$ and $LBP(i,j,B_t)$ of the current frame $I_t$ and the background image $B_t$ is:
$$HM(i,j,I_t) = \sum_{m=1}^{10} \sum_{k=1}^{8} \delta(k) \qquad (1)$$
where
$$\delta(k) = \begin{cases} 1, & LBP_m^I(k) \neq LBP_m^B(k) \\ 0, & LBP_m^I(k) = LBP_m^B(k) \end{cases}, \qquad k = 1, 2, \ldots, 8 \qquad (2)$$
This yields the matrix $HM(t)$ whose elements are the per-pixel Hamming distances.
Step 4.3, divide the matrix $HM(t)$ obtained in step 4.2 into non-overlapping blocks of 16×16 pixels and take the mean of all elements in each block as the Hamming distance of that block:
$$HM(I_t,n) = \frac{1}{256} \sum_{j=1}^{16} \sum_{i=1}^{16} HM(i,j,I_t) \qquad (5)$$
where $n$ indexes the non-overlapping pixel block (the sums run over the 16×16 pixels of block $n$).
As a further limitation of the present invention, in step 5 each non-overlapping pixel block is tested against the foreground-block condition: if the Hamming distance of the block is greater than or equal to the set distance threshold and the number of pixels judged as background in it is greater than the set quantity threshold $\lambda_2$, the block is marked as a foreground block; otherwise it is marked as a background block. For each element $HM(i,j,I_t)$ of the matrix $HM(t)$, the mean at coordinate position $(i,j)$ is computed with the following formula:
$$\overline{HM}(i,j) = \frac{1}{t-1} \sum_{k=1}^{t-1} HM(i,j,I_k) \qquad (3)$$
If pixel $(i,j)$ satisfies the following condition, it is judged to be a background pixel:
$$HM(i,j,I_t) \ge \alpha_1 \,\overline{HM}(i,j) \quad \text{and} \quad HM(i,j,I_t) > \lambda_1 \qquad (4)$$
where $\alpha_1$ and $\lambda_1$ are adjustment parameters set to $1.5 \le \alpha_1 \le 2.5$ and $14 \le \lambda_1 \le 20$.
For each 16×16 pixel block of the Hamming distance matrix $HM(t)$ of the current frame $I_t$, the mean over the preceding $t-1$ blocks at the same position is computed with the following formula:
$$\overline{HM}(I_t,n) = \frac{1}{t-1} \sum_{k=1}^{t-1} HM(I_k,n) \qquad (6)$$
If the current block satisfies the following condition, it is judged to be a background block:
$$HM(I_t,n) \ge \alpha_2 \,\overline{HM}(I_t,n) \quad \text{and} \quad NP\_B(n) \ge \lambda_2 \qquad (7)$$
where $NP\_B(n)$ is the number of pixels judged as background in the current block, and $\alpha_2$ and $\lambda_2$ are adjustment parameters set to $1.1 \le \alpha_2 \le 1.3$ and $60 \le \lambda_2 \le 90$.
The beneficial effects of the present invention are: (1) the undecimated wavelet transform replaces the existing down-sampled wavelet transform, so the transform is shift-invariant and better suited to the accurate detection of local texture features; (2) the LBP binary texture description replaces the existing LBP histogram description: the LBP binary number is used directly as the texture feature, the fine local texture difference at pixel level is measured with the Hamming distance, the image is then divided into pixel blocks and block-level texture statistics are computed, so that the difference between background and foreground texture is measured step by step from fine to coarse granularity, which not only reduces the computational complexity but also greatly improves the running speed and accuracy.
Description of the drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 is the structural diagram of the undecimated wavelet transform of the present invention.
Detailed description of the embodiments
As shown in Fig. 1, in the video moving object detection method based on the undecimated wavelet transform and LBP of the present invention, the video is a compressed or uncompressed video captured with a camera, and its first few or first few tens of frames should be background frames containing no foreground moving object, so that the initial background image can be established. The method comprises the following steps:
Step 1, establish the initial background image from the first few or first few tens of frames, which contain no foreground moving object: the gray-level mean of each pixel over these frames is taken as the pixel value of the initial background image.
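A minimal sketch of this initialization (assuming the frames are already available as grayscale NumPy arrays; the function and variable names are illustrative, not taken from the patent):

```python
import numpy as np

def init_background(frames):
    """Step 1 (sketch): average the gray level of each pixel over the initial
    foreground-free frames to obtain the initial background image."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)

# e.g. background = init_background(first_frames)  # first_frames: list of gray images
```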
Step 2, apply an L-level non-down-sampling wavelet transform to the initial background image and to the current image, obtaining the wavelet sub-images of the background image and of the current image. Let $I_t$ be the current frame image at time $t$ and $B_t$ its background image; the undecimated wavelet transform of $I_t$ yields its low-frequency sub-image and high-frequency sub-images, indexed by the decomposition level $s = 1, 2, \ldots, S$ ($S = 3$ in the present invention) and by $o = 1, 2, 3, 4$, which denote the low-frequency channel (o = 1) and the horizontal (o = 2), vertical (o = 3) and diagonal (o = 4) high-frequency channels respectively. The structure of the undecimated wavelet transform is shown in Fig. 2.
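The patent does not prescribe a particular wavelet or implementation of the undecimated transform; as one possible realization, the stationary wavelet transform of the PyWavelets library can produce the sub-images. The choice of the Haar wavelet and the cropping to a multiple of 2^L are assumptions of this sketch:

```python
import numpy as np
import pywt  # PyWavelets; pywt.swt2 computes a stationary (undecimated) 2-D transform

def nondecimated_subimages(image, levels=3, wavelet='haar'):
    """Step 2 (sketch): L-level undecimated decomposition of one gray image.
    Collects the horizontal, vertical and diagonal high-frequency sub-images of
    every level plus one low-frequency sub-image, i.e. M = 3L + 1 sub-images,
    all the same size as the (cropped) input."""
    h, w = image.shape
    h -= h % (2 ** levels)               # swt2 needs sides divisible by 2**levels
    w -= w % (2 ** levels)
    coeffs = pywt.swt2(np.asarray(image[:h, :w], dtype=np.float64), wavelet, level=levels)
    subimages = []
    for approx, (hor, ver, diag) in coeffs:
        subimages.extend([hor, ver, diag])
    # keep a single low-frequency (approximation) sub-image; which list entry holds
    # the coarsest level depends on the PyWavelets version, so check its documentation
    subimages.append(coeffs[0][0])
    return subimages
```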
Step 3, compute the LBP (Local Binary Pattern) binary feature vector of every pixel in every wavelet sub-image.
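A sketch of the 8-neighborhood LBP binary feature vector for one sub-image; the patent only specifies an 8-neighborhood and an 8-dimensional binary vector, so the edge padding at the borders and the clockwise neighbor order are choices of this sketch:

```python
import numpy as np

def lbp_binary(sub):
    """Step 3 (sketch): 8-neighborhood LBP of every pixel of one wavelet
    sub-image, returned as an (H, W, 8) array of 0/1 bits, i.e. the
    8-dimensional binary feature vector of each pixel."""
    padded = np.pad(np.asarray(sub, dtype=np.float64), 1, mode='edge')
    center = padded[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise neighbors
    bits = [(padded[1 + di:padded.shape[0] - 1 + di,
                    1 + dj:padded.shape[1] - 1 + dj] >= center).astype(np.uint8)
            for di, dj in offsets]
    return np.stack(bits, axis=-1)
```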
Step 4, divide each wavelet sub-image into non-overlapping blocks of 16×16 pixels and compute the texture difference between the corresponding pixel blocks of the background image and the current frame, as follows:
Step 4.1, for the background image and the current frame, compute the Hamming distance between the LBP binary feature vectors of the background pixel and the current-frame pixel at the same coordinate position in the same-direction frequency sub-image of the same wavelet decomposition level;
Step 4.2, with M = 10, for the pixel at each coordinate position take the mean of its Hamming distances over the 10 frequency sub-images as the Hamming distance of that pixel, thereby obtaining a matrix whose elements are the per-pixel Hamming distances. Let $LBP_m(i,j,I_t)$ be the LBP feature vector of the pixel at coordinates $(i,j)$ in the $m$-th wavelet sub-image of the current frame $I_t$, and $LBP_m(i,j,B_t)$ the LBP feature vector of the pixel at $(i,j)$ in the $m$-th wavelet sub-image of the current background image $B_t$. The LBP is computed over the 8-neighborhood of each pixel, so the LBP binary feature vector is 8-dimensional. Let $LBP_m(i,j,I_t) = \{LBP_m^I(1), LBP_m^I(2), \ldots, LBP_m^I(8)\}$ and likewise $LBP_m(i,j,B_t) = \{LBP_m^B(1), LBP_m^B(2), \ldots, LBP_m^B(8)\}$; then the Hamming distance between the LBP binary feature vectors $LBP(i,j,I_t)$ and $LBP(i,j,B_t)$ of the current frame $I_t$ and the background image $B_t$ is:
$$HM(i,j,I_t) = \sum_{m=1}^{10} \sum_{k=1}^{8} \delta(k) \qquad (1)$$
where
$$\delta(k) = \begin{cases} 1, & LBP_m^I(k) \neq LBP_m^B(k) \\ 0, & LBP_m^I(k) = LBP_m^B(k) \end{cases}, \qquad k = 1, 2, \ldots, 8 \qquad (2)$$
This yields the matrix $HM(t)$ whose elements are the per-pixel Hamming distances.
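As an illustration of formulas (1) and (2), a sketch of the per-pixel Hamming distance matrix HM(t), assuming the LBP bits of each of the M sub-images have been computed as (H, W, 8) arrays (the function name is ours):

```python
import numpy as np

def hamming_matrix(cur_lbp, bg_lbp):
    """Steps 4.1-4.2 (sketch): per-pixel Hamming distance between the LBP binary
    feature vectors of the current frame and of the background, accumulated over
    the M wavelet sub-images as in formulas (1)-(2).
    cur_lbp, bg_lbp: lists of (H, W, 8) bit arrays, one per sub-image."""
    hm = np.zeros(cur_lbp[0].shape[:2], dtype=np.float64)
    for cur_bits, bg_bits in zip(cur_lbp, bg_lbp):
        hm += np.count_nonzero(cur_bits != bg_bits, axis=-1)   # sum of delta(k) over k
    return hm   # the matrix HM(t): one Hamming distance per pixel
```

Note that formula (1) as printed sums over the M sub-images, while the running text speaks of their mean; the two differ only by the constant factor 1/M, which can be absorbed into the thresholds of step 5.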
Step 4.3, divide the matrix $HM(t)$ obtained in step 4.2 into non-overlapping blocks of 16×16 pixels and take the mean of all elements in each block as the Hamming distance of that block; the Hamming distance of the $n$-th non-overlapping pixel block is computed with formula (5):
$$HM(I_t,n) = \frac{1}{256} \sum_{j=1}^{16} \sum_{i=1}^{16} HM(i,j,I_t) \qquad (5)$$
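A sketch of the block averaging of formula (5); the reshape assumes the sides of HM(t) are multiples of the block size, otherwise the border is cropped:

```python
import numpy as np

def block_mean(hm, n=16):
    """Step 4.3 (sketch): average HM(t) over non-overlapping n x n pixel blocks,
    as in formula (5); returns one Hamming distance per block."""
    h, w = hm.shape
    h -= h % n
    w -= w % n
    blocks = hm[:h, :w].reshape(h // n, n, w // n, n)
    return blocks.mean(axis=(1, 3))   # HM(I_t, n) for every block n
```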
Step 5, judge whether each non-overlapping pixel block satisfies the foreground-block condition: if the Hamming distance of the block is greater than or equal to the set distance threshold and the number of pixels judged as background in it is greater than the set quantity threshold $\lambda_2$, mark the block as a foreground block; otherwise mark it as a background block. For each element $HM(i,j,I_t)$ of the matrix $HM(t)$, the mean at coordinate position $(i,j)$ is computed with the following formula:
$$\overline{HM}(i,j) = \frac{1}{t-1} \sum_{k=1}^{t-1} HM(i,j,I_k) \qquad (3)$$
If pixel $(i,j)$ satisfies the following condition, it is judged to be a background pixel:
$$HM(i,j,I_t) \ge \alpha_1 \,\overline{HM}(i,j) \quad \text{and} \quad HM(i,j,I_t) > \lambda_1 \qquad (4)$$
where $\alpha_1$ and $\lambda_1$ are adjustment parameters set to $1.5 \le \alpha_1 \le 2.5$ and $14 \le \lambda_1 \le 20$.
For each 16×16 pixel block of the Hamming distance matrix $HM(t)$ of the current frame $I_t$, the mean over the preceding $t-1$ blocks at the same position is computed with the following formula:
$$\overline{HM}(I_t,n) = \frac{1}{t-1} \sum_{k=1}^{t-1} HM(I_k,n) \qquad (6)$$
If the current block satisfies the following condition, it is judged to be a background block:
$$HM(I_t,n) \ge \alpha_2 \,\overline{HM}(I_t,n) \quad \text{and} \quad NP\_B(n) \ge \lambda_2 \qquad (7)$$
where $NP\_B(n)$ is the number of pixels judged as background in the current block, and $\alpha_2$ and $\lambda_2$ are set adjustment parameters with $1.1 \le \alpha_2 \le 1.3$ and $60 \le \lambda_2 \le 90$.
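The sketch below evaluates the pixel-level condition (4) and the block-level condition (7); following step 5 of the method, blocks that satisfy the thresholds are treated as foreground blocks and the rest as background blocks. The historical means of formulas (3) and (6) are assumed to be maintained by the caller as running averages over the previous frames, and the default parameter values are simply picked from inside the ranges given above:

```python
import numpy as np

def classify_blocks(hm, hm_block, hm_mean, hm_block_mean,
                    alpha1=2.0, lam1=17.0, alpha2=1.2, lam2=75, n=16):
    """Step 5 (sketch).
    hm, hm_mean             : (H, W)     per-pixel HM of the current frame and its
                                         running mean over previous frames (formula (3))
    hm_block, hm_block_mean : (H/n, W/n) block HM and its running mean (formula (6))
    Returns a boolean mask of the blocks satisfying condition (7)."""
    pixel_flag = (hm >= alpha1 * hm_mean) & (hm > lam1)                # condition (4)
    h, w = pixel_flag.shape
    h -= h % n
    w -= w % n
    count = pixel_flag[:h, :w].reshape(h // n, n, w // n, n).sum(axis=(1, 3))  # NP_B(n)
    return (hm_block >= alpha2 * hm_block_mean) & (count >= lam2)      # condition (7)
```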
Step 6, use the pixels marked as background blocks to update the background image, generating the background image for moving object detection in the next frame. If a pixel block is judged to be a background block, the pixels in that block are regarded as background pixels and are used for the background update, whose formula is:
$$B_{t+1}(x) = \beta \cdot B_t(x) + (1-\beta)\, I_t(x) \qquad (8)$$
where $\beta$ is the background update parameter, set to $0.0001 \le \beta \le 0.2$.
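A sketch of the update of formula (8), applied only at pixels belonging to blocks judged to be background; the expansion of the block mask to pixel resolution and the default β = 0.1 (inside the stated range) are choices of this sketch:

```python
import numpy as np

def update_background(background, frame, background_block_mask, beta=0.1, n=16):
    """Step 6 (sketch): B_{t+1}(x) = beta*B_t(x) + (1 - beta)*I_t(x)  (formula (8)),
    applied only where the pixel lies in a block judged to be background."""
    pixel_mask = np.kron(background_block_mask.astype(np.uint8),
                         np.ones((n, n), dtype=np.uint8)).astype(bool)
    h, w = pixel_mask.shape
    updated = np.asarray(background, dtype=np.float64).copy()
    region = updated[:h, :w]
    frame_region = np.asarray(frame, dtype=np.float64)[:h, :w]
    region[pixel_mask] = beta * region[pixel_mask] + (1 - beta) * frame_region[pixel_mask]
    return updated
```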

Claims (6)

1. A video moving object detection method based on the undecimated wavelet transform and LBP, characterized in that it comprises the following steps:
Step 1, establish an initial background image from the first few or first few tens of video frames, which contain no foreground moving object;
Step 2, apply an L-level non-down-sampling (undecimated) wavelet transform to the initial background image and to the current image, obtaining the wavelet sub-images of the background image and of the current image;
Step 3, compute the LBP binary feature vector of every pixel in every wavelet sub-image;
Step 4, divide each wavelet sub-image into non-overlapping blocks of N×N pixels and compute the texture difference between the corresponding pixel blocks of the background image and the current frame, as follows:
Step 4.1, for the background image and the current frame, compute the Hamming distance between the LBP binary feature vectors of the background pixel and the current-frame pixel at the same coordinate position in the same-direction frequency sub-image of the same wavelet decomposition level;
Step 4.2, for the pixel at each coordinate position, take the mean of its Hamming distances over the M frequency sub-images as the Hamming distance of that pixel, thereby obtaining a matrix whose elements are the per-pixel Hamming distances;
Step 4.3, divide the matrix obtained in step 4.2 into non-overlapping blocks of N×N pixels and take the mean of all elements in each block as the Hamming distance of that block;
Step 5, judge whether each non-overlapping pixel block satisfies the foreground-block condition: if the Hamming distance of the block is greater than or equal to the set distance threshold and the number of pixels judged as background in it is greater than the set quantity threshold, mark the block as a foreground block; otherwise mark it as a background block;
Step 6, use the pixels marked as background blocks to update the background, generating the background image for moving object detection in the next frame.
2. The video moving object detection method based on the undecimated wavelet transform and LBP according to claim 1, characterized in that the initial background image in step 1 is established by taking the gray-level mean of each pixel over the first few or first few tens of frames as the pixel value of the initial background image.
3. The video moving object detection method based on the undecimated wavelet transform and LBP according to claim 1, characterized in that in step 2 an L-level non-down-sampling wavelet transform is applied to the initial background image and to the current image, yielding the wavelet sub-images of the background image and of the current image, wherein $I_t$ is the current frame image at time $t$, $B_t$ is its background image, and the undecimated wavelet transform of $I_t$ yields its low-frequency sub-image and high-frequency sub-images, indexed by the decomposition level $s = 1, 2, \ldots, S$ (the number of wavelet decomposition levels; $S = 3$ in the method) and by $o = 1, 2, 3, 4$, which denote the low-frequency channel and the horizontal, vertical and diagonal high-frequency channels respectively.
4. The video moving object detection method based on the undecimated wavelet transform and LBP according to claims 1 and 2, characterized in that in step 2 the value of L ranges from 2 to 4, and in step 4.2 the value of M ranges from 7 to 13: M is 7 when L is 2, 10 when L is 3, and 13 when L is 4.
5. The video moving object detection method based on the undecimated wavelet transform and LBP according to claims 1 and 2, characterized in that in step 2 L is 3, and in step 4 the value of N is 16: each wavelet sub-image is divided into non-overlapping blocks of 16×16 pixels and the texture difference between the corresponding pixel blocks of the background image and the current frame is computed as follows:
Step 4.1, for the background image and the current frame, compute the Hamming distance between the LBP binary feature vectors of the background pixel and the current-frame pixel at the same coordinate position in the same-direction frequency sub-image of the same wavelet decomposition level;
In step 4.2, with M = 10, for the pixel at each coordinate position take the mean of its Hamming distances over the 10 frequency sub-images as the Hamming distance of that pixel, thereby obtaining a matrix whose elements are the per-pixel Hamming distances. Let $LBP_m(i,j,I_t)$ be the LBP feature vector of the pixel at coordinates $(i,j)$ in the $m$-th wavelet sub-image of the current frame $I_t$, and $LBP_m(i,j,B_t)$ the LBP feature vector of the pixel at $(i,j)$ in the $m$-th wavelet sub-image of the current background image $B_t$. The LBP is computed over the 8-neighborhood of each pixel, so the LBP binary feature vector is 8-dimensional. Let $LBP_m(i,j,I_t) = \{LBP_m^I(1), LBP_m^I(2), \ldots, LBP_m^I(8)\}$ and likewise $LBP_m(i,j,B_t) = \{LBP_m^B(1), LBP_m^B(2), \ldots, LBP_m^B(8)\}$; then the Hamming distance between the LBP binary feature vectors $LBP(i,j,I_t)$ and $LBP(i,j,B_t)$ of the current frame $I_t$ and the background image $B_t$ is:
$$HM(i,j,I_t) = \sum_{m=1}^{10} \sum_{k=1}^{8} \delta(k) \qquad (1)$$
where
$$\delta(k) = \begin{cases} 1, & LBP_m^I(k) \neq LBP_m^B(k) \\ 0, & LBP_m^I(k) = LBP_m^B(k) \end{cases}, \qquad k = 1, 2, \ldots, 8 \qquad (2)$$
This yields the matrix $HM(t)$ whose elements are the per-pixel Hamming distances;
Step 4.3, divide the matrix $HM(t)$ obtained in step 4.2 into non-overlapping blocks of 16×16 pixels and take the mean of all elements in each block as the Hamming distance of that block:
$$HM(I_t,n) = \frac{1}{256} \sum_{j=1}^{16} \sum_{i=1}^{16} HM(i,j,I_t) \qquad (5)$$
where $n$ indexes the non-overlapping pixel block (the sums run over the 16×16 pixels of block $n$).
6. The video moving object detection method based on the undecimated wavelet transform and LBP according to claim 5, characterized in that in step 5 each non-overlapping pixel block is tested against the foreground-block condition: if the Hamming distance of the block is greater than or equal to the set distance threshold and the number of pixels judged as background in it is greater than the set quantity threshold $\lambda_2$, the block is marked as a foreground block; otherwise it is marked as a background block. For each element $HM(i,j,I_t)$ of the matrix $HM(t)$, the mean at coordinate position $(i,j)$ is computed with the following formula:
$$\overline{HM}(i,j) = \frac{1}{t-1} \sum_{k=1}^{t-1} HM(i,j,I_k) \qquad (3)$$
If pixel $(i,j)$ satisfies the following condition, it is judged to be a background pixel:
$$HM(i,j,I_t) \ge \alpha_1 \,\overline{HM}(i,j) \quad \text{and} \quad HM(i,j,I_t) > \lambda_1 \qquad (4)$$
where $\alpha_1$ and $\lambda_1$ are adjustment parameters set to $1.5 \le \alpha_1 \le 2.5$ and $14 \le \lambda_1 \le 20$.
For each 16×16 pixel block of the Hamming distance matrix $HM(t)$ of the current frame $I_t$, the mean over the preceding $t-1$ blocks at the same position is computed with the following formula:
$$\overline{HM}(I_t,n) = \frac{1}{t-1} \sum_{k=1}^{t-1} HM(I_k,n) \qquad (6)$$
If the current block satisfies the following condition, it is judged to be a background block:
$$HM(I_t,n) \ge \alpha_2 \,\overline{HM}(I_t,n) \quad \text{and} \quad NP\_B(n) \ge \lambda_2 \qquad (7)$$
where $NP\_B(n)$ is the number of pixels judged as background in the current block, and $\alpha_2$ and $\lambda_2$ are adjustment parameters set to $1.1 \le \alpha_2 \le 1.3$ and $60 \le \lambda_2 \le 90$.
CN201510574247.9A 2015-09-10 2015-09-10 Video moving object detection method based on undecimated wavelet transform and LBP Expired - Fee Related CN105139426B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510574247.9A CN105139426B (en) 2015-09-10 2015-09-10 Video moving object detection method based on undecimated wavelet transform and LBP


Publications (2)

Publication Number Publication Date
CN105139426A 2015-12-09
CN105139426B (en) 2018-11-23

Family

ID=54724758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510574247.9A Expired - Fee Related CN105139426B (en) 2015-09-10 2015-09-10 Video moving object detection method based on undecimated wavelet transform and LBP

Country Status (1)

Country Link
CN (1) CN105139426B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1549188A (en) * 2003-05-13 2004-11-24 范科峰 Estimation of irides image quality and status discriminating method based on irides image identification
US20050257064A1 (en) * 2004-05-11 2005-11-17 Yann Boutant Method for recognition and tracking of fibrous media and applications of such a method, particularly in the computer field
CN102663405A (en) * 2012-05-14 2012-09-12 武汉大学 Prominence and Gaussian mixture model-based method for extracting foreground of surveillance video
CN102917222A (en) * 2012-10-18 2013-02-06 北京航空航天大学 Mobile background video object extraction method based on self-adaptive hexagonal search and five-frame background alignment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOHUA XIE et al.: "Extraction of illumination invariant facial features from a single image using nonsubsampled contourlet transform", Pattern Recognition *
DAI Guiping et al.: "Palmprint detection based on nonsubsampled Contourlet transform and MB_LBP histogram", Chinese Journal of Sensors and Actuators *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107229912A (en) * 2017-05-22 2017-10-03 西安电子科技大学 Behavior identification method based on wavelet domain combined statistical descriptor
CN107229912B (en) * 2017-05-22 2020-04-07 西安电子科技大学 Behavior identification method based on wavelet domain combined statistical descriptor
CN108460393A (en) * 2018-03-12 2018-08-28 南昌航空大学 Image invariant feature extraction method based on multiresolution Trace transformation
CN108460393B (en) * 2018-03-12 2021-08-13 南昌航空大学 Image invariant feature extraction method based on multi-resolution Trace transformation
CN111126176A (en) * 2019-12-05 2020-05-08 山东浪潮人工智能研究院有限公司 Monitoring and analyzing system and method for specific environment
CN113129331A (en) * 2019-12-31 2021-07-16 中移(成都)信息通信科技有限公司 Target movement track detection method, device and equipment and computer storage medium
CN113129331B (en) * 2019-12-31 2024-01-30 中移(成都)信息通信科技有限公司 Target movement track detection method, device, equipment and computer storage medium

Also Published As

Publication number Publication date
CN105139426B (en) 2018-11-23


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20151209

Assignee: Chuzhou Yang an hi tech technology Co.,Ltd.

Assignor: Nanjing Forestry University

Contract record no.: 2019320000246

Denomination of invention: Video moving object detection method based on non-down-sampling wavelet transformation and LBP

Granted publication date: 20181123

License type: Common License

Record date: 20190717

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20151209

Assignee: Nanjing Monte tech Environmental Protection Technology Co.,Ltd.

Assignor: Nanjing Forestry University

Contract record no.: 2019320000252

Denomination of invention: Video moving object detection method based on non-down-sampling wavelet transformation and LBP

Granted publication date: 20181123

License type: Common License

Record date: 20190719

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20151209

Assignee: Nanjing dude Automation Co.,Ltd.

Assignor: Nanjing Forestry University

Contract record no.: X2019320000107

Denomination of invention: Video moving object detection method based on non-down-sampling wavelet transformation and LBP

Granted publication date: 20181123

License type: Common License

Record date: 20191015

TR01 Transfer of patent right

Effective date of registration: 20211101

Address after: 213163 Changzhou Wujin Chuangzhi cloud Valley Industrial Park Phase ii-12-1-2, No. 200, Datong West Road, Niutang Town, Wujin District, Changzhou City, Jiangsu Province

Patentee after: Jiangsu Zhongzheng huitest Technology Co.,Ltd.

Address before: No. 159, Longpan Road, Xuanwu District, Nanjing, Jiangsu

Patentee before: NANJING FORESTRY University

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181123