CN107944499A - A background detection method with simultaneous foreground-background modeling - Google Patents
A background detection method with simultaneous foreground-background modeling
- Publication number
- CN107944499A CN107944499A CN201711303361.3A CN201711303361A CN107944499A CN 107944499 A CN107944499 A CN 107944499A CN 201711303361 A CN201711303361 A CN 201711303361A CN 107944499 A CN107944499 A CN 107944499A
- Authority
- CN
- China
- Prior art keywords
- background
- region
- threshold value
- foreground object
- update rate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Abstract
The invention discloses a background detection method, comprising: S1: receiving a background model for visual background extraction (ViBe) and a current frame from which high-frequency information, including noise and background changes, has been filtered out by integer-DCT compression; S2: classifying the pixel positions of the current frame into a background-point class and a foreground-point class using the LBSP operator according to per-pixel classification thresholds; S3: building and maintaining, for each detected foreground object, a foreground codebook that records its features, and accordingly adjusting pixel classification thresholds and the background-model update rate in a targeted way by learning those foreground features. The method uses the background model of the original ViBe background detection method, introduces the LBSP operator into the pixel classification stage, and, by building and maintaining a foreground codebook recording the features of each detected foreground object and then learning those foreground features, adjusts pixel classification thresholds and the background-model update rate in a targeted way, thereby obtaining more accurate detection results.
Description
Technical field
This patent relates to background detection methods, and more specifically to a new background detection method that builds and maintains, for each detected foreground object, a "foreground codebook" (foreground codebook) recording its features, heuristically optimizes the detection result by learning those foreground features, and adjusts the classification threshold and background-model update rate of each pixel in a targeted way.
Background technology
"Background detection method" is a general name for object detection methods whose detection results contain only the targets of interest. As computer vision applications gradually mature, high-level applications such as target tracking and target recognition have become basic requirements in some application scenarios. Target detection is the premise and foundation of these high-level computer vision applications; it is therefore necessary to explore background detection methods with more accurate detection results, better real-time performance, and stronger robustness, so as to meet the needs of high-level computer vision applications.
In the prior art, the paper "ViBe: A universal background subtraction algorithm for video sequences" (O. Barnich and M. Van Droogenbroeck, IEEE Trans. Image Process., vol. 20, no. 6, Jun. 2011) proposes the visual background extraction (ViBe) background detection method. That method uses the pixel value as the only basis for building the model; during background-model updating it not only randomly chooses the pixel positions whose background model is to be updated, but also obtains the new background-model value by randomly selecting from the eight-neighborhood. The detection accuracy of that method is therefore insufficient. The paper "SuBSENSE: A Universal Change Detection Method With Local Adaptive Sensitivity" (Pierre-Luc St-Charles, G. A. Bilodeau, R. Bergevin, IEEE Transactions on Image Processing, vol. 24, no. 1, January 2015), although it adds the LBSP operator to the background model and uses eight-neighborhood spatial consistency in the model update stage, considers only the spatial consistency between pixels in the pixel classification stage, and limits that consideration to the eight-neighborhood, which is not enough to obtain better detection results. More importantly, although that method dynamically adjusts the classification threshold and background-model update rate of a single pixel, it does not consider that other pixel positions with spatial, color, and texture similarity should also have their pixel classification thresholds and model update rates adjusted together. The paper "Background modeling and subtraction by codebook construction" (K. Kim, T. Chalidabhongse, D. Harwood, and L. Davis, in Proc. IEEE Int. Conf. Image Process., Singapore, Oct. 2004, vol. 5) builds, from a considerably long training video sequence, a "codebook" (codebook) of background feature values for every pixel position; but a background detection method of this kind not only consumes a large amount of memory but also, because it cannot create new codebook values, cannot cope well with sudden scene changes.
In conclusion, although prior-art background detection methods can provide detection results of a certain quality, there is still significant room for improvement.
The content of the invention
In view of the above technical problems, the present disclosure seeks to propose a background detection method with better detection performance, which improves upon the insufficient detection accuracy of existing background detection methods so as to obtain better detection results.
According to an illustrative aspect of the present disclosure, the improved background detection method includes:
S1: receiving a background model for visual background extraction (ViBe) and a current frame from which high-frequency information, including noise and background changes, has been filtered out by integer-DCT compression;
S2: classifying the pixel positions of the current frame into a background-point class and a foreground-point class using pixel color and texture feature values (the local binary similarity pattern (LBSP) operator) according to per-pixel classification thresholds (each pixel has an independent classification threshold);
S3: building and maintaining, for each detected foreground object, a "foreground codebook" (foreground codebook) that records its features, and accordingly adjusting pixel classification thresholds and the background-model update rate in a targeted way by learning those foreground features.
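The filtering in step S1 can be sketched as a blockwise transform that discards high-frequency coefficients. The sketch below uses a floating-point DCT-II for clarity rather than the integer DCT named in the method, and the block length and number of retained coefficients are illustrative assumptions:

```python
# Sketch of S1: transform a block with a DCT, zero the high-frequency
# coefficients, and transform back, removing noise and small background
# changes. Floating-point DCT-II/III is used here; the patent specifies
# an integer DCT, so this is only an approximation of the idea.
import math

def dct(block):
    """Unnormalized DCT-II, scaled by 2/n so idct() inverts it exactly."""
    n = len(block)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(block)) * (2 / n)
            for k in range(n)]

def idct(coeffs):
    """Inverse of dct() above (DCT-III with matching scaling)."""
    n = len(coeffs)
    return [coeffs[0] / 2 + sum(coeffs[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                                for k in range(1, n))
            for i in range(n)]

def lowpass(block, keep=2):
    """Keep only the `keep` lowest-frequency DCT coefficients of a block."""
    c = dct(block)
    c[keep:] = [0.0] * (len(c) - keep)
    return idct(c)
```

Applying `lowpass` to an 8-sample block with `keep=1` retains only the block mean, so pixel-level noise and small background fluctuations are smoothed away before classification.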
According to one embodiment of the present disclosure, said step S2, classifying the pixel positions of the current frame into a background-point class and a foreground-point class using pixel color and texture feature values (the local binary similarity pattern (LBSP) operator) according to the per-pixel classification thresholds, further comprises:
S21: when a pixel of the current frame satisfies both a first predetermined condition and a second predetermined condition, that pixel of the current frame is classified into the background-point class.
According to one embodiment of the present disclosure, said step S2, classifying the pixel positions of the current frame into a background-point class and a foreground-point class using the local binary similarity pattern (LBSP) operator according to pixel classification thresholds, further comprises:
S22: when a pixel of the current frame fails to satisfy the first predetermined condition or fails to satisfy the second predetermined condition, that pixel of the current frame is classified into the foreground-point class.
According to one embodiment of the present disclosure, said step S3, building for each detected foreground object a "foreground codebook" (foreground codebook) recording its features, further comprises:
S31: recording features including position features such as the historical center-point position and bounding radius, color features based on the HSV color space, texture features obtained with the local binary similarity pattern (LBSP) operator, the frequency of occurrence, and the last occurrence time expressed as a video frame number.
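The record described in S31 can be sketched as a small data structure; the field names below are illustrative assumptions, not identifiers from the patent:

```python
# Sketch of one foreground-codebook record holding the S31 features.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ForegroundRecord:
    center_history: List[Tuple[float, float]]  # historical center-point positions
    radius: float                              # bounding radius of the object
    hsv_color: Tuple[float, float, float]      # color feature in HSV space
    lbsp_texture: Tuple[int, ...]              # texture feature from the LBSP operator
    frequency: int = 1                         # how often the object has occurred
    last_seen_frame: int = 0                   # last occurrence, as a frame number
```

The codebook itself would then simply be a list of such records, one per distinct foreground object observed so far.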
According to one embodiment of the present disclosure, said step S3, maintaining for each detected foreground object a "foreground codebook" (foreground codebook) recording its features, further comprises:
S32: updating the "foreground codebook" with the latest detection result. The detection result of each frame is compared with the records in the "foreground codebook" (foreground codebook); if the color and texture of a foreground object are similar to some record in the "foreground codebook" (as judged by that region's pixel classification thresholds), that record in the "foreground codebook" is updated with the foreground object's features. If the color and texture of the foreground object differ from every record in the "foreground codebook", the foreground object's features are added to the "foreground codebook" as a new record.
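The maintenance rule of S32 (compare, update on a match, otherwise append) can be sketched as follows; the dictionary layout, distance functions, and threshold values are illustrative assumptions:

```python
# Sketch of the S32 rule: match each detected object against the codebook
# by color and texture distance; refresh the matched record or append a
# new one. Distances and thresholds are illustrative, not from the patent.

def color_dist(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def texture_dist(a, b):
    # Hamming distance between two LBSP binary strings.
    return sum(x != y for x, y in zip(a, b))

def update_codebook(codebook, obj, frame_no, th_color=30, th_text=3):
    """codebook: list of dicts with 'color', 'texture', 'frequency',
    'last_seen'; obj: dict with 'color' and 'texture'."""
    for rec in codebook:
        if (color_dist(rec["color"], obj["color"]) < th_color and
                texture_dist(rec["texture"], obj["texture"]) <= th_text):
            # Similar record found: refresh it with the new observation.
            rec["color"] = obj["color"]
            rec["texture"] = obj["texture"]
            rec["frequency"] += 1
            rec["last_seen"] = frame_no
            return rec
    # No similar record: add the object as a new codebook entry.
    rec = {"color": obj["color"], "texture": obj["texture"],
           "frequency": 1, "last_seen": frame_no}
    codebook.append(rec)
    return rec
```

Unlike the background codebook of Kim et al. criticized in the background section, this structure grows as new objects appear, which is the property the disclosure relies on to handle scene changes.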
According to one embodiment of the present disclosure, the background detection method further includes, in step S3:
S33: adjusting pixel classification thresholds and the background-model update rate in a targeted way by learning foreground features, wherein:
S331: for a region in which foreground objects frequently appear, the pixel classification threshold of the region is reduced from a first threshold to a second threshold, and the background-model update rate of the region is reduced from a first update rate to a second update rate;
S332: for each detected foreground object, searching around it in a targeted way for regions similar to the foreground object in color and texture;
S333: for a region similar in color and texture to a detected foreground object, the pixel classification threshold of the region is reduced from the first threshold, and the background-model update rate of the region is reduced from the first update rate.
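The adjustment in S331/S333 can be sketched as a simple rule that lowers a region's classification threshold and update rate when it frequently contains foreground or resembles a detected object; the concrete first/second values below are illustrative assumptions, since the patent leaves them unspecified:

```python
# Sketch of S331/S333: lower the per-region classification threshold and
# background-model update rate for foreground-prone or object-similar
# regions. All numeric values are illustrative assumptions.

TH_FIRST, TH_SECOND = 30, 20              # first and second classification thresholds
RATE_FIRST, RATE_SECOND = 1 / 16, 1 / 32  # first and second update rates

def adjust_region(region, frequent_foreground, similar_to_object):
    """region: dict with 'threshold' and 'update_rate' keys."""
    if frequent_foreground or similar_to_object:
        # min() keeps the rule idempotent if applied every frame.
        region["threshold"] = min(region["threshold"], TH_SECOND)
        region["update_rate"] = min(region["update_rate"], RATE_SECOND)
    return region
```

Lowering the update rate in such regions keeps foreground pixels from being absorbed into the background model too quickly.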
According to one embodiment of the present disclosure, said step S332, searching around each detected foreground object in a targeted way for regions similar to the foreground object in color and texture, further comprises:
S3321: after the preliminary detection result of the current frame is obtained, searching the region of radius ζ around each fixed foreground object, using a third threshold lower than the first threshold, for regions similar to the foreground object in color and texture, so as to reduce the miss rate.
According to one embodiment of the present disclosure, said step S333, in which for a region similar in color and texture to a detected foreground object the pixel classification threshold of the region is reduced from the first threshold and the background-model update rate of the region is reduced from the first update rate, further comprises:
S3331: for a region having a first degree of similarity in color and texture to a detected foreground object, the pixel classification threshold of the region is reduced from the first threshold to a fourth threshold, and the background-model update rate of the region is reduced from the first update rate to a third update rate;
S3332: for a region having a second degree of similarity in color and texture to a detected foreground object, the pixel classification threshold of the region is reduced from the first threshold to a fifth threshold, and the background-model update rate of the region is reduced from the first update rate to a fourth update rate, wherein the first degree of similarity is higher than the second degree of similarity, and the fourth threshold and the third update rate are respectively smaller than the fifth threshold and the fourth update rate.
A background detection method according to the present disclosure uses a current frame from which high-frequency information, including noise and background changes, has been filtered out by integer-DCT compression, builds and maintains for each detected foreground object a "foreground codebook" (foreground codebook) recording its features, and accordingly adjusts pixel classification thresholds and the background-model update rate in a targeted way by learning those foreground features, so that it can obtain more accurate detection results.
Brief description of the drawings
Fig. 1 shows a flow chart 100 of the improved background detection method according to the present disclosure;
Fig. 2 shows a block diagram 200 of the background detection method of Fig. 1;
Fig. 3 shows a block diagram 300 of the classification of each pixel position;
Fig. 4 shows a block diagram 400 of the dynamic process of building and maintaining, for each detected foreground object, a "foreground codebook" (foreground codebook) recording its features, and accordingly adjusting pixel classification thresholds and the background-model update rate in a targeted way by learning those foreground features.
Embodiment
The background detection method proposed by the present disclosure builds and maintains, for each detected foreground object, a "foreground codebook" (foreground codebook) recording its features, and accordingly adjusts the pixel classification threshold and background-model update rate of each pixel in a targeted way by learning those foreground features. In particular, during the detection of a new frame, the decision threshold and background-model update rate of each pixel can be dynamically adjusted according to the records in the "foreground codebook" (foreground codebook). The adjustment needs, for regions in which foreground objects frequently appear, to increase the pixel classification threshold Th_{i,j} (comprising the color threshold Th_{i,j}(color), the texture threshold Th_{i,j}(text), and the position threshold Th_{i,j}(dis), where the δ_color, δ_text, and δ_dis appearing in the corresponding formulas are similarity factors, and D^min_{i,j} = (1 - β)·D^min_{i,j} + β·min{dis(FG(k))}) and the background-model update rate, while searching around each detected foreground object in a targeted way for regions similar to it in color and texture and adjusting the pixel classification thresholds and background-model update rates of those regions.
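The running-distance update quoted above can be sketched directly; β is a smoothing factor whose value here is an illustrative assumption:

```python
# Sketch of D_min <- (1 - beta) * D_min + beta * min dis(FG(k)):
# an exponential moving average of each pixel's distance to the nearest
# detected foreground object. The beta value is illustrative.

def update_dmin(dmin, distances_to_foreground, beta=0.1):
    """Smoothly track the minimum distance to any foreground object FG(k)."""
    return (1 - beta) * dmin + beta * min(distances_to_foreground)
```

Pixels that stay close to foreground objects thus accumulate a small D^min, which the position threshold Th_{i,j}(dis) can then act on.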
By searching around each detected foreground object in a targeted way for regions similar to it in color and texture, certain regions satisfying the decision condition are found. These regions are similar to the detected foreground object in color and texture, but to different degrees: some regions have a higher similarity to the detected foreground object, while others have a lower one. Therefore, the dynamic adjustment of the pixel classification thresholds and background-model update rates of the regions around a detected foreground object that satisfy the decision condition is related to their degree of similarity to the detected foreground object in color and texture.
As mentioned above, among all the regions around a detected foreground object that satisfy the decision condition, the regions with a higher degree of similarity to the detected foreground object in color and texture have their pixel classification thresholds and background-model update rates adjusted by a larger amount; conversely, the regions with a lower degree of similarity to the detected foreground object have them adjusted by a smaller amount.
In summary, the background detection method according to the present disclosure, by learning foreground-target position, color, and texture features, uses more accurate and more targeted learning results to adjust the pixel classification threshold and background-model update rate of each pixel, so as to obtain better detection results.
The background detection method according to the present disclosure is described in further detail below with reference to the appended drawings. Fig. 1 shows a flow chart 100 of the improved background detection method according to the present disclosure. As can be seen from the figure, the improved background detection method 100 first receives, in method step 110, a background model for visual background extraction (ViBe) and a current frame from which high-frequency information, including noise and background changes, has been filtered out by integer-DCT compression; then, in method step 120, it classifies the pixel positions of the current frame into a background-point class and a foreground-point class using the local binary similarity pattern (LBSP) operator according to the pixel classification thresholds; then, in method step 130, it builds and maintains, for each detected foreground object, a "foreground codebook" (foreground codebook) recording its features, and accordingly adjusts pixel classification thresholds and the background-model update rate in a targeted way by learning those foreground features. The background detection method according to the present disclosure builds and maintains a "foreground codebook" (foreground codebook) recording the features of each detected foreground object, and accordingly adjusts pixel classification thresholds and the background-model update rate in a targeted way by learning those foreground features, thereby obtaining more accurate detection results.
The background detection method of Fig. 1 is further described below with reference to Fig. 2, which shows a block diagram 200 of that method. As can be seen from the figure, the improved background detection method according to the present disclosure receives a background model for visual background extraction (ViBe) and a current frame from which high-frequency information, including noise and background changes, has been filtered out by integer-DCT compression; it then carries out the pixel classification process for each pixel position in the current frame; finally, it builds and maintains, for each detected foreground object, a "codebook" (codebook) recording its features, and accordingly adjusts pixel classification thresholds and the background-model update rate in a targeted way by learning those foreground features.
Next, Fig. 3 shows a block diagram 300 of the classification of each pixel position. As can be seen from the figure, for the received background model and current frame, a condition check is performed between the background model and the pixel value at the corresponding position of the current frame. Specifically, when a pixel of the current frame satisfies both the first predetermined condition and the second predetermined condition, that pixel of the current frame is classified into the background-point class; when a pixel of the current frame fails to satisfy the first predetermined condition or the second predetermined condition, that pixel is classified into the foreground-point class. Here, as can be seen from the figure, the first predetermined condition is |P - B_k| < Th_color, and the second predetermined condition is H(LBSP_f, LBSP_M) ≤ Th_LBSP.
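The two-condition test of Fig. 3 can be sketched as follows; for brevity this sketch computes the LBSP over a 3x3 neighborhood of a single grayscale channel, and the threshold values are illustrative assumptions (the patent fixes neither the pattern size nor the thresholds):

```python
# Sketch of the Fig. 3 classifier: a pixel is background iff
# |P - B_k| < Th_color  AND  H(LBSP_f, LBSP_M) <= Th_LBSP.
# 3x3 neighborhood and all numeric thresholds are illustrative.

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def lbsp(img, y, x, rel_thresh=0.1):
    """Binary pattern: 1 where a neighbor differs from the center
    by more than rel_thresh * center (local binary similarity)."""
    c = img[y][x]
    t = rel_thresh * c
    return tuple(1 if abs(img[y + dy][x + dx] - c) > t else 0
                 for dy, dx in OFFSETS)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def is_background(frame, bg, y, x, th_color=30, th_lbsp=3):
    # First condition: color distance |P - B_k| < Th_color.
    if abs(frame[y][x] - bg[y][x]) >= th_color:
        return False
    # Second condition: H(LBSP_f, LBSP_M) <= Th_LBSP.
    return hamming(lbsp(frame, y, x), lbsp(bg, y, x)) <= th_lbsp
```

A pixel matching the model in color but not in texture (or vice versa) is thus sent to the foreground-point class, which is what lets LBSP catch camouflaged objects that color alone would miss.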
Fig. 4 shows a block diagram 400 of the dynamic process of building and maintaining, for each detected foreground object, a "foreground codebook" (foreground codebook) recording its features, and accordingly adjusting pixel classification thresholds and the background-model update rate in a targeted way by learning those foreground features. As can be seen from the figure, a "foreground codebook" (foreground codebook) is built by learning the features of foreground objects, and the learning results are used to dynamically adjust the pixel classification threshold Th_{i,j}, comprising the color threshold Th_{i,j}(color), the texture threshold Th_{i,j}(text), and the position threshold Th_{i,j}(dis), where the δ_color, δ_text, and δ_dis appearing in the corresponding formulas are similarity factors and D^min_{i,j} = (1 - β)·D^min_{i,j} + β·min{dis(FG(k))}, as well as the background-model update rate. Here, in a region in which foreground objects frequently appear, the pixel classification threshold is reduced from the first threshold to the second threshold, and the background-model update rate of that region is reduced from the first update rate to the second update rate. After the detection result of the current frame is obtained, the region of radius ζ around each detected foreground object is searched in a targeted way, under a threshold decision condition (where D is the distance from each pixel to the nearest pixel of the foreground object), for regions similar to the foreground object in color and texture. For a region having a first degree of similarity in color and texture to a detected foreground object, the pixel classification threshold of the region is reduced from the first threshold to a fourth threshold, and the background-model update rate of the region is reduced from the first update rate to a third update rate; for a region having a second degree of similarity in color and texture to a detected foreground object, the pixel classification threshold of the region is reduced from the first threshold to a fifth threshold, and the background-model update rate of the region is reduced from the first update rate to a fourth update rate, wherein the first degree of similarity is higher than the second degree of similarity, and the fourth threshold and the third update rate are respectively smaller than the fifth threshold and the fourth update rate.
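The tiered rule of Fig. 4 (a stronger parameter reduction for more similar neighboring regions) can be sketched as follows; the similarity cut-offs and the concrete threshold and rate values are illustrative assumptions that only respect the required ordering th4 < th5 < th1 and r3 < r4 < r1:

```python
# Sketch of S3331/S3332: map a neighboring region's color/texture
# similarity to a detected object onto (threshold, update_rate) tiers.
# All numeric values and cut-offs are illustrative assumptions.

TH1, TH4, TH5 = 30, 15, 22            # first, fourth, fifth thresholds
R1, R3, R4 = 1 / 16, 1 / 64, 1 / 32   # first, third, fourth update rates

def adjust_similar_region(similarity, high_cut=0.8, low_cut=0.5):
    """Return (threshold, update_rate) for a region with the given
    color/texture similarity to a detected foreground object."""
    if similarity >= high_cut:        # first (higher) degree of similarity
        return TH4, R3
    if similarity >= low_cut:         # second (lower) degree of similarity
        return TH5, R4
    return TH1, R1                    # not similar enough: keep the defaults
```

The more similar a neighboring region is, the lower its threshold and update rate, making the detector more sensitive exactly where a camouflaged part of the object is most likely to be missed.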
The background detection method according to the present disclosure combines the advantages of the two background detection methods mentioned in the background section while overcoming their deficiencies: it uses the background model of the original ViBe background detection method, introduces the LBSP operator into the pixel classification stage, builds and maintains for each detected foreground object a "foreground codebook" (foreground codebook) recording its features, and then learns foreground features and uses the learning results to dynamically adjust pixel classification thresholds and the background-model update rate, thereby obtaining more accurate detection results.
Although some typical embodiments and details have been shown for the illustrative purposes of the present disclosure, it will be readily apparent to those skilled in the art that various changes can be made to the methods and devices disclosed herein without departing from the scope of the present disclosure.
Claims (8)
1. A background detection method with simultaneous foreground-background modeling, the method comprising:
S1: receiving a background model for visual background extraction and a current frame from which high-frequency information, including noise and background changes, has been filtered out by integer-DCT compression;
S2: classifying the pixel positions of the current frame into a background-point class and a foreground-point class using a local binary similarity operator according to pixel classification thresholds;
S3: building and maintaining, for each detected foreground object, a foreground codebook recording its features, and adjusting pixel classification thresholds and the background-model update rate in a targeted way by learning those foreground features.
2. The background detection method with simultaneous foreground-background modeling according to claim 1, wherein said step S2, classifying the pixel positions of the current frame into a background-point class and a foreground-point class using the local binary similarity pattern operator according to pixel classification thresholds, further comprises:
S21: when a pixel of the current frame satisfies both a first predetermined condition and a second predetermined condition, that pixel of the current frame is classified into the background-point class.
3. The background detection method with simultaneous foreground-background modeling according to claim 1 or 2, wherein said step S2, classifying the pixel positions of the current frame into a background-point class and a foreground-point class using the local binary similarity pattern operator according to pixel classification thresholds, further comprises:
S22: when a pixel of the current frame fails to satisfy the first predetermined condition or fails to satisfy the second predetermined condition, that pixel of the current frame is classified into the foreground-point class.
4. The background detection method with simultaneous foreground-background modeling according to claim 1, wherein said step S3, building for each detected foreground object a foreground codebook recording its features, further comprises:
S31: recording features including position features such as the historical center-point position and bounding radius, color features based on the HSV color space, texture features obtained with the local binary similarity pattern operator, the frequency of occurrence, and the last occurrence time expressed as a video frame number.
5. The background detection method with simultaneous foreground-background modeling according to claim 1, wherein said step S3, maintaining for each detected foreground object a foreground codebook recording its features, further comprises:
S32: updating the foreground codebook with the latest detection result, wherein the foreground objects detected in each frame are compared with the records in the foreground codebook; if the position, color, and texture of a foreground object are similar to some record in the foreground codebook, that record in the foreground codebook is updated with the foreground object's features; if the color and texture of the foreground object differ from every record in the foreground codebook, the foreground object's features are added to the foreground codebook as a new record.
6. The background detection method with simultaneous foreground-background modeling according to claim 1, wherein step S3 of the background detection method further includes:
S33: adjusting pixel classification thresholds and the background-model update rate in a targeted way by learning foreground features, wherein:
S331: for a region in which foreground objects frequently appear, the pixel classification threshold of the region is reduced from a first threshold to a second threshold, and the background-model update rate of the region is reduced from a first update rate to a second update rate;
S332: for each detected foreground object, searching around it in a targeted way for regions similar to the foreground object in color and texture;
S333: for a region similar in color and texture to a detected foreground object, the pixel classification threshold of the region is reduced from the first threshold, and the background-model update rate of the region is reduced from the first update rate.
7. The background detection method with simultaneous foreground-background modeling according to claim 6, wherein step S332, searching the surroundings of each detected foreground object in a targeted manner for regions similar to the foreground object in color and texture, further comprises:
S3321: after the preliminary detection result of the current frame is obtained, a region of radius ζ around each confirmed foreground object is searched in a targeted manner, using a third threshold lower than the first threshold, for regions similar to the foreground object in color and texture, so as to reduce the missed-detection rate.
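The radius-ζ search of S3321 can be sketched as follows. The similarity measure here is an illustrative absolute difference on a grayscale frame; the claim does not specify how color and texture similarity are computed, only that a third threshold lower than the first is applied within the disc of radius ζ.

```python
# Sketch of S3321: within radius zeta of a detected object's centre, flag
# pixels whose value is close to the object's, using a third threshold
# lower than the first. Grayscale absolute difference is an assumption.

def neighbourhood_candidates(frame, centre, obj_value, zeta, third_threshold):
    """Return (x, y) coordinates near `centre` that resemble the object."""
    h, w = len(frame), len(frame[0])
    cx, cy = centre
    hits = []
    for y in range(max(0, cy - zeta), min(h, cy + zeta + 1)):
        for x in range(max(0, cx - zeta), min(w, cx + zeta + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 > zeta * zeta:
                continue  # outside the radius-zeta disc
            if abs(frame[y][x] - obj_value) <= third_threshold:
                hits.append((x, y))
    return hits
```

Because the third threshold is looser than the first, pixels near a confirmed object are more readily accepted as foreground, which is how the step reduces the missed-detection rate.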
8. The background detection method with simultaneous foreground-background modeling according to claim 6, wherein step S333, reducing the pixel classification threshold of the region from the first threshold and reducing the background model update rate of the region from the first update rate for a region similar in color and texture to a detected foreground object, further comprises:
S3331: for a region having a first degree of similarity in color and texture to a detected foreground object, the pixel classification threshold of the region is reduced from the first threshold to a fourth threshold, and the background model update rate of the region is reduced from the first update rate to a third update rate;
S3332: for a region having a second degree of similarity in color and texture to a detected foreground object, the pixel classification threshold of the region is reduced from the first threshold to a fifth threshold, and the background model update rate of the region is reduced from the first update rate to a fourth update rate, wherein the first degree of similarity is higher than the second degree of similarity, and the fourth threshold and the third update rate are respectively smaller than the fifth threshold and the fourth update rate.
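The two similarity tiers of claim 8 amount to a small lookup: the stronger the resemblance to known foreground, the stronger the reduction. The numeric values below are hypothetical, chosen only to satisfy the ordering the claim mandates (fourth threshold < fifth threshold < first threshold, and third update rate < fourth update rate < first update rate).

```python
# Sketch of S3331/S3332: higher similarity to detected foreground gets a
# larger reduction. Only the ordering of the constants is mandated by the
# claim; the concrete values here are illustrative.

FIRST_THRESHOLD, FIFTH_THRESHOLD, FOURTH_THRESHOLD = 30.0, 25.0, 18.0
FIRST_RATE, FOURTH_RATE, THIRD_RATE = 0.05, 0.04, 0.02

def tiered_params(similarity):
    """Map a similarity tier ('first', 'second', or other) to parameters."""
    if similarity == "first":        # S3331: first (higher) similarity
        return FOURTH_THRESHOLD, THIRD_RATE
    if similarity == "second":       # S3332: second (lower) similarity
        return FIFTH_THRESHOLD, FOURTH_RATE
    return FIRST_THRESHOLD, FIRST_RATE
```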
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711303361.3A CN107944499A (en) | 2017-12-10 | 2017-12-10 | A kind of background detection method modeled at the same time for prospect background |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107944499A true CN107944499A (en) | 2018-04-20 |
Family
ID=61945470
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711303361.3A Pending CN107944499A (en) | 2017-12-10 | 2017-12-10 | A kind of background detection method modeled at the same time for prospect background |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107944499A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100920918B1 (en) * | 2008-12-29 | 2009-10-12 | 주식회사 넥스파시스템 | Object detection system and object detection method using codebook algorism |
CN102622576A (en) * | 2011-01-31 | 2012-08-01 | 索尼公司 | Method and apparatus for background modeling, and method and apparatus for detecting background in video |
CN103366368A (en) * | 2013-06-21 | 2013-10-23 | 西南交通大学 | Double-truncated-cone-cylinder codebook foreground detection method capable of eliminating shadow and highlight noise |
CN103489196A (en) * | 2013-10-16 | 2014-01-01 | 北京航空航天大学 | Moving object detection method based on codebook background modeling |
CN103729862A (en) * | 2014-01-26 | 2014-04-16 | 重庆邮电大学 | Self-adaptive threshold value moving object detection method based on codebook background model |
CN103914842A (en) * | 2014-04-04 | 2014-07-09 | 上海电机学院 | Foreground detecting method based on Codebook background differencing |
CN104835145A (en) * | 2015-04-09 | 2015-08-12 | 电子科技大学 | Foreground detection method based on self-adaptive Codebook background model |
CN105139372A (en) * | 2015-02-06 | 2015-12-09 | 哈尔滨工业大学深圳研究生院 | Codebook improvement algorithm for prospect detection |
CN107169997A (en) * | 2017-05-31 | 2017-09-15 | 上海大学 | Background subtraction algorithm under towards night-environment |
Non-Patent Citations (2)
Title |
---|
YANG, Y (YANG, YUN)等: "An Improved ViBe for Video Moving Object Detection Based on Evidential Reasoning", 《2016 IEEE INTERNATIONAL CONFERENCE ON MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS (MFI)》, 31 December 2016 (2016-12-31), pages 26 - 31 * |
郭春生: "一种基于码本模型的运动目标检测算法", 《中国图象图形学报A》, no. 7, 31 December 2010 (2010-12-31), pages 1079 - 1083 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108985169A (en) * | 2018-06-15 | 2018-12-11 | 浙江工业大学 | Across the door operation detection method in shop based on deep learning target detection and dynamic background modeling |
CN108985169B (en) * | 2018-06-15 | 2020-12-11 | 浙江工业大学 | Shop cross-door operation detection method based on deep learning target detection and dynamic background modeling |
CN108898169A (en) * | 2018-06-19 | 2018-11-27 | Oppo广东移动通信有限公司 | Image processing method, picture processing unit and terminal device |
CN108961157A (en) * | 2018-06-19 | 2018-12-07 | Oppo广东移动通信有限公司 | Image processing method, picture processing unit and terminal device |
CN108961157B (en) * | 2018-06-19 | 2021-06-01 | Oppo广东移动通信有限公司 | Picture processing method, picture processing device and terminal equipment |
CN109285178A (en) * | 2018-10-25 | 2019-01-29 | 北京达佳互联信息技术有限公司 | Image partition method, device and storage medium |
CN111784723A (en) * | 2020-02-24 | 2020-10-16 | 成科扬 | Foreground extraction algorithm based on confidence weighted fusion and visual attention |
CN111459955A (en) * | 2020-03-13 | 2020-07-28 | 济南轨道交通集团有限公司 | Three-dimensional geological structure model automatic updating method and system based on GIS platform |
CN111459955B (en) * | 2020-03-13 | 2023-09-29 | 济南轨道交通集团有限公司 | Automatic three-dimensional geological structure model updating method and system based on GIS platform |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107944499A (en) | A kind of background detection method modeled at the same time for prospect background | |
US11830230B2 (en) | Living body detection method based on facial recognition, and electronic device and storage medium | |
JP5045371B2 (en) | Foreground / background classification apparatus, method, and program for each pixel of moving image | |
Guo et al. | Fast background subtraction based on a multilayer codebook model for moving object detection | |
US8184914B2 (en) | Method and system of person identification by facial image | |
CN109598735A (en) | Method using the target object in Markov D-chain trace and segmented image and the equipment using this method | |
CN110032989B (en) | Table document image classification method based on frame line characteristics and pixel distribution | |
JP5229575B2 (en) | Image processing apparatus and method, and program | |
CN101551852B (en) | Training system, training method and detection method | |
CN111640089A (en) | Defect detection method and device based on feature map center point | |
JP2013065119A (en) | Face authentication device and face authentication method | |
CN104700405B (en) | A kind of foreground detection method and system | |
CN106127234B (en) | Non-reference picture quality appraisement method based on characteristics dictionary | |
Shao et al. | Generative image inpainting via edge structure and color aware fusion | |
JP2019153057A (en) | Image processing apparatus, learning apparatus, image processing method, learning method, image processing program, and learning program | |
CN101853500A (en) | Colored multi-focus image fusing method | |
Devadethan et al. | Face detection and facial feature extraction based on a fusion of knowledge based method and morphological image processing | |
Li et al. | YOLO-PL: Helmet wearing detection algorithm based on improved YOLOv4 | |
CN116824641B (en) | Gesture classification method, device, equipment and computer storage medium | |
CN108280388A (en) | The method and apparatus and type of face detection method and device of training face detection model | |
Parekh et al. | A survey of image enhancement and object detection methods | |
CN111582654B (en) | Service quality evaluation method and device based on deep cycle neural network | |
JP4192719B2 (en) | Image processing apparatus and method, and program | |
CN107204011A (en) | A kind of depth drawing generating method and device | |
CN109165551B (en) | Expression recognition method for adaptively weighting and fusing significance structure tensor and LBP characteristics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||