CN106570885A - Background modeling method based on brightness and texture fusion threshold value - Google Patents

Background modeling method based on brightness and texture fusion threshold value

Info

Publication number
CN106570885A
Authority
CN
China
Prior art keywords
pixel
threshold value
brightness
background
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610991994.7A
Other languages
Chinese (zh)
Inventor
王敏 (Wang Min)
孙靖文 (Sun Jingwen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN201610991994.7A priority Critical patent/CN106570885A/en
Publication of CN106570885A publication Critical patent/CN106570885A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a background modeling method based on a fused brightness-and-texture threshold. The method comprises: computing a fusion threshold from brightness and texture; classifying all pixels of a frame into two classes, foreground pixels and background pixels, with the ViBe (Visual Background Extractor) algorithm; applying Gaussian mixture modeling to the pixels whose brightness has changed in the new frame; and then updating the threshold corresponding to each pixel. Because the threshold fuses texture with color brightness, and the method combines the strengths of the Gaussian mixture model and the ViBe algorithm, the background can be extracted accurately in the presence of several kinds of external disturbance such as illumination changes, slight camera shake, and dynamic background elements; the influence of shadows on real moving targets is suppressed to a certain extent; interference resistance is enhanced; frame processing is accelerated; and moving-target segmentation accuracy is effectively improved.

Description

Background modeling method based on a fused brightness-and-texture threshold
Technical field
The invention belongs to the field of video analysis, and in particular relates to a background modeling method based on a fused brightness-and-texture threshold.
Background technology
With the continuous development of science and technology and people's growing demands on security, a new generation of video surveillance systems with intelligent analysis capabilities has attracted increasing attention and has begun to enter our daily lives.
Accurate extraction of moving targets is one of the key research topics in intelligent video surveillance, and remains a fundamental, still unsolved difficulty in computer vision research. The purpose of moving object detection is to analyze a surveillance video image sequence, determine whether a moving target exists in the monitored scene, and extract the moving region (also called the foreground region) from the detected image. Accurate and effective segmentation of the moving region is the basic prerequisite for subsequent processing such as target tracking, classification, and recognition. At present, background subtraction is the most thoroughly studied and most widely applied moving target detection method.
Background subtraction first builds a background model of the background image, and then judges whether a moving target exists in the scene by comparing the detected image against the background model. Whether the background model correctly and efficiently reflects the real-time background directly affects detection accuracy. In complex scenes, however, various external disturbances usually exist (such as illumination changes, slight camera shake, and dynamic background elements), all of which challenge the design of a good background model. In addition, moving shadows are tightly coupled with moving targets; under strong illumination, a moving shadow, like the target itself, differs significantly from the background, so it is often extracted as part of the moving target, severely degrading segmentation accuracy.
Summary of the invention
To address the deficiencies of the prior art, the present invention proposes a background modeling method based on a fused brightness-and-texture threshold. The method effectively improves moving-target segmentation accuracy and extracts the background rapidly, while suppressing the influence of shadows on real targets.
To solve the above technical problem, the technical solution adopted by the present invention is as follows:
A background modeling method based on a fused brightness-and-texture threshold: for an image sequence in a video, the method first computes the brightness-and-texture fusion threshold of each pixel in a frame; then classifies all pixels of the frame into two classes, foreground pixels and background pixels, with the ViBe (Visual Background Extractor) algorithm; then applies Gaussian mixture modeling to the pixels whose brightness information has changed in the new frame; and finally updates the threshold corresponding to each pixel. The method specifically comprises the following steps:
Step 1: Collect all pixels of one frame and obtain the image data and texture data;
Step 2: Using the image data and texture data obtained in Step 1, assign the initial state of the background model with the ViBe algorithm, and compute the fusion threshold of brightness and texture;
Step 3: Compare the value of the current pixel with the fusion threshold obtained in Step 2; if the pixel value is greater than the fusion threshold, the pixel is a background point; otherwise go to Step 6;
Step 4: Update the ViBe background model with the new background points detected in Step 3;
Step 5: Apply Gaussian mixture modeling to the pixels whose brightness information has changed in the new frame;
Step 6: Update the threshold T corresponding to each pixel and compute the update rate R.
Step 2 comprises the following steps:
Step 201: Randomly select one neighborhood pixel of the current pixel;
Step 202: Compute the fusion threshold dist of brightness and texture according to the following formulas:
norm = Σ_j max(|sobel_x_j[randIndex] − sobel_x_j|, |sobel_y_j[randIndex] − sobel_y_j|);
dis = Σ_j |lumi_j[randIndex] − lumi_j|;
dist = alpha*(norm/N) + beta*dis;
where j ∈ {1, 2, 3} indexes the three RGB channels; randIndex is the index of the neighborhood pixel randomly selected in Step 201; sobel_x_j[randIndex] and sobel_y_j[randIndex] are the horizontal and vertical Sobel gradients of the randIndex-th randomly selected sample in the j-th channel's sample set; sobel_x_j and sobel_y_j are the horizontal and vertical Sobel gradients of the current pixel in channel j; lumi_j[randIndex] is the brightness of the randIndex-th randomly selected sample in the j-th channel's sample set, and lumi_j the brightness of the current pixel; alpha and beta are the fusion coefficients of texture and brightness, with alpha = 7 and beta = 1; N is the total of the norm values of the pixels that needed updating in the previous frame.
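As a rough illustration, the fusion of Sobel texture differences and brightness differences into a single threshold can be sketched in plain Python. All feature values and the running norm total N below are made-up numbers, not values from the patent:

```python
# Hypothetical per-channel (R, G, B) features for the current pixel and for
# the randomly chosen neighbourhood sample randIndex; values are illustrative.
alpha, beta = 7, 1            # fusion coefficients given in the text
N = 120                       # norm total from the previous frame (assumed)

sobel_x  = [12.0, 10.0, 9.0]  # current pixel, horizontal Sobel per channel
sobel_y  = [4.0, 5.0, 3.0]    # current pixel, vertical Sobel per channel
lumi     = [100.0, 98.0, 95.0]

sobel_xs = [15.0, 11.0, 8.0]  # sampled neighbour, horizontal Sobel
sobel_ys = [6.0, 5.0, 7.0]    # sampled neighbour, vertical Sobel
lumis    = [104.0, 97.0, 96.0]

# norm: per-channel maximum of the Sobel differences, summed over channels
norm = sum(max(abs(sobel_xs[j] - sobel_x[j]),
               abs(sobel_ys[j] - sobel_y[j])) for j in range(3))
# dis: per-channel brightness differences, summed over channels
dis = sum(abs(lumis[j] - lumi[j]) for j in range(3))
# fused threshold
dist = alpha * (norm / N) + beta * dis
```

With these numbers, norm = 8 and dis = 6, so the texture term contributes only a fraction while the brightness term dominates, which is consistent with the small alpha/N scaling of the texture component.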
Beneficial effects: the invention discloses a background modeling method based on a fused brightness-and-texture threshold. The method first computes the fusion threshold of brightness and texture; then classifies all pixels of a frame into two classes, foreground pixels and background pixels, with the ViBe algorithm; then applies Gaussian mixture modeling to the pixels whose brightness has changed in the new frame; and finally updates the threshold corresponding to each pixel. By fusing texture and color brightness into a single threshold and combining the advantages of the two detection algorithms, the Gaussian mixture model and ViBe, the invention extracts the background accurately in the presence of various external disturbances such as illumination changes, slight camera shake, and dynamic background elements, suppresses to a certain extent the influence of shadows on real moving targets, enhances interference resistance, and accelerates frame processing, while effectively improving moving-target segmentation accuracy.
Description of the drawings
Fig. 1 is a flow chart of the method provided by the present invention.
Specific embodiments
The present invention is described in detail below with reference to the accompanying drawings.
The present invention provides a background modeling method based on a fused brightness-and-texture threshold, comprising the following steps:
Step 1: Collect all pixels of one frame and obtain the image data and texture data.
Step 2: Using the image data and texture data obtained in Step 1, assign the initial state of the background model with the ViBe algorithm, and compute the fusion threshold of brightness and texture. This specifically comprises the following steps:
Step 201: Randomly select one neighborhood pixel of the current pixel, i.e. M0(x) = {v0(y) | y ∈ NG(x)}, where t = 0 denotes the initial moment, v0(y) is the pixel value at point y, y is a randomly selected neighborhood pixel of the current pixel, NG(x) is the set of neighborhood points, and M0(x) is the model information of the current pixel, which contains both brightness data and texture data. Initialization requires two kinds of data. The first is image data, i.e. brightness data: the brightness sample set of the three-channel GMM is initialized from the three RGB channels of the image, each channel's brightness sample set being obtained by randomly sampling the brightness of the neighborhood points N times. The second is texture data: the texture sample set of the three-channel GMM is initialized by randomly sampling the texture of the neighborhood points N times. The texture data are the Sobel responses in the x and y directions of the three channels, six groups of Sobel texture features in total, obtained by computing Sobel gradients; they describe the phase and amplitude of the change between the current pixel and its neighborhood pixels.
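The random neighbourhood sampling used for initialisation can be sketched as follows. The function name, the 20-sample count, and the use of plain grey values in place of the full brightness-plus-Sobel features are illustrative assumptions, not details fixed by the patent:

```python
import random

def init_sample_set(img, x, y, n_samples=20, seed=0):
    """Build a ViBe-style sample set M0(x) for pixel (x, y) by sampling
    its 8-neighbourhood NG(x) at random n_samples times.  img is a 2-D
    list of grey values; a full implementation would also keep per-channel
    brightness samples and the six Sobel texture samples described above."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    neigh = [(x + dx, y + dy)
             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0) and 0 <= x + dx < h and 0 <= y + dy < w]
    return [img[i][j] for i, j in (rng.choice(neigh) for _ in range(n_samples))]

frame = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
samples = init_sample_set(frame, 1, 1)   # sample around the centre pixel
```

Every sample comes from one of the eight neighbours of the centre pixel; the centre value itself (50) never appears, since (0, 0) is excluded from the offset list.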
Step 202: Compute the fusion threshold of brightness and texture.
The fused brightness and texture data are used when deciding whether the background needs updating and whether the current pixel is a foreground point; this decision requires a threshold, which is computed from brightness and texture as follows:
norm = Σ_j max(|sobel_x_j[randIndex] − sobel_x_j|, |sobel_y_j[randIndex] − sobel_y_j|);
dis = Σ_j |lumi_j[randIndex] − lumi_j|;
dist = alpha*(norm/N) + beta*dis;
where j ∈ {1, 2, 3} indexes the three RGB channels; randIndex is the index of the neighborhood pixel randomly selected in Step 201; sobel_x_j[randIndex] and sobel_y_j[randIndex] are the horizontal and vertical Sobel gradients of the randIndex-th randomly selected sample in the j-th channel's sample set; lumi_j[randIndex] is the brightness of the randIndex-th randomly selected sample in the j-th channel's sample set; alpha and beta are the fusion coefficients of texture and brightness, generally alpha = 7 and beta = 1; N is the total of the norm values of the pixels that needed updating in the previous frame; dist is the fusion threshold.
Step 3: Compare the value of the current pixel with the fusion threshold obtained in Step 2; if the pixel value is greater than the fusion threshold, the pixel is a background point; otherwise go to Step 6.
Step 4: Update the ViBe background model with the new background points detected in Step 3. The sample to be replaced is randomly selected from the sample set obtained in Step 2, and the sample set of a randomly selected neighborhood pixel is updated as well; the ViBe update rate is adaptive. When a neighborhood sample set is updated with the new pixel value of the neighborhood, the corresponding texture information is updated synchronously.
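A minimal sketch of this conservative ViBe-style update follows. The subsampling factor phi = 16 is the value from the original ViBe paper, assumed here because the text only says the rate is adaptive, and the synchronous texture update is omitted for brevity:

```python
import random

def vibe_update(samples, neigh_samples, new_value, phi=16, rng=random):
    """When a pixel is classified as background: with probability 1/phi
    replace a random sample of its own set with the new value, and with
    probability 1/phi propagate the value into a neighbour's sample set."""
    if rng.randrange(phi) == 0:
        samples[rng.randrange(len(samples))] = new_value
    if rng.randrange(phi) == 0:
        neigh_samples[rng.randrange(len(neigh_samples))] = new_value

own = [10, 12, 11, 13]
nbr = [20, 22, 21, 23]
vibe_update(own, nbr, 99, phi=1)   # phi=1 forces both replacements
```

With phi = 1 both replacements always fire, which makes the effect easy to see; in normal operation each sample set is touched only about once every phi background classifications, which is what keeps the model stable.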
Step 5: Apply Gaussian mixture modeling to the pixels whose brightness information has changed in the new frame.
For GMM background model initialization, K Gaussian distributions are built for each pixel in the image (K is generally chosen between 3 and 5), and the image is then described by the weighted sum of these K distributions. The gray value of any pixel (x, y) in the image sequence is regarded as an independent statistical process and is assumed to follow a Gaussian distribution, denoted N(u, σ). For the image sequence (I1, I2, …, It, …, IN), the probability density function p(Xt) of the image It at time t (t ∈ {1, 2, …, N}) is expressed as:
p(Xt) = Σ_{i=1..K} w_{i,t}·η(Xt, u_{i,t}, σ_{i,t})
where w_{i,t} is the weight of the i-th Gaussian distribution at time t, with Σ_{i=1..K} w_{i,t} = 1; η(Xt, u_{i,t}, σ_{i,t}) is the probability density function of the i-th Gaussian distribution at time t; a GMM is thus built for each pixel of the image It; u_{i,t} and σ_{i,t} are the mean and standard deviation of the i-th Gaussian distribution at time t.
After the pixel values of a new frame are read, the current pixel xt is matched against the K Gaussian distributions using the criterion:
|xt − u_{i,t−1}| < 2.5·σ_{i,t−1}  (i = 1, …, K; t = 1, …, N).
If pixel xt satisfies this inequality for the mean u_{i,t−1} of some Gaussian distribution, the pixel is considered to match that distribution; otherwise it does not match. The parameters are updated by w_{i,t} = (1 − α)·w_{i,t−1} + α·M_{i,t}, where α is the update rate (α = 0.005), M_{i,t} = 1 for a matched distribution and M_{i,t} = 0 for an unmatched one; the weights are then normalized so that they sum to 1, and unmatched distributions are reinitialized. If the number of distributions of a pixel exceeds 5, the distribution with the smallest probability is removed; if it is below 5, a newly initialized distribution is added directly to the model set.
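One per-pixel step of the matching criterion and weight update can be sketched as below. The dictionary representation of a Gaussian and the explicit renormalisation at the end are implementation choices for the sketch, not details prescribed by the text:

```python
def gmm_step(x, gaussians, alpha=0.005, k=2.5):
    """Match pixel value x against each Gaussian (mean u, std sigma,
    weight w) with |x - u| < k*sigma, update every weight by
    w = (1 - alpha)*w + alpha*M (M = 1 if matched, else 0), then
    renormalise so the weights again sum to 1."""
    matched = [abs(x - g["u"]) < k * g["sigma"] for g in gaussians]
    for g, m in zip(gaussians, matched):
        g["w"] = (1 - alpha) * g["w"] + alpha * (1.0 if m else 0.0)
    total = sum(g["w"] for g in gaussians)
    for g in gaussians:
        g["w"] /= total
    return matched

gs = [{"u": 100.0, "sigma": 5.0, "w": 0.6},
      {"u": 30.0,  "sigma": 5.0, "w": 0.4}]
m = gmm_step(102.0, gs)   # 102 is within 2.5 sigma of the first Gaussian
```

The matched distribution's weight grows slightly at each step while unmatched weights decay, so distributions that keep explaining the pixel accumulate weight and end up in the background model.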
When a new frame arrives, the model parameters are updated with the pixels of the new image according to the update formulas above. The K Gaussian distributions of each pixel are sorted by weight in descending order, and the b largest weights are summed, where b is preferably 5; when this sum exceeds the threshold T (here T = 0.9), these b distributions constitute the background model, namely:
B = argmin_b ( Σ_{i=1..b} w_{i,t} > T )
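Selecting the smallest set of highest-weight distributions whose cumulative weight exceeds T can be sketched as:

```python
def background_distributions(weights, T=0.9):
    """Sort Gaussians by weight (descending) and keep the smallest prefix
    whose summed weight exceeds T; those indices form the background model."""
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    acc, chosen = 0.0, []
    for i in order:
        chosen.append(i)
        acc += weights[i]
        if acc > T:
            break
    return chosen

idx = background_distributions([0.5, 0.3, 0.15, 0.05], T=0.9)
```

Here the first two weights sum to 0.8, which does not exceed T = 0.9, so a third distribution is included and the last (lowest-weight, likely foreground) one is left out.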
The background image is obtained by the Gaussian mixture modeling above, and the moving foreground region Dt is then extracted by background subtraction:
Dt(x, y) = It(x, y) − BGt(x, y)
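The subtraction Dt = It − BGt, followed by a binarising threshold, might look as follows. The text itself only forms the difference; the threshold value 25 and the binary mask output are assumptions added for illustration:

```python
def extract_foreground(frame, background, thresh=25):
    """Pixel-wise background subtraction: mark a pixel as foreground (1)
    when |I_t(x, y) - BG_t(x, y)| exceeds thresh, else background (0)."""
    return [[1 if abs(p - b) > thresh else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

I_t  = [[100, 102], [180, 99]]
BG_t = [[101, 100], [100, 100]]
mask = extract_foreground(I_t, BG_t)
```

Only the pixel that jumped from 100 to 180 lands in the foreground mask; the small fluctuations stay background.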
Step 6: Update the threshold T corresponding to each pixel and compute the update rate R. Each pixel has its own values of T and R; once the judgment of the current pixel is finished, both values are updated for use with the next frame. The larger the threshold T, the faster the update.

Claims (2)

1. A background modeling method based on a fused brightness-and-texture threshold, characterized in that: for an image sequence in a video, the method first computes the brightness-and-texture fusion threshold of each pixel in a frame; then classifies all pixels of the frame into two classes, foreground pixels and background pixels, with the ViBe algorithm; then applies Gaussian mixture modeling to the pixels whose brightness information has changed in the new frame; and finally updates the threshold corresponding to each pixel; the method specifically comprising the following steps:
Step 1: Collect all pixels of one frame and obtain the image data and texture data;
Step 2: Using the image data and texture data obtained in Step 1, assign the initial state of the background model with the ViBe algorithm, and compute the fusion threshold of brightness and texture;
Step 3: Compare the value of the current pixel with the fusion threshold obtained in Step 2; if the pixel value is greater than the fusion threshold, the pixel is a background point; otherwise go to Step 6;
Step 4: Update the ViBe background model with the new background points detected in Step 3;
Step 5: Apply Gaussian mixture modeling to the pixels whose brightness information has changed in the new frame;
Step 6: Update the threshold T corresponding to each pixel and compute the update rate R.
2. The background modeling method based on a fused brightness-and-texture threshold according to claim 1, characterized in that said Step 2 comprises the following steps:
Step 201: Randomly select one neighborhood pixel of the current pixel;
Step 202: Compute the fusion threshold dist of brightness and texture according to the following formulas:
norm = Σ_j max(|sobel_x_j[randIndex] − sobel_x_j|, |sobel_y_j[randIndex] − sobel_y_j|);
dis = Σ_j |lumi_j[randIndex] − lumi_j|;
dist = alpha*(norm/N) + beta*dis;
where j ∈ {1, 2, 3} indexes the three RGB channels; randIndex is the index of the neighborhood pixel randomly selected in Step 201; sobel_x_j[randIndex] and sobel_y_j[randIndex] are the horizontal and vertical Sobel gradients of the randIndex-th randomly selected sample in the j-th channel's sample set; sobel_x_j and sobel_y_j are the horizontal and vertical Sobel gradients of the current pixel; lumi_j[randIndex] is the brightness of the randIndex-th randomly selected sample in the j-th channel's sample set; alpha and beta are the fusion coefficients of texture and brightness, with alpha = 7 and beta = 1;
N is the total of the norm values of the pixels that needed updating in the previous frame.
CN201610991994.7A 2016-11-10 2016-11-10 Background modeling method based on brightness and texture fusion threshold value Pending CN106570885A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610991994.7A CN106570885A (en) 2016-11-10 2016-11-10 Background modeling method based on brightness and texture fusion threshold value


Publications (1)

Publication Number Publication Date
CN106570885A (en) 2017-04-19

Family

ID=58541130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610991994.7A Pending CN106570885A (en) 2016-11-10 2016-11-10 Background modeling method based on brightness and texture fusion threshold value

Country Status (1)

Country Link
CN (1) CN106570885A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301655A (en) * 2017-06-16 2017-10-27 上海远洲核信软件科技股份有限公司 A kind of video movement target method for detecting based on background modeling
CN110580429A (en) * 2018-06-11 2019-12-17 北京中科晶上超媒体信息技术有限公司 video background library management method and device and application thereof
CN110765979A (en) * 2019-11-05 2020-02-07 中国计量大学 Intelligent LED garden lamp based on background modeling and light control
CN111784723A (en) * 2020-02-24 2020-10-16 成科扬 Foreground extraction algorithm based on confidence weighted fusion and visual attention
CN112235476A (en) * 2020-09-15 2021-01-15 南京航空航天大学 Test data generation method based on fusion variation
CN113222873A (en) * 2021-06-01 2021-08-06 平安科技(深圳)有限公司 Image data enhancement method and device based on two-dimensional Gaussian distribution and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513053A (en) * 2015-11-26 2016-04-20 河海大学 Background modeling method for video analysis
US20160180195A1 (en) * 2013-09-06 2016-06-23 Toyota Jidosha Kabushiki Kaisha Augmenting Layer-Based Object Detection With Deep Convolutional Neural Networks
CN105741277A (en) * 2016-01-26 2016-07-06 大连理工大学 ViBe (Visual Background Extractor) algorithm and SLIC (Simple Linear Iterative Cluster) superpixel based background difference method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160180195A1 (en) * 2013-09-06 2016-06-23 Toyota Jidosha Kabushiki Kaisha Augmenting Layer-Based Object Detection With Deep Convolutional Neural Networks
CN105513053A (en) * 2015-11-26 2016-04-20 河海大学 Background modeling method for video analysis
CN105741277A (en) * 2016-01-26 2016-07-06 大连理工大学 ViBe (Visual Background Extractor) algorithm and SLIC (Simple Linear Iterative Cluster) superpixel based background difference method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301655A (en) * 2017-06-16 2017-10-27 上海远洲核信软件科技股份有限公司 A kind of video movement target method for detecting based on background modeling
CN110580429A (en) * 2018-06-11 2019-12-17 北京中科晶上超媒体信息技术有限公司 video background library management method and device and application thereof
CN110580429B (en) * 2018-06-11 2023-06-06 北京中科晶上超媒体信息技术有限公司 Video background library management method, device and application thereof
CN110765979A (en) * 2019-11-05 2020-02-07 中国计量大学 Intelligent LED garden lamp based on background modeling and light control
CN111784723A (en) * 2020-02-24 2020-10-16 成科扬 Foreground extraction algorithm based on confidence weighted fusion and visual attention
CN112235476A (en) * 2020-09-15 2021-01-15 南京航空航天大学 Test data generation method based on fusion variation
CN113222873A (en) * 2021-06-01 2021-08-06 平安科技(深圳)有限公司 Image data enhancement method and device based on two-dimensional Gaussian distribution and storage medium
CN113222873B (en) * 2021-06-01 2023-06-16 平安科技(深圳)有限公司 Image data enhancement method and device based on two-dimensional Gaussian distribution and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170419