CN103310422A - Image acquiring method and device - Google Patents

Image acquiring method and device

Info

Publication number
CN103310422A
CN103310422A
Authority
CN
China
Prior art keywords
image
background
pixel
initial
component
Prior art date
Legal status
Granted
Application number
CN2013102694436A
Other languages
Chinese (zh)
Other versions
CN103310422B (en)
Inventor
王兵
崔明
宋斌恒
王俊杰
Current Assignee
Xinchen Yi Jie (beijing) Technology Co Ltd
Original Assignee
Xinchen Yi Jie (beijing) Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xinchen Yi Jie (beijing) Technology Co Ltd filed Critical Xinchen Yi Jie (beijing) Technology Co Ltd
Priority to CN201310269443.6A priority Critical patent/CN103310422B/en
Publication of CN103310422A publication Critical patent/CN103310422A/en
Application granted granted Critical
Publication of CN103310422B publication Critical patent/CN103310422B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an image acquiring method and device. The method includes: performing color space conversion on an original image to obtain an initial image space; performing foreground/background classification on the initial image space to obtain an initial background image and an initial foreground image; and applying a preset-frame-count difference correction to the initial background image and the initial foreground image to obtain a background image and a foreground image. With the method and device, the acquired background and foreground images are accurate and contain little noise; the prior-art problems of foreground misjudgment and heavy background noise caused by the slow response and poor sensitivity of existing modeling methods are solved, and accurate extraction of the background and foreground images is achieved.

Description

Method and device for acquiring an image
Technical field
The present invention relates to the field of image processing, and in particular to a method and device for acquiring an image.
Background art
The most widely used background modeling algorithm at present is the mixture-of-Gaussians background algorithm. Its mechanism is to maintain, at each pixel, several color Gaussian distributions as the background. The principle is as follows:
The mixture-of-Gaussians model accumulates statistics over color vectors and maintains several normally distributed color sample points. Suppose K multidimensional Gaussian distributions of color vectors are maintained; for a pixel value x_t at time t, the probability that it belongs to the background is:
p(x_t) = \sum_{i=1}^{K} w_i \frac{1}{(2\pi)^{n/2} |\Sigma_i|^{1/2}} \exp\left(-\frac{1}{2}(x_t - \mu_i)^T \Sigma_i^{-1} (x_t - \mu_i)\right).
The background model is updated by continuously adjusting the weight w_i, the mean μ_i and the variance σ_i according to each newly observed color. If x_t belongs to distribution i, the update rules are:
w_{i,t} = w_{i,t-1} + \alpha (1 - w_{i,t-1})
\mu_{i,t} = \mu_{i,t-1} + \beta (x_t - \mu_{i,t-1})
\sigma_{i,t}^2 = \sigma_{i,t-1}^2 + \beta \left( (x_t - \mu_{i,t})^T (x_t - \mu_{i,t}) - \sigma_{i,t-1}^2 \right)
Meanwhile a priority is defined for each distribution (in the standard formulation, w_i / \sigma_i), and the K distributions of highest priority are kept. When a color appears that matches none of these K distributions, the lowest-priority distribution is discarded and the new color is used as the initial value of a replacement distribution.
As shown in Fig. 1, the algorithm proceeds as follows. After a color value is read, it is checked against the existing Gaussian distributions. If it belongs to one of them, that distribution is updated at a certain rate, and the weights of all distributions are then recomputed and sorted in descending order; if it belongs to none of them, the weights are recomputed and sorted directly. The leading distributions whose cumulative weight stays within a given percentage form the background set. Finally, if the distribution to which the current color belongs is in the background set, the color is output as background; otherwise it is output as foreground.
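The classification loop described above can be sketched for a single pixel. The following is a simplified, hedged illustration (scalar gray intensities instead of color vectors, and all parameter values — K, learning rates, the 2.5σ matching rule, the 0.7 background-weight ratio — are assumptions, not values from the patent):

```python
import numpy as np

class PixelMixture:
    """Simplified per-pixel mixture of Gaussians (scalar intensity, K modes)."""

    def __init__(self, k=3, alpha=0.05, beta=0.05, match_sigmas=2.5, bg_ratio=0.7):
        self.alpha = alpha          # learning rate for the weights
        self.beta = beta            # learning rate for mean and variance
        self.match_sigmas = match_sigmas
        self.bg_ratio = bg_ratio    # weight mass that counts as background
        self.w = np.full(k, 1.0 / k)
        self.mu = np.linspace(0.0, 255.0, k)
        self.var = np.full(k, 225.0)

    def observe(self, x):
        """Update the model with intensity x; return True if x is background."""
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        matched = int(np.argmin(d)) if d.min() < self.match_sigmas else -1
        if matched >= 0:
            # matched mode: pull weight, mean and variance toward the sample
            self.w[matched] += self.alpha * (1.0 - self.w[matched])
            self.mu[matched] += self.beta * (x - self.mu[matched])
            self.var[matched] += self.beta * ((x - self.mu[matched]) ** 2 - self.var[matched])
        else:
            # no match: replace the lowest-priority (w / sigma) mode with a fresh one
            worst = int(np.argmin(self.w / np.sqrt(self.var)))
            self.w[worst], self.mu[worst], self.var[worst] = 0.05, x, 400.0
        self.w /= self.w.sum()
        # highest-priority modes whose cumulative weight reaches bg_ratio form the background set
        order = np.argsort(-self.w / np.sqrt(self.var))
        bg, acc = set(), 0.0
        for i in order:
            bg.add(int(i))
            acc += self.w[i]
            if acc > self.bg_ratio:
                break
        return matched in bg
```

Feeding a stable intensity repeatedly makes it background; a sudden outlier is classified as foreground, matching the decision flow of Fig. 1.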
At present, the main improvements to mixture-of-Gaussians background modeling are the following:
1. Selecting a suitable color space as input. This method picks color channels with high discrimination to form the color space vector and builds the mixture-of-Gaussians model on it. Commonly used color spaces include RGB, sRGB, HSV, Lab and YCrCb; none of them discriminates well in dark scenes.
2. Designing an adaptive learning rate. This method sets the learning rate of each pixel according to the current foreground detection result and can, to some extent, handle a stationary person; but if a person stays still long enough to finally become background, the background recovers slowly after the person leaves.
3. Shadow removal. This method aims to eliminate foreground misjudgments caused by object shadows: it accumulates statistics of the color change amplitude in shadow regions and derives a shadow criterion. In a classroom environment the background is little affected by shadows, and introducing shadow removal degrades sensitivity in dark areas.
From the application viewpoint, the many existing improved mixture-of-Gaussians background modeling algorithms fall roughly into two classes. The first class improves on general scenes; such algorithms are widely used in surveillance and similar fields and adapt to many scenes, but lack specificity. The second class improves on special scenes; the custom color space algorithm proposed by the present invention belongs to this class, which is more targeted and performs better on specific scenes. Existing methods cannot fully satisfy the needs of indoor tracking background generation; their main problems are the following three:
First, indoor light changes and disturbances make the background unstable. Light change is an important factor in background generation. Indoor environments are usually lit by fluorescent lamps, and the flicker of projectors and televisions makes the video brightness change quickly; uneven camera sampling makes the brightness of light-dark transition regions unstable. The usual mixture-of-Gaussians background adopts a conservative update mechanism and can hardly respond quickly to brightness jumps, so background noise increases and the false-detection rate rises.
Second, high resolving power is required in dark areas. Owing to interior decoration and light placement, many regions in real scenes (doors, blackboards, televisions, etc.) are underlit. In addition, a low-resolution camera loses dark-area information when the overall light-dark contrast is strong. Applications such as tracking place high demands on background discrimination in these dark areas: if a person enters such a region, an ideal algorithm should respond to the situation rather than treat the person as background. Most existing methods are designed for high-resolution images; in these regions color discrimination is low and the color fluctuation range is large, so the background algorithm either loses sensitivity or, if sensitivity is kept, produces more noise, and cannot handle the low-resolution case.
Third, a person may stay still for a long time. The classroom scenario requires the teacher to remain foreground over long periods, but the teacher may stand motionless for a long time; a generic background algorithm will absorb the teacher into the background, causing foreground misjudgment.
For the prior-art problems of foreground misjudgment and heavy background noise caused by the slow response and poor sensitivity of existing modeling methods, no effective solution has yet been proposed.
Summary of the invention
In view of the foreground misjudgment and heavy background noise caused by the slow response and poor sensitivity of existing modeling methods in the related art, for which no effective solution has yet been proposed, the main purpose of the present invention is to provide a method and device for acquiring an image that address the above problems.
To achieve this goal, according to one aspect of the present invention, a method of acquiring an image is provided. The method comprises: performing color space conversion on an original image to obtain an initial image space; performing foreground/background classification on the initial image space to obtain an initial background image and an initial foreground image; and applying a preset-frame-count difference correction to the initial background image and the initial foreground image to extract a background image and a foreground image.
Further, the step of applying the preset-frame-count difference correction to the initial background image and the initial foreground image to extract the background image and the foreground image comprises: applying a forward correction to the initial foreground image to obtain the foreground image; applying a reverse correction to the initial background image to obtain a marked image and a moving image; and performing a mixture-of-Gaussians model update on the initial background image according to the marked image and the moving image, so as to extract the background image.
Further, the original image being the image of the current frame, the step of applying the forward correction to the initial foreground image to obtain the foreground image comprises: computing, over the preliminary images of a preset number of frames traced back from the current frame, the mean brightness at each corresponding pixel to obtain a primary image, wherein the original image and the preliminary images are images obtained by imaging objects in a first environment; taking the difference between the brightness value of each first pixel of the primary image and the brightness value of the corresponding second pixel of the initial foreground image to obtain a luminance difference; and detecting whether the luminance difference meets a first threshold, wherein if the luminance difference exceeds the first threshold the second pixel is a motion pixel, and the motion pixels are used to fill the holes in the initial foreground image to obtain the foreground image.
Further, the step of taking the difference between the brightness values to obtain the luminance difference comprises computing the luminance difference d(l_1, l_2) by the formula d(l_1, l_2) = |log l_1 - log l_2|, where l_1 is the brightness value of the second pixel and l_2 is the brightness value of the first pixel.
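The forward correction above can be sketched as follows. This is an assumed reading of the step, not the patent's implementation: the threshold value 0.15 and the epsilon guard against log(0) are both assumptions.

```python
import numpy as np

def log_brightness_diff(l1, l2, eps=1e-6):
    """d(l1, l2) = |log l1 - log l2|, the luminance difference from the text."""
    l1 = np.asarray(l1, dtype=float)
    l2 = np.asarray(l2, dtype=float)
    return np.abs(np.log(l1 + eps) - np.log(l2 + eps))

def forward_correction(mean_luma, cur_luma, fg_mask, threshold=0.15):
    """Fill holes in the initial foreground mask with pixels whose
    log-brightness moved more than `threshold` relative to the mean of
    the traced-back frames (threshold value is an assumption)."""
    motion = log_brightness_diff(mean_luma, cur_luma) > threshold
    return fg_mask | motion
```

A pixel that brightened from 100 to 200 gives d = |log 100 - log 200| = log 2 ≈ 0.69, well above the assumed threshold, so it is added to the foreground mask as a motion pixel.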
Further, the step of applying the reverse correction to the initial background image to obtain the marked image and the moving image comprises: segmenting the initial foreground image to obtain an image set comprising a plurality of sub-images; and verifying the merge of each sub-image with the initial background image, wherein a sub-image that merges successfully with the initial background image is marked and recorded as a marked image, and a sub-image that fails to merge is recorded as a moving image.
Further, the step of performing the mixture-of-Gaussians model update on the initial background image according to the marked image and the moving image to extract the background image comprises: redetermining the learning rates of the marked image and the moving image; using the learning rates to update the weight, mean and variance parameters of the mixture-of-Gaussians model; and using the weight, mean and variance parameters to update the initial background image to obtain the background image.
Further, the step of using the learning rates to update the weight, mean and variance parameters of the mixture-of-Gaussians model comprises obtaining the weight, mean and variance parameters by the following formulas:
w_{i,t} = w_{i,t-1} + \alpha (1 - w_{i,t-1})
\mu_{logL,t} = \mu_{logL,t-1} + \beta (logL - \mu_{logL,t-1})
\sigma_{logL,t}^2 = \sigma_{logL,t-1}^2 + \beta \left( (logL - \mu_{logL,t})^2 - \sigma_{logL,t-1}^2 \right)
\mu_{G,t} = \mu_{G,t-1} + \beta (G - \mu_{G,t-1})
\sigma_{G,t}^2 = \sigma_{G,t-1}^2 + \beta \left( (G - \mu_{G,t})^2 - \sigma_{G,t-1}^2 \right)
\mu_{S_r,t} = \mu_{S_r,t-1} + (0.5\, w_{i+1,t} + 0.5\, w_{i,t}) \beta (S_r - \mu_{S_r,t-1})
\sigma_{S_r,t}^2 = \sigma_{S_r,t-1}^2 + (0.5\, w_{i+1,t} + 0.5\, w_{i,t}) \beta \left( (S_r - \mu_{S_r,t})^2 - \sigma_{S_r,t-1}^2 \right)
\mu_{S_g,t} = \mu_{S_g,t-1} + (0.5\, w_{i+1,t} + 0.5\, w_{i,t}) \beta (S_g - \mu_{S_g,t-1})
\sigma_{S_g,t}^2 = \sigma_{S_g,t-1}^2 + (0.5\, w_{i+1,t} + 0.5\, w_{i,t}) \beta \left( (S_g - \mu_{S_g,t})^2 - \sigma_{S_g,t-1}^2 \right)
where logL, G, S_r and S_g denote the values of the logL, G, S_r and S_g components of the initial background image; w_{i,t} is the weight of the i-th pixel of the initial background image at frame t, and w_{i,t-1} its weight at frame t-1; α is the learning rate of w_{i,t}, and β is the learning rate of the means and variances; μ_{logL,t} and μ_{logL,t-1} are the means of the logL component at frames t and t-1, and σ_{logL,t} and σ_{logL,t-1} its standard deviations; μ_{G,t}, μ_{G,t-1}, σ_{G,t} and σ_{G,t-1} are the corresponding means and standard deviations of the G component; μ_{S_r,t}, μ_{S_r,t-1}, σ_{S_r,t} and σ_{S_r,t-1} those of the S_r component; μ_{S_g,t}, μ_{S_g,t-1}, σ_{S_g,t} and σ_{S_g,t-1} those of the S_g component; and w_{i+1,t} is the weight of the (i+1)-th pixel of the initial background image at frame t.
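The component-wise updates above can be sketched for a single pixel. This is a scalar illustration under stated assumptions: the learning rates alpha and beta are assumed values, the variance update reuses the already-updated mean as in the formulas, and `w_next` stands for the neighbouring weight w_{i+1,t}.

```python
def update_components(state, logL, G, Sr, Sg, w_prev, w_next, alpha=0.05, beta=0.05):
    """One step of the component-wise updates for one pixel.
    `state` maps each component name ("logL", "G", "Sr", "Sg") to its
    (mean, variance) pair; `w_prev` is w_{i,t-1} and `w_next` is w_{i+1,t}."""
    s = dict(state)
    w_t = w_prev + alpha * (1.0 - w_prev)            # w_{i,t}
    s["w"] = w_t
    rate_s = (0.5 * w_next + 0.5 * w_t) * beta       # damped rate for Sr and Sg
    for name, x, rate in (("logL", logL, beta), ("G", G, beta),
                          ("Sr", Sr, rate_s), ("Sg", Sg, rate_s)):
        mu, var = s[name]
        mu = mu + rate * (x - mu)                    # mean update
        var = var + rate * ((x - mu) ** 2 - var)     # variance uses the updated mean
        s[name] = (mu, var)
    return s
```

Note the design choice expressed by the formulas: the gradient components S_r and S_g learn at a rate scaled by the average of the pixel's weight and its neighbour's, so unstable pixels adapt their gradient statistics more cautiously.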
Further, the original image is an RGB image and the initial image space comprises a logL component, a G component, an S_r component and an S_g component. The step of performing the color space conversion on the original image to obtain the initial image space comprises: computing the value of the L component by the first formula
L = f(Y) = f(0.072 b + 0.715 g + 0.213 r),
f(Y) = \begin{cases} Y^{1/3}, & Y > 0.008856 \\ 7.787 Y + 0.1379, & Y \le 0.008856 \end{cases}
where b, g and r are the blue, green and red channel values of the RGB image, each in the range [0, 1], f is a nonlinear function, and Y is the gray value of the RGB image; computing the value of the G component by the second formula
G = \frac{0.075\, g}{0.213 r + 0.715 g + 0.072 b};
and computing the values of the S_r and S_g components by the third formula
S_r = r_{i,j} - r_{i,j-1}, \quad S_g = g_{i,j} - g_{i,j-1},
where r_{i,j} and g_{i,j} are the red and green channel values of the pixel with horizontal coordinate i and vertical coordinate j in the original image, and the coordinate origin is the upper-left corner of the RGB image.
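The conversion into the custom space can be sketched as below. This is an assumed reading: the epsilon guards against log(0) and division by zero are additions not in the text, and the direction of the j-1 neighbour (the pixel one row up, per the vertical coordinate j) is an interpretation of the subscripts.

```python
import numpy as np

def to_custom_space(rgb):
    """Convert an RGB image (floats in [0, 1], shape H x W x 3) into the
    (logL, G, Sr, Sg) space described above."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.213 * r + 0.715 * g + 0.072 * b                 # luma, as in the first formula
    # piecewise nonlinearity f: cube root above the threshold, linear below
    f = np.where(y > 0.008856, np.cbrt(y), 7.787 * y + 0.1379)
    log_l = np.log(f + 1e-6)
    comp_g = 0.075 * g / (y + 1e-6)
    # difference with the previous row (first row zero-padded)
    sr = np.diff(r, axis=0, prepend=r[:1, :])
    sg = np.diff(g, axis=0, prepend=g[:1, :])
    return log_l, comp_g, sr, sg
```

On a uniform image the gradient components S_r and S_g vanish, which is why they respond only to spatial structure rather than global brightness.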
Further, the step of performing foreground/background classification on the initial image space to obtain the initial background image and the initial foreground image comprises: judging whether each pixel in the initial image space belongs to the background, wherein a pixel that belongs to the background is taken as a background pixel to form the initial background image, and a pixel that does not belong to the background is taken as a foreground pixel to form the initial foreground image.
Further, the step of judging whether each pixel in the initial image space belongs to the background comprises extracting background pixels by the condition formula
\delta(x) = \delta(logL, G, S_r, S_g) = \begin{cases} 1, & \dfrac{|logL - \mu_{logL}|}{\sigma_{logL}} + \dfrac{|G - \mu_G|}{\sigma_G} + \dfrac{|S_r - \mu_{S_r}|}{\sigma_{S_r}} + \dfrac{|S_g - \mu_{S_g}|}{\sigma_{S_g}} < C \\ 0, & \text{otherwise} \end{cases}
where δ(x) is the discriminant function and the parameter x is the color space vector value of the pixel. If δ(x) equals 1, pixel x is taken as a background pixel; if δ(x) equals 0, pixel x is taken as a foreground pixel. logL, G, S_r and S_g are the logL, G, S_r and S_g components of the initial image space; μ with a subscript denotes the mean of the corresponding component (μ_{logL}, μ_G, μ_{S_r}, μ_{S_g}); σ with a subscript denotes the standard deviation of the corresponding component (σ_{logL}, σ_G, σ_{S_r}, σ_{S_g}); and C is a preset threshold.
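The discriminant δ(x) amounts to summing the normalized deviations of the four components and comparing against C. A minimal sketch (the default C = 4.0 is an assumed value; the patent leaves the threshold unspecified):

```python
import numpy as np

def is_background(x, mu, sigma, c=4.0):
    """delta(x): a pixel is background when the sum of its normalized
    component deviations stays below the threshold C.
    x, mu and sigma are (logL, G, Sr, Sg) 4-vectors."""
    x, mu, sigma = (np.asarray(a, dtype=float) for a in (x, mu, sigma))
    score = np.sum(np.abs(x - mu) / sigma)
    return bool(score < c)
```

A pixel that matches the background means exactly scores 0 and passes; a pixel ten standard deviations off in any one component fails regardless of the others, which is the intended behaviour of an additive criterion.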
Further, after the preset-frame-count difference correction extracts the background image and the foreground image, the method also comprises outputting the background image and the foreground image.
Further, before outputting the background image and the foreground image, the method also comprises detecting the confidence Err(I) of the foreground image I by the formula
Err(I) = \begin{cases} \text{true}, & Fl(I) > 0.7\, L(I) \\ \text{false}, & \text{otherwise} \end{cases}
where Err(I) = true means an anomaly occurs and Err(I) = false means no anomaly occurs; Fl(I) is the number of columns of foreground image I that contain pixels with nonzero values, and L(I) is the total number of columns of foreground image I.
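The confidence check can be sketched directly from the formula. The reading of Fl(I) as "columns containing nonzero pixels" is an interpretation of the translated text:

```python
import numpy as np

def foreground_anomaly(fg):
    """Err(I): flag the frame as anomalous when the columns containing
    foreground pixels exceed 70% of all columns."""
    fg = np.asarray(fg)
    nonzero_cols = int(np.count_nonzero(fg.any(axis=0)))   # Fl(I)
    return bool(nonzero_cols > 0.7 * fg.shape[1])          # compare with 0.7 * L(I)
```

The rationale is a sanity check: a foreground mask spanning more than 70% of the frame width almost certainly reflects a global disturbance (a light switch, camera shake) rather than a real moving person, so the frame should not be trusted.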
To achieve this goal, according to another aspect of the present invention, a device for acquiring an image is provided. The device comprises: a first conversion module, configured to perform color space conversion on an original image to obtain an initial image space; a first processing module, configured to perform foreground/background classification on the initial image space to obtain an initial background image and an initial foreground image; and a second processing module, configured to apply a preset-frame-count difference correction to the initial background image and the initial foreground image to extract a background image and a foreground image.
Further, the second processing module comprises: a first correction module, configured to apply a forward correction to the initial foreground image to obtain the foreground image; a second correction module, configured to apply a reverse correction to the initial background image to obtain a marked image and a moving image; and a first update module, configured to perform a mixture-of-Gaussians model update on the initial background image according to the marked image and the moving image, so as to extract the background image.
Further, the first correction module comprises: a first computation module, configured to compute, over the preliminary images of a preset number of frames traced back from the current frame, the mean brightness at each corresponding pixel to obtain a primary image, wherein the original image and the preliminary images are images obtained by imaging objects in a first environment; a second computation module, configured to take the difference between the brightness value of each first pixel of the primary image and the brightness value of the corresponding second pixel of the initial foreground image to obtain a luminance difference; a first detection module, configured to detect whether the luminance difference meets a first threshold; and a first sub-processing module, configured to, when the luminance difference exceeds the first threshold, take the second pixel as a motion pixel and use the motion pixels to fill the holes in the initial foreground image to obtain the foreground image, wherein the original image is the image of the current frame.
Further, the second correction module comprises: a second sub-processing module, configured to segment the initial foreground image to obtain an image set comprising a plurality of sub-images; and a third sub-processing module, configured to verify the merge of each sub-image with the initial background image, wherein a sub-image that merges successfully with the initial background image is marked and recorded as a marked image, and a sub-image that fails to merge is recorded as a moving image.
Further, the first update module comprises: a fourth sub-processing module, configured to redetermine the learning rates of the marked image and the moving image; a first sub-computation module, configured to use the learning rates to update the weight, mean and variance parameters of the mixture-of-Gaussians model; and a first sub-update module, configured to use the weight, mean and variance parameters to update the initial background image to obtain the background image.
Further, the first processing module comprises: a first judgment module, configured to judge whether each pixel in the initial image space belongs to the background; a fifth sub-processing module, configured to take a pixel as a background pixel to form the initial background image when the pixel belongs to the background; and a sixth sub-processing module, configured to take a pixel as a foreground pixel to form the initial foreground image when the pixel does not belong to the background.
With the present invention, the first conversion module converts the original image into the initial image space, which may be a custom-space image; the first processing module then performs foreground/background classification on the initial image space to obtain the initial background image and the initial foreground image, and the second processing module applies the preset-frame-count difference correction to the initial background image and the initial foreground image to extract the background image and the foreground image. Because the original image undergoes a color space conversion, instead of having the background and foreground images extracted from it directly, the extracted initial background and foreground images are unaffected by brightness changes of the indoor space; and the preset-frame-count difference correction removes pixels that do not belong to the background and fills the holes in the foreground. The resulting background and foreground images are therefore more accurate and less noisy, which solves the prior-art problems of foreground misjudgment and heavy background noise caused by the slow response and poor sensitivity of existing modeling methods, and achieves accurate extraction of the background and foreground images.
Brief description of the drawings
The accompanying drawings described here provide a further understanding of the present invention and form part of the application; the illustrative embodiments of the present invention and their explanations serve to explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a flowchart of a method of acquiring an image according to the prior art;
Fig. 2 is a structural diagram of a device for acquiring an image according to an embodiment of the present invention;
Fig. 3 is a flowchart of a method of acquiring an image according to an embodiment of the present invention; and
Fig. 4 is a schematic diagram of the reverse image correction of the method shown in Fig. 3.
Embodiment
It should be noted that, where no conflict arises, the embodiments of this application and the features in the embodiments may be combined with each other. The present invention is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 2 is a structural diagram of a device for acquiring an image according to an embodiment of the present invention. As shown in Fig. 2, the device comprises: a first conversion module 10, configured to perform color space conversion on an original image to obtain an initial image space; a first processing module 30, configured to perform foreground/background classification on the initial image space to obtain an initial background image and an initial foreground image; and a second processing module 50, configured to apply a preset-frame-count difference correction to the initial background image and the initial foreground image to extract a background image and a foreground image.
With the present invention, the first conversion module converts the original image into the initial image space, which may be a custom-space image; the first processing module performs foreground/background classification on the initial image space to obtain the initial background image and the initial foreground image; and the second processing module applies the preset-frame-count difference correction to extract the background image and the foreground image. Because the background and foreground are extracted from the converted color space rather than directly from the original image, the results are unaffected by indoor brightness changes, pixels that do not belong to the background are removed, and holes in the foreground are filled, yielding accurate, low-noise background and foreground images and overcoming the slow response and poor sensitivity of existing modeling methods.
According to the above embodiment of the present invention, the second processing module may comprise: a first correction module, configured to apply a forward correction to the initial foreground image to obtain the foreground image; a second correction module, configured to apply a reverse correction to the initial background image to obtain a marked image and a moving image; and a first update module, configured to perform a mixture-of-Gaussians model update on the initial background image according to the marked image and the moving image, so as to extract the background image.
Specifically, the first correction module may comprise: a first computation module, configured to compute, over the preliminary images of a preset number of frames traced back from the current frame, the mean brightness at each corresponding pixel to obtain a primary image, wherein the original image and the preliminary images are images obtained by imaging objects in a first environment; a second computation module, configured to take the difference between the brightness value of each first pixel of the primary image and the brightness value of the corresponding second pixel of the initial foreground image to obtain a luminance difference; a first detection module, configured to detect whether the luminance difference meets a first threshold; and a first sub-processing module, configured to, when the luminance difference exceeds the first threshold, take the second pixel as a motion pixel and use the motion pixels to fill the holes in the initial foreground image to obtain the foreground image, wherein the original image is the image of the current frame.
Specifically, the second correction module may comprise: a second sub-processing module, configured to segment the initial foreground image to obtain an image set comprising a plurality of sub-images; and a third sub-processing module, configured to verify the merge of each sub-image with the initial background image, wherein a sub-image that merges successfully with the initial background image is marked and recorded as a marked image, and a sub-image that fails to merge is recorded as a moving image.
In the above embodiment of the present invention, the first update module may comprise: a fourth sub-processing module, configured to redetermine the learning rates of the marked image and the moving image; a first sub-computation module, configured to use the learning rates to update the weight, mean and variance parameters of the mixture-of-Gaussians model; and a first sub-update module, configured to use the weight, mean and variance parameters to update the initial background image to obtain the background image.
According to the above embodiments of the present invention, the first processing module may comprise: a first judging module, configured to judge whether each pixel in the initial image space belongs to the background; a fifth sub-processing module, configured to, in the case that a pixel belongs to the background, take the pixel as a background pixel so as to obtain the initial background image; and a sixth sub-processing module, configured to, in the case that a pixel does not belong to the background, take the pixel as a foreground pixel so as to obtain the initial foreground image.
Fig. 3 is a flowchart of a method for obtaining an image according to an embodiment of the present invention. As shown in Fig. 3, the method comprises the following steps:
Step S102: performing a color space conversion on an original image to obtain an initial image space.
Step S104: performing foreground/background judgment on the initial image space to obtain an initial background image and an initial foreground image.
Step S106: correcting the initial background image and the initial foreground image with a difference over a preset number of frames to extract a background image and a foreground image.
By means of the present invention, the first conversion module converts the original image into an initial image space, wherein the initial image space may be a custom image space; the first processing module then performs foreground/background judgment on the initial image space to obtain an initial background image and an initial foreground image; and the second processing module corrects the initial background image and the initial foreground image with a difference over a preset number of frames to extract a background image and a foreground image. Because a color space conversion is applied to the original image, rather than extracting the background and foreground images directly from it, the extracted initial background and foreground images are not affected by indoor brightness changes; and because the initial background and foreground images are corrected with the difference over the preset number of frames, pixels that do not belong to the background can be removed and holes in the foreground filled in. The resulting background and foreground images are therefore more accurate and less noisy, which solves the prior-art problems of foreground misjudgment and heavy background noise caused by the slow response and poor sensitivity of existing modeling methods, and achieves the effect of accurately extracting the background image and the foreground image.
According to the above embodiments of the present invention, the step of correcting the initial background image and the initial foreground image with the difference over the preset number of frames to extract the background image and the foreground image may comprise: performing a forward correction on the initial foreground image to obtain the foreground image; performing a reverse correction on the initial background image to obtain marked images and moving images; and performing mixture-of-Gaussians update processing on the initial background image according to the marked images and the moving images, so as to extract the background image.
Specifically, the original image may be taken as the image of the current frame, and the step of performing the forward correction on the initial foreground image to obtain the foreground image may comprise: computing, for the preliminary images traced back a preset number of frames from the current frame (i.e. within the range of the preset number of frames), the average luminance at each corresponding pixel, so as to obtain a primary image, wherein the original image and the preliminary images are images obtained by imaging an object in a first environment; taking the difference between the luminance value of a first pixel in the primary image and the luminance value of the corresponding second pixel in the initial foreground image, so as to obtain a luminance difference; and detecting whether the luminance difference meets a first threshold, wherein in the case that the luminance difference does not meet the first threshold, the second pixel is a motion pixel, and the motion pixels are used to eliminate holes in the initial foreground image so as to obtain the foreground image.
Specifically, all the motion pixels may be taken as separate motion blocks; then, starting from the initial foreground image provided by the mixture-of-Gaussians model, the motion blocks connected to the preliminary foreground image are added to the foreground by means of region connectivity, yielding a complete foreground image.
The preset number of frames may be 30. The image obtained by averaging the preliminary images of the 30 frames is taken as the primary image; the difference, under the logarithmic distance, between the luminance value of each pixel in the original image and that in the primary image gives the luminance difference; pixels whose luminance difference does not meet (i.e. exceeds) the first threshold are then treated as the motion part.
Specifically, the step of taking the difference between the luminance value of the first pixel in the primary image and the luminance value of the corresponding second pixel in the initial foreground image to obtain the luminance difference may comprise: calculating the luminance difference d(l1, l2) according to the formula d(l1, l2) = |log l1 − log l2|, wherein l1 represents the luminance value of the second pixel and l2 represents the luminance value of the first pixel.
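As an illustration only, the forward correction described above can be sketched in Python. The 30-frame averaging, the log-domain luminance distance d(l1, l2) = |log l1 − log l2|, and the threshold test come from the text; the frame-buffer layout and the function names are assumptions of this sketch, not part of the patent.

```python
import math

def luminance_diff(l1, l2):
    # d(l1, l2) = |log l1 - log l2|: luminance distance in the log domain
    return abs(math.log10(l1) - math.log10(l2))

def forward_correct(frames, initial_fg, threshold):
    # frames: luminance grids of the last N frames (current frame last),
    # e.g. N = 30 as in the text; initial_fg: binary initial foreground.
    h, w = len(frames[0]), len(frames[0][0])
    n = len(frames)
    # primary image: per-pixel average luminance over the buffered frames
    primary = [[sum(f[y][x] for f in frames) / n for x in range(w)]
               for y in range(h)]
    current = frames[-1]
    fg = [row[:] for row in initial_fg]
    for y in range(h):
        for x in range(w):
            # a pixel whose log-domain difference exceeds the first threshold
            # is a motion pixel and is used to fill holes in the foreground
            if luminance_diff(current[y][x], primary[y][x]) > threshold:
                fg[y][x] = 1
    return fg
```

For example, a pixel that jumps from luminance 1.0 to 10.0 against a 30-frame average near 1.3 yields a log-domain difference of about 0.89, and is marked as a motion pixel under a first threshold of 0.5.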
In the above embodiment of the present invention, the step of performing the reverse correction on the initial background image to obtain the marked images and the moving images may comprise: obtaining the foreground image of the initial image space; segmenting the foreground image to obtain an image set, wherein the image set comprises a plurality of segmented sub-images; and performing merge verification between each sub-image and the initial background image, wherein a sub-image that merges successfully with the initial background image is given a mark and denoted a marked image, and a sub-image that fails to merge is denoted a moving image. In the mixture-of-Gaussians update step, the update rate of the pixels identified by the marked images is then raised, and the update rate of the pixels identified by the moving images is lowered.
Specifically, as shown in Fig. 4, after the initial background image and the initial foreground image of the current frame are obtained, the initial foreground image (preferably a binary image; the foreground image may also be used) is segmented into a plurality of sub-images, and merge verification is performed between each sub-image and the initial background image, wherein a sub-image that merges successfully with the initial background image is given a mark and denoted a marked image, and a sub-image that fails to merge is denoted a moving image.
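The segmentation of the binary initial foreground image into sub-images can be sketched as a connected-component labeling pass. The text does not specify the merge-verification criterion, so the split into marked/moving images is expressed here through a caller-supplied predicate; that predicate and all names in this sketch are illustrative assumptions.

```python
def segment_foreground(binary):
    # Segment a binary initial-foreground mask into 4-connected components;
    # each label corresponds to one sub-image of the image set.
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and labels[sy][sx] == 0:
                count += 1
                stack = [(sy, sx)]
                labels[sy][sx] = count
                while stack:  # iterative flood fill
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = count
                            stack.append((ny, nx))
    return labels, count

def classify_subimages(count, merges_with_background):
    # Merge verification: sub-images that merge successfully with the initial
    # background become "marked" images, the rest "moving" images; the merge
    # predicate itself is application-defined and only assumed here.
    marked, moving = [], []
    for lbl in range(1, count + 1):
        (marked if merges_with_background(lbl) else moving).append(lbl)
    return marked, moving
```

A mask with two separated blobs thus yields two sub-images, which the predicate then sorts into the marked and moving sets.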
In the above embodiment of the present invention, the step of performing mixture-of-Gaussians update processing on the initial background image according to the marked images and the moving images to extract the background image comprises: redetermining the learning rates of the marked images and the moving images; using the learning rates to update the weight parameter, mean parameter and variance parameter of the mixture-of-Gaussians model; and using the weight parameter, mean parameter and variance parameter to update the initial background image to obtain the background image.
Specifically, the step of using the learning rates to update the weight parameter, mean parameter and variance parameter of the mixture-of-Gaussians model comprises obtaining the weight parameter, mean parameter and variance parameter by the following formulas:
w_{i,t} = w_{i,t-1} + α(1 − w_{i,t-1})
μ_{logL,t} = μ_{logL,t-1} + β(logL − μ_{logL,t-1})
σ²_{logL,t} = σ²_{logL,t-1} + β((logL − μ_{logL,t})² − σ²_{logL,t-1})
μ_{G,t} = μ_{G,t-1} + β(G − μ_{G,t-1})
σ²_{G,t} = σ²_{G,t-1} + β((G − μ_{G,t})² − σ²_{G,t-1})
μ_{Sr,t} = μ_{Sr,t-1} + (0.5w_{i+1,t} + 0.5w_{i,t})β(Sr − μ_{Sr,t-1})
σ²_{Sr,t} = σ²_{Sr,t-1} + (0.5w_{i+1,t} + 0.5w_{i,t})β((Sr − μ_{Sr,t})² − σ²_{Sr,t-1})
μ_{Sg,t} = μ_{Sg,t-1} + (0.5w_{i+1,t} + 0.5w_{i,t})β(Sg − μ_{Sg,t-1})
σ²_{Sg,t} = σ²_{Sg,t-1} + (0.5w_{i+1,t} + 0.5w_{i,t})β((Sg − μ_{Sg,t})² − σ²_{Sg,t-1})
wherein logL, G, Sr and Sg respectively represent the values of the logL, G, Sr and Sg components of the initial background image; w_{i,t} is the weight of the i-th pixel of the initial background image at frame t, and w_{i,t-1} is its weight at frame t−1; α is the learning rate of w_{i,t}; μ_{logL,t} and μ_{logL,t-1} are the means of the logL component at frames t and t−1; β is the learning rate of the means and variances; σ_{logL,t} and σ_{logL,t-1} are the standard deviations of the logL component at frames t and t−1; μ_{G,t} and μ_{G,t-1} are the means of the G component at frames t and t−1; σ_{G,t} and σ_{G,t-1} are the standard deviations of the G component at frames t and t−1; μ_{Sr,t} and μ_{Sr,t-1} are the means of the Sr component at frames t and t−1; w_{i+1,t} is the weight of the (i+1)-th pixel at frame t; σ_{Sr,t} and σ_{Sr,t-1} are the standard deviations of the Sr component at frames t and t−1; μ_{Sg,t} and μ_{Sg,t-1} are the means of the Sg component at frames t and t−1; and σ_{Sg,t} and σ_{Sg,t-1} are the standard deviations of the Sg component at frames t and t−1. The weight parameter, mean parameter and variance parameter are then used to update the background so as to obtain the background image.
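As a sketch only, one update step of the above formulas for a single pixel can be written out in Python. The dict layout, function name, and `w_next` argument (standing for w_{i+1,t}) are illustrative assumptions; the printed μ_{i,t-1} term in the G-mean update is read here as μ_{G,t-1} for consistency with the other mean updates.

```python
def update_gmm(params, obs, alpha, beta, w_next):
    # One update step of the mixture-of-Gaussians background model for pixel i.
    # params/obs hold the per-component statistics and current observations
    # of (logL, G, Sr, Sg); variances are stored as sigma squared.
    p = dict(params)
    p['w'] = params['w'] + alpha * (1.0 - params['w'])          # w_{i,t}
    for c in ('logL', 'G'):                                     # plain-beta components
        mu, var = params['mu_' + c], params['var_' + c]
        new_mu = mu + beta * (obs[c] - mu)
        p['mu_' + c] = new_mu
        p['var_' + c] = var + beta * ((obs[c] - new_mu) ** 2 - var)
    # Sr/Sg use a rate blended from the weights of pixels i and i+1
    k = (0.5 * w_next + 0.5 * p['w']) * beta
    for c in ('Sr', 'Sg'):
        mu, var = params['mu_' + c], params['var_' + c]
        new_mu = mu + k * (obs[c] - mu)
        p['mu_' + c] = new_mu
        p['var_' + c] = var + k * ((obs[c] - new_mu) ** 2 - var)
    return p
```

Raising β for pixels covered by marked images and lowering it for pixels covered by moving images, as described above, then makes the confirmed background adapt quickly while moving regions are absorbed slowly.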
According to the above embodiments of the present invention, the original image may be an RGB image, and the initial image space comprises a logL component, a G component, an Sr component and an Sg component, wherein the step of performing the color space conversion on the original image to obtain the initial image space may comprise: calculating the value of the L component by a first formula:
L = f(Y) = f(0.072b + 0.715g + 0.213r),
f(Y) = Y^{1/3}, if Y > 0.008856; f(Y) = 7.787Y + 0.1379, if Y <= 0.008856,
wherein b, g and r are the blue-, green- and red-channel color values of the RGB image, each in the range [0, 1], f is a nonlinear function, and Y is the gray value of the RGB image; calculating the value of the G component by a second formula:
G = 0.075g / (0.213r + 0.715g + 0.072b);
and calculating the values of the Sr component and the Sg component by a third formula:
Sr = r_{i,j} − r_{i,j−1}, Sg = g_{i,j} − g_{i,j−1},
wherein r_{i,j} and g_{i,j} represent the red- and green-channel color values of the pixel whose horizontal coordinate is i and whose vertical coordinate is j on the original image, and the coordinate origin is the upper-left corner of the RGB image.
A combination of color channels suitable for background generation can be extracted to form a color feature vector. In the present embodiment, the luminance value L is preferably selected as the basic color channel: L is insensitive to brightness changes and shows good stability in indoor scenes. In actual use, if the color range [0, 255] is adopted, the above color values can be scaled proportionally.
A normalized color component G is added to the initial image space to improve the algorithm's response to color. The g component occupies a large proportion of the gray-level information, the human eye is relatively sensitive to green, and this component contributes strongly to color discrimination. In the present embodiment, normalization is used to mitigate the disturbance caused by brightness.
In addition, to improve the response to edges, the two components Sr and Sg are introduced. They extract edge-variation information from neighborhood information and improve the detection of vertical texture changes.
The initial image space finally provided is (logL, G, Sr, Sg), where logL denotes the base-10 logarithm of the luminance L. This initial image space has good discrimination capability and stability for indoor environments.
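As a per-pixel sketch of the conversion, under the assumption that the threshold of the first formula is 0.008856 in both branches and that the G-component numerator is 0.075g exactly as printed; the function names and the handling of the first row (where no vertically preceding pixel exists) are illustrative choices.

```python
import math

def f(Y):
    # nonlinearity of the first formula: a cube root above the threshold,
    # the linear branch 7.787*Y + 0.1379 below it
    return Y ** (1.0 / 3.0) if Y > 0.008856 else 7.787 * Y + 0.1379

def to_initial_space(r, g, b, r_up=None, g_up=None):
    # Map one RGB pixel (r, g, b in [0, 1]) to (logL, G, Sr, Sg).
    # r_up / g_up are the red/green values of the pixel one row above
    # (vertical coordinate j-1); None on the first row gives Sr = Sg = 0.
    Y = 0.072 * b + 0.715 * g + 0.213 * r                 # gray value
    logL = math.log10(f(Y))                               # logL component
    G = 0.075 * g / (0.213 * r + 0.715 * g + 0.072 * b)   # normalized green
    Sr = (r - r_up) if r_up is not None else 0.0          # vertical red edge
    Sg = (g - g_up) if g_up is not None else 0.0          # vertical green edge
    return logL, G, Sr, Sg
```

A mid-gray pixel (0.5, 0.5, 0.5) has Y = 0.5 and G = 0.075, and its Sr/Sg values are simply the vertical channel differences; a fully black pixel would need a guard against division by zero in G, which this sketch omits.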
More specifically, in the above embodiments, the video resolution is 352*288, the sampling rate is at least 5 frames per second, and the output image is an RGB color image. Images of the first-environment space (in the present embodiment, an indoor classroom) are captured by a non-wide-angle surveillance camera, and the RGB color image output by the capture card is used as the input source. Processing is performed in real time, thereby ensuring a fast response of the algorithm.
In the above embodiment of the present invention, the step of performing foreground/background judgment on the initial image space to obtain the initial background image and the initial foreground image comprises: judging whether each pixel in the initial image space belongs to the background, wherein in the case that a pixel belongs to the background, the pixel is taken as a background pixel so as to obtain the initial background image, and in the case that a pixel does not belong to the background, the pixel is taken as a foreground pixel so as to obtain the initial foreground image.
Specifically, the step of judging whether each pixel in the initial image space belongs to the background comprises extracting background pixels by the following conditional formula:
δ(x) = δ([logL, G, Sr, Sg]^T) = 1, if |logL − μ_logL|/σ_logL + |G − μ_G|/σ_G + |Sr − μ_Sr|/σ_Sr + |Sg − μ_Sg|/σ_Sg < C; 0, otherwise,
wherein δ(x) is a discriminant function and the parameter x is the color-space vector value of the pixel: in the case that δ(x) equals 1, the pixel x is taken as a background pixel, and in the case that δ(x) equals 0, the pixel x is taken as a foreground pixel; logL, G, Sr and Sg are respectively the logL, G, Sr and Sg components of the initial image space; μ denotes the mean of the parameter in its subscript, that is, μ_logL, μ_G, μ_Sr and μ_Sg are the means of the logL, G, Sr and Sg components of the initial image space; σ denotes the standard deviation of the parameter in its subscript, that is, σ_logL, σ_G, σ_Sr and σ_Sg are the standard deviations of the logL, G, Sr and Sg components of the initial image space; and C is a preset threshold.
The foreground detected by the above formula preserves object edge information well. Holes that may appear inside objects are eliminated by a morphological operation (a dilation-erosion operation), so the resulting initial background image is relatively complete and accurate, with little noise.
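The discriminant above reduces to a few lines; this is a sketch of the per-pixel test only (the morphological hole-filling is omitted), and the function name and argument layout are assumptions.

```python
def is_background(x, mu, sigma, C):
    # delta(x) for x = (logL, G, Sr, Sg): sum the per-component deviations,
    # each normalized by its standard deviation, and compare against the
    # preset threshold C; returns 1 for background, 0 for foreground.
    total = sum(abs(xv - mv) / sv for xv, mv, sv in zip(x, mu, sigma))
    return 1 if total < C else 0
```

A pixel close to the model means in every component yields a small normalized sum and is classified as background; any component deviating by several standard deviations pushes the sum past C and marks the pixel as foreground.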
In the above embodiment of the present invention, after the initial background image and the initial foreground image are corrected with the difference over the preset number of frames to extract the background image and the foreground image, the method may further comprise: outputting the background image and the foreground image.
Specifically, before the background image and the foreground image are output, the method may further comprise detecting the confidence level Err(I) of the foreground image I by the following formula:
Err(I) = true, if Fl(I) > 0.7·L(I); false, otherwise,
wherein Err(I) = true indicates that an anomaly has occurred, and Err(I) = false indicates that no anomaly has occurred; Fl(I) represents the number of columns of the foreground image I that contain pixels with non-zero values, and L(I) represents the total number of columns of the foreground image I.
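The confidence check can be sketched directly from the formula; the column-counting reading of Fl(I) and L(I) follows the definitions above, and the function name is an assumption.

```python
def confidence_error(fg):
    # Err(I): flags an anomaly (returns True) when more than 70% of the
    # columns of foreground image I contain at least one non-zero pixel,
    # i.e. Fl(I) > 0.7 * L(I).
    total_cols = len(fg[0])
    nonzero_cols = sum(1 for x in range(total_cols)
                       if any(row[x] for row in fg))
    return nonzero_cols > 0.7 * total_cols
```

A foreground mask covering nearly every column is thus treated as implausible (for example, a sudden global illumination change misclassified as motion), and the frame can be withheld from output.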
Under low-resolution camera conditions, this algorithm resists light disturbance well while ensuring foreground/background discrimination under different brightness levels, thereby reducing background errors.
Through the above embodiments of the present invention, rapid adjustment can be made in response to light changes, and when the overall brightness of the video changes suddenly (brightening or dimming) or varies rhythmically (periodic brightness drift), timely updating of the background luminance is ensured.
It should be noted that the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that herein.
As can be seen from the above description, the present invention achieves the following technical effects: the first conversion module converts the original image into an initial image space, wherein the initial image space may be a custom image space; the first processing module then performs background judgment on the initial image space to obtain an initial background image; and the second processing module corrects the initial background image with a difference over a preset number of frames to obtain a background image. Because a color space conversion is applied to the original image, rather than extracting the background image directly from it, the extracted initial background image is not affected by indoor brightness changes; and because the initial background image is corrected with the difference over the preset number of frames, pixels that do not belong to the background can be removed. The resulting background image is therefore more accurate and less noisy, which solves the prior-art problems of foreground misjudgment and heavy background noise caused by the slow response and poor sensitivity of existing modeling methods, and achieves the effect of accurately extracting the background image.
With the present invention, light variation is tolerated. This scheme adopts a mixture-of-Gaussians model and uses the luminance L component to achieve tolerance to indoor light changes; its handling of light changes is simpler and more reliable than that of similar algorithms, and its discrimination of dark areas is higher. By combining Sobel edge-gradient information with luminance information under the logarithmic distance, this scheme resolves the motion of persons in dark areas better than similar algorithms and preserves stationary persons to a high degree. The scheme of the present invention not only uses single-camera information to judge the positions of persons, but also adopts multi-camera cooperative verification. This joint background-generation approach remedies the poor reliability of a single camera; compared with similar single-camera background algorithms, it greatly improves the retention of stationary persons while maintaining the background update speed.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by a plurality of computing devices; optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device, or they can be made into individual integrated circuit modules, or a plurality of the modules or steps can be made into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (18)

1. A method of obtaining an image, characterized by comprising:
performing a color space conversion on an original image to obtain an initial image space;
performing foreground/background judgment on said initial image space to obtain an initial background image and an initial foreground image; and
correcting said initial background image and said initial foreground image with a difference over a preset number of frames to extract a background image and a foreground image.
2. The method according to claim 1, characterized in that the step of correcting said initial background image and said initial foreground image with the difference over the preset number of frames to extract the background image and the foreground image comprises:
performing a forward correction on said initial foreground image to obtain said foreground image;
performing a reverse correction on said initial background image to obtain marked images and moving images; and
performing mixture-of-Gaussians update processing on said initial background image according to said marked images and said moving images, so as to obtain said background image.
3. The method according to claim 2, characterized in that said original image is the image of a current frame, and the step of performing the forward correction on said initial foreground image to obtain said foreground image comprises:
computing, for the preliminary images traced back the preset number of frames from the current frame, the average luminance at each corresponding pixel, so as to obtain a primary image, wherein said original image and said preliminary images are images obtained by imaging an object in a first environment;
taking the difference between the luminance value of a first pixel in said primary image and the luminance value of the corresponding second pixel in said initial foreground image, so as to obtain a luminance difference; and
detecting whether said luminance difference meets a first threshold, wherein in the case that said luminance difference exceeds said first threshold, said second pixel is a motion pixel, and the motion pixels are used to eliminate holes in said initial foreground image so as to obtain said foreground image.
4. The method according to claim 3, characterized in that the step of taking the difference between the luminance value of the first pixel in said primary image and the luminance value of the corresponding second pixel in said initial foreground image to obtain the luminance difference comprises:
calculating said luminance difference d(l1, l2) according to the formula:
d(l1, l2) = |log l1 − log l2|, wherein l1 represents the luminance value of said second pixel and l2 represents the luminance value of said first pixel.
5. The method according to claim 2, characterized in that the step of performing the reverse correction on said initial background image to obtain the marked images and the moving images comprises:
segmenting said initial foreground image to obtain an image set, wherein said image set comprises a plurality of segmented sub-images; and
performing merge verification between each said sub-image and said initial background image, wherein a sub-image that merges successfully with said initial background image is given a mark and denoted as a said marked image, and a sub-image that fails to merge with said initial background image is denoted as a said moving image.
6. The method according to claim 2, characterized in that the step of performing mixture-of-Gaussians update processing on said initial background image according to said marked images and said moving images to obtain said background image comprises:
redetermining the learning rates of said marked images and said moving images;
using said learning rates to update the weight parameter, mean parameter and variance parameter of the mixture-of-Gaussians model; and
using said weight parameter, said mean parameter and said variance parameter to update said initial background image to obtain said background image.
7. The method according to claim 6, characterized in that the step of using said learning rates to update the weight parameter, mean parameter and variance parameter of the mixture-of-Gaussians model comprises:
obtaining said weight parameter, said mean parameter and said variance parameter by the following formulas:
w_{i,t} = w_{i,t-1} + α(1 − w_{i,t-1})
μ_{logL,t} = μ_{logL,t-1} + β(logL − μ_{logL,t-1})
σ²_{logL,t} = σ²_{logL,t-1} + β((logL − μ_{logL,t})² − σ²_{logL,t-1})
μ_{G,t} = μ_{G,t-1} + β(G − μ_{G,t-1})
σ²_{G,t} = σ²_{G,t-1} + β((G − μ_{G,t})² − σ²_{G,t-1})
μ_{Sr,t} = μ_{Sr,t-1} + (0.5w_{i+1,t} + 0.5w_{i,t})β(Sr − μ_{Sr,t-1})
σ²_{Sr,t} = σ²_{Sr,t-1} + (0.5w_{i+1,t} + 0.5w_{i,t})β((Sr − μ_{Sr,t})² − σ²_{Sr,t-1})
μ_{Sg,t} = μ_{Sg,t-1} + (0.5w_{i+1,t} + 0.5w_{i,t})β(Sg − μ_{Sg,t-1})
σ²_{Sg,t} = σ²_{Sg,t-1} + (0.5w_{i+1,t} + 0.5w_{i,t})β((Sg − μ_{Sg,t})² − σ²_{Sg,t-1})
wherein logL, G, Sr and Sg respectively represent the values of the logL, G, Sr and Sg components of said initial background image; w_{i,t} represents the weight of the i-th pixel of said initial background image at frame t, and w_{i,t-1} its weight at frame t−1; α represents the learning rate of w_{i,t}; μ_{logL,t} and μ_{logL,t-1} represent the means of the logL component of said initial background image at frames t and t−1; β represents the learning rate of the means and variances; σ_{logL,t} and σ_{logL,t-1} represent the standard deviations of the logL component at frames t and t−1; μ_{G,t} and μ_{G,t-1} represent the means of the G component at frames t and t−1; σ_{G,t} and σ_{G,t-1} represent the standard deviations of the G component at frames t and t−1; μ_{Sr,t} and μ_{Sr,t-1} represent the means of the Sr component at frames t and t−1; w_{i+1,t} represents the weight of the (i+1)-th pixel of said initial background image at frame t; σ_{Sr,t} and σ_{Sr,t-1} represent the standard deviations of the Sr component at frames t and t−1; μ_{Sg,t} and μ_{Sg,t-1} represent the means of the Sg component at frames t and t−1; and σ_{Sg,t} and σ_{Sg,t-1} represent the standard deviations of the Sg component at frames t and t−1.
8. The method according to claim 1, characterized in that said original image is an RGB image and said initial image space comprises a logL component, a G component, an Sr component and an Sg component, wherein the step of performing the color space conversion on the original image to obtain the initial image space comprises:
calculating the value of said L component by a first formula:
L = f(Y) = f(0.072b + 0.715g + 0.213r),
f(Y) = Y^{1/3}, if Y > 0.008856; f(Y) = 7.787Y + 0.1379, if Y <= 0.008856,
wherein b, g and r are the blue-, green- and red-channel color values of said RGB image, each in the range [0, 1], f is a nonlinear function, and Y is the gray value of said RGB image;
calculating the value of said G component by a second formula:
G = 0.075g / (0.213r + 0.715g + 0.072b);
calculating the values of said Sr component and said Sg component by a third formula:
Sr = r_{i,j} − r_{i,j−1}, Sg = g_{i,j} − g_{i,j−1},
wherein r_{i,j} and g_{i,j} represent the red- and green-channel color values of the pixel whose horizontal coordinate is i and whose vertical coordinate is j on said original image, and the coordinate origin is the upper-left corner of said RGB image.
9. The method according to claim 1, characterized in that the step of performing foreground/background judgment on said initial image space to obtain the initial background image and the initial foreground image comprises:
judging whether each pixel in said initial image space belongs to the background, wherein in the case that said pixel belongs to said background, said pixel is taken as a background pixel so as to obtain said initial background image; and
in the case that said pixel does not belong to said background, said pixel is taken as a foreground pixel so as to obtain said initial foreground image.
10. The method according to claim 9, wherein the step of judging whether each pixel in the initial image space belongs to the background comprises:

Obtaining the background pixels by the following condition formula, the condition formula being:

δ(x) = δ(logL, G, Sr, Sg) = 1 if |logL - μ_logL|/σ_logL + |G - μ_G|/σ_G + |Sr - μ_Sr|/σ_Sr + |Sg - μ_Sg|/σ_Sg < C, and 0 otherwise,

where δ(x) is the discriminant function and the parameter x is the color-space vector of the pixel; in the case that δ(x) equals 1, the pixel x is taken as a background pixel, and in the case that δ(x) equals 0, the pixel x is taken as a foreground pixel; logL, G, Sr and Sg are respectively the logL, G, Sr and Sg components of the initial image space; μ denotes the mean of the subscripted parameter, i.e. μ_logL is the mean of the logL component, μ_G is the mean of the G component, μ_Sr is the mean of the Sr component and μ_Sg is the mean of the Sg component of the initial image space; σ denotes the standard deviation of the subscripted parameter, i.e. σ_logL is the standard deviation of the logL component, σ_G of the G component, σ_Sr of the Sr component and σ_Sg of the Sg component of the initial image space; and C is a predetermined threshold.
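The discriminant of claim 10 is a sum of per-component normalized absolute deviations compared against a single threshold. A minimal per-pixel sketch, with the four components, their means and standard deviations passed as 4-vectors:

```python
import numpy as np

def is_background(x, mu, sigma, C):
    """Claim-10 discriminant: return True (delta = 1, background pixel) when the
    sum of |component - mean| / std over (logL, G, Sr, Sg) is below threshold C."""
    x, mu, sigma = (np.asarray(v, dtype=float) for v in (x, mu, sigma))
    d = np.sum(np.abs(x - mu) / sigma)
    return bool(d < C)
```

In practice the same expression can be evaluated vectorized over a whole frame; the scalar form above just makes the formula explicit.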
11. The method according to claim 1, wherein, after the preset-frame-difference correction is performed on the initial background image and the initial foreground image to extract the background image and the foreground image, the method further comprises:

Outputting the background image and the foreground image.
12. The method according to claim 11, wherein, before outputting the background image and the foreground image, the method further comprises:

Detecting the confidence level Err(I) of the foreground image I by the following formula:

Err(I) = true if Fl(I) > 0.7·L(I), and false otherwise,

where Err(I) = true indicates that the confidence check fails and an anomaly has occurred, and Err(I) = false indicates that no anomaly has occurred; Fl(I) is the number of columns of the foreground image I that contain at least one pixel with a non-zero value, and L(I) is the total number of columns of the foreground image I.
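The claim-12 check can be read as: flag the frame as anomalous when more than 70% of the mask's columns contain foreground. A short sketch under that reading, with the foreground image given as a 2-D array whose non-zero entries are foreground pixels:

```python
import numpy as np

def foreground_anomaly(fg):
    """Return Err(I): True (anomaly) when the number of columns containing a
    non-zero pixel, Fl(I), exceeds 0.7 times the total column count L(I)."""
    Fl = int(np.count_nonzero(fg.any(axis=0)))  # columns with any foreground
    L = fg.shape[1]                             # total number of columns
    return Fl > 0.7 * L
```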
13. A device for acquiring an image, comprising:

A first conversion module, configured to perform color space conversion on an original image to obtain an initial image space;

A first processing module, configured to perform foreground/background judgment on the initial image space to obtain an initial background image and an initial foreground image;

A second processing module, configured to perform preset-frame-difference correction on the initial background image and the initial foreground image to extract a background image and a foreground image.
14. The device according to claim 13, wherein the second processing module comprises:

A first correction module, configured to perform forward correction on the initial foreground image to obtain the foreground image;

A second correction module, configured to perform reverse correction on the initial background image to obtain a marked image and a moving image;

A first update module, configured to perform mixed-Gaussian-model update processing on the initial background image according to the marked image and the moving image, to obtain the background image.
15. The device according to claim 14, wherein the first correction module comprises:

A first calculation module, configured to calculate the average brightness of each corresponding pixel over the preliminary images traced back a preset number of frames from the current frame, to obtain a primary image, wherein the original image and the preliminary images are images obtained by imaging an object in a first environment;

A second calculation module, configured to subtract the brightness value of the corresponding second pixel in the initial foreground image from the brightness value of a first pixel in the primary image, to obtain a luminance difference;

A first detection module, configured to detect whether the luminance difference meets a first threshold;

A first sub-processing module, configured to, in the case that the luminance difference exceeds the first threshold, take the second pixel as a moving pixel and use the moving pixels to eliminate holes in the initial foreground image, to obtain the foreground image,

wherein the original image is the image of the current frame.
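One possible reading of the forward correction in claim 15, sketched below: average the brightness of the last N frames into a primary image, compare it pixel-wise against the current frame's brightness, and re-label pixels whose difference exceeds the threshold as moving pixels, OR-ing them into the initial foreground mask to fill holes. The use of the current frame's brightness as the comparison value is an assumption; the claim's exact pairing of "first" and "second" pixels is ambiguous in translation.

```python
import numpy as np

def forward_correct(history, current, init_fg, threshold):
    """history: N x H x W brightness frames; current: H x W brightness of the
    current frame; init_fg: H x W boolean initial foreground mask."""
    primary = history.mean(axis=0)            # primary image: temporal average brightness
    diff = np.abs(current - primary)          # per-pixel luminance difference
    moving = diff > threshold                 # pixels judged to be moving
    return np.logical_or(init_fg, moving)     # moving pixels fill holes in the mask
```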
16. The device according to claim 14, wherein the second correction module comprises:

A second sub-processing module, configured to perform segmentation processing on the initial foreground image to obtain an image set, wherein the image set comprises a plurality of segmented sub-images;

A third sub-processing module, configured to perform merge verification of each sub-image against the initial background image, wherein a sub-image that merges successfully with the initial background image is marked and recorded as the marked image, and a sub-image that fails to merge with the initial background image is recorded as the moving image.
17. The device according to claim 14, wherein the first update module comprises:

A fourth sub-processing module, configured to redetermine the learning rates of the marked image and the moving image;

A first sub-calculation module, configured to use the learning rates to update the weight parameter, mean parameter and variance parameter of the mixed Gaussian model;

A first sub-update module, configured to use the weight parameter, the mean parameter and the variance parameter to update the initial background image to obtain the background image.
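Claim 17 does not spell out the update recursions, but the standard mixed-Gaussian background-model update (Stauffer-Grimson style) is a natural concrete form and is sketched here as an assumption: the weight moves toward a match indicator at learning rate alpha, and a matched component's mean and variance move toward the new sample.

```python
def update_component(w, mu, var, x, alpha, matched):
    """One mixture-component update: weight parameter w, mean parameter mu,
    variance parameter var; x is the new pixel value, alpha the learning rate,
    matched indicates whether this component matched the pixel."""
    w = (1.0 - alpha) * w + alpha * (1.0 if matched else 0.0)  # weight update
    if matched:
        rho = alpha                                  # simplified second learning rate
        mu = (1.0 - rho) * mu + rho * x              # mean update
        var = (1.0 - rho) * var + rho * (x - mu) ** 2  # variance update
    return w, mu, var
```

Per claim 17, the marked image and the moving image would be given different learning rates (a larger alpha lets confirmed background regions absorb into the model faster).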
18. The device according to claim 13, wherein the first processing module comprises:

A first judgment module, configured to judge whether each pixel in the initial image space belongs to the background;

A fifth sub-processing module, configured to, in the case that the pixel belongs to the background, take the pixel as a background pixel to obtain the initial background image;

A sixth sub-processing module, configured to, in the case that the pixel does not belong to the background, take the pixel as a foreground pixel to obtain the initial foreground image.
CN201310269443.6A 2013-06-28 2013-06-28 Obtain the method and device of image Active CN103310422B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310269443.6A CN103310422B (en) 2013-06-28 2013-06-28 Obtain the method and device of image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310269443.6A CN103310422B (en) 2013-06-28 2013-06-28 Obtain the method and device of image

Publications (2)

Publication Number Publication Date
CN103310422A true CN103310422A (en) 2013-09-18
CN103310422B CN103310422B (en) 2016-08-31

Family

ID=49135600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310269443.6A Active CN103310422B (en) 2013-06-28 2013-06-28 Obtain the method and device of image

Country Status (1)

Country Link
CN (1) CN103310422B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598120A (en) * 2013-10-30 2015-05-06 宏达国际电子股份有限公司 Color Sampling Method and Touch Control Device thereof
CN105354281A (en) * 2014-02-03 2016-02-24 株式会社隆创 Image inspection apparatus and image inspection procedure
CN106233716A (en) * 2014-04-22 2016-12-14 日本电信电话株式会社 Video display devices, video projection, dynamic illusion present device, video-generating device, their method, data structure, program
JP2017151536A (en) * 2016-02-22 2017-08-31 株式会社メガチップス Image processing apparatus, control program, and area specification method
CN108027958A (en) * 2015-09-21 2018-05-11 高通股份有限公司 Efficient display processing is carried out by prefetching
CN108305512A (en) * 2018-01-05 2018-07-20 珠海向导科技有限公司 A kind of classroom electronic notebook record system and method
CN111415357A (en) * 2020-03-19 2020-07-14 长光卫星技术有限公司 Portable shadow extraction method based on color image
CN111738949A (en) * 2020-06-19 2020-10-02 北京百度网讯科技有限公司 Image brightness adjusting method and device, electronic equipment and storage medium
CN113256490A (en) * 2020-02-13 2021-08-13 北京小米松果电子有限公司 Document image processing method, device and medium
CN115034911A (en) * 2022-07-05 2022-09-09 广州高新工程顾问有限公司 BIM-based whole-process cost consultation service method and system
CN115880248A (en) * 2022-12-13 2023-03-31 哈尔滨耐是智能科技有限公司 Surface scratch defect identification method and visual detection equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101068342A (en) * 2007-06-05 2007-11-07 西安理工大学 Video frequency motion target close-up trace monitoring method based on double-camera head linkage structure
US20100098331A1 (en) * 2008-09-26 2010-04-22 Sony Corporation System and method for segmenting foreground and background in a video
CN102073852A (en) * 2011-01-14 2011-05-25 华南理工大学 Multiple vehicle segmentation method based on optimum threshold values and random labeling method for multiple vehicles
CN103116987A (en) * 2013-01-22 2013-05-22 华中科技大学 Traffic flow statistic and violation detection method based on surveillance video processing

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101068342A (en) * 2007-06-05 2007-11-07 西安理工大学 Video frequency motion target close-up trace monitoring method based on double-camera head linkage structure
US20100098331A1 (en) * 2008-09-26 2010-04-22 Sony Corporation System and method for segmenting foreground and background in a video
CN102073852A (en) * 2011-01-14 2011-05-25 华南理工大学 Multiple vehicle segmentation method based on optimum threshold values and random labeling method for multiple vehicles
CN103116987A (en) * 2013-01-22 2013-05-22 华中科技大学 Traffic flow statistic and violation detection method based on surveillance video processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
甘玲 et al.: "A Background Update Method Applied to Moving Vehicle Detection", Computer Engineering (《计算机工程》) *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598120B (en) * 2013-10-30 2018-06-01 宏达国际电子股份有限公司 Color sample method and touch-control control device
CN104598120A (en) * 2013-10-30 2015-05-06 宏达国际电子股份有限公司 Color Sampling Method and Touch Control Device thereof
CN105354281A (en) * 2014-02-03 2016-02-24 株式会社隆创 Image inspection apparatus and image inspection procedure
CN106233716A (en) * 2014-04-22 2016-12-14 日本电信电话株式会社 Video display devices, video projection, dynamic illusion present device, video-generating device, their method, data structure, program
CN106233716B (en) * 2014-04-22 2019-12-24 日本电信电话株式会社 Dynamic illusion presenting device, dynamic illusion presenting method, and program
CN108027958A (en) * 2015-09-21 2018-05-11 高通股份有限公司 Efficient display processing is carried out by prefetching
JP2017151536A (en) * 2016-02-22 2017-08-31 株式会社メガチップス Image processing apparatus, control program, and area specification method
CN108305512A (en) * 2018-01-05 2018-07-20 珠海向导科技有限公司 A kind of classroom electronic notebook record system and method
CN113256490A (en) * 2020-02-13 2021-08-13 北京小米松果电子有限公司 Document image processing method, device and medium
CN111415357A (en) * 2020-03-19 2020-07-14 长光卫星技术有限公司 Portable shadow extraction method based on color image
CN111415357B (en) * 2020-03-19 2023-04-07 长光卫星技术股份有限公司 Portable shadow extraction method based on color image
CN111738949A (en) * 2020-06-19 2020-10-02 北京百度网讯科技有限公司 Image brightness adjusting method and device, electronic equipment and storage medium
CN111738949B (en) * 2020-06-19 2024-04-05 北京百度网讯科技有限公司 Image brightness adjusting method and device, electronic equipment and storage medium
CN115034911A (en) * 2022-07-05 2022-09-09 广州高新工程顾问有限公司 BIM-based whole-process cost consultation service method and system
CN115880248A (en) * 2022-12-13 2023-03-31 哈尔滨耐是智能科技有限公司 Surface scratch defect identification method and visual detection equipment
CN115880248B (en) * 2022-12-13 2024-02-09 哈尔滨耐是智能科技有限公司 Surface scratch defect identification method and visual detection equipment

Also Published As

Publication number Publication date
CN103310422B (en) 2016-08-31

Similar Documents

Publication Publication Date Title
CN103310422A (en) Image acquiring method and device
US8280165B2 (en) System and method for segmenting foreground and background in a video
US11308335B2 (en) Intelligent video surveillance system and method
CN101393603B (en) Method for recognizing and detecting tunnel fire disaster flame
CN101969718B (en) Intelligent lighting control system and control method
US7224735B2 (en) Adaptive background image updating
US9142032B2 (en) Method and apparatus for implementing motion detection
CN103093203B (en) A kind of human body recognition methods again and human body identify system again
CN101794406B (en) Automatic counting system for density of Bemisia tabaci adults
US7190811B2 (en) Adaptive tracking for gesture interfaces
CN101969719A (en) Intelligent lamplight control system and control method thereof
CN106204586B (en) A kind of moving target detecting method under complex scene based on tracking
CN105404847A (en) Real-time detection method for object left behind
CN103927520A (en) Method for detecting human face under backlighting environment
CN101237522A (en) Motion detection method and device
CN103996382A (en) Method and system for improving RGBW-image saturation
CN104599511B (en) Traffic flow detection method based on background modeling
Huerta et al. Exploiting multiple cues in motion segmentation based on background subtraction
CN106327525B (en) Cross the border behavior method of real-time for a kind of computer room important place
CN102339390B (en) Method and system for updating target template of video monitoring system
CN109766828A (en) A kind of vehicle target dividing method, device and communication equipment
CN104537688A (en) Moving object detecting method based on background subtraction and HOG features
CN112866581A (en) Camera automatic exposure compensation method and device and electronic equipment
JP7092615B2 (en) Shadow detector, shadow detection method, shadow detection program, learning device, learning method, and learning program
WO2023005827A1 (en) Exposure compensation method and apparatus, and electronic device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant