CN102665034A - Night effect removal method for camera-collected video - Google Patents


Info

Publication number
CN102665034A
CN102665034A CN2012100701892A CN201210070189A
Authority
CN
China
Prior art keywords
image
video
night
value
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012100701892A
Other languages
Chinese (zh)
Inventor
明安龙
傅慧源
吴世新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JIANGSU HUAFENG INTERNET OF THINGS TECHNOLOGY CO LTD
Original Assignee
JIANGSU HUAFENG INTERNET OF THINGS TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by JIANGSU HUAFENG INTERNET OF THINGS TECHNOLOGY CO LTD filed Critical JIANGSU HUAFENG INTERNET OF THINGS TECHNOLOGY CO LTD
Priority to CN2012100701892A priority Critical patent/CN102665034A/en
Publication of CN102665034A publication Critical patent/CN102665034A/en
Pending legal-status Critical Current

Abstract

The invention discloses a night effect removal method for camera-collected video, comprising the following steps: collecting images of the same scene in the daytime under various lighting conditions to form a set; training samples, creating a dictionary, and setting a threshold; applying color model processing and denoising to an image collected at night; computing a background image according to the threshold; obtaining each frame of the night video stream in turn; applying color model processing and denoising to obtain a foreground image; merging the foreground image with the background image; and removing noise and storing the merged images in a new video file until the last frame of the video stream has been processed. The method is simple and efficient. It effectively removes the night effect from surveillance video of a fixed scene, and enhances the contours and colors of objects in the video, so that the whole image approaches a daytime image, detail information is easy to obtain, and the resulting video is close to daytime quality.

Description

Night effect removal method for camera-collected video
Technical field
The present invention relates to a video processing technique, and specifically to a night effect removal method for camera-collected video.
Background art
In video processing, the traditional treatment of nighttime images is fairly simple: the brightness of image pixels is raised directly, so as to increase the brightness of the entire image.
Nighttime image enhancement based on nonlinear inverse tone mapping builds on the many tone-mapping algorithms that have appeared in recent years. One complex family of algorithms is based on contrast or on the gradient field; these algorithms emphasize preserving contrast rather than brightness, a line of thought originating in the observation that the human eye is most sensitive to contrast, that is, to the brightness ratios between different luminance regions. Because such tone mapping preserves contrast well, it produces very sharp images, at the cost of flattening the overall contrast of the picture. Examples of this kind of tone mapping include gradient-field high-dynamic-range compression and perceptual frameworks for high-dynamic-range images. Overall, tone-mapping operators can be divided into four kinds:
(1) global operators: the same nonlinear transfer curve is applied to every pixel;
(2) local operators: the transfer function for each pixel is chosen by considering its neighborhood;
(3) frequency-domain operators: the dynamic range is compressed according to the spatial-frequency content of the image;
(4) gradient-field operators: the image is attenuated at multiple scales in the gradient field, and a luminance image is then recovered from the new gradient image.
In contrast to tone mapping, inverse (negative) tone mapping converts a low-dynamic-range image into a high-dynamic-range image, enlarging the luminance range of the image within the limits of what the human visual system can perceive. Compared with tone mapping, research on inverse tone mapping started later and is not yet widely applied. A typical inverse tone-mapping algorithm performs a linear or exponential expansion of the pixels of a low-dynamic-range image, together with a threshold on pixel values, to widen the brightness range of the image. A nighttime image is dark and low in contrast and can be regarded as a kind of low-dynamic-range image, so inverse tone mapping can be applied to nighttime image enhancement; a nonlinear inverse tone-mapping operator has been proposed to enhance low-dynamic-range video with good results. Berulett performed nighttime image enhancement with a logarithm-based nonlinear mapping operator. In view of the similarity of the logarithmic, exponential and hyperbolic-tangent curves, the logarithmic inverse tone-mapping operator can be extended: video frames are preprocessed with the three nonlinear mapping operators to enhance the contrast and detail of the image, and their enhancement effects and processing times can be briefly compared.
Another method, the Retinex algorithm, was put forward by Edwin Land as a model of how the human visual system perceives the color and brightness of objects. In this model an image consists of two parts: the illumination of objects in the scene, corresponding to the low-frequency part of the image, and the reflectance of objects in the scene, corresponding to the high-frequency part; these are usually called the luminance image and the reflectance image. If the luminance image and the reflectance image can be separated from a given image, then, with color held constant, the image can be enhanced by changing the ratio of the two within the original image. Image enhancement based on SSR and wavelet transforms, analyzing the histogram of nighttime images, shows that the low-frequency components of the blue and green primaries dominate in original nighttime images while the mid-frequency component of the red primary dominates, so nighttime images are biased toward red tones. The MSR algorithm is therefore not well suited to enhancing nighttime images, while the SSR enhancement algorithm is influenced by the parameter of its Gaussian function: Gaussians with different parameters enhance the image differently. A smaller parameter mainly brings out the texture details of the image, while a larger parameter restores the colors of the image to a greater degree.
At present, research on enhancing ordinary low-quality images is active, but research on removing the night effect from nighttime images is scarce. In practical applications such as intelligent traffic monitoring and indoor surveillance, it is hard to make out the contours and color information of objects at night, yet detail such as the contours and colors of objects in a fixed monitored scene is exactly what matters.
Traditional night surveillance mainly uses infrared cameras, which usually yield gray-level images and cannot satisfy the demand for color and detail information. Infrared cameras are also expensive and unsuited to wide deployment. An ordinary camera has difficulty working normally at night: brightness is low, lighting is poor, and surveillance video quality suffers. Night video surveillance is particularly important for security and anti-theft work, yet research on night-effect removal is limited; for conventional cameras, the present invention proposes a new method.
Summary of the invention
The technical problem to be solved by the invention is to provide a method for removing the night effect from night video collected by a camera. The video restored by this method is close in effect to daytime video, and the method achieves good restoration even with an ordinary camera.
A night effect removal method for camera-collected video according to the present invention comprises the following steps:
1) collect images of the same scene in the daytime under various lighting conditions to form a set;
2) train the samples, create a dictionary, and set a threshold;
3) obtain one frame from the video stream collected at night in the same scene;
4) perform color model processing and denoising;
5) compare the image processed in step 4) with the dictionary created in step 2), perform deblurring and super-resolution enhancement, and compute the background image according to the threshold;
6) input the video stream collected at night and obtain each frame in turn;
7) perform color model processing and denoising to obtain a foreground image;
8) merge the image obtained in step 7) with the background image obtained in step 5);
9) remove noise and store the result in a newly created video file;
10) go to step 6) until the last frame of the video stream has been processed, then end.
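The ten steps above can be sketched as a driver loop. This is a hypothetical sketch, not the patented implementation: every helper passed in (train, color_denoise, compute_background, fuse, write_frame) is a placeholder name standing for the operation the corresponding step describes.

```python
# Hypothetical driver for steps 1)-10). All helper functions are
# placeholders standing in for the operations the patent describes.

def remove_night_effect(day_images, night_stream, train, color_denoise,
                        compute_background, fuse, write_frame):
    dictionary, threshold = train(day_images)                      # steps 1)-2)
    first = color_denoise(next(night_stream))                      # steps 3)-4)
    background = compute_background(first, dictionary, threshold)  # step 5)
    out = []
    for frame in night_stream:                                     # steps 6), 10)
        fg = color_denoise(frame)                                  # step 7)
        out.append(write_frame(fuse(fg, background)))              # steps 8)-9)
    return out
```

Note that the background is computed once from the first frame and reused for the whole stream, matching the patent's claim that one background serves different night video streams of the same scene.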
The color model processing in steps 4) and 7) is as follows:
A color model is proposed based on the relationship between the three channels of a color image. Its main formulas are (1), (2) and (3), which are reproduced in the original publication only as images. In the formulas, α is a set parameter with value range -5 < α < 0; m is the mean of the entire image; R(x, y), G(x, y) and B(x, y) are the R, G and B channel values of pixel (x, y) in the original image; and μ_R, μ_G and μ_B are the means of the R, G and B channels of the entire image. Substituting the three channel values of each pixel of the night image into the formulas yields new gray values; the new image obtained in this way keeps its color in balance while its brightness is enhanced.
The denoising process in steps 4) and 7) is as follows:
A. Detection of noise points: examine each pixel in the image. Let x(i, j) be the gray value of the pixel under test at position (i, j), and let W(i, j) denote the window region centered on that pixel; the maximum and minimum values within the window are computed according to formulas (4) and (5), which are reproduced in the original only as images. In the formulas, n denotes noise and s denotes signal. When the absolute difference between the value of the center pixel and each of its four adjacent pixels exceeds a threshold K, the pixel is regarded as a noise point.
B. Filtering of noise points: the detected noise points are removed by median filtering.
The deblurring process in step 5) is as follows:
First initialize the image, then substitute the initial blur kernel and compute the estimated blur kernel iteratively according to formula (6), which is reproduced in the original only as an image. In formula (6), x denotes the source image, k the blur kernel, y the blurred image, and β a constant.
A preliminary deblurred image is then obtained by deconvolving the image according to formula (7) (also an image in the original). This is a least-squares regularization problem, and k can be solved with a formula of approximate closed form, formula (8). In that formula, F denotes the Fourier transform, F⁻¹ the inverse Fourier transform, F* the conjugate of F, and ∘ element-wise multiplication.
The deblurred image is substituted back as the initialized image, the sparse factor a of the super-resolution step is then computed iteratively from the blur kernel and the dictionary, and the final image reconstructed from the dictionary and the sparse factor approximately replaces the original image.
Calling the image processed in step 4) image A and the background image in the dictionary image B, the foreground-background separation of step 5) proceeds as follows:
5.1) Image A and image B are first divided into large blocks, and the similarity of the blocks at the same position in the two images is compared; the similarity function used here is the RMSE (root-mean-square error). When the similarity of two blocks meets the threshold, the two blocks are considered to belong to the same scene, i.e. both are background; the first layer marks the positions of foreground blocks. When the similarity of two blocks fails the threshold, the two blocks are considered to belong to different scenes, one foreground and one background.
5.2) The two processed images are divided again into smaller blocks and the similarity comparison continues: first check whether the threshold is met; if so, the block is background. If not, check whether the block was marked in the first layer: if it was, it is foreground; if not, it is background. The positions of foreground blocks marked this time form the second-layer marks.
5.3) The images are divided again into still smaller blocks and the blocks at the same position are compared. When the similarity value fails the threshold, check whether the block was marked in the second layer; when the similarity value meets the threshold, the block is considered background. In this way the foreground and background can be separated accurately.
Beneficial effects of the invention:
The invention addresses night-effect removal for fixed scenes captured by cameras in intelligent video surveillance, a problem rarely touched on in previous literature and patents. The method is simple and efficient. It effectively removes the night effect from surveillance video of a fixed scene at night; the contours and colors of objects in the video are enhanced, the overall image approaches a daytime image, detail information is easy to obtain, and the resulting video is close to daytime quality. The invention's correctness has been verified on different scenes.
Brief description of the drawings
Fig. 1 is a flowchart of the invention.
Specific embodiment
The purpose of the present invention is the study and realization of night-effect removal for fixed scenes in video surveillance. We illustrate our method with a segment of traffic surveillance video from Xitucheng Road; night-effect removal for other fixed scenes can be realized in the same way.
The invention is explained mainly in four parts: color model processing; denoising; deblurring and super-resolution enhancement; and background-foreground fusion.
1. Color model processing
At night the scene is dark and contrast is low; with the added interference of vehicle headlights and their reflections off the road, an ordinary camera cannot work normally in the dark. Because of the darkness, the brightness of the image must be raised first while keeping its color information. The traditional way to raise image brightness is to increase the value of the luminance channel directly. We instead propose a color model that processes the night image according to the relationship between the three channels of a color image, handling the three channels separately, so that brightness is improved while color information is retained. The main formulas are as follows:
Formulas (1), (2) and (3) are reproduced in the original publication only as images. In them, α is a set parameter with value range -5 < α < 0; α = -1.5 generally works well. m is the mean of the entire image; R(x, y), G(x, y) and B(x, y) are the R, G and B channel values of pixel (x, y) in the original image; and μ_R, μ_G and μ_B are the means of the R, G and B channels of the entire image. Substituting the three channel values of each pixel of the night image into the formulas yields new gray values; the new image obtained in this way keeps its color in balance while its brightness is enhanced.
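Since formulas (1)-(3) survive only as images, the transform below is an assumed stand-in that uses exactly the quantities the text names (a parameter alpha in (-5, 0), the whole-image mean, the per-pixel channel values and the per-channel means) to brighten each channel while keeping the channels balanced. The specific gain formula is an assumption, not the patented one.

```python
import numpy as np

# Hypothetical stand-in for formulas (1)-(3): each channel is scaled
# toward the global mean, so dim channels are brightened more, keeping
# color balance. The gain expression is an assumption.
def color_model(img, alpha=-1.5):
    img = img.astype(np.float64)
    global_mean = img.mean()                 # mean of the entire image
    out = np.empty_like(img)
    for c in range(3):                       # R, G, B channels
        channel = img[..., c]
        channel_mean = channel.mean()        # per-channel mean
        # alpha in (-5, 0) controls the strength; -alpha/5 is in (0, 1).
        gain = (global_mean / max(channel_mean, 1e-6)) ** (-alpha / 5.0)
        out[..., c] = np.clip(channel * gain, 0, 255)
    return out.astype(np.uint8)
```

With the text's suggested alpha = -1.5, a channel darker than the global mean receives a gain above 1 and a brighter channel a gain below 1.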
2. Denoising
Night surveillance video not only has low brightness but also contains unknown noise that degrades the picture quality of the video; to obtain a clearer picture, the image must be denoised.
Median filtering is a nonlinear denoising method widely used to remove impulse noise. Although it can effectively remove impulse noise from an image, using median filtering alone causes a loss of image detail and blurs the image. An impulse-noise filtering method based on noise-point detection, by contrast, can preserve the detail of the image while filtering out the noise. Such a method has two keys: first, the detection of noise points; second, their removal. The detection algorithm finds the noise points in the image; a good detection algorithm should find as many of them as possible while misjudging as few genuine information points as possible. For the removal itself, the median-filtering algorithm gives quite satisfactory results on impulse noise.
In a natural image there is strong correlation between adjacent pixels: the gray value at a point is very close to the gray values around it, except at isolated points (regarded as noise); even edge regions satisfy this condition. If the value of a pixel differs greatly from the values in its neighborhood, the pixel has very likely been polluted by noise. In the noise detection stage the main goal is to detect noise points as accurately as possible, so a relatively large detection window (such as 7*7 or 9*9) can be used. Let x(i, j) be the gray value at position (i, j) of the noise-polluted image, and let W(i, j) denote the window region centered on that pixel; the maximum and minimum values within the window are computed according to formulas (4) and (5), which are reproduced in the original only as images. In the formulas, n denotes noise and s denotes signal. When the absolute difference between the value of the center pixel and each of its four adjacent pixels exceeds a threshold K, the pixel is regarded as a noise point.
In the noise filtering stage a smaller filter window (3*3 or 5*5) is chosen, which protects detail effectively. When median filtering is performed, only the detected noise points are filtered: the gray values of the pixels in the window are sorted in ascending order, and the median is taken as the new gray value of the pixel.
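The two-stage filter described above (flag a pixel as impulse noise when it differs from each of its four neighbors by more than K, then median-filter only the flagged pixels) can be sketched as follows. Formulas (4)-(5) are images in the original, so the detector follows the prose description only.

```python
import numpy as np

# Two-stage impulse-noise filter: detect, then median-filter only the
# detected points, so non-noise detail is untouched.
def detect_and_filter(img, K=40):
    img = img.astype(np.int32)
    h, w = img.shape
    out = img.copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neighbors = [img[i-1, j], img[i+1, j], img[i, j-1], img[i, j+1]]
            # Noise point: differs from every 4-neighbor by more than K.
            if all(abs(int(img[i, j]) - int(n)) > K for n in neighbors):
                window = sorted(img[i-1:i+2, j-1:j+2].ravel())
                out[i, j] = window[4]        # median of the 3*3 window
    return out.astype(np.uint8)
```

A 3*3 filter window is used here, per the text's preference for a small window in the filtering stage; the detection window size K-test is applied only to the four direct neighbors, as the prose states.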
3. Deblurring and super-resolution enhancement
An image processed by the color model may have unclear edges because of the raised brightness, so the whole image becomes blurred. Adaptive blur-kernel estimation (a sharp image convolved with a blur kernel yields a blurred image) proceeds by initializing the image, substituting the initial blur kernel, and computing the estimated kernel iteratively; a preliminary deblurred image is then obtained by deconvolving the image. The deblurred image is substituted back as the initialized image, and the sparse factor of the super-resolution step is computed iteratively from the blur kernel and the dictionary. The blur kernel of the image is estimated according to formula (6).
Formulas (6), (7) and (8) are reproduced in the original publication only as images. In formula (6), x denotes the source image, k the blur kernel, y the blurred image, and β a constant. Formula (7) gives the preliminary deblurred image by deconvolution; this is a least-squares regularization problem, and k can be solved with a formula of approximate closed form, formula (8), in which F denotes the Fourier transform, F⁻¹ the inverse Fourier transform, F* the conjugate of F, and ∘ element-wise multiplication.
The sparse factor a of the super-resolution step is then computed iteratively from the blur kernel and the dictionary, and the final image reconstructed from the dictionary and the sparse factor approximately replaces the original image.
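The closed-form least-squares step described around formulas (7)-(8) can be sketched in the Fourier domain. Because the patented formulas are reproduced only as images, the expression below is a standard regularized solution assumed from their description (Fourier transform, its conjugate, element-wise products, a regularization constant): for min ||k * x - y||^2 + beta ||k||^2 under circular convolution, K = conj(F(x)) ∘ F(y) / (|F(x)|^2 + beta).

```python
import numpy as np

# Assumed regularized least-squares kernel estimate in the Fourier
# domain, built only from the operations the text names: the Fourier
# transform, its inverse, its conjugate, element-wise products and a
# regularization constant beta.
def estimate_kernel(x, y, beta=0.01):
    Fx = np.fft.fft2(x)
    Fy = np.fft.fft2(y)
    K = (np.conj(Fx) * Fy) / (np.abs(Fx) ** 2 + beta)
    return np.real(np.fft.ifft2(K))
```

When y really is x circularly convolved with some kernel and beta is small, this recovers that kernel up to the regularization bias.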
Image super-resolution refers to recovering a high-resolution image from a low-resolution image or image sequence. Many factors in image acquisition degrade image quality, such as optical aberration, atmospheric perturbation, motion, defocus and system noise; they cause blurring and deformation of the image. From the viewpoint of Fourier optics, an optical imaging system is a low-pass filter: because of optical diffraction, its transfer function is zero above a cutoff frequency determined by the diffraction limit. Clearly, ordinary image restoration techniques such as deconvolution can only recover frequencies of the object up to the cutoff frequency corresponding to the diffraction limit and cannot go beyond it; the energy and information outside the cutoff frequency are irretrievably lost. Super-resolution image enhancement is precisely an attempt to restore the information beyond the cutoff frequency, so that the image gains more detail and information. The greater part of a super-resolution enhancement algorithm lies in the creation of the dictionary: first a large sample set is chosen, mainly daytime images of the fixed scene at different times and under different lighting conditions; each image forms a subset of the dictionary, and the dictionary is created from them.
In the dictionary creation process, daytime images from different time periods are first chosen as training samples. All the information of each image in the training set is transformed into one column vector, so that the dictionary contains the information of all the images.
Formula (9) is reproduced in the original only as an image. In it, a denotes the sparse factor, y the original image, and D the dictionary. When D is known, the value of a can be computed according to formula (9). The sparse factor a is continually refined over many iterations, formula (9) is driven toward its optimal solution, and the final image comes closer to a daytime image.
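Formula (9) relates the image, the dictionary and the sparse factor, with the factor refined over many iterations. As a stand-in for the unspecified iteration, the sketch below uses ISTA (iterative soft-thresholding) to solve y ≈ D a with an L1 penalty; the choice of ISTA is an assumption, not the patented iteration.

```python
import numpy as np

# ISTA sketch for the sparse factor a in y ~= D a: a gradient step on
# the least-squares term followed by soft-thresholding for sparsity.
def sparse_factor(D, y, lam=0.01, iters=50):
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / largest eigenvalue of D^T D
    for _ in range(iters):
        grad = D.T @ (D @ a - y)
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft-threshold
    return a
```

Each column of D would hold one vectorized daytime training image, per the dictionary-creation description above.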
4. Foreground and background separation
The traditional method of separating foreground and background is background subtraction: foreground and background can be separated by simple differencing of consecutive frames. But in nighttime images both foreground and background are very blurred, and the foreground and background obtained by background subtraction carry large errors. Our method is described below.
For the input night video stream, one frame of video is taken first, color model processing is applied to the image, and the processed image is then compared with the background image. Here the processed image is called image A and the background image is called image B.
Image A and image B are first divided into large blocks, for example 30*30 blocks, and the similarity of the blocks at the same position in the two images is compared; the similarity function used here is the RMSE (root-mean-square error). When the similarity of two blocks meets the threshold (set in advance from a large number of training pictures), the two blocks are considered to belong to the same scene, i.e. both are background; the first layer marks the positions of foreground blocks. When the similarity fails the threshold, the two blocks are considered to belong to different scenes, one foreground and one background. The two processed images are divided again into smaller blocks, for example 10*10, and the similarity comparison continues: first check whether the threshold is met; if so, the block is background. If not, check whether the block was marked in the first layer: if it was, it is foreground; if not, it is background. The positions of foreground blocks marked this time form the second-layer marks. The images are divided again into 5*5 blocks and the blocks at the same position are compared; when the similarity value fails the threshold, check whether the block was marked in the second layer, and when the similarity value meets the threshold, the block is considered background. In this way the foreground and background can be separated accurately.
In the block comparison, the block is the minimum unit of comparison. The step by which the block moves can be set manually; our experiments verified that results are more accurate with a step of 3 or 5 for 30*30 blocks, a step of 2 for 10*10 blocks, and a step of 1 for 5*5 blocks, which sorts out foreground and background more precisely. The figure on the left is one frame of a night surveillance video, and the figure on the right shows the separated foreground and background.
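One level of the block comparison can be sketched as follows (the patent runs it at 30*30, then 10*10, then 5*5, with layer-to-layer marking). Here low RMSE is treated as high similarity, which appears to be the intent of the threshold test; the stride is fixed at the block size for brevity, whereas the patent moves blocks by a smaller step.

```python
import numpy as np

# Single-scale block classification: blocks of frame A and background B
# at the same position are compared by RMSE; dissimilar blocks are
# marked as foreground.
def mark_foreground(A, B, b, thresh):
    h, w = A.shape
    mask = np.zeros((h // b, w // b), dtype=bool)   # True = foreground block
    for i in range(h // b):
        for j in range(w // b):
            blk_a = A[i*b:(i+1)*b, j*b:(j+1)*b].astype(np.float64)
            blk_b = B[i*b:(i+1)*b, j*b:(j+1)*b].astype(np.float64)
            rmse = np.sqrt(np.mean((blk_a - blk_b) ** 2))
            mask[i, j] = rmse > thresh
    return mask
```

Repeating this with smaller b and consulting the previous level's mask would give the two-layer marking the patent describes.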
For a given monitoring scene, sample images collected in the daytime under different lighting conditions are processed and built into a dictionary. This can be done offline, so the dictionary is already trained whenever the algorithm is invoked. The background can be computed from the trained dictionary and one frame of night video. Once computed, the background image can be retained, and the same background can be applied to different night surveillance video streams.
In a frame of night traffic surveillance video, the edges of people and cars are soft, the light is very dim and the brightness very low; it is hard to tell cars from pedestrians. Our algorithm first applies color model processing to the image; after processing, the picture becomes much clearer, although compared with a daytime image it still looks, to the human eye, like night under very strong light.
During offline training, a large number of daytime image samples (here 100 daytime images from different time periods), all of them background images, are first chosen to train the dictionary, so that the dictionary contains the main information of the images. The input image undergoes deblurring, and from the estimated blur kernel, the deblurred image and the dictionary, the background image is computed iteratively. The background image computed at this point is not simply a daytime background image, but an image that carries the night image's information and is very similar to the daytime background.
The night surveillance video stream is split into individual images, and each frame undergoes color model processing followed by denoising; the processed image is then compared with the background image by similarity, and the foreground is separated. In the final result the foreground is the outcome of the color model processing, and the background is the background image computed in advance.
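The fusion of step 8) is only loosely specified, so the sketch below is a hypothetical compositing pass: foreground blocks, per the block mask from the separation step, are pasted over the precomputed daytime-like background.

```python
import numpy as np

# Hypothetical fusion: copy foreground blocks (where mask is True) from
# the processed night frame onto the precomputed background image.
def fuse_blocks(foreground, background, mask, b):
    out = background.copy()
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j]:
                out[i*b:(i+1)*b, j*b:(j+1)*b] = foreground[i*b:(i+1)*b, j*b:(j+1)*b]
    return out
```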
The above is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements without departing from the principle of the invention, and such improvements should also be regarded as within the protection scope of the invention.

Claims (5)

1. A night effect removal method for camera-collected video, characterized by comprising the following steps:
1) collect images of the same scene in the daytime under various lighting conditions to form a set;
2) train the samples, create a dictionary, and set a threshold;
3) obtain one frame from the video stream collected at night in the same scene;
4) perform color model processing and denoising;
5) compare the image processed in step 4) with the dictionary created in step 2), perform deblurring and super-resolution enhancement, and compute the background image according to the threshold;
6) input the video stream collected at night and obtain each frame in turn;
7) perform color model processing and denoising to obtain a foreground image;
8) merge the image obtained in step 7) with the background image obtained in step 5);
9) remove noise and store the result in a newly created video file;
10) go to step 6) until the last frame of the video stream has been processed, then end.
2. The night-effect removal method for camera-captured video according to claim 1, characterized in that the color-model processing described in steps 4) and 7) is as follows:
Based on the relationship among the three channels of a color image, a color model is proposed; its main formulas are formulas (1), (2) and (3).
[Formulas (1)-(3) appear in the source only as images and are not reproduced here.]
In the formulas there are: a preset parameter with value range (-5, 0); the mean value of the entire image; the R, G and B channel values of the pixel (x, y) in the original image; and the mean values of the R, G and B channels of the entire image. The three channel values of each pixel of the night image are substituted into the formulas respectively to obtain new gray values; the image obtained in this way keeps its color balanced while its brightness is enhanced.
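The exact formulas (1)-(3) survive only as unreproduced images, but the symbols the claim lists (per-channel pixel values, per-channel means, and the whole-image mean) are exactly those of the classic gray-world balancing scheme, so a sketch in that spirit is shown below. This is an assumption, not the patent's formula, and the parameter from the range (-5, 0) is omitted.

```python
import numpy as np

def gray_world_balance(img):
    """Sketch in the spirit of formulas (1)-(3): rescale each channel by
    the ratio of the whole-image mean to that channel's mean (gray-world
    balance). The patent's extra parameter in (-5, 0) is not modeled."""
    img = img.astype(np.float64)
    m = img.mean()                      # mean of the entire image
    out = np.empty_like(img)
    for c in range(3):                  # R, G, B channels
        mc = img[..., c].mean()         # per-channel mean
        out[..., c] = img[..., c] * (m / mc)
    return np.clip(out, 0, 255)

# Toy night image with a strong blue color cast.
rng = np.random.default_rng(0)
night = rng.uniform(0, 60, (8, 8, 3))
night[..., 2] += 80                     # blue channel dominates
balanced = gray_world_balance(night)
```

After balancing, the three channel means coincide, which is the "color kept in balance" property the claim describes.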
3. The night-effect removal method for camera-captured video according to claim 1, characterized in that the denoising process described in steps 4) and 7) is as follows:
A. Noise-point detection: each pixel in the image is checked. The gray value of the pixel under test at position (i, j) is taken, and a window region centered on that pixel is defined; the maximum and minimum values within the window are calculated according to formulas (4) and (5).
[Formulas (4) and (5) appear in the source only as images and are not reproduced here.]
In the formulas, n denotes noise and s denotes signal. When the absolute difference between the value of the center pixel and each of its four adjacent pixels exceeds the threshold K, the pixel is considered a noise point;
B. Noise-point removal: median filtering is applied to remove the detected noise points.
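The two-stage scheme of claim 3 (detect, then median-filter only the detected points) can be sketched as below. The threshold `K` and the window size are assumed values, and the ambiguous "four adjacent pixels" condition is read here as all four neighbors differing by more than K.

```python
import numpy as np

def remove_impulse_noise(img, K=60, win=3):
    """Sketch of claim 3: a pixel whose absolute difference from all four
    neighbors exceeds K is treated as a noise point and replaced by the
    median of its win x win window. K and win are assumed values."""
    h, w = img.shape
    out = img.astype(np.float64).copy()
    r = win // 2
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            v = float(img[i, j])
            neighbors = [img[i - 1, j], img[i + 1, j], img[i, j - 1], img[i, j + 1]]
            if all(abs(v - float(n)) > K for n in neighbors):   # noise detection
                window = img[i - r:i + r + 1, j - r:j + r + 1]
                out[i, j] = np.median(window)                   # median filtering
    return out

# Toy image: flat gray with one salt (impulse) pixel.
img = np.full((7, 7), 100.0)
img[3, 3] = 255.0
clean = remove_impulse_noise(img)
```

Filtering only detected points, rather than median-filtering the whole image, preserves uncorrupted detail.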
4. The night-effect removal method for camera-captured video according to claim 1, characterized in that the deblurring process in step 5) is as follows:
The image is first initialized, the initialized blur kernel is then substituted in, and the estimated blur kernel is computed by iterative calculation according to formula (6).
[Formula (6) appears in the source only as an image and is not reproduced here.]
In formula (6), the symbols denote the source image, the blur kernel, the blurred image, and a constant, respectively.
A preliminary deblurred image is then obtained by a deconvolution operation on the image according to formula (7).
[Formula (7) appears in the source only as an image and is not reproduced here.]
This is a least-squares regularization problem, and the blur kernel can be solved with a formula of approximate closed form, formula (8).
[Formula (8) appears in the source only as an image and is not reproduced here.]
In formula (8), the symbols denote the Fourier transform, the inverse Fourier transform, the conjugate of the Fourier transform, and element-wise multiplication, respectively.
The image produced by the deblurring processing is substituted back as the initialized image, the sparse factor a in the super-resolution step is then computed iteratively from the blur kernel and the dictionary, and the final image is an approximation (given in the source only as an image formula) that replaces the original image.
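Formula (8) itself is not reproduced, but the ingredients the claim lists (Fourier transform, its inverse, a conjugate, element-wise multiplication, and a regularization constant) are those of the standard closed-form regularized deconvolution, which can be sketched as follows. The regularization constant `gamma` is an assumed value, and this sketch solves for the image given a known kernel rather than for the kernel, purely to illustrate the frequency-domain step.

```python
import numpy as np

def regularized_deconvolve(blurred, kernel, gamma=1e-3):
    """Sketch of the closed-form step described around formula (8):
    a least-squares regularized deconvolution in the Fourier domain,
    using the conjugate of the kernel spectrum and element-wise
    multiplication. gamma is an assumed regularization constant."""
    K = np.fft.fft2(kernel, s=blurred.shape)       # kernel spectrum, zero-padded
    Y = np.fft.fft2(blurred)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + gamma)  # regularized inverse filter
    return np.real(np.fft.ifft2(X))

# Toy demo: blur a random image with a known 3x3 box kernel (circular
# convolution via the FFT), then recover it.
rng = np.random.default_rng(1)
orig = rng.uniform(0, 1, (32, 32))
kernel = np.full((3, 3), 1.0 / 9.0)
K = np.fft.fft2(kernel, s=orig.shape)
blurred = np.real(np.fft.ifft2(K * np.fft.fft2(orig)))
restored = regularized_deconvolve(blurred, kernel)
```

The `gamma` term keeps the division stable at frequencies where the kernel spectrum is near zero, which is what makes the problem well-posed as a least-squares regularization.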
5. The night-effect removal method for camera-captured video according to claim 1, characterized in that, denoting the image processed in step 4) as image A and the background image in the dictionary as image B, the foreground/background separation process of step 5) is as follows:
5.1) Image A and image B are first divided into large blocks, and the similarity of the blocks at the same position in the two images is compared, the similarity function here being the RMSE (root-mean-square error). When the similarity of two blocks is greater than or equal to the threshold, the two blocks are considered to belong to the same scene and both are background; when the similarity of two blocks is below the threshold, the two blocks are considered to belong to different scenes, one being foreground and the other background. The positions of the foreground blocks are recorded as first-layer marks;
5.2) The two processed images are divided again into smaller blocks and the similarity comparison continues. It is first checked whether the similarity reaches the threshold: a block is background if the similarity is greater than or equal to the threshold; if it is below the threshold, it is checked whether the block was marked in the first layer, in which case it is foreground, otherwise it is background. The foreground-block positions recorded this time are the second-layer marks;
5.3) The images are divided again into still smaller blocks and the blocks at the same position are compared. When the similarity value is below the threshold, it is checked whether the block was marked in the second layer; when the similarity value is greater than or equal to the threshold, the block is considered background. In this way the foreground and background can be separated accurately.
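A single pass of the block comparison in claim 5 can be sketched as below (the claim runs three coarse-to-fine passes; only one scale is shown). The block size and threshold are assumed values, and since RMSE is an error measure, a larger RMSE here means less similar, so blocks whose RMSE exceeds the threshold are marked as foreground.

```python
import numpy as np

def block_rmse(a, b):
    """Root-mean-square error between two equally sized blocks."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def foreground_mask(img_a, img_b, block=4, thresh=10.0):
    """Single-scale sketch of the claim-5 comparison: divide image A
    (night frame) and image B (dictionary background) into blocks and
    mark as foreground every block pair whose RMSE exceeds the
    threshold. block and thresh are assumed values."""
    h, w = img_a.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for bi in range(h // block):
        for bj in range(w // block):
            s = np.s_[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
            if block_rmse(img_a[s].astype(float), img_b[s].astype(float)) > thresh:
                mask[bi, bj] = True     # different scene -> foreground block
    return mask

# Toy scene: identical background except for one bright foreground patch.
bg = np.full((16, 16), 50.0)
frame = bg.copy()
frame[4:8, 4:8] = 200.0                 # foreground object
mask = foreground_mask(frame, bg)
```

The coarse-to-fine repetition in steps 5.1)-5.3) would rerun this with progressively smaller `block` values, keeping only refined blocks that lie inside the previous layer's marks.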
CN2012100701892A 2012-03-16 2012-03-16 Night effect removal method for camera-collected video Pending CN102665034A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012100701892A CN102665034A (en) 2012-03-16 2012-03-16 Night effect removal method for camera-collected video


Publications (1)

Publication Number Publication Date
CN102665034A true CN102665034A (en) 2012-09-12

Family

ID=46774438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012100701892A Pending CN102665034A (en) 2012-03-16 2012-03-16 Night effect removal method for camera-collected video

Country Status (1)

Country Link
CN (1) CN102665034A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090046995A1 (en) * 2007-08-13 2009-02-19 Sandeep Kanumuri Image/video quality enhancement and super-resolution using sparse transformations
CN101409825A (en) * 2007-10-10 2009-04-15 中国科学院自动化研究所 Nighttime vision monitoring method based on information fusion
CN101556690A (en) * 2009-05-14 2009-10-14 复旦大学 Image super-resolution method based on overcomplete dictionary learning and sparse representation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU HAITAO et al.: "Nighttime Video Vehicle Detection in Complex Environments", APPLICATION RESEARCH OF COMPUTERS *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020930A (en) * 2012-11-26 2013-04-03 天津大学 Nighttime monitoring video enhancing method
CN103595933A (en) * 2013-11-25 2014-02-19 陈皓 Method for image noise reduction
CN103595933B (en) * 2013-11-25 2019-04-16 陈皓 A kind of noise-reduction method of image
CN105096263A (en) * 2014-05-22 2015-11-25 安凯(广州)微电子技术有限公司 Image filtering method and device
CN105096263B (en) * 2014-05-22 2018-10-09 安凯(广州)微电子技术有限公司 image filtering method and device
WO2015154526A1 (en) * 2014-07-09 2015-10-15 中兴通讯股份有限公司 Color restoration method and apparatus for low-illumination-level video surveillance images
CN104168404A (en) * 2014-07-25 2014-11-26 南京杰迈视讯科技有限公司 Infrared camera night vision rectification method
CN104168404B (en) * 2014-07-25 2017-09-12 南京杰迈视讯科技有限公司 A kind of thermal camera night vision antidote
CN104318529A (en) * 2014-10-19 2015-01-28 新疆宏开电子系统集成有限公司 Method for processing low-illumination images shot in severe environment
CN106530248A (en) * 2016-10-28 2017-03-22 中国南方电网有限责任公司 Method for intelligently detecting scene video noise of transformer station
CN108198138A (en) * 2017-11-24 2018-06-22 北京邮电大学 A kind of night effect minimizing technology and device for monitor video
CN108198138B (en) * 2017-11-24 2019-05-03 北京邮电大学 A kind of night effect minimizing technology and device for monitor video
CN108309708A (en) * 2018-01-23 2018-07-24 李思霈 Blind-man crutch
CN108460414B (en) * 2018-02-27 2019-09-17 北京三快在线科技有限公司 Generation method, device and the electronic equipment of training sample image
CN108460414A (en) * 2018-02-27 2018-08-28 北京三快在线科技有限公司 Generation method, device and the electronic equipment of training sample image
US11107205B2 (en) 2019-02-18 2021-08-31 Samsung Electronics Co., Ltd. Techniques for convolutional neural network-based multi-exposure fusion of multiple image frames and for deblurring multiple image frames
US11062436B2 (en) 2019-05-10 2021-07-13 Samsung Electronics Co., Ltd. Techniques for combining image frames captured using different exposure settings into blended images
US11095829B2 (en) 2019-06-11 2021-08-17 Samsung Electronics Co., Ltd. Apparatus and method for high dynamic range (HDR) image creation of dynamic scenes using graph cut-based labeling
CN111369475A (en) * 2020-03-26 2020-07-03 北京百度网讯科技有限公司 Method and apparatus for processing video
CN111369475B (en) * 2020-03-26 2023-06-23 北京百度网讯科技有限公司 Method and apparatus for processing video
CN111429375A (en) * 2020-03-27 2020-07-17 扆亮海 Night monitoring video quality improving method assisted by daytime image reference
US11430094B2 (en) 2020-07-20 2022-08-30 Samsung Electronics Co., Ltd. Guided multi-exposure image fusion
CN112565178B (en) * 2020-10-21 2023-04-28 深圳供电局有限公司 Unmanned aerial vehicle electrical equipment inspection system based on streaming media technology
CN112565178A (en) * 2020-10-21 2021-03-26 深圳供电局有限公司 Unmanned aerial vehicle power equipment system of patrolling and examining based on streaming media technique
CN112785504B (en) * 2021-02-23 2022-12-23 深圳市来科计算机科技有限公司 Day and night image fusion method
CN112785504A (en) * 2021-02-23 2021-05-11 深圳市来科计算机科技有限公司 Day and night image fusion method
CN113112418A (en) * 2021-03-26 2021-07-13 浙江理工大学 Low-illumination image iterative enhancement method
CN113112418B (en) * 2021-03-26 2023-10-10 浙江理工大学 Low-illumination image iteration enhancement method
CN113591832A (en) * 2021-08-20 2021-11-02 杭州数橙科技有限公司 Training method of image processing model, document image processing method and device
CN113591832B (en) * 2021-08-20 2024-04-05 杭州数橙科技有限公司 Training method of image processing model, document image processing method and device


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120912