CN104978720A - Video image raindrop removal method and apparatus - Google Patents

Video image raindrop removal method and apparatus

Info

Publication number
CN104978720A
CN104978720A (application CN201510379692.XA)
Authority
CN
China
Prior art keywords
component
color space
initial image
image
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510379692.XA
Other languages
Chinese (zh)
Inventor
朱青松
袁杰
王磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201510379692.XA priority Critical patent/CN104978720A/en
Publication of CN104978720A publication Critical patent/CN104978720A/en
Pending legal-status Critical Current

Abstract

The invention provides a video image raindrop removal method and apparatus, and relates to the technical field of rain removal from video images. The method comprises the steps of: obtaining each frame of a video as an initial image, converting it from the RGB color space to the YCbCr color space, and obtaining the Y component of the converted initial image; performing two-dimensional empirical mode decomposition to generate a contour image of the initial image; performing bilateral filtering to generate a salient-edge image of the object; binarizing and intersecting the salient-edge image and the contour image to generate a rain-removed image; applying wavelet denoising to the rain-removed image to generate a denoised image; alpha-blending the Y component of the denoised image with the Y component of the initial image to generate a result image in the YCbCr color space; converting the result image back to the RGB color space to form a result image in the RGB color space; and synthesizing the per-frame result images into a rain-removed video. The method and apparatus solve the problem that current simple value-substitution methods greatly reduce the image quality of the output video.

Description

Video image raindrop removal method and apparatus
Technical field
The present invention relates to the technical field of rain removal from video images, and in particular to a video image raindrop removal method and apparatus.
Background technology
At present, outdoor computer vision systems are widely used in technical fields such as national defense, medical treatment, and intelligent transportation. However, environmental factors such as inclement weather can severely degrade the performance of an outdoor computer vision system, or even cause it to fail completely. Effective methods for eliminating the effects of adverse weather are therefore essential for a round-the-clock outdoor computer vision system. Among adverse weather conditions, raindrops, owing to their relatively large particle radius and other complicated physical characteristics, can substantially degrade the quality of the images captured by a vision system. Image raindrop removal techniques use the physical and frequency characteristics of rain to identify raindrops in an image and remove them. This not only markedly improves image quality but also aids further image processing. Image raindrop removal has therefore become an indispensable supporting technique in the field of computer vision.
In recent years, research on detecting and removing raindrops in images has become a focus. In current raindrop removal, the prior art mostly adopts a simple value-substitution method: detected raindrop pixels are replaced with the gray values of background pixels, thereby reconstructing the image and removing the raindrops. However, because raindrops cause a fogging effect on the image or video, simple value substitution greatly reduces the quality of the output video images.
Summary of the invention
Embodiments of the invention provide a video image raindrop removal method and apparatus, to solve the problem that raindrops cause a fogging effect on an image or video and that simple value-substitution methods greatly reduce the quality of the output video images.
To achieve the above object, the present invention adopts the following technical scheme:
A video image raindrop removal method, comprising:
obtaining each frame of a video as an initial image, converting each initial image from the RGB color space to the YCbCr color space, and obtaining the Y component of the converted initial image;
performing two-dimensional empirical mode decomposition on the Y component of the initial image to generate a contour image of the initial image;
applying bilateral filtering to each pixel of the Y component of the initial image to generate a salient-edge image of the object;
binarizing and intersecting the salient-edge image and the contour image to generate a rain-removed image;
applying wavelet denoising, according to the wavelet modulus maxima algorithm, to the rain-removed image to generate a denoised image;
alpha-blending the Y component of the denoised image with the Y component of the initial image to generate a result image in the YCbCr color space;
converting the result image from the YCbCr color space back to the RGB color space to form a result image in the RGB color space;
synthesizing the per-frame RGB result images to generate a rain-removed video.
Specifically, converting each initial image from the RGB color space to the YCbCr color space and obtaining the Y component of the converted initial image comprises:
applying the formula

$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \frac{1}{255} \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112.000 \\ 112.000 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

to convert each initial image from the RGB color space to the YCbCr color space and obtain the Y component of the converted initial image; wherein R, G, and B are the intensity values of the R, G, and B components of each pixel of the initial image, and Y, Cb, and Cr are the Y, Cb, and Cr components of the initial image after color space conversion.
Specifically, performing two-dimensional empirical mode decomposition on the Y component of the initial image to generate a contour image of the initial image comprises:
Step 101: inputting the Y component of one frame of the initial image;
Step 102: mapping the Y component of the initial image onto an XOY rectangular coordinate plane, where the gray value of each pixel of the Y component serves as the Z coordinate;
Step 103: identifying the set of local maximum points and the set of local minimum points of the Y component by image morphology;
Step 104: performing planar Delaunay triangulation on the local maximum point set and the local minimum point set respectively, then interpolating and smoothing to obtain the maximum envelope surface Emax and the minimum envelope surface Emin, and computing their algebraic mean E;
wherein $E = \frac{E_{max} + E_{min}}{2}$;
Step 105: subtracting the algebraic mean E from the Y component of the initial image to form target information;
Step 106: determining whether the target information satisfies the per-layer sifting termination condition;
the per-layer sifting termination condition being that the number of local maximum and local minimum points equals the number of zero crossings or differs from it by at most one, and that the algebraic mean is 0;
if the target information satisfies the termination condition, performing step 107; otherwise, returning to step 103;
Step 107: taking the target information as the n-th layer of image detail information;
Step 108: determining whether the n-th layer of image detail information has no more than one extreme point;
if it has no more than one extreme point, performing step 110; otherwise, if it has more than one extreme point, performing step 109;
Step 109: subtracting the n-th layer of image detail information from the Y component of the initial image, and returning to step 101;
Step 110: taking the target information as the contour image of the initial image.
Specifically, applying bilateral filtering to each pixel of the Y component of the initial image to generate a salient-edge image of the object comprises:
computing, for each pixel of the Y component of the initial image, the formula

$$g(i,j) = \frac{\sum_{k,l} f(k,l)\,\omega(i,j,k,l)}{\sum_{k,l} \omega(i,j,k,l)}$$

to generate the salient-edge image; wherein g(i, j) is the pixel value of the salient-edge image; (i, j) and (k, l) are two pixel coordinates of the Y component of the initial image; f(k, l) is the gray value at pixel (k, l); and ω(i, j, k, l) is a weight coefficient;
wherein $\omega(i,j,k,l) = \exp\!\left(-\frac{(i-k)^2+(j-l)^2}{2\sigma_d^2} - \frac{\|f(i,j)-f(k,l)\|^2}{2\sigma_r^2}\right)$, and f(i, j) is the gray value at pixel (i, j).
Specifically, alpha-blending the Y component of the denoised image with the Y component of the initial image to generate a result image in the YCbCr color space comprises:
performing alpha blending according to the formula $C = \alpha C_b + (1-\alpha) C_r$ to generate the result image in the YCbCr color space; wherein C is the Y component of the result image in the YCbCr color space; α is a preset channel value; C_b is the Y component of the denoised image; and C_r is the Y component of the initial image.
Specifically, after alpha-blending the Y component of the denoised image with the Y component of the initial image to generate the result image in the YCbCr color space, the method comprises:
adjusting the brightness of the Y component of the result image in the YCbCr color space according to an imadjust function.
A video image raindrop removal apparatus, comprising:
a color space conversion unit, configured to obtain each frame of a video as an initial image, convert each initial image from the RGB color space to the YCbCr color space, and obtain the Y component of the converted initial image;
a two-dimensional empirical mode decomposition unit, configured to perform two-dimensional empirical mode decomposition on the Y component of the initial image to generate a contour image of the initial image;
a bilateral filtering unit, configured to apply bilateral filtering to each pixel of the Y component of the initial image to generate a salient-edge image of the object;
a rain-removed image generation unit, configured to binarize and intersect the salient-edge image and the contour image to generate a rain-removed image;
a wavelet denoising unit, configured to apply wavelet denoising, according to the wavelet modulus maxima algorithm, to the rain-removed image to generate a denoised image;
an alpha blending unit, configured to alpha-blend the Y component of the denoised image with the Y component of the initial image to generate a result image in the YCbCr color space;
the color space conversion unit being further configured to convert the result image from the YCbCr color space back to the RGB color space to form a result image in the RGB color space;
and a rain-removed video synthesis unit, configured to synthesize the per-frame RGB result images to generate a rain-removed video.
In addition, the color space conversion unit is specifically configured to:
apply the formula

$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \frac{1}{255} \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112.000 \\ 112.000 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

to convert each initial image from the RGB color space to the YCbCr color space and obtain the Y component of the converted initial image; wherein R, G, and B are the intensity values of the R, G, and B components of each pixel of the initial image, and Y, Cb, and Cr are the Y, Cb, and Cr components of the initial image after color space conversion.
In addition, the two-dimensional empirical mode decomposition unit is specifically configured to perform:
Step 101: inputting the Y component of one frame of the initial image;
Step 102: mapping the Y component of the initial image onto an XOY rectangular coordinate plane, where the gray value of each pixel of the Y component serves as the Z coordinate;
Step 103: identifying the set of local maximum points and the set of local minimum points of the Y component by image morphology;
Step 104: performing planar Delaunay triangulation on the local maximum point set and the local minimum point set respectively, then interpolating and smoothing to obtain the maximum envelope surface Emax and the minimum envelope surface Emin, and computing their algebraic mean E;
wherein $E = \frac{E_{max} + E_{min}}{2}$;
Step 105: subtracting the algebraic mean E from the Y component of the initial image to form target information;
Step 106: determining whether the target information satisfies the per-layer sifting termination condition;
the per-layer sifting termination condition being that the number of local maximum and local minimum points equals the number of zero crossings or differs from it by at most one, and that the algebraic mean is 0;
if the target information satisfies the termination condition, performing step 107; otherwise, returning to step 103;
Step 107: taking the target information as the n-th layer of image detail information;
Step 108: determining whether the n-th layer of image detail information has no more than one extreme point;
if it has no more than one extreme point, performing step 110; otherwise, if it has more than one extreme point, performing step 109;
Step 109: subtracting the n-th layer of image detail information from the Y component of the initial image, and returning to step 101;
Step 110: taking the target information as the contour image of the initial image.
In addition, the bilateral filtering unit is specifically configured to:
compute, for each pixel of the Y component of the initial image, the formula

$$g(i,j) = \frac{\sum_{k,l} f(k,l)\,\omega(i,j,k,l)}{\sum_{k,l} \omega(i,j,k,l)}$$

to generate the salient-edge image; wherein g(i, j) is the pixel value of the salient-edge image; (i, j) and (k, l) are two pixel coordinates of the Y component of the initial image; f(k, l) is the gray value at pixel (k, l); and ω(i, j, k, l) is a weight coefficient;
wherein $\omega(i,j,k,l) = \exp\!\left(-\frac{(i-k)^2+(j-l)^2}{2\sigma_d^2} - \frac{\|f(i,j)-f(k,l)\|^2}{2\sigma_r^2}\right)$, and f(i, j) is the gray value at pixel (i, j).
In addition, the alpha blending unit is specifically configured to:
perform alpha blending according to the formula $C = \alpha C_b + (1-\alpha) C_r$ to generate the result image in the YCbCr color space; wherein C is the Y component of the result image in the YCbCr color space; α is a preset channel value; C_b is the Y component of the denoised image; and C_r is the Y component of the initial image.
In addition, the video image raindrop removal apparatus further comprises:
a brightness adjustment unit, configured to adjust the brightness of the Y component of the result image in the YCbCr color space according to an imadjust function.
The video image raindrop removal method and apparatus provided by the embodiments of the present invention obtain each frame of a video as an initial image, convert each initial image from the RGB color space to the YCbCr color space, and obtain the Y component of the converted initial image; perform two-dimensional empirical mode decomposition on the Y component to generate a contour image of the initial image; apply bilateral filtering to each pixel of the Y component to generate a salient-edge image of the object; binarize and intersect the salient-edge image and the contour image to generate a rain-removed image; apply wavelet denoising, according to the wavelet modulus maxima algorithm, to the rain-removed image to generate a denoised image; alpha-blend the Y component of the denoised image with the Y component of the initial image to generate a result image in the YCbCr color space; convert the result image back to the RGB color space to form a result image in the RGB color space; and synthesize the per-frame RGB result images to generate a rain-removed video. By performing color conversion, two-dimensional empirical mode decomposition, bilateral filtering, binarization and intersection, wavelet denoising, and alpha blending on each initial video frame, and then converting the color space back, a rain-removed result image can be generated, solving the problem that raindrops cause a fogging effect on an image or video and that simple value-substitution methods greatly reduce the quality of the output video images.
Description of the drawings
To explain the embodiments of the present invention or the technical schemes of the prior art more clearly, the drawings required by the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a video image raindrop removal method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of the two-dimensional empirical mode decomposition process in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the curve shape of the imadjust function in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a video image raindrop removal apparatus provided by an embodiment of the present invention.
Embodiment
The technical schemes in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
As shown in Fig. 1, an embodiment of the present invention provides a video image raindrop removal method, comprising:
Step 201: obtaining each frame of a video as an initial image, converting each initial image from the RGB color space to the YCbCr color space, and obtaining the Y component of the converted initial image.
Step 202: performing two-dimensional empirical mode decomposition on the Y component of the initial image to generate a contour image of the initial image.
Step 203: applying bilateral filtering to each pixel of the Y component of the initial image to generate a salient-edge image of the object.
Step 204: binarizing and intersecting the salient-edge image and the contour image to generate a rain-removed image.
The binarization can be implemented with the binarization command in Matlab.
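The binarize-and-intersect step can be sketched in a few lines: threshold each image to a 0/1 mask, then keep only pixels flagged in both masks. This is an illustrative Python sketch (the patent uses Matlab); the fixed threshold and the function names `binarize` and `intersect` are assumptions, not from the patent.

```python
def binarize(img, thresh):
    # Threshold to a 0/1 mask (a fixed-threshold stand-in for Matlab's binarization command)
    return [[1 if p > thresh else 0 for p in row] for row in img]

def intersect(mask_a, mask_b):
    # Keep only pixels flagged in both masks (logical AND)
    return [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(mask_a, mask_b)]
```

Intersecting the binarized salient-edge image with the binarized contour image keeps only structures present in both, suppressing rain streaks that appear in one but not the other.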
Step 205: applying wavelet denoising, according to the wavelet modulus maxima algorithm, to the rain-removed image to generate a denoised image.
The wavelet denoising process may proceed as follows: the noisy rain-removed image is subjected to a multi-scale wavelet transform from the spatial domain to the wavelet domain; at each scale, the wavelet coefficients of the signal are extracted and the noise coefficients are removed; finally, an inverse wavelet transform reconstructs the signal.
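The wavelet modulus maxima algorithm itself is involved; as a simplified stand-in that shows the transform/threshold/invert pattern described above, the sketch below performs one level of a 1-D Haar transform, hard-thresholds the detail coefficients, and inverts the transform. The threshold value and the function name are illustrative assumptions, not the patent's algorithm.

```python
def haar_denoise(x, threshold):
    # One-level Haar wavelet transform of an even-length signal
    s2 = 2 ** 0.5
    pairs = list(zip(x[0::2], x[1::2]))
    approx = [(a + b) / s2 for a, b in pairs]   # low-frequency coefficients
    detail = [(a - b) / s2 for a, b in pairs]   # high-frequency coefficients
    # Hard threshold: small detail coefficients are treated as noise and zeroed
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    # Inverse Haar transform reconstructs the denoised signal
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s2, (a - d) / s2])
    return out
```

With a zero threshold the reconstruction is exact; with a positive threshold, small high-frequency fluctuations are smoothed away while large features survive.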
Step 206: alpha-blending the Y component of the denoised image with the Y component of the initial image to generate a result image in the YCbCr color space.
Step 207: converting the result image from the YCbCr color space back to the RGB color space to form a result image in the RGB color space.
Step 208: synthesizing the per-frame RGB result images to generate a rain-removed video.
The video image raindrop removal method provided by the embodiment of the present invention thus performs color conversion, two-dimensional empirical mode decomposition, bilateral filtering, binarization and intersection, wavelet denoising, and alpha blending on each initial video frame, converts the color space back, and synthesizes the per-frame results, generating a rain-removed video and solving the problem that raindrops cause a fogging effect on an image or video and that simple value-substitution methods greatly reduce the quality of the output video images.
Specifically, in step 201 above, converting each initial image from the RGB color space to the YCbCr color space and obtaining the Y component of the converted initial image can be realized by the following formula:

$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \frac{1}{255} \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112.000 \\ 112.000 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

wherein R, G, and B are the intensity values of the R, G, and B components of each pixel of the initial image, and Y, Cb, and Cr are the Y, Cb, and Cr components of the initial image after color space conversion.
It is worth explaining that processing all three components in the RGB color space would waste considerable time; the color space is therefore converted to YCbCr here. From the color properties of raindrops it can be determined that, in the YCbCr space, only the Y component is affected by raindrops.
Transforming the above formula yields:

$$\begin{bmatrix} C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 128 \\ 128 \end{bmatrix} + \frac{1}{255} \begin{bmatrix} -37.797 & -74.203 & 112.000 \\ 112.000 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R_{bg}+\Delta R \\ G_{bg}+\Delta G \\ B_{bg}+\Delta B \end{bmatrix}$$

and further:

$$C_b = 128 - 0.1482\,R_{bg} - 0.2910\,G_{bg} + 0.4392\,B_{bg}$$
$$C_r = 128 + 0.4392\,R_{bg} - 0.3678\,G_{bg} - 0.0714\,B_{bg}$$

wherein R_bg, G_bg, and B_bg are the background intensity values of a pixel on the three components when not covered by raindrops, and ΔR, ΔG, and ΔB are the increments of background intensity on the R, G, and B components caused by raindrops. If a pixel is not covered by raindrops, ΔR, ΔG, and ΔB are 0. The formulas above show that, after color space conversion, only the Y component of an image affected by raindrops contains a raindrop component; the Cb and Cr components cancel the intensity change caused by raindrops and are thus unaffected by their presence.
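The cancellation can be checked numerically: a raindrop that adds (approximately) the same increment to R, G, and B shifts only the Y component, leaving Cb and Cr unchanged, because each of the Cb and Cr coefficient rows sums to zero. The pixel values and increment below are made-up examples.

```python
def rgb_to_ycbcr(r, g, b):
    # The patent's conversion matrix, applied to one pixel
    y  = 16.0  + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0
    cb = 128.0 + (-37.797 * r - 74.203 * g + 112.000 * b) / 255.0
    cr = 128.0 + (112.000 * r - 93.786 * g - 18.214 * b) / 255.0
    return y, cb, cr

bg = (90.0, 120.0, 60.0)   # hypothetical background pixel
delta = 40.0                # hypothetical equal raindrop increment on R, G and B
y0, cb0, cr0 = rgb_to_ycbcr(*bg)
y1, cb1, cr1 = rgb_to_ycbcr(bg[0] + delta, bg[1] + delta, bg[2] + delta)
```

Here `y1` exceeds `y0` while `cb1`/`cr1` match `cb0`/`cr0` to floating-point precision, which is exactly why the method processes only the Y component.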
Specifically, as shown in Fig. 2, in step 202 above, performing two-dimensional empirical mode decomposition on the Y component of the initial image to generate a contour image can be realized, for example, as follows:
Step 101: inputting the Y component of one frame of the initial image.
Step 102: mapping the Y component of the initial image onto an XOY rectangular coordinate plane, where the gray value of each pixel of the Y component serves as the Z coordinate.
Step 103: identifying the set of local maximum points and the set of local minimum points of the Y component by image morphology.
Step 104: performing planar Delaunay triangulation on the local maximum point set and the local minimum point set respectively, then interpolating and smoothing to obtain the maximum envelope surface Emax and the minimum envelope surface Emin, and computing their algebraic mean E.
wherein $E = \frac{E_{max} + E_{min}}{2}$.
Step 105: subtracting the algebraic mean E from the Y component of the initial image to form target information.
Step 106: determining whether the target information satisfies the per-layer sifting termination condition.
The per-layer sifting termination condition is that the number of local maximum and local minimum points equals the number of zero crossings or differs from it by at most one, and that the algebraic mean is 0.
If the target information satisfies the termination condition, step 107 is performed; otherwise, the process returns to step 103.
Step 107: taking the target information as the n-th layer of image detail information.
Step 108: determining whether the n-th layer of image detail information has no more than one extreme point.
If it has no more than one extreme point, step 110 is performed; otherwise, if it has more than one extreme point, step 109 is performed.
Step 109: subtracting the n-th layer of image detail information from the Y component of the initial image, and returning to step 101.
Step 110: taking the target information as the contour image of the initial image.
The contour image is the high-frequency part of the initial image.
Specifically, because the initial image usually contains fairly prominent rain streaks, which are unfavorable for the subsequent blending operation, bilateral filtering is needed. In step 203, applying bilateral filtering to each pixel of the Y component of the initial image to generate a salient-edge image of the object can be realized by the following formula:

$$g(i,j) = \frac{\sum_{k,l} f(k,l)\,\omega(i,j,k,l)}{\sum_{k,l} \omega(i,j,k,l)}$$

wherein g(i, j) is the pixel value of the salient-edge image; (i, j) and (k, l) are two pixel coordinates of the Y component of the initial image; f(k, l) is the gray value at pixel (k, l); and ω(i, j, k, l) is a weight coefficient.
wherein $\omega(i,j,k,l) = \exp\!\left(-\frac{(i-k)^2+(j-l)^2}{2\sigma_d^2} - \frac{\|f(i,j)-f(k,l)\|^2}{2\sigma_r^2}\right)$, and f(i, j) is the gray value at pixel (i, j).
The weight coefficient ω(i, j, k, l) is the product of the filter coefficient d(i, j, k, l), determined by the geometric distance between the pixels, and the filter coefficient r(i, j, k, l), determined by the difference of their gray values.
Wherein,

$$d(i,j,k,l) = \exp\!\left(-\frac{(i-k)^2+(j-l)^2}{2\sigma_d^2}\right)$$

$$r(i,j,k,l) = \exp\!\left(-\frac{\|f(i,j)-f(k,l)\|^2}{2\sigma_r^2}\right)$$
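A brute-force sketch of the bilateral filter defined by the g, d, and r formulas above, for a small grayscale image stored as nested lists. The window radius is an illustrative choice; production implementations use optimized library routines.

```python
import math

def bilateral(img, sigma_d, sigma_r, radius=2):
    # Weighted average over a local window; weight = d(...) * r(...) per the text
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            num = den = 0.0
            for k in range(max(0, i - radius), min(h, i + radius + 1)):
                for l in range(max(0, j - radius), min(w, j + radius + 1)):
                    d = math.exp(-((i - k) ** 2 + (j - l) ** 2) / (2 * sigma_d ** 2))
                    r = math.exp(-((img[i][j] - img[k][l]) ** 2) / (2 * sigma_r ** 2))
                    wgt = d * r
                    num += img[k][l] * wgt
                    den += wgt
            out[i][j] = num / den
    return out
```

Because the range kernel r collapses for large gray-level differences, pixels on opposite sides of a strong edge barely influence each other: flat regions are smoothed while object edges survive, which is why the output serves as the salient-edge image.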
Specifically, in step 206 above, alpha-blending the Y component of the denoised image with the Y component of the initial image to generate a result image in the YCbCr color space can be realized by the following formula:

$$C = \alpha C_b + (1-\alpha) C_r$$

wherein C is the Y component of the result image in the YCbCr color space; α is a preset channel value, for example 0.85; C_b is the Y component of the denoised image; and C_r is the Y component of the initial image.
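The alpha blend is a per-pixel weighted sum; a minimal sketch with the example value α = 0.85 mentioned above (the function name and the list-of-lists image representation are illustrative):

```python
def alpha_blend(y_denoised, y_initial, alpha=0.85):
    # C = alpha * Cb + (1 - alpha) * Cr, applied per pixel of the Y channel
    return [[alpha * d + (1 - alpha) * o for d, o in zip(rd, ro)]
            for rd, ro in zip(y_denoised, y_initial)]
```

The weight α controls how strongly the denoised Y channel dominates; keeping a small (1 − α) share of the original Y channel preserves some fine detail that denoising may have attenuated.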
In addition, after the result image in the YCbCr color space is generated in step 206, because alpha blending of the Y channel can cause some color distortion, the Y component of the result image in the YCbCr color space needs brightness adjustment:
Specifically, the brightness of the Y component of the result image in the YCbCr color space can be adjusted according to an imadjust function. imadjust is a Matlab function of the form:
g = imadjust(f, [low_in high_in], [low_out high_out], gamma)
The parameter gamma specifies the shape of the curve used to map the brightness values of f when generating the image g. If gamma is less than 1, the mapping is weighted toward higher output values; if gamma is greater than 1, it is weighted toward lower output values. If gamma is omitted, it defaults to 1, giving a linear mapping. The curve shape is shown in Fig. 3. For example, mapping values in the interval 0-0.5 to 0-1 can give good results, but the method is not limited to this.
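The mapping can be sketched for a single intensity value as follows. This is a simplified, illustrative re-implementation of the imadjust idea in Python, not MATLAB's actual function (which operates on whole arrays and has additional defaulting behavior).

```python
def imadjust(x, low_in, high_in, low_out, high_out, gamma=1.0):
    # Clip x into [low_in, high_in], normalize to [0, 1], apply the gamma
    # curve, then rescale to [low_out, high_out]; gamma = 1 is linear
    t = min(max((x - low_in) / (high_in - low_in), 0.0), 1.0)
    return low_out + (high_out - low_out) * t ** gamma
```

With `low_in=0, high_in=0.5, low_out=0, high_out=1` this reproduces the 0-0.5 → 0-1 stretch suggested above: mid-range input 0.25 maps to 0.5, and anything above 0.5 saturates at 1.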
Corresponding to the embodiment of the method shown in Fig. 1, the embodiment of the present invention provides a kind of raindrops in video image removal device, as shown in Figure 4, comprising: color space converting unit 31, two-dimensional empirical mode decomposition unit 32, bilateral filtering unit 33, remove rain parts of images generation unit 34, Wavelet Denoising Method processing unit 35, α hybrid processing unit 36, remove rain Video Composition unit 37.Wherein:
The color space conversion unit 31 obtains each initial frame image of the video, converts each initial image from the RGB color space to the YCbCr color space, and obtains the Y-component of the converted initial image.
The two-dimensional empirical mode decomposition unit 32 performs two-dimensional empirical mode decomposition on the Y-component of the initial image, generating the contour image of the initial image.
The bilateral filtering unit 33 applies bilateral filtering to each pixel of the Y-component of the initial image, generating the object salient edge image.
The rain-removed partial image generation unit 34 binarizes and intersects the object salient edge image and the contour image, generating the rain-removed partial image.
The wavelet denoising unit 35 applies wavelet denoising to the rain-removed partial image according to the wavelet modulus maxima algorithm, generating the denoised image.
The α blending unit 36 performs α blending according to the Y-component of the denoised image and the Y-component of the initial image, generating the result image in the YCbCr color space.
The color space conversion unit 31 also converts the result image from the YCbCr color space back to the RGB color space, forming the result image in the RGB color space.
The rain-removed video synthesis unit 37 synthesizes the result images of all frames in the RGB color space, generating the rain-removed video.
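The operation of the rain-removed partial image generation unit 34 (binarize the salient edge image and the contour image, then keep only their intersection) can be sketched in Python as follows. The fixed 0.5 thresholds are an assumption for illustration; the patent does not specify how the binarization thresholds are chosen:

```python
import numpy as np

def rain_removed_part(edge_img, contour_img, t_edge=0.5, t_contour=0.5):
    """Binarize the object salient edge image and the contour image with the
    (assumed) thresholds, then intersect: a pixel survives only if it is
    above threshold in BOTH binary images."""
    edge_mask = edge_img >= t_edge
    contour_mask = contour_img >= t_contour
    return (edge_mask & contour_mask).astype(np.uint8)

edges = np.array([[0.9, 0.1],
                  [0.8, 0.7]])
contour = np.array([[0.6, 0.9],
                    [0.2, 0.8]])
print(rain_removed_part(edges, contour))  # 1 only where both exceed 0.5
```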
In addition, the color space conversion unit 31 can specifically:
Convert each initial frame image from the RGB color space to the YCbCr color space by the formula

\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \frac{1}{255} \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112.000 \\ 112.000 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}

obtaining the Y-component of the converted initial image. Wherein R, G, B are the intensity values of the R, G, and B components of each pixel of the initial image, and Y, Cb, Cr are the Y, Cb, and Cr components of the initial image after color space conversion.
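A direct NumPy transcription of the conversion formula above (and its inverse, used later to return to RGB) might look like this sketch:

```python
import numpy as np

# Offset vector and matrix from the formula above (BT.601-style conversion
# of full-range 0-255 RGB to YCbCr).
OFFSET = np.array([16.0, 128.0, 128.0])
M = np.array([[ 65.481, 128.553,  24.966],
              [-37.797, -74.203, 112.000],
              [112.000, -93.786, -18.214]]) / 255.0

def rgb_to_ycbcr(rgb):
    """rgb: (..., 3) array of 0-255 intensities -> (..., 3) YCbCr values."""
    return OFFSET + rgb @ M.T

def ycbcr_to_rgb(ycbcr):
    """Inverse transform used when converting the result image back to RGB."""
    return (ycbcr - OFFSET) @ np.linalg.inv(M).T

y, cb, cr = rgb_to_ycbcr(np.array([255.0, 255.0, 255.0]))
print(round(y), round(cb), round(cr))  # white maps to Y=235, Cb=Cr=128
```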
In addition, the two-dimensional empirical mode decomposition unit 32 can specifically perform:
Step 101: input the Y-component of one frame of the initial image.
Step 102: map the Y-component of the initial image onto an XOY rectangular coordinate plane, wherein the gray value of each pixel of the Y-component of the initial image is the Z coordinate.
Step 103: identify the local maximum point set and the local minimum point set of the Y-component of the initial image by morphological image methods.
Step 104: perform planar Delaunay triangulation on the local maximum point set and the local minimum point set respectively, then smooth by interpolation to obtain the maximum envelope surface Emax and the minimum envelope surface Emin, and obtain the algebraic mean E of Emax and Emin.
Wherein, E = (E_{max} + E_{min}) / 2.
Step 105: subtract the algebraic mean E from the Y-component of the initial image to form the target information.
Step 106: determine whether the target information satisfies the per-layer sifting termination condition.
The per-layer sifting termination condition is that the number of local maximum points and local minimum points equals the number of zero crossings or differs from it by at most 1, and that the algebraic mean is 0.
If the target information satisfies the per-layer sifting termination condition, perform step 107; otherwise, return to step 103.
Step 107: take the target information as the n-th layer image detail information.
Step 108: determine whether the n-th layer image detail information has no more than 1 extreme point.
If the n-th layer image detail information has no more than 1 extreme point, perform step 110; otherwise, if it has more than 1 extreme point, perform step 109.
Step 109: subtract the n-th layer image detail information from the Y-component of the initial image, and return to step 101.
Step 110: take the target information as the contour image of the initial image.
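A heavily simplified sketch of one sifting pass (steps 103-105) is shown below in Python. For brevity it approximates the morphological extrema detection and the Delaunay-interpolated envelope surfaces with plain 3x3 local max/min filters, so it illustrates only the envelope-mean subtraction, not the full decomposition:

```python
import numpy as np

def local_filter(z, func):
    """3x3 sliding-window max or min via edge padding: a crude stand-in for
    the extrema sets and interpolated envelope surfaces of steps 103-104."""
    p = np.pad(z, 1, mode='edge')
    views = [p[r:r + z.shape[0], c:c + z.shape[1]] for r in range(3) for c in range(3)]
    return func(np.stack(views), axis=0)

def sift_once(y):
    """One sifting pass: subtract the envelope mean E from the Y-component,
    leaving a candidate detail (high-frequency) layer."""
    e_max = local_filter(y, np.max)   # upper envelope surface Emax
    e_min = local_filter(y, np.min)   # lower envelope surface Emin
    e = (e_max + e_min) / 2.0         # algebraic mean E = (Emax + Emin) / 2
    return y - e                      # target information of step 105

y = np.random.default_rng(0).random((8, 8))
detail = sift_once(y)
print(detail.shape)
```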
In addition, the bilateral filtering unit 33 can specifically:
Compute each pixel in the Y-component of the initial image according to the bilateral filtering formula

g(i,j) = \frac{\sum_{k,l} f(k,l)\,\omega(i,j,k,l)}{\sum_{k,l} \omega(i,j,k,l)}

generating the object salient edge image.
Wherein g(i, j) is the pixel value of the object salient edge image; (i, j) and (k, l) are two pixel coordinates of the Y-component of the initial image; f(k, l) is the gray value at pixel (k, l); and ω(i, j, k, l) is a weight coefficient.
Wherein \omega(i,j,k,l) = \exp\!\left( -\frac{(i-k)^2 + (j-l)^2}{2\sigma_d^2} - \frac{\| f(i,j) - f(k,l) \|^2}{2\sigma_r^2} \right), and f(i, j) is the gray value at pixel (i, j).
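The formula above can be transcribed directly into Python. In this brute-force version the window radius and the σ_d, σ_r values are illustrative choices, not values from the patent; it shows how ω combines the spatial and range Gaussians so that edges survive while small fluctuations are smoothed:

```python
import numpy as np

def bilateral(f, radius=2, sigma_d=2.0, sigma_r=0.1):
    """Direct (slow) bilateral filter: each output pixel g(i,j) is the
    weight-normalised sum of f(k,l) over a (2*radius+1)^2 window."""
    h, w = f.shape
    g = np.zeros_like(f)
    for i in range(h):
        for j in range(w):
            num = den = 0.0
            for k in range(max(0, i - radius), min(h, i + radius + 1)):
                for l in range(max(0, j - radius), min(w, j + radius + 1)):
                    omega = np.exp(-((i - k)**2 + (j - l)**2) / (2 * sigma_d**2)
                                   - (f[i, j] - f[k, l])**2 / (2 * sigma_r**2))
                    num += f[k, l] * omega
                    den += omega
            g[i, j] = num / den
    return g

# A step edge survives; the small 0.02 / 0.98 fluctuations are smoothed.
img = np.array([[0.0, 0.02, 1.0, 0.98]] * 4)
print(np.round(bilateral(img), 2))
```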
In addition, the α blending unit 36 can specifically:
Perform α blending according to the formula C = αC_b + (1 - α)C_r, generating the result image in the YCbCr color space. Wherein C is the Y-component of the result image in the YCbCr color space; α is a preset channel value; C_b is the Y-component of the denoised image; C_r is the Y-component of the initial image.
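The α blending step is a single weighted sum per pixel; a minimal NumPy sketch, using the α = 0.85 example value given earlier in the description:

```python
import numpy as np

def alpha_blend(y_denoised, y_initial, alpha=0.85):
    """C = alpha*Cb + (1 - alpha)*Cr: blend the denoised Y-component with the
    initial Y-component to recover detail lost in the rain-removal chain."""
    return alpha * y_denoised + (1.0 - alpha) * y_initial

cb = np.array([100.0, 150.0])  # denoised Y-component
cr = np.array([120.0, 150.0])  # initial Y-component
print(alpha_blend(cb, cr))     # [103. 150.]
```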
In addition, as shown in Fig. 4, the video image raindrop removal apparatus can further comprise:
A brightness adjustment unit 38, which adjusts the brightness of the Y-component of the result image in the YCbCr color space according to the imadjust function.
It should be noted that for the specific implementation of the video image raindrop removal apparatus provided by this embodiment of the invention, reference may be made to the method embodiment corresponding to Fig. 1 above; details are not repeated here.
The video image raindrop removal apparatus provided by this embodiment of the invention obtains each initial frame image of a video, converts it from the RGB color space to the YCbCr color space, and obtains the Y-component of the converted initial image; performs two-dimensional empirical mode decomposition on the Y-component to generate the contour image of the initial image; applies bilateral filtering to each pixel of the Y-component to generate the object salient edge image; binarizes and intersects the salient edge image and the contour image to generate the rain-removed partial image; applies wavelet denoising to the rain-removed partial image according to the wavelet modulus maxima algorithm to generate the denoised image; performs α blending of the Y-component of the denoised image with the Y-component of the initial image to generate the result image in the YCbCr color space; converts the result image back to the RGB color space; and synthesizes the result images of all frames into the rain-removed video. By applying color conversion, two-dimensional empirical mode decomposition, bilateral filtering, binarization and intersection, wavelet denoising, and α blending to each initial video frame, and then converting the color space back, a rain-removed result image can be generated. This solves the problems that raindrops blur images or video and that the current simple value-substitution method greatly reduces the quality of the output video images.
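The wavelet denoising step of unit 35 uses the wavelet modulus maxima algorithm, for which the patent gives no code. As a rough illustrative stand-in (an assumed substitute, not the patented method), the following Python sketch performs a one-level 2D Haar transform and soft-thresholds the detail subbands:

```python
import numpy as np

def haar_denoise(img, thresh=0.1):
    """One-level 2D Haar transform, soft-threshold the three detail subbands,
    then invert. img must have even height and width and a float dtype.
    NOTE: this is a simple soft-thresholding stand-in, not the wavelet
    modulus maxima algorithm named in the patent."""
    a, d = (img[0::2] + img[1::2]) / 2.0, (img[0::2] - img[1::2]) / 2.0
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
    lh, hl, hh = soft(lh), soft(hl), soft(hh)   # shrink small detail coefficients
    a_rec = np.empty((img.shape[0] // 2, img.shape[1]))
    d_rec = np.empty_like(a_rec)
    a_rec[:, 0::2], a_rec[:, 1::2] = ll + lh, ll - lh    # invert column transform
    d_rec[:, 0::2], d_rec[:, 1::2] = hl + hh, hl - hh
    out = np.empty_like(img)
    out[0::2], out[1::2] = a_rec + d_rec, a_rec - d_rec  # invert row transform
    return out

img = np.arange(16.0).reshape(4, 4)
print(np.allclose(haar_denoise(img, thresh=0.0), img))  # thresh=0 reconstructs exactly
```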
Those skilled in the art should understand that embodiments of the invention may be provided as a method, a system, or a computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor produce a device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific way, such that the instructions stored in that memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a sequence of operational steps is performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Specific embodiments have been used herein to set forth the principles and implementations of the invention; the above description of the embodiments is intended only to help understand the method of the invention and its core idea. Meanwhile, one of ordinary skill in the art may, according to the idea of the invention, make changes in the specific implementations and scope of application. In summary, the contents of this description should not be construed as limiting the invention.

Claims (12)

1. A video image raindrop removal method, characterized by comprising:
obtaining each initial frame image of a video, converting each initial image from the RGB color space to the YCbCr color space, and obtaining the Y-component of the converted initial image;
performing two-dimensional empirical mode decomposition on the Y-component of said initial image, generating the contour image of said initial image;
applying bilateral filtering to each pixel of the Y-component of said initial image, generating the object salient edge image;
binarizing and intersecting said object salient edge image and said contour image, generating the rain-removed partial image;
applying wavelet denoising to said rain-removed partial image according to the wavelet modulus maxima algorithm, generating the denoised image;
performing α blending according to the Y-component of the denoised image and the Y-component of said initial image, generating the result image in the YCbCr color space;
converting the result image from the YCbCr color space to the RGB color space, forming the result image in the RGB color space; and
synthesizing the result images of all frames in the RGB color space, generating the rain-removed video.
2. The video image raindrop removal method according to claim 1, characterized in that converting each initial frame image from the RGB color space to the YCbCr color space and obtaining the Y-component of the converted initial image comprises:
converting each initial frame image from the RGB color space to the YCbCr color space by the formula

\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \frac{1}{255} \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112.000 \\ 112.000 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}

and obtaining the Y-component of the converted initial image; wherein R, G, B are the intensity values of the R, G, and B components of each pixel of the initial image, and Y, Cb, Cr are the Y, Cb, and Cr components of the initial image after color space conversion.
3. The video image raindrop removal method according to claim 1, characterized in that performing two-dimensional empirical mode decomposition on the Y-component of said initial image and generating the contour image of said initial image comprises:
Step 101: inputting the Y-component of one frame of said initial image;
Step 102: mapping the Y-component of said initial image onto an XOY rectangular coordinate plane, wherein the gray value of each pixel of the Y-component of said initial image is the Z coordinate;
Step 103: identifying the local maximum point set and the local minimum point set of the Y-component of said initial image by morphological image methods;
Step 104: performing planar Delaunay triangulation on said local maximum point set and said local minimum point set respectively, then smoothing by interpolation to obtain the maximum envelope surface Emax and the minimum envelope surface Emin, and obtaining the algebraic mean E of Emax and Emin;
wherein E = (E_{max} + E_{min}) / 2;
Step 105: subtracting said algebraic mean E from the Y-component of said initial image to form the target information;
Step 106: determining whether said target information satisfies the per-layer sifting termination condition;
said per-layer sifting termination condition being that the number of local maximum points and local minimum points equals the number of zero crossings or differs from it by at most 1, and that said algebraic mean is 0;
if said target information satisfies said per-layer sifting termination condition, performing step 107; otherwise, returning to step 103;
Step 107: taking said target information as the n-th layer image detail information;
Step 108: determining whether the n-th layer image detail information has no more than 1 extreme point;
if the n-th layer image detail information has no more than 1 extreme point, performing step 110; otherwise, if it has more than 1 extreme point, performing step 109;
Step 109: subtracting said n-th layer image detail information from the Y-component of said initial image, and returning to step 101;
Step 110: taking said target information as the contour image of said initial image.
4. The video image raindrop removal method according to claim 1, characterized in that applying bilateral filtering to each pixel of the Y-component of said initial image and generating the object salient edge image comprises:
computing each pixel in the Y-component of said initial image according to the formula

g(i,j) = \frac{\sum_{k,l} f(k,l)\,\omega(i,j,k,l)}{\sum_{k,l} \omega(i,j,k,l)}

generating the object salient edge image;
wherein g(i, j) is the pixel value of the object salient edge image; (i, j) and (k, l) are two pixel coordinates of the Y-component of the initial image; f(k, l) is the gray value at pixel (k, l); and ω(i, j, k, l) is a weight coefficient;
wherein \omega(i,j,k,l) = \exp\!\left( -\frac{(i-k)^2 + (j-l)^2}{2\sigma_d^2} - \frac{\| f(i,j) - f(k,l) \|^2}{2\sigma_r^2} \right), and f(i, j) is the gray value at pixel (i, j).
5. The video image raindrop removal method according to claim 1, characterized in that performing α blending according to the Y-component of the denoised image and the Y-component of said initial image and generating the result image in the YCbCr color space comprises:
performing α blending according to the formula C = αC_b + (1 - α)C_r, generating the result image in the YCbCr color space; wherein C is the Y-component of the result image in said YCbCr color space; α is a preset channel value; C_b is the Y-component of the denoised image; C_r is the Y-component of said initial image.
6. The video image raindrop removal method according to claim 5, characterized in that, after performing α blending according to the Y-component of the denoised image and the Y-component of said initial image and generating the result image in the YCbCr color space, the method comprises:
adjusting the brightness of the Y-component of the result image in said YCbCr color space according to the imadjust function.
7. A video image raindrop removal apparatus, characterized by comprising:
a color space conversion unit, for obtaining each initial frame image of a video, converting each initial image from the RGB color space to the YCbCr color space, and obtaining the Y-component of the converted initial image;
a two-dimensional empirical mode decomposition unit, for performing two-dimensional empirical mode decomposition on the Y-component of said initial image, generating the contour image of said initial image;
a bilateral filtering unit, for applying bilateral filtering to each pixel of the Y-component of said initial image, generating the object salient edge image;
a rain-removed partial image generation unit, for binarizing and intersecting said object salient edge image and said contour image, generating the rain-removed partial image;
a wavelet denoising unit, for applying wavelet denoising to said rain-removed partial image according to the wavelet modulus maxima algorithm, generating the denoised image;
an α blending unit, for performing α blending according to the Y-component of the denoised image and the Y-component of said initial image, generating the result image in the YCbCr color space;
said color space conversion unit being also for converting the result image from the YCbCr color space to the RGB color space, forming the result image in the RGB color space; and
a rain-removed video synthesis unit, for synthesizing the result images of all frames in the RGB color space, generating the rain-removed video.
8. The video image raindrop removal apparatus according to claim 7, characterized in that said color space conversion unit is specifically for:
converting each initial frame image from the RGB color space to the YCbCr color space by the formula

\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \frac{1}{255} \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112.000 \\ 112.000 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}

and obtaining the Y-component of the converted initial image; wherein R, G, B are the intensity values of the R, G, and B components of each pixel of the initial image, and Y, Cb, Cr are the Y, Cb, and Cr components of the initial image after color space conversion.
9. The video image raindrop removal apparatus according to claim 7, characterized in that said two-dimensional empirical mode decomposition unit is specifically for performing:
Step 101: inputting the Y-component of one frame of said initial image;
Step 102: mapping the Y-component of said initial image onto an XOY rectangular coordinate plane, wherein the gray value of each pixel of the Y-component of said initial image is the Z coordinate;
Step 103: identifying the local maximum point set and the local minimum point set of the Y-component of said initial image by morphological image methods;
Step 104: performing planar Delaunay triangulation on said local maximum point set and said local minimum point set respectively, then smoothing by interpolation to obtain the maximum envelope surface Emax and the minimum envelope surface Emin, and obtaining the algebraic mean E of Emax and Emin;
wherein E = (E_{max} + E_{min}) / 2;
Step 105: subtracting said algebraic mean E from the Y-component of said initial image to form the target information;
Step 106: determining whether said target information satisfies the per-layer sifting termination condition;
said per-layer sifting termination condition being that the number of local maximum points and local minimum points equals the number of zero crossings or differs from it by at most 1, and that said algebraic mean is 0;
if said target information satisfies said per-layer sifting termination condition, performing step 107; otherwise, returning to step 103;
Step 107: taking said target information as the n-th layer image detail information;
Step 108: determining whether the n-th layer image detail information has no more than 1 extreme point;
if the n-th layer image detail information has no more than 1 extreme point, performing step 110; otherwise, if it has more than 1 extreme point, performing step 109;
Step 109: subtracting said n-th layer image detail information from the Y-component of said initial image, and returning to step 101;
Step 110: taking said target information as the contour image of said initial image.
10. The video image raindrop removal apparatus according to claim 7, characterized in that said bilateral filtering unit is specifically for:
computing each pixel in the Y-component of said initial image according to the formula

g(i,j) = \frac{\sum_{k,l} f(k,l)\,\omega(i,j,k,l)}{\sum_{k,l} \omega(i,j,k,l)}

generating the object salient edge image;
wherein g(i, j) is the pixel value of the object salient edge image; (i, j) and (k, l) are two pixel coordinates of the Y-component of the initial image; f(k, l) is the gray value at pixel (k, l); and ω(i, j, k, l) is a weight coefficient;
wherein \omega(i,j,k,l) = \exp\!\left( -\frac{(i-k)^2 + (j-l)^2}{2\sigma_d^2} - \frac{\| f(i,j) - f(k,l) \|^2}{2\sigma_r^2} \right), and f(i, j) is the gray value at pixel (i, j).
11. The video image raindrop removal apparatus according to claim 7, characterized in that said α blending unit is specifically for:
performing α blending according to the formula C = αC_b + (1 - α)C_r, generating the result image in the YCbCr color space; wherein C is the Y-component of the result image in said YCbCr color space; α is a preset channel value; C_b is the Y-component of the denoised image; C_r is the Y-component of said initial image.
12. The video image raindrop removal apparatus according to claim 11, characterized by further comprising:
a brightness adjustment unit, for adjusting the brightness of the Y-component of the result image in said YCbCr color space according to the imadjust function.
CN201510379692.XA 2015-07-01 2015-07-01 Video image raindrop removal method and apparatus Pending CN104978720A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510379692.XA CN104978720A (en) 2015-07-01 2015-07-01 Video image raindrop removal method and apparatus


Publications (1)

Publication Number Publication Date
CN104978720A true CN104978720A (en) 2015-10-14

Family

ID=54275200


Country Status (1)

Country Link
CN (1) CN104978720A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056545A (en) * 2016-05-24 2016-10-26 中国科学院深圳先进技术研究院 Image rain removing method and image rain removing system
CN110148089A (en) * 2018-06-19 2019-08-20 腾讯科技(深圳)有限公司 A kind of image processing method, device and equipment, computer storage medium
CN111161177A (en) * 2019-12-25 2020-05-15 Tcl华星光电技术有限公司 Image self-adaptive noise reduction method and device
CN111612864A (en) * 2020-04-27 2020-09-01 厦门盈趣科技股份有限公司 Drawing method and system based on photo and image recognition

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130242188A1 (en) * 2010-11-15 2013-09-19 Indian Institute Of Technology, Kharagpur Method and Apparatus for Detection and Removal of Rain from Videos using Temporal and Spatiotemporal Properties
CN103700070A (en) * 2013-12-12 2014-04-02 中国科学院深圳先进技术研究院 Video raindrop-removing algorithm based on rain-tendency scale
CN103714518A (en) * 2013-12-12 2014-04-09 中国科学院深圳先进技术研究院 Video rain removing method
CN103729828A (en) * 2013-12-12 2014-04-16 中国科学院深圳先进技术研究院 Video rain removing method
CN104537622A (en) * 2014-12-31 2015-04-22 中国科学院深圳先进技术研究院 Method and system for removing raindrop influence in single image
CN104537634A (en) * 2014-12-31 2015-04-22 中国科学院深圳先进技术研究院 Method and system for removing raindrop influences in dynamic image


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056545A (en) * 2016-05-24 2016-10-26 中国科学院深圳先进技术研究院 Image rain removing method and image rain removing system
CN110148089A (en) * 2018-06-19 2019-08-20 腾讯科技(深圳)有限公司 A kind of image processing method, device and equipment, computer storage medium
CN110148089B (en) * 2018-06-19 2024-04-23 腾讯科技(深圳)有限公司 Image processing method, device and equipment and computer storage medium
CN111161177A (en) * 2019-12-25 2020-05-15 Tcl华星光电技术有限公司 Image self-adaptive noise reduction method and device
CN111161177B (en) * 2019-12-25 2023-09-26 Tcl华星光电技术有限公司 Image self-adaptive noise reduction method and device
CN111612864A (en) * 2020-04-27 2020-09-01 厦门盈趣科技股份有限公司 Drawing method and system based on photo and image recognition
CN111612864B (en) * 2020-04-27 2023-05-09 厦门盈趣科技股份有限公司 Drawing method and system based on photo and image recognition

Similar Documents

Publication Publication Date Title
Liu et al. Single image dehazing with depth-aware non-local total variation regularization
Wang et al. Dehazing for images with large sky region
CN107403415B (en) Compressed depth map quality enhancement method and device based on full convolution neural network
Gupta et al. Review of different local and global contrast enhancement techniques for a digital image
WO2016159884A1 (en) Method and device for image haze removal
CN104715461A (en) Image noise reduction method
CN105335947A (en) Image de-noising method and image de-noising apparatus
Singh et al. Contrast enhancement and brightness preservation using global-local image enhancement techniques
CN104574328A (en) Color image enhancement method based on histogram segmentation
CN106169181A (en) A kind of image processing method and system
CN110533614B (en) Underwater image enhancement method combining frequency domain and airspace
CN104978720A (en) Video image raindrop removal method and apparatus
CN104794685A (en) Image denoising realization method and device
Yan et al. Method to Enhance Degraded Image in Dust Environment.
CN110322404B (en) Image enhancement method and system
CN104680485A (en) Method and device for denoising image based on multiple resolutions
CN106920222A (en) A kind of image smoothing method and device
Wohlberg Convolutional sparse representations with gradient penalties
CN104657951A (en) Multiplicative noise removal method for image
CN112991197B (en) Low-illumination video enhancement method and device based on detail preservation of dark channel
CN105427262A (en) Image de-noising method based on bidirectional enhanced diffusion filtering
CN109118440B (en) Single image defogging method based on transmissivity fusion and adaptive atmospheric light estimation
CN111353955A (en) Image processing method, device, equipment and storage medium
CN101739670B (en) Non-local mean space domain time varying image filtering method
CN111476736B (en) Image defogging method, terminal and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151014