CN114529460A - Low-illumination-scene intelligent highway monitoring rapid defogging method and device and electronic equipment - Google Patents
- Publication number
- CN114529460A (application CN202111578311.2A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- image
- color
- light
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06T5/73 — Deblurring; Sharpening
- G06T5/94 — Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
- G06T2207/10024 — Color image
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30192 — Weather; Meteorology
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a rapid defogging method and device for intelligent highway monitoring in low-illumination scenes, and electronic equipment. The method comprises: S1, designing a dark primary color adjustment model based on light intensity and color density in the HSI color space, and calibrating the dark primary color; S2, introducing a local luminous flux ratio of the scene and calculating the local ambient light; S3, calculating the transmittance of the monitoring scene from S1 and S2; and S4, combining the transmittance and the local ambient light in a parallel computation to restore the road traffic monitoring scene. Calibrating the dark primary color improves the accuracy of the transmittance; introducing the luminous flux ratio constrains the ambient light of the monitored scene and raises the brightness of the recovered scene. In addition, the disclosed method has low computational complexity and a high degree of parallelism, enabling efficient and rapid defogging of the monitoring scene. In a low-illumination foggy environment, an intelligent road monitoring system can thus obtain real-time road conditions, providing technical support for intelligent monitoring.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a rapid defogging method and device for intelligent highway monitoring in low-illumination scenes, and to electronic equipment.
Background
Road-surface monitoring is the visual window through which an intelligent road system acquires real-time image data, anticipates traffic risks, eliminates hidden dangers, and keeps vehicles safe and traffic flowing. However, when the monitored scene is shrouded in fog, haze, smoke, or dust, the system can no longer capture data accurately, and undetected risks easily lead to traffic accidents. Removing the interference of fog, haze, smoke, and dust and recovering the true monitored scene is therefore of great value to intelligent road systems. Existing image defogging technology falls into three families: defogging based on image enhancement, defogging based on the atmospheric light-scattering imaging model, and defogging based on learning strategies. Enhancement-based methods defog poorly, and the restored scene usually retains visible fog residue. Methods based on the scattering imaging model defog effectively but tend to cause a marked loss of light intensity, so the result is dark. Learning-based methods require large amounts of data and long training times, and since most are also built on the scattering imaging model and its priors, their results likewise lose brightness. Because existing methods tend to darken the restored scene, they struggle to recover the monitored scene accurately in low-illumination fog, haze, smoke, and dust and to provide a clear monitoring picture.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the obvious brightness loss of existing defogging technology, the invention provides a rapid defogging method that defogs quickly and effectively while keeping the scene brightness stable, so as to handle the interference of fog, haze, smoke, and dust in smart-road monitoring scenes in low-illumination environments.
To achieve this purpose, the invention provides a rapid defogging method for intelligent highway monitoring in low-illumination scenes, comprising the following steps:
step 1: re-expressing the dark primary color in the HSI (hue, saturation, intensity) color space using the brightness and color density of the observed image;
step 2, calibrating the dark primary color according to the pixel brightness and the color concentration;
step 3, calculating the transmittance of the image according to the calibrated dark primary color;
step 4, constructing a light flux proportion calculation model;
and 5: introducing a light flux proportion, and modifying an atmospheric scattering imaging model;
step 6, calculating local ambient light by using the luminous flux proportion;
and 7, restoring the original monitoring scene by combining the transmittance and the local ambient light.
The step 1 comprises the following steps: calculating the dark primary color of the monitored foggy day image by adopting the following formula
$$y_i^{dark} = \min_{j \in \Omega_i}\ \min_{c \in \{R,G,B\}} y_j^c = I_i^{dark}\left(1 - S_i^{dark}\right)$$

where $I_i^{dark}$ and $S_i^{dark}$ are the brightness and color density of the dark primary color, and $I_{x,i}^{dark}$ and $S_{x,i}^{dark}$ are those of the dark primary color of the original fog-free pixel $x_i$. The dark primary color of the real scene is $x_i^{dark} = I_{x,i}^{dark}\left(1 - S_{x,i}^{dark}\right)$, which is equal to zero or slightly greater than 0, i.e. $x_i^{dark} \approx 0$; the parameter values lie in the interval (0, 1).
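As an illustration of the dark primary color described above, the following is a minimal numpy sketch (the function name is hypothetical; the 23 × 23 window size matches the one used later in this document's detailed description, and a small window is used in testing for speed):

```python
import numpy as np

def dark_primary(y, win=23):
    """Dark primary (dark channel) of an RGB image y with values in [0,1]:
    per-pixel minimum over the color channels, followed by the minimum
    over a local win x win window."""
    min_rgb = y.min(axis=2)                  # min over R, G, B per pixel
    r = win // 2
    padded = np.pad(min_rgb, r, mode="edge") # replicate borders
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + win, j:j + win].min()
    return out
```

In a fog-free scene the dark primary is close to zero almost everywhere, which is the prior the transmittance estimate below relies on.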
In step 2, the brightness $I_i$ of pixel $y_i$ of the foggy-day image y at position i is calculated as

$$I_i = \frac{1}{3}\left(y_i^R + y_i^G + y_i^B\right)$$

where $y_i^R$, $y_i^G$, $y_i^B$ denote the red, green, and blue channel intensity values of pixel $y_i$. The saturation $S_i$ of pixel $y_i$ is calculated as

$$S_i = 1 - \frac{\min\left(y_i^R, y_i^G, y_i^B\right)}{I_i}$$
where min is the minimum operation. The product of saturation and brightness, $S_i I_i$, is defined as the color density of pixel $y_i$ and is used to construct the dark primary color complementary term; $I_i^{dark}$ and $S_i^{dark}$ are computed from the dark primary color pixel with the same two formulas.
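The brightness and saturation quantities above can be sketched in numpy as follows (a hypothetical helper; the tested identity min = I·(1 − S) is exactly the factorization the HSI re-expression of the dark primary relies on):

```python
import numpy as np

def brightness_saturation(y, eps=1e-6):
    """HSI brightness I = (R+G+B)/3 and saturation S = 1 - min(R,G,B)/I
    for an RGB image y, so that the per-pixel channel minimum factors
    as I * (1 - S)."""
    I = y.mean(axis=2)                          # brightness per pixel
    S = 1.0 - y.min(axis=2) / np.maximum(I, eps)
    return I, S
```

The product S·I (the "color density" in the patent's terminology) then equals I minus the channel minimum.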
The step 2 comprises: calculating the calibrated dark primary color $\tilde{y}_i^{dark}$ [the calibration formula is rendered only as an image in the original]. The brightness $I_i^{dark}$ and color density $S_i^{dark} I_i^{dark}$ of the dark primary color pixel construct a dark primary color supplemental term used to calibrate the dark primary color; here $\epsilon_i$ is the local background luminous flux ratio of pixel $y_i$, and $t_i$ is the transmittance of pixel $y_i$.
The step 3 comprises: constructing an equation in the transmittance from the calibrated dark primary color and solving it for $t_i$ [both formulas are rendered only as images in the original]. In the resulting expression, the intermediate parameters $A_0$ and $B_0$ are obtained from the observed image, and $\sigma$ is an adjustment parameter with values in the interval [0.8, 1.0].
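Since the patent's exact calibrated transmittance formula (with intermediate parameters $A_0$, $B_0$) appears only as an image in the source, the sketch below uses the classical dark-channel transmittance estimate that such calibrations refine, with σ taken from the patent's stated interval [0.8, 1.0]; the function name and clipping floor are illustrative assumptions:

```python
import numpy as np

def transmittance(dark, A, sigma=0.9):
    """Classical dark-channel transmittance estimate
    t = 1 - sigma * dark / A (a stand-in for the patent's calibrated
    formula, which is not recoverable from the source text)."""
    A_gray = float(np.mean(A))              # scalar ambient intensity
    t = 1.0 - sigma * dark / A_gray
    return np.clip(t, 0.05, 1.0)           # floor avoids division blow-up
```

A dark primary of zero yields t = 1 (no fog); a dark primary as bright as the ambient light yields t = 1 − σ.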
Step 4 comprises: a monitored scene has two kinds of region, backlit and surface-lit; the ambient light intensity of the backlit region is weak and that of the surface-lit region is strong. The decay direction of the local ambient light intensity is defined as the luminous flux direction, pointing from the surface-lit region toward the backlit region; on the image, this is the intensity-gradient direction of a local image block. The image transmittance and the depth of field d satisfy

$$t_i = e^{-\beta d}$$

where β is the attenuation factor; by the atmospheric scattering imaging principle β may be taken as constant, so the transmittance depends only on d. Because a local image block covers a small, continuous piece of scene, it may be assigned a single depth of field, and the pixel transmittances within the block are therefore consistent. On this basis, the invention finds that fog has no influence on the luminous flux direction, i.e. the flux directions of the monitored image and of the original scene image coincide, and obtains the direction angle

$$\theta_i = \arctan\!\left(\frac{\nabla_v y_{\Omega_i}}{\nabla_h y_{\Omega_i}}\right)$$

where $\nabla_h x_{\Omega_i}$ and $\nabla_v x_{\Omega_i}$ are the light-intensity changes in the horizontal and vertical directions of the convolution window $\Omega_i$ centered on pixel i in the original scene image, and $\nabla_h y_{\Omega_i}$ and $\nabla_v y_{\Omega_i}$ are those of the window centered on pixel i in the monitored image; since the vectors $(\nabla_h x_{\Omega_i}, \nabla_v x_{\Omega_i})$ and $(\nabla_h y_{\Omega_i}, \nabla_v y_{\Omega_i})$ point the same way, the direction angle is obtained by the arctangent.
According to $\theta_i$, all pixels in the convolution window $\Omega_i$ are divided into surface-lit and backlit classes. Because the ambient light is dispersed over the scene, the surface-lit pixels dominate a local region, and the share of the total pixels of the convolution window $\Omega_i$ occupied by the surface-lit pixels (the larger class) is defined as the luminous flux ratio $\epsilon_i$:

$$\epsilon_i = \frac{\max\left(\{\operatorname{sum}(N_k)\}\right)}{N}$$

where N is the total number of pixels in the convolution window $\Omega_i$ and $\max(\{\operatorname{sum}(N_k)\})$ is the number of surface-lit pixels.
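A minimal sketch of the luminous flux ratio follows. It simplifies the patent's surface-lit/backlit classification to a binary split by the sign of the local gradient-direction angle (an illustrative assumption, as the source does not give the exact classification rule), then takes the majority class's share of each window's N pixels:

```python
import numpy as np

def flux_ratio(I, win=23):
    """Per-pixel luminous flux ratio: within each win x win window,
    split pixels into two classes by the sign of the intensity-gradient
    direction angle, and return the larger class's share of the window."""
    gy, gx = np.gradient(I)               # vertical, horizontal gradients
    angle = np.arctan2(gy, gx)            # flux direction angle per pixel
    toward = angle >= 0                   # simplified two-class split
    r = win // 2
    padded = np.pad(toward, r, mode="edge")
    h, w = I.shape
    N = win * win
    eps = np.empty_like(I, dtype=float)
    for i in range(h):
        for j in range(w):
            k = padded[i:i + win, j:j + win].sum()
            eps[i, j] = max(k, N - k) / N  # majority share in window
    return eps
```

By construction the ratio lies in [0.5, 1]: the dominant class occupies at least half the window.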
The step 5 comprises: the atmospheric scattering imaging model is

$$y_i = t_i x_i + (1 - t_i) \cdot A$$

where $t_i$ is the transmittance of foggy-day pixel $y_i$, A is the overall ambient light of the monitored scene, and $y_i$ is the pixel value of the foggy-day image y at position i.
The atmospheric scattering imaging model is modified using the following formula:

$$y_i = A \cdot \epsilon_i \cdot t_i x_i + (1 - t_i) \cdot A$$

After introducing $\epsilon_i$, the local ambient light becomes $A \cdot \epsilon_i$; as the ambient light term shrinks, the recovered scene value $x_i$ is correspondingly enhanced in brightness, so the brightness stays stable and defogging recovery of intelligent road monitoring scenes in low-illumination environments becomes possible.
The step 6 comprises: in the monitored image, the ambient light $[A_R, A_G, A_B]$ is obtained by averaging a small fraction (typically 0.1%) of the pixels with the smallest image transmittance, where $A_R$, $A_G$, $A_B$ are the gray values of the red, green, and blue channels of the ambient light. To avoid overexposure at long range, the luminous flux ratio of a pixel belonging to the distant scene, such as the sky, is set to $\epsilon_i = 1$; its physical meaning is that the local ambient light of a distant scene is the overall ambient light of the scene. From the luminous flux ratio, the local ambient light is defined as $\epsilon_i A$.
The step 7 comprises: substituting the transmittance $t_i$, the luminous flux ratio $\epsilon_i$, and the overall ambient light $[A_R, A_G, A_B]$ into the modified atmospheric scattering imaging model and restoring the original monitoring scene:

$$x_i^c = \frac{y_i^c - (1 - t_i)\,A^c}{A^c \cdot \epsilon_i \cdot t_i}$$

where $x_i^c$ is the intensity of the fog-free pixel in color channel c, $A^c$ is the intensity of the ambient light in channel c, $y_i^c$ is the intensity of the foggy-day pixel in channel c, and c ∈ {R, G, B}, with R, G, B denoting the red, green, and blue color channels.
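The restoration is the direct algebraic inversion of the modified model y = A·ε·t·x + (1 − t)·A, applied channel-wise. A numpy sketch (function name and the transmittance floor are illustrative):

```python
import numpy as np

def restore(y, t, eps, A):
    """Invert the modified scattering model per channel:
    x_c = (y_c - (1 - t) * A_c) / (A_c * eps * t)."""
    t3 = t[..., None]                      # broadcast over channels
    e3 = eps[..., None]
    x = (y - (1.0 - t3) * A) / (A * e3 * np.maximum(t3, 0.05))
    return np.clip(x, 0.0, 1.0)
```

A round trip through the forward model and this inversion recovers the scene exactly, which is a useful sanity check on the algebra.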
The invention also provides a low-illumination scene intelligent road monitoring rapid defogging device which comprises a monitoring image preprocessing module, a model construction and correction module and a defogging restoration image module;
the monitoring image preprocessing module is used for expressing the dark primary color again in the HSI color space according to the brightness and the color concentration of an observation image, calibrating the dark primary color according to the brightness and the color concentration of a pixel, and calculating the transmissivity of the image according to the calibrated dark primary color;
the model building and correcting module is used for building a light flux ratio calculation model, introducing a light flux ratio, modifying an atmospheric scattering imaging model, utilizing the light flux ratio and calculating local environment light;
the defogging restoration image module is used for restoring an original monitoring scene by combining the transmittance and the local ambient light;
the method for re-expressing the dark primary color in the HSI color space by using the brightness and the color density of the observed image specifically comprises the following steps: by using such asThe dark primary color of the monitored foggy day image is calculated by the following formula
$$y_i^{dark} = \min_{j \in \Omega_i}\ \min_{c \in \{R,G,B\}} y_j^c = I_i^{dark}\left(1 - S_i^{dark}\right)$$

where $I_i^{dark}$ and $S_i^{dark}$ are the brightness and color density of the dark primary color, $I_{x,i}^{dark}$ and $S_{x,i}^{dark}$ are those of the dark primary color of the original haze-free pixel $x_i$, and the dark primary color of the real scene is $x_i^{dark} = I_{x,i}^{dark}\left(1 - S_{x,i}^{dark}\right) \approx 0$;
the calibrating the dark primary color according to the pixel brightness and the color density and the calibrating the dark primary color specifically include: calculating the pixel y of the foggy day image y at the position i by adopting the following formulaiBrightness I ofi:
Wherein the content of the first and second substances,pixel y respectively representing foggy day image y at position iiDefining S as the intensity values of the red, green and blue channelsiIs pixel yiThe saturation of (c) is calculated as follows:
where min is the minimum operation; the product $S_i I_i$ of saturation and brightness is defined as the color density of pixel $y_i$, used to construct the dark primary color complementary term. The brightness $I_i^{dark}$ and color density $S_i^{dark} I_i^{dark}$ of the dark primary color pixel construct the dark primary color supplemental term used to calibrate the dark primary color; $\epsilon_i$ is the local background luminous flux ratio of pixel $y_i$, and $t_i$ is the transmittance of pixel $y_i$;
the calculation of the image transmittance specifically includes: constructing an equation in the transmittance from the calibrated dark primary color and solving it for $t_i$;
the construction of the luminous flux ratio calculation model specifically includes: a monitored scene has backlit and surface-lit regions, with weak ambient light intensity in the former and strong in the latter; the decay direction of the local ambient light intensity is defined as the luminous flux direction, pointing from the surface-lit region to the backlit region, which on the image is the intensity-gradient direction of a local image block; the image transmittance and the depth of field d satisfy

$$t_i = e^{-\beta d}$$
where $\nabla_h x_{\Omega_i}$ and $\nabla_v x_{\Omega_i}$ are the horizontal and vertical light-intensity changes of the convolution window $\Omega_i$ centered on pixel $x_i$ in the original scene image, and $\nabla_h y_{\Omega_i}$ and $\nabla_v y_{\Omega_i}$ are those of the window centered on pixel $y_i$ in the monitored image;
according to the direction angle $\theta_i$, all pixels in the convolution window $\Omega_i$ are divided into surface-lit and backlit classes, and the share of the total pixels of $\Omega_i$ occupied by the surface-lit pixels is defined as the luminous flux ratio

$$\epsilon_i = \frac{\max\left(\{\operatorname{sum}(N_k)\}\right)}{N}$$

where N is the total number of pixels in the convolution window $\Omega_i$ and $\max(\{\operatorname{sum}(N_k)\})$ is the number of surface-lit pixels;
the modification of the atmospheric scattering imaging model by introducing the luminous flux ratio specifically comprises: the atmospheric scattering imaging model is

$$y_i = t_i x_i + (1 - t_i) \cdot A$$

where $t_i$ is the transmittance of foggy-day pixel $y_i$, A is the overall ambient light of the monitored scene, and $y_i$ is the pixel value of the foggy-day image y at position i; the model is modified using the following formula:

$$y_i = A \cdot \epsilon_i \cdot t_i x_i + (1 - t_i) \cdot A;$$
the calculation of the local ambient light using the luminous flux ratio specifically includes: in the monitored image, averaging a small fraction of the pixels with the smallest image transmittance to obtain the ambient light $[A_R, A_G, A_B]$, where $A_R$, $A_G$, $A_B$ are the gray values of its red, green, and blue channels; for a pixel of the distant scene, the luminous flux ratio is set to $\epsilon_i = 1$;
The restoration of the original monitoring scene by combining the transmittance and the local ambient light specifically includes: substituting the transmittance $t_i$, the luminous flux ratio $\epsilon_i$, and the overall ambient light $[A_R, A_G, A_B]$ into the modified atmospheric scattering imaging model:

$$x_i^c = \frac{y_i^c - (1 - t_i)\,A^c}{A^c \cdot \epsilon_i \cdot t_i}$$

where $x_i^c$ is the intensity of the fog-free pixel in color channel c, $A^c$ the intensity of the ambient light in channel c, $y_i^c$ the intensity of the foggy-day pixel in channel c, and c ∈ {R, G, B}, with R, G, B denoting the red, green, and blue color channels.
When computing the above parameters, the device completes the operations with a parallel computing method, and it stores the luminous flux ratios and ambient light positions of the camera at its different monitoring angles.
The present invention also provides an electronic device, comprising: a processor and a memory having computer program instructions stored therein, which when executed by the processor, cause the processor to perform the low light scene intelligent highway monitoring fast defogging method.
The working principle is as follows: the invention computes the parameters $I_i$, $S_i$, and the dark primary color in parallel in the HSI color space, then calculates $t_i$, $\epsilon_i$, and $[A_R, A_G, A_B]$, and substitutes them into the modified atmospheric light-scattering imaging model to obtain the restored road monitoring scene. Because the monitored distance and scene change little while the camera's monitoring angle is fixed, the ambient light positions and local luminous flux ratios of the different monitoring angles can be stored, which increases the restoration speed.
Beneficial effects: compared with the prior art, the defogging restoration method provided by the invention accounts for local changes in ambient light, improves the transmittance precision, involves few parameters, and has a high degree of parallelism; it can therefore defog effectively and quickly while raising the light intensity of the monitored scene, with a marked defogging restoration effect on intelligent road monitoring scenes in low-illumination environments.
Drawings
The foregoing and/or other advantages of the invention will become further apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is a schematic flow chart of a low-light-level scene intelligent highway monitoring and rapid defogging method provided by the invention;
FIG. 2 is an image of a road monitoring scene in a low-illumination fog environment;
FIG. 3 shows the defogging recovery result of the conventional dark channel prior (DCP) method;

FIG. 4 shows the defogging recovery result of a framework combining the DCP prior with deep learning (CAN);
FIG. 5 is a scene after defogging recovery of the method provided by the present invention;
fig. 6a, 6b, 6c are histograms of the defogging recovery results of the DCP method, the CAN method, and the method of the present invention.
Detailed Description
As shown in FIG. 1, the invention provides a rapid defogging method for intelligent highway monitoring in low-illumination scenes. It improves the transmittance precision by modifying the dark-channel prior, improves the local ambient light precision by introducing a luminous flux ratio, improves the brightness of the restored scene by modifying the atmospheric light-scattering imaging model, and increases the running speed through parallel computation matched to the data characteristics, so as to achieve rapid defogging restoration of "smart highway" monitoring scenes in low-illumination environments.
The transmittance solution based on the corrected dark primary color comprises solving the dark primary color light-intensity channel, the color-density channel, the ambient light, and the transmittance, as follows:

Step 1: solve the dark primary color light-intensity channel and color-density channel. According to the HSI color space, the dark primary color is expressed as

$$y_i^{dark} = I_i^{dark}\left(1 - S_i^{dark}\right)$$

Within a convolution window $\Omega_i$ of size 23 × 23 centered on pixel $y_i$, the position of the dark primary color pixel is indexed, and its light intensity and color density are taken as the light-intensity channel $I_i^{dark}$ and color-density channel $S_i^{dark} I_i^{dark}$ of pixel $y_i$.
Step 2: solve the ambient light. Index the 0.1% of pixels with the largest dark primary color brightness among all pixels in the image, and average them to obtain the overall ambient light $[A_R, A_G, A_B]$ of the scene.
Step 3: solve the transmittance. From steps 1 and 2 the corrected dark primary color is obtained, and the transmittance of pixel $y_i$ is then solved from it [the formulas are rendered only as images in the original], where $\epsilon_i$ is the local background luminous flux ratio of pixel i, obtainable by parallel computation, and $\sigma$ is the adjustment parameter with values in the interval [0.8, 1.0].
The luminous flux ratio of each pixel is obtained by parallel operation on the monitored image and is then applied to the overall ambient light to obtain the pixel's local ambient light, as follows:

Step a1: with pixel $y_i$ as the center, use a 23 × 23 convolution window to obtain the background luminance gradients in the horizontal and vertical directions; the arctangent yields the luminous flux direction of the pixel. The flux directions within the 23 × 23 window are divided into two classes, and the ratio of the pixel count of the larger class, $\max(\{\operatorname{sum}(N_k)\})$, to the total number of window pixels gives the luminous flux ratio $\epsilon_i$.

Step a2: the local ambient light of pixel $y_i$ is calculated as $\epsilon_i A$.
The modification of the atmospheric light-scattering imaging model and the defogging restoration of the scene follow the physical fact that the larger the luminous flux ratio, the stronger the local ambient light of the pixel, as follows:

Step b1: introduce the local ambient light to modify the atmospheric scattering imaging model:

$$y_i = A \cdot \epsilon_i \cdot t_i x_i + (1 - t_i) \cdot A$$

Step b2: compute the restored scene from the modified atmospheric light-scattering imaging model:

$$x_i^c = \frac{y_i^c - (1 - t_i)\,A^c}{A^c \cdot \epsilon_i \cdot t_i}$$

where $x_i^c$ is the intensity of the fog-free pixel in color channel c, $A^c$ is the intensity of the ambient light in channel c, $y_i^c$ is the intensity of the foggy-day pixel in channel c, and c ∈ {R, G, B}.
The speed of the method is realized through parallel computation and storage of key data, as follows:

Step c1: solve the key parameters $I_i$, $S_i$, and $\epsilon_i$ of the monitored image simultaneously.

Step c2: because a monitoring scene has a limited number of monitoring angles, the overall and local ambient light can be solved rapidly from the stored ambient light positions and local luminous flux ratios of the different monitoring angles.
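The per-angle storage in step c2 can be sketched as a simple cache keyed by the camera's preset angle (all names here are hypothetical; the source only states that scene parameters are stored and reused per monitoring angle):

```python
# Cache of precomputed scene parameters (ambient light, flux ratios),
# keyed by the camera's preset monitoring angle. Because a fixed
# surveillance camera cycles through a small set of angles, each
# angle's parameters need only be computed once.
_scene_cache = {}

def cached_scene_params(angle_id, compute):
    """Return scene parameters for a preset angle, computing once
    per angle via the supplied `compute` callable."""
    if angle_id not in _scene_cache:
        _scene_cache[angle_id] = compute(angle_id)
    return _scene_cache[angle_id]
```

On later frames from the same angle, only the transmittance and restoration steps need to run, which is where the claimed speed-up comes from.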
Table 1 shows the running times of the method of the invention, the DCP method, and the CAN method on an actual monitored image (FIG. 2; 704 × 352, the same below), using MATLAB R2021a on a PC with an Intel(R) Core(TM) i7-1510U CPU @ 1.80 GHz and 16.00 GB RAM. The data in the table show that the method of the invention runs fastest of the three.
TABLE 1
FIGS. 3, 4, and 5 show the results of processing the actual monitored image with the DCP method, the CAN method, and the method of the invention. Compared with the other two methods, the method of the invention not only defogs clearly but also markedly enhances brightness, as verified by the histogram data of FIGS. 6a, 6b, and 6c, where the abscissa is the range of brightness values and the ordinate is the number of pixels at each brightness value.
The invention also provides a low-illumination scene intelligent road monitoring rapid defogging device which comprises a monitoring image preprocessing module, a model construction and correction module and a defogging restoration image module;
the monitoring image preprocessing module is used for expressing the dark primary color again in the HSI color space according to the brightness and the color density of an observation image, calibrating the dark primary color according to the pixel brightness and the color density, and calculating the transmissivity of the image according to the calibrated dark primary color;
the model building and correcting module is used for building a light flux ratio calculation model, introducing a light flux ratio, modifying an atmospheric scattering imaging model, utilizing the light flux ratio and calculating local environment light;
the defogging restoration image module is used for restoring an original monitoring scene by combining the transmittance and the local ambient light;
the re-expressing the dark primary color in the HSI color space by the brightness and the color density of the observed image specifically comprises the following steps: calculating the dark primary color of the monitored foggy day image by adopting the following formula
$$y_i^{dark} = \min_{j \in \Omega_i}\ \min_{c \in \{R,G,B\}} y_j^c = I_i^{dark}\left(1 - S_i^{dark}\right)$$

where $I_i^{dark}$ and $S_i^{dark}$ are the brightness and color density of the dark primary color, $I_{x,i}^{dark}$ and $S_{x,i}^{dark}$ are those of the dark primary color of the original haze-free pixel $x_i$, and the dark primary color of the real scene is $x_i^{dark} = I_{x,i}^{dark}\left(1 - S_{x,i}^{dark}\right) \approx 0$;
the calibrating the dark primary color according to the pixel brightness and the color density and the calibrating the dark primary color specifically include: calculating the pixel y of the foggy day image y at the position i by adopting the following formulaiBrightness I ofi:
Wherein the content of the first and second substances,pixel y respectively representing foggy day image y at position iiThe red, green and blue channel intensity values of (2) are definediIs pixel yiThe saturation of (c) is calculated as follows:
where min is the minimum operation; the product $S_i I_i$ of saturation and brightness is defined as the color density of pixel $y_i$, used to construct the dark primary color complementary term. The brightness $I_i^{dark}$ and color density $S_i^{dark} I_i^{dark}$ of the dark primary color pixel construct the dark primary color supplemental term used to calibrate the dark primary color; $\epsilon_i$ is the local background luminous flux ratio of pixel $y_i$, and $t_i$ is the transmittance of pixel $y_i$;
the calculating of the transmittance of the image specifically includes: the following equation for transmittance is constructed:
the resulting transmittance is:
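The calibrated-dark-primary transmittance formula itself is not reproduced above, so as a hedged stand-in the following sketch uses the conventional dark channel prior estimate t = 1 - ω·dark(y / A); the window size, the ω value and the lower clip are common defaults, not the patent's calibrated version:

```python
import numpy as np

def dark_channel_transmittance(img, ambient, window=7, omega=0.95):
    """Dark-channel-prior transmittance estimate (hedged stand-in).

    `img` is (H, W, 3) in [0, 1]; `ambient` is a length-3 ambient light
    vector. Returns a per-pixel transmittance map clipped to [0.05, 1].
    """
    norm = img / np.maximum(ambient, 1e-6)            # per-channel normalization by A
    per_pixel_min = norm.min(axis=2)                  # minimum over color channels
    h, w = per_pixel_min.shape
    pad = window // 2
    padded = np.pad(per_pixel_min, pad, mode="edge")
    dark = np.empty_like(per_pixel_min)
    for i in range(h):                                # spatial minimum filter
        for j in range(w):
            dark[i, j] = padded[i:i + window, j:j + window].min()
    return np.clip(1.0 - omega * dark, 0.05, 1.0)     # floor avoids division blow-up
```

On a uniformly haze-saturated patch (image equal to ambient light) the estimate bottoms out at the clip floor, while a black haze-free patch yields transmittance 1.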
the constructing of the light flux ratio calculation model specifically comprises: a monitoring scene contains both a backlit region and a surface-lit region, the ambient light intensity being weak in the backlit region and strong in the surface-lit region; the attenuation direction of the ambient light intensity in a local region of the monitoring scene is defined as the light flux direction, i.e. the light flux direction points from the surface-lit region to the backlit region, and on the image this attenuation direction is the light intensity gradient direction of a local image block; the image transmittance and the depth of field d satisfy:
t_i = e^(-β·d)
where the gradient terms respectively denote the light intensity variations in the horizontal and vertical directions within the convolution window Ω_i centered on the pixel x_i in the original scene image, and the light intensity variations in the horizontal and vertical directions within the convolution window Ω_i centered on the pixel y_i in the monitored image;
according to the light flux direction, all pixels in the convolution window Ω_i are divided into surface-light pixels and backlight pixels, and the proportion of surface-light pixels in the total number of pixels in the convolution window Ω_i is defined as the light flux ratio ε_i:
where N is the total number of pixels in the convolution window Ω_i, and max({sum(N_k)}) is the number of surface-light pixels;
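A minimal sketch of the light flux ratio for one convolution window; since the exact surface-light/backlight classification rule along the gradient direction is not fully specified above, pixels at or above the window's mean intensity are treated as surface-light here, which is an assumption:

```python
import numpy as np

def light_flux_ratio(window_pixels):
    """Light flux ratio epsilon_i for one convolution window.

    The patent divides the window's pixels into surface-light and backlight
    along the light flux (intensity-gradient) direction; as a simplifying
    assumption, pixels at or above the window's mean intensity are counted
    as surface-light. epsilon_i is their share of the N window pixels.
    """
    w = np.asarray(window_pixels, dtype=float)
    n_total = w.size                                  # N: total pixels in the window
    n_face = int((w >= w.mean()).sum())               # assumed surface-light pixels
    return n_face / n_total
```

For a half-bright, half-dark window this yields ε_i = 0.5, and for a uniformly lit window ε_i = 1, consistent with the ratio's intended range (0, 1].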
the introducing of the light flux ratio to modify the atmospheric scattering imaging model specifically comprises: the atmospheric scattering imaging model is:
y_i = t_i·x_i + (1 - t_i)·A
where t_i is the transmittance of the foggy-day pixel y_i, A is the overall ambient light of the monitored scene, and y_i is the pixel value of the foggy-day image y at position i;
the atmospheric scattering imaging model is modified using the following formula:
y_i = A·ε_i·t_i·x_i + (1 - t_i)·A
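The modified model can be written directly as a forward synthesis, useful for checking any later inversion against the classic model; the function name is an assumption:

```python
def hazy_pixel(x, t, A, eps):
    """Forward synthesis with the modified model y = A*eps*t*x + (1 - t)*A.

    Compared with the classic y = t*x + (1 - t)*A, the direct-transmission
    term is additionally scaled by the overall ambient light A and the local
    light flux ratio eps, modelling low-illumination surveillance scenes.
    """
    return A * eps * t * x + (1.0 - t) * A
```

With A = 1 and eps = 1 the expression reduces exactly to the classic atmospheric scattering model.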
the calculating of the local ambient light by using the light flux ratio specifically comprises: in the monitored image, the ambient light [A_R, A_G, A_B] is obtained by averaging a certain proportion of the pixels with the minimum image transmittance, namely:
where A_R, A_G and A_B are the gray values of the red, green and blue channels of the ambient light, respectively; for a pixel satisfying this condition, its light flux ratio is set to ε_i = 1;
the restoring of the original monitoring scene by combining the transmittance and the local ambient light specifically comprises: substituting the transmittance t_i, the light flux ratio ε_i and the overall ambient light intensity [A_R, A_G, A_B] into the modified atmospheric scattering imaging model, the original monitoring scene is restored as:
x_i^c = (y_i^c - (1 - t_i)·A^c) / (A^c·ε_i·t_i)
where x_i^c is the intensity of the fog-free pixel in color channel c, A^c is the intensity of the ambient light in color channel c, y_i^c is the intensity of the foggy-day pixel in color channel c, c ∈ {R, G, B}, and R, G and B respectively denote the red, green and blue color channels.
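Inverting the modified model y_i = A·ε_i·t_i·x_i + (1 - t_i)·A for the fog-free intensity gives the restoration below; the clipping to [0, 1] and the denominator floor are implementation assumptions:

```python
import numpy as np

def restore_pixel(y, t, A, eps):
    """Invert the modified model for the fog-free intensity x_i^c.

    Solving y = A*eps*t*x + (1 - t)*A for x gives
    x = (y - (1 - t)*A) / (A*eps*t); values are clipped to [0, 1].
    """
    x = (y - (1.0 - t) * A) / np.maximum(A * eps * t, 1e-6)
    return np.clip(x, 0.0, 1.0)
```

A quick round-trip check: synthesizing y = A·ε·t·x + (1 - t)·A from a known x and restoring it recovers the original value.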
When calculating the above parameters, the device completes the operations by a parallel computing method, and separately stores the light flux ratio and the ambient light of the camera at different monitoring angles.
As described above, the low-illumination-scene intelligent highway monitoring fast defogging device according to the embodiment of the present application can be implemented in various terminal devices, such as a server of a distributed computing system. In one example, the device according to the embodiment of the application can be integrated into the terminal device as a software module and/or a hardware module. For example, the device may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the device may also be one of many hardware modules of the terminal device.
Alternatively, in another example, the apparatus and the terminal device may be separate terminal devices, and the apparatus may be connected to the terminal device through a wired and/or wireless network and transmit the interaction information according to an agreed data format.
The present application also provides an electronic device 10 comprising:
one or more processors 11 and memory 12, the processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer readable storage medium and executed by the processor 11 to implement the low-light scene intelligent highway monitoring fast defogging method and/or other desired functions of the various embodiments of the present application described above.
In one example, the electronic device 10 may also include an input device 13 and an output device 14, which may be interconnected by a bus system and/or other form of connection mechanism.
For example, the input device 13 may be a keyboard, a mouse, or the like.
The output device 14 can output various information to the outside, including the result of the low-illumination-scene intelligent highway monitoring fast defogging. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices.
According to another aspect of the present application, there is also provided a computer-readable storage medium having stored thereon computer program instructions operable, when executed by a computing device, to perform the low-illumination-scene intelligent highway monitoring fast defogging method described above.
The invention provides a low-illumination-scene intelligent highway monitoring fast defogging method, device and electronic equipment, and there are many methods and ways to implement the technical scheme. The above description is only a preferred embodiment of the invention; it should be noted that a person skilled in the art can make a number of improvements and refinements without departing from the principle of the invention, and these improvements and refinements should also be regarded as falling within the protection scope of the invention. All components not specified in the present embodiment can be realized by the prior art.
Claims (10)
1. A low-illumination-scene intelligent highway monitoring rapid defogging method, characterized by comprising the following steps:
step 1: re-expressing the dark primary color in the HSI color space by the brightness and the color density of the observed image;
step 2: calibrating the dark primary color according to the pixel brightness and the color density;
step 3: calculating the transmittance of the image according to the calibrated dark primary color;
step 4: constructing a light flux ratio calculation model;
step 5: introducing the light flux ratio, and modifying the atmospheric scattering imaging model;
step 6: calculating the local ambient light by using the light flux ratio;
step 7: restoring the original monitoring scene by combining the transmittance and the local ambient light.
2. The method of claim 1, wherein step 1 comprises: calculating the dark primary color of the monitored foggy-day image by the following formula
3. The method according to claim 2, wherein in step 2, the brightness I_i of the pixel y_i of the foggy-day image y at position i is calculated by the following formula:
I_i = (r_i + g_i + b_i) / 3
where r_i, g_i and b_i respectively denote the red, green and blue channel intensity values of the pixel y_i of the foggy-day image y at position i; S_i is defined as the saturation of the pixel y_i and is calculated as follows:
S_i = 1 - 3·min(r_i, g_i, b_i) / (r_i + g_i + b_i)
where min is the minimum operation; the product S_i·I_i of saturation and brightness is defined as the color density of the pixel y_i and is used for constructing the dark primary color complementary term.
4. The method of claim 3, wherein step 2 further comprises: calculating the calibrated dark primary color using the following formula
6. The method of claim 5, wherein step 4 comprises: a monitoring scene contains both a backlit region and a surface-lit region, the ambient light intensity being weak in the backlit region and strong in the surface-lit region; the attenuation direction of the ambient light intensity in a local region of the monitoring scene is defined as the light flux direction, i.e. the light flux direction points from the surface-lit region to the backlit region, and on the image this attenuation direction is the light intensity gradient direction of a local image block; the image transmittance and the depth of field d satisfy:
t_i = e^(-β·d)
where the gradient terms respectively denote the light intensity variations in the horizontal and vertical directions within the convolution window Ω_i centered on the pixel x_i in the original scene image, and the light intensity variations in the horizontal and vertical directions within the convolution window Ω_i centered on the pixel y_i in the monitored image;
according to the light flux direction, all pixels in the convolution window Ω_i are divided into surface-light pixels and backlight pixels, and the proportion of surface-light pixels in the total number of pixels in the convolution window Ω_i is defined as the light flux ratio ε_i:
where N is the total number of pixels in the convolution window Ω_i, and max({sum(N_k)}) is the number of surface-light pixels.
7. The method of claim 6, wherein step 5 comprises: the atmospheric scattering imaging model is:
y_i = t_i·x_i + (1 - t_i)·A
where t_i is the transmittance of the foggy-day pixel y_i, A is the overall ambient light of the monitored scene, and y_i is the pixel value of the foggy-day image y at position i;
the atmospheric scattering imaging model is modified using the following formula:
y_i = A·ε_i·t_i·x_i + (1 - t_i)·A.
8. The method of claim 7, wherein step 6 comprises: in the monitored image, the ambient light [A_R, A_G, A_B] is obtained by averaging a certain proportion of the pixels with the minimum image transmittance, namely:
where A_R, A_G and A_B are the gray values of the red, green and blue channels of the ambient light, respectively; for a pixel satisfying this condition, its light flux ratio is set to ε_i = 1;
step 7 comprises: substituting the transmittance t_i, the light flux ratio ε_i and the overall ambient light intensity [A_R, A_G, A_B] into the modified atmospheric scattering imaging model, the original monitoring scene is restored as:
x_i^c = (y_i^c - (1 - t_i)·A^c) / (A^c·ε_i·t_i)
where x_i^c is the intensity of the fog-free pixel in color channel c, A^c is the intensity of the ambient light in color channel c, y_i^c is the intensity of the foggy-day pixel in color channel c, c ∈ {R, G, B}, and R, G and B respectively denote the red, green and blue color channels.
9. A low-illumination-scene intelligent highway monitoring rapid defogging device, characterized by comprising a monitoring image preprocessing module, a model building and correcting module and a defogging restoration image module;
the monitoring image preprocessing module is used for re-expressing the dark primary color in the HSI color space according to the brightness and the color density of an observed image, calibrating the dark primary color according to the pixel brightness and color density, and calculating the transmittance of the image according to the calibrated dark primary color;
the model building and correcting module is used for building a light flux ratio calculation model, introducing the light flux ratio to modify an atmospheric scattering imaging model, and calculating the local ambient light by using the light flux ratio;
the defogging restoration image module is used for restoring an original monitoring scene by combining the transmittance and the local ambient light;
the re-expressing of the dark primary color in the HSI color space by the brightness and the color density of the observed image specifically comprises: calculating the dark primary color of the monitored foggy-day image by the following formula,
where the two factors of the dark primary color are respectively its brightness and its color density, corresponding respectively to the light intensity and the color density of the dark primary color of the original haze-free pixel x_i; the dark primary color of the real scene is:
the calibrating of the dark primary color according to the pixel brightness and the color density specifically comprises: calculating the brightness I_i of the pixel y_i of the foggy-day image y at position i by the following formula:
I_i = (r_i + g_i + b_i) / 3
where r_i, g_i and b_i respectively denote the red, green and blue channel intensity values of the pixel y_i of the foggy-day image y at position i; S_i is defined as the saturation of the pixel y_i and is calculated as follows:
S_i = 1 - 3·min(r_i, g_i, b_i) / (r_i + g_i + b_i)
where min is the minimum operation; the product S_i·I_i of saturation and brightness is defined as the color density of the pixel y_i and is used for constructing a dark primary color complementary term, the calculation also referring to the above two formulas;
in the above formula, a dark primary color supplemental term is constructed from the brightness and the color density of the dark primary color pixel and is used for calibrating the dark primary color; ε_i is the local background light flux ratio of the pixel y_i, and t_i is the transmittance of the pixel y_i;
the calculating of the transmittance of the image specifically includes: the following equation for transmittance is constructed:
the resulting transmittance is:
the constructing of the light flux ratio calculation model specifically comprises: a monitoring scene contains both a backlit region and a surface-lit region, the ambient light intensity being weak in the backlit region and strong in the surface-lit region; the attenuation direction of the ambient light intensity in a local region of the monitoring scene is defined as the light flux direction, i.e. the light flux direction points from the surface-lit region to the backlit region, and on the image this attenuation direction is the light intensity gradient direction of a local image block; the image transmittance and the depth of field d satisfy:
t_i = e^(-β·d)
where the gradient terms respectively denote the light intensity variations in the horizontal and vertical directions within the convolution window Ω_i centered on the pixel x_i in the original scene image, and the light intensity variations in the horizontal and vertical directions within the convolution window Ω_i centered on the pixel y_i in the monitored image;
according to the light flux direction, all pixels in the convolution window Ω_i are divided into surface-light pixels and backlight pixels, and the proportion of surface-light pixels in the total number of pixels in the convolution window Ω_i is defined as the light flux ratio ε_i:
where N is the total number of pixels in the convolution window Ω_i, and max({sum(N_k)}) is the number of surface-light pixels;
the introducing of the light flux ratio to modify the atmospheric scattering imaging model specifically comprises: the atmospheric scattering imaging model is:
y_i = t_i·x_i + (1 - t_i)·A
where t_i is the transmittance of the foggy-day pixel y_i, A is the overall ambient light of the monitored scene, and y_i is the pixel value of the foggy-day image y at position i;
the atmospheric scattering imaging model is modified using the following formula:
y_i = A·ε_i·t_i·x_i + (1 - t_i)·A
the calculating of the local ambient light by using the light flux ratio specifically comprises: in the monitored image, the ambient light [A_R, A_G, A_B] is obtained by averaging a certain proportion of the pixels with the minimum image transmittance, namely:
where A_R, A_G and A_B are the gray values of the red, green and blue channels of the ambient light, respectively; for a pixel satisfying this condition, its light flux ratio is set to ε_i = 1;
the restoring of the original monitoring scene by combining the transmittance and the local ambient light specifically comprises: substituting the transmittance t_i, the light flux ratio ε_i and the overall ambient light intensity [A_R, A_G, A_B] into the modified atmospheric scattering imaging model, the original monitoring scene is restored as:
x_i^c = (y_i^c - (1 - t_i)·A^c) / (A^c·ε_i·t_i)
where x_i^c is the intensity of the fog-free pixel in color channel c, A^c is the intensity of the ambient light in color channel c, y_i^c is the intensity of the foggy-day pixel in color channel c, c ∈ {R, G, B}, and R, G and B respectively denote the red, green and blue color channels.
10. An electronic device, comprising: a processor and a memory, the memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the low-illumination-scene intelligent highway monitoring rapid defogging method recited in any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111578311.2A CN114529460A (en) | 2021-12-22 | 2021-12-22 | Low-illumination-scene intelligent highway monitoring rapid defogging method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111578311.2A CN114529460A (en) | 2021-12-22 | 2021-12-22 | Low-illumination-scene intelligent highway monitoring rapid defogging method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114529460A true CN114529460A (en) | 2022-05-24 |
Family
ID=81618882
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111578311.2A Withdrawn CN114529460A (en) | 2021-12-22 | 2021-12-22 | Low-illumination-scene intelligent highway monitoring rapid defogging method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114529460A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116109513A (en) * | 2023-02-27 | 2023-05-12 | 南京林业大学 | Image defogging method based on local ambient light projection constant priori |
- 2021-12-22 CN CN202111578311.2A patent/CN114529460A/en not_active Withdrawn
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Shi et al. | Let you see in sand dust weather: A method based on halo-reduced dark channel prior dehazing for sand-dust image enhancement | |
CN107767354B (en) | Image defogging algorithm based on dark channel prior | |
Huang et al. | An efficient visibility enhancement algorithm for road scenes captured by intelligent transportation systems | |
US10949958B2 (en) | Fast fourier color constancy | |
Zhang et al. | Underwater image enhancement via weighted wavelet visual perception fusion | |
Gao et al. | Sand-dust image restoration based on reversing the blue channel prior | |
CN104036466B (en) | A kind of video defogging method and system | |
CN104867121B (en) | Image Quick demisting method based on dark primary priori and Retinex theories | |
Wang et al. | Variational single nighttime image haze removal with a gray haze-line prior | |
TWI489416B (en) | Image recovery method | |
CN103268596B (en) | A kind of method for reducing picture noise and making color be near the mark | |
CN105701783B (en) | A kind of single image to the fog method and device based on environment light model | |
US20120212477A1 (en) | Fast Haze Removal and Three Dimensional Depth Calculation | |
TW202226141A (en) | Image dehazing method and image dehazing apparatus using the same | |
CN113222866B (en) | Gray scale image enhancement method, computer readable medium and computer system | |
CN110827218A (en) | Airborne image defogging method based on image HSV transmissivity weighted correction | |
CN106023108A (en) | Image defogging algorithm based on boundary constraint and context regularization | |
CN109523474A (en) | A kind of enhancement method of low-illumination image based on greasy weather degradation model | |
CN111192205A (en) | Image defogging method and system and computer readable storage medium | |
CN105989583A (en) | Image defogging method | |
Gao et al. | Color balance and sand-dust image enhancement in lab space | |
CN114529460A (en) | Low-illumination-scene intelligent highway monitoring rapid defogging method and device and electronic equipment | |
CN107454317B (en) | Image processing method, image processing device, computer-readable storage medium and computer equipment | |
CN105118032B (en) | A kind of wide method for dynamically processing of view-based access control model system | |
CN109325905B (en) | Image processing method, image processing device, computer readable storage medium and electronic apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20220524 |