CN114529460A - Low-illumination-scene intelligent highway monitoring rapid defogging method and device and electronic equipment - Google Patents

Low-illumination-scene intelligent highway monitoring rapid defogging method and device and electronic equipment

Info

Publication number
CN114529460A
CN114529460A (application CN202111578311.2A)
Authority
CN
China
Prior art keywords
pixel
image
color
light
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111578311.2A
Other languages
Chinese (zh)
Inventor
朱柱
谢凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinling Institute of Technology
Original Assignee
Jinling Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinling Institute of Technology filed Critical Jinling Institute of Technology
Priority to CN202111578311.2A
Publication of CN114529460A
Legal status: Withdrawn

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 — Image enhancement or restoration
    • G06T 5/73 — Deblurring; Sharpening
    • G06T 5/90 — Dynamic range modification of images or parts thereof
    • G06T 5/94 — Dynamic range modification based on local image properties, e.g. for local contrast enhancement
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10024 — Color image
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30192 — Weather; Meteorology

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a rapid defogging method and device for intelligent highway monitoring in low-illumination scenes, and electronic equipment. The method comprises the following steps: S1, using the HSI color space, design a dark primary color adjustment model based on light intensity and color density, and calibrate the dark primary color; S2, introduce the local luminous flux proportion of the scene and calculate the local ambient light; S3, calculate the transmittance of the monitoring scene from S1 and S2; S4, combining the transmittance and the local ambient light, design a parallel computation and restore the road traffic monitoring scene. The method improves the accuracy of the transmittance by calibrating the dark primary color; by introducing the luminous flux proportion, the ambient light of the monitored scene is constrained and the brightness of the recovered scene is improved. In addition, the disclosed method has low computational complexity and a high degree of parallelization, and can achieve efficient and rapid defogging of the monitoring scene. Thus, in a low-illumination foggy environment, the intelligent road monitoring system obtains real-time road conditions, providing technical support for intelligent monitoring.

Description

Low-illumination-scene intelligent highway monitoring rapid defogging method and device and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a rapid defogging method and device for intelligent highway monitoring in low-illumination scenes, and electronic equipment.
Background
Road surface monitoring is the visual window through which an intelligent road system acquires real-time image data, anticipates traffic risks, eliminates hidden traffic dangers, and ensures vehicle safety and smooth traffic. However, when the monitoring scene is in a fog, haze, smoke, or dust environment, the system has difficulty capturing data accurately, and traffic accidents are easily caused. Therefore, removing the interference of fog, haze, smoke, and dust and recovering the real monitoring scene has important value for the intelligent road system. Existing image defogging technology comprises defogging algorithms based on image enhancement, defogging algorithms based on the atmospheric light scattering imaging model, and defogging algorithms based on learning strategies. Defogging methods based on image enhancement perform poorly, and the restored scene mostly retains obvious fog residue. Defogging methods based on the scattering imaging model can defog effectively, but easily cause obvious light intensity loss, so the defogging result is dark. Learning-strategy defogging requires a large amount of data and long training time, and is mostly based on the scattering imaging model and priors, so the defogging result is also accompanied by a loss of brightness. Considering that existing defogging methods easily darken the restored scene, they have difficulty accurately restoring the monitoring scene to cope with fog, haze, smoke, and dust in a low-illumination environment and providing a clear monitoring picture.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the obvious brightness loss in existing defogging technology, the invention provides a rapid defogging method which can defog quickly and effectively while keeping the scene brightness stable, so as to deal with the interference of fog, haze, smoke, and dust in a smart road monitoring scene in a low-illumination environment.
In order to achieve this purpose, the invention provides a rapid defogging method for intelligent highway monitoring in low-illumination scenes, comprising the following steps:
Step 1: re-express the dark primary color in the HSI (hue, saturation, intensity) color space with the brightness and color density of the observed image;
Step 2: calibrate the dark primary color according to the pixel brightness and color density;
Step 3: calculate the transmittance of the image from the calibrated dark primary color;
Step 4: construct a luminous flux proportion calculation model;
Step 5: introduce the luminous flux proportion and modify the atmospheric scattering imaging model;
Step 6: calculate the local ambient light using the luminous flux proportion;
Step 7: restore the original monitoring scene by combining the transmittance and the local ambient light.
Step 1 comprises: calculate the dark primary color of the monitored foggy-day image with the following formula (in the HSI color space, the channel-wise minimum of a pixel equals I · (1 − S), so the dark primary color can be re-expressed through brightness and color density):

y_i^dark = min_{j∈Ω_i} ( I_j^dark · (1 − S_j^dark) )

where I_i^dark and S_i^dark are respectively the brightness and color density of the dark primary color y_i^dark, and I_i^{x,dark} and S_i^{x,dark} are respectively the light intensity and color density of the dark primary color of the corresponding original fog-free pixel x_i. The dark primary color of the real scene is:

x_i^dark = I_i^{x,dark} · (1 − S_i^{x,dark})

and x_i^dark is equal to zero or slightly greater than 0, i.e. x_i^dark ≈ 0; the parameter value intervals are (0, 1).
In step 2, the following formula is adopted to calculate the brightness I_i of pixel y_i of the foggy-day image y at position i:

I_i = (y_i^R + y_i^G + y_i^B) / 3

where y_i^R, y_i^G, y_i^B respectively represent the red, green, and blue channel intensity values of pixel y_i of the foggy-day image y at position i. Define S_i as the saturation of pixel y_i, calculated as follows:

S_i = 1 − 3 · min(y_i^R, y_i^G, y_i^B) / (y_i^R + y_i^G + y_i^B)

where min is the minimum operation. The product of saturation and brightness, S_i · I_i, is defined as the color density of pixel y_i and is used for constructing the dark primary color supplemental term. I_i^dark and S_i^dark are calculated by the same two formulas.
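As a minimal sketch (not part of the patent), the brightness and saturation formulas above can be computed with NumPy; the identity min(R, G, B) = I · (1 − S), which underlies the dark primary color re-expression of step 1, then follows directly:

```python
import numpy as np

def hsi_brightness(img):
    # I_i = (R + G + B) / 3, computed per pixel
    return img.mean(axis=2)

def hsi_saturation(img, eps=1e-9):
    # S_i = 1 - 3 * min(R, G, B) / (R + G + B); eps guards against black pixels
    return 1.0 - 3.0 * img.min(axis=2) / (img.sum(axis=2) + eps)

# for any pixel, the channel-wise minimum equals I * (1 - S),
# which is the per-pixel quantity the dark primary color is built from
rng = np.random.default_rng(0)
img = rng.random((4, 4, 3)) + 0.05          # avoid all-zero pixels
I, S = hsi_brightness(img), hsi_saturation(img)
```

The window minimum over Ω_i (the dark channel proper) is then a running minimum of I · (1 − S) over each local block.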
Step 2 comprises: calculate the calibrated dark primary color with the following formula:

(formula image from the original, not reproduced here)

where ŷ_i^dark is the calibrated dark primary color; the brightness I_i^dark and color density S_i^dark of the dark primary color pixel construct a dark primary color supplemental term used to calibrate the dark primary color. Further, ε_i is the local background luminous flux proportion of pixel y_i, and t_i is the transmittance of pixel y_i.
Step 3 comprises: construct the following equation for the transmittance:

(equation image from the original, not reproduced here)

from which the transmittance t_i is obtained as:

(formula image from the original, not reproduced here)

where the intermediate parameters A0 and B0 can be obtained from the observed image, and σ is an adjustment parameter with value interval [0.8, 1.0].
Step 4 comprises: a monitoring scene has two kinds of regions, backlight and area light; the ambient light intensity of a backlight region is weak and that of an area-light region is strong. The attenuation direction of the ambient light intensity in a local region of the monitoring scene is defined as the luminous flux direction, i.e. the luminous flux direction points from the area-light region to the backlight region; on the image, this attenuation direction is the light intensity gradient direction of the local image block. The image transmittance and the depth of field d satisfy:

t_i = e^(−βd)

where β is the attenuation factor; according to the atmospheric scattering imaging principle β may take a constant value, i.e. the transmittance is related only to d. Since a local image block contains a small continuous piece of the scene, the same depth of field can be assumed, so the pixel transmittances within the block are consistent. On this basis, the invention finds that fog has no influence on the luminous flux direction, i.e. the luminous flux directions of the monitored image and the original scene image are consistent, and obtains the direction angle θ_i:

θ_i = arctan( Δv_i / Δh_i )

where Δh_i^x, Δv_i^x are respectively the light intensity changes in the horizontal and vertical directions of the convolution window Ω_i centered on pixel i in the original scene image, and Δh_i, Δv_i are the corresponding changes in the monitored image. Since the transmittance is consistent within Ω_i, the vectors (Δh_i^x, Δv_i^x) and (Δh_i, Δv_i) have the same direction, and the direction angle θ_i is obtained by the arctangent. According to θ_i, all the pixels in the convolution window Ω_i are divided into area light and backlight. Because of the dispersed distribution of the ambient light, area-light pixels dominate a local region; the proportion of area-light pixels (the larger class in Ω_i) to the total number of pixels in Ω_i is defined as the luminous flux proportion ε_i:

ε_i = max({sum(N_k)}) / N

where N is the total number of pixels in the convolution window Ω_i, and max({sum(N_k)}) is the number of area-light pixels.
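The window-level computation of ε_i can be sketched as follows. This is illustrative only: the two-class split of gradient directions here uses a simple sign test on the direction angle, whereas the patent's exact classification rule is given by its own formulas.

```python
import numpy as np

def flux_proportion(gray, row, col, win=23):
    """Luminous flux proportion eps_i of the window centered at (row, col)."""
    r = win // 2
    patch = gray[max(row - r, 0):row + r + 1, max(col - r, 0):col + r + 1]
    gv, gh = np.gradient(patch)              # vertical / horizontal intensity changes
    theta = np.arctan2(gv, gh)               # direction angle per pixel
    labels = (theta >= 0)                    # illustrative two-class split
    n_major = max(labels.sum(), (~labels).sum())   # max({sum(N_k)})
    return n_major / labels.size             # eps_i = majority share of the window

# a flat, evenly lit patch: every pixel falls in one class, so eps = 1
eps = flux_proportion(np.ones((31, 31)), 15, 15)
```

ε_i is always in [0.5, 1], since the majority class by definition holds at least half of the window's pixels.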
Step 5 comprises: the atmospheric scattering imaging model is:

y_i = t_i · x_i + (1 − t_i) · A

where t_i is the transmittance of foggy-day pixel y_i, A is the overall ambient light of the monitored scene, and y_i is the pixel value of the foggy-day image y at position i.

The atmospheric scattering imaging model is modified using the following formula:

y_i = A · ε_i · t_i · x_i + (1 − t_i) · A

After introducing ε_i, the local ambient light is A · ε_i; the ambient light becomes smaller, the scene brightness value x_i is enhanced, and the brightness remains stable, so defogging recovery of the intelligent road monitoring scene in a low-illumination environment can be realized.
Step 6 comprises: in the monitored image, the ambient light [A_R, A_G, A_B] is obtained by averaging a certain proportion (generally 0.1%) of the pixels with the smallest image transmittance:

(formula image from the original, not reproduced here)

where A_R, A_G, A_B are respectively the gray values of the red, green, and blue channels of the ambient light. In order to avoid overexposure in the distance, if the condition given by the formula (image from the original, not reproduced here) holds, the luminous flux proportion of the pixel is set to ε_i = 1; the physical meaning is that for a distant scene, such as the sky, the local ambient light is the overall ambient light of the scene. From the luminous flux proportion, the local ambient light is defined as ε_i · A.
Step 7 comprises: introduce the transmittance t_i, the luminous flux proportion ε_i, and the overall ambient light intensity [A_R, A_G, A_B] into the modified atmospheric scattering imaging model, and restore the original monitoring scene:

x_i^c = ( y_i^c − (1 − t_i) · A^c ) / ( A^c · ε_i · t_i )

where x_i^c is the intensity of the fog-free pixel in color channel c, A^c is the intensity of the ambient light in color channel c, y_i^c is the intensity of the foggy-day pixel in color channel c, and c ∈ {R, G, B}; R, G, B represent the red, green, and blue color channels respectively.
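Restoring x from the modified model is a per-channel inversion. A minimal NumPy sketch follows, with an assumed lower clamp on t and ε to keep the division stable (the patent does not specify such a clamp):

```python
import numpy as np

def restore(y, t, eps, A, floor=0.1):
    """Invert y = A*eps*t*x + (1 - t)*A per color channel c in {R, G, B}."""
    A = np.asarray(A, dtype=float)               # [A_R, A_G, A_B]
    t3 = np.clip(t, floor, 1.0)[..., None]       # assumed stabilizing clamp
    e3 = np.clip(eps, floor, 1.0)[..., None]
    x = (y - (1.0 - t3) * A) / (A * e3 * t3)
    return np.clip(x, 0.0, 1.0)

# round-trip check: synthesize y from a known x with the modified model, then recover it
rng = np.random.default_rng(1)
x_true = rng.random((4, 4, 3))
t = np.full((4, 4), 0.7)
eps = np.full((4, 4), 0.9)
A = np.array([0.8, 0.8, 0.8])
y = A * eps[..., None] * t[..., None] * x_true + (1.0 - t[..., None]) * A
x_rec = restore(y, t, eps, A)
```

Because ε < 1 shrinks the effective ambient light A · ε_i, the recovered x is brighter than a plain dark-channel inversion would give, which is the stated aim of the modification.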
The invention also provides a rapid defogging device for intelligent road monitoring in low-illumination scenes, comprising a monitoring image preprocessing module, a model construction and correction module, and a defogging restoration image module.
The monitoring image preprocessing module is used to re-express the dark primary color in the HSI color space according to the brightness and color density of the observed image, calibrate the dark primary color according to the pixel brightness and color density, and calculate the transmittance of the image from the calibrated dark primary color.
The model construction and correction module is used to construct a luminous flux proportion calculation model, introduce the luminous flux proportion, modify the atmospheric scattering imaging model, and calculate the local ambient light using the luminous flux proportion.
The defogging restoration image module is used to restore the original monitoring scene by combining the transmittance and the local ambient light.
the method for re-expressing the dark primary color in the HSI color space by using the brightness and the color density of the observed image specifically comprises the following steps: by using such asThe dark primary color of the monitored foggy day image is calculated by the following formula
Figure BDA0003426119540000051
Figure BDA0003426119540000052
Wherein
Figure BDA0003426119540000053
And
Figure BDA0003426119540000054
are respectively dark primary colors
Figure BDA0003426119540000055
The brightness and the color density of the color,
Figure BDA0003426119540000056
and
Figure BDA0003426119540000057
respectively corresponding to original haze-free pixels xiLight intensity and color density of dark primary color, dark primary color of real scene
Figure BDA0003426119540000058
Comprises the following steps:
Figure BDA0003426119540000059
Figure BDA00034261195400000510
the calibrating the dark primary color according to the pixel brightness and the color density and the calibrating the dark primary color specifically include: calculating the pixel y of the foggy day image y at the position i by adopting the following formulaiBrightness I ofi
Figure BDA00034261195400000511
Wherein the content of the first and second substances,
Figure BDA00034261195400000512
pixel y respectively representing foggy day image y at position iiDefining S as the intensity values of the red, green and blue channelsiIs pixel yiThe saturation of (c) is calculated as follows:
Figure BDA00034261195400000513
where min is a minimum operation, multiplying saturation by brightness, SiIiIs defined as a pixel yiFor constructing a dark primary color complementary term;
the calibrated dark primary color is calculated using the following formula
Figure BDA00034261195400000514
Figure BDA00034261195400000515
In the above formula, the brightness of the dark primary color pixel is used
Figure BDA00034261195400000516
And color density
Figure BDA00034261195400000517
A dark primary color supplemental term is constructed,
Figure BDA00034261195400000518
for calibrating the dark primaries; e is the same asiIs pixel yiLocal background light flux ratio of (1), tiIs pixel yiThe transmittance of (a);
the calculating of the transmittance of the image specifically includes: the following equation for transmittance is constructed:
Figure BDA00034261195400000519
the resulting transmittance was:
Figure BDA0003426119540000061
in the formula, intermediate parameters
Figure BDA0003426119540000062
Intermediate parameter
Figure BDA0003426119540000063
σ is an adjustment parameter;
the constructing of the light flux ratio calculation model specifically includes: according to the fact that a monitoring scene has two regions of backlight and surface light, the ambient light intensity of the backlight region is small, the ambient light intensity of the surface light region is strong, the attenuation direction of the ambient light intensity of the local region of the monitoring scene is defined as a light flux direction, namely the light flux direction points to the backlight region from the surface light region, the attenuation direction is the light intensity gradient direction of a local image block on an image, and due to the fact that the image transmissivity and the depth of field d exist:
ti=e-βd
where β is the attenuation factor, giving the azimuth angle
Figure BDA0003426119540000064
Figure BDA0003426119540000065
In the formula (I), the compound is shown in the specification,
Figure BDA0003426119540000066
respectively, by pixel x in the original scene imageiCentered convolution window omegaiThe light intensity varies in the horizontal and vertical directions,
Figure BDA0003426119540000067
respectively, by pixel y in the monitored imageiCentered convolution window omegaiLight intensity variations in the horizontal and vertical directions;
according to
Figure BDA0003426119540000068
Window omega of convolutioniAll the pixels in the system are divided into surface light and backlight, and the surface light pixels occupy a convolution window omegaiThe proportion of the total number of the middle pixels is defined as the luminous flux proportion epsiloni
Figure BDA0003426119540000069
Where N is the convolution window omegaiTotal number of middle pixels, max ({ sum (N)k) }) is the number of face light pixels;
the method for modifying the atmospheric scattering imaging model by introducing the light flux ratio specifically comprises the following steps: the atmospheric scattering imaging model is as follows:
yi=tixi+(1-ti)·A
in the formula, tiIs a foggy day pixel yiA is the overall ambient light of the monitored scene; y isiIs the pixel value of the foggy day image y at the position i;
the atmospheric scattering imaging model is modified using the following formula:
yi=A·∈i·tixi+(1-ti)·A;
the calculating of the local ambient light by using the luminous flux ratio specifically includes: in the monitored image, the ambient light [ A ] is obtained by averaging the pixels with the minimum image transmittance in a certain proportionR,AG,AB]And then:
Figure BDA00034261195400000610
wherein A isR,AG,ABThe gray values of the red, green and blue channels of the ambient light respectively; if it is not
Figure BDA0003426119540000071
The luminous flux proportion of the pixel is determined as ei=1;
Restoring the original monitoring scene by combining the transmittance and the local ambient light specifically comprises: introducing the transmittance t_i, the luminous flux proportion ε_i, and the overall ambient light intensity [A_R, A_G, A_B] into the modified atmospheric scattering imaging model and restoring the original monitoring scene:

x_i^c = ( y_i^c − (1 − t_i) · A^c ) / ( A^c · ε_i · t_i )

where x_i^c is the intensity of the fog-free pixel in color channel c, A^c is the intensity of the ambient light in color channel c, y_i^c is the intensity of the foggy-day pixel in color channel c, c ∈ {R, G, B}, and R, G, B represent the red, green, and blue color channels respectively.
When calculating t_i, ε_i, and the other parameters, the device completes the operations with a parallel computing method, and specifically stores the luminous flux proportion and the ambient light position of the camera at each of its different monitoring angles.
The present invention also provides an electronic device, comprising: a processor and a memory having computer program instructions stored therein which, when executed by the processor, cause the processor to perform the low-illumination-scene intelligent highway monitoring rapid defogging method.
The working principle is as follows: the invention calculates the HSI color space parameters (brightness, color density, and the calibrated dark primary color) in parallel, then calculates t_i, ε_i, and [A_R, A_G, A_B], and then introduces them into the modified atmospheric light scattering imaging model to obtain the restoration result of the road monitoring scene. Since the monitoring distance and the monitored scene change little while the camera's monitoring angle is unchanged, the ambient light positions and local luminous flux proportions under different monitoring angles can be stored, improving the restoration speed.
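The per-angle reuse described above can be sketched as a simple cache keyed by the camera's monitoring angle (the class and key names are hypothetical, not from the patent):

```python
class AngleParameterCache:
    """Stores ambient light and flux-proportion data per camera monitoring angle."""

    def __init__(self):
        self._store = {}

    def get(self, angle_id, compute):
        # compute() runs once per monitoring angle; later frames reuse the result
        if angle_id not in self._store:
            self._store[angle_id] = compute()
        return self._store[angle_id]

calls = []
cache = AngleParameterCache()

def expensive_per_angle_solve():
    calls.append(1)                      # stands in for the per-angle computation
    return {"ambient": [0.8, 0.8, 0.8], "flux": 0.9}

p1 = cache.get("preset-3", expensive_per_angle_solve)
p2 = cache.get("preset-3", expensive_per_angle_solve)   # served from the cache
```

Since a PTZ camera cycles through a finite set of presets, the expensive per-angle quantities are computed once per preset rather than once per frame.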
Advantageous effects: compared with the prior art, the defogging restoration method provided by the invention considers the local ambient light variation, improves the transmittance accuracy, involves few parameters, and has a high degree of parallel computation; it can therefore defog effectively and quickly while improving the light intensity of the monitoring scene, and has an obvious defogging restoration effect on intelligent road monitoring scenes in low-illumination environments.
Drawings
The foregoing and/or other advantages of the invention will become further apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is a schematic flow chart of a low-light-level scene intelligent highway monitoring and rapid defogging method provided by the invention;
FIG. 2 is an image of a road monitoring scene in a low-illumination fog environment;
FIG. 3 shows the defogging recovery result of a conventional Dark Channel Prior (DCP);
FIG. 4 is the defogging restoration result of the DCP prior combined with a deep learning (CAN) framework;
FIG. 5 is a scene after defogging recovery of the method provided by the present invention;
fig. 6a, 6b, 6c are histograms of the defogging recovery results of the DCP method, the CAN method, and the method of the present invention.
Detailed Description
As shown in FIG. 1, the invention provides a low-illumination-scene intelligent highway monitoring rapid defogging method which improves the transmittance accuracy by modifying the dark channel prior, improves the local ambient light accuracy by introducing the luminous flux proportion, improves the brightness of the restored scene by modifying the atmospheric light scattering imaging model, and improves the running speed by adopting parallel calculation suited to the data characteristics, so as to realize rapid defogging restoration of an "intelligent highway" monitoring scene in a low-illumination environment.
Wherein the transmittance solving based on the corrected dark primary color comprises solving the dark primary color light intensity channel, solving the color density channel, solving the ambient light, and solving the transmittance, as follows:
Step 1: solve the dark primary color light intensity channel and the color density channel. According to the HSI color space, the dark primary color is expressed as

y_i^dark = min_{j∈Ω_i} ( I_j · (1 − S_j) )

In a convolution window Ω_i of size 23 × 23 centered at pixel y_i, the position of the dark primary color pixel is indexed, and the light intensity and color density of that pixel are taken as the light intensity channel I_i^dark and the color density channel S_i^dark of pixel y_i.
Step 2: solve the ambient light. Index the 0.1% of pixels with the largest dark primary color brightness among all pixels in the image, and average them to obtain the overall ambient light [A_R, A_G, A_B] of the scene.
Step 3: solve the transmittance. According to steps 1 and 2, the corrected dark primary color is

(formula image from the original, not reproduced here)

and the transmittance of pixel y_i is then solved from the corrected dark primary color:

(formula image from the original, not reproduced here)

where ε_i is the local background luminous flux proportion of pixel i, which can be obtained by parallel calculation, and σ is an adjustment parameter with value interval [0.8, 1.0].
The luminous flux proportion of a pixel is obtained by parallel operation in the monitored image and then introduced into the overall ambient light to obtain the ambient light of the pixel, as follows:
Step a1: with pixel y_i as the center, use a 23 × 23 convolution window to obtain the background brightness gradients Δh_i and Δv_i in the horizontal and vertical directions; the arctangent yields the luminous flux direction of the pixel:

θ_i = arctan( Δv_i / Δh_i )

The luminous flux directions in the 23 × 23 convolution window are divided into two classes, and the ratio of the pixel count of the larger class, max({sum(N_k)}), to the total number of pixels in the convolution window is the luminous flux proportion ε_i.
Step a2: local ambient light calculation of pixel y_i:

A_i = ε_i · [A_R, A_G, A_B]
the modification of the atmospheric light scattering imaging model and the defogging restoration of the scene are implemented according to the physical fact that the larger the light flux ratio, the stronger the local ambient light of the pixel, and the method comprises the following steps of:
step b 1: introducing local ambient light to modify the atmospheric scattering imaging model:
yi=A·∈i·tixi+(1-ti)·A
in the formula,AiI.e. the modified ambient light,
Figure BDA0003426119540000094
step b 2: calculating a restoration scene according to the modified atmospheric light scattering imaging model:
Figure BDA0003426119540000095
in the formula (I), the compound is shown in the specification,
Figure BDA0003426119540000096
is the intensity of the fog-free pixel in color channel c, AcIs the intensity of the ambient light in color channel c,
Figure BDA0003426119540000097
is the intensity of the foggy day pixel in color channel c, c ∈ { R, G, B }.
The rapid method is realized by parallel computation and storage of key data, as follows.
Step c1: the key parameters of the monitored image and ε_i are solved simultaneously.
Step c2: since the number of monitoring angles of a monitoring scene is limited, the overall ambient light and the local ambient light can be solved rapidly from the stored ambient light positions and local luminous flux proportions under the different monitoring angles.
Table 1 shows the running time of the method of the present invention, the DCP method, and the CAN method on an actual monitored image (FIG. 2, 704 × 352, the same below), using MATLAB 2021a on a PC with an Intel(R) Core(TM) i7-1510U CPU @ 1.80 GHz and 16.00 GB RAM. As can be seen from the data in the table, the method of the invention has the fastest running time of the three.
TABLE 1
(table image from the original, not reproduced here)
Figs. 3, 4, and 5 show the results of the DCP method, the CAN method, and the method of the present invention on the actual monitored image. It can be seen that, compared with the other two methods, the method of the invention not only has an obvious defogging effect but also an obvious brightness enhancement, as verified by the histogram data of Figs. 6a, 6b, and 6c; the abscissa of each histogram is the range of brightness values, and the ordinate is the number of pixels at each brightness value.
The invention also provides a rapid defogging device for intelligent road monitoring in low-illumination scenes, comprising a monitoring image preprocessing module, a model construction and correction module, and a defogging restoration image module.
The monitoring image preprocessing module is used to re-express the dark primary color in the HSI color space according to the brightness and color density of the observed image, calibrate the dark primary color according to the pixel brightness and color density, and calculate the transmittance of the image from the calibrated dark primary color.
The model construction and correction module is used to construct a luminous flux proportion calculation model, introduce the luminous flux proportion, modify the atmospheric scattering imaging model, and calculate the local ambient light using the luminous flux proportion.
The defogging restoration image module is used to restore the original monitoring scene by combining the transmittance and the local ambient light.
the re-expressing of the dark primary color in the HSI color space by the brightness and the color density of the observed image specifically includes: calculating the dark primary color y_i^dark of the monitored foggy-day image by the following formula:

y_i^dark = I_i^dark - S_i^dark·I_i^dark

where I_i^dark and S_i^dark·I_i^dark are respectively the brightness and the color density of the dark primary color y_i^dark, and I_xi^dark and S_xi^dark·I_xi^dark are respectively the light intensity and the color density of the dark primary color of the corresponding original haze-free pixel x_i; the dark primary color x_i^dark of the real scene is:

x_i^dark = I_xi^dark - S_xi^dark·I_xi^dark ≈ 0;
the calibrating the dark primary color according to the pixel brightness and the color density and the calibrating the dark primary color specifically include: calculating the pixel y of the foggy day image y at the position i by adopting the following formulaiBrightness I ofi
Figure BDA00034261195400001012
Wherein the content of the first and second substances,
Figure BDA00034261195400001013
pixel y respectively representing foggy day image y at position iiThe red, green and blue channel intensity values of (2) are definediIs pixel yiThe saturation of (c) is calculated as follows:
Figure BDA0003426119540000111
wherein min is minimizedOperation, the product S of saturation and brightnessiIiIs defined as pixel yiFor constructing a dark primary color complementary term;
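The brightness, saturation, and color density defined above are standard HSI quantities and can be sketched in Python as follows; the example channel values are assumptions for illustration only. Note that I_i - S_i·I_i equals min(R, G, B), the per-pixel dark value, which is why the product S_i·I_i can serve as a supplemental term:

```python
def brightness(r, g, b):
    """HSI intensity I_i = (R + G + B) / 3."""
    return (r + g + b) / 3.0

def saturation(r, g, b):
    """HSI saturation S_i = 1 - min(R, G, B) / I_i (0 for a gray pixel)."""
    i = brightness(r, g, b)
    if i == 0:
        return 0.0
    return 1.0 - min(r, g, b) / i

def color_density(r, g, b):
    """Color density S_i * I_i. Since S = 1 - min/I, the difference
    I - S*I equals min(R, G, B), the per-pixel dark value."""
    return saturation(r, g, b) * brightness(r, g, b)

# Hypothetical channel values for one pixel.
r, g, b = 0.9, 0.6, 0.3
i = brightness(r, g, b)        # 0.6
d = color_density(r, g, b)     # 0.3
# I - S*I recovers the minimum channel:
print(abs((i - d) - min(r, g, b)) < 1e-9)  # True
```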
the calibrated dark primary color is calculated using the following formula
Figure BDA0003426119540000112
Figure BDA0003426119540000113
In the above formula, the brightness of the dark primary color pixel is used
Figure BDA0003426119540000114
And color density
Figure BDA0003426119540000115
A dark primary color supplemental term is constructed,
Figure BDA0003426119540000116
for calibrating the dark primaries; e is the same asiIs a pixel yiLocal background light flux ratio of (1), tiIs pixel yiThe transmittance of (a);
the calculating of the transmittance of the image specifically includes: constructing the following equation for the transmittance:

Figure BDA0003426119540000117

and solving it to obtain the transmittance:

Figure BDA0003426119540000118

where the intermediate parameters are

Figure BDA0003426119540000119

and

Figure BDA00034261195400001110

and σ is an adjustment parameter;
the constructing of the light flux ratio calculation model specifically includes: a monitoring scene has two kinds of regions, backlight and surface light; the ambient light intensity of a backlight region is weak, and that of a surface light region is strong. The attenuation direction of the ambient light intensity in a local region of the monitored scene is defined as the light flux direction, i.e., the light flux direction points from the surface light region to the backlight region; on the image, this attenuation direction is the light intensity gradient direction of a local image block. Since the image transmittance and the depth of field d satisfy:

t_i = e^(-βd)

where β is the attenuation coefficient, the azimuth angle

Figure BDA00034261195400001111

is given by:

Figure BDA00034261195400001112

where

Figure BDA00034261195400001113

respectively denote the light intensity variations in the horizontal and vertical directions within the convolution window Ω_i centered on the pixel x_i in the original scene image, and

Figure BDA00034261195400001114

respectively denote the light intensity variations in the horizontal and vertical directions within the convolution window Ω_i centered on the pixel y_i in the monitored image;
according to

Figure BDA0003426119540000121

all pixels in the convolution window Ω_i are divided into surface light and backlight, and the proportion of surface light pixels in the total number of pixels in Ω_i is defined as the light flux ratio ε_i:

ε_i = max({sum(N_k)}) / N

where N is the total number of pixels in the convolution window Ω_i and max({sum(N_k)}) is the number of surface light pixels;
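The light flux ratio ε_i, the fraction of surface-light pixels in one convolution window, can be sketched as follows. The surface-light/backlight division is passed in as a predicate; in the patent this division comes from the gradient-based azimuth criterion above, so the threshold rule used in the example is purely a hypothetical stand-in:

```python
def light_flux_ratio(window, is_surface_light):
    """epsilon_i = (number of surface-light pixels) / N for one
    convolution window Omega_i (given as a flat list of pixel values).

    `is_surface_light` is a predicate deciding the surface-light /
    backlight division; the patent derives this division from the
    light-intensity gradient direction, while here a caller-supplied
    rule stands in for it (an assumption of this sketch).
    """
    n = len(window)
    surface = sum(1 for p in window if is_surface_light(p))
    return surface / n

# Hypothetical 3x3 window of brightness values, classified by a
# simple threshold purely for illustration.
window = [0.9, 0.8, 0.7,
          0.6, 0.5, 0.2,
          0.1, 0.1, 0.1]
eps = light_flux_ratio(window, lambda p: p >= 0.5)
print(eps)  # 5 of 9 pixels are surface light
```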
the method for modifying the atmospheric scattering imaging model by introducing the light flux ratio specifically includes: the atmospheric scattering imaging model is:

y_i = t_i·x_i + (1 - t_i)·A

where t_i is the transmittance of the foggy-day pixel y_i, A is the overall ambient light of the monitored scene, and y_i is the pixel value of the foggy-day image y at position i;

the atmospheric scattering imaging model is modified using the following formula:

y_i = A·ε_i·t_i·x_i + (1 - t_i)·A
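The modified atmospheric scattering imaging model can be written directly as code; the sketch below treats a single pixel in a single channel with scalar ambient light, and shows that with ε_i = 1 and A = 1 it reduces to the classical model:

```python
def hazy_pixel(x, t, A, eps):
    """Modified atmospheric scattering model for one pixel/channel:
    y_i = A * eps_i * t_i * x_i + (1 - t_i) * A.
    With eps_i = 1 and A = 1 it reduces to the classical model
    y_i = t_i * x_i + (1 - t_i) * A.
    """
    return A * eps * t * x + (1.0 - t) * A

# With eps = 1 and A = 1 the classical and modified models agree.
x, t, A = 0.8, 0.5, 1.0
classical = t * x + (1 - t) * A            # 0.9
modified = hazy_pixel(x, t, A, eps=1.0)    # 0.9
print(abs(classical - modified) < 1e-9)    # True
```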
the calculating of the local ambient light by using the light flux ratio specifically includes: in the monitored image, the overall ambient light [A_R, A_G, A_B] is obtained by averaging a certain proportion of the pixels with the minimum image transmittance:

Figure BDA0003426119540000123

where A_R, A_G, and A_B are respectively the gray values of the red, green, and blue channels of the ambient light; if

Figure BDA0003426119540000124

the light flux ratio of the pixel is set to ε_i = 1;
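Estimating the overall ambient light [A_R, A_G, A_B] by averaging the lowest-transmittance pixels can be sketched as follows; the patent only says "a certain proportion", so the proportion parameter and its default value are assumptions of this illustration:

```python
def ambient_light(pixels, transmittance, fraction=0.001):
    """Estimate [A_R, A_G, A_B] by averaging, per channel, the
    pixels whose estimated transmittance is smallest.

    pixels: list of (R, G, B) tuples; transmittance: parallel list
    of t_i values; fraction: proportion of lowest-t pixels to use
    (the patent says "a certain proportion"; the default here is an
    assumption of this sketch).
    """
    k = max(1, int(len(pixels) * fraction))
    # Indices of the k pixels with the smallest transmittance.
    order = sorted(range(len(pixels)), key=lambda i: transmittance[i])[:k]
    return tuple(sum(pixels[i][c] for i in order) / k for c in range(3))

# Three hypothetical pixels; the second is the haziest (lowest t).
pixels = [(0.2, 0.3, 0.4), (0.9, 0.8, 0.7), (0.5, 0.5, 0.5)]
t = [0.9, 0.1, 0.5]
A = ambient_light(pixels, t, fraction=0.34)  # k = 1 -> haziest pixel
print(A)  # (0.9, 0.8, 0.7)
```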
The restoring of the original monitoring scene by combining the transmittance and the local ambient light specifically includes: substituting the transmittance t_i, the light flux ratio ε_i, and the overall ambient light intensity [A_R, A_G, A_B] into the modified atmospheric scattering imaging model and restoring the original monitoring scene:

x_i^c = (y_i^c - (1 - t_i)·A^c) / (A^c·ε_i·t_i)

where x_i^c is the intensity of the haze-free pixel in color channel c, A^c is the intensity of the ambient light in color channel c, y_i^c is the intensity of the foggy-day pixel in color channel c, c ∈ {R, G, B}, and R, G, B respectively denote the red, green, and blue color channels.
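Restoring a pixel by inverting the modified model can be sketched as follows; the lower clamp on t_i is a common numerical safeguard and is an assumption of this sketch, not part of the described method:

```python
def restore_pixel(y, t, A, eps, t_min=0.1):
    """Invert the modified model y = A*eps*t*x + (1 - t)*A per
    channel: x = (y - (1 - t)*A) / (A*eps*t). The transmittance t
    is clamped below by t_min to avoid division blow-up (a common
    safeguard; the clamp value is an assumption of this sketch).
    """
    t = max(t, t_min)
    return tuple((yc - (1.0 - t) * Ac) / (Ac * eps * t)
                 for yc, Ac in zip(y, A))

# Round trip with hypothetical values: haze a known pixel via the
# modified forward model, then restore it.
A = (0.8, 0.8, 0.9)
x = (0.4, 0.6, 0.2)
t, eps = 0.5, 0.7
y = tuple(Ac * eps * t * xc + (1 - t) * Ac for xc, Ac in zip(x, A))
x_hat = restore_pixel(y, t, A, eps)
print(all(abs(a - b) < 1e-9 for a, b in zip(x, x_hat)))  # True
```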
When calculating parameters such as

Figure BDA0003426119540000128

the device completes the operation by a parallel computing method, and specifically stores the light flux ratios and the ambient light positions of the camera at the different monitoring angles.
As described above, the low-light scene intelligent highway monitoring fast defogging device according to the embodiment of the present application can be implemented in various terminal devices, such as a server of a distributed computing system. In one example, the low-light scene intelligent road monitoring fast defogging device according to the embodiment of the application can be integrated into the terminal device as a software module and/or a hardware module. For example, the means may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the apparatus may also be one of many hardware modules of the terminal device.
Alternatively, in another example, the apparatus and the terminal device may be separate terminal devices, and the apparatus may be connected to the terminal device through a wired and/or wireless network and transmit the interaction information according to an agreed data format.
The present application also provides an electronic device 10 comprising:
one or more processors 11 and a memory 12. The processor 11 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer readable storage medium and executed by the processor 11 to implement the low-light scene intelligent highway monitoring fast defogging method and/or other desired functions of the various embodiments of the present application described above.
In one example, the electronic device 10 may also include an input device 13 and an output device 14, which may be interconnected by a bus system and/or other form of connection mechanism.
For example, the input device 13 may be a keyboard, a mouse, or the like.
The output device 14 can output various information to the outside, including the result of monitoring the fast defogging on the smart road in the low illumination scene. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices.
According to another aspect of the present application, there is also provided a computer-readable storage medium having stored thereon computer program instructions operable, when executed by a computing device, to perform the low-illumination-scene intelligent highway monitoring rapid defogging method described above.
The invention provides a low-illumination-scene intelligent highway monitoring rapid defogging method and device and an electronic device, and there are many methods and ways to implement the technical solution; the above is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements and refinements without departing from the principle of the invention, and such improvements and refinements should also be regarded as falling within the protection scope of the invention. All components not specified in this embodiment can be implemented by the prior art.

Claims (10)

1. A low-illumination-scene intelligent highway monitoring rapid defogging method, characterized by comprising the following steps:
Step 1: re-expressing the dark primary color in the HSI color space by the brightness and the color density of the observed image;
Step 2: calibrating the dark primary color according to the pixel brightness and the color density;
Step 3: calculating the transmittance of the image according to the calibrated dark primary color;
Step 4: constructing a light flux ratio calculation model;
Step 5: introducing the light flux ratio and modifying the atmospheric scattering imaging model;
Step 6: calculating the local ambient light by using the light flux ratio;
Step 7: restoring the original monitoring scene by combining the transmittance and the local ambient light.
2. The method of claim 1, wherein step 1 comprises: calculating the dark primary color y_i^dark of the monitored foggy-day image by the following formula:

y_i^dark = I_i^dark - S_i^dark·I_i^dark

where I_i^dark and S_i^dark·I_i^dark are respectively the brightness and the color density of the dark primary color y_i^dark, and I_xi^dark and S_xi^dark·I_xi^dark are respectively the light intensity and the color density of the dark primary color of the corresponding original haze-free pixel x_i; the dark primary color x_i^dark of the real scene is:

x_i^dark = I_xi^dark - S_xi^dark·I_xi^dark ≈ 0.
3. The method of claim 2, wherein in step 2, the brightness I_i of the pixel y_i of the foggy-day image y at position i is calculated by the following formula:

I_i = (y_i^R + y_i^G + y_i^B) / 3

where y_i^R, y_i^G, and y_i^B respectively denote the red, green, and blue channel intensity values of the pixel y_i of the foggy-day image y at position i. S_i is defined as the saturation of the pixel y_i and is calculated as follows:

S_i = 1 - min(y_i^R, y_i^G, y_i^B) / I_i

where min is the minimum-value operation. The product S_i·I_i of the saturation and the brightness is defined as the color density of the pixel y_i and is used for constructing the dark primary color supplemental term.
4. The method of claim 3, wherein step 2 comprises: calculating the calibrated dark primary color using the following formula:

Figure FDA0003426119530000021

In the above formula, the brightness I_i^dark and the color density S_i^dark·I_i^dark of the dark primary color pixel are used to construct the dark primary color supplemental term

Figure FDA0003426119530000024

for calibrating the dark primary color; ε_i is the local background light flux ratio of the pixel y_i, and t_i is the transmittance of the pixel y_i.
5. The method of claim 4, wherein step 3 comprises: constructing the following equation for the transmittance:

Figure FDA0003426119530000025

and solving it to obtain the transmittance:

Figure FDA0003426119530000026

where the intermediate parameters are

Figure FDA0003426119530000027

and

Figure FDA0003426119530000028

and σ is an adjustment parameter.
6. The method of claim 5, wherein step 4 comprises: a monitoring scene has two kinds of regions, backlight and surface light; the ambient light intensity of a backlight region is weak, and that of a surface light region is strong. The attenuation direction of the ambient light intensity in a local region of the monitored scene is defined as the light flux direction, i.e., the light flux direction points from the surface light region to the backlight region; on the image, this attenuation direction is the light intensity gradient direction of a local image block. Since the image transmittance and the depth of field d satisfy:

t_i = e^(-βd)

where β is the attenuation coefficient, the azimuth angle

Figure FDA0003426119530000029

is given by:

Figure FDA00034261195300000210

where

Figure FDA00034261195300000211

respectively denote the light intensity variations in the horizontal and vertical directions within the convolution window Ω_i centered on the pixel x_i in the original scene image, and

Figure FDA00034261195300000212

respectively denote the light intensity variations in the horizontal and vertical directions within the convolution window Ω_i centered on the pixel y_i in the monitored image;
according to

Figure FDA00034261195300000213

all pixels in the convolution window Ω_i are divided into surface light and backlight, and the proportion of surface light pixels in the total number of pixels in Ω_i is defined as the light flux ratio ε_i:

ε_i = max({sum(N_k)}) / N

where N is the total number of pixels in the convolution window Ω_i and max({sum(N_k)}) is the number of surface light pixels.
7. The method of claim 6, wherein step 5 comprises: the atmospheric scattering imaging model is:

y_i = t_i·x_i + (1 - t_i)·A

where t_i is the transmittance of the foggy-day pixel y_i, A is the overall ambient light of the monitored scene, and y_i is the pixel value of the foggy-day image y at position i; the atmospheric scattering imaging model is modified using the following formula:

y_i = A·ε_i·t_i·x_i + (1 - t_i)·A.
8. the method of claim 7, wherein step 6 comprises: in the monitored image, the ambient light [ A ] is obtained by averaging the pixels with the minimum image transmittance in a certain proportionR,AG,AB]And then:
Figure FDA0003426119530000031
wherein A isR,AG,ABThe gray values of the red, green and blue channels of the ambient light respectively; if it is not
Figure FDA0003426119530000032
The luminous flux proportion of the pixel is determined as ei=1;
step 7 comprises: substituting the transmittance t_i, the light flux ratio ε_i, and the overall ambient light intensity [A_R, A_G, A_B] into the modified atmospheric scattering imaging model and restoring the original monitoring scene:

x_i^c = (y_i^c - (1 - t_i)·A^c) / (A^c·ε_i·t_i)

where x_i^c is the intensity of the haze-free pixel in color channel c, A^c is the intensity of the ambient light in color channel c, y_i^c is the intensity of the foggy-day pixel in color channel c, c ∈ {R, G, B}, and R, G, B respectively denote the red, green, and blue color channels.
9. A low-illumination-scene intelligent highway monitoring rapid defogging device, characterized by comprising a monitoring image preprocessing module, a model construction and correction module, and a defogging restoration image module;
the monitoring image preprocessing module is used for re-expressing the dark primary color in the HSI color space according to the brightness and the color density of the observed image, calibrating the dark primary color according to the pixel brightness and the color density, and calculating the transmittance of the image according to the calibrated dark primary color;
the model construction and correction module is used for constructing a light flux ratio calculation model, introducing the light flux ratio to modify the atmospheric scattering imaging model, and calculating the local ambient light by using the light flux ratio;
the defogging restoration image module is used for restoring an original monitoring scene by combining the transmittance and the local ambient light;
the re-expressing of the dark primary color in the HSI color space by the brightness and the color density of the observed image specifically includes: calculating the dark primary color y_i^dark of the monitored foggy-day image by the following formula:

y_i^dark = I_i^dark - S_i^dark·I_i^dark

where I_i^dark and S_i^dark·I_i^dark are respectively the brightness and the color density of the dark primary color y_i^dark, and I_xi^dark and S_xi^dark·I_xi^dark are respectively the light intensity and the color density of the dark primary color of the corresponding original haze-free pixel x_i; the dark primary color x_i^dark of the real scene is:

x_i^dark = I_xi^dark - S_xi^dark·I_xi^dark ≈ 0;
the calibrating of the dark primary color according to the pixel brightness and the color density specifically includes: calculating the brightness I_i of the pixel y_i of the foggy-day image y at position i by the following formula:

I_i = (y_i^R + y_i^G + y_i^B) / 3

where y_i^R, y_i^G, and y_i^B respectively denote the red, green, and blue channel intensity values of the pixel y_i of the foggy-day image y at position i. S_i is defined as the saturation of the pixel y_i and is calculated as follows:

S_i = 1 - min(y_i^R, y_i^G, y_i^B) / I_i

where min is the minimum-value operation. The product S_i·I_i of the saturation and the brightness is defined as the color density of the pixel y_i and is used for constructing the dark primary color supplemental term; the calculation of

Figure FDA00034261195300000414

likewise refers to the above two formulas;
the calibrated dark primary color is calculated using the following formula:

Figure FDA00034261195300000416

In the above formula, the brightness I_i^dark and the color density S_i^dark·I_i^dark of the dark primary color pixel are used to construct the dark primary color supplemental term

Figure FDA00034261195300000419

for calibrating the dark primary color; ε_i is the local background light flux ratio of the pixel y_i, and t_i is the transmittance of the pixel y_i;
the calculating of the transmittance of the image specifically includes: constructing the following equation for the transmittance:

Figure FDA00034261195300000420

and solving it to obtain the transmittance:

Figure FDA0003426119530000051

where the intermediate parameters are

Figure FDA0003426119530000052

and

Figure FDA0003426119530000053

and σ is an adjustment parameter;
the constructing of the light flux ratio calculation model specifically includes: a monitoring scene has two kinds of regions, backlight and surface light; the ambient light intensity of a backlight region is weak, and that of a surface light region is strong. The attenuation direction of the ambient light intensity in a local region of the monitored scene is defined as the light flux direction, i.e., the light flux direction points from the surface light region to the backlight region; on the image, this attenuation direction is the light intensity gradient direction of a local image block. Since the image transmittance and the depth of field d satisfy:

t_i = e^(-βd)

where β is the attenuation coefficient, the azimuth angle

Figure FDA0003426119530000054

is given by:

Figure FDA0003426119530000055

where

Figure FDA0003426119530000056

respectively denote the light intensity variations in the horizontal and vertical directions within the convolution window Ω_i centered on the pixel x_i in the original scene image, and

Figure FDA0003426119530000057

respectively denote the light intensity variations in the horizontal and vertical directions within the convolution window Ω_i centered on the pixel y_i in the monitored image;
according to

Figure FDA0003426119530000058

all pixels in the convolution window Ω_i are divided into surface light and backlight, and the proportion of surface light pixels in the total number of pixels in Ω_i is defined as the light flux ratio ε_i:

ε_i = max({sum(N_k)}) / N

where N is the total number of pixels in the convolution window Ω_i and max({sum(N_k)}) is the number of surface light pixels;
the method for modifying the atmospheric scattering imaging model by introducing the light flux ratio specifically includes: the atmospheric scattering imaging model is:

y_i = t_i·x_i + (1 - t_i)·A

where t_i is the transmittance of the foggy-day pixel y_i, A is the overall ambient light of the monitored scene, and y_i is the pixel value of the foggy-day image y at position i; the atmospheric scattering imaging model is modified using the following formula:

y_i = A·ε_i·t_i·x_i + (1 - t_i)·A
the calculating of the local ambient light by using the light flux ratio specifically includes: in the monitored image, the overall ambient light [A_R, A_G, A_B] is obtained by averaging a certain proportion of the pixels with the minimum image transmittance:

Figure FDA00034261195300000510

where A_R, A_G, and A_B are respectively the gray values of the red, green, and blue channels of the ambient light; if

Figure FDA0003426119530000061

the light flux ratio of the pixel is set to ε_i = 1;
the restoring of the original monitoring scene by combining the transmittance and the local ambient light specifically includes: substituting the transmittance t_i, the light flux ratio ε_i, and the overall ambient light intensity [A_R, A_G, A_B] into the modified atmospheric scattering imaging model and restoring the original monitoring scene:

x_i^c = (y_i^c - (1 - t_i)·A^c) / (A^c·ε_i·t_i)

where x_i^c is the intensity of the haze-free pixel in color channel c, A^c is the intensity of the ambient light in color channel c, y_i^c is the intensity of the foggy-day pixel in color channel c, c ∈ {R, G, B}, and R, G, B respectively denote the red, green, and blue color channels.
10. An electronic device, comprising: a processor and a memory, the memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the low-illumination-scene intelligent highway monitoring rapid defogging method recited in any one of claims 1 to 8.
CN202111578311.2A 2021-12-22 2021-12-22 Low-illumination-scene intelligent highway monitoring rapid defogging method and device and electronic equipment Withdrawn CN114529460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111578311.2A CN114529460A (en) 2021-12-22 2021-12-22 Low-illumination-scene intelligent highway monitoring rapid defogging method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN114529460A true CN114529460A (en) 2022-05-24

Family

ID=81618882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111578311.2A Withdrawn CN114529460A (en) 2021-12-22 2021-12-22 Low-illumination-scene intelligent highway monitoring rapid defogging method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114529460A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109513A (en) * 2023-02-27 2023-05-12 南京林业大学 Image defogging method based on local ambient light projection constant priori



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220524