CN111383242A - Image fog penetration processing method and device - Google Patents
- Publication number: CN111383242A
- Application number: CN202010472423.9A
- Authority: CN (China)
- Prior art keywords: image, video frame, image block, component, infrared light
- Legal status: Granted
Classifications
- G06T 7/514 — Image analysis; depth or shape recovery from specularities
- G06T 5/70 — Image enhancement or restoration; denoising, smoothing
- G06T 7/11 — Image analysis; region-based segmentation
- G06T 7/136 — Image analysis; segmentation or edge detection involving thresholding
- G06T 7/536 — Image analysis; depth or shape recovery from perspective effects, e.g. by using vanishing points
- G06T 7/90 — Image analysis; determination of colour characteristics
Abstract
The application provides an image fog-penetration processing method and device. The method comprises the following steps: acquiring a first image in an original format and a second image in YUV format obtained by converting the first image; generating a third image according to the infrared light component pixel points in the first image, and selecting image blocks whose infrared light component variance in the third image is smaller than a set threshold; acquiring image blocks at corresponding positions in the second image according to the positions of the selected image blocks in the third image, and determining the atmospheric light intensity according to the Y components of the pixel points in the image blocks at the corresponding positions in the second image; determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position; and performing fog-penetration processing on the corresponding image blocks in the second image according to the reflected light transmittance of each image block and the atmospheric light intensity.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image fog-penetration processing method and apparatus.
Background
Visible light is absorbed or scattered by particles such as tiny water droplets in the air, so the images captured by an image or video acquisition device are unclear, which makes subsequent image processing and application scenarios difficult.
Therefore, the image needs to be subjected to fog-penetration processing so that the image becomes clear.
Disclosure of Invention
The embodiment of the application provides an image fog penetration processing method and device, which are used for realizing fog penetration processing on images.
In a first aspect, an image fog-penetrating processing method is provided, including:
acquiring a first image in an original format and a second image in a YUV format obtained by converting the first image; the first image comprises visible light component pixel points and infrared light component pixel points;
generating a third image according to the infrared light component pixel points in the first image, and selecting image blocks of which the infrared light component variance is smaller than a set threshold in the third image;
acquiring image blocks at corresponding positions in the second image according to the positions of the selected image blocks in the third image, and determining the atmospheric light intensity according to Y components of pixel points in the image blocks at the corresponding positions in the second image;
determining the Y component variance of each image block in the second image and the infrared light component variance of each image block in the third image, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
and carrying out fog penetration treatment on the corresponding image block in the second image according to the reflection light transmittance and the atmospheric light intensity of each image block.
Optionally, determining the atmospheric light intensity according to the Y component of the pixel point in the image block at the corresponding position in the second image, includes:
selecting pixel points with the color closest to white in the image blocks at the corresponding positions in the second image;
and determining the Y component of the selected pixel point as the atmospheric light intensity.
Optionally, after determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position, the method further includes:
the following operations are performed for each image block:
carrying out fog penetration pretreatment on the image block according to the transmittance of reflected light of the image block and the intensity of atmospheric light;
if the value of the Y component of the preprocessed image block is within the set interval, determining that the value of the reflected light transmittance of the image block is reasonable; otherwise, determining that the value of the reflected light transmittance of the image block is unreasonable, and increasing the reflected light transmittance of the image block until the value of the Y component of the image block falls within the set interval after fog-penetration preprocessing is performed on the image block according to the increased reflected light transmittance and the atmospheric light intensity.
Optionally, after determining the transmittance of the reflected light of the image block at the corresponding position, the method further includes:
and performing Gaussian weighted filtering on the reflection light transmittance of each image block.
In a second aspect, an image fog-penetrating processing method is provided, including:
acquiring a first video frame in a video frame sequence in an original format and a second video frame in a YUV format obtained by converting the first video frame, wherein the first video frame comprises visible light component pixel points and infrared light component pixel points;
if the first video frame is not a key frame, carrying out fog-penetrating processing on a corresponding image block in the second video frame according to the atmospheric light intensity of the previous key frame and the reflection light transmittance of each image block, otherwise, executing a key frame fog-penetrating processing process, wherein the key frame fog-penetrating processing process comprises the following operations:
generating a third video frame according to the infrared light component pixel points in the first video frame, and selecting image blocks of which the infrared light component variance is smaller than a set threshold in the third video frame;
acquiring an image block at a corresponding position in the second video frame according to the position of the selected image block in the third video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second video frame;
determining the Y component variance of each image block in the second video frame and the infrared light component variance of each image block in the third video frame, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
and carrying out fog penetration treatment on the corresponding image blocks in the second video frame according to the reflection light transmittance and the atmospheric light intensity of each image block.
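A minimal sketch of this per-frame dispatch follows (all names here are illustrative, not from the patent); non-key frames reuse the atmospheric light intensity A and the per-block transmittance map computed on the previous key frame, while key frames recompute both via the estimation steps listed above, passed in here as `keyframe_estimate_fn` and `defog_fn`:

```python
# Hypothetical dispatch sketch; keyframe_estimate_fn and defog_fn stand for
# the key-frame estimation and fog-penetration steps described in the text.
def process_frame(raw_frame, yuv_frame, is_key_frame, state,
                  keyframe_estimate_fn, defog_fn):
    if not is_key_frame and state is not None:
        # Non-key frame: reuse the previous key frame's A and t map.
        return defog_fn(yuv_frame, state["t_map"], state["A"]), state
    # Key frame: recompute atmospheric light and per-block transmittances.
    A, t_map = keyframe_estimate_fn(raw_frame, yuv_frame)
    return defog_fn(yuv_frame, t_map, A), {"A": A, "t_map": t_map}
```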
Optionally, a partial region in the first video frame is detected as a target object moving region, and fog penetration processing is performed on a corresponding image block in the second video frame according to the atmospheric light intensity of the previous key frame and the reflection light transmittance of each image block, including:
and carrying out fog penetration treatment on the corresponding image block in the target object moving area in the second video frame according to the atmospheric light intensity of the previous key frame and the reflection light transmittance of the image block in the area corresponding to the target object moving area.
Optionally, if a partial region in the first video frame is detected as a target object moving region, and the first video frame is a key frame, the key frame fog-penetration processing process includes:
converting the image in the target object moving area to obtain a fourth video frame in a YUV format;
generating a fifth video frame according to infrared light component pixel points in the image in the target object moving area, and selecting an image block of which the infrared light component variance is smaller than a set threshold in the fifth video frame;
acquiring an image block at a corresponding position in a fourth video frame according to the position of the selected image block in the fifth video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the fourth video frame;
determining the Y component variance of each image block in the fourth video frame and the infrared light component variance of each image block in the fifth video frame, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
carrying out fog penetration treatment on corresponding image blocks in the fourth video frame according to the reflection light transmittance and the atmospheric light intensity of each image block;
and replacing the image in the target object moving area in the second video frame with the fourth video frame after fog penetration processing to obtain the second video frame after fog penetration processing.
In a third aspect, an image fog-penetrating processing method is provided, including:
acquiring a first video frame in a video frame sequence in an original format, wherein the first video frame comprises visible light component pixel points and infrared light component pixel points, part of regions in the first video frame are detected as target object moving regions, converting the first video frame to obtain a second video frame in a YUV format, and executing first processing;
a first process comprising:
acquiring an image in a target object moving area of a first video frame to obtain a first area image, and acquiring an image in a target object moving area of a second video frame to obtain a second area image;
generating a third area image according to the infrared light component pixel points in the first area image, and selecting image blocks of which the infrared light component variance is smaller than a set threshold value in the third area image;
acquiring an image block at a corresponding position in the second area image according to the position of the selected image block in the third area image, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second area image;
determining the Y component variance of each image block in the second area image and the infrared light component variance of each image block in the third area image, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
carrying out fog penetration treatment on corresponding image blocks in the second area image according to the reflection light transmittance and the atmospheric light intensity of each image block;
and replacing the image in the target object moving area in the second video frame with the second area image after fog penetration processing to obtain the second video frame after fog penetration processing.
Optionally, after the first video frame is acquired, the method further includes: judging whether the area ratio of a target object moving region in the first video frame is smaller than a set threshold value or not;
in the method, if the area ratio of the target object moving region in the first video frame is smaller than the set threshold, executing the first processing; if the area ratio of the target object moving region in the first video frame is larger than or equal to the set threshold, executing the second processing;
the second process includes:
generating a third video frame according to the infrared light component pixel points in the first video frame, and selecting image blocks of which the infrared light component variance is smaller than a set threshold in the third video frame;
acquiring an image block at a corresponding position in the second video frame according to the position of the selected image block in the third video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second video frame;
determining the Y component variance of each image block in the second video frame and the infrared light component variance of each image block in the third video frame, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
and carrying out fog penetration treatment on the corresponding image blocks in the second video frame according to the reflection light transmittance and the atmospheric light intensity of each image block.
In a fourth aspect, an image fog-penetrating processing device is provided, which includes:
the image acquisition module is used for acquiring a first image in an original format and a second image in a YUV format obtained by converting the first image; the first image comprises visible light component pixel points and infrared light component pixel points;
the atmospheric light intensity determining module is used for generating a third image according to the infrared light component pixel points in the first image and selecting image blocks of which the infrared light component variance is smaller than a set threshold in the third image; acquiring image blocks at corresponding positions in the second image according to the positions of the selected image blocks in the third image, and determining the atmospheric light intensity according to Y components of pixel points in the image blocks at the corresponding positions in the second image;
the reflected light transmittance determining module is used for determining the Y component variance of each image block in the second image and the infrared light component variance of each image block in the third image, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
and the fog penetration processing module is used for carrying out fog penetration processing on the corresponding image blocks in the second image according to the reflection light transmittance and the atmospheric light intensity of each image block.
Optionally, the atmospheric light intensity determining module is configured to:
selecting pixel points with the color closest to white in the image blocks at the corresponding positions in the second image;
and determining the Y component of the selected pixel point as the atmospheric light intensity.
Optionally, the apparatus further includes a reflected light transmittance verification module, configured to:
after the reflected light transmittance of the image block at the corresponding position is determined according to the Y component variance and the infrared light component variance of the image block at the corresponding position, the following operations are respectively executed for each image block:
carrying out fog penetration pretreatment on the image block according to the reflection light transmittance and the atmospheric light intensity of the image block;
if the value of the Y component of the preprocessed image block is within the set interval, determining that the value of the reflected light transmittance of the image block is reasonable; otherwise, determining that the value of the reflected light transmittance of the image block is unreasonable, and increasing the reflected light transmittance of the image block until the value of the Y component of the image block falls within the set interval after fog-penetration preprocessing is performed on the image block according to the increased reflected light transmittance and the atmospheric light intensity.
Optionally, the apparatus further includes a filtering module, configured to perform gaussian weighted filtering on the transmittance of the reflected light of each image block.
In a fifth aspect, an image fog-penetrating processing device is provided, which includes:
the video frame acquisition module is used for acquiring a first video frame in a video frame sequence in an original format and a second video frame in a YUV format obtained by converting the first video frame, wherein the first video frame comprises visible light component pixel points and infrared light component pixel points;
the key frame judging module is used for judging whether the first video frame is a key frame or not, if not, carrying out fog penetrating processing on the corresponding image block in the second video frame according to the atmospheric light intensity of the previous key frame and the reflection light transmittance of each image block, otherwise, executing a key frame fog penetrating processing process, and the key frame fog penetrating processing process comprises the following modules:
the atmospheric light intensity determining module is used for generating a third video frame according to the infrared light component pixel points in the first video frame and selecting image blocks of which the infrared light component variance is smaller than a set threshold value in the third video frame; acquiring an image block at a corresponding position in the second video frame according to the position of the selected image block in the third video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second video frame;
the reflected light transmittance determining module is used for determining the Y component variance of each image block in the second video frame and the infrared light component variance of each image block in the third video frame, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
and the fog penetration processing module is used for carrying out fog penetration processing on the corresponding image blocks in the second video frame according to the reflection light transmittance and the atmospheric light intensity of each image block.
Optionally, a partial region in the first video frame is detected as a target object moving region, and the key frame determination module is configured to perform fog penetration processing on a corresponding image block in the target object moving region in the second video frame according to the atmospheric light intensity of a previous key frame and the reflection light transmittance of the image block in the region corresponding to the target object moving region.
Optionally, a partial region in the first video frame is detected as a target object moving region, and the first video frame is a key frame;
the key frame acquisition module is also used for converting the image in the target object moving area to obtain a fourth video frame in a YUV format;
the atmospheric light intensity determining module is used for generating a fifth video frame according to infrared light component pixel points in the image in the target object moving area and selecting an image block of which the infrared light component variance is smaller than a set threshold value in the fifth video frame; acquiring an image block at a corresponding position in a fourth video frame according to the position of the selected image block in the fifth video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the fourth video frame;
and the reflected light transmittance determining module is used for determining the Y component variance of each image block in the fourth video frame and the infrared light component variance of each image block in the fifth video frame, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position.
The fog penetration processing module is used for carrying out fog penetration processing on the corresponding image blocks in the fourth video frame according to the reflection light transmittance and the atmospheric light intensity of each image block; and replacing the image in the target object moving area in the second video frame with the fourth video frame after fog penetration processing to obtain the second video frame after fog penetration processing.
Optionally, the atmospheric light intensity determining module is configured to select a pixel point with a color closest to white in the image block at the corresponding position in the fourth video frame; and determining the Y component of the selected pixel point as the atmospheric light intensity.
Optionally, the apparatus further includes a reflected light transmittance verification module, configured to:
after the reflected light transmittance of the image block at the corresponding position is determined according to the Y component variance and the infrared light component variance of the image block at the corresponding position, the following operations are respectively executed for each image block:
carrying out fog penetration pretreatment on the image block according to the reflection light transmittance and the atmospheric light intensity of the image block;
if the value of the Y component of the preprocessed image block is within the set interval, determining that the value of the reflected light transmittance of the image block is reasonable; otherwise, determining that the value of the reflected light transmittance of the image block is unreasonable, and increasing the reflected light transmittance of the image block until the value of the Y component of the image block falls within the set interval after fog-penetration preprocessing is performed on the image block according to the increased reflected light transmittance and the atmospheric light intensity.
Optionally, the apparatus further includes a filtering module, configured to perform gaussian weighted filtering on the transmittance of the reflected light of each image block.
In a sixth aspect, an image fog-penetrating processing device is provided, which includes:
the video frame acquisition module is used for acquiring a first video frame in a video frame sequence in an original format, wherein the first video frame comprises visible light component pixels and infrared light component pixels, partial area in the first video frame is detected as a target object moving area, a second video frame in a YUV format is obtained by converting the first video frame, and first processing is executed;
a first process comprising:
the image acquisition module is used for acquiring an image in a target object moving area of the first video frame to obtain a first area image, and acquiring an image in a target object moving area of the second video frame to obtain a second area image;
the atmospheric light intensity determining module is used for generating a third area image according to the infrared light component pixel points in the first area image and selecting image blocks of which the infrared light component variance is smaller than a set threshold value in the third area image; acquiring an image block at a corresponding position in the second area image according to the position of the selected image block in the third area image, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second area image;
the reflected light transmittance determining module is used for determining the Y component variance of each image block in the second area image and the infrared light component variance of each image block in the third area image, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
the fog penetration processing module is used for carrying out fog penetration processing on the corresponding image blocks in the second area image according to the reflection light transmittance and the atmospheric light intensity of each image block; and replacing the image in the target object moving area in the second video frame with the second area image after fog penetration processing to obtain the second video frame after fog penetration processing.
Optionally, the apparatus further includes an area ratio determining module, configured to determine whether an area ratio of a target object moving region in the first video frame is smaller than a set threshold, if so, execute a first process, otherwise, execute a second process;
the second process includes the following modules:
the atmospheric light intensity determining module is also used for generating a third video frame according to the infrared light component pixel points in the first video frame and selecting image blocks of which the infrared light component variance is smaller than a set threshold value in the third video frame; acquiring an image block at a corresponding position in the second video frame according to the position of the selected image block in the third video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second video frame;
the reflected light transmittance determining module is further used for determining the Y component variance of each image block in the second video frame and the infrared light component variance of each image block in the third video frame, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
and the fog penetration processing module is also used for carrying out fog penetration processing on the corresponding image blocks in the second video frame according to the reflection light transmittance and the atmospheric light intensity of each image block.
Optionally, the atmospheric light intensity determining module is configured to:
selecting pixel points with the color closest to white in the image blocks at the corresponding positions in the second area image;
determining the Y component of the selected pixel point as the atmospheric light intensity, or,
selecting pixel points with the color closest to white in the image blocks at the corresponding positions in the second video frame;
and determining the Y component of the selected pixel point as the atmospheric light intensity.
Optionally, the apparatus further includes a reflected light transmittance verification module, configured to:
after the reflected light transmittance of the image block at the corresponding position is determined according to the Y component variance and the infrared light component variance of the image block at the corresponding position, the following operations are respectively executed for each image block:
carrying out fog penetration pretreatment on the image block according to the reflection light transmittance and the atmospheric light intensity of the image block;
if the value of the Y component of the preprocessed image block is within the set interval, determining that the value of the reflected light transmittance of the image block is reasonable; otherwise, determining that the value of the reflected light transmittance of the image block is unreasonable, and increasing the reflected light transmittance of the image block until the value of the Y component of the image block falls within the set interval after fog-penetration preprocessing is performed on the image block according to the increased reflected light transmittance and the atmospheric light intensity.
Optionally, the apparatus further includes a filtering module, configured to perform gaussian weighted filtering on the transmittance of the reflected light of each image block.
In the above embodiments of the application, it is considered that infrared light has a longer wavelength and can penetrate particulate matter such as water droplets and dust in the air. On one hand, the atmospheric light intensity is determined based on the infrared light component in the original image, so the determined atmospheric light intensity can be relatively close to the real atmospheric light intensity, and the image can then be effectively subjected to fog-penetration processing based on it. On the other hand, since the Y component is greatly affected by the fog level, determining the reflected light transmittance of an image block based on the infrared light component and the Y component makes the determined reflected light transmittance closer to the real reflected light transmittance, and considering only the Y component reduces the computational complexity and processing overhead compared with considering all components.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1a illustrates an image in Bayer format acquired by an RGB sensor provided by an embodiment of the present application;
FIG. 1b illustrates an image in RGB-IR format captured by an RGB-IR sensor provided by an embodiment of the present application;
FIG. 2 is a diagram illustrating a system architecture of an image fog-penetration process provided by an embodiment of the present application;
fig. 3 is a flowchart illustrating an image fog-penetration processing method provided by an embodiment of the present application;
fig. 4 is a schematic diagram illustrating, by way of a specific example, the selection of an image block whose infrared light component variance is smaller than a set threshold in the third image;
fig. 5 is a flowchart illustrating an image fog-penetration processing method in a video stream provided by an embodiment of the present application;
fig. 6 is a flowchart illustrating another image fog-penetration processing method in a video stream provided by an embodiment of the present application;
fig. 7 is an architecture diagram schematically illustrating an image fog-penetration processing device provided in an embodiment of the present application;
fig. 8 is an architecture diagram schematically illustrating an image fog-penetration processing device provided in an embodiment of the present application;
fig. 9 is an architecture diagram schematically illustrating an image fog-penetration processing device provided in an embodiment of the present application;
fig. 10 is a hardware diagram schematically illustrating an image fog-penetration processing device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be understood that the terms "first," "second," "third," and the like in the description and in the claims of the present application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application described herein can, for example, be implemented in sequences other than those illustrated or described herein.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
The embodiment of the application provides an image fog-penetrating processing method and device, which can realize image fog-penetrating processing with less processing overhead.
Under weather conditions such as haze, visible light is absorbed or scattered by particulate matter such as tiny water droplets in the air, so the images captured by an image or video acquisition device are unclear. Infrared light has a longer wavelength and, during transmission, can pass around haze particles (such as smoke, dust and fog) and penetrate them. Therefore, the embodiments of the application perform fog-penetration processing on a mixed image of fog-penetrating infrared light and perceivable visible light.
In the embodiments of the application, light of different wavelengths penetrates particles such as water droplets and dust to different degrees (infrared light has a longer wavelength than visible light and penetrates such particles better). Exploiting the ability of infrared light to pass through airborne particles (such as smoke dust and small water droplets), the atmospheric light intensity is determined based on the infrared light component between 600 nanometers (nm) and 1100 nm, so the image can be effectively subjected to fog-penetration processing and the image blurring caused by rain and fog is reduced. Moreover, since the fog level mainly affects the Y component, the reflected light transmittance of an image block is determined based on the infrared light component and the Y component; compared with considering all components, this reduces the computational complexity and processing overhead.
In order to describe the embodiments of the present application in detail, the following explains the noun terms used in the embodiments of the present application.
Visible light: in the electromagnetic spectrum, the visible range has no precise boundaries; electromagnetic waves with wavelengths of about 400 to 760 nanometers (nm) can be perceived by the human eye.
Infrared light (IR): an electromagnetic wave with a frequency between those of microwaves and visible light and a wavelength of 0.75 to 1000 micrometers (μm). Light with a wavelength of 0.75 to 1.5 μm is near-infrared, light with a wavelength of 1.5 to 6.0 μm is mid-infrared, and light with a wavelength of 6.0 to 1000 μm is far-infrared.
Image sensor (sensor): a device that converts light energy into electrical energy, turning the optical image on its photosensitive surface into an electrical signal proportional to it. A sensor samples and quantizes light at individual photosites, each of which can sense only one color. Image sensors include RGB sensors and RGB-IR sensors. An RGB-IR sensor can capture a mixed image of visible and infrared light, and the color channels of its pixel points comprise four components: R, G, B and (infrared) IR.
RGB: a color standard in which visible white light is composed of red (R), green (G) and blue (B); filters are used to pass red, green and blue light respectively.
YUV: a color encoding format that describes the color and saturation of an image and specifies the color of a pixel; Y represents luminance, and U and V represent chrominance.
Bayer: a color format, see FIG. 1a, comprising the R, G, B components of visible light. Each square in the figure represents a pixel point, and the letter in the square denotes the corresponding visible-light color channel: R the red channel, G the green channel, and B the blue channel. Since the human eye is more sensitive to green, the G component is weighted more heavily, i.e., there are twice as many G samples as R samples and as B samples.
RGB-IR: a mixed color format of visible and infrared light, see FIG. 1b. Unlike the Bayer format, the RGB-IR format contains four components: R, G, B and IR. Each square in the figure represents a pixel point; the letters IR denote the infrared color channel, whose values lie in the interval [0, 255], and the letters R, G, B denote the red, green and blue channels respectively. There are twice as many G samples as IR samples, and four times as many G samples as R samples and as B samples. The RGB-IR format is also commonly referred to as the raw format.
Fog-penetration algorithm: a method that estimates the atmospheric light intensity and reflected light transmittance of a captured image and defogs the image accordingly. Applying a fog-penetration algorithm reduces the influence of particles such as tiny water droplets in the air on the image, making the image clear.
Some embodiments of the present application may be applied to a system architecture as shown in fig. 2.
Fig. 2 schematically shows a system architecture diagram of the image fog-penetration processing provided by the embodiment of the present application. As shown, it includes: the terminal 201 and the server 202 are connected through a network 203.
The terminal 201 has an image or video capturing function, such as an outdoor camera in a security system. The terminal is provided with an RGB-IR image sensing element for collecting a mixed image of visible light and infrared light. The description of the R, G, B, IR component in the blended image is shown in FIG. 1b and will not be repeated here.
The terminal 201 may transmit the captured image or video to the server 202, and the server 202 may perform fog-penetrating processing on the image. The server 202 may be a common web server, enterprise-class server, or the like. The server 202 may be a standalone device, a server cluster, or a cloud device.
The network 203 may be the Internet, a local area network, etc., and is used for data communication between the terminal 201 and the server 202.
In other embodiments of the present application, the terminal has a function of performing fog-penetration processing on an image in addition to an image or video capturing function.
Fig. 3 is a flowchart illustrating an image fog-penetration processing method provided by an embodiment of the present application, where the flowchart may be executed by an image fog-penetration processing apparatus. The image fog-penetrating processing device can be deployed in a server and can also be deployed in a terminal with an image or video acquisition function. The image fog penetration processing device can be realized by software or hardware, or by a combination of software and hardware.
As shown, the process includes the following steps:
s301: acquiring a first image in an original format and a second image in a YUV format obtained by converting the first image; the first image comprises visible light component pixel points and infrared light component pixel points.
In the step, the image fog penetration processing device receives a first image in an original format collected by a terminal.
According to the embodiment of the application, the first image in the original format can be converted by an interpolation algorithm to obtain the second image in YUV format. The interpolation algorithm may be one of bilinear interpolation, nearest-neighbor interpolation, and bicubic interpolation. The R, G, B components in the first image in the original format are processed by the interpolation algorithm to obtain an image in RGB format, and the RGB image is then converted into the second image in YUV format. The conversion formula is:

Y = kr·R + kg·G + kb·B
U = ku·(B − Y)
V = kv·(R − Y)

where kr, kg and kb are the weight coefficients of the R, G, B components, ku is the weight coefficient of (B − Y), and kv is the weight coefficient of (R − Y).
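A minimal sketch of this conversion follows, assuming BT.601-style default weights; the patent leaves kr, kg, kb, ku and kv as unspecified coefficients, so the defaults here are illustrative:

```python
import numpy as np

# Sketch of RGB -> YUV conversion; default coefficients are assumed
# (BT.601-like), not taken from the patent.
def rgb_to_yuv(rgb, kr=0.299, kg=0.587, kb=0.114, ku=0.564, kv=0.713):
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = kr * r + kg * g + kb * b   # luminance
    u = ku * (b - y)               # blue-difference chrominance
    v = kv * (r - y)               # red-difference chrominance
    return np.stack([y, u, v], axis=-1)
```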
S302: and generating a third image according to the infrared light component pixel points in the first image, and selecting image blocks of which the infrared light component variance is smaller than a set threshold value in the third image.
In the step, infrared light component pixel points in the first image are extracted, the extracted infrared light component pixel points are recombined to generate a third image according to the arrangement sequence of the infrared light component pixel points in the first image, the third image is segmented, and image blocks of which the infrared light component variance is smaller than a set threshold value in the third image are selected.
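As a sketch of assembling the third image from the infrared light component pixel points, the code below assumes the IR sample occupies one fixed position in every 2×2 cell of the RGB-IR mosaic (actual sensor layouts differ; the offsets are illustrative). The result has 1/4 of the pixels of the first image, consistent with the size relation noted in S303:

```python
import numpy as np

# Extract the IR sub-image ("third image") from a raw RGB-IR mosaic,
# assuming one IR sample per 2x2 cell at an illustrative fixed offset.
def extract_ir_image(raw, ir_row=1, ir_col=1):
    return raw[ir_row::2, ir_col::2].astype(np.float32)
```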
According to the embodiment of the application, a quadtree search method can be adopted to select the image block whose infrared light component variance in the third image is smaller than the set threshold: divide the third image into four image blocks and calculate the infrared light component variance of each; if the minimum infrared light component variance among the four image blocks is smaller than the set threshold, stop dividing and take that block as the image block whose infrared light component variance is smaller than the set threshold; otherwise, select the image block with the minimum infrared light component variance among the four, continue to divide it into four image blocks, and calculate the infrared light component variances of the divided blocks, until the infrared light component variance of the selected image block is smaller than the set threshold.
As shown in fig. 4, after the third image is divided, a first image block 1, a second image block 2, a third image block 3, and a fourth image block 4 are obtained, and after calculation, the variance of the infrared light component of the second image block 2 is minimum but not smaller than the set threshold, the second image block 2 is continuously divided into four image blocks (the first image block 2-1, the second image block 2-2, the third image block 2-3, and the fourth image block 2-4), and the variance of the infrared light component of the second image block 2-2 in the divided image blocks is smaller than the set threshold, and then the division is stopped, and the second image block 2-2 with the variance of the infrared light component smaller than the set threshold in the third image is selected.
The image block selected by the quadtree search, whose infrared light component variance in the third image is smaller than the set threshold, has low contrast and generally corresponds to the sky or the densest fog region, i.e., the region from which the atmospheric light intensity is determined.
It should be noted that the quadtree search method is only one example of selecting the image block in the third image whose infrared light component variance is smaller than the set threshold in the embodiment of the present application, and other methods may also be used for selection, such as a binary search method, an interpolation search method, an octree search method, and the like.
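A sketch of the quadtree search described above follows (function and parameter names are ours; the minimum-block-size guard is our addition for safety): split the current region into four quadrants, descend into the quadrant with the smallest IR variance, and stop once that variance falls below the threshold.

```python
import numpy as np

# Quadtree search for the low-variance IR block, per the steps above.
def quadtree_min_variance_block(ir, thresh, min_size=8):
    left, top, right, bottom = 0, 0, ir.shape[1], ir.shape[0]
    while True:
        xm, ym = (left + right) // 2, (top + bottom) // 2
        quads = [(left, top, xm, ym), (xm, top, right, ym),
                 (left, ym, xm, bottom), (xm, ym, right, bottom)]
        variances = [ir[t:b, l:r].var() for (l, t, r, b) in quads]
        best = int(np.argmin(variances))
        l, t, r, b = quads[best]
        if variances[best] < thresh or min(r - l, b - t) <= min_size:
            return quads[best]  # (left, top, right, bottom) of selected block
        left, top, right, bottom = l, t, r, b
```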
In some embodiments, after the third image is generated, the third image may be further filtered, such as n × n mean filtering, to reduce the effect of random noise on the third image.
S303: and acquiring the image block at the corresponding position in the second image according to the position of the selected image block in the third image, and determining the atmospheric light intensity according to the Y component of the pixel point in the image block at the corresponding position in the second image.
In this embodiment of the application, the second image and the third image may adopt the same image block division manner, for example, both the second image and the third image are divided into n × n image blocks (that is, each row includes n image blocks, each column includes n image blocks, and n is an integer greater than 1), so that there is a one-to-one correspondence relationship between the image blocks in the second image and the image blocks in the third image according to positions of the image blocks in the image where the image blocks are located.
Since the number of the infrared light component pixels is 1/4 of the total number of pixels in the first image, and the size of the third image generated in step S302 is 1/4 of the second image, when each image block of the third image is mapped to an image block at a corresponding position of the second image, the pixel coordinates of each image block of the third image are multiplied by 2, that is, the pixel coordinates of the corresponding image block in the second image. For example, the coordinates of the four vertices of an image block in the third image are (x, y) (x + a, y) (x, y + b) (x + a, y + b), respectively, and the coordinates of the four vertices of the corresponding image block in the second image are (2 x,2 y) (2 (x + a), 2 y) (2 x,2 (y + b)) (2 (x + a), 2 (y + b)), respectively.
In some embodiments, the image block selected in S302 (whose infrared light component variance in the third image is smaller than the set threshold) is mapped to the image block at the corresponding position in the second image; within that image block, the pixel point whose color is closest to white (Y, U and V components of 255, 128 and 128, respectively) is selected, and the Y component of the selected pixel point is determined as the atmospheric light intensity A. The color of the pixel point selected in this way serves as the sky color, and its Y component is taken as the atmospheric light intensity, so the atmospheric light intensity is determined based on the sky color, which improves the fog-penetration effect.
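A minimal sketch of this white-pixel selection follows (the function name is ours); it assumes full-range YUV where white is (255, 128, 128):

```python
import numpy as np

# Pick the pixel closest to white inside the selected YUV block and
# return its Y component as the atmospheric light intensity A.
def estimate_atmospheric_light(yuv_block):
    white = np.array([255.0, 128.0, 128.0], dtype=np.float32)
    flat = yuv_block.reshape(-1, 3).astype(np.float32)
    dist = np.linalg.norm(flat - white, axis=1)  # distance to white
    return flat[int(np.argmin(dist)), 0]         # Y component -> A
```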
S304: and determining the Y component variance of each image block in the second image and the infrared light component variance of each image block in the third image, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position.
In some embodiments, the third image is divided into n × n image blocks and the infrared light component variance of each image block is computed, denoted σIR²; the second image is divided into n × n image blocks, the Y component of each image block in the second image is extracted, and the Y component variance of each image block is determined, denoted σY². The reflected light transmittance t(y) of the image block at the corresponding position is then determined from the infrared light component variance σIR² of each image block of the third image and the Y component variance σY² of the image block at the corresponding position in the second image.
In some embodiments, the reflected light transmittance t(y) of an image block is computed as a function of σY² and σIR², the Y component variance and the infrared light component variance of the corresponding image block. The depth of field is uniform within one image block but differs between image blocks, so the reflected light transmittance is calculated separately for each image block, which improves the fog-penetration effect of the image.
In some embodiments, since the depth of field differs between image blocks, sharp white edges and blocking artifacts may appear at block boundaries. Therefore, after the reflected light transmittance of each image block is determined, Gaussian weighted filtering may be applied to the reflected light transmittances of the image blocks to reduce the influence of blocking artifacts.
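The sketch below computes the per-block statistics described above. Since the patent's explicit formula for t(y) is not reproduced in the source text, `transmittance_fn` is a placeholder mapping the per-block variances (σY², σIR²) to t(y); the Gaussian smoothing of the resulting t map follows the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Per-block variance computation plus Gaussian smoothing of the t map.
# transmittance_fn is an assumed placeholder for the patent's t(y) formula.
def blockwise_transmittance(y_plane, ir_plane, n, transmittance_fn, sigma=1.0):
    t_map = np.empty((n, n), dtype=np.float32)
    bh, bw = y_plane.shape[0] // n, y_plane.shape[1] // n
    ih, iw = ir_plane.shape[0] // n, ir_plane.shape[1] // n
    for i in range(n):
        for j in range(n):
            var_y = y_plane[i*bh:(i+1)*bh, j*bw:(j+1)*bw].var()    # sigma_Y^2
            var_ir = ir_plane[i*ih:(i+1)*ih, j*iw:(j+1)*iw].var()  # sigma_IR^2
            t_map[i, j] = transmittance_fn(var_y, var_ir)
    return gaussian_filter(t_map, sigma=sigma)  # reduce blocking artifacts
```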
S305: and carrying out fog penetration treatment on the corresponding image block in the second image according to the reflection light transmittance and the atmospheric light intensity of each image block.
In this step, since the fog level has a large influence on the brightness, in order to reduce the amount of calculation, fog penetrating processing may be performed only on the Y component of the corresponding image block in the second image, so as to obtain an image in YUV format after the fog penetrating processing.
In some embodiments, an image block is subjected to fog-penetration processing according to its reflected light transmittance and the atmospheric light intensity. The fog-penetration formula is:

J(y) = (I(y) − A) / t(y) + A    (5)

where A is the atmospheric light intensity, t(y) is the reflected light transmittance of the image block, I(y) is the Y component of the corresponding image block in the second image, and J(y) is the corresponding image block after fog-penetration processing.
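A minimal sketch of applying formula (5) per block follows; only the Y plane is processed, as the text notes, and t is clamped away from zero for numerical safety (the clamp is our addition, not part of the source):

```python
import numpy as np

# Apply J(y) = (I(y) - A) / t(y) + A to one Y-plane block.
def defog_block(y_block, t, A, t_min=0.1):
    y = y_block.astype(np.float32)
    j = (y - A) / max(t, t_min) + A   # formula (5)
    return np.clip(j, 0, 255).astype(np.uint8)
```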
In some embodiments, in S304, after the reflected light transmittance of the image block at the corresponding position is determined according to the Y component variance and the infrared light component variance of the image block at the corresponding position, the reflected light transmittance is adjusted according to the Y component value of the image block, and the value of the Y component is ensured to be within a set interval, thereby considering the loss of the Y component of the image block and the increase of the contrast in the fog penetration process.
In particular, the following fog-penetrating pre-processing operations may be performed for each image block in the second image:
carrying out fog-penetration preprocessing on the image block according to the reflected light transmittance and the atmospheric light intensity of the image block, and determining that the reflected light transmittance of the image block is reasonable if the value of the Y component of the preprocessed image block is within the set interval; otherwise, determining that the value of the reflected light transmittance of the image block is unreasonable, and increasing the reflected light transmittance of the image block until, after fog-penetration preprocessing is performed on the image block according to the increased reflected light transmittance and the atmospheric light intensity, the Y component value of the image block is within the set interval. Adjusting the reflected light transmittance so that the Y component value lies within the set interval takes into account both the loss of the Y component of the image and the increase in contrast during fog-penetration processing.
In some embodiments, the formula for fog-penetration preprocessing of an image block according to its reflected light transmittance and atmospheric light intensity is given in equation (5).
In some embodiments, whether the value of the Y component of the image block is within the set interval may be determined by a loss function of the Y component. Specifically, if the loss value of the Y component is smaller than the set threshold, the Y component value is within the set interval and the reflected light transmittance is reasonable; otherwise, the Y component value is not within the set interval and the reflected light transmittance needs to be adjusted.
A loss function is defined over the image block J(y) after fog penetration preprocessing, relative to the image I(y) of the corresponding image block in the second image before processing, and is used to calculate the loss value of the Y component of each image block:

Loss(J) = Σ_y [ max(0, −J(y)) + max(0, J(y) − 255) ]   (6)

The loss value Loss(J) of the Y component of the preprocessed image block is calculated according to formula (6). If Loss(J) is smaller than the set threshold, the value of the Y component of the preprocessed image block is within the set interval (−x, 255 + x), where x is a quantity not smaller than 0 whose magnitude is directly proportional to the set threshold, and the reflected light transmittance of the image block is determined to be reasonable. If Loss(J) is larger than the set threshold, the value of the Y component of the preprocessed image block is not within the set interval (−x, 255 + x) and the reflected light transmittance is unreasonable; the reflected light transmittance t(y) of the image block is then increased until, after fog penetration preprocessing with the increased transmittance and the atmospheric light intensity, the loss value of the Y component falls below the set threshold, that is, until the Y component value of the image block is within the set interval (−x, 255 + x). Introducing the image loss value protects against loss of the Y component during image fog penetration processing, thereby balancing the loss of the Y component against the increase in contrast.
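A sketch of this adjustment loop is given below. The loss follows the reconstruction of formula (6) above, and the threshold, step size, and upper bound on t(y) are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def y_loss(j_block):
    # Loss value of the preprocessed Y component: the total excursion of
    # J(y) outside [0, 255], following the reconstruction of equation (6).
    return np.maximum(0.0, -j_block).sum() + np.maximum(0.0, j_block - 255.0).sum()

def adjust_transmittance(i_block, t, A, threshold=50.0, step=0.05, t_max=1.0):
    """Raise t(y) until fog penetration preprocessing (equation (5)) keeps
    the Y component of the block within the set interval; i_block is the
    Y component of the block as a float array."""
    while True:
        j_block = (i_block - A) / t + A          # fog penetration preprocessing
        if y_loss(j_block) < threshold or t >= t_max:
            return t                             # transmittance is reasonable
        t = min(t + step, t_max)                 # otherwise increase and retry
```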
In the above embodiment of the application, the RGB-IR photosensitive element collects a first image containing an infrared light component and is combined with an optical fog penetration mode. Image blocks whose infrared light component variance in the third image (recombined from the infrared light component pixel points) is smaller than the set threshold are used to locate hard-to-process areas such as sky and to determine the atmospheric light intensity, which speeds up the location of sky scenes and reduces the amount of calculation for the atmospheric light intensity. The infrared light component variance of each image block in the third image guides the determination of the Y component variance at the corresponding position in the second image, and further the reflected light transmittance of the image block at that position, reducing the amount of calculation for the reflected light transmittance. The loss-threshold limitation reduces the difference of t(y) within a neighborhood and balances the loss of the Y component against the contrast of the image, and Gaussian weighted filtering smooths t(y) to reduce the influence of blocking artifacts.
In the field of video surveillance and the like, captured images are continuous video stream images (also called video frame sequences), and the continuous video frame sequences need to be subjected to fog penetration processing.
To reduce the calculation overhead for continuous video stream images, the fog penetration parameters may be calculated only for key frames, while each non-key frame is processed with the fog penetration parameters of the preceding key frame.
In the embodiments of the present application, the key frames in the video frame sequence may be preset at certain intervals, or the key frames in the video frame sequence may be determined by a moving object detection method (e.g., a background subtraction method, a frame subtraction method, an optical flow method, etc.).
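For illustration, a minimal key-frame test combining the two options mentioned above (a preset interval and a simple frame-difference motion measure, a much-simplified stand-in for the detection methods listed) might look as follows; the interval and motion threshold are assumptions.

```python
import numpy as np

def is_key_frame(frame_idx, prev_y, cur_y, interval=5, diff_thresh=12.0):
    """Key-frame test: a preset interval, or a mean absolute frame
    difference on the Y plane exceeding a threshold (signalling motion)."""
    if frame_idx % interval == 0:
        return True
    diff = np.abs(cur_y.astype(np.float64) - prev_y.astype(np.float64))
    return diff.mean() > diff_thresh
```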
Fig. 5 is a flowchart illustrating an image fog-penetration processing method in a video stream provided by an embodiment of the present application. As shown, the process includes the following steps:
s501: the method comprises the steps of obtaining a first video frame in a video frame sequence with an original format and obtaining a second video frame in a YUV format by converting the first video frame, wherein the first video frame comprises visible light component pixel points and infrared light component pixel points.
S502 to S503: and judging whether the first video frame is a key frame, if not, performing fog penetration processing on the corresponding image block in the second video frame according to the atmospheric light intensity of the previous key frame and the reflection light transmittance of each image block, otherwise, executing the step S504.
In this step, the key frames in the video frame sequence may be preset; for example, frames 1, 5, 10, 15 … in the video frame sequence are set as key frames, and frames 2 to 4 perform fog penetration processing on their corresponding image blocks according to the atmospheric light intensity of frame 1 and the reflected light transmittance of each image block.
S504: and generating a third video frame according to the infrared light component pixel points in the first video frame, and selecting image blocks of which the infrared light component variance is smaller than a set threshold value in the third video frame.
S505: and acquiring the image block at the corresponding position in the second video frame according to the position of the selected image block in the third video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second video frame.
S506: and determining the Y component variance of each image block in the second video frame and the infrared light component variance of each image block in the third video frame, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position.
S507: and carrying out fog penetration treatment on the corresponding image blocks in the second video frame according to the reflection light transmittance and the atmospheric light intensity of each image block.
In the above embodiment, under the condition of continuous video frames, only the atmospheric light intensity of the key frame and the reflection light transmittance of each image block are calculated, and the fog penetration processing is performed on the non-key frame according to the fog penetration parameters obtained by the key frame, so that the fog penetration parameters of each frame do not need to be calculated in real time, and the calculation amount is reduced.
It should be noted that step S501 is the same as step S301, and steps S504 to S507 are the same as steps S302 to S305; their descriptions are not repeated here.
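Putting the S501–S507 flow together, a hedged sketch of the stream loop is given below. It reuses the helpers from the earlier sketches; estimate_atmospheric_light is a hypothetical helper (a sketch of it appears later, alongside the apparatus description), and the frame tuple layout is an assumption.

```python
def defog_stream(frames, block=16):
    """Sketch of the S501-S507 flow: fog parameters (A, t_map) are computed
    on key frames only and reused for the non-key frames that follow.
    `frames` yields (y, u, v, ir) planes per decoded frame."""
    A = t_map = prev_y = None
    for idx, (y, u, v, ir) in enumerate(frames):
        if A is None or is_key_frame(idx, prev_y, y):
            # Key frame: recompute the fog penetration parameters.
            t_map = smooth_transmittance(block_transmittance(y, ir, block))
            A = estimate_atmospheric_light(y, ir, block)  # hypothetical helper
        # Non-key frames reuse the previous key frame's A and t_map.
        prev_y = y
        yield defog_y_plane(y, t_map, A, block), u, v
```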
In other embodiments, a partial region of the first video frame is detected as a target object moving region, and the fog penetration parameters may be calculated from this moving region alone, thereby reducing the amount of calculation.
In step S501, a partial region in the first video frame is detected as the target object moving region by an intelligent motion detection algorithm. If the first video frame is determined to be a non-key frame in step S502, then in step S503 fog penetration processing may be performed on the corresponding image block in the target object moving region of the second video frame according to the atmospheric light intensity of the previous key frame and the reflected light transmittance of the image blocks in the region corresponding to the target object moving region.
If it is determined in step S502 that the first video frame is a key frame, the fog penetration parameters are calculated from the target object moving region. Specifically, a fourth video frame in YUV format is obtained by converting the image in the target object moving region. In step S504, a fifth video frame is generated from the infrared light component pixel points in the image of the target object moving region, and image blocks whose infrared light component variance in the fifth video frame is smaller than the set threshold are selected. In step S505, the image block at the corresponding position in the fourth video frame is obtained according to the position of the selected image block in the fifth video frame, and the atmospheric light intensity is determined from the Y component of the pixels in that image block. In step S506, the Y component variance of each image block in the fourth video frame and the infrared light component variance of each image block in the fifth video frame are determined, and the reflected light transmittance of the image block at the corresponding position is determined from these two variances. In step S507, fog penetration processing is performed on the corresponding image blocks in the fourth video frame according to the reflected light transmittance of each image block and the atmospheric light intensity, and the image in the target object moving region of the second video frame is replaced with the fourth video frame after fog penetration processing, yielding the second video frame after fog penetration processing.
In the above embodiment, if it is determined that the ratio of the detected target moving object region to the first video frame area is smaller than the set threshold, the atmospheric light intensity and the reflected light transmittance are calculated according to the local target moving object region, and the atmospheric light intensity and the reflected light transmittance of the entire image do not need to be calculated, thereby improving the calculation efficiency.
Fig. 6 is a flowchart illustrating another image fog-penetration processing method in a video stream provided by an embodiment of the present application. As shown, the process includes the following steps:
S601: Acquire a first video frame in a video frame sequence in an original format, wherein the first video frame includes visible light component pixel points and infrared light component pixel points and a partial region in the first video frame is detected as a target object moving region, and convert the first video frame to obtain a second video frame in YUV format.
In this step, the target object moving region in the first video frame may be obtained by an intelligent motion detection algorithm.
S602-S604: and judging whether the area ratio of the target object moving area in the first video frame is smaller than a set threshold value, if so, acquiring an image in the target object moving area of the first video frame to obtain a first area image, acquiring an image in the target object moving area in the second video frame to obtain a second area image, and if not, taking the first video frame as the first area image and taking the second video frame as the second area image.
S605: and generating a third area image according to the infrared light component pixel points in the first area image, and selecting image blocks of which the infrared light component variance is smaller than a set threshold value in the third area image.
S606: and acquiring the image block at the corresponding position in the second area image according to the position of the selected image block in the third area image, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second area image.
S607: and determining the Y component variance of each image block in the second area image and the infrared light component variance of each image block in the third area image, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position.
S608: and carrying out fog penetration treatment on the corresponding image block in the second area image according to the reflection light transmittance and the atmospheric light intensity of each image block and according to the reflection light transmittance and the atmospheric light intensity of each image block.
S609: and replacing the image in the target object moving area in the second video frame with the second area image after fog penetration processing to obtain the second video frame after fog penetration processing.
In the above-described embodiment, the atmospheric light intensity and the reflected light transmittance are calculated from the target moving object region in the first video frame, and the atmospheric light intensity and the reflected light transmittance of the whole map do not need to be calculated, thereby improving the calculation efficiency.
It should be noted that the above steps S605 to S608 are consistent with the description of the steps S302 to S305, and are not repeated here.
In some embodiments, step S602 may be omitted, and the image in the target object moving region is directly used as the first image to perform the subsequent fog penetration processing flow.
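A sketch of the area-ratio decision of S602–S604 and the paste-back of S609, reusing the earlier helpers, might look as follows. The ratio threshold and the ROI tuple convention are assumptions, and estimate_atmospheric_light is again the hypothetical helper sketched after the apparatus description below.

```python
def defog_frame_with_motion(y, u, v, ir, roi, ratio_thresh=0.25, block=16):
    """Area-ratio decision of S602-S604 plus the paste-back of S609: if the
    target object moving region is small, fog parameters are computed on the
    region image only; otherwise the whole frame is processed."""
    top, left, bottom, right = roi               # detected moving region
    if (bottom - top) * (right - left) / float(y.size) < ratio_thresh:
        y_roi = y[top:bottom, left:right]        # second area image (Y plane)
        ir_roi = ir[top:bottom, left:right]      # third area image source
        t_map = smooth_transmittance(block_transmittance(y_roi, ir_roi, block))
        A = estimate_atmospheric_light(y_roi, ir_roi, block)
        y = y.copy()
        y[top:bottom, left:right] = defog_y_plane(y_roi, t_map, A, block)
        return y, u, v
    t_map = smooth_transmittance(block_transmittance(y, ir, block))
    A = estimate_atmospheric_light(y, ir, block)
    return defog_y_plane(y, t_map, A, block), u, v
```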
Based on the same technical concept, the embodiment of the present application provides an image fog-penetrating processing device, which can implement the functions in the above embodiments.
Referring to fig. 7, the apparatus includes: an image acquisition module 701, an atmospheric light intensity determination module 702, a reflected light transmittance determination module 703, and a fog penetration processing module 704.
An image obtaining module 701, configured to obtain a first image in an original format and a second image in a YUV format obtained by converting the first image; the first image comprises visible light component pixel points and infrared light component pixel points;
an atmospheric light intensity determining module 702, configured to generate a third image according to the infrared light component pixel point in the first image, and select an image block in the third image where a variance of the infrared light component is smaller than a set threshold; acquiring an image block at a corresponding position in the second image according to the position of the selected image block in the third image, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second image;
a reflected light transmittance determining module 703, configured to determine a Y component variance of each image block in the second image and an infrared light component variance of each image block in the third image, and determine a reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
and the fog penetration processing module 704 is configured to perform fog penetration processing on the corresponding image block in the second image according to the transmittance of the reflected light of each image block and the atmospheric light intensity.
Optionally, the atmospheric light intensity determining module is configured to:
selecting pixel points with the color closest to white in the image blocks at the corresponding positions in the second image;
and determining the Y component of the selected pixel point as the atmospheric light intensity.
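A sketch of this atmospheric light estimate is given below, filling in the estimate_atmospheric_light helper assumed in the earlier stream sketches. Since only the luma plane is used here, the brightest Y pixel stands in for the pixel "closest to white", and the variance threshold and block size are assumptions.

```python
import numpy as np

def estimate_atmospheric_light(y_plane, ir_plane, block=16, var_thresh=20.0):
    """Select image blocks whose infrared variance is below a set threshold
    (flat, hard-to-process regions such as sky), then take the Y value of
    the brightest pixel inside the corresponding Y blocks."""
    h, w = ir_plane.shape
    best = None
    for i in range(h // block):
        for j in range(w // block):
            sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            if ir_plane[sl].var() < var_thresh:
                cand = float(y_plane[sl].max())
                best = cand if best is None else max(best, cand)
    # Fall back to the global maximum if no low-variance block is found.
    return best if best is not None else float(y_plane.max())
```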
Optionally, the apparatus further includes a reflected light transmittance verification module, configured to:
after the reflected light transmittance of the image block at the corresponding position is determined according to the Y component variance and the infrared light component variance of the image block at the corresponding position, the following operations are respectively executed for each image block:
carrying out fog penetration pretreatment on the image block according to the reflection light transmittance and the atmospheric light intensity of the image block;
if the value of the Y component of the image block after the preprocessing is in a set interval, determining that the value of the reflection light transmittance of the image block is reasonable; otherwise, the value of the reflection light transmittance of the image block is determined to be unreasonable, the reflection light transmittance of the image block is increased until the value of the Y component of the image block is within a set interval after fog penetration pretreatment is performed on the image block according to the increased reflection light transmittance and the atmospheric light intensity.
Optionally, the apparatus further includes a filtering module, configured to perform gaussian weighted filtering on the transmittance of the reflected light of each image block.
It should be noted that the apparatus provided in this embodiment can implement all the method steps of the method embodiments and achieve the same technical effects; descriptions of the parts and beneficial effects that are the same as in the method embodiments are omitted here.
Based on the same technical concept, the embodiment of the present application provides an image fog-penetrating processing device, which can implement the functions in the above embodiments.
Referring to fig. 8, the apparatus includes: a video frame acquisition module 801, a key frame judgment module 802, an atmospheric light intensity determination module 803, a reflected light transmittance determination module 804, and a fog penetration processing module 805.
A video frame acquiring module 801, configured to acquire a first video frame in a video frame sequence in an original format and a second video frame in a YUV format obtained by converting the first video frame, where the first video frame includes a visible light component pixel point and an infrared light component pixel point;
a key frame determining module 802, configured to determine whether the first video frame is a key frame, if not, perform fog-penetrating processing on a corresponding image block in the second video frame according to the atmospheric light intensity of the previous key frame and the reflection light transmittance of each image block, otherwise, execute a key frame fog-penetrating processing procedure, where the key frame fog-penetrating processing procedure includes the following modules:
an atmospheric light intensity determining module 803, configured to generate a third video frame according to the infrared light component pixel point in the first video frame, and select an image block in the third video frame where a variance of the infrared light component is smaller than a set threshold; acquiring an image block at a corresponding position in the second video frame according to the position of the selected image block in the third video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second video frame;
a reflected light transmittance determining module 804, configured to determine a Y component variance of each image block in the second video frame and an infrared light component variance of each image block in the third video frame, and determine a reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
and the fog penetration processing module 805 is configured to perform fog penetration processing on the corresponding image block in the second video frame according to the transmittance of the reflected light of each image block and the atmospheric light intensity.
Optionally, a partial area in the first video frame is detected as a target object moving area, and the key frame determining module 802 is configured to perform fog penetration processing on a corresponding image block in the target object moving area in the second video frame according to the atmospheric light intensity of the previous key frame and the reflection light transmittance of the image block in the area corresponding to the target object moving area.
Optionally, a partial region in the first video frame is detected as a target object moving region, and the first video frame is a key frame;
the video frame acquiring module 801 is further configured to convert the image in the target object moving area to obtain a fourth video frame in YUV format;
an atmospheric light intensity determining module 803, configured to generate a fifth video frame according to an infrared light component pixel point in an image in a target object moving area, and select an image block in the fifth video frame where a variance of an infrared light component is smaller than a set threshold; acquiring an image block at a corresponding position in a fourth video frame according to the position of the selected image block in the fifth video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the fourth video frame;
and the reflected light transmittance determining module 804 is configured to determine a Y component variance of each image block in the fourth video frame and an infrared light component variance of each image block in the fifth video frame, and determine the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position.
The fog penetration processing module 805 is configured to perform fog penetration processing on corresponding image blocks in the fourth video frame according to the transmittance of reflected light of each image block and the atmospheric light intensity; and replacing the image in the target object moving area in the second video frame with the fourth video frame after fog penetration processing to obtain the second video frame after fog penetration processing.
Optionally, the atmospheric light intensity determining module 803 is configured to select a pixel point with a color closest to white in the image block at the corresponding position in the fourth video frame; and determining the Y component of the selected pixel point as the atmospheric light intensity.
Optionally, the apparatus further includes a reflected light transmittance verification module, configured to:
after the reflected light transmittance of the image block at the corresponding position is determined according to the Y component variance and the infrared light component variance of the image block at the corresponding position, the following operations are respectively executed for each image block:
carrying out fog penetration pretreatment on the image block according to the reflection light transmittance and the atmospheric light intensity of the image block;
if the value of the Y component of the image block after the preprocessing is in a set interval, determining that the value of the reflection light transmittance of the image block is reasonable; otherwise, the value of the reflection light transmittance of the image block is determined to be unreasonable, the reflection light transmittance of the image block is increased until the value of the Y component of the image block is within a set interval after fog penetration pretreatment is performed on the image block according to the increased reflection light transmittance and the atmospheric light intensity.
Optionally, the apparatus further includes a filtering module, configured to perform gaussian weighted filtering on the transmittance of the reflected light of each image block.
It should be noted that the apparatus provided in this embodiment can implement all the method steps of the method embodiments and achieve the same technical effects; descriptions of the parts and beneficial effects that are the same as in the method embodiments are omitted here.
Based on the same technical concept, the embodiment of the present application provides an image fog-penetrating processing device, which can implement the functions in the above embodiments.
Referring to fig. 9, the apparatus includes: a video frame acquisition module 901, an image acquisition module 902, an atmospheric light intensity determination module 903, a reflected light transmittance determination module 904, and a fog penetration processing module 905.
A video frame acquiring module 901, configured to acquire a first video frame in a video frame sequence in an original format, where the first video frame includes visible light component pixels and infrared light component pixels, and a partial region in the first video frame is detected as a target object moving region, convert the first video frame to obtain a second video frame in a YUV format, and execute a first process;
the first process includes the following modules:
an image obtaining module 902, configured to obtain an image in a target object moving area of a first video frame to obtain a first area image, and obtain an image in a target object moving area of a second video frame to obtain a second area image;
the atmospheric light intensity determining module 903 is configured to generate a third area image according to the infrared light component pixel points in the first area image, and select an image block in the third area image, where a variance of the infrared light component is smaller than a set threshold; acquiring an image block at a corresponding position in the second area image according to the position of the selected image block in the third area image, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second area image;
a reflected light transmittance determining module 904, configured to determine a Y component variance of each image block in the second area image and an infrared light component variance of each image block in the third area image, and determine a reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
the fog penetration processing module 905 is used for performing fog penetration processing on corresponding image blocks in the second area image according to the reflection light transmittance of each image block and the atmospheric light intensity; and replacing the image in the target object moving area in the second video frame with the second area image after fog penetration processing to obtain the second video frame after fog penetration processing.
Optionally, the apparatus further includes an area ratio determining module, configured to determine whether an area ratio of a target object moving region in the first video frame is smaller than a set threshold, if so, execute a first process, otherwise, execute a second process;
the second process includes the following modules:
the atmospheric light intensity determining module 903 is further configured to generate a third video frame according to the infrared light component pixel point in the first video frame, and select an image block in the third video frame, where the infrared light component variance is smaller than a set threshold; acquiring an image block at a corresponding position in the second video frame according to the position of the selected image block in the third video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second video frame;
the reflected light transmittance determining module 904 is further configured to determine a Y component variance of each image block in the second video frame and an infrared light component variance of each image block in the third video frame, and determine the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
the fog penetrating processing module 905 is further configured to perform fog penetrating processing on corresponding image blocks in the second video frame according to the transmittance of the reflected light of each image block and the atmospheric light intensity.
Optionally, the atmospheric light intensity determining module is configured to:
selecting pixel points with the color closest to white in the image blocks at the corresponding positions in the second area image;
determining the Y component of the selected pixel point as the atmospheric light intensity, or,
selecting pixel points with the color closest to white in the image blocks at the corresponding positions in the second video frame;
and determining the Y component of the selected pixel point as the atmospheric light intensity.
Optionally, the apparatus further includes a reflected light transmittance verification module, configured to:
after the reflected light transmittance of the image block at the corresponding position is determined according to the Y component variance and the infrared light component variance of the image block at the corresponding position, the following operations are respectively executed for each image block:
carrying out fog penetration pretreatment on the image block according to the reflection light transmittance and the atmospheric light intensity of the image block;
if the value of the Y component of the image block after the preprocessing is in a set interval, determining that the value of the reflection light transmittance of the image block is reasonable; otherwise, the value of the reflection light transmittance of the image block is determined to be unreasonable, the reflection light transmittance of the image block is increased until the value of the Y component of the image block is within a set interval after fog penetration pretreatment is performed on the image block according to the increased reflection light transmittance and the atmospheric light intensity.
Optionally, the apparatus further includes a filtering module, configured to perform gaussian weighted filtering on the transmittance of the reflected light of each image block.
It should be noted that the apparatus provided in this embodiment can implement all the method steps of the method embodiments and achieve the same technical effects; descriptions of the parts and beneficial effects that are the same as in the method embodiments are omitted here.
Based on the same technical concept, the embodiment of the application also provides an image fog-penetrating processing device, and the device can realize the method in the embodiment.
Referring to fig. 10, the apparatus includes a processor 1001 and a network interface 1002. The processor 1001, which may also be a controller, is configured to perform the functions described in fig. 3 to fig. 6. The network interface 1002 is configured to support the messaging function. The apparatus may also include a memory 1003 coupled with the processor 1001, which stores the program instructions and data necessary for the device. The processor 1001, the network interface 1002, and the memory 1003 are connected; the memory 1003 stores instructions, and the processor 1001 executes the instructions stored in the memory 1003 to control the network interface 1002 to send and receive messages and to complete the steps of the methods that perform the corresponding functions.
In the embodiments of the present application, for concepts, explanations, details, and other steps related to the technical solutions provided by the embodiments of the present application, reference is made to the descriptions of the foregoing methods or other embodiments, and details are not described herein.
It should be noted that the processor referred to in the embodiments of the present application may be a Central Processing Unit (CPU), a general purpose processor, a Digital Signal Processor (DSP), an application-specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic devices, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. A processor may also be a combination of computing functions, e.g., comprising one or more microprocessors, a DSP and a microprocessor, or the like. Wherein the memory may be integrated in the processor or may be provided separately from the processor.
Embodiments of the present application also provide a computer storage medium for storing instructions that, when executed, may perform the method of the foregoing embodiments.
The embodiments of the present application also provide a computer program product for storing a computer program, where the computer program is used to execute the method of the foregoing embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (12)
1. An image fog-penetration processing method is characterized by comprising the following steps:
acquiring a first image in an original format and a second image in a YUV format obtained by converting the first image; the first image comprises visible light component pixel points and infrared light component pixel points;
generating a third image according to the infrared light component pixel points in the first image, and selecting image blocks of which the infrared light component variance is smaller than a set threshold in the third image;
acquiring image blocks at corresponding positions in the second image according to the positions of the selected image blocks in the third image, and determining the atmospheric light intensity according to Y components of pixel points in the image blocks at the corresponding positions in the second image;
determining the Y component variance of each image block in the second image and the infrared light component variance of each image block in the third image, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
and carrying out fog penetration treatment on the corresponding image block in the second image according to the reflection light transmittance of each image block and the atmospheric light intensity.
2. The method of claim 1, wherein determining the atmospheric light intensity from the Y component of the pixel points in the image block at the corresponding location in the second image comprises:
selecting pixel points with the color closest to white in the image blocks at the corresponding positions in the second image;
and determining the Y component of the selected pixel point as the atmospheric light intensity.
3. The method of claim 1, wherein after determining the reflected light transmittance of the image block at the corresponding location according to the Y component variance and the infrared light component variance of the image block at the corresponding location, further comprising:
the following operations are performed for each image block:
carrying out fog penetration pretreatment on the image block according to the transmittance of the reflected light of the image block and the atmospheric light intensity;
if the value of the Y component of the image block after the preprocessing is in a set interval, determining that the value of the reflection light transmittance of the image block is reasonable; otherwise, the value of the reflection light transmittance of the image block is determined to be unreasonable, the reflection light transmittance of the image block is increased until the value of the Y component of the image block is within the set interval after fog penetration pretreatment is performed on the image block according to the increased reflection light transmittance and the atmospheric light intensity.
4. The method of claim 1, wherein determining the transmittance of the reflected light of the image block at the corresponding position further comprises:
and performing Gaussian weighted filtering on the reflection light transmittance of each image block.
5. An image fog-penetration processing method is characterized by comprising the following steps:
acquiring a first video frame in a video frame sequence in an original format and a second video frame in a YUV format obtained by converting the first video frame, wherein the first video frame comprises visible light component pixel points and infrared light component pixel points;
if the first video frame is not a key frame, carrying out fog-penetrating processing on a corresponding image block in a second video frame according to the atmospheric light intensity of the previous key frame and the reflection light transmittance of each image block, otherwise, executing a key frame fog-penetrating processing process, wherein the key frame fog-penetrating processing process comprises the following operations:
generating a third video frame according to the infrared light component pixel points in the first video frame, and selecting image blocks of which the infrared light component variance is smaller than a set threshold in the third video frame;
acquiring an image block at a corresponding position in the second video frame according to the position of the selected image block in the third video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second video frame;
determining the Y component variance of each image block in the second video frame and the infrared light component variance of each image block in the third video frame, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
and carrying out fog penetration treatment on the corresponding image blocks in the second video frame according to the reflection light transmittance of each image block and the atmospheric light intensity.
6. The method as claimed in claim 5, wherein the partial area in the first video frame is detected as the moving area of the target object, and the fog-penetrating processing is performed on the corresponding image block in the second video frame according to the atmospheric light intensity of the previous key frame and the reflected light transmittance of each image block, including:
and carrying out fog penetration treatment on the corresponding image block in the target object moving area in the second video frame according to the atmospheric light intensity of the previous key frame and the reflection light transmittance of the image block in the area corresponding to the target object moving area.
7. The method of claim 5, wherein a partial region in the first video frame is detected as a target object moving region, and the first video frame is a key frame, the key frame fog-penetration processing procedure comprises:
converting the image in the target object moving area to obtain a fourth video frame in a YUV format;
generating a fifth video frame according to the infrared light component pixel points in the image in the target object moving area, and selecting an image block of which the infrared light component variance is smaller than a set threshold in the fifth video frame;
acquiring an image block at a corresponding position in a fourth video frame according to the position of the selected image block in the fifth video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the fourth video frame;
determining the Y component variance of each image block in the fourth video frame and the infrared light component variance of each image block in the fifth video frame, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
carrying out fog penetration treatment on corresponding image blocks in the fourth video frame according to the reflection light transmittance of each image block and the atmospheric light intensity;
and replacing the image in the target object moving area in the second video frame with the fourth video frame after fog penetration treatment to obtain the second video frame after fog penetration treatment.
8. An image fog-penetration processing method is characterized by comprising the following steps:
acquiring a first video frame in a video frame sequence in an original format, wherein the first video frame comprises visible light component pixel points and infrared light component pixel points, part of regions in the first video frame are detected as target object moving regions, converting the first video frame to obtain a second video frame in a YUV format, and executing first processing;
the first processing includes:
acquiring an image in a target object moving area of a first video frame to obtain a first area image, and acquiring an image in a target object moving area of a second video frame to obtain a second area image;
generating a third area image according to the infrared light component pixel points in the first area image, and selecting image blocks of which the infrared light component variance is smaller than a set threshold value in the third area image;
acquiring an image block at a corresponding position in the second area image according to the position of the selected image block in the third area image, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second area image;
determining the Y component variance of each image block in the second area image and the infrared light component variance of each image block in the third area image, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
carrying out fog penetration treatment on corresponding image blocks in the second area image according to the reflection light transmittance of each image block and the atmospheric light intensity;
and replacing the image in the target object moving area in the second video frame with the second area image after fog penetration processing to obtain the second video frame after fog penetration processing.
9. The method of claim 8, wherein after acquiring the first video frame, further comprising: judging whether the area ratio of a target object moving region in the first video frame is smaller than a set threshold value or not;
in the method, if the area ratio of the target object moving region in the first video frame is smaller than the set threshold, executing the first processing; if the area ratio of the target object moving region in the first video frame is greater than or equal to the set threshold, executing second processing;
the second processing includes:
generating a third video frame according to the infrared light component pixel points in the first video frame, and selecting image blocks of which the infrared light component variance is smaller than a set threshold in the third video frame;
acquiring an image block at a corresponding position in the second video frame according to the position of the selected image block in the third video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second video frame;
determining the Y component variance of each image block in the second video frame and the infrared light component variance of each image block in the third video frame, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
and carrying out fog penetration treatment on the corresponding image blocks in the second video frame according to the reflection light transmittance of each image block and the atmospheric light intensity.
10. An image fog-penetration processing device, characterized by comprising:
the image acquisition module is used for acquiring a first image in an original format and a second image in a YUV format obtained by converting the first image; the first image comprises visible light component pixel points and infrared light component pixel points;
the atmospheric light intensity determining module is used for generating a third image according to the infrared light component pixel points in the first image and selecting image blocks of which the infrared light component variance is smaller than a set threshold in the third image; acquiring image blocks at corresponding positions in the second image according to the positions of the selected image blocks in the third image, and determining the atmospheric light intensity according to Y components of pixel points in the image blocks at the corresponding positions in the second image;
the reflected light transmittance determining module is used for determining the Y component variance of each image block in the second image and the infrared light component variance of each image block in the third image, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
and the fog penetration processing module is used for carrying out fog penetration processing on the corresponding image blocks in the second image according to the reflection light transmittance of each image block and the atmospheric light intensity.
11. An image fog-penetration processing device, characterized by comprising:
the video frame acquisition module is used for acquiring a first video frame in a video frame sequence in an original format and a second video frame in a YUV format obtained by converting the first video frame, wherein the first video frame comprises visible light component pixel points and infrared light component pixel points;
a key frame judging module, configured to judge whether the first video frame is a key frame, if not, perform fog-penetrating processing on a corresponding image block in the second video frame according to the atmospheric light intensity of a previous key frame and the reflection light transmittance of each image block, otherwise, execute a key frame fog-penetrating processing procedure, where the key frame fog-penetrating processing procedure includes the following modules:
the atmospheric light intensity determining module is used for generating a third video frame according to the infrared light component pixel points in the first video frame and selecting image blocks of which the infrared light component variance is smaller than a set threshold value in the third video frame; acquiring an image block at a corresponding position in the second video frame according to the position of the selected image block in the third video frame, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second video frame;
the reflected light transmittance determining module is used for determining the Y component variance of each image block in the second video frame and the infrared light component variance of each image block in the third video frame, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
and the fog penetration processing module is used for carrying out fog penetration processing on the corresponding image blocks in the second video frame according to the reflection light transmittance of each image block and the atmospheric light intensity.
12. An image fog-penetration processing device, characterized by comprising:
the video frame acquisition module is used for acquiring a first video frame in a video frame sequence in an original format, wherein the first video frame comprises visible light component pixels and infrared light component pixels, partial area in the first video frame is detected as a target object moving area, a second video frame in a YUV format is obtained by converting the first video frame, and first processing is executed;
the first processing includes:
the image acquisition module is used for acquiring an image in a target object moving area of the first video frame to obtain a first area image, and acquiring an image in a target object moving area of the second video frame to obtain a second area image;
the atmospheric light intensity determining module is used for generating a third area image according to the infrared light component pixel points in the first area image and selecting image blocks of which the infrared light component variance is smaller than a set threshold value in the third area image; acquiring an image block at a corresponding position in the second area image according to the position of the selected image block in the third area image, and determining the atmospheric light intensity according to the Y component of the pixel in the image block at the corresponding position in the second area image;
the reflected light transmittance determining module is used for determining the Y component variance of each image block in the second area image and the infrared light component variance of each image block in the third area image, and determining the reflected light transmittance of the image block at the corresponding position according to the Y component variance and the infrared light component variance of the image block at the corresponding position;
the fog penetration processing module is used for carrying out fog penetration processing on the corresponding image blocks in the second area image according to the reflection light transmittance of each image block and the atmospheric light intensity; and replacing the image in the target object moving area in the second video frame with the second area image after fog penetration processing to obtain the second video frame after fog penetration processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010472423.9A CN111383242B (en) | 2020-05-29 | 2020-05-29 | Image fog penetration processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010472423.9A CN111383242B (en) | 2020-05-29 | 2020-05-29 | Image fog penetration processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111383242A true CN111383242A (en) | 2020-07-07 |
CN111383242B (en) | 2020-09-29
Family
ID=71220403
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010472423.9A Active CN111383242B (en) | 2020-05-29 | 2020-05-29 | Image fog penetration processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111383242B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4636643A (en) * | 1983-07-25 | 1987-01-13 | Nippondenso Co., Ltd. | Fog detecting apparatus for use in vehicle |
CN104571401A (en) * | 2013-10-18 | 2015-04-29 | 中国航天科工集团第三研究院第八三五八研究所 | Implementing device of high-speed guiding filter on FPGA (field programmable gate array) platform |
CN110163804A (en) * | 2018-06-05 | 2019-08-23 | 腾讯科技(深圳)有限公司 | Image defogging method, device, computer equipment and storage medium |
CN109410161A (en) * | 2018-10-09 | 2019-03-01 | 湖南源信光电科技股份有限公司 | A kind of fusion method of the infrared polarization image separated based on YUV and multiple features |
CN110493579A (en) * | 2019-03-14 | 2019-11-22 | 杭州海康威视数字技术股份有限公司 | A kind of colour Penetrating Fog method, apparatus, video camera and image processing system |
Non-Patent Citations (1)
Title |
---|
Shen Yu et al.: "Defogging technology based on information fusion of near-infrared and visible light dual-channel sensors", Spectroscopy and Spectral Analysis *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112017174A (en) * | 2020-09-03 | 2020-12-01 | 湖南省华芯医疗器械有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN112017174B (en) * | 2020-09-03 | 2024-05-31 | 湖南省华芯医疗器械有限公司 | Image processing method, device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111383242B (en) | 2020-09-29 |
Similar Documents
Publication | Title |
---|---|
Huang et al. | An advanced single-image visibility restoration algorithm for real-world hazy scenes | |
Park et al. | Single image dehazing with image entropy and information fidelity | |
WO2018099136A1 (en) | Method and device for denoising image with low illumination, and storage medium | |
US9672434B2 (en) | Video-based system and method for parking occupancy detection | |
CN114119378A (en) | Image fusion method, and training method and device of image fusion model | |
CN108596169B (en) | Block signal conversion and target detection method and device based on video stream image | |
CN111784605B (en) | Image noise reduction method based on region guidance, computer device and computer readable storage medium | |
CN111062293B (en) | Unmanned aerial vehicle forest flame identification method based on deep learning | |
CN105404888A (en) | Saliency object detection method integrated with color and depth information | |
KR20140017776A (en) | Image processing device and image defogging method | |
CN110276831A (en) | Constructing method and device, equipment, the computer readable storage medium of threedimensional model | |
Chen et al. | Improve transmission by designing filters for image dehazing | |
CN110175967B (en) | Image defogging processing method, system, computer device and storage medium | |
CN111383242B (en) | Image fog penetration processing method and device | |
Kansal et al. | Fusion-based image de-fogging using dual tree complex wavelet transform | |
CN116263942A (en) | Method for adjusting image contrast, storage medium and computer program product | |
CN110062150A (en) | A kind of Atomatic focusing method and device | |
JP5822739B2 (en) | Image processing apparatus, method, and program | |
CN114418874A (en) | Low-illumination image enhancement method | |
CN112017128A (en) | Image self-adaptive defogging method | |
CN112241935A (en) | Image processing method, device and equipment and storage medium | |
CN112752064A (en) | Processing method and system for power communication optical cable monitoring video | |
CN110930326A (en) | Image and video defogging method and related device | |
Saminadan et al. | Efficient image dehazing based on pixel based dark channel prior and guided filter | |
CN103226813A (en) | Processing method for improving video image quality in rainy days |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||