CN111192213B - Image defogging self-adaptive parameter calculation method, image defogging method and system


Info

Publication number
CN111192213B
Authority
CN
China
Prior art keywords: image, defogging, value, dark channel, calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911374990.4A
Other languages
Chinese (zh)
Other versions
CN111192213A (en
Inventor
罗晶宜
Current Assignee
Zhejiang Xinmai Microelectronics Co ltd
Original Assignee
Zhejiang Xinmai Microelectronics Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Xinmai Microelectronics Co ltd filed Critical Zhejiang Xinmai Microelectronics Co ltd
Priority to CN201911374990.4A
Publication of CN111192213A
Application granted
Publication of CN111192213B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The method comprises: training a device gain circle; obtaining the device number from the image decoding data, calling up the corresponding device gain circle, identifying the image, and outputting the matched adaptive parameter k from the gain circle; outputting the identified dark channel image through a dark channel prior algorithm; identifying a sky region and outputting the identified sky region image; fusing the values of the dark channel image and the sky region image to obtain an input image; calculating the atmospheric light curtain value before defogging according to a formula, and then restoring the fog-free image. The invention relates to a novel image defogging method and system that calculate adaptive parameters using the Radviz algorithm.

Description

Image defogging self-adaptive parameter calculation method, image defogging method and system
Technical Field
The invention relates to the field of image processing, in particular to a calculation method of image defogging self-adaptive parameters, an image defogging method and a system.
Background
Papers and patents on defogging are numerous. Most are based on Dr. He Kaiming's classic defogging algorithm, Single Image Haze Removal Using Dark Channel Prior, which uses the dark channel assumption to solve for the atmospheric light value and transmissivity and then performs defogging with the hazy imaging model. Most existing defogging schemes improve and optimize on this scheme, but each variant has advantages and disadvantages, with mixed results in real-time performance, defogging effect, and other respects.
Existing defogging schemes either run too slowly or consume too many resources, and each method has weaknesses. For example, Dr. He Kaiming's approach involves a large number of floating-point operations that limit real-time processing on industrial CCDs; defogging with the retinex algorithm easily causes color cast and is only suitable for still images; and other new real-time algorithms have defects in boundary handling, color cast, internal blocking and the like, and are especially sensitive to parameters.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a novel image defogging method and system that calculate adaptive parameters using the Radviz algorithm.
In order to solve the technical problems, the invention is solved by the following technical scheme:
the calculation method of the image defogging self-adaptive parameter comprises the following steps:
step 1, acquiring an image data set P= { A, X, ymean }, wherein A represents a global atmospheric light value, X is the complexity of image textures, and Ymean is the average brightness of an image;
step 2, carrying out normalization processing on the data set P, substituting the P into a Radviz algorithm, and acquiring the balance point position in a circle of the Radviz algorithm to obtain a self-adaptive parameter k;
step 3, using a plurality of images with different gains, repeating the method of steps 1-2 to obtain a plurality of balance point positions, and distributing the balance point positions in a circle, using manually verified correct adaptive parameters k as the output values;
step 4, obtaining gain circles distributed with k of all cases through interpolation;
step 5, obtaining the device number from the image decoding data, calculating an image data set P', calling up the device gain circle obtained in step 4, substituting P' into the gain circle to identify the image, and outputting the adaptive parameter k matched to the input image.
Optionally, the normalization is calculated as: a_ij = (x_ij − min_t(x_tj)) / (max_t(x_tj) − min_t(x_tj)), where t is an index running over all investigated values under attribute j, used to find the maximum and minimum; j is the attribute of the object; i indexes the objects investigated under the attribute;
the tension balance condition Σ_j (O_j − x_i)·a_ij = 0 is satisfied, where O_j is the coordinate of each attribute anchor, x_i is the balance point position, and a_ij is the tension value of each attribute; the balance point position is then calculated as x_i = (Σ_j a_ij·O_j) / (Σ_j a_ij).
Optionally, the image data set P = {A, X, Ymean} is normalized after singular points are excluded.
Optionally, the global atmospheric light value A is calculated by taking the top 0.1% of pixels by brightness in the image and averaging their values; the average brightness Ymean of the image is calculated by summing the values of all pixels and dividing by the number of pixels.
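The two statistics above can be sketched in NumPy (a sketch; the function names are ours, not the patent's):

```python
import numpy as np

def global_atmospheric_light(gray: np.ndarray) -> float:
    """A value: mean of the brightest 0.1% of pixels."""
    flat = np.sort(gray.ravel())
    n_top = max(1, int(round(flat.size * 0.001)))  # top 0.1%, at least 1 pixel
    return float(flat[-n_top:].mean())

def average_brightness(gray: np.ndarray) -> float:
    """Ymean: sum of all pixel values divided by the pixel count."""
    return float(gray.mean())
```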
Optionally, the complexity X of the image texture is calculated through the histogram: let L be a gray level of the image, Lmean the mean gray level, and h(L) the corresponding normalized histogram; X is then calculated as X = Σ_L (L − Lmean)²·h(L).
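Assuming the texture-complexity formula is the gray-level variance over the normalized histogram (a reading consistent with the mention of Lmean and h(L), but not confirmed by this text), a sketch:

```python
import numpy as np

def texture_complexity(gray: np.ndarray, levels: int = 256) -> float:
    # Normalized histogram h(L): fraction of pixels at each gray level L.
    h, _ = np.histogram(gray, bins=levels, range=(0, levels))
    h = h / gray.size
    l = np.arange(levels)
    l_mean = (l * h).sum()                       # Lmean
    # Assumed form: variance of the gray levels under h(L).
    return float(((l - l_mean) ** 2 * h).sum())
```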
The image defogging method comprises: obtaining the adaptive parameter k calculated above, obtaining the device number from the image decoding data, calling up the corresponding device gain circle, identifying the image, and outputting the matched adaptive parameter k from the gain circle;
outputting the identified dark channel image through a dark channel prior algorithm;
identifying a sky area, and outputting an identified sky area image;
fusing the values of the dark channel image and the sky area image to obtain Iin;
performing edge-preserving filtering on Iin to obtain Y1, calculating Iin − Y1 and taking its absolute value Y2, and performing edge-preserving filtering once on Y2 to obtain Yf;
the atmospheric light curtain value Yshuchu before defogging is calculated by the formula: Yshuchu = Y1 − Yf;
restoring the fog-free image by the calculation formula: Yout = (I(x) − Yshuchu) / (1 − Yshuchu/A);
I(x): the hazy image; A: the global atmospheric light value; Yout: the fog-free image.
Optionally, the dark channel prior calculation formula is: Idark(x) = min_{y∈s(x)} ( min_{c∈{r,g,b}} I_c(y) );
c is a channel variable; I_c(y) is any one color channel of the image I; Idark is the dark channel data; s(x) is a sliding window centered at x with radius R, where R is user-defined.
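A minimal NumPy sketch of this dark channel computation, using a square (2R+1)×(2R+1) window for s(x) (the usual choice; the helper name is ours):

```python
import numpy as np

def dark_channel(img: np.ndarray, radius: int = 7) -> np.ndarray:
    """Idark(x) = min over window s(x) of the per-pixel channel minimum."""
    per_pixel_min = img.min(axis=2)            # min over r,g,b at each pixel
    padded = np.pad(per_pixel_min, radius, mode="edge")
    h, w = per_pixel_min.shape
    out = np.full((h, w), np.inf)
    # Slide the (2R+1)x(2R+1) window by shifting the padded image.
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out
```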
Optionally, the sky area identification processing method comprises the following steps:
converting the three primary color image into ycbcr image, and extracting Y component;
the Y component is filtered by Gaussian filtering;
identifying gradient values of each pixel point of the Y component in each direction, comparing and selecting the maximum value to obtain output S, wherein the output S is used as the gradient of the image to be stored as the input of the next step;
setting 2 thresholds: a gray-level threshold on the Y component, and a variance threshold obtained by calculating the variance of the gradient maximum S; the sky region is cut out using the gray-level threshold and the variance threshold to obtain Isky, and a small mean filter is then applied to Isky.
Optionally, the value fusion formula of the dark channel image and the sky area image is as follows:
I in =fix((b*Isky+(255-b)*Idark)/255);
where b is a weight parameter that controls how strongly the sky is processed in the final result.
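A sketch of this fusion formula, reading fix() as truncation toward zero (as in MATLAB; an assumption):

```python
import numpy as np

def fuse(isky: np.ndarray, idark: np.ndarray, b: int) -> np.ndarray:
    """Iin = fix((b*Isky + (255-b)*Idark) / 255), fix = truncate toward zero."""
    return np.trunc((b * isky + (255 - b) * idark) / 255.0).astype(np.int32)
```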
Optionally, the edge-preserving filtering method is as follows: set a region of size r; identify pixels within radius r whose values are similar to the center pixel, with a similarity threshold d; when the similarity is greater than the threshold d, apply mean filtering and replace the original pixel value with the filtered value; regions whose similarity is below the threshold d are retained as boundary regions.
An image defogging system comprises a front-end module, a defogging module and a back-end module. The front-end module comprises a video data collection module and an encoder, and transmits the collected video data and the sensor device model to the encoder for compression; the back-end module comprises a video data encoding and collection module and a decoder, and decodes the data received over the network.
The video signal is input into the front-end module for video data acquisition and encoding, the processed data is transmitted to the defogging module through a network, defogging processing is carried out through the image defogging method, and the defogged data is transmitted to the back-end module through the network for decoding.
Optionally, the defogging module comprises a dark channel processing module, a sky area processing module and a special defogging processing module;
the dark channel processing module is used for identifying dark channel images;
the sky area processing module is used for identifying sky area images;
and the special defogging processing module is used for fusing the image data processed by the dark channel processing module and the sky area processing module, and performing defogging on the fused image data.
The invention has the beneficial effects that:
1. The Radviz algorithm adopted by the invention completes adaptive parameter identification, and can simply and rapidly extract image information and the relationships between images. Image processing can further be combined with visualization algorithms, and a clustering algorithm can be used to stabilize the image processing effect.
2. The defogging module effectively preserves the sky, and on many imaging systems it well suppresses the swirl-pattern artifacts that defogging causes in the sky region.
3. A special defogging processing module is designed with an added edge-preserving filtering flow, avoiding later abnormalities such as black edges, so objects look more real and have better contrast.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings can be obtained according to these drawings without inventive faculty for a person skilled in the art.
FIG. 1 is a flow chart of an image defogging system;
FIG. 2 is a flow chart of a cloud defogging module process;
FIG. 3 is a schematic diagram of the equilibrium of one of the points of Radviz;
fig. 4 is a graph of the distribution of Radviz adaptive parameter k.
Detailed Description
The present invention will be described in further detail with reference to the following examples, which are illustrative of the present invention and are not intended to limit the present invention thereto.
Example 1:
As in fig. 1 and 2: an image defogging system comprises a front-end module, a cloud defogging module and a back-end module. The front-end module comprises a video data collection module and an MPEG-4.10 encoder; the collected video data and the sensor device model are transmitted to the MPEG-4.10 encoder for compression, and the compressed code stream is transmitted to the cloud for further processing.
The back-end module comprises an MPEG-4.10 video data encoding and collection module and an MPEG-4.10 decoder; it decodes the data received over the network and plays it through a video player.
As in fig. 2: the defogging module comprises a dark channel processing module, a sky area processing module and a special defogging processing module;
the dark channel processing module is used for identifying dark channel images; the sky area processing module is used for identifying sky area images;
and the special defogging processing module is used for fusing the image data processed by the dark channel processing module and the sky area processing module, and performing defogging on the fused image data.
Example 2:
Let the input image I have length m, width n, and bit width Nbit.
An image defogging method comprises the steps of firstly training a sensor device gain circle distributed with defogging self-adaptive parameters based on a Radviz algorithm according to machine learning.
And acquiring the equipment number through image decoding data, calling out a corresponding sensor equipment gain circle, identifying the image, and outputting a matched self-adaptive parameter k in the gain circle.
Then, the identified dark channel image is output through the dark channel prior algorithm, whose calculation formula is: Idark(x) = min_{y∈s(x)} ( min_{c∈{r,g,b}} I_c(y) ); c is a variable selecting one of the 3 RGB channels, so the minimum of the 3 channels is taken at each pixel; I_c(y) is any one color channel of the image I; Idark is the dark channel data; s(x) is a sliding window centered at x with radius R, where R is user-defined. A radius of 3 to 7 pixels is suitable; 7 is selected in this embodiment.
Identifying a sky area, and outputting an identified sky area image; and fusing the values of the dark channel image and the sky area image to obtain Iin.
Further, edge-preserving filtering is performed on Iin to obtain Y1; Iin − Y1 is calculated and its absolute value Y2 is taken; edge-preserving filtering is performed once on Y2 to obtain Yf;
the atmospheric light curtain value Yshuchu before defogging is calculated by the formula: Yshuchu = Y1 − Yf;
the fog-free image is restored by the calculation formula: Yout = (I(x) − Yshuchu) / (1 − Yshuchu/A);
I(x): the hazy image; A: the global atmospheric light value; Yout: the fog-free image.
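The Y1/Y2/Yf/Yshuchu pipeline can be sketched as below. A plain box mean stands in for the patent's edge-preserving filter, and the restoration step uses the standard atmospheric-veil formula Yout = (I(x) − Yshuchu) / (1 − Yshuchu/A):

```python
import numpy as np

def box_blur(x, r=2):
    # Stand-in for the edge-preserving filter: a plain box mean with edge padding.
    pad = np.pad(x, r, mode="edge")
    h, w = x.shape
    acc = np.zeros((h, w), dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            acc += pad[dy:dy + h, dx:dx + w]
    return acc / (2 * r + 1) ** 2

def dehaze(i_gray, i_in, a, eps=1e-6):
    y1 = box_blur(i_in)          # first filtering pass -> Y1
    y2 = np.abs(i_in - y1)       # Y2 = |Iin - Y1|
    yf = box_blur(y2)            # second pass on the absolute residue -> Yf
    veil = y1 - yf               # Yshuchu = Y1 - Yf
    # Yout = (I(x) - Yshuchu) / (1 - Yshuchu/A), clamped away from zero
    return (i_gray - veil) / np.maximum(1.0 - veil / a, eps)
```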
The sky area recognition processing method comprises the following steps:
1. converting the three primary color image into ycbcr image, and extracting Y component;
2. the Y component is filtered by Gaussian filtering;
3. identifying gradient values of each pixel point of the Y component in each direction, comparing and selecting the maximum value to obtain output S, wherein the output S is used as the gradient of the image to be stored as the input of the next step;
4. setting 2 thresholds: one is a gray-level threshold on the Y component, e.g. 0.7·2^N ≤ Y ≤ 2^N as the brightness threshold; the other concerns the gradient maximum S: the variance of S is calculated and a variance threshold is set. The sky region is cut out using the gray-level threshold and the variance threshold to obtain Isky;
5. a small mean filtering is applied to Isky; for example, a 5×5 all-ones mean filter can be used, and other sizes can be chosen in practice; within the sky region Isky, the result of the mean filtering is set to 0.
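Steps 1-4 above can be sketched as follows (the Gaussian pre-filter is omitted for brevity, and the exact use of the variance threshold is our assumption - bright, low-gradient pixels are taken as sky):

```python
import numpy as np

def sky_mask(rgb: np.ndarray, nbit: int = 8) -> np.ndarray:
    """Rough sky segmentation: bright pixels with small maximum gradient."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b       # Y of YCbCr (BT.601 weights)
    gy, gx = np.gradient(y)                     # directional gradients
    s = np.maximum(np.abs(gx), np.abs(gy))      # per-pixel gradient maximum S
    y_thresh = 0.7 * 2 ** nbit                  # brightness threshold 0.7*2^N
    s_thresh = s.var()                          # variance-based threshold (assumed usage)
    return (y >= y_thresh) & (s <= s_thresh)
```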
The edge-preserving filtering method is as follows: set a region of size r, where r can be a radius of 3 to 9 pixels; identify pixels within radius r whose values are similar to the center pixel, with a similarity threshold d. If the similarity is below 10%, the pixel is considered a boundary and is retained; if the similarity is greater than the threshold d, a mean filter of size [m/40, n/40] is applied and the original pixel value is replaced by the filtered value.
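A sketch of this edge-preserving filter. 'Similarity' is read here as the fraction of window pixels within a fixed tolerance of the center value (the text does not pin the metric down), and a single window mean stands in for the [m/40, n/40] filter:

```python
import numpy as np

def edge_protect_filter(img, r=3, d=0.10, tol=10):
    """Selective mean: smooth only where the neighborhood resembles the center."""
    h, w = img.shape
    pad = np.pad(img.astype(float), r, mode="edge")
    out = img.astype(float).copy()
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * r + 1, x:x + 2 * r + 1]
            # Fraction of window pixels within tol of the center value.
            similarity = (np.abs(win - img[y, x]) <= tol).mean()
            if similarity > d:           # similar region: replace with the mean
                out[y, x] = win.mean()
            # else: boundary region, original value retained
    return out
```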
Example 3:
The Radviz algorithm is based on a concept of tension and springs; the entire algorithm operates on a circle.
The image defogging self-adaptive parameter k calculating method comprises the following steps:
Step 1, acquire an image data set P = {A, X, Ymean} of 3 attributes, where A represents the global atmospheric light value, X is the complexity of the image texture, and Ymean is the average brightness of the image. Note: Radviz can support many attributes, but 3 to 7 is optimal.
Step 2, as in fig. 3: and carrying out normalization processing on the data set P, substituting the P into a Radviz algorithm, and acquiring the balance point position in a circle of the Radviz algorithm to obtain the self-adaptive parameter k.
Step 3, repeat the method of steps 1-2 with a plurality of images of different gains to obtain a plurality of adaptive parameters k, and distribute the output values in a circle using manually verified correct adaptive parameters k.
And 4, obtaining gain circles distributed with k under all conditions through interpolation.
And 5, obtaining the equipment number through image decoding data, calculating an image data set P ', calling out the equipment gain circle obtained in the step 4, substituting P' into the gain circle to identify the image, and outputting the self-adaptive parameter k matched with the input image.
The normalization is calculated as: a_ij = (x_ij − min_t(x_tj)) / (max_t(x_tj) − min_t(x_tj)), where t is an index running over all investigated values under attribute j, used to find the maximum and minimum; j is the attribute of the object; i indexes the objects investigated under the attribute;
the tension balance condition Σ_j (O_j − x_i)·a_ij = 0 is satisfied, where O_j is the coordinate of each attribute anchor, x_i is the balance point position, and a_ij is the tension value of each attribute; the balance point position is then calculated as x_i = (Σ_j a_ij·O_j) / (Σ_j a_ij).
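The normalization and balance-point formulas above can be sketched as follows (the anchor layout and function names are ours):

```python
import numpy as np

def radviz_point(sample, data, anchors):
    """Place one sample inside the Radviz circle.

    sample : the m attribute values of one image, e.g. [A, X, Ymean]
    data   : (n, m) array of all investigated samples, for min-max scaling
    anchors: (m, 2) array O_j of attribute anchor coordinates on the circle
    """
    lo, hi = data.min(axis=0), data.max(axis=0)
    a = (sample - lo) / (hi - lo)     # a_ij = (x_ij - min) / (max - min)
    # Tension balance sum_j (O_j - x_i) a_ij = 0
    #   =>  x_i = sum_j a_ij O_j / sum_j a_ij
    return (a[:, None] * anchors).sum(axis=0) / a.sum()

# Three attributes evenly spaced on the unit circle, matching P = {A, X, Ymean}.
angles = 2 * np.pi * np.arange(3) / 3
anchors = np.stack([np.cos(angles), np.sin(angles)], axis=1)
```

With equal normalized tensions the point lands at the centroid of the anchors, i.e. the circle's center.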
According to the method, at least 1000 images are investigated and calculated, covering at least 20 different gains in different scenes. Different scenes refer to variations such as indoor and outdoor, backlight, and normal light; the gains are typically the digital gain, analog gain, ISP gain and other gains in the chip, which usually change with brightness. All image frames for these gains are distributed on a circle to obtain a distribution map of k: the observed automatic gain parameter k of each frame is placed on the circle according to its output value, forming a regular circle with 1000 distributed points.
Further, the k value of each position is filled in by software through interpolation, obtaining the automatic gain parameter k for all cases. Fig. 4 shows a distribution of the Radviz adaptive parameter k and illustrates a possible law of the k values, which may be an arc-shaped law or a straight-line law. For example, in the distribution along the middle vertical line, the adaptive parameter k may be smaller when the average brightness is higher.
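The interpolation over the gain circle might be sketched with inverse-distance weighting (the patent does not name a scheme; this choice is an assumption):

```python
import numpy as np

def interpolate_k(points, k_values, query):
    """Estimate k at an arbitrary position on the gain circle from
    the observed balance points, by inverse-distance weighting."""
    d = np.linalg.norm(points - query, axis=1)
    if d.min() < 1e-12:                 # query coincides with a sample point
        return float(k_values[np.argmin(d)])
    w = 1.0 / d ** 2                    # closer samples weigh more
    return float((w * k_values).sum() / w.sum())
```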
The image data set P = {A, X, Ymean} is normalized after singular points are eliminated. A singular point is an abnormal value of any of the 3 components A, X and Ymean. The valid range of each value can be set according to actual requirements, and only values within range are taken; for example, for the global atmospheric light value A, assuming the image bit width is Nbit, the value range of A is 0.7·2^N ≤ A ≤ 2^N, so that it does not significantly distort the result.
The global atmospheric light value A is calculated by taking the top 0.1% of pixels by brightness in the image and averaging their values; the average brightness Ymean of the image is calculated by summing the values of all pixels and dividing by the number of pixels.
The complexity X of the image texture is calculated through the histogram: let L be a gray level of the image, Lmean the mean gray level, and h(L) the corresponding normalized histogram; X is then calculated as X = Σ_L (L − Lmean)²·h(L).
It should be noted that:
reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
In addition, it should be noted that all equivalent or simple changes of the structure, features and principles described in the inventive concept are included in the scope of the present invention. Those skilled in the art may make various modifications or additions to the described embodiments or substitutions in a similar manner without departing from the scope of the invention as defined in the accompanying claims.

Claims (13)

1. The calculation method of the image defogging self-adaptive parameter is characterized by comprising the following steps:
step 1, acquiring an image data set P= { A, X, ymean }, wherein A represents a global atmospheric light value, X is the complexity of image textures, and Ymean is the average brightness of an image;
step 2, carrying out normalization processing on the data set P, substituting the P into a Radviz algorithm, and acquiring the balance point position in a circle of the Radviz algorithm to obtain a self-adaptive parameter k;
step 3, using a plurality of images with different gains, repeating the method of steps 1-2 to obtain a plurality of balance point positions, and distributing the balance point positions in a circle, using manually verified correct adaptive parameters k as the output values;
step 4, obtaining gain circles distributed with k of all cases through interpolation;
and 5, obtaining the equipment number through the image decoding data, calculating an image data set P ', calling out the equipment gain circle obtained in the step 4, substituting P' into the gain circle to identify the image, and outputting the self-adaptive parameter k matched with the input image.
2. The method for calculating an image defogging adaptive parameter according to claim 1, wherein the normalization is calculated as a_ij = (x_ij − min_t(x_tj)) / (max_t(x_tj) − min_t(x_tj)), where t is an index running over all investigated values under attribute j, used to find the maximum and minimum; j is the attribute of the object; i indexes the objects investigated under the attribute;
the tension balance condition Σ_j (O_j − x_i)·a_ij = 0 is satisfied, where O_j is the coordinate of each attribute anchor, x_i is the balance point position, and a_ij is the tension value of each attribute;
the balance point position is then calculated as x_i = (Σ_j a_ij·O_j) / (Σ_j a_ij).
3. The method for calculating the image defogging adaptive parameter according to claim 1, wherein the image data set P = {A, X, Ymean} is normalized after singular points are excluded.
4. The method for calculating an image defogging adaptive parameter according to claim 1, wherein the global atmospheric light value A is calculated by taking the top 0.1% of pixels by brightness in the image and averaging their values;
the average brightness Ymean calculation method of the image is to add up the values of all the pixel points and divide the values by the number of the pixel points.
5. The method for calculating image defogging adaptive parameters according to claim 1, wherein,
the complexity X of the image texture is calculated through the histogram: let L be a gray level of the image, Lmean the mean gray level, and h(L) the corresponding normalized histogram; X is then calculated as X = Σ_L (L − Lmean)²·h(L).
6. An image defogging method, characterized in that the adaptive parameter k calculated according to any one of claims 1-5 is obtained; the device number is obtained through the image decoding data; the corresponding device gain circle is called up; the image is identified; and the matched adaptive parameter k in the gain circle is output;
outputting the identified dark channel image through a dark channel prior algorithm;
identifying a sky area, and outputting an identified sky area image;
fusing the values of the dark channel image and the sky area image to obtain Iin;
performing edge-preserving filtering on Iin to obtain Y1, calculating Iin − Y1 and taking its absolute value Y2, and performing edge-preserving filtering once on Y2 to obtain Yf;
the atmospheric light curtain value yshuchu before defogging is calculated according to the formula: yshuchu=y1-Yf;
restoring the fog-free image by the calculation formula: Yout = (I(x) − Yshuchu) / (1 − Yshuchu/A);
I(x): the hazy image; A: the global atmospheric light value; Yout: the fog-free image.
7. The image defogging method according to claim 6, wherein the dark channel prior calculation formula is:
Idark(x) = min_{y∈s(x)} ( min_{c∈{r,g,b}} I_c(y) );
c is a channel variable; I_c(y) is any one color channel of the image I; Idark is the dark channel data; s(x) is a sliding window centered at x with radius R, where R is user-defined.
8. The image defogging method according to claim 6, wherein,
the sky area identification processing method comprises the following steps:
converting the three primary color image into ycbcr image, and extracting Y component;
the Y component is filtered by Gaussian filtering;
identifying gradient values of each pixel point of the Y component in each direction, comparing and selecting the maximum value to obtain output S, wherein the output S is used as the gradient of the image to be stored as the input of the next step;
setting 2 thresholds: a gray-level threshold on the Y component, and a variance threshold obtained by calculating the variance of the gradient maximum S; the sky region is cut out using the gray-level threshold and the variance threshold to obtain Isky.
9. The image defogging method according to claim 6, wherein the value fusion formula of the dark channel image and the sky region image is as follows:
I in =fix((b*Isky+(255-b)*Idark)/255);
and b is a weight parameter used for controlling the effect of finally processing the sky.
10. The image defogging method according to claim 6, wherein,
the edge-preserving filtering method is as follows: set a region of size r; identify pixels within radius r whose values are similar to the center pixel, with a similarity threshold d; when the similarity is greater than the threshold d, apply mean filtering and replace the original pixel value with the filtered value; regions whose similarity is below the threshold d are retained as boundary regions.
11. An image defogging method according to claim 8 or 9, wherein Isky is subjected to a small mean value filtering.
12. An image defogging system is characterized by comprising a front end module, a defogging module and a rear end module,
the front-end module comprises a video data collection module and an encoder, and transmits the collected video data and the sensor equipment pattern to the encoder for compression;
the back-end module comprises a video data coding and collecting module and a decoder, and decodes the data obtained through network transmission;
the video signal is input into the front-end module for video data acquisition and encoding, the processed data is transmitted to the defogging module through a network, defogging processing is carried out through the method of claim 6, and the defogged data is transmitted to the back-end module through the network for decoding.
13. The image defogging system of claim 12, wherein said defogging module comprises a dark channel processing module, a sky region processing module, and a special defogging processing module;
the dark channel processing module is used for identifying dark channel images;
the sky area processing module is used for identifying sky area images;
and the special defogging processing module is used for fusing the image data processed by the dark channel processing module and the sky area processing module, and performing defogging on the fused image data.
CN201911374990.4A 2019-12-27 2019-12-27 Image defogging self-adaptive parameter calculation method, image defogging method and system Active CN111192213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911374990.4A CN111192213B (en) 2019-12-27 2019-12-27 Image defogging self-adaptive parameter calculation method, image defogging method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911374990.4A CN111192213B (en) 2019-12-27 2019-12-27 Image defogging self-adaptive parameter calculation method, image defogging method and system

Publications (2)

Publication Number Publication Date
CN111192213A CN111192213A (en) 2020-05-22
CN111192213B true CN111192213B (en) 2023-11-14

Family

ID=70707734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911374990.4A Active CN111192213B (en) 2019-12-27 2019-12-27 Image defogging self-adaptive parameter calculation method, image defogging method and system

Country Status (1)

Country Link
CN (1) CN111192213B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004834B (en) * 2021-12-31 2022-04-19 山东信通电子股份有限公司 Method, equipment and device for analyzing foggy weather condition in image processing
CN116110053B (en) * 2023-04-13 2023-07-21 济宁能源发展集团有限公司 Container surface information detection method based on image recognition
CN116630349B (en) * 2023-07-25 2023-10-20 山东爱福地生物股份有限公司 Straw returning area rapid segmentation method based on high-resolution remote sensing image

Citations (7)

Publication number Priority date Publication date Assignee Title
JP2013141210A (en) * 2011-12-30 2013-07-18 Hitachi Ltd Image defogging apparatus, image defogging method, and image processing system
KR101426484B1 (en) * 2014-04-29 2014-08-06 한양대학교 산학협력단 System for processing spray image and the method for the same
CN104050162A (en) * 2013-03-11 2014-09-17 富士通株式会社 Data processing method and data processing device
CN106055580A (en) * 2016-05-23 2016-10-26 中南大学 Radviz-based fuzzy clustering result visualization method
CN106530246A (en) * 2016-10-28 2017-03-22 大连理工大学 Image dehazing method and system based on dark channel and non-local prior
WO2017175231A1 (en) * 2016-04-07 2017-10-12 Carmel Haifa University Economic Corporation Ltd. Image dehazing and restoration
CN209118462U (en) * 2019-01-04 2019-07-16 江苏弘冉智能科技有限公司 A front-end converged visual phased-array intelligent fire alarm system

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
TWI674804B (en) * 2018-03-15 2019-10-11 國立交通大學 Video dehazing device and method

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
JP2013141210A (en) * 2011-12-30 2013-07-18 Hitachi Ltd Image defogging apparatus, image defogging method, and image processing system
CN104050162A (en) * 2013-03-11 2014-09-17 富士通株式会社 Data processing method and data processing device
KR101426484B1 (en) * 2014-04-29 2014-08-06 한양대학교 산학협력단 System for processing spray image and the method for the same
WO2017175231A1 (en) * 2016-04-07 2017-10-12 Carmel Haifa University Economic Corporation Ltd. Image dehazing and restoration
CN106055580A (en) * 2016-05-23 2016-10-26 中南大学 Radviz-based fuzzy clustering result visualization method
CN106530246A (en) * 2016-10-28 2017-03-22 大连理工大学 Image dehazing method and system based on dark channel and non-local prior
CN209118462U (en) * 2019-01-04 2019-07-16 江苏弘冉智能科技有限公司 A front-end converged visual phased-array intelligent fire alarm system

Non-Patent Citations (1)

Title
An improved image dehazing algorithm based on the dark channel prior; 杨旭; 任世卿; 苗芳; Journal of Shenyang Ligong University (06); full text *

Also Published As

Publication number Publication date
CN111192213A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
CN108876743B (en) Image rapid defogging method, system, terminal and storage medium
CN111192213B (en) Image defogging self-adaptive parameter calculation method, image defogging method and system
CN116229276B (en) River entering pollution discharge detection method based on computer vision
US8280165B2 (en) System and method for segmenting foreground and background in a video
CN104794688B (en) Single image to the fog method and device based on depth information separation sky areas
CN112288658A (en) Underwater image enhancement method based on multi-residual joint learning
CN113766125B (en) Focusing method and device, electronic equipment and computer readable storage medium
US20050175253A1 (en) Method for producing cloud free and cloud-shadow free images
CN107705254B (en) City environment assessment method based on street view
CN108154492B (en) A kind of image based on non-local mean filtering goes haze method
CN112200746B (en) Defogging method and equipment for foggy-day traffic scene image
CN112053298B (en) Image defogging method
CN113657528B (en) Image feature point extraction method and device, computer terminal and storage medium
CN111476744A (en) Underwater image enhancement method based on classification and atmospheric imaging model
CN112258545A (en) Tobacco leaf image online background processing system and online background processing method
CN114004850A (en) Sky segmentation method, image defogging method, electronic device and storage medium
CN115660998A (en) Image defogging method based on deep learning and traditional priori knowledge fusion
CN117496019B (en) Image animation processing method and system for driving static image
CN111027564A (en) Low-illumination imaging license plate recognition method and device based on deep learning integration
CN110728645A (en) Image detail enhancement method and device based on guide filter regularization parameter and electronic equipment
CN107424134B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN108898561A (en) A kind of defogging method, server and the system of the Misty Image containing sky areas
CN116993614A (en) Defogging method for fused image of fine sky segmentation and transmissivity
CN111768355A (en) Method for enhancing image of refrigeration type infrared sensor
CN107292853B (en) Image processing method, image processing device, computer-readable storage medium and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 311400 4th floor, building 9, Yinhu innovation center, No.9 Fuxian Road, Yinhu street, Fuyang District, Hangzhou City, Zhejiang Province

Applicant after: Zhejiang Xinmai Microelectronics Co.,Ltd.

Address before: 311400 4th floor, building 9, Yinhu innovation center, No.9 Fuxian Road, Yinhu street, Fuyang District, Hangzhou City, Zhejiang Province

Applicant before: Hangzhou xiongmai integrated circuit technology Co.,Ltd.

GR01 Patent grant