CN113344796A - Image processing method, apparatus, device, and storage medium


Info

Publication number: CN113344796A
Application number: CN202010098881.0A
Authority: CN (China)
Prior art keywords: gray, image, value, pixel point, variance
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 刘恩雨, 李松南, 刘杉
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010098881.0A
Publication of CN113344796A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/73
    • G06T 5/90
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image

Abstract

The application relates to an image processing method, apparatus, device, and storage medium. The method comprises: acquiring an original image and determining the three-channel value of each pixel point in the original image; converting the original image into a gray image and determining the variance of the gray values in the gray image; adjusting the atmospheric light value of the image and the transmittance of each pixel point based on that variance; and inputting the three-channel value of each pixel point in the original image, the adjusted atmospheric light value, and the adjusted target transmittance into a preset atmospheric light scattering model to generate a restored image corresponding to the original image. The method can restore an original degraded image without causing image distortion.

Description

Image processing method, apparatus, device, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular to an image processing method, apparatus, device, and storage medium.
Background
During image acquisition, the captured image is affected by the surrounding environment. When fog, water vapor, dust, or other factors that may impair acquisition are present, the quality of the captured image drops, so the captured image is a degraded image. For example, when images are captured in foggy weather, the light reflected by objects in the field of view is scattered, refracted, and reflected by a large number of fine particles in the air before it reaches the acquisition device. The quality of the captured image is therefore reduced, contrast and sharpness decrease, and a large amount of detail is lost compared with the real scene, which greatly reduces the application value of the image and adversely affects industrial production and people's daily life.
Existing image defogging methods tend to distort the image when processing a foggy image, so an effective image processing method is needed.
Disclosure of Invention
An object of the present application is to provide an image processing method, apparatus, device, and storage medium that can restore an original degraded image without causing image distortion.
In order to solve the above technical problem, in one aspect, the present application provides an image processing method, including:
acquiring an original image, and determining the three-channel value of each pixel point in the original image;
determining a first atmospheric light value of the original image based on the three-channel value of each pixel point in the original image;
converting the original image into a gray image, and determining the variance of gray values in the gray image;
adjusting the first atmospheric light value based on the variance of the gray values in the gray map to obtain a second atmospheric light value;
obtaining a first transmittance of each pixel point in the original image based on the three-channel value of each pixel point in the original image and the second atmospheric light value;
adjusting the first transmittance of each pixel point in the original image based on the variance of the gray values in the gray image to obtain the target transmittance of each pixel point;
and inputting the three-channel value of each pixel point in the original image, the second atmospheric light value and the target transmittance into a preset atmospheric light scattering model to generate a restored image corresponding to the original image.
In another aspect, the present application provides an image processing apparatus, comprising:
the original image acquisition module is used for acquiring an original image and determining the three-channel value of each pixel point in the original image;
the first atmospheric light value determining module is used for determining a first atmospheric light value of the original image based on the three-channel value of each pixel point in the original image;
the variance determining module is used for converting the original image into a gray image and determining the variance of gray values in the gray image;
the atmospheric light value adjusting module is used for adjusting the first atmospheric light value based on the variance of the gray values in the gray map to obtain a second atmospheric light value;
the first transmittance determining module is used for obtaining a first transmittance of each pixel point in the original image based on the three-channel value of each pixel point in the original image and the second atmospheric light value;
the transmissivity adjusting module is used for adjusting the first transmissivity of each pixel point in the original image based on the variance of the gray values in the gray image to obtain the target transmissivity of each pixel point;
and the restored image generating module is used for inputting the three-channel value of each pixel point in the original image, the second atmospheric light value and the target transmittance into a preset atmospheric light scattering model to generate a restored image corresponding to the original image.
In another aspect, the present application provides an apparatus comprising a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the image processing method as described above.
In another aspect, the present application provides a computer storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded by a processor and executes the image processing method as described above.
The embodiment of the application has the following beneficial effects:
the method comprises the steps of converting an original image into a gray image, and determining the variance of gray values in the gray image; based on the variance of the gray values in the gray image, adjusting a first atmospheric light value of the original image and a first transmittance of each pixel point in the original image; and inputting the three-channel value of each pixel point in the original image, the adjusted atmospheric light value and the adjusted transmissivity into a preset atmospheric light scattering model to generate a restored image corresponding to the original image. According to the method and the device, the atmospheric light value and the transmissivity of the image are adjusted based on the variance of the gray values in the gray map, and the accuracy of estimating the atmospheric light value and the transmissivity is improved, so that the restored image obtained by calculation based on the adjusted atmospheric light value and the transmissivity is more fit with a real scene, the detail information of the original scene is restored, and the technical effect that the image distortion cannot be caused when the original degraded image is restored is achieved.
Drawings
In order to more clearly illustrate the technical solutions and advantages of the embodiments of the present application or of the prior art, the drawings required for describing the embodiments are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present application;
FIG. 3 is a flowchart of a method for calculating the variance of gray values in a gray image provided by an embodiment of the present application;
FIG. 4 is a flowchart of an atmospheric light value adjustment method provided by an embodiment of the present application;
FIG. 5 is a flowchart of a method for determining a target atmospheric light value adjustment coefficient provided by an embodiment of the present application;
FIG. 6 is a diagram illustrating the relationship between the atmospheric light value adjustment coefficient and the variance provided by an embodiment of the present application;
FIG. 7 is a flowchart of a method for adjusting the transmittance of a pixel point provided by an embodiment of the present application;
FIG. 8 is a flowchart of a method for determining a target transmittance adjustment coefficient provided by an embodiment of the present application;
FIG. 9 is a diagram illustrating the relationship between the transmittance adjustment coefficient and the variance provided by an embodiment of the present application;
FIG. 10 is a flowchart of a transmittance refinement method provided by an embodiment of the present application;
FIG. 11 is a schematic diagram illustrating an image processing effect provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of an image processing apparatus provided by an embodiment of the present application;
FIG. 13 is a schematic structural diagram of a device provided by an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, the present application will be further described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The image processing method provided by the embodiments of the present application relates to computer vision, a branch of artificial intelligence. Computer vision is the science of how to make machines "see": it uses cameras and computers in place of human eyes to identify, track, and measure targets, and further processes the images so that they become more suitable for human observation or for transmission to instruments for detection. Computer vision attempts to build artificial intelligence systems that can obtain information from images or multidimensional data, letting computers simulate the human visual process and perceive the environment. It combines technologies such as image processing, artificial intelligence, and pattern recognition.
Referring to fig. 1, a schematic diagram of an implementation environment provided by an embodiment of the present application is shown, where the implementation environment may include: at least a first terminal 110 and a second terminal 120, said first terminal 110 and said second terminal 120 being capable of data communication over a network.
Specifically, the first terminal 110 sends an original image to be restored to the second terminal 120, where the original image to be restored in this embodiment may be an image acquired in real time by an image acquisition device of the first terminal 110, or an image stored in the first terminal 110, where the image may include a video image or a separate image frame; the second terminal 120 receives the original image to be restored sent by the first terminal 110, and processes the original image to be restored based on the image processing model to obtain a restored image.
The first terminal 110 may communicate with the second terminal 120 based on a Browser/Server (B/S) mode or a Client/Server (C/S) mode. The first terminal 110 may include physical devices and may also include software running on those devices, such as application programs. The operating system running on the first terminal 110 in this embodiment of the present application may include, but is not limited to, Android, iOS, Linux, Windows, and the like.
The second terminal 120 and the first terminal 110 may establish a communication connection through a wired or wireless connection, and the second terminal 120 may include an independently operating server, or a distributed server, or a server cluster composed of multiple servers, where the server may be a cloud server.
In order to solve the problem of image distortion that may arise when a foggy image is defogged in the prior art, an embodiment of the present application provides an image processing method. The execution subject of the method may be the first terminal or the second terminal; that is, the method may be executed on the client or on the server. Referring to FIG. 2, the method may include:
s210, obtaining an original image, and determining the value of three channels of each pixel point in the original image.
The original image in the embodiment of the application may be a degraded image affected by environmental factors during acquisition, where a degraded image is one that has lost part of the scene detail compared with the real scene, such as a foggy image, a rainy image, or an image containing dust. Specifically, the original image may be a foggy image acquired in a foggy scene, determined either from a foggy video or directly from an independent foggy image frame. When the original image is determined from a foggy video, the video may be divided into image frames, and each frame is taken in turn as the original image for processing; when it is determined from an independent foggy image frame, that frame is taken directly as the original image to be processed. The image processing method provided by the embodiment of the application is therefore suitable for both video defogging and picture defogging scenarios.
Further, the original image in the embodiment of the present application may also be an image degraded over a long time, such as an old photograph, so the image processing method of the embodiment is also applicable to old-photo restoration scenarios.
The three-channel value of each pixel point refers to the values of the pixel's three RGB color channels, that is, the R-channel value, the G-channel value, and the B-channel value.
S220, determining a first atmospheric light value of the original image based on the value of three channels of each pixel point in the original image.
To determine the first atmospheric light value of the original image, a dark channel image corresponding to the original image may be generated based on the three-channel value of each pixel point in the original image, and the first atmospheric light value of the original image may then be calculated based on the single-channel value of each pixel point in the dark channel image.
Specifically, according to the dark channel prior theory, the minimum of the RGB three-channel values of each pixel point in the original image is taken as the gray value of the corresponding pixel in the dark channel image, thereby generating the dark channel image. The 0.1% of pixel points with the largest gray values in the dark channel image are then found, their corresponding pixel points in the original image are determined, and the means of those pixel points over the R, G, and B channels are calculated respectively. The resulting three per-channel means are used as the first atmospheric light value $A_0$ of the corresponding RGB channels.
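As a reference, this estimation step can be sketched in a few lines of NumPy. The function and parameter names below are illustrative assumptions of this description, not the patent's; the image is assumed to be a float RGB array scaled to [0, 1].

```python
import numpy as np

def estimate_atmospheric_light(img, top_frac=0.001):
    # img: float RGB image in [0, 1], shape (h, w, 3)
    dark = img.min(axis=2)                        # per-pixel minimum over R, G, B
    n = max(1, int(dark.size * top_frac))         # brightest 0.1% of the dark channel
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    a0 = img.reshape(-1, 3)[idx].mean(axis=0)     # per-channel mean -> A0
    return dark, a0
```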
The atmospheric light value estimated in this way is a rough and inaccurate estimate; in particular, when the image contains a large bright sky region, the estimated atmospheric light value is usually too large. The embodiment of the present application therefore adjusts the first atmospheric light value in subsequent steps.
And S230, converting the original image into a gray image, and determining the variance of gray values in the gray image.
To calculate the variance of the gray values in the gray image, the original image first needs to be converted into a gray image, after which the variance is computed. For the specific process, refer to FIG. 3; the method may include:
s310, for each pixel point in the original image, corresponding weights are respectively given to the three channel values of the pixel point, the weighted sum of the three channel values of the pixel point is calculated, and the weighted sum of the three channel values of the pixel point is determined as the gray value corresponding to the pixel point.
And S320, generating the gray image based on the gray value corresponding to each pixel point.
S330, calculating the average value of the gray values of all pixel points in the gray image.
And S340, calculating the variance of the gray values in the gray image based on the gray values of all the pixel points in the gray image and the average value of the gray values.
Specifically, the formula for converting a color RGB three-channel image into a grayscale image is:

$$\mathrm{Gray}(x) = R(x)\cdot w_1 + G(x)\cdot w_2 + B(x)\cdot w_3 \tag{1}$$

where $w_1 + w_2 + w_3 = 1$, and $R(x)$, $G(x)$, and $B(x)$ are the three-channel values of the corresponding pixel point.

Specifically, in the embodiment of the present application, the weights may be taken as $w_1 = 0.299$, $w_2 = 0.587$, $w_3 = 0.114$.
The gray values of all the pixel points in the gray image are averaged to obtain the mean gray value

$$\overline{\mathrm{Gray}} = \frac{1}{w \times h}\sum_{i=1}^{w \times h} \mathrm{Gray}(x_i)$$

After the gray value of each pixel point in the gray image and the mean are obtained, the variance $S^2$ of the gray values of the gray image can be calculated by the following formula:

$$S^2 = \frac{1}{w \times h}\sum_{i=1}^{w \times h}\left(\mathrm{Gray}(x_i) - \overline{\mathrm{Gray}}\right)^2 \tag{2}$$

where $w$ is the width of the gray image and $h$ is its height; that is, regarding the gray image as a pixel matrix, $w$ is the number of rows and $h$ is the number of columns of the pixel matrix. $\overline{\mathrm{Gray}}$ is the mean of the gray values of the pixels of the gray image, and $\mathrm{Gray}(x_i)$ is the gray value of the $i$-th pixel point of the gray image.
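For concreteness, equations (1) and (2) can be sketched as follows; NumPy's default variance (division by the number of pixels) matches the population variance of equation (2). Names are illustrative:

```python
import numpy as np

def gray_variance(img, w1=0.299, w2=0.587, w3=0.114):
    # img: float RGB image, shape (h, w, 3); the weights sum to 1 as in Eq. (1)
    gray = img[..., 0] * w1 + img[..., 1] * w2 + img[..., 2] * w3
    return gray, gray.var()  # population variance, Eq. (2)
```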
In the embodiment of the application, the gray value variance of the gray image is calculated so that the atmospheric light value and the transmittance can be adjusted subsequently. A small variance indicates little variation in image content; in a dense-fog scene, or under a fixed camera where the scene is far from the lens, the depth of field varies little, and a stronger degree of defogging is appropriate. A large variance indicates greater variation in image content; in hazy weather this more likely means the haze concentration is low, or that the depth of field varies greatly under a moving camera, so a weaker degree of defogging is appropriate.
S240, adjusting the first atmospheric light value based on the variance of the gray values in the gray map to obtain a second atmospheric light value.
Referring specifically to fig. 4, the method for adjusting the first atmospheric light value may include:
and S410, determining a target atmospheric light value adjusting coefficient corresponding to the variance of the gray scale values in the gray scale map based on the relation between the atmospheric light value adjusting coefficient and the variance.
The relationship between the atmospheric light value adjustment coefficient and the variance in the embodiment of the application can be determined based on a preset mapping relationship, so that a target atmospheric light value adjustment coefficient corresponding to the calculated variance of the gray value can be obtained according to the calculated variance of the gray value.
S420, determining the product of the target atmospheric light value adjustment coefficient and the first atmospheric light value as the second atmospheric light value.
Referring to fig. 5, a method for determining a target atmospheric light value adjustment coefficient according to an embodiment of the present application is shown, where the method may include:
and S510, judging whether the variance of the gray values in the gray map is smaller than a first threshold value.
S520, when the variance of the gray values in the gray map is smaller than a first threshold value, determining a first adjusting coefficient as the target atmospheric light value adjusting coefficient.
S530, when the variance of the gray values in the gray map is not smaller than a first threshold value, judging whether the variance of the gray values in the gray map is larger than a second threshold value.
And S540, when the variance of the gray values in the gray map is larger than a second threshold value, determining a second adjustment coefficient as the target atmospheric light value adjustment coefficient.
And S550, when the variance of the gray values in the gray map is not larger than a second threshold value, obtaining a third adjustment coefficient based on a first preset function, and determining the third adjustment coefficient as the target atmospheric light value adjustment coefficient.
When the variance of the gray values in the gray map is greater than or equal to the first threshold and less than or equal to the second threshold, a third adjustment coefficient is obtained based on a first preset function, and the third adjustment coefficient is determined to be the target atmospheric light value adjustment coefficient.
Wherein the first threshold is less than the second threshold.
As an example, the relationship between the atmospheric light value adjustment coefficient and the variance may be implemented by the following function:
$$\alpha = \begin{cases} \alpha_{\min}, & S^2 < k_1 \\[4pt] \dfrac{1-\alpha_{\min}}{k_2-k_1}\,(S^2-k_1)+\alpha_{\min}, & k_1 \le S^2 \le k_2 \\[4pt] 1, & S^2 > k_2 \end{cases} \tag{3}$$

The corresponding relationship diagram can be seen in FIG. 6, where $\alpha$ is the atmospheric light value adjustment coefficient, the minimum boundary of the variance is $k_1$, and the maximum boundary is $k_2$. When the variance is less than the minimum boundary $k_1$, the adjustment coefficient is set to its minimum value $\alpha_{\min}$, i.e., the first adjustment coefficient. When the variance is greater than the maximum boundary $k_2$, the adjustment coefficient takes its maximum value 1, i.e., the second adjustment coefficient, meaning the final atmospheric light value fully adopts the estimated value $A_0$. When the variance is greater than or equal to the minimum boundary $k_1$ and less than or equal to the maximum boundary $k_2$, the third adjustment coefficient is determined by the second case of equation (3), i.e.

$$\alpha = \frac{1-\alpha_{\min}}{k_2-k_1}\,(S^2-k_1)+\alpha_{\min} \tag{4}$$

That is, the first preset function in step S550 above may be equation (4).

In a specific embodiment, $k_1$ may be set to 20 and $k_2$ to 60; the values of $k_1$ and $k_2$ can be changed dynamically according to the characteristics of the video images.

$\alpha_{\min}$ is the minimum value of the adjustment coefficient. The adjustment coefficient cannot be less than or equal to 0: a coefficient of 0 would amount to judging that the image has no illumination, causing severe distortion, and a coefficient that is too small makes the image too dark to match the real scene. In the embodiment of the present application, $\alpha_{\min}$ may be set to 0.6; this value is for reference only and can be adjusted dynamically according to the characteristics of the video images.
In step S420, the product of the target atmospheric light value adjustment coefficient and the first atmospheric light value is determined as the second atmospheric light value; specifically, the estimated first atmospheric light value $A_0$ is multiplied by the atmospheric light value adjustment coefficient $\alpha$ to obtain the final atmospheric light value $A$:

$$A = \alpha \times A_0 \tag{5}$$
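A minimal sketch of equations (3)-(5), assuming the variance was computed on gray values in the 0-255 range (otherwise the thresholds $k_1$, $k_2$ must be rescaled); names are illustrative:

```python
def adjust_atmospheric_light(a0, s2, k1=20.0, k2=60.0, alpha_min=0.6):
    # Piecewise coefficient of Eq. (3), then A = alpha * A0 as in Eq. (5).
    if s2 < k1:
        alpha = alpha_min
    elif s2 > k2:
        alpha = 1.0
    else:
        alpha = (1.0 - alpha_min) / (k2 - k1) * (s2 - k1) + alpha_min
    return alpha * a0
```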
s250, obtaining a first transmittance of each pixel point in the original image based on the three-channel value of each pixel point in the original image and the second atmospheric light value.
According to the dark channel prior theory, the dark channel value of a fog-free image approaches 0, and theoretical derivation yields the following transmittance estimation formula:

$$t_0(x) = 1 - \frac{I_{\min}(x)}{A'} \tag{6}$$

where $I_{\min}(x)$ is the minimum of the three-channel values of pixel point $x$, and $A'$ is the atmospheric light value corresponding to the channel attaining that minimum. The first transmittance of each pixel point in the image is obtained from equation (6).
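Equation (6) can be sketched as below; the small floor on $A'$ is an assumed guard against division by zero, not part of the patent text:

```python
import numpy as np

def first_transmittance(img, a):
    # img: (h, w, 3) RGB on the same scale as the atmospheric light vector a
    c = img.argmin(axis=2)                  # channel attaining the per-pixel minimum
    i_min = img.min(axis=2)                 # I_min(x)
    a_prime = a[c]                          # A': atmospheric light of that channel
    return 1.0 - i_min / np.maximum(a_prime, 1e-6)   # Eq. (6)
```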
And S260, adjusting the first transmittance of each pixel point in the original image based on the variance of the gray values in the gray map to obtain the target transmittance of each pixel point.
Referring to fig. 7, a method for adjusting a pixel point transmittance according to an embodiment of the present disclosure is shown, where the method includes:
and S710, determining a target transmittance adjustment coefficient corresponding to the variance of the gray scale values in the gray scale map based on the relationship between the transmittance adjustment coefficient and the variance.
The relationship between the transmittance adjustment coefficient and the variance in the embodiment of the application can be determined based on a preset mapping relationship, so that the transmittance adjustment coefficient corresponding to the calculated variance of the gray value can be obtained according to the calculated variance of the gray value.
And S720, respectively calculating the sum of the first transmittance of each pixel point in the original image and the target transmittance adjustment coefficient to obtain the second transmittance of each pixel point.
And S730, comparing the second transmittance of each pixel point with a preset value respectively, and determining the smaller value of the second transmittance of each pixel point and the preset value as the optimized transmittance of each pixel point.
And S740, determining the optimized transmissivity of each pixel point as the target transmissivity of the pixel point.
Referring to fig. 8, a method for determining a target transmittance adjustment factor is shown, which may include:
and S810, judging whether the variance of the gray values in the gray map is smaller than a third threshold value.
And S820, when the variance of the gray values in the gray-scale image is smaller than a third threshold value, determining a fourth adjustment coefficient as the target transmissivity adjustment coefficient.
S830, when the variance of the gray values in the gray map is not smaller than a third threshold value, judging whether the variance of the gray values in the gray map is larger than a fourth threshold value.
S840, when the variance of the gray values in the gray map is larger than a fourth threshold value, determining a fifth adjustment coefficient as the target transmittance adjustment coefficient.
And S850, when the variance of the gray values in the gray map is not larger than a fourth threshold value, obtaining a sixth adjusting coefficient based on a second preset function, and determining the sixth adjusting coefficient as the target transmissivity adjusting coefficient.
When the variance of the gray values in the gray map is greater than or equal to the third threshold and less than or equal to the fourth threshold, a sixth adjustment coefficient is obtained based on a second preset function, and the sixth adjustment coefficient is determined as the target transmittance adjustment coefficient.
Wherein the third threshold is less than the fourth threshold.
As an example, the relationship of the transmittance adjustment coefficient to the variance may be implemented by the following function:
$$\beta = \begin{cases} 0, & S^2 < k_3 \\[4pt] \dfrac{\beta_{\max}}{k_4-k_3}\,(S^2-k_3), & k_3 \le S^2 \le k_4 \\[4pt] \beta_{\max}, & S^2 > k_4 \end{cases} \tag{7}$$

The corresponding relationship diagram can be seen in FIG. 9, where $\beta$ is the transmittance adjustment coefficient, and the minimum and maximum boundaries of the variance are $k_3$ and $k_4$. When the variance is less than the minimum boundary $k_3$, the adjustment coefficient is 0, i.e., the fourth adjustment coefficient. When the variance is greater than the maximum boundary $k_4$, the adjustment coefficient takes its maximum value $\beta_{\max}$, i.e., the fifth adjustment coefficient. When the variance is greater than or equal to the minimum boundary $k_3$ and less than or equal to the maximum boundary $k_4$, the sixth adjustment coefficient is determined by the second case of equation (7), i.e.

$$\beta = \frac{\beta_{\max}}{k_4-k_3}\,(S^2-k_3) \tag{8}$$

That is, the second preset function in step S850 above may be equation (8).

In a specific embodiment, $k_3$ may be set to 20, $k_4$ to 50, and the maximum adjustment coefficient $\beta_{\max}$ to 0.3; these values can be changed dynamically according to the characteristics of the video images.

After the target transmittance adjustment coefficient is obtained, following the methods of steps S720 and S730, the first transmittance $t_0$ is added to the target transmittance adjustment coefficient $\beta$, and the sum is compared with a preset value $\theta$; the smaller of the two gives the optimized transmittance $t_0'$:

$$t_0' = \min(t_0 + \beta,\ \theta) \tag{9}$$

The optimized transmittance $t_0'$ obtained above can be used as the target transmittance.
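A sketch of equations (7)-(9); the text does not give a concrete value for the preset upper bound $\theta$, so 0.95 below is an assumed example:

```python
import numpy as np

def target_transmittance(t0, s2, k3=20.0, k4=50.0, beta_max=0.3, theta=0.95):
    # beta from Eq. (7); t0 is the per-pixel first-transmittance map.
    if s2 < k3:
        beta = 0.0
    elif s2 > k4:
        beta = beta_max
    else:
        beta = beta_max / (k4 - k3) * (s2 - k3)   # Eq. (8)
    return np.minimum(t0 + beta, theta)           # Eq. (9), elementwise
```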
Further, the optimized transmittance $t_0'$ can be refined; a specific refinement method can be seen in FIG. 10, and it includes the following steps:
S1010, based on the three-channel value of each pixel point in the original image, the optimized transmittance of each pixel point is refined by guided filtering to obtain a third transmittance corresponding to each pixel point.
S1020, the third transmittance corresponding to each pixel point is determined as the target transmittance of each pixel point.
Guided filtering refines the transmittance $t_0'$ with the foggy image $I(x)$ as the guide image. Let $\omega_x$ denote the window centered on pixel $x$ and let $y$ be a pixel point in that window. The refined transmittance is expressed as

$$t(x) = \frac{1}{|\omega|}\sum_{y \in \omega_x}\left(a_y^{\mathsf T} I(x) + b_y\right) \tag{10}$$

Here $I(y)$ is a three-channel image of size $w \times h$ and $t$ is a single-channel $w \times h$ matrix; the final result for $t$ is obtained by processing the three channels of $I(y)$ separately and adding them. Likewise, $a_x$ is a three-channel $w \times h$ matrix and $b_x$ is a single-channel $w \times h$ matrix, where

$$a_x = \left(\Sigma_x + \varepsilon U\right)^{-1}\left(\frac{1}{|\omega|}\sum_{y \in \omega_x} I(y)\, t_0'(y) - \mu_x \bar{t}_{0,x}\right) \tag{11}$$

$$b_x = \bar{t}_{0,x} - a_x^{\mathsf T}\mu_x \tag{12}$$

In the numerator part of $a_x$, the three channels are processed separately, yielding a $w \times h$ matrix for each channel. $\mu_x$ and $\bar{t}_{0,x}$ are, respectively, the mean of the image $I(x)$ over the window $\omega_x$ and the mean of $t_0'$ over the window $\omega_x$; thus $\mu_x$ is a three-channel $w \times h$ matrix and $\bar{t}_{0,x}$ is a single-channel $w \times h$ matrix. $\Sigma_x$ is the covariance of the guide image $I$ in the window $\omega_x$, and $|\omega|$ is the number of pixels in the window $\omega_x$; the refined transmittance $t$ is finally obtained. In the embodiment of the application, the denominator part $\Sigma_x + \varepsilon U$ may be embodied as a $3 \times 3$ matrix, where $U$ is the $3 \times 3$ identity matrix; $\varepsilon$ is a regularization parameter, which may be set to $10^{-6}$; and the window size may be set to 16.
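The patent guides the filter with the three-channel foggy image, so equations (11)-(12) involve a $3 \times 3$ covariance per window. The sketch below is a simplified single-channel variant that uses the grayscale image as the guide, with normalized box filters computing the window means; this simplification and the OpenCV-based helper are assumptions of this description:

```python
import numpy as np
import cv2

def box(m, r):
    # Window mean with an r x r normalized box filter.
    return cv2.blur(m, (r, r))

def refine_transmittance(gray_guide, t0, r=16, eps=1e-6):
    g = gray_guide.astype(np.float64)
    t = t0.astype(np.float64)
    mean_g, mean_t = box(g, r), box(t, r)
    cov_gt = box(g * t, r) - mean_g * mean_t   # numerator of Eq. (11), scalar case
    var_g = box(g * g, r) - mean_g * mean_g    # denominator (variance of the guide)
    a = cov_gt / (var_g + eps)                 # Eq. (11)
    b = mean_t - a * mean_g                    # Eq. (12)
    return box(a, r) * g + box(b, r)           # Eq. (10): window-averaged a and b
```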
And S270, inputting the three-channel value of each pixel point in the original image, the second atmospheric light value and the target transmittance into a preset atmospheric light scattering model, and generating a restored image corresponding to the original image.
The atmospheric light scattering model is:
$$I(x) = J(x)\,t(x) + A\,(1 - t(x)) \tag{13}$$

where $I(x)$ is the foggy image, i.e., the observed image; $J(x)$ is the fog-free image, i.e., the image to be restored; $t(x)$ is the transmittance; $A$ is the atmospheric light value, i.e., the color vector of the atmospheric light; and $x$ is the pixel index.

The atmospheric light value $A$ and the transmittance $t(x)$ have been obtained through the above calculations, and $I(x)$ is known, so the defogged image $J(x)$ can be recovered from equation (13):

$$J(x) = \frac{I(x) - A}{t(x)} + A \tag{14}$$
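Inverting equation (13) per channel gives a short recovery routine. The lower bound on $t(x)$ and the final clipping to [0, 1] are common safeguards assumed here, not steps stated in the text:

```python
import numpy as np

def recover(img, a, t, t_min=0.1):
    # J = (I - A) / t + A, Eq. (14), applied per RGB channel.
    t = np.maximum(t, t_min)[..., None]        # guard against near-zero transmittance
    return np.clip((img - a) / t + a, 0.0, 1.0)
```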
A schematic illustration of the result of applying the image processing method of the embodiment of the present application to an image captured in foggy weather can be seen in FIG. 11. As FIG. 11 shows, after a foggy image is processed by the method of the present application, the defogging effect is obvious and the defogged image retains the information of the actual scene.
In practice, scenes requiring image defogging mostly fall into two types: footage from a fixed camera, such as traffic monitoring, and short videos shot with a moving camera, such as daily entertainment. The two have different characteristics. In traffic monitoring, the depth of field in the image is generally large, i.e., the scene is far from the lens, and the depth of field of the pixel points is similar; a moving camera usually captures objects at close range, and the depth of field varies greatly across the image. Owing to these scene limitations, the related defogging methods in the prior art cannot fully cover both shooting scenes. The image processing method provided by the embodiments of the present application has no such scene limitation: images captured in any scene can be processed with it, so the method applies flexibly to different scenes, adapts its degree of defogging, produces an obvious defogging effect, and does not cause distortion.
Related defogging methods in the prior art ignore inaccurate estimation of the atmospheric light value, so the generated defogged image looks dark and requires subsequent processing to raise its brightness. The image processing method of the present application adjusts the estimated atmospheric light value based on the gray value variance of the gray image, thereby avoiding a dark defogged image.
To summarize, the original image is converted into a gray image and the variance of the gray values in the gray image is determined; based on that variance, the first atmospheric light value of the original image and the first transmittance of each pixel point are adjusted; and the three-channel value of each pixel point in the original image, the adjusted atmospheric light value, and the adjusted transmittance are input into a preset atmospheric light scattering model to generate a restored image corresponding to the original image. Because the atmospheric light value and the transmittance are adjusted based on the variance of the gray values in the gray image, the accuracy of their estimation is improved; the restored image computed from the adjusted values therefore better fits the real scene and recovers the detail of the original scene, achieving the technical effect of restoring the original foggy image without causing image distortion.
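Putting the pieces together, a minimal sketch of the full flow of FIG. 2, reusing the helper functions sketched above (all of which are illustrative assumptions of this description, not the patent's own code):

```python
def dehaze(img):
    # img: float RGB image in [0, 1]
    gray, s2 = gray_variance(img)
    s2 *= 255.0 ** 2                        # thresholds k1..k4 assume 0-255 gray values
    _, a0 = estimate_atmospheric_light(img)
    a = adjust_atmospheric_light(a0, s2)    # Eqs. (3)-(5)
    t0 = first_transmittance(img, a)        # Eq. (6)
    t = target_transmittance(t0, s2)        # Eqs. (7)-(9)
    t = refine_transmittance(gray, t)       # Eqs. (10)-(12)
    return recover(img, a, t)               # Eqs. (13)-(14)
```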
Referring to fig. 12, an embodiment of the present application further provides an image processing apparatus, including:
an original image obtaining module 1210, configured to obtain an original image, and determine a three-channel value of each pixel in the original image;
a first atmospheric light value determining module 1220, configured to determine a first atmospheric light value of the original image based on three channel values of each pixel in the original image;
a variance determining module 1230, configured to convert the original image into a grayscale image, and determine a variance of grayscale values in the grayscale image;
the atmospheric light value adjusting module 1240 is configured to adjust the first atmospheric light value based on the variance of the gray scale values in the gray scale map to obtain a second atmospheric light value;
a first transmittance determining module 1250, configured to obtain a first transmittance of each pixel in the original image based on the three-channel value of each pixel in the original image and the second atmospheric light value;
a transmittance adjustment module 1260, configured to adjust a first transmittance of each pixel in the original image based on a variance of the gray scale values in the gray scale map, to obtain a target transmittance of each pixel;
the restored image generating module 1270 is configured to input the three-channel value of each pixel point in the original image, the second atmospheric light value, and the target transmittance into a preset atmospheric light scattering model, and generate a restored image corresponding to the original image.
Further, the variance determining module 1230 includes:
the gray value calculation module is used for respectively giving corresponding weights to the three-channel values of the pixel points for each pixel point in the original image, calculating the weighted sum of the three-channel values of the pixel points, and determining the weighted sum of the three-channel values of the pixel points as a gray value corresponding to the pixel point;
the gray image generation module is used for generating the gray image based on the gray value corresponding to each pixel point;
the gray value average value calculating module is used for calculating the average value of the gray values of all pixel points in the gray image;
and the gray value variance calculation module is used for calculating the variance of the gray values in the gray image based on the gray values of all the pixel points in the gray image and the average value of the gray values.
Further, the atmospheric light value adjustment module 1240 includes:
the target atmosphere light value adjusting coefficient determining module is used for determining a target atmosphere light value adjusting coefficient corresponding to the variance of the gray values in the gray map based on the relation between the atmosphere light value adjusting coefficient and the variance;
and the second atmosphere light value determining module is used for determining the product of the target atmosphere light value adjusting coefficient and the first atmosphere light value as the second atmosphere light value.
Further, the target atmosphere light value adjustment coefficient determination module includes:
the first determining module is used for determining a first adjusting coefficient as the target atmosphere light value adjusting coefficient when the variance of the gray values in the gray map is smaller than a first threshold value;
the second determining module is used for determining a second adjusting coefficient as the target atmosphere light value adjusting coefficient when the variance of the gray values in the gray map is larger than a second threshold value;
a third determining module, configured to, when a variance of a gray scale value in the gray scale map is greater than or equal to the first threshold and less than or equal to the second threshold, obtain a third adjustment coefficient based on a first preset function, and determine that the third adjustment coefficient is the target atmospheric light value adjustment coefficient;
wherein the first threshold is less than the second threshold.
Further, the transmittance adjustment module 1260 includes:
a target transmittance adjustment coefficient determination module for determining a target transmittance adjustment coefficient corresponding to the variance of the gray scale values in the gray scale map based on a relationship between the transmittance adjustment coefficient and the variance;
the second transmittance calculation module is used for calculating the sum of the first transmittance of each pixel point in the original image and the target transmittance adjustment coefficient respectively to obtain the second transmittance of each pixel point;
the optimized transmittance determining module is used for comparing the second transmittance of each pixel point with a preset value respectively and determining the smaller value of the second transmittance of each pixel point and the preset value as the optimized transmittance of each pixel point;
and the first target transmittance determining module is used for determining the optimized transmittance of each pixel point as the target transmittance of the pixel point.
Further, the target transmittance adjustment coefficient determination module includes:
a fourth determining module, configured to determine a fourth adjustment coefficient as the target transmittance adjustment coefficient when a variance of gray values in the gray map is smaller than a third threshold;
a fifth determining module, configured to determine a fifth adjustment coefficient as the target transmittance adjustment coefficient when a variance of gray values in the gray map is greater than a fourth threshold;
a sixth determining module, configured to, when a variance of gray scale values in the gray scale map is greater than or equal to the third threshold and less than or equal to the fourth threshold, obtain a sixth adjustment coefficient based on a second preset function, and determine that the sixth adjustment coefficient is the target transmittance adjustment coefficient;
wherein the third threshold is less than the fourth threshold.
Further, the apparatus may further include:
the transmissivity refining module is used for refining the optimized transmissivity of each pixel point by adopting guided filtering based on the three-channel value of each pixel point in the original image to obtain a third transmissivity corresponding to each pixel point;
and the second target transmittance determining module is used for determining the third transmittance corresponding to each pixel point as the target transmittance of each pixel point.
The apparatus provided in the above embodiments can execute the method provided in any embodiment of the present application and has the corresponding functional modules and beneficial effects for executing the method. For technical details not elaborated in the above embodiments, refer to the method provided in any embodiment of the present application.
The present embodiments also provide a computer-readable storage medium having stored therein at least one instruction, at least one program, set of codes, or set of instructions that is loaded by a processor and performs any of the methods described above in the present embodiments.
Referring to FIG. 13, the device 1300 may include one or more central processing units (CPUs) 1322 (e.g., one or more processors), a memory 1332, and one or more storage media 1330 (e.g., one or more mass storage devices) storing applications 1342 or data 1344. The memory 1332 and the storage medium 1330 may be transitory or persistent storage. The program stored on the storage medium 1330 may include one or more modules (not shown), each of which may include a series of instruction operations on the device. Further, the central processor 1322 may be arranged to communicate with the storage medium 1330 so that the series of instruction operations in the storage medium 1330 is executed on the device 1300. The device 1300 may also include one or more power supplies 1326, one or more wired or wireless network interfaces 1350, one or more input/output interfaces 1358, and/or one or more operating systems 1341, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on. Any of the methods described above in this embodiment can be implemented based on the device shown in FIG. 13.
This specification presents the method steps as described in the embodiments or flowcharts, but more or fewer steps may be included based on routine or non-inventive labor. The order of steps recited in the embodiments is only one of many possible execution orders and does not represent the only one. When an actual system or product executes, the steps may be performed sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the methods shown in the embodiments or figures.
The configurations shown in the present embodiment are only partial configurations related to the present application, and do not constitute a limitation on the devices to which the present application is applied, and a specific device may include more or less components than those shown, or combine some components, or have an arrangement of different components. It should be understood that the methods, apparatuses, and the like disclosed in the embodiments may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a division of one logic function, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or unit modules.
Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. An image processing method, comprising:
acquiring an original image, and determining the three-channel value of each pixel point in the original image;
determining a first atmospheric light value of the original image based on the three-channel value of each pixel point in the original image;
converting the original image into a gray image, and determining the variance of gray values in the gray image;
adjusting the first atmospheric light value based on the variance of the gray values in the gray map to obtain a second atmospheric light value;
obtaining a first transmittance of each pixel point in the original image based on the three-channel value of each pixel point in the original image and the second atmospheric light value;
adjusting the first transmittance of each pixel point in the original image based on the variance of the gray values in the gray image to obtain the target transmittance of each pixel point;
and inputting the three-channel value of each pixel point in the original image, the second atmospheric light value and the target transmittance into a preset atmospheric light scattering model to generate a restored image corresponding to the original image.
2. The method of claim 1, wherein the adjusting the first atmospheric light value based on the variance of the gray scale values in the gray scale map to obtain a second atmospheric light value comprises:
determining a target atmospheric light value adjustment coefficient corresponding to the variance of the gray scale values in the gray scale map based on the relationship between the atmospheric light value adjustment coefficient and the variance;
and determining the product of the target atmosphere light value adjustment coefficient and the first atmosphere light value as the second atmosphere light value.
3. An image processing method according to claim 2, wherein determining a target atmospheric light value adjustment coefficient corresponding to the variance of the gray scale values in the gray scale map based on the relationship between the atmospheric light value adjustment coefficient and the variance comprises:
when the variance of the gray values in the gray map is smaller than a first threshold value, determining a first adjustment coefficient as the target atmospheric light value adjustment coefficient;
when the variance of the gray values in the gray map is larger than a second threshold value, determining a second adjustment coefficient as the target atmospheric light value adjustment coefficient;
when the variance of the gray values in the gray map is greater than or equal to the first threshold and less than or equal to the second threshold, obtaining a third adjustment coefficient based on a first preset function, and determining the third adjustment coefficient as the target atmospheric light value adjustment coefficient;
wherein the first threshold is less than the second threshold.
4. The method of claim 1, wherein the adjusting the first transmittance of each pixel in the original image based on the variance of the gray scale values in the gray scale map to obtain the target transmittance of each pixel comprises:
determining a target transmittance adjustment coefficient corresponding to the variance of the gray values in the gray map based on the relationship between the transmittance adjustment coefficient and the variance;
respectively calculating the sum of the first transmittance of each pixel point in the original image and the target transmittance adjustment coefficient to obtain the second transmittance of each pixel point;
comparing the second transmittance of each pixel point with a preset value respectively, and determining the smaller value of the second transmittance of each pixel point and the preset value as the optimized transmittance of each pixel point;
and determining the optimized transmissivity of each pixel point as the target transmissivity of the pixel point.
5. The method of claim 4, wherein the determining the target transmittance adjustment coefficient corresponding to the variance of the gray values in the grayscale image based on the relationship between the transmittance adjustment coefficient and the variance comprises:
when the variance of the gray values in the grayscale image is smaller than a third threshold, determining a fourth adjustment coefficient as the target transmittance adjustment coefficient;
when the variance of the gray values in the grayscale image is larger than a fourth threshold, determining a fifth adjustment coefficient as the target transmittance adjustment coefficient;
when the variance of the gray values in the grayscale image is greater than or equal to the third threshold and less than or equal to the fourth threshold, obtaining a sixth adjustment coefficient based on a second preset function, and determining the sixth adjustment coefficient as the target transmittance adjustment coefficient;
wherein the third threshold is less than the fourth threshold.
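Claims 4 and 5 mirror the same pattern for transmittance: pick an additive coefficient from the variance, add it to each pixel's first transmittance, and clamp against a preset ceiling. A sketch under the same caveats (thresholds, coefficients, ceiling, and the "second preset function" are not specified by the claims and are assumed here):

```python
import numpy as np

def adjust_transmittance(t_first, variance, v3=400.0, v4=1600.0,
                         c4=0.1, c5=0.0, t_cap=0.95):
    """t_first is an H x W array of first transmittances; returns the optimized
    transmittance of claim 4. All numeric values are illustrative assumptions."""
    if variance < v3:
        coeff = c4                    # fourth adjustment coefficient
    elif variance > v4:
        coeff = c5                    # fifth adjustment coefficient
    else:
        # 'second preset function' assumed linear between c4 and c5
        coeff = c4 + (c5 - c4) * (variance - v3) / (v4 - v3)
    t_second = t_first + coeff        # claim 4: per-pixel sum
    return np.minimum(t_second, t_cap)  # smaller of second transmittance and preset value
```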
6. The method of claim 4, wherein after the determining the smaller of the second transmittance of each pixel point and the preset value as the optimized transmittance of the pixel point, the method further comprises:
refining the optimized transmittance of each pixel point using guided filtering, based on the three-channel values of each pixel point in the original image, to obtain a third transmittance for each pixel point;
and determining the third transmittance of each pixel point as the target transmittance of the pixel point.
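Claim 6 refines the optimized transmittance with guided filtering, using the three-channel original image as the guide. A sketch using OpenCV's ximgproc module (shipped in opencv-contrib-python); the radius and eps values are assumptions, not taken from the patent:

```python
import cv2
import numpy as np

def refine_transmittance(original_bgr, t_optimized, radius=40, eps=1e-3):
    """Guided-filter refinement of the optimized transmittance (claim 6 sketch).

    original_bgr -- H x W x 3 uint8 image used as the guide
    t_optimized  -- H x W float array of optimized transmittances
    """
    guide = original_bgr.astype(np.float32) / 255.0
    src = t_optimized.astype(np.float32)
    t_third = cv2.ximgproc.guidedFilter(guide, src, radius, eps)
    return np.clip(t_third, 0.0, 1.0)
```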
7. The method of claim 1, wherein the converting the original image into a grayscale image and the determining the variance of gray values in the grayscale image comprise:
for each pixel point in the original image, assigning a corresponding weight to each of the three channel values of the pixel point, calculating the weighted sum of the three channel values, and determining the weighted sum as the gray value of the pixel point;
generating the grayscale image from the gray value of each pixel point;
calculating the average of the gray values of all pixel points in the grayscale image;
and calculating the variance of the gray values in the grayscale image from the gray value of each pixel point and the average gray value.
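A sketch of claim 7's grayscale conversion and variance computation. The claim does not fix the per-channel weights; the common ITU-R BT.601 luma weights are assumed here:

```python
import numpy as np

def gray_variance(original_rgb):
    """Weighted-sum grayscale conversion followed by the variance of gray values."""
    weights = np.array([0.299, 0.587, 0.114])         # assumed BT.601 weights
    gray = original_rgb.astype(np.float64) @ weights  # gray value per pixel
    mean_gray = gray.mean()                           # average gray value
    return ((gray - mean_gray) ** 2).mean()           # variance of gray values
```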
8. An image processing apparatus, characterized by comprising:
an original image acquisition module, used for acquiring an original image and determining the three-channel values of each pixel point in the original image;
a first atmospheric light value determining module, used for determining a first atmospheric light value of the original image based on the three-channel values of each pixel point in the original image;
a variance determining module, used for converting the original image into a grayscale image and determining the variance of gray values in the grayscale image;
an atmospheric light value adjusting module, used for adjusting the first atmospheric light value based on the variance of the gray values in the grayscale image to obtain a second atmospheric light value;
a first transmittance determining module, used for obtaining a first transmittance of each pixel point in the original image based on the three-channel values of each pixel point in the original image and the second atmospheric light value;
a transmittance adjusting module, used for adjusting the first transmittance of each pixel point in the original image based on the variance of the gray values in the grayscale image to obtain the target transmittance of each pixel point;
and a restored image generating module, used for inputting the three-channel values of each pixel point in the original image, the second atmospheric light value and the target transmittance into a preset atmospheric light scattering model to generate a restored image corresponding to the original image.
9. An apparatus comprising a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and wherein the at least one instruction or the at least one program is loaded and executed by the processor to implement the image processing method according to any one of claims 1 to 7.
10. A computer storage medium, characterized in that at least one instruction or at least one program is stored in the storage medium, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the image processing method according to any one of claims 1 to 7.
CN202010098881.0A | Priority date: 2020-02-18 | Filing date: 2020-02-18 | Status: Pending | Publication: CN113344796A (en) | Title: Image processing method, device, equipment and storage medium

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010098881.0A | 2020-02-18 | 2020-02-18 | Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010098881.0A | 2020-02-18 | 2020-02-18 | Image processing method, device, equipment and storage medium

Publications (1)

Publication Number | Publication Date
CN113344796A | 2021-09-03

Family

ID=77466966

Family Applications (1)

Application Number | Status | Publication | Title
CN202010098881.0A | Pending | CN113344796A (en) | Image processing method, device, equipment and storage medium

Country Status (1)

Country | Link
CN (1) | CN113344796A (en)

Cited By (6)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN114022397A (en) * | 2022-01-06 | 2022-02-08 | 广东欧谱曼迪科技有限公司 | Endoscope image defogging method and device, electronic equipment and storage medium
CN114022397B (en) * | 2022-01-06 | 2022-04-19 | 广东欧谱曼迪科技有限公司 | Endoscope image defogging method and device, electronic equipment and storage medium
CN115170443A (en) * | 2022-09-08 | 2022-10-11 | 荣耀终端有限公司 | Image processing method, shooting method and electronic equipment
CN115170443B (en) * | 2022-09-08 | 2023-01-13 | 荣耀终端有限公司 | Image processing method, shooting method and electronic equipment
CN116703787A (en) * | 2023-08-09 | 2023-09-05 | 中铁建工集团第二建设有限公司 | Building construction safety risk early warning method and system
CN116703787B (en) * | 2023-08-09 | 2023-10-31 | 中铁建工集团第二建设有限公司 | Building construction safety risk early warning method and system

Similar Documents

Publication Publication Date Title
CN113344796A (en) Image processing method, device, equipment and storage medium
CN107103591B (en) Single image defogging method based on image haze concentration estimation
CN107767354A (en) A kind of image defogging algorithm based on dark primary priori
US20210004947A1 (en) Evaluation system, evaluation device, evaluation method, evaluation program, and recording medium
CN113269862A (en) Scene-adaptive fine three-dimensional face reconstruction method, system and electronic equipment
CN110147816B (en) Method and device for acquiring color depth image and computer storage medium
CN107292272B (en) Method and system for recognizing human face in real-time transmission video
US20230059499A1 (en) Image processing system, image processing method, and non-transitory computer readable medium
CN113744315B (en) Semi-direct vision odometer based on binocular vision
JP5927728B2 (en) Image fog removal apparatus, image fog removal method, and image processing system
CN111626087A (en) Neural network training and eye opening and closing state detection method, device and equipment
CN114463389B (en) Moving target detection method and detection system
CN112703532A (en) Image processing method, device, equipment and storage medium
CN112396016B (en) Face recognition system based on big data technology
CN111738241B (en) Pupil detection method and device based on double cameras
CN109191405B (en) Aerial image defogging algorithm based on transmittance global estimation
CN113989164B (en) Underwater color image restoration method, system and storage medium
CN112598777B (en) Haze fusion method based on dark channel prior
CN108629333A (en) A kind of face image processing process of low-light (level), device, equipment and readable medium
CN111784658B (en) Quality analysis method and system for face image
CN109961413B (en) Image defogging iterative algorithm for optimized estimation of atmospheric light direction
CN113920023A (en) Image processing method and device, computer readable medium and electronic device
CN110781712B (en) Human head space positioning method based on human face detection and recognition
CN111899198A (en) Defogging method and device for marine image
Zhang et al. An improved aerial remote sensing image defogging method based on dark channel prior information

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40051857
Country of ref document: HK

SE01 Entry into force of request for substantive examination