CN112102204B - Image enhancement method and device and electronic equipment - Google Patents


Info

Publication number
CN112102204B
CN112102204B (application CN202011036003.2A)
Authority
CN
China
Prior art keywords
image
information
brightness
enhanced
processed
Prior art date
Legal status
Active
Application number
CN202011036003.2A
Other languages
Chinese (zh)
Other versions
CN112102204A
Inventor
王诗韵
李瑮
吴家新
Current Assignee
Suzhou Keda Technology Co Ltd
Original Assignee
Suzhou Keda Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Keda Technology Co Ltd filed Critical Suzhou Keda Technology Co Ltd
Priority to CN202011036003.2A
Publication of CN112102204A
Priority to PCT/CN2021/082754 (WO2022062346A1)
Application granted
Publication of CN112102204B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T7/00 Image analysis → G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement → G06T2207/10 Image acquisition modality → G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details → G06T2207/20081 Training; Learning
    • G06T2207/20 Special algorithmic details → G06T2207/20212 Image combination → G06T2207/20221 Image fusion; Image merging

Abstract

The invention relates to the technical field of image processing, and in particular to an image enhancement method, an image enhancement device, and electronic equipment. The method comprises: obtaining an image to be processed corresponding to an original image and extracting color information and saturation information of the original image; inputting the image to be processed into a brightness enhancement model to obtain enhanced brightness information; and fusing the color information, the saturation information, and the enhanced brightness information to determine the enhanced image. Because the image to be processed serves as the input of the brightness enhancement model and only enhanced brightness information is output, which is then fused with the color information and saturation information of the original image, the resulting enhanced image keeps the color information and saturation information of the image to be processed unchanged while only the brightness information is enhanced, thereby effectively solving the color cast problem of enhanced images.

Description

Image enhancement method and device and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to an image enhancement method and device and electronic equipment.
Background
In a low-illumination environment or under backlight conditions, images often suffer from low brightness, heavy noise, and poor contrast. This subjectively degrades how people perceive the image and also affects subsequent pixel-level tasks, such as license plate recognition under night-time video surveillance or face recognition under backlight. Image enhancement technology can reduce image noise, improve the contrast between objects and the background, and improve the visual effect of the image, forming the basis of subsequent recognition tasks. Research on low-illumination image enhancement methods therefore has considerable practical significance and value.
The image enhancement method currently adopted is to acquire an image captured by an image sensor, input it into an image enhancement model, and output an enhancement of the entire input image. However, because the whole image is enhanced in this process, the enhanced image is prone to color cast.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image enhancement method, an image enhancement device, and an electronic device, so as to solve the color cast problem of an enhanced image.
According to a first aspect, an embodiment of the present invention provides an image enhancement method, including:
acquiring an image to be processed corresponding to an original image and extracting color information and saturation information of the original image;
inputting the image to be processed into a brightness enhancement model to obtain enhanced brightness information;
and fusing the color information, the saturation information and the enhanced brightness information to determine the enhanced image to be processed.
According to the image enhancement method provided by the embodiment of the invention, the image to be processed is used as the input of the brightness enhancement model, which outputs enhanced brightness information; this enhanced brightness information is then fused with the color information and saturation information of the original image. The resulting enhanced image keeps the color information and saturation information of the image to be processed unchanged and enhances only the brightness information, thereby effectively solving the color cast problem of the enhanced image. Furthermore, because the brightness enhancement model enhances only brightness information, memory overhead is reduced, allowing the method to run on terminal devices and enhance the images they acquire in real time.
With reference to the first aspect, in a first implementation manner of the first aspect, the inputting the image to be processed into a brightness enhancement model to obtain enhanced brightness information includes:
performing at least one time of feature extraction on the image to be processed by utilizing at least one convolution unit in the brightness enhancement model;
restoring image details of output information of the convolution unit or the transposed convolution unit by utilizing at least one attention mechanism unit in the brightness enhancement model;
decoding output information of the attention mechanism unit with at least one transposed convolution unit in the brightness enhancement model;
and fusing the output information of the first convolution unit connected with the input layer of the brightness enhancement model with the output information of the last attention mechanism unit to determine the enhanced brightness information.
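The encode → attention → decode → skip-fusion flow above can be wired up abstractly as follows. This is a structural sketch only: each unit is a placeholder callable standing in for a convolution, attention, or transposed-convolution block, and the exact skip wiring is an assumption based on the description of Fig. 3.

```python
def luma_enhance(x, conv_units, attn_units, deconv_units, fuse):
    # encoder: sequential convolution units, keeping outputs for skips
    feats = []
    h = x
    for conv in conv_units:
        h = conv(h)
        feats.append(h)
    # first attention unit recovers detail on the deepest features
    h = attn_units[0](h)
    # decoder: transposed convolution, then attention over (decoded + skip)
    skips = feats[-2::-1]   # matching encoder outputs, deepest first
    for attn, deconv, skip in zip(attn_units[1:], deconv_units, skips):
        h = attn(deconv(h) + skip)
    # fuse the first convolution unit's output with the last attention output
    return fuse(feats[0], h)
```

With identity-like placeholders this runs end to end, which makes the data flow of the claim easy to check before substituting real layers.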
According to the image enhancement method provided by the embodiment of the invention, the transposed convolution unit is utilized to decode the output information of the attention mechanism unit in the brightness enhancement model, and the attention mechanism unit is utilized to recover the image details, so that the brightness enhancement model can more accurately enhance the brightness information of the image to be processed.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the restoring image details of the output information of the convolution unit or the transposed convolution unit by using at least one attention mechanism unit in the luminance enhancement model includes:
performing convolution processing on the output information of the convolution unit or the output information of the transposed convolution unit by utilizing a convolution layer in the attention mechanism unit to obtain first output information;
performing global pooling and full-connection processing on the first output information to obtain second output information;
and calculating the product of the first output information and the second output information to obtain an image detail recovery result corresponding to the output information of the convolution unit or to the output information of the transposed convolution unit.
According to the image enhancement method provided by the embodiment of the invention, the details of the output information can be enhanced by processing the first output information in the attention mechanism unit in a global pooling manner and in a full connection manner, so that the detail recovery effect of the attention mechanism unit is improved.
With reference to the first implementation manner of the first aspect, in a third implementation manner of the first aspect, the inputting the image to be processed into a brightness enhancement model to obtain enhanced brightness information includes:
inputting the image to be processed into the first convolution unit to obtain the feature extraction result of the first convolution unit;
inputting the feature extraction result of the first convolution unit into the subsequent convolution unit to obtain its feature extraction result, wherein the brightness enhancement model comprises at least one such convolution unit;
inputting the feature extraction result of a convolution unit together with the decoding result of the corresponding transposed convolution unit into the first attention mechanism unit to obtain an image detail recovery result, wherein the input of that transposed convolution unit is the output of a second attention mechanism unit;
and fusing the output of the last attention mechanism unit with the output of the first convolution unit to determine the enhanced brightness information.
With reference to the first implementation manner of the first aspect, in a fourth implementation manner of the first aspect, the fusing the output information of the first convolution unit connected to the input layer of the luminance enhancement model and the output information of the last attention mechanism unit to determine the enhanced luminance information includes:
calculating the sum of the output information of the first convolution unit and the output information of the last attention mechanism unit to obtain third output information;
and performing convolution processing on the third output information to obtain the enhanced brightness information.
According to the image enhancement method provided by the embodiment of the invention, the third output information is output after convolution processing, which reduces the checkerboard artifacts caused by transposed convolution and improves the reliability of the enhanced brightness information.
With reference to the first aspect, in a fifth implementation manner of the first aspect, the acquiring an image to be processed corresponding to an original image includes:
acquiring an original image;
and preprocessing the original image to obtain the image to be processed, wherein all pixel points in the image to be processed are arranged according to a preset rule.
According to the image enhancement method provided by the embodiment of the invention, the acquired original image is preprocessed to obtain an image to be processed whose pixel points are arranged according to a preset rule; that is, the input of the brightness enhancement model is an image formed by pixel points arranged according to the preset rule. The method can therefore be applied to various devices, improving its adaptability to new devices.
With reference to the first aspect or any one of the first to fifth embodiments of the first aspect, in a sixth embodiment of the first aspect, the brightness enhancement model is trained by:
obtaining sample images of two kinds of brightness information of at least one scene to obtain a first sample image corresponding to the first brightness information and a second sample image corresponding to the second brightness information;
inputting the first sample image into a brightness enhancement model to obtain enhanced brightness information;
and updating the parameters of the brightness enhancement model based on the enhanced brightness information and the corresponding brightness information of the second sample image.
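The train-on-paired-brightness idea above can be illustrated with a deliberately tiny stand-in: a one-parameter gamma curve in place of the network, fitted by finite-difference gradient descent on an L1 loss. This is purely illustrative; the patent's model is a convolutional network trained by backpropagation on the composite loss described below.

```python
import numpy as np

def model(y, gamma):
    # one-parameter stand-in for the brightness enhancement model
    return np.clip(y, 1e-6, 1.0) ** gamma

def train_step(y_low, y_ref, gamma, lr=0.05, eps=1e-4):
    # update the parameter toward the reference (second sample) brightness
    loss = lambda g: np.abs(model(y_low, g) - y_ref).mean()
    grad = (loss(gamma + eps) - loss(gamma - eps)) / (2 * eps)  # finite diff
    return gamma - lr * grad

# paired samples of one scene: dark first sample, brighter second sample
y_low = np.full((8, 8), 0.25)
y_ref = np.full((8, 8), 0.5)   # since 0.25 ** 0.5 == 0.5, the ideal gamma is 0.5
gamma = 1.0
for _ in range(200):
    gamma = train_step(y_low, y_ref, gamma)
```

After a few hundred steps the parameter settles near the value that maps the dark sample onto the reference, mirroring how the real model's parameters are pulled toward reproducing the second sample's brightness.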
According to the image enhancement method provided by the embodiment of the invention, the brightness enhancement model is trained by acquiring the sample images with different brightness information in different scenes, so that the diversity of the samples is ensured, and the finally trained image enhancement model can recover the details more accurately and has less noise.
With reference to the sixth implementation manner of the first aspect, in a seventh implementation manner of the first aspect, the obtaining sample images of two kinds of luminance information of at least one scene to obtain a first sample image corresponding to the first luminance information and a second sample image corresponding to the second luminance information includes:
acquiring original sample images of two kinds of brightness information of at least one scene;
preprocessing the original sample image to obtain the first sample image and the second sample image, wherein each pixel point in the first sample image and each pixel point in the second sample image are arranged according to a preset rule.
According to the image enhancement method provided by the embodiment of the invention, before the image is input into the brightness enhancement model, the original sample image is preprocessed to obtain the first sample image and the second sample image of which the pixel points are arranged according to the preset rule, so that the image enhancement method can process various types of original images, the image enhancement method can be operated on various terminal devices, and the adaptability of the method to new devices is improved.
With reference to the sixth implementation manner of the first aspect, in an eighth implementation manner of the first aspect, the updating the parameters of the luminance enhancement model based on the enhanced luminance information and the corresponding luminance information of the second sample image includes:
calculating a value of a first loss function using the difference between the enhanced luminance information and the luminance information of the second sample image;
calculating a value of a second loss function by using a preset feature vector in the brightness enhancement model, the enhanced brightness information and the brightness information of the second sample image;
calculating a value of a third loss function by using the enhanced brightness information and the similarity of the brightness information of the second sample image;
carrying out weighted summation on the value of the first loss function, the value of the second loss function and the value of the third loss function to obtain the value of a target loss function;
updating parameters of the brightness enhancement model based on the value of the target loss function.
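The weighted-sum objective above can be sketched as follows. The concrete loss forms are illustrative assumptions: an L1 difference for the first loss, a distance on a hypothetical feature projection `feat` for the second (standing in for the patent's preset feature vector), a cosine-similarity term for the third, and arbitrary example weights.

```python
import numpy as np

def l1_loss(pred, target):
    # first loss: difference between enhanced and reference luminance
    return np.abs(pred - target).mean()

def feature_loss(pred, target, feat):
    # second loss: distance on a hypothetical feature projection `feat`
    # (an assumption, not the patent's actual preset feature vector)
    return np.abs(feat(pred) - feat(target)).mean()

def similarity_loss(pred, target):
    # third loss: 1 - cosine similarity of the flattened luminance maps
    p, t = pred.ravel(), target.ravel()
    cos = p @ t / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-8)
    return 1.0 - cos

def total_loss(pred, target, feat, w=(1.0, 0.1, 0.5)):
    # weighted sum of the three losses; weights are illustrative
    return (w[0] * l1_loss(pred, target)
            + w[1] * feature_loss(pred, target, feat)
            + w[2] * similarity_loss(pred, target))
```

A matching prediction drives all three terms to (near) zero, while any brightness deviation raises the target loss, which is what the parameter update acts on.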
According to the image enhancement method provided by the embodiment of the invention, the loss functions are calculated from the enhanced brightness information and the brightness information of the second sample image, using respectively their difference, their similarity, and their combination with the preset feature vector. This improves the accuracy of the loss calculation and provides an accurate basis for updating the parameters of the brightness enhancement model.
According to a second aspect, an embodiment of the present invention further provides an image enhancement apparatus, including:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an image to be processed corresponding to an original image and extracting color information and saturation information of the original image;
the brightness enhancement module is used for inputting the image to be processed into a brightness enhancement model to obtain enhanced brightness information;
and the fusion module is used for fusing the color information, the saturation information and the enhanced brightness information to determine the enhanced image to be processed.
According to the image enhancement device provided by the embodiment of the invention, the image to be processed is used as the input of the brightness enhancement model, which outputs enhanced brightness information; this enhanced brightness information is then fused with the color information and saturation information of the original image. The resulting enhanced image keeps the color information and saturation information of the image to be processed unchanged and enhances only the brightness information, thereby effectively solving the color cast problem of the enhanced image. Furthermore, because the brightness enhancement model enhances only brightness information, memory overhead is reduced, allowing the device to enhance images acquired by terminal equipment in real time.
According to a third aspect, an embodiment of the present invention provides an electronic device, including: a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, and the processor executing the computer instructions to perform the image enhancement method according to the first aspect or any one of the embodiments of the first aspect.
According to a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer instructions for causing a computer to execute the image enhancement method described in the first aspect or any one of the implementation manners of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow diagram of an image enhancement method according to an embodiment of the invention;
FIG. 2 is a flow chart of an image enhancement method according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a structure of a luminance enhancement model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an attention mechanism unit according to an embodiment of the present invention;
FIG. 5 is a flow chart of an image enhancement method according to an embodiment of the present invention;
FIGS. 6 a-6 b are schematic diagrams of image cropping according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a front-to-back comparison of image enhancement according to an embodiment of the present invention;
fig. 8 is a block diagram of the structure of an image enhancement apparatus according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, the application scenarios of the image enhancement method provided by the embodiment of the present invention are as follows: the image acquisition equipment acquires an original image, and the acquired original image is subjected to image enhancement processing by using the image enhancement method in the embodiment of the invention to obtain an enhanced image. The image enhancement method is particularly suitable for the real-time enhancement processing of low-illumination images.
In accordance with an embodiment of the present invention, there is provided an image enhancement method embodiment, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
In this embodiment, an image enhancement method is provided, which can be used in the above-mentioned electronic devices, such as an image capturing device, a computer, a mobile phone, and the like, and fig. 1 is a flowchart of an image enhancement method according to an embodiment of the present invention, and as shown in fig. 1, the flowchart includes the following steps:
and S11, acquiring the image to be processed corresponding to the original image and extracting the color information and the saturation information of the original image.
The original image may be acquired by the electronic device in real time or may be stored in the storage space of the electronic device; the source of the original image is not limited here. The image to be processed corresponding to the original image may be the original image itself, or may be obtained by preprocessing the original image, which is likewise not limited here.
As described above, the electronic device may be an image capturing device, and accordingly, the image capturing device may capture an image in real time and perform image enhancement processing on the captured real-time image.
The electronic device processes the image data of the original image to obtain color information and saturation information of the original image. For example, if the original image is an RGB image, the RGB image may be converted into a YUV image by using image format conversion, so that the color information and saturation information of the original image may be obtained. Wherein, U represents the color information of the image to be processed, V represents the saturation information of the image to be processed, and Y represents the brightness information of the image to be processed.
And S12, inputting the image to be processed into the brightness enhancement model to obtain the enhanced brightness information.
After acquiring the image to be processed, the electronic device inputs the image to be processed into a brightness enhancement model, and the output of the brightness enhancement model is enhanced brightness information of the image to be processed. Namely, the input of the brightness enhancement model is the image to be processed, and the output is the enhanced brightness information.
The specific structural details of the brightness enhancement model are not limited herein, and may be set according to the actual situation, for example, the combination of the convolution network and the attention mechanism may be used to enhance the brightness information; other reinforcement learning models may also be utilized to output enhanced luminance information, and so on.
S13, fusing the color information, the saturation information and the enhanced brightness information, and determining the enhanced image to be processed.
After obtaining the enhanced luminance information in S12, the electronic device fuses it with the color information and the saturation information obtained in S11 to obtain the enhanced image. For example, the electronic device may perform image format conversion using the enhanced luminance information Y', the color information U, and the saturation information V to obtain an enhanced RGB image.
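The S11-S13 flow can be sketched end to end. The sketch assumes BT.601 RGB↔YUV conversion matrices and uses a hypothetical gamma lift `enhance_luma` as a stand-in for the trained brightness enhancement model; only the Y channel is modified, U and V pass through unchanged.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """BT.601 RGB -> YUV conversion (channel values in [0, 1])."""
    m = np.array([[0.299, 0.587, 0.114],
                  [-0.14713, -0.28886, 0.436],
                  [0.615, -0.51499, -0.10001]])
    return rgb @ m.T

def yuv_to_rgb(yuv):
    """Inverse BT.601 conversion."""
    m = np.array([[1.0, 0.0, 1.13983],
                  [1.0, -0.39465, -0.58060],
                  [1.0, 2.03211, 0.0]])
    return yuv @ m.T

def enhance_luma(y):
    # hypothetical stand-in for the brightness enhancement model:
    # a simple gamma lift applied to the luminance channel only
    return np.clip(y, 0.0, 1.0) ** 0.5

def enhance_image(rgb):
    yuv = rgb_to_yuv(rgb)                         # S11: extract Y, U, V
    y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
    y_enhanced = enhance_luma(y)                  # S12: enhance brightness only
    out = np.stack([y_enhanced, u, v], axis=-1)   # S13: fuse with unchanged U, V
    return yuv_to_rgb(out)
```

Running this on a dark gray image brightens it while leaving it gray, which is exactly the no-color-cast property the method claims: chroma is untouched, so neutral inputs stay neutral.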
In the image enhancement method provided by this embodiment, the image to be processed is used as the input of the brightness enhancement model, which outputs enhanced brightness information; this is then fused with the color information and saturation information of the original image. The resulting enhanced image keeps the color information and saturation information of the image to be processed unchanged and enhances only the brightness information, thereby effectively solving the color cast problem of the enhanced image. Furthermore, because the brightness enhancement model enhances only brightness information, memory overhead is reduced, allowing the method to run on terminal devices and enhance the images they acquire in real time.
In this embodiment, an image enhancement method is provided, which can be used in the above-mentioned electronic devices, such as an image capturing device, a computer, a mobile phone, etc., fig. 2 is a flowchart of the image enhancement method according to the embodiment of the present invention, and as shown in fig. 2, the flowchart includes the following steps:
and S21, acquiring the image to be processed corresponding to the original image and extracting the color information and the saturation information of the original image.
Please refer to S11 in fig. 1, which is not described herein again.
And S22, inputting the image to be processed into the brightness enhancement model to obtain the enhanced brightness information.
Fig. 3 shows a schematic structural diagram of a brightness enhancement model, and it should be noted that fig. 3 is only an alternative structural diagram of the brightness enhancement model, but the scope of the present invention is not limited thereto, and the configuration may be set according to actual situations. For example, the luminance enhancement model shown in fig. 3 includes 8 convolution units, 4 transposed convolution units, and 5 attention mechanism units; the number of the convolution units, the transposition convolution units and the attention mechanism units can be correspondingly set according to actual conditions.
In addition, the specific structures of the convolution units in fig. 3 are not necessarily all the same, and different convolution layers may be included in each convolution unit, different activation functions are adopted, and the like, and the specific configuration may be performed according to the actual situation. Likewise, the specific structures of the attention mechanism units are not necessarily identical, and the specific structures can be set correspondingly according to actual conditions.
Specifically, the step S22 includes the following steps:
s221, at least one convolution unit in the brightness enhancement model is utilized to extract the features of the image to be processed at least once.
The electronic device performs feature extraction on the image to be processed by using the convolution units in the brightness enhancement model, for example the 8 sequentially connected convolution units shown in fig. 3.
S222, restoring image details of the output information of the convolution unit or the transposed convolution unit by utilizing at least one attention mechanism unit in the brightness enhancement model.
As shown in fig. 3, the output of the convolution unit 8 is connected to an attention mechanism unit, where it is used to perform image detail recovery on the output information of the convolution unit 8.
Meanwhile, attention mechanism units are also connected to the outputs of the transposed convolution units and are used to recover image details from the output information of each transposed convolution unit. Further, as shown in FIG. 3, the input information to attention mechanism units 1-3 is derived from the outputs of the corresponding convolution unit and transposed convolution unit.
Wherein details regarding the specific structure of the attention mechanism unit will be described in detail below.
And S223, decoding the output information of the attention mechanism unit by using at least one transposition convolution unit in the brightness enhancement model.
As shown in fig. 3, the electronic device also decodes the output information of each attention mechanism unit using a transposed convolution unit in the luminance enhancement model.
In some alternative embodiments, fig. 4 shows a schematic structural diagram of the attention mechanism unit. In conjunction with fig. 4, the detail-recovery step S222 described above includes the following steps:
(1) and performing convolution processing on the output information of the convolution unit or the output information of the transposition unit by utilizing the convolution layer in the attention mechanism unit to obtain first output information.
As described above, the input information of the attention mechanism unit can come from two places: the output information of a convolution unit and the output information of a transposed convolution unit. When the input information comes from a convolution unit alone, or from a transposed convolution unit alone, the electronic device performs convolution processing on that input using the convolution layer in the attention mechanism unit. When the input information comes from both a convolution unit and a transposed convolution unit, the electronic device may first fuse the two outputs and then perform convolution processing on the fused result using the convolution layer in the attention mechanism unit to obtain the first output information.
(2) And carrying out global pooling and full connection processing on the first output information to obtain second output information.
As shown in fig. 4, the electronic device performs global pooling, full connection, and activation processing on the first output information to obtain the second output information. In the structure of the attention mechanism unit shown in fig. 4, there are two fully-connected layers and two activation function layers, where the first fully-connected layer, the first activation function layer, the second fully-connected layer, and the second activation function layer are connected in sequence.
(3) And calculating the product of the first output information and the second output information to obtain an image detail recovery result corresponding to the output information of the convolution unit or an image detail recovery result corresponding to the output information of the transposed convolution unit.
After the electronic device obtains the first output information and the second output information, the product of the first output information and the second output information is calculated, and then the corresponding image detail recovery result can be obtained.
By performing global pooling and full connection processing on the first output information in the attention mechanism unit, details of the output information can be enhanced, and the detail recovery effect of the attention mechanism unit is improved.
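For illustration, the three steps above can be sketched in NumPy. This is a minimal sketch under assumptions the patent does not state: the convolution is taken as 1x1 (so it reduces to a per-pixel matrix multiply), the first activation is ReLU and the second is a sigmoid gate, and the weight arguments are hypothetical.

```python
import numpy as np

def attention_unit(x, w_conv, w_fc1, w_fc2):
    """Sketch of the attention mechanism unit: convolution -> global
    pooling -> two FC/activation layers -> product. x has shape (C, H, W)."""
    # (1) convolution layer (assumed 1x1) -> first output information
    first = np.tensordot(w_conv, x, axes=([1], [0]))        # (C_out, H, W)
    # (2) global pooling followed by two fully-connected + activation layers
    pooled = first.mean(axis=(1, 2))                        # global average pool
    hidden = np.maximum(w_fc1 @ pooled, 0.0)                # FC1 + ReLU (assumed)
    second = 1.0 / (1.0 + np.exp(-(w_fc2 @ hidden)))        # FC2 + sigmoid (assumed)
    # (3) product of first and second output information:
    # per-channel reweighting that emphasizes detail-carrying channels
    return first * second[:, None, None]
```

With identity weights and a constant input, the gate simply scales every channel by the sigmoid of its pooled response.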
And S224, fusing the output information of the first convolution unit connected with the input layer of the brightness enhancement model with the output information of the last attention mechanism unit to determine the enhanced brightness information.
As shown in fig. 3, the electronic device adds the output information of attention mechanism unit 1 and the output information of convolution unit 1 to obtain the enhanced luminance information. Specifically, the accumulated result may be output directly as the enhanced luminance information, or may be output as the enhanced luminance information after a further convolution is performed on it.
As an optional implementation manner of this embodiment, the step S224 may include the following steps:
(1) and calculating the sum of the output information of the first convolution unit and the output information of the last attention mechanism unit to obtain third output information.
As shown in fig. 4, the electronic device accumulates the output information of the attention mechanism unit 1 and the output information of the convolution unit 1 to obtain third output information.
(2) And performing convolution processing on the third output information to obtain enhanced brightness information.
And the electronic equipment performs convolution processing on the third output information by using a convolution unit to obtain the enhanced brightness information.
Outputting the third output information after convolution processing reduces the checkerboard artifacts caused by the transposed convolution and improves the reliability of the enhanced brightness information.
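The two substeps above can be sketched in NumPy, assuming the final convolution is a single 3x3 kernel with edge padding (the patent gives neither the kernel size nor the padding scheme):

```python
import numpy as np

def fuse_and_refine(conv1_out, att1_out, kernel):
    """Add the two feature maps (third output information), then run a
    3x3 convolution to suppress transposed-convolution checkerboards."""
    third = conv1_out + att1_out                  # (1) element-wise sum
    padded = np.pad(third, 1, mode="edge")        # 'same' output size
    out = np.empty_like(third)
    h, w = third.shape
    for i in range(h):                            # (2) naive 3x3 convolution
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out
```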
For ease of understanding, the above S22 will be described in detail below in conjunction with fig. 3:
(1) and inputting the image to be processed into a first convolution unit to obtain a feature extraction result of the first convolution unit.
The first convolution unit, i.e. convolution unit 1 shown in fig. 3, inputs the image to be processed into convolution unit 1 to obtain the feature extraction result of convolution unit 1.
(2) And inputting the feature extraction result of the first convolution unit into a first convolution module to obtain the feature extraction result of the first convolution module, wherein the first convolution module comprises at least one convolution unit.
The first convolution module comprises two convolution units, which may be convolution unit 2 and convolution unit 3 shown in fig. 3. Convolution unit 2 is connected with convolution unit 1, and the output of convolution unit 3 is the feature extraction result of the first convolution module.
(3) Inputting the feature extraction result of the first convolution module and the decoding result of the first transposed convolution unit into the first attention mechanism unit to obtain the result of image detail recovery; wherein the input of the first transposed convolution unit is the output of the second attention mechanism unit.
The first transposed convolution unit may be transposed convolution 4 in fig. 3, the first attention mechanism unit may be attention mechanism unit 1 in fig. 3, and the second attention mechanism unit may be attention mechanism unit 2 in fig. 3. The decoding result of transposed convolution 4 and the feature extraction result of convolution unit 3 are input into attention mechanism unit 1 to obtain the result of image detail recovery.
(4) And fusing the output of the last attention mechanism unit and the output of the first convolution unit to determine the enhanced brightness information.
As shown in fig. 3, the electronic device adds the output information of attention mechanism unit 1 and the output information of convolution unit 1 to obtain the enhanced luminance information. Specifically, the accumulated result may be output directly as the enhanced luminance information, or may be output as the enhanced luminance information after a further convolution is performed on it.
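The walkthrough above can be condensed into a toy forward pass. Everything here is a stand-in: average pooling for the stride-2 convolution units, nearest-neighbor upsampling for the transposed convolution units, and a global sigmoid gate for the attention mechanism units. The pairing of encoder and decoder features follows the general encoder-decoder pattern only, since fig. 3 is not reproduced here.

```python
import numpy as np

def down(x):                      # stand-in for a stride-2 convolution unit
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):                        # stand-in for a transposed convolution unit
    return x.repeat(2, axis=0).repeat(2, axis=1)

def gate(enc, dec):               # stand-in for an attention mechanism unit
    fused = enc + dec
    return fused * (1.0 / (1.0 + np.exp(-fused.mean())))

def enhance_luminance(img):
    c1 = img * 1.0                # convolution unit 1: full-resolution features
    c2 = down(c1)                 # encoder convolution units
    c3 = down(c2)
    d3 = up(c3)                   # decoder: transposed convolution
    a2 = gate(c2, d3)             # attention mechanism unit 2
    d4 = up(a2)                   # transposed convolution 4
    a1 = gate(c1, d4)             # attention mechanism unit 1
    return c1 + a1                # fuse with the first conv unit's output
```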
S23, fusing the color information, the saturation information and the enhanced brightness information, and determining the enhanced image to be processed.
Please refer to S13 in fig. 1, which is not described herein again.
In the image enhancement method provided by this embodiment, the transposed convolution units in the brightness enhancement model decode the output information of the attention mechanism units, and the attention mechanism units recover the details of the image, so that the brightness enhancement model can enhance the brightness information of the image to be processed more accurately.
In this embodiment, an image enhancement method is provided, which can be used in the above-mentioned electronic devices, such as an image capturing device, a computer, or a mobile phone. Fig. 5 is a flowchart of the image enhancement method according to an embodiment of the present invention; as shown in fig. 5, the flow includes the following steps:
and S31, acquiring the image to be processed corresponding to the original image and extracting the color information and the saturation information of the original image.
Specifically, the step S31 includes the following steps:
s311, acquiring an original image.
In this embodiment, an electronic device is taken as an example of an image capturing device. The original image can be a real-time image acquired by an image acquisition device.
And S312, preprocessing the original image to obtain an image to be processed.
All pixel points in the image to be processed are arranged according to a preset rule.
After the image acquisition device acquires the original image, it can perform uniform format processing on the original image so that all pixel points in the image to be processed are arranged according to a certain rule. The preprocessing may include image cropping, image flipping, and the like.
Specifically, the step S312 includes the steps of:
(1) Cropping the original image.
As shown in fig. 6a, the GRBG arrangement is converted into the BGGR arrangement by cropping the original image; as shown in fig. 6b, the GBRG arrangement is converted into the BGGR arrangement by cropping the original image.
Because the data arrangement of the original images acquired by different image acquisition devices differs, and existing image enhancement algorithms basically target specific image acquisition devices, their adaptability to new devices is poor. Cropping the original image to obtain an image to be processed with a uniform arrangement therefore improves the adaptability of the subsequent brightness enhancement model to different image acquisition devices.
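A NumPy sketch of the cropping in figs. 6a/6b: dropping the first row of a GRBG frame, or the first column of a GBRG frame, shifts the 2x2 mosaic to BGGR. The trailing even-size crop keeps the mosaic aligned; the function name and interface are illustrative, not from the patent.

```python
import numpy as np

def to_bggr(raw, pattern):
    """Crop a raw Bayer frame so its color-filter pattern becomes BGGR."""
    if pattern == "GRBG":
        raw = raw[1:, :]          # fig. 6a: drop the top row
    elif pattern == "GBRG":
        raw = raw[:, 1:]          # fig. 6b: drop the left column
    h, w = raw.shape              # keep even dimensions so 2x2 tiles align
    return raw[:h - h % 2, :w - w % 2]

# encode the pattern letters as integers for a quick check
B, G, R = 0, 1, 2
grbg = np.tile([[G, R], [B, G]], (2, 2))
gbrg = np.tile([[G, B], [R, G]], (2, 2))
```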
(2) Flipping the cropped image to obtain the image to be processed.
After cropping the original image, the image acquisition device can perform data augmentation on it, such as horizontal mirroring, vertical mirroring, and random 90-degree rotation, to improve the robustness of the brightness enhancement model.
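The augmentation can be sketched as follows. One caveat the text does not spell out: mirroring a raw Bayer frame changes its color-filter pattern, so in practice the flips would be applied to a packed representation or followed by a re-crop; that detail is omitted in this sketch, and the 90-degree rotation assumes a square crop.

```python
import numpy as np

def augment(img, rng):
    """Random horizontal mirror, vertical mirror, and 90-degree rotation."""
    if rng.random() < 0.5:
        img = img[:, ::-1]        # horizontal mirror
    if rng.random() < 0.5:
        img = img[::-1, :]        # vertical mirror
    if rng.random() < 0.5:
        img = np.rot90(img)       # 90-degree rotation (square inputs)
    return np.ascontiguousarray(img)
```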
S313, the color information and saturation information of the original image are extracted.
For the rest, please refer to S21 in the embodiment shown in fig. 2, which is not described herein again.
And S32, inputting the image to be processed into the brightness enhancement model to obtain the enhanced brightness information.
Please refer to S22 in fig. 2, which is not repeated herein.
S33, fusing the color information, the saturation information and the enhanced brightness information, and determining the enhanced image to be processed.
Please refer to S23 in fig. 2 for details, which are not described herein.
In the image enhancement method provided by this embodiment, the obtained original image is preprocessed into an image to be processed whose pixel points are arranged according to the preset rule; that is, the input of the brightness enhancement model is always an image formed by pixel points arranged according to the preset rule. The image enhancement method can therefore be applied to various devices, solving the problem of poor adaptability to new devices.
In some optional implementations of this embodiment, the brightness enhancement model is obtained by training as follows:
(1) Obtaining sample images of two kinds of brightness information of at least one scene, to obtain a first sample image corresponding to first brightness information and a second sample image corresponding to second brightness information.
The electronic device may collect original image data under different illumination, different gains, and different shutters in different scenes, group the data by scene, and then divide it into first sample images and second sample images according to the shutter value. The first sample image is a low-quality sample image, and the second sample image is high-quality image data. The low-quality image data, collected in a low-illumination environment, has low brightness, poor quality, and heavy noise; it is the data to be learned from and is the input of the brightness enhancement model. The high-quality image data has high brightness, a clear image, and little noise; it represents the target the brightness enhancement model is to learn. The goal of training the brightness enhancement model is to map low-illumination images to high-illumination images and low-quality images to high-quality images.
It should be noted that the first sample image and the second sample image correspond to each other.
As an optional implementation manner of this embodiment, the step (1) includes the following steps:
1.1) obtaining original sample images of two luminance information of at least one scene.
As described above, the electronic device may obtain the original sample image of two kinds of luminance information by capturing.
1.2) preprocessing the original sample image to obtain a first sample image and a second sample image.
And arranging the pixel points in the first sample image and the second sample image according to a preset rule.
For a method for preprocessing an original sample image, please refer to corresponding description of S31 in the embodiment shown in fig. 5, which is not repeated herein.
Before the images are input into the brightness enhancement model, the original images are preprocessed to obtain the first sample image and the second sample image, in which the pixel points are arranged according to the preset rule. This allows the image enhancement method to process various types of original images and to run on various terminal devices, improving its adaptability to new devices.
(2) And inputting the first sample image into a brightness enhancement model to obtain enhanced brightness information.
(3) And updating the parameters of the brightness enhancement model based on the enhanced brightness information and the brightness information of the corresponding second sample image.
The electronic device may calculate a loss function using the enhanced luminance information and the luminance information of the second sample image, and thereby update the parameters of the brightness enhancement model.
The loss function may be set according to actual conditions and is not limited herein. For example, the loss function may include three parts: first, the difference between the enhanced luminance information and the luminance information of the second sample image gives the value of a first loss function (hereinafter, the pixel loss function); second, the value of a second loss function (hereinafter, the feature loss function) is calculated using a preset feature vector in the brightness enhancement model, the enhanced luminance information, and the luminance information of the second sample image; third, the value of a third loss function (hereinafter, the color loss function) is calculated using the similarity between the enhanced luminance information and the luminance information of the second sample image.
Accordingly, the value of the loss function can be calculated by the following formula:

L = α1·Lpixel + α2·Lfeature + α3·LY

wherein α1, α2 and α3 are the weights of the pixel loss function, the feature loss function and the color loss function, respectively; y is the output value of the brightness enhancement model, and ŷ is the luminance information of the second sample image; Lpixel is the pixel loss function, Lfeature is the feature loss function, and LY is the color loss function. The detailed expressions of Lpixel, Lfeature and LY were given in the original as formula images and are not reproduced here. The feature vector used by Lfeature is an output feature vector of certain structures in the brightness enhancement model, such as the feature vector output by attention mechanism unit 1 or attention mechanism unit 2 shown in fig. 3.
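Since the exact expressions of the three terms survive only as formula images in the source, the following is a hedged instantiation using common choices: L1 for the pixel loss, mean squared error on feature vectors for the feature loss, one minus cosine similarity for the color loss, and example weights α1, α2, α3 that are assumptions, not the patent's values.

```python
import numpy as np

def total_loss(y, y_ref, feat, feat_ref, a1=1.0, a2=0.1, a3=0.5):
    """Weighted sum of pixel, feature, and color losses (assumed forms)."""
    l_pixel = np.mean(np.abs(y - y_ref))                 # pixel loss (L1, assumed)
    l_feature = np.mean((feat - feat_ref) ** 2)          # feature loss (MSE, assumed)
    cos = (y * y_ref).sum() / (np.linalg.norm(y) * np.linalg.norm(y_ref) + 1e-8)
    l_color = 1.0 - cos                                  # similarity-based color loss
    return a1 * l_pixel + a2 * l_feature + a3 * l_color
```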
Training the brightness enhancement model on sample images with different brightness information in different scenes ensures the diversity of the samples, so that the finally trained image enhancement model recovers details more accurately with less noise.
The image enhancement method provided by the embodiment of the invention realizes image enhancement in dark-light, backlit, and similar environments, effectively improves the adaptability of the method to different devices and the clarity of the recovered details, and allows the model to be deployed on terminal devices with good real-time performance.
As a specific application example of this embodiment, the image enhancement method is used to enhance low-illumination images at a personnel checkpoint in a monitored scene. The image acquisition device is a CMOS sensor. Specifically, the image enhancement method mainly comprises three processing parts: image preprocessing, the brightness enhancement model, and information fusion. The output of the brightness enhancement model is fused with the UV components of the original image to generate the final enhanced result. Specifically, the method comprises the following steps:
A CMOS sensor at the personnel checkpoint captures people in a dark environment with dim background light, and image enhancement is then performed on the captured images.
Image preprocessing: this part mainly performs black level correction, data cropping, horizontal mirroring, vertical mirroring, random flipping, and other operations on the original image acquired by the sensor, to form an image to be processed in which all pixel points are arranged according to the preset rule. In addition, the UV components of the original image acquired by the sensor need to be extracted for the subsequent fusion operation.
Luminance enhancement model: and performing brightness enhancement on the preprocessed image by using the trained and optimized brightness enhancement model, and outputting enhanced brightness information.
Information fusion: the enhanced brightness information is fused with the UV components of the original image to obtain the final enhanced RGB image, which is then output.
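The fusion step can be sketched as a YUV-to-RGB conversion that takes the model's enhanced Y together with the original U and V components. The BT.601 full-range coefficients and the 128 chroma offset are assumptions; the actual conversion depends on the sensor pipeline.

```python
import numpy as np

def fuse_yuv(y_enhanced, u, v):
    """Fuse enhanced luminance with original chroma, return 8-bit-range RGB."""
    r = y_enhanced + 1.402 * (v - 128.0)
    g = y_enhanced - 0.344136 * (u - 128.0) - 0.714136 * (v - 128.0)
    b = y_enhanced + 1.772 * (u - 128.0)
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0)
```

With neutral chroma (u = v = 128) the output is grayscale, i.e. the enhanced luminance replicated across R, G and B.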
Fig. 7 shows a comparison of results obtained after the original image is processed by the image enhancement method described in this embodiment, where the image on the left side of fig. 7 is the original image, and the image on the right side of fig. 7 is the corresponding enhanced image to be processed.
In this embodiment, an image enhancement apparatus is further provided, and the apparatus is used to implement the above embodiments and preferred embodiments, which have already been described and are not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
The present embodiment provides an image enhancement apparatus, as shown in fig. 8, including:
an obtaining module 41, configured to obtain an image to be processed corresponding to an original image and extract color information and saturation information of the original image;
a brightness enhancement module 42, configured to input the image to be processed into a brightness enhancement model to obtain enhanced brightness information;
and a fusion module 43, configured to fuse the color information, the saturation information, and the enhanced luminance information, and determine the enhanced image to be processed.
The image enhancement device provided by this embodiment takes the image to be processed as the input of the brightness enhancement model and the enhanced brightness information as its output, and fuses the enhanced brightness information with the color information and saturation information of the original image. The resulting enhanced image keeps the color information and saturation information of the image to be processed unchanged and enhances only the brightness information, which effectively solves the color-cast problem of enhanced images. Furthermore, because the brightness enhancement model enhances only the brightness information, the memory overhead is reduced, so the image enhancement method can run on terminal devices and enhance the images they acquire in real time.
The image enhancement apparatus in this embodiment is presented in the form of functional units, where a unit refers to an ASIC, a processor and memory executing one or more software or firmware programs, and/or other devices that can provide the above-described functionality.
Further functional descriptions of the modules are the same as those of the corresponding embodiments, and are not repeated herein.
An embodiment of the present invention further provides an electronic device, which has the image enhancement apparatus shown in fig. 8.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an alternative embodiment of the present invention. As shown in fig. 9, the electronic device may include: at least one processor 51, such as a CPU (Central Processing Unit); at least one communication interface 53; a memory 54; and at least one communication bus 52. The communication bus 52 is used to enable connection and communication between these components. The communication interface 53 may include a display and a keyboard, and optionally may also include a standard wired interface and a standard wireless interface. The memory 54 may be a high-speed volatile random access memory (RAM) or a non-volatile memory, such as at least one disk memory. The memory 54 may alternatively be at least one storage device located remotely from the processor 51. The processor 51 may be connected with the apparatus described in fig. 8; the memory 54 stores an application program, and the processor 51 calls the program code stored in the memory 54 to perform any of the above-mentioned method steps.
The communication bus 52 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The communication bus 52 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
The memory 54 may include a volatile memory, such as a random-access memory (RAM); the memory may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 54 may also comprise a combination of the above types of memories.
The processor 51 may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of a CPU and an NP.
The processor 51 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
Optionally, the memory 54 is also used to store program instructions. The processor 51 may call program instructions to implement the image enhancement method as shown in the embodiments of fig. 1, 2 and 5 of the present application.
Embodiments of the present invention further provide a non-transitory computer storage medium, where computer-executable instructions are stored, and the computer-executable instructions may execute the image enhancement method in any of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or the like; the storage medium may also comprise a combination of the above types of memories.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (11)

1. An image enhancement method, comprising:
acquiring an image to be processed corresponding to an original image and extracting color information and saturation information of the original image;
inputting the image to be processed into a brightness enhancement model to obtain enhanced brightness information;
fusing the color information, the saturation information and the enhanced brightness information to determine the enhanced image to be processed;
inputting the image to be processed into a brightness enhancement model to obtain enhanced brightness information, wherein the obtaining of the enhanced brightness information includes:
performing at least one time of feature extraction on the image to be processed by utilizing at least one convolution unit in the brightness enhancement model;
restoring image details of output information of the convolution unit or the transposed convolution unit by utilizing at least one attention mechanism unit in the brightness enhancement model;
decoding output information of the attention mechanism unit with at least one transposed convolution unit in the luma enhancement model;
and fusing the output information of the first convolution unit connected with the input layer of the brightness enhancement model with the output information of the last attention mechanism unit to determine the enhanced brightness information.
2. The method of claim 1, wherein the recovering image details of the output information of the convolution unit or the transposed convolution unit using at least one attention mechanism unit in the luminance enhancement model comprises:
performing convolution processing on the output information of the convolution unit or the output information of the transposed convolution unit by utilizing the convolution layer in the attention mechanism unit to obtain first output information;
performing global pooling and full-connection processing on the first output information to obtain second output information;
and calculating the product of the first output information and the second output information to obtain an image detail recovery result corresponding to the output information of the convolution unit or an image detail recovery result corresponding to the output information of the transposed convolution unit.
3. The method according to claim 1, wherein the inputting the image to be processed into a brightness enhancement model to obtain enhanced brightness information comprises:
inputting the image to be processed into the first convolution unit of the brightness enhancement model to obtain a feature extraction result of the first convolution unit;
inputting the feature extraction result of the first convolution unit into a first convolution module to obtain a feature extraction result of the first convolution module, wherein the first convolution module comprises at least one convolution unit;
inputting the feature extraction result of the first convolution module and the decoding result of a first transposed convolution unit into a first attention mechanism unit to obtain a result of image detail recovery; wherein the input of the first transposed convolution unit is the output of a second attention mechanism unit;
and fusing the output of the first attention mechanism unit and the output of the first convolution unit to determine the enhanced brightness information.
4. The method of claim 1, wherein said fusing the output information of the first convolution unit connected to the input layer of the luminance enhancement model with the output information of the last attention mechanism unit to determine the enhanced luminance information comprises:
calculating the sum of the output information of the first convolution unit and the output information of the last attention mechanism unit to obtain third output information;
and performing convolution processing on the third output information to obtain the enhanced brightness information.
5. The method according to claim 1, wherein the acquiring the to-be-processed image corresponding to the original image comprises:
acquiring an original image;
and preprocessing the original image to obtain the image to be processed, wherein all pixel points in the image to be processed are arranged according to a preset rule.
6. The method according to any of claims 1-5, wherein the luminance enhancement model is trained by:
obtaining sample images of two kinds of brightness information of at least one scene to obtain a first sample image corresponding to the first brightness information and a second sample image corresponding to the second brightness information;
inputting the first sample image into a brightness enhancement model to obtain enhanced brightness information;
and updating the parameters of the brightness enhancement model based on the enhanced brightness information and the corresponding brightness information of the second sample image.
7. The method of claim 6, wherein obtaining sample images of two kinds of luminance information of at least one scene, obtaining a first sample image corresponding to the first luminance information and a second sample image corresponding to the second luminance information comprises:
acquiring original sample images of two kinds of brightness information of at least one scene;
preprocessing the original sample image to obtain the first sample image and the second sample image, wherein each pixel point in the first sample image and each pixel point in the second sample image are arranged according to a preset rule.
8. The method of claim 6, wherein the updating the parameters of the brightness enhancement model based on the enhanced brightness information and the corresponding brightness information of the second sample image comprises:
calculating a value of a first loss function using the difference between the enhanced luminance information and the luminance information of the second sample image;
calculating a value of a second loss function by using a preset feature vector in the brightness enhancement model, the enhanced brightness information and the brightness information of the second sample image;
calculating a value of a third loss function by using the enhanced brightness information and the similarity of the brightness information of the second sample image;
carrying out weighted summation on the value of the first loss function, the value of the second loss function and the value of the third loss function to obtain the value of a target loss function;
updating parameters of the brightness enhancement model based on the value of the target loss function.
9. An image enhancement apparatus, comprising:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an image to be processed corresponding to an original image and extracting color information and saturation information of the original image;
the brightness enhancement module is used for inputting the image to be processed into a brightness enhancement model to obtain enhanced brightness information;
the fusion module is used for fusing the color information, the saturation information and the enhanced brightness information to determine the enhanced image to be processed;
inputting the image to be processed into a brightness enhancement model to obtain enhanced brightness information, wherein the obtaining of the enhanced brightness information includes:
performing at least one time of feature extraction on the image to be processed by utilizing at least one convolution unit in the brightness enhancement model;
restoring image details of output information of the convolution unit or the transposed convolution unit by utilizing at least one attention mechanism unit in the brightness enhancement model;
decoding output information of the attention mechanism unit with at least one transposed convolution unit in the luma enhancement model;
and fusing the output information of the first convolution unit connected with the input layer of the brightness enhancement model with the output information of the last attention mechanism unit to determine the enhanced brightness information.
10. An electronic device, comprising:
a memory and a processor, the memory and the processor being communicatively coupled to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the image enhancement method of any one of claims 1-8.
11. A computer-readable storage medium storing computer instructions for causing a computer to perform the image enhancement method of any one of claims 1-8.
CN202011036003.2A 2020-09-27 2020-09-27 Image enhancement method and device and electronic equipment Active CN112102204B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011036003.2A CN112102204B (en) 2020-09-27 2020-09-27 Image enhancement method and device and electronic equipment
PCT/CN2021/082754 WO2022062346A1 (en) 2020-09-27 2021-03-24 Image enhancement method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011036003.2A CN112102204B (en) 2020-09-27 2020-09-27 Image enhancement method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112102204A CN112102204A (en) 2020-12-18
CN112102204B true CN112102204B (en) 2022-07-08

Family

ID: 73783707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011036003.2A Active CN112102204B (en) 2020-09-27 2020-09-27 Image enhancement method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN112102204B (en)
WO (1) WO2022062346A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102204B (en) * 2020-09-27 2022-07-08 苏州科达科技股份有限公司 Image enhancement method and device and electronic equipment
CN112801918A (en) * 2021-03-11 2021-05-14 苏州科达科技股份有限公司 Training method of image enhancement model, image enhancement method and electronic equipment
CN113469897A (en) * 2021-05-24 2021-10-01 苏州市科远软件技术开发有限公司 Training method and device of image enhancement model, image enhancement method and device and electronic equipment
CN114926359B (en) * 2022-05-20 2023-04-07 电子科技大学 Underwater image enhancement method combining bicolor space recovery and multi-stage decoding structure
CN115294055A (en) * 2022-08-03 2022-11-04 维沃移动通信有限公司 Image processing method, image processing device, electronic equipment and readable storage medium
CN116385813B (en) * 2023-06-07 2023-08-29 南京隼眼电子科技有限公司 ISAR image space target classification method, device and storage medium based on unsupervised contrast learning
CN116721041B (en) * 2023-08-09 2023-11-28 广州医科大学附属第一医院(广州呼吸中心) Image processing method, apparatus, system, and readable storage medium
CN117012135B (en) * 2023-10-08 2024-02-02 深圳市卓信特通讯科技有限公司 Display screen adjusting method, device, equipment and storage medium
CN117437490B (en) * 2023-12-04 2024-03-22 深圳咔咔可洛信息技术有限公司 Clothing information processing method and device, electronic equipment and storage medium
CN117808721B (en) * 2024-02-28 2024-05-03 深圳市瓴鹰智能科技有限公司 Low-illumination image enhancement method, device, equipment and medium based on deep learning

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPR827201A0 (en) * 2001-10-15 2001-11-08 Polartechnics Limited Image enhancement
CN105654437B (en) * 2015-12-24 2019-04-19 广东迅通科技股份有限公司 A kind of Enhancement Method of pair of low-light (level) image
CN109035175A (en) * 2018-08-22 2018-12-18 深圳市联合视觉创新科技有限公司 Facial image Enhancement Method based on color correction and Pulse Coupled Neural Network
CN109741281B (en) * 2019-01-04 2020-09-29 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and terminal
CN110458833B (en) * 2019-08-15 2023-07-11 腾讯科技(深圳)有限公司 Medical image processing method, medical device and storage medium based on artificial intelligence
CN110796612B (en) * 2019-10-09 2022-03-25 陈根生 Image enhancement method and system
CN112102204B (en) * 2020-09-27 2022-07-08 苏州科达科技股份有限公司 Image enhancement method and device and electronic equipment

Also Published As

Publication number Publication date
CN112102204A (en) 2020-12-18
WO2022062346A1 (en) 2022-03-31

Similar Documents

Publication Publication Date Title
CN112102204B (en) Image enhancement method and device and electronic equipment
CN109636754B (en) Extremely-low-illumination image enhancement method based on generation countermeasure network
CN110276767B (en) Image processing method and device, electronic equipment and computer readable storage medium
EP3579180A1 (en) Image processing method and apparatus, electronic device and non-transitory computer-readable recording medium for selective image enhancement
US11233933B2 (en) Method and device for processing image, and mobile terminal
CN108875619B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN109951635B (en) Photographing processing method and device, mobile terminal and storage medium
CN110580428A (en) image processing method, image processing device, computer-readable storage medium and electronic equipment
CN112602088B (en) Method, system and computer readable medium for improving quality of low light images
CN111985281B (en) Image generation model generation method and device and image generation method and device
CN108734684B (en) Image background subtraction for dynamic illumination scene
CN107465855B (en) Image shooting method and device and unmanned aerial vehicle
CN110443766B (en) Image processing method and device, electronic equipment and readable storage medium
US20230127009A1 (en) Joint objects image signal processing in temporal domain
CN107295261B (en) Image defogging method and device, storage medium and mobile terminal
CN111311573B (en) Branch determination method and device and electronic equipment
CN111062272A (en) Image processing and pedestrian identification method and device based on color recovery and readable storage medium
CN108805883B (en) Image segmentation method, image segmentation device and electronic equipment
CN107481199B (en) Image defogging method and device, storage medium and mobile terminal
CN116167945A (en) Image restoration method and device, electronic equipment and storage medium
CN113298829B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN111147693B (en) Noise reduction method and device for full-size photographed image
CN111489289B (en) Image processing method, image processing device and terminal equipment
CN114119376A (en) Image processing method and device, electronic equipment and storage medium
CN112801932A (en) Image display method, image display device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant