CN118429229A - Image restoration method, device, storage medium and computer program product

Info

Publication number: CN118429229A
Application number: CN202410891770.3A
Authority: CN (China)
Prior art keywords: image, features, edge, repaired, feature
Legal status: Granted, Active
Other languages: Chinese (zh)
Other versions: CN118429229B (en)
Inventors: 余仲慰, 陈盛福, 时海若, 张庆荣
Current Assignee: China Post Consumer Finance Co ltd
Original Assignee: China Post Consumer Finance Co ltd

Events

Application filed by China Post Consumer Finance Co ltd
Priority to CN202410891770.3A
Publication of CN118429229A
Application granted
Publication of CN118429229B

Abstract

The application discloses an image restoration method, a device, a storage medium and a computer program product, relating to the technical field of image processing. The method comprises the following steps: acquiring a gray image corresponding to an image to be repaired, and acquiring edge features of the image to be repaired based on the gray image; taking the image to be repaired as a first branch input and the edge features as a second branch input, and respectively inputting the two branch inputs into a preset image repair network in which a plurality of RepConv modules are arranged; and repairing the image to be repaired based on the image itself and the edge features through the preset image repair network, and outputting the repaired image. Applying this technical scheme solves the technical problems of the prior art, in which an image restoration model must be trained on a dataset of a specific type to address a single degradation type in low-quality images, leaving the model with weak generalization ability and low practicability.

Description

Image restoration method, device, storage medium and computer program product
Technical Field
The present application relates to the field of image processing technology, and in particular, to an image restoration method, apparatus, storage medium, and computer program product.
Background
Images are an important means by which humans communicate, record and preserve information, so image quality is critical to the accuracy of information delivery. With the development of computer technology, digital images have become a major medium for information recording and storage. However, owing to environmental factors such as rainy and foggy weather, equipment limitations, and shooting conditions, digital images may suffer from degradation problems such as blurring, insufficient brightness and loss of detail, which reduce image quality.
Existing schemes often target a single degradation type in low-quality images, such as rain, fog or blur, and train on a dataset of that specific type. Such methods tend to perform well on their specific task but adapt poorly, with weak model generalization, when faced with unknown combinations of degradations or unseen degradation types. In addition, image restoration algorithms based on GAN, Transformer and similar architectures demand substantial computing resources when repairing low-quality images affected by rain, fog, blur and similar phenomena, which makes them unsuitable for resource-limited platforms and limits their practicality.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The main purpose of the application is to provide an image restoration method, device, storage medium and computer program product, aiming to solve the technical problems of the prior art, in which an image restoration model must be trained on a dataset of a specific type to address a single degradation type in low-quality images, leaving the model with weak generalization ability and low practicability.
In order to achieve the above object, the present application provides an image restoration method, which includes:
acquiring a gray image corresponding to an image to be repaired, and acquiring edge characteristics of the image to be repaired based on the gray image;
Taking the image to be repaired as a first branch input and the edge feature as a second branch input, respectively inputting the first branch input and the second branch input into a preset image repairing network, wherein a plurality of RepConv modules are arranged in the preset image repairing network, and the RepConv modules are used for extracting image features;
Repairing the image to be repaired based on the image to be repaired and the edge characteristic through the preset image repairing network, and outputting the repaired image.
In an embodiment, the step of acquiring the edge feature of the image to be repaired based on the gray scale image includes:
Performing horizontal gradient operation on the gray level image to obtain a first edge characteristic;
Performing vertical gradient operation on the gray level image to obtain a second edge characteristic;
And acquiring edge characteristics of the image to be repaired based on the gray level image, the first edge characteristics and the second edge characteristics.
In an embodiment, a skip splicing module is arranged in the preset image restoration network; the step of repairing the image to be repaired based on the image to be repaired and the edge feature through the preset image repairing network and outputting the repaired image comprises the following steps:
performing step-by-step downsampling on the image to be repaired through a plurality of RepConv modules in a first branch of the preset image repairing network to obtain target image characteristics of a plurality of dimensions;
Extracting features of the edge features through a plurality of RepConv modules in a second branch of the preset image restoration network to obtain edge image features with a plurality of dimensions;
Splicing the target image features and the edge image features with different dimensions through the jump splicing module to obtain spliced image features;
and repairing the image to be repaired based on the spliced image features, and outputting the repaired image.
In an embodiment, the step of stitching, by the skip stitching module, the target image features and the edge image features with different dimensions to obtain stitched image features includes:
Bilinear interpolation processing is carried out on the first target image feature and the first edge image feature corresponding to the first dimension through the jump stitching module, and the processed target image feature and the processed edge image feature are obtained;
The jump stitching module is used for stitching the processed target image features and the processed edge image features with second target image features and second edge image features corresponding to a second dimension respectively, so that stitched image features are obtained; wherein the first dimension is greater than the second dimension.
In an embodiment, an image restoration module is further provided in the preset image restoration network; the step of repairing the image to be repaired based on the spliced image features and outputting the repaired image comprises the following steps:
Activating the spliced image features based on a preset activation function through the image restoration module to obtain activated image features;
bilinear interpolation processing is carried out on the first target image feature and the first edge image feature corresponding to the first dimension through the image restoration module, and the processed first target image feature and the processed first edge image feature are obtained;
Obtaining, by the image restoration module, a fused image feature based on the activated image feature, the second target image feature, and the second edge image feature;
splicing the fusion image features, the processed first target image features and the processed first edge image features through the image restoration module to obtain spliced first image features;
performing feature fusion on the spliced first image features through the RepConv module to obtain intermediate image features;
extracting edge perception features of the intermediate image features and the activated image features by the RepConv module to obtain output image features;
And performing image size reduction based on the output image features by the RepConv module to output a repaired image.
In an embodiment, before the step of acquiring the gray level image corresponding to the image to be repaired and acquiring the edge feature of the image to be repaired based on the gray level image, the method further includes:
Determining structural similarity loss, perception loss and color loss corresponding to the initial image restoration network based on the first training image and the second training image;
determining a loss function corresponding to the initial image restoration network based on the structural similarity loss, the perceived loss and the color loss;
Training the initial image restoration network through the loss function, and obtaining a preset image restoration network when training is completed.
In an embodiment, the step of determining the structural similarity loss corresponding to the initial image restoration network based on the first training image and the second training image includes:
Constructing an image brightness comparison function based on a first image mean value corresponding to the first training image and a second image mean value corresponding to the second training image;
Constructing an image contrast comparison function based on a first image variance corresponding to the first training image and a second image variance corresponding to the second training image;
Constructing an image structure comparison function based on a first image covariance corresponding to the first training image and a second image covariance corresponding to the second training image;
and determining the structural similarity loss corresponding to the initial image restoration network based on the image brightness comparison function, the image contrast comparison function and the image structure comparison function.
In addition, to achieve the above object, the present application also proposes an image restoration apparatus including: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program being configured to implement the steps of the image restoration method as described above.
In addition, to achieve the above object, the present application also proposes a storage medium, which is a computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of the image restoration method as described above.
Furthermore, to achieve the above object, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the image restoration method as described above.
The application provides an image restoration method, which comprises: obtaining a gray image corresponding to an image to be restored, and obtaining edge features of the image to be restored based on the gray image; taking the image to be restored as a first branch input and the edge features as a second branch input, and respectively inputting the two branch inputs into a preset image repair network in which a plurality of RepConv modules for extracting image features are arranged; and repairing the image to be restored based on the image itself and the edge features through the preset image repair network, and outputting the repaired image. Prior-art image restoration methods perform well on specific tasks but adapt poorly and generalize weakly when facing unknown degradation combinations or unseen degradation types; in contrast, because the present application repairs the image based on both the image itself and its edge features through a preset image repair network equipped with a plurality of RepConv modules for extracting image features, it solves the technical problems that a prior-art image restoration model must be trained on a dataset of a specific type to address a single degradation type in low-quality images, leaving the model with weak generalization ability and low practicability.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of an embodiment of an image restoration method according to the present application;
FIG. 2 is a schematic flow chart of a second embodiment of an image restoration method according to the present application;
FIG. 3 is a diagram of the overall architecture of the EIRNet network in the image restoration method of the present application;
FIG. 4 is a schematic diagram of a skip splice module in the image restoration method of the present application;
FIG. 5 is a block diagram of an EIR module in the image restoration method according to the present application;
FIG. 6 is a schematic diagram of a RepConv module in an image restoration method according to the present application;
FIG. 7 is an internal design of a channel-by-channel convolution in the image restoration method of the present application;
FIG. 8 is a schematic flow chart of a third embodiment of an image restoration method according to the present application;
Fig. 9 is a schematic device structure diagram of a hardware operating environment related to an image restoration method in an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the technical solution of the present application and are not intended to limit the present application.
For a better understanding of the technical solution of the present application, the following detailed description will be given with reference to the drawings and the specific embodiments.
The main solutions of the embodiments of the present application are: acquiring a gray image corresponding to an image to be repaired, and acquiring edge characteristics of the image to be repaired based on the gray image; taking the image to be repaired as a first branch input and the edge feature as a second branch input, respectively inputting the first branch input and the second branch input into a preset image repairing network, wherein a plurality of RepConv modules are arranged in the preset image repairing network, and the RepConv modules are used for extracting image features; repairing the image to be repaired based on the image to be repaired and the edge characteristic through the preset image repairing network, and outputting the repaired image.
Because the prior art is often targeted to address a single degradation type in low quality images such as rain, fog, blur, etc., training is performed on a specific type of dataset. However, these methods tend to perform well on specific tasks, but tend to be difficult to adapt and have poor model generalization ability when faced with unknown combinations of degenerations or types of degenerations that are not seen. In addition, when repairing low-quality images with phenomena such as rain, fog and blurring based on GAN, transformer and other image repairing algorithms, the image repairing algorithms are not suitable for being used on some platforms with limited resources due to high requirements on computing resources, so that the practicability is not high.
The application provides a solution, which enables an image to be repaired based on the image to be repaired and the edge characteristics of the image to be repaired through a preset image repairing network, wherein a plurality of RepConv modules for extracting the image characteristics are arranged in the preset image repairing network, so that the technical problems that an image repairing model in the prior art needs to be trained on a specific type of data set to pointedly solve a certain single degradation type in a low-quality image, and the generalization capability of the model is weak and the practicability is low are solved.
It should be noted that, the execution body of the embodiment may be a computing service device having functions of data processing, network communication and program running, such as a tablet computer, a personal computer, a mobile phone, or an electronic device, an image restoration device, or the like, which can implement the above functions. Hereinafter, this embodiment and the following embodiments will be described with reference to an image restoration apparatus (hereinafter referred to as an apparatus).
Based on this, an embodiment of the present application provides an image restoration method, referring to fig. 1, fig. 1 is a schematic flow chart of a first embodiment of the image restoration method of the present application.
In this embodiment, the image restoration method includes steps S10 to S30:
Step S10: and acquiring a gray image corresponding to the image to be repaired, and acquiring edge characteristics of the image to be repaired based on the gray image.
It should be understood that the image to be repaired may be any low-quality image exhibiting phenomena such as blurring, insufficient brightness and loss of detail, for example any low-quality image affected by rain, fog or blur, which this embodiment does not limit. In practical application, the image to be repaired is usually a color image; in this embodiment, the color image can be converted into a gray image when the image is repaired, so that the gray image corresponding to the image to be repaired is obtained.
It is understood that the edge features of the image to be repaired may be features of abrupt regions in terms of color, brightness, texture, etc. in the image to be repaired.
Specifically, the step of acquiring the edge feature of the image to be repaired based on the gray level image includes: performing horizontal gradient operation on the gray level image to obtain a first edge characteristic; performing vertical gradient operation on the gray level image to obtain a second edge characteristic; and acquiring edge characteristics of the image to be repaired based on the gray level image, the first edge characteristics and the second edge characteristics.
It should be noted that the first edge feature may be a feature for characterizing a gray scale variation of an image in a horizontal direction; accordingly, the second edge feature may be a feature for characterizing a gray scale variation of the image in the vertical direction.
In a specific implementation, the process of obtaining the edge feature may be designed as follows:
$$E_h=\nabla_h(G),\qquad E_v=\nabla_v(G),\qquad C=G+E_h+E_v$$

where $G$ represents the gray-scale image of the input image to be repaired, $\nabla_h$ and $\nabla_v$ represent the gradient operations performed on the gray-scale image in the horizontal direction and the vertical direction respectively, $E_h$ represents the first edge feature, $E_v$ represents the second edge feature, and $C$ represents the edge feature, i.e. the sum of the input gray-scale image and the results of its horizontal and vertical gradient operations.
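As an illustration, the following PyTorch sketch computes the edge feature from a gray-scale image using simple forward-difference gradients; the use of torch and this particular gradient operator are assumptions, since the description only specifies horizontal and vertical gradient operations and a sum of the three terms:

```python
import torch
import torch.nn.functional as F

def edge_features(gray: torch.Tensor) -> torch.Tensor:
    """gray: (B, 1, H, W) gray-scale image of the image to be repaired."""
    # Horizontal gradient E_h (first edge feature): difference along width.
    e_h = F.pad(gray[..., :, 1:] - gray[..., :, :-1], (0, 1))
    # Vertical gradient E_v (second edge feature): difference along height.
    e_v = F.pad(gray[..., 1:, :] - gray[..., :-1, :], (0, 0, 0, 1))
    # Edge feature C: sum of the gray image and both gradient responses.
    return gray + e_h + e_v
```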
Step S20: the image to be repaired is used as a first branch input, the edge feature is used as a second branch input, the first branch input and the second branch input are respectively input into a preset image repairing network, a plurality of RepConv modules are arranged in the preset image repairing network, and the RepConv modules are used for extracting image features.
It should be noted that the preset image repair network may be a network for restoring the high-quality image corresponding to a low-quality image. In this embodiment, drawing on ideas from semantic segmentation, a dual-branch deep neural network structure (i.e. the above-mentioned preset image repair network) named EIRNet (Edge-aware Image Restoration Network) may be designed. The objective of the network is to take as input a low-quality image exhibiting any phenomena such as rain, fog or blur and to restore the corresponding high-quality image end to end; the network backbone adopts a classical encoder-decoder design similar to U-Net.
It should be noted that the newly designed EIRNet in the present solution includes the newly designed RepConv module, which can efficiently extract image features. In general, feature extraction modules from classical architectures such as MobileNet or GhostNet may be used instead; if processing efficiency is not a concern, a vision Transformer module may also be considered as a replacement.
Step S30: repairing the image to be repaired based on the image to be repaired and the edge characteristic through the preset image repairing network, and outputting the repaired image.
It should be noted that the scheme provides a general image restoration method, which can solve the degradation problem of various low-quality images including the phenomena of rain, fog, blurring and the like, and overcomes the limitation that the prior art only carries out restoration aiming at a single degradation type. Meanwhile, the scheme provides a new repair network EIRNet which can realize image repair end to end, wherein EIRNet is designed into a two-branch structure, features are extracted based on a high-efficiency RepConv module, edge perception features of images are designed and integrated, and finally a special module capable of integrating the two-branch features can be designed based on the Retinex theory, so that the network can cope with different low-quality image types. That is, the scheme can realize that after any low-quality image such as rain, fog, blur and the like is input into the preset image restoration network, the preset image restoration network can restore the corresponding high-quality image end to end. Further, when the input image is a normal image, the output image is not changed. Further, the output result of the preset image restoration network can be continuously used for scenes such as target detection and semantic segmentation, so that the performance of downstream tasks of the images such as target detection and semantic segmentation can be correspondingly improved.
In practical applications, the network backbone of the preset image repair network EIRNet adopts a classical encoder-decoder design similar to U-Net with two branches: the input of the first branch can be the image to be repaired, and the input of the second branch can be the edge features of the image to be repaired. Specifically, the EIRNet network may take the image to be repaired as input in the first branch region, downsample it to a certain dimension using a plurality of RepConv modules, upsample back to the original resolution of the image, and fuse the features of the encoder and decoder through jump connections. The EIRNet network may take the edge features of the image to be repaired as input in the second branch region, so as to learn edge-aware image features and provide rich texture and edge information for image repair. Similarly, the second branch region also adopts an encoder-decoder design to help fuse multi-scale image features. Then, a connection between the image features of the first branch region and the second branch region may be established; the fused features are continuously fused and upsampled to the final resolution along the decoder path of the first branch region, and the restored image is output.
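To summarize the wiring just described, here is a highly simplified structural sketch of the two-branch flow in PyTorch. Every layer, channel count and module name here is an illustrative stand-in (the real network uses RepConv modules, multiple scales, skip splice modules and EIR modules as detailed below):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv(cin, cout, stride=1):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1), nn.ReLU(inplace=True))

class TwoBranchSketch(nn.Module):
    def __init__(self, base: int = 16):
        super().__init__()
        # Branch 1 encodes the image to be repaired; branch 2 encodes its edge feature.
        self.img_enc = nn.ModuleList([conv(3, base), conv(base, 2 * base, 2)])
        self.edge_enc = nn.ModuleList([conv(1, base), conv(base, 2 * base, 2)])
        self.fuse = conv(4 * base, 2 * base)      # stand-in for the skip splice step
        self.head = nn.Conv2d(4 * base, 3, 3, 1, 1)

    def forward(self, image: torch.Tensor, edge: torch.Tensor) -> torch.Tensor:
        i0, e0 = self.img_enc[0](image), self.edge_enc[0](edge)
        i1, e1 = self.img_enc[1](i0), self.edge_enc[1](e0)
        a = self.fuse(torch.cat([i1, e1], dim=1))  # connect the two branches
        r = torch.sigmoid(a) * i1                  # Retinex-style fusion (R * I)
        up = F.interpolate(r, size=i0.shape[-2:], mode="bilinear", align_corners=False)
        return self.head(torch.cat([up, i0, e0], dim=1))  # decode to a repaired image
```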
This embodiment provides an image restoration method in which a gray image corresponding to the image to be repaired is obtained, and the edge features of the image to be repaired are obtained based on the gray image; the image to be repaired is taken as a first branch input and the edge features as a second branch input, and the two branch inputs are respectively fed into a preset image repair network in which a plurality of RepConv modules for extracting image features are arranged; the image to be repaired is then repaired based on the image itself and the edge features through the preset image repair network, and the repaired image is output. Image restoration methods in the prior art perform well on specific tasks but adapt poorly, with weak model generalization, when facing unknown degradation combinations or unseen degradation types. Because the image to be restored is here repaired based on both the image itself and its edge features through a preset image repair network provided with a plurality of RepConv modules for extracting image features, the technical problems that a prior-art image restoration model must be trained on a dataset of a specific type to address a single degradation type in a low-quality image, with weak model generalization ability and low practicability, are solved.
In the second embodiment of the present application, the same or similar content as in the first embodiment of the present application may be referred to the above description, and will not be repeated. On this basis, please refer to fig. 2, fig. 2 is a flow chart of a second embodiment of the image restoration method according to the present application.
In this embodiment, a skip splicing module is disposed in the preset image restoration network; step S30 includes: steps S301 to S304:
Step S301: and gradually downsampling the image to be repaired through a plurality of RepConv modules in a first branch of the preset image repairing network to obtain target image characteristics of a plurality of dimensions.
It should be appreciated that the target image features described above may be image features obtained after downsampling the image to be repaired. In practical application, referring to fig. 3, fig. 3 is an overall architecture diagram of the EIRNet network in the image restoration method according to the present application. As shown in fig. 3, for the first branch region of the preset image repair network EIRNet (e.g. the left side region of fig. 3), in the encoder stage the EIRNet network may receive the image S to be repaired as input and use a plurality of RepConv modules to downsample it step by step to obtain high-order semantic features, where the features at each level and resolution scale are respectively denoted L0, L1, L2, L3 and L4, with the spatial size progressively reduced and the channel count correspondingly expanded from L0 to L4 (down to 1/8 of the original size and up to 8 times the original channel count, as described below).
It is understood that a plurality of RepConv modules may be stacked at a certain feature level in this embodiment; similarly, a plurality of other modules, such as MobileOne modules or GhostNet modules, may be stacked to enhance the effect of feature extraction.
Step S302: and extracting the characteristics of the edge characteristics through a plurality of RepConv modules in a second branch of the preset image restoration network to obtain edge image characteristics of a plurality of dimensions.
It should be noted that the edge image features may be image features obtained after downsampling the edge features of the image to be repaired. In this embodiment, for the second branch region of the preset image repair network EIRNet (the right side region in fig. 3), the EIRNet network may receive the edge feature C of the image to be repaired as input and extract it more deeply, obtaining edge image features at the feature dimensions corresponding to each level.
It should be noted that the EIRNet designed in this solution adopts a structure similar to U-Net, in which the features can be downsampled to 1/8 of the original size and the number of channels can be expanded to 8 times. Similarly, if a better network effect is desired, a deeper network structure can be designed; for example, the features can be further downsampled to 1/16 of the original size, with the channel count expanded correspondingly.
Step S303: and splicing the target image features and the edge image features with different dimensions through the jump splicing module to obtain spliced image features.
It should be noted that the skip splice module may be a module for establishing a connection between the features of the encoder and the decoder. In this embodiment, feature stitching may be performed on the target image features and the edge image features of different dimensions by using the skip stitching module, so as to obtain stitched image features.
Specifically, the step S303 includes: bilinear interpolation processing is carried out on the first target image feature and the first edge image feature corresponding to the first dimension through the jump stitching module, obtaining the processed target image feature and the processed edge image feature; the jump stitching module then stitches the processed target image feature and the processed edge image feature with the second target image feature and the second edge image feature corresponding to a second dimension respectively, obtaining the stitched image features; wherein the first dimension is greater than the second dimension.
It should be noted that the first dimension and the second dimension may be any two of the feature dimensions described above, provided that the first dimension is greater than the second dimension. Accordingly, the first target image feature and the first edge image feature may be the target image feature and the edge image feature of the first dimension respectively, and the second target image feature and the second edge image feature may be those of the second dimension. For example, if the first dimension is the dimension of the L4 features, the second dimension should be the dimension of the L3 features; at this time, the first target image feature is the L4 feature in the left side of fig. 3, the first edge image feature is the L4 feature in the right side of fig. 3, the second target image feature is the L3 feature in the left side of fig. 3, and the second edge image feature is the L3 feature in the right side of fig. 3.
In a specific implementation, referring to fig. 4, fig. 4 is a schematic diagram of the skip splice module in the image restoration method according to the present application. As shown in fig. 4, taking the skip splice module C3 in fig. 3 as an example (i.e. n = 3 at this time), its inputs are the final L4 features of the encoder stage (i.e. the first target image feature and the first edge image feature corresponding to the first dimension) and the L3 features of the encoder stage (i.e. the second target image feature and the second edge image feature corresponding to the second dimension). First, bilinear interpolation is carried out on the L4 features to align their width and height dimensions with the L3 features; the result is spliced with the L3 features, a RepConv module carries out feature fusion on the spliced features, and the result, denoted A3 (i.e. the above stitched image feature), is output. After further RepConv module processing, A3 enters the skip splice module C2 together with the L2 feature of the encoder to perform a similar operation, outputting A2, and so on. The result output by the skip splicing module is used as an illumination perception factor and plays its role in the EIR module (namely, the subsequent image restoration module).
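As an illustration of this interpolate-splice-fuse step, the following is a minimal PyTorch sketch of a skip splice module. The module name SkipSplice and the channel arguments are assumptions for illustration, and a plain 3x3 convolution stands in for the RepConv fusion step (RepConv itself is sketched further below):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipSplice(nn.Module):
    def __init__(self, deep_ch: int, shallow_ch: int, out_ch: int):
        super().__init__()
        # Stand-in for the RepConv feature-fusion step.
        self.fuse = nn.Conv2d(deep_ch + shallow_ch, out_ch, 3, padding=1)

    def forward(self, deep: torch.Tensor, shallow: torch.Tensor) -> torch.Tensor:
        # Bilinear interpolation aligns the deeper feature (e.g. L4) with the
        # width/height of the shallower feature (e.g. L3).
        deep = F.interpolate(deep, size=shallow.shape[-2:], mode="bilinear",
                             align_corners=False)
        # Splice (concatenate) along channels, then fuse.
        return self.fuse(torch.cat([deep, shallow], dim=1))
```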
Step S304: and repairing the image to be repaired based on the spliced image features, and outputting the repaired image.
Further, an image restoration module is further arranged in the preset image restoration network; the step S304 includes:
step S304a: and activating the spliced image features based on a preset activation function by the image restoration module to obtain activated image features.
It should be appreciated that the image restoration module described above may be a module for learning edge-aware image information. In this embodiment, an EIR module that fuses the two branch features of EIRNet may be designed based on the Retinex theory and the attention mechanism, so that the connection between the image features of the first branch region and the second branch region in the EIRNet network is efficiently established; the fused features are then continuously fused and up-sampled to the final resolution along the decoder path of the first branch region, and the repaired image is output.
The Retinex theory can be understood as follows: the color of an object seen by the human eye is not determined by the object alone but arises from the interaction of light with the object; the human eye observes objects as having different colors because objects reflect light of different wavelengths with different capacities, largely independent of the illumination intensity at their surfaces. This theory holds that the reflective nature of the object itself determines its color, thus placing the point of interest on how to eliminate the uncertain illumination properties in an image. Where the illumination component is denoted by I and the reflection component by R, the image S may be expressed as S = R · I.
In practical application, referring to fig. 5, fig. 5 is a schematic diagram of the EIR module in the image restoration method according to the present application. As shown in fig. 5, taking the EIR block E3 in fig. 3 as an example (i.e. where n = 3), its inputs are the final L4 feature of the encoder stage, the L3 feature of the encoder stage, and the stitched image feature A3 output by the edge-aware branch. First, a preset activation function (such as the Sigmoid activation function) is used to activate the stitched image feature A3, obtaining the activated image feature.
Step S304b: and carrying out bilinear interpolation processing on the first target image feature and the first edge image feature corresponding to the first dimension through the image restoration module to obtain the processed first target image feature and the processed first edge image feature.
In this embodiment, as shown in fig. 5, the L4 features may be bilinearly interpolated to align their width and height dimensions with the L3 features, obtaining the processed first target image feature aligned with the L3 features and the processed first edge image feature aligned with the L3 features.
Step S304c: and obtaining, by the image restoration module, a fused image feature based on the activated image feature, the second target image feature, and the second edge image feature.
It should be noted that, after the activated image feature is obtained, it may be multiplied by the L3 feature of the encoder stage to obtain the intermediate result T3. This operation is intended to mimic the operation S = R · I in Retinex theory: the activated A3 feature serves as the reflection component and the L3 feature of the encoder stage as the illumination component, and the optimal scheme for restoring the image is learned. From another point of view, the operation performs an attention operation based on the edge-aware information provided by A3 to obtain the intermediate result T3 (i.e. the above-mentioned fused image feature).
Step S304d: and splicing the fusion image feature, the processed first target image feature and the processed first edge image feature through the image restoration module to obtain a spliced first image feature.
It should be appreciated that after the fused image feature T3, the processed first target image feature and the processed first edge image feature are obtained, the processed first target image feature and the processed first edge image feature may be stitched with the T3 feature to obtain the stitched first image feature.
Step S304e: and carrying out feature fusion on the spliced first image features through the RepConv module to obtain intermediate image features.
Step S304f: and extracting the edge perception features of the intermediate image features and the activated image features by the RepConv module to obtain output image features.
Step S304g: and performing image size reduction based on the output image features by the RepConv module to output a repaired image.
In a specific implementation, taking the E3 module in the figure as an example, its inputs are the final L4 feature of the encoder stage, the L3 feature of the encoder stage, and the stitched image feature A3 output by the edge-aware branch. First, the Sigmoid activation function is used to activate A3, and the result is multiplied by the L3 feature of the encoder stage to obtain the fused image feature T3. Next, bilinear interpolation is performed on the L4 features to align their width and height dimensions with the L3 features; they are spliced with the fused image feature T3 to obtain the spliced first image feature, a RepConv module carries out feature fusion on it, and the intermediate image feature M3 is output. Finally, the attention mechanism is used again to extract the edge perception features of the intermediate image feature M3 and the activated A3 feature, obtaining the output image feature F3. After F3 is processed by a RepConv module, it enters the EIR module E2 together with the L2 feature of the encoder and the A2 feature output by the edge perception branch, where a similar operation is performed. And so on; after the computation of EIR module E1 is completed, a final RepConv computation at level 0 restores the original input size of the image, and the output, denoted Ŝ, is the restored image.
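To make the data flow concrete, here is a minimal PyTorch sketch of an EIR block following steps S304a to S304f. Plain convolutions stand in for the RepConv steps, the An feature is assumed to have the same shape as the Ln feature so that the Retinex-style multiplication is elementwise, and the exact form of the second attention step is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EIRBlock(nn.Module):
    """Sketch of the EIR module E_n (Retinex-inspired fusion)."""
    def __init__(self, deep_img_ch: int, deep_edge_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.fuse = nn.Conv2d(skip_ch + deep_img_ch + deep_edge_ch, out_ch, 3, padding=1)
        self.att = nn.Conv2d(skip_ch, out_ch, 1)  # projects A_n for the second attention step

    def forward(self, l4_img, l4_edge, l3, a):
        r = torch.sigmoid(a)                       # S304a: activated feature ("reflection" R)
        t = r * l3                                 # S304c: T_n = R * I, with L_n as "illumination"
        size = l3.shape[-2:]                       # S304b: align deep features with L_n
        u_img = F.interpolate(l4_img, size=size, mode="bilinear", align_corners=False)
        u_edge = F.interpolate(l4_edge, size=size, mode="bilinear", align_corners=False)
        m = self.fuse(torch.cat([t, u_img, u_edge], dim=1))  # S304d/e: splice + fuse -> M_n
        # S304f: second edge-aware attention between M_n and the activated A_n;
        # the exact form of this step is an assumption.
        return m * torch.sigmoid(self.att(a))
```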
It should be noted that the re-parameterization technique has proved its effectiveness in convolutional neural networks such as MobileOne and GhostNet V2. Inspired by this, the present scheme designs a module named RepConv as an efficient feature extraction module, used widely throughout the EIRNet design. Referring to fig. 6, fig. 6 is a design diagram of the RepConv module in the image restoration method according to the present application. As shown in fig. 6, given input data, the RepConv module performs a channel-by-channel convolution followed by a point-by-point convolution; a shortcut is added to the module, so the input and the output of the point-by-point convolution are added to produce the final result. Referring to fig. 7, fig. 7 is an internal design diagram of the channel-by-channel convolution in the image restoration method according to the present application. As shown in fig. 7, the channel-by-channel convolution contains a group of parallel convolution layers with three different kernel sizes, 1x1, 3x3 and 5x5, which obtain features at different scales; these multi-scale features are then added to output the result of the channel-by-channel convolution. During inference, the re-parameterization technique can fuse this group of parallel convolution layers into one, improving inference speed without affecting the model's effect.
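The following is a minimal PyTorch sketch of a RepConv module matching this description: three parallel depthwise ("channel-by-channel") convolutions with 1x1, 3x3 and 5x5 kernels whose outputs are summed, followed by a pointwise convolution with a shortcut, plus a helper that re-parameterizes the parallel depthwise branches into a single 5x5 depthwise convolution for inference. Batch-norm handling is omitted as a simplification relative to typical re-parameterized designs:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepConv(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Parallel depthwise convolutions at three kernel sizes.
        self.dw = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in (1, 3, 5)
        ])
        self.pw = nn.Conv2d(channels, channels, 1)  # pointwise convolution

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = sum(conv(x) for conv in self.dw)  # multi-scale depthwise features, summed
        return x + self.pw(y)                 # pointwise conv + shortcut

    @torch.no_grad()
    def reparameterize(self) -> nn.Conv2d:
        """Fuse the parallel depthwise branches into one 5x5 depthwise conv."""
        fused = nn.Conv2d(self.dw[2].in_channels, self.dw[2].out_channels, 5,
                          padding=2, groups=self.dw[2].groups)
        w = torch.zeros_like(fused.weight)
        b = torch.zeros_like(fused.bias)
        for conv, k in zip(self.dw, (1, 3, 5)):
            pad = (5 - k) // 2
            w += F.pad(conv.weight, [pad] * 4)  # center smaller kernels inside 5x5
            b += conv.bias
        fused.weight.copy_(w)
        fused.bias.copy_(b)
        return fused
```

At inference time, the three branches in self.dw can be replaced by the single fused convolution returned by reparameterize(), which produces identical outputs while running as one layer.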
This embodiment performs step-by-step downsampling of the image to be repaired through a plurality of RepConv modules in the first branch of the preset image repair network to obtain target image features of multiple dimensions; extracts the edge features through a plurality of RepConv modules in the second branch to obtain edge image features of multiple dimensions; splices the target image features and edge image features of different dimensions through the skip splice module to obtain stitched image features; and repairs the image to be repaired based on the stitched image features, outputting the repaired image. Fusing the features of the encoder and the decoder through jump connections provides rich texture and edge information for image repair, thereby improving restoration precision.
In the third embodiment of the present application, the same or similar content as the above-described embodiments may be referred to the above description, and will not be repeated herein. On this basis, please refer to fig. 8, fig. 8 is a flowchart of a third embodiment of the image restoration method according to the present application.
In this embodiment, before step S10, the image restoration method further includes steps S01 to S03:
step S01: and determining structural similarity loss, perception loss and color loss corresponding to the initial image restoration network based on the first training image and the second training image.
It should be appreciated that the first training image and the second training image may be any images used to train the initial image restoration network.
Further, the step of determining the structural similarity loss corresponding to the initial image restoration network based on the first training image and the second training image includes:
Step S011: and constructing an image brightness comparison function based on the first image mean value corresponding to the first training image and the second image mean value corresponding to the second training image.
It can be understood that the first image mean is the mean of the first training image; the second image mean value is the mean value of the second training image.
It should be noted that the above image brightness comparison function may be a function for comparing the brightness of the first training image and the second training image, where the image brightness comparison function in this embodiment may be expressed as:

$$l(x,y)=\frac{2\mu_x\mu_y+C_1}{\mu_x^2+\mu_y^2+C_1}$$

where $l(x,y)$ represents the image brightness comparison function, $x$ and $y$ represent the first training image and the second training image respectively, $\mu_x$ and $\mu_y$ represent the first image mean and the second image mean respectively, and $C_1$ is a constant calculated as follows:

$$C_1=(K_1L)^2$$

where $L$ represents the number of image gray levels ($L=255$ for an 8-bit gray-scale image), and $K_1$ and $K_2$ are two constants well below 1, typically 0.01 and 0.03 respectively.
Step S012: and constructing an image contrast comparison function based on the first image variance corresponding to the first training image and the second image variance corresponding to the second training image.
It can be appreciated that the first image variance is the variance of the first training image; the second image variance is a variance of the second training image.
It should be noted that the above image contrast comparison function may be a function for comparing the contrast of the first training image and the second training image, where the image contrast comparison function in this embodiment may be expressed as:

$$c(x,y)=\frac{2\sigma_x\sigma_y+C_2}{\sigma_x^2+\sigma_y^2+C_2}$$

where $c(x,y)$ represents the image contrast comparison function, $\sigma_x^2$ and $\sigma_y^2$ represent the first image variance and the second image variance respectively, $\sigma_x$ and $\sigma_y$ represent the standard deviations of the pixel values of the first training image $x$ and the second training image $y$ respectively, and $C_2=(K_2L)^2$ is a constant calculated as described above.
Step S013: and constructing an image structure comparison function based on the first image covariance corresponding to the first training image and the second image covariance corresponding to the second training image.
It can be appreciated that the first image covariance is the covariance of the first training image; the second image covariance is covariance of the second training image.
It should be noted that the above image structure comparison function may be a function for comparing the structures of the first training image and the second training image, where the image structure comparison function in this embodiment may be expressed as:

$$s(x,y)=\frac{\sigma_{xy}+C_3}{\sigma_x\sigma_y+C_3}$$

where $s(x,y)$ represents the image structure comparison function, $\sigma_{xy}$ represents the covariance between the first training image and the second training image (i.e. the first image covariance and the second image covariance), and $C_3$ is a constant calculated analogously to the constants above.
Step S014: and determining the structural similarity loss corresponding to the initial image restoration network based on the image brightness comparison function, the image contrast comparison function and the image structure comparison function.
In practical applications, after determining the image brightness comparison function, the image contrast comparison function and the image structure comparison function, the structural similarity SSIM corresponding to the initial image restoration network may be expressed as:

$$SSIM(x,y)=\left[l(x,y)\right]^{\alpha}\cdot\left[c(x,y)\right]^{\beta}\cdot\left[s(x,y)\right]^{\gamma}$$

where $\alpha$, $\beta$ and $\gamma$ represent the coefficients (exponents) of the image brightness comparison function, the image contrast comparison function and the image structure comparison function, respectively.

Assuming that $\alpha=\beta=\gamma=1$ and $C_3=C_2/2$, it can further be obtained that:

$$SSIM(x,y)=\frac{\left(2\mu_x\mu_y+C_1\right)\left(2\sigma_{xy}+C_2\right)}{\left(\mu_x^2+\mu_y^2+C_1\right)\left(\sigma_x^2+\sigma_y^2+C_2\right)}$$

In actual calculation, a window of a certain size is generally selected for sliding computation; the SSIM value of each region is calculated and finally averaged to serve as the global SSIM value. The SSIM value lies between 0 and 1, and the closer it is to 1, the more similar the two images are.
Finally, the similarity loss can be expressed as:

$$\mathcal{L}_{sim}=\lambda_1\mathcal{L}_{ssim}+\lambda_2\mathcal{L}_{2}$$

wherein:

$$\mathcal{L}_{ssim}=1-SSIM(\hat{S},Y)$$

where $\mathcal{L}_{sim}$ denotes the similarity loss, $\mathcal{L}_{ssim}$ and $\mathcal{L}_{2}$ both represent loss functions, and $\lambda_1$ and $\lambda_2$ respectively represent the weights of $\mathcal{L}_{ssim}$ and $\mathcal{L}_{2}$, which can be adjusted according to the actual training situation; in this embodiment $\lambda_1$ is set to 3 and $\lambda_2$ is set to 0.3, and $\hat{S}$ and $Y$ represent the repaired image and the label image used for training, respectively.
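The following is a compact PyTorch sketch of the windowed SSIM computation and the weighted similarity loss described above. A uniform box window approximates the sliding-window statistics, L defaults to 1.0 for images normalized to [0, 1], and the second loss term is shown as an L1 term purely as an assumption (the description fixes only two weighted terms with weights 3 and 0.3):

```python
import torch
import torch.nn.functional as F

def ssim(x: torch.Tensor, y: torch.Tensor, win: int = 11, L: float = 1.0):
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    # Local means, variances and covariance via a sliding box window.
    mu_x = F.avg_pool2d(x, win, 1, win // 2)
    mu_y = F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean()  # average per-window values to get the global SSIM

def similarity_loss(pred, target, lam1=3.0, lam2=0.3):
    l_ssim = 1.0 - ssim(pred, target)
    l_aux = F.l1_loss(pred, target)  # assumed form of the second term
    return lam1 * l_ssim + lam2 * l_aux
```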
Step S02: and determining a loss function corresponding to the initial image restoration network based on the structural similarity loss, the perceived loss and the color loss.
It should be noted that the perceptual loss aims to make the generated image approach actual visual perception; the perceptual distance between images is measured using the learned perceptual image patch similarity (LPIPS), calculated as:

$$\mathcal{L}_{lpips}(x,y)=\sum_{l}\frac{1}{H_lW_l}\sum_{h,w}\left\|w_l\odot\left(\hat{\phi}_{hw}^{l}(x)-\hat{\phi}_{hw}^{l}(y)\right)\right\|_2^2$$

where $\mathcal{L}_{lpips}$ represents the learned perceptual image patch similarity, $H_l$ and $W_l$ represent the height and width of the features at layer $l$, $x$ and $y$ represent the first training image and the second training image respectively, and $\hat{\phi}^{l}$ represents the unit-normalized features of some layer obtained by passing the two images through a pre-trained neural network (here a VGG-16 network is selected). The activations are scaled channel-wise by the vector $w_l$ and the $\ell_2$ distances are computed; the results are finally averaged spatially and summed over the layers.
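In practice this perceptual distance is commonly computed with the open-source lpips package, whose VGG backbone matches the VGG-16 choice above; using this particular package is an assumption, not something the description mandates:

```python
import torch
import lpips

loss_fn = lpips.LPIPS(net="vgg")  # VGG-16 backbone, as selected above

# pred and target are (B, 3, H, W) tensors scaled to [-1, 1], the range the
# lpips package expects; random tensors are used here purely as placeholders.
pred = torch.rand(1, 3, 64, 64) * 2 - 1
target = torch.rand(1, 3, 64, 64) * 2 - 1
perceptual = loss_fn(pred, target).mean()
```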
It should be noted that, in order to address the color distortion that may occur in the restoration result, a color loss is introduced for correction. The color loss is divided into two parts; the first part uses a cosine similarity loss:

$$\mathcal{L}_{cos}=1-\frac{1}{WH}\sum_{p}\frac{\hat{S}_p\cdot Y_p}{\left\|\hat{S}_p\right\|\cdot\left\|Y_p\right\|}$$

where $\mathcal{L}_{cos}$ represents the cosine similarity loss, $\hat{S}$ represents the repaired image whose color distortion is to be corrected, $Y$ represents the label image, $p$ indexes each pixel point in the image, and $W$ and $H$ represent the width and height of the image respectively.
The second part processes the images with Gaussian blur and then calculates the difference between the two blurred images, in the following form:

$$\mathcal{L}_{blur}=\sum_{i,j}\left\|\hat{S}_{b}(i,j)-Y_{b}(i,j)\right\|^2,\qquad G(k,l)=\frac{1}{2\pi\sigma^2}e^{-\frac{k^2+l^2}{2\sigma^2}}$$

where $\mathcal{L}_{blur}$ represents the Gaussian blur loss, $\hat{S}_{b}$ and $Y_{b}$ respectively represent the repaired image and the training label image after Gaussian blurring, $G$ is the Gaussian kernel function, $\sigma$ represents the parameter controlling the shape of the Gaussian distribution in the Gaussian kernel function, $k$ and $l$ respectively represent the row and column positions of the values inside the Gaussian kernel, and $i$ and $j$ index each pixel in the image.
Eventually, the color loss $\mathcal{L}_{color}$ is defined as follows:

$$\mathcal{L}_{color}=\lambda_{cos}\mathcal{L}_{cos}+\lambda_{blur}\mathcal{L}_{blur}$$

where $\lambda_{cos}$ and $\lambda_{blur}$ represent the coefficients of the cosine similarity loss and the Gaussian blur loss respectively, which can be adjusted according to the actual training situation; here $\lambda_{cos}$ is set to 5 and $\lambda_{blur}$ is set to 1.
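A minimal PyTorch sketch of the two-part color loss, with the stated weights of 5 and 1; the blur kernel size, the sigma value, and the use of torchvision's gaussian_blur together with a mean-squared difference are illustrative assumptions:

```python
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

def color_loss(pred, target, lam_cos=5.0, lam_blur=1.0, sigma=3.0):
    # Part 1: cosine similarity between corresponding pixels' color vectors.
    cos = F.cosine_similarity(pred, target, dim=1)  # (B, H, W)
    l_cos = (1.0 - cos).mean()
    # Part 2: difference between the Gaussian-blurred images, which compares
    # low-frequency color content while ignoring fine texture.
    blur_p = gaussian_blur(pred, kernel_size=21, sigma=sigma)
    blur_t = gaussian_blur(target, kernel_size=21, sigma=sigma)
    l_blur = F.mse_loss(blur_p, blur_t)
    return lam_cos * l_cos + lam_blur * l_blur
```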
In this embodiment, after the structural similarity loss, the perceptual loss and the color loss corresponding to the initial image restoration network are obtained, a final loss function $\mathcal{L}$ may be formed based on them, where the loss function $\mathcal{L}$ is defined as follows:

$$\mathcal{L}=\mathcal{L}_{sim}+\mathcal{L}_{lpips}+\mathcal{L}_{color}$$
Step S03: training the initial image restoration network through the loss function, and obtaining a preset image restoration network when training is completed.
In practical application, this embodiment can train using open-source synthetic data and real captured data from fields such as deraining, dehazing and deblurring, expanded with image enhancement methods such as random flipping, random rotation, random scaling and random cropping; the input image is padded during training, the width and height of the image are unified to multiples of 16, and the image is normalized to [0, 1]. During evaluation, the mean square error, structural similarity, peak signal-to-noise ratio and perceptual similarity between the restored image and the label are calculated, and the model that performs best on the validation set during training is saved. In addition, during model inference, a low-quality image with rain, fog, blur or the like, or a normal image, can be given; the width and height of the image are unified to multiples of 16, the image is normalized to [0, 1], and the model performs inference. During inference, the RepConv modules in EIRNet can use the re-parameterization technique to fuse the multi-branch structure in the module into a single-branch structure, ensuring the inference speed and effect of the model so that a repaired image can be obtained quickly; the output can further be used in scenarios such as object detection and semantic segmentation, ensuring the effect of the corresponding downstream tasks.
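A small sketch of the described pre-processing, assuming PyTorch tensors; the reflect padding mode is an illustrative choice (the description only requires padding to multiples of 16 and normalizing to [0, 1]):

```python
import torch
import torch.nn.functional as F

def preprocess(img: torch.Tensor):
    """img: (B, 3, H, W) with values in [0, 255]."""
    img = img / 255.0                         # normalize to [0, 1]
    h, w = img.shape[-2:]
    pad_h, pad_w = (-h) % 16, (-w) % 16       # amount needed to reach multiples of 16
    img = F.pad(img, (0, pad_w, 0, pad_h), mode="reflect")
    return img, (h, w)  # keep the original size so the output can be cropped back
```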
In the embodiment, determining structural similarity loss, perception loss and color loss corresponding to an initial image restoration network based on a first training image and a second training image is disclosed; determining a loss function corresponding to the initial image restoration network based on the structural similarity loss, the perceived loss and the color loss; the initial image restoration network is trained through the loss function, and when training is completed, a preset image restoration network is obtained, so that the image restoration network can learn the relationship between the input low-quality image and the high-quality image, and the corresponding high-quality image can be restored end to end.
It should be noted that the foregoing examples are only for understanding the present application, and are not meant to limit the image restoration method of the present application, and more forms of simple transformation based on the technical concept are all within the scope of the present application.
The present application provides an image restoration apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the image restoration method in the first embodiment.
Referring now to fig. 9, a schematic diagram of an image restoration device suitable for use in implementing embodiments of the present application is shown. The image restoration device in the embodiments of the present application may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (tablet computer), a PMP (Portable Media Player), an in-vehicle terminal (e.g. an in-vehicle navigation terminal), and the like, and fixed terminals such as a digital TV, a desktop computer, and the like. The image restoration device illustrated in fig. 9 is merely an example and should not impose any limitation on the functionality and scope of use of the embodiments of the present application.
As shown in fig. 9, the image repair apparatus may include a processing device 1001 (e.g. a central processing unit, a graphics processor, etc.), which may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 1002 or a program loaded from a storage device 1003 into a Random Access Memory (RAM) 1004. In the RAM 1004, various programs and data required for the operation of the image restoration device are also stored. The processing device 1001, the ROM 1002, and the RAM 1004 are connected to each other by a bus 1005. An input/output (I/O) interface 1006 is also connected to the bus. In general, the following systems may be connected to the I/O interface 1006: input devices 1007 including, for example, a touch screen, touchpad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, and the like; output devices 1008 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 1003 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 1009. The communication device 1009 may allow the image restoration device to communicate wirelessly or by wire with other devices to exchange data. While an image restoration device having various systems is shown in the figure, it should be understood that not all of the illustrated systems are required to be implemented or provided; more or fewer systems may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 1009, installed from the storage device 1003, or installed from the ROM 1002. When the computer program is executed by the processing device 1001, it performs the above-described functions defined in the method of the embodiments of the present application.
The image restoration device provided by the present application adopts the image restoration method of the above embodiment and therefore solves the same technical problem. Compared with the prior art, its beneficial effects are the same as those of the image restoration method provided by the above embodiment, and its other technical features are the same as those disclosed in the method of the previous embodiment, which are not repeated here.
It is to be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative of the present application and does not limit it; any person skilled in the art can readily conceive of variations or substitutions within the technical scope disclosed herein, and such variations or substitutions shall fall within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
The present application provides a computer-readable storage medium having computer-readable program instructions (i.e., a computer program) stored thereon for performing the image restoration method of the above-described embodiments.
The computer-readable storage medium provided by the present application may be, for example, a USB flash drive, but is not limited thereto; it may be any electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to wire, optical fiber cable, radio frequency (RF), or any suitable combination of the foregoing.
The above-described computer-readable storage medium may be contained in an image restoration device; or may exist alone without being assembled into the image restoration device.
The computer-readable storage medium carries one or more programs that, when executed by the image restoration device, cause the image restoration device to: acquire a grayscale image corresponding to an image to be repaired, and acquire edge features of the image to be repaired based on the grayscale image; take the image to be repaired as a first branch input and the edge features as a second branch input, and feed the two inputs respectively into a preset image restoration network, in which a plurality of RepConv modules are arranged for extracting image features; and repair the image to be repaired based on the image to be repaired and the edge features through the preset image restoration network, outputting the repaired image.
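The patent does not describe the internal structure of the RepConv modules, but the name conventionally denotes a structurally re-parameterized convolution in the style of RepVGG: a multi-branch block (3x3 conv, 1x1 conv and identity) at training time that is algebraically fused into a single 3x3 convolution for inference, reducing cost without changing the computed function. A minimal sketch under that assumption follows; real implementations usually also fold per-branch batch normalization into the fused kernel, which is omitted here for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepConv(nn.Module):
    """Training-time multi-branch block, fusable to one 3x3 conv (assumed RepVGG style)."""

    def __init__(self, channels):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1, bias=True)
        self.conv1 = nn.Conv2d(channels, channels, 1, bias=True)
        self.fused = None  # populated by fuse() for inference

    def forward(self, x):
        if self.fused is not None:
            return F.relu(self.fused(x))
        # 3x3, 1x1 and identity branches during training.
        return F.relu(self.conv3(x) + self.conv1(x) + x)

    @torch.no_grad()
    def fuse(self):
        c = self.conv3.out_channels
        # Pad the 1x1 kernel to 3x3 and add an identity kernel, so that one
        # convolution reproduces the sum of the three training-time branches.
        w = self.conv3.weight.clone()
        w += F.pad(self.conv1.weight, [1, 1, 1, 1])
        eye = torch.zeros_like(w)
        for i in range(c):
            eye[i, i, 1, 1] = 1.0
        self.fused = nn.Conv2d(c, c, 3, padding=1, bias=True)
        self.fused.weight.copy_(w + eye)
        self.fused.bias.copy_(self.conv3.bias + self.conv1.bias)
```

Calling `fuse()` after training leaves the outputs unchanged while replacing three branches with a single convolution, which is the usual motivation for such modules in restoration networks.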
Computer program code for carrying out the operations of the present application may be written in one programming language or any combination of several, including object-oriented languages such as Java, Smalltalk and C++, as well as conventional procedural languages such as the "C" language. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, it may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. In some cases, the name of a module does not constitute a limitation on the module itself.
The readable storage medium provided by the present application is a computer-readable storage medium storing computer-readable program instructions (i.e., a computer program) for executing the image restoration method described above. It can therefore solve the technical problems that an image restoration model in the prior art must be trained on a data set of a specific type to address a single degradation type in low-quality images, leaving the model with weak generalization ability and limited practicality. Compared with the prior art, the beneficial effects of the computer-readable storage medium are the same as those of the image restoration method provided by the above embodiment and are not repeated here.
The application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the image restoration method as described above.
The computer program product provided by the present application can solve the technical problems that an image restoration model in the prior art must be trained on a data set of a specific type to address a single degradation type in low-quality images, leaving the model with weak generalization ability and limited practicality. Compared with the prior art, its beneficial effects are the same as those of the image restoration method provided by the above embodiment and are not repeated here.
The foregoing describes only some embodiments of the present application and does not limit its patent scope; any equivalent structural changes made using the description and drawings under the technical concept of the present application, and any direct or indirect application in other related technical fields, are likewise included within the protection scope of the present application.

Claims (10)

1. A method of image restoration, the method comprising:
acquiring a grayscale image corresponding to an image to be repaired, and acquiring edge features of the image to be repaired based on the grayscale image;
taking the image to be repaired as a first branch input and the edge features as a second branch input, and inputting the two inputs respectively into a preset image restoration network, wherein a plurality of RepConv modules are arranged in the preset image restoration network, the RepConv modules being used for extracting image features;
repairing the image to be repaired based on the image to be repaired and the edge features through the preset image restoration network, and outputting the repaired image.
2. The method of claim 1, wherein the step of acquiring edge features of the image to be repaired based on the grayscale image comprises:
performing a horizontal gradient operation on the grayscale image to obtain a first edge feature;
performing a vertical gradient operation on the grayscale image to obtain a second edge feature;
and acquiring the edge features of the image to be repaired based on the grayscale image, the first edge feature and the second edge feature.
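As an illustration of the gradient operations recited in this claim, the horizontal and vertical gradients of a grayscale image are commonly computed with Sobel kernels. The sketch below is one plausible reading; the exact operators and the way the grayscale map and the two gradient maps are combined are assumptions, since the claim only recites the three ingredients.

```python
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def edge_features(gray):
    """gray: [B, 1, H, W] grayscale image -> [B, 3, H, W] edge features.

    Concatenating the grayscale map with its two gradient maps is one
    plausible reading of combining "the grayscale image, the first edge
    feature and the second edge feature".
    """
    gx = F.conv2d(gray, SOBEL_X, padding=1)  # first edge feature (horizontal gradient)
    gy = F.conv2d(gray, SOBEL_Y, padding=1)  # second edge feature (vertical gradient)
    return torch.cat([gray, gx, gy], dim=1)
```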
3. The method of claim 1, wherein a skip splicing module is provided in the preset image restoration network; the step of repairing the image to be repaired based on the image to be repaired and the edge features through the preset image restoration network and outputting the repaired image comprises:
downsampling the image to be repaired step by step through a plurality of RepConv modules in a first branch of the preset image restoration network to obtain target image features at a plurality of scales;
extracting features from the edge features through a plurality of RepConv modules in a second branch of the preset image restoration network to obtain edge image features at a plurality of scales;
splicing the target image features and the edge image features of different scales through the skip splicing module to obtain spliced image features;
and repairing the image to be repaired based on the spliced image features, and outputting the repaired image.
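A shape-consistent sketch of how the two branches of claim 3 (and the bilinear skip splice elaborated in claim 4) could be arranged is given below. The channel widths, the number of scales, strided convolutions for downsampling, and the three-channel edge input (grayscale plus two gradient maps, as in the earlier sketch) are all assumptions; `rep_conv` is a plain stand-in for the RepConv block sketched above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rep_conv(c):
    # Stand-in for the RepConv block sketched earlier.
    return nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU())

class TwoBranchEncoder(nn.Module):
    """Image branch downsampled step by step; edge branch kept in lockstep."""

    def __init__(self, widths=(16, 32, 64)):
        super().__init__()
        self.img_in = nn.Conv2d(3, widths[0], 3, padding=1)
        self.edge_in = nn.Conv2d(3, widths[0], 3, padding=1)
        self.img_blocks = nn.ModuleList(rep_conv(w) for w in widths)
        self.edge_blocks = nn.ModuleList(rep_conv(w) for w in widths)
        self.img_down = nn.ModuleList(
            nn.Conv2d(widths[i], widths[i + 1], 3, stride=2, padding=1)
            for i in range(len(widths) - 1))
        self.edge_down = nn.ModuleList(
            nn.Conv2d(widths[i], widths[i + 1], 3, stride=2, padding=1)
            for i in range(len(widths) - 1))

    def forward(self, image, edge):
        img_feats, edge_feats = [], []
        fi, fe = self.img_in(image), self.edge_in(edge)
        for i in range(len(self.img_blocks)):
            fi, fe = self.img_blocks[i](fi), self.edge_blocks[i](fe)
            img_feats.append(fi)   # target image features at scale i
            edge_feats.append(fe)  # edge image features at scale i
            if i < len(self.img_down):
                fi, fe = self.img_down[i](fi), self.edge_down[i](fe)
        return img_feats, edge_feats

def skip_splice(feat_large, feat_small):
    # Bilinearly resize larger-scale features to the smaller scale, then concatenate.
    resized = F.interpolate(feat_large, size=feat_small.shape[2:],
                            mode="bilinear", align_corners=False)
    return torch.cat([resized, feat_small], dim=1)
```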
4. The method of claim 3, wherein the step of splicing, through the skip splicing module, the target image features and the edge image features of different scales to obtain the spliced image features comprises:
performing bilinear interpolation, through the skip splicing module, on a first target image feature and a first edge image feature corresponding to a first scale to obtain a processed target image feature and a processed edge image feature;
splicing, through the skip splicing module, the processed target image feature and the processed edge image feature with a second target image feature and a second edge image feature corresponding to a second scale, respectively, to obtain the spliced image features; wherein the first scale is larger than the second scale.
5. The method of claim 4, wherein an image restoration module is further provided in the preset image restoration network; the step of repairing the image to be repaired based on the spliced image features and outputting the repaired image comprises:
activating the spliced image features based on a preset activation function through the image restoration module to obtain activated image features;
performing bilinear interpolation, through the image restoration module, on the first target image feature and the first edge image feature corresponding to the first scale to obtain a processed first target image feature and a processed first edge image feature;
obtaining, through the image restoration module, fused image features based on the activated image features, the second target image feature and the second edge image feature;
splicing the fused image features, the processed first target image feature and the processed first edge image feature through the image restoration module to obtain spliced first image features;
performing feature fusion on the spliced first image features through the RepConv module to obtain intermediate image features;
extracting edge-aware features from the intermediate image features and the activated image features through the RepConv module to obtain output image features;
and restoring the image size based on the output image features through the RepConv module to output the repaired image.
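Claim 5 chains several operations whose channel bookkeeping the claims leave open. The loose sketch below fixes one shape-consistent reading: a uniform channel width, GELU as the "preset activation function", addition for the fusion step, and a final bilinear resize back to the input resolution are all assumptions, and `rep_conv` again stands in for the RepConv block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rep_conv(c):
    # Stand-in for the RepConv block sketched earlier.
    return nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU())

class RestorationHead(nn.Module):
    """Loose sketch of the image restoration module recited in claim 5."""

    def __init__(self, c=32):
        super().__init__()
        self.act = nn.GELU()                # assumed "preset activation function"
        self.proj = nn.Conv2d(3 * c, c, 1)  # shrink spliced features back to width c
        self.fuse = rep_conv(c)             # feature fusion -> intermediate features
        self.edge_aware = rep_conv(c)       # edge-aware feature extraction
        self.to_rgb = nn.Conv2d(c, 3, 3, padding=1)

    def forward(self, spliced, img_small, edge_small, img_large, edge_large, out_size):
        a = self.act(spliced)                            # activated image features
        size = a.shape[2:]
        # Bilinear processing of the larger-scale feature pair.
        img_l = F.interpolate(img_large, size=size, mode="bilinear", align_corners=False)
        edge_l = F.interpolate(edge_large, size=size, mode="bilinear", align_corners=False)
        fused = a + img_small + edge_small               # fused image features
        x = torch.cat([fused, img_l, edge_l], dim=1)     # spliced first image features
        x = self.fuse(self.proj(x))                      # intermediate image features
        x = self.edge_aware(x + a)                       # output image features
        # Restore the original image size and map back to RGB.
        x = F.interpolate(x, size=out_size, mode="bilinear", align_corners=False)
        return self.to_rgb(x)
```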
6. The method according to any one of claims 1 to 5, wherein, before the step of acquiring a grayscale image corresponding to an image to be repaired and acquiring edge features of the image to be repaired based on the grayscale image, the method further comprises:
determining the structural similarity loss, the perceptual loss and the color loss corresponding to an initial image restoration network based on a first training image and a second training image;
determining a loss function corresponding to the initial image restoration network based on the structural similarity loss, the perceptual loss and the color loss;
and training the initial image restoration network through the loss function, obtaining the preset image restoration network when training is completed.
7. The method of claim 6, wherein the step of determining the structural similarity loss corresponding to the initial image restoration network based on the first training image and the second training image comprises:
constructing an image luminance comparison function based on a first image mean corresponding to the first training image and a second image mean corresponding to the second training image;
constructing an image contrast comparison function based on a first image variance corresponding to the first training image and a second image variance corresponding to the second training image;
constructing an image structure comparison function based on the image covariance between the first training image and the second training image;
and determining the structural similarity loss corresponding to the initial image restoration network based on the image luminance comparison function, the image contrast comparison function and the image structure comparison function.
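These three comparison functions match the standard SSIM decomposition of Wang et al.; writing mu, sigma^2 and sigma_xy for the means, variances and covariance of the two training images, a conventional formulation (the constants are stabilizers with customary values, not taken from the patent) is:

```latex
l(x,y)=\frac{2\mu_x\mu_y+C_1}{\mu_x^2+\mu_y^2+C_1},\qquad
c(x,y)=\frac{2\sigma_x\sigma_y+C_2}{\sigma_x^2+\sigma_y^2+C_2},\qquad
s(x,y)=\frac{\sigma_{xy}+C_3}{\sigma_x\sigma_y+C_3}

\mathrm{SSIM}(x,y)=l(x,y)\,c(x,y)\,s(x,y),\qquad
\mathcal{L}_{\mathrm{SSIM}}=1-\mathrm{SSIM}(x,y)
```

With the common choice C_3 = C_2/2, the product of the three terms collapses to the familiar single-fraction form of SSIM used in the loss sketch given with the description above.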
8. An image restoration device, the device comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being configured to implement the steps of the image restoration method according to any one of claims 1 to 7.
9. A storage medium, characterized in that the storage medium is a computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the image restoration method according to any one of claims 1 to 7.
10. A computer program product, characterized in that the computer program product comprises a computer program which, when executed by a processor, implements the steps of the image restoration method according to any one of claims 1 to 7.
CN202410891770.3A 2024-07-04 2024-07-04 Image restoration method, device, storage medium and computer program product Active CN118429229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410891770.3A CN118429229B (en) 2024-07-04 2024-07-04 Image restoration method, device, storage medium and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410891770.3A CN118429229B (en) 2024-07-04 2024-07-04 Image restoration method, device, storage medium and computer program product

Publications (2)

Publication Number Publication Date
CN118429229A true CN118429229A (en) 2024-08-02
CN118429229B CN118429229B (en) 2024-09-17

Family

ID=92321899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410891770.3A Active CN118429229B (en) 2024-07-04 2024-07-04 Image restoration method, device, storage medium and computer program product

Country Status (1)

Country Link
CN (1) CN118429229B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096207A (en) * 2021-03-16 2021-07-09 天津大学 Rapid magnetic resonance imaging method and system based on deep learning and edge assistance
CN114596218A (en) * 2022-01-25 2022-06-07 西北大学 Ancient painting image restoration method, model and device based on convolutional neural network
WO2022179124A1 (en) * 2021-02-27 2022-09-01 华为技术有限公司 Image restoration method and apparatus
WO2023279890A1 (en) * 2021-07-06 2023-01-12 北京锐安科技有限公司 Image processing method and apparatus, electronic device, and storage medium
CN116385288A (en) * 2023-03-20 2023-07-04 深圳大学 Depth image restoration method and device and readable storage medium
CN116579952A (en) * 2023-06-07 2023-08-11 阜阳师范大学 Image restoration method based on DU-GAN network
CN117689540A (en) * 2023-11-13 2024-03-12 北京交通大学 Dynamic heavy parameterization-based light-weight image super-resolution method and system
US20240135496A1 (en) * 2022-10-07 2024-04-25 Mohamed bin Zayed University of Artificial Intelligence System and method for burst image restoration and enhancement

Also Published As

Publication number Publication date
CN118429229B (en) 2024-09-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant