CN113256519A - Image restoration method, apparatus, storage medium, and program product - Google Patents

Image restoration method, apparatus, storage medium, and program product

Info

Publication number
CN113256519A
CN113256519A
Authority
CN
China
Prior art keywords
target variable
image
image data
model
updated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110552715.8A
Other languages
Chinese (zh)
Inventor
Shen Li (沈力)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202110552715.8A priority Critical patent/CN113256519A/en
Publication of CN113256519A publication Critical patent/CN113256519A/en
Pending legal-status Critical Current

Classifications

    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

An embodiment of the invention provides an image restoration method, device, storage medium, and program product. Image data to be restored and an initialized target variable are acquired, the initialized target variable being a randomly generated Lagrange multiplier with the same dimensions as the image data. The image data and the initialized target variable are input into a pre-trained image restoration model, and the target variable is iteratively updated in sequence through a plurality of three-operator splitting (TOS) network layers of the model to obtain a final target variable. The final target variable is then input into a nonlinear mapping network layer of the model, which generates and outputs the final restored image data. The image restoration model is an interpretable network model with a deep unfolding network structure based on the three-operator splitting algorithm; this structure combines the advantages of traditional model-optimization image restoration methods and heuristic deep-learning image restoration methods, and can improve both the quality and the robustness of image restoration.

Description

Image restoration method, apparatus, storage medium, and program product
Technical Field
Embodiments of the present invention relate to the field of image processing, and in particular, to a method, an apparatus, a storage medium, and a program product for restoring an image.
Background
The field of image restoration covers classical problems such as image denoising, image deblurring, image super-resolution reconstruction, and image compressed sensing.
Prior-art techniques for solving problems in the field of image restoration can be divided into three main categories: image restoration methods based on traditional model optimization, methods based on heuristic deep learning, and interpretable deep-learning methods driven by an optimization-algorithm model. However, images restored by these prior-art methods suffer from low quality and poor robustness.
Disclosure of Invention
Embodiments of the present invention provide a method, an apparatus, a storage medium, and a program product for image restoration, which are used to improve image restoration quality and improve robustness.
In a first aspect, an embodiment of the present invention provides an image recovery method, including:
acquiring image data to be restored and an initialized target variable, wherein the initialized target variable is a randomly generated Lagrange multiplier with the same dimension as the image data;
inputting the image data and the initialized target variable into a pre-trained image recovery model, and sequentially carrying out iterative updating on the target variable through a plurality of three-operator splitting algorithm TOS network layers of the image recovery model to obtain a final target variable;
and inputting the final target variable into a nonlinear mapping network layer of the image recovery model, generating finally recovered image data, and outputting the finally recovered image data.
In a second aspect, an embodiment of the present invention provides an apparatus for restoring an image, including:
an acquisition module, configured to acquire image data to be restored and an initialized target variable, wherein the initialized target variable is a randomly generated Lagrange multiplier with the same dimensions as the image data;
the three-operator splitting algorithm network module is used for inputting the image data and the initialized target variable into a pre-trained image recovery model, and sequentially carrying out iterative update on the target variable through a plurality of three-operator splitting algorithm TOS network layers of the image recovery model to obtain a final target variable;
and the nonlinear mapping network module is used for inputting the final target variable into the nonlinear mapping network layer of the image recovery model, generating finally recovered image data and outputting the finally recovered image data.
In a third aspect, an embodiment of the present invention provides an electronic device, including: at least one processor; and a memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the method according to the first aspect is implemented.
In a fifth aspect, embodiments of the invention provide a computer program product comprising computer instructions which, when executed by a processor, implement a method as described in the first aspect.
According to the image restoration method, device, storage medium, and program product provided by the embodiments of the invention, image data to be restored and an initialized target variable are acquired, wherein the initialized target variable is a randomly generated Lagrange multiplier with the same dimensions as the image data; the image data and the initialized target variable are input into a pre-trained image restoration model, and the target variable is iteratively updated in sequence through a plurality of three-operator splitting (TOS) network layers of the model to obtain a final target variable; and the final target variable is input into a nonlinear mapping network layer of the model, which generates and outputs the final restored image data. The image restoration model provided by the embodiments of the invention is an interpretable network model with a deep unfolding network structure based on the three-operator splitting algorithm; this structure combines the advantages of traditional model-optimization and heuristic deep-learning image restoration methods, so that both the quality and the robustness of image restoration can be improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic view of an application scenario of a method for restoring an image according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for restoring an image according to an embodiment of the present invention;
FIG. 3 is a network architecture diagram of the TOS-Net model according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for restoring an image according to another embodiment of the present invention;
FIG. 5 is a flowchart of a method for restoring an image according to another embodiment of the present invention;
FIG. 6 is a schematic illustration of a norm visualization provided by an embodiment of the present invention;
fig. 7 is a schematic diagram of a TOS-IDBlock network structure and a data flow process according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the network structure of a deep denoising network model H_dip according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a TOS-IDBlock network structure and a data flow process according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an image restoration apparatus according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The field of image restoration covers classical problems such as image denoising, image deblurring, image super-resolution reconstruction, and image compressed sensing.
Prior-art techniques for solving problems in the field of image restoration can be divided into three main categories: image restoration methods based on traditional model optimization, methods based on heuristic deep learning, and interpretable deep-learning methods driven by an optimization-algorithm model.
The image restoration method based on traditional model optimization mainly builds a corresponding mathematical model based on maximum a posteriori (MAP) estimation from Bayes' theorem, designs elaborate image prior constraints or regularization terms based on professional domain knowledge of image restoration, and uses these priors and regularization terms to characterize the restored image. In general, a series of regularization terms such as the l_p norm and the Total Variation (TV) norm are used to sparsely constrain a specific transform domain of the image while preserving its edge information. Once the mathematical model of image restoration is established, different optimization methods can be designed according to the characteristics of the model to solve for the converged optimal solution, i.e., the high-quality image to be restored. Classical optimization algorithms such as the alternating direction method of multipliers (ADMM), the iterative shrinkage-thresholding algorithm (ISTA), and the three-operator splitting algorithm (TOS) are commonly used to optimize and solve image restoration models with constraint terms.
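As an illustration of the classical optimization approach described above, the following is a minimal sketch (not from the patent; the function names and the toy l1-regularized recovery model are assumptions) of ISTA, which alternates a gradient step on the smooth fidelity term with the proximal soft-thresholding step of the regularizer:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, step, n_iter=100):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding (ISTA): a gradient step on the smooth
    fidelity term, then the prox of the l1 regularization term."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                    # gradient of the fidelity term
        x = soft_threshold(x - step * grad, step * lam)  # prox of the regularizer
    return x
```

For a denoising-like case (A equal to the identity) the iteration reduces to soft-thresholding the observation, which is a quick sanity check of the scheme.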
The image restoration method based on heuristic deep learning mainly designs a deep neural network model heuristically from experience and trains it end to end on a large number of labeled paired data sets, so that the designed network learns a mapping that restores low-quality/damaged images into high-quality images. The main focus of such methods is the design of the network structure and the loss function. A series of classical network structures now exist for this approach: structures such as the Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN), and their variants extract image features at different levels and from different angles, and classical networks such as ResNet, UNet, and ResUNet are common image restoration architectures at present. Partially differentiable image prior constraints and regularization terms can be added to the loss function of the neural network, where they constrain the output of the deep neural network.
The image restoration method based on model-driven deep learning can be divided mainly into methods based on neural network unfolding and methods based on deep neural network embedding. The method based on neural network unfolding introduces potentially learnable image priors: the optimization iteration process of the mathematical model for image restoration is designed as the graph computation process of a neural network structure, each layer of the network corresponding to one iteration of the optimization process, and the network is trained end to end to learn the mapping from low-quality/damaged images to high-quality images. The construction of such a network structure is more interpretable, and the effect of each layer is clearer. The other type, the deep neural network embedding method, inserts a deep neural network pre-trained on a specific task into the iterative optimization process of a traditional model-optimization image restoration method, replacing a sub-step of that iterative process. In general, the image restoration method based on model-driven deep learning combines the advantages of the traditional model-optimization methods and the heuristic deep-learning methods: image priors designed from professional domain knowledge are embedded into the forward propagation process of a neural network, so that the designed network structure has better interpretability and robustness, hidden information of the image restoration problem can be mined more deeply, and higher-quality images can be restored under the joint drive of training data and domain knowledge.
However, the prior art techniques for solving the problems in the field of image restoration have the following disadvantages:
1) The image restoration method based on traditional model optimization establishes a corresponding mathematical model based on maximum a posteriori (MAP) estimation from Bayes' theorem, designs elaborate image prior constraints or regularization terms based on professional domain knowledge of image restoration, and uses these priors and regularization terms to characterize the restored image. Its disadvantages are that constructing the image prior regularization terms requires strong mathematical skill and an abstract understanding of the image restoration problem; that problems such as the non-differentiability of some regularization terms must be dealt with during optimization; and that the optimization and solution process is usually time-consuming.
2) The image restoration method based on heuristic deep learning mainly designs a deep neural network model heuristically from experience and trains it end to end on a large number of labeled paired data sets, so that the designed network learns a mapping that restores low-quality/damaged images into high-quality images. Its disadvantages are as follows: it depends heavily on large labeled training data sets, and if the amount of training data is insufficient, the trained neural network model performs poorly; a neural network structure designed for a specific image restoration problem generalizes poorly, and its performance usually degrades when transferred to other image restoration problems; and the network design is usually a heuristic black-box structure that lacks interpretability.
3) The image restoration method based on model-driven deep learning introduces potentially learnable image priors and designs the optimization iteration process of the mathematical model for image restoration as the forward-inference computation process of a neural network structure; or it inserts a deep neural network pre-trained on a specific task into the iterative optimization process of a traditional model-optimization method, replacing a sub-step of that process. Generally speaking, the more image priors or regularization terms a model contains, the richer its prior information and the better the quality of the restored image. However, the models selected by existing model-driven deep-learning methods usually consider only a single image prior or regularization term and rarely consider an image restoration model containing two image priors or regularization terms at the same time, so the unfolded deep neural network structure cannot process two image priors simultaneously, and the restored image is usually suboptimal. In addition, these methods adopt learnable latent image priors and neglect the traditional, classical hand-designed image priors.
To solve the above technical problems and improve image restoration quality and robustness, an embodiment of the invention provides an image restoration method based on a pre-trained image restoration model comprising a plurality of three-operator splitting (TOS) network layers (TOS-Blocks) and a nonlinear mapping network layer. Image data to be restored and an initialized target variable are acquired, wherein the initialized target variable is a randomly generated Lagrange multiplier with the same dimensions as the image data; the image data and the initialized target variable are input into the pre-trained image restoration model, and the target variable is iteratively updated in sequence through the plurality of TOS network layers to obtain a final target variable; and the final target variable is input into the nonlinear mapping network layer of the model, which generates and outputs the final restored image data. The image restoration model provided by the embodiment is an interpretable network model with a deep unfolding network structure based on the three-operator splitting algorithm; this structure combines the advantages of traditional model-optimization and heuristic deep-learning image restoration methods, its learning process is driven jointly by domain knowledge and data, and it can improve both the quality and the robustness of image restoration.
As shown in fig. 1, the image restoration method provided in the embodiment of the present invention is applicable to an electronic device 101 such as a server or a terminal device (only a server is shown in the figure as an example). Terminal devices include, but are not limited to, mobile phones, tablet computers, notebook computers, desktop computers, personal digital assistants, smart wearable devices, smart home devices, and the like. Servers include, but are not limited to, independent physical servers, server clusters or distributed systems formed from multiple physical servers, and cloud servers providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, big data, and artificial intelligence platforms. In the embodiment of the present invention, the electronic device 101 may obtain the image data to be restored from another electronic device 102, such as a mobile phone, tablet, camera, or server, or from a storage unit of the electronic device 101 itself. Based on a pre-trained image restoration model, it acquires the image data to be restored and an initialized target variable, where the initialized target variable is a randomly generated Lagrange multiplier with the same dimensions as the image data; inputs the image data and the initialized target variable into the pre-trained image restoration model, iteratively updating the target variable in sequence through a plurality of three-operator splitting (TOS) network layers of the model to obtain a final target variable; and inputs the final target variable into the nonlinear mapping network layer of the model, generating and outputting the final restored image data. The image restoration model may be trained in advance on the electronic device 101, or may be deployed on the electronic device 101 after being trained in advance on other devices; the model training process may use a conventional training method, which is not limited here.
Some terms related to the embodiments of the present invention are explained below:
Image prior information/regularization term/constraint term: image prior information refers to prior knowledge obtained from experience with natural images; for example, an image can generally be expressed as a W x H x C matrix, and the value range of each pixel under 8 bits is 0-255. Regularization term: a penalty on the parameters introduced to prevent model overfitting. Constraint term: a requirement the image must satisfy, for example local smoothness, or self-similarity among image blocks (patches) within the image, and so on.
Operator: on the basis of an operator mathematical concept, mapping of a function space is corresponding to an operator function, a computer model established in the problem solving process is decomposed and functionally packaged through a problem to realize model deconstruction, and a model unit formed through the method is called as an operator.
The near-end (proximal) mapping function: when the objective function contains a non-differentiable function g(·), the proximal mapping finds a point z that makes the non-differentiable function g(z) as small as possible while remaining close to the original point x. The specific formula is as follows:

prox_g(x) = argmin_z { g(z) + (1/2) ||z - x||^2 }
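As a hedged illustration (not part of the patent), the proximal mapping of g(z) = lam*|z| has the well-known closed form of soft-thresholding; the sketch below, with hypothetical helper names, checks the closed form against a brute-force minimization of the defining objective:

```python
import numpy as np

def prox_l1(x, lam):
    """Closed-form proximal mapping of g(z) = lam*|z| (scalar case):
    prox_g(x) = argmin_z lam*|z| + 0.5*(z - x)^2 = soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_numeric(x, lam):
    """Brute-force check of the argmin definition on a dense grid."""
    z = np.linspace(-10.0, 10.0, 200001)
    obj = lam * np.abs(z) + 0.5 * (z - x) ** 2   # the proximal objective
    return z[np.argmin(obj)]
```

The two functions agree up to the grid resolution, confirming that soft-thresholding is indeed the minimizer of the proximal objective.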
Loss function: the degree of the difference between the predicted value and the actual value of the model is evaluated, and the better the loss function is, the better the performance of the model is. The loss function can be divided into an empirical risk loss function and a structural risk loss function. The empirical risk loss function refers to the difference between the predicted structure and the real structure, and the structural risk loss function refers to the empirical risk loss function plus a regularization term.
The following describes the technical solutions of the present invention and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 2 is a flowchart of an image restoration method according to an embodiment of the present invention. This embodiment provides an image restoration method whose execution subject is an electronic device such as a server, smartphone, notebook computer, desktop computer, or wearable device. The method of this embodiment can be applied to application scenarios of image denoising, image deblurring, image super-resolution reconstruction, or image compressed sensing, and a corresponding image restoration model can be constructed according to the model for the specific application scenario. The method comprises the following specific steps:
s201, obtaining image data to be restored and initialized target variables, wherein the initialized target variables are randomly generated Lagrange multipliers with the same dimensionality as the image data.
In this embodiment, the image data y to be restored may be obtained in any way, which is not limited here. The initialized target variable can be randomly generated; the target variable is a Lagrange multiplier with the same dimensions (pixels) as the image data. A Lagrange multiplier is a variable parameter of the Lagrange multiplier method, which in its basic form is a method for finding the extremum of a function f(x1, x2, ...) under the constraint condition g(x1, x2, ...) = 0.
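The initialization step S201 might be sketched as follows (an assumption for illustration only; the patent does not specify the random distribution, so a standard normal is used here):

```python
import numpy as np

def init_target_variable(image, seed=None):
    """Randomly generate the initialized target variable z0: a Lagrange
    multiplier with the same dimensions (pixels) as the image data to
    be restored."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(image.shape)
```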
S202, inputting the image data and the initialized target variable into a pre-trained image recovery model, and sequentially performing iterative updating on the target variable through a plurality of three-operator splitting algorithm TOS network layers of the image recovery model to obtain a final target variable.
In this embodiment, the image restoration model is a deep unfolding network designed based on the Three-Operator Splitting (TOS) algorithm, named TOS-Net. The TOS algorithm is an algorithm from the optimization field (see Davis, Damek; Yin, Wotao. A three-operator splitting scheme and its optimization applications. Set-Valued and Variational Analysis, 2017, 25(4): 829-858). First, in this embodiment, the image restoration problem is modeled as the following optimization problem:
min_x f(x) + g(x) + h(x)
where f(·) represents the fidelity loss of the image, and g(·) and h(·) represent two different priors of the image. For this image restoration problem, an interpretable model based on model-driven deep learning, called TOS-Net, is designed. As shown in FIG. 3, the network consists of n TOS network layers (TOS-Blocks) and one learnable nonlinear mapping network layer Γ_g. The TOS network layers iteratively update the target variable in sequence to obtain the final target variable: through n iterations, the initialized z_0 is updated into the target variable z_n, which is then input into the nonlinear mapping network layer Γ_g for nonlinear mapping to generate the final restored image data.
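For reference, one iteration of the three-operator splitting (Davis-Yin) scheme that each TOS-Block unrolls can be sketched as below; grad_f, prox_g, and prox_h stand for the fidelity gradient and the two prior proximal mappings, which in TOS-Net are replaced by learnable sub-network layers (this is a sketch of the underlying algorithm, not the patented network itself):

```python
import numpy as np

def tos_step(z, grad_f, prox_g, prox_h, gamma=1.0, lam=1.0):
    """One iteration of three-operator splitting (Davis-Yin) for
    min_x f(x) + g(x) + h(x): a prox step on g, a reflected step
    incorporating the fidelity gradient, a prox step on h, then a
    relaxed update of the running variable z."""
    x_g = prox_g(z, gamma)                                    # prior-g proximal step
    x_h = prox_h(2.0 * x_g - z - gamma * grad_f(x_g), gamma)  # prior-h proximal step
    return z + lam * (x_h - x_g)                              # variable update
```

With g = h = 0 (identity proxes) and a quadratic fidelity f(x) = ||x - y||^2/2, the iteration reduces to a gradient step toward y, which is a quick sanity check of the update.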
Specifically, as shown in fig. 4, when the target variable is sequentially updated iteratively by the multiple three-operator splitting algorithm TOS network layers of the image recovery model in S202, the updating specifically may include:
S301, inputting, for any TOS network layer, the image data to be restored and the target variable to be updated; for the first TOS network layer, the target variable to be updated is the initialized target variable, and for the other TOS network layers, the target variable to be updated is the target variable output by the previous TOS network layer;
s302, optimizing the target variable to be updated under the constraint of a preset double regular term according to the image data to be restored and the target variable to be updated to obtain the target variable meeting the constraint of the preset double regular term;
and S303, determining the target variable meeting the preset double regular term constraint as the updated target variable, and outputting the updated target variable.
In this embodiment, the image data y to be restored and the initialized target variable z_0 are input into TOS-Net. Specifically, y and z_0 are input into the first TOS-Block, which updates the target variable to obtain an updated target variable z_1 satisfying the preset dual-regularization-term constraint; then y and z_1 are input into the second TOS-Block, which continues updating the target variable to obtain an updated target variable z_2 satisfying the constraint. By analogy, the k-th TOS-Block takes the last updated multiplier z_{k-1} and the image data y to be restored as input and outputs an updated target variable z_k satisfying the preset dual-regularization-term constraint, until the n-th TOS-Block outputs z_n, which is the final target variable output to the nonlinear mapping network layer Γ_g. For different application scenarios, corresponding dual regularization constraints (such as image prior information) can be adopted according to actual requirements, which are not limited here, and the TOS-Block can be given a corresponding network structure according to the different dual regularization constraints.
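The chaining of TOS-Blocks described above (y and z_{k-1} in, z_k out, with z_n finally mapped by Γ_g) can be sketched generically; blocks and gamma_g here are placeholder callables, not the patent's actual layers:

```python
def tos_net_forward(y, z0, blocks, gamma_g):
    """Hypothetical forward pass of a TOS-Net-style unrolled network:
    each TOS-Block maps (y, z_{k-1}) to z_k, and the final nonlinear
    mapping layer gamma_g turns z_n into the restored image."""
    z = z0
    for block in blocks:   # the n TOS-Block layers, applied in sequence
        z = block(y, z)
    return gamma_g(z)      # nonlinear mapping to the restored image
```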
S203, inputting the final target variable into the nonlinear mapping network layer of the image recovery model, generating the final recovered image data, and outputting the final recovered image data.
In the present embodiment, the final target variable z_n is input into the nonlinear mapping network layer Γ_g, and the previously learned nonlinear mapping network layer Γ_g performs a nonlinear mapping on the target variable z_n to generate the final restored image data.
In the image restoration method provided by this embodiment, image data to be restored and an initialized target variable are obtained, where the initialized target variable is a randomly generated Lagrangian multiplier having the same dimension as the image data; the image data and the initialized target variable are input into a pre-trained image recovery model, and the target variable is iteratively updated in sequence through a plurality of three-operator splitting (TOS) network layers of the image recovery model to obtain a final target variable; the final target variable is input into a nonlinear mapping network layer of the image recovery model, and the finally recovered image data is generated and output. The image recovery model provided by the embodiment of the invention is an interpretable network model with a deep unfolding network structure based on the three-operator splitting algorithm; this structure combines the advantages of image recovery methods based on traditional model optimization with those of heuristic deep-learning image recovery methods, so both the image recovery quality and the robustness can be improved.
On the basis of the above embodiment, optionally, the TOS-Block is composed of a first learnable nonlinear mapping sub-network layer Γ_g, a fidelity sub-network layer F, a second learnable nonlinear mapping sub-network layer Γ_h, and a variable update sub-network layer M. The learnable nonlinear mapping modules Γ_g and Γ_h design their network structures according to the proximal mapping functions corresponding to different image priors.
As shown in fig. 5, in S302 of the foregoing embodiment, optimizing the target variable to be updated under a preset dual regularization term constraint according to the image data to be restored and the target variable to be updated, to obtain the target variable satisfying the preset dual regularization term constraint, may specifically include:
S401, generating, through the first nonlinear mapping sub-network layer Γ_g, first image data satisfying a first preset regularization term according to the target variable to be updated;
S402, updating, through the fidelity sub-network layer F, the target variable to be updated according to the target variable to be updated, the image data to be restored, and the first image data, to obtain an intermediate target variable;
S403, generating, through the second nonlinear mapping sub-network layer Γ_h, second image data satisfying second preset image prior information or a second preset regularization term according to the intermediate target variable;
and S404, updating, through the variable update sub-network layer M, the target variable to be updated according to the target variable to be updated, the first image data, and the second image data, to obtain an updated target variable, and outputting the updated target variable.
In this embodiment, the updating of the target variable is realized through the first nonlinear mapping sub-network layer Γ_g, the fidelity sub-network layer F, the second nonlinear mapping sub-network layer Γ_h, and the variable update sub-network layer M, so that the target variable satisfies the preset dual regularization term constraint, and the final target variable is continuously optimized through successive iterative updates.
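The four-sub-layer flow (S401 to S404) can be sketched as a single forward function; the sub-layers are passed in as generic callables because their internals depend on the chosen priors. All names and the toy stand-ins below are illustrative, not from the patent:

```python
# Hedged structural sketch of one TOS-Block (S401-S404). Only the data
# flow between the four sub-layers is shown; each sub-layer is a callable.
def tos_block(z, y, gamma_g, fidelity, gamma_h, var_update):
    x = gamma_g(z)                   # S401: first image data (first prior)
    z_mid = fidelity(z, y, x)        # S402: intermediate target variable
    v = gamma_h(z_mid)               # S403: second image data (second prior)
    return var_update(z, x, v)       # S404: updated target variable

# Toy stand-ins; the update rule z' = z + v - x mirrors a generic
# three-operator splitting step.
out = tos_block(
    z=4.0, y=2.0,
    gamma_g=lambda z: z / 2,                  # x = 2.0
    fidelity=lambda z, y, x: 2 * x - z + y,   # z_mid = 2.0
    gamma_h=lambda zm: zm,                    # v = 2.0
    var_update=lambda z, x, v: z + v - x,     # 4 + 2 - 2 = 4.0
)
```

Passing the sub-layers as callables keeps the block reusable across the compressed sensing and denoising instantiations discussed later.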
Optionally, the first preset regular term is first preset image prior information; the second preset regular term is second preset image prior information. Of course, for different application scenarios, the first preset regular term and the second preset regular term may also adopt corresponding other regular term constraints according to actual requirements, and no limitation is made here.
In an optional embodiment, on the basis of the above embodiments, the method is applied to an image compressed sensing recovery scene, and the image data to be recovered is sparse observation data; the preset dual regularization term constraint may employ the l1 - l2 norm, which is visualized as shown in fig. 6. The first preset regularization term in this embodiment is the l1 norm; the second preset regularization term is the -l2 norm. Correspondingly, when designing the deep unfolding network in this embodiment, the TOS-Net network structure model may be a TOS-CSNet (Three Operator Splitting algorithm based Compressed Sensing Network), whose overall network structure framework is the same as that of fig. 3, where the TOS network layer TOS-Block may be a TOS-CSBlock; its principle and implementation details will be described in detail below.
For the compressed sensing problem, the target image is restored from sparse observation data y obtained by random sampling; in general, the original target image x is recovered through the following model:

min_x (1/2)||Φx - y||_2^2 + λ g(Ψx)

where Φ represents the sampling matrix, Ψ represents the transformation matrix, and g(·) is a regularization term, usually taken as the l1 norm, which imposes a sparsity constraint on x in another transform domain; λ is a penalty parameter used to balance the fidelity term (1/2)||Φx - y||_2^2 and the regularization term g(Ψx).
Since Ψ in the original optimization problem is only a fixed matrix, the new image compressed sensing model proposed in this embodiment replaces the transformation matrix Ψ with a parameterized learnable nonlinear transformation H, turning the model of the conventional optimization problem into a module of a deep neural network, i.e. a parameterized mapping, so that the deep neural network can be used to learn the transformation and thereby obtain better results. Meanwhile, this embodiment takes the l1 - l2 norm, which has a better sparsity-constraining effect, as the regularization term of the model. The proposed dual regularized compressed sensing model is as follows:

min_x (1/2)||Φx - y||_2^2 + λ||Hx||_1 - λ||Hx||_2

where the nonlinear transformation H is implemented by convolution operations and the ReLU(·) activation function. Define f(x) = (1/2)||Φx - y||_2^2, g(·) = λ||Hx||_1, and h(·) = -λ||Hx||_2. According to the conventional TOS optimization algorithm (reference: Davis D, Yin W. A three-operator splitting scheme and its optimization applications. Set-Valued and Variational Analysis, 2017 Dec; 25(4): 829-58.), the calculation process of each module in each iteration is as follows:
x_k = prox_{γg}(z_k)
v_k = prox_{γh}(2 x_k - z_k - γ ∇f(x_k))
z_{k+1} = z_k + v_k - x_k

where the proximal mapping function is prox_{γg}(u) = argmin_x g(x) + (1/(2γ))||x - u||_2^2.
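As a concrete illustration of the three-operator splitting scheme of Davis and Yin referenced above, the following toy example applies the iteration to a small l1-regularized nonnegative least-squares problem; the problem, step size γ and penalty λ are illustrative choices, not taken from the patent:

```python
import numpy as np

# Toy Davis-Yin three-operator splitting: minimize f(x) + g(x) + h(x) with
#   f(x) = 0.5*||x - b||^2   (smooth; grad f(x) = x - b),
#   g(x) = lam*||x||_1       (prox = componentwise soft threshold),
#   h(x) = indicator of x >= 0 (prox = projection onto the nonnegative orthant).
def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def tos_solve(b, lam=1.0, gamma=1.0, iters=100):
    z = np.zeros_like(b)
    for _ in range(iters):
        x = soft(z, gamma * lam)                        # x_k = prox_{gamma g}(z_k)
        v = np.maximum(2 * x - z - gamma * (x - b), 0)  # v_k = prox_{gamma h}(2x_k - z_k - gamma*grad f(x_k))
        z = z + v - x                                   # z_{k+1} = z_k + v_k - x_k
    return soft(z, gamma * lam)

x_star = tos_solve(np.array([3.0, -2.0, 0.5]))
# This toy problem has closed-form solution max(b - lam, 0) componentwise.
```

The iterate x_k plays the role of the first image data and v_k the second, with z_k the target variable threaded between blocks, matching the unfolded structure described in the text.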
Substituting the corresponding mathematical formulas and carrying out the mathematical derivation, the updating process of each module is as follows:

x_k = H†( S_{γλ}( H z_k ) )
z̃_k = 2 x_k - z_k - γ Φ^T (Φ x_k - y)
v_k = H†( G( H z̃_k ) )
z_{k+1} = z_k + v_k - x_k

where S_{γλ}(·) is the soft threshold function, H† is the left inverse of H, which possesses a network structure symmetric to H, and G is defined as follows:

G(u) = (1 + γλ / ||u||_2)_+ · u

where (x)_+ = max(0, x) may be implemented in the network design using the activation function ReLU(x).
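For completeness, the soft threshold function used here is the standard proximal mapping of the l1 term; the following closed form is well known and is stated as background rather than taken from the patent text:

```latex
S_{\gamma\lambda}(u)
\;=\; \operatorname{prox}_{\gamma\lambda\|\cdot\|_1}(u)
\;=\; \arg\min_{x}\;\gamma\lambda\|x\|_1 + \tfrac{1}{2}\|x-u\|_2^2
\;=\; \operatorname{sign}(u)\,\max\!\bigl(|u| - \gamma\lambda,\; 0\bigr)
```

applied componentwise; this is why a single thresholding operation implements the l1 regularization step inside each block.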
It can be seen that for the first nonlinear mapping sub-network layer Γ_g, the target variable z_k to be updated is input into Γ_g, and the first image data x_k satisfying the l1 norm is obtained according to the parameterized learnable nonlinear transformation matrix calculation model H, the soft threshold function model S_{γλ}(·), and the left inverse matrix calculation model H† of the nonlinear transformation matrix H; the nonlinear transformation matrix calculation model H is implemented by a convolution operation and an activation function.
For the fidelity sub-network layer F, the target variable is updated by a specific operation according to the target variable z_k to be updated, the image data y to be restored, and the first image data x_k, obtaining an intermediate target variable z̃_k.
For the learnable second nonlinear mapping sub-network layer Γ_h, the intermediate target variable z̃_k is input into Γ_h, and the second image data v_k satisfying the -l2 norm is obtained according to the nonlinear transformation matrix calculation model H, the left inverse matrix calculation model H† of the nonlinear transformation matrix H, and the activation function model G.
For the variable update sub-network layer M, the target variable is updated according to the target variable z_k to be updated, the first image data x_k, and the second image data v_k, obtaining the updated target variable z_{k+1}.
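Under the assumptions that the learnable transform H and its left inverse are replaced by identity stand-ins, and that the proximal step for h(·) = -λ||·||_2 scales its input by (1 + γλ/||u||_2), one TOS-CSBlock update can be sketched numerically as follows; γ, λ and the problem sizes are illustrative:

```python
import numpy as np

# Hedged numeric sketch of one TOS-CSBlock update with H = identity
# (a toy stand-in for the learnable convolutional transform).
def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def G(u, t):
    # assumed proximal step for h(.) = -lam*||.||_2: scales u away from zero
    n = np.linalg.norm(u)
    return u if n == 0 else (1.0 + t / n) * u

def tos_cs_block(z, y, Phi, gamma=0.1, lam=0.01):
    x = soft(z, gamma * lam)                            # Gamma_g with H = I
    z_mid = 2 * x - z - gamma * Phi.T @ (Phi @ x - y)   # fidelity layer F
    v = G(z_mid, gamma * lam)                           # Gamma_h with H = I
    return z + v - x, x                                 # variable update M

rng = np.random.default_rng(0)
Phi = rng.standard_normal((8, 16))        # toy sparse sampling operator
x_true = np.zeros(16)
x_true[[2, 9]] = [1.0, -0.5]
y = Phi @ x_true                          # sparse observation data
z_next, x_k = tos_cs_block(np.zeros(16), y, Phi)
```

In the patent the H and H† modules are learned convolutional networks, so this sketch only conveys the wiring of one block, not its trained behavior.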
Based on the above embodiment, the overall network structure framework of the TOS-CSNet is as shown in fig. 3; it should be noted that each TOS-Block in the TOS-CSNet is replaced by a TOS-CSBlock. The TOS-CSBlock is designed based on the TOS optimization unfolding process of the image compressed sensing model under the dual regularization term constraint proposed in this technical solution, and its specific network structure and data flow are shown in fig. 7. For the (k+1)-th layer TOS-CSBlock, the target variable z_k updated by the k-th layer TOS-CSBlock and the sparse observation data y obtained by sampling (the image data to be restored) serve as the inputs, and the updated target variable z_{k+1} is output. Specifically, the input target variable z_k first enters the network module H, then passes through the soft threshold function S_{γλ}(·), and finally enters another network module H†, yielding the image x_k satisfying the specific l1 norm regularization term. The updated image x_k and the target variable z_k, together with the sparse observation data y, are input to the fidelity module F, which updates a new intermediate target variable z̃_k according to the gradient information of the f(x) function in the objective function min_x f(x) + g(x) + h(x). The updated intermediate target variable z̃_k is input to the nonlinear mapping module Γ_h, generating the image v_k satisfying the -l2 norm regularization term. Finally, the variable update module M updates a new target variable z_{k+1} according to the target variable z_k, the image x_k, and the image v_k. After n TOS-CSBlocks, the final updated target variable z_n is obtained; this final target variable z_n is input to the nonlinear mapping network layer Γ_g, generating the final restored image x̂, where the nonlinear mapping network layer Γ_g generates the final restored image through x̂ = H†( S_{γλ}( H z_n ) ).
In another optional embodiment, on the basis of the above embodiments, the image restoration method may also be applied to an image denoising scene, where the image data to be restored is image data with noise; the first preset regularization term is depth image prior information, and the second preset regularization term is the l1 norm. Correspondingly, when designing the deep unfolding network in this embodiment, the TOS-Net network structure model may be a TOS-IDNet (Three Operator Splitting algorithm based Image Denoising Network), whose overall network structure framework is the same as that of fig. 3, where the TOS network layer TOS-Block may be a TOS-IDBlock; its principle and implementation details will be described in detail below.
For the image denoising problem, an input image y with additive noise can be represented by the following formula:
y=s+n
where s represents the image without noise and n represents the noise term; the noise may include but is not limited to salt-and-pepper noise, Gaussian noise, and the like, and different denoising models may be used for different kinds of noise. A classical image denoising model is as follows:
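The additive model y = s + n can be illustrated with Gaussian noise; the image size and noise level σ below are illustrative:

```python
import numpy as np

# Minimal illustration of the additive noise model y = s + n with
# Gaussian noise; size and sigma are illustrative values.
rng = np.random.default_rng(42)
s = rng.uniform(0.0, 1.0, size=(32, 32))   # clean image s
sigma = 0.1                                 # noise level sigma
n = rng.normal(0.0, sigma, size=s.shape)    # noise term n
y = s + n                                   # observed noisy image y
```

Other noise types (e.g. salt-and-pepper) would replace only the construction of n; the additive decomposition itself is unchanged.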
min_s (1/(2σ^2))||Ds - y||_2^2 + λ g(s)

where s represents the sparse representation of the noise-free image, D is the dictionary matrix, Ds represents the noise-free image that needs to be recovered, g(s) is a regularization term that imposes a sparsity constraint on s, and σ represents the noise level. The model in this embodiment assumes that the image can be represented by the combination of a dictionary and a sparse matrix. In this embodiment, a learnable nonlinear transformation matrix H_d is adopted to replace the dictionary matrix D, and a Deep Image Prior (DIP) is introduced to learn a latent noise prior; the image denoising model constrained by dual regularization terms proposed in this embodiment is as follows:
min_s (1/2)||H_d s - y||_2^2 + α_1 g_dip(s) + α_2 ||s||_1

where α_1 = σ^2 λ_1 and α_2 = σ^2 λ_2, the nonlinear transformation matrix H_d is implemented by convolution operations and the ReLU(·) activation function, and g_dip(·) denotes the depth image prior regularization term. The first regularization term is the DIP depth image prior information; the second regularization term is the l1 norm, which mainly addresses image sparsity and is used to restore data from sparse observations. According to the TOS optimization algorithm, through mathematical derivation, the calculation flow of each module in the optimization iteration process is as follows:
s_k = H_e( H_dip( H_d z_k ) )
z̃_k = 2 s_k - z_k - γ ∇f(s_k)
v_k = S_{γλ}( z̃_k )
z_{k+1} = z_k + v_k - s_k

where S_{γλ}(·) is the soft threshold function, H_e is the pseudo-inverse of H_d, whose network structure is symmetric to H_d, and H_dip is a deep denoising network model whose specific structure is ResUnet (reference: Ulyanov D, Vedaldi A, Lempitsky V. Deep image prior. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 9446-9454), as shown in fig. 8.
It can be seen that for the first nonlinear mapping sub-network layer Γ_g, the target variable z_k to be updated is input into Γ_g, and the first image data s_k satisfying the depth image prior information DIP is obtained according to the parameterized learnable nonlinear transformation matrix calculation model H_d, the deep denoising network model H_dip, and the pseudo-inverse matrix calculation model H_e of the nonlinear transformation matrix H_d; the nonlinear transformation matrix calculation model H_d is implemented by a convolution operation and an activation function, and the deep denoising network model is a deep denoising network model of the ResUnet structure.
For the fidelity sub-network layer F, the target variable is updated by a specific operation according to the target variable z_k to be updated, the image data y to be restored, and the first image data s_k, obtaining an intermediate target variable z̃_k.
For the learnable second nonlinear mapping sub-network layer Γ_h, the intermediate target variable z̃_k is input into Γ_h, and the second image data v_k satisfying the l1 norm is obtained according to the soft threshold function model S_{γλ}(·).
For the variable update sub-network layer M, the target variable is updated according to the target variable z_k to be updated, the first image data s_k, and the second image data v_k, obtaining the updated target variable z_{k+1}.
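The module wiring of one TOS-IDBlock can be sketched with H_d, H_dip and H_e passed in as callables; identity stand-ins are used here, since in the patent they are learned networks, and γ and λ are illustrative:

```python
import numpy as np

# Hedged data-flow sketch of one TOS-IDBlock. H_d, H_dip, H_e are
# learnable networks in the patent; identity stand-ins are used so only
# the module wiring is shown. The fidelity gradient below assumes
# H_d = identity, i.e. grad f(s) = s - y.
def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def tos_id_block(z, y, H_d, H_dip, H_e, gamma=0.5, lam=0.1):
    s = H_e(H_dip(H_d(z)))                 # Gamma_g: DIP-constrained image s_k
    z_mid = 2 * s - z - gamma * (s - y)    # fidelity F (identity stand-in)
    v = soft(z_mid, gamma * lam)           # Gamma_h: soft threshold, l1 term
    return z + v - s                       # variable update M -> z_{k+1}

ident = lambda u: u
y = np.array([0.8, 0.0, 0.3])
z1 = tos_id_block(np.zeros(3), y, ident, ident, ident)
```

Compared with the TOS-CSBlock, only the roles of the two priors are swapped: the learned transform pipeline now produces the first image data, and the soft threshold produces the second.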
Based on the above embodiment, the overall network structure framework of the TOS-IDNet is similar to that of fig. 3; the TOS-IDNet is obtained by replacing each TOS-Block with a TOS-IDBlock. The TOS-IDBlock is designed based on the TOS optimization unfolding process of the image denoising model constrained by dual regularization terms proposed in this technical solution, and its specific network structure and data flow are shown in fig. 9. For the (k+1)-th layer TOS-IDBlock, the output z_k of the k-th layer and the observation data y obtained by sampling serve as the inputs, and the updated target variable z_{k+1} is output. Specifically, the k-th layer input variable z_k first enters the network module H_d, then passes through the depth image prior module H_dip (the ResUnet shown in fig. 8), and finally enters another network module H_e, yielding a preliminary restored image s_k satisfying the depth image prior information (corresponding to x_k in fig. 3). The updated image s_k and the target variable z_k are input, together with the sparse observation data y, to the fidelity module F, which updates a new intermediate target variable z̃_k. The updated intermediate target variable z̃_k is input to the nonlinear mapping module Γ_h, generating the image v_k satisfying the sparse l1 norm regularization term. Finally, the variable update module M updates a new target variable z_{k+1} according to the target variable z_k, the image s_k, and the image v_k. After n TOS-IDBlocks, the final updated target variable z_n is obtained; this final target variable z_n is input to the nonlinear mapping network layer Γ_g, generating the final restored denoised image ŝ, where the nonlinear mapping network layer Γ_g generates the final restored image through ŝ = H_e( H_dip( H_d z_n ) ).
The deep unfolding network structure based on the three-operator splitting algorithm provided by the above embodiments is an interpretable network structure; it combines the advantages of both image recovery methods based on traditional model optimization and heuristic deep-learning image recovery methods, and its learning process is driven jointly by domain knowledge and data. The forward propagation module of the TOS-Net network structure provided by the above embodiments can process two image priors or regularization terms simultaneously, and can recover images of higher quality.
These embodiments extend the TOS-Net into the corresponding TOS-CSNet and TOS-IDNet structures for the image compressed sensing and image denoising problems respectively, and design specific learnable nonlinear mapping modules according to their specific constraint terms to replace the proximal mapping operators of the iterative optimization process.
It should be noted that, TOS-Net proposed in the above embodiments is a general network structure, which can process an image recovery model with two image priors or regularization terms, and according to different image recovery problems, its corresponding TOS-Block can be extended to different network structures to process a specific image recovery problem, so that each TOS-Block can be replaced by other TOS-Block modules according to different image priors and regularization terms. It is within the scope of the present invention to replace the image prior and regularization terms in TOS-Block with other image prior and regularization terms.
In the above embodiments, optionally, an image recovery model with two image priors or regularization terms could instead be solved by an image recovery method based on traditional model optimization, choosing an optimization model capable of handling three operators; but such a method has the disadvantages that the proximal mapping function in the iterative optimization calculation process is not easy to solve, and the calculation process is time-consuming. Alternatively, an image recovery model with two image priors or regularization terms could be handled by a heuristic deep-learning image recovery method, in which the image priors or regularization terms are added to the loss function used to train the deep network model, so that during training the solution output by the network approaches the corresponding constraint terms; this method has the disadvantages that the penalty parameters corresponding to the constraint terms in the loss function are difficult to tune, and the effects of different image priors may suppress each other, so the quality of the recovered image is poor.
Fig. 10 is a block diagram of an image restoration apparatus according to an embodiment of the present invention. The image restoration apparatus provided in this embodiment may execute the processing flow provided in the method embodiment, as shown in fig. 10, the image restoration apparatus 500 includes an obtaining module 501, a three-operator splitting algorithm network module 502, and a nonlinear mapping network module 503.
An obtaining module 501, configured to obtain image data to be restored and an initialized target variable, where the initialized target variable is a randomly generated lagrangian multiplier having the same dimension as the image data;
the three-operator splitting algorithm network module 502 is configured to input the image data and the initialized target variable into a pre-trained image recovery model, and sequentially perform iterative update on the target variable through a plurality of three-operator splitting algorithm TOS network layers of the image recovery model to obtain a final target variable;
and a nonlinear mapping network module 503, configured to input the final target variable into a nonlinear mapping network layer of the image recovery model, generate finally recovered image data, and output the finally recovered image data.
On the basis of any of the above embodiments, when the three-operator splitting algorithm network module 502 sequentially iteratively updates the target variable through the multiple three-operator splitting algorithm TOS network layers of the image recovery model, it is configured to:
inputting the image data to be restored and the target variable to be updated for any TOS network layer; for the first TOS network layer, the target variable to be updated is the initialized target variable, and for other TOS network layers, the target variable to be updated is the target variable output by the last TOS network layer;
optimizing the target variable to be updated under the constraint of a preset double regular term according to the image data to be recovered and the target variable to be updated to obtain a target variable meeting the constraint of the preset double regular term;
and determining the target variable meeting the preset double regular term constraint as an updated target variable, and outputting the updated target variable.
On the basis of any one of the above embodiments, the TOS network layer includes a first nonlinear mapping sub-network layer, a fidelity sub-network layer, a second nonlinear mapping sub-network layer, and a variable updating sub-network layer;
the three-operator splitting algorithm network module 502 is configured to, when optimizing the target variable to be updated under a preset double regular term constraint according to the image data to be restored and the target variable to be updated to obtain a target variable satisfying the preset double regular term constraint:
generating first image data meeting a first preset regular term according to a target variable to be updated through a first nonlinear mapping sub-network layer;
updating the target variable to be updated according to the target variable to be updated, the image data to be restored and the first image data through a fidelity sub-network layer to obtain an intermediate target variable;
generating second image data meeting second preset image prior information or a second preset regular term according to the intermediate target variable through a second nonlinear mapping sub-network layer;
and updating the target variable to be updated according to the target variable to be updated, the first image data and the second image data through a variable updating sub-network layer to obtain an updated target variable, and outputting the updated target variable.
On the basis of any one of the above embodiments, the first preset regular term is first preset image prior information; the second preset regular term is second preset image prior information.
In an optional embodiment, on the basis of any of the above embodiments, the method is applied to an image compressed sensing recovery scene, and the image data to be recovered is sparse observation data;
the first preset regularization term is the l1 norm; the second preset regularization term is the -l2 norm.
Further, when generating, through the first nonlinear mapping sub-network layer, the first image data satisfying the first preset regularization term according to the target variable to be updated, the three-operator splitting algorithm network module 502 is configured to:
input the target variable to be updated into the first nonlinear mapping sub-network layer, and obtain the first image data satisfying the l1 norm according to a parameterized learnable nonlinear transformation matrix calculation model, a soft threshold function model, and a left inverse matrix calculation model of the nonlinear transformation matrix; wherein the nonlinear transformation matrix calculation model is implemented by a convolution operation and an activation function;
the three-operator splitting algorithm network module 502 is configured to, when generating, according to the intermediate target variable, second image data that satisfies second preset image prior information or a second preset regular term through a second nonlinear mapping sub-network layer:
input the intermediate target variable into the second nonlinear mapping sub-network layer, and obtain the second image data satisfying the -l2 norm according to the nonlinear transformation matrix calculation model, the left inverse matrix calculation model of the nonlinear transformation matrix, and the activation function model.
In an alternative embodiment, on the basis of any of the above embodiments, the method is applied to an image denoising scene, and the image data to be restored is image data with noise;
the first preset regularization term is depth image prior information; the second preset regularization term is the l1 norm.
Further, when generating, through the first nonlinear mapping sub-network layer, the first image data satisfying the first preset regularization term according to the target variable to be updated, the three-operator splitting algorithm network module 502 is configured to:
inputting the target variable to be updated into the first nonlinear mapping sub-network layer, and acquiring first image data meeting the prior information of the depth image according to a parameterized learnable nonlinear transformation matrix calculation model, a depth denoising network model and a pseudo-inverse matrix calculation model of a nonlinear transformation matrix; wherein the nonlinear transformation matrix computational model is implemented by a convolution operation and an activation function; the deep denoising network model is a deep denoising network model of a ResUnet structure;
the three-operator splitting algorithm network module 502 is configured to, when generating, according to the intermediate target variable, second image data that satisfies second preset image prior information or a second preset regular term through a second nonlinear mapping sub-network layer:
input the intermediate target variable into the second nonlinear mapping sub-network layer, and obtain the second image data satisfying the l1 norm according to a soft threshold function model.
The image recovery device provided in the embodiment of the present invention may be specifically configured to execute the method embodiments provided in fig. 2 and 4 to 5, and specific functions are not described herein again.
The image recovery device provided by the embodiment of the invention obtains image data to be restored and an initialized target variable, where the initialized target variable is a randomly generated Lagrangian multiplier with the same dimension as the image data; inputs the image data and the initialized target variable into a pre-trained image recovery model, and sequentially carries out iterative updating on the target variable through a plurality of three-operator splitting (TOS) network layers of the image recovery model to obtain a final target variable; and inputs the final target variable into a nonlinear mapping network layer of the image recovery model, generating and outputting the finally recovered image data. The image recovery model provided by the embodiment of the invention is an interpretable network model with a deep unfolding network structure based on the three-operator splitting algorithm; this structure combines the advantages of image recovery methods based on traditional model optimization with those of heuristic deep-learning image recovery methods, so both the image recovery quality and the robustness can be improved.
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device provided by the embodiment of the present invention may execute the processing flow provided by the embodiment of the image recovery method, as shown in fig. 11, the electronic device 60 includes a memory 61, a processor 62, and a computer program; wherein the computer program is stored in the memory 61 and is configured to execute the restoration method of the image described in the above embodiment by the processor 62. Furthermore, the electronic device 60 may have a communication interface 63 for transmitting control commands and/or data.
The electronic device in the embodiment shown in fig. 11 may be used to implement the technical solution of the above-mentioned image recovery method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
In addition, the present embodiment also provides a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the restoration method of an image described in the above embodiment.
In addition, the present embodiment also provides a computer program product including a computer program, which is executed by a processor to implement the image restoration method described in the above embodiment.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
The above embodiments are only used for illustrating the technical solutions of the embodiments of the present invention, and are not limited thereto; although embodiments of the present invention have been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method for restoring an image, comprising:
acquiring image data to be restored and an initialized target variable, wherein the initialized target variable is a randomly generated Lagrange multiplier with the same dimension as the image data;
inputting the image data and the initialized target variable into a pre-trained image recovery model, and sequentially and iteratively updating the target variable through a plurality of three-operator splitting (TOS) algorithm network layers of the image recovery model to obtain a final target variable;
and inputting the final target variable into a nonlinear mapping network layer of the image recovery model, generating finally recovered image data, and outputting the finally recovered image data.
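For illustration only (this sketch is not part of the claims), the three steps of claim 1 — initialise a Lagrange-multiplier target variable of the same shape as the observed data, update it through a sequence of unrolled TOS layers, then map it to the recovered image — can be outlined numerically. The learned network layers are replaced here by a closed-form soft-threshold proximal update; the l1-regularised least-squares objective, the step size `gamma`, and the weight `lam` are illustrative assumptions, not the patented model.

```python
import numpy as np

def soft_threshold(x, tau):
    # proximal operator of the l1 norm: a simple "nonlinear mapping"
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def restore(y, num_layers=100, gamma=0.5, lam=1.0):
    """Sketch of claim 1: acquire data y, initialise the target variable
    as a random Lagrange multiplier with the same shape as y, update it
    through a sequence of TOS layers, then apply a final nonlinear mapping."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(np.shape(y))     # initialised target variable
    for _ in range(num_layers):              # one TOS network layer per pass
        x1 = soft_threshold(u, gamma * lam)  # per-layer proximal update
        u = x1 - gamma * (x1 - y)            # fidelity step + variable update
    return soft_threshold(u, gamma * lam)    # final nonlinear mapping layer
```

With `y = [3, 0.5, -2]` and `lam = 1`, the loop converges (geometrically, despite the random initialisation) to `[2, 0, -1]`, the minimiser of 1/2·||x - y||^2 + ||x||_1.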
2. The method of claim 1, wherein the sequentially iteratively updating the target variables through the plurality of three-operator splitting algorithm (TOS) network layers of the image restoration model comprises:
inputting the image data to be restored and the target variable to be updated for any TOS network layer; for the first TOS network layer, the target variable to be updated is the initialized target variable, and for other TOS network layers, the target variable to be updated is the target variable output by the last TOS network layer;
optimizing the target variable to be updated under the constraint of a preset double regular term according to the image data to be recovered and the target variable to be updated to obtain a target variable meeting the constraint of the preset double regular term;
and determining the target variable meeting the preset double regular term constraint as an updated target variable, and outputting the updated target variable.
3. The method of claim 2, wherein the TOS network layers comprise a first nonlinear mapping sub-network layer, a fidelity sub-network layer, a second nonlinear mapping sub-network layer, and a variable update sub-network layer;
the method for optimizing the target variable to be updated under the constraint of a preset double regular term according to the image data to be restored and the target variable to be updated to obtain the target variable meeting the constraint of the preset double regular term includes:
generating first image data meeting a first preset regular term according to a target variable to be updated through a first nonlinear mapping sub-network layer;
updating the target variable to be updated according to the target variable to be updated, the image data to be restored and the first image data through a fidelity sub-network layer to obtain an intermediate target variable;
generating second image data meeting second preset image prior information or a second preset regular term according to the intermediate target variable through a second nonlinear mapping sub-network layer;
and updating the target variable to be updated according to the target variable to be updated, the first image data and the second image data through a variable updating sub-network layer to obtain an updated target variable, and outputting the updated target variable.
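Read together, claims 2 and 3 describe one Davis–Yin-style three-operator splitting step per network layer. Below is a minimal numerical sketch in the four sub-layer order of claim 3, with the learned sub-network layers replaced by hand-written stand-ins (a soft threshold for the first mapping, the identity for the second, and the gradient of a quadratic fidelity term); these substitutions are assumptions for illustration, not the claimed networks.

```python
import numpy as np

def soft_threshold(x, tau):
    # proximal operator of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def tos_layer(u, y, gamma=0.5, lam=1.0):
    """One TOS layer: four sub-layers in the order given by claim 3."""
    x1 = soft_threshold(u, gamma * lam)  # first nonlinear mapping sub-network layer
    v = 2 * x1 - u - gamma * (x1 - y)    # fidelity sub-layer -> intermediate target variable
    x2 = v                               # second nonlinear mapping (identity prox, illustrative)
    return u + x2 - x1                   # variable update sub-network layer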
4. The method according to claim 3, wherein the first predetermined regularization term is first predetermined image prior information; the second preset regular term is second preset image prior information.
5. The method according to claim 3, wherein the method is applied to an image compressed sensing recovery scene, and the image data to be recovered is sparse observation data;
the first preset regular term is l1A norm; the second preset regular term is-l2And (4) norm.
6. The method according to claim 5, wherein the generating, by the first nonlinear mapping sub-network layer, the first image data satisfying a first preset regular term according to the target variable to be updated comprises:
inputting the target variable to be updated into the first nonlinear mapping sub-network layer, and obtaining first image data satisfying the l1 norm according to a parameterized learnable nonlinear transformation matrix calculation model, a soft threshold function model, and a left inverse matrix calculation model of the nonlinear transformation matrix; wherein the nonlinear transformation matrix calculation model is implemented by a convolution operation and an activation function;
the generating, by the second nonlinear mapping sub-network layer, second image data that satisfies second preset image prior information or a second preset regular term according to the intermediate target variable includes:
inputting the intermediate target variable into the second nonlinear mapping sub-network layer, and obtaining second image data satisfying the -l2 norm according to the nonlinear transformation matrix calculation model, the left inverse matrix calculation model of the nonlinear transformation matrix, and an activation function model.
7. The method according to claim 3, wherein the method is applied to an image denoising scene, and the image data to be recovered is image data with noise;
the first preset regular term is depth image prior information; the second preset regular term is l1And (4) norm.
8. The method according to claim 7, wherein the generating, by the first nonlinear mapping sub-network layer, first image data meeting a first preset regular term according to the target variable to be updated comprises:
inputting the target variable to be updated into the first nonlinear mapping sub-network layer, and acquiring first image data meeting the prior information of the depth image according to a parameterized learnable nonlinear transformation matrix calculation model, a depth denoising network model and a pseudo-inverse matrix calculation model of a nonlinear transformation matrix; wherein the nonlinear transformation matrix computational model is implemented by a convolution operation and an activation function; the deep denoising network model is a deep denoising network model of a ResUnet structure;
the generating, by the second nonlinear mapping sub-network layer, second image data that satisfies second preset image prior information or a second preset regular term according to the intermediate target variable includes:
the intermediate mesh is arrangedThe scalar quantity is input into the second nonlinear mapping sub-network layer, and the condition that l is met is obtained according to a soft threshold function model1Norm second image data.
9. An apparatus for restoring an image, comprising:
the device comprises an acquisition module, a recovery module and a recovery module, wherein the acquisition module is used for acquiring image data to be recovered and initialized target variables, and the initialized target variables are randomly generated Lagrange multipliers with the same dimensionality as the image data;
the three-operator splitting algorithm network module is used for inputting the image data and the initialized target variable into a pre-trained image recovery model, and sequentially carrying out iterative update on the target variable through a plurality of three-operator splitting algorithm TOS network layers of the image recovery model to obtain a final target variable;
and the nonlinear mapping network module is used for inputting the final target variable into the nonlinear mapping network layer of the image recovery model, generating finally recovered image data and outputting the finally recovered image data.
10. An electronic device, comprising: at least one processor; and a memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the method of any one of claims 1-8.
11. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-8.
12. A computer program product comprising computer instructions, characterized in that the computer instructions, when executed by a processor, implement the method of any of claims 1-8.
CN202110552715.8A 2021-05-20 2021-05-20 Image restoration method, apparatus, storage medium, and program product Pending CN113256519A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110552715.8A CN113256519A (en) 2021-05-20 2021-05-20 Image restoration method, apparatus, storage medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110552715.8A CN113256519A (en) 2021-05-20 2021-05-20 Image restoration method, apparatus, storage medium, and program product

Publications (1)

Publication Number Publication Date
CN113256519A true CN113256519A (en) 2021-08-13

Family

ID=77183082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110552715.8A Pending CN113256519A (en) 2021-05-20 2021-05-20 Image restoration method, apparatus, storage medium, and program product

Country Status (1)

Country Link
CN (1) CN113256519A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220156884A1 (en) * 2019-05-06 2022-05-19 Sony Group Corporation Electronic device, method and computer program
CN114841901A (en) * 2022-07-01 2022-08-02 北京大学深圳研究生院 Image reconstruction method based on generalized depth expansion network
CN114841901B (en) * 2022-07-01 2022-10-25 北京大学深圳研究生院 Image reconstruction method based on generalized depth expansion network

Similar Documents

Publication Publication Date Title
CN109859147B (en) Real image denoising method based on generation of antagonistic network noise modeling
CN109949255B (en) Image reconstruction method and device
JP6656111B2 (en) Method and system for removing image noise
Gai et al. New image denoising algorithm via improved deep convolutional neural network with perceptive loss
CN109271933B (en) Method for estimating three-dimensional human body posture based on video stream
Ma et al. Efficient and fast real-world noisy image denoising by combining pyramid neural network and two-pathway unscented Kalman filter
CN113222834B (en) Visual data tensor completion method based on smoothness constraint and matrix decomposition
CN113256519A (en) Image restoration method, apparatus, storage medium, and program product
Wang et al. MAGAN: Unsupervised low-light image enhancement guided by mixed-attention
CN112581397B (en) Degraded image restoration method, system, medium and equipment based on image priori information
US11741579B2 (en) Methods and systems for deblurring blurry images
WO2022143812A1 (en) Image restoration method, apparatus and device, and storage medium
CN115239591A (en) Image processing method, image processing apparatus, electronic device, storage medium, and program product
Zheng et al. T-net: Deep stacked scale-iteration network for image dehazing
Wang et al. Deep recursive network for image denoising with global non-linear smoothness constraint prior
CN113344804B (en) Training method of low-light image enhancement model and low-light image enhancement method
CN114742911A (en) Image compressed sensing reconstruction method, system, equipment and medium
CN111798381A (en) Image conversion method, image conversion device, computer equipment and storage medium
CN113763268B (en) Blind restoration method and system for face image
CN115375909A (en) Image processing method and device
CN114862699A (en) Face repairing method, device and storage medium based on generation countermeasure network
Srinivasan et al. An Efficient Video Inpainting Approach Using Deep Belief Network.
CN113362338A (en) Rail segmentation method, device, computer equipment and rail segmentation processing system
Wen et al. Patch-wise blind image deblurring via Michelson channel prior
Wu et al. Semantic image inpainting based on generative adversarial networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination