CN112734763B - Image decomposition method based on convolution and K-SVD dictionary joint sparse coding - Google Patents
- Publication number
- CN112734763B (application CN202110125214.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- component
- convolution
- model
- dictionary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 27
- 238000000354 decomposition reaction Methods 0.000 title claims abstract description 20
- 230000004044 response Effects 0.000 claims abstract description 15
- 238000005457 optimization Methods 0.000 claims abstract description 7
- 238000007781 pre-processing Methods 0.000 claims abstract description 4
- 239000000470 constituent Substances 0.000 claims description 6
- 239000013598 vector Substances 0.000 claims description 6
- 230000000007 visual effect Effects 0.000 claims description 5
- 230000008602 contraction Effects 0.000 claims description 3
- 230000006870 function Effects 0.000 claims description 3
- 238000011478 gradient descent method Methods 0.000 claims description 3
- 238000005286 illumination Methods 0.000 claims description 3
- 239000011159 matrix material Substances 0.000 claims description 3
- 230000007547 defect Effects 0.000 abstract description 2
- 230000000694 effects Effects 0.000 description 2
- 230000002411 adverse Effects 0.000 description 1
- 230000008030 elimination Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an image decomposition method based on convolution and K-SVD dictionary joint sparse coding. The method first preprocesses an acquired color image to obtain the image to be decomposed; the image is then modeled as the linear superposition of two unknown components; prior constraints are constructed from the prior characteristics of the two components; finally, the two components are solved by alternating optimization, and a convergence condition determines whether a feasible solution has been reached. The method can be used for image denoising: a group of convolution operators is obtained by learning for different noise types, and the noise is approximated by updating the convolution kernels and their response coefficients, so the method adapts dynamically to various noise types and overcomes the drawback of having to construct a different regularization constraint term for each noise type.
Description
Technical Field
The invention belongs to the technical field of digital image processing and computer vision, and particularly relates to an image decomposition method based on convolution and K-SVD dictionary joint sparse coding.
Background
In the field of computer vision, many low-level vision problems, such as noise removal, cartoon-texture decomposition, reflection removal and Retinex models, can be modeled as decomposing a single image into the sum of two unknown components. One component, B, represents the main structural information of the imaged scene, while the other component, S, may represent noise points, textures or illumination, depending on the visual task. The decomposition problem is theoretically ill-posed because the number of unknowns is greater than the number of knowns. The invention, an image decomposition method based on convolution and K-SVD dictionary joint sparse coding, aims to construct effective approximation models for B and S according to their different prior characteristics, so as to achieve an effective decoupling effect. In noise removal based on image decomposition, the image body structure B is commonly modeled with variational models or with sparse coding reconstruction over a complete dictionary, while the noise component S has characteristics such as sparsity and low rank and is therefore constrained with L1-norm or nuclear-norm regularization terms. However, such regularization terms rely on strong prior assumptions, different regularization terms must be constructed for different noise types, and the extensibility is poor. For example, for stripe noise, a unidirectional variational model is often used. For the Retinex model, both B and S are modeled with anisotropic variation, but when noise is present in the input image, the noise contained in the decomposition result adversely affects subsequent image enhancement.
Disclosure of Invention
The invention aims to provide an image decomposition method based on convolution and K-SVD dictionary joint sparse coding that can be used for image denoising. A group of convolution operators is obtained by learning for different noise types, and the noise is approximated by updating the convolution kernels and response coefficients, so the method adapts dynamically to various noise types and overcomes the drawback of having to construct a different regularization constraint term for each noise type.
The technical scheme adopted by the invention is an image decomposition method based on convolution and K-SVD dictionary joint sparse coding, implemented according to the following steps:
step 1, preprocessing an acquired color image to obtain an image to be decomposed;
step 2, decomposing the image to be decomposed into linear superposition of two unknown components;
step 3, constructing prior constraints according to prior characteristics of the two unknown components;
step 4, solving the two unknown components through alternate optimization;
step 5, judging whether a feasible solution is reached according to the convergence condition.
The present invention is also characterized in that,
the step 1 is as follows:
Denote the image to be decomposed by I. For a color noisy image, convert from the RGB color space to the YUV color space and let I be the Y component; for a color low-illumination image, convert from the RGB color space to the HSV color space and let I be the Log transform of the V component.
The step 2 is as follows:
order:
I=B+S (1)
for the denoising task, B represents a noiseless image, and S represents a noise point;
for the Retinex model, B represents the reflection property image of the object and S represents the incident light image.
The step 3 is as follows:
Construct a constraint model Φ(B) for component B:

Φ(B) = λ_1 Σ_{i=1..m} ||P_i ⊗ B||_1 + λ_2 ||α||_{L1} + γ Σ_j ||α_j − μ_j||_1,  s.t. B = Dα   (2)

where λ_1, λ_2 and γ are weight parameters; P_i denotes the i-th analysis operator, i = 1, ..., m, and commonly used analysis operators are [1 −1] and [1 −1]^T; ⊗ denotes the two-dimensional convolution operation. The term Σ_j ||α_j − μ_j||_1 is the centralized sparse representation (CSR) model, where α_j is the sparse coefficient vector of the image block B_j centered on pixel point j, α is the coefficient vector formed by concatenating all α_j, and ||α||_{L1} is the L1 norm of α. The dictionary D is created by the K-SVD method, and Dα is the classical sparse coding model used to reconstruct the image body structure B. μ_j is the weighted average of the sparse coefficients of the N image blocks in B most similar to block B_j, computed as:

μ_j = Σ_{n=1..N} τ_{j,n} α_{j,n}   (3)

where α_{j,n} is the sparse coefficient vector of the n-th image block similar to block B_j, n = 1, ..., N, and the weight τ_{j,n} represents the similarity between the n-th similar image block and B_j, measured by the Euclidean distance between the block vectors.
Construct a constraint model Ψ(S) for the S component by adopting a multi-scale convolutional sparse coding model:

Ψ(S) = λ_3 Σ_{k=1..K} Σ_{t=1..T} ||H_kt||_1,  s.t. S = Σ_{k=1..K} Σ_{t=1..T} C_kt ⊗ H_kt   (4)

where λ_3 is a weight parameter; C_kt is the t-th convolution kernel under the k-th scale constraint, k = 1, ..., K, t = 1, ..., T; C is the set formed by the C_kt; H_kt is the response coefficient map of the convolution kernel C_kt; H is the set formed by the H_kt; and ||C_kt||_F, the Frobenius norm of C_kt, is kept bounded. The kernels C_kt are initialized, according to the visual task, with a convolution dictionary learning method based on tensor decomposition.
In the step 3:
for the task of removing rain and snow noise points, K = 3 and T = 2;
for the Retinex model, K = 1 and T = 2.
The step 4 is as follows:
Based on Φ(B) and Ψ(S) from step 3, the complete decomposition model is represented as:

min_{B, α, S, C, H}  ||I − B − S||_F^2 + Φ(B) + Ψ(S)   (5)

The dictionary D is fixed after initialization, so the variables to be solved in equation (5) are B, α, S, C_kt and H_kt.

Equation (5) is solved using an alternating iteration strategy:
Fixing the component S, the dictionary D and the coefficients α, the sub-problem for optimizing component B is:

min_B  ||I − B − S||_F^2 + λ_1 Σ_{i=1..m} ||P_i ⊗ B||_1 + <L_1, B − Dα> + β_1 ||B − Dα||_F^2   (6)

where L_1 is the Lagrangian multiplier of the linear constraint B = Dα and β_1 is a penalty parameter.

The convolution operation is converted into a matrix multiplication form. Because the L1 norm is non-differentiable, the ADMM algorithm is introduced for the solution: a set of auxiliary variables {Y_i = P_i ⊗ B}, i = 1, ..., m, is introduced, giving a new optimization problem (7), where ω_1 and μ are weight coefficients and Π_i is the Lagrange variable that enforces Y_i = P_i ⊗ B. Problem (7) is solved by alternately updating B, the auxiliary variables Y_i and the multipliers, the multiplier update being:

Π_i = Π_i + μ(P_i ⊗ B − Y_i)
The step 5 is as follows:
Fixing the component S, the dictionary D and the component B, the sub-problem for updating the coefficients α is:

min_α  ||B − Dα||_F^2 + λ_2 ||α||_{L1} + γ Σ_j ||α_j − μ_j||_1   (8)

Equation (8) is a classical centralized sparse representation (CSR) model, and an approximate solution can be obtained with an extended iterative shrinkage method.

Update the Lagrangian constraint variable L_1:

L_1 = L_1 + β_1 (B − Dα)   (9)
Fixing the B component, the dictionary D, the convolution kernels C_kt and the response coefficients H_kt, the sub-problem for updating the S component is:

min_S  ||I − B − S||_F^2 + <L_2, S − Σ_{k,t} C_kt ⊗ H_kt> + β_2 ||S − Σ_{k,t} C_kt ⊗ H_kt||_F^2   (10)

where L_2 is the Lagrangian multiplier of the linear constraint S = Σ_{k,t} C_kt ⊗ H_kt and β_2 is a penalty parameter. Equation (10) admits a closed-form solution for the S component, given as equation (11).
Fixing the response coefficients H_kt and the component S, the sub-problem for updating the convolution kernels C_kt is:

min_{C_kt}  ||S − Σ_{k=1..K} Σ_{t=1..T} C_kt ⊗ H_kt||_F^2,  s.t. ||c_kt||_2 ≤ 1   (12)

A set of linear operators is defined so that each convolution C_kt ⊗ H_kt is expressed as a matrix acting on c_kt = vec(C_kt), the vectorized form of C_kt; equation (12) is thereby converted into a quadratic problem (13) in the vectors c_kt, subject to the same norm constraint.

Equation (13) is solved by the proximal gradient descent (near-end gradient descent) algorithm:

c_kt = Prox_{||·||_2 ≤ 1}( c_kt − τ ∇f(c_kt) )   (14)

where τ is the gradient descent step size, f denotes the smooth quadratic term of equation (13), and Prox_{||·||_2 ≤ 1} is the proximal operator of the L2-norm ball, which ensures that each convolution operator satisfies ||c_kt||_2 ≤ 1.
Fixing the convolution kernels C_kt and the component S, the sub-problem for updating the response coefficients H_kt is:

min_{H_kt}  ||S − Σ_{k=1..K} Σ_{t=1..T} C_kt ⊗ H_kt||_F^2 + λ_3 Σ_{k,t} ||H_kt||_1   (15)

Equation (15) is a classical convolutional sparse coding model, and an approximate solution can be obtained.

Update L_2:

L_2 = L_2 + β_2 (S − Σ_{k,t} C_kt ⊗ H_kt)
Component B and component S are updated alternately and iteratively until the convergence condition is reached:

||B_t − B_{t−1}||_F / ||B_{t−1}||_F ≤ ρ   (16)

where B_t denotes the result of the t-th iteration, B_{t−1} is the result of the (t−1)-th iteration, and ρ is a given threshold.
The method has the advantages that a group of analysis operators is used to model the smoothness of pixel-intensity changes between spatially adjacent pixels in B, and centralized sparse representation (CSR) is used to reconstruct similar image blocks in B. For S, convolutional sparse coding is used for approximation: a group of convolution operators is obtained by learning for different visual tasks, and S is approximated by updating the convolution kernels and response coefficients.
Detailed Description
The present invention will be described in detail with reference to the following embodiments.
The invention discloses an image decomposition method based on convolution and K-SVD dictionary joint sparse coding, which is implemented according to the following steps:
step 1, preprocessing the obtained color image to obtain an image to be decomposed;
the step 1 is as follows:
Denote the image to be decomposed by I. For a color noisy image, convert from the RGB color space to the YUV color space and let I be the Y component; for a color low-illumination image, convert from the RGB color space to the HSV color space and let I be the Log transform of the V component.
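For illustration only (not part of the patent text), the color-space handling of step 1 might be sketched in Python with OpenCV as follows; the scaling to [0, 1] and the small constant added before the logarithm are assumptions not specified in the text.

```python
import cv2
import numpy as np

def preprocess(rgb, task):
    """Return the single-channel image I to be decomposed (step 1).

    task = "denoise": take the Y channel of the YUV representation.
    task = "retinex": take the log of the V channel of the HSV representation.
    rgb is assumed to be an 8-bit RGB image of shape (H, W, 3).
    """
    if task == "denoise":
        yuv = cv2.cvtColor(rgb, cv2.COLOR_RGB2YUV)
        return yuv[:, :, 0].astype(np.float64) / 255.0   # Y component
    elif task == "retinex":
        hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
        v = hsv[:, :, 2].astype(np.float64) / 255.0
        return np.log(v + 1e-6)                          # Log transform of the V component
    raise ValueError("task must be 'denoise' or 'retinex'")
```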
Step 2, decomposing the image to be decomposed into linear superposition of two unknown components;
the step 2 is as follows:
order:
I=B+S (1)
for the denoising task, B represents a noiseless image, and S represents a noise point;
for the Retinex model, B represents the reflection property image of the object and S represents the incident light image.
Step 3, constructing prior constraints according to prior characteristics of the two unknown components;
the step 3 is as follows:
Construct a constraint model Φ(B) for component B:

Φ(B) = λ_1 Σ_{i=1..m} ||P_i ⊗ B||_1 + λ_2 ||α||_{L1} + γ Σ_j ||α_j − μ_j||_1,  s.t. B = Dα   (2)

where λ_1, λ_2 and γ are weight parameters; P_i denotes the i-th analysis operator, i = 1, ..., m, and commonly used analysis operators are [1 −1] and [1 −1]^T; ⊗ denotes the two-dimensional convolution operation. The term Σ_j ||α_j − μ_j||_1 is the centralized sparse representation (CSR) model, where α_j is the sparse coefficient vector of the image block B_j centered on pixel point j, α is the coefficient vector formed by concatenating all α_j, and ||α||_{L1} is the L1 norm of α. The dictionary D is created by the K-SVD method, and Dα is the classical sparse coding model used to reconstruct the image body structure B. μ_j is the weighted average of the sparse coefficients of the N image blocks in B most similar to block B_j, computed as:

μ_j = Σ_{n=1..N} τ_{j,n} α_{j,n}   (3)

where α_{j,n} is the sparse coefficient vector of the n-th image block similar to block B_j, n = 1, ..., N, and the weight τ_{j,n} represents the similarity between the n-th similar image block and B_j, measured by the Euclidean distance between the block vectors.
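As a non-normative sketch of equation (3), μ_j can be computed as a similarity-weighted average of the sparse coefficients of the N nearest blocks; the exponential weighting with bandwidth h is an assumption, since the patent only specifies that similarity is measured by the Euclidean distance between block vectors.

```python
import numpy as np

def nonlocal_mean_coefficient(patch_j, patches, coeffs, N=10, h=0.5):
    """Sketch of equation (3): mu_j as a weighted average of the sparse
    coefficients of the N image blocks most similar to block B_j.

    patch_j : flattened reference block B_j, shape (P,)
    patches : flattened candidate blocks, shape (M, P)
    coeffs  : sparse coefficient vectors of the candidate blocks, shape (M, K)
    """
    dists = np.linalg.norm(patches - patch_j[None, :], axis=1)  # Euclidean distances
    idx = np.argsort(dists)[:N]                                 # N most similar blocks
    tau = np.exp(-dists[idx] ** 2 / h)                          # similarity weights tau_{j,n}
    tau /= tau.sum()                                            # normalize the weights
    return tau @ coeffs[idx]                                    # mu_j = sum_n tau_{j,n} alpha_{j,n}
```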
Construct a constraint model Ψ(S) for the S component by adopting a multi-scale convolutional sparse coding model:

Ψ(S) = λ_3 Σ_{k=1..K} Σ_{t=1..T} ||H_kt||_1,  s.t. S = Σ_{k=1..K} Σ_{t=1..T} C_kt ⊗ H_kt   (4)

where λ_3 is a weight parameter; C_kt is the t-th convolution kernel under the k-th scale constraint, k = 1, ..., K, t = 1, ..., T; C is the set formed by the C_kt; H_kt is the response coefficient map of the convolution kernel C_kt; H is the set formed by the H_kt; and ||C_kt||_F, the Frobenius norm of C_kt, is kept bounded. The kernels C_kt are initialized, according to the visual task, with a convolution dictionary learning method based on tensor decomposition.
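The synthesis Σ_k Σ_t C_kt ⊗ H_kt inside Ψ(S) can be illustrated with the following sketch; the nested kernel and response containers are hypothetical data structures, not defined by the patent.

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_S(kernels, responses):
    """Reconstruct the S component as the sum of convolutions C_kt (*) H_kt
    over all scales k and kernels t (the synthesis used inside Psi(S)).

    kernels[k][t]   : 2-D convolution kernel C_kt
    responses[k][t] : response coefficient map H_kt, same size as the image
    """
    S = np.zeros_like(responses[0][0])
    for kernel_scale, response_scale in zip(kernels, responses):
        for C_kt, H_kt in zip(kernel_scale, response_scale):
            S += fftconvolve(H_kt, C_kt, mode="same")   # C_kt (*) H_kt
    return S
```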
In the step 3:
for the task of removing rain and snow noise points, K = 3 and T = 2;
for the Retinex model, K = 1 and T = 2.
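A hypothetical configuration of the multi-scale kernel bank for the rain/snow setting (K = 3 scales, T = 2 kernels per scale) is sketched below; the kernel sizes per scale and the random initialization are assumptions, since the patent initializes the kernels by tensor-decomposition-based convolutional dictionary learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-scale kernel bank: K = 3 scales, T = 2 kernels per scale.
kernel_sizes = [3, 5, 7]   # assumed spatial size for each scale k
kernels = [[rng.standard_normal((s, s)) for _ in range(2)] for s in kernel_sizes]
# Normalize so each kernel satisfies ||C_kt||_F <= 1, matching the norm constraint.
kernels = [[C / np.linalg.norm(C) for C in scale] for scale in kernels]
```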
Step 4, solving the two unknown components through alternate optimization;
the step 4 is specifically as follows:
Based on Φ(B) and Ψ(S) from step 3, the complete decomposition model is represented as:

min_{B, α, S, C, H}  ||I − B − S||_F^2 + Φ(B) + Ψ(S)   (5)

The dictionary D is fixed after initialization, so the variables to be solved in equation (5) are B, α, S, C_kt and H_kt.

Equation (5) is solved using an alternating iteration strategy:
Fixing the component S, the dictionary D and the coefficients α, the sub-problem for optimizing component B is:

min_B  ||I − B − S||_F^2 + λ_1 Σ_{i=1..m} ||P_i ⊗ B||_1 + <L_1, B − Dα> + β_1 ||B − Dα||_F^2   (6)

where L_1 is the Lagrangian multiplier of the linear constraint B = Dα and β_1 is a penalty parameter.

The convolution operation is converted into a matrix multiplication form. Because the L1 norm is non-differentiable, the invention introduces the ADMM (Alternating Direction Method of Multipliers) algorithm for the solution: a set of auxiliary variables {Y_i = P_i ⊗ B}, i = 1, ..., m, is introduced, giving a new optimization problem (7), where ω_1 and μ are weight coefficients and Π_i is the Lagrange variable that enforces Y_i = P_i ⊗ B. Problem (7) is solved by alternately updating B, the auxiliary variables Y_i and the multipliers, the multiplier update being:

Π_i = Π_i + μ(P_i ⊗ B − Y_i)
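The ADMM splitting described above can be illustrated as follows; only the auxiliary-variable and multiplier updates are sketched, the scaled-form Y-update is an assumption, and the closed-form B update of problem (7) is omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def soft_threshold(x, t):
    """Soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_split_step(B, operators, Y, Pi, lam1, mu):
    """One ADMM step on the auxiliary variables Y_i = P_i (*) B and their
    multipliers Pi_i in problem (7).  operators is the list of analysis
    operators P_i (e.g. np.array([[1, -1]]) and its transpose)."""
    for i, P_i in enumerate(operators):
        PiB = fftconvolve(B, P_i, mode="same")              # P_i (*) B
        Y[i] = soft_threshold(PiB + Pi[i] / mu, lam1 / mu)  # prox step on Y_i (assumed scaled form)
        Pi[i] = Pi[i] + mu * (PiB - Y[i])                   # Pi_i = Pi_i + mu (P_i (*) B - Y_i)
    return Y, Pi
```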
Step 5, judging whether a feasible solution is reached according to the convergence condition.
The step 5 is as follows:
Fixing the component S, the dictionary D and the component B, the sub-problem for updating the coefficients α is:

min_α  ||B − Dα||_F^2 + λ_2 ||α||_{L1} + γ Σ_j ||α_j − μ_j||_1   (8)

Equation (8) is a classical centralized sparse representation (CSR) model, and an approximate solution can be obtained with an extended iterative shrinkage method.

Update the Lagrangian constraint variable L_1:

L_1 = L_1 + β_1 (B − Dα)   (9)
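The iterative shrinkage update for equation (8) can be sketched per image patch as below; handling the two L1 terms by two successive soft-threshold steps (first around 0, then around the non-local mean μ_j) is an assumption, not the exact scheme of the patent.

```python
import numpy as np

def csr_ista_step(alpha, mu, D, b, lam2, gamma, step):
    """One iterative-shrinkage step for the CSR sub-problem (8), for a single
    patch b with dictionary D, current coefficients alpha and non-local mean mu."""
    grad = D.T @ (D @ alpha - b)                  # gradient of the data term
    z = alpha - step * grad                       # gradient descent step
    # prox of lam2 * ||alpha||_1 (shrink toward 0)
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam2, 0.0)
    # prox of gamma * ||alpha - mu||_1 (shrink toward the non-local mean mu)
    z = mu + np.sign(z - mu) * np.maximum(np.abs(z - mu) - step * gamma, 0.0)
    return z
```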
Fixing the B component, the dictionary D, the convolution kernels C_kt and the response coefficients H_kt, the sub-problem for updating the S component is:

min_S  ||I − B − S||_F^2 + <L_2, S − Σ_{k,t} C_kt ⊗ H_kt> + β_2 ||S − Σ_{k,t} C_kt ⊗ H_kt||_F^2   (10)

where L_2 is the Lagrangian multiplier of the linear constraint S = Σ_{k,t} C_kt ⊗ H_kt and β_2 is a penalty parameter. Equation (10) admits a closed-form solution for the S component, given as equation (11).
Fixing the response coefficients H_kt and the component S, the sub-problem for updating the convolution kernels C_kt is:

min_{C_kt}  ||S − Σ_{k=1..K} Σ_{t=1..T} C_kt ⊗ H_kt||_F^2,  s.t. ||c_kt||_2 ≤ 1   (12)

A set of linear operators is defined so that each convolution C_kt ⊗ H_kt is expressed as a matrix acting on c_kt = vec(C_kt), the vectorized form of C_kt; equation (12) is thereby converted into a quadratic problem (13) in the vectors c_kt, subject to the same norm constraint.

Equation (13) is solved by the proximal gradient descent (near-end gradient descent) algorithm:

c_kt = Prox_{||·||_2 ≤ 1}( c_kt − τ ∇f(c_kt) )   (14)

where τ is the gradient descent step size, f denotes the smooth quadratic term of equation (13), and Prox_{||·||_2 ≤ 1} is the proximal operator of the L2-norm ball, which ensures that each convolution operator satisfies ||c_kt||_2 ≤ 1.
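The projection used by the proximal operator Prox_{||·||_2 ≤ 1} and one proximal gradient step can be sketched as follows; the gradient callable grad_f is a placeholder for the smooth term of equation (13), not a function defined by the patent.

```python
import numpy as np

def project_unit_ball(c):
    """Prox of the indicator of {||c||_2 <= 1}: projection onto the unit ball,
    so that each vectorized convolution kernel keeps a bounded norm."""
    n = np.linalg.norm(c)
    return c if n <= 1.0 else c / n

def kernel_prox_grad_step(c_kt, grad_f, tau):
    """One proximal (near-end) gradient descent step of equation (14):
    gradient step on the smooth data term, then projection onto the unit ball."""
    return project_unit_ball(c_kt - tau * grad_f(c_kt))
```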
Fixing the convolution kernels C_kt and the component S, the sub-problem for updating the response coefficients H_kt is:

min_{H_kt}  ||S − Σ_{k=1..K} Σ_{t=1..T} C_kt ⊗ H_kt||_F^2 + λ_3 Σ_{k,t} ||H_kt||_1   (15)

Equation (15) is a classical convolutional sparse coding model, and an approximate solution can be obtained.

Update L_2:

L_2 = L_2 + β_2 (S − Σ_{k,t} C_kt ⊗ H_kt)
Component B and component S are updated alternately and iteratively until the convergence condition is reached:

||B_t − B_{t−1}||_F / ||B_{t−1}||_F ≤ ρ   (16)

where B_t denotes the result of the t-th iteration, B_{t−1} is the result of the (t−1)-th iteration, and ρ is a given threshold.
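The overall alternating scheme of steps 4 and 5, including the convergence test of equation (16), can be summarized by the following skeleton; update_B and update_S are placeholders for the sub-problem solvers described above, not functions defined by the patent.

```python
import numpy as np

def decompose(I, update_B, update_S, rho=1e-4, max_iter=100):
    """Skeleton of the alternating optimization of steps 4-5."""
    B = I.copy()
    S = np.zeros_like(I)
    for _ in range(max_iter):
        B_prev = B
        B = update_B(I, S)   # solve the B sub-problem with S fixed
        S = update_S(I, B)   # solve the S sub-problem with B fixed
        # convergence test of equation (16): relative change of B below rho
        if np.linalg.norm(B - B_prev) / (np.linalg.norm(B_prev) + 1e-12) < rho:
            break
    return B, S
```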
Claims (2)
1. An image decomposition method based on convolution and K-SVD dictionary joint sparse coding is characterized by comprising the following steps:
step 1, preprocessing the obtained color image to obtain an image to be decomposed;
the step 1 is specifically as follows:
representing the image to be decomposed by I; for a color noisy image, converting from the RGB color space to the YUV color space and letting I be the Y component; for a color low-illumination image, converting from the RGB color space to the HSV color space and letting I be the Log transform of the V component;
step 2, decomposing the image to be decomposed into linear superposition of two unknown components;
the step 2 is specifically as follows:
order:
I=B+S (1)
for the denoising task, B represents a noiseless image, and S represents a noise point;
for the Retinex model, B represents an object reflection property image, and S represents an incident light image;
step 3, constructing prior constraints according to prior characteristics of the two unknown components;
the step 3 is specifically as follows:
constructing a constraint model Φ(B) for component B:

Φ(B) = λ_1 Σ_{i=1..m} ||P_i ⊗ B||_1 + λ_2 ||α||_{L1} + γ Σ_j ||α_j − μ_j||_1,  s.t. B = Dα   (2)

wherein λ_1, λ_2 and γ are weight parameters; P_i denotes the i-th analysis operator, i = 1, ..., m, which includes the analysis operators [1 −1] and [1 −1]^T; ⊗ represents the two-dimensional convolution operation; Σ_j ||α_j − μ_j||_1 is the centralized sparse representation model CSR, wherein α_j is the sparse coefficient vector of the image block B_j centered on pixel point j, α is the coefficient vector formed by concatenating all α_j, and ||α||_{L1} is the L1 norm of α; the dictionary D is created by the K-SVD method, and Dα is a classical sparse coding model used for reconstructing the image body structure B; μ_j is the weighted average of the sparse coefficients of the N image blocks in B most similar to image block B_j, computed as:

μ_j = Σ_{n=1..N} τ_{j,n} α_{j,n}   (3)

wherein α_{j,n} is the sparse coefficient vector of the n-th image block similar to block B_j, n = 1, ..., N, and the weight τ_{j,n} represents the similarity between the n-th similar image block and B_j, measured by the Euclidean distance between the block vectors;
constructing a constraint model Ψ(S) for the S component by adopting a multi-scale convolutional sparse coding model:

Ψ(S) = λ_3 Σ_{k=1..K} Σ_{t=1..T} ||H_kt||_1,  s.t. S = Σ_{k=1..K} Σ_{t=1..T} C_kt ⊗ H_kt   (4)

wherein λ_3 is a weight parameter; C_kt is the t-th convolution kernel under the k-th scale constraint, k = 1, ..., K, t = 1, ..., T; C is the set formed by the C_kt; H_kt is the response coefficient map of the convolution kernel C_kt; H is the set formed by the H_kt; ||C_kt||_F is the Frobenius norm of C_kt and is kept bounded; the kernels C_kt are initialized, according to the visual task, by a convolution dictionary learning method based on tensor decomposition;
step 4, solving the two unknown components through alternate optimization;
the step 4 is specifically as follows:
based on Φ(B) and Ψ(S) in step 3, the complete decomposition model is represented as:

min_{B, α, S, C, H}  ||I − B − S||_F^2 + Φ(B) + Ψ(S)   (5)

the dictionary D is fixed after initialization, so the variables to be solved in equation (5) are B, α, S, C_kt and H_kt;

equation (5) is solved using an alternating iteration strategy:
fixing the component S, the dictionary D and the coefficients α, the sub-problem for optimizing component B is:

min_B  ||I − B − S||_F^2 + λ_1 Σ_{i=1..m} ||P_i ⊗ B||_1 + <L_1, B − Dα> + β_1 ||B − Dα||_F^2   (6)

wherein L_1 is the Lagrangian multiplier of the linear constraint B = Dα and β_1 is a penalty parameter;

the convolution operation is converted into a matrix multiplication form; owing to the non-differentiability of the L1 norm, the ADMM algorithm is introduced for the solution, and a set of auxiliary variables {Y_i = P_i ⊗ B}, i = 1, ..., m, is introduced, giving a new optimization problem (7), wherein ω_1 and μ are weight coefficients and Π_i is the Lagrange variable that enforces Y_i = P_i ⊗ B; problem (7) is solved by alternately updating B, the auxiliary variables Y_i and the multipliers, the multiplier update being:

Π_i = Π_i + μ(P_i ⊗ B − Y_i)
step 5, judging whether a feasible solution is reached according to the convergence condition, wherein the step 5 is as follows:
fixing the component S, the dictionary D and the component B, the sub-problem for updating the coefficients α is:

min_α  ||B − Dα||_F^2 + λ_2 ||α||_{L1} + γ Σ_j ||α_j − μ_j||_1   (8)

equation (8) is a classical centralized sparse representation CSR model, and an approximate solution is obtained by using an extended iterative shrinkage method;

updating the Lagrangian constraint variable L_1:

L_1 = L_1 + β_1 (B − Dα)   (9)
fixing the B component, the dictionary D, the convolution kernels C_kt and the response coefficients H_kt, the sub-problem for updating the S component is:

min_S  ||I − B − S||_F^2 + <L_2, S − Σ_{k,t} C_kt ⊗ H_kt> + β_2 ||S − Σ_{k,t} C_kt ⊗ H_kt||_F^2   (10)

wherein L_2 is the Lagrangian multiplier of the linear constraint S = Σ_{k,t} C_kt ⊗ H_kt and β_2 is a penalty parameter; equation (10) yields a closed-form solution for the S component, given as equation (11);
fixing the response coefficients H_kt and the component S, the sub-problem for updating the convolution kernels C_kt is:

min_{C_kt}  ||S − Σ_{k=1..K} Σ_{t=1..T} C_kt ⊗ H_kt||_F^2,  s.t. ||c_kt||_2 ≤ 1   (12)

a set of linear operators is defined so that each convolution C_kt ⊗ H_kt is expressed as a matrix acting on c_kt = vec(C_kt), the vectorized form of C_kt, whereby equation (12) is converted into a quadratic problem (13) in the vectors c_kt subject to the same norm constraint;

equation (13) is solved by the proximal gradient descent (near-end gradient descent) algorithm:

c_kt = Prox_{||·||_2 ≤ 1}( c_kt − τ ∇f(c_kt) )   (14)

wherein τ is the gradient descent step size, f denotes the smooth quadratic term of equation (13), and Prox_{||·||_2 ≤ 1} is the proximal operator of the L2-norm ball, ensuring that each convolution operator satisfies ||c_kt||_2 ≤ 1;
fixing the convolution kernels C_kt and the component S, the sub-problem for updating the response coefficients H_kt is:

min_{H_kt}  ||S − Σ_{k=1..K} Σ_{t=1..T} C_kt ⊗ H_kt||_F^2 + λ_3 Σ_{k,t} ||H_kt||_1   (15)

equation (15) is a classical convolutional sparse coding model, and an approximate solution is obtained;

updating L_2:

L_2 = L_2 + β_2 (S − Σ_{k,t} C_kt ⊗ H_kt)
alternately and iteratively updating the component B and the component S until the convergence condition is reached:

||B_t − B_{t−1}||_F / ||B_{t−1}||_F ≤ ρ   (16)

wherein B_t denotes the result of the t-th iteration, B_{t−1} is the result of the (t−1)-th iteration, and ρ is a given threshold.
2. The image decomposition method based on convolution and K-SVD dictionary joint sparse coding according to claim 1, characterized in that in said step 3:
for the denoising task, K = 3 and T = 2;
for the Retinex model, K = 1 and T = 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110125214.1A CN112734763B (en) | 2021-01-29 | 2021-01-29 | Image decomposition method based on convolution and K-SVD dictionary joint sparse coding |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112734763A CN112734763A (en) | 2021-04-30 |
CN112734763B true CN112734763B (en) | 2022-09-16 |
Family
ID=75594715
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110125214.1A Active CN112734763B (en) | 2021-01-29 | 2021-01-29 | Image decomposition method based on convolution and K-SVD dictionary joint sparse coding |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112734763B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113808036B (en) * | 2021-08-31 | 2023-02-24 | 西安理工大学 | Low-illumination image enhancement and denoising method based on Retinex model |
CN114881885A (en) * | 2022-05-25 | 2022-08-09 | 南京邮电大学 | Image denoising method based on decoupling depth dictionary learning |
CN115115551B (en) * | 2022-07-26 | 2024-03-29 | 北京计算机技术及应用研究所 | Parallax map restoration method based on convolution dictionary |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103745442A (en) * | 2014-01-08 | 2014-04-23 | 西安电子科技大学 | Non-local wavelet coefficient contraction-based image denoising method |
CN106780342A (en) * | 2016-12-28 | 2017-05-31 | 深圳市华星光电技术有限公司 | Single-frame image super-resolution reconstruction method and device based on the reconstruct of sparse domain |
CN108573263A (en) * | 2018-05-10 | 2018-09-25 | 西安理工大学 | A kind of dictionary learning method of co-ordinative construction rarefaction representation and low-dimensional insertion |
CN109064406A (en) * | 2018-08-26 | 2018-12-21 | 东南大学 | A kind of rarefaction representation image rebuilding method that regularization parameter is adaptive |
AU2020100460A4 (en) * | 2020-03-26 | 2020-04-30 | Huang, Shuying DR | Single image deraining algorithm based on multi-scale dictionary |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8913686B2 (en) * | 2010-05-07 | 2014-12-16 | Yale University | Sparse superposition encoder and decoder for communications system |
US9582916B2 (en) * | 2014-11-10 | 2017-02-28 | Siemens Healthcare Gmbh | Method and system for unsupervised cross-modal medical image synthesis |
CN107133930A (en) * | 2017-04-30 | 2017-09-05 | 天津大学 | Ranks missing image fill method with rarefaction representation is rebuild based on low-rank matrix |
CN108305297A (en) * | 2017-12-22 | 2018-07-20 | 上海交通大学 | A kind of image processing method based on multidimensional tensor dictionary learning algorithm |
CN110717354B (en) * | 2018-07-11 | 2023-05-12 | 哈尔滨工业大学 | Super-pixel classification method based on semi-supervised K-SVD and multi-scale sparse representation |
- 2021-01-29: Application CN202110125214.1A filed in China; now granted as CN112734763B (status: Active)
Non-Patent Citations (4)
Title |
---|
Dictionaries for Sparse Representation Modeling; Ron Rubinstein et al.; Proceedings of the IEEE; 2010-04-22; pp. 1045-1057 *
Joint Bi-layer Optimization for Single-image Rain Streak Removal; Lei Zhu et al.; 2017 IEEE International Conference on Computer Vision; 2017-12-25; abstract and sections 3-5 *
An image reconstruction algorithm based on convolutional sparse representation; Chen Xiaotao et al.; Computer & Digital Engineering; 2017-04-20 (No. 04); pp. 31-734, 744 *
An improved image denoising algorithm based on the K-SVD dictionary; Wang Xin et al.; Electronic Design Engineering; 2014-12-05 (No. 23); pp. 189-192 *
Also Published As
Publication number | Publication date |
---|---|
CN112734763A (en) | 2021-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112734763B (en) | Image decomposition method based on convolution and K-SVD dictionary joint sparse coding | |
CN109102477B (en) | Hyperspectral remote sensing image recovery method based on non-convex low-rank sparse constraint | |
CN103049892B (en) | Non-local image denoising method based on similar block matrix rank minimization | |
Zhang et al. | High-quality image restoration using low-rank patch regularization and global structure sparsity | |
CN110210282B (en) | Moving target detection method based on non-convex low-rank sparse decomposition | |
CN110992292B (en) | Enhanced low-rank sparse decomposition model medical CT image denoising method | |
CN109636722B (en) | Method for reconstructing super-resolution of online dictionary learning based on sparse representation | |
CN113808036B (en) | Low-illumination image enhancement and denoising method based on Retinex model | |
CN114820352A (en) | Hyperspectral image denoising method and device and storage medium | |
CN104657951A (en) | Multiplicative noise removal method for image | |
CN112529777A (en) | Image super-resolution analysis method based on multi-mode learning convolution sparse coding network | |
CN112233046B (en) | Image restoration method under Cauchy noise and application thereof | |
Shahdoosti et al. | Combined ripplet and total variation image denoising methods using twin support vector machines | |
Singhal et al. | A domain adaptation approach to solve inverse problems in imaging via coupled deep dictionary learning | |
CN116993621A (en) | Dim light image enhancement method | |
CN111161184B (en) | Rapid MR image denoising method based on MCP sparse constraint | |
Jiang et al. | A new nonlocal means based framework for mixed noise removal | |
Chen et al. | Hyperspectral image denoising via texture-preserved total variation regularizer | |
CN115797205A (en) | Unsupervised single image enhancement method and system based on Retinex fractional order variation network | |
Liu et al. | Joint dehazing and denoising for single nighttime image via multi-scale decomposition | |
CN112241938A (en) | Image restoration method based on smooth Tak decomposition and high-order tensor Hank transformation | |
CN116843559A (en) | Underwater image enhancement method based on image processing and deep learning | |
Cao et al. | Remote sensing image recovery and enhancement by joint blind denoising and dehazing | |
Baraha et al. | Speckle removal using dictionary learning and pnp-based fast iterative shrinkage threshold algorithm | |
CN114359064B (en) | Hyperspectral image recovery method based on dual gradient constraint |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |