CN112734763B - Image decomposition method based on convolution and K-SVD dictionary joint sparse coding - Google Patents

Image decomposition method based on convolution and K-SVD dictionary joint sparse coding Download PDF

Info

Publication number
CN112734763B
CN112734763B CN202110125214.1A CN202110125214A CN112734763B CN 112734763 B CN112734763 B CN 112734763B CN 202110125214 A CN202110125214 A CN 202110125214A CN 112734763 B CN112734763 B CN 112734763B
Authority
CN
China
Prior art keywords
image
component
convolution
model
dictionary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110125214.1A
Other languages
Chinese (zh)
Other versions
CN112734763A (en
Inventor
都双丽
赵明华
刘怡光
尤珍臻
石程
李�杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN202110125214.1A priority Critical patent/CN112734763B/en
Publication of CN112734763A publication Critical patent/CN112734763A/en
Application granted granted Critical
Publication of CN112734763B publication Critical patent/CN112734763B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image decomposition method based on convolution and K-SVD dictionary joint sparse coding, which comprises the steps of firstly preprocessing an obtained color image to obtain an image to be decomposed; then decomposing the image to be decomposed into linear superposition of two unknown components; constructing prior constraint according to prior characteristics of two unknown components; finally, solving the two unknown components through alternate optimization; and judging whether a feasible solution is reached or not according to the convergence condition. The method can be used for image denoising, a group of convolution operators are obtained according to different noise types through learning, the noise is approximated through updating the convolution kernel and the response coefficient, the method can be dynamically applied to various noise types, and the defect that different regularization constraint terms are constructed according to the noise types is overcome.

Description

Image decomposition method based on convolution and K-SVD dictionary joint sparse coding
Technical Field
The invention belongs to the technical field of digital image processing and computer vision, and particularly relates to an image decomposition method based on convolution and K-SVD dictionary joint sparse coding.
Background
In the field of computer vision, many underlying vision problems, such as noise removal, cartoon texture decomposition, reflection elimination, Retinex models, etc., can be modeled as a sum of two unknown components of a single image. One component B represents the main structural information of the imaged scene, and the other component S may represent noise points, textures, or illuminated parts, etc., depending on the visual task. The decomposition problem is theoretically ill-conditioned due to the fact that the number of unknown quantities is greater than the known quantity. The invention discloses an image decomposition method based on convolution and K-SVD dictionary joint sparse coding, and aims to construct effective approximation models for B and S respectively according to different prior characteristics of the B and S, so that an effective decoupling effect is achieved. Based on the noise removal field of image decomposition, the modeling of an image main body structure B has a variational model and sparse coding reconstruction based on a complete dictionary, and a noise component S has the characteristics of sparsity, low rank and the like, so that the L1 norm and a kernel norm regularization term are used for constraint. However, the regularization terms are based on a strong prior assumption, different regularization terms need to be constructed according to different noise types, and the expandability is poor. For example, in the case of stripe noise, a one-way variation model is often used. For the Retinex model, B and S are both modeled by adopting anisotropic variation, but when noise exists in an input image, the noise contained in a decomposition result can generate adverse effect on subsequent image enhancement.
Disclosure of Invention
The invention aims to provide an image decomposition method based on convolution and K-SVD dictionary joint sparse coding, which can be used for image denoising, a group of convolution operators are obtained according to different noise types through learning, and the method can be dynamically applied to various noise types by updating convolution kernels and response coefficients to approximate noise, so that the defect that different regularization constraint terms are constructed according to the noise types is overcome.
The technical scheme adopted by the invention is that an image decomposition method based on convolution and K-SVD dictionary joint sparse coding is implemented according to the following steps:
step 1, preprocessing an acquired color image to obtain an image to be decomposed;
step 2, decomposing the image to be decomposed into linear superposition of two unknown components;
step 3, constructing prior constraints according to prior characteristics of the two unknown components;
step 4, solving the two unknown components through alternate optimization;
and 5, judging whether a feasible solution is achieved according to the convergence condition.
The present invention is also characterized in that,
the step 1 is as follows:
representing an image to be decomposed by I, and converting the RGB color space into YUV color space aiming at the color noise image to enable I to be equal to Y component; for color low-illumination images, the conversion from RGB color space to HSV space is done with I equal to the Log transform of the V component.
The step 2 is as follows:
order:
I=B+S (1)
for the denoising task, B represents a noiseless image, and S represents a noise point;
for the Retinex model, B represents the reflection property image of the object and S represents the incident light image.
The step 3 is as follows:
constructing a constraint model Φ (B) for component B:
Figure BDA0002923371770000021
s.t.B=Dα (2)
wherein λ is 12 And gamma is a weight parameter, p i Denotes the ith analysis operator, i 1, m, and commonly used analysis operators are [ 1-1 ]]And [ 1-1 ]] T ,
Figure BDA0002923371770000031
Representing a two-dimensional convolution operation;
Figure BDA0002923371770000032
is a centralized sparse representation model CSR, where α j Image block B centered on pixel point j j Alpha is all alpha j Connected coefficient vector, | alpha | | non-woven phosphor L1 An L1 norm representing α; the dictionary D is created by a K-SVD method, and D alpha is a classical sparse coding model and is used for reconstructing an image main body structure B, mu j Is in B and image block B j The weighted average of the sparse coefficients of the most similar N image blocks is calculated by the following specific method:
Figure BDA0002923371770000033
where Γ is τ j,n And, a j,n Is the nth and block B j Sparse coefficients of similar image blocks, N1 i,n Represents the nth similar image block and B i The similarity between the image blocks is represented by Euclidean distance of the vectors;
and (3) constructing a constraint model psi (S) for the S component by adopting a multi-scale-based convolution sparse coding model:
Figure BDA0002923371770000034
Figure BDA0002923371770000035
wherein λ is 3 Is a weight parameter, C kt Is the t-th convolution kernel under the K-th scale constraint, K1 k
Figure BDA0002923371770000041
Is C kt Set of constituents, H kt Is a convolution kernel C kt The response coefficient of (a) is,
Figure BDA0002923371770000042
is H kt Set of constituents, | C kt || F Is C kt Frobenius norm, C of kt And initializing by using a convolution dictionary learning method based on tensor decomposition according to different visual tasks.
In the step 3:
aiming at the task of removing noise points of rain and snow, k is 3, and t is 2;
for the Retinex model, k is 1 and t is 2.
The step 4 is as follows:
based on Φ (B) and Ψ (S) in step 3, the complete decomposition model is represented as:
Figure BDA0002923371770000043
the dictionary D is fixed after initialization, so the variables to be solved by the formula (5) are B, alpha, S and C kt And H kt
Equation (5) is solved using an alternate iteration strategy:
fixing the values of the component S, the dictionary D and the coefficient alpha, and optimizing the sub-problem of the component B as follows:
Figure BDA0002923371770000044
wherein L is 1 Is a linear constraint of Lagrangian multiplication, beta 1 Is a penalty parameter;
order to
Figure BDA0002923371770000045
And converting the convolution operation into a matrix multiplication form, so that
Figure BDA0002923371770000051
Introducing ADMM algorithm to solve, and introducing a set of auxiliary parameters to ensure that
{Y i =P i B} i=1,...,m A new optimization problem can be obtained:
Figure BDA0002923371770000052
wherein, ω is 1 And μ is the weight coefficient, Π i Is to approach Y i Lagrange variable of (d); the solution to equation (7) is:
Figure BDA0002923371770000053
Π i =Π i +μ(P i B-Y i )
Figure BDA0002923371770000054
where sign is a function of the sign,
Figure BDA0002923371770000055
is taken as the maximum of the two elements.
The step 5 is as follows:
fixing the component S, the dictionary D and the component B, and updating the coefficient alpha to obtain the following sub-problems:
Figure BDA0002923371770000056
formula (8) is a classical centralized sparse representation CSR model, and an approximate solution can be obtained by using an expanded iterative contraction method;
updating lagrangian constraint variable L 1
Figure BDA0002923371770000057
(9)
Fixed B component, dictionary D, convolution kernel C kt And response coefficient H kt The sub-problem of updating the S component is:
Figure BDA0002923371770000061
wherein L is 2 Is a linear constraint of Lagrangian multiplication, beta 2 Is a penalty parameter; equation (10) can lead to the solution of the S component as
Figure BDA0002923371770000062
Fixed response factor H kt Sum component S, updating convolution kernel C kt The sub-problems of (1) are:
Figure BDA0002923371770000063
defining a set of linear operators
Figure BDA0002923371770000064
So that
Figure BDA0002923371770000065
Wherein c is kt =vec(C kt ) Is namely C kt So equation (12) can be converted to:
Figure BDA0002923371770000066
wherein,
Figure BDA0002923371770000067
and
Figure BDA0002923371770000068
equation (13) is solved by the Proximal Gradient Descent algorithm (Proximal Gradient Descent method):
Figure BDA0002923371770000071
where τ is the gradient descent step, Prox ||·|| 1 is L2 near end operator, each convolution operator is ensured to satisfy
Figure BDA0002923371770000072
Fixed convolution kernel C kt Sum component S, updating response coefficient H kt The sub-problems of (1) are:
Figure BDA0002923371770000073
formula (15) is a classical convolutional sparse coding model, and an approximate solution can be obtained;
updating L 2
Figure BDA0002923371770000074
And alternately and iteratively updating the component B and the component S until a convergence condition is reached:
Figure BDA0002923371770000075
wherein B is t Denotes the result of the t-th iteration, B t-1 Is the result of the t-1 th iteration, and p is a given threshold.
The method has the advantages that a group of analysis operators are used for modeling the smoothness of pixel intensity change of adjacent spatial domain pixels in B, and Centralized Sparse Representation (CSR) is used for reconstructing similar image blocks in B. And aiming at S, approximating by using convolution sparse coding, learning according to different visual tasks to obtain a group of convolution operators, and approximating S by updating a convolution kernel and a response coefficient.
Detailed Description
The present invention will be described in detail with reference to the following embodiments.
The invention discloses an image decomposition method based on convolution and K-SVD dictionary joint sparse coding, which is implemented according to the following steps:
step 1, preprocessing the obtained color image to obtain an image to be decomposed;
the step 1 is as follows:
representing an image to be decomposed by I, and converting the RGB color space into YUV color space aiming at the color noise image to enable I to be equal to Y component; for color low-illumination images, the conversion from RGB color space to HSV space is done with I equal to the Log transform of the V component.
Step 2, decomposing the image to be decomposed into linear superposition of two unknown components;
the step 2 is as follows:
order:
I=B+S (1)
for the denoising task, B represents a noiseless image, and S represents a noise point;
for the Retinex model, B represents the reflection property image of the object and S represents the incident light image.
Step 3, constructing prior constraints according to prior characteristics of the two unknown components;
the step 3 is as follows:
constructing a constraint model Φ (B) for component B:
Figure BDA0002923371770000081
wherein λ is 12 s.t.b ═ D α and γ are weight parameters, p i Denotes the ith analysis operator, i 1, m, and commonly used analysis operators are [ 1-1 ]]And
[1 -1] T ,
Figure BDA0002923371770000082
representing a two-dimensional convolution operation;
Figure BDA0002923371770000083
is a centralized sparse representation model CSR, where α j Image block B centered on pixel point j j Alpha is all alpha j Connected coefficient vector, | alpha | | non-woven phosphor L1 An L1 norm representing α; the dictionary D is created by a K-SVD method, and D alpha isClassical sparse coding model for reconstructing image subject structure B, mu j Is in B and image block B j The weighted average of the sparse coefficients of the most similar N image blocks is calculated by the following specific method:
Figure BDA0002923371770000091
where Γ is τ j,n And, a j,n Is the nth and block B j Sparse coefficients of similar image blocks, N1 i,n Represents the nth similar image block and B i The similarity between the image blocks is represented by Euclidean distance of the vectors;
and (3) constructing a constraint model psi (S) for the S component by adopting a multi-scale-based convolution sparse coding model:
Figure BDA0002923371770000092
Figure BDA0002923371770000093
wherein λ is 3 Is a weight parameter, C kt Is the t-th convolution kernel under the K-th scale constraint, K1 k
Figure BDA0002923371770000094
Is C kt Set of constituents, H kt Is a convolution kernel C kt The response coefficient of (a) is,
Figure BDA0002923371770000095
is H kt Set of constituents, | C kt || F Is C kt Frobenius norm, C of kt And initializing by using a convolution dictionary learning method based on tensor decomposition according to different visual tasks.
In the step 3:
aiming at the task of removing the noise points of rain and snow, k is 3, and t is 2;
for the Retinex model, k is 1 and t is 2.
Step 4, solving the two unknown components through alternate optimization;
the step 4 is specifically as follows:
based on Φ (B) and Ψ (S) in step 3, the complete decomposition model is represented as:
Figure BDA0002923371770000101
the dictionary D is fixed after initialization, so the variables to be solved by the formula (5) are B, alpha, S and C kt And H kt
Equation (5) is solved using an alternate iteration strategy:
fixing the values of the component S, the dictionary D and the coefficient alpha, and optimizing the sub-problem of the component B as follows:
Figure BDA0002923371770000102
wherein L is 1 Is a linear constraint of Lagrangian multiplication, beta 1 Is a penalty parameter;
order to
Figure BDA0002923371770000103
And converting the convolution operation into a matrix multiplication form, so that
Figure BDA0002923371770000104
Because of the inconductivity of the L1 norm, the invention introduces an ADMM (alternating Direction Method of multipliers) algorithm for solving, and introduces a group of auxiliary parameters to ensure that the { Y is i =P i B} i=1,...,m A new optimization problem can be obtained:
Figure BDA0002923371770000105
wherein, ω is 1 And μ is the weight coefficient, Π i Is to approach Y i Lagrange variable of (d); the solution to equation (7) is:
Figure BDA0002923371770000111
Π i =Π i +μ(P i B-Y i )
Figure BDA0002923371770000112
where sign is a function of the sign,
Figure BDA0002923371770000113
is taken as the maximum of the two elements.
And 5, judging whether a feasible solution is achieved according to the convergence condition.
The step 5 is as follows:
fixing the component S, the dictionary D and the component B, and updating the coefficient alpha to obtain the following sub-problems:
Figure BDA0002923371770000114
formula (8) is a classical centralized sparse representation CSR model, and an approximate solution can be obtained by using an expanded iterative contraction method;
updating lagrangian constraint variable L 1
Figure BDA0002923371770000115
Fixed B component, dictionary D, convolution kernel C kt And response coefficient H kt The sub-problem of updating the S component is:
Figure BDA0002923371770000116
wherein L is 2 Is a linear constraint of Lagrangian multiplication, beta 2 Is a penalty parameter; equation (10) can lead to the solution of the S component as
Figure BDA0002923371770000121
Fixed response factor H kt Sum component S, updating convolution kernel C kt The sub-problems of (1) are:
Figure BDA0002923371770000122
defining a set of linear operators
Figure BDA0002923371770000123
So that
Figure BDA0002923371770000124
Wherein c is kt =vec(C kt ) Is namely C kt So equation (12) can be converted to:
Figure BDA0002923371770000125
wherein,
Figure BDA0002923371770000126
and
Figure BDA0002923371770000127
equation (13) is solved by the Proximal Gradient Descent algorithm (Proximal Gradient Descent method):
Figure BDA0002923371770000128
where τ is the gradient descent step, Prox ||·|| 1 is L2 near end operator, each convolution operator is ensured to satisfy
Figure BDA0002923371770000129
Fixed convolution kernel C kt Sum component S, updating response coefficient H kt The sub-problems of (1) are:
Figure BDA0002923371770000131
formula (15) is a classical convolutional sparse coding model, and an approximate solution can be obtained;
updating L 2
Figure BDA0002923371770000132
And alternately and iteratively updating the component B and the component S until a convergence condition is reached:
Figure BDA0002923371770000133
wherein B is t Denotes the result of the t-th iteration, B t-1 Is the result of the t-1 th iteration, and p is a given threshold.

Claims (2)

1. An image decomposition method based on convolution and K-SVD dictionary joint sparse coding is characterized by comprising the following steps:
step 1, preprocessing the obtained color image to obtain an image to be decomposed;
the step 1 is specifically as follows:
representing an image to be decomposed by I, and converting the RGB color space into YUV color space aiming at the color noise image to enable I to be equal to Y component; for a color low-illumination image, converting from an RGB color space to an HSV space, and converting Log with I equal to V component;
step 2, decomposing the image to be decomposed into linear superposition of two unknown components;
the step 2 is specifically as follows:
order:
I=B+S (1)
for the denoising task, B represents a noiseless image, and S represents a noise point;
for the Retinex model, B represents an object reflection property image, and S represents an incident light image;
step 3, constructing prior constraints according to prior characteristics of the two unknown components;
the step 3 is specifically as follows:
constructing a constraint model Φ (B) for component B:
Figure FDA0003788919480000011
wherein λ is 12 And gamma is a weight parameter, p i Denotes the ith analysis operator, i 1.., m, which includes the analysis operator [ 1-1 ]]And [ 1-1 ]] T ,
Figure FDA0003788919480000012
Representing a two-dimensional convolution operation;
Figure FDA0003788919480000013
is a centralized sparse representation model CSR, where α j Image block B centered on pixel point j j Alpha is all alpha j Connected coefficient vector, | alpha | | non-woven phosphor L1 An L1 norm representing α; the dictionary D is created by a K-SVD method, and D alpha is a classical sparse coding model and is used for reconstructing an image main body structure B, mu j Is in B and image block B j The weighted average of the sparse coefficients of the most similar N image blocks is calculated by the following specific method:
Figure FDA0003788919480000021
where Γ is τ j,n And, a j,n Is the nth and block B j Sparse coefficients of similar image blocks, N1 i,n Represents the nth similar image block and B i The similarity between the image blocks is represented by Euclidean distance of the vectors;
and (3) constructing a constraint model psi (S) for the S component by adopting a multi-scale-based convolution sparse coding model:
Figure FDA0003788919480000022
Figure FDA0003788919480000023
wherein λ is 3 Is a weight parameter, C kt Is the t-th convolution kernel under the K-th scale constraint, K1 k
Figure FDA0003788919480000024
Is C kt Set of constituents, H kt Is a convolution kernel C kt The response coefficient of (a) is,
Figure FDA0003788919480000025
is H kt Set of constituents, | C kt || F Is C kt Frobenius norm, C of kt Initializing by a convolution dictionary learning method based on tensor decomposition according to different visual tasks;
step 4, solving the two unknown components through alternate optimization;
the step 4 is specifically as follows:
based on Φ (B) and Ψ (S) in step 3, the complete decomposition model is represented as:
Figure FDA0003788919480000031
the dictionary D is fixed after initialization, so the variables to be solved by the formula (5) are B, alpha, S and C kt And H kt
Equation (5) is solved using an alternate iteration strategy:
fixing the values of the component S, the dictionary D and the coefficient alpha, and optimizing the sub-problem of the component B as follows:
Figure FDA0003788919480000032
wherein L is 1 Is a linear constraint of Lagrangian multiplication, beta 1 Is a penalty parameter;
order to
Figure FDA0003788919480000033
And converting the convolution operation into a matrix multiplication form, so that
Figure FDA0003788919480000034
Due to the inconductivity of the L1 norm, an ADMM algorithm is introduced for solving, and a set of auxiliary parameters is introduced to enable { Y to be { Y i =P i B} i=1,...,m Obtaining a new optimization problem:
Figure FDA0003788919480000035
wherein, ω is 1 And μ is the weight coefficient, [ pi ] i Is to approach Y i Lagrange variable of (d); the solution to equation (7) is:
Figure FDA0003788919480000041
Π i =Π i +μ(P i B-Y i )
Figure FDA0003788919480000042
where sign is a function of the sign,
Figure FDA0003788919480000043
taking the maximum value of the two elements;
step 5, judging whether a feasible solution is reached according to the convergence condition, wherein the step 5 is as follows:
fixing the component S, the dictionary D and the component B, and updating the coefficient alpha to obtain the following sub-problems:
Figure FDA0003788919480000044
formula (8) is a classical centralized sparse representation CSR model, and an approximate solution is solved by using an extended iterative contraction method;
updating lagrangian constraint variable L 1
Figure FDA0003788919480000045
Fixed B component, dictionary D, convolution kernel C kt And response coefficient H kt The sub-problem of updating the S component is:
Figure FDA0003788919480000046
wherein L is 2 Is a linear constraint of Lagrangian multiplication, beta 2 Is a penalty parameter; equation (10) can lead to the solution of the S component as
Figure FDA0003788919480000051
Fixed response factor H kt Sum component S, updating convolution kernel C kt The sub-problems of (1) are:
Figure FDA0003788919480000052
defining a set of linear operators
Figure FDA0003788919480000053
So that
Figure FDA0003788919480000054
Wherein c is kt =vec(C kt ) Is namely C kt So equation (12) translates to:
Figure FDA0003788919480000055
wherein,
Figure FDA0003788919480000056
and
Figure FDA0003788919480000057
equation (13) is solved by the near-end Gradient Descent algorithm (Proximal Gradient Descent method):
Figure FDA0003788919480000058
where τ is the gradient descent step, Prox ||·|| ≤1 Is L2 near-end operatorEnsure that each convolution operator satisfies
Figure FDA0003788919480000059
Fixed convolution kernel C kt Sum component S, updating response coefficient H kt The sub-problems of (1) are:
Figure FDA00037889194800000510
Figure FDA0003788919480000063
formula (15) is a classical convolution sparse coding model, and an approximate solution is solved;
updating L 2
Figure FDA0003788919480000061
And alternately and iteratively updating the component B and the component S until a convergence condition is reached:
Figure FDA0003788919480000062
wherein B is t Denotes the result of the t-th iteration, B t-1 Is the result of the t-1 th iteration, and p is a given threshold.
2. The image decomposition method based on convolution and K-SVD dictionary joint sparse coding according to claim 1, characterized in that in said step 3:
for the denoising task, k is 3, and t is 2;
for the Retinex model, k is 1 and t is 2.
CN202110125214.1A 2021-01-29 2021-01-29 Image decomposition method based on convolution and K-SVD dictionary joint sparse coding Active CN112734763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110125214.1A CN112734763B (en) 2021-01-29 2021-01-29 Image decomposition method based on convolution and K-SVD dictionary joint sparse coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110125214.1A CN112734763B (en) 2021-01-29 2021-01-29 Image decomposition method based on convolution and K-SVD dictionary joint sparse coding

Publications (2)

Publication Number Publication Date
CN112734763A CN112734763A (en) 2021-04-30
CN112734763B true CN112734763B (en) 2022-09-16

Family

ID=75594715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110125214.1A Active CN112734763B (en) 2021-01-29 2021-01-29 Image decomposition method based on convolution and K-SVD dictionary joint sparse coding

Country Status (1)

Country Link
CN (1) CN112734763B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113808036B (en) * 2021-08-31 2023-02-24 西安理工大学 Low-illumination image enhancement and denoising method based on Retinex model
CN114881885A (en) * 2022-05-25 2022-08-09 南京邮电大学 Image denoising method based on decoupling depth dictionary learning
CN115115551B (en) * 2022-07-26 2024-03-29 北京计算机技术及应用研究所 Parallax map restoration method based on convolution dictionary

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745442A (en) * 2014-01-08 2014-04-23 西安电子科技大学 Non-local wavelet coefficient contraction-based image denoising method
CN106780342A (en) * 2016-12-28 2017-05-31 深圳市华星光电技术有限公司 Single-frame image super-resolution reconstruction method and device based on the reconstruct of sparse domain
CN108573263A (en) * 2018-05-10 2018-09-25 西安理工大学 A kind of dictionary learning method of co-ordinative construction rarefaction representation and low-dimensional insertion
CN109064406A (en) * 2018-08-26 2018-12-21 东南大学 A kind of rarefaction representation image rebuilding method that regularization parameter is adaptive
AU2020100460A4 (en) * 2020-03-26 2020-04-30 Huang, Shuying DR Single image deraining algorithm based on multi-scale dictionary

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8913686B2 (en) * 2010-05-07 2014-12-16 Yale University Sparse superposition encoder and decoder for communications system
US9582916B2 (en) * 2014-11-10 2017-02-28 Siemens Healthcare Gmbh Method and system for unsupervised cross-modal medical image synthesis
CN107133930A (en) * 2017-04-30 2017-09-05 天津大学 Ranks missing image fill method with rarefaction representation is rebuild based on low-rank matrix
CN108305297A (en) * 2017-12-22 2018-07-20 上海交通大学 A kind of image processing method based on multidimensional tensor dictionary learning algorithm
CN110717354B (en) * 2018-07-11 2023-05-12 哈尔滨工业大学 Super-pixel classification method based on semi-supervised K-SVD and multi-scale sparse representation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745442A (en) * 2014-01-08 2014-04-23 西安电子科技大学 Non-local wavelet coefficient contraction-based image denoising method
CN106780342A (en) * 2016-12-28 2017-05-31 深圳市华星光电技术有限公司 Single-frame image super-resolution reconstruction method and device based on the reconstruct of sparse domain
CN108573263A (en) * 2018-05-10 2018-09-25 西安理工大学 A kind of dictionary learning method of co-ordinative construction rarefaction representation and low-dimensional insertion
CN109064406A (en) * 2018-08-26 2018-12-21 东南大学 A kind of rarefaction representation image rebuilding method that regularization parameter is adaptive
AU2020100460A4 (en) * 2020-03-26 2020-04-30 Huang, Shuying DR Single image deraining algorithm based on multi-scale dictionary

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Dictionaries for Sparse Representation Modeling;Ron Rubinstein et al;《Proceedings of the IEEE》;20100422;第1045-1057页 *
Joint Bi-layer Optimization for Single-image Rain Streak Removal;Lei Zhu et al;《2017 IEEE International Conference on Computer Vision》;20171225;摘要及第3-5节 *
一个基于卷积稀疏表示的图像重构算法;陈小陶等;《计算机与数字工程》;20170420(第04期);第31-734页及第744页 *
一种改进的基于K-SVD字典的图像去噪算法;王欣等;《电子设计工程》;20141205(第23期);第189-192页 *

Also Published As

Publication number Publication date
CN112734763A (en) 2021-04-30

Similar Documents

Publication Publication Date Title
CN112734763B (en) Image decomposition method based on convolution and K-SVD dictionary joint sparse coding
CN109102477B (en) Hyperspectral remote sensing image recovery method based on non-convex low-rank sparse constraint
CN103049892B (en) Non-local image denoising method based on similar block matrix rank minimization
Zhang et al. High-quality image restoration using low-rank patch regularization and global structure sparsity
CN110210282B (en) Moving target detection method based on non-convex low-rank sparse decomposition
CN110992292B (en) Enhanced low-rank sparse decomposition model medical CT image denoising method
CN109636722B (en) Method for reconstructing super-resolution of online dictionary learning based on sparse representation
CN113808036B (en) Low-illumination image enhancement and denoising method based on Retinex model
CN114820352A (en) Hyperspectral image denoising method and device and storage medium
CN104657951A (en) Multiplicative noise removal method for image
CN112529777A (en) Image super-resolution analysis method based on multi-mode learning convolution sparse coding network
CN112233046B (en) Image restoration method under Cauchy noise and application thereof
Shahdoosti et al. Combined ripplet and total variation image denoising methods using twin support vector machines
Singhal et al. A domain adaptation approach to solve inverse problems in imaging via coupled deep dictionary learning
CN116993621A (en) Dim light image enhancement method
CN111161184B (en) Rapid MR image denoising method based on MCP sparse constraint
Jiang et al. A new nonlocal means based framework for mixed noise removal
Chen et al. Hyperspectral image denoising via texture-preserved total variation regularizer
CN115797205A (en) Unsupervised single image enhancement method and system based on Retinex fractional order variation network
Liu et al. Joint dehazing and denoising for single nighttime image via multi-scale decomposition
CN112241938A (en) Image restoration method based on smooth Tak decomposition and high-order tensor Hank transformation
CN116843559A (en) Underwater image enhancement method based on image processing and deep learning
Cao et al. Remote sensing image recovery and enhancement by joint blind denoising and dehazing
Baraha et al. Speckle removal using dictionary learning and pnp-based fast iterative shrinkage threshold algorithm
CN114359064B (en) Hyperspectral image recovery method based on dual gradient constraint

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant