CN109903233B - Combined image restoration and matching method and system based on linear features - Google Patents


Info

Publication number: CN109903233B
Application number: CN201910023228.5A
Authority: CN (China)
Prior art keywords: matrix, image, real-time image, linear
Legal status: Expired - Fee Related (granted)
Other versions: CN109903233A
Original language: Chinese (zh)
Inventors: 桑农, 彭军才, 邵远杰, 高常鑫, 李文豪
Assignee: Huazhong University of Science and Technology
Application filed by Huazhong University of Science and Technology; priority to CN201910023228.5A; published as CN109903233A, granted as CN109903233B.

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a combined image restoration and matching method based on linear features. The method constructs a linear mapping matrix and extracts the main linear features, which are used to compute an initialized clear real-time image and a weighted sparse coefficient; the largest component of the weighted sparse coefficient of the clear real-time image is selected, and the position in the reference image of the corresponding element of the pixel dictionary matrix is the final matching result. The invention also provides a system for realizing the method. By extracting linear features, the method reduces the interference caused by image blurring, removes a large amount of useless information, and preserves the linear relations of the clear real-time image reconstructed from the sparse coefficients; it thereby improves image restoration and matching accuracy while reducing the amount of computation and improving real-time performance.

Description

Combined image restoration and matching method and system based on linear features
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a combined image restoration and matching method and system based on linear features.
Background
Image matching is a core technology in visual navigation systems and plays a key role in improving navigation precision. In a visual navigation system, a camera continuously captures real-time images, which are scene-matched against stored reference images to obtain accurate position information.
Image matching algorithms are generally classified into feature-based methods and pixel-based methods. A feature-based image matching algorithm has two steps: first, features are extracted from the real-time image and the reference image; second, similarity is compared based on the two groups of features to obtain a matching result. Research on such algorithms focuses mainly on extracting different features, such as SIFT, SURF and SUSAN features for point features, and Canny and orthogonal wavelet features for line features. Experiments show that feature-based image matching algorithms have a certain invariance to affine image transformations, but when noise and blur exist in the real-time image and the reference image, it is difficult to extract accurately corresponding feature vectors, so the matching result is not ideal. A pixel-based image matching algorithm extracts different image blocks from the reference image with a sliding window and compares them with the real-time image; because all pixels are used directly, a better matching result can be obtained in the presence of noise and blur. Research on such algorithms focuses mainly on computing pixel similarity accurately, for example with the selective correlation coefficient or the cross-correlation coefficient. Recently, Sai Yang et al. applied sparse representation to image matching; their method improves the real-time performance of the algorithm and has a certain robustness to noise and occlusion. However, all of the above image matching algorithms assume that the real-time image is not blurred. In practical applications, due to factors internal and external to the imaging system, images are inevitably degraded by blurring and the like during formation, recording, processing and transmission, which makes image matching a very challenging problem.
The article "Joint Image Restoration and Matching Based on Distance-Weighted Sparse Representation" discloses a joint image restoration and matching method for blurred real-time images. The general scheme is that when the blur kernel is correctly estimated, the restored image can be expressed most sparsely over a dictionary, and at the same time the sparse representation coefficients can locate the position of the target. On the one hand, the sparse representation prior effectively constrains the solution space of possible clear images; on the other hand, better image restoration helps the sparse representation, yielding better sparse representation coefficients and thus better positioning accuracy. Because the real-time image is blurred, this scheme, which computes the weighted sparse coefficient from the one-dimensional pixel vector of the image, is affected by the large amount of noise information introduced by the blur, so the sparse coefficient is computed inaccurately and the accuracy of the final image restoration and matching result is reduced.
Disclosure of Invention
In view of the above technical defects in the prior art, the invention aims to provide an image matching method for blurred real-time images, so as to solve the prior-art problem of low matching precision caused by inaccurate sparse coefficient computation due to the noise information introduced by image blur.
A joint image restoration and matching method based on linear features comprises the following steps:
(1) an input step:
inputting a reference image I and a blurred real-time image y;
(2) a pixel dictionary matrix D construction step:
extracting, in a sliding-window manner over the reference image I, image blocks with the same size as the blurred real-time image y, stretching the pixels of each image block into a column vector, and arranging the pixel column vectors of all the image blocks to form a pixel dictionary matrix D;
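As an illustration, the sliding-window dictionary construction of step (2) can be sketched in NumPy as follows. This is a minimal sketch; the function name `build_pixel_dictionary`, the unit step, and the toy sizes are illustrative, not taken from the patent.

```python
import numpy as np

def build_pixel_dictionary(reference, patch_h, patch_w, step=1):
    """Slide a patch_h x patch_w window over the reference image and
    stack each patch (flattened in row-priority order) as one column of D."""
    H, W = reference.shape
    cols = []
    for r in range(0, H - patch_h + 1, step):
        for c in range(0, W - patch_w + 1, step):
            patch = reference[r:r + patch_h, c:c + patch_w]
            cols.append(patch.reshape(-1))      # row-priority stretch into a column
    return np.stack(cols, axis=1)               # one image block per column

# toy usage: a 5x5 reference, 3x3 blocks, unit step -> 3*3 = 9 columns
ref = np.arange(25, dtype=float).reshape(5, 5)
D = build_pixel_dictionary(ref, 3, 3)
```

Each column of `D` then corresponds to one candidate match position in the reference image, which is what the final matching step exploits.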
(3) a linear feature dictionary matrix Df construction step:
constructing, based on the pixel dictionary matrix D, a mapping matrix T representing the linear features of the pixel dictionary matrix D, and from it obtaining a linear feature dictionary matrix Df;
(4) Initializing a clear real-time image x:
stretching the pixels of the blurred real-time image y into a column vector v1, and multiplying the vector v1 by the mapping matrix T to obtain a linear feature column vector v2; combining the linear feature column vector v2 and the linear feature dictionary matrix Df to calculate a weighted sparse coefficient of the blurred real-time image y; multiplying the sparse coefficient by the pixel dictionary matrix D to obtain the initialized clear real-time image x;
(5) iterative restoration and matching steps:
calculating a blur kernel based on the clear real-time image x and the blurred real-time image y; updating the clear real-time image x according to the blur kernel; stretching the pixels of the updated clear real-time image x into a column vector v3, and multiplying the vector v3 by the mapping matrix T to obtain a linear feature column vector v4; combining the linear feature column vector v4 and the linear feature dictionary matrix Df to calculate a weighted sparse coefficient of the clear real-time image x; repeating step (5) until a predetermined number of iterations is reached;
(6) and a matching result output step:
selecting the maximum component of the weighted sparse coefficient of the clear real-time image x; the position in the reference image I of the corresponding element of the pixel dictionary matrix D is the final matching result.
Further, the linear feature dictionary matrix Df construction step is specifically implemented as follows:
calculating the covariance matrix M of the column vectors of the pixel dictionary matrix D; solving the eigenvalues and eigenvectors of the covariance matrix M; screening m eigenvalues such that the ratio of the sum of the m eigenvalues to the sum of all eigenvalues is greater than a predetermined threshold; extracting the eigenvectors corresponding to the m eigenvalues to form the mapping matrix T; multiplying D by T to obtain the linear feature dictionary matrix Df.
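The eigenvalue screening described above can be sketched as a short PCA-style routine. This is a sketch under stated assumptions: `build_mapping_matrix` is an illustrative name, the columns are centered before forming the covariance (a common but here assumed choice), and the 0.9 energy threshold comes from the embodiment below.

```python
import numpy as np

def build_mapping_matrix(D, energy=0.9):
    """Keep the top eigenvectors of the covariance of D's column vectors
    until their share of the total eigenvalue mass exceeds `energy`."""
    X = D - D.mean(axis=1, keepdims=True)       # center the column vectors
    M = (X @ X.T) / D.shape[1]                  # covariance matrix M
    vals, vecs = np.linalg.eigh(M)              # eigh returns ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]      # sort descending
    m = int(np.searchsorted(np.cumsum(vals) / vals.sum(), energy)) + 1
    T = vecs[:, :m].T                           # m x d mapping matrix T
    return T, T @ D                             # T and feature dictionary Df

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 40))               # toy dictionary: 16-dim pixels, 40 blocks
T, Df = build_mapping_matrix(D)
```

Multiplying any pixel column vector by `T` then yields its reduced-dimension linear feature vector, as used in steps (4) and (5).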
Further, the linear feature dictionary matrix Df construction step may alternatively be implemented as follows:
converting the elements of the pixel dictionary matrix D into two-dimensional images, denoted Ai, i = 1, 2, …, N, where N is the number of dictionary elements; according to the formula

Gt = (1/N) Σ_{i=1}^{N} (Ai − Ā)^T (Ai − Ā), where Ā is the mean of all Ai,

computing the overall scatter matrix Gt of the images; calculating the eigenvalues and eigenvectors of Gt;
screening m eigenvalues such that the ratio of the sum of the m eigenvalues to the sum of all eigenvalues is greater than a predetermined threshold; extracting the eigenvectors corresponding to the m eigenvalues to form a mapping matrix T1; calculating the linear mapping feature maps Fi = Ai·T1, i = 1, 2, …, N;
computing the overall scatter matrix

Gf = (1/N) Σ_{i=1}^{N} (Fi − F̄)(Fi − F̄)^T, where F̄ is the mean of all Fi;

calculating the eigenvalues and eigenvectors of Gf; screening n eigenvalues such that the ratio of the sum of the n eigenvalues to the sum of all eigenvalues is greater than a predetermined threshold; extracting the eigenvectors corresponding to the n eigenvalues to form a mapping matrix T2; calculating the final linear mapping feature maps Di = T2^T·Fi, i = 1, 2, …, N, and converting each Di into a column vector to form the linear feature dictionary matrix Df.
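This two-sided construction can be sketched as a 2DPCA-style routine. The sketch assumes the scatter matrices take the standard 2DPCA form (mean-subtracted outer products, reconstructed here since the patent's formula images are unavailable); `top_eigvecs` and `two_sided_features` are illustrative names.

```python
import numpy as np

def top_eigvecs(G, energy=0.9):
    """Eigenvectors of symmetric G whose eigenvalue share exceeds `energy`."""
    vals, vecs = np.linalg.eigh(G)
    vals, vecs = vals[::-1], vecs[:, ::-1]      # descending eigenvalues
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), energy)) + 1
    return vecs[:, :k]

def two_sided_features(images, energy=0.9):
    """T1 from the scatter of image columns, T2 from the scatter of the
    projected feature-map rows; final feature maps Di = T2^T . (Ai . T1)."""
    A = np.stack(images).astype(float)                       # N x h x w
    Abar = A.mean(axis=0)
    Gt = sum((Ai - Abar).T @ (Ai - Abar) for Ai in A) / len(A)
    T1 = top_eigvecs(Gt, energy)                             # w x m
    F = A @ T1                                               # N x h x m maps Fi
    Fbar = F.mean(axis=0)
    Gf = sum((Fi - Fbar) @ (Fi - Fbar).T for Fi in F) / len(F)
    T2 = top_eigvecs(Gf, energy)                             # h x n
    Dmaps = np.einsum('hn,ihm->inm', T2, F)                  # T2^T . Fi for each i
    return T1, T2, Dmaps

rng = np.random.default_rng(1)
imgs = rng.standard_normal((8, 6, 5))                        # 8 toy 6x5 dictionary images
T1, T2, Dmaps = two_sided_features(imgs)
```

Because T1 acts on the columns and T2 on the rows, each scatter matrix keeps the dimensions of the two-dimensional image, which is the source of the efficiency claim made later in the description.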
Further, the clear real-time image x initialization step is specifically implemented as follows:
(4-1) stretching the blurred real-time image y into a pixel column vector v1 in row-priority order, and multiplying the vector v1 by the mapping matrix T to obtain the column vector v2;
(4-2) calculating the Euclidean distance w1 between the column vector v2 and each column vector of the dictionary matrix Df; using w1 as a weighting coefficient, sparsely representing the vector v2 over the dictionary matrix Df to obtain a sparse coefficient α1;
(4-3) multiplying the sparse coefficient α1 by the pixel dictionary matrix D to obtain the initialized clear real-time image x.
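The distance-weighted sparse coding of steps (4-1) to (4-3) can be sketched with a simple ISTA solver for the weighted L1 problem. This is a stand-in solver under stated assumptions (the patent does not name its solver); `weighted_sparse_code`, the λ value, and the iteration count are illustrative.

```python
import numpy as np

def weighted_sparse_code(v, Df, lam=0.05, n_iter=300):
    """Solve min_a 0.5*||v - Df a||^2 + lam * sum_i w_i |a_i| by ISTA,
    where w_i is the Euclidean distance from v to the i-th dictionary
    column, so distant dictionary elements are penalised more heavily."""
    w = np.linalg.norm(Df - v[:, None], axis=0)     # distance weights (w1)
    L = np.linalg.norm(Df, 2) ** 2                  # Lipschitz constant of the gradient
    alpha = np.zeros(Df.shape[1])
    for _ in range(n_iter):
        z = alpha - Df.T @ (Df @ alpha - v) / L     # gradient step
        alpha = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # weighted soft-threshold
    return alpha

# toy check: if v equals a dictionary column, that column's weight is 0,
# so the largest coefficient should land on it
rng = np.random.default_rng(2)
Df = rng.standard_normal((10, 6))
Df /= np.linalg.norm(Df, axis=0)                    # unit-norm columns
v = Df[:, 2]
alpha = weighted_sparse_code(v, Df)
```

The position of the dominant coefficient is exactly what the matching output step reads off, which is why an accurate sparse coefficient translates directly into matching accuracy.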
Further, the specific implementation of calculating the blur kernel based on the clear real-time image x and the blurred real-time image y is as follows:

the blur kernel

k = F^{-1}( (F̄(x) ∘ F(y)) / (F̄(x) ∘ F(x) + γI) )

where F(·) is the fast Fourier transform, F^{-1}(·) is the inverse fast Fourier transform, F̄(·) is the complex conjugate of F(·), the coefficient γ is 0.003 to 0.007, I is the identity matrix, and ∘ is the element-wise (Hadamard) matrix product.
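The kernel formula described above (a Wiener-style frequency-domain estimate) can be sketched as follows. A minimal sketch assuming circular boundary conditions; `estimate_blur_kernel` is an illustrative name, and the non-negativity clip and normalisation at the end are conveniences not stated in the patent.

```python
import numpy as np

def estimate_blur_kernel(x, y, gamma=0.005):
    """k = F^-1( conj(F(x)) . F(y) / (conj(F(x)) . F(x) + gamma) ):
    divide the blurred spectrum by the sharp spectrum, regularised by gamma."""
    Fx, Fy = np.fft.fft2(x), np.fft.fft2(y)
    k = np.fft.ifft2(np.conj(Fx) * Fy / (np.conj(Fx) * Fx + gamma)).real
    k = np.maximum(k, 0.0)          # blur kernels are non-negative
    return k / k.sum()              # and sum to 1

# sanity check: blur with a known two-tap kernel (circular convolution)
# and re-estimate it
rng = np.random.default_rng(3)
x = rng.standard_normal((32, 32)) + 5.0
true_k = np.zeros((32, 32)); true_k[0, 0], true_k[0, 1] = 0.6, 0.4
y = np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(true_k)).real
k = estimate_blur_kernel(x, y, gamma=1e-3)
```

In the noise-free case the estimate shrinks each frequency of the true kernel by |F(x)|²/(|F(x)|² + γ), so a small γ recovers the kernel almost exactly.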
Further, the specific implementation manner of updating the clear real-time image x according to the blur kernel is as follows:
according to the objective function:

min_x ||k⊗x − y||_2^2 + η||x − Dα||_2^2 + τ( ||e1⊗x||_s^s + ||e2⊗x||_s^s )

introducing auxiliary variables u = (u1, u2), the objective function becomes:

min_{x,u} ||k⊗x − y||_2^2 + η||x − Dα||_2^2 + β( ||e1⊗x − u1||_2^2 + ||e2⊗x − u2||_2^2 ) + τ( ||u1||_s^s + ||u2||_s^s )

where k is the previously estimated blur kernel, ⊗ denotes convolution, y is the blurred real-time image, D is the pixel dictionary matrix, e1 = [1, −1] and e2 = [1, −1]^T are gradient filters, and η, τ and β are weight parameters;

the clear real-time image x is obtained by minimizing the objective function.
Further, the clear real-time image x is solved by an alternating minimization method: in each iteration, u is fixed to optimize x, and then x is fixed to optimize u;

fixing u, x is optimized by solving the objective function:

min_x ||k⊗x − y||_2^2 + η||x − Dα||_2^2 + β( ||e1⊗x − u1||_2^2 + ||e2⊗x − u2||_2^2 )

whose solution in the Fourier domain is:

x = F^{-1}( ( F̄(k)∘F(y) + ηF(Dα) + β( F̄(e1)∘F(u1) + F̄(e2)∘F(u2) ) ) / ( F̄(k)∘F(k) + η + β( F̄(e1)∘F(e1) + F̄(e2)∘F(e2) ) ) )

then fixing x, u is solved from:

uj = argmin_u β||ej⊗x − u||_2^2 + τ||u||_s^s, j = 1, 2,

which is solved element-wise.
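One round of the alternating minimization described above can be sketched as follows. This sketch assumes the common half-quadratic-splitting form of the subproblems (the patent's own equation images are unavailable), with e1 = [1, −1] and e2 = [1, −1]^T applied as circular convolution kernels; `update_x`/`update_u` are illustrative names, and the u-step uses a generalized-shrinkage approximation rather than the exact |u|^s minimiser.

```python
import numpy as np

def update_x(y, k, Dalpha, u1, u2, eta=1.0, beta=0.015):
    """Closed-form x-update of the fixed-u quadratic subproblem,
    solved in the Fourier domain (circular boundaries assumed)."""
    e1 = np.zeros_like(y); e1[0, 0], e1[0, 1] = 1.0, -1.0   # horizontal gradient filter
    e2 = np.zeros_like(y); e2[0, 0], e2[1, 0] = 1.0, -1.0   # vertical gradient filter
    Fk, Fe1, Fe2 = np.fft.fft2(k), np.fft.fft2(e1), np.fft.fft2(e2)
    num = (np.conj(Fk) * np.fft.fft2(y) + eta * np.fft.fft2(Dalpha)
           + beta * (np.conj(Fe1) * np.fft.fft2(u1) + np.conj(Fe2) * np.fft.fft2(u2)))
    den = np.conj(Fk) * Fk + eta + beta * (np.conj(Fe1) * Fe1 + np.conj(Fe2) * Fe2)
    return np.fft.ifft2(num / den).real

def update_u(gx, tau=1.4, beta=0.015, s=0.5):
    """Fixed-x u-update on the gradient image gx; a generalised shrinkage
    approximation stands in for the exact hyper-Laplacian |u|^s minimiser."""
    thr = (tau / beta) * (np.abs(gx) + 1e-8) ** (s - 1.0)
    return np.sign(gx) * np.maximum(np.abs(gx) - thr, 0.0)

# consistency check: with a delta kernel, u set to the exact gradients of y,
# and D.alpha = y, the x-update must return y itself
rng = np.random.default_rng(4)
y = rng.standard_normal((8, 8))
k = np.zeros((8, 8)); k[0, 0] = 1.0
e1 = np.zeros((8, 8)); e1[0, 0], e1[0, 1] = 1.0, -1.0
e2 = np.zeros((8, 8)); e2[0, 0], e2[1, 0] = 1.0, -1.0
u1 = np.fft.ifft2(np.fft.fft2(e1) * np.fft.fft2(y)).real
u2 = np.fft.ifft2(np.fft.fft2(e2) * np.fft.fft2(y)).real
x = update_x(y, k, y, u1, u2)
```

The consistency check exercises the fixed point of the quadratic solve: when every data term already agrees, the numerator factors into the denominator times F(y).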
a linear feature based joint image restoration and matching system, comprising the following modules:
the input module is used for inputting a reference image I and a blurred real-time image y;
the pixel dictionary matrix D construction module, used for extracting, in a sliding-window manner over the reference image I, a plurality of image blocks with the same size as the blurred real-time image y, stretching the pixels of each image block into a column vector, and arranging the pixel column vectors of all the image blocks to form a pixel dictionary matrix D;
a linear feature dictionary matrix Df construction module, used for constructing, based on the pixel dictionary matrix D, a mapping matrix T representing the linear features of the pixel dictionary matrix D, and from it obtaining a linear feature dictionary matrix Df;
an initialize clear real-time image x module, used for stretching the pixels of the blurred real-time image y into a column vector v1 and multiplying the vector v1 by the mapping matrix T to obtain a linear feature column vector v2; combining the linear feature column vector v2 and the linear feature dictionary matrix Df to calculate a weighted sparse coefficient of the blurred real-time image y; and multiplying the sparse coefficient by the pixel dictionary matrix D to obtain the initialized clear real-time image x;
an iterative restoration and matching module, used for calculating a blur kernel based on the clear real-time image x and the blurred real-time image y; updating the clear real-time image x according to the blur kernel; stretching the pixels of the updated clear real-time image x into a column vector v3 and multiplying the vector v3 by the mapping matrix T to obtain a linear feature column vector v4; combining the linear feature column vector v4 and the linear feature dictionary matrix Df to calculate a weighted sparse coefficient of the clear real-time image x; and repeating the iterative restoration and matching module until a predetermined number of iterations is reached;
and a matching result output module, used for selecting the maximum component of the weighted sparse coefficient of the clear real-time image x; the position in the reference image I of the corresponding element of the pixel dictionary matrix D is the final matching result.
Through the technical scheme, compared with the prior art, the invention has the following beneficial effects:
In the combined image restoration and matching method based on linear features, a linear mapping matrix is constructed and the main linear features are extracted and used to calculate the initialized clear real-time image and the weighted sparse coefficient. This reduces the interference caused by image blurring, removes a large amount of useless information, and preserves the linear relations of the clear real-time image reconstructed from the sparse coefficients, thereby improving image restoration and matching accuracy while reducing the amount of computation and improving the real-time performance of the algorithm.
According to a preferred embodiment, a linear mapping matrix of one-dimensional vectors is analyzed and constructed, and due to the fact that specific characteristic values are screened, corresponding characteristic vectors are extracted to form the linear mapping matrix, the effects of screening main information and removing interference information are achieved.
According to a better implementation mode, a linear mapping matrix for constructing the two-dimensional image is analyzed, and the linear mapping matrix is constructed for the rows and the columns of the two-dimensional image respectively, so that the total dispersion matrix and the two-dimensional image have the same dimension, the main characteristics of an image airspace are kept, the difficulty of calculating characteristic values and characteristic roots is reduced, and the technical effect of improving the accuracy and the real-time performance of an algorithm is achieved.
Drawings
FIG. 1 is a flow chart of the joint image restoration and matching method based on linear features.
FIG. 2 is a schematic diagram of the joint image restoration and matching method based on linear features.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The terms used in the present invention are explained and illustrated first below.
Image restoration: image restoration is one of the important fields of digital image processing, and is widely applied to the fields of aerospace, astronomical observation, medical image diagnosis and the like. The restoration of images is required because the images are inevitably degraded by noise, blurring, etc. during the formation, recording, processing and transmission processes due to the internal and external reasons of the image system. Image restoration attempts to reconstruct or restore the degraded image using some a priori knowledge of the degradation phenomenon, so the restoration technique models the degradation and uses the reverse process to restore the original image. The general image degradation process can be modeled as a degradation function and an additive noise term, and an input image f (x, y) is processed to produce a degraded image g (x, y). Given g (x, y) and some knowledge about the degradation function H, and the applied noise term η (x, y), the purpose of image restoration is to obtain an approximate estimate about the original image
Figure BDA0001941537810000071
If the system H is a linear, position invariant process, then the degraded image given in the spatial domain can be given by:
g(x,y)=h(x,y)*f(x,y)+η(x,y)
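The linear, position-invariant degradation model above can be sketched in a few lines. Circular convolution via the FFT is assumed for simplicity, and `degrade` is an illustrative name.

```python
import numpy as np

def degrade(f, h, noise_sigma=0.0, rng=None):
    """g(x,y) = h(x,y) * f(x,y) + eta(x,y): linear position-invariant
    blur (circular convolution via FFT) plus additive Gaussian noise."""
    g = np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h, s=f.shape)).real
    if noise_sigma > 0.0:
        rng = rng or np.random.default_rng()
        g = g + rng.normal(0.0, noise_sigma, f.shape)
    return g

# with the identity kernel (a single impulse) and no noise, g equals f
f = np.arange(16.0).reshape(4, 4)
h = np.zeros((4, 4)); h[0, 0] = 1.0
g = degrade(f, h)
```

Non-blind restoration assumes `h` is known and inverts this model; blind restoration must estimate `h` as well, which is the case the patent addresses.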
Image restoration algorithms are mainly classified into non-blind deconvolution and blind deconvolution algorithms, according to whether the blur convolution kernel is known.
Image matching: image matching algorithms are generally classified into feature-based methods and pixel-based methods. The image matching algorithm based on the features is divided into two steps, wherein the first step is to extract the features from the real-time image and the reference image, and the second step is to compare the similarity based on the two groups of features to obtain a matching result. The algorithm mainly researches and works on extracting different features, such as sift features, SURF features and SUSAN features aiming at point features, Canny features and orthogonal barren wave features aiming at line features and the like. The pixel-based image matching algorithm is to extract different image blocks on a reference image by using a sliding window and compare the different image blocks with a real-time image, and because all pixels are directly used, a better matching result can be obtained under the condition of noise and blur. The main research of the algorithm is to accurately calculate the similarity of pixels, such as selective correlation coefficient, cross-correlation coefficient, and the like.
Sparse representation: sparse representation given data may be based on dictionaries, represented as a linear combination of a portion of elements. Given data x ∈ RmDictionary D ═ D1,d2,…,dn]∈Rm×nAnd m is less than or equal to n, we can obtain based on sparse expression:
Figure BDA0001941537810000081
the above model yields the sparsest representation of a given x based on dictionary D. For the conventional non-orthogonal and non-overcomplete dictionary D, the model solution is an NP difficult problem, so L is generally adopted1Regularization mentions L0The regularization is solved. Based on Lagrange dual algorithm, the method can be converted into the following formula for solvingSolution:
Figure BDA0001941537810000082
fig. 1 shows a preferred embodiment of the present invention, which specifically includes the following steps:
(1) an input step:
inputting a reference image I and a blurred real-time image y;
(2) constructing a pixel dictionary matrix D:
(2-1) extracting image blocks B from the reference image I using a sliding window with a specific step length, the length and width of each extracted image block B being the same as those of the real-time image y;
(2-2) stretching the pixels of the image block B into a pixel column vector v in row-priority order;
(2-3) forming a matrix by pixel column vectors formed by stretching all the image blocks, namely forming a pixel dictionary matrix D;
(3) a linear feature dictionary matrix Df construction step, which comprises the following embodiments:
the first embodiment:
constructing, based on the pixel dictionary matrix D, a mapping matrix T representing the linear features of the pixel dictionary matrix D, and multiplying the pixel dictionary matrix D by the mapping matrix T to obtain the linear feature dictionary matrix Df. More specifically, the preferred embodiment is:
(3-1) calculating a covariance matrix M of the pixel dictionary matrix D;
(3-2) solving the eigenvalues and eigenvectors of the covariance matrix M by eigenvalue decomposition;
(3-3) selecting m eigenvalues such that the sum of the m eigenvalues divided by the sum of all eigenvalues is greater than 0.9, and taking the eigenvectors corresponding to the m eigenvalues to form the mapping matrix T;
(3-4) multiplying the pixel dictionary matrix D by the mapping matrix T to obtain a linear feature dictionary matrix Df
The second embodiment:
(3-1) converting the elements of the pixel dictionary matrix D into two-dimensional images, denoted Ai, i = 1, 2, …, N, where N is the number of dictionary elements; according to the formula

Gt = (1/N) Σ_{i=1}^{N} (Ai − Ā)^T (Ai − Ā), where Ā is the mean of all Ai,

computing the overall scatter matrix Gt of the images; calculating the eigenvalues and eigenvectors of Gt;
(3-2) screening m eigenvalues such that the ratio of the sum of the m eigenvalues to the sum of all eigenvalues is greater than a predetermined threshold; extracting the eigenvectors corresponding to the m eigenvalues to form a mapping matrix T1; calculating the linear mapping feature maps Fi = Ai·T1, i = 1, 2, …, N;
(3-3) computing the overall scatter matrix

Gf = (1/N) Σ_{i=1}^{N} (Fi − F̄)(Fi − F̄)^T, where F̄ is the mean of all Fi;

calculating the eigenvalues and eigenvectors of Gf; screening n eigenvalues such that the ratio of the sum of the n eigenvalues to the sum of all eigenvalues is greater than a predetermined threshold; extracting the eigenvectors corresponding to the n eigenvalues to form a mapping matrix T2; calculating the final linear mapping feature maps Di = T2^T·Fi, i = 1, 2, …, N, and converting each Di into a column vector to form the linear feature dictionary matrix Df.
(4) Initializing a clear real-time image x:
stretching the pixels of the blurred real-time image y into a column vector v1, and multiplying the vector v1 by the mapping matrix T to obtain a linear feature column vector v2; combining the linear feature column vector v2 and the linear feature dictionary matrix Df to calculate a weighted sparse coefficient of the blurred real-time image y; and multiplying the sparse coefficient by the pixel dictionary matrix D to obtain the initialized clear real-time image x. A more specific preferred embodiment is as follows:
(4-1) stretching the blurred real-time image y into a pixel column vector v1 in row-priority order, and multiplying the vector v1 by the mapping matrix T to obtain the reduced-dimension column vector v2;
(4-2) calculating the Euclidean distance w1 between the vector v2 and each column vector of the dictionary matrix Df; using w1 as a weighting coefficient, sparsely representing the vector v2 over the dictionary matrix Df to obtain a sparse coefficient α;
(4-3) selecting n sparse coefficient components such that the sum of the n components divided by the sum of all components exceeds a predetermined threshold (e.g., 0.95), and multiplying the n sparse coefficient components by the corresponding elements of the pixel dictionary matrix D to obtain the initialized clear real-time image x.
(5) Iterative restoration and matching steps:
setting the number of iterations T; calculating a blur kernel based on the clear real-time image x and the blurred real-time image y; updating the clear real-time image x according to the blur kernel; stretching the pixels of the updated clear real-time image x into a column vector v3, and multiplying the vector v3 by the mapping matrix T to obtain a linear feature column vector v4; combining the linear feature column vector v4 and the linear feature dictionary matrix Df to calculate a weighted sparse coefficient of the clear real-time image x; repeating this step until the predetermined number of iterations is reached.
Calculating the blur kernel k: according to the formula

k = F^{-1}( (F̄(x) ∘ F(y)) / (F̄(x) ∘ F(x) + γI) )

computing the blur kernel k, where x is the clear real-time image, y is the blurred real-time image, F(·) is the fast Fourier transform, F^{-1}(·) is the inverse fast Fourier transform, F̄(·) is the complex conjugate of F(·), γ can be taken as 0.005, I is the identity matrix, and ∘ is the element-wise (Hadamard) matrix product.
Updating the clear real-time image x:
according to the objective function

min_x ||k⊗x − y||_2^2 + η||x − Dα||_2^2 + τ( ||e1⊗x||_s^s + ||e2⊗x||_s^s )

introducing auxiliary variables u = (u1, u2), the objective function becomes:

min_{x,u} ||k⊗x − y||_2^2 + η||x − Dα||_2^2 + β( ||e1⊗x − u1||_2^2 + ||e2⊗x − u2||_2^2 ) + τ( ||u1||_s^s + ||u2||_s^s )

where k is the previously estimated blur kernel, y is the blurred real-time image, D is the pixel dictionary matrix, e1 = [1, −1], e2 = [1, −1]^T, s is 0.5, β is 0.015, η is 1, and τ is 1.4. The problem is solved by an alternating minimization method, whose basic idea is iterative solution: in each iteration, u is fixed to optimize x, and then x is fixed to optimize u.
Optimizing x: fixing u, x is optimized by solving the objective function:

min_x ||k⊗x − y||_2^2 + η||x − Dα||_2^2 + β( ||e1⊗x − u1||_2^2 + ||e2⊗x − u2||_2^2 )

whose solution in the Fourier domain is:

x = F^{-1}( ( F̄(k)∘F(y) + ηF(Dα) + β( F̄(e1)∘F(u1) + F̄(e2)∘F(u2) ) ) / ( F̄(k)∘F(k) + η + β( F̄(e1)∘F(e1) + F̄(e2)∘F(e2) ) ) )
optimizing u:
fixing x, solving u:
Figure BDA0001941537810000113
calculating weighted sparse coefficients, comprising the steps of
Stretching the estimated clear real-time image x into a column vector v3 according to the row priority order, and multiplying the vector v3 by a mapping matrix T to obtain a reduced-dimension column vector v 4;
computing vector v4 and dictionary matrix DfThe Euclidean distance w2 of each column vector is calculated by taking the Euclidean distance w2 as a weighting coefficient to calculate the vector v2 in the dictionary matrix DfTo obtain a sparse coefficient alpha
(6) a matching result output step:
the position in the reference image of the element of the pixel dictionary matrix D corresponding to the largest component of the sparse coefficient α is the final matching result.
Example:
to compare the differences between the present invention and other methods, the following experiments were performed. Giving 3 reference images with the size of 600 × 600, randomly taking 100 same coordinates on each image as a central point, extracting a picture with the size of 49 × 49 as a real-time image, adding Gaussian blur to the real-time image, setting the standard deviation of the Gaussian blur to be 1 to 5, setting the step length of a sliding window extraction dictionary image to be 1, and carrying out experiments by adopting different methods. And recording the sum of pixel differences between the real-time image matching coordinates and the actual coordinates as PD, and counting the sample proportion of which the PD is smaller than different specific thresholds. As shown in table 1 below, which is an experimental result of a combined image restoration and matching algorithm based on weighted distance, and table 2, which is a result of combined image restoration and matching based on linear characteristics according to the present invention, it can be seen that when the standard deviation of gaussian blur is 1, the matching precision of the two methods is the same. However, as the standard deviation of the gaussian blur gradually increases, the matching accuracy of the weighted distance-based joint image restoration and matching algorithm sharply decreases, and when the standard deviation of the gaussian blur is 5, the sample ratio of PD < 6 is 0.62. Although the accuracy of the joint image restoration and matching based on the linear feature is reduced to a certain extent, the matching accuracy is much higher than that of the former, and when the gaussian blur standard deviation is 5, the sample ratio of PD < 6 is 0.9767.
        PD<=0   PD<=1   PD<=2   PD<=3   PD<=4   PD<=5   PD<=6
σ=1     1       1       1       1       1       1       1
σ=2     0.9433  0.9867  0.9933  0.9933  0.9933  0.9933  0.9933
σ=3     0.4300  0.8100  0.9133  0.9400  0.9433  0.9433  0.9433
σ=4     0.1400  0.4600  0.6767  0.7667  0.8067  0.8133  0.8133
σ=5     0.0300  0.1633  0.3467  0.4733  0.5567  0.6000  0.6200

TABLE 1 Experimental results of the joint image restoration and matching method based on weighted distance
        PD<=0   PD<=1   PD<=2   PD<=3   PD<=4   PD<=5   PD<=6
σ=1     0.9900  1       1       1       1       1       1
σ=2     0.9100  0.9800  1       1       1       1       1
σ=3     0.6267  0.9000  0.9900  1       1       1       1
σ=4     0.2767  0.6867  0.9167  0.9800  1       1       1
σ=5     0.1267  0.4233  0.7200  0.8767  0.9367  0.9567  0.9767

TABLE 2 Experimental results of the joint image restoration and matching method based on linear features
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (3)

1. A joint image restoration and matching method based on linear features is characterized by comprising the following steps:
(1) an input step:
inputting a reference image I and a blurred real-time image y;
(2) a pixel dictionary matrix D construction step:
extracting, in a sliding-window manner over the reference image I, image blocks with the same size as the blurred real-time image y, stretching the pixels of each image block into a column vector, and arranging the pixel column vectors of all the image blocks to form a pixel dictionary matrix D;
(3) a linear feature dictionary matrix D_f construction step:
constructing a mapping matrix T representing the linear features of the pixel dictionary matrix D based on the pixel dictionary matrix D, and further obtaining a linear feature dictionary matrix D_f; the linear feature dictionary matrix D_f construction step is specifically implemented in one of the following two ways:
calculating a covariance matrix M of the column vectors in the pixel dictionary matrix D; solving the eigenvalues and eigenvectors of the covariance matrix M; selecting m eigenvalues such that the ratio of the sum of the m eigenvalues to the sum of all eigenvalues is larger than a preset threshold; extracting the eigenvectors corresponding to the m eigenvalues to form a mapping matrix T; and multiplying D by T to obtain the linear feature dictionary matrix D_f;
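The first (covariance/PCA) construction can be sketched as follows; `pca_mapping` and the `energy` threshold are illustrative names, and D_f is formed here as Tᵀ·D, which is one plausible reading of "multiplying D by T".

```python
import numpy as np

def pca_mapping(D, energy=0.95):
    """Eigen-decompose the covariance of the dictionary columns and keep
    the m leading eigenvectors whose eigenvalue mass exceeds `energy`."""
    M = np.cov(D)                      # covariance of the column vectors (p x p)
    vals, vecs = np.linalg.eigh(M)     # eigh returns ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]
    ratio = np.cumsum(vals) / np.sum(vals)
    m = int(np.searchsorted(ratio, energy)) + 1
    T = vecs[:, :m]                    # mapping matrix: one eigenvector per column
    Df = T.T @ D                       # linear feature dictionary
    return T, Df

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 40))      # toy dictionary: 16-pixel patches, 40 atoms
T, Df = pca_mapping(D, energy=0.95)
```

Projecting onto T discards low-variance directions, which is the source of both the noise/blur robustness and the reduced computation described in the abstract.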
Or,
converting the elements of the pixel dictionary matrix D into two-dimensional images, denoted A_i, i = 1, 2, …, N, where N is the number of dictionary elements; according to the formula
G_t = (1/N) · Σ_{i=1}^{N} (A_i − Ā)ᵀ (A_i − Ā), where Ā = (1/N) · Σ_{i=1}^{N} A_i,
computing the overall scatter matrix G_t of the images, and calculating the eigenvalues and eigenvectors of G_t;
selecting m eigenvalues such that the ratio of the sum of the m eigenvalues to the sum of all eigenvalues is larger than a preset threshold; extracting the eigenvectors corresponding to the m eigenvalues to form a mapping matrix T1; and calculating the linear mapping feature maps F_i = A_i · T1, i = 1, 2, …, N;
computing the overall scatter matrix
G_f = (1/N) · Σ_{i=1}^{N} (F_i − F̄) (F_i − F̄)ᵀ, where F̄ = (1/N) · Σ_{i=1}^{N} F_i,
and calculating the eigenvalues and eigenvectors of G_f; selecting n eigenvalues such that the ratio of the sum of the n eigenvalues to the sum of all eigenvalues is larger than a preset threshold; extracting the eigenvectors corresponding to the n eigenvalues to form a mapping matrix T2; calculating the final linear mapping feature maps D_i = T2ᵀ · F_i, i = 1, 2, …, N; and converting each D_i into a column vector to form the linear feature dictionary matrix D_f;
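The two-directional (2DPCA-style) alternative can be sketched as follows, assuming the standard total-scatter definitions for G_t and G_f; all function names and the `energy` threshold are illustrative.

```python
import numpy as np

def two_sided_2dpca(As, energy=0.9):
    """Right projection T1 from the total scatter of the images A_i,
    left projection T2 from the scatter of the mapped feature maps F_i;
    each D_i = T2^T (A_i T1) is flattened into one dictionary column."""
    def top_vecs(G):
        vals, vecs = np.linalg.eigh(G)
        vals, vecs = vals[::-1], vecs[:, ::-1]        # descending order
        k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), energy)) + 1
        return vecs[:, :k]

    A = np.stack(As).astype(float)
    Abar = A.mean(axis=0)
    Gt = sum((Ai - Abar).T @ (Ai - Abar) for Ai in A) / len(A)
    T1 = top_vecs(Gt)
    F = [Ai @ T1 for Ai in A]                          # right-projected feature maps
    Fbar = np.mean(F, axis=0)
    Gf = sum((Fi - Fbar) @ (Fi - Fbar).T for Fi in F) / len(F)
    T2 = top_vecs(Gf)
    Df = np.stack([(T2.T @ Fi).reshape(-1) for Fi in F], axis=1)
    return T1, T2, Df

rng = np.random.default_rng(1)
imgs = [rng.standard_normal((8, 8)) for _ in range(20)]  # toy dictionary images
T1, T2, Df = two_sided_2dpca(imgs)
```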
(4) a clear real-time image x initialization step:
stretching the blurred real-time image y into a pixel column vector v1 in row-major order, and multiplying the vector v1 by the mapping matrix T to obtain a column vector v2; calculating the Euclidean distance w1 between the column vector v2 and each column vector of the dictionary matrix D_f; using w1 as a weighting coefficient, sparsely expressing the vector v2 over the dictionary matrix D_f to obtain a sparse coefficient α1; and multiplying the sparse coefficient α1 by the pixel dictionary matrix D to obtain the initialized clear real-time image x;
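Step (4) can be sketched with a simple ISTA loop for the distance-weighted sparse expression. The weighted-L1 model and the solver choice are assumptions, since the claim does not fix a particular sparse-coding algorithm; all names are illustrative.

```python
import numpy as np

def weighted_sparse_init(y_blur, D, Df, T, lam=0.01, n_iter=200):
    """Distance-weighted sparse coding sketch (ISTA): atoms whose linear
    features lie far from the query are penalised more heavily."""
    v2 = T.T @ y_blur.reshape(-1)                   # query in feature space
    w = np.linalg.norm(Df - v2[:, None], axis=0)    # Euclidean distance to each atom
    L = np.linalg.norm(Df, 2) ** 2                  # Lipschitz constant of the gradient
    alpha = np.zeros(Df.shape[1])
    for _ in range(n_iter):
        grad = Df.T @ (Df @ alpha - v2)
        z = alpha - grad / L
        thr = lam * w / L
        alpha = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)  # weighted soft-threshold
    x0 = (D @ alpha).reshape(y_blur.shape)          # initial clear image estimate
    return x0, alpha

# Toy case: identity dictionary and mapping; y equals atom 1 exactly
D = np.eye(4)
T = np.eye(4)
Df = T.T @ D
y = np.array([[0.0, 1.0], [0.0, 0.0]])
x0, alpha = weighted_sparse_init(y, D, Df, T, lam=0.1, n_iter=50)
```

In the toy case the zero-distance atom is not penalised at all, so the sparse coefficient concentrates on it, mirroring how the weighting steers the expression toward nearby dictionary atoms.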
(5) iterative restoration and matching steps:
calculating a blur kernel based on the clear real-time image x and the blurred real-time image y; updating the clear real-time image x according to the blur kernel; stretching the pixels of the updated clear real-time image x into a column vector v3, and multiplying the vector v3 by the mapping matrix T to obtain a linear feature column vector v4; calculating the weighted sparse coefficient of the clear real-time image x by combining the linear feature column vector v4 and the linear feature dictionary matrix D_f; and repeating step (5) until a predetermined number of iterations is reached;
the specific implementation manner of calculating the blur kernel based on the clear real-time image x and the blurred real-time image y is as follows:
blur kernel
k = F⁻¹( ( F̄(x) ⊙ F(y) ) / ( F̄(x) ⊙ F(x) + γ·I ) )
wherein F(·) is the fast Fourier transform, F⁻¹(·) is the inverse fast Fourier transform, F̄(·) is the complex conjugate of F(·), the coefficient γ takes a value of 0.003 to 0.007, I is the identity matrix, ⊙ is the element-wise (Hadamard) product, and the division is performed element-wise;
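A frequency-domain sketch of the kernel estimate above, assuming the division is element-wise and γ acts as a scalar regulariser; the function name is illustrative.

```python
import numpy as np

def estimate_kernel(x, y, gamma=0.005):
    """Wiener-style kernel estimate: k = F^-1( conj(F(x)) * F(y)
    / (conj(F(x)) * F(x) + gamma) ), with gamma in [0.003, 0.007]
    regularising the element-wise division."""
    Fx, Fy = np.fft.fft2(x), np.fft.fft2(y)
    K = np.conj(Fx) * Fy / (np.conj(Fx) * Fx + gamma)
    return np.real(np.fft.ifft2(K))

# Sanity check: if the "blurred" image equals the sharp one, the
# estimated kernel should be close to a delta at the origin.
rng = np.random.default_rng(2)
x = rng.standard_normal((16, 16))
k = estimate_kernel(x, x, gamma=0.003)
```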
the specific implementation manner of updating the clear real-time image x according to the blur kernel is as follows:
according to the objective function:
min_x ‖x ⊗ k − y‖² + η·‖x − D·α‖² + τ·( ‖e1 ⊗ x‖₁ + ‖e2 ⊗ x‖₁ ),
an auxiliary variable u = (u1, u2) is introduced for the gradient terms, and the objective function becomes:
min_{x,u} ‖x ⊗ k − y‖² + η·‖x − D·α‖² + τ·‖u‖₁ + β·‖u − ∇x‖², where ∇x = (e1 ⊗ x, e2 ⊗ x);
where k is the previously estimated blur kernel, y is the blurred real-time image, D is the pixel dictionary matrix, α is the sparse coefficient, ⊗ denotes convolution, e1 = [1, −1], e2 = [1, −1]ᵀ, and η, τ and β are weight parameters;
minimizing the objective function to obtain the clear real-time image x;
(6) a matching result output step:
selecting the maximum component of the weighted sparse coefficient of the clear real-time image x; the position, in the reference image I, of the corresponding element of the pixel dictionary matrix D is the final matching result.
2. The joint image restoration and matching method based on linear features according to claim 1, wherein an alternating minimization method is adopted to solve for the clear real-time image x: in each iteration, u is fixed to optimize x, and then x is fixed to optimize u;
fixing u to optimize x:
x is optimized by solving the following objective function:
min_x ‖x ⊗ k − y‖² + η·‖x − D·α‖² + β·‖u − ∇x‖²
the solution to the above objective function is:
x = F⁻¹( ( F̄(k) ⊙ F(y) + η·F(D·α) + β·( F̄(e1) ⊙ F(u1) + F̄(e2) ⊙ F(u2) ) ) / ( F̄(k) ⊙ F(k) + η + β·( F̄(e1) ⊙ F(e1) + F̄(e2) ⊙ F(e2) ) ) )
then fixing x to optimize u:
u is solved in closed form by element-wise soft thresholding:
u = sign(∇x) ⊙ max( |∇x| − τ/(2β), 0 )
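Under the common L1-plus-quadratic splitting assumed here, the fixed-x u-update reduces to element-wise soft thresholding. This sketch is not taken from the patent's formula images; the function name and threshold form τ/(2β) are assumptions.

```python
import numpy as np

def update_u(grad_x, tau, beta):
    """Fixed-x u update: minimising tau*|u|_1 + beta*||u - grad_x||^2
    element-wise gives shrinkage with threshold tau/(2*beta)."""
    thr = tau / (2.0 * beta)
    return np.sign(grad_x) * np.maximum(np.abs(grad_x) - thr, 0.0)

# thr = 0.2 here: large gradients shrink by 0.2, small ones clamp to 0
u = update_u(np.array([1.0, -0.2, 0.5]), tau=0.4, beta=1.0)
```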
3. A joint image restoration and matching system based on linear features, characterized by comprising the following modules:
an input module for inputting a reference image I and a blurred real-time image y;
a pixel dictionary matrix D construction module for extracting, in a sliding manner in the reference image I, a plurality of image blocks with the same size as the blurred real-time image y, stretching the pixels of each image block into a column vector, and arranging the pixel column vectors of all the image blocks to form a pixel dictionary matrix D;
a linear feature dictionary matrix D_f construction module for constructing a mapping matrix T representing the linear features of the pixel dictionary matrix D based on the pixel dictionary matrix D, and further obtaining a linear feature dictionary matrix D_f; the linear feature dictionary matrix D_f construction is specifically implemented in one of the following two ways:
calculating a covariance matrix M of the column vectors in the pixel dictionary matrix D; solving the eigenvalues and eigenvectors of the covariance matrix M; selecting m eigenvalues such that the ratio of the sum of the m eigenvalues to the sum of all eigenvalues is larger than a preset threshold; extracting the eigenvectors corresponding to the m eigenvalues to form a mapping matrix T; and multiplying D by T to obtain the linear feature dictionary matrix D_f;
Or,
converting the elements of the pixel dictionary matrix D into two-dimensional images, denoted A_i, i = 1, 2, …, N, where N is the number of dictionary elements; according to the formula
G_t = (1/N) · Σ_{i=1}^{N} (A_i − Ā)ᵀ (A_i − Ā), where Ā = (1/N) · Σ_{i=1}^{N} A_i,
computing the overall scatter matrix G_t of the images, and calculating the eigenvalues and eigenvectors of G_t;
selecting m eigenvalues such that the ratio of the sum of the m eigenvalues to the sum of all eigenvalues is larger than a preset threshold; extracting the eigenvectors corresponding to the m eigenvalues to form a mapping matrix T1; and calculating the linear mapping feature maps F_i = A_i · T1, i = 1, 2, …, N;
computing the overall scatter matrix
G_f = (1/N) · Σ_{i=1}^{N} (F_i − F̄) (F_i − F̄)ᵀ, where F̄ = (1/N) · Σ_{i=1}^{N} F_i,
and calculating the eigenvalues and eigenvectors of G_f; selecting n eigenvalues such that the ratio of the sum of the n eigenvalues to the sum of all eigenvalues is larger than a preset threshold; extracting the eigenvectors corresponding to the n eigenvalues to form a mapping matrix T2; calculating the final linear mapping feature maps D_i = T2ᵀ · F_i, i = 1, 2, …, N; and converting each D_i into a column vector to form the linear feature dictionary matrix D_f;
a clear real-time image x initialization module for stretching the blurred real-time image y into a pixel column vector v1 in row-major order, and multiplying the vector v1 by the mapping matrix T to obtain a column vector v2; calculating the Euclidean distance w1 between the column vector v2 and each column vector of the dictionary matrix D_f; using w1 as a weighting coefficient, sparsely expressing the vector v2 over the dictionary matrix D_f to obtain a sparse coefficient α1; and multiplying the sparse coefficient α1 by the pixel dictionary matrix D to obtain the initialized clear real-time image x;
an iterative restoration and matching module for calculating a blur kernel based on the clear real-time image x and the blurred real-time image y; updating the clear real-time image x according to the blur kernel; stretching the pixels of the updated clear real-time image x into a column vector v3, and multiplying the vector v3 by the mapping matrix T to obtain a linear feature column vector v4; calculating the weighted sparse coefficient of the clear real-time image x by combining the linear feature column vector v4 and the linear feature dictionary matrix D_f; and repeating the iterative restoration and matching module until a predetermined number of iterations is reached;
the specific implementation manner of calculating the blur kernel based on the clear real-time image x and the blurred real-time image y is as follows:
blur kernel
k = F⁻¹( ( F̄(x) ⊙ F(y) ) / ( F̄(x) ⊙ F(x) + γ·I ) )
wherein F(·) is the fast Fourier transform, F⁻¹(·) is the inverse fast Fourier transform, F̄(·) is the complex conjugate of F(·), the coefficient γ takes a value of 0.003 to 0.007, I is the identity matrix, ⊙ is the element-wise (Hadamard) product, and the division is performed element-wise;
the specific implementation manner of updating the clear real-time image x according to the blur kernel is as follows:
according to the objective function:
min_x ‖x ⊗ k − y‖² + η·‖x − D·α‖² + τ·( ‖e1 ⊗ x‖₁ + ‖e2 ⊗ x‖₁ ),
an auxiliary variable u = (u1, u2) is introduced for the gradient terms, and the objective function becomes:
min_{x,u} ‖x ⊗ k − y‖² + η·‖x − D·α‖² + τ·‖u‖₁ + β·‖u − ∇x‖², where ∇x = (e1 ⊗ x, e2 ⊗ x);
where k is the previously estimated blur kernel, y is the blurred real-time image, D is the pixel dictionary matrix, α is the sparse coefficient, ⊗ denotes convolution, e1 = [1, −1], e2 = [1, −1]ᵀ, and η, τ and β are weight parameters;
minimizing the objective function to obtain the clear real-time image x;
a matching result output module for selecting the maximum component of the weighted sparse coefficient of the clear real-time image x; the position, in the reference image I, of the corresponding element of the pixel dictionary matrix D is the final matching result.
CN201910023228.5A 2019-01-10 2019-01-10 Combined image restoration and matching method and system based on linear features Expired - Fee Related CN109903233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910023228.5A CN109903233B (en) 2019-01-10 2019-01-10 Combined image restoration and matching method and system based on linear features

Publications (2)

Publication Number Publication Date
CN109903233A CN109903233A (en) 2019-06-18
CN109903233B true CN109903233B (en) 2021-08-03




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210803

Termination date: 20220110