CN113920210A - Image low-rank reconstruction method based on adaptive graph learning principal component analysis method - Google Patents

Image low-rank reconstruction method based on adaptive graph learning principal component analysis method

Info

Publication number
CN113920210A
CN113920210A (application CN202111058969.0A)
Authority
CN
China
Prior art keywords
image
reconstruction
matrix
graph
rank
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111058969.0A
Other languages
Chinese (zh)
Other versions
CN113920210B (en)
Inventor
张睿 (Zhang Rui)
张文林 (Zhang Wenlin)
李学龙 (Li Xuelong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Publication of CN113920210A publication Critical patent/CN113920210A/en
Application granted granted Critical
Publication of CN113920210B publication Critical patent/CN113920210B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/002 Image coding using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image low-rank reconstruction method based on an adaptive graph learning principal component analysis method. First, the input image data are normalized. Next, an adjacency matrix is computed from the relationships between the image sample points. An image low-rank reconstruction network model is then constructed, consisting of a graph encoder, a fully-connected decoder, and a graph decoder: the graph encoder encodes the data into a depth representation, which is fed to the two decoders simultaneously to produce the low-rank reconstruction features and the reconstructed graph, respectively. The loss function adopts an adaptive loss and the Schatten p-norm, and the loss between the reconstructed graph and the original graph is optimized so that the reconstructed graph preserves the local structure of the data. Finally, the reconstructed image is obtained by adaptively and iteratively updating the adjacency matrix and the network model. The method has good nonlinear representation capability and strong robustness, and can perform low-rank reconstruction of images even when the data are corrupted by noise.

Description

Image low-rank reconstruction method based on adaptive graph learning principal component analysis method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image low-rank reconstruction method based on an adaptive graph learning principal component analysis method.
Background
In the field of image low-rank reconstruction, principal component analysis is one of the most widely used unsupervised dimensionality reduction methods. The principal component analysis algorithm aims to find the optimal linear projection directions of the data by learning the characteristics of the original data, so that the data retain as much of the original information as possible in a low-dimensional space. Because traditional principal component analysis lacks robustness, noise contained in the data often causes the algorithm to deviate from the original solution. To improve the noise immunity of principal component analysis, Ding et al. used the l2,1 norm to improve the robustness of the algorithm in the document "C. Ding, D. Zhou, X. He, and H. Zha. R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization. In Proceedings of the 23rd International Conference on Machine Learning, 2006, pp. 281-288", and Nie et al. further considered the optimal mean of the features in the document "F. Nie, J. Yuan, and H. Huang. Optimal mean robust principal component analysis. In Proceedings of the 31st International Conference on Machine Learning, 2014". Another drawback of principal component analysis is its limited ability to mine complex features of the data: it is a linear method that maps the raw data to a low-dimensional space through an orthogonal matrix. To improve its ability to handle nonlinear data, kernel principal component analysis introduces kernel functions on top of principal component analysis; the original data are first mapped to a high-dimensional feature space through a kernel function, and a low-dimensional form of the data is then obtained by principal component analysis. In view of the above, how to improve the performance of robust representation learning methods based on principal component analysis on complex data is a highly valuable problem.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an image low-rank reconstruction method based on an adaptive graph learning principal component analysis method. First, the input image data are normalized. Next, an adjacency matrix is computed from the relationships between the image sample points. An image low-rank reconstruction network model is then constructed, consisting of a graph encoder, a fully-connected decoder, and a graph decoder: the graph encoder encodes the data into a depth representation, which is fed to the two decoders simultaneously to produce the low-rank reconstruction features and the reconstructed graph, respectively. The loss function adopts an adaptive loss and the Schatten p-norm, and the loss between the reconstructed graph and the original graph is optimized so that the reconstructed graph preserves the local structure of the data. Finally, the reconstructed image is obtained by adaptively and iteratively updating the adjacency matrix and the network model. The method has good nonlinear representation capability and strong robustness, and can perform low-rank reconstruction of images even when the data are corrupted by noise.
An image low-rank reconstruction method based on an adaptive graph learning principal component analysis method is characterized by comprising the following steps:
step 1: input the raw image dataset $X_{raw}$ and normalize all pixels to obtain the preprocessed image dataset expressed in matrix form as $X' \in \mathbb{R}^{n \times m}$, where $n$ is the total number of image sample points, $m$ is the number of features of each image, and each row vector of the matrix corresponds to one image sample point;
step 2: based on the samples in the image dataset $X'$, compute the adjacency matrix $A$, where each element is given by

$$a_{ij} = \frac{\left(d_{i,k+1} - d_{i,j}\right)_+}{k\,d_{i,k+1} - \sum_{l=1}^{k} d'_{i,l}}$$

where $a_{ij}$ denotes the element in row $i$ and column $j$ of the adjacency matrix $A$, $i, j = 1, 2, \ldots, n$; $k$ denotes the sparsity and takes a value in $\{5, 10, 15, 25\}$; $d_{i,j}$ denotes the Euclidean distance between the $i$-th and $j$-th image sample points, $d_{i,k+1}$ denotes the Euclidean distance between the $i$-th image sample point and its $(k+1)$-th nearest sample point, and $(\cdot)_+ = \max(\cdot, 0)$; $d'_{i,l}$, $l = 1, 2, \ldots, k$, denotes the Euclidean distances from the $i$-th image sample point to the other sample points, sorted in ascending order;
step 3: input the image dataset $X'$ and the adjacency matrix $A$ into the image low-rank reconstruction network model for training to obtain a trained network;
the image low-rank reconstruction network model consists of a graph encoder, a fully-connected decoder, and a graph decoder; the two decoders form a parallel structure, and the image data are sent to both decoders simultaneously after passing through the graph encoder; the graph encoder is composed of $L$ graph convolution layers and takes the image dataset $X'$ and the adjacency matrix $A$ as input, and the output of each layer is

$$H^{(l)} = \phi^{(l)}\big(\tilde{A}\, H^{(l-1)}\, W^{(l)}\big)$$

where $H^{(l)}$ denotes the output of the $l$-th layer, $l = 1, 2, \ldots, L$, with $L = 2$; $H^{(0)} = X'$; $\phi^{(l)}$ denotes the activation function of the $l$-th layer, set to the ReLU function; $W^{(l)}$ denotes the weight parameters of the $l$-th layer, updated iteratively during training; and $\tilde{A}$ is the matrix obtained by Laplacian normalization of the adjacency matrix $A$;
the fully-connected decoder is composed of $K$ fully-connected layers; the output of the graph encoder is fed into the fully-connected decoder to obtain the reconstruction features $\hat{X}$, and the output of each layer is

$$\hat{H}^{(l)} = \hat{\phi}^{(l)}\big(\hat{H}^{(l-1)}\, \hat{W}^{(l)}\big)$$

where $\hat{H}^{(l)}$ denotes the output of the $l$-th layer, $l = 1, 2, \ldots, K$, with $K = 2$ and $\hat{H}^{(0)} = H^{(L)}$; $\hat{\phi}^{(l)}$ denotes the activation function of the $l$-th layer, set to the ReLU function; and $\hat{W}^{(l)}$ denotes the weight parameters of the $l$-th layer, updated iteratively during training;
the graph decoder uses the Euclidean distances between the rows of the graph encoder output $H^{(L)}$ to reconstruct the adjacency matrix, yielding the reconstructed adjacency matrix $\hat{A}$, whose elements are computed analogously to step 2 as

$$\hat{a}_{ij} = \frac{\big(\hat{d}_{i,k+1} - \hat{d}_{i,j}\big)_+}{k\,\hat{d}_{i,k+1} - \sum_{l=1}^{k} \hat{d}'_{i,l}}$$

where $\hat{a}_{ij}$ denotes the element in row $i$ and column $j$ of the reconstructed adjacency matrix $\hat{A}$; $\hat{d}_{i,j}$ denotes the Euclidean distance between the depth representation $h_i^{(L)}$ of the $i$-th image sample point and the depth representation $h_j^{(L)}$ of the $j$-th sample point, $h_i^{(L)}$ and $h_j^{(L)}$ being the $i$-th and $j$-th rows of the graph encoder output $H^{(L)}$, $i, j = 1, 2, \ldots, n$; and $\hat{d}_{i,k+1}$ and $\hat{d}'_{i,l}$ are defined on these distances as in step 2;
the loss function of the image low-rank reconstruction network model is computed according to the following formula:

$$L_{gross} = \big\|X' - \hat{X}\big\|_\sigma + \lambda_1\, \mathrm{tr}\big(H^{(L)\top}\, \hat{L}\, H^{(L)}\big) + \lambda_2\, \big\|A - \hat{A}\big\|_F^2 + \lambda_3\, \|\hat{X}\|_{S_p}^p$$

where $L_{gross}$ denotes the network loss; $\hat{L}$ denotes the Laplacian matrix corresponding to the reconstructed adjacency matrix $\hat{A}$, computed as $\hat{L} = \hat{D} - \hat{A}$ from the degree matrix $\hat{D}$ of $\hat{A}$; $\|\hat{X}\|_{S_p}^p$ is the low-rank constraint term on the reconstruction features $\hat{X}$, where $\|\hat{X}\|_{S_p}^p = \sum_i s_i^p$ over the singular values $s_i$ of $\hat{X}$, with $p = 0.5$; $\lambda_1$ is weight coefficient one, $\lambda_2$ is weight coefficient two, and $\lambda_3$ is weight coefficient three, each taking a value in $\{10^{-6}, 10^{-5}, \ldots, 10^{3}\}$; for an $n \times m$ matrix $X$, $\|X\|_\sigma$ denotes the adaptive loss function, computed according to

$$\|X\|_\sigma = \sum_{i=1}^{n} \frac{(1 + \sigma)\, \|x^i\|_2^2}{\|x^i\|_2 + \sigma}$$

where the value range of $\sigma$ is $(0, \infty)$, $x^i$ denotes the $i$-th row of the matrix $X$, and $\|x^i\|_2$ denotes its 2-norm;
the training process updates the network parameters by optimizing the loss function of the network model with a gradient descent method;
step 4: compute a new sparsity according to $k \leftarrow k + t$, then return to step 2 and update the adjacency matrix $A$ and the image low-rank reconstruction network; after $T$ updates, take the reconstruction features $\hat{X}$ output by the fully-connected decoder and apply the inverse of the normalization to them to obtain the final low-rank reconstructed image; here $t$ denotes the sparsity update increment, taking a value in $[2, 15]$, and the number of updates is set to $T = 5$; the inverse of the normalization refers to the inverse operation of the normalization process in step 1.
The invention has the following beneficial effects. Because the adjacency matrix is computed from the pairwise relations of the image sample points, the local structure of the image data can be fully exploited, which facilitates subsequent graph feature learning by the network and the learning of the manifold structure of the image data, giving the model good nonlinear representation capability. Because an image low-rank reconstruction network with a graph encoder and two decoders is constructed, in which the data are encoded into a depth representation by the graph encoder and the reconstruction features and the reconstructed graph are then obtained simultaneously by the fully-connected decoder and the graph decoder, the local structure of the data is preserved during training; together with the adaptive update mechanism that updates the adjacency matrix through the graph reconstructed from the depth representation, this further improves the representation capability and robustness of the model, allowing it to adaptively learn the manifold structure of the depth representation. Because the adaptive loss and the Schatten p-norm are introduced into the loss function of the network model, the noise immunity of the model is improved through the adaptive loss and the rank norm is approximated through the Schatten p-norm, yielding a strongly robust loss function that gives the model good noise immunity and low-rank reconstruction capability.
Drawings
FIG. 1 is a flow chart of an image low rank reconstruction method based on an adaptive graph learning principal component analysis method according to the present invention;
FIG. 2 is an image of experimental results of image reconstruction using different methods;
In the figure: (a) original image; (b) image after noise addition; (c) image reconstructed by the PCA method; (d) image reconstructed by the LpPCA method; (e) image reconstructed by the RPCA-OM method; (f) image reconstructed by the method of the invention.
Detailed Description
The present invention will be further described with reference to the following drawings and examples, which include, but are not limited to, the following examples.
As shown in FIG. 1, the invention provides an image low-rank reconstruction method based on an adaptive graph learning principal component analysis method. The method builds a graph from the pairwise relations of the data, constructs a graph autoencoder to learn graph features, and introduces an adaptive learning mechanism to update the graph adaptively. By combining a deep graph feature learning method with principal component analysis, it can extract manifold structure features from the local structure of the data and perform low-rank reconstruction of the image data even when the dataset is corrupted by noise, giving it good noise immunity. The specific implementation process of the invention is as follows:
Step 1: input the raw image dataset $X_{raw}$ and normalize all pixels to obtain the preprocessed image dataset expressed in matrix form as $X' \in \mathbb{R}^{n \times m}$, where $n$ is the total number of image sample points, $m$ is the number of features of each image, and each row vector of the matrix corresponds to one image sample point.
Step 2: based on the samples in the image dataset $X'$, compute the adjacency matrix $A$, where each element is given by

$$a_{ij} = \frac{\left(d_{i,k+1} - d_{i,j}\right)_+}{k\,d_{i,k+1} - \sum_{l=1}^{k} d'_{i,l}}$$

where $a_{ij}$ denotes the element in row $i$ and column $j$ of the adjacency matrix $A$, $i, j = 1, 2, \ldots, n$; $k$ denotes the sparsity and takes a value in $\{5, 10, 15, 25\}$; $d_{i,j}$ denotes the Euclidean distance between the $i$-th and $j$-th image sample points, $d_{i,k+1}$ denotes the Euclidean distance between the $i$-th image sample point and its $(k+1)$-th nearest sample point, and $(\cdot)_+ = \max(\cdot, 0)$; $d'_{i,l}$, $l = 1, 2, \ldots, k$, denotes the Euclidean distances from the $i$-th image sample point to the other sample points, sorted in ascending order.
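For illustration, a minimal NumPy sketch of this construction is given below. The function name `knn_adjacency` and its handling of the self-distance and of degenerate denominators are my own assumptions, not code from the patent.

```python
import numpy as np

def knn_adjacency(X, k=10):
    """k-sparse adjacency matrix built from pairwise Euclidean distances (step 2)."""
    n = X.shape[0]
    # d[i, j]: Euclidean distance between image sample points i and j.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    A = np.zeros((n, n))
    for i in range(n):
        ds = np.sort(d[i])            # ascending; ds[0] = 0 is the self-distance
        d_k1 = ds[k + 1]              # distance to the (k+1)-th nearest other point
        denom = k * d_k1 - ds[1:k + 1].sum()
        if denom > 0:                 # guard: the k nearest distances all equal d_k1
            A[i] = np.maximum(d_k1 - d[i], 0.0) / denom
        A[i, i] = 0.0                 # assumed: no self-loop weight
    return A
```

With this construction each row of $A$ has at most $k$ nonzero entries, so the parameter $k$ directly controls the sparsity of the graph.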
Step 3: input the image dataset $X'$ and the adjacency matrix $A$ into the image low-rank reconstruction network model for training to obtain a trained network.
the image low-rank reconstruction network model consists of a graph encoder, a fully-connected decoder and a graph decoder, wherein a parallel structure is formed between the two decoders, and image data are simultaneously sent to the two decoders after passing through the graph encoder.
The graph encoder is composed of $L$ graph convolution layers and takes the image dataset $X'$ and the adjacency matrix $A$ as input; the output of each layer is

$$H^{(l)} = \phi^{(l)}\big(\tilde{A}\, H^{(l-1)}\, W^{(l)}\big)$$

where $H^{(l)}$ denotes the output of the $l$-th layer, $l = 1, 2, \ldots, L$, with $L = 2$; $H^{(0)} = X'$; $\phi^{(l)}$ denotes the activation function of the $l$-th layer, set to the ReLU function; $W^{(l)}$ denotes the weight parameters of the $l$-th layer, updated iteratively during training; and $\tilde{A}$ is the matrix obtained by Laplacian normalization of the adjacency matrix $A$.
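As a sketch of these two operations, assuming the standard GCN form $\tilde{A} = \tilde{D}^{-1/2}(A + I)\tilde{D}^{-1/2}$ for the Laplacian normalization (the patent shows the normalization only as an image, so this symmetric form is an assumption):

```python
import numpy as np

def normalize_adjacency(A):
    """Assumed Laplacian normalization: Atilde = D^{-1/2} (A + I) D^{-1/2}."""
    A_loop = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_loop.sum(axis=1))
    return A_loop * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def graph_encoder(X, A_norm, weights):
    """L graph-convolution layers: H^{(l)} = ReLU(A_norm @ H^{(l-1)} @ W^{(l)})."""
    H = X
    for W in weights:                            # weights = [W^{(1)}, ..., W^{(L)}]
        H = np.maximum(A_norm @ H @ W, 0.0)      # ReLU activation
    return H                                     # depth representation H^{(L)}
```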
The depth representation $H^{(L)}$ of the data encoded by the graph encoder is then input to both decoders simultaneously.
The fully-connected decoder is composed of $K$ fully-connected layers; the output of the graph encoder is fed into the fully-connected decoder to obtain the reconstruction features $\hat{X}$, and the output of each layer is

$$\hat{H}^{(l)} = \hat{\phi}^{(l)}\big(\hat{H}^{(l-1)}\, \hat{W}^{(l)}\big)$$

where $\hat{H}^{(l)}$ denotes the output of the $l$-th layer, $l = 1, 2, \ldots, K$, with $K = 2$ and $\hat{H}^{(0)} = H^{(L)}$; $\hat{\phi}^{(l)}$ denotes the activation function of the $l$-th layer, set to the ReLU function; and $\hat{W}^{(l)}$ denotes the weight parameters of the $l$-th layer, updated iteratively during training.
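A matching sketch of the fully-connected decoder under the same conventions (`weights` holds the $\hat{W}^{(l)}$ matrices; to map back to the input feature space, the last weight matrix has $m$ columns):

```python
import numpy as np

def fc_decoder(H_L, weights):
    """K fully-connected layers: Hhat^{(l)} = ReLU(Hhat^{(l-1)} @ What^{(l)}),
    with Hhat^{(0)} = H^{(L)}; the final output is the reconstruction features."""
    H = H_L
    for W in weights:
        H = np.maximum(H @ W, 0.0)
    return H
```

For example, `fc_decoder(graph_encoder(X_prime, normalize_adjacency(A), enc_ws), dec_ws)` returns an $n \times m$ reconstruction when the final decoder weight maps to $m$ features.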
The graph decoder uses the Euclidean distances between the rows of the graph encoder output $H^{(L)}$ to reconstruct the adjacency matrix, yielding the reconstructed adjacency matrix $\hat{A}$, whose elements are computed analogously to step 2 as

$$\hat{a}_{ij} = \frac{\big(\hat{d}_{i,k+1} - \hat{d}_{i,j}\big)_+}{k\,\hat{d}_{i,k+1} - \sum_{l=1}^{k} \hat{d}'_{i,l}}$$

where $\hat{a}_{ij}$ denotes the element in row $i$ and column $j$ of the reconstructed adjacency matrix $\hat{A}$; $\hat{d}_{i,j}$ denotes the Euclidean distance between the depth representation $h_i^{(L)}$ of the $i$-th image sample point and the depth representation $h_j^{(L)}$ of the $j$-th sample point, $h_i^{(L)}$ and $h_j^{(L)}$ being the $i$-th and $j$-th rows of the graph encoder output $H^{(L)}$, $i, j = 1, 2, \ldots, n$; and $\hat{d}_{i,k+1}$ and $\hat{d}'_{i,l}$ are defined on these distances as in step 2.
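Under this reading, the graph decoder simply reapplies the step-2 construction to the rows of $H^{(L)}$; a one-line sketch, reusing `knn_adjacency` from the step-2 example (again an assumption, since the patent's exact formula appears only as an image):

```python
def graph_decoder(H_L, k=10):
    """Reconstructed adjacency: hat{a}_{ij} from Euclidean distances between
    depth representations h_i^{(L)} and h_j^{(L)} (rows of H^{(L)})."""
    return knn_adjacency(H_L, k)
```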
The loss function of the image low-rank reconstruction network model is computed according to the following formula:

$$L_{gross} = \big\|X' - \hat{X}\big\|_\sigma + \lambda_1\, \mathrm{tr}\big(H^{(L)\top}\, \hat{L}\, H^{(L)}\big) + \lambda_2\, \big\|A - \hat{A}\big\|_F^2 + \lambda_3\, \|\hat{X}\|_{S_p}^p$$

where $L_{gross}$ denotes the network loss; $\hat{L}$ denotes the Laplacian matrix corresponding to the reconstructed adjacency matrix $\hat{A}$, computed as $\hat{L} = \hat{D} - \hat{A}$ from the degree matrix $\hat{D}$ of $\hat{A}$; $\|\hat{X}\|_{S_p}^p$ is the low-rank constraint term on the reconstruction features $\hat{X}$, where $\|\hat{X}\|_{S_p}^p = \sum_i s_i^p$ over the singular values $s_i$ of $\hat{X}$, with $p = 0.5$ so that the Schatten p-norm more closely approximates the rank norm; $\lambda_1$ is weight coefficient one, $\lambda_2$ is weight coefficient two, and $\lambda_3$ is weight coefficient three, each taking a value in $\{10^{-6}, 10^{-5}, \ldots, 10^{3}\}$; for an $n \times m$ matrix $X$, $\|X\|_\sigma$ denotes the adaptive loss function, computed according to

$$\|X\|_\sigma = \sum_{i=1}^{n} \frac{(1 + \sigma)\, \|x^i\|_2^2}{\|x^i\|_2 + \sigma}$$

where the value range of $\sigma$ is $(0, \infty)$, $x^i$ denotes the $i$-th row of the matrix $X$, and $\|x^i\|_2$ denotes its 2-norm. The adaptive loss fuses the advantages of the Frobenius norm and the $l_{2,1}$ norm, with the robustness controlled by the parameter $\sigma$.
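The two robust terms can be sketched in NumPy directly from the definitions above (illustrative only):

```python
import numpy as np

def adaptive_loss(X, sigma=1.0):
    """||X||_sigma = sum_i (1 + sigma) * ||x^i||_2^2 / (||x^i||_2 + sigma).
    Small sigma behaves like the l_{2,1} norm, large sigma like the squared
    Frobenius norm (up to scale), so sigma trades robustness for smoothness."""
    r = np.linalg.norm(X, axis=1)                # row-wise 2-norms ||x^i||_2
    return float(np.sum((1.0 + sigma) * r**2 / (r + sigma)))

def schatten_p(X, p=0.5):
    """||X||_{S_p}^p = sum_i s_i^p over the singular values s_i of X; p = 0.5
    approximates the rank more tightly than the nuclear norm (p = 1)."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s**p))
```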
This loss function improves the noise immunity of the model through the adaptive loss and approximates the rank norm through the Schatten p-norm, and is therefore a loss function with strong robustness.
The training process updates the network parameters by optimizing the loss function of the network model with a gradient descent method.
Step 4: compute a new sparsity according to $k \leftarrow k + t$, then return to step 2 and update the adjacency matrix $A$ and the image low-rank reconstruction network; after $T$ updates, take the reconstruction features $\hat{X}$ output by the fully-connected decoder and apply the inverse of the normalization to them to obtain the final low-rank reconstructed image. Here $t$ denotes the sparsity update increment, taking a value in $[2, 15]$, and the number of updates is set to $T = 5$; the inverse of the normalization refers to the inverse operation of the normalization process in step 1.
In summary, the technical route of the algorithm is summarized in the following table:
TABLE 1
Input: raw image dataset $X_{raw}$; sparsity $k$; sparsity increment $t$; number of updates $T = 5$.
1. Normalize $X_{raw}$ to obtain $X'$ (step 1).
2. Compute the adjacency matrix $A$ from $X'$ (step 2).
3. Train the image low-rank reconstruction network on $X'$ and $A$ by gradient descent (step 3).
4. Update the sparsity $k \leftarrow k + t$, update $A$ and the network; repeat $T$ times (step 4).
Output: the reconstruction features $\hat{X}$ of the fully-connected decoder after inverse normalization, i.e. the final low-rank reconstructed image.
To verify the effect of the method of the invention, simulation experiments were carried out with Python software on a machine with an Intel Core i7-10700F 2.90 GHz CPU and 16 GB of memory, running the Windows 10 operating system.
The Yale face image dataset used in the experiments, from the document "A.S. Georghiades, P.N. Belhumeur, and D.J. Kriegman. From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643-660, 2001", contains 165 samples, each with 1024 features, divided into 15 categories.
To demonstrate the robustness of the method, 20% of the image samples were selected in the experiments and 20% of the features in those samples were set to random values between 0 and 1; the noisy data were used as the input of the algorithm, and experiments were run with the data dimensionality set to 10, 30, and 50. For comparison, the existing PCA, LpPCA, and RPCA-OM algorithms were used as baselines, and indices such as reconstruction loss and clustering accuracy were computed for quantitative comparison. The reconstruction loss is the sum of squared differences between the pixels of the reconstructed image and the original image; the larger the value, the larger the error between the reconstructed image and the original. The clustering accuracy is obtained by spectral clustering of the depth representations of the images followed by maximum matching with the image categories; the larger the value, the less information the depth representations lose, which benefits low-rank reconstruction of the image.
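For concreteness, the two indices can be sketched as follows; using SciPy's Hungarian solver for the maximum-matching step is my choice, not specified in the patent:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def reconstruction_loss(X_orig, X_rec):
    """Sum of squared pixel differences between original and reconstruction."""
    return float(np.sum((X_orig - X_rec) ** 2))

def clustering_accuracy(y_true, y_pred):
    """Accuracy under the best one-to-one matching of cluster labels to classes."""
    c = max(int(y_true.max()), int(y_pred.max())) + 1
    count = np.zeros((c, c), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        count[t, p] += 1                          # co-occurrence of class t, cluster p
    rows, cols = linear_sum_assignment(-count)    # maximize matched samples
    return count[rows, cols].sum() / len(y_true)
```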
The results are shown in Table 2. As can be seen, both the reconstruction loss and the clustering accuracy of the depth representations obtained by the method of the invention are superior to those of the other compared methods. Example result images obtained by image reconstruction with the different methods are shown in FIG. 2; clearly, the method of the invention still obtains a better reconstructed image under noise interference.
TABLE 2 (reconstruction loss and clustering accuracy of PCA, LpPCA, RPCA-OM, and the method of the invention at data dimensionalities 10, 30, and 50)

Claims (1)

1. An image low-rank reconstruction method based on an adaptive graph learning principal component analysis method is characterized by comprising the following steps:
step 1: input the raw image dataset $X_{raw}$ and normalize all pixels to obtain the preprocessed image dataset expressed in matrix form as $X' \in \mathbb{R}^{n \times m}$, where $n$ is the total number of image sample points, $m$ is the number of features of each image, and each row vector of the matrix corresponds to one image sample point;
step 2: based on the samples in the image dataset $X'$, compute the adjacency matrix $A$, where each element is given by

$$a_{ij} = \frac{\left(d_{i,k+1} - d_{i,j}\right)_+}{k\,d_{i,k+1} - \sum_{l=1}^{k} d'_{i,l}}$$

where $a_{ij}$ denotes the element in row $i$ and column $j$ of the adjacency matrix $A$, $i, j = 1, 2, \ldots, n$; $k$ denotes the sparsity and takes a value in $\{5, 10, 15, 25\}$; $d_{i,j}$ denotes the Euclidean distance between the $i$-th and $j$-th image sample points, $d_{i,k+1}$ denotes the Euclidean distance between the $i$-th image sample point and its $(k+1)$-th nearest sample point, and $(\cdot)_+ = \max(\cdot, 0)$; $d'_{i,l}$, $l = 1, 2, \ldots, k$, denotes the Euclidean distances from the $i$-th image sample point to the other sample points, sorted in ascending order;
step 3: input the image dataset $X'$ and the adjacency matrix $A$ into the image low-rank reconstruction network model for training to obtain a trained network;
the image low-rank reconstruction network model consists of a graph encoder, a fully-connected decoder, and a graph decoder; the two decoders form a parallel structure, and the image data are sent to both decoders simultaneously after passing through the graph encoder; the graph encoder is composed of $L$ graph convolution layers and takes the image dataset $X'$ and the adjacency matrix $A$ as input, and the output of each layer is

$$H^{(l)} = \phi^{(l)}\big(\tilde{A}\, H^{(l-1)}\, W^{(l)}\big)$$

where $H^{(l)}$ denotes the output of the $l$-th layer, $l = 1, 2, \ldots, L$, with $L = 2$; $H^{(0)} = X'$; $\phi^{(l)}$ denotes the activation function of the $l$-th layer, set to the ReLU function; $W^{(l)}$ denotes the weight parameters of the $l$-th layer, updated iteratively during training; and $\tilde{A}$ is the matrix obtained by Laplacian normalization of the adjacency matrix $A$;
the fully-connected decoder is composed of $K$ fully-connected layers; the output of the graph encoder is fed into the fully-connected decoder to obtain the reconstruction features $\hat{X}$, and the output of each layer is

$$\hat{H}^{(l)} = \hat{\phi}^{(l)}\big(\hat{H}^{(l-1)}\, \hat{W}^{(l)}\big)$$

where $\hat{H}^{(l)}$ denotes the output of the $l$-th layer, $l = 1, 2, \ldots, K$, with $K = 2$ and $\hat{H}^{(0)} = H^{(L)}$; $\hat{\phi}^{(l)}$ denotes the activation function of the $l$-th layer, set to the ReLU function; and $\hat{W}^{(l)}$ denotes the weight parameters of the $l$-th layer, updated iteratively during training;
the graph decoder uses the Euclidean distances between the rows of the graph encoder output $H^{(L)}$ to reconstruct the adjacency matrix, yielding the reconstructed adjacency matrix $\hat{A}$, whose elements are computed analogously to step 2 as

$$\hat{a}_{ij} = \frac{\big(\hat{d}_{i,k+1} - \hat{d}_{i,j}\big)_+}{k\,\hat{d}_{i,k+1} - \sum_{l=1}^{k} \hat{d}'_{i,l}}$$

where $\hat{a}_{ij}$ denotes the element in row $i$ and column $j$ of the reconstructed adjacency matrix $\hat{A}$; $\hat{d}_{i,j}$ denotes the Euclidean distance between the depth representation $h_i^{(L)}$ of the $i$-th image sample point and the depth representation $h_j^{(L)}$ of the $j$-th sample point, $h_i^{(L)}$ and $h_j^{(L)}$ being the $i$-th and $j$-th rows of the graph encoder output $H^{(L)}$, $i, j = 1, 2, \ldots, n$; and $\hat{d}_{i,k+1}$ and $\hat{d}'_{i,l}$ are defined on these distances as in step 2;
the loss function of the image low-rank reconstruction network model is computed according to the following formula:

$$L_{gross} = \big\|X' - \hat{X}\big\|_\sigma + \lambda_1\, \mathrm{tr}\big(H^{(L)\top}\, \hat{L}\, H^{(L)}\big) + \lambda_2\, \big\|A - \hat{A}\big\|_F^2 + \lambda_3\, \|\hat{X}\|_{S_p}^p$$

where $L_{gross}$ denotes the network loss; $\hat{L}$ denotes the Laplacian matrix corresponding to the reconstructed adjacency matrix $\hat{A}$, computed as $\hat{L} = \hat{D} - \hat{A}$ from the degree matrix $\hat{D}$ of $\hat{A}$; $\|\hat{X}\|_{S_p}^p$ is the low-rank constraint term on the reconstruction features $\hat{X}$, where $\|\hat{X}\|_{S_p}^p = \sum_i s_i^p$ over the singular values $s_i$ of $\hat{X}$, with $p = 0.5$; $\lambda_1$ is weight coefficient one, $\lambda_2$ is weight coefficient two, and $\lambda_3$ is weight coefficient three, each taking a value in $\{10^{-6}, 10^{-5}, \ldots, 10^{3}\}$; for an $n \times m$ matrix $X$, $\|X\|_\sigma$ denotes the adaptive loss function, computed according to

$$\|X\|_\sigma = \sum_{i=1}^{n} \frac{(1 + \sigma)\, \|x^i\|_2^2}{\|x^i\|_2 + \sigma}$$

where the value range of $\sigma$ is $(0, \infty)$, $x^i$ denotes the $i$-th row of the matrix $X$, and $\|x^i\|_2$ denotes its 2-norm;
the training process updates the network parameters by optimizing the loss function of the network model with a gradient descent method;
step 4: compute a new sparsity according to $k \leftarrow k + t$, then return to step 2 and update the adjacency matrix $A$ and the image low-rank reconstruction network; after $T$ updates, take the reconstruction features $\hat{X}$ output by the fully-connected decoder and apply the inverse of the normalization to them to obtain the final low-rank reconstructed image; here $t$ denotes the sparsity update increment, taking a value in $[2, 15]$, and the number of updates is set to $T = 5$; the inverse of the normalization refers to the inverse operation of the normalization process in step 1.
CN202111058969.0A 2021-06-21 2021-09-09 Image low-rank reconstruction method based on adaptive graph learning principal component analysis method Active CN113920210B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110686899 2021-06-21
CN2021106868997 2021-06-21

Publications (2)

Publication Number Publication Date
CN113920210A (en) 2022-01-11
CN113920210B (en) 2024-03-08

Family

ID=79234305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111058969.0A Active CN113920210B (en) 2021-06-21 2021-09-09 Image low-rank reconstruction method based on adaptive graph learning principal component analysis method

Country Status (1)

Country Link
CN (1) CN113920210B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375600A (en) * 2022-10-20 2022-11-22 福建亿榕信息技术有限公司 Reconstructed image quality weighing method and system based on self-encoder
CN116704537A (en) * 2022-12-02 2023-09-05 大连理工大学 Lightweight pharmacopoeia picture and text extraction method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109754018A (en) * 2019-01-09 2019-05-14 北京工业大学 A kind of image-recognizing method of the low-rank locality preserving projections based on F norm
WO2019100723A1 (en) * 2017-11-24 2019-05-31 华为技术有限公司 Method and device for training multi-label classification model
CN110443169A (en) * 2019-07-24 2019-11-12 广东工业大学 A kind of face identification method of edge reserve judgement analysis

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019100723A1 (en) * 2017-11-24 2019-05-31 华为技术有限公司 Method and device for training multi-label classification model
CN109754018A (en) * 2019-01-09 2019-05-14 北京工业大学 A kind of image-recognizing method of the low-rank locality preserving projections based on F norm
CN110443169A (en) * 2019-07-24 2019-11-12 广东工业大学 A kind of face identification method of edge reserve judgement analysis

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Ao (李骜); LIU Xin (刘鑫); CHEN Deyun (陈德运); ZHANG Yingtao (张英涛); SUN Guanglu (孙广路): "Robust discriminative feature subspace learning model based on low-rank representation", Journal of Electronics & Information Technology (电子与信息学报), no. 05, 31 May 2020 (2020-05-31), pages 1223-1230 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375600A (en) * 2022-10-20 2022-11-22 福建亿榕信息技术有限公司 Reconstructed image quality weighing method and system based on self-encoder
CN116704537A (en) * 2022-12-02 2023-09-05 大连理工大学 Lightweight pharmacopoeia picture and text extraction method
CN116704537B (en) * 2022-12-02 2023-11-03 大连理工大学 Lightweight pharmacopoeia picture and text extraction method

Also Published As

Publication number Publication date
CN113920210B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN107203787B (en) Unsupervised regularization matrix decomposition feature selection method
CN107578007A (en) A kind of deep learning face identification method based on multi-feature fusion
CN112765352A (en) Graph convolution neural network text classification method based on self-attention mechanism
CN109447098B (en) Image clustering algorithm based on deep semantic embedding
CN110717519B (en) Training, feature extraction and classification method, device and storage medium
CN113920210A (en) Image low-rank reconstruction method based on adaptive graph learning principal component analysis method
CN108960422B (en) Width learning method based on principal component analysis
CN111476272B (en) Dimension reduction method based on structural constraint symmetric low-rank retention projection
CN109190511B (en) Hyperspectral classification method based on local and structural constraint low-rank representation
CN111191719A (en) Image clustering method based on self-expression and atlas constraint non-negative matrix factorization
Yin Nonlinear dimensionality reduction and data visualization: a review
CN113157957A (en) Attribute graph document clustering method based on graph convolution neural network
CN106067165B (en) High spectrum image denoising method based on clustering sparse random field
CN110348287A (en) A kind of unsupervised feature selection approach and device based on dictionary and sample similar diagram
CN115640842A (en) Network representation learning method based on graph attention self-encoder
CN117196963A (en) Point cloud denoising method based on noise reduction self-encoder
CN113962262A (en) Radar signal intelligent sorting method based on continuous learning
CN109815440A (en) The Dimensionality Reduction method of the optimization of joint figure and projection study
CN110264482B (en) Active contour segmentation method based on transformation matrix factorization of noose set
CN110852304B (en) Hyperspectral data processing method based on deep learning method
CN109063766B (en) Image classification method based on discriminant prediction sparse decomposition model
Chen et al. Capped $ l_1 $-norm sparse representation method for graph clustering
CN110781972A (en) Increment unsupervised multi-mode related feature learning model
CN113313153B (en) Low-rank NMF image clustering method and system based on self-adaptive graph regularization
CN115169436A (en) Data dimension reduction method based on fuzzy local discriminant analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant