CN113920210A - Image low-rank reconstruction method based on adaptive graph learning principal component analysis method - Google Patents
Image low-rank reconstruction method based on adaptive graph learning principal component analysis method
- Publication number
- CN113920210A CN113920210A CN202111058969.0A CN202111058969A CN113920210A CN 113920210 A CN113920210 A CN 113920210A CN 202111058969 A CN202111058969 A CN 202111058969A CN 113920210 A CN113920210 A CN 113920210A
- Authority
- CN
- China
- Prior art keywords
- image
- reconstruction
- matrix
- graph
- rank
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 35
- 230000003044 adaptive effect Effects 0.000 title claims abstract description 20
- 238000012847 principal component analysis method Methods 0.000 title claims abstract description 12
- 239000011159 matrix material Substances 0.000 claims abstract description 58
- 230000006870 function Effects 0.000 claims abstract description 30
- 238000012512 characterization method Methods 0.000 claims abstract description 14
- 238000010606 normalization Methods 0.000 claims abstract description 12
- 230000008569 process Effects 0.000 claims description 8
- 238000004364 calculation method Methods 0.000 claims description 7
- 230000004913 activation Effects 0.000 claims description 6
- 238000011478 gradient descent method Methods 0.000 claims description 3
- 238000007781 pre-processing Methods 0.000 abstract description 2
- 238000004422 calculation algorithm Methods 0.000 description 10
- 238000000513 principal component analysis Methods 0.000 description 9
- 238000004458 analytical method Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 238000002474 experimental method Methods 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 2
- 230000036039 immunity Effects 0.000 description 2
- 230000007246 mechanism Effects 0.000 description 2
- 230000007547 defect Effects 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 238000005065 mining Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 230000003595 spectral effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/002—Image coding using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention provides an image low-rank reconstruction method based on an adaptive graph learning principal component analysis method. First, normalization preprocessing is carried out on the input image data. Then, an adjacency matrix is calculated from the relationships between the image sample points. Next, an image low-rank reconstruction network model is constructed, comprising a graph encoder, a fully-connected decoder, and a graph decoder: the graph encoder encodes the data to obtain a depth representation, and the depth representation is then fed simultaneously to the two decoders to obtain the low-rank reconstruction features and the reconstructed graph, respectively; Adaptive Loss and the Schatten p norm are adopted in the loss function, and the loss between the reconstructed graph and the original graph is optimized so that the reconstructed graph preserves the local structure information of the data. Finally, the reconstructed image is obtained through adaptive iterative updating of the adjacency matrix and the network model. The method has good nonlinear characterization capability and strong robustness, and can perform low-rank reconstruction of an image even when the data are disturbed by noise.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image low-rank reconstruction method based on an adaptive graph learning principal component analysis method.
Background
In the field of image low-rank reconstruction, principal component analysis (PCA) is one of the most widely used unsupervised dimensionality reduction methods. The PCA algorithm aims to find the optimal linear projection directions of the data by learning the characteristics of the original data, so that the data retain as much of the original information as possible in a low-dimensional space. Because traditional PCA lacks robustness, noise contained in the data often causes the algorithm to deviate from the desired solution. To improve the noise resistance of PCA, Ding et al. used the L1 norm to improve the robustness of the PCA algorithm in "C. Ding, D. Zhou, X. He, and H. Zha. R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization. In Proceedings of the 23rd International Conference on Machine Learning, 2006, pp. 281-288", and Nie et al. further considered the optimal mean of the features in "F. Nie, J. Yuan, and H. Huang. Optimal mean robust principal component analysis. In Proceedings of the 31st International Conference on Machine Learning, 2014". Another drawback of PCA algorithms is their limited ability to mine complex features of the data: PCA is a linear method that maps the raw data to a low-dimensional space through an orthogonal matrix. To improve the ability of PCA to handle nonlinear data, kernel PCA introduces kernel functions on the basis of PCA; the original data are mapped to a high-dimensional feature space through the kernel functions, and a low-dimensional representation of the data is then obtained by PCA.
Based on the above discussion, how to improve the performance of robust, PCA-based characterization learning methods on complex data is a very valuable problem.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an image low-rank reconstruction method based on an adaptive graph learning principal component analysis method. First, normalization preprocessing is carried out on the input image data. Then, an adjacency matrix is calculated from the relationships between the image sample points. Next, an image low-rank reconstruction network model is constructed, comprising a graph encoder, a fully-connected decoder, and a graph decoder: the graph encoder encodes the data to obtain a depth representation, and the depth representation is then fed simultaneously to the two decoders to obtain the low-rank reconstruction features and the reconstructed graph, respectively; Adaptive Loss and the Schatten p norm are adopted in the loss function, and the loss between the reconstructed graph and the original graph is optimized so that the reconstructed graph preserves the local structure information of the data. Finally, the reconstructed image is obtained through adaptive iterative updating of the adjacency matrix and the network model. The method has good nonlinear characterization capability and strong robustness, and can perform low-rank reconstruction of an image even when the data are disturbed by noise.
An image low-rank reconstruction method based on an adaptive graph learning principal component analysis method is characterized by comprising the following steps:
step 1: input raw image dataset XrawNormalizing all pixels to obtain a preprocessed image data set X' expressed in a matrix formWherein n is the total number of image sample points, m is the number of features of each image, and each row vector of the matrix corresponds to one image sample point;
Step 2: based on the samples in the image dataset X', calculate the adjacency matrix A, where each element is computed as

a_ij = (d_{i,k+1} − d_ij)_+ / (k·d_{i,k+1} − Σ_{l=1}^{k} d'_{i,l}),

where a_ij denotes the element in row i and column j of the adjacency matrix A, i, j = 1, 2, …, n; k denotes the sparsity, taking a value in {5, 10, 15, 25}; d_ij denotes the Euclidean distance between the i-th and j-th image sample points; d_{i,k+1} denotes the Euclidean distance from the i-th image sample point to its (k+1)-th nearest sample point; (·)_+ = max(·, 0); and d'_{i,l}, l = 1, 2, …, k, denotes the l-th value when the Euclidean distances from the i-th image sample point to the other sample points are sorted in ascending order;
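The adaptive-neighbor assignment of step 2 can be sketched as follows. The closed form implemented here is reconstructed from the quantities named in the text (d_{i,k+1}, the operator (·)_+, and the k sorted distances d'_{i,l}) and should be read as an assumption; a useful property of this rule is that each row of the resulting adjacency matrix sums to 1:

```python
import numpy as np

def knn_adjacency(X, k):
    """Adaptive-neighbor adjacency matrix (step 2).

    Implements a_ij = (d_{i,k+1} - d_ij)_+ / (k*d_{i,k+1} - sum_l d'_{i,l}),
    where d'_{i,1..k} are the k smallest Euclidean distances from sample i to
    the other samples. Rows of X are image sample points; requires n >= k + 2.
    """
    n = X.shape[0]
    # Pairwise Euclidean distance matrix.
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=2))
    A = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(D[i])            # order[0] is i itself (distance 0)
        d_sorted = D[i, order]
        d_k1 = d_sorted[k + 1]              # distance to the (k+1)-th neighbor
        denom = k * d_k1 - d_sorted[1:k + 1].sum()
        if denom > 0:
            weights = np.maximum(d_k1 - d_sorted[1:k + 2], 0.0) / denom
            A[i, order[1:k + 2]] = weights  # only the k nearest get weight > 0
    return A
```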
Step 3: input the image dataset X' and the adjacency matrix A into the image low-rank reconstruction network model for training to obtain the trained network;
The image low-rank reconstruction network model consists of a graph encoder, a fully-connected decoder, and a graph decoder; the two decoders form a parallel structure, and the image data are sent to both decoders simultaneously after passing through the graph encoder. The graph encoder consists of L graph convolution layers; it takes the image dataset X' and the adjacency matrix A as input, and the output of each layer is

H^{(l)} = φ^{(l)}(Â H^{(l−1)} W^{(l)}),

where H^{(l)} denotes the output of the l-th layer, l = 1, 2, …, L, with L = 2; H^{(0)} = X'; φ^{(l)} denotes the activation function of the l-th layer, set to the ReLU function; W^{(l)} denotes the weight parameters of the l-th layer, iteratively updated through training; and Â denotes the matrix obtained by Laplacian normalization of the adjacency matrix A;
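A minimal sketch of the graph encoder follows; the standard GCN form Â = D̃^{-1/2}(A + I)D̃^{-1/2} is assumed for the "Laplacian normalization" of A, and the function names are illustrative:

```python
import numpy as np

def normalized_adjacency(A):
    """A_hat = D~^{-1/2} (A + I) D~^{-1/2}: symmetric normalization with
    self-loops, the standard GCN form assumed here for the Laplacian
    normalization of the adjacency matrix A."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_encoder(X, A, weights):
    """Stack of graph convolution layers:
    H^{(l)} = ReLU(A_hat @ H^{(l-1)} @ W^{(l)}), with H^{(0)} = X."""
    A_hat = normalized_adjacency(A)
    H = X
    for W in weights:
        H = np.maximum(A_hat @ H @ W, 0.0)  # ReLU activation per layer
    return H
```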
The fully-connected decoder consists of K fully-connected layers; the output of the graph encoder is input to the fully-connected decoder to obtain the reconstruction features. The output of the l-th fully-connected layer, l = 1, 2, …, K, with K = 2, is obtained by applying that layer's weight parameters, which are iteratively updated through training, followed by an activation function set to the ReLU function;
The graph decoder utilizes the output H^{(L)} of the graph encoder to reconstruct the adjacency matrix from Euclidean distances, obtaining the reconstructed adjacency matrix, in which the element in row i and column j (i, j = 1, 2, …, n) is computed from the Euclidean distance between the depth representation of the i-th image sample point and that of the j-th image sample point, i.e., between the i-th and j-th rows of the graph encoder output H^{(L)};
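The distance computation used by the graph decoder can be sketched as follows; only the pairwise Euclidean distances between rows of H^{(L)} are shown, since the exact mapping from distances to reconstructed edge weights is given by a formula not reproduced in this text:

```python
import numpy as np

def pairwise_distances(H):
    """Euclidean distances d_ij = ||h_i - h_j||_2 between the rows of the
    depth representation H^{(L)}, as used by the graph decoder."""
    sq = (H ** 2).sum(axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * (H @ H.T)
    return np.sqrt(np.maximum(D2, 0.0))  # clip tiny negatives from rounding
```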
The loss function of the image low-rank reconstruction network model is calculated according to the following formula:

where L_gross denotes the network loss; the Laplacian matrix corresponding to the reconstructed adjacency matrix is computed from the reconstructed adjacency matrix and its degree matrix; the low-rank constraint term on the reconstruction features is the Schatten p norm raised to the power p, with p = 0.5; λ_1, λ_2, and λ_3 are weight coefficients, each taking a value in the set {10^{-6}, 10^{-5}, …, 10^{3}}; for an n × m matrix X, ||X||_σ denotes the adaptive loss function, where σ takes a value in (0, ∞), x^i denotes the i-th row of the matrix X, and ||x^i||_2 denotes the 2-norm of x^i;
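The two penalty terms named in the loss can be sketched as follows; the closed form of the adaptive loss (a σ-controlled interpolation between the squared Frobenius norm and the l_{2,1} norm) is assumed from the description, as is the Schatten p quasi-norm:

```python
import numpy as np

def sigma_norm(X, sigma):
    """Adaptive loss ||X||_sigma = sum_i (1+sigma)*||x^i||^2 / (||x^i|| + sigma).

    Assumed closed form: it recovers the squared Frobenius norm as
    sigma -> infinity and the l_{2,1} norm as sigma -> 0, with robustness
    controlled by sigma as the text describes.
    """
    row_norms = np.linalg.norm(X, axis=1)
    return float(((1.0 + sigma) * row_norms ** 2 / (row_norms + sigma)).sum())

def schatten_p(X, p=0.5):
    """Schatten p quasi-norm raised to the power p: sum_i s_i(X)^p over the
    singular values s_i; for p < 1 this approximates rank(X) more tightly
    than the nuclear norm (p = 1)."""
    s = np.linalg.svd(X, compute_uv=False)
    return float((s ** p).sum())
```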
The training process updates the network parameters by optimizing the loss function of the network model using a gradient descent method;
Step 4: compute a new sparsity k according to k ← k + t, then return to step 2 and update the adjacency matrix A and the image low-rank reconstruction network; after T such updates, take the reconstruction features output by the fully-connected decoder and apply the normalization inverse operation to them to obtain the final low-rank reconstructed image; here t denotes the sparsity update increment, taking a value in [2, 15], and the number of updates T is set to 5; the normalization inverse operation refers to the inverse of the normalization process in step 1.
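The sparsity schedule of step 4 can be sketched as a small helper (the function name is illustrative):

```python
def sparsity_schedule(k0, t, T=5):
    """Yield the sparsity used at each adaptive update of step 4:
    start from k0, then apply k <- k + t, for T updates in total."""
    k = k0
    for _ in range(T):
        yield k
        k += t
```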
The invention has the following beneficial effects. Because the adjacency matrix is calculated from the pairwise relations of the image sample points, the local structure information of the image data can be fully utilized; this facilitates the subsequent graph feature learning by the network and the learning of the manifold structure of the image data, giving the model good nonlinear characterization capability. Because an image low-rank reconstruction network model with a graph encoder and two decoders is constructed, the data are mapped to a depth representation by the graph encoder, and the reconstruction features and the reconstructed graph are then obtained simultaneously through the fully-connected decoder and the graph decoder; the local structure information of the data is thus preserved during training, and the adaptive update mechanism, which updates the adjacency matrix through the graph reconstructed from the depth representation, further improves the characterization capability and robustness of the model, allowing it to adaptively learn the manifold structure of the depth representation. Because Adaptive Loss and the Schatten p norm are introduced into the loss function of the network model, the noise resistance of the model is improved through Adaptive Loss and the rank norm is approximated through the Schatten p norm, yielding a strongly robust loss function, so that the model has good noise resistance and low-rank reconstruction capability.
Drawings
FIG. 1 is a flow chart of an image low rank reconstruction method based on an adaptive graph learning principal component analysis method according to the present invention;
FIG. 2 is an image of experimental results of image reconstruction using different methods;
in the figure, (a) -an original image, (b) -an image after noise addition, (c) -an image after reconstruction by a PCA method, (d) -an image after reconstruction by a LpPCA method, (e) -an image after reconstruction by an RPCA-OM method, and (f) -an image after reconstruction by the method of the invention.
Detailed Description
The present invention will be further described with reference to the following drawings and embodiments; the invention includes, but is not limited to, the following embodiments.
As shown in FIG. 1, the invention provides an image low-rank reconstruction method based on an adaptive graph learning principal component analysis method. The method builds a graph from the pairwise information of the data, constructs a graph autoencoder to learn graph features, and introduces an adaptive learning mechanism to realize adaptive updating of the graph. By combining a deep graph feature learning method with a principal component analysis method, manifold structure features can be extracted from the local structure information of the data, and low-rank reconstruction of the image data can be performed even when the dataset is disturbed by noise, giving the method good noise resistance. The specific implementation process of the invention is as follows:
step 1: input raw image dataset XrawNormalizing all pixels to obtain a preprocessed image data set X' expressed in a matrix formWherein n is the total number of image sample points, m is the number of features of each image, and each row vector of the matrix corresponds to one image sample point.
Step 2: based on the samples in the image dataset X', calculate the adjacency matrix A, where each element is computed as

a_ij = (d_{i,k+1} − d_ij)_+ / (k·d_{i,k+1} − Σ_{l=1}^{k} d'_{i,l}),

where a_ij denotes the element in row i and column j of the adjacency matrix A, i, j = 1, 2, …, n; k denotes the sparsity, taking a value in {5, 10, 15, 25}; d_ij denotes the Euclidean distance between the i-th and j-th image sample points; d_{i,k+1} denotes the Euclidean distance from the i-th image sample point to its (k+1)-th nearest sample point; (·)_+ = max(·, 0); and d'_{i,l}, l = 1, 2, …, k, denotes the l-th value when the Euclidean distances from the i-th image sample point to the other sample points are sorted in ascending order;
Step 3: input the image dataset X' and the adjacency matrix A into the image low-rank reconstruction network model for training to obtain the trained network;
The image low-rank reconstruction network model consists of a graph encoder, a fully-connected decoder, and a graph decoder; the two decoders form a parallel structure, and the image data are sent to both decoders simultaneously after passing through the graph encoder.
The graph encoder consists of L graph convolution layers; it takes the image dataset X' and the adjacency matrix A as input, and the output of each layer is

H^{(l)} = φ^{(l)}(Â H^{(l−1)} W^{(l)}),

where H^{(l)} denotes the output of the l-th layer, l = 1, 2, …, L, with L = 2; H^{(0)} = X'; φ^{(l)} denotes the activation function of the l-th layer, set to the ReLU function; W^{(l)} denotes the weight parameters of the l-th layer, iteratively updated through training; and Â denotes the matrix obtained by Laplacian normalization of the adjacency matrix A.
The depth representation H^{(L)} of the data obtained by the graph encoder is then input simultaneously to both decoders.
The fully-connected decoder consists of K fully-connected layers; the output of the graph encoder is input to the fully-connected decoder to obtain the reconstruction features. The output of the l-th fully-connected layer, l = 1, 2, …, K, with K = 2, is obtained by applying that layer's weight parameters, which are iteratively updated through training, followed by an activation function set to the ReLU function.
The graph decoder utilizes the output H^{(L)} of the graph encoder to reconstruct the adjacency matrix from Euclidean distances, obtaining the reconstructed adjacency matrix, in which the element in row i and column j (i, j = 1, 2, …, n) is computed from the Euclidean distance between the depth representation of the i-th image sample point and that of the j-th image sample point, i.e., between the i-th and j-th rows of the graph encoder output H^{(L)}.
The loss function of the image low-rank reconstruction network model is calculated according to the following formula:

where L_gross denotes the network loss; the Laplacian matrix corresponding to the reconstructed adjacency matrix is computed from the reconstructed adjacency matrix and its degree matrix; the low-rank constraint term on the reconstruction features is the Schatten p norm raised to the power p, with p = 0.5, chosen because minimizing the Schatten p norm more closely approximates the rank norm; λ_1, λ_2, and λ_3 are weight coefficients, each taking a value in the set {10^{-6}, 10^{-5}, …, 10^{3}}; for an n × m matrix X, ||X||_σ denotes the Adaptive Loss function, which fuses the advantages of the F norm and the l_{2,1} norm, with robustness controlled by the parameter σ, whose value range is (0, ∞); here x^i denotes the i-th row of the matrix X and ||x^i||_2 denotes its 2-norm.
The loss function of equation (10) improves the noise resistance of the model through the Adaptive Loss and approximates the rank norm through the Schatten p norm; it is therefore a loss function with strong robustness.
The training process updates the network parameters by optimizing the loss function of the network model using a gradient descent method.
Step 4: compute a new sparsity k according to k ← k + t, then return to step 2 and update the adjacency matrix A and the image low-rank reconstruction network; after T such updates, take the reconstruction features output by the fully-connected decoder and apply the normalization inverse operation to them to obtain the final low-rank reconstructed image; here t denotes the sparsity update increment, taking a value in [2, 15], and the number of updates T is set to 5; the normalization inverse operation refers to the inverse of the normalization process in step 1.
In summary, the technical route of the algorithm is shown in the following table:
TABLE 1
In order to verify the effectiveness of the method of the present invention, simulation experiments were carried out with Python software on a machine with an Intel i7-10700F 2.90 GHz CPU, 16 GB of memory, and the Windows 10 operating system.
The Yale face image dataset used in the experiments, from "A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643-660, 2001", contains 165 samples, each with 1024 features; the samples are divided into 15 categories in total.
In order to demonstrate the robustness of the method, 20% of the image samples were selected in the experiments and 20% of the features in those samples were set to random values between 0 and 1; the noisy data were used as the input of the algorithm, and experiments were run with the dimensionality of the data set to 10, 30, and 50, respectively. For comparison, the existing PCA, LpPCA, and RPCA-OM algorithms were used as baselines, and metrics such as the reconstruction loss and the clustering accuracy were calculated for quantitative comparison. The reconstruction loss is the sum of squared pixel differences between the reconstructed image and the original image; the larger its value, the larger the error between the reconstructed and original images. The clustering accuracy is obtained by applying spectral clustering to the depth representations of the images and then maximally matching the clusters with the image categories; the larger its value, the less information the depth representation loses, which benefits low-rank reconstruction of the image.
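The reconstruction-loss metric described above can be sketched as follows (the clustering-accuracy metric, which additionally requires spectral clustering and maximum matching, is omitted):

```python
import numpy as np

def reconstruction_loss(X_orig, X_rec):
    """Sum of squared pixel differences between the original images and the
    low-rank reconstruction; smaller values mean a smaller error."""
    return float(((np.asarray(X_orig) - np.asarray(X_rec)) ** 2).sum())
```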
The calculation results are shown in Table 2. As can be seen, both the reconstruction loss and the clustering accuracy of the depth representation of the proposed method are superior to those of the other comparison methods. Example result images obtained by image reconstruction with the different methods are shown in FIG. 2; evidently, the method of the present invention still obtains a better reconstructed image under noise interference.
TABLE 2
Claims (1)
1. An image low-rank reconstruction method based on an adaptive graph learning principal component analysis method is characterized by comprising the following steps:
step 1: input raw image dataset XrawNormalizing all pixels to obtain a preprocessed image data set X' expressed in a matrix formWherein n is the total number of image sample points, m is the number of features of each image, and each row vector of the matrix corresponds to one image sample point;
Step 2: based on the samples in the image dataset X', calculate the adjacency matrix A, where each element is computed as

a_ij = (d_{i,k+1} − d_ij)_+ / (k·d_{i,k+1} − Σ_{l=1}^{k} d'_{i,l}),

where a_ij denotes the element in row i and column j of the adjacency matrix A, i, j = 1, 2, …, n; k denotes the sparsity, taking a value in {5, 10, 15, 25}; d_ij denotes the Euclidean distance between the i-th and j-th image sample points; d_{i,k+1} denotes the Euclidean distance from the i-th image sample point to its (k+1)-th nearest sample point; (·)_+ = max(·, 0); and d'_{i,l}, l = 1, 2, …, k, denotes the l-th value when the Euclidean distances from the i-th image sample point to the other sample points are sorted in ascending order;
Step 3: input the image dataset X' and the adjacency matrix A into the image low-rank reconstruction network model for training to obtain the trained network;
The image low-rank reconstruction network model consists of a graph encoder, a fully-connected decoder, and a graph decoder; the two decoders form a parallel structure, and the image data are sent to both decoders simultaneously after passing through the graph encoder. The graph encoder consists of L graph convolution layers; it takes the image dataset X' and the adjacency matrix A as input, and the output of each layer is

H^{(l)} = φ^{(l)}(Â H^{(l−1)} W^{(l)}),

where H^{(l)} denotes the output of the l-th layer, l = 1, 2, …, L, with L = 2; H^{(0)} = X'; φ^{(l)} denotes the activation function of the l-th layer, set to the ReLU function; W^{(l)} denotes the weight parameters of the l-th layer, iteratively updated through training; and Â denotes the matrix obtained by Laplacian normalization of the adjacency matrix A;
The fully-connected decoder consists of K fully-connected layers; the output of the graph encoder is input to the fully-connected decoder to obtain the reconstruction features. The output of the l-th fully-connected layer, l = 1, 2, …, K, with K = 2, is obtained by applying that layer's weight parameters, which are iteratively updated through training, followed by an activation function set to the ReLU function;
The graph decoder utilizes the output H^{(L)} of the graph encoder to reconstruct the adjacency matrix from Euclidean distances, obtaining the reconstructed adjacency matrix, in which the element in row i and column j (i, j = 1, 2, …, n) is computed from the Euclidean distance between the depth representation of the i-th image sample point and that of the j-th image sample point, i.e., between the i-th and j-th rows of the graph encoder output H^{(L)};
The loss function of the image low-rank reconstruction network model is calculated according to the following formula:

where L_gross denotes the network loss; the Laplacian matrix corresponding to the reconstructed adjacency matrix is computed from the reconstructed adjacency matrix and its degree matrix; the low-rank constraint term on the reconstruction features is the Schatten p norm raised to the power p, with p = 0.5; λ_1, λ_2, and λ_3 are weight coefficients, each taking a value in the set {10^{-6}, 10^{-5}, …, 10^{3}}; for an n × m matrix X, ||X||_σ denotes the adaptive loss function, where σ takes a value in (0, ∞), x^i denotes the i-th row of the matrix X, and ||x^i||_2 denotes the 2-norm of x^i;
The training process updates the network parameters by optimizing the loss function of the network model using a gradient descent method;
Step 4: compute a new sparsity k according to k ← k + t, then return to step 2 and update the adjacency matrix A and the image low-rank reconstruction network; after T such updates, take the reconstruction features output by the fully-connected decoder and apply the normalization inverse operation to them to obtain the final low-rank reconstructed image; here t denotes the sparsity update increment, taking a value in [2, 15], and the number of updates T is set to 5; the normalization inverse operation refers to the inverse of the normalization process in step 1.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110686899 | 2021-06-21 | ||
CN2021106868997 | 2021-06-21 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113920210A true CN113920210A (en) | 2022-01-11 |
CN113920210B CN113920210B (en) | 2024-03-08 |
Family
ID=79234305
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111058969.0A Active CN113920210B (en) | 2021-06-21 | 2021-09-09 | Image low-rank reconstruction method based on adaptive graph learning principal component analysis method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113920210B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115375600A (en) * | 2022-10-20 | 2022-11-22 | 福建亿榕信息技术有限公司 | Reconstructed image quality weighing method and system based on self-encoder |
CN116704537A (en) * | 2022-12-02 | 2023-09-05 | 大连理工大学 | Lightweight pharmacopoeia picture and text extraction method |
2021-09-09: Chinese application CN202111058969.0A filed; granted as patent CN113920210B (status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019100723A1 (en) * | 2017-11-24 | 2019-05-31 | Huawei Technologies Co., Ltd. | Method and device for training multi-label classification model |
CN109754018A (en) * | 2019-01-09 | 2019-05-14 | Beijing University of Technology | Image recognition method based on F-norm low-rank locality preserving projections |
CN110443169A (en) * | 2019-07-24 | 2019-11-12 | Guangdong University of Technology | Face recognition method based on margin-preserving discriminant analysis |
Non-Patent Citations (1)
Title |
---|
LI, Ao; LIU, Xin; CHEN, Deyun; ZHANG, Yingtao; SUN, Guanglu: "Robust discriminative feature subspace learning model based on low-rank representation", Journal of Electronics & Information Technology, no. 05, 31 May 2020 (2020-05-31), pages 1223 - 1230 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115375600A (en) * | 2022-10-20 | 2022-11-22 | Fujian Yirong Information Technology Co., Ltd. | Autoencoder-based reconstructed image quality assessment method and system |
CN116704537A (en) * | 2022-12-02 | 2023-09-05 | Dalian University of Technology | Lightweight pharmacopoeia image and text extraction method |
CN116704537B (en) * | 2022-12-02 | 2023-11-03 | Dalian University of Technology | Lightweight pharmacopoeia image and text extraction method |
Also Published As
Publication number | Publication date |
---|---|
CN113920210B (en) | 2024-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107203787B (en) | Unsupervised regularization matrix decomposition feature selection method | |
CN107578007A (en) | Deep learning face recognition method based on multi-feature fusion | |
CN112765352A (en) | Graph convolution neural network text classification method based on self-attention mechanism | |
CN109447098B (en) | Image clustering algorithm based on deep semantic embedding | |
CN110717519B (en) | Training, feature extraction and classification method, device and storage medium | |
CN113920210A (en) | Image low-rank reconstruction method based on adaptive graph learning principal component analysis method | |
CN108960422B (en) | Broad learning method based on principal component analysis | |
CN111476272B (en) | Dimension reduction method based on structural constraint symmetric low-rank retention projection | |
CN109190511B (en) | Hyperspectral classification method based on local and structural constraint low-rank representation | |
CN111191719A (en) | Image clustering method based on self-expression and atlas constraint non-negative matrix factorization | |
Yin | Nonlinear dimensionality reduction and data visualization: a review | |
CN113157957A (en) | Attribute graph document clustering method based on graph convolution neural network | |
CN106067165B (en) | Hyperspectral image denoising method based on clustered sparse random field | |
CN110348287A (en) | Unsupervised feature selection method and device based on a dictionary and a sample similarity graph | |
CN115640842A (en) | Network representation learning method based on graph attention self-encoder | |
CN117196963A (en) | Point cloud denoising method based on noise reduction self-encoder | |
CN113962262A (en) | Radar signal intelligent sorting method based on continuous learning | |
CN109815440A (en) | Dimensionality reduction method based on joint graph optimization and projection learning | |
CN110264482B (en) | Active contour segmentation method based on transformation matrix factorization of noose set | |
CN110852304B (en) | Hyperspectral data processing method based on deep learning method | |
CN109063766B (en) | Image classification method based on discriminant prediction sparse decomposition model | |
Chen et al. | Capped $l_1$-norm sparse representation method for graph clustering | |
CN110781972A (en) | Incremental unsupervised multi-modal correlated feature learning model | |
CN113313153B (en) | Low-rank NMF image clustering method and system based on self-adaptive graph regularization | |
CN115169436A (en) | Data dimension reduction method based on fuzzy local discriminant analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||