CN110473140B - Image dimension reduction method of extreme learning machine based on graph embedding

Info

Publication number
CN110473140B
CN110473140B
Authority
CN
China
Prior art keywords
image
sample
matrix
samples
data
Legal status
Active
Application number
CN201910648074.9A
Other languages
Chinese (zh)
Other versions
CN110473140A (en)
Inventor
宋士吉
杨乐
黄高
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Application filed by Tsinghua University
Priority to CN201910648074.9A
Publication of CN110473140A
Application granted
Publication of CN110473140B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image dimension reduction method based on a graph-embedded extreme learning machine, and belongs to the field of machine learning and data mining. First, an original image sample set is selected, and a sample relation matrix is constructed from the inter-sample distances and the label information of the original image samples. Then, according to the constructed relation matrix, the vectorized input image samples are randomly mapped, and a feature extraction matrix is learned by minimizing the weighted sample reconstruction error. Finally, the learned feature extraction matrix is applied to the vectorized image data to achieve dimension reduction. The method has short training time and high data compression efficiency, and effectively improves the compression quality and the stability of the dimension reduction.

Description

Image dimension reduction method of extreme learning machine based on graph embedding
Technical Field
The invention belongs to the field of machine learning and data mining, and particularly relates to an image dimension reduction method of an extreme learning machine based on graph embedding.
Background
With the advent of the big data age, both the volume and the dimensionality of data are growing rapidly, and the problem of high dimensionality is especially pronounced for image data. When analyzing and processing such high-dimensional data, data compression effectively avoids the dimension explosion problem and reduces the storage burden; in addition, data compression removes redundant features from the original data and improves the performance of subsequent processing. An important branch of data compression is the dimension reduction algorithm.
Traditional dimension reduction algorithms such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Locally Linear Embedding (LLE) are widely applied in the field of machine learning. However, as data become more complex and more strongly nonlinear, the data mining capability of these traditional algorithms reaches a bottleneck: the compressed results lose a large amount of feature information, and the discrimination between different classes is low. Compared with traditional dimension reduction algorithms, dimension reduction methods based on artificial neural networks (such as the autoencoder) greatly improve the quality of data compression. However, such methods consume too much time and incur high computational overhead in practical use, and the globally optimal solution is difficult to find. Therefore, accelerating dimension reduction methods based on artificial neural networks is very important.
The Extreme Learning Machine (ELM) is a single-hidden-layer neural network with the same network structure as the autoencoder. The difference is that its first-layer weights are randomly generated and its second-layer weights are obtained directly by least squares, which gives it good generalization performance and fast training. Researchers have therefore proposed extreme-learning-machine-based autoencoders. Experiments show that, compared with the traditional autoencoder, the ELM-based autoencoder greatly shortens the training time of the dimension reduction algorithm while maintaining the quality of the extracted low-dimensional features. However, most existing ELM-based autoencoders take an unsupervised-learning viewpoint, i.e. they use only the data itself and ignore a large amount of valuable label information. In addition, the conventional ELM-based autoencoder projects the original samples with a random matrix at the initial stage of extraction, which may damage the distance relationships between the original samples and makes the performance of the autoencoder insufficiently stable.
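For context, a minimal sketch of the plain ELM autoencoder described above is given below; the ridge constant lam, the random seed, and the use of g(Xβᵀ) as the reduced representation are illustrative assumptions, not details taken from any specific prior work.

```python
import numpy as np

def elm_autoencoder_features(X, L, lam=1e-3, seed=0):
    """Plain ELM autoencoder: random first-layer weights, second-layer weights by
    regularized least squares. X: (N, D) data; L: hidden (target) dimension."""
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    rng = np.random.default_rng(seed)
    D = X.shape[1]
    A = rng.standard_normal((D, L))                              # random input weights
    H = sigmoid(X @ A)                                           # hidden-layer outputs
    beta = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ X)   # least-squares output weights
    return sigmoid(X @ beta.T)                                   # low-dimensional features (assumed usage)
```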
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an image dimension reduction method based on a graph-embedded extreme learning machine. The method has short training time and high data compression efficiency; it uses the label information together with the original data to construct a sample relation matrix, and obtains the data compression matrix by minimizing the sample reconstruction error weighted by the elements of the relation matrix, thereby improving the compression quality and the stability of the dimension reduction.
The invention provides an image dimension reduction method of an extreme learning machine based on graph embedding, which is characterized by comprising the following steps of:
1) selecting an original image sample set, denoted X' = {x'_1, x'_2, ..., x'_N}, wherein each original image sample x'_i ∈ ℝ^{S×H×W}; the label data corresponding to all original image samples in X' form a set {y_1, y_2, ..., y_N}, in which y_i ∈ {1, 2, ..., M} is the label corresponding to sample x'_i, N represents the total number of original image samples in X', S represents the number of image channels, H represents the image height, W represents the image width, and M is the total number of classes of the samples;
2) initializing; the method comprises the following specific steps:
2-1) converting each original image sample x'_i in X' from a matrix to a column vector x_i ∈ ℝ^D, wherein D = S×H×W is the initial dimension; the converted image sample set is X = [x_1, x_2, ..., x_N]ᵀ ∈ ℝ^{N×D}, and each image sample in X is x_i ∈ ℝ^D;
2-2) calculating the number of image samples corresponding to each category, represented by a vector P = [p_1, ..., p_M], wherein the k-th element p_k of P represents the number of image samples contained in the k-th category; the calculation steps are as follows:
2-2-1) letting i = 1; the initial values of all elements in the vector P are 0;
2-2-2) examining y_i: if y_i = k, letting p_k = p_k + 1;
2-2-3) letting i = i + 1, and then returning to step 2-2-2); repeating until i = N, obtaining the vector P;
2-3) determining a target dimension L, wherein 0 < L < D;
3) establishing a relation matrix S; the method comprises the following specific steps:
3-1) calculating the median of the Euclidean distances between all image samples, denoted ω; the Euclidean distance between image sample x_i and image sample x_j is computed as
b_ij = ||x_i − x_j||
3-2) calculating the number of neighbours k_i for each sample [the defining formula appears only as an image in the original];
3-3) letting i = 1;
3-4) for image sample x_i, writing its label as y_i = m; finding the k_i samples closest to x_i in Euclidean distance; the indices of these k_i samples constitute the neighbour sample set of x_i, denoted N(i);
3-5) in the set N(i) obtained in step 3-4), the subset of indices whose corresponding samples have label value equal to m is the same-class neighbour set of x_i, denoted Ns(i), i.e. Ns(i) = {j ∈ N(i) : y_j = m};
3-6) in the set N(i) obtained in step 3-4), counting the number of indices whose corresponding samples have label value not equal to m, denoted nd(i);
3-7) generating the i-th row of the matrix S from the quantities obtained in steps 3-5) and 3-6) [the element-wise formula appears only as an image in the original];
3-8) letting i = i + 1, and then returning to step 3-4); repeating until i = N, at which point the matrix S is generated;
4) solving the feature extraction matrix β; the method comprises the following specific steps:
4-1) generating a relation diagonal matrix U; the specific steps are as follows:
4-1-1) letting i = 1;
4-1-2) computing the diagonal element u_ii of the i-th column of U [the defining formula appears only as an image in the original]; filling the remaining positions of the i-th column with 0;
4-1-3) letting i = i + 1, and then returning to step 4-1-2); repeating until i = N, at which point the generation of U is finished;
4-2) generating a random matrix A ∈ ℝ^{D×L} and calculating the random feature mapping result H for X:
H = g(XA)
wherein the function g is the sigmoid function, and the matrix H is the random feature mapping of X;
4-3) solving the feature extraction matrix β:
β = (HᵀUH + 10I_L)⁻¹HᵀSX
5) compressing the converted image sample set X by using the feature extraction matrix β, the compressed data being denoted Z:
Z = g(Xβᵀ)
Z is the dimension reduction result of the converted image sample set X.
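For illustration, a minimal NumPy sketch of steps 4) and 5) is given below. It assumes the relation matrix S from step 3) is already available and, because the defining formula for U is only shown as an image above, it takes u_ii as the i-th row sum of S; the function and variable names are chosen for this sketch only.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def learn_beta_and_compress(X, S, L, seed=0):
    """Sketch of steps 4) and 5): X is the (N, D) vectorized sample set,
    S the (N, N) relation matrix from step 3), L the target dimension."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    U = np.diag(S.sum(axis=1))          # assumption: u_ii = sum_j s_ij (the formula is an image in the source)
    A = rng.standard_normal((D, L))     # random matrix A (step 4-2)
    H = sigmoid(X @ A)                  # random feature mapping H = g(XA)
    # beta = (H^T U H + 10 I_L)^(-1) H^T S X   (step 4-3)
    beta = np.linalg.solve(H.T @ U @ H + 10.0 * np.eye(L), H.T @ (S @ X))
    Z = sigmoid(X @ beta.T)             # compressed data Z = g(X beta^T)  (step 5)
    return beta, Z
```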
The invention has the characteristics and beneficial effects that:
the feature extraction method provided by the invention inherits the advantages of the self-encoder and the extreme learning machine, has strong data compression capability, can mine the essential features of data, and has higher training speed. The invention is based on the view of supervised learning, and fully utilizes the label information of the original data to guide the process of feature extraction, so that the compressed data has stronger inter-class distinction degree and is beneficial to subsequent analysis and processing. The graph embedding technology enables data label information to be effectively introduced into the proposed compression algorithm, and the stability of the algorithm is guaranteed.
1) Compared with the traditional linear dimension reduction method, the method belongs to a nonlinear dimension reduction method, so that the data feature extraction capability is stronger. Because the neural network structure is utilized, the essential characteristics of the original data can be fully explored, and therefore the quality of the compressed data is effectively improved.
2) Compared with the traditional self-encoder method, the method has higher training speed and cannot get into the local optimal solution, so the efficiency of feature extraction is greatly improved.
3) Compared with other self-coders based on extreme learning machines, the self-coder structure and the label information are fully utilized, the distance relation information and the popular information between the original samples are reserved, and the label information is utilized, so that the inter-class discrimination of the compressed data can be effectively improved, the dimensionality reduction quality is improved, the extracted features have better discrimination while the essential information is ensured, and the data analysis such as later-stage clustering and classification is facilitated.
4) Compared with other self-coders based on the extreme learning machine, the self-coder fully utilizes the graph embedding technology to correct the problem of sample distance loss possibly existing in the self-coder of the traditional extreme learning machine, ensures the feature quality of compressed data, improves the stability of an algorithm, and enables the performance and the effect of feature extraction to be more stable.
5) The method disclosed by the invention integrates a self-coder, an extreme learning machine and a graph embedding technology, has the advantages of good feature extraction effect, simple implementation scheme and high operation speed, can be used for discovering the essential features of the sample from high-dimensional data, and has good application prospects in the fields of image processing, face recognition, image classification and the like.
Drawings
FIG. 1 is an overall flow chart of the method of the present invention
FIG. 2 is a diagram illustrating the results of performing a dimensionality reduction and data recovery test on image data in an embodiment of the present invention.
Detailed Description
The invention provides an image dimension reduction method based on a graph-embedded extreme learning machine, which is described in further detail below with reference to the drawings and specific embodiments.
The invention provides an image dimension reduction method based on a graph-embedded extreme learning machine (GDR-ELM), which comprises four stages: autoencoder initialization, graph-embedding matrix construction, feature extraction matrix computation, and feature extraction. The technical solution is as follows: first, a sample relation matrix is constructed from the inter-sample distances and the label information of the original image samples; then, according to the constructed relation matrix, the vectorized input images are randomly mapped, and a feature extraction matrix is learned by minimizing the weighted sample reconstruction error; finally, the learned feature extraction matrix is applied to the vectorized image data to achieve dimension reduction. The overall flow of the method is shown in FIG. 1 and comprises the following steps:
1) selecting an original image sample set, denoted X' = {x'_1, x'_2, ..., x'_N}, wherein each original image sample x'_i ∈ ℝ^{S×H×W}; the label data corresponding to all original image samples in X' form a set {y_1, y_2, ..., y_N}, in which y_i ∈ {1, 2, ..., M} is the label corresponding to sample x'_i, N represents the total number of original image samples in X', S represents the number of image channels (1 for a grayscale image, 3 for RGB), H represents the image height, W represents the image width, S, H, and W are the same for all original image samples in X', and M is the total number of sample classes. The label data y_i denotes the class of sample x'_i, i.e. if sample x'_i belongs to class m, then y_i = m.
2) Initializing; the method comprises the following specific steps:
2-1) original image vectorization: converting each original image sample x'_i in X' from a matrix to a column vector x_i ∈ ℝ^D, where D = S×H×W is the initial dimension. The converted image sample set is denoted X = [x_1, x_2, ..., x_N]ᵀ ∈ ℝ^{N×D}, each image sample in X is denoted x_i ∈ ℝ^D, and the label corresponding to each image sample is kept unchanged after the conversion;
2-2) calculating the number of image samples corresponding to each category, represented by a vector P = [p_1, ..., p_M], where the k-th element p_k of P represents the number of image samples contained in the k-th category; the calculation steps are as follows (an illustrative sketch is given after Table 1):
2-2-1) letting i = 1; the initial values of all elements in the vector P are 0;
2-2-2) examining y_i: if y_i = k, letting p_k = p_k + 1;
2-2-3) letting i = i + 1, and then returning to step 2-2-2); repeating until i = N, obtaining the vector P;
2-3) determining a target dimension L, which can be any positive integer smaller than the original dimension D, i.e. 0 < L < D. The smaller L is, the higher the degree of compression, but the more severe the loss of the original information of the data. The target dimension values used in the subsequent verification are shown in Table 1.
Table 1: target dimension value in actual verification
Data set Target dimension (L)
Oral 50,100
YaleB 50,100
UMIST 50,100
COIL20 30,50
USPST 10,30
MNISTsub 30,50
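The initialization in steps 2-1) and 2-2) above can be performed in vectorized form; the NumPy sketch below is illustrative, and the helper name and array layout are assumptions of this illustration.

```python
import numpy as np

def initialize(images, labels, M):
    """images: (N, S, H, W) array of original image samples.
    labels: length-N array of class labels in {1, ..., M}.
    Returns the vectorized sample set X of shape (N, D) with D = S*H*W,
    and the per-class counts P of length M (steps 2-1 and 2-2)."""
    N = images.shape[0]
    X = images.reshape(N, -1).astype(float)                 # x_i: vectorized image sample
    P = np.bincount(np.asarray(labels) - 1, minlength=M)    # p_k = number of samples in class k
    return X, P
```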
3) Establishing a relation matrix S; the method comprises the following specific steps:
3-1) calculating the median of the Euclidean distances between all image samples, denoted ω; the Euclidean distance between image sample x_i and image sample x_j can be calculated by
b_ij = ||x_i − x_j||
3-2) calculating the number of neighbours k_i for each sample [the defining formula appears only as an image in the original];
3-3) letting i = 1;
3-4) for image sample x_i, writing its label as y_i = m; finding the k_i samples closest to x_i in Euclidean distance; the indices of these k_i samples constitute the neighbour sample set of x_i, denoted N(i);
3-5) in the set N(i) obtained in step 3-4), the subset of indices whose corresponding samples have label value equal to m is the same-class neighbour set of x_i, denoted Ns(i), i.e. Ns(i) = {j ∈ N(i) : y_j = m};
3-6) in the set N(i) obtained in step 3-4), counting the number of indices whose corresponding samples have label value not equal to m, denoted nd(i);
3-7) generating the i-th row of the matrix S from the quantities obtained in steps 3-5) and 3-6) [the element-wise formula appears only as an image in the original];
3-8) letting i = i + 1, and then returning to step 3-4); repeating until i = N, at which point the matrix S is generated; a sketch of this neighbourhood bookkeeping is given below;
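The following sketch illustrates the neighbourhood bookkeeping of step 3). Because the number of neighbours (step 3-2) and the actual values written into S (step 3-7) are given only as images in the original, the neighbour count k and the weights w_same and w_diff below are placeholders of this illustration, not the patented formulas.

```python
import numpy as np

def build_relation_matrix(X, y, k, w_same=1.0, w_diff=-1.0):
    """Sketch of step 3). X: (N, D) vectorized samples, y: length-N labels.
    k, w_same and w_diff stand in for the formulas that appear only as images
    in the source (steps 3-2 and 3-7)."""
    N = X.shape[0]
    sq = np.sum(X * X, axis=1)
    B = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0))  # b_ij = ||x_i - x_j||
    omega = np.median(B[np.triu_indices(N, k=1)])          # median pairwise distance (step 3-1)
    S = np.zeros((N, N))
    y = np.asarray(y)
    for i in range(N):
        neighbours = [j for j in np.argsort(B[i]) if j != i][:k]   # k nearest samples (step 3-4)
        for j in neighbours:
            S[i, j] = w_same if y[j] == y[i] else w_diff           # same-class vs different-class neighbours
    return S, omega
```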
4) solving a feature extraction matrix beta; the method comprises the following specific steps:
4-1) generating a relation diagonal matrix U; the specific steps are as follows:
4-1-1) letting i = 1;
4-1-2) computing the diagonal element u_ii of the i-th column of U [the defining formula appears only as an image in the original]; filling the remaining positions of the i-th column with 0;
4-1-3) letting i = i + 1, and then returning to step 4-1-2); repeating until i = N, at which point the generation of U is finished;
4-2) generating a random matrix A ∈ ℝ^{D×L} and calculating the random feature mapping result H for X:
H = g(XA)
where the function g is the sigmoid function, and the matrix H is the random feature mapping of the vectorized image sample set X;
4-3) solving the feature extraction matrix β:
β = (HᵀUH + 10I_L)⁻¹HᵀSX
5) compressing the converted image sample set X by using the feature extraction matrix β, the compressed data being denoted Z:
Z = g(Xβᵀ)
Z is the dimension reduction result of the converted image sample set X and can be used for subsequent tasks such as data processing, analysis, and classification. Data recovery may be achieved from Z and β [the recovery formula appears only as an image in the original], yielding the restored set of vectorized image samples.
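Since the recovery formula is reproduced only as an image, the reconstruction below is an assumption of this sketch: it feeds the compressed code Z back through the learned output weights β, i.e. X̂ = Zβ, which mirrors how β is fitted to reconstruct X from the hidden representation.

```python
import numpy as np

def recover(Z, beta):
    """Assumed reconstruction X_hat = Z @ beta (Z: (N, L), beta: (L, D)).
    Illustrative choice only; the source gives the recovery formula as an image."""
    return Z @ beta

def reconstruction_error(X, X_hat):
    """Mean squared reconstruction error, useful for the recovery test below."""
    return float(np.mean((X - X_hat) ** 2))
```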
Validity verification of embodiments of the invention
The performance of the proposed image dimension reduction method based on the graph-embedded extreme learning machine is analyzed below. All experiments use MATLAB 2016a as the experimental platform, and the computer performance parameters are: Intel(R) Core(TM) i7-4770K CPU @ 3.50 GHz, 32 GB memory.
The validity verification first tests the data compression and reconstruction capability of the method. To test the inter-class discrimination after data compression, different dimension reduction methods are used to extract features on several image data sets, followed by a classification task; the performance of each dimension reduction method is measured by the classification accuracy, where higher accuracy indicates better quality of the extracted features.
1) Testing data compression capability and reconstruction capability;
to better observe the stability of the proposed method, the experiment was performed using a subset of the MNIST handwritten digit data set for compression capability and data recovery tests, which experiment is shown in fig. 2. The test was performed using an MNIST subset, which included 5000 handwritten digital grayscale images (original image size 1x28x28, i.e., S-1, H-28, W-28), the 5000 images were divided into 10 classes according to image content, each representing a number 0 to 9, the initial data length after vectorization was 784 (i.e., D-784), and the image data was represented as 784
Figure GDA0002716772580000071
We use the proposed algorithm for data compression to make itThe data length becomes 100, i.e., the data length becomes 12.76% of the original. After compression, the compressed data is reconstructed to obtain reconstructed data. From the experimental results, it can be seen that although the reconstructed data has a certain information loss relative to the original data, the original image is basically recovered.
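Under the shapes described above, the sketches given earlier could be combined as follows. Synthetic random data stands in for the MNIST subset (a smaller stand-in of 1000 images keeps the illustration light); initialize, build_relation_matrix, learn_beta_and_compress, recover, and reconstruction_error are the illustrative helpers defined in the previous sketches, and the neighbour count k = 10 is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 1, 28, 28))           # stand-in for part of the MNIST subset: S=1, H=28, W=28
labels = rng.integers(1, 11, size=1000)          # stand-in labels in {1, ..., 10}

X, P = initialize(images, labels, M=10)          # X: (1000, 784), D = 784
S, omega = build_relation_matrix(X, labels, k=10)
beta, Z = learn_beta_and_compress(X, S, L=100)   # Z: (1000, 100); 100/784 = 12.76% of the original length
X_hat = recover(Z, beta)                         # assumed reconstruction
print(Z.shape, reconstruction_error(X, X_hat))
```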
2) Performance comparison with other dimension reduction methods
To verify the effectiveness of the proposed method (GDR-ELM), several other dimension reduction methods are used as comparisons, including the extreme learning machine based autoencoder (ELM-AE), the iterative extreme learning machine autoencoder (ELM-AEIF), and Principal Component Analysis (PCA). The validation data sets are standard machine learning data sets: the ORL face data set, the YaleB face data set, the UMIST face data set, the COIL20 object classification data set, the USPST handwritten digit classification data set, and a subset of the MNIST handwritten digit data set (MNISTsub). Information on these data sets is given in Table 2.
Table 2: validating data set information
Figure GDA0002716772580000072
In the verification experiments, the original training samples are used as training samples for the proposed method and are divided into a training set and a validation set. After the final feature extraction matrix is obtained, it is used to extract features from the test samples, and a least squares classifier is used for classification. Ten independent experiments were performed on each data set; the accuracy and standard deviation of the results are shown in Table 3, with the highest accuracy indicated in bold.
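The classifier used in the protocol above is a least squares classifier; a minimal sketch operating on the extracted features is given below (the one-hot targets and the ridge term lam are assumptions of this illustration).

```python
import numpy as np

def least_squares_classifier(Z_train, y_train, Z_test, M, lam=1e-3):
    """Fit W minimizing ||Z_train W - Y||^2 + lam ||W||^2 with one-hot targets Y,
    then predict the class with the largest score for each test feature vector."""
    N, L = Z_train.shape
    Y = np.eye(M)[np.asarray(y_train) - 1]                        # one-hot labels for classes 1..M
    W = np.linalg.solve(Z_train.T @ Z_train + lam * np.eye(L), Z_train.T @ Y)
    return np.argmax(Z_test @ W, axis=1) + 1                      # predicted class labels
```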
Table 3: Verification experiment results
[Table 3 is reproduced only as an image in the original.]
From table 3 the following conclusions can be drawn:
a) Compared with the other extreme-learning-machine-based autoencoders (ELM-AE and ELM-AEIF), the proposed method achieves better results in most experiments. Its classification accuracy is higher than that of the other two ELM-based autoencoders in most experiments, which shows that introducing label information effectively improves the discrimination of the extracted low-dimensional features and thus improves the subsequent classification results. On the other hand, the variance of the results of the proposed method is lower and its performance is more stable, indicating that the introduction of label information effectively corrects the aforementioned sample distance loss problem, so that the stability of the proposed method is guaranteed.
b) The accuracy of the proposed method is superior to the principal component analysis algorithm (PCA) in most experiments. On the one hand, PCA is a linear method, so its feature extraction capability is weaker, and since it uses no label information the discrimination of its features is also weaker. In addition, PCA focuses only on the global statistical properties of the features and ignores their local properties; in contrast, the proposed method takes both into account and therefore has better feature extraction capability.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to them; any changes, modifications, substitutions, combinations, and simplifications made without departing from the spirit and principle of the present invention are to be regarded as equivalent replacements and are included within the scope of the present invention.

Claims (1)

1. An image dimension reduction method based on a graph-embedded extreme learning machine is characterized by comprising the following steps:
1) selecting an original image sample set, denoted X' = {x'_1, x'_2, ..., x'_N}, wherein each original image sample x'_i ∈ ℝ^{S×H×W}; the label data corresponding to all original image samples in X' form a set {y_1, y_2, ..., y_N}, in which y_i ∈ {1, 2, ..., M} is the label corresponding to sample x'_i, N represents the total number of original image samples in X', S represents the number of image channels, H represents the image height, W represents the image width, and M is the total number of classes of the samples;
2) initializing; the method comprises the following specific steps:
2-1) converting each original image sample x'_i in X' from a matrix to a column vector x_i ∈ ℝ^D, wherein D = S×H×W is the initial dimension; the converted image sample set is X = [x_1, x_2, ..., x_N]ᵀ ∈ ℝ^{N×D}, and each image sample in X is x_i ∈ ℝ^D;
2-2) calculating the number of image samples corresponding to each category, represented by a vector P = [p_1, ..., p_M], wherein the k-th element p_k of P represents the number of image samples contained in the k-th category; the calculation steps are as follows:
2-2-1) letting i = 1; the initial values of all elements in the vector P are 0;
2-2-2) examining y_i: if y_i = k, letting p_k = p_k + 1;
2-2-3) letting i = i + 1, and then returning to step 2-2-2); repeating until i = N, obtaining the vector P;
2-3) determining a target dimension L, wherein 0 < L < D;
3) establishing a relation matrix S; the method comprises the following specific steps:
3-1) calculating the median of the Euclidean distances between all image samples, denoted ω; wherein the Euclidean distance between image sample x_i and image sample x_j is calculated as
b_ij = ||x_i − x_j||
3-2) calculating the number of neighbours k_i for each sample [the defining formula appears only as an image in the original];
3-3) letting i = 1;
3-4) for image sample x_i, writing its label as y_i = m; finding the k_i samples closest to x_i in Euclidean distance; the indices of these k_i samples constitute the neighbour sample set of x_i, denoted N(i);
3-5) in the set N(i) obtained in step 3-4), taking the subset of indices whose corresponding samples have label value equal to m as the same-class neighbour set of x_i, denoted Ns(i), i.e. Ns(i) = {j ∈ N(i) : y_j = m};
3-6) in the set N(i) obtained in step 3-4), counting the number of indices whose corresponding samples have label value not equal to m, denoted nd(i);
3-7) generating the i-th row of the matrix S from the quantities obtained in steps 3-5) and 3-6) [the element-wise formula appears only as an image in the original];
3-8) letting i = i + 1, and then returning to step 3-4); repeating until i = N, at which point the matrix S is generated;
4) solving a feature extraction matrix beta; the method comprises the following specific steps:
4-1) generating a relation diagonal matrix U; the specific steps are as follows:
4-1-1) letting i = 1;
4-1-2) computing the diagonal element u_ii of the i-th column of U [the defining formula appears only as an image in the original]; filling the remaining positions of the i-th column with 0;
4-1-3) letting i = i + 1, and then returning to step 4-1-2); repeating until i = N, at which point the generation of U is finished;
4-2) generating a random matrix A ∈ ℝ^{D×L} and calculating the random feature mapping result H for X:
H = g(XA)
wherein the function g is a sigmoid function, and the matrix H is the random feature mapping of X;
4-3) solving the feature extraction matrix β:
β = (HᵀUH + 10I_L)⁻¹HᵀSX
5) compressing the converted image sample set X by using the feature extraction matrix β, the compressed data being denoted Z:
Z = g(Xβᵀ)
wherein Z is the dimension reduction result of the converted image sample set X.
CN201910648074.9A 2019-07-18 2019-07-18 Image dimension reduction method of extreme learning machine based on graph embedding Active CN110473140B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910648074.9A CN110473140B (en) 2019-07-18 2019-07-18 Image dimension reduction method of extreme learning machine based on graph embedding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910648074.9A CN110473140B (en) 2019-07-18 2019-07-18 Image dimension reduction method of extreme learning machine based on graph embedding

Publications (2)

Publication Number Publication Date
CN110473140A CN110473140A (en) 2019-11-19
CN110473140B (en) 2021-05-07

Family

ID=68509695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910648074.9A Active CN110473140B (en) 2019-07-18 2019-07-18 Image dimension reduction method of extreme learning machine based on graph embedding

Country Status (1)

Country Link
CN (1) CN110473140B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111259917B (en) * 2020-02-20 2022-06-07 西北工业大学 Image feature extraction method based on local neighbor component analysis
CN111401434B (en) * 2020-03-12 2024-03-08 西北工业大学 Image classification method based on unsupervised feature learning
CN111783845B (en) * 2020-06-12 2024-04-16 浙江工业大学 Hidden false data injection attack detection method based on local linear embedding and extreme learning machine
CN112485394A (en) * 2020-11-10 2021-03-12 浙江大学 Water quality soft measurement method based on sparse self-coding and extreme learning machine
CN112364927A (en) * 2020-11-17 2021-02-12 哈尔滨市科佳通用机电股份有限公司 Foreign matter detection method based on filter bank
WO2024020870A1 (en) * 2022-07-27 2024-02-01 Huawei Technologies Co., Ltd. Methods and systems for data feature extraction

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845512A (en) * 2016-11-30 2017-06-13 湖南文理学院 Beasts shape recognition method and system based on fractal parameter

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104867138A (en) * 2015-05-07 2015-08-26 天津大学 Principal component analysis (PCA) and genetic algorithm (GA)-extreme learning machine (ELM)-based three-dimensional image quality objective evaluation method
CN106991132A (en) * 2017-03-08 2017-07-28 南京信息工程大学 A kind of figure sorting technique reconstructed based on atlas with kernel of graph dimensionality reduction
CN108319964B (en) * 2018-02-07 2021-10-22 嘉兴学院 Fire image recognition method based on mixed features and manifold learning
CN108845974A (en) * 2018-04-24 2018-11-20 清华大学 Linear dimension reduction method is supervised using the having for separation probability of minimax probability machine
CN109086886A (en) * 2018-08-02 2018-12-25 工极(北京)智能科技有限公司 A kind of convolutional neural networks learning algorithm based on extreme learning machine

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845512A (en) * 2016-11-30 2017-06-13 湖南文理学院 Beasts shape recognition method and system based on fractal parameter

Also Published As

Publication number Publication date
CN110473140A (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN110473140B (en) Image dimension reduction method of extreme learning machine based on graph embedding
CN111581405B (en) Cross-modal generalization zero sample retrieval method for generating confrontation network based on dual learning
CN107122809B (en) Neural network feature learning method based on image self-coding
CN105760821B (en) The face identification method of the grouped accumulation rarefaction representation based on nuclear space
CN113627482B (en) Cross-modal image generation method and device based on audio-touch signal fusion
Ning et al. Semantics-consistent representation learning for remote sensing image–voice retrieval
CN105740912B (en) The recognition methods and system of low-rank image characteristics extraction based on nuclear norm regularization
CN111553297B (en) Method and system for diagnosing production fault of polyester filament based on 2D-CNN and DBN
Djeddi et al. Artificial immune recognition system for Arabic writer identification
CN113887661B (en) Image set classification method and system based on representation learning reconstruction residual analysis
CN110796022B (en) Low-resolution face recognition method based on multi-manifold coupling mapping
CN117058266B (en) Handwriting word generation method based on skeleton and outline
CN116311483B (en) Micro-expression recognition method based on local facial area reconstruction and memory contrast learning
CN104850859A (en) Multi-scale analysis based image feature bag constructing method
CN110414616A (en) A kind of remote sensing images dictionary learning classification method using spatial relationship
CN111008224A (en) Time sequence classification and retrieval method based on deep multitask representation learning
CN108198324B (en) A kind of multinational bank note currency type recognition methods based on finger image
CN111695455B (en) Low-resolution face recognition method based on coupling discrimination manifold alignment
CN113222072A (en) Lung X-ray image classification method based on K-means clustering and GAN
CN114357307B (en) News recommendation method based on multidimensional features
Jiang et al. Forgery-free signature verification with stroke-aware cycle-consistent generative adversarial network
CN117690178B (en) Face image recognition method and system based on computer vision
CN111178254A (en) Signature identification method and device
CN113762151A (en) Fault data processing method and system and fault prediction method
Sun et al. Multiple-kernel, multiple-instance similarity features for efficient visual object detection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant