CN111062308A - Face recognition method based on sparse expression and neural network - Google Patents

Face recognition method based on sparse expression and neural network Download PDF

Info

Publication number
CN111062308A
CN111062308A (application CN201911276159.5A)
Authority
CN
China
Prior art keywords
face
gray level
level images
recognized
sparse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911276159.5A
Other languages
Chinese (zh)
Inventor
祁彦庆
崔力民
张海波
马斌
郝玮
冯杰
杨站齐
吾米提
康龄泰
张南
张振杰
胡红艳
左航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Information and Telecommunication Branch of State Grid Xinjiang Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Information and Telecommunication Branch of State Grid Xinjiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Information and Telecommunication Branch of State Grid Xinjiang Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN201911276159.5A priority Critical patent/CN111062308A/en
Publication of CN111062308A publication Critical patent/CN111062308A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2136 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/28 Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of face recognition, in particular to a face recognition method based on sparse representation and a neural network, which comprises the following steps: establishing a training sample set and a picture set to be recognized according to multiple groups of face gray-level images; calculating the training samples through the KSVD algorithm to obtain training samples expressed in the sparse domain and an updated overcomplete dictionary; performing dimensionality reduction on the training samples expressed in the sparse domain to obtain dimension-reduced training samples; using the training samples to obtain a recognition model through machine learning training; and inputting the picture to be recognized into the recognition model for face recognition. The invention provides a new face recognition method: sparse coding is used as the data source expressing the face image, the data source is dimension-reduced to extract features, and the features are finally input into the recognition model for recognition. Noise and redundant information in the face image features can thereby be effectively removed, improving both the face recognition rate and the recognition speed.

Description

Face recognition method based on sparse expression and neural network
Technical Field
The invention relates to the technical field of face recognition, in particular to a face recognition method based on sparse expression and a neural network.
Background
Face feature extraction is a key step of face recognition. Algorithms such as PCA, 2DPCA and LDA are mostly used at present for extracting face features: the PCA algorithm is first used for dimensionality reduction, and the LDA algorithm is then used to extract the face features. The PCA algorithm can use fewer data dimensions while retaining the intrinsic information of the data to the greatest extent after dimensionality reduction, but it does not separate the data well, and the data points mix together and become indistinguishable. The LDA algorithm makes the dimension-reduced data points as easy to distinguish as possible, so the combination of the two algorithms is complementary and extracts face features very effectively. However, this approach extracts the face data directly from the gray-level space, so the extracted face data contains a large amount of noise and redundant information, which reduces the recognition rate and prolongs the recognition time.
Disclosure of Invention
The invention provides a face recognition method based on sparse representation and a neural network, which overcomes the defects of the prior art and can effectively solve the problem that the face data extracted in existing face recognition methods contains a large amount of noise and redundant information, reducing the recognition rate.
The technical scheme of the invention is realized by the following measures: a face recognition method based on sparse expression and neural network comprises the following steps:
establishing a training sample set and a picture set to be identified according to a plurality of groups of face gray level images;
calculating the training samples through a KSVD algorithm to obtain training samples expressed in the sparse domain and an updated overcomplete dictionary;
carrying out dimensionality reduction on the training sample expressed in the sparse domain to obtain a training sample subjected to dimensionality reduction;
using the training samples to obtain a recognition model through machine learning training;
and inputting the picture to be recognized into a recognition model for face recognition.
The following are further optimizations and improvements of the technical scheme of the invention:
the calculating the training sample by the KSVD algorithm to obtain the training sample expressed in the sparse domain and the over-complete dictionary after training comprises the following steps:
setting Y as training sample set, Y ═ Y1,y2L yiL yn]Y is the vector representation of the training samples in the gray level space, namely the vector representation of the training samples in the gray level space, n is the number of the training samples, and x is the vector representation in the sparse domain of the training samples;
converting y to x by the overcomplete dictionary D according to:
x=D-1y
wherein D is an over-complete dictionary, D belongs to Rm multiplied by n, x belongs to Rn multiplied by l, and n is larger than m; y is the vector representation of the training sample in the gray scale space;
obtaining a new training sample set X according to X, wherein X is [ X ]1,x2L xiL xn]N is the number of training samples; and iteratively updating the overcomplete dictionary D through an X loop to obtain the updated overcomplete dictionary D.
The performing of dimensionality reduction on the training samples expressed in the sparse domain through the LDA algorithm to obtain dimension-reduced training samples comprises:
reducing the training sample expressed in the sparse domain from an n-dimensional vector to a d-dimensional vector;
constructing the between-class scatter matrix and the within-class scatter matrix;
calculating the largest d eigenvalues and the corresponding d eigenvectors of the matrix to construct the transformation matrix Wopt;
using the transformation matrix Wopt to map the training samples expressed in the sparse domain to a new feature subspace, obtaining the dimension-reduced training samples.
Before the picture to be recognized is input into the recognition model for face recognition, the picture to be recognized needs to be processed, and the processing comprises:
calculating the picture to be recognized by using the updated over-complete dictionary through a KSVD algorithm to obtain the picture to be recognized expressed in a sparse domain;
reducing the dimension of the image to be identified expressed in the sparse domain to obtain the image to be identified after dimension reduction;
and inputting the dimension-reduced picture to be recognized into a recognition model for face recognition.
The establishing of the training sample set and the picture set to be recognized according to the multiple groups of face gray-level images comprises:
acquiring a plurality of groups of face gray level images, wherein one group corresponds to one person, each group of face gray level images comprises n face gray level images of one person, and each face gray level image is sampled into a gray level image of a fixed pixel;
setting n-1 face gray level images in each group of face gray level images in the multiple groups of face gray level images as training samples to form a training sample set, and setting the rest face gray level images in each group of face gray level images as pictures to be recognized to form a picture set to be recognized.
Each of the face grayscale images described above is sampled to a 37 × 30 grayscale image.
The machine learning is realized through an RBF artificial neural network model.
The invention provides a new face recognition method: sparse coding is used as the data source expressing the face image, the data source is dimension-reduced to extract features, and the features are finally input into the recognition model for recognition. Noise and redundant information in the face image features (i.e., the training samples, the gray-level face images) can thereby be effectively removed, improving both the face recognition rate and the recognition speed.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a flow chart of the present invention for computing training samples using the KSVD algorithm.
FIG. 3 is a flow chart of the present invention for performing dimension reduction on training samples expressed in sparse domains.
FIG. 4 is a flow chart of the present invention for inputting a picture to be recognized into a recognition model for face recognition.
FIG. 5 is a flow chart of the present invention for creating a training sample set and a to-be-recognized picture set.
Detailed Description
The present invention is not limited by the following examples, and specific embodiments may be determined according to the technical solutions and practical situations of the present invention.
The invention is further described with reference to the following examples and figures:
example 1: as shown in fig. 1, the face recognition method based on sparse representation and neural network includes the following steps:
s1, establishing a training sample set and a picture set to be recognized according to a plurality of groups of face gray level images;
s2, calculating the training samples through a KSVD algorithm to obtain training samples expressed in a sparse domain and an updated over-complete dictionary;
s3, performing dimensionality reduction on the training sample expressed in the sparse domain to obtain a dimensionality-reduced training sample;
s4, obtaining a recognition model through machine learning training by using the training sample;
and S5, inputting the picture to be recognized into the recognition model for face recognition.
Through the above steps, the training samples (gray-level face images) in the training sample set are sparsely coded through the KSVD algorithm and converted into training samples expressed in the sparse domain, which removes noise and redundant information from the training samples and helps retain the information useful for classification. Performing dimensionality reduction on the sparse-domain training samples further reduces the noise, and the feature extraction carried out by the dimensionality reduction speeds up the training of the recognition model.
Therefore, the invention provides a new face recognition method: sparse coding is used as the data source expressing the face image, the data source is dimension-reduced to extract features, and the features are finally input into the recognition model for recognition. Noise and redundant information in the face image features (i.e., the training samples, the gray-level face images) can thereby be effectively removed, improving both the face recognition rate and the recognition speed.
The following are further optimizations and improvements of the technical scheme of the invention:
As shown in fig. 1 and 2, in S2, the calculating of the training samples through the KSVD algorithm to obtain the training samples expressed in the sparse domain and the trained overcomplete dictionary comprises:
S21, setting Y as the training sample set, Y = [y1, y2, …, yi, …, yn], wherein y is the vector representation of a training sample in the gray-level space, n is the number of training samples, and x is the vector representation of a training sample in the sparse domain;
S22, converting y into x through the overcomplete dictionary D according to:
x = D⁻¹y
wherein D is the overcomplete dictionary, D ∈ R^(m×n), x ∈ R^(n×1), and n > m; y is the vector representation of the training sample in the gray-level space;
S23, obtaining a new training sample set X according to x, X = [x1, x2, …, xi, …, xn], wherein n is the number of training samples; and iteratively updating the overcomplete dictionary D over X to obtain the updated overcomplete dictionary D.
"Sparse" means that, after the image data is converted from the gray-level domain to the sparse domain, many of a sample's coding values are 0. The coding in the sparse domain and the coding in the gray-level space are both expressions of the face image; the difference is that their bases are different, and the coding under each basis has its own characteristics. If the image is sparsely coded, much of the noise and redundant information is removed, which is more favorable for retaining the information used for classification. Therefore, the invention uses the KSVD algorithm to calculate the training samples and obtain the training samples expressed in the sparse domain and the trained overcomplete dictionary.
When y is converted into x through the overcomplete dictionary D in S22, if the exact x cannot be obtained because of the interference of noise, x is calculated by the following equation:
x ≈ D⁻¹y, s.t. ||x − D⁻¹y||₂ ≤ ξ
wherein D is the overcomplete dictionary, D ∈ R^(m×n), x ∈ R^(n×1), and n > m; y is the vector representation of the training sample in the gray-level space; x is the vector representation of the training sample in the sparse domain; ξ is a set parameter whose specific value is chosen according to the actual situation.
The distance in the above equation is calculated using the 2-norm (Euclidean distance). However, since n > m, the above equation is an underdetermined system of equations, and an objective function is required to search for the sparse solution among the infinitely many solutions for x, namely:
min ||x||₁ s.t. ||x − D⁻¹y||₂ ≤ ξ
in order to obtain ideal sparse coding, the invention selects a KSVD [13,14] self-adaptive dictionary learning algorithm to obtain sparse coding.
In the above S23, the dictionary atoms are updated on the basis of the current dictionary by iterating over the new training sample set X, so that the D ∈ R^(m×n) and the coefficients x ∈ R^(n×1) obtained by iteration satisfy:
min ||yi − D·xi||₂², s.t. ||xi||₀ ≤ T₀
wherein i = 1, 2, …, n; T₀ is a given sufficiently small number.
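The sparse coding half of the K-SVD iteration described above (finding a code x with at most T₀ nonzeros so that D·x ≈ y) can be sketched with Orthogonal Matching Pursuit. The following Python is a minimal illustration under stated assumptions, not the patent's implementation; the dictionary-update half of K-SVD is omitted, and the dictionary sizes and the synthetic 3-sparse sample are placeholders:

```python
import numpy as np

def omp(D, y, t0):
    """Orthogonal Matching Pursuit: greedily select at most t0 atoms of D
    and least-squares fit their coefficients so that D @ x approximates y."""
    x = np.zeros(D.shape[1])
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(t0):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, t0 = 20, 40, 3                      # m-dim samples, n atoms; n > m (overcomplete)
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
y = D[:, [1, 5, 9]] @ np.array([1.0, -2.0, 0.5])   # a synthetic 3-sparse sample
x = omp(D, y, t0)
print(np.count_nonzero(x))                # at most t0 nonzero sparse-domain codes
```

A full K-SVD would alternate this coding step with an SVD-based update of each dictionary atom over the whole sample set X.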
As shown in fig. 1 and 3, in S3, the performing of dimensionality reduction on the training samples expressed in the sparse domain through the LDA algorithm to obtain dimension-reduced training samples comprises:
S31, setting the training sample expressed in the sparse domain to be reduced from an n-dimensional vector to a d-dimensional vector;
S32, constructing the between-class scatter matrix and the within-class scatter matrix;
S33, calculating the largest d eigenvalues and the corresponding d eigenvectors of the matrix, and constructing the transformation matrix Wopt;
S34, using the transformation matrix Wopt to map the training samples expressed in the sparse domain to a new feature subspace, obtaining the dimension-reduced training samples.
The constructing of the between-class scatter matrix and the within-class scatter matrix in S32 comprises:
the training sample set expressed in the sparse domain is X = [X1, X2, …, Xi, …, Xn], wherein each Xi is one class of training samples, and the training samples of each class comprise qi face images in total;
calculating the mean mi of each class of training samples and the total mean m0:
mi = (1/qi) · Σ_{x ∈ Xi} x
m0 = (1/n) · Σ_i qi · mi
using the class means mi and the total mean m0 to calculate the between-class scatter matrix Sb and the within-class scatter matrix Sw:
Sb = Σ_i qi · (mi − m0)(mi − m0)^T
Sw = Σ_i Σ_{x ∈ Xi} (x − mi)(x − mi)^T
Because the classes with a larger inter-class distance dominate the direction of the eigenvectors, the between-class scatter matrix Sb emphasizes the classes whose inter-class distance is large and neglects the classes whose inter-class distance is small and whose overlap is large. The between-class scatter matrix Sb is therefore redefined by the following formula:
Sb = Σ_i qi · (mi − m0)(mi − m0)^T / ((mi − m0)^T (mi − m0))
wherein (mi − m0)^T (mi − m0) is the squared Euclidean distance from mi to m0; the smaller this distance, the greater the contribution to Sb, and vice versa. That is, the class centers are mapped onto a unit sphere, so that the centers of classes far from the sphere center are pulled closer to it, while the centers of classes that may overlap are pushed farther apart.
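The reweighted between-class scatter described above can be sketched as follows. Dividing each class term by its squared distance to the total mean (equivalently, projecting the class-mean offsets onto a unit sphere) is an assumed reading reconstructed from the surrounding text, not code from the patent:

```python
import numpy as np

def normalized_sb(means, counts, m0):
    """Reweighted between-class scatter: each (mi - m0) outer product is
    divided by its squared Euclidean distance, so near-center (overlap-prone)
    classes contribute as much as far-away ones."""
    p = m0.size
    Sb = np.zeros((p, p))
    for mi, qi in zip(means, counts):
        diff = mi - m0
        d2 = float(diff @ diff)          # squared distance (mi - m0)^T (mi - m0)
        if d2 > 0:
            Sb += qi * np.outer(diff, diff) / d2
    return Sb

# two class means at very different distances from the total mean (zeros)
means = [np.array([4.0, 0.0]), np.array([0.0, 0.5])]
Sb = normalized_sb(means, [10, 10], np.zeros(2))
print(round(float(np.trace(Sb)), 6))     # 20.0: each normalized term has trace qi
```

After normalization each class contributes trace qi to Sb regardless of how far its mean lies from m0, which is exactly the equalizing effect the text describes.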
In the above S33, the transformation matrix Wopt is determined by the Fisher linear discriminant criterion as follows:
Wopt = arg max_W |W^T Sb W| / |W^T Sw W|
wherein Wopt = [W1 W2 … WM] consists of the eigenvectors of Sb · Sw⁻¹ corresponding to the first M largest eigenvalues.
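Steps S31 to S34 can be sketched in Python with NumPy. This is an illustrative reconstruction of standard Fisher LDA under stated assumptions (the unweighted scatter matrices are used, and the class data are synthetic), not the patent's code:

```python
import numpy as np

def lda_transform(classes, d):
    """Build Sw and Sb from per-class sample matrices (q_i x p each) and
    return Wopt, the top-d eigenvectors of pinv(Sw) @ Sb."""
    p = classes[0].shape[1]
    m0 = np.vstack(classes).mean(axis=0)            # total mean
    Sb = np.zeros((p, p))
    Sw = np.zeros((p, p))
    for Xi in classes:
        mi = Xi.mean(axis=0)
        diff = (mi - m0)[:, None]
        Sb += len(Xi) * (diff @ diff.T)             # between-class scatter
        Sw += (Xi - mi).T @ (Xi - mi)               # within-class scatter
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)[:d]              # d largest eigenvalues
    return vecs[:, order].real                      # p x d transform Wopt

rng = np.random.default_rng(1)
classes = [rng.standard_normal((10, 5)) + 4.0 * c for c in range(3)]
Wopt = lda_transform(classes, d=2)
reduced = np.vstack(classes) @ Wopt                 # map samples to the new subspace
print(Wopt.shape, reduced.shape)   # (5, 2) (30, 2)
```

The pseudo-inverse guards against a singular within-class scatter matrix, which is common when the sparse-domain dimension exceeds the sample count.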
As shown in fig. 1 and 4, in S5, inputting the picture to be recognized into a recognition model for face recognition, including:
s51, calculating the picture to be recognized by using the updated over-complete dictionary through a KSVD algorithm to obtain the picture to be recognized expressed in a sparse domain;
s52, performing dimension reduction on the image to be recognized expressed in the sparse domain to obtain the image to be recognized after dimension reduction;
and S53, inputting the image to be recognized after the dimension reduction into a recognition model for face recognition.
The calculation processes of S51 and S52 are the same as those of S2 and S3, and are not described again.
As shown in fig. 1 and 5, in S1, establishing a training sample set and a to-be-recognized picture set according to multiple groups of face grayscale images, including:
s11, acquiring a plurality of groups of face gray level images, wherein one group corresponds to one person, each group of face gray level images comprises n face gray level images of one person, and each face gray level image is sampled into a gray level image with fixed pixels;
s12, setting n-1 face gray images in each group of face gray images in the multiple groups of face gray images as training samples to form a training sample set, and setting the rest face gray images in each group of face gray images as pictures to be recognized to form a picture set to be recognized.
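The split of S11 and S12 (n − 1 images per person for training, the last image of each group held out for recognition) can be sketched as follows; the group counts and the flattened 37 × 30 pixel size are assumptions for illustration:

```python
import numpy as np

def split_groups(groups):
    """groups[k] is an (n, pixels) array for person k; the first n - 1 rows
    become training samples and the last row becomes the picture to recognize."""
    train = np.vstack([g[:-1] for g in groups])
    test = np.stack([g[-1] for g in groups])
    return train, test

rng = np.random.default_rng(2)
n_people, n_images, pixels = 4, 5, 37 * 30      # 37 x 30 sampled gray images
groups = [rng.random((n_images, pixels)) for _ in range(n_people)]
X_train, X_test = split_groups(groups)
print(X_train.shape, X_test.shape)   # (16, 1110) (4, 1110)
```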
Each group of face gray-level images in the above S11 comprises n face images captured from one person under various poses, different facial expressions and lighting conditions. Each face image is normalized and cropped into a 100 × 80 face gray-level image and then down-sampled into a gray-level image of fixed pixels; here, each face gray-level image is sampled into a 37 × 30 gray-level image.
As shown in fig. 1, the machine learning in S4 is implemented by an RBF artificial neural network model.
The training sample set is divided into a first training sample set and a second training sample set. The face images of each person in the first training sample set, acquired under various poses, different facial expressions and lighting conditions, are used as input vectors, and the predicted face recognition result is used as the output vector to perform network training on the RBF artificial neural network model. Likewise, the face images of each person in the second training sample set are used as input vectors, and the predicted face recognition result is used as the output vector to perform network testing on the RBF artificial neural network model. The RBF artificial neural network model is thereby trained, yielding the recognition model.
The RBF artificial neural network is widely used for function approximation and pattern recognition; the local tuning property of its neurons gives the algorithm excellent approximation capability and a very fast learning speed.
The RBF artificial neural network can be described as a mapping V^r → V^s. Let P ∈ V^r be the input vector and Ci ∈ V^r (1 ≤ i ≤ u) be the prototypes of the input vectors. The output of each RBF unit is:
Vi(P) = Vi(||P − Ci||), i = 1, 2, …, u
where ||·|| denotes the Euclidean distance in the input space.
Generally, the Gaussian function has the advantage of being differentiable and is therefore the preferred choice among all possible radial basis functions, so:
Vi(P) = exp(−||P − Ci||² / σi²)
wherein σi is the width of the i-th radial basis unit.
The j-th output yj(P) of the RBF artificial neural network is:
yj(P) = Σ_{i=1}^{u} w(j, i) · Vi(P)
where w(j, i) is the weight from the i-th receptive field to the j-th output.
In the invention, the weight w (j, i), the hidden layer CiAnd the Gaussian kernel function σiDie ofThe type parameters are all adjusted in gradient according to a mixed learning algorithm and a linear least square method (LLS).
The above technical features constitute the best embodiment of the present invention, which has strong adaptability and best implementation effect, and unnecessary technical features can be increased or decreased according to actual needs to meet the requirements of different situations.

Claims (10)

1. A face recognition method based on sparse expression and neural network is characterized by comprising the following steps:
establishing a training sample set and a picture set to be identified according to a plurality of groups of face gray level images;
calculating the training samples through a KSVD algorithm to obtain training samples expressed in the sparse domain and an updated overcomplete dictionary;
carrying out dimensionality reduction on the training sample expressed in the sparse domain to obtain a training sample subjected to dimensionality reduction;
using the training samples to obtain a recognition model through machine learning training;
and inputting the picture to be recognized into a recognition model for face recognition.
2. The face recognition method based on sparse representation and neural network of claim 1, wherein the calculating of the training samples through the KSVD algorithm to obtain the training samples expressed in the sparse domain and the trained overcomplete dictionary comprises:
setting Y as the training sample set, Y = [y1, y2, …, yi, …, yn], wherein y is the vector representation of a training sample in the gray-level space, n is the number of training samples, and x is the vector representation of a training sample in the sparse domain;
converting y into x through the overcomplete dictionary D according to:
x = D⁻¹y
wherein D is the overcomplete dictionary, D ∈ R^(m×n), x ∈ R^(n×1), and n > m; y is the vector representation of the training sample in the gray-level space;
obtaining a new training sample set X according to x, X = [x1, x2, …, xi, …, xn], wherein n is the number of training samples; and iteratively updating the overcomplete dictionary D over X to obtain the updated overcomplete dictionary D.
3. The face recognition method based on sparse expression and neural network of claim 1 or 2, wherein the performing of dimensionality reduction on the training samples expressed in the sparse domain through the LDA algorithm to obtain dimension-reduced training samples comprises:
setting the training sample expressed in the sparse domain to be reduced from an n-dimensional vector to a d-dimensional vector;
constructing the between-class scatter matrix and the within-class scatter matrix;
calculating the largest d eigenvalues and the corresponding d eigenvectors of the matrix to construct the transformation matrix Wopt;
using the transformation matrix Wopt to map the training samples expressed in the sparse domain to a new feature subspace, obtaining the dimension-reduced training samples.
4. The face recognition method based on sparse representation and neural network according to claim 1 or 2, wherein before the picture to be recognized is input into the recognition model for face recognition, the picture to be recognized needs to be processed, and the processing comprises:
calculating the picture to be recognized by using the updated over-complete dictionary through a KSVD algorithm to obtain the picture to be recognized expressed in a sparse domain;
reducing the dimension of the image to be identified expressed in the sparse domain to obtain the image to be identified after dimension reduction;
and inputting the dimension-reduced picture to be recognized into a recognition model for face recognition.
5. The face recognition method based on sparse representation and neural network of claim 3, wherein the image to be recognized is input into the recognition model for face recognition, and needs to be processed, and the processing comprises:
calculating the picture to be recognized by using the updated over-complete dictionary through a KSVD algorithm to obtain the picture to be recognized expressed in a sparse domain;
reducing the dimension of the image to be identified expressed in the sparse domain to obtain the image to be identified after dimension reduction;
and inputting the dimension-reduced picture to be recognized into a recognition model for face recognition.
6. The face recognition method based on sparse representation and neural network according to claim 1,2 or 5, wherein the establishing of the training sample set and the picture set to be recognized according to the plurality of groups of face gray level images comprises:
acquiring a plurality of groups of face gray level images, wherein one group corresponds to one person, each group of face gray level images comprises n face gray level images of one person, and each face gray level image is sampled into a gray level image of a fixed pixel;
setting n-1 face gray level images in each group of face gray level images in the multiple groups of face gray level images as training samples to form a training sample set, and setting the rest face gray level images in each group of face gray level images as pictures to be recognized to form a picture set to be recognized.
7. The face recognition method based on sparse representation and neural network of claim 3, wherein the establishing of the training sample set and the picture set to be recognized according to the plurality of groups of face gray level images comprises:
acquiring a plurality of groups of face gray level images, wherein one group corresponds to one person, each group of face gray level images comprises n face gray level images of one person, and each face gray level image is sampled into a gray level image of a fixed pixel;
setting n-1 face gray level images in each group of face gray level images in the multiple groups of face gray level images as training samples to form a training sample set, and setting the rest face gray level images in each group of face gray level images as pictures to be recognized to form a picture set to be recognized.
8. The face recognition method based on sparse representation and neural network of claim 4, wherein establishing the training sample set and the picture set to be recognized from the plurality of groups of face grayscale images comprises:
acquiring a plurality of groups of face grayscale images, wherein each group corresponds to one person and contains n face grayscale images of that person, and each face grayscale image is sampled to a grayscale image of a fixed pixel size;
setting n-1 of the face grayscale images in each group as training samples to form the training sample set, and setting the remaining face grayscale image in each group as a picture to be recognized to form the picture set to be recognized.
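The per-person split described in claims 6-8 (n-1 training images per group, one held out for recognition) can be sketched as follows. This is a minimal illustration only; the function and variable names are not from the patent.

```python
import numpy as np

def build_sets(groups):
    """Split grouped face images into a training set and a to-be-recognized set.

    `groups` maps a person id to a list of n flattened grayscale images;
    the first n-1 images per person become training samples, and the last
    is held out as the picture to be recognized.
    """
    train_samples, train_labels = [], []
    test_samples, test_labels = [], []
    for person_id, images in groups.items():
        for img in images[:-1]:          # n-1 training samples per person
            train_samples.append(img)
            train_labels.append(person_id)
        test_samples.append(images[-1])  # remaining image to be recognized
        test_labels.append(person_id)
    return (np.array(train_samples), train_labels,
            np.array(test_samples), test_labels)
```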
9. The sparse representation and neural network based face recognition method of any one of claims 6 to 8, wherein each face grayscale image is sampled to a 37 x 30 grayscale image.
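The fixed 37 x 30 sampling of claim 9 could be realized, for example, by nearest-neighbour index selection over the source image. This is a sketch under that assumption; the patent does not specify the interpolation method.

```python
import numpy as np

def downsample(gray, rows=37, cols=30):
    """Nearest-neighbour resample of a 2-D grayscale array to rows x cols.

    For each target row/column, picks the proportionally matching source
    pixel, yielding a fixed-size grayscale image regardless of input size.
    """
    h, w = gray.shape
    r_idx = np.arange(rows) * h // rows   # source row for each target row
    c_idx = np.arange(cols) * w // cols   # source column for each target column
    return gray[np.ix_(r_idx, c_idx)]
```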
10. The sparse representation and neural network based face recognition method of any one of claims 1 to 9, wherein the machine learning is implemented by an RBF artificial neural network model.
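As one possible reading of the RBF artificial neural network model in claim 10, the classifier can be sketched as a Gaussian radial hidden layer whose output weights are fit by least squares. The choice of centres, the sigma value, and the training rule here are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

class RBFNet:
    """Minimal Gaussian RBF network: radial hidden units centred on the
    training samples, linear output weights solved by least squares."""

    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def _phi(self, X):
        # Squared distances from each input to each centre -> Gaussian activations.
        d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.sigma ** 2))

    def fit(self, X, y_onehot):
        self.centres = X                  # every training sample is a centre
        G = self._phi(X)
        self.W, *_ = np.linalg.lstsq(G, y_onehot, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.W      # class scores; argmax gives the label
```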
CN201911276159.5A 2019-12-12 2019-12-12 Face recognition method based on sparse expression and neural network Pending CN111062308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911276159.5A CN111062308A (en) 2019-12-12 2019-12-12 Face recognition method based on sparse expression and neural network

Publications (1)

Publication Number Publication Date
CN111062308A true CN111062308A (en) 2020-04-24

Family

ID=70300675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911276159.5A Pending CN111062308A (en) 2019-12-12 2019-12-12 Face recognition method based on sparse expression and neural network

Country Status (1)

Country Link
CN (1) CN111062308A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111597376A (en) * 2020-07-09 2020-08-28 Tencent Technology (Shenzhen) Co., Ltd. Image data processing method and device and computer readable storage medium
CN111597376B (en) * 2020-07-09 2021-08-10 Tencent Technology (Shenzhen) Co., Ltd. Image data processing method and device and computer readable storage medium
CN111652311A (en) * 2020-06-03 2020-09-11 Soochow University Image sparse representation method based on sparse elliptic RBF neural network
CN111652311B (en) * 2020-06-03 2024-02-20 Soochow University Sparse elliptic RBF neural network-based image sparse representation method
CN112613480A (en) * 2021-01-04 2021-04-06 Shanghai Mininglamp Artificial Intelligence (Group) Co., Ltd. Face recognition method, face recognition system, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
Chen et al. Segmentation of fingerprint images using linear classifier
Wu et al. Lut-based adaboost for gender classification
Eng et al. Facial expression recognition in JAFFE and KDEF Datasets using histogram of oriented gradients and support vector machine
CN111062308A (en) Face recognition method based on sparse expression and neural network
Haque et al. Two-handed bangla sign language recognition using principal component analysis (PCA) and KNN algorithm
Chitaliya et al. Feature extraction using wavelet-pca and neural network for application of object classification & face recognition
CN112464730B (en) Pedestrian re-identification method based on domain-independent foreground feature learning
Bawane et al. Object and character recognition using spiking neural network
CN109325472B (en) Face living body detection method based on depth information
JP2010039778A (en) Method for reducing dimension, apparatus for generating dictionary for pattern recognition, and apparatus for recognizing pattern
CN111310813A (en) Subspace clustering method and device for potential low-rank representation
CN111832405A (en) Face recognition method based on HOG and depth residual error network
CN112836589A (en) Method for recognizing facial expressions in video based on feature fusion
Archana et al. Real time face detection and optimal face mapping for online classes
Rani et al. Face recognition using principal component analysis
Szankin et al. Influence of thermal imagery resolution on accuracy of deep learning based face recognition
Tong et al. Local dominant directional symmetrical coding patterns for facial expression recognition
Strukova et al. Gait analysis for person recognition using principal component analysis and support vector machines
Hsia et al. A fast face detection method for illumination variant condition
Elmansori et al. An enhanced face detection method using skin color and back-propagation neural network
CN107341485B (en) Face recognition method and device
Tan et al. Face recognition algorithm based on open CV
Leng et al. Gender classification based on fuzzy SVM
Hashemi et al. A novel hybrid method for face recognition based on 2d wavelet and singular value decomposition
Zehani et al. Features extraction using different histograms for texture classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination