CN112069978B - Face recognition method based on mutual information and dictionary learning - Google Patents

Face recognition method based on mutual information and dictionary learning

Info

Publication number
CN112069978B
CN112069978B (application CN202010912544.0A)
Authority
CN
China
Prior art keywords
dictionary
mutual information
learning
matrix
sparse matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010912544.0A
Other languages
Chinese (zh)
Other versions
CN112069978A (en)
Inventor
刘侍刚
曹清华
彭亚丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Normal University
Original Assignee
Shaanxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Normal University filed Critical Shaanxi Normal University
Priority to CN202010912544.0A priority Critical patent/CN112069978B/en
Publication of CN112069978A publication Critical patent/CN112069978A/en
Application granted granted Critical
Publication of CN112069978B publication Critical patent/CN112069978B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2136Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/28Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A face recognition method based on mutual information and dictionary learning comprises the steps of image preprocessing, determining the maximum mutual information coefficient MIC, designing a dictionary learning objective function J(D, X), initializing a dictionary D and a sparse matrix X, learning the dictionary D and the sparse matrix X, and recognizing a face image. Because the invention determines the maximum mutual information coefficient MIC and designs the dictionary learning objective function J(D, X) around it, the classification accuracy for face images increases steadily with the number of atoms and is higher than that of current dictionary learning face recognition methods. Mutual information quantifies the degree of association between two variables; according to the mutual information principle, the maximum mutual information coefficient between the dictionary atoms and the training samples is computed and used as a reference term, so that a corresponding weight coefficient is added to each atom during dictionary learning. Each atom is thus learned with its own weight, yielding a dictionary with stronger discrimination capability and improving the accuracy of face recognition. The method can be used for processing face images.

Description

Face recognition method based on mutual information and dictionary learning
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a mutual information-based dictionary learning face recognition method.
Background
At present there are many dictionary learning methods; approaches to constructing a dictionary can be divided into analytic methods and learning methods. Analytic methods mainly use harmonic analysis or mathematical transforms to construct the dictionary, for example the contourlet transform, wavelet transform, discrete cosine transform, shearlets, and parameterized dictionaries. Learning methods aim to learn an overcomplete dictionary from the training samples, so that the atoms of the learned overcomplete dictionary match the training samples, represent them well, and yield sparse representations.
Recent research shows that dictionaries obtained by learning methods outperform analytic ones in image classification, image denoising, image super-resolution and image restoration. The K-SVD (K-means singular value decomposition) algorithm proposed by Aharon et al. is a representative learning method: atoms that capture the characteristic, discriminative structure of the training samples are kept in the dictionary and are helpful for image classification. Many scholars have tried to improve dictionary performance, and designing discriminant constraints on atom adaptivity and reconstruction is currently a common approach. For example, the FDDL (Fisher discrimination dictionary learning) algorithm proposed by Yang et al. constructs constraint terms from the discriminative information in both the representation residual and the representation coefficients of each class of atoms. Zhang et al. extended K-SVD to the D-KSVD algorithm, obtaining good image classification results by using the error between the coding coefficients and the training-sample class labels as a weight constraint. Building on D-KSVD, Jiang et al. proposed the LC-KSVD (label consistent K-SVD) algorithm, which constructs a sparse-coding error term from the atom class labels. Cai et al. proposed the SVGDL (support vector guided dictionary learning) algorithm, which uses the class labels of the training samples to select the weights of the coding coefficients. All of the above dictionary learning algorithms perform well in image classification. However, most of them only consider the association between training-sample labels and sparse coding, ignoring the relation between the number of dictionary atoms and the sample labels; as a result, their classification accuracy does not increase steadily with the number of atoms.
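The sparse-coding step these learning methods share, representing each sample with a few dictionary atoms, can be sketched with a minimal Orthogonal Matching Pursuit in NumPy. This is an illustrative sketch only (K-SVD and its variants pair sparse coding with an atom-update step); the dictionary and signal below are synthetic.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of D to approximate y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Refit the coefficients on the chosen support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Toy check: a signal built from two atoms is recovered
rng = np.random.default_rng(0)
D = rng.normal(size=(100, 120))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
y = 3.0 * D[:, 5] - 2.0 * D[:, 17]
x = omp(D, y, k=2)
print(np.count_nonzero(x), np.allclose(D @ x, y))
```

With a well-conditioned random dictionary the two generating atoms are found and the signal is reconstructed exactly on the recovered support.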
In the technical field of image processing, a technical problem to be urgently solved at present is to provide a human face image recognition invention with high human face image recognition rate and strong stability.
Disclosure of Invention
The invention aims to overcome the defects of the existing face recognition technology and provides a face recognition method based on mutual information and dictionary learning, which has high face recognition accuracy and stable effect.
The technical scheme adopted for solving the technical problems comprises the following steps:
(1) Image pre-processing
Front face pictures of different classes, under different illumination conditions and expressions, are acquired from The Extended Yale B data set; each person has 59 to 64 pictures. The size of each picture is normalized to 32 × 32; 32 pictures of each person are selected as training samples and the remaining pictures are used as test samples.
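The split above can be sketched as follows. Loading the actual Extended Yale B files is out of scope here, so random arrays stand in for the 32 × 32 images; the function name and the per-sample unit normalization are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def split_per_person(images_by_person, n_train=32, seed=0):
    """Split each person's face images into training/test sets.

    images_by_person: dict person_id -> array of shape (n_i, 32, 32).
    Each image is flattened to a 1024-dim column; n_train go to training, rest to test.
    """
    rng = np.random.default_rng(seed)
    Y_train, Y_test, train_labels, test_labels = [], [], [], []
    for pid, imgs in images_by_person.items():
        order = rng.permutation(len(imgs))
        for k, i in enumerate(order):
            vec = imgs[i].reshape(-1).astype(float)
            vec /= (np.linalg.norm(vec) + 1e-12)   # unit-normalize each sample
            if k < n_train:
                Y_train.append(vec); train_labels.append(pid)
            else:
                Y_test.append(vec); test_labels.append(pid)
    return (np.array(Y_train).T, train_labels, np.array(Y_test).T, test_labels)

# Stand-in data: 3 "people" with 59-63 random 32x32 images each
data = {p: np.random.rand(59 + 2 * p, 32, 32) for p in range(3)}
Y_tr, l_tr, Y_te, l_te = split_per_person(data)
print(Y_tr.shape, Y_te.shape)  # (1024, 96) (1024, 87)
```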
(2) Determining maximum mutual information coefficient MIC
Determining the maximum mutual information coefficient MIC according to the following formula:
MIC(d, c) = max_{|d||c| < S} I(d, c) / log2(min(|d|, |c|)),  where I(d, c) = Σ p(d, c) log2[ p(d, c) / (p(d)p(c)) ]    (1)
where D is the dictionary learned after each iteration, C is the label vector of the training samples, S is the data space size, d is an atom in the dictionary, c is the label vector corresponding to the atom, and p(d, c) represents the joint distribution of (d, c).
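Since formula (1) appears only as an image in the original, the sketch below shows one plausible binned estimate of the mutual information I(d, c) normalized in the MIC style. A full MIC implementation searches over many grid resolutions; here a single fixed grid (8 bins for the continuous variable) is an assumption made for brevity.

```python
import numpy as np

def mutual_information(d, c, bins_d=8):
    """Estimate I(d; c) between a continuous atom d and discrete labels c by binning d."""
    d_bins = np.digitize(d, np.histogram_bin_edges(d, bins=bins_d)[1:-1])
    labels = np.unique(c)
    n = len(d)
    mi = 0.0
    for i in range(bins_d):
        for lab in labels:
            p_dc = np.sum((d_bins == i) & (c == lab)) / n
            if p_dc > 0:
                p_d = np.sum(d_bins == i) / n
                p_c = np.sum(c == lab) / n
                mi += p_dc * np.log2(p_dc / (p_d * p_c))
    return mi

def mic_fixed_grid(d, c, bins_d=8):
    """Normalize MI by log2 of the smaller grid dimension (simplified, fixed grid)."""
    k = min(bins_d, len(np.unique(c)))
    return mutual_information(d, c, bins_d) / np.log2(k)

# Perfectly class-separated atom values give a normalized score of 1
c = np.repeat([0, 1], 50)
d = np.concatenate([np.zeros(50), np.ones(50)]) + 0.01 * np.random.default_rng(1).normal(size=100)
print(round(mic_fixed_grid(d, c), 2))  # 1.0
```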
(3) Design dictionary learning objective function J (D, X)
The objective function J (D, X) is designed as follows:
[Formula (2) is rendered as an image in the original document.]
wherein Y is the picture matrix of the training set and X is a sparse matrix, i.e. a matrix in which the elements equal to 0 far outnumber the non-zero elements and the non-zero elements are distributed without any regularity; ‖·‖_F denotes the Frobenius norm; W holds the weight coefficient W(s, s) between each atom and the corresponding tag vector:
[Formula (3) is rendered as an image in the original document.]
wherein s = 1, …, n; n is the number of atoms in the dictionary and is a finite positive integer; λ is the reference standard for judging the weight parameter value; and λ₁ takes a value from 0.5 to 1.
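Formula (3) itself is an image in the original, so the exact piecewise rule for W(s, s) is not recoverable from the text. Purely for illustration, the sketch below assumes that atoms whose MIC with the label vector reaches the threshold λ receive weight λ₁ and all others weight 1; this rule is a hypothesis, not the patent's formula.

```python
import numpy as np

def build_weight_matrix(mic_scores, lam=0.4, lam1=0.8):
    """Hypothetical diagonal weight matrix W: weight lam1 where MIC >= lam, else 1.

    mic_scores: length-n array of MIC values between each atom and its label vector.
    """
    w = np.where(np.asarray(mic_scores) >= lam, lam1, 1.0)
    return np.diag(w)

W = build_weight_matrix([0.9, 0.2, 0.5], lam=0.4, lam1=0.8)
print(np.diag(W))  # [0.8 1.  0.8]
```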
(4) Initialization of dictionary D and sparse matrix X
The dictionary D and the sparse matrix X are initialized by a conventional method.
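A common "conventional method" is to initialize the dictionary from randomly chosen, normalized training columns and start X at zero; the patent does not specify which method it uses, so this particular choice is an assumption. The dimensions below match the embodiments (38 classes × 32 training images = 1216 samples, n between 798 and 1216).

```python
import numpy as np

def init_dictionary(Y, n_atoms, seed=0):
    """Initialize D from random training columns (unit-normalized) and X at zero."""
    rng = np.random.default_rng(seed)
    cols = rng.choice(Y.shape[1], size=n_atoms, replace=False)
    D = Y[:, cols].astype(float)
    D /= (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)  # unit-norm atoms
    X = np.zeros((n_atoms, Y.shape[1]))
    return D, X

Y = np.random.rand(1024, 1216)   # 32x32 images flattened, 1216 training samples
D, X = init_dictionary(Y, n_atoms=798)
print(D.shape, X.shape)  # (1024, 798) (798, 1216)
```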
(5) Learning dictionary D and sparse matrix X
Learning a dictionary D and a sparse matrix X by:
Fixing the dictionary D^(t−1), formula (2) is rewritten as:
[Formula (4) is rendered as an image in the original document.]
wherein D^(t−1) denotes the dictionary D at iteration t−1 and d_i denotes the i-th atom of the dictionary.
Fixing W and the sparse matrix X and removing the l-th atom from the dictionary D, the error matrix E_l of the training samples is:
E_l = Y − Σ_{j≠l} w_j d_j x_j^T    (5)
wherein w_j is the weight parameter between the j-th column atom d_j and the sample label matrix, and x_j^T denotes the j-th row of the sparse matrix X.
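The atom-by-atom update can be sketched in the K-SVD style: remove atom l, form the weighted error matrix of formula (5), and refit the atom and its coefficient row from the best rank-1 approximation. This version omits the usual restriction to the samples that actually use atom l, so it is a simplified sketch rather than the patent's exact update.

```python
import numpy as np

def update_atom(Y, D, X, w, l):
    """One weighted K-SVD-style atom update (simplified: no support restriction).

    Forms E_l = Y - sum_{j != l} w_j d_j x_j, then refits d_l and row l of X
    from the best rank-1 approximation of E_l.
    """
    E = Y - sum(w[j] * np.outer(D[:, j], X[j, :])
                for j in range(D.shape[1]) if j != l)
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, l] = U[:, 0]                 # new unit-norm atom
    X[l, :] = s[0] * Vt[0, :] / w[l]  # coefficients absorb the weight
    return D, X

# Toy check: with the true atom removed, E_l is rank 1 and is recovered exactly
rng = np.random.default_rng(2)
D = rng.normal(size=(10, 4)); D /= np.linalg.norm(D, axis=0)
X = rng.normal(size=(4, 30))
w = np.array([1.0, 0.8, 1.0, 0.8])
Y = sum(w[j] * np.outer(D[:, j], X[j]) for j in range(4))
D2, X2 = update_atom(Y, D.copy(), X.copy(), w, l=2)
print(np.allclose(sum(w[j] * np.outer(D2[:, j], X2[j]) for j in range(4)), Y))  # True
```

The sign ambiguity of the SVD cancels in the product w_l d_l x_l, so the reconstruction is exact in this rank-1 toy case.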
(6) Recognizing face images
The face image pre (X, α) is identified as follows:
pre(X, α) = (X^T X + βI)^(−1) X H^T α    (6)
wherein α is a test sample; H is the label matrix of the training samples, in which the entry corresponding to the class of each image is set to 1 and all other entries are 0; I is an identity matrix whose numbers of rows and columns are the same as those of the sparse matrix X; and β is a small positive constant.
The row and column indices corresponding to the maximum value of formula (6) identify the face image of the test sample.
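The matrix notation of formula (6) is ambiguous in the flattened text; one consistent reading is an ordinary ridge-regression classifier over the sparse codes, sketched below. The dimensions (an n_atoms × n_train code matrix X, a one-hot label matrix H) and the toy data are illustrative assumptions.

```python
import numpy as np

def train_classifier(X, H, beta=1e-3):
    """Ridge classifier, one reading of formula (6): G = (X X^T + beta I)^{-1} X H^T.

    X: (n_atoms, n_train) sparse codes; H: (n_classes, n_train) one-hot labels.
    A coded test sample alpha is scored by G^T alpha.
    """
    n = X.shape[0]
    return np.linalg.solve(X @ X.T + beta * np.eye(n), X @ H.T)

def predict(G, alpha):
    """Return the class index with the largest score for sparse code alpha."""
    return int(np.argmax(G.T @ alpha))

# Toy check: codes clustered per class are classified back to their class
rng = np.random.default_rng(3)
centers = np.eye(4)                        # 4 classes, 4-dim codes
X = np.hstack([centers[:, [k]] + 0.05 * rng.normal(size=(4, 20)) for k in range(4)])
H = np.repeat(np.eye(4), 20, axis=1)       # one-hot labels, 20 samples per class
G = train_classifier(X, H)
print(predict(G, centers[:, 2]))  # 2
```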
In step (3) of designing the dictionary learning objective function J(D, X), the value of n in formula (3) is 21 to 32 times the number of classes of front face pictures.
In step (3) of designing the dictionary learning objective function J(D, X), the value range of λ is (0, 1).
In step (3) of designing the dictionary learning objective function J(D, X), λ₁ is preferably 0.8.
By determining the maximum mutual information coefficient MIC and designing the dictionary learning objective function J(D, X), the face image classification accuracy of the invention increases steadily with the number of atoms and is higher than that of current dictionary learning face recognition methods. According to the mutual information principle, the invention computes the maximum mutual information coefficient between the dictionary atoms and the training samples and uses it as a reference term, adding a corresponding weight coefficient to each atom during dictionary learning, so that each atom is learned with its own weight. The result is a dictionary with stronger discrimination capability, which further improves the accuracy of face recognition; the method can be used for face image processing.
Drawings
FIG. 1 is a flowchart of example 1 of the present invention.
FIG. 2 is a plot of classification accuracy versus the number of atoms on the Extended Yale B data set.
FIG. 3 is a plot of classification accuracy versus the number of atoms on the LFW data set.
FIG. 4 is a plot of classification accuracy versus the number of atoms on the PIE data set.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, but the present invention is not limited to the embodiments described below.
Example 1
In this embodiment, taking the front face pictures of 38 people under different lighting conditions and expressions in The Extended Yale B data set as an example, the face recognition method based on mutual information and dictionary learning includes the following steps (see FIG. 1):
(1) Image pre-processing
Front face pictures of 38 people under different illumination conditions and expressions are acquired from The Extended Yale B data set; each person has 59 to 64 pictures. The size of each picture is normalized to 32 × 32; 32 pictures of each person are randomly selected as training samples and the remaining pictures are used as test samples.
(2) Determining maximum mutual information coefficient MIC
Determining the maximum mutual information coefficient MIC according to the following formula:
MIC(d, c) = max_{|d||c| < S} I(d, c) / log2(min(|d|, |c|)),  where I(d, c) = Σ p(d, c) log2[ p(d, c) / (p(d)p(c)) ]    (1)
where D is the dictionary learned after each iteration, C is the label vector of the training samples, d is an atom in the dictionary, c is the label vector corresponding to the atom, S is the data space size, and p(d, c) represents the joint distribution of (d, c). The maximum mutual information coefficient quantifies the relationship between two variables well. Introducing the MIC into dictionary learning allows the recognition accuracy to increase steadily with the number of atoms and to exceed that of existing dictionary learning face recognition methods.
(3) Design dictionary learning objective function J (D, X)
The objective function J (D, X) is designed as follows:
[Formula (2) is rendered as an image in the original document.]
wherein Y is the picture matrix of the training set and X is a sparse matrix, i.e. a matrix in which the zero-valued elements outnumber the non-zero elements and the non-zero elements are distributed without regularity; ‖·‖_F denotes the Frobenius norm; W holds the weight coefficient W(s, s) between each atom and the corresponding tag vector:
[Formula (3) is rendered as an image in the original document.]
wherein s = 1, …, n; n is the number of atoms in the dictionary and a multiple of 38; in this embodiment n is selected from 798 to 1216; λ is the reference standard for judging the weight parameter value, with value range (0, 1); here λ is 0.4 and λ₁ is 0.8. The maximum mutual information coefficient is used as a reference term, and a corresponding weight coefficient is added to each atom while designing the dictionary learning objective function J(D, X), so that each atom is learned with its own weight. The result is a dictionary with stronger discrimination capability, which further improves the accuracy of face recognition.
(4) Initialization of dictionary D and sparse matrix X
The dictionary D and the sparse matrix X are initialized by a conventional method.
(5) Learning dictionary D and sparse matrix X
Learning a dictionary D and a sparse matrix X by:
Fixing the dictionary D^(t−1), formula (2) is rewritten as:
[Formula (4) is rendered as an image in the original document.]
wherein D^(t−1) denotes the dictionary D at iteration t−1 and d_i denotes the i-th atom of the dictionary.
Fixing W and the sparse matrix X and removing the l-th atom from the dictionary D, the error matrix E_l of the training samples is:
E_l = Y − Σ_{j≠l} w_j d_j x_j^T    (5)
wherein w_j is the weight parameter between the j-th column atom d_j and the sample label matrix. In the process of learning the dictionary D and the sparse matrix X, the atoms are updated one by one, which reduces the autocorrelation among atoms and further improves the accuracy and stability of face recognition.
(6) Recognizing human face images
The face image pre (X, α) is identified as follows:
pre(X, α) = (X^T X + βI)^(−1) X H^T α    (6)
wherein α is a test sample; H is the label matrix of the training samples, in which the entry corresponding to the class of each image is set to 1 and all other entries are 0; I is an identity matrix whose numbers of rows and columns are the same as those of the sparse matrix X; and β is a small positive constant.
The row and column indices corresponding to the maximum value of formula (6) identify which of the 38 persons' face images the test sample matches. Face recognition is then complete.
Example 2
In this embodiment, taking a front face picture of 38 people with different lighting conditions and expressions in The extendedYale B data set as an example, the face recognition method based on mutual information and dictionary learning includes The following steps:
(1) Image pre-processing
This procedure is the same as in example 1.
(2) Determining maximum mutual information coefficient MIC
This procedure is the same as in example 1.
(3) Design dictionary learning objective function J (D, X)
The objective function J (D, X) is designed as follows:
[Formula (2) is rendered as an image in the original document.]
wherein Y is the picture matrix of the training set and X is a sparse matrix, i.e. a matrix in which the zero-valued elements outnumber the non-zero elements and the non-zero elements are distributed without regularity; W holds the weight coefficient W(s, s) between each atom and the corresponding tag vector:
[Formula (3) is rendered as an image in the original document.]
wherein s = 1, …, n; n is the number of atoms in the dictionary and a multiple of 38; in this embodiment n is selected from 798 to 1216; λ is the reference standard for judging the weight parameter value, with value range (0, 1); in this embodiment λ is 0.1 and λ₁ is 0.51.
The other steps were the same as in example 1. And finishing the face recognition.
Example 3
In this embodiment, taking a front face picture of 38 people with different lighting conditions and expressions in The extendedYale B data set as an example, the face recognition method based on mutual information and dictionary learning includes The following steps:
(1) Image pre-processing
This procedure is the same as in example 1.
(2) Determining maximum mutual information coefficient MIC
This procedure is the same as in example 1.
(3) Design dictionary learning objective function J (D, X)
The objective function J (D, X) is designed as follows:
[Formula (2) is rendered as an image in the original document.]
wherein Y is the picture matrix of the training set and X is a sparse matrix, i.e. a matrix in which the zero-valued elements outnumber the non-zero elements and the non-zero elements are distributed without regularity; W holds the weight coefficient W(s, s) between each atom and the corresponding tag vector:
[Formula (3) is rendered as an image in the original document.]
wherein s = 1, …, n; n is the number of atoms in the dictionary and a multiple of 38; in this embodiment n is selected from 798 to 1216; λ is the reference standard for judging the weight parameter value, with value range (0, 1); in this embodiment λ is 0.99 and λ₁ is 0.98.
The other steps were the same as in example 1. And finishing the face recognition.
In order to verify the beneficial effects of the invention, the inventors compared the face recognition method based on mutual information and dictionary learning of example 1 with the K-SVD, LC-KSVD1, LC-KSVD2, D-KSVD, SRC and SVGDL face recognition methods in simulation experiments on different data sets. The experimental results are shown in Tables 1-4 and FIGS. 2-4.
Table 1 Average recognition rates of example 1 and the 6 comparison face recognition methods on The Extended Yale B data set
[Table 1 is rendered as an image in the original document.]
As can be seen from Table 1, the average recognition rate of the face recognition method of example 1 is 10.9% higher than that of the D-KSVD method, which has the lowest recognition rate, and 0.7% higher than that of the SVGDL method, which has the highest recognition rate among the comparison methods.
Table 2 Average recognition rates of example 1 and the 5 comparison methods on the LFW data set
[Table 2 is rendered as an image in the original document.]
As can be seen from Table 2, the average recognition rate of the face recognition method of example 1 is 13.2% higher than that of the D-KSVD method, which has the lowest recognition rate, and 0.5% higher than that of the K-SVD method, which has the highest recognition rate among the comparison methods.
Table 3 Average recognition rates of example 1 and the 5 comparison methods on the PIE data set
[Table 3 is rendered as an image in the original document.]
As can be seen from Table 3, the average recognition rate of the face recognition method of example 1 is 72.5% higher than that of the LC-KSVD1 method, which has the lowest recognition rate, and 0.5% higher than that of the D-KSVD method, which has the highest recognition rate among the comparison methods.
Table 4 Average recognition rates of example 1 and the 5 comparison methods on the GT data set
[Table 4 is rendered as an image in the original document.]
As can be seen from Table 4, the average recognition rate of the face recognition method of example 1 is 10.9% higher than that of the SVGDL method, which has the lowest recognition rate, and 0.6% higher than that of the D-KSVD method, which has the highest recognition rate among the comparison methods.
As can be seen from FIGS. 2 to 4, the recognition rate of the face recognition method of the present invention increases steadily as the number of atoms increases, and the method exhibits a higher recognition rate when the number of atoms is large.

Claims (4)

1. A face recognition method based on mutual information and dictionary learning is characterized by comprising the following steps:
(1) Image pre-processing
Acquiring front face pictures of different classes under different illumination conditions and expressions in The Extended Yale B data set, each person having 59 to 64 pictures; normalizing the size of each picture to 32 × 32; selecting 32 pictures of each person as training samples and using the remaining pictures as test samples;
(2) Determining maximum mutual information coefficient MIC
Determining the maximum mutual information coefficient MIC according to the following formula:
MIC(d, c) = max_{|d||c| < S} I(d, c) / log2(min(|d|, |c|)),  where I(d, c) = Σ p(d, c) log2[ p(d, c) / (p(d)p(c)) ]    (1)
wherein D is a dictionary learned after each iteration, C is a label vector of a training sample, S is the size of a data space, D is an atom in the dictionary, C is a label vector corresponding to the atom, and p (D, C) represents the joint distribution of (D, C);
(3) Design dictionary learning objective function J (D, X)
The objective function J (D, X) is designed as follows:
[Formula (2) is rendered as an image in the original document.]
wherein Y is the picture matrix of the training set and X is a sparse matrix, i.e. a matrix in which the elements equal to 0 far outnumber the non-zero elements and the non-zero elements are distributed without any regularity; ‖·‖_F denotes the Frobenius norm; W holds the weight coefficient W(s, s) between each atom and the corresponding tag vector:
[Formula (3) is rendered as an image in the original document.]
wherein s = 1, …, n; n denotes the number of atoms in the dictionary and is a finite positive integer; λ is the reference standard for judging the weight parameter value; and λ₁ takes a value from 0.5 to 1;
(4) Initialization of dictionary D and sparse matrix X
Initializing a dictionary D and a sparse matrix X by adopting a conventional method;
(5) Learning dictionary D and sparse matrix X
Learning a dictionary D and a sparse matrix X by:
fixing the dictionary D^(t−1), formula (2) is rewritten as:
[Formula (4) is rendered as an image in the original document.]
wherein D^(t−1) denotes the dictionary D at iteration t−1 and d_i denotes the i-th atom of the dictionary;
fixing W and the sparse matrix X and removing the l-th atom from the dictionary D, the error matrix E_l of the training samples is:
E_l = Y − Σ_{j≠l} w_j d_j x_j^T    (5)
wherein w_j is the weight parameter between the j-th column atom d_j and the sample label matrix, and x_j^T denotes the j-th row of the sparse matrix X;
(6) Recognizing face images
The face image pre (X, α) is identified as follows:
pre(X, α) = (X^T X + βI)^(−1) X H^T α    (6)
wherein α is a test sample; H is the label matrix of the training samples, in which the entry corresponding to the class of each image is set to 1 and all other entries are 0; I is an identity matrix whose numbers of rows and columns are the same as those of the sparse matrix X; and β is a small positive constant; the row and column indices corresponding to the maximum value of formula (6) identify the face image of the test sample.
2. The face recognition method based on mutual information and dictionary learning according to claim 1, characterized in that: in step (3) of designing the dictionary learning objective function J(D, X), the value of n in formula (3) is a multiple of 38 and is selected from 798 to 1216.
3. The face recognition method based on mutual information and dictionary learning according to claim 1, characterized in that: in step (3) of designing the dictionary learning objective function J(D, X), the value range of λ is (0, 1).
4. The face recognition method based on mutual information and dictionary learning according to claim 1, characterized in that: in step (3) of designing the dictionary learning objective function J(D, X), said λ₁ is 0.8.
CN202010912544.0A 2020-09-03 2020-09-03 Face recognition method based on mutual information and dictionary learning Active CN112069978B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010912544.0A CN112069978B (en) 2020-09-03 2020-09-03 Face recognition method based on mutual information and dictionary learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010912544.0A CN112069978B (en) 2020-09-03 2020-09-03 Face recognition method based on mutual information and dictionary learning

Publications (2)

Publication Number Publication Date
CN112069978A CN112069978A (en) 2020-12-11
CN112069978B true CN112069978B (en) 2023-04-07

Family

ID=73666375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010912544.0A Active CN112069978B (en) 2020-09-03 2020-09-03 Face recognition method based on mutual information and dictionary learning

Country Status (1)

Country Link
CN (1) CN112069978B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115049814B (en) * 2022-08-15 2022-11-08 聊城市飓风工业设计有限公司 Intelligent eye protection lamp adjusting method adopting neural network model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831614A (en) * 2012-09-10 2012-12-19 西安电子科技大学 Sequential medical image quick segmentation method based on interactive dictionary migration
CN110766695A (en) * 2019-09-26 2020-02-07 山东工商学院 Image matting algorithm research based on sparse representation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9411800B2 (en) * 2008-06-27 2016-08-09 Microsoft Technology Licensing, Llc Adaptive generation of out-of-dictionary personalized long words

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831614A (en) * 2012-09-10 2012-12-19 西安电子科技大学 Sequential medical image quick segmentation method based on interactive dictionary migration
CN110766695A (en) * 2019-09-26 2020-02-07 山东工商学院 Image matting algorithm research based on sparse representation

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Histopathological image classification through discriminative feature learning and mutual information-based multi-channel joint sparse representation; Xiao Li et al.; J. Vis. Commun. Image R.; 2020-03-29; pp. 1-11 *
Information-theoretic Dictionary Learning for Image Classification; Qiu Q et al.; IEEE Transactions on Pattern Analysis & Machine Intelligence; 2014-12-31; pp. 2173-2184 *
Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification; Idit Diamant et al.; IEEE Transactions on Biomedical Engineering; 2017-06-30; pp. 1380-1392 *
Multi-channel joint sparse model based on mutual information and its application to histopathological image classification; Tang Hongzhong et al.; Journal of Computer-Aided Design & Computer Graphics; 2018-08-31; pp. 1514-1521 *
Incremental dictionary learning algorithm for classification; Zhang Zhiwu et al.; Computer Engineering; 2017-10-31; pp. 167-185 *

Also Published As

Publication number Publication date
CN112069978A (en) 2020-12-11

Similar Documents

Publication Publication Date Title
Kim et al. Effective representation using ICA for face recognition robust to local distortion and partial occlusion
CN108664911B (en) Robust face recognition method based on image sparse representation
CN112116017B (en) Image data dimension reduction method based on kernel preservation
CN112368708B (en) Facial image recognition using pseudo-images
CN110659665B (en) Model construction method of different-dimension characteristics and image recognition method and device
CN108416374B (en) Non-negative matrix factorization method based on discrimination orthogonal subspace constraint
CN111695456A (en) Low-resolution face recognition method based on active discriminability cross-domain alignment
CN111324791B (en) Multi-view data subspace clustering method
CN110889865A (en) Video target tracking method based on local weighted sparse feature selection
CN110717519A (en) Training, feature extraction and classification method, device and storage medium
Jin et al. Multiple graph regularized sparse coding and multiple hypergraph regularized sparse coding for image representation
Xu et al. Face recognition by fast independent component analysis and genetic algorithm
CN110399814B (en) Face recognition method based on local linear representation field adaptive measurement
CN106803105B (en) Image classification method based on sparse representation dictionary learning
CN112069978B (en) Face recognition method based on mutual information and dictionary learning
CN108388918B (en) Data feature selection method with structure retention characteristics
CN111325275A (en) Robust image classification method and device based on low-rank two-dimensional local discriminant map embedding
Jena et al. Implementation of linear discriminant analysis for Odia numeral recognition
CN108121964B (en) Matrix-based joint sparse local preserving projection face recognition method
CN111723759B (en) Unconstrained face recognition method based on weighted tensor sparse graph mapping
CN111310807B (en) Feature subspace and affinity matrix joint learning method based on heterogeneous feature joint self-expression
CN106909944B (en) Face picture clustering method
CN112966735A (en) Supervision multi-set correlation feature fusion method based on spectral reconstruction
CN116884067A (en) Micro-expression recognition method based on improved implicit semantic data enhancement
CN109063766B (en) Image classification method based on discriminant prediction sparse decomposition model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant