CN108520215B - Single-sample face recognition method based on multi-scale joint feature encoder


Info

Publication number
CN108520215B
CN108520215B (application CN201810262820.6A)
Authority
CN
China
Prior art keywords
face
feature
image
matched
scale
Prior art date
Legal status
Expired - Fee Related
Application number
CN201810262820.6A
Other languages
Chinese (zh)
Other versions
CN108520215A (en)
Inventor
刘欣刚
梅永勇
余涛
严敬仁
王艺豪
代成
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201810262820.6A
Publication of CN108520215A
Application granted
Publication of CN108520215B
Status: Expired - Fee Related

Classifications

    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/24 Classification techniques
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification

Abstract

The invention discloses a single-sample face recognition method based on a multi-scale joint feature encoder, comprising the following steps: image preprocessing; face region division; multi-scale independent region feature learning, in which each face region is trained separately to obtain an independent feature coding matrix; joint feature learning, in which the multi-scale independent feature coding matrices of all regions are decomposed and clustered to obtain joint feature coding matrices; feature extraction, in which the features of a face image are extracted with the multi-scale joint feature coding matrices; and feature recognition, in which the features extracted from each region are compared to obtain the recognition result. The invention provides a joint feature learning method based on sparse auto-encoding that can learn robust face features from limited face training data, addresses face recognition under occlusion and illumination interference, and greatly broadens the application scenarios of face recognition.

Description

Single-sample face recognition method based on multi-scale joint feature encoder
Technical Field
The invention relates to the technical field of face recognition, in particular to a single-sample face recognition method based on a multi-scale joint feature encoder.
Background
Because of its uniqueness and non-replicability, the human face is an important biometric trait for identity recognition. Face recognition technology is widely used across security applications because it is contactless and the data are easy to acquire. In practical applications, however, complex scenes and the single-sample constraint abound, so efficient and broadly applicable face recognition algorithms have become an increasingly active research topic.
In real application scenarios, face recognition is often disturbed by occlusion, illumination, pose and similar factors; traditional face recognition methods tend to be simplistic and inefficient and cannot cope with complex scenes. Moreover, most targets to be identified in practice have only one or a few face sample images, whereas the currently popular deep learning methods require large numbers of training images. New methods are therefore urgently needed to address these problems.
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art and to provide a more accurate and more convenient face recognition method for complex scenes and single-sample face data; to this end, the invention provides a single-sample face recognition method based on a multi-scale joint feature encoder.
The technical scheme of the invention is as follows: a single-sample face recognition method based on a multi-scale joint feature encoder comprises the following steps:
training a multi-scale joint feature encoder:
performing image preprocessing on the face image, including size normalization, grayscale conversion and image enhancement;
dividing the preprocessed face image into a plurality of face areas with the same size and without overlap;
traversing and sampling each face region with templates of different scales to obtain a training data set for each face region at each scale; performing feature coding training on the training data set of each face region at each scale, based on a preset image feature coding mode (any conventional coding mode may be used), to obtain independent feature coding matrices of the different face regions at the different scales, thereby completing multi-scale independent region feature learning;
decomposing each independent feature coding matrix into a plurality of feature coding vectors using the same decomposition number, so that every independent feature coding matrix yields the same number of feature coding vectors; the feature coding vectors of one face region at one scale form that region's feature coding vector set at that scale (referred to for short as the region's features at that scale); the features of all face regions at the same scale are then fused into a joint feature coding matrix at the current scale, giving joint feature coding matrices at the different scales; the multi-scale joint feature encoder is obtained from the preset image feature coding mode together with the joint feature coding matrices at the different scales;
carrying out feature extraction on the face image to be recognized and the face image to be matched:
after performing image preprocessing on the face image to be recognized and the face images to be matched (using the same preprocessing as when training the multi-scale joint feature encoder), dividing each of them into face regions to be recognized and face regions to be matched, the number of regions being the same as the number of face regions used when training the multi-scale joint feature encoder;
traversing and sampling templates with different scales for each face region to obtain a data set to be recognized and a data set to be matched of each face region under different scales;
extracting features of a data set to be recognized based on a multi-scale joint feature encoder to obtain a feature matrix of a face image to be recognized under different scales;
extracting features of the data set to be matched with the multi-scale joint feature encoder to obtain feature matrices of the face image to be matched at the different scales;
calculating the matching degree of the face image to be recognized and the face image to be matched:
based on the feature matrices at each scale, calculating the similarity of the image to be recognized and the image to be matched in the same face region from the vector distance; the preferred similarity formula is

s_t(i, j) = [formula rendered as an image in the original document]

where s_t(i, j) denotes the similarity of the face image i to be recognized and the face image j to be matched in face region t; d_t^{ij} denotes the feature distance extracted from the two images in face region t, i.e. d_t^{ij} = d(f_t^i, f_t^j), where d(·) is a distance function and f_t^i, f_t^j are the features of the face image i to be recognized and the face image j to be matched in face region t; M denotes the number of face images to be matched; k is an identifier of a face image to be matched; and ε is a smoothing parameter, usually a number greater than 0, preferably 0.1;
and calculating the overall similarity of the face image to be recognized and the face image to be matched according to preset weights of the different face regions; that is, the matching degree of the two images is the weighted sum of the similarities over all face regions.
Preferably, the weight of each face region is set according to the following formula:

w_t = [weight formula rendered as an image in the original document]
To further improve recognition accuracy, the overall similarity is calculated after screening the face regions of the face image to be recognized: each face region of the image to be recognized is input into a face region classifier trained to decide whether a region is a normal face region; abnormal face regions (e.g., regions affected by occlusion, illumination or expression) are discarded, and the weighted sum of region similarities is taken over the normal face regions only to obtain the overall similarity.
In addition, a similarity threshold can be set: face images to be matched that score below the threshold are treated as abnormal matching results, and only matching results not lower than the threshold are output.
In summary, the technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
the face region features under different scales are learned through a region independent feature coding technology to obtain face information under different scales, then all independent coding matrixes are decomposed, and clustering reconstruction is carried out to obtain global combined features, so that the single-sample face information representation capability is stronger. Through the calculation of the similarity of each region, each region is weighted, and the beneficial region suppression interference region is enhanced, so that the face recognition rate in a complex scene is improved.
Drawings
FIG. 1: schematic diagram of the training principle of the multi-scale joint feature encoder of the invention.
FIG. 2: schematic diagram of the testing principle in the embodiment of the invention.
FIG. 3: schematic diagram of the face region division in the embodiment of the invention.
FIG. 4: schematic diagram of the sampling scales in the embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the embodiments and the accompanying drawings.
Examples
1. Referring to fig. 1, the specific implementation steps of training the multi-scale joint feature encoder in the present invention are as follows:
S1: Preprocessing of the training-set face images:
S11: convert the image from color space to grayscale;
S12: apply the image enhancement operation.
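For illustration, steps S11-S12 can be sketched in Python (NumPy only). The 128 × 128 normalization size, the nearest-neighbour resizing, and global histogram equalization as the enhancement operation are assumptions; the patent fixes none of them:

```python
import numpy as np

def preprocess_face(img_rgb, out_size=(128, 128)):
    """Sketch of S11-S12: grayscale conversion plus a simple enhancement.
    out_size and histogram equalization are illustrative assumptions."""
    # S11: luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    gray = (img_rgb[..., :3] @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)
    # Size normalization by nearest-neighbour sampling (a real system would
    # use proper resampling and face alignment).
    rows = np.linspace(0, gray.shape[0] - 1, out_size[0]).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, out_size[1]).astype(int)
    gray = gray[np.ix_(rows, cols)]
    # S12: global histogram equalization as the image-enhancement operation.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    return (cdf[gray] * 255).astype(np.uint8)
```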
S2: Region division of the training-set face images; the face is divided into non-overlapping regions as follows:
S21: the training-set face data are unlabeled, and the training faces may or may not share identity classes with the test-set face data;
S22: divide the preprocessed face image into T non-overlapping regions of the same size (see fig. 3);
S23: traverse and sample each region with an m × m pixel template to obtain a training data set; the data set of the t-th region is X^t_{m×m} = {x^t_1, x^t_2, …, x^t_i}, where t denotes the sampling region, i the number of samples in region t, and x the sampled data (sampling template: see fig. 4);
S24: traverse and sample each region with an m × 2m pixel template to obtain a training data set; the data set of the t-th region is X^t_{m×2m}, with t, i and x as above (see fig. 4);
S25: traverse and sample each region with a 2m × m pixel template to obtain a training data set; the data set of the t-th region is X^t_{2m×m}, with t, i and x as above (see fig. 4).
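A minimal sketch of S22-S25 follows; the 4 × 4 region grid, the template width m = 8 and the sampling stride of 1 pixel are illustrative assumptions not fixed by the patent:

```python
import numpy as np

def split_regions(face, grid=(4, 4)):
    """S22: split a preprocessed face into T = grid[0] * grid[1] equal,
    non-overlapping regions (cf. fig. 3). The 4 x 4 grid is an assumption."""
    H, W = face.shape
    h, w = H // grid[0], W // grid[1]
    return [face[r*h:(r+1)*h, c*w:(c+1)*w]
            for r in range(grid[0]) for c in range(grid[1])]

def sample_patches(region, tpl_h, tpl_w, stride=1):
    """S23-S25: traverse one region with a tpl_h x tpl_w template and stack
    the flattened patches as the rows of the data matrix X^t at that scale."""
    H, W = region.shape
    return np.asarray([region[r:r+tpl_h, c:c+tpl_w].ravel()
                       for r in range(0, H - tpl_h + 1, stride)
                       for c in range(0, W - tpl_w + 1, stride)],
                      dtype=np.float64)

m = 8                                   # illustrative template width
SCALES = {"mxm": (m, m), "mx2m": (m, 2 * m), "2mxm": (2 * m, m)}
# X[s][t] = sample_patches(regions[t], *SCALES[s]) for every region t, scale s
```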
S3: Independent region feature learning: each face region is trained with a preset feature coding mode (e.g., sparse auto-encoding) to obtain an independent feature coding matrix:
S31: for the data X^t_{m×m} of each region at the m × m scale, train separately to obtain the independent feature coding matrices W^t_{m×m};
S32: for the data X^t_{m×2m} of each region at the m × 2m scale, train separately to obtain the independent feature coding matrices W^t_{m×2m};
S33: for the data X^t_{2m×m} of each region at the 2m × m scale, train separately to obtain the independent feature coding matrices W^t_{2m×m}.
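As a hedged sketch of S31-S33: the patent only requires a preset feature coding mode "such as sparse automatic coding", so scikit-learn's mini-batch dictionary learning serves below as a stand-in sparse coder, not as the patented encoder; the 64-atom codebook size is likewise an assumption:

```python
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_independent_matrix(X_t, n_atoms=64):
    """S31-S33 sketch: learn an independent feature coding matrix W^t for one
    region at one scale. Dictionary learning stands in for the unspecified
    sparse auto-encoding; the rows of W^t are the learned coding vectors."""
    coder = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0)
    coder.fit(X_t)
    return coder.components_            # shape (n_atoms, patch_dim)

# W[s][t] = train_independent_matrix(X[s][t]) for every scale s and region t
```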
S4: Joint feature learning: the mutually independent feature coding matrices of the regions are decomposed and learned by clustering to obtain the joint feature coding matrices:
S41: decompose all independent feature coding matrices into feature coding vectors; to simplify the description, let W_t denote the independent feature coding matrix currently being decomposed; the decomposition step is

W_t → Ω_t = {ω^t_1, ω^t_2, …, ω^t_k},

where W_t is the independent feature coding matrix of the t-th face region at the current scale, Ω_t is the set of feature coding vectors of the t-th face region, i indexes the decomposed feature coding vectors, k is the number of decomposed feature coding vectors, and ω denotes a feature coding vector;
S42: fuse all feature coding vectors into a joint feature coding matrix with a clustering algorithm, e.g.

W_u = cluster(Ω_1 ∪ Ω_2 ∪ … ∪ Ω_T),

where Ω_t is the feature coding vector set of the t-th face region and W_u is the joint feature coding matrix at the current scale; in this embodiment, W^{m×m}_u, W^{2m×m}_u and W^{m×2m}_u denote the joint feature coding matrices at the m × m, 2m × m and m × 2m scales, respectively.
The multi-scale joint feature encoder is obtained from W^{m×m}_u, W^{2m×m}_u, W^{m×2m}_u and the preset feature coding mode.
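S41-S42 might be sketched as follows, with k-means as one concrete choice for the unspecified clustering algorithm; the decomposition step simply reads off the k row vectors that each independent matrix already stores:

```python
import numpy as np
from sklearn.cluster import KMeans

def fuse_joint_matrix(W_per_region, k=64):
    """S41: pool the feature coding vectors (rows) of all T regions at one
    scale; S42: cluster them back into k centroids, which form the joint
    feature coding matrix W_u at that scale. k is the decomposition number."""
    omega_pool = np.vstack(W_per_region)          # union of all Omega_t sets
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(omega_pool)
    return km.cluster_centers_                    # W_u, shape (k, patch_dim)

# W_u[s] = fuse_joint_matrix([W[s][t] for t in range(T)]) for every scale s
```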
2. Referring to fig. 2, the single-sample face recognition processing based on the multi-scale joint feature encoder is implemented by the following steps:
S1: Preprocessing of the sample-set and test face images:
S11: convert the image from color space to grayscale;
S12: apply the image enhancement operation.
S2: Region division of the sample-set and test face images; the face is divided into non-overlapping regions as follows:
S21: divide the preprocessed face image into T non-overlapping regions of the same size (see fig. 3);
S22: traverse and sample each region with an m × m pixel template; the data set of the t-th region is X^t_{m×m} = {x^t_1, x^t_2, …, x^t_i}, where t denotes the sampling region, i the number of samples in region t, and x the sampled data (sampling template: see fig. 4);
S23: traverse and sample each region with an m × 2m pixel template, giving the data set X^t_{m×2m} of the t-th region, with t, i and x as above (see fig. 4);
S24: traverse and sample each region with a 2m × m pixel template, giving the data set X^t_{2m×m} of the t-th region, with t, i and x as above (see fig. 4).
S3: Extract the face features of the sample set and test image with the multi-scale joint feature coding matrices: at each scale, use the trained joint feature coding matrix to extract features from the sampled data of every region.
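A minimal sketch of this extraction step, assuming a plain linear projection onto the joint matrix followed by mean pooling over patches; the patent itself delegates the coding operation to the preset feature coding mode:

```python
import numpy as np

def extract_region_feature(X_t, W_u):
    """Encode the sampled patches of one region with the joint matrix W_u and
    pool them into a single region feature f_t. Linear projection plus mean
    pooling is an illustrative stand-in for the unspecified coding step."""
    codes = X_t @ W_u.T                  # response of each patch to each code word
    f_t = codes.mean(axis=0)             # pool patch codes into one vector
    return f_t / (np.linalg.norm(f_t) + 1e-12)   # normalize for stable distances
```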
S4: Calculate the feature distance of corresponding regions between the tested face and all sample faces:
S41: the feature distance extracted from the corresponding face region of sample face i and test face j is

d_t^{ij} = d(f_t^i, f_t^j),

where f_t^i denotes the features of face i in region t;
S5: Compute the similarity of corresponding regions between the test face and all sample faces; the similarity formula is

s_t(i, j) = [formula rendered as an image in the original document]

where s_t(i, j) denotes the similarity of faces i and j in region t, d_t^{ij} denotes the feature distance extracted in region t, M denotes the number of sample faces to compare against (for image-class recognition, the total number of classes contained in the sample set), and ε is a smoothing parameter, usually taken as 0.1;
S6: Judge with the classifier whether each region is a normal face region (see fig. 2), discard abnormal face regions, such as regions affected by occlusion, illumination or expression, and compute the weighted overall face similarity over the remaining regions as

s(i, j) = Σ_{t=1}^{N} w_t · s_t(i, j),

where w_t is the weight of region t in the recognition, s_t(i, j) is the similarity of faces i and j in region t, and N is the number of normal face regions.
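A sketch of the S6 weighted fusion, where normal_mask stands for the (assumed Boolean) verdict of the face region classifier:

```python
import numpy as np

def overall_similarity(s_regions, weights, normal_mask):
    """S6 sketch: zero out regions flagged abnormal by the classifier, then
    take the weighted sum s(i, j) = sum_t w_t * s_t(i, j) over the remaining
    N normal regions. Renormalizing the weights after discarding regions is
    an assumption, not stated in the patent."""
    w = np.asarray(weights, dtype=np.float64) * np.asarray(normal_mask, dtype=np.float64)
    w /= (w.sum() + 1e-12)
    return float(np.dot(w, np.asarray(s_regions, dtype=np.float64)))
```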
In this embodiment, recognition of the image class is performed as follows: to further improve recognition accuracy, a similarity threshold is trained; if the best similarity is below the threshold, the class of the test image is judged not to exist in the sample set; otherwise, the sample class with the largest similarity to the test face is the recognition result.
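The final decision rule can be sketched as follows; the threshold value is trained and not disclosed in the patent, so it appears here as a free parameter:

```python
import numpy as np

def identify(overall_scores, threshold):
    """Pick the sample class with the largest overall similarity, unless even
    the best score is below the trained threshold, in which case the test
    face is judged absent from the sample set (returns None)."""
    best = int(np.argmax(overall_scores))
    return None if overall_scores[best] < threshold else best
```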
Under the single-sample premise, the face recognition algorithm was tested on the partially occluded, illumination-change and expression-change face sequences contained in the AR face database, reaching an average recognition rate of 95.85%, versus 94.38% for current comparable methods; also under the single-sample premise, it was tested on the illumination-change face sequences contained in the Yale_B face database, reaching an average recognition rate of 93.66%, versus 82.22% for current comparable methods. The method improves the accuracy of single-sample face recognition in complex scenes and has good prospects for popularization and application.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (3)

1. A single-sample face recognition method based on a multi-scale joint feature encoder, characterized by comprising the following steps:
training a multi-scale joint feature encoder:
carrying out image preprocessing on the face image;
dividing the preprocessed face image into a plurality of face regions of the same size without overlap;
traversing and sampling each face region with templates of different scales to obtain a training data set for each face region at each scale; performing feature coding training on the training data set of each face region at each scale based on a preset image feature coding mode to obtain independent feature coding matrices of the different face regions at the different scales; the template sizes at the different scales include m × m, m × 2m and 2m × m, where m denotes the width of the template;
decomposing each independent feature coding matrix into a plurality of feature coding vectors using the same decomposition number, so that every independent feature coding matrix yields the same number of feature coding vectors; the feature coding vectors of one face region at one scale form that region's feature coding vector set at that scale; the feature coding vector sets of all face regions at the same scale are then fused into a joint feature coding matrix at the current scale, giving joint feature coding matrices at the different scales;
obtaining a multi-scale joint feature encoder based on a preset image feature encoding mode and joint feature encoding matrixes with different scales;
carrying out feature extraction on the face image to be recognized and the face image to be matched:
after performing image preprocessing on the face image to be recognized and the face image to be matched, dividing each of them into face regions to be recognized and face regions to be matched, the number of regions being the same as the number of face regions used when training the multi-scale joint feature encoder;
traversing and sampling templates with different scales for each face region to obtain a data set to be recognized and a data set to be matched of each face region under different scales;
performing feature extraction on a data set to be recognized based on a multi-scale joint feature encoder to obtain a feature matrix of a face image to be recognized under different scales;
extracting features of the data set to be matched with the multi-scale joint feature encoder to obtain feature matrices of the face image to be matched at the different scales;
calculating the matching degree of the face image to be recognized and the face image to be matched:
based on the feature matrices at each scale, calculating the similarity of the image to be recognized and the image to be matched in the same face region from the vector distance:

s_t(i, j) = [formula rendered as an image in the original document]

where s_t(i, j) denotes the similarity of the face image i to be recognized and the face image j to be matched in face region t; d_t^{ij} denotes the feature distance extracted from the two images in face region t, i.e. d_t^{ij} = d(f_t^i, f_t^j), where d(·) is a distance function and f_t^i, f_t^j respectively denote the features of the face image i to be recognized and the face image j to be matched in face region t; M denotes the number of face images to be matched; k is an identifier of a face image to be matched; and ε is a smoothing parameter;

calculating the weight w_t of face region t according to the formula

w_t = [weight formula rendered as an image in the original document];

weighting the similarities of the face regions to obtain the matching degree of the face image to be recognized and the face image to be matched, and taking the face image to be matched with the largest matching degree as the recognition result.
2. The method according to claim 1, further comprising screening the face image to be recognized for normal face regions, and performing the weighted summation of face region similarities over the normal face regions only to obtain the matching degree between the face image to be recognized and the face image to be matched.
3. The method of claim 1, wherein the image preprocessing comprises: size normalization, grayscale conversion and image enhancement processing.
CN201810262820.6A 2018-03-28 2018-03-28 Single-sample face recognition method based on multi-scale joint feature encoder Expired - Fee Related CN108520215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810262820.6A CN108520215B (en) 2018-03-28 2018-03-28 Single-sample face recognition method based on multi-scale joint feature encoder


Publications (2)

Publication Number Publication Date
CN108520215A CN108520215A (en) 2018-09-11
CN108520215B true CN108520215B (en) 2022-10-11

Family

ID=63434356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810262820.6A Expired - Fee Related CN108520215B (en) 2018-03-28 2018-03-28 Single-sample face recognition method based on multi-scale joint feature encoder

Country Status (1)

Country Link
CN (1) CN108520215B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109766857A (en) * 2019-01-16 2019-05-17 嘉兴学院 A kind of three-dimensional face identification method based on semantic alignment multizone template fusion
CN109951289B (en) * 2019-01-25 2021-01-12 北京三快在线科技有限公司 Identification method, device, equipment and readable storage medium
CN109919888B (en) * 2019-02-26 2023-09-19 腾讯科技(深圳)有限公司 Image fusion method, model training method and related device
CN110210321B (en) * 2019-05-08 2020-11-24 华南理工大学 Under-sample face recognition method based on multi-dimensional scale transformation network and block weighting method
CN111274883B (en) * 2020-01-10 2023-04-25 杭州电子科技大学 Synthetic sketch face recognition method based on multi-scale HOG features and deep features
CN115601484B (en) * 2022-11-07 2023-03-28 广州趣丸网络科技有限公司 Virtual character face driving method and device, terminal equipment and readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521561A (en) * 2011-11-16 2012-06-27 湖南大学 Face identification method on basis of multi-scale weber local features and hierarchical decision fusion
CN103310236A (en) * 2013-06-27 2013-09-18 上海数据分析与处理技术研究所 Mosaic image detection method and system based on local two-dimensional characteristics
CN103632132A (en) * 2012-12-11 2014-03-12 广西工学院 Face detection and recognition method based on skin color segmentation and template matching
CN103745207A (en) * 2014-01-27 2014-04-23 中国科学院深圳先进技术研究院 Feature extraction method and device for human face identification
CN104680158A (en) * 2015-03-31 2015-06-03 盐城工学院 Face recognition method based on multi-scale block partial multi-valued mode
CN104992223A (en) * 2015-06-12 2015-10-21 安徽大学 Dense population estimation method based on deep learning
CN105975931A (en) * 2016-05-04 2016-09-28 浙江大学 Convolutional neural network face recognition method based on multi-scale pooling
CN106845362A (en) * 2016-12-27 2017-06-13 湖南长城信息金融设备有限责任公司 A kind of face identification method of the rarefaction representation based on multi-scale transform
CN106897700A (en) * 2017-02-27 2017-06-27 苏州大学 A kind of single sample face recognition method and system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592136B (en) * 2011-12-21 2013-10-16 东南大学 Three-dimensional human face recognition method based on intermediate frequency information in geometry image
CN103514432B (en) * 2012-06-25 2017-09-01 诺基亚技术有限公司 Face feature extraction method, equipment and computer program product
WO2015127394A1 (en) * 2014-02-23 2015-08-27 Northeastern University System for beauty, cosmetic, and fashion analysis
US9858502B2 (en) * 2014-03-31 2018-01-02 Los Alamos National Security, Llc Classification of multispectral or hyperspectral satellite imagery using clustering of sparse approximations on sparse representations in learned dictionaries obtained using efficient convolutional sparse coding
CN103902977B (en) * 2014-03-31 2017-04-05 华为技术有限公司 Face identification method and device based on Gabor binary patterns
EP3029606A3 (en) * 2014-11-14 2016-09-14 Thomson Licensing Method and apparatus for image classification with joint feature adaptation and classifier learning
CN105447441B (en) * 2015-03-19 2019-03-29 北京眼神智能科技有限公司 Face authentication method and device
CN107273824B (en) * 2017-05-27 2020-04-07 西安电子科技大学 Face recognition method based on multi-scale and multi-direction local binary pattern


Also Published As

Publication number Publication date
CN108520215A (en) 2018-09-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20221011