CN106203356A - A kind of face identification method based on convolutional network feature extraction - Google Patents

A kind of face identification method based on convolutional network feature extraction

Info

Publication number
CN106203356A
CN106203356A
Authority
CN
China
Prior art keywords
matrix
image
recovery
rank
face recognition
Prior art date
Legal status
Granted
Application number
CN201610555256.8A
Other languages
Chinese (zh)
Other versions
CN106203356B (en)
Inventor
赵建伟
吕永标
曹飞龙
周正华
Current Assignee
China Jiliang University
Original Assignee
China Jiliang University
Priority date
Filing date
Publication date
Application filed by China Jiliang University
Priority to CN201610555256.8A
Publication of CN106203356A
Application granted
Publication of CN106203356B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification


Abstract

The present invention relates to a technique for restoring and recognizing missing face images, and in particular to a face recognition method based on convolutional network feature extraction; it belongs to the field of biometric feature recognition. First, the original missing image is subjected to matrix recovery using a truncated nuclear norm algorithm, yielding a recovery matrix in which most of the missing information is restored. A low-rank matrix decomposition algorithm then extracts low-rank information from the recovery matrix, and the extracted vectors are converted into matrix-form convolution kernels. A convolutional network then extracts and encodes features of the recovered images, giving a final feature for each image, and finally an SVM is trained on the feature samples and performs classification and recognition. The invention avoids the low recognition rates that missing images cause in traditional methods, and achieves good results on missing images from different databases.

Description

Face recognition method based on convolutional network feature extraction
Technical Field
The invention relates to a technique for restoring and identifying missing face images, and in particular to a face recognition method based on convolutional network feature extraction; it belongs to the field of biometric feature recognition.
Background
The human face is the most important and most direct carrier of human emotional expression and communication; information such as a person's race, region, identity, and status can be inferred from the face, making it the most distinctive feature for telling people apart. Face images can be acquired without contacting the subject and without alarming or antagonizing the subject, using nothing more than an ordinary camera, a digital camera, or a phone's built-in camera, so face recognition systems are very convenient to use, and research on face recognition is of great significance. With the advent and development of high-speed, high-performance computers, the scientific community has studied the human face from many disciplines, including computer graphics, image processing, computer vision, and anthropology. Many researchers have investigated face recognition and proposed practical recognition methods, mainly including:
(1) In the patent "Face recognition method and system" (publication number CN103955681A), a filtering module filters an input face image to obtain the face image to be recognized. A nearest-neighbor classification module then searches an image database for a template image matching the face image to be recognized, where each template image was obtained by filtering an original template image with the same filtering module, and the class of the matched template image is taken as the class of the face image.
Disadvantage: because the method applies a filtering feature transform directly to the image, it sacrifices face recognition accuracy for recognition efficiency, especially when the image is partially missing or occluded.
(2) In the patent "Face recognition method and system" (publication number CN105095829A), face features to be recognized are extracted from an acquired face image. Based on these features, the method detects whether an occluding object is present in the acquired image; if so, the occlusion is extracted from the face image and the method checks whether a matching reference face image exists in a face image library. If a match exists, face recognition succeeds; otherwise it fails.
Disadvantage: although occlusion is taken into account, the occluded part of the image still loses its information, which reduces recognition accuracy, especially when a key facial region is occluded.
Disclosure of Invention
The invention aims to provide a recognition method for missing face images based on convolutional network feature extraction. First, matrix recovery is performed on the input missing face image using the truncated nuclear norm, so that most of the missing information is restored. Low-rank information is then extracted from the recovered image, the low-rank information is converted into convolution kernels, and the recovered image is convolved to extract feature maps. The feature maps are binarized and summarized with block histograms to obtain the final features, and finally the face images are recognized with an SVM (support vector machine).
In order to achieve the purpose, the invention provides the following technical scheme:
the invention provides an incomplete-face recognition method based on a convolutional neural network, which comprises the following steps:
Step one: incomplete face recovery
(1.1) take N original incomplete face images as input; first determine whether each image is a grayscale or a color image: a grayscale image is selected directly as a missing matrix, while for a color image each of its R, G, B color channels is selected as a missing matrix; the missing matrices are denoted I_i, i = 1, ..., N;
(1.2) for each selected missing matrix I_i, perform a matrix recovery operation using the truncated nuclear norm method to obtain a recovered image matrix, the recovery matrix X_i;
Step two: face feature extraction
(2.1) extract the obtained recovery matrices X_i block by block, arrange the blocks in order, and extract a low-rank matrix;
(2.2) from the obtained low-rank matrix, extract eigenvectors and convert them into L_1 convolution kernels in matrix form; convolve the recovery matrices X_i with these kernels to obtain N × L_1 feature maps; repeat step (2.1) on these N × L_1 feature maps to extract L_2 corresponding convolution kernels, and convolve the feature maps with these L_2 kernels to obtain new feature maps;
(2.3) binarize each new feature map obtained, then apply a weighting to obtain a final feature map with values in the range [0, 2^{L_2} − 1];
(2.4) perform block-wise histogram statistics on the final feature maps to obtain a feature vector for each input image;
Step three: face recognition classification
(3.1) train and classify the obtained features using an SVM.
In step (1.2): matrix recovery optimizes the rank of the missing matrix I_i so that the recovery matrix X_i has minimum rank; the truncated nuclear norm algorithm used solves the rank-optimization problem:

min_{X_i} ‖X_i‖_r   s.t.   P_Ω(X_i) = P_Ω(I_i);

the recovery matrix X_i has dimensions m × n and r is an empirically chosen constant; in the formula above, ‖X_i‖_r = Σ_{j=r+1}^{min(m,n)} σ_j(X_i), where σ_j(X_i) denotes the j-th largest singular value of X_i; P_Ω(X_i) denotes the pixel values of X_i at the non-missing positions of the original image, and P_Ω(I_i) denotes the pixel values of I_i at those same positions.
In the second step (2.1): for the obtained recovery matrix XiBy block extraction, i.e. to the recovery matrix XiWith the dimension k1×k2And extracting matrix blocks according to blocks in a mode that the step length is 1 pixel, then performing mean value removing operation on pixel values in each matrix block, and arranging each matrix block in a large matrix P in sequence.
In the second step (2.1): performing low-rank matrix extraction on the arranged large matrix P, and extracting information by using an RPCA (resilient packet access) low-rank matrix decomposition algorithm, wherein the extraction process is a solution optimization problem:
m i n X , E | | X | | * + λ | | E | | 1
s.t.P=X+E
where X is the low rank matrix of the large matrix P and E is its sparse matrix. | X | non-conducting phosphor*Representing a kernel norm operation taking matrix X, | E | | luminance1Representing l of the taking matrix E1Norm operation, i.e.Where m, n denote the number of rows and columns, respectively, of the matrix E, EijRepresenting the element in matrix E located in row i and column j.
In the second step (2.2): extracting L for low rank matrix X1The eigenvectors corresponding to the maximum eigenvalues can be directly obtained in the RPCA solving process, and then the vectors are rearranged into k-size vectors according to the columns1×k2Form L of1A convolution kernelL1One convolution kernel and N recovery matrices XiPerforming convolution operation to obtain N × L1Zhang feature atlas
Repeat step (2.1) for this N × L1Zhang feature atlasExtracting corresponding L2A convolution kernelAnd use the L2The convolution kernel convolves the feature mapObtaining a new characteristic map
In step (2.3), each new feature map is binarized according to the formula:

H(y) = 1 if y > 0, and H(y) = 0 otherwise,

where y is each pixel value of the new feature map.
In step (2.3), the binarized feature maps are combined by weighting, with weight 2^{l-1} on the l-th binarized map, so that the final feature map has values in the range [0, 2^{L_2} − 1].
In step (2.4), block-wise histogram statistics are computed on the final feature maps: each input image has L_1 final feature maps; each map is divided into B image blocks, and a histogram with 2^{L_2} bins is computed within each image block; the B block histograms are concatenated in order into a vector of dimension 2^{L_2}·B;
the L_1 vectors are concatenated into a single vector, giving the final feature of each input image.
In step (3.1):
the final feature of each image, together with its corresponding class label, is taken as input, and an SVM (support vector machine) is trained on this correspondence to obtain a classifier;
for a new test sample, the final feature of the input image is obtained through steps one and two and fed into the SVM, which compares it against the classifier to identify the class label of the new sample, thereby classifying and recognizing the face image.
Compared with the prior art, the invention has the beneficial effects that:
1. the invention achieves good recognition results even when the input images have missing regions.
2. Compared with existing conventional convolutional neural network recognition techniques, the method is fast.
Drawings
FIG. 1 is an overall block diagram of the face recognition method based on convolutional network feature extraction of the present invention;
FIG. 2 is a missing (incomplete) face image;
fig. 3 is a restored face image.
Detailed Description
The present invention will be further illustrated with reference to the following examples.
A face recognition method based on convolutional network feature extraction, the specific process of which is shown in FIG. 1, comprises the following steps:
Step one: incomplete face recovery
(1.1) For the N original incomplete face images I_i: if an image is a grayscale image, it is selected directly as a missing matrix (since an image is stored in a computer as a matrix, the two are equivalent); if it is a color image, each of its R, G, B color channels is selected as a missing matrix.
(1.2) For each selected missing matrix I_i, a matrix recovery operation is performed using the truncated nuclear norm method, yielding the recovered image matrix, the recovery matrix X_i. The recovery matrix X_i retains most of the information of the missing matrix I_i. Matrix recovery optimizes the rank of the input missing matrix I_i so that the rank of the recovery matrix X_i is minimal. The truncated nuclear norm algorithm used is:

min_{X_i} ‖X_i‖_r   s.t.   P_Ω(X_i) = P_Ω(I_i).

The recovery matrix X_i has dimensions m × n; r is an empirically chosen constant (generally one of 6, 7, 8, 9, 10, 11, 12). In the formula above, ‖X_i‖_r = Σ_{j=r+1}^{min(m,n)} σ_j(X_i), where σ_j(X_i) denotes the j-th largest singular value of the recovery matrix X_i. P_Ω(X_i) denotes the pixel values of X_i at the positions that are not missing in the original image, and P_Ω(I_i) denotes the pixel values of I_i at those same positions. The mathematical meaning of the expression P_Ω(X_i) = P_Ω(I_i) is that the pixel values at the non-missing positions are kept unchanged while the pixel values at the missing positions are restored.
The mathematical principle of the truncated nuclear norm is as follows:
Optimizing the rank of a matrix directly is difficult to solve, so the rank is approximated by other quantities. Approximating it by the nuclear norm (the sum of all singular values of the matrix) is inaccurate, because singular values have magnitudes while the rank depends only on the number of the first r nonzero singular values. The optimization target is therefore converted to minimizing the min(m,n) − r smallest singular values of the matrix, which achieves a better effect.
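The recovery step described above can be sketched in code. The NumPy sketch below replaces the patent's exact truncated-nuclear-norm solver with a simpler fixed-rank alternating projection (truncate to rank r, then re-impose the observed pixels), which enforces the same constraint P_Ω(X_i) = P_Ω(I_i); the function names, the mean-value initialization, and the iteration count are illustrative assumptions, not details from the patent:

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of all singular values of X except the r largest (the ||X||_r of the text)."""
    s = np.linalg.svd(X, compute_uv=False)
    return s[r:].sum()

def complete_matrix(I, mask, r, n_iter=200):
    """Recover the missing entries of I (mask is True at observed pixels) by
    alternating a rank-r truncated SVD with re-imposing the observed pixels.
    This is a simplified stand-in for truncated-nuclear-norm recovery, not
    the exact TNNR algorithm."""
    X = np.where(mask, I, I[mask].mean())      # fill missing entries with the observed mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[r:] = 0.0                            # project onto the set of rank-r matrices
        X = (U * s) @ Vt
        X[mask] = I[mask]                      # keep non-missing pixels: P_Omega(X) = P_Omega(I)
    return X
```

On a synthetic low-rank matrix with random missing entries, the observed pixels stay exactly fixed while the missing ones converge toward the low-rank ground truth.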
Step two: face feature extraction
(2.1) The recovery matrix X_i is extracted block by block (image blocks): matrix blocks of size k_1 × k_2 are extracted from the recovery matrix X_i with a stride of 1 pixel, the pixel values in each matrix block are then de-meaned, and the matrix blocks are arranged in order into a large matrix P.
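The block-extraction step above can be sketched as follows; a minimal NumPy version with the k_1 × k_2 patch size and stride-1 sliding of the text (the function name is illustrative):

```python
import numpy as np

def extract_patches(X, k1, k2):
    """Slide a k1 x k2 window over X with stride 1, remove each patch's mean,
    and stack the flattened patches as the columns of the large matrix P."""
    m, n = X.shape
    cols = []
    for i in range(m - k1 + 1):
        for j in range(n - k2 + 1):
            patch = X[i:i + k1, j:j + k2].ravel()
            cols.append(patch - patch.mean())   # de-meaning step of (2.1)
    return np.stack(cols, axis=1)               # shape (k1*k2, number of patches)
```

For a 4 × 4 image and 2 × 2 patches this yields a 4 × 9 matrix P whose columns all have zero mean.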
Low-rank information is then extracted from the obtained large matrix P using the RPCA (robust principal component analysis) low-rank matrix decomposition algorithm, which solves the optimization problem:

min_{X,E} ‖X‖_* + λ‖E‖_1   s.t.   P = X + E

where X is the low-rank component of the large matrix P and E is its sparse component. ‖X‖_* denotes the nuclear norm of X (which enforces the low rank of X), and ‖E‖_1 denotes the l_1 norm of E (which enforces the sparsity of E), i.e. ‖E‖_1 = Σ_{i=1}^{m} Σ_{j=1}^{n} |E_{ij}|, where m and n denote the number of rows and columns of E and E_{ij} is the element of E in row i and column j.
The mathematical principle of applying RPCA is as follows:
A matrix can be decomposed into the superposition of a low-rank matrix, which carries most of the information of the original matrix, and a sparse matrix, which carries noise and minor details. Extracting features from the low-rank matrix makes the final features more concise and effective.
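The RPCA decomposition above can be sketched with a standard inexact augmented-Lagrangian solver (the IALM scheme of Lin et al.); the patent does not fix the solver or its parameters, so the default λ = 1/√max(m, n) and the μ update schedule below are conventional assumptions:

```python
import numpy as np

def shrink(M, tau):
    """Element-wise soft-thresholding operator."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(P, lam=None, n_iter=500, tol=1e-7):
    """Split P into low-rank X and sparse E by solving
    min ||X||_* + lam*||E||_1  s.t.  P = X + E
    with an inexact augmented-Lagrangian iteration (a common RPCA solver)."""
    m, n = P.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm_fro = np.linalg.norm(P)
    norm_two = np.linalg.norm(P, 2)
    mu = 1.25 / norm_two
    mu_bar = mu * 1e7
    Y = P / max(norm_two, np.abs(P).max() / lam)  # multiplier initialization
    E = np.zeros_like(P)
    for _ in range(n_iter):
        # singular-value thresholding step for the low-rank part
        U, s, Vt = np.linalg.svd(P - E + Y / mu, full_matrices=False)
        X = (U * shrink(s, 1.0 / mu)) @ Vt
        # soft-thresholding step for the sparse part
        E = shrink(P - X + Y / mu, lam / mu)
        R = P - X - E                              # constraint residual
        Y = Y + mu * R
        mu = min(mu * 1.5, mu_bar)
        if np.linalg.norm(R) / norm_fro < tol:
            break
    return X, E
```

On a rank-1 matrix corrupted by a few large sparse entries, the solver separates the two components to high accuracy.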
(2.2) Next, the L_1 eigenvectors of the low-rank matrix X corresponding to its largest eigenvalues (obtained directly during the RPCA solution) are extracted, and each is rearranged column-wise into a k_1 × k_2 matrix, giving L_1 convolution kernels. Convolving the N recovery matrices X_i with these L_1 kernels yields N × L_1 feature maps.
Step (2.1) is repeated on these N × L_1 feature maps to extract L_2 corresponding convolution kernels, and the feature maps are convolved with these L_2 kernels to obtain new feature maps.
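The kernel-construction and convolution steps can be sketched as follows. Forming X X^T explicitly and taking its leading eigenvectors is one simple way to get the vectors the patent obtains during the RPCA solve; the column-major reshape and the plain stride-1 "valid" convolution are assumptions consistent with the text:

```python
import numpy as np

def kernels_from_lowrank(X, L1, k1, k2):
    """Reshape the top-L1 eigenvectors of X X^T column-wise into k1 x k2
    convolution kernels, as in step (2.2)."""
    w, V = np.linalg.eigh(X @ X.T)                 # eigenvalues in ascending order
    order = np.argsort(w)[::-1][:L1]               # indices of the L1 largest
    return [V[:, idx].reshape((k1, k2), order="F") for idx in order]

def convolve_valid(img, ker):
    """Plain stride-1 'valid' 2-D convolution (kernel flipped)."""
    kf = ker[::-1, ::-1]
    k1, k2 = kf.shape
    m, n = img.shape
    out = np.empty((m - k1 + 1, n - k2 + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + k1, j:j + k2] * kf).sum()
    return out
```

The kernels come out orthonormal (they are eigenvectors of a symmetric matrix), and convolving an m × n image with a k_1 × k_2 kernel yields an (m − k_1 + 1) × (n − k_2 + 1) feature map.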
(2.3) Each new feature map obtained is binarized according to the formula H(y) = 1 if y > 0 and H(y) = 0 otherwise, where y is each pixel value of the new feature map.
The binarized images are then combined by weighting, with weight 2^{l-1} on the l-th binarized map, to obtain a final feature map with values in the range [0, 2^{L_2} − 1].
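The binarization and weighting steps can be sketched as below; the source leaves the exact weights implicit, so the standard PCANet-style binary weights 2^{l-1}, which produce the stated integer value range, are assumed here:

```python
import numpy as np

def binarize(F):
    """H(y) = 1 if y > 0 else 0, applied element-wise to a feature map."""
    return (F > 0).astype(np.int64)

def weight_maps(binary_maps):
    """Combine L2 binarized maps into one integer-valued final map using
    binary weights 2^(l-1), so values fall in [0, 2^L2 - 1]."""
    out = np.zeros_like(binary_maps[0])
    for l, B in enumerate(binary_maps):
        out += (2 ** l) * B                  # weight 2^(l-1) for 1-based l
    return out
```

With L_2 = 2 maps, each pixel of the final map encodes the two binary responses as an integer between 0 and 3.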
(2.4) Block-wise histogram statistics are computed on the final feature maps. Each input image has L_1 final feature maps; each final feature map is divided into B image blocks, and a histogram with 2^{L_2} bins is computed within each image block. The B block histograms are concatenated in order into a vector of dimension 2^{L_2}·B.
The L_1 such vectors are then concatenated into a single vector, giving the final feature of each input image.
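The histogram step above can be sketched as follows; the patent does not fix the block geometry, so splitting each map into B equal horizontal strips is one simple illustrative choice:

```python
import numpy as np

def block_histogram_features(final_maps, n_blocks, L2):
    """Split each of the L1 final maps into n_blocks strips, take a
    2^L2-bin histogram in each strip, and concatenate everything into
    one vector of length L1 * n_blocks * 2^L2."""
    bins = 2 ** L2
    feats = []
    for T in final_maps:
        for strip in np.array_split(T, n_blocks, axis=0):
            hist = np.bincount(strip.ravel().astype(np.int64), minlength=bins)
            feats.append(hist[:bins])        # values are < 2^L2 by construction
    return np.concatenate(feats)
```

For one 2 × 2 map with values 0..3, L_2 = 2, and B = 2 strips, the result is an 8-dimensional count vector.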
Step three: face recognition classification
(3.1) The final feature of each image, together with its corresponding class label, is taken as input, and an SVM is trained on this correspondence to obtain a classifier.
For a new test sample, the final feature of the input image is obtained through steps one and two and fed into the SVM, which compares it against the classifier to identify the class label of the new sample, thereby classifying and recognizing the face image.
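The training and classification step can be sketched with a tiny linear SVM trained by hinge-loss subgradient descent; this is a self-contained stand-in for the off-the-shelf SVM the patent uses (in practice one would train on the block-histogram features with a library implementation such as LIBSVM or scikit-learn's SVC), and the hyperparameters below are illustrative:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal linear SVM: full-batch subgradient descent on the
    L2-regularized hinge loss. Labels y must be +1/-1."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                      # examples violating the margin
        if mask.any():
            gw = lam * w - (y[mask, None] * X[mask]).mean(axis=0)
            gb = -y[mask].mean()
        else:
            gw, gb = lam * w, 0.0
        w -= lr * gw
        b -= lr * gb
    return w, b

def predict(w, b, X):
    """Class label from the sign of the decision function."""
    return np.where(X @ w + b >= 0, 1, -1)
```

On a small linearly separable set the trained hyperplane classifies every training sample correctly.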

Claims (9)

1. An incomplete-face recognition method based on a convolutional neural network, characterized by comprising the following steps:
Step one: incomplete face recovery
(1.1) take N original incomplete face images as input; first determine whether each image is a grayscale or a color image: a grayscale image is selected directly as a missing matrix, while for a color image each of its R, G, B color channels is selected as a missing matrix; the missing matrices are denoted I_i, i = 1, ..., N;
(1.2) for each selected missing matrix I_i, perform a matrix recovery operation using the truncated nuclear norm method to obtain a recovered image matrix, the recovery matrix X_i;
Step two: face feature extraction
(2.1) extract the obtained recovery matrices X_i block by block, arrange the blocks in order, and extract a low-rank matrix;
(2.2) from the obtained low-rank matrix, extract eigenvectors and convert them into L_1 convolution kernels in matrix form; convolve the recovery matrices X_i with these kernels to obtain N × L_1 feature maps; repeat step (2.1) on these N × L_1 feature maps to extract L_2 corresponding convolution kernels, and convolve the feature maps with these L_2 kernels to obtain new feature maps;
(2.3) binarize each new feature map obtained, then apply a weighting to obtain a final feature map with values in the range [0, 2^{L_2} − 1];
(2.4) perform block-wise histogram statistics on the final feature maps to obtain a feature vector for each input image;
Step three: face recognition classification
(3.1) train and classify the obtained features using an SVM.
2. The incomplete-face recognition method based on a convolutional neural network according to claim 1, characterized in that in step (1.2): matrix recovery optimizes the rank of the missing matrix I_i so that the recovery matrix X_i has minimum rank; the truncated nuclear norm algorithm used solves the rank-optimization problem:

min_{X_i} ‖X_i‖_r   s.t.   P_Ω(X_i) = P_Ω(I_i);

the recovery matrix X_i has dimensions m × n and r is an empirically chosen constant; in the formula above, ‖X_i‖_r = Σ_{j=r+1}^{min(m,n)} σ_j(X_i), where σ_j(X_i) denotes the j-th largest singular value of X_i; P_Ω(X_i) denotes the pixel values of X_i at the non-missing positions of the original image, and P_Ω(I_i) denotes the pixel values of I_i at those same positions.
3. The incomplete-face recognition method based on a convolutional neural network according to claim 1, characterized in that in step (2.1): the obtained recovery matrix X_i is extracted block by block, i.e., matrix blocks of size k_1 × k_2 are extracted from the recovery matrix X_i with a stride of 1 pixel, the pixel values in each matrix block are then de-meaned, and the matrix blocks are arranged in order into a large matrix P.
4. The incomplete-face recognition method based on a convolutional neural network according to claim 1, characterized in that in step (2.1): low-rank matrix extraction is performed on the assembled large matrix P using the RPCA (robust principal component analysis) low-rank matrix decomposition algorithm, which solves the optimization problem:

min_{X,E} ‖X‖_* + λ‖E‖_1   s.t.   P = X + E

where X is the low-rank component of the large matrix P and E is its sparse component; ‖X‖_* denotes the nuclear norm of X, and ‖E‖_1 denotes the l_1 norm of E, i.e. ‖E‖_1 = Σ_{i=1}^{m} Σ_{j=1}^{n} |E_{ij}|, where m and n denote the number of rows and columns of E and E_{ij} is the element of E in row i and column j.
5. The incomplete-face recognition method based on a convolutional neural network according to claim 1, characterized in that in step (2.2): the L_1 eigenvectors of the low-rank matrix X corresponding to its largest eigenvalues, which can be obtained directly during the RPCA solution, are extracted and each is rearranged column-wise into a k_1 × k_2 matrix, giving L_1 convolution kernels; convolving the N recovery matrices X_i with these L_1 kernels yields N × L_1 feature maps;
step (2.1) is repeated on these N × L_1 feature maps to extract L_2 corresponding convolution kernels, and the feature maps are convolved with these L_2 kernels to obtain new feature maps.
6. The incomplete-face recognition method based on a convolutional neural network according to claim 1, characterized in that in step (2.3) each new feature map is binarized according to the formula:

H(y) = 1 if y > 0, and H(y) = 0 otherwise,

where y is each pixel value of the new feature map.
7. The incomplete-face recognition method based on a convolutional neural network according to claim 1, characterized in that in step (2.3) the binarized feature maps are combined by weighting, with weight 2^{l-1} on the l-th binarized map, so that the final feature map has values in the range [0, 2^{L_2} − 1].
8. The incomplete-face recognition method based on a convolutional neural network according to claim 1, characterized in that in step (2.4) block-wise histogram statistics are computed on the final feature maps: each input image has L_1 final feature maps; each map is divided into B image blocks, and a histogram with 2^{L_2} bins is computed within each image block; the B block histograms are concatenated in order into a vector of dimension 2^{L_2}·B;
the L_1 vectors are concatenated into a single vector, giving the final feature of each input image.
9. The incomplete-face recognition method based on a convolutional neural network according to claim 1, characterized in that in step (3.1):
the final feature of each image, together with its corresponding class label, is taken as input, and an SVM (support vector machine) is trained on this correspondence to obtain a classifier;
for a new test sample, the final feature of the input image is obtained through steps one and two and fed into the SVM, which compares it against the classifier to identify the class label of the new sample, thereby classifying and recognizing the face image.
CN201610555256.8A 2016-07-12 2016-07-12 A kind of face identification method based on convolutional network feature extraction Active CN106203356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610555256.8A CN106203356B (en) 2016-07-12 2016-07-12 A kind of face identification method based on convolutional network feature extraction


Publications (2)

Publication Number Publication Date
CN106203356A true CN106203356A (en) 2016-12-07
CN106203356B CN106203356B (en) 2019-04-26

Family

ID=57475895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610555256.8A Active CN106203356B (en) 2016-07-12 2016-07-12 A kind of face identification method based on convolutional network feature extraction

Country Status (1)

Country Link
CN (1) CN106203356B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650694A (en) * 2016-12-30 2017-05-10 江苏四点灵机器人有限公司 Human face recognition method taking convolutional neural network as feature extractor
CN107563328A (en) * 2017-09-01 2018-01-09 广州智慧城市发展研究院 A kind of face identification method and system based under complex environment
CN107958247A (en) * 2018-01-17 2018-04-24 百度在线网络技术(北京)有限公司 Method and apparatus for facial image identification
CN107977661A (en) * 2017-10-13 2018-05-01 天津工业大学 The region of interest area detecting method decomposed based on full convolutional neural networks and low-rank sparse
CN108052920A (en) * 2017-12-27 2018-05-18 百度在线网络技术(北京)有限公司 For the method and apparatus of output information
CN109145765A (en) * 2018-07-27 2019-01-04 华南理工大学 Method for detecting human face, device, computer equipment and storage medium
CN109325490A (en) * 2018-09-30 2019-02-12 西安电子科技大学 Terahertz image target identification method based on deep learning and RPCA
CN109522844A (en) * 2018-11-19 2019-03-26 燕山大学 It is a kind of social activity cohesion determine method and system
CN110490149A (en) * 2019-08-22 2019-11-22 广东工业大学 A kind of face identification method and device based on svm classifier
CN110717550A (en) * 2019-10-18 2020-01-21 山东大学 Multi-modal image missing completion based classification method
CN118155105A (en) * 2024-05-13 2024-06-07 齐鲁空天信息研究院 Unmanned aerial vehicle mountain area rescue method, unmanned aerial vehicle mountain area rescue system, unmanned aerial vehicle mountain area rescue medium and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140079297A1 (en) * 2012-09-17 2014-03-20 Saied Tadayon Application of Z-Webs and Z-factors to Analytics, Search Engine, Learning, Recognition, Natural Language, and Other Utilities
CN103761537A (en) * 2014-02-07 2014-04-30 重庆市国土资源和房屋勘测规划院 Image classification method based on low-rank optimization feature dictionary model
CN103902958A (en) * 2012-12-28 2014-07-02 重庆凯泽科技有限公司 Method for face recognition


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650694A (en) * 2016-12-30 2017-05-10 江苏四点灵机器人有限公司 Face recognition method using a convolutional neural network as feature extractor
CN107563328A (en) * 2017-09-01 2018-01-09 广州智慧城市发展研究院 Face recognition method and system for complex environments
CN107977661B (en) * 2017-10-13 2022-05-03 天津工业大学 Region-of-interest detection method based on FCN and low-rank sparse decomposition
CN107977661A (en) * 2017-10-13 2018-05-01 天津工业大学 Region-of-interest detection method based on fully convolutional neural networks and low-rank sparse decomposition
CN108052920A (en) * 2017-12-27 2018-05-18 百度在线网络技术(北京)有限公司 Method and apparatus for outputting information
CN107958247A (en) * 2018-01-17 2018-04-24 百度在线网络技术(北京)有限公司 Method and apparatus for facial image identification
CN109145765A (en) * 2018-07-27 2019-01-04 华南理工大学 Face detection method and apparatus, computer device, and storage medium
CN109325490A (en) * 2018-09-30 2019-02-12 西安电子科技大学 Terahertz image target identification method based on deep learning and RPCA
CN109325490B (en) * 2018-09-30 2021-04-27 西安电子科技大学 Terahertz image target identification method based on deep learning and RPCA
CN109522844B (en) * 2018-11-19 2020-07-24 燕山大学 Social affinity determination method and system
CN109522844A (en) * 2018-11-19 2019-03-26 燕山大学 Social affinity determination method and system
CN110490149A (en) * 2019-08-22 2019-11-22 广东工业大学 Face recognition method and device based on SVM classifier
CN110717550A (en) * 2019-10-18 2020-01-21 山东大学 Classification method based on multi-modal missing image completion
CN118155105A (en) * 2024-05-13 2024-06-07 齐鲁空天信息研究院 Unmanned aerial vehicle mountain rescue method, system, medium, and electronic equipment

Also Published As

Publication number Publication date
CN106203356B (en) 2019-04-26

Similar Documents

Publication Publication Date Title
CN106203356B (en) Face recognition method based on convolutional network feature extraction
CN107633513B (en) 3D image quality measuring method based on deep learning
CN110348357B (en) Rapid target detection method based on deep convolutional neural network
CN105205449B (en) Sign language recognition method based on deep learning
CN105528595A (en) Method for identifying and positioning power transmission line insulators in unmanned aerial vehicle aerial images
WO2017080196A1 (en) Video classification method and device based on human face image
CN102609681A (en) Face recognition method based on dictionary learning models
CN104616000B (en) Face recognition method and device
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN103136516A (en) Face recognition method and system fusing visible light and near-infrared information
CN104281835B (en) Face recognition method based on local sensitive kernel sparse representation
CN109522841A (en) Face recognition method based on group sparse representation and low-rank matrix recovery
Shinde et al. Analysis of fingerprint image for gender classification or identification: using wavelet transform and singular value decomposition
CN105046272A (en) Image classification method based on concise unsupervised convolutional network
CN106778714B (en) LDA face identification method based on nonlinear characteristic and model combination
CN106096660A (en) Convolutional neural network based on independent component analysis algorithm
CN111401434B (en) Image classification method based on unsupervised feature learning
CN106951819A (en) Single-sample face recognition method based on sparse probability distribution screening and multistage classification
CN111126169B (en) Face recognition method and system based on orthogonalization graph regular nonnegative matrix factorization
CN104715266A (en) Image characteristics extracting method based on combination of SRC-DP and LDA
CN110414431B (en) Face recognition method and system based on elastic context relation loss function
CN117877068B (en) Occluded pedestrian re-identification method based on masked self-supervised occluded-pixel reconstruction
CN109886160A (en) Face recognition method under unconstrained conditions
CN106909944B (en) Face picture clustering method
CN108205666A (en) Face recognition method based on deep aggregation network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant