CN111523404A - Partial face recognition method based on convolutional neural network and sparse representation - Google Patents
- Publication number: CN111523404A
- Application number: CN202010267944.0A
- Authority: CN (China)
- Legal status: Pending (assumed status; not a legal conclusion)
Classifications
- G06V40/16: Human faces, e.g. facial parts, sketches or expressions
- G06V40/168: Feature extraction; Face representation
- G06V10/267: Segmentation of patterns in the image field; detection of occlusion
- G06V10/40: Extraction of image or video features
- G06V10/513: Sparse representations
- G06F18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045: Combinations of networks
Abstract
The invention discloses a partial face recognition method based on a convolutional neural network and sparse representation. A mirror image of the image to be tested is added to expand the sample set; a convolutional neural network extracts features from the training set and from the image to be tested, and the resulting feature maps are used to construct feature vectors; residuals are computed with sparse representation and a sample-correction method; and the input image is classified by a score-minimization criterion. Compared with the prior art, the method reduces the influence of occlusion, illumination, expression and other variations in the image to be tested, improves classification accuracy, extracts more precise features, effectively strengthens robustness to different variations in the image, and better solves the recognition problem when only part of the face information is available.
Description
Technical Field
The invention relates to the technical field of face recognition, and in particular to a partial face recognition method based on a convolutional neural network and sparse representation.
Background
With the wide application of artificial intelligence, face recognition has become a research hotspot in computer vision and image processing in recent years. Driven by the rapid development of neural networks and deep learning and by significant gains in computing power, face recognition has been deployed in real life. Parkhi et al. proposed VGGFace in 2015, built on the VGGNet architecture proposed by Simonyan and Zisserman in 2014; it transfers well and achieves excellent recognition performance. Most previous face recognition techniques target complete face images. In practice, however, because of occlusion, illumination, and varying poses and expressions, the face image to be recognized is often incomplete. Partial face recognition is therefore a hot topic worth exploring, and both its techniques and its accuracy urgently need improvement. A partial face image carries less information than a complete one and, because of its smaller size, generally cannot be matched directly against the images in a gallery. He et al. proposed dynamic feature matching based on a fully convolutional network (FCN), which greatly improved recognition accuracy, but its robustness is insufficient when the image to be tested contains variations such as occlusion, illumination and expression.
Summary of the Invention
The purpose of the invention is to design, against the deficiencies of the prior art, a partial face recognition method based on a convolutional neural network and sparse representation. A mirror image of the image to be tested is added to expand the samples, and the mirror image is combined with the original image to improve recognition accuracy. A convolutional neural network extracts features from the training set and from the image to be tested, and the corresponding feature maps are used to construct feature vectors. Residuals are computed with sparse representation and a sample-correction method, and the input image is classified by a score-minimization criterion so that the class of an unknown test image can be predicted. The method effectively reduces the influence of occlusion, illumination and expression variations in the image to be tested and greatly improves classification accuracy; the introduced variation dictionary effectively strengthens robustness to different variations in the image. The method is simple, applicable to face recognition in many scenarios, extracts more precise features, better solves the recognition problem when only partial face information is available, and has broad application prospects.
The specific technical scheme that realizes the purpose of the invention is a partial face recognition method based on a convolutional neural network and sparse representation, characterized in that a mirror image of the image to be tested is added to expand the samples, a convolutional neural network extracts features from the training set and from the image to be tested to obtain the corresponding feature maps and construct feature vectors, residuals are computed with sparse representation and a sample-correction method, and the input image is classified by a score-minimization criterion. The process comprises the following steps:
Step a: construct the image training set;
Step b: obtain the mirror image of the input image to be tested to expand the samples;
Step c: compute residuals using a sliding window, sparse representation and the sample-correction method;
Step d: repeat step c with the mirror image of the image to be tested to compute its residual;
Step e: combine the two residuals obtained after sample expansion to compute a new score, and predict the class of the unknown test image by the score-minimization criterion.
Step a requires a labeled sample image set (when necessary, the original images are cropped to remove background interference and retain only the complete face region); a CNN extracts the features of all training images to construct the feature matrix.
In step b, the input original image to be tested is mirror-flipped and treated as a new test image; its residual is later combined with that of the original image to be tested, and both jointly participate in recognition.
The local matching performed on the image to be tested in step c comprises the following steps:
Step c1: use the CNN to generate the feature map and feature vector of the image to be tested;
Step c2: for each image in the training set, use a sliding window to generate sub-feature maps of the same size as the feature map of the image to be tested, yielding a new training set;
Step c3: compute the sparse coefficients from the newly generated training set with the sparse representation method;
Step c4: construct the variation dictionary from the newly generated training set;
Step c5: following the idea of sparse representation and the linear additive model, subtract from the image to be tested the corresponding variation components present in the variation dictionary, completing the sample correction;
Step c6: using the sparse coefficients and the corrected image, compute the residual between the image to be tested and each training image in the training set.
In step e, the two residuals obtained after sample expansion are combined to compute a new score: different weights are assigned to the residuals of the original image to be tested and of its mirror image, and the two are summed with these weights. The class of the image to be tested is then predicted by the score-minimization criterion: the minimum score between the image to be tested and the training images is found, and the class of the corresponding training image is taken as the prediction.
Compared with the prior art, the invention reduces the influence of occlusion, illumination and expression variations in the image to be tested and improves classification accuracy; the introduced variation dictionary effectively strengthens robustness to different variations in the image. The method is simple, applicable to face recognition in many scenarios, extracts more precise features, better solves the recognition problem when only partial face information is available, and has broad application prospects.
Brief Description of the Drawings
Figure 1 is the flow chart of the invention;
Figure 2 shows example training images;
Figure 3 shows example images to be tested;
Figure 4 shows the mirror images corresponding to the images to be tested;
Figure 5 is a schematic diagram of the first slide of the sliding window over the feature map of one training image.
Detailed Description
The invention is described in further detail below with reference to specific embodiments of partial face recognition.
Referring to Figure 1, the invention comprises four parts: training-set construction, sample expansion, sparse representation, and fusion classification. The specific steps of partial face recognition are as follows:
Step a: construct the image training set.
Referring to Figure 2, the image training set is a labeled sample image set. Initial images with large background regions are cropped according to the eye coordinates and the standard facial proportions ("three sections and five eyes") to remove background interference while retaining the complete face, and all images are resized uniformly. A CNN then extracts the feature map of each original training image and constructs its feature matrix (converted into a feature vector); the feature vectors of the N training images together form the training set G = [G1, G2, ..., GN]. Considering training images that belong to the same class, and assuming L different classes in the training set, G can be rewritten as F = [F1, F2, ..., FL], where F_k (k = 1, 2, ..., L) denotes the k-th class containing l_k training images.
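As a sketch of this step, with a trivial stand-in for the CNN feature extractor (the extractor, array shapes and labels below are illustrative placeholders, not values from the patent): the feature vectors are stacked into G and then regrouped by class into the blocks F_k.

```python
import numpy as np

rng = np.random.default_rng(1)

def cnn_features(img):
    # Placeholder for the CNN feature map, flattened into a feature vector.
    return img.reshape(-1)

images = rng.random((6, 4, 4))          # N = 6 toy training images
labels = np.array([0, 0, 1, 1, 1, 2])   # L = 3 classes

# Feature matrix G = [G1, ..., GN], one row per training image.
G = np.stack([cnn_features(im) for im in images])
# Regrouped view F = {k: F_k}, where F_k holds the l_k images of class k.
F = {k: G[labels == k] for k in np.unique(labels)}
```

The dictionary `F` is simply a per-class view of the same rows of `G`, matching the rewrite G -> F in the text.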
Step b: obtain the mirror image of the input image to be tested to expand the samples.
Referring to Figure 3, a test image y containing part of a face is input.
Referring to Figure 4, the test image y is mirror-transformed about its vertical mid-axis, swapping its left and right halves to obtain the mirror image y', thereby expanding the samples. Throughout recognition the two images are processed in exactly the same way, and the residuals of y and y' are combined by a weighted sum to obtain the final score.
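The mirror step can be sketched in two lines of NumPy (a toy integer array stands in for the face image):

```python
import numpy as np

# Mirror the probe about its vertical mid-axis: a left-right flip.
y = np.arange(12).reshape(3, 4)   # toy stand-in for the test image y
y_mirror = np.fliplr(y)           # the mirror image y'
```

Flipping twice recovers the original, and the first column of `y_mirror` equals the last column of `y`, which is exactly the left-right swap described above.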
Step c: compute residuals using a sliding window, sparse representation and the sample-correction method.
When the test image y and the training images are scaled to the same size, the feature map that the CNN extracts from the test image is usually smaller than the feature maps of the images in the training set G.
Referring to Figure 5, the feature map p of the test image slides for the first time over the feature map g1 of one training image. The feature map p extracted from the test image y is slid over the feature map of each image in the training set with stride step, producing M sub-feature maps of the same size that form a new training set G', where G'_i = [g_1, g_2, ..., g_N], i = 1, 2, ..., M. The stride step is a positive integer smaller than both the length and the width of the feature map p; here step = 1 is used. The smaller the stride, the finer the training-set features obtained through the sliding window.
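The sliding-window generation of same-size sub-feature maps can be sketched as follows; the map sizes and the `sub_feature_maps` helper are illustrative assumptions, not values from the patent:

```python
import numpy as np

def sub_feature_maps(g, ph, pw, step=1):
    """Collect every (ph, pw) window of gallery feature map g with the given stride."""
    H, W = g.shape
    subs = [g[i:i + ph, j:j + pw]
            for i in range(0, H - ph + 1, step)
            for j in range(0, W - pw + 1, step)]
    return np.stack(subs)   # M windows, each the same size as the probe map p

g = np.arange(36, dtype=float).reshape(6, 6)   # one training feature map
subs = sub_feature_maps(g, ph=4, pw=4)          # probe map assumed 4x4, step = 1
```

With a 6x6 gallery map, a 4x4 probe map and step = 1, this yields M = 3 * 3 = 9 sub-feature maps, the first of which is the top-left window shown in Figure 5.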
The images in G' that belong to the same class are averaged to build the prototype dictionary P = [μ1, μ2, ..., μL]; the variation dictionary V = [F1 − μ1 e1, F2 − μ2 e2, ..., FL − μL eL] is then built, where e_k is an l_k-dimensional row vector of all ones.
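A sketch of the dictionary construction (the per-class blocks are toy arrays, and the variation dictionary's symbol, garbled in the source text, is written `V` here): P stacks the class means μ_k, while V stacks the zero-mean within-class deviations F_k − μ_k e_k.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy class blocks: class 0 has l_0 = 2 images, class 1 has l_1 = 3 (rows = images).
F = {0: rng.random((2, 5)), 1: rng.random((3, 5))}

# Prototype dictionary P = [mu_1, ..., mu_L]: one mean vector per class.
P = np.stack([F[k].mean(axis=0) for k in F])
# Variation dictionary V = [F_k - mu_k e_k]: subtracting the class mean from
# every row of F_k leaves only the within-class variation.
V = np.concatenate([F[k] - F[k].mean(axis=0) for k in F])
```

By construction the rows of each class block of `V` sum to zero, which is what makes it a dictionary of pure variation (occlusion, illumination, expression) rather than identity.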
Based on the sparse representation method, the test image p can be represented as p = G'α,
where d = [d1, d2, ..., dN], with d_i = ||p − g_i||2 the similarity between the test image and the i-th image in the training set G'; λ1 and λ2 are two constant terms set according to existing theory and methods, taken as λ1 = 3.1 and λ2 = 0.4.
The sample-correction method then gives:
where λ3 is a constant term set according to existing theory and methods, taken as λ3 = 3.6.
Then the residual r_i(y) is computed according to formula (a):
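Formula (a) appears only as an image in the source and is not reproduced here. As an assumption, the sketch below uses the residual form common to sparse-representation classifiers: keep only the coefficient of one gallery entry and measure how well that entry alone reconstructs the corrected probe.

```python
import numpy as np

rng = np.random.default_rng(3)
Gp = rng.random((16, 6))      # new training set G', one column per entry
alpha = rng.random(6)         # sparse coefficients from step c3
p_corr = Gp @ alpha           # corrected probe (exactly representable in this toy)

def residual(i):
    """Reconstruction error of the corrected probe using only entry i's coefficient."""
    mask = np.zeros_like(alpha)
    mask[i] = 1.0
    return np.linalg.norm(p_corr - Gp @ (alpha * mask))

r = np.array([residual(i) for i in range(Gp.shape[1])])
```

A small r[i] means gallery entry i, on its own, explains the corrected probe well; these per-entry residuals are what step e fuses into the final score.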
Step d: repeat step c with the mirror image y' of the test image to compute the residual r_i(y').
Step e: when computing the weighted score, considering that the left and right halves of a face are never exactly identical, the residual of the test image y is given weight ω1 and the residual of the mirror image y' the weight 1 − ω1, and the new score is computed by formula (b):
score_i(y) = ω1 × r_i(y) + (1 − ω1) × r_i(y')    (b)
Since the two half-faces differ and the original test image deserves more attention, the residual weights of y and y' are set to ω1 = 0.6 and 1 − ω1 = 0.4, respectively.
Based on the score-minimization criterion, the minimum score between the test image and each training image is computed according to formula (c), and the label of the test image is the class of the corresponding training image:
where z_i denotes the label of the i-th image in the training set.
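Formula (b) and the score-minimization rule translate directly into code; the residual values and labels below are toy numbers chosen for illustration:

```python
import numpy as np

r_y  = np.array([0.9, 0.2, 0.7])    # residuals r_i(y) of the test image
r_ym = np.array([0.8, 0.4, 0.1])    # residuals r_i(y') of the mirror image
z    = np.array([5, 7, 9])          # labels z_i of the training images

w1 = 0.6                            # weight of the original test image
score = w1 * r_y + (1 - w1) * r_ym  # formula (b): weighted fusion
label = z[np.argmin(score)]         # score-minimization rule: predicted class
```

Here score = [0.86, 0.28, 0.46], so the minimum score is at index 1 and the prediction is label 7; even though the mirror image alone would have preferred index 2, the higher weight on the original residual decides the outcome.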
The above embodiments only further illustrate the invention and are not intended to limit the patent; all equivalent implementations of the invention shall fall within the scope of the claims of the patent.
Claims (6)
Priority Applications (1)
- CN202010267944.0A: Partial face recognition method based on convolutional neural network and sparse representation; priority date 2020-04-08, filing date 2020-04-08
Publications (1)
- CN111523404A, published 2020-08-11
Family ID: 71901929
Cited By (3)
- CN112381070A (2021-02-19, 浙江科技学院): A fast and robust face recognition method
- CN112949636A (2021-06-11, 上海电机学院): License plate super-resolution recognition method, system and computer-readable medium
- CN114997253A (2022-09-02, 哈尔滨工业大学): Intelligent state anomaly detection method, monitoring system and monitoring method for satellite constellations
Citations (9)
- US2016/0034789A1 (2016-02-04, TCL Research America Inc.): System and method for rapid face recognition
- CN104392246A (2015-03-04, 北京理工大学): Single-sample face recognition method based on an inter-class and intra-class face variation dictionary
- CN107563328A (2018-01-09, 广州智慧城市发展研究院): Face recognition method and system for complex environments
- CN108197573A (2018-06-22, 南京信息工程大学): Face recognition method combining mirror-image-based LRC and CRC deviations
- CN108664917A (2018-10-16, 佛山市顺德区中山大学研究院): Face recognition method and system based on an auxiliary variation dictionary and maximum-margin linear mapping
- CN108681725A (2018-10-19, 西安理工大学): Weighted sparse representation face recognition method
- CN108875459A (2018-11-23, 武汉科技大学): Face recognition method and system based on sparse-coefficient-similarity weighted sparse representation
- CN109766813A (2019-05-17, 陕西师范大学): Dictionary-learning face recognition method based on symmetric-face augmented samples
- CN110210336A (2019-09-06, 赣南师范大学): Low-resolution single-sample face recognition method
Non-Patent Citations (3)
- L. He et al., "Dynamic Feature Matching for Partial Face Recognition," IEEE Transactions on Image Processing
- Y. Gao et al., "Semi-Supervised Sparse Representation Based Classification for Face Recognition with Insufficient Labeled Samples," IEEE Transactions on Image Processing
- 张彦, "基于变化稀疏表示的单样本人脸识别" (Single-sample face recognition based on variation sparse representation), 信息工程大学学报
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- WD01: Invention patent application deemed withdrawn after publication (application publication date: 2020-08-11)