WO2023103372A1 - Face recognition method in a mask-wearing state - Google Patents

Face recognition method in a mask-wearing state

Info

Publication number
WO2023103372A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
face
recognition
output
mask
Prior art date
Application number
PCT/CN2022/104572
Other languages
English (en)
French (fr)
Inventor
姚克明
王羿
姜绍忠
李峰
王小兰
Original Assignee
江苏理工学院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 江苏理工学院
Priority to ZA2022/13209A (published as ZA202213209B)
Publication of WO2023103372A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • The invention belongs to the technical field of image recognition and specifically relates to a method for recognizing a face while a mask is worn.
  • As the fastest-growing and most promising biometric technique, face recognition has important applications in many fields, and under normal conditions its development is already quite mature. During the epidemic, wearing a mask became part of everyday life; removing the mask for face recognition carries a high risk, and the recognition process is both inconvenient and inefficient. Identity recognition while a face mask is worn therefore has considerable value and significance, and the demand for it is urgent.
  • The purpose of the present invention is to propose a recognition method for a face in the mask-wearing state, so that face recognition is more efficient and accurate when a mask is worn.
  • A recognition method for a face in the mask-wearing state includes:
  • Step 1: Construct a preliminary face image dataset by preprocessing public face image datasets containing masked faces together with face images captured with an image acquisition device;
  • Step 2: Use the LabelImg tool to manually annotate the self-collected face images in the preliminary dataset, and save the images and label information files carrying the mask label;
  • Step 3: Input the processed images into the improved YOLO V4 network for training, and output the detection result if a mask is detected;
  • Step 4: Perform improved edge detection on the images in the dataset constructed in step 1, and use the idea of region segmentation to remove the lower part of the contour image that contains the mask, obtaining a local contour image;
  • Step 5: Extract contour features from the local contour image obtained in step 4; in the recognition stage, objects that pass the preliminary screening enter the candidate target library, in preparation for subsequent accurate recognition;
  • Step 6: Combine the coordinate information of the local contour image obtained in step 4 with the images in the dataset constructed in step 1 to obtain local face images, extract scale-invariant features (SIFT) from them, reduce the dimensionality with principal component analysis, and save the output feature-point information into the corresponding database; in the recognition stage, extract features from the objects selected from the candidate target library after the screening of step 5;
  • Step 7: Input the target face image, complete the mask-wearing detection, apply the feature extraction method of step 6 to the objects that pass the preliminary contour-feature screening of step 5, match the output feature vector information with the information in the database, and finally output the recognition result.
  • In step 1 the face images are preprocessed. The specific preprocessing is: select images with an upright face pose from the public face image dataset containing masked faces, and capture further images with an image acquisition device while ensuring the face position is relatively upright.
  • The selected images undergo denoising, image information enhancement, size normalization, rotation and similar operations; the final preliminary face image dataset contains, for multiple users, multiple face images with and without masks.
  • In step 2 the face images captured with the image acquisition device are manually annotated with the LabelImg tool, and the images and label information files carrying the mask label are saved.
  • In step 3 the YOLO V4 network is improved to train on the face images in the database.
  • A depthwise convolution module is used to improve the backbone feature extraction network, which raises the speed of mask detection after the improvement.
  • The specific method is: first apply a 1×1 convolution with BatchNorm normalization and Swish activation to the input feature layer to raise its dimension; then apply a depthwise separable convolution with a 3×3 or 5×5 kernel to the expanded feature layer, which enriches its semantic information; finally apply a 1×1 convolution with BatchNorm normalization and Swish activation to reduce the dimension and output the feature layer. For an input image of size x×y, the feature vectors at the three output scales P6, P7 and P8 give the mask-wearing result, where z is the number of channels in the final output.
  • In step 4 improved edge detection is applied to the images in the dataset constructed in step 1.
  • The specific method is: mathematical morphology is integrated into the traditional Canny edge detection algorithm, using elliptical structuring elements of sizes 3×3 and 5×5.
  • The structuring element b1 is small-scale and preserves image detail well but denoises relatively poorly; the structuring element b2 is larger and denoises well but loses more detail.
  • A closing operation is first applied to the original image, followed by an opening operation, I = f·b2·b1, where I is the output image and f is a face image from the preliminary dataset.
  • In step 4 the idea of region segmentation is used to remove the lower part of the contour image that contains the mask, obtaining a local contour image.
  • The specific method is: obtain the binary contour of the image through the improved edge detection, smooth it with mean filtering, then call the findContours function of the OpenCV library to find edges and the rectangle function to draw rectangles enclosing the contours.
  • Among the output rectangles, select the one with the largest horizontal pixel extent in the image pixel coordinate system, or the one whose centre has the lowest vertical pixel position, and judge it to be the rectangle that contains the mask contour. Taking the vertical coordinate of this rectangle as the reference, remove the contour image below it to obtain the local contour image.
  • In step 5 contour features are extracted from the local contour image obtained in step 4; the contour features are used for preliminary screening in the recognition stage, and objects that pass it enter the candidate target library.
  • The basis for preliminary screening is to compute the matchShapes measure f of the two images; if f is smaller than the set threshold k, the recognized object passes the preliminary screening and proceeds to the next identification step. Here A denotes object 1, B denotes object 2, and the quantities compared are their Hu moment values. Hu invariant moments remain invariant under image rotation, scaling, translation and similar operations.
  • The parameters inside the matchShapes measure f use the first and second of the seven Hu invariant moments, which preserve invariance best.
  • The moments are built on the image centroid, x0 = m10/m00 and y0 = m01/m00, where the m_pq are the image moments.
  • In step 6 the coordinate information of the local contour image obtained in step 4 is combined with the images in the dataset constructed in step 1 to obtain local face images, from which scale-invariant features (SIFT) are extracted.
  • In step 7, using the idea of a pyramid-style hierarchical processing structure, the objects that pass the preliminary contour-feature screening of step 5 are taken as candidates; features are extracted from them with the feature extraction method of step 6, the output feature vector information is matched with the information in the database, and the recognition result is finally output. The basis for corner screening and matching is as follows:
  • N corner points are detected for the object A to be recognized; i is an object to be matched in the database, and f(i) denotes the number of corner points detected for the i-th object.
  • Z[f(i)] denotes the number of corner points of the i-th object successfully matched with A.
  • Z[f_k(i)] denotes the number of corner points successfully matched with A when the i-th object has been examined up to its k-th corner point.
  • Y[K_i, K_i+1] outputs the value of the object i with the smaller of K_i and K_i+1.
  • p_nk(m) is the similarity between the feature vectors of two corner points; a threshold P_α is set for matching, and if p_nk(m) > P_α the two corner points do not match.
  • P_α is set from experience and sample training, and the similarity is defined as the relative Euclidean distance between the feature vectors of object A's corner points and those of the matched object in the sample library.
  • p_nk(m) denotes the relative Euclidean distance between the n-th corner point of object A and the k-th corner point of an object in the sample library, for the m-th successfully matched pair.
  • To further raise the search speed, when computing p_nk(m) the distance over the first d dimensions is computed first; if the relative Euclidean distance over these d dimensions already exceeds the threshold P_α, the remaining dimensions are not computed. From experience, d is generally chosen smaller than the overall dimension D.
  • The Euclidean distance of the n-th corner point of object A and
  • the absolute Euclidean distance between the n-th corner point of object A and the k-th corner point of an object in the sample library are computed from the descriptors, where
  • R_n = (r_n1, r_n2, ..., r_nD) is the D-dimensional feature description vector of the n-th corner point of the recognized object and
  • S_k = (s_k1, s_k2, ..., s_kD) is the D-dimensional feature description vector of the k-th corner point of the sample-library object being compared.
  • The final output X is the number of the matched object.
  • When two sample-library objects match the same number of A's corner points, the similarities of every successfully matched corner point are accumulated for each, and the object with the smaller cumulative value is selected as the most similar to A. During corner matching, if, when a sample-library object reaches its k-th corner point, the number of corner points already matched with A plus all of its remaining detected corner points is smaller than the match count of the previous object, the remaining corner points are not matched.
  • The present invention addresses the current problem of face recognition while a mask is worn.
  • The improved YOLO network is first used to detect the mask.
  • A pyramid-style hierarchical processing structure is then adopted: in the preliminary screening stage,
  • the candidate target library is obtained by contour-feature screening; in the fine selection stage, improved scale-invariant features are extracted from the objects selected from the candidate target library, and the corner screening-and-matching algorithm is improved, which saves the time of corner feature extraction and matching for most of the database and significantly raises the feature extraction speed and matching accuracy of the SIFT algorithm. Fast and highly accurate recognition of faces, including masked faces, can thus be achieved.
  • Fig. 1 is a flow chart of labeling and establishing a sample library in the present invention.
  • Fig. 2 is a flowchart of the identification process of the present invention.
  • Fig. 3 is the overall network diagram of the improved YOLO V4 in the present invention.
  • Fig. 4 shows the depthwise convolution module in the backbone feature extraction network of the improved YOLO V4 network in the present invention.
  • Fig. 5 shows the elliptical structuring elements of sizes 3×3 and 5×5 in the present invention.
  • To solve face recognition while a mask is worn, this embodiment designs a fast, accurate and effective recognition method; the specific process is as follows:
  • Step 1: Construct a preliminary face image dataset by preprocessing public face image datasets containing masked faces together with face images captured with an image acquisition device.
  • In step 1 the face images are preprocessed.
  • The specific preprocessing is: select images with an upright face pose from the public face image dataset containing masked faces, and capture further images with the image acquisition device while ensuring the face position is relatively upright.
  • The selected images undergo denoising, image information enhancement, size normalization, rotation and similar operations; the final preliminary face image dataset contains, for multiple users, multiple face images with and without masks.
  • Step 2: Use the LabelImg tool to manually annotate the self-collected face images in the preliminary dataset, and save the images and label information files carrying the mask label.
  • In step 2 the face images captured with the image acquisition device are manually annotated with the LabelImg tool, and the images and label information files carrying the mask label are saved.
  • Step 3: Input the processed images into the improved YOLO V4 network for training, and output the detection result if a mask is detected.
  • In step 3 the YOLO V4 network is improved to train on the face images in the database.
  • A depthwise convolution module is used to improve the backbone feature extraction network, which raises the speed of mask detection after the improvement.
  • The specific method is: first apply a 1×1 convolution with BatchNorm normalization and Swish activation to the input feature layer to raise its dimension; then apply a depthwise separable convolution with a 3×3 or 5×5 kernel to the expanded feature layer, which enriches its semantic information; finally apply a 1×1 convolution with BatchNorm normalization and Swish activation to reduce the dimension and output the feature layer. For an input image of size x×y, the feature vectors at the three output scales P6, P7 and P8 give the mask-wearing result, where z is the number of channels in the final output.
  • Step 4: Perform improved edge detection on the images in the dataset constructed in step 1, and use the idea of region segmentation to remove the lower part of the contour image that contains the mask, obtaining a local contour image.
  • In step 4 improved edge detection is performed on the images in the dataset constructed in step 1.
  • In step 4 the idea of region segmentation is used to remove the lower part of the contour image that contains the mask to obtain a local contour image.
  • The specific method is: obtain the binary contour of the image through the improved edge detection, smooth it with mean filtering, then call the findContours function of the OpenCV library to find edges and the rectangle function to draw rectangles enclosing the contours.
  • Among the output rectangles, select the one with the largest horizontal pixel extent in the image pixel coordinate system, or the one whose centre has the lowest vertical pixel position, and judge it to be the rectangle that contains the mask contour; taking the vertical coordinate of this rectangle as the reference, remove the contour image below it to obtain the local contour image.
  • Step 5: Extract contour features from the local contour image obtained in step 4; in the recognition stage, objects that pass the preliminary screening enter the candidate target library, in preparation for subsequent accurate recognition.
  • In step 5 contour features are extracted from the local contour image obtained in step 4; the contour features are used for preliminary screening in the recognition stage, and objects that pass it enter the candidate target library.
  • The basis for preliminary screening is to compute the matchShapes measure f of the two images; if f is smaller than the set threshold k, the recognized object passes the preliminary screening and proceeds to the next identification step.
  • A denotes object 1,
  • B denotes object 2, and the quantities compared are their Hu moment values.
  • Hu invariant moments remain invariant under image rotation, scaling, translation and similar operations.
  • The parameters inside the matchShapes measure f use the first and second of the seven Hu invariant moments, which preserve invariance best.
  • The moments are built on the image centroid, x0 = m10/m00 and y0 = m01/m00, where the m_pq are the image moments.
  • Step 6: Combine the coordinate information of the local contour image obtained in step 4 with the images in the dataset constructed in step 1 to obtain local face images, extract scale-invariant features (SIFT) from them, reduce the dimensionality with principal component analysis, and save the output feature-point information into the corresponding database.
  • In step 6 the coordinate information of the local contour image obtained in step 4 is combined with the images in the dataset constructed in step 1 to obtain local face images.
  • After SIFT features are extracted from a local face image, all output corner feature vectors are combined into a matrix X = [x_1, x_2, ..., x_i, ..., x_n]^T, where x_i is the 128-dimensional feature vector of the i-th corner point of the recognized object. To speed up matching, the dimensionality of the output feature vectors is reduced to D. For this, principal component analysis is performed on the matrix X: each row of X is zero-centred by subtracting its mean, the covariance matrix is computed, its eigenvalues and eigenvectors are found, the eigenvectors are arranged row-wise in decreasing order of eigenvalue, the first D rows form the matrix P, and Y = PX is the final D-dimensional output.
  • Step 7: Input the target face image, complete the mask-wearing detection, apply the feature extraction method of step 6 to the objects that pass the preliminary contour-feature screening of step 5, match the output feature vector information with the information in the database, and finally output the recognition result.
  • In step 7, using the idea of a pyramid-style hierarchical processing structure, the objects that pass the preliminary contour-feature screening of step 5 are taken as candidates; features are extracted from them with the feature extraction method of step 6, and the output feature vector information is matched with the information in the database to finally output the recognition result.
  • In step 7 the feature extraction method of step 6 is applied to the objects that pass the preliminary contour-feature screening of step 5, the output feature vector information is matched with the information in the database, and the recognition result is finally output.
  • The basis for corner screening and matching is as follows:
  • N corner points are detected for the object A to be recognized; i is an object to be matched in the database, and f(i) denotes the number of corner points detected for the i-th object.
  • Z[f(i)] denotes the number of corner points of the i-th object successfully matched with A.
  • Z[f_k(i)] denotes the number of corner points successfully matched with A when the i-th object has been examined up to its k-th corner point.
  • Y[K_i, K_i+1] outputs the value of the object i with the smaller of K_i and K_i+1.
  • p_nk(m) is the similarity between the feature vectors of two corner points; a threshold P_α is set for matching, and if p_nk(m) > P_α the two corner points do not match.
  • P_α is set from experience and sample training, and the similarity is defined as the relative Euclidean distance between the feature vectors of object A's corner points and those of the matched object in the sample library.
  • p_nk(m) denotes the relative Euclidean distance between the n-th corner point of object A and the k-th corner point of an object in the sample library, for the m-th successfully matched pair. To further raise the search speed, when computing p_nk(m) the distance over the first d dimensions is computed first; if the relative Euclidean distance over these d dimensions already exceeds the threshold P_α, the remaining dimensions are not computed, and from experience d is generally chosen smaller than the overall dimension D.
  • The Euclidean distance of the n-th corner point of object A and
  • the absolute Euclidean distance between the n-th corner point of object A and the k-th corner point of an object in the sample library are computed from the descriptors, where
  • R_n = (r_n1, r_n2, ..., r_nD) is the D-dimensional feature description vector of the n-th corner point of the recognized object and
  • S_k = (s_k1, s_k2, ..., s_kD) is the D-dimensional feature description vector of the k-th corner point of the sample-library object being compared.
  • The final output X is the number of the matched object.
  • When two sample-library objects match the same number of A's corner points, the similarities of every successfully matched corner point are accumulated for each, and the object with the smaller cumulative value is selected as the most similar to A. During corner matching, if, when a sample-library object reaches its k-th corner point, the number of corner points already matched with A plus all of its remaining detected corner points is smaller than the match count of the previous object, the remaining corner points are not matched.
  • The present invention addresses the current problem of face recognition while a mask is worn: the improved YOLO network is first used to detect the mask, and a pyramid-style hierarchical processing structure is then adopted.
  • In the preliminary screening stage the candidate target library is obtained by contour-feature screening; in the fine selection stage, improved scale-invariant features are extracted from the objects selected from the candidate target library, and the corner screening-and-matching algorithm is improved, which saves the time of corner feature extraction and matching for most of the database and significantly raises the feature extraction speed and matching accuracy of the SIFT algorithm.
  • Fast and highly accurate recognition of faces, including masked faces, can thus be achieved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image recognition and specifically relates to a recognition method for a face in the mask-wearing state. An improved YOLO network is first used for mask detection. To improve recognition efficiency and speed, a pyramid-style hierarchical processing structure is then adopted: in the preliminary screening stage, a candidate target library is obtained by contour-feature screening; in the fine selection stage, improved scale-invariant features are extracted from objects selected from the candidate target library. The corner screening-and-matching algorithm is improved, which saves the time of corner feature extraction and matching for most of the database, significantly raises the feature extraction speed and matching accuracy of the SIFT algorithm, and enables fast and highly accurate recognition of faces, including faces wearing masks.

Description

Face recognition method in a mask-wearing state
Technical field
The invention belongs to the technical field of image recognition and specifically relates to a method for recognizing a face while a mask is worn.
Background art
With the vigorous development of machine vision and artificial intelligence, face recognition, as the fastest-growing and most promising biometric technique, has important applications in many fields, and face recognition under normal conditions is already quite mature. During the epidemic, wearing a mask became part of everyday life; removing the mask for face recognition carries a high risk, and the recognition process is both inconvenient and inefficient. Identity recognition while a face mask is worn therefore has considerable value and significance, and the demand for it is urgent.
Summary of the invention
The purpose of the present invention is to propose a recognition method for a face in the mask-wearing state, so that face recognition is more efficient and accurate when a mask is worn.
To achieve the above goal, the technical solution adopted by the present invention is:
A recognition method for a face in the mask-wearing state, whose specific implementation process includes:
Step 1: construct a preliminary face image dataset by preprocessing public face image datasets containing masked faces together with face images captured with an image acquisition device;
Step 2: use the LabelImg tool to manually annotate the self-collected face images in the preliminary dataset, and save the images and label information files carrying the mask label;
Step 3: input the processed images into the improved YOLO V4 network for training, and output the detection result if a mask is detected;
Step 4: perform improved edge detection on the images in the dataset constructed in step 1, and use the idea of region segmentation to remove the lower part of the contour image that contains the mask, obtaining a local contour image;
Step 5: extract contour features from the local contour image obtained in step 4; in the recognition stage, objects that pass the preliminary screening enter the candidate target library, in preparation for subsequent accurate recognition;
Step 6: combine the coordinate information of the local contour image obtained in step 4 with the images in the dataset constructed in step 1 to obtain local face images, extract scale-invariant features (SIFT) from them, reduce the dimensionality with principal component analysis, and save the output feature-point information into the corresponding database; in the recognition stage, extract features from the objects selected from the candidate target library after the screening of step 5;
Step 7: input the target face image, complete the mask-wearing detection, apply the feature extraction method of step 6 to the objects that pass the preliminary contour-feature screening of step 5, match the output feature vector information with the information in the database, and finally output the recognition result.
In the above technical solution, in step 1 the face images are preprocessed. The specific preprocessing is: select images with an upright face pose from the public face image dataset containing masked faces, and capture further images with an image acquisition device while ensuring the face position is relatively upright; the selected images undergo denoising, image information enhancement, size normalization, rotation and similar operations, and the finally constructed preliminary face image dataset contains, for multiple users, multiple face images with and without masks.
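For illustration only, a minimal Python/OpenCV sketch of this preprocessing chain (denoising, information enhancement, rotation, size normalization). The specific functions and parameters used here (non-local-means denoising, CLAHE, a 416×416 target size) are editorial assumptions; the patent does not name them.

```python
import cv2

def preprocess_face(img_bgr, target_size=(416, 416), angle_deg=0.0):
    # Denoise the photograph (assumed choice: non-local means)
    denoised = cv2.fastNlMeansDenoisingColored(img_bgr, None, 10, 10, 7, 21)

    # Enhance image information: CLAHE on the luminance channel (assumed choice)
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    # Small rotation to keep the face pose relatively upright
    h, w = enhanced.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    rotated = cv2.warpAffine(enhanced, m, (w, h))

    # Size normalization
    return cv2.resize(rotated, target_size)
```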
In step 2, the face images captured with the image acquisition device are manually annotated with the LabelImg tool, and the images and label information files carrying the mask label are saved.
In step 3, the YOLO V4 network is improved to train on the face images in the database. A depthwise convolution module is used to improve the backbone feature extraction network, which raises the speed of mask detection. The specific method is: first apply a 1×1 convolution with BatchNorm normalization and Swish activation to the input feature layer to raise its dimension; then apply a depthwise separable convolution with a 3×3 or 5×5 kernel to the expanded feature layer, which enriches its semantic information; finally apply a 1×1 convolution with BatchNorm normalization and Swish activation to reduce the dimension and output the feature layer. For an input image of size x×y, the feature vectors at the three scales output by P6, P7 and P8 (formula image not reproduced here) give the mask-wearing result, where z is the number of channels in the final output.
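A minimal PyTorch sketch of the depthwise convolution module described above: a 1×1 expansion convolution with BatchNorm and Swish (SiLU), a 3×3 or 5×5 depthwise separable convolution, and a 1×1 projection back down. The expansion ratio and channel counts are assumptions, since the patent does not specify them.

```python
import torch.nn as nn

class DepthwiseConvModule(nn.Module):
    """1x1 expand -> depthwise 3x3/5x5 -> 1x1 project, each with BatchNorm + Swish."""
    def __init__(self, in_ch, out_ch, kernel_size=3, expand_ratio=4):
        super().__init__()
        mid_ch = in_ch * expand_ratio            # assumed expansion ratio
        self.block = nn.Sequential(
            # 1x1 convolution raises the channel dimension
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.SiLU(),                           # Swish activation
            # depthwise separable convolution enriches semantic information
            nn.Conv2d(mid_ch, mid_ch, kernel_size, padding=kernel_size // 2,
                      groups=mid_ch, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.SiLU(),
            # 1x1 convolution reduces the dimension and outputs the feature layer
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return self.block(x)
```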
In step 4, improved edge detection is applied to the images in the dataset constructed in step 1. The specific method is: mathematical morphology is integrated into the traditional Canny edge detection algorithm, using elliptical structuring elements of sizes 3×3 and 5×5. The structuring element b1 is small-scale and preserves image detail well but denoises relatively poorly; the structuring element b2 is larger and denoises well but loses more detail. A closing operation is first applied to the original image, followed by an opening operation, I = f·b2·b1, where I is the output image and f is a face image from the preliminary dataset.
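A sketch of this improved edge detection under stated assumptions: elliptical structuring elements b1 (3×3) and b2 (5×5), a closing followed by an opening, then the standard Canny detector. The Canny thresholds, and the assignment of b2 to the closing and b1 to the opening, are assumptions; the patent only states I = f·b2·b1.

```python
import cv2

def improved_edge_detection(face_gray):
    # Elliptical structuring elements: b1 (3x3) keeps detail, b2 (5x5) denoises better
    b1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    b2 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    # Closing first, then opening (assumed: b2 for closing, b1 for opening)
    closed = cv2.morphologyEx(face_gray, cv2.MORPH_CLOSE, b2)
    opened = cv2.morphologyEx(closed, cv2.MORPH_OPEN, b1)

    # Standard Canny on the morphologically filtered image; thresholds are assumed
    return cv2.Canny(opened, 50, 150)
```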
In step 4, the idea of region segmentation is used to remove the lower part of the contour image that contains the mask, obtaining a local contour image. The specific method is: obtain the binary contour of the image through the improved edge detection, smooth it with mean filtering, then call the findContours function of the OpenCV library to find edges and the rectangle function to draw rectangles enclosing the contours. Among the output rectangles, select the one with the largest horizontal pixel extent in the image pixel coordinate system, or the one whose centre has the lowest vertical pixel position, and judge it to be the rectangle that contains the mask contour. Taking the vertical coordinate of this rectangle as the reference, remove the contour image below it to obtain the local contour image.
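A sketch of this region-segmentation step, assuming OpenCV 4.x (where findContours returns two values) and interpreting "widest rectangle, or lowest rectangle centre" as a simple selection rule; cropping at the top edge of the mask rectangle is likewise an interpretation of "taking the vertical coordinate of the rectangle as the reference".

```python
import cv2

def remove_mask_region(edge_img):
    # Smooth the binary contour image with a mean filter
    smoothed = cv2.blur(edge_img, (3, 3))

    # findContours treats non-zero pixels as foreground; boundingRect plays the
    # role of the rectangle enclosing each contour
    contours, _ = cv2.findContours(smoothed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]      # (x, y, w, h)
    if not boxes:
        return edge_img

    # Candidate mask box: widest box; ties broken by the lowest rectangle centre
    mask_box = max(boxes, key=lambda b: (b[2], b[1] + b[3] / 2))
    y_ref = mask_box[1]

    # Keep only the contour image above the mask rectangle (the local contour image)
    return edge_img[:y_ref, :]
```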
In step 5, contour features are extracted from the local contour image obtained in step 4; the contour features are used for preliminary screening in the recognition stage, and objects that pass it enter the candidate target library. The basis for preliminary screening is to compute the matchShapes measure f of the two images (formula image not reproduced here); if f is smaller than the set threshold k, the recognized object passes the preliminary screening and proceeds to the next identification step. Here A denotes object 1, B denotes object 2, and the compared quantities (formula image not reproduced here) are the Hu moment values of the objects. Hu invariant moments remain invariant under image rotation, scaling, translation and similar operations; the parameters inside the matchShapes measure f use the first and second of the seven Hu invariant moments, which preserve invariance best. The moments are defined with (formula images not reproduced here) r = (q + p)/2 + 1 and the centroid coordinates x0 = m10/m00, y0 = m01/m00.
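A sketch of the contour-based preliminary screening, assuming OpenCV's cv2.matchShapes as the measure f built from the Hu invariant moments; the comparison method (CONTOURS_MATCH_I1) and the threshold k = 0.1 are assumed example values, not taken from the patent.

```python
import cv2

def hu_moments(binary_contour_img):
    # Hu invariant moments of a binary contour image: invariant to rotation,
    # scaling and translation of the shape
    return cv2.HuMoments(cv2.moments(binary_contour_img)).flatten()

def passes_preliminary_screening(contour_a, contour_b, k=0.1):
    # cv2.matchShapes compares two shapes through their Hu-moment signatures;
    # a smaller value of f means more similar shapes
    f = cv2.matchShapes(contour_a, contour_b, cv2.CONTOURS_MATCH_I1, 0.0)
    return f < k   # objects with f below the threshold enter the candidate library
```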
In step 6, the coordinate information of the local contour image obtained in step 4 is combined with the images in the dataset constructed in step 1 to obtain local face images. After scale-invariant features (SIFT) are extracted from a local face image, all output corner feature vectors are combined into a matrix X = [x_1, x_2, ..., x_i, ..., x_n]^T, where i indexes the i-th corner point of the recognized object and x_i is its 128-dimensional feature vector. To raise the matching speed, the dimensionality of the output feature vectors is reduced to D. For this, principal component analysis is applied to X, as follows: zero-centre each row of X by subtracting the row mean; compute the covariance matrix (formula image not reproduced here); find the eigenvalues of the covariance matrix and the corresponding eigenvectors; arrange the eigenvectors row-wise from top to bottom in decreasing order of eigenvalue and take the first D rows to form the matrix P; Y = PX is then the final D-dimensional output feature vector.
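A sketch of this step-6 feature extraction, assuming OpenCV's SIFT implementation (cv2.SIFT_create, available in OpenCV ≥ 4.4) and interpreting the PCA recipe as a projection of the 128-dimensional descriptors onto their top-D principal directions. In practice the projection matrix P would be learned once from the database rather than per image; D = 32 is an assumed value.

```python
import cv2
import numpy as np

def sift_descriptors(local_face_gray):
    # 128-D SIFT descriptors, one per detected corner/keypoint
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(local_face_gray, None)
    return desc                                   # shape (n_corners, 128) or None

def pca_reduce(descriptors, d_out=32):
    # Rows of X are the corner feature vectors; centre them and project onto the
    # eigenvectors of the covariance matrix with the largest eigenvalues (matrix P)
    x = descriptors.astype(np.float64)
    x -= x.mean(axis=0, keepdims=True)
    cov = np.cov(x, rowvar=False)                 # 128 x 128 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    p = eigvecs[:, np.argsort(eigvals)[::-1][:d_out]].T   # top-D rows form P
    return x @ p.T                                # Y = PX, D-dimensional descriptors
```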
In step 7, using the idea of a pyramid-style hierarchical processing structure, the objects that pass the preliminary contour-feature screening of step 5 are taken as candidates; features are extracted from them with the feature extraction method of step 6, the output feature vector information is matched with the information in the database, and the recognition result is finally output. The basis for corner screening and matching is as follows (formula image not reproduced here):
For the object A to be recognized, N corner points are detected; i is an object to be matched in the database, and f(i) denotes the number of corner points detected for the i-th object. Z[f(i)] denotes the number of corner points of the i-th object successfully matched with A. Z[f_k(i)] denotes the number of corner points successfully matched with A when the i-th object has been examined up to its k-th corner point. Y[K_i, K_i+1] outputs the value of the object i with the smaller of K_i and K_i+1.
(Formula image not reproduced here.) p_nk(m) is the similarity between the feature vectors of two corner points; a threshold P_α is set for matching, and if p_nk(m) > P_α the two corner points do not match. P_α is set from experience and sample training, and the similarity is defined as the relative Euclidean distance between the feature vectors of object A's corner points and those of the matched object in the sample library.
(Formula image not reproduced here.) p_nk(m) denotes the relative Euclidean distance between the n-th corner point of object A and the k-th corner point of an object in the sample library, for the m-th successfully matched pair.
To further raise the search speed, when computing p_nk(m), the distance over the first d dimensions is computed first (formula image not reproduced here); if the relative Euclidean distance over these d dimensions already exceeds the threshold P_α, the remaining dimensions are not computed, and from experience d is generally chosen smaller than the overall dimension D.
The Euclidean distance of the n-th corner point of object A is (formula image not reproduced here), and the absolute Euclidean distance between the n-th corner point of object A and the k-th corner point of an object in the sample library is (formula image not reproduced here), where R_n = (r_n1, r_n2, ..., r_nD) is the D-dimensional feature description vector of the n-th corner point of the recognized object and S_k = (s_k1, s_k2, ..., s_kD) is the D-dimensional feature description vector of the k-th corner point of the sample-library object being compared. The final output X is the number of the matched object.
Specifically: N corner points are detected for the object A to be recognized and M corner points for an object in the sample library. When that object matches more of A's N corner points than the previous object in the sample library, it is taken as the object most similar to A; if it matches the same number as the previous object, the similarities of every successfully matched corner point are accumulated for each, and the object with the smaller cumulative value is selected as the most similar to A. During corner matching, if, when a sample-library object reaches its k-th corner point, the number of corner points already matched with A plus all of its remaining detected corner points is smaller than the match count of the previous object, the remaining corner points are not matched.
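A sketch of this corner screening-and-matching strategy with its two early exits: the partial check over the first d descriptor dimensions, and abandoning an object once it can no longer beat the current best match count. The definition of the "relative" Euclidean distance (absolute distance divided by the norm of the sample-library descriptor) and the values of P_α and d are assumptions, since the underlying formulas are not reproduced here.

```python
import numpy as np

def match_object(desc_a, database, p_alpha=0.6, d=32):
    """desc_a: (N, D) descriptors of object A; database: {obj_id: (M_i, D) array}.
    Returns the id of the most similar database object, or None."""
    best_id, best_count, best_sum = None, 0, np.inf

    for obj_id, desc_i in database.items():
        matched, dist_sum = 0, 0.0
        for k, s_k in enumerate(desc_i):
            # Abandon this object if even matching every remaining corner
            # could not beat the best match count found so far
            if matched + (len(desc_i) - k) < best_count:
                break
            norm_sk = np.linalg.norm(s_k) + 1e-12
            # Partial check over the first d dimensions only
            partial = np.linalg.norm(desc_a[:, :d] - s_k[:d], axis=1) / norm_sk
            cand = np.where(partial <= p_alpha)[0]
            if cand.size == 0:
                continue
            # Full relative Euclidean distance for the surviving candidates
            rel = np.linalg.norm(desc_a[cand] - s_k, axis=1) / norm_sk
            if rel.min() < p_alpha:
                matched += 1
                dist_sum += rel.min()
        # More matched corners wins; equal counts broken by the smaller distance sum
        if matched and (matched > best_count or
                        (matched == best_count and dist_sum < best_sum)):
            best_id, best_count, best_sum = obj_id, matched, dist_sum
    return best_id
```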
Beneficial effects of the invention: aiming at the current problem of face recognition while a mask is worn, the invention first uses an improved YOLO network for mask detection and, to improve recognition efficiency and speed, then adopts a pyramid-style hierarchical processing structure. In the preliminary screening stage the candidate target library is obtained by contour-feature screening; in the fine selection stage, improved scale-invariant features are extracted from the objects selected from the candidate target library, and the corner screening-and-matching algorithm is improved, which saves the time of corner feature extraction and matching for most of the database and significantly raises the feature extraction speed and matching accuracy of the SIFT algorithm. Fast and highly accurate recognition of faces, including masked faces, can thus be achieved.
Brief description of the drawings
Fig. 1 is a flow chart of annotation and sample library construction in the present invention.
Fig. 2 is a flow chart of the recognition process of the present invention.
Fig. 3 is the overall network diagram of the improved YOLO V4 in the present invention.
Fig. 4 shows the depthwise convolution module in the backbone feature extraction network of the improved YOLO V4 network in the present invention.
Fig. 5 shows the elliptical structuring elements of sizes 3×3 and 5×5 in the present invention.
Detailed description of the embodiments
To deepen the understanding of the present invention, it is described in further detail below with reference to the drawings and an embodiment; the embodiment only serves to explain the invention and does not limit its scope of protection.
As shown in Figs. 1-5, to solve face recognition while a mask is worn, this embodiment designs a fast, accurate and effective recognition method; the specific process is as follows:
Step 1: construct a preliminary face image dataset by preprocessing public face image datasets containing masked faces together with face images captured with an image acquisition device.
In step 1, the face images are preprocessed. The specific preprocessing is: select images with an upright face pose from the public face image dataset containing masked faces, and capture further images with an image acquisition device while ensuring the face position is relatively upright; the selected images undergo denoising, image information enhancement, size normalization, rotation and similar operations, and the finally constructed preliminary face image dataset contains, for multiple users, multiple face images with and without masks.
Step 2: use the LabelImg tool to manually annotate the self-collected face images in the preliminary dataset, and save the images and label information files carrying the mask label.
In step 2, the face images captured with the image acquisition device are manually annotated with the LabelImg tool, and the images and label information files carrying the mask label are saved.
Step 3: input the processed images into the improved YOLO V4 network for training, and output the detection result if a mask is detected.
In step 3, the YOLO V4 network is improved to train on the face images in the database. A depthwise convolution module is used to improve the backbone feature extraction network, which raises the speed of mask detection. The specific method is: first apply a 1×1 convolution with BatchNorm normalization and Swish activation to the input feature layer to raise its dimension; then apply a depthwise separable convolution with a 3×3 or 5×5 kernel to the expanded feature layer, which enriches its semantic information; finally apply a 1×1 convolution with BatchNorm normalization and Swish activation to reduce the dimension and output the feature layer. For an input image of size x×y, the feature vectors at the three scales output by P6, P7 and P8 (formula image not reproduced here) give the mask-wearing result, where z is the number of channels in the final output.
Step 4: perform improved edge detection on the images in the dataset constructed in step 1, and use the idea of region segmentation to remove the lower part of the contour image that contains the mask, obtaining a local contour image.
In step 4, improved edge detection is applied to the images in the dataset constructed in step 1. The specific method is: mathematical morphology is integrated into the traditional Canny edge detection algorithm, using elliptical structuring elements of sizes 3×3 and 5×5. The structuring element b1 is small-scale and preserves image detail well but denoises relatively poorly; the structuring element b2 is larger and denoises well but loses more detail. A closing operation is first applied to the original image, followed by an opening operation, I = f·b2·b1, where I is the output image and f is a face image from the preliminary dataset.
In step 4, the idea of region segmentation is used to remove the lower part of the contour image that contains the mask, obtaining a local contour image. The specific method is: obtain the binary contour of the image through the improved edge detection, smooth it with mean filtering, then call the findContours function of the OpenCV library to find edges and the rectangle function to draw rectangles enclosing the contours. Among the output rectangles, select the one with the largest horizontal pixel extent in the image pixel coordinate system, or the one whose centre has the lowest vertical pixel position, and judge it to be the rectangle that contains the mask contour. Taking the vertical coordinate of this rectangle as the reference, remove the contour image below it to obtain the local contour image.
Step 5: extract contour features from the local contour image obtained in step 4; in the recognition stage, objects that pass the preliminary screening enter the candidate target library, in preparation for subsequent accurate recognition.
In step 5, contour features are extracted from the local contour image obtained in step 4; the contour features are used for preliminary screening in the recognition stage, and objects that pass it enter the candidate target library. The basis for preliminary screening is to compute the matchShapes measure f of the two images (formula image not reproduced here); if f is smaller than the set threshold k, the recognized object passes the preliminary screening and proceeds to the next identification step. A denotes object 1, B denotes object 2, and the compared quantities (formula image not reproduced here) are the Hu moment values of the objects. Hu invariant moments remain invariant under image rotation, scaling, translation and similar operations; the parameters inside the matchShapes measure f use the first and second of the seven Hu invariant moments, which preserve invariance best. The moments are defined with (formula images not reproduced here) r = (q + p)/2 + 1 and the centroid coordinates x0 = m10/m00, y0 = m01/m00.
Step 6: combine the coordinate information of the local contour image obtained in step 4 with the images in the dataset constructed in step 1 to obtain local face images, extract scale-invariant features (SIFT) from them, reduce the dimensionality with principal component analysis, and save the output feature-point information into the corresponding database. In the recognition stage, extract features from the objects selected from the candidate target library after the screening of step 5.
In step 6, the coordinate information of the local contour image obtained in step 4 is combined with the images in the dataset constructed in step 1 to obtain local face images.
In step 6, after scale-invariant features (SIFT) are extracted from an obtained local face image, all output corner feature vectors are combined into a matrix X = [x_1, x_2, ..., x_i, ..., x_n]^T, where i indexes the i-th corner point of the recognized object and x_i is its 128-dimensional feature vector. To raise the matching speed, the dimensionality of the output feature vectors is reduced to D. For this, principal component analysis is applied to X, as follows: zero-centre each row of X by subtracting the row mean; compute the covariance matrix (formula image not reproduced here); find the eigenvalues of the covariance matrix and the corresponding eigenvectors; arrange the eigenvectors row-wise from top to bottom in decreasing order of eigenvalue and take the first D rows to form the matrix P; Y = PX is then the final D-dimensional output feature vector.
Step 7: input the target face image, complete the mask-wearing detection, apply the feature extraction method of step 6 to the objects that pass the preliminary contour-feature screening of step 5, match the output feature vector information with the information in the database, and finally output the recognition result.
In step 7, using the idea of a pyramid-style hierarchical processing structure, the objects that pass the preliminary contour-feature screening of step 5 are taken as candidates; features are extracted from them with the feature extraction method of step 6, and the output feature vector information is matched with the information in the database to finally output the recognition result.
In step 7, the feature extraction method of step 6 is applied to the objects that pass the preliminary contour-feature screening of step 5, the output feature vector information is matched with the information in the database, and the recognition result is finally output. The basis for corner screening and matching is as follows (formula image not reproduced here):
For the object A to be recognized, N corner points are detected; i is an object to be matched in the database, and f(i) denotes the number of corner points detected for the i-th object. Z[f(i)] denotes the number of corner points of the i-th object successfully matched with A. Z[f_k(i)] denotes the number of corner points successfully matched with A when the i-th object has been examined up to its k-th corner point. Y[K_i, K_i+1] outputs the value of the object i with the smaller of K_i and K_i+1.
(Formula image not reproduced here.) p_nk(m) is the similarity between the feature vectors of two corner points; a threshold P_α is set for matching, and if p_nk(m) > P_α the two corner points do not match. P_α is set from experience and sample training, and the similarity is defined as the relative Euclidean distance between the feature vectors of object A's corner points and those of the matched object in the sample library.
(Formula image not reproduced here.) p_nk(m) denotes the relative Euclidean distance between the n-th corner point of object A and the k-th corner point of an object in the sample library, for the m-th successfully matched pair.
To further raise the search speed, when computing p_nk(m), the distance over the first d dimensions is computed first (formula images not reproduced here); if the relative Euclidean distance over these d dimensions already exceeds the threshold P_α, the remaining dimensions are not computed, and from experience d is generally chosen smaller than the overall dimension D.
The Euclidean distance of the n-th corner point of object A is (formula image not reproduced here), and the absolute Euclidean distance between the n-th corner point of object A and the k-th corner point of an object in the sample library is (formula image not reproduced here), where R_n = (r_n1, r_n2, ..., r_nD) is the D-dimensional feature description vector of the n-th corner point of the recognized object and S_k = (s_k1, s_k2, ..., s_kD) is the D-dimensional feature description vector of the k-th corner point of the sample-library object being compared.
The final output X is the number of the matched object.
Specifically: N corner points are detected for the object A to be recognized and M corner points for an object in the sample library. When that object matches more of A's N corner points than the previous object in the sample library, it is taken as the object most similar to A; if it matches the same number as the previous object, the similarities of every successfully matched corner point are accumulated for each, and the object with the smaller cumulative value is selected as the most similar to A. During corner matching, if, when a sample-library object reaches its k-th corner point, the number of corner points already matched with A plus all of its remaining detected corner points is smaller than the match count of the previous object, the remaining corner points are not matched.
In summary, aiming at the current problem of face recognition while a mask is worn, the invention first uses an improved YOLO network for mask detection and, to improve recognition efficiency and speed, then adopts a pyramid-style hierarchical processing structure. In the preliminary screening stage the candidate target library is obtained by contour-feature screening; in the fine selection stage, improved scale-invariant features are extracted from the objects selected from the candidate target library, and the corner screening-and-matching algorithm is improved, which saves the time of corner feature extraction and matching for most of the database and significantly raises the feature extraction speed and matching accuracy of the SIFT algorithm. Fast and highly accurate recognition of faces, including masked faces, can thus be achieved.
The basic principles, main features and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited by the above embodiment; the embodiment and the description only illustrate the principles of the invention, and various changes and improvements can be made without departing from the spirit and scope of the invention, all of which fall within the scope of the claimed invention. The scope of protection of the present invention is defined by the appended claims and their equivalents.

Claims (8)

  1. A recognition method for a face in the mask-wearing state, characterized in that the specific implementation process comprises the following steps:
    Step 1: constructing a preliminary face image dataset by preprocessing public face image datasets containing masked faces together with face images captured with an image acquisition device;
    Step 2: using the LabelImg tool to manually annotate the self-collected face images in the preliminary dataset, and saving the images and label information files carrying the mask label;
    Step 3: inputting the processed images into the improved YOLO V4 network for training, and outputting the detection result if a mask is detected;
    Step 4: performing improved edge detection on the images in the dataset constructed in step 1, and using the idea of region segmentation to remove the lower part of the contour image that contains the mask, obtaining a local contour image;
    Step 5: extracting contour features from the local contour image obtained in step 4; in the recognition stage, objects that pass the preliminary screening enter the candidate target library, in preparation for subsequent accurate recognition;
    Step 6: combining the coordinate information of the local contour image obtained in step 4 with the images in the dataset constructed in step 1 to obtain local face images, extracting scale-invariant features from them, reducing the dimensionality with principal component analysis, and saving the output feature-point information into the corresponding database; in the recognition stage, extracting features from the objects selected from the candidate target library after the screening of step 5;
    Step 7: inputting the target face image, completing the mask-wearing detection, applying the feature extraction method of step 6 to the objects that pass the preliminary contour-feature screening of step 5, matching the output feature vector information with the information in the database, and finally outputting the recognition result.
  2. The recognition method for a face in the mask-wearing state according to claim 1, characterized in that in step 1 the face images are preprocessed as follows: images with an upright face pose are selected from the public face image dataset containing masked faces; further images are captured with an image acquisition device while ensuring the face position is relatively upright; denoising, image information enhancement, size normalization and rotation operations are applied to the selected images; and the finally constructed preliminary face image dataset contains, for multiple users, multiple face images with and without masks.
  3. The recognition method for a face in the mask-wearing state according to claim 1, characterized in that in step 3 the YOLO V4 network is improved to train on the face images in the database, a depthwise convolution module being used to improve the backbone feature extraction network, as follows: a 1×1 convolution with BatchNorm normalization and Swish activation is first applied to the input feature layer to raise its dimension; a depthwise separable convolution with a 3×3 or 5×5 kernel is then applied to the expanded feature layer, enriching its semantic information; a 1×1 convolution with BatchNorm normalization and Swish activation is finally applied to reduce the dimension and output the feature layer; for an input image of size x×y, the feature vectors at the three scales output by P6, P7 and P8 (formula image not reproduced here) give the mask-wearing result, z being the number of channels in the final output.
  4. The recognition method for a face in the mask-wearing state according to claim 1, characterized in that in step 4 improved edge detection is applied to the images in the dataset constructed in step 1, as follows: mathematical morphology is integrated into the traditional Canny edge detection algorithm, using elliptical structuring elements of sizes 3×3 and 5×5, where the structuring element b1 is small-scale and the structuring element b2 is larger; a closing operation is first applied to the original image, followed by an opening operation, I = f·b2·b1, where I is the output image and f is a face image from the preliminary dataset.
  5. The recognition method for a face in the mask-wearing state according to claim 1, characterized in that in step 4 the idea of region segmentation is used to remove the lower part of the contour image that contains the mask to obtain a local contour image, as follows: the binary contour of the image is obtained through the improved edge detection and smoothed with mean filtering; the findContours function of the OpenCV library is called to find edges and the rectangle function to draw rectangles enclosing the contours; among the output rectangles, the one with the largest horizontal pixel extent in the image pixel coordinate system, or the one whose centre has the lowest vertical pixel position, is selected and judged to be the rectangle containing the mask contour; and, taking the vertical coordinate of this rectangle as the reference, the contour image below it is removed to obtain the local contour image.
  6. The recognition method for a face in the mask-wearing state according to claim 1, characterized in that in step 5 contour features are extracted from the local contour image obtained in step 4, preliminary screening is performed on the contour features in the recognition stage, and objects that pass it enter the candidate target library, the basis for preliminary screening being: the matchShapes measure f of the two images is computed (formula image not reproduced here); if f is smaller than the set threshold k, the recognized object passes the preliminary screening and proceeds to the next identification step, where A denotes object 1, B denotes object 2, and the compared quantities (formula image not reproduced here) are the Hu moment values of the objects; Hu invariant moments remain invariant under image rotation, scaling, translation and similar operations, and the parameters inside the matchShapes measure f use the first and second of the seven Hu invariant moments, which preserve invariance best, the moments being defined with (formula images not reproduced here) r = (q + p)/2 + 1 and the centroid coordinates x0 = m10/m00, y0 = m01/m00.
  7. The recognition method for a face in the mask-wearing state according to claim 1, characterized in that in step 6, after scale-invariant features are extracted from an obtained local face image, all output corner feature vectors are combined into a matrix X = [x_1, x_2, ..., x_i, ..., x_n]^T, where i indexes the i-th corner point of the recognized object and x_i is its 128-dimensional feature vector; to raise the matching speed, the dimensionality of the output feature vectors is reduced to D, for which principal component analysis is applied to X, as follows: each row of X is zero-centred by subtracting the row mean; the covariance matrix is computed (formula image not reproduced here); the eigenvalues of the covariance matrix and the corresponding eigenvectors are found; the eigenvectors are arranged row-wise from top to bottom in decreasing order of eigenvalue, and the first D rows form the matrix P; Y = PX is then the final D-dimensional output feature vector.
  8. The recognition method for a face in the mask-wearing state according to claim 1, characterized in that in step 7 the feature extraction method of step 6 is applied to the objects that pass the preliminary contour-feature screening of step 5, the output feature vector information is matched with the information in the database, and the recognition result is finally output, the basis for corner screening and matching being as follows (formula image not reproduced here):
    for the object A to be recognized, N corner points are detected; i is an object to be matched in the database; f(i) denotes the number of corner points detected for the i-th object; Z[f(i)] denotes the number of corner points of the i-th object successfully matched with A; Z[f_k(i)] denotes the number of corner points successfully matched with A when the i-th object has been examined up to its k-th corner point; Y[K_i, K_i+1] outputs the value of the object i with the smaller of K_i and K_i+1;
    (formula image not reproduced here) p_nk(m) is the similarity between the feature vectors of two corner points; a threshold P_α is set for matching, and if p_nk(m) > P_α the two corner points do not match; P_α is set from experience and sample training, and the similarity is defined as the relative Euclidean distance between the feature vectors of object A's corner points and those of the matched object in the sample library;
    (formula image not reproduced here) p_nk(m) denotes the relative Euclidean distance between the n-th corner point of object A and the k-th corner point of an object in the sample library, for the m-th successfully matched pair;
    to further raise the search speed, when computing p_nk(m), the distance over the first d dimensions is computed first (formula image not reproduced here); if the relative Euclidean distance over these d dimensions already exceeds the threshold P_α, the remaining dimensions are not computed, and from experience d is generally chosen smaller than the overall dimension D;
    the Euclidean distance of the n-th corner point of object A is (formula image not reproduced here);
    the absolute Euclidean distance between the n-th corner point of object A and the k-th corner point of an object in the sample library is (formula image not reproduced here);
    R_n = (r_n1, r_n2, ..., r_nD) is the D-dimensional feature description vector of the n-th corner point of the recognized object,
    S_k = (s_k1, s_k2, ..., s_kD) is the D-dimensional feature description vector of the k-th corner point of the sample-library object being compared, and the final output X is the number of the matched object.
PCT/CN2022/104572 2021-12-06 2022-07-08 一种人脸口罩佩戴状态下的识别方法 WO2023103372A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
ZA2022/13209A ZA202213209B (en) 2021-12-06 2022-12-06 Face recognition method in mask wearing state

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111478584.XA CN114359998B (zh) 2021-12-06 2021-12-06 一种人脸口罩佩戴状态下的识别方法
CN202111478584.X 2021-12-06

Publications (1)

Publication Number Publication Date
WO2023103372A1 true WO2023103372A1 (zh) 2023-06-15

Family

ID=81098160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/104572 WO2023103372A1 (zh) 2021-12-06 2022-07-08 一种人脸口罩佩戴状态下的识别方法

Country Status (3)

Country Link
CN (1) CN114359998B (zh)
WO (1) WO2023103372A1 (zh)
ZA (1) ZA202213209B (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117668669A (zh) * 2024-02-01 2024-03-08 齐鲁工业大学(山东省科学院) 基于改进YOLOv7的管道安全监测方法及系统
CN117744745A (zh) * 2023-12-29 2024-03-22 江苏理工学院 一种基于YOLOv5网络模型的图像优化方法及优化系统

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359998B (zh) * 2021-12-06 2024-03-15 江苏理工学院 一种人脸口罩佩戴状态下的识别方法
CN115619410B (zh) * 2022-10-19 2024-01-26 闫雪 自适应金融支付平台
CN116452667B (zh) * 2023-06-16 2023-08-22 成都实时技术股份有限公司 一种基于图像处理的目标识别与定位方法

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985212A (zh) * 2018-07-06 2018-12-11 深圳市科脉技术股份有限公司 人脸识别方法及装置
WO2019134327A1 (zh) * 2018-01-03 2019-07-11 东北大学 一种基于边缘检测与sift的人脸表情识别特征提取方法
CN111460962A (zh) * 2020-03-27 2020-07-28 武汉大学 一种口罩人脸识别方法及系统
CN111598047A (zh) * 2020-05-28 2020-08-28 重庆康普达科技有限公司 一种人脸识别方法
CN112418177A (zh) * 2020-12-09 2021-02-26 南京甄视智能科技有限公司 人脸识别方法与系统
CN112487886A (zh) * 2020-11-16 2021-03-12 北京大学 一种有遮挡的人脸识别方法、装置、存储介质及终端
JP2021060866A (ja) * 2019-10-08 2021-04-15 キヤノン株式会社 情報処理装置、情報処理方法、及びプログラム
CN114359998A (zh) * 2021-12-06 2022-04-15 江苏理工学院 一种人脸口罩佩戴状态下的识别方法

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101741761B1 (ko) * 2015-12-04 2017-05-30 광운대학교 산학협력단 멀티 프레임 기반 건물 인식을 위한 특징점 분류 방법
CN108491773B (zh) * 2018-03-12 2022-11-08 中国工商银行股份有限公司 一种识别方法及系统
WO2020248096A1 (zh) * 2019-06-10 2020-12-17 哈尔滨工业大学(深圳) 基于局部特征的三维人脸识别方法和系统
CN111768543A (zh) * 2020-06-29 2020-10-13 杭州翔毅科技有限公司 基于人脸识别的通行管理方法、设备、存储介质及装置
CN111914748B (zh) * 2020-07-31 2023-10-27 平安科技(深圳)有限公司 人脸识别方法、装置、电子设备及计算机可读存储介质

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019134327A1 (zh) * 2018-01-03 2019-07-11 东北大学 一种基于边缘检测与sift的人脸表情识别特征提取方法
CN108985212A (zh) * 2018-07-06 2018-12-11 深圳市科脉技术股份有限公司 人脸识别方法及装置
JP2021060866A (ja) * 2019-10-08 2021-04-15 キヤノン株式会社 情報処理装置、情報処理方法、及びプログラム
CN111460962A (zh) * 2020-03-27 2020-07-28 武汉大学 一种口罩人脸识别方法及系统
CN111598047A (zh) * 2020-05-28 2020-08-28 重庆康普达科技有限公司 一种人脸识别方法
CN112487886A (zh) * 2020-11-16 2021-03-12 北京大学 一种有遮挡的人脸识别方法、装置、存储介质及终端
CN112418177A (zh) * 2020-12-09 2021-02-26 南京甄视智能科技有限公司 人脸识别方法与系统
CN114359998A (zh) * 2021-12-06 2022-04-15 江苏理工学院 一种人脸口罩佩戴状态下的识别方法

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117744745A (zh) * 2023-12-29 2024-03-22 江苏理工学院 一种基于YOLOv5网络模型的图像优化方法及优化系统
CN117668669A (zh) * 2024-02-01 2024-03-08 齐鲁工业大学(山东省科学院) 基于改进YOLOv7的管道安全监测方法及系统
CN117668669B (zh) * 2024-02-01 2024-04-19 齐鲁工业大学(山东省科学院) 基于改进YOLOv7的管道安全监测方法及系统

Also Published As

Publication number Publication date
CN114359998A (zh) 2022-04-15
ZA202213209B (en) 2023-08-30
CN114359998B (zh) 2024-03-15

Similar Documents

Publication Publication Date Title
WO2023103372A1 (zh) 一种人脸口罩佩戴状态下的识别方法
WO2019134327A1 (zh) 一种基于边缘检测与sift的人脸表情识别特征提取方法
CN109389074B (zh) 一种基于人脸特征点提取的表情识别方法
CN108805076B (zh) 环境影响评估报告书表格文字的提取方法及系统
Wiskott et al. Face recognition by elastic bunch graph matching
CN107330397A (zh) 一种基于大间隔相对距离度量学习的行人重识别方法
CN111126240B (zh) 一种三通道特征融合人脸识别方法
CN104134061A (zh) 一种基于特征融合的支持向量机的数字手势识别方法
CN106127193B (zh) 一种人脸图像识别方法
Bagchi et al. Robust 3D face recognition in presence of pose and partial occlusions or missing parts
Kheirkhah et al. A hybrid face detection approach in color images with complex background
Weerasekera et al. Robust asl fingerspelling recognition using local binary patterns and geometric features
Dhinesh et al. Detection of leaf disease using principal component analysis and linear support vector machine
Ahmed et al. Intelligent techniques for matching palm vein images
CN112101293A (zh) 人脸表情的识别方法、装置、设备及存储介质
Septiarini et al. Analysis of color and texture features for samarinda sarong classification
CN112418210A (zh) 一种杆塔巡检信息智能分类方法
Soni et al. A Review of Recent Advances Methodologies for Face Detection
Xu et al. Car detection using deformable part models with composite features
Yi et al. Face detection method based on skin color segmentation and facial component localization
CN110909678B (zh) 一种基于宽度学习网络特征提取的人脸识别方法及系统
Wang et al. Face detection based on color template and least square matching method
Bhat et al. Restoration of characters in degraded inscriptions using phase based binarization and geodesic morphology
Zhang et al. Character recognition in natural scene images using local description
Choudhury et al. Biometrics security: Facial marks detection from the low quality images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22902807

Country of ref document: EP

Kind code of ref document: A1