CN112507963A - Automatic generation and mask face identification method for mask face samples in batches - Google Patents


Info

Publication number
CN112507963A
CN112507963A (application CN202011530655.1A)
Authority
CN
China
Prior art keywords
face
mask
picture
points
samples
Prior art date
Legal status
Granted
Application number
CN202011530655.1A
Other languages
Chinese (zh)
Other versions
CN112507963B (en)
Inventor
谢巍
周延
陈定权
许练濠
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202011530655.1A priority Critical patent/CN112507963B/en
Publication of CN112507963A publication Critical patent/CN112507963A/en
Application granted granted Critical
Publication of CN112507963B publication Critical patent/CN112507963B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing


Abstract

The invention relates to a method for automatically generating mask face samples in batches and recognizing masked faces, comprising the following steps: S1, locating the key points of the unmasked faces in the face library; S2, finding the chin and nose-bridge positioning points of each unmasked face; S3, aligning a mask picture with the chin and nose-bridge positioning points of the face, automatically generating the masked-face picture, and storing it in the database; S4, during recognition, judging whether a mask is worn by checking whether the key points are occluded; S5, according to the judgment of step S4, selecting the person corresponding to the face whose similarity is above the threshold and highest, completing face recognition. Because the mask region is located from the facial key points, masked-face pictures are generated and stored automatically and then compared with the masked face to be recognized, so users need not photograph and upload masked-face pictures separately, and the face of a mask wearer can still be recognized.

Description

Automatic generation and mask face identification method for mask face samples in batches
Technical Field
The invention relates to the fields of image processing, computer vision and pattern recognition, and in particular to a method for automatically generating mask face samples in batches and recognizing masked faces.
Background
With the outbreak of the COVID-19 epidemic, public health protection has risen to an unprecedented level. To prevent cross-infection with the novel coronavirus and the resulting spread of the epidemic, people are required to wear masks when moving about in public places. Mask face recognition has therefore become an important research subject: automatically recognizing whether faces wear masks can effectively supervise mask wearing and is an important technical means of curbing rapid disease transmission and protecting health. As public health protection gradually improves and mask wearing becomes a precondition for entering public places, face-mask detection has become a necessary operation in the management of such places, and with accelerating intelligent automation it has gradually shifted from manual work to machines.
Mask face recognition faces the following difficulties. Mask occlusion reduces the accuracy of face detection and facial key-point detection on masked faces. Because the mask hides part of the face, portrait information is reduced and the learned features are less discriminative; specifically, two-dimensional texture information is lost to occlusion, and three-dimensional shape information becomes noisy. Moreover, mask types are numerous and occlusion degrees vary, so how to make fuller use of the unoccluded region is a further influencing factor.
Existing mainstream mask recognition technology mainly relies on users uploading pictures of their masked faces to the library and then recognizing by comparison against those masked-face pictures, which costs the users time. The technique in the literature (Zhang Bao, Lin Ziyuan, Tian Wangxin, et al. Recognition technology for faces wearing masks in all-weather natural scenes [J]. Chinese Science: Information Science, 2020(7): 1110-1120) is realized by locating the unoccluded region of the face, which restricts the way the mask is worn, requires users to wear it according to specification, and still requires adding pictures of users wearing masks to the library.
Disclosure of Invention
To solve these problems, the invention provides a method for automatically generating mask face samples in batches and recognizing masked faces. When the face to be recognized is judged to be wearing a mask, mask features are extracted, a reference image library of masked faces is newly built on the basis of the original library, and the face image to be recognized is then matched against the reference images in the library to complete recognition.
The invention is realized by at least one of the following technical schemes.
A method for automatically generating mask face samples in batches and recognizing mask faces comprises the following steps:
S1, locating the key points of the unmasked faces in the face library;
S2, finding the chin and nose-bridge positioning points of each unmasked face;
S3, aligning a mask picture with the chin and nose-bridge positioning points of the face, automatically generating the masked-face picture, and storing it in the database;
S4, during recognition, judging whether a mask is worn by checking whether the key points are occluded;
S5, according to the judgment of step S4, selecting the person corresponding to the face whose similarity is above the threshold and highest, completing face recognition.
Preferably, step S1 locates the key points of the unmasked faces in the face library using the integrated regression tree model.
Preferably, the training of the integrated regression tree model comprises the following steps:
Mark the feature points of the face images in the training set FDDB (Face Detection Data Set and Benchmark: 5171 faces in 2845 pictures), and compute the mean face shape as the shape that initializes the model at test time. During training, pixel intensities serve as features; the distances between pixel-point pairs near the calibrated training points form the feature pool, normalized by dividing by the inter-ocular distance, with an exponential distance prior introduced; the integrated regression tree model is then loaded. The model is a cascade of N regression trees, each with M weak regressors and tree depth h; gradient boosting regresses the residuals repeatedly, fitting the error to obtain the final regression tree model.
Testing the integrated regression tree model: a face-detection result is input into the model, the mean face is first fitted to the new test face to obtain the initial shape, and the feature points are predicted from that face shape; all obtained feature points are stored in a key-point dictionary table holding the names and coordinates of the face key points.
Preferably, in step S2, the chin and nose bridge locating points of the face without the mask are found by using a dictionary screening method.
Preferably, the dictionary screening method specifically comprises: in the key point dictionary table obtained in step S1, the chin and nose bridge names are retrieved, the key points and their coordinates corresponding to the chin and nose bridge names are selected, and stored as a new array.
Preferably, the alignment manner of the positioning points in step S3 is as follows:
positioning-point alignment takes one shape as the reference and rotates, scales and translates the other shapes so that they approach the reference shape as closely as possible;
first, a reference image is selected in the training set and the other pictures are transformed to be as close to it as possible; the transformation is described by a scaling parameter s, a rotation parameter θ and a translation vector t, with M(s, θ) denoting the transformation matrix with rotation θ and scaling s. Suppose the 2nd image X_2 is transformed as close as possible to the 1st image X_1; the transformed image matrix is:
X_1 = M(s, θ)[X_2] - t
The closeness of the two images is measured by the Euclidean distance. Let the positioning-point matrix of image i be X_i and that of image j be X_j; the Euclidean distance between them is:
d^2_ij = (X_i - X_j)^T (X_i - X_j).
preferably, the recognizing the detected face is to determine whether to wear the mask by checking whether the key points of the chin and the nose bridge of the face are blocked.
Preferably, the method for determining whether the key point is blocked in step S4 is as follows:
In the key-point dictionary table, check whether the coordinates corresponding to the chin and nose-bridge keywords are empty; if so, the key points are judged occluded, i.e. the person is wearing a mask; otherwise, the person is judged not to be wearing one.
Preferably, according to the judgment of step S4, if a mask is worn, the similarity between the face and the masked faces in the database is computed, and the person corresponding to the face with the highest similarity above the threshold is selected to complete face recognition; if no mask is worn, the similarity is computed against the unmasked faces in the database and the person is selected in the same way.
Preferably, the similarity calculation method in step S5 is as follows:
S51, scaling the picture
The picture to be processed is scaled to a specified size, determined by the information content and complexity of the picture;
S52, graying the picture;
S53, calculating the means
The average of each row of pixels of the image is computed in turn and recorded, each average corresponding to the feature of one row;
S54, calculating the variance
The variance of all the obtained averages is computed; this variance is the feature value of the image;
S55, comparing the variances
After steps S51 to S54, each picture yields one feature value, its variance. Let the feature value of the picture to be recognized be f, the feature values of the masked-face library be f_i, i = 1, ..., n, and those of the unmasked-face library be f_j, j = 1, ..., n, where n is the number of persons registered in the database;
if the test picture is judged to be a masked-face picture, the distance between its feature value and those of all masked sample faces in the library is calculated:
d_i = |f - f_i|, i = 1, ..., n
if it is judged to be an unmasked picture, the distance is calculated against all sample faces in the unmasked-face library:
d_j = |f - f_j|, j = 1, ..., n
the person name corresponding to the picture whose feature value is at the minimum distance from that of the picture to be recognized is taken as the match;
S56, face recognition
According to the calculation result of step S55 and the set threshold, the face picture in the database whose feature value is closest to that of the test picture and whose similarity exceeds the threshold is screened out, and the name of the corresponding person in the database is taken as the recognition result.
Compared with the prior art, the invention has the beneficial effects that:
the user does not need to shoot and upload the picture of the mask, and only needs to extract the shielding object and add the shielding object to the face image which is not shielded in the reference database when the mask with the shielding object is judged to be arranged on the face image needing to be recognized, so that the face mask picture which is generated based on the face self-adaption and worn on the face is put in storage, and the face mask recognition with time saving and high accuracy is realized.
Drawings
Fig. 1 is a flow chart of a method for automatically generating mask face samples in batches and identifying mask faces according to the present embodiment;
fig. 2 is a face key point positioning diagram in the embodiment.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments.
As shown in fig. 1, the method for batch automatic generation of mask face samples and mask face recognition includes the following steps:
S1, locating the key points of the unmasked faces in the face library using the integrated regression tree model
First, the feature points of the face images in the training set FDDB (Face Detection Data Set and Benchmark: 5171 faces in 2845 pictures) are marked, the regression tree model is trained, and the mean face shape is computed as the shape that initializes the model at test time. During training, pixel intensities serve as features; the distances between pixel-point pairs near the calibrated training points form the feature pool, normalized by dividing by the inter-ocular distance, and an exponential distance prior is introduced. The loaded integrated regression tree model is a cascade of 10 regression trees, each with 500 weak regressors and tree depth 5; gradient boosting regresses the residuals repeatedly, fitting the error to obtain the final regression tree model.
At test time, a face image to be recognized is input into the trained regression tree model; the mean face is first fitted to the new test face to obtain the initial shape, and the feature points are predicted from that face shape, as shown in fig. 2:
Points 1 to 17: cheek;
Points 18 to 22: right eyebrow;
Points 23 to 27: left eyebrow;
Points 28 to 36: nose;
Points 37 to 42: right eye;
Points 43 to 48: left eye;
Points 49 to 68: mouth;
All obtained feature points are stored in a key-point dictionary table holding the names and coordinates of the face key points.
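The seven-region layout above can be organized into the key-point dictionary table the method relies on. A minimal sketch in pure Python; the landmark coordinates themselves are assumed to come from a trained ensemble-of-regression-trees predictor (e.g. dlib's `shape_predictor`), which is not shown, and the region names are illustrative rather than the patent's exact identifiers:

```python
# Map 68 facial landmarks into a dictionary keyed by region name,
# using the 1-based point numbering given in the text (fig. 2).
REGIONS = {
    "cheek":         range(1, 18),   # points 1-17
    "right_eyebrow": range(18, 23),  # points 18-22
    "left_eyebrow":  range(23, 28),  # points 23-27
    "nose":          range(28, 37),  # points 28-36
    "right_eye":     range(37, 43),  # points 37-42
    "left_eye":      range(43, 49),  # points 43-48
    "mouth":         range(49, 69),  # points 49-68
}

def build_keypoint_table(points):
    """points: list of 68 (x, y) tuples, where index 0 holds point 1.
    Returns {region name: {point number: (x, y)}}."""
    if len(points) != 68:
        raise ValueError("expected 68 landmarks")
    return {name: {i: points[i - 1] for i in idxs}
            for name, idxs in REGIONS.items()}
```

The nested dictionary plays the role of the "key-point dictionary table" storing names and coordinates together.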
S2, finding the chin and nose-bridge positioning points of the unmasked face using the dictionary screening method.
The coordinates corresponding to the chin and nose-bridge key points are selected from the key-point dictionary table obtained in step S1 and stored in a new array; as shown in fig. 2, points 4 to 14 are the chin coordinate points and points 30 to 36 are the nose-bridge coordinate points.
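The screening of points 4 to 14 and 30 to 36 can be sketched as follows; the helper name and the flat point-index dictionary are assumptions for illustration, while the index ranges come from the text:

```python
CHIN_POINTS = range(4, 15)          # points 4-14 of the 68-point layout
NOSE_BRIDGE_POINTS = range(30, 37)  # points 30-36

def select_positioning_points(keypoints):
    """keypoints: dict mapping 1-based point number -> (x, y) or None.
    Returns the chin and nose-bridge positioning-point arrays used
    for mask alignment, in point order."""
    chin = [keypoints.get(i) for i in CHIN_POINTS]
    bridge = [keypoints.get(i) for i in NOSE_BRIDGE_POINTS]
    return chin, bridge
```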
S3, aligning the mask picture with the chin and nose-bridge positioning points of the face, automatically generating the masked-face picture, and storing it in the database;
the alignment mode of the positioning points is as follows:
First a reference image is selected, and the other pictures in the training set are transformed to be as close to it as possible; the transformation is described by a scaling parameter s, a rotation parameter θ and a translation vector t. Suppose the 2nd image X_2 is transformed as close as possible to the 1st image X_1; the transformed image matrix is:
X_1 = M(s, θ)[X_2] - t
To approach the reference shape as closely as possible, the Euclidean distance measures the degree of closeness. Let the positioning-point matrix of image i be X_i and that of image j be X_j; the Euclidean distance between them is:
d^2_ij = (X_i - X_j)^T (X_i - X_j)
This process aligns the mask with the region between the nose-bridge and chin key points of the face.
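The rotation-scale-translation fit described above is a classical 2-D similarity (Procrustes) alignment. A least-squares sketch with NumPy; the complex-number formulation is one standard way to solve it and is an assumption, not the patent's stated algorithm:

```python
import numpy as np

def align_shapes(src, ref):
    """Estimate scale s, rotation theta and translation t that map the
    source positioning points onto the reference ones in the
    least-squares sense. src, ref: (N, 2) arrays of corresponding points."""
    src = np.asarray(src, float)
    ref = np.asarray(ref, float)
    mu_s, mu_r = src.mean(0), ref.mean(0)
    s0, r0 = src - mu_s, ref - mu_r
    # Represent 2-D points as complex numbers: multiplying by
    # s * exp(i*theta) performs the scaled rotation in one step.
    a = s0[:, 0] + 1j * s0[:, 1]
    b = r0[:, 0] + 1j * r0[:, 1]
    coef = (a.conj() @ b) / (a.conj() @ a).real
    s, theta = abs(coef), np.angle(coef)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = mu_r - s * (R @ mu_s)          # translation after scaled rotation
    aligned = s * (src @ R.T) + t      # transformed source shape
    return s, theta, t, aligned
```

Applying the recovered (s, θ, t) to the mask picture's own control points warps it onto the chin/nose-bridge region.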
S4, during recognition, whether a mask is worn is judged by checking whether the key points are occluded;
In the key-point dictionary table, check whether the coordinates corresponding to the chin and nose-bridge keywords are empty; if so, the key points are judged occluded, i.e. the person is wearing a mask; otherwise, the person is judged not to be wearing one.
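The empty-coordinate check can be sketched as follows; the region names and the requirement that both regions be empty are assumptions read off the wording above:

```python
def is_wearing_mask(keypoint_table):
    """keypoint_table: dict of region name -> dict of point number -> (x, y) or None.
    A mask is judged worn when every chin and nose-bridge coordinate
    is missing (the detector returned nothing for those points)."""
    def occluded(region):
        coords = keypoint_table.get(region, {})
        return len(coords) == 0 or all(c is None for c in coords.values())
    return occluded("chin") and occluded("nose_bridge")
```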
S5, if a mask is judged worn, the similarity between the face and the masked faces in the database is computed, and the person corresponding to the face with the highest similarity above the threshold is selected to complete face recognition; if no mask is worn, the similarity is computed against the unmasked faces in the database and the person is selected in the same way.
The similarity calculation mode is as follows:
1) Scaling the picture
The picture to be processed is scaled to a specified size, determined by the information content and complexity of the picture. Since a face contains rich information, the picture is scaled to 64 × 64.
2) Graying
Since the similarity comparison depends little on color, the picture is converted to a grayscale image, reducing the complexity of subsequent computation.
3) Calculating the means
The average of each row of pixels of the image is computed in turn and recorded. Each average corresponds to the feature of one row.
4) Calculating the variance
The variance of all the obtained averages is computed; this variance is the feature value of the image. The variance reflects the fluctuation of the per-row pixel features well, i.e. it records the main information of the picture.
5) Comparing the variances
After steps 1) to 4), each picture yields one feature value (its variance). Let the feature value of the picture to be recognized be f, the feature values of the masked-face library be f_i, i = 1, ..., n, and those of the unmasked-face library be f_j, j = 1, ..., n, where n is the number of persons registered in the database.
If the picture is judged to be a masked-face picture, the distance between its feature value and those of all sample faces in the masked-face library is calculated:
d_i = |f - f_i|, i = 1, ..., n
If it is judged to be an unmasked picture, the distance is calculated against all sample faces in the unmasked-face library:
d_j = |f - f_j|, j = 1, ..., n
The name of the person whose picture's feature value is at the minimum distance from that of the picture to be recognized is then obtained; concretely, each face in the database is stored with its corresponding name, which is simply retrieved once the matching face is confirmed.
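Steps 1) to 5) reduce each picture to a single scalar and match by distance between feature values. A sketch under those assumptions; the absolute-difference distance is inferred from the scalar feature, the threshold screening of step S56 is omitted, and the gallery names are illustrative:

```python
import numpy as np

def feature_value(gray):
    """gray: 2-D grayscale array (the method scales pictures to 64x64).
    Feature value = variance of the per-row pixel means."""
    row_means = np.asarray(gray, float).mean(axis=1)  # one feature per row
    return row_means.var()

def recognize(f, library):
    """library: dict name -> feature value of the matching gallery
    (masked or unmasked, chosen by the mask-wearing judgment).
    Returns the name whose feature value is nearest to f."""
    return min(library, key=lambda name: abs(f - library[name]))
```

In practice the gallery would be the automatically generated masked-face pictures when a mask is detected, and the original pictures otherwise.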
the detection effect of the face mask is shown in table 1:
TABLE 1 Detection effect of face mask

Index           Wearing mask   Not wearing mask
Precision (%)   98.10          99.34
Recall (%)      99.11          100
Accuracy (%)    98.23          99.46
To evaluate the performance of the trained mask face recognition model, Precision and Recall are computed for the wearing-mask and not-wearing-mask classes, and Accuracy evaluates the overall performance.
Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

Accuracy = (TP + TN) / (TP + FP + FN + TN)
Here a wearing-mask sample correctly classified as wearing-mask is recorded as TP; a not-wearing-mask sample wrongly classified as wearing-mask as FP; a wearing-mask sample wrongly classified as not-wearing-mask as FN; and a not-wearing-mask sample correctly classified as not-wearing-mask as TN. Precision is the proportion of samples the model assigns to a class that truly belong to it; recall is the proportion of all samples of a class in the set that the model detects correctly; accuracy is the proportion of correctly classified samples among all samples in the set.
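The three metrics defined above follow directly from the four counts:

```python
def detection_metrics(tp, fp, fn, tn):
    """Precision, recall and accuracy from the TP/FP/FN/TN counts
    defined in the text, computed per class (e.g. wearing-mask)."""
    precision = tp / (tp + fp)                 # correct among predicted positives
    recall = tp / (tp + fn)                    # correct among actual positives
    accuracy = (tp + tn) / (tp + fp + fn + tn) # correct among all samples
    return precision, recall, accuracy
```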
The recognition effect of the face mask is shown in table 2:
TABLE 2 Face mask recognition effect

FAR (%)   1.21
FRR (%)   0.55
Here FAR is the false acceptance rate: the probability of mistaking another person for the designated person.
FRR is the false rejection rate: the probability of mistaking the designated person for another person.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. A method for automatically generating mask face samples in batches and recognizing mask faces, characterized by comprising the following steps:
S1, locating the key points of the unmasked faces in the face library;
S2, finding the chin and nose-bridge positioning points of each unmasked face;
S3, aligning a mask picture with the chin and nose-bridge positioning points of the face, automatically generating the masked-face picture, and storing it in the database;
S4, during recognition, judging whether a mask is worn by checking whether the key points are occluded;
S5, according to the judgment of step S4, selecting the person corresponding to the face whose similarity is above the threshold and highest, completing face recognition.
2. The method for batch automatic generation of mask face samples and mask face recognition according to claim 1, wherein step S1 locates the key points of the unmasked faces in the face library using the integrated regression tree model.
3. The method for automatic generation of mask face samples in batches and face recognition of masks according to claim 2, wherein the training of the integrated regression tree model comprises the following steps:
marking the feature points of the face images in the training set FDDB (Face Detection Data Set and Benchmark: 5171 faces in 2845 pictures), and computing the mean face shape as the shape that initializes the model at test time; during training, taking pixel intensities as features, taking the distances between pixel-point pairs near the calibrated training points as the feature pool, normalizing by dividing by the inter-ocular distance, introducing an exponential distance prior, and loading the integrated regression tree model; the integrated regression tree model is a cascade of N regression trees, each with M weak regressors and tree depth h, and gradient boosting regresses the residuals repeatedly, fitting the error to obtain the final regression tree model;
testing the integrated regression tree model: inputting a face-detection result into the model, first fitting the mean face to the new test face to obtain the initial shape, then predicting the feature points from that face shape; all obtained feature points are stored in a key-point dictionary table holding the names and coordinates of the face key points.
4. The method for batch automatic generation of mask face samples and mask face recognition according to claim 3, wherein step S2 finds the chin and nose-bridge positioning points of the unmasked face by the dictionary screening method.
5. The method for batch automatic generation of mask face samples and mask face recognition according to claim 4, wherein the dictionary screening method specifically comprises: retrieving the chin and nose-bridge names in the key-point dictionary table obtained in step S1, selecting the key points and coordinates corresponding to those names, and storing them as a new array.
6. The method for automatically generating mask face samples in batch and identifying mask faces according to claim 5, wherein the alignment of the positioning points in step S3 is as follows:
positioning-point alignment takes one shape as the reference and rotates, scales and translates the other shapes so that they approach the reference shape as closely as possible;
first, a reference image is selected in the training set and the other pictures are transformed to be as close to it as possible; the transformation is described by a scaling parameter s, a rotation parameter θ and a translation vector t, with M(s, θ) denoting the transformation matrix with rotation θ and scaling s; suppose the 2nd image X_2 is transformed as close as possible to the 1st image X_1; the transformed image matrix is:
X_1 = M(s, θ)[X_2] - t
the closeness of the two images is measured by the Euclidean distance; let the positioning-point matrix of image i be X_i and that of image j be X_j; the Euclidean distance between them is:
d^2_ij = (X_i - X_j)^T (X_i - X_j).
7. The method for batch automatic generation of mask face samples and mask face recognition according to claim 6, wherein recognizing a detected face determines whether a mask is worn by checking whether the chin and nose-bridge key points of the face are occluded.
8. The method for automatically generating mask face samples in batch and identifying mask faces according to claim 7, wherein the method for determining whether the key points are blocked in step S4 is as follows:
in the key-point dictionary table, checking whether the coordinates corresponding to the chin and nose-bridge keywords are empty; if so, the key points are judged occluded, i.e. the person is wearing a mask; otherwise, the person is judged not to be wearing one.
9. The method for batch automatic generation of mask face samples and mask face recognition according to claim 8, wherein, according to the judgment of step S4, if a mask is worn, the similarity between the face and the masked faces in the database is computed, and the person corresponding to the face with the highest similarity above the threshold is selected to complete face recognition; if no mask is worn, the similarity is computed against the unmasked faces in the database and the person is selected in the same way.
10. The method for automatically generating mask face samples in batch and identifying mask faces according to claim 9, wherein the similarity calculation method in step S5 is as follows:
s51 zooming pictures
The picture to be processed is placed to a specified size, and the size of the zoomed picture is determined by the information content and the complexity of the picture;
s52, graying the picture;
s53, calculating average value
Respectively and sequentially calculating the average value of each row of pixel points of the image, and recording the average value of each row of pixel points, wherein each average value corresponds to one row of characteristics;
s54, calculating variance
Calculating the variance of all the obtained average values, wherein the obtained variance is the characteristic value of the image;
s55, comparing the variance
After the steps of S51 to S55, each graph generates a variance as a feature value; the characteristic value of the picture to be identified is fWherein the characteristic value of the face library of the mask is fiN, the characteristic value of the mask-free face library is fjJ is 1,.. n, wherein n is the number of persons registered for warehousing;
if the test picture is judged to be a mask wearing face picture, calculating the distance between the face and the characteristic value of all mask wearing sample faces in the face library:
Figure FDA0002851933890000041
if the picture is judged to be a mask-free picture, calculating the distance between the face and the characteristic value of all sample faces in the mask-free face library:
Figure FDA0002851933890000042
obtaining a person name corresponding to the face of the picture to be recognized according to the picture corresponding to the minimum distance from the characteristic value of the picture to be recognized;
s56 face recognition
According to the calculation result of the step S55, using the set threshold, the face picture which is closest to the feature value of the test picture and whose measured distance is higher than the threshold is screened out from the database, and the name of the person corresponding to the face picture in the database is used as the recognition result.
CN202011530655.1A 2020-12-22 2020-12-22 Automatic generation of batch mask face samples and mask face recognition method Active CN112507963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011530655.1A CN112507963B (en) 2020-12-22 2020-12-22 Automatic generation of batch mask face samples and mask face recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011530655.1A CN112507963B (en) 2020-12-22 2020-12-22 Automatic generation of batch mask face samples and mask face recognition method

Publications (2)

Publication Number Publication Date
CN112507963A true CN112507963A (en) 2021-03-16
CN112507963B CN112507963B (en) 2023-08-25

Family

ID=74922993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011530655.1A Active CN112507963B (en) 2020-12-22 2020-12-22 Automatic generation of batch mask face samples and mask face recognition method

Country Status (1)

Country Link
CN (1) CN112507963B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435248A (en) * 2021-05-18 2021-09-24 武汉天喻信息产业股份有限公司 Mask face recognition base enhancement method, device, equipment and readable storage medium
CN113610115A (en) * 2021-07-14 2021-11-05 广州敏视数码科技有限公司 Efficient face alignment method based on gray level image
CN115018696A (en) * 2022-06-08 2022-09-06 东北师范大学 Face mask data generation method based on OpenCV (open source/consumer computer vision library) affine transformation
CN115620380A (en) * 2022-12-19 2023-01-17 成都成电金盘健康数据技术有限公司 Face recognition method for wearing medical mask
CN118081163A (en) * 2024-04-24 2024-05-28 陕西能源电力运营有限公司 Header accurate welding control method and system based on image recognition

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496174A (en) * 2011-12-08 2012-06-13 中国科学院苏州纳米技术与纳米仿生研究所 Method for generating face sketch index for security monitoring
CN108268885A (en) * 2017-01-03 2018-07-10 京东方科技集团股份有限公司 Feature point detecting method, equipment and computer readable storage medium
CN109117801A (en) * 2018-08-20 2019-01-01 深圳壹账通智能科技有限公司 Method, apparatus, terminal and the computer readable storage medium of recognition of face
CN109886173A (en) * 2019-02-02 2019-06-14 中国科学院电子学研究所 The autonomous service robot of side face attitude algorithm method and mood sensing of view-based access control model
CN110175529A (en) * 2019-04-30 2019-08-27 东南大学 A kind of three-dimensional face features' independent positioning method based on noise reduction autoencoder network
CN111460962A (en) * 2020-03-27 2020-07-28 武汉大学 Mask face recognition method and system
CN111626246A (en) * 2020-06-01 2020-09-04 浙江中正智能科技有限公司 Face alignment method under mask shielding
CN111695431A (en) * 2020-05-19 2020-09-22 深圳禾思众成科技有限公司 Face recognition method, face recognition device, terminal equipment and storage medium
CN111860453A (en) * 2020-08-04 2020-10-30 沈阳工业大学 Face recognition method for mask

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496174A (en) * 2011-12-08 2012-06-13 中国科学院苏州纳米技术与纳米仿生研究所 Method for generating face sketch index for security monitoring
CN108268885A (en) * 2017-01-03 2018-07-10 京东方科技集团股份有限公司 Feature point detecting method, equipment and computer readable storage medium
CN109117801A (en) * 2018-08-20 2019-01-01 深圳壹账通智能科技有限公司 Method, apparatus, terminal and the computer readable storage medium of recognition of face
CN109886173A (en) * 2019-02-02 2019-06-14 中国科学院电子学研究所 The autonomous service robot of side face attitude algorithm method and mood sensing of view-based access control model
CN110175529A (en) * 2019-04-30 2019-08-27 东南大学 A kind of three-dimensional face features' independent positioning method based on noise reduction autoencoder network
CN111460962A (en) * 2020-03-27 2020-07-28 武汉大学 Mask face recognition method and system
CN111695431A (en) * 2020-05-19 2020-09-22 深圳禾思众成科技有限公司 Face recognition method, face recognition device, terminal equipment and storage medium
CN111626246A (en) * 2020-06-01 2020-09-04 浙江中正智能科技有限公司 Face alignment method under mask shielding
CN111860453A (en) * 2020-08-04 2020-10-30 沈阳工业大学 Face recognition method for mask

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435248A (en) * 2021-05-18 2021-09-24 武汉天喻信息产业股份有限公司 Mask face recognition base enhancement method, device, equipment and readable storage medium
CN113610115A (en) * 2021-07-14 2021-11-05 广州敏视数码科技有限公司 Efficient face alignment method based on gray level image
CN113610115B (en) * 2021-07-14 2024-04-12 广州敏视数码科技有限公司 Efficient face alignment method based on gray level image
CN115018696A (en) * 2022-06-08 2022-09-06 东北师范大学 Face mask data generation method based on OpenCV (open source/consumer computer vision library) affine transformation
CN115018696B (en) * 2022-06-08 2024-05-03 东北师范大学 Face mask data generation method based on OpenCV affine transformation
CN115620380A (en) * 2022-12-19 2023-01-17 成都成电金盘健康数据技术有限公司 Face recognition method for wearing medical mask
CN118081163A (en) * 2024-04-24 2024-05-28 陕西能源电力运营有限公司 Header accurate welding control method and system based on image recognition

Also Published As

Publication number Publication date
CN112507963B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN112507963A (en) Automatic generation and mask face identification method for mask face samples in batches
CN110580445B (en) Face key point detection method based on GIoU and weighted NMS improvement
CN111881770B (en) Face recognition method and system
CN109800648A (en) Face datection recognition methods and device based on the correction of face key point
CN101763500B (en) Method applied to palm shape extraction and feature positioning in high-freedom degree palm image
CN108090830B (en) Credit risk rating method and device based on facial portrait
CN110728225B (en) High-speed face searching method for attendance checking
Dibeklioglu et al. 3D facial landmarking under expression, pose, and occlusion variations
CN108898131A (en) It is a kind of complexity natural scene under digital instrument recognition methods
CN101751559B (en) Method for detecting skin stains on face and identifying face by utilizing skin stains
US8577094B2 (en) Image template masking
CN111539912A (en) Health index evaluation method and equipment based on face structure positioning and storage medium
EP3680794A1 (en) Device and method for user authentication on basis of iris recognition
CN108108760A (en) A kind of fast human face recognition
CN113449704B (en) Face recognition model training method and device, electronic equipment and storage medium
WO2022213396A1 (en) Cat face recognition apparatus and method, computer device, and storage medium
CN114445879A (en) High-precision face recognition method and face recognition equipment
CN111488943A (en) Face recognition method and device
CN111539911B (en) Mouth breathing face recognition method, device and storage medium
CN111274883A (en) Synthetic sketch face recognition method based on multi-scale HOG (histogram of oriented gradient) features and deep features
CN108875549A (en) Image-recognizing method, device, system and computer storage medium
CN113436735A (en) Body weight index prediction method, device and storage medium based on face structure measurement
CN116884045B (en) Identity recognition method, identity recognition device, computer equipment and storage medium
CN115273150A (en) Novel identification method and system for wearing safety helmet based on human body posture estimation
CN109190489A (en) A kind of abnormal face detecting method based on reparation autocoder residual error

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant