CN111639617A - High-precision face recognition technology for mask - Google Patents
- Publication number
- CN111639617A (application CN202010509631.1A)
- Authority
- CN
- China
- Prior art keywords
- mask
- face
- wearing
- model
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 claims description 13
- 238000001514 detection method Methods 0.000 claims description 7
- 238000013135 deep learning Methods 0.000 claims description 6
- 238000000605 extraction Methods 0.000 claims description 5
- 210000001061 forehead Anatomy 0.000 claims description 4
- 230000006870 function Effects 0.000 claims description 4
- 230000009466 transformation Effects 0.000 claims description 2
- 210000004709 eyebrow Anatomy 0.000 claims 1
- 238000007781 pre-processing Methods 0.000 description 3
- 238000002372 labelling Methods 0.000 description 2
- 239000013598 vector Substances 0.000 description 2
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention aims to solve the problem of accurate face recognition when a mask is worn. It realizes face recognition for scenarios in which a mask is worn or the face is otherwise occluded, for use in fields such as security and attendance.
Description
Technical Field
The invention relates to a face recognition algorithm model for scenarios in which a mask is worn, in the technical field of machine vision; the model is trained with a deep learning algorithm on a preprocessed, purpose-built data set. By precisely extracting and comparing features of the masked face, the method achieves high-accuracy masked-face recognition.
Background
At present, under the influence of the COVID-19 pandemic, wearing a mask has gradually become the norm, both in daily life and at work. For the face recognition equipment and systems currently on the market, the original face recognition algorithms can no longer meet this requirement. To provide face recognition while a mask is worn, a face recognition algorithm model suited to masked faces was researched and released.
Disclosure of Invention
The invention is based on a deep learning network model: by training on a large number of eye and forehead features, it obtains a recognition model that can accurately extract eye and forehead features from the face. Based on RetinaFace, extensive labeling and training is performed on the 5 key points of masked-face photos (left eye, right eye, nose tip, left mouth corner, right mouth corner), so that the model extracts the key points of a masked face more accurately; this supports the alignment step of the face preprocessing stage and improves masked-face recognition accuracy;
The whole process is as follows:
1. Masked-face recognition model training
1) Using about 2.8 million images from an Asian-face data set, crop out the region above the nose bridge, including the eyes and forehead, as the masked-face recognition training set
2) Feed the curated data set into the existing high-accuracy face recognition algorithm for training
3) Obtain the masked-face recognition model
2. Masked-face key-point model training
1) Label a large data set of the 5 facial key points;
2) Train on the new data set using RetinaFace as the base algorithm;
3) Obtain the masked-face key-point detection model;
3. Building the face recognition service
1) Capture the face image with a conventional face detection algorithm
2) Extract the face key points with the masked-face key-point detection model
3) Normalize the face image with an affine transformation computed from the 5 extracted key points
4) Feed the normalized face image into the masked-face recognition model for feature extraction and recognition
By this method, a complete masked-face recognition service is realized.
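The alignment in step 3) of "Building the face recognition service" can be sketched as a least-squares similarity (Umeyama) transform mapping the 5 detected key points onto a canonical template. The template coordinates below are the ArcFace-style values for a 112×112 aligned crop commonly used in open-source pipelines; they are an assumption for illustration, not values given in the patent:

```python
import numpy as np

# Assumed canonical 5-point template (x, y) for a 112x112 aligned face crop:
# left eye, right eye, nose tip, left mouth corner, right mouth corner.
TEMPLATE_112 = np.array([
    [38.2946, 51.6963],
    [73.5318, 51.5014],
    [56.0252, 71.7366],
    [41.5493, 92.3655],
    [70.7299, 92.2041],
], dtype=np.float64)

def similarity_transform(src, dst):
    """Least-squares similarity transform (Umeyama) mapping src points to dst.

    Returns a 2x3 affine matrix M such that dst ~= src @ M[:, :2].T + M[:, 2].
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Cross-covariance of the centered point sets.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Sign correction so the result is a rotation, not a reflection.
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])
```

In a full pipeline the returned 2×3 matrix would be handed to a warp routine (e.g. OpenCV's `cv2.warpAffine`) to produce the normalized face crop that is fed to the recognition model.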
Drawings
(FIG. 1) Masked-face recognition training flowchart;
(FIG. 2) Masked-face key-point training flowchart;
(FIG. 3) Technical architecture flowchart of masked-face recognition.
Detailed Description
To make the features and advantages of this patent clearer, embodiments are described in detail below with reference to the figures.
Hardware and software environment: one GPU server with a 2080 Ti graphics card; Linux Ubuntu 18.04 or later; the CUDA GPU build of MXNet, with MXNet version 1.5 or later and CUDA 10.1 or later;
the implementation process is as follows:
1. Build the masked-face recognition data set, cropping and preprocessing about 2.8 million Asian face images;
2. Train on the masked-face recognition data set with a deep learning network model to obtain the recognition algorithm model;
3. Manually calibrate key points on a large number (more than 3000) of masked-face images to build the key-point data set;
4. Train on the key-point data set with the RetinaFace deep learning network model to obtain the key-point algorithm model;
5. Crop the face image with the face detection model, extract the 5 key points with the key-point detection model, and then use the 5 key points to rectify and align the cropped face;
6. In C++, call the face detection and masked-face key-point models to rectify and preprocess the input masked-face image, then feed it into the masked-face recognition model to obtain a 1024-dimensional feature vector of the face;
7. Determine whether two face images are similar by the cosine similarity of their 1024-dimensional feature vectors, with a threshold set to suit the scene, realizing the masked-face recognition function.
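The cosine-similarity decision in step 7 can be sketched as follows. The 0.5 threshold is a placeholder to be tuned per deployment scene, as the text indicates; it is not a value given in the patent:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(feat_a, feat_b, threshold=0.5):
    """Decide whether two 1024-dim face features belong to the same person.

    The threshold is an assumed placeholder; in deployment it would be chosen
    to balance false accepts and false rejects for the target scene.
    """
    return cosine_similarity(feat_a, feat_b) >= threshold
```

Because cosine similarity ignores vector magnitude, the comparison depends only on the direction of the 1024-dimensional features, which is why a single scene-specific threshold suffices for the match decision.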
Claims (4)
1. A high-accuracy masked-face recognition technique, characterized in that it comprises masked-face recognition model training, masked-face key-point model training, and construction of a face recognition service, wherein:
the masked-face recognition model training process trains the masked-face recognition algorithm model,
the masked-face key-point model training process trains the masked-face key-point model,
and the face recognition service construction process completes the overall masked-face recognition technique.
2. The masked-face recognition model training process according to claim 1, characterized in that it specifically comprises:
collecting about 2.8 million images from an open-source Asian face library, cropping the pictures to the eye, eyebrow and forehead region, and further processing them to obtain a masked-face recognition training set meeting the requirements; a deep learning network is then trained on the collected images to obtain the model.
3. The masked-face key-point model training process according to claim 1, characterized in that it specifically comprises:
collecting a large number of face images from real scenes in which a mask is worn, manually calibrating the coordinates of the 2 eyes, the nose tip and the 2 mouth corners,
and training a RetinaFace deep learning network on the collected images to obtain the model.
4. The face recognition service construction process according to claim 1, characterized in that it specifically comprises:
extracting feature points from the masked-face image with the key-point detection model,
using the 5 extracted feature points to apply an affine transformation and crop the face image,
feeding the cropped image into the masked-face recognition algorithm model for face feature extraction,
and storing the extracted features in a database for later face similarity comparison, finally completing the whole high-accuracy masked-face recognition function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010509631.1A CN111639617A (en) | 2020-06-08 | 2020-06-08 | High-precision face recognition technology for mask |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010509631.1A CN111639617A (en) | 2020-06-08 | 2020-06-08 | High-precision face recognition technology for mask |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111639617A (en) | 2020-09-08 |
Family
ID=72330755
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010509631.1A Pending CN111639617A (en) | 2020-06-08 | 2020-06-08 | High-precision face recognition technology for mask |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111639617A (en) |
- 2020-06-08: application CN202010509631.1A filed in China; publication CN111639617A; status Pending
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112488034A (en) * | 2020-12-14 | 2021-03-12 | 上海交通大学 | Video processing method based on lightweight face mask detection model |
CN113536953A (en) * | 2021-06-22 | 2021-10-22 | 浙江吉利控股集团有限公司 | Face recognition method and device, electronic equipment and storage medium |
CN113536953B (en) * | 2021-06-22 | 2024-04-19 | 浙江吉利控股集团有限公司 | Face recognition method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200908 |