CN114241542A - Face recognition method based on image stitching - Google Patents
Face recognition method based on image stitching
- Publication number
- CN114241542A (application number CN202111118259.2A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- mask
- face recognition
- recognition method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T7/155—Segmentation; Edge detection involving morphological operators
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
- G06T2207/20081—Training; Learning
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Collating Specific Patterns (AREA)
Abstract
The invention discloses a face recognition method, apparatus, and medium based on image stitching. The method comprises: determining, by distance detection on a face image, whether the face is wearing a mask; performing morphological processing on the masked face image and replacing the region occluded by the mask with a corresponding region from an existing reference image to obtain a complete face image; and performing face matching and recognition on the complete face image. Beneficial effects: when the face region is occluded by a mask or another object, the method completes the face image to obtain a complete face image whose data is used for detailed matching, solving the problem that most feature information of an occluded face is lost.
Description
Technical Field
The invention relates to the fields of computer graphics processing and artificial intelligence, and in particular to a face recognition method, apparatus, and medium based on image stitching.
Background
A traditional face recognition algorithm mainly comprises three modules: face detection (Face Detection), face alignment (Face Alignment), and face feature representation (Feature Representation).
The feature representation stage receives a normalized face image as input, obtains vectorized face features through feature modeling, and finally produces the recognition result via a classifier. Because existing face recognition algorithms are trained and run on complete face images, a face occluded by a mask cannot yield complete feature information, such as nose shape and mouth information, and recognition is difficult when a large amount of feature information is missing.
Traditional methods for recognizing occluded faces fall into three categories: subspace regression, robust error coding, and robust feature extraction. Subspace regression methods assign different classes of faces to different subspaces, with the occlusion treated as an independent subspace; an occluded face image is then the superposition of an unoccluded face and an occlusion, so occluded-face recognition becomes the problem of regressing the unoccluded face image and the occlusion back to their respective subspaces. The most representative subspace regression methods are sparse representation classification and collaborative representation, and their main difficulty lies in constructing the occlusion subspace.
Robust error coding methods mainly comprise an additive model and a multiplicative model. The additive model treats an occluded image y as the superposition of the original unoccluded face image y0 and an error e caused by the occlusion, i.e. y = y0 + e, and the key question is how to separate the error e from y. The multiplicative model views the occluded image as a composition of the unoccluded image y0 and the occlusion, of which only y0 can be accurately reconstructed; the key question is how to separate the occluded and unoccluded regions.
Robust feature extraction methods start from the observation that the features contained in a face image are very rich: low-order features such as color, brightness, texture, and orientation, and high-order features such as pose, expression, age, and ethnicity. These methods decompose the features to obtain components that are insensitive to occlusion.
Each of these occluded-face recognition algorithms has its advantages, but when most of the face's feature information is occluded by a mask, an accurate recognition result is difficult to obtain, and most algorithms cannot even detect the face, so the subsequent face recognition step cannot run. The invention uses reference face features to replace part of the mask-covered information, so that normal face detection and recognition can proceed under occlusion.
The prior art also retrains new face recognition models on training sets containing both occluded and unoccluded faces. This can effectively improve recognition accuracy under occlusion, but a new algorithm must be retrained, and the development cost is high.
After a mask is put on, facial features such as the nose and mouth are occluded, the information available for distinguishing the face is greatly reduced, and the physical distribution of the remaining recognizable information, such as the facial contour, also changes considerably.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art. To that end, the invention provides a face recognition method, apparatus, and medium based on image stitching, which obtain a complete face image of a mask wearer and use the resulting face data for detailed matching, solving the problem that most feature information of an occluded face is lost.
The technical scheme of the invention comprises a face recognition method based on image stitching, characterized by comprising the following steps: S100, collecting a face image through a camera device, and determining whether the face wears a mask through distance detection on the face image; S200, performing morphological processing on the masked face image, and replacing the region occluded by the mask with a corresponding region from an existing reference image to obtain a complete face image; and S300, performing face matching and recognition on the complete face image.
According to the face recognition method based on image stitching, S100 comprises: capturing images with the image capture device, resizing the images for mask detection, detecting whether the target wears a mask with the YOLOv3 object detection algorithm, and, if a mask is detected, acquiring the image of the face occluded by the mask.
According to the face recognition method based on image stitching, S100 further comprises: locating the mask, converting the color space of the picture from RGB to HSV, removing the white portion by color, and setting a threshold to remove the background image.
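The HSV conversion and color-thresholding step above can be sketched in a few lines. In practice this is what `cv2.cvtColor` plus `cv2.inRange` do; the pure-NumPy version below and its HSV bounds (chosen here for a blue surgical mask) are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def locate_mask(hsv_img, lo=(90, 60, 60), hi=(130, 255, 255)):
    # Keep pixels whose (H, S, V) all fall inside [lo, hi] -- a toy
    # stand-in for cv2.inRange. The bounds are illustrative only.
    h, s, v = hsv_img[..., 0], hsv_img[..., 1], hsv_img[..., 2]
    keep = ((h >= lo[0]) & (h <= hi[0]) &
            (s >= lo[1]) & (s <= hi[1]) &
            (v >= lo[2]) & (v <= hi[2]))
    return keep.astype(np.uint8) * 255
```

The returned 0/255 image plays the role of the located-mask region in the opening step that follows.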
According to the face recognition method based on image stitching, S200 comprises: performing an opening operation, determining the largest contour of the mask, saving the mask contour picture, and determining the size of the region to be replaced, wherein the opening operation comprises eroding the image and then dilating it.
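The opening operation (erosion followed by dilation) can be illustrated with a minimal NumPy implementation; a real system would use `cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)`. This toy version assumes a binary 0/1 image and a square structuring element.

```python
import numpy as np

def _win_reduce(img, k, op, border):
    # Combine every k x k window with `op`: np.minimum gives erosion,
    # np.maximum gives dilation. `border` is the neutral padding value.
    pad = k // 2
    p = np.pad(img, pad, constant_values=border)
    out = np.full(img.shape, border, dtype=img.dtype)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out = op(out, p[dy:dy + h, dx:dx + w])
    return out

def opening(binary, k=3):
    # Erode, then dilate: removes specks smaller than the k x k element
    # while restoring the shape of larger regions such as the mask blob.
    eroded = _win_reduce(binary, k, np.minimum, 1)
    return _win_reduce(eroded, k, np.maximum, 0)
```

After opening, the largest remaining connected region is taken as the mask outline, which fixes the size of the area to be replaced.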
According to the face recognition method based on image stitching, S200 further comprises: acquiring a mouth image from the existing face data, determining the size and width of the face, adjusting the mouth image to cover the mask according to the face size, adjusting it to the optimal position coordinates, and using those coordinates to crop the part to be stitched from the portrait image; the cropped mouth image is then resized according to the mask size and stitched in to obtain a complete face image.
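The resize-and-cover step above can be sketched as a nearest-neighbour resize followed by a region paste; `cv2.resize` plus NumPy slicing is the usual way. The function names and coordinates here are illustrative assumptions.

```python
import numpy as np

def resize_nn(patch, new_h, new_w):
    # Nearest-neighbour resize (a stand-in for cv2.resize).
    rows = np.arange(new_h) * patch.shape[0] // new_h
    cols = np.arange(new_w) * patch.shape[1] // new_w
    return patch[rows][:, cols]

def stitch(face, mouth_patch, top, left, h, w):
    # Resize the reference mouth patch to the mask region found in S200
    # and paste it over that region of the face image.
    out = face.copy()
    out[top:top + h, left:left + w] = resize_nn(mouth_patch, h, w)
    return out
```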
According to the face recognition method based on image stitching, S300 comprises: matching the complete face image against the face images stored in the database and outputting the corresponding match and recognition result.
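The patent does not specify the matching backend for S300; a common (assumed) realisation compares embedding vectors with cosine similarity and accepts the best database entry above a threshold. The function names, the threshold, and the source of the embeddings below are all illustrative.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(query_vec, database, threshold=0.5):
    # Return (identity, score) for the best database entry, or
    # (None, score) if nothing clears the threshold.
    best_id, best_score = None, -1.0
    for identity, vec in database.items():
        s = cosine_sim(query_vec, vec)
        if s > best_score:
            best_id, best_score = identity, s
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```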
The technical scheme of the invention also comprises a face recognition apparatus based on image stitching, comprising a camera device, a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements any of the above method steps.
The invention also comprises a computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the above method steps.
The invention has the following beneficial effects:
(1) existing face data is used to process the occluded face image, so the identity of the occluded person can be recognized even when only a small amount of face information is available;
(2) mask detection is performed first by a detection algorithm, so recognition of unmasked faces remains compatible and keeps its high accuracy; if a mask is detected, the face information occluded by it is restored using face information from the existing database;
(3) existing face recognition methods assume unoccluded faces, so a face occluded by a mask cannot be detected and the downstream recognition algorithm fails; by stitching together complete face information, this algorithm stays compatible with existing face recognition algorithms and the face can be recognized correctly;
(4) during comparison, the mask region is replaced with the same database information for every match, so the replaced region contributes no discriminative difference and the recognition algorithm focuses on the unmasked facial features; the resulting masked-face accuracy is higher than that of masked-face recognition based on retrained deep learning models;
(5) by processing the mask region and stitching in the nose and mouth, feature information for the whole face is obtained, the features of the occluded part are completed for recognition, and accuracy is greatly improved;
(6) in public places, faces can be recognized without removing masks, reducing crowding and greatly improving the safety and screening efficiency of public spaces;
(7) the method can be used in security scenarios, where a target's identity can still be recognized even when the face is occluded by a mask.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1a, 1b, 1c, 1d, 1e are schematic diagrams illustrating a face recognition process of a mask wearing type according to an embodiment of the present invention;
FIG. 2 illustrates an overall flow diagram according to an embodiment of the invention;
FIG. 3 is a detailed flow diagram according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an apparatus and medium according to an embodiment of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; terms such as "greater than", "less than", and "exceeding" are understood as excluding the stated number, while terms such as "above", "below", and "within" are understood as including it.
Fig. 1a, 1b, 1c, 1d, and 1e are schematic diagrams of the face recognition process for a masked face according to an embodiment of the present invention; they correspond to the flowchart of Fig. 3.
Fig. 2 shows the overall flowchart according to an embodiment of the invention. The method comprises the following steps: S100, collecting a face image through a camera device, and determining whether the face wears a mask through distance detection on the face image; S200, performing morphological processing on the masked face image, and replacing the region occluded by the mask with a corresponding region from an existing reference image to obtain a complete face image; and S300, performing face matching and recognition on the complete face image.
Fig. 3 is a detailed flowchart according to an embodiment of the present invention. The process is as follows: a face image is captured frame by frame from the camera; mask detection is performed with the YOLOv3 algorithm; the image information of the face occluded by the mask is obtained; morphological operations are applied to the occluded part of the image; the nose and mouth region cropped from existing face data replaces the mask region, restoring the whole face and yielding relatively complete face feature information; finally, face recognition is performed on the stitched face to match it against the existing face data.
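The frame-by-frame flow above can be composed as a single pipeline. Every stage function below is a dummy stub standing in for the real component (YOLOv3 detection, HSV localisation plus opening, patch stitching, database matching); only the control flow reflects the described method, and all names and return values are illustrative.

```python
import numpy as np

def detect_mask(frame):                 # S100: YOLOv3 would run here
    return frame.mean() > 0             # dummy decision rule

def locate_and_clean_mask(frame):       # S100/S200: HSV threshold + opening
    return (2, 2, 3, 3)                 # dummy (top, left, h, w) region

def stitch_reference_mouth(frame, region, ref_patch):   # S200
    top, left, h, w = region
    out = frame.copy()
    out[top:top + h, left:left + w] = ref_patch[:h, :w]
    return out

def recognize(face):                    # S300: database matching stub
    return "id_0" if face.sum() > 0 else None

def pipeline(frame, ref_patch):
    # Detect the mask; if present, locate its region and splice in the
    # reference mouth before recognition. Otherwise recognize directly.
    if detect_mask(frame):
        region = locate_and_clean_mask(frame)
        frame = stitch_reference_mouth(frame, region, ref_patch)
    return recognize(frame)
```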
The YOLOv3 object detection algorithm is used to detect whether the target wears a mask. If a mask is detected, the image information of the face occluded by the mask is acquired, morphological operations are applied to the occluded part, an existing mouth image is cropped to replace the mask region, and the result is stitched together with the facial feature information of the original image, such as the eyes and head. The whole face can thus be restored, and the resulting relatively complete face data is matched against the existing face data to identify the occluded person.
The camera is started to capture images frame by frame, and each image is first resized for mask detection. If a mask is detected, its position is located: the color space of the image is converted from RGB to HSV, the white portion is removed by color, and a threshold is set to remove the background. The mask region is then processed by morphological transformation with an opening operation (erosion followed by dilation), the largest contour of the mask is determined, and the mask contour image is saved to fix the size of the region to be replaced (see Figs. 1c and 1d).
A mouth image is acquired from the existing face data (Fig. 1a), the size and width of the face are determined, and the mouth image is adjusted to cover the mask according to the face size (Fig. 1b), then adjusted to the optimal position coordinates, which are used to crop the part to be stitched from the portrait image. The cropped mouth image is then resized according to the mask size and stitched in to obtain a relatively complete face image, on which face recognition is performed (Fig. 1e).
FIG. 4 is a schematic diagram of the apparatus and medium according to an embodiment of the invention. The apparatus comprises a memory 100, a processor 200, and an acquisition device 300. The memory 100 stores a computer program which, when executed by the processor 200, performs: determining whether the face wears a mask by distance detection on the face image; performing morphological processing on the masked face image and replacing the region occluded by the mask with a corresponding region from an existing reference image to obtain a complete face image; and performing face matching and recognition on the complete face image. The acquisition device 300 collects the face image, and the memory 100 also stores the data.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.
Claims (8)
1. A face recognition method based on image stitching, characterized by comprising the following steps:
S100, collecting a face image through a camera device, and determining whether the face wears a mask through distance detection on the face image;
S200, performing morphological processing on the masked face image, and replacing the region occluded by the mask with a corresponding region from an existing reference image to obtain a complete face image;
and S300, performing face matching and recognition on the complete face image.
2. The image stitching-based face recognition method according to claim 1, wherein S100 comprises:
capturing images with the image capture device, resizing the images for mask detection, detecting whether the target wears a mask with the YOLOv3 object detection algorithm, and, if a mask is detected, acquiring the image of the face occluded by the mask.
3. The image stitching-based face recognition method according to claim 2, wherein S100 further comprises:
locating the mask, converting the color space of the picture from RGB to HSV, removing the white portion by color, and setting a threshold to remove the background image.
4. The image stitching-based face recognition method according to claim 2, wherein S200 comprises:
performing an opening operation, determining the largest contour of the mask, saving the mask contour picture, and determining the size of the region to be replaced, wherein the opening operation comprises eroding the image and then dilating it.
5. The image stitching-based face recognition method according to claim 4, wherein S200 further comprises:
acquiring a mouth image from the existing face data, determining the size and width of the face, adjusting the mouth image to cover the mask according to the face size, adjusting it to the optimal position coordinates, and using those coordinates to crop the part to be stitched from the portrait image;
and resizing the cropped mouth image according to the size of the mask, then stitching it in to obtain a complete face image.
6. The image stitching-based face recognition method according to claim 5, wherein S300 comprises:
matching the complete face image against the face images stored in the database and outputting the corresponding match and recognition result.
7. An image stitching-based face recognition apparatus, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method steps of any one of claims 1 to 6.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111118259.2A CN114241542A (en) | 2021-09-23 | 2021-09-23 | Face recognition method based on image stitching |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114241542A true CN114241542A (en) | 2022-03-25 |
Family
ID=80743007
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111118259.2A Pending CN114241542A (en) | 2021-09-23 | 2021-09-23 | Face recognition method based on image stitching |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114241542A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115063863A (en) * | 2022-06-27 | 2022-09-16 | 中国平安人寿保险股份有限公司 | Face recognition method and device, computer equipment and storage medium |
CN115619410A (en) * | 2022-10-19 | 2023-01-17 | 闫雪 | Self-adaptive financial payment platform |
CN115619410B (en) * | 2022-10-19 | 2024-01-26 | 闫雪 | Self-adaptive financial payment platform |
CN115810214A (en) * | 2023-02-06 | 2023-03-17 | 广州市森锐科技股份有限公司 | Verification management method, system, equipment and storage medium based on AI face recognition |
CN115810214B (en) * | 2023-02-06 | 2023-05-12 | 广州市森锐科技股份有限公司 | AI-based face recognition verification management method, system, equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |