WO2015165365A1 - Face recognition method and system - Google Patents
Face recognition method and system
- Publication number
- WO2015165365A1 (PCT/CN2015/077373)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face image
- image
- face
- occlusion
- reference face
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Definitions
- the present invention relates to the field of image processing technologies, and in particular, to a face recognition method and system.
- Face recognition technology has been widely applied to video surveillance, attendance records, access control and other occasions.
- in the prior art, face recognition proceeds as follows: a face image library is first established, face feature extraction is performed on each face image in the library, and a face feature database is built from the extracted features.
- during recognition, face features are extracted from the collected face image and matched against the face feature database to obtain a face recognition result.
- current face recognition technology is easily affected by factors such as lighting, posture, expression, and occlusion. As a result, it has low accuracy when recognizing face images that contain an obstruction on the face, and may even fail to recognize them at all.
- the embodiments of the invention provide a face recognition method and system, which are used to solve the problem that recognition accuracy for face images containing an obstruction is low, or that such face images cannot be recognized at all.
- a first aspect of the embodiments of the present invention provides a method for recognizing a face, including:
- the occlusion image is extracted from the face image in which the occlusion is present;
- detecting, according to the feature to be recognized, whether there is an obstruction for occluding a human face in the collected facial image including:
- extracting the occlusion image in the face image in which the occlusion is present includes:
- the occlusion image is extracted from the face image after the normalization process.
- the occlusion image is extracted from the normalized face image, including:
- determining whether there is a matching reference face image in the face image database includes:
- the current reference face image is determined as the matching reference face image existing in the face image library
- determining whether there is a matching reference face image in the face image database includes:
- the current reference face image is determined as the matching reference face image existing in the face image library
- the extracted occlusion image is added to the reference facial image, including:
- the scaled image of the occlusion is added to a position in the reference face image that blocks the key portion of the face.
- the occlusion image is scaled according to a preset scaling ratio, including:
- the ratio between the reference spacing and the actual spacing is determined as the preset scaling ratio.
- a second aspect of the embodiments of the present invention provides a face recognition system, including:
- a detecting module configured to extract a facial feature to be recognized in the collected facial image, and detect, according to the facial feature to be recognized, whether there is an obstructing object covering the human face in the collected facial image;
- a feature extraction module configured to extract an occlusion image in the face image in which the occlusion object exists when the occlusion object exists in the collected face image
- a processing module configured to determine whether there is a matching reference face image in the face image library; if the determination result is yes, face recognition succeeds, otherwise face recognition fails; wherein, after the extracted occlusion image is added to the matching reference face image, the reference face feature extracted from the matching reference face image matches the face feature to be recognized.
- the detecting module includes:
- a retrieving unit for retrieving a database for storing a correspondence between a face feature and an occlusion category
- a matching unit configured to match the face feature to be recognized with the face feature in the database, and determine, according to the matching result, whether there is an occlusion object covering the face in the collected face image.
- the feature extraction module includes:
- an image processing unit configured to perform normalization processing on the collected face image, so that the key parts of the face in the collected face image overlap with the key parts of the face in the reference face images in the face image library;
- an extracting unit configured to extract the occlusion image from the face image after the normalization process.
- the extracting unit includes:
- a marker point obtaining module configured to acquire, in the normalized processed face image, an identifier point for identifying an edge of the occlusion object
- an image acquiring module configured to determine an area enclosed by the acquired identification point as the occlusion object image, and extract the determined occlusion object image.
- the processing module is specifically configured to: extract a reference face image from the face image library as the current reference face image; add the extracted occlusion image to the current reference face image; extract a reference face feature from the current reference face image with the occlusion image added; and determine whether the extracted reference face feature matches the face feature to be recognized. If it matches, the current reference face image is determined as the matching reference face image existing in the face image library; if not, it is determined whether any unextracted reference face image remains in the face image library. If one remains, the next reference face image is extracted as the current reference face image and the process returns to adding the extracted occlusion image to the current reference face image; if none remains, it is determined that there is no matching reference face image in the face image library.
- the processing module is specifically configured to: add the extracted occlusion image to each reference face image in the face image library; extract one reference face image with the occlusion image added as the current reference face image; extract a reference face feature from the current reference face image; and determine whether the extracted reference face feature matches the face feature to be recognized. If it matches, the current reference face image is determined as the matching reference face image existing in the face image library; if not, it is determined whether any unextracted reference face image remains in the face image library. If one remains, the process returns to extracting a reference face image with the occlusion image added as the current reference face image; if none remains, it is determined that there is no matching reference face image in the face image library.
- the processing module is further configured to determine the key part of the face that is blocked by the occlusion image in the collected face image, scale the occlusion image according to a preset scaling ratio, and add the scaled occlusion image to the position in the reference face image that blocks that key part of the face.
- the processing module is specifically configured to acquire the reference spacing between the two eye center points in the reference face image, acquire the actual spacing between the two eye center points in the collected face image, and determine the ratio between the reference spacing and the actual spacing as the preset scaling ratio.
- an embodiment of the invention provides a face recognition method, which comprises: extracting a face feature to be recognized from the collected face image, and detecting, according to the face feature to be recognized, whether there is an obstruction blocking the face in the collected face image; when an obstruction exists in the collected face image, extracting the occlusion image from the face image with the obstruction, and determining whether there is a matching reference face image in the face image library. If yes, face recognition succeeds; if not, face recognition fails.
- the reference face feature, extracted after the occlusion image is added to the matching reference face image, matches the face feature to be recognized.
- the face recognition method in the embodiments of the invention thus achieves accurate recognition of face images with an obstruction on the face, and improves the accuracy of face recognition.
- FIG. 2 is a flowchart of a face recognition method according to an embodiment of the present invention.
- FIG. 3 is a schematic diagram of identification of an obstruction in a face image according to an embodiment of the present invention.
- FIG. 4 is a schematic diagram of a face image including a marker point according to an embodiment of the present invention.
- FIG. 5 is a second schematic diagram of a face image including a marker point according to an embodiment of the present invention.
- FIG. 6 is a third schematic diagram of a face image including a marker point according to an embodiment of the present invention.
- FIG. 7 is a flowchart of a face recognition method according to an embodiment of the present invention.
- FIG. 8 is a flowchart of another method for recognizing a face according to an embodiment of the present invention.
- FIG. 9 is a schematic structural diagram of a face recognition system according to an embodiment of the present invention.
- an embodiment of the present invention provides a face recognition method, which includes: extracting a face feature to be recognized from the collected face image, and detecting, according to the face feature to be recognized, whether there is an occlusion object covering the face in the collected face image; when an occlusion object exists in the collected face image, the occlusion image is extracted from it, and it is determined whether a matching reference face image exists in the face image library; if yes, face recognition succeeds, otherwise it fails.
- the reference face feature, extracted after the occlusion image is added to the matching reference face image, matches the face feature to be recognized.
- the face recognition method in the embodiments of the invention thus achieves accurate recognition of face images with an obstruction on the face, and improves the accuracy of face recognition.
- FIG. 2 is a flowchart of a method for recognizing a face according to an embodiment of the present invention, where the method includes:
- S201 extracting a feature of the face to be recognized in the collected face image, and detecting, according to the feature of the face to be recognized, whether there is an obstruction for obscuring the face in the collected face image;
- feature extraction is first performed on the face image.
- the feature extraction is performed by using a local binary pattern (LBP); the method for extracting face features is not described in detail here.
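As a concrete illustration of the LBP step, the following is a minimal pure-Python sketch of basic 3×3 LBP feature extraction. The patent does not give an implementation, so the function names, the clockwise bit ordering, and the histogram descriptor are illustrative assumptions, not the patent's method:

```python
def lbp_code(img, r, c):
    """8-bit LBP code for the pixel at (r, c): each neighbour that is
    >= the centre contributes one bit, clockwise from the top-left."""
    center = img[r][c]
    # Clockwise neighbour offsets starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Histogram of LBP codes over all interior pixels -- a simple
    texture descriptor usable as a face feature vector."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

In practice the face image is usually divided into cells, with one histogram per cell concatenated into the feature vector; the single-histogram version above only shows the core code computation.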
- the occlusion of the collected face image can be detected by the following method:
- the database includes unobstructed face images and face images with an obstruction on the face; the latter may be face images wearing sunglasses, face images wearing a mask, face images wearing black-rimmed glasses, and the like.
- the face images in the database are classified: the category of unobstructed face images is set to -1, the category of face images wearing sunglasses is set to 1, the category of face images wearing a mask is set to 2, the category of face images wearing black-rimmed glasses is set to 3, and so on. Each category also corresponds to the face features of the face images in which that occlusion is present; that is, category 1 corresponds to the face features of face images wearing sunglasses.
- the face features of each face image under each category in the database are extracted, and the extracted face features together with their corresponding categories are input into a support vector machine (SVM). The correspondence between face features and categories, and between categories and occlusions, can thus be established in the SVM; finally, by training on the face features, the SVM yields a classifier for detecting the presence of an obstruction in a face image.
- the face recognition system can then detect the occlusion image and classify the face images containing an occlusion.
- the face feature to be recognized in the collected face image is first extracted by the LBP feature extraction method, and then the extracted face feature to be recognized is input to the generated classifier.
- the classifier determines the category corresponding to the face feature to be recognized according to the input, and then determines, according to that category, whether there is an obstruction in the collected face image; when an obstruction is present, it further determines which type of obstruction it is.
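The detection flow above can be sketched as follows. The patent trains an SVM; as a simplified, illustrative stand-in, this sketch uses a nearest-centroid classifier over the same category codes (-1 no occlusion, 1 sunglasses, 2 mask, 3 black-rimmed glasses). All names are assumptions, not the patent's implementation:

```python
OCCLUSION_NAMES = {-1: "none", 1: "sunglasses", 2: "mask", 3: "black-rimmed glasses"}

def train_centroids(samples):
    """samples: list of (feature_vector, category). Returns the mean
    feature vector per category -- a stand-in for SVM training."""
    sums, counts = {}, {}
    for vec, cat in samples:
        acc = sums.setdefault(cat, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[cat] = counts.get(cat, 0) + 1
    return {cat: [v / counts[cat] for v in acc] for cat, acc in sums.items()}

def classify(centroids, vec):
    """Return the category whose centroid is closest in squared distance."""
    best_cat, best_d = None, None
    for cat, cen in centroids.items():
        d = sum((a - b) ** 2 for a, b in zip(vec, cen))
        if best_d is None or d < best_d:
            best_cat, best_d = cat, d
    return best_cat
```

A category of -1 means no obstruction; any positive category both signals an obstruction and identifies its type via `OCCLUSION_NAMES`, mirroring the two-step decision described above.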
- when the classifier determines that an occlusion exists in the collected face image, the occlusion image is extracted from the face image according to the following process:
- Step 1: determining the marker points of the occlusion edge in the collected face image.
- when the classifier detects that an occlusion exists in the collected face image, marker-point detection is performed on the collected face image.
- the marker points can be detected using the Active Shape Model (ASM) or the Supervised Descent Method (SDM).
- taking the face shown in FIG. 3 as an example, the edge of the sunglasses is identified by the SDM method to generate an occlusion edge detection model; when a face image with an occlusion is input, the marker points at the edge of the occlusion can then be determined directly by the occlusion edge detection model.
- Step 2: extracting the occlusion image from the collected face image according to the marker points of the occlusion edge.
- following step 1, when a face image containing an occlusion is input, the face recognition system obtains the marker points of the occlusion edge in the collected face image according to the SDM method, determines the area enclosed by the marker points as the area occupied by the occlusion, determines that area as the occlusion image, and finally extracts the occlusion image from the face image.
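Step 2 can be sketched as cropping the region spanned by the edge marker points. For simplicity this sketch uses the axis-aligned bounding box of the points rather than an exact polygon mask (an exact version would need a point-in-polygon test); names are illustrative:

```python
def crop_occlusion(img, marker_points):
    """img: list of pixel rows; marker_points: list of (row, col)
    points on the occlusion edge. Returns the sub-image spanning the
    bounding box of the marker points."""
    rows = [p[0] for p in marker_points]
    cols = [p[1] for p in marker_points]
    r0, r1 = min(rows), max(rows)
    c0, c1 = min(cols), max(cols)
    # Inclusive crop of the enclosed region.
    return [row[c0:c1 + 1] for row in img[r0:r1 + 1]]
```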
- the SDM method is used to create a face image model that identifies the marker points on key parts of the face; that is, in the face image, key parts of the face, such as the eyebrows, eyes, nose, and lips, are marked by marker points.
- the marker points of the eyes include the eight marker points 18 to 25; the marker points on the nose are 26 to 31, of which marker point 28 is the nose tip; the marker points on the mouth are 32 to 38, of which 32 and 38 are the marker points of the mouth corners.
- the face image model created by the SDM method can obtain the marker points of the key parts of the face in the collected face image. After the marker points are obtained, the coordinates of the marker points on the left and right eyes in the face image are determined; the coordinates of the left-eye center point are obtained as the average of the coordinates of the marker points on the left eye, and those of the right-eye center point as the average of the marker points on the right eye. As shown in FIG. 5, marker point 01 is the left-eye center point and marker point 02 is the right-eye center point.
- the actual distance L1 between the center points of the left and right eyes is then obtained (as shown in FIG. 5), together with the vertical distance S1 between the left-eye center point and the lower edge of the sunglasses.
- similarly, the SDM method is used to acquire the marker points of the reference face image in the face image library to which the occlusion is added, and the reference distance L0 between the left-eye and right-eye center points in the reference face image is obtained (as shown in FIG. 6), together with the vertical distance S0 between the left-eye center point and the lower edge of the occlusion. The scaling ratio used when the occlusion image is added to the reference face image is then obtained from the proportional relationship l = L0 / L1 and h = S0 / S1,
- where l represents the scaling in the horizontal direction
- and h represents the scaling in the vertical direction.
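The proportional relations above can be written out directly: the horizontal scale compares the eye spacings, and the vertical scale compares the eye-to-occlusion-edge distances. A minimal sketch (variable and function names are illustrative):

```python
def occlusion_scales(L0, L1, S0, S1):
    """L0: reference eye spacing; L1: eye spacing in the collected
    image; S0/S1: the corresponding vertical distances from the
    left-eye center to the occlusion's lower edge. Returns (l, h)."""
    return L0 / L1, S0 / S1

def scaled_size(width, height, l, h):
    """Size of the occlusion image after scaling by (l, h)."""
    return round(width * l), round(height * h)
```

For example, if the reference eye spacing is twice the collected one, the extracted occlusion is doubled in width before being pasted onto the reference face image.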
- Figure 7 is a flow chart of the first mode, including:
- the key part of the face blocked by the obstruction in the collected face image is determined, and after the reference face image is extracted from the face image library, the scaled occlusion image is added to the position in the reference face image that blocks that key part. That is, if the occlusion blocks the two eyes in the collected face image, the occlusion image also blocks the two eyes when added to the reference face image; finally, the distance between the eye center point and the lower edge of the occlusion image is used to accurately determine where the occlusion image is placed.
- the reference face feature is extracted from the current reference face image to which the occlusion image is added, using the LBP method;
- a reference face image is first extracted from the face image library and the occlusion image is added to it; a reference face feature is then extracted from the reference face image with the occlusion image added, and finally the reference face feature is matched against the face feature to be recognized. If the match succeeds, the recognition result is obtained and the recognition process ends; if it fails, the next reference face image is extracted and the matching operation continues, looping in this way.
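The first mode described above can be sketched as a lazy loop: each reference image has the occlusion pasted onto it only when its turn comes to be compared. `add_occlusion`, `extract_feature`, and `features_match` are placeholders for the steps described earlier, passed in as functions:

```python
def recognize_lazy(library, occlusion, probe_feature,
                   add_occlusion, extract_feature, features_match):
    """Mode 1: augment and compare one reference image at a time.
    Returns the matching reference image, or None on failure."""
    for ref_img in library:
        augmented = add_occlusion(ref_img, occlusion)
        ref_feature = extract_feature(augmented)
        if features_match(ref_feature, probe_feature):
            return ref_img          # face recognition succeeded
    return None                     # face recognition failed
```

The advantage of this ordering is that the loop can stop at the first match without augmenting the rest of the library.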
- Figure 8 is a flow chart of the second mode, including:
- the occlusion image is added to each reference face image in the face image library, and the method of adding the occlusion is the same as the method in the first mode.
- the reference face feature in the current reference face image is also extracted using the LBP method.
- S804. Determine whether the extracted reference facial features match the facial features to be recognized. If yes, proceed to S805. If not, perform S806.
- the occlusion image is added to each reference face image in the face image library, reference face features are then extracted one by one from the reference face images with the occlusion image added, and these are matched against the face features in the collected face image. If a matching reference face image is found, the face recognition process ends and the recognition result is obtained; if none of the occlusion-augmented reference face images in the face image library matches the collected face image, face recognition fails.
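The second mode differs from the first only in ordering: the occlusion is first added to every reference image, and the augmented set is then scanned. A sketch with the same placeholder functions as before:

```python
def recognize_batch(library, occlusion, probe_feature,
                    add_occlusion, extract_feature, features_match):
    """Mode 2: augment the whole library first, then scan it.
    Returns the matching reference image, or None on failure."""
    augmented_all = [(ref, add_occlusion(ref, occlusion)) for ref in library]
    for ref_img, augmented in augmented_all:
        if features_match(extract_feature(augmented), probe_feature):
            return ref_img
    return None
```

Both modes produce the same decision; batching trades extra up-front work for a simpler scan loop.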
- the face recognition system may thus extract the occlusion from the face image and add it to the reference face images in the face image library; the face features of the collected face image and the reference face features of the reference face images with the occlusion image added are then extracted and matched, and the face recognition result is obtained. This achieves recognition of face images with an obstruction and improves the accuracy of the face recognition system.
- after the collected face image is acquired, normalization is performed so that the key parts in the collected face image are aligned with the key parts in the reference face images in the face image library. Put simply, the key parts of the collected face image, such as the eyes, nose, and mouth, are aligned with the corresponding key parts in the reference face image, which makes image matching more accurate.
- the affine matrix R can be obtained as follows:
- the five marker points are: the marker points of the centers of the two eyes, the marker point of the nose tip, and the marker points of the two mouth corners. As shown in FIG. 4, the marker point of the left-eye center can be obtained as the average of the image coordinates of marker points 10 to 17, and that of the right-eye center as the average of marker points 18 to 25; the nose tip point is 28, and the two mouth corners are 32 and 38.
- in this way, marker points that accurately represent the key parts of the face are extracted, namely the marker points of the two eye centers, the nose tip, and the two mouth corners.
- the image coordinates of the five marker points in the reference face image are obtained from the corresponding coordinates in the collected face image according to relation (1), i.e. x' = r11·x + r12·y + r13 and y' = r21·x + r22·y + r23,
- where r11, r12, r13, r21, r22, and r23 are the six parameters of the affine matrix R; transforming formula (1) gives formula (2), a linear system from which the six parameters can be solved.
- the image of the collected face image can be normalized by the affine matrix.
- the purpose of the normalization is to adjust the collected face image so that it aligns with the reference face image.
- each marker point in the reference face image can be represented by coordinates; for example, suppose that in the reference face image the coordinates of the left-eye center point are (60, 40), those of the right-eye center point are (60, 50), those of the nose tip are (40, 45), and the marker points of the two mouth corners are (20, 43) and (20, 47); the coordinates of these five marker points are used as the standard.
- suppose the collected face image is 40×40; marker-point detection is performed on it to obtain its five marker points, whose coordinates are: left-eye center point (30, 13), right-eye center point (30, 17), nose tip (15, 19), and the two mouth corners (13, 10) and (16, 10).
- the collected face image is then normalized: its size is adjusted to 100×100; the coordinates of the left-eye center point are adjusted from (30, 13) to (60, 40), those of the right-eye center point from (30, 17) to (60, 50), those of the nose tip from (15, 19) to (40, 45), and those of the two mouth corners from (13, 10) to (20, 43) and from (16, 10) to (20, 47). In this way, the collected face image can essentially coincide with the reference face image, which ensures the accuracy of marker matching.
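The worked example above gives five point correspondences from which the six parameters of R can be estimated. Since the problem splits into two independent three-unknown systems (one for x', one for y'), each can be solved via its 3×3 normal equations. A minimal pure-Python least-squares sketch; the patent does not prescribe a solver, so the function names and solution method are illustrative:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))  # partial pivot
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_affine(src, dst):
    """src/dst: lists of (x, y). Returns (r11, r12, r13, r21, r22, r23)
    minimising the squared mapping error over the correspondences."""
    rows = [[x, y, 1.0] for x, y in src]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    px = solve3(AtA, [sum(r[i] * d[0] for r, d in zip(rows, dst)) for i in range(3)])
    py = solve3(AtA, [sum(r[i] * d[1] for r, d in zip(rows, dst)) for i in range(3)])
    return tuple(px + py)

def apply_affine(params, x, y):
    r11, r12, r13, r21, r22, r23 = params
    return r11 * x + r12 * y + r13, r21 * x + r22 * y + r23

# Worked-example correspondences: collected-image marker coordinates
# mapped to the reference coordinates given in the text.
src = [(30, 13), (30, 17), (15, 19), (13, 10), (16, 10)]
dst = [(60, 40), (60, 50), (40, 45), (20, 43), (20, 47)]
R = fit_affine(src, dst)
```

A useful sanity check is that a least-squares affine fit always maps the centroid of the source points onto the centroid of the destination points.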
- correspondingly, the reference face images saved in the face image library also need to be normalized, so as to ensure that the collected face image substantially overlaps with the reference face images in the face image library.
- step S202 and subsequent steps are performed, that is, the occlusion image is acquired in the normalized face image. Since the normalized face image and the reference face image substantially overlap, the occlusion image can be accurately added to the corresponding position in the reference face image after being extracted.
- when the face recognition system detects that an occlusion exists in the collected face image, it retrieves the reference face images in which the occlusion is stored in the face image library, matches the collected face image against the reference face images with the obstruction, and finally obtains the recognition result. Performing face recognition in this way accurately recognizes face images with an obstruction and improves the recognition capability of the face recognition system.
- a face recognition system is also provided in the embodiment of the present invention. As shown in FIG. 9, the face recognition system includes:
- the detecting module 901 is configured to extract, in the collected facial image, a facial feature to be recognized, and detect, according to the facial feature to be recognized, whether there is an obstructing object covering the human face in the collected facial image;
- the feature extraction module 902 is configured to extract an occlusion image in the face image in which the occlusion object exists when the occlusion object exists in the collected face image;
- the processing module 903 is configured to determine whether there is a matching reference face image in the face image library; if the determination result is yes, face recognition succeeds, otherwise face recognition fails; wherein, after the extracted occlusion image is added to the matching reference face image, the reference face feature extracted from the matching reference face image matches the face feature to be recognized.
- the detecting module 901 includes:
- a retrieving unit for retrieving a database for storing a correspondence between a face feature and an occlusion category
- a matching unit configured to match the face feature to be recognized with the face feature in the database, and determine, according to the matching result, whether there is an occlusion object covering the face in the collected face image.
- the feature extraction module 902 includes:
- an image processing unit configured to perform normalization processing on the collected face image, so that the key parts of the face in the collected face image overlap with the key parts of the face in the reference face images in the face image library;
- an extracting unit configured to extract the occlusion image from the face image after the normalization process.
- the extracting unit in the feature extraction module 902 includes:
- a mark point obtaining module configured to obtain an identifier point in the normalized face image for identifying an edge of the occlusion object
- an image acquisition module configured to determine the area enclosed by the acquired marker points as the occlusion image, and extract the determined occlusion image.
- the processing module 903 is specifically configured to: extract a reference face image from the face image library as the current reference face image; add the extracted occlusion image to the current reference face image; extract a reference face feature from the current reference face image with the occlusion image added; and determine whether the extracted reference face feature matches the face feature to be recognized. If it matches, the current reference face image is determined as the matching reference face image existing in the face image library; if not, it is determined whether any unextracted reference face image remains in the face image library. If one remains, the next reference face image is extracted as the current reference face image and the process returns to adding the extracted occlusion image to the current reference face image; if none remains, it is determined that there is no matching reference face image in the face image library.
- the processing module 903 is specifically configured to: add the extracted occlusion image to each reference face image in the face image library; extract one reference face image with the occlusion image added as the current reference face image; extract a reference face feature from the current reference face image; and determine whether the extracted reference face feature matches the face feature to be recognized. If it matches, the current reference face image is determined as the matching reference face image existing in the face image library; if not, it is determined whether any unextracted reference face image remains in the face image library. If one remains, the process returns to extracting a reference face image with the occlusion image added as the current reference face image; if none remains, it is determined that there is no matching reference face image in the face image library.
- the processing module 903 is further configured to determine the key facial part occluded by the occlusion image in the captured face image, scale the occlusion image according to a preset scaling ratio, and add the scaled occlusion image to the position in the reference face image where it occludes that key facial part.
- the processing module 903 is specifically configured to acquire the reference spacing between the two eye center points in the reference face image, acquire the actual spacing between the two eye center points in the captured face image, and determine the preset scaling ratio from the ratio of the reference spacing to the actual spacing.
- These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
- These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
本发明实施例提供了一种人脸识别方法及系统,该方法包括:在采集到的人脸图像中提取出待识别人脸特征,并根据待识别人脸特征,检测采集到的人脸图像中是否存在遮挡人脸的遮挡物;在采集到的人脸图像中存在遮挡物时,在存在遮挡物的人脸图像中提取出遮挡物图像,判断在人脸图像库中是否存在匹配参考人脸图像,若存在,则人脸识别成功,若否,则人脸识别失败,其中,在匹配参考人脸图像中添加了提取出的遮挡物图像后提取出的参考人脸特征与待识别人脸特征匹配。通过本发明实施例中的人脸识别方法实现了对人脸上存在遮挡物的人脸图像进行准确的识别,提高了人脸识别的精确性。
Description
本发明涉及图像处理技术领域,尤其涉及一种人脸识别方法及系统。
人脸识别技术已经被广泛应用到视频监控、考勤记录、门禁等场合,在视频监控等非主动配合的场景下,如图1所示为现有技术中的人脸识别的流程图,在进行人脸识别之前首先需要建立人脸图像库,然后将人脸图像库中的每张人脸图像进行人脸特征提取,基于提取得到的各个人脸特征建立人脸特征库。后续在进行人脸识别时,每次采集到人脸图像之后,在采集到的人脸图像中进行人脸特征提取,并将在采集到的人脸图像中提取的人脸特征在建立的人脸特征库中进行匹配,从而得到人脸识别结果。
但是,当前的人脸识别技术容易受到光线、姿态、表情、遮挡等因素影响,因此,当前的人脸识别技术中存在对人脸上有遮挡物的人脸图像进行识别的精确度较低,甚至是无法识别的问题。
发明内容
本发明实施例提供了一种人脸识别方法及系统,用以解决存在遮挡物的人脸图像识别精确度较低,甚至是无法识别存在遮挡物的人脸图像的问题。
其具体的技术方案如下:
本发明实施例第一方面提供了一种人脸识别方法,包括:
在采集到的人脸图像中提取出待识别人脸特征,并根据所述待识别人脸特征,检测采集到的人脸图像中是否存在遮挡人脸的遮挡物;
在采集到的人脸图像中存在遮挡物时,在存在遮挡物的所述人脸图像中提取出遮挡物图像;
判断在人脸图像库中是否存在匹配参考人脸图像,其中在所述匹配参考人脸图像中添加了提取出的遮挡物图像后,在添加了所述遮挡物图像的匹配参考人脸图像中提取出的参考人脸特征与所述待识别人脸特征匹配;
在判断结果为是时,则人脸识别成功;否则
人脸识别失败。
结合第一方面,在第一种可能的实现方式中,根据所述待识别人脸特征,检测采集到的人脸图像中是否存在遮挡人脸的遮挡物,包括:
调取保存有人脸特征与遮挡物类别之间的对应关系的数据库;
将所述待识别人脸特征与所述数据库中的人脸特征进行匹配,根据匹配结果确定采集到的人脸图像中是否存在遮挡人脸的遮挡物。
结合第一方面,在第二种可能的实现方式中,在存在遮挡物的所述人脸图像中提取出遮挡物图像,包括:
对采集到的人脸图像进行归一化处理,以使采集到的人脸图像中人脸关键部位与人脸图像库中参考人脸图像中的人脸关键部位重叠;
在经过归一化处理后的所述人脸图像中提取出所述遮挡物图像。
结合第一方面的第二种可能的实现方式,在第三种可能的实现方式中,在经过归一化处理后的所述人脸图像中提取出所述遮挡物图像,包括:
获取经过归一化处理后的所述人脸图像中用以标识所述遮挡物的边缘的标识点;
将获取的标识点所围成的区域确定为所述遮挡物图像,并
提取确定出的所述遮挡物图像。
结合第一方面,在第四种可能的实现方式中,判断在人脸图像库中是否存在匹配参考人脸图像,包括:
在所述人脸图像库中提取出一张参考人脸图像作为当前参考人脸图像,并将提取出的所述遮挡物图像添加至所述当前参考人脸图像中;
在添加了遮挡物图像的当前参考人脸图像中提取出参考人脸特征;
判定提取出的所述参考人脸特征与所述待识别人脸特征是否匹配;
若匹配,则将当前参考人脸图像确定为人脸图像库中存在的匹配参考人脸图像;
若不匹配,判断所述人脸图像库中是否还存在未提取出的参考人脸图像;
若存在,则提取下一张参考人脸图像作为当前参考人脸图像,并返回继续执行在所述人脸图像库中提取出一张参考人脸图像作为当前参考人脸图像,并将提取出的所述遮挡物图像添加至当前参考人脸图像中的操作;
若不存在,则确定在人脸图像库中不存在匹配参考人脸图像。
结合第一方面,在第五种可能的实现方式中,判断在人脸图像库中是否存在匹配参考人脸图像,包括:
将提取出的所述遮挡物图像分别添加到所述人脸图像库中的各个参考人脸图像中;
提取出一张添加了遮挡物图像的参考人脸图像作为当前参考人脸图像;
在提取出的所述当前参考人脸图像中提取出参考人脸特征;
判定提取出的所述参考人脸特征与所述待识别人脸特征是否匹配;
若匹配,则将当前参考人脸图像确定为人脸图像库中存在的匹配参考人脸图像;
若不匹配,判断所述人脸图像库中是否存在还未提取的参考人脸图像,若存在,则返回继续执行提取出一张添加了遮挡物图像的参考人脸图像作为当前参考人脸图像的操作;若不存在,则确定在所述人脸图像库中不存在匹配参考人脸图像。
结合第一方面中的第四种或者第五种可能的实现方式,在第六种可能的实现方式中,将提取出的遮挡物图像添加到参考人脸图像中,包括:
确定所述遮挡物图像在采集到的人脸图像中所遮挡的人脸关键部位;
对所述遮挡物图像按照预设缩放比例进行缩放处理;
将经过缩放处理后的遮挡物图像添加至所述参考人脸图像中遮挡所述人脸关键部位的位置。
结合第一方面中的第六种可能的实现方式,在第七种可能的实现方式中,对所述遮挡物图像按照预设缩放比例进行缩放处理,包括:
获取所述参考人脸图像中两个眼睛中心点之间的参考间距;
获取采集到的人脸图像中两个眼睛中心点之间的实际间距;
将所述参考间距与所述实际间距之间的比例确定为所述预设缩放比例。
本发明实施例第二方面提供了一种人脸识别系统,包括:
检测模块,用于在采集到的人脸图像中提取出待识别人脸特征,并根据所述待识别人脸特征,检测采集到的人脸图像中是否存在遮挡人脸的遮挡物;
特征提取模块,用于在采集到的人脸图像中存在遮挡物时,在存在遮挡物的所述人脸图像中提取出遮挡物图像;
处理模块,用于判断在人脸图像库中是否存在匹配参考人脸图像,在判断结果为是时,则人脸识别成功,否则人脸识别失败;其中在所述匹配参考人脸图像中添加了提取出的遮挡物图像后,在添加了所述遮挡物图像的匹配参考人脸图像中提取出的参考人脸特征与所述待识别人脸特征匹配。
结合第二方面,在第一种可能的实现方式中,检测模块包括:
调取单元,用于调取保存有人脸特征与遮挡物类别之间的对应关系的数据库;
匹配单元,用于将所述待识别人脸特征与所述数据库中的人脸特征进行匹配,根据匹配结果确定采集到的人脸图像中是否存在遮挡人脸的遮挡物。
结合第二方面,在第二种可能的实现方式中,所述特征提取模块,包括:
图像处理单元,用于对采集到的人脸图像进行归一化处理,以使采集到的人脸图像中人脸关键部位与人脸图像库中参考人脸图像中的人脸关键部位重叠;
提取单元,用于在经过归一化处理后的所述人脸图像中提取出所述遮挡物图像。
结合第二方面中的第二种可能的实现方式,在第三种可能的实现方式中,所述提取单元,包括:
标识点获取模块,用于获取经过归一化处理后的所述人脸图像中用以标识所述遮挡物的边缘的标识点;
图像获取模块,用于将获取的标识点所围成的区域确定为所述遮挡物图像,并提取确定出的所述遮挡物图像。
结合第二方面,在第四种可能的实现方式中,所述处理模块,具体用于在所述人脸图像库中提取出一张参考人脸图像作为当前参考人脸图像,并将提取出的所述遮挡物图像添加至所述当前参考人脸图像中,在添加了遮挡物图像的当前参考人脸图像中提取出参考人脸特征,判定提取出的所述参考人脸特征与所述待识别人脸特征是否匹配,若匹配,则将当前参考人脸图像确定为人脸图像库中存在的匹配参考人脸图像;若不匹配,判断所述人脸图像库中是否还存在未提取出的参考人脸图像,若存在,则提取下一张参考人脸图像作为当前参考人脸图像,并返回继续执行在所述人脸图像库中提取出一张参考人脸图像作为当前参考人脸图像,并将提取出的所述遮挡物图像添加至当前参考人脸图像中的操作;若不存在,则确定在人脸图像库中不存在匹配参考人脸图像。
结合第二方面,在第五种可能的实现方式中,所述处理模块,具体用于将提取出的所述遮挡物图像分别添加到所述人脸图像库中的各个参考人脸图像中,提取出一张添加了遮挡物图像的参考人脸图像作为当前参考人脸图像,在提取出的所述当前参考人脸图像中提取出参考人脸特征,判定提取出的所述参考人脸特征与所述待识别人脸特征是否匹配,若匹配,则将当前参考人脸图像确定为人脸图像库中存在的匹配参考人脸图像;若不匹配,判断所述人脸图像库中是否存在还未提取的参考人脸图像,若存在,则返回继续执行提取出一张添加了遮挡物图像的参考人脸图像作为当前参考人脸图像的操作;若不存在,则确定在所述人脸图像库中不存在匹配参考人脸图像。
结合第二方面中的第四种或者第五种可能的实现方式,在第六种可能的实现方式中,所述处理模块,还用于确定所述遮挡物图像在采集到的人脸图像中所遮挡的人脸关键部位,对所述遮挡物图像按照预设缩放比例进行缩放
处理,将经过缩放处理后的遮挡物图像添加至所述参考人脸图像中遮挡所述人脸关键部位的位置。
结合第二方面中的第六种可能的实现方式,在第七种可能的实现方式中,所述处理模块,具体用于获取所述参考人脸图像中两个眼睛中心点之间的参考间距,获取采集到的人脸图像中两个眼睛中心点之间的实际间距,将所述参考间距与所述实际间距之间的比例确定为所述预设缩放比例。
本发明实施例提供了一种人脸识别方法,该方法包括:在采集到的人脸图像中提取出待识别人脸特征,并根据待识别人脸特征,检测采集到的人脸图像中是否存在遮挡人脸的遮挡物;在采集到的人脸图像中存在遮挡物时,在存在遮挡物的人脸图像中提取出遮挡物图像,判断在人脸图像库中是否存在匹配参考人脸图像,若存在,则人脸识别成功,若否,则人脸识别失败,其中,在匹配参考人脸图像中添加了提取出的遮挡物图像后提取出的参考人脸特征与待识别人脸特征匹配。通过本发明实施例中的人脸识别方法实现了对人脸上存在遮挡物的人脸图像进行准确的识别,提高了人脸识别的精确性。
图1为现有技术中人脸识别的流程示意图;
图2为本发明实施例中一种人脸识别方法的流程图;
图3为本发明实施例中人脸图像中遮挡物的标识示意图;
图4为本发明实施例中包含标识点的人脸图像示意图之一;
图5为本发明实施例中包含标识点的人脸图像示意图之二;
图6为本发明实施例中包含标识点的人脸图像示意图之三;
图7为本发明实施例中一种人脸识别方式的流程图;
图8为本发明实施例中另一种人脸识别方式的流程图;
图9为本发明实施例中一种人脸识别系统的结构示意图。
下面通过附图以及具体实施例对本发明技术方案做详细的说明。
本发明实施例为了解决在存在遮挡物的人脸图像识别的精确度较低,甚至是无法识别存在遮挡物的人脸图像的问题,提供了一种人脸识别方法,该方法包括:在采集到的人脸图像中提取出待识别人脸特征,并根据待识别人脸特征,检测采集到的人脸图像中是否存在遮挡人脸的遮挡物;在采集到的人脸图像中存在遮挡物时,在存在遮挡物的人脸图像中提取出遮挡物图像,判断在人脸图像库中是否存在匹配参考人脸图像,若存在,则人脸识别成功,若否,则人脸识别失败,其中,在匹配参考人脸图像中添加了提取出的遮挡物图像后提取出的参考人脸特征与待识别人脸特征匹配。通过本发明实施例中的人脸识别方法实现了对人脸上存在遮挡物的人脸图像进行准确的识别,提高了人脸识别的精确性。
如图2所示为本发明实施例中一种人脸识别方法的流程图,该方法包括:
S201,在采集到的人脸图像中提取出待识别人脸特征,并根据待识别人脸特征,检测采集到的人脸图像中是否存在遮挡人脸的遮挡物;
在采集到人脸图像之后,首先对人脸图像进行特征提取,在本发明实施例中通过局部二值模式(Local Binary Pattern,简称LBP)算子进行特征提取,具体提取人脸特征的方法在此就不再详述。
在提取出采集到的人脸图像中的待识别人脸特征之后,将待识别人脸特征输入至人脸识别系统中的分类器,通过该分类器就能够确定出采集到的人脸图像中是否存在遮挡人脸的遮挡物,在本发明实施例中可以通过如下的方法来实现对采集到的人脸图像进行遮挡物的检测:
在人脸识别系统中建立一个数据库,在该数据库中包含无遮挡的人脸图像以及在人脸上存在遮挡物的人脸图像,存在遮挡物的人脸图像可以是佩戴墨镜的人脸图像、佩戴口罩的人脸图像、佩戴黑框眼镜的人脸图像等。
然后对数据库中的人脸图像进行分类,将无遮挡的人脸图像的类别确定为-1,佩戴墨镜的人脸图像的类别确定为1,佩戴口罩的人脸图像的类别确定为2,佩戴黑框眼镜的人脸图像的类别确定为3等等,并且每个类别还对应存在遮挡物的人脸图像对应的人脸特征,即:类别1对应佩戴墨镜的人脸图像的人脸特征。在可能的情况下,数据库中的样本越丰富越好,也就是说在数据库中可以尽可能的添加各种存在不同遮挡物的人脸图像,这样使得后续可以更加准确的对存在遮挡物的人脸图像进行分类。
在数据库中对每种存在遮挡物的人脸图像进行标类之后,提取数据库中每一标类下的每一张人脸图像的人脸特征,然后将提取出的人脸特征以及人脸特征对应的标类输入到支持向量机(Support Vector Machine,简称SVM)中,在SVM中就可以建立人脸特征与类别以及类别与遮挡物之间的对应关系,最后通过SVM对人脸特征的学习训练就可以得到一个用于检测人脸图像中是否存在遮挡物的分类器。
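上述"人脸特征+标类→分类器"的训练与判别流程可以用下面的Python片段示意。需要说明的是,这只是一个在假设的二维特征向量上、用最近质心分类器代替SVM的简化示意,并非本实施例中LBP+SVM的原始实现:

```python
# 示意:简化的"特征 -> 遮挡物类别"训练与判别流程,用最近质心分类器代替SVM。
# 特征向量与样本均为假设示例;类别编号沿用正文中的约定。

OCCLUSION_LABELS = {-1: "无遮挡", 1: "墨镜", 2: "口罩", 3: "黑框眼镜"}

def train_centroids(samples):
    """samples: [(特征向量, 类别), ...] -> {类别: 质心向量}"""
    sums, counts = {}, {}
    for feat, label in samples:
        acc = sums.setdefault(label, [0.0] * len(feat))
        for i, v in enumerate(feat):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, feat):
    """返回与feat距离最近的质心对应的类别。"""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], feat))

# 假设的三类训练样本:无遮挡(-1)、墨镜(1)、口罩(2)
samples = [([0.1, 0.1], -1), ([0.2, 0.0], -1),
           ([0.9, 0.1], 1), ([1.0, 0.2], 1),
           ([0.1, 0.9], 2), ([0.0, 1.0], 2)]
model = train_centroids(samples)
predicted = classify(model, [0.95, 0.15])  # 假设的待识别特征,落在"墨镜"类附近
```

实际系统中,特征向量应为LBP特征,分类器应替换为SVM等有监督分类器;数据库中的样本越丰富,分类越准确。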
在生成分类器之后,人脸识别系统就可以实现对采集到的人脸图像进行遮挡物的检测,并且对存在遮挡物的人脸图像进行分类。
具体来讲,当采集到人脸图像之后,首先通过LBP特征提取方法提取出采集到的人脸图像中的待识别人脸特征,然后将提取出的待识别人脸特征输入到生成的分类器中,分类器根据输入的待识别人脸特征确定出该待识别人脸特征对应的类别,然后根据确定出的类别最终确定出采集到的人脸图像中是否存在有遮挡物,在存在遮挡物时,进一步确定是哪一类的遮挡物。
S202,在采集到的人脸图像中存在遮挡物时,在存在遮挡物的人脸图像中提取出遮挡物图像;
若是分类器确定出采集到的人脸图像中存在遮挡物时,则按照如下的流程来提取出人脸图像中的遮挡物图像:
步骤1,在采集到的人脸图像中确定出遮挡物边缘的标识点;
在分类器检测到采集到的人脸图像中存在遮挡物时,对采集到的人脸图像进行标识点检测,这里的标识点检测的方法可以使用主动形状模型(Active Shape Model,简称ASM)或者是监督梯度下降法(Supervised Descent Method,简称SDM),在本发明实施例中采用的是SDM。比如说如图3所示,在图3所示的人脸图像中存在墨镜,通过SDM方法,将墨镜边缘通过标识点进行标识,从而生成一个遮挡物边缘检测模型,当存在遮挡物的人脸图像输入时,就可以通过遮挡物边缘检测模型直接确定出遮挡物边缘的标识点。
步骤2,根据遮挡物边缘的标识点,在采集到的人脸图像中提取出遮挡物图像;
在步骤1中得到遮挡物边缘检测模型之后,在人脸识别系统采集到存在遮挡物的人脸图像输入时,人脸识别系统将根据SDM方法得到采集到的人脸图像中遮挡物边缘的标识点,将标识点所围成的区域确定为遮挡物所占用的区域,然后将遮挡物所占用的区域确定为遮挡物图像,最后提取出人脸图像中的遮挡物图像。
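将标识点围成的区域确定为遮挡物图像并提取出来的过程,可以用如下Python片段示意(图像用二维灰度列表表示,标识点坐标均为假设值,多边形内判定采用射线法,并非本实施例的原始实现):

```python
# 示意:把遮挡物边缘标识点围成的多边形区域从人脸图像中抠出,多边形外像素置0。
def point_in_polygon(x, y, poly):
    """射线法判断点(x, y)是否落在多边形poly(顶点列表)内。"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # 边与水平射线相交
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def extract_occlusion(image, marker_points):
    """image: 二维灰度列表;返回只保留标识点围成区域内像素的遮挡物图像。"""
    return [[px if point_in_polygon(c, r, marker_points) else 0
             for c, px in enumerate(row)]
            for r, row in enumerate(image)]

image = [[9] * 6 for _ in range(6)]           # 假设的6x6人脸图像,像素值均为9
markers = [(1, 1), (4, 1), (4, 4), (1, 4)]    # 假设的遮挡物边缘标识点
occ = extract_occlusion(image, markers)
```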
S203,判断在人脸图像库中是否存在匹配参考人脸图像,其中,匹配参考人脸图像中添加了提取出的遮挡物图像后提取出的参考人脸特征与待识别人脸特征匹配,若判断结果为是时,进入S204,否则,进入S205;
S204,人脸识别成功;
S205,人脸识别失败。
在将遮挡物图像添加到人脸图像库中的参考人脸图像中之前,还需要对遮挡物图像进行缩放处理,该缩放需要得到遮挡物图像的缩放比例,缩放比例可以通过如下的方式得到:
通过SDM方法创建一个在人脸的关键部位上都标识了标识点的人脸图像模型,即:在人脸图像中通过标识点来标注出人脸关键部位,比如说人脸上的眉毛、眼睛、鼻子、嘴唇等关键部位都通过标识点进行标注。如图4所示为参考人脸图像中各个关键部位的标识点,在图4所示的人脸图像中,一只眼睛的标识点包括了10~17这八个标识点,另一只眼睛的标识点包括了18~25这八个标识点,鼻子上包括的标识点为26~31,其中标识点28为鼻尖的标识点,嘴上的标识点为32~42,其中32和38为嘴角的标识点。
通过SDM方法创建的人脸图像模型,就可以得到采集到的人脸图像中的人脸关键部位的标识点,在得到采集到的人脸图像中的标识点之后,确定左右眼上的标识点在人脸图像中的坐标,根据左眼上的标识点的坐标的平均值得到左眼中心点的坐标,根据右眼上的标识点的坐标的平均值得到右眼中心点的坐标,如图5所示,在图5中标识点01为左眼中心点,标识点02为右眼中心点。
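由眼睛轮廓标识点坐标取平均得到眼睛中心点的计算,可以用如下Python片段示意(八个标识点的坐标为假设示例):

```python
# 示意:眼睛中心点坐标 = 该眼所有轮廓标识点坐标的平均值(标识点坐标为假设值)。
def eye_center(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return sum(xs) / len(points), sum(ys) / len(points)

# 假设的左眼八个轮廓标识点(对应图4中标识点10~17的角色)
left_eye_pts = [(28, 12), (30, 11), (32, 12), (33, 13),
                (32, 14), (30, 15), (28, 14), (27, 13)]
cx, cy = eye_center(left_eye_pts)
# 得到左眼中心点坐标(30.0, 13.0)
```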
在确定出人脸上左右眼的中心点的坐标之后,计算左右眼中心点之间的实际距离L1(如图5所示),并且获取左眼中心点与墨镜下边缘之间的垂直距离S1。
同样地通过SDM方法,获取人脸图像库中添加了遮挡物的参考人脸图像的标识点,然后获取参考人脸图像中的左眼中心点与右眼中心点之间的参考距离L0(如图6所示),同时确定左眼中心点与遮挡物下边缘之间的垂直距离S0,通过如下的比例关系得到将遮挡物图像添加到参考人脸图像中时的缩放比例:
l = L0/L1,h = S0/S1
其中,l表征水平方向上的缩放比例,h表征垂直方向上的缩放比例。在得到缩放比例之后,该人脸识别系统在提取出采集到的人脸图像中的遮挡物图像之后,就按照上述的缩放比例对遮挡物图像进行缩放处理。
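按上文的定义(参考距离L0、实际距离L1,以及垂直距离S0、S1),可以假设缩放比例取两组距离之比,即 l=L0/L1、h=S0/S1;下面的Python片段给出一个示意实现(其中的间距数值为假设示例):

```python
# 示意:由参考图像与采集图像中的眼间距/垂直距离之比得到遮挡物图像的缩放比例。
# L0、L1、S0、S1 的数值均为假设示例,比例关系 l=L0/L1、h=S0/S1 为按上下文的推断。
def occluder_scale(L0, L1, S0, S1):
    """l = L0 / L1 为水平方向缩放比例, h = S0 / S1 为垂直方向缩放比例。"""
    return L0 / L1, S0 / S1

l, h = occluder_scale(L0=50.0, L1=40.0, S0=20.0, S1=16.0)
# 此例中遮挡物图像在水平和垂直方向均需放大1.25倍后再贴到参考人脸图像上
```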
在得到缩放处理后的遮挡物图像之后,将按照如下的两种方式来判定是否存在匹配参考人脸图像:
第一种方式:
如图7所示为第一种方式的流程图,包括:
S701,在人脸图像库中提取出一张参考人脸图像作为当前参考人脸图像,并将提取出的遮挡物图像添加至当前参考人脸图像中;
具体来讲,确定采集到的人脸图像中遮挡物所遮挡的人脸关键部位,在从人脸图像库中提取出参考人脸图像之后,将经过缩放处理的遮挡物图像添加至参考人脸图像中遮挡所述人脸关键部位的位置,即:在采集到的人脸图像中遮挡物遮挡了两只眼睛,则遮挡物图像添加至参考人脸图像中时也遮挡两只眼睛,最后通过人眼中心点与遮挡物图像下边缘之间的距离来精确的确定出遮挡物图像所放置的位置。
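将缩放后的遮挡物图像对齐到参考人脸图像中对应关键部位位置的操作,可以用如下Python片段示意(图像用二维列表表示,坐标与像素值均为假设示例,0被当作透明背景处理):

```python
# 示意:把缩放后的遮挡物图像覆盖到参考人脸图像上遮挡对应关键部位的位置。
def paste_occluder(face, occluder, top, left):
    """把occluder(二维列表)覆盖到face上,遮挡物左上角对齐到(top, left)。"""
    out = [row[:] for row in face]            # 不修改原参考人脸图像
    for r, occ_row in enumerate(occluder):
        for c, px in enumerate(occ_row):
            if px:                            # 0视为透明背景,只覆盖遮挡物本身
                out[top + r][left + c] = px
    return out

face = [[1] * 8 for _ in range(8)]            # 假设的8x8参考人脸图像
occluder = [[0, 5, 5, 0],                     # 假设的缩放后墨镜小图
            [5, 5, 5, 5]]
result = paste_occluder(face, occluder, top=2, left=2)
```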
S702,在添加了遮挡物图像的当前参考人脸图像中提取出参考人脸特征;
在该步骤中使用LBP方法,在添加了遮挡物图像的当前参考人脸图像中提取出参考人脸特征;
S703,判定提取出的参考人脸特征与待识别人脸特征是否匹配,若匹配,则进入S704;若不匹配,则进入S705;
S704,将当前参考人脸图像确定为人脸图像库中存在的匹配参考人脸图像;
S705,判断人脸图像库中是否还存在未提取出的参考人脸图像,若存在,则进入S701;若不存在,则进入S706;
S706,确定在人脸图像库中不存在匹配参考人脸图像。
简单来讲,在图7所示的方式中,先在人脸图像库中提取出一张参考人脸图像,在参考人脸图像中添加遮挡物图像,然后在添加了遮挡物图像的参考人脸图像中提取出参考人脸特征,最后将参考人脸特征与待识别人脸特征进行匹配,若是此次匹配成功,则得到识别结果,并结束识别流程;若是此次匹配失败,则再次提取出一张参考人脸图像继续执行匹配操作,并按此循环执行。
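图7所示的逐张匹配流程可以用如下Python片段示意(其中的图像、特征提取与匹配函数均为假设的占位实现,仅用于演示控制流程,不代表真实的特征提取与匹配算法):

```python
# 示意:方式一的控制流程——逐张取参考人脸图像,先添加遮挡物图像,再提特征、做匹配。
def match_one_by_one(gallery, occluder, extract_feature, probe_feature, matches):
    for ref in gallery:
        composited = ref + occluder          # 占位:把遮挡物图像"添加"到参考人脸图像
        ref_feature = extract_feature(composited)
        if matches(ref_feature, probe_feature):
            return ref                       # 找到匹配参考人脸图像,人脸识别成功
    return None                              # 人脸图像库中不存在匹配参考人脸图像

gallery = ["faceA", "faceB", "faceC"]        # 假设的人脸图像库
result = match_one_by_one(gallery, "+墨镜", lambda img: img,
                          "faceB+墨镜", lambda a, b: a == b)
# result 为 "faceB",表示识别成功
```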
第二种方式:
如图8所示为第二种方式的流程图,包括:
S801,将提取出的所述遮挡物图像分别添加到所述人脸图像库中的各个参考人脸图像中;
具体来讲,在得到遮挡物图像之后,将该遮挡物图像添加到人脸图像库中的每一张参考人脸图像中,具体添加遮挡物的方法与第一种方式中的方法相同。
S802,提取出一张添加了遮挡物图像的参考人脸图像作为当前参考人脸图像;
S803,在提取出的当前参考人脸图像中提取出参考人脸特征;
此处也是使用LBP方法提取出当前参考人脸图像中的参考人脸特征。
S804,判定提取出的参考人脸特征与待识别人脸特征是否匹配,若匹配,则进入到S805,若不匹配,则执行S806。
S805,将当前参考人脸图像确定为人脸图像库中存在的匹配参考人脸图像;
S806,判断人脸图像库中是否存在还未提取的参考人脸图像,若存在,则返回执行S802,若不存在,则执行S807;
S807,确定在人脸图像库中不存在匹配参考人脸图像。
在第二种方式中是将遮挡物图像添加到人脸图像库中的每一张参考人脸图像中,然后在添加了遮挡物图像的参考人脸图像中逐张提取出参考人脸特征与采集到的人脸图像中的人脸特征进行匹配,若是找到匹配的参考人脸图像,则人脸识别流程结束,并得到识别结果;若是人脸图像库中的添加了遮挡物的参考人脸图像都与采集到的人脸图像不匹配时,则此次人脸识别失败。
在本发明实施例中在采集到的人脸图像中存在遮挡物时,人脸识别系统可以将遮挡物从人脸图像中提取出来,并将遮挡物添加到人脸图像库中的参考人脸图像中,然后提取出采集到的人脸图像的人脸特征,以及添加了遮挡物图像的参考人脸图像的参考人脸特征,最后将采集到的人脸图像中的人脸特征与参考人脸特征进行匹配识别,最终得到人脸识别结果,这样就实现了
对存在遮挡物的人脸图像的识别,提升了人脸识别系统对人脸识别的准确度。
进一步,在本发明实施例中,为了提高对存在遮挡物的人脸图像的识别精确度,在本发明实施例中对采集到的人脸图像进行标识点检测之后,对采集到的人脸图像进行归一化处理,对人脸图像进行归一化处理的目的就是将采集到的人脸图像中的关键部位与人脸图像库中的参考人脸图像中的关键部位对齐。简单来讲,就是将采集到的人脸图像中的眼睛、鼻子、嘴巴等关键部位分别与参考人脸图像中的眼睛、鼻子、嘴巴等关键部位对齐,这样可以使图像匹配更加准确。
要实现对采集到的人脸图像的归一化处理,就需要得到人脸图像中人脸关键部位的标识点以及仿射矩阵R:
在本发明实施例中仿射矩阵R可以通过如下的方式得到:
首先在参考人脸图像中得到5个能够较为准确地表征人脸上关键部位的标识点,具体可以通过SDM方法得到,这5个标识点分别为:两个眼睛的中心的标识点、鼻尖的标识点、两个嘴角的标识点,如图4所示,左眼的中心的标识点可以通过标识点10~17的图像坐标平均值得到,右眼的中心的标识点可以通过标识点18~25的图像坐标的平均值得到,鼻尖的标识点为28、两个嘴角的标识点为32和38。将参考人脸图像的左下角作为坐标原点,这五个标识点在参考人脸图像中的坐标确定为(x′i,y′i),其中,i=1,2,…,5。
在采集到的人脸图像中提取出能够较为准确地表征人脸上关键部位的标识点,也就是两个眼睛的中心的标识点、鼻尖的标识点、以及两个嘴角的标识点,这五个标识点也可以通过上述的SDM方法得到,将采集到的人脸图像的左下角作为坐标原点,这五个标识点在采集到的人脸图像中的坐标确定为(xi,yi),其中,i=1,2,…,5。
在得到参考人脸图像中的五个标识点对应的坐标以及采集到的人脸图像中的五个标识点的坐标之后,根据关系式(1)得到参考人脸图像中的五个标识点的图像坐标与采集到的人脸图像中的五个标识点的图像坐标之间的对应关系:
(x′i, y′i)ᵀ = R·(xi, yi, 1)ᵀ,R = [r11 r12 r13; r21 r22 r23]  (1)
其中,r11、r12、r13、r21、r22、r23为仿射矩阵R中的6个参数,通过公式(1)变换得到公式(2)的表达式:
x′i = r11·xi + r12·yi + r13,y′i = r21·xi + r22·yi + r23  (2)
由于在公式(2)中的xi、yi以及x′i、y′i都是已知的量,所以通过求解公式(2)就能够得到仿射矩阵R中的所有参数r11、r12、r13、r21、r22、r23,进而得到最终的仿射矩阵R。
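在已知五组对应标识点坐标时,仿射矩阵R的6个参数可以按最小二乘(正规方程)求解。下面的Python片段给出一个纯Python的示意实现,其中的对应点坐标与"真实变换 x′=2x+3, y′=2y+5"均为假设示例,并非本实施例的原始求解方法:

```python
# 示意:由src(采集图像标识点)与dst(参考图像标识点)最小二乘求解仿射参数。
def solve3(A, b):
    """高斯-约当消元求解3x3线性方程组 A·p = b。"""
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))  # 部分选主元
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_affine(src, dst):
    """返回(r11, r12, r13, r21, r22, r23):对x′与y′分别解正规方程 AᵀA·p = Aᵀb。"""
    rows = [[x, y, 1.0] for x, y in src]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    def solve_for(vals):
        Atb = [sum(r[i] * v for r, v in zip(rows, vals)) for i in range(3)]
        return solve3(AtA, Atb)
    px = solve_for([x for x, _ in dst])
    py = solve_for([y for _, y in dst])
    return tuple(px + py)

src = [(30, 13), (30, 17), (15, 19), (13, 10), (16, 10)]   # 假设的采集图像标识点
dst = [(2 * x + 3, 2 * y + 5) for x, y in src]             # 假设真实变换 x′=2x+3, y′=2y+5
params = fit_affine(src, dst)
# params 近似为 (2, 0, 3, 0, 2, 5)
```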
在得到仿射矩阵R之后,就可以通过仿射矩阵对采集到的人脸图像进行图像归一化的处理,这里需要说明的是归一化处理就是将采集到的人脸图像调整为与参考人脸图像基本重叠的人脸图像。下面通过具体的例子来说明归一化的处理效果:
例如:参考人脸图像的尺寸为100×100,将参考人脸图像的左下角作为直角坐标系的原点,则参考人脸图像中的每个标识点都可以通过坐标来表征,参考人脸图像中的左眼中心点坐标(60,40),右眼中心点坐标(60,50),鼻尖标识点的坐标为(40,45),标识两个嘴角的标识点分别为(20,43)、(20,47),这五个标识点的坐标就作为标准。
在采集到一张人脸图像之后,确定采集到的人脸图像的尺寸为40×40,然后对采集到的人脸图像进行标识点检测,从而得到采集到的人脸图像的五个标识点对应在采集到的人脸图像中的坐标,比如说该五个标识点的坐标为:
左眼中心点坐标(30,13),右眼中心点坐标(30,17),鼻尖标识点的坐标为(15,19),两个嘴角标识点的坐标分别为(13,10)、(16,10)。
在获取采集到的人脸图像中的五个标识点之后,将采集到的人脸图像进行归一化处理,也就是将采集到的人脸图像的尺寸调整为100×100,并且将左眼中心点的坐标从(30,13)调整到(60,40)、右眼中心点坐标从(30,17)调整到(60,50)、鼻尖标识点的坐标从(15,19)调整为(40,45)、两个嘴角的标识点的坐标分别从(13,10)调整到(20,43)、从(16,10)调整到(20,47),这样采集到的人脸图像与参考人脸图像能够基本重合,保证了特征匹配时的准确性。
当然,若是人脸图像不是图像库中的参考人脸图像,则在进行人脸识别的过程中,也需要对人脸图像库中保存的参考人脸图像进行归一化处理,这样才能够保证采集到的人脸图像与人脸图像库中的参考人脸图像基本重叠。
在对采集到的人脸图像进行归一化处理之后,再执行步骤S202以及后续的步骤,即:在归一化处理后的人脸图像中获取遮挡物图像。由于经过归一化处理的人脸图像和参考人脸图像基本重叠,因此遮挡物图像被提取出来之后能够被准确的添加到参考人脸图像中的对应位置上。
在人脸识别系统检测到采集到的人脸图像中存在遮挡物时,人脸识别系统将调取人脸图像库中保存的存在遮挡物的参考人脸图像,然后将采集到的人脸图像与存在遮挡物的参考人脸图像进行匹配,最后得到识别结果,通过此种方式进行人脸识别时,实现了对存在遮挡物的人脸图像进行准确识别,提升了人脸识别系统的人脸识别性能。
本发明实施例中还提供了一种人脸识别系统,如图9所示,该人脸识别系统,包括:
检测模块901,用于在采集到的人脸图像中提取出待识别人脸特征,并根据待识别人脸特征,检测采集到的人脸图像中是否存在遮挡人脸的遮挡物;
特征提取模块902,用于在采集到的人脸图像中存在遮挡物时,在存在遮挡物的人脸图像中提取出遮挡物图像;
处理模块903,用于判断在人脸图像库中是否存在匹配参考人脸图像,在判断结果为是时,则人脸识别成功,否则人脸识别失败;其中在匹配参考人脸图像中添加了提取出的遮挡物图像后,在添加了所述遮挡物图像的匹配参考人脸图像中提取出的参考人脸特征与待识别人脸特征匹配。
进一步,检测模块901包括:
调取单元,用于调取保存有人脸特征与遮挡物类别之间的对应关系的数据库;
匹配单元,用于将待识别人脸特征与所述数据库中的人脸特征进行匹配,根据匹配结果确定采集到的人脸图像中是否存在遮挡人脸的遮挡物。
进一步,特征提取模块902包括:
图像处理单元,用于对采集到的人脸图像进行归一化处理,以使采集到的人脸图像中人脸关键部位与人脸图像库中参考人脸图像中的人脸关键部位重叠;
提取单元,用于在经过归一化处理后的所述人脸图像中提取出所述遮挡物图像。
进一步,在特征提取模块902中的提取单元包括:
标识点获取模块,用于获取经过归一化处理后的人脸图像中用以标识遮挡物的边缘的标识点;
图像获取模块,用于将获取的标识点所围成的区域确定为遮挡物图像,并提取确定出的遮挡物图像。
进一步,处理模块903,具体用于在人脸图像库中提取出一张参考人脸图像作为当前参考人脸图像,并将提取出的遮挡物图像添加至当前参考人脸图像中,在添加了遮挡物图像的当前参考人脸图像中提取出参考人脸特征,判定提取出的所述参考人脸特征与待识别人脸特征是否匹配,若匹配,则将当前参考人脸图像确定为人脸图像库中存在的匹配参考人脸图像;若不匹配,判断人脸图像库中是否还存在未提取出的参考人脸图像,若存在,则提取下一张参考人脸图像作为当前参考人脸图像,并返回继续执行在人脸图像库中
提取出一张参考人脸图像作为当前参考人脸图像,并将提取出的遮挡物图像添加至当前参考人脸图像中的操作;若不存在,则确定在人脸图像库中不存在匹配参考人脸图像。
进一步,处理模块903,具体用于将提取出的遮挡物图像分别添加到人脸图像库中的各个参考人脸图像中,提取出一张添加了遮挡物图像的参考人脸图像作为当前参考人脸图像,在提取出的当前参考人脸图像中提取出参考人脸特征,判定提取出的参考人脸特征与待识别人脸特征是否匹配,若匹配,则将当前参考人脸图像确定为人脸图像库中存在的匹配参考人脸图像;若不匹配,判断人脸图像库中是否存在还未提取的参考人脸图像,若存在,则返回继续执行提取出一张添加了遮挡物图像的参考人脸图像作为当前参考人脸图像的操作;若不存在,则确定在人脸图像库中不存在匹配参考人脸图像。
进一步,处理模块903,还用于确定遮挡物图像在采集到的人脸图像中所遮挡的人脸关键部位,对遮挡物图像按照预设缩放比例进行缩放处理,将经过缩放处理后的遮挡物图像添加至参考人脸图像中遮挡人脸关键部位的位置。
进一步,处理模块903,具体用于获取参考人脸图像中两个眼睛中心点之间的参考间距,获取采集到的人脸图像中两个眼睛中心点之间的实际间距,将参考间距与实际间距之间的比例确定为预设缩放比例。
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设
备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管已描述了本发明的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例作出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本发明范围的所有变更和修改。
显然,本领域的技术人员可以对本发明进行各种改动和变型而不脱离本发明的精神和范围。这样,倘若本发明的这些修改和变型属于本发明权利要求及其等同技术的范围之内,则本发明也意图包含这些改动和变型在内。
Claims (16)
- 一种人脸识别方法,其特征在于,包括:在采集到的人脸图像中提取出待识别人脸特征,并根据所述待识别人脸特征,检测采集到的人脸图像中是否存在遮挡人脸的遮挡物;在采集到的人脸图像中存在遮挡物时,在存在遮挡物的所述人脸图像中提取出遮挡物图像;判断在人脸图像库中是否存在匹配参考人脸图像,其中在所述匹配参考人脸图像中添加了提取出的遮挡物图像后,在添加了所述遮挡物图像的匹配参考人脸图像中提取出的参考人脸特征与所述待识别人脸特征匹配;在判断结果为是时,则人脸识别成功;否则人脸识别失败。
- 如权利要求1所述的方法,其特征在于,根据所述待识别人脸特征,检测采集到的人脸图像中是否存在遮挡人脸的遮挡物,包括:调取保存有人脸特征与遮挡物类别之间的对应关系的数据库;将所述待识别人脸特征与所述数据库中的人脸特征进行匹配,根据匹配结果确定采集到的人脸图像中是否存在遮挡人脸的遮挡物。
- 如权利要求1所述的方法,其特征在于,在存在遮挡物的所述人脸图像中提取出遮挡物图像,包括:对采集到的人脸图像进行归一化处理,以使采集到的人脸图像中人脸关键部位与人脸图像库中参考人脸图像中的人脸关键部位重叠;在经过归一化处理后的所述人脸图像中提取出所述遮挡物图像。
- 如权利要求3所述的方法,其特征在于,在经过归一化处理后的所述人脸图像中提取出所述遮挡物图像,包括:获取经过归一化处理后的所述人脸图像中用以标识所述遮挡物的边缘的标识点;将获取的标识点所围成的区域确定为所述遮挡物图像,并提取确定出的所述遮挡物图像。
- 如权利要求1所述的方法,其特征在于,判断在人脸图像库中是否存在匹配参考人脸图像,包括:在所述人脸图像库中提取出一张参考人脸图像作为当前参考人脸图像,并将提取出的所述遮挡物图像添加至所述当前参考人脸图像中;在添加了遮挡物图像的当前参考人脸图像中提取出参考人脸特征;判定提取出的所述参考人脸特征与所述待识别人脸特征是否匹配;若匹配,则将当前参考人脸图像确定为人脸图像库中存在的匹配参考人脸图像;若不匹配,判断所述人脸图像库中是否还存在未提取出的参考人脸图像;若存在,则提取下一张参考人脸图像作为当前参考人脸图像,并返回继续执行在所述人脸图像库中提取出一张参考人脸图像作为当前参考人脸图像,并将提取出的所述遮挡物图像添加至当前参考人脸图像中的操作;若不存在,则确定在人脸图像库中不存在匹配参考人脸图像。
- 如权利要求1所述的方法,其特征在于,判断在人脸图像库中是否存在匹配参考人脸图像,包括:将提取出的所述遮挡物图像分别添加到所述人脸图像库中的各个参考人脸图像中;提取出一张添加了遮挡物图像的参考人脸图像作为当前参考人脸图像;在提取出的所述当前参考人脸图像中提取出参考人脸特征;判定提取出的所述参考人脸特征与所述待识别人脸特征是否匹配;若匹配,则将当前参考人脸图像确定为人脸图像库中存在的匹配参考人脸图像;若不匹配,判断所述人脸图像库中是否存在还未提取的参考人脸图像,若存在,则返回继续执行提取出一张添加了遮挡物图像的参考人脸图像作为当前参考人脸图像的操作;若不存在,则确定在所述人脸图像库中不存在匹配参考人脸图像。
- 如权利要求5或6所述的方法,其特征在于,将提取出的遮挡物图像添加到参考人脸图像中,包括:确定所述遮挡物图像在采集到的人脸图像中所遮挡的人脸关键部位;对所述遮挡物图像按照预设缩放比例进行缩放处理;将经过缩放处理后的遮挡物图像添加至所述参考人脸图像中遮挡所述人脸关键部位的位置。
- 如权利要求7所述的方法,其特征在于,对所述遮挡物图像按照预设缩放比例进行缩放处理,包括:获取所述参考人脸图像中两个眼睛中心点之间的参考间距;获取采集到的人脸图像中两个眼睛中心点之间的实际间距;将所述参考间距与所述实际间距之间的比例确定为所述预设缩放比例。
- 一种人脸识别系统,其特征在于,包括:检测模块,用于在采集到的人脸图像中提取出待识别人脸特征,并根据所述待识别人脸特征,检测采集到的人脸图像中是否存在遮挡人脸的遮挡物;特征提取模块,用于在采集到的人脸图像中存在遮挡物时,在存在遮挡物的所述人脸图像中提取出遮挡物图像;处理模块,用于判断在人脸图像库中是否存在匹配参考人脸图像,在判断结果为是时,则人脸识别成功,否则人脸识别失败;其中在所述匹配参考人脸图像中添加了提取出的遮挡物图像后,在添加了所述遮挡物图像的匹配参考人脸图像中提取出的参考人脸特征与所述待识别人脸特征匹配。
- 如权利要求9所述的系统,其特征在于,检测模块包括:调取单元,用于调取保存有人脸特征与遮挡物类别之间的对应关系的数据库;匹配单元,用于将所述待识别人脸特征与所述数据库中的人脸特征进行匹配,根据匹配结果确定采集到的人脸图像中是否存在遮挡人脸的遮挡物。
- 如权利要求9所述的系统,其特征在于,所述特征提取模块,包括:图像处理单元,用于对采集到的人脸图像进行归一化处理,以使采集到的人脸图像中人脸关键部位与人脸图像库中参考人脸图像中的人脸关键部位重叠;提取单元,用于在经过归一化处理后的所述人脸图像中提取出所述遮挡物图像。
- 如权利要求11所述的系统,其特征在于,所述提取单元,包括:标识点获取模块,用于获取经过归一化处理后的所述人脸图像中用以标识所述遮挡物的边缘的标识点;图像获取模块,用于将获取的标识点所围成的区域确定为所述遮挡物图像,并提取确定出的所述遮挡物图像。
- 如权利要求9所述的系统,其特征在于,所述处理模块,具体用于在所述人脸图像库中提取出一张参考人脸图像作为当前参考人脸图像,并将提取出的所述遮挡物图像添加至所述当前参考人脸图像中,在添加了遮挡物图像的当前参考人脸图像中提取出参考人脸特征,判定提取出的所述参考人脸特征与所述待识别人脸特征是否匹配,若匹配,则将当前参考人脸图像确定为人脸图像库中存在的匹配参考人脸图像;若不匹配,判断所述人脸图像库中是否还存在未提取出的参考人脸图像,若存在,则提取下一张参考人脸图像作为当前参考人脸图像,并返回继续执行在所述人脸图像库中提取出一张参考人脸图像作为当前参考人脸图像,并将提取出的所述遮挡物图像添加至当前参考人脸图像中的操作;若不存在,则确定在人脸图像库中不存在匹配参考人脸图像。
- 如权利要求9所述的系统,其特征在于,所述处理模块,具体用于将提取出的所述遮挡物图像分别添加到所述人脸图像库中的各个参考人脸图像中,提取出一张添加了遮挡物图像的参考人脸图像作为当前参考人脸图像,在提取出的所述当前参考人脸图像中提取出参考人脸特征,判定提取出的所述参考人脸特征与所述待识别人脸特征是否匹配,若匹配,则将当前参考人脸图像确定为人脸图像库中存在的匹配参考人脸图像;若不匹配,判断所述 人脸图像库中是否存在还未提取的参考人脸图像,若存在,则返回继续执行提取出一张添加了遮挡物图像的参考人脸图像作为当前参考人脸图像的操作;若不存在,则确定在所述人脸图像库中不存在匹配参考人脸图像。
- 如权利要求13或14所述的系统,其特征在于,所述处理模块,还用于确定所述遮挡物图像在采集到的人脸图像中所遮挡的人脸关键部位,对所述遮挡物图像按照预设缩放比例进行缩放处理,将经过缩放处理后的遮挡物图像添加至所述参考人脸图像中遮挡所述人脸关键部位的位置。
- 如权利要求15所述的系统,其特征在于,所述处理模块,具体用于获取所述参考人脸图像中两个眼睛中心点之间的参考间距,获取采集到的人脸图像中两个眼睛中心点之间的实际间距,将所述参考间距与所述实际间距之间的比例确定为所述预设缩放比例。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410180720.0A CN105095829B (zh) | 2014-04-29 | 2014-04-29 | 一种人脸识别方法及系统 |
CN201410180720.0 | 2014-04-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015165365A1 true WO2015165365A1 (zh) | 2015-11-05 |
Family
ID=54358168
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/077373 WO2015165365A1 (zh) | 2014-04-29 | 2015-04-24 | 一种人脸识别方法及系统 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105095829B (zh) |
WO (1) | WO2015165365A1 (zh) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111259698A (zh) * | 2018-11-30 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | 用于获取图像的方法及装置 |
CN111428581A (zh) * | 2020-03-05 | 2020-07-17 | 平安科技(深圳)有限公司 | 人脸遮挡检测方法及系统 |
CN111444862A (zh) * | 2020-03-30 | 2020-07-24 | 深圳信可通讯技术有限公司 | 一种人脸识别的方法及装置 |
CN111461047A (zh) * | 2020-04-10 | 2020-07-28 | 北京爱笔科技有限公司 | 身份识别的方法、装置、设备及计算机存储介质 |
CN111783600A (zh) * | 2020-06-24 | 2020-10-16 | 北京百度网讯科技有限公司 | 一种人脸识别模型训练方法、装置、设备及介质 |
CN111898561A (zh) * | 2020-08-04 | 2020-11-06 | 腾讯科技(深圳)有限公司 | 一种人脸认证方法、装置、设备及介质 |
CN111931628A (zh) * | 2020-08-04 | 2020-11-13 | 腾讯科技(深圳)有限公司 | 人脸识别模型的训练方法、装置及相关设备 |
CN112507989A (zh) * | 2021-02-04 | 2021-03-16 | 北京远鉴信息技术有限公司 | 一种面部识别方法、装置、电子设备及可读存储介质 |
CN113705466A (zh) * | 2021-08-30 | 2021-11-26 | 浙江中正智能科技有限公司 | 用于遮挡场景、尤其高仿遮挡下的人脸五官遮挡检测方法 |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106991360B (zh) * | 2016-01-20 | 2019-05-07 | 腾讯科技(深圳)有限公司 | 人脸识别方法及人脸识别系统 |
CN107016370B (zh) * | 2017-04-10 | 2019-10-11 | 电子科技大学 | 一种基于数据增强的部分遮挡人脸识别方法 |
CN107169447A (zh) * | 2017-05-12 | 2017-09-15 | 贵州中信云联科技有限公司 | 基于人脸识别的医院自助系统 |
CN107622232B (zh) | 2017-09-08 | 2020-01-14 | Oppo广东移动通信有限公司 | 解锁控制方法及相关产品 |
CN107657166A (zh) * | 2017-09-27 | 2018-02-02 | 广东欧珀移动通信有限公司 | 解锁控制方法及相关产品 |
CN107862270B (zh) * | 2017-10-31 | 2020-07-21 | 深圳云天励飞技术有限公司 | 人脸分类器训练方法、人脸检测方法及装置、电子设备 |
CN108805046B (zh) | 2018-05-25 | 2022-11-04 | 京东方科技集团股份有限公司 | 用于面部匹配的方法、设备、装置和存储介质 |
CN109241852B (zh) * | 2018-08-10 | 2021-01-12 | 广州杰赛科技股份有限公司 | 具有附加特征的人脸识别方法、装置、计算机设备 |
US11275819B2 (en) | 2018-12-05 | 2022-03-15 | Bank Of America Corporation | Generative adversarial network training and feature extraction for biometric authentication |
CN113494964B (zh) * | 2020-03-20 | 2022-09-20 | 深圳云天励飞技术有限公司 | 一种人员测温方法、装置、电子设备及存储介质 |
CN111325191A (zh) * | 2020-04-07 | 2020-06-23 | 上海极链网络科技有限公司 | 一种人脸识别方法及系统 |
CN111310743B (zh) * | 2020-05-11 | 2020-08-25 | 腾讯科技(深圳)有限公司 | 人脸识别方法、装置、电子设备及可读存储介质 |
CN111626213A (zh) * | 2020-05-27 | 2020-09-04 | 北京嘀嘀无限科技发展有限公司 | 一种身份验证方法、装置、电子设备及可读存储介质 |
CN111914629B (zh) * | 2020-06-19 | 2024-06-11 | 北京百度网讯科技有限公司 | 为人脸识别生成训练数据的方法、装置、设备和存储介质 |
CN111914630A (zh) * | 2020-06-19 | 2020-11-10 | 北京百度网讯科技有限公司 | 为人脸识别生成训练数据的方法、装置、设备和存储介质 |
CN111768543A (zh) * | 2020-06-29 | 2020-10-13 | 杭州翔毅科技有限公司 | 基于人脸识别的通行管理方法、设备、存储介质及装置 |
CN112115886A (zh) * | 2020-09-22 | 2020-12-22 | 北京市商汤科技开发有限公司 | 图像检测方法和相关装置、设备、存储介质 |
CN112215113A (zh) * | 2020-09-30 | 2021-01-12 | 张成林 | 人脸识别方法及装置 |
CN112633221B (zh) * | 2020-12-30 | 2024-08-09 | 深圳市捷顺科技实业股份有限公司 | 一种人脸方向的检测方法及相关装置 |
JPWO2022201247A1 (zh) * | 2021-03-22 | 2022-09-29 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102034079A (zh) * | 2009-09-24 | 2011-04-27 | 汉王科技股份有限公司 | 眼镜遮挡下的人脸识别方法和系统 |
CN103020579A (zh) * | 2011-09-22 | 2013-04-03 | 上海银晨智能识别科技有限公司 | 人脸识别方法及系统、人脸图像的眼镜框去除方法与装置 |
CN103093210A (zh) * | 2013-01-24 | 2013-05-08 | 北京天诚盛业科技有限公司 | 人脸识别中眼镜的鉴别方法及装置 |
JP5359266B2 (ja) * | 2008-12-26 | 2013-12-04 | 富士通株式会社 | 顔認識装置、顔認識方法及び顔認識プログラム |
KR101373274B1 (ko) * | 2012-11-08 | 2014-03-11 | 주식회사 크라스아이디 | 안경 제거를 통한 얼굴 인식 방법 및 상기 안경 제거를 통한 얼굴 인식 방법을 이용한 얼굴 인식 장치 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101066097B1 (ko) * | 2010-04-05 | 2011-09-20 | 주식회사 미래인식 | 안경 착용자에 대한 오인식을 최소화한 얼굴 인식 장치 및 이를 이용한 안경 착용자에 대한 얼굴 오인식 최소화 방법 |
CN102024145B (zh) * | 2010-12-01 | 2012-11-21 | 五邑大学 | 一种伪装人脸分层识别方法及系统 |
CN102306304B (zh) * | 2011-03-25 | 2017-02-08 | 上海星尘电子科技有限公司 | 人脸遮挡物识别方法及其装置 |
CN102163289B (zh) * | 2011-04-06 | 2016-08-24 | 北京中星微电子有限公司 | 人脸图像中眼镜的去除方法和装置、佩戴方法和装置 |
CN103679159B (zh) * | 2013-12-31 | 2017-10-17 | 海信集团有限公司 | 人脸识别方法 |
- 2014-04-29: CN application CN201410180720.0A filed; patent/CN105095829B/zh, status Active
- 2015-04-24: PCT application PCT/CN2015/077373 filed; patent/WO2015165365A1/zh, Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5359266B2 (ja) * | 2008-12-26 | 2013-12-04 | 富士通株式会社 | 顔認識装置、顔認識方法及び顔認識プログラム |
CN102034079A (zh) * | 2009-09-24 | 2011-04-27 | 汉王科技股份有限公司 | 眼镜遮挡下的人脸识别方法和系统 |
CN103020579A (zh) * | 2011-09-22 | 2013-04-03 | 上海银晨智能识别科技有限公司 | 人脸识别方法及系统、人脸图像的眼镜框去除方法与装置 |
KR101373274B1 (ko) * | 2012-11-08 | 2014-03-11 | 주식회사 크라스아이디 | 안경 제거를 통한 얼굴 인식 방법 및 상기 안경 제거를 통한 얼굴 인식 방법을 이용한 얼굴 인식 장치 |
CN103093210A (zh) * | 2013-01-24 | 2013-05-08 | 北京天诚盛业科技有限公司 | 人脸识别中眼镜的鉴别方法及装置 |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111259698B (zh) * | 2018-11-30 | 2023-10-13 | 百度在线网络技术(北京)有限公司 | 用于获取图像的方法及装置 |
CN111259698A (zh) * | 2018-11-30 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | 用于获取图像的方法及装置 |
CN111428581A (zh) * | 2020-03-05 | 2020-07-17 | 平安科技(深圳)有限公司 | 人脸遮挡检测方法及系统 |
CN111428581B (zh) * | 2020-03-05 | 2023-11-21 | 平安科技(深圳)有限公司 | 人脸遮挡检测方法及系统 |
CN111444862A (zh) * | 2020-03-30 | 2020-07-24 | 深圳信可通讯技术有限公司 | 一种人脸识别的方法及装置 |
CN111461047A (zh) * | 2020-04-10 | 2020-07-28 | 北京爱笔科技有限公司 | 身份识别的方法、装置、设备及计算机存储介质 |
CN111783600A (zh) * | 2020-06-24 | 2020-10-16 | 北京百度网讯科技有限公司 | 一种人脸识别模型训练方法、装置、设备及介质 |
CN111783600B (zh) * | 2020-06-24 | 2024-02-09 | 北京百度网讯科技有限公司 | 一种人脸识别模型训练方法、装置、设备及介质 |
CN111931628A (zh) * | 2020-08-04 | 2020-11-13 | 腾讯科技(深圳)有限公司 | 人脸识别模型的训练方法、装置及相关设备 |
CN111898561A (zh) * | 2020-08-04 | 2020-11-06 | 腾讯科技(深圳)有限公司 | 一种人脸认证方法、装置、设备及介质 |
CN111931628B (zh) * | 2020-08-04 | 2023-10-24 | 腾讯科技(深圳)有限公司 | 人脸识别模型的训练方法、装置及相关设备 |
CN112507989A (zh) * | 2021-02-04 | 2021-03-16 | 北京远鉴信息技术有限公司 | 一种面部识别方法、装置、电子设备及可读存储介质 |
CN113705466A (zh) * | 2021-08-30 | 2021-11-26 | 浙江中正智能科技有限公司 | 用于遮挡场景、尤其高仿遮挡下的人脸五官遮挡检测方法 |
CN113705466B (zh) * | 2021-08-30 | 2024-02-09 | 浙江中正智能科技有限公司 | 用于遮挡场景、尤其高仿遮挡下的人脸五官遮挡检测方法 |
Also Published As
Publication number | Publication date |
---|---|
CN105095829A (zh) | 2015-11-25 |
CN105095829B (zh) | 2019-02-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015165365A1 (zh) | 一种人脸识别方法及系统 | |
CN107644204B (zh) | 一种用于安防系统的人体识别与跟踪方法 | |
WO2021047232A1 (zh) | 交互行为识别方法、装置、计算机设备和存储介质 | |
US11107225B2 (en) | Object recognition device and computer readable storage medium | |
CN109145742B (zh) | 一种行人识别方法及系统 | |
JP4241763B2 (ja) | 人物認識装置及びその方法 | |
CN102375970B (zh) | 一种基于人脸的身份认证方法和认证装置 | |
KR102357340B1 (ko) | 얼굴 인식 방법 및 장치 | |
WO2019033572A1 (zh) | 人脸遮挡检测方法、装置及存储介质 | |
JP6815707B2 (ja) | 顔姿勢検出方法、装置及び記憶媒体 | |
WO2019061658A1 (zh) | 眼镜定位方法、装置及存储介质 | |
JP5959093B2 (ja) | 人物検索システム | |
CN110163096B (zh) | 人物识别方法、装置、电子设备和计算机可读介质 | |
US20180075291A1 (en) | Biometrics authentication based on a normalized image of an object | |
JP6351243B2 (ja) | 画像処理装置、画像処理方法 | |
CN108171138B (zh) | 一种生物特征信息获取方法和装置 | |
US11062126B1 (en) | Human face detection method | |
US10496874B2 (en) | Facial detection device, facial detection system provided with same, and facial detection method | |
JP6410450B2 (ja) | オブジェクト識別装置、オブジェクト識別方法及びプログラム | |
TWI526953B (zh) | 人臉辨識方法及系統 | |
JP2017174343A (ja) | 入店者属性抽出装置及び入店者属性抽出プログラム | |
CN109558773B (zh) | 信息识别方法、装置及电子设备 | |
US20230116514A1 (en) | Authentication control device, authentication system, authentication control method and non-transitory computer readable medium | |
CN113989914B (zh) | 一种基于人脸识别的安防监控方法及系统 | |
KR20140138486A (ko) | 성별 인식 장치 및 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15786661 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15786661 Country of ref document: EP Kind code of ref document: A1 |