WO2019056988A1 - Face recognition method and apparatus, and computer device - Google Patents
Face recognition method and apparatus, and computer device
- Publication number: WO2019056988A1
- Application: PCT/CN2018/105707
- Authority: WIPO (PCT)
- Prior art keywords: face, target, feature, image, facial
Classifications
- G06V40/166: Human faces; detection, localisation, normalisation using acquisition arrangements
- G06F18/00: Pattern recognition
- G06T7/593: Depth or shape recovery from multiple images; from stereo images
- G06V40/171: Human faces; local features and components, facial parts, occluding parts (e.g. glasses), geometrical relationships
- G06V40/45: Spoof detection; detection of the body part being alive
- G06T2207/10012: Image acquisition modality; stereo images
- G06T2207/30201: Subject of image; human face
Description
- The present application claims priority to Chinese Patent Application No. 201710872594.9, entitled "Face recognition method and apparatus, and computer device", filed on September 25, 2017, the entire contents of which are incorporated herein by reference.
- The present application relates to the field of computer vision technology, and in particular, to a face recognition method and apparatus, and a computer device.
- Face recognition technology is a technique in which a computer extracts the features of a face and performs identity authentication based on those features. Like the other biometric characteristics of the human body (such as fingerprints and/or irises), a face is innate, and its uniqueness and resistance to replication provide the necessary conditions for identity verification. Compared with other biometric technologies, face recognition is simple to operate and its results are intuitive, so it has broad application prospects in fields such as information security, criminal investigation, access control, and attendance.
- In the related art, to prevent high-quality photos or videos from being used in place of a real face, a non-contact temperature sensing device is generally used to detect whether the temperature in the face monitoring area is a normal human body temperature; when it is, the face detected in the face monitoring area is determined to be a real face, and further recognition is then performed based on the image of that face. However, this requires an additional non-contact temperature sensing device, making the scheme complicated and costly.
- The embodiments of the present application provide a face recognition method and apparatus, and a computer device, which can solve the problem in the related art that determining whether a detected face is a real face requires a complicated and costly scheme. The technical solution is as follows:
- In a first aspect, a face recognition method is provided, the method comprising: capturing a target face by a stereo camera assembly to obtain m face images of the target face, m ≥ 2; determining, based on the m face images, the depths of n facial feature points of the target face, n ≥ 2; determining, based on the depths of the n facial feature points, whether the target face is a three-dimensional face; and when the target face is a three-dimensional face, determining that the m face images are real face images.
- Optionally, capturing the target face by the stereo camera assembly to obtain the m face images of the target face includes: capturing the target face by a binocular camera to obtain two face images taken at the same time; and determining, based on the m face images, the depths of the n facial feature points of the target face includes: calculating, based on the two face images, the depths of the n facial feature points using binocular stereo vision technology.
- Optionally, calculating the depths of the n facial feature points based on the two face images using binocular stereo vision technology includes: determining the position of a first facial feature point in the two face images, the first facial feature point being any one of the n facial feature points; calculating the three-dimensional coordinates of the first facial feature point according to its positions in the two face images in combination with the camera parameters; and determining the depth of the first facial feature point based on the three-dimensional coordinates.
- Optionally, the binocular camera comprises a visible light camera and a near-infrared camera, and the two face images comprise a visible light image captured by the visible light camera and a near-infrared image captured by the near-infrared camera.
- Optionally, determining whether the target face is a three-dimensional face based on the depths of the n facial feature points comprises: calculating a stereoscopic score of the target face based on the depths of the n facial feature points; comparing the stereoscopic score of the target face with a preset stereoscopic score; when the stereoscopic score of the target face is greater than or equal to the preset stereoscopic score, determining that the target face is a three-dimensional face; and when the stereoscopic score of the target face is less than the preset stereoscopic score, determining that the target face is not a three-dimensional face.
- Optionally, the method further includes: determining a target feature matrix based on at least one of the m face images; matching the target feature matrix against the feature matrices corresponding to the face images in an information base; when the similarity between the target feature matrix and the feature matrix corresponding to a certain face image in the information base is greater than or equal to a preset similarity threshold, acquiring the identity information corresponding to that face image; and determining the identity information as the identity information corresponding to the target face.
- Optionally, determining the target feature matrix based on at least one of the m face images includes: performing feature extraction on each of the m face images to obtain m feature matrices; and performing feature fusion on the m feature matrices to obtain the target feature matrix.
- Optionally, performing feature fusion on the m feature matrices to obtain the target feature matrix includes: performing feature fusion on the m feature matrices using a weighted summation normalization formula to obtain the target feature matrix, where the formula may be expressed as V = (Σ_{i=1}^{m} a_i·V_i) / norm(Σ_{i=1}^{m} a_i·V_i), V is the target feature matrix, V_i is the i-th feature matrix of the m feature matrices, a_i is the weight coefficient of the i-th feature matrix with 0 ≤ a_i ≤ 1, and norm() represents taking the modulus of a vector.
- Optionally, before the weighted summation normalization formula is used to feature-fuse the m feature matrices into the target feature matrix, the method further includes: selecting multiple sets of face images collected by the stereo camera assembly, each set of face images including m face images; and inputting the multiple sets of face images into a preset model for data training to determine the weight coefficients.
- Optionally, the n facial feature points include at least two of a nose tip, a nose root, a left eye corner, a right eye corner, a left mouth corner, a right mouth corner, a chin center point, a left earlobe, a right earlobe, a left cheek, and a right cheek.
- In a second aspect, a face recognition apparatus is provided, the apparatus comprising: a stereo camera assembly configured to capture a target face to obtain m face images of the target face, m ≥ 2; a first determining module configured to determine, based on the m face images, the depths of n facial feature points of the target face, n ≥ 2; a judging module configured to determine, based on the depths of the n facial feature points, whether the target face is a three-dimensional face; and a second determining module configured to determine that the m face images are real face images when the target face is a three-dimensional face.
- Optionally, the stereo camera assembly is a binocular camera, and the binocular camera is configured to capture the target face to obtain two face images taken at the same time; the first determining module is configured to calculate, based on the two face images, the depths of the n facial feature points using binocular stereo vision technology.
- Optionally, the first determining module is further configured to: determine the position of a first facial feature point in the two face images, the first facial feature point being any one of the n facial feature points; calculate the three-dimensional coordinates of the first facial feature point according to its positions in the two face images in combination with the camera parameters; and determine the depth of the first facial feature point based on the three-dimensional coordinates.
- Optionally, the binocular camera comprises a visible light camera and a near-infrared camera, and the two face images comprise a visible light image captured by the visible light camera and a near-infrared image captured by the near-infrared camera.
- Optionally, the judging module includes: a calculation submodule configured to calculate a stereoscopic score of the target face based on the depths of the n facial feature points; a comparison submodule configured to compare the stereoscopic score of the target face with a preset stereoscopic score; a second determining submodule configured to determine that the target face is a three-dimensional face when the stereoscopic score of the target face is greater than or equal to the preset stereoscopic score; and a third determining submodule configured to determine that the target face is not a three-dimensional face when the stereoscopic score of the target face is less than the preset stereoscopic score.
- Optionally, the apparatus further includes: a third determining module configured to determine a target feature matrix based on at least one of the m face images; a matching module configured to match the target feature matrix against the feature matrices corresponding to the face images in the information base; an acquiring module configured to acquire the identity information corresponding to a certain face image when the similarity between the target feature matrix and the feature matrix corresponding to that face image in the information base is greater than or equal to a preset similarity threshold; and a fourth determining module configured to determine the identity information as the identity information corresponding to the target face.
- Optionally, the third determining module includes: a feature extraction submodule configured to perform feature extraction on each of the m face images to obtain m feature matrices; and a feature fusion submodule configured to perform feature fusion on the m feature matrices to obtain the target feature matrix.
- Optionally, the feature fusion submodule is configured to: perform feature fusion on the m feature matrices using the weighted summation normalization formula to obtain the target feature matrix, where the formula may be expressed as V = (Σ_{i=1}^{m} a_i·V_i) / norm(Σ_{i=1}^{m} a_i·V_i), V is the target feature matrix, V_i is the i-th feature matrix of the m feature matrices, a_i is the weight coefficient of the i-th feature matrix with 0 ≤ a_i ≤ 1, and norm() represents taking the modulus of a vector.
- Optionally, the apparatus further includes: a selection module configured to select multiple sets of face images collected by the stereo camera assembly, each set of face images including m face images; and a fifth determining module configured to input the multiple sets of face images into a preset model for data training to determine the weight coefficients.
- Optionally, the n facial feature points include at least two of a nose tip, a nose root, a left eye corner, a right eye corner, a left mouth corner, a right mouth corner, a chin center point, a left earlobe, a right earlobe, a left cheek, and a right cheek.
- In a third aspect, a computer device is provided, comprising at least one processor and at least one memory, wherein the memory is configured to store a computer program, and the processor is configured to execute the program stored in the memory to implement the face recognition method according to any one of the first aspect.
- In a fourth aspect, a non-transitory computer-readable storage medium is provided, which stores code instructions that are executed by a processor to perform the face recognition method according to any one of the first aspect.
- The beneficial effects of the technical solutions provided by the embodiments of the present application include the following: the face recognition method and apparatus, and the computer device can determine the depths of the n facial feature points of a target face based on the m face images obtained by capturing the target face with the stereo camera assembly, judge whether the target face is a three-dimensional face, and, when it is, determine that the captured m face images are real face images. Whether a captured image is a real face image can thus be judged without providing a non-contact temperature sensing device, which reduces the complexity of that judgment and reduces the cost of face recognition.
- FIG. 1 is a flowchart of a face recognition method according to an embodiment of the present application;
- FIG. 2 is a flowchart of another face recognition method according to an embodiment of the present application;
- FIG. 3 is a flowchart of a method for determining the depth of a facial feature point according to an embodiment of the present application;
- FIG. 4 is a flowchart of a method for determining whether a target face is a three-dimensional face according to an embodiment of the present application;
- FIG. 5 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present application;
- FIG. 6 is a schematic structural diagram of a judging module according to an embodiment of the present application;
- FIG. 7 is a schematic structural diagram of another face recognition apparatus according to an embodiment of the present application;
- FIG. 8 is a schematic structural diagram of a third determining module according to an embodiment of the present application;
- FIG. 9 is a schematic structural diagram of still another face recognition apparatus according to an embodiment of the present application;
- FIG. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application.
- FIG. 1 is a flowchart of a face recognition method according to an embodiment of the present application. As shown in FIG. 1, the method may include:
- Step 101: Capture a target face by a stereo camera assembly to obtain m face images of the target face, m ≥ 2.
- Step 102: Determine, based on the m face images, the depths of n facial feature points of the target face, n ≥ 2.
- Step 103: Determine, based on the depths of the n facial feature points, whether the target face is a three-dimensional face.
- Step 104: When the target face is a three-dimensional face, determine that the m face images are real face images.
- In summary, the face recognition method provided by the embodiments of the present application can determine the depths of the n facial feature points of the target face based on the m face images obtained by capturing the target face with the stereo camera assembly, judge whether the target face is a three-dimensional face, and, when it is, determine that the captured m face images are real face images. Whether a captured image is a real face image can thus be judged without providing a non-contact temperature sensing device, which reduces the complexity of that judgment and reduces the cost of face recognition. The four steps map directly onto a short liveness-check routine, as sketched below.
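- The following Python sketch is illustrative only: capture_faces, feature_point_depths, and stereo_score are hypothetical stand-ins for the components described in the embodiments below, not names used by the application.

```python
from typing import Callable, Sequence

def liveness_check(
    capture_faces: Callable[[], Sequence[object]],
    feature_point_depths: Callable[[Sequence[object]], Sequence[float]],
    stereo_score: Callable[[Sequence[float]], float],
    preset_score: float,
) -> bool:
    """Steps 101-104 as one routine: capture m face images, estimate the
    depths of n feature points, score three-dimensionality, and decide."""
    images = capture_faces()                  # step 101: m >= 2 face images
    depths = feature_point_depths(images)     # step 102: n >= 2 feature depths
    score = stereo_score(depths)              # step 103: stereoscopic score
    return score >= preset_score              # step 104: real-face decision

# Toy usage with stand-in callables; equal depths model a flat photo.
print(liveness_check(lambda: ["img_left", "img_right"],
                     lambda imgs: [0.6, 0.6, 0.6, 0.6, 0.6],
                     lambda z: sum(abs(z[0] - zi) for zi in z[1:]),
                     preset_score=0.02))      # -> False (not three-dimensional)
```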
- FIG. 2 is a flowchart of another face recognition method according to an embodiment of the present application. As shown in FIG. 2, the method may include:
- Step 201: Capture a target face by a stereo camera assembly to obtain m face images of the target face, m ≥ 2.
- Optionally, the stereo camera assembly may be a binocular camera (also referred to as a binocular stereo camera). In that case, capturing the target face by the stereo camera assembly to obtain the m face images of the target face may include: capturing the target face by the binocular camera to obtain two face images taken at the same time.
- A binocular camera usually includes two cameras. Because the two cameras are located at different positions, the binocular camera can photograph the target face from different viewing angles at the same time, thereby obtaining two face images with different perspectives.
- In the embodiments of the present application, the binocular camera may include a visible light camera and a near-infrared camera; correspondingly, the two face images may include a visible light image captured by the visible light camera and a near-infrared image captured by the near-infrared camera. Optionally, the binocular camera may instead include two visible light cameras or two near-infrared cameras; the embodiments of the present application do not limit the types of the cameras in the binocular camera.
- It should be noted that visible light is electromagnetic radiation with wavelengths in the range of 400 to 760 nm, and near-infrared light is electromagnetic radiation with wavelengths in the range of 780 to 2526 nm. By photographing the target face with one visible light camera and one near-infrared camera, it can be ensured that, under different illumination conditions, at least one of the two face images collected by the binocular camera at the same moment can be used for subsequent face recognition, which improves the reliability with which the binocular camera captures face images. For example, when the illumination is strong (ample light in the 400 to 760 nm band), the visible light image collected by the visible light camera has high sharpness, and since its texture is finer than that of the near-infrared image, subsequent face recognition can be performed mainly on the visible light image. When the illumination is weak, the visible light image collected by the visible light camera has low sharpness, while the near-infrared image (responding to light in the 780 to 2526 nm band) is not affected by the illumination, so subsequent face recognition can be performed mainly on the near-infrared image, thereby improving the reliability of face recognition. A minimal sketch of this choice follows.
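- The following sketch assumes the mean pixel intensity of the visible light image as the illumination proxy and an 8-bit threshold of 90; neither detail comes from the application.

```python
import numpy as np

def pick_recognition_image(visible_img: np.ndarray, nir_img: np.ndarray) -> np.ndarray:
    """Prefer the visible light image under strong illumination (finer texture);
    fall back to the near-infrared image under weak illumination."""
    strong_illumination = visible_img.mean() > 90.0   # assumed 8-bit threshold
    return visible_img if strong_illumination else nir_img

# Toy usage: a dim visible frame routes recognition to the NIR frame.
vis = np.full((480, 640), 40, dtype=np.uint8)
nir = np.full((480, 640), 120, dtype=np.uint8)
print("use NIR:", pick_recognition_image(vis, nir) is nir)  # True
```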
- Optionally, before shooting, it is judged whether a face exists in the current shooting area; when a face exists, the binocular camera performs the shooting. For example, the binocular camera and/or a computer device connected to the binocular camera judges whether a face exists in the shooting area through a face detection algorithm, where the face detection algorithm used includes at least one of a face detection algorithm based on histogram coarse segmentation and singular value features, a face detection algorithm based on binary wavelet transform, a face detection algorithm based on the AdaBoost algorithm, and a face detection algorithm based on the binocular structural features of the face; the embodiments of the present application do not limit the manner of judging whether a face exists in the shooting area.
- It should be noted that, when capturing face images, the binocular camera requires the user to actively cooperate by adjusting the position and expression of the face within the shooting area, so that the binocular camera can capture two face images that meet the face recognition requirements. A face image meeting the face recognition requirements may be a complete face image or a face image containing certain specific facial feature points. When the target face does not meet the shooting requirements of the binocular camera, for example when the binocular camera cannot capture a complete face image or an image containing the specific facial feature points, the binocular camera can issue prompt information to prompt the user to adjust position and expression; for example, the binocular camera can issue a voice prompt whose content is "Please shoot again".
- Optionally, the stereo camera assembly may also be composed of a plurality of cameras arranged in an array; the type of the stereo camera assembly is not limited in the embodiments of the present application. For example, the stereo camera assembly may include three cameras that photograph the same area from different viewing angles at the same moment to obtain three face images with different perspectives. For example, when the stereo camera assembly includes a plurality of independent cameras, the plurality of cameras may be connected to a shooting trigger device, which controls the plurality of cameras to shoot at the same moment.
- In the embodiments of the present application, the face recognition process may include a real-face judgment process and an identity confirmation process. After the stereo camera assembly captures the m face images of the target face, whether the target face is a real face may be judged based on the m face images (for the specific process, refer to steps 202 to 204), and the identity information corresponding to the target face may also be determined based on the m face images (for the specific process, refer to steps 205 to 208).
- Step 202: Determine, based on the m face images, the depths of n facial feature points of the target face, n ≥ 2.
- The n facial feature points may include at least two of a nose tip, a nose root, a left eye corner, a right eye corner, a left mouth corner, a right mouth corner, a chin center point, a left earlobe, a right earlobe, a left cheek, and a right cheek.
- Optionally, when the m face images are two face images of the target face captured by the binocular camera at the same moment, the depths of the n facial feature points of the target face may be determined by calculating them, based on the two face images, using binocular stereo vision technology.
- Optionally, when the stereo camera assembly captures more than two face images at the same moment, the plurality of images may be screened or fused to obtain two face images, and the depths of the n facial feature points are then calculated based on those two face images; this calculation may likewise employ binocular stereo vision technology. For example, when the stereo camera assembly captures three face images at the same moment, two face images meeting the face recognition requirements, such as images containing the n facial feature points clearly, may be selected from the three face images, and the depths of the n facial feature points calculated using binocular stereo vision technology; alternatively, image fusion may be performed on two of the three face images to obtain a fused face image, and the depths of the n facial feature points calculated using binocular stereo vision technology based on the fused face image and the remaining face image.
- Binocular stereo vision is an important form of machine vision. Based on the parallax principle, it uses imaging devices to acquire two images of a measured object from different positions and obtains the three-dimensional geometric information of the object by calculating the positional deviation (disparity) between corresponding points in the two images.
- In the embodiments of the present application, a flowchart of the method for calculating the depths of the n facial feature points using binocular stereo vision technology is shown in FIG. 3 and includes:
- Step 2021: Determine the position of a first facial feature point in the two face images.
- The first facial feature point is any one of the n facial feature points. Optionally, the n facial feature points may each be numbered; for example, the n facial feature points may be p_1, p_2, ..., p_n, and the two face images include a first face image and a second face image.
- Step 2022: Calculate the three-dimensional coordinates of the first facial feature point according to its positions in the two face images, in combination with the camera parameters.
- The camera parameters include internal parameters (intrinsics) and external parameters (extrinsics): the camera intrinsics include the focal length, the image center, and the distortion coefficients, and the camera extrinsics include the pitch angle, the tilt angle, and the height.
- After the image coordinate system is established, the plane coordinates of the first facial feature point in each of the two images can be determined from its positions in the two images in combination with the camera parameters. By applying the method used for the first facial feature point to every feature point, the plane coordinates of the n facial feature points in the first face image and in the second face image can be determined.
- Optionally, the three-dimensional coordinates of the first facial feature point in the camera coordinate system can be calculated from the plane coordinates of the first facial feature point in the first face image and its plane coordinates in the second face image. Further, the n facial feature points can be matched across the two face images according to their numbers, the plane coordinates of the same facial feature point in the two face images determined, and the three-dimensional coordinates of the n facial feature points in the camera coordinate system calculated; the z-component of each of these coordinates is the depth coordinate of the corresponding feature point.
- Further, based on the three-dimensional coordinates of the n facial feature points in the camera coordinate system, the camera extrinsics can be combined with a rigid body transformation (in three-dimensional space, when an object does not deform, a motion that rotates and translates a geometric object is called a rigid body transformation) to calculate the three-dimensional coordinates of the n facial feature points in the world coordinate system. In the embodiments of the present application, either the three-dimensional coordinates of the n facial feature points in the camera coordinate system or their three-dimensional coordinates in the world coordinate system may be used for the three-dimensional face judgment.
- Here, the image coordinate system is a coordinate system established on the two-dimensional image captured by the camera; the camera coordinate system is the coordinate system in which the camera measures objects from its own viewpoint, with its origin at the optical center of the camera and its z-axis parallel to the optical axis of the camera; and the world coordinate system is the real physical coordinate system, which serves as the reference frame for the position of the target object.
- Step 2023: Determine the depth of the first facial feature point based on the three-dimensional coordinates.
- Optionally, the depth of the first facial feature point may be determined from its depth (z) coordinate; correspondingly, the depths of the other facial feature points among the n facial feature points can be obtained in the same manner as the depth of the first facial feature point, and details are not repeated here. A concrete sketch of steps 2021 to 2023 follows.
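- The following Python sketch triangulates one feature point from a rectified stereo pair using the standard disparity relation Z = f·B/d. The rectified-camera simplification, the calibration values, and the matched pixel coordinates are all assumptions made for illustration; the application itself only requires that the three-dimensional coordinates be computed from the two image positions and the camera parameters.

```python
import numpy as np

def triangulate_rectified(pt_left, pt_right, f, baseline, cx, cy):
    """Recover camera-frame 3D coordinates of one matched feature point from
    a rectified stereo pair (standard disparity model, assumed here)."""
    d = pt_left[0] - pt_right[0]           # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: cannot triangulate")
    Z = f * baseline / d                   # depth along the optical axis
    X = (pt_left[0] - cx) * Z / f
    Y = (pt_left[1] - cy) * Z / f
    return np.array([X, Y, Z])

# Hypothetical calibration (focal length in pixels, baseline in metres,
# principal point) and matched nose-tip positions in the two images.
f, baseline, cx, cy = 800.0, 0.06, 320.0, 240.0
xyz = triangulate_rectified((352.0, 230.0), (272.0, 230.0), f, baseline, cx, cy)
print("camera-frame coordinates:", xyz, "depth:", xyz[2])  # depth = 0.6 m
# Mapping to the world frame would apply the rigid body transformation
# X_world = R @ xyz + t, with R and t taken from the camera extrinsics.
```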
- Step 203: Determine, based on the depths of the n facial feature points, whether the target face is a three-dimensional face.
- In the embodiments of the present application, a flowchart for determining whether the target face is a three-dimensional face based on the depths of the n facial feature points is shown in FIG. 4 and includes:
- Step 2031: Calculate a stereoscopic score of the target face based on the depths of the n facial feature points.
- The stereoscopic score is a score obtained by scoring the depths of the facial feature points: the higher the stereoscopic score, the more three-dimensional the target face. The stereoscopic score t and the depth coordinates of the facial feature points satisfy a preset functional relationship, and this functional relationship is the scoring rule for the stereoscopic score. The scoring rule needs to reflect the three-dimensional shape of a real face and meet the requirements of a real face contour. Optionally, the magnitude of the stereoscopic score is positively correlated with the depth differences of multiple groups of facial feature points, where each group may include two of the n facial feature points, and the score may be taken as the sum of the absolute depth differences over all groups of feature points. The stereoscopic score of the target face may also be referred to as the stereoscopic score of the facial feature points.
- Optionally, five feature points, namely the nose tip, the nose root, the chin center point, the left earlobe, and the right earlobe, may be selected as the facial feature points for calculating the stereoscopic score, where the depth coordinate of the nose tip is z_1, that of the nose root is z_2, that of the chin center point is z_3, that of the left earlobe is z_4, and that of the right earlobe is z_5. The depth differences among the nose tip, the nose root, the chin center point, and the earlobes can be used as the basis of the scoring rule, and the stereoscopic score t of the face is then a preset function of the depth coordinates of these five feature points. The advantage of using these five facial feature points to calculate the stereoscopic score is that the number of feature points is small, their feature information is distinctive, so their positions in the image are easy to determine, and the depth differences between the point pairs are large, which makes it easier to judge whether the face is real.
- Optionally, five feature points, namely the left earlobe, the right earlobe, the nose tip, the left mouth corner, and the right mouth corner, may be selected as the facial feature points for calculating the stereoscopic score, where the depth coordinate of the left earlobe is z_1, that of the right earlobe is z_2, that of the nose tip is z_3, that of the left mouth corner is z_4, and that of the right mouth corner is z_5. The depth differences of the four groups of feature points, namely the nose tip and the left earlobe, the nose tip and the right earlobe, the nose tip and the left mouth corner, and the nose tip and the right mouth corner, can be used as the basis of the scoring rule; the stereoscopic score t of the face corresponding to these five facial feature points is then a preset function of their depth coordinates.
- It should be noted that the above scoring rules are only simple illustrative examples. The actual scoring rule can be designed according to the face pose, and different face poses can correspond to different scoring rules: after the face images of the target face are obtained, the selected facial feature points and the corresponding scoring rule are determined according to the face pose in the face images, and the stereoscopic score of the target face is then calculated. This is not limited in the embodiments of the present application.
- Step 2032: Compare the stereoscopic score of the target face with the preset stereoscopic score.
- The choice of the preset stereoscopic score is related to the facial feature points selected for calculating the stereoscopic score and to the scoring rule: the larger the preset stereoscopic score, the fewer non-real faces are misjudged as real, but the more real faces are missed. For example, when the five feature points of the left earlobe, the right earlobe, the nose tip, the left mouth corner, and the right mouth corner in step 2031 are selected as the facial feature points for calculating the stereoscopic score, 0.4 times the physical distance between the centers of the left and right eyes can be used as the preset stereoscopic score.
- Step 2033: When the stereoscopic score of the target face is greater than or equal to the preset stereoscopic score, determine that the target face is a three-dimensional face.
- Step 2034: When the stereoscopic score of the target face is less than the preset stereoscopic score, determine that the target face is not a three-dimensional face. A sketch of steps 2031 to 2034 follows.
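- The following sketch assumes the sum-of-absolute-depth-differences rule over the four nose-tip groups from the second example above, together with the 0.4 x eye-center-distance threshold; the text does not reproduce its exact scoring formula, so this rule is illustrative rather than the claimed one.

```python
def stereo_score(depths):
    """Illustrative stereoscopic score: sum of absolute depth differences
    between the nose tip and four other feature points (assumed rule; the
    text only requires the score to grow with these differences)."""
    z_lear, z_rear, z_nose, z_lmouth, z_rmouth = depths
    groups = [(z_nose, z_lear), (z_nose, z_rear),
              (z_nose, z_lmouth), (z_nose, z_rmouth)]
    return sum(abs(a - b) for a, b in groups)

def is_three_dimensional(depths, eye_center_distance):
    # Preset stereoscopic score from the example above: 0.4 x the physical
    # distance between the left and right eye centers.
    return stereo_score(depths) >= 0.4 * eye_center_distance

# A flat photo yields near-identical depths and fails the check ...
print(is_three_dimensional([0.60, 0.60, 0.60, 0.60, 0.60], 0.062))  # False
# ... while a real face has the nose tip clearly nearer than ears and mouth.
print(is_three_dimensional([0.66, 0.66, 0.58, 0.63, 0.63], 0.062))  # True
```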
- Optionally, in a face attendance scenario, when the target face is determined not to be a three-dimensional face, the m face images obtained by capturing the target face can be uploaded to a "blacklist" database of the attendance system; when querying the attendance system, a manager can determine from the face images in the "blacklist" which employees faked attendance, for follow-up handling. Optionally, in a security scenario, when the target face is determined not to be a three-dimensional face, an alarm message can be issued immediately to prompt managers or security personnel to take corresponding measures.
- Step 204: When the target face is a three-dimensional face, determine that the m face images are real face images.
- When the target face is a three-dimensional face, the possibility that the target "face" presented for face recognition is a face photo or a face video can be excluded, so it can be determined that the m face images captured by the stereo camera assembly are real face images. Determining that the m face images are real face images is equivalent to determining that the target face is a real face.
- Step 205: Determine a target feature matrix based on at least one of the m face images.
- Optionally, feature extraction may be performed on any one of the m face images and the resulting feature matrix determined as the target feature matrix; alternatively, feature extraction may be performed on each of the m face images to obtain m feature matrices, and feature fusion performed on the m feature matrices to obtain the target feature matrix.
- A face feature is representational data with discriminative power for face recognition, extracted from a face image; representation methods include knowledge-based representation and statistical representation. Face features mainly include visual features, geometric features, pixel statistical features, image algebra features, and the like. Optionally, feature extraction algorithms include the local binary pattern (LBP) algorithm, the scale-invariant feature transform (SIFT) algorithm, the histogram of oriented gradients (HOG) algorithm, deep neural network learning algorithms, and the like.
- Optionally, performing feature fusion on the m feature matrices to obtain the target feature matrix may include: performing feature fusion on the m feature matrices using a weighted summation normalization formula to obtain the target feature matrix, where the formula may be expressed as V = (Σ_{i=1}^{m} a_i·V_i) / norm(Σ_{i=1}^{m} a_i·V_i), V is the target feature matrix, V_i is the i-th feature matrix among the m feature matrices, a_i is the weight coefficient of the i-th feature matrix with 0 ≤ a_i ≤ 1, and norm() represents taking the modulus of a vector.
- It should be noted that fusing the m feature matrices with the weighted summation normalization formula does not increase the feature dimension. Therefore, after the features of multiple face images are fused, the accuracy of face recognition can be improved at the same computational complexity. A sketch of the fusion follows.
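- The following NumPy sketch implements the weighted summation normalization as reconstructed above from the stated definitions of V, V_i, a_i, and norm(); the reconstruction, the 128-dimensional feature size, and the example weights are assumptions.

```python
import numpy as np

def fuse_features(feature_matrices, weights):
    """Weighted summation normalization:
    V = sum(a_i * V_i) / norm(sum(a_i * V_i))."""
    fused = sum(a * v for a, v in zip(weights, feature_matrices))
    return fused / np.linalg.norm(fused)

# Hypothetical 128-dimensional features from a visible light image and a
# near-infrared image, with weights favoring the visible light image.
rng = np.random.default_rng(0)
v1, v2 = rng.normal(size=128), rng.normal(size=128)
target = fuse_features([v1, v2], weights=[0.7, 0.3])
print(target.shape, round(float(np.linalg.norm(target)), 6))  # (128,) 1.0
```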
- For example, feature extraction is performed on the visible light image to obtain a visible light feature matrix V_1 with weight coefficient a_1, and on the near-infrared image to obtain a near-infrared feature matrix V_2 with weight coefficient a_2. Feature fusion of the visible light feature matrix and the near-infrared feature matrix balances the near-infrared image's immunity to illumination against the finer facial texture of the visible light image, improving the accuracy of face recognition.
- Optionally, before the fusion, multiple sets of face images collected by the stereo camera assembly are selected, each set including m face images; that is, the number of face images in each set is the same as the number of feature matrices to be fused. The multiple sets of face images are then input into a preset model for data training to determine the weight coefficients. Optionally, the collected sets of face images may include face images collected at different times.
- For example, when the stereo camera assembly is a binocular camera including a visible light camera and a near-infrared camera, the multiple sets of face images include sets collected at different times, each set including a visible light image and a near-infrared image; the sets of face images collected at each time are input into the preset model for data training to determine the weight coefficients for that time. Optionally, the sets of face images collected at the same time include different face poses and facial expressions.
- For example, with a step size of 0.1 for a_1 and a_2, the values of a_1 and a_2 may each be taken from {0, 0.1, 0.2, ..., 0.9, 1}, giving 11 x 11 possible combinations of a_1 and a_2. All combinations are traversed, the face recognition success rate under each combination is counted, and the values of a_1 and a_2 in the combination with the highest success rate are determined as the weight coefficients for that time (see the sketch after this paragraph). Since the illumination intensity differs at different times, the weight coefficients can be adjusted according to the actual illumination intensity to bias the choice between the visible light feature matrix and the near-infrared feature matrix. For example, when the illumination is strong, the weight coefficient a_1 of the visible light feature matrix V_1 is greater than the weight coefficient a_2 of the near-infrared feature matrix V_2; when the illumination is weak, the weight coefficient a_1 of the visible light feature matrix V_1 is less than the weight coefficient a_2 of the near-infrared feature matrix V_2. The embodiments of the present application do not limit the specific values of the weight coefficients.
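- The following sketch of the exhaustive weight search uses a hypothetical recognize_ok callable as a stand-in for running the full matching pipeline on a labeled validation set; the data shapes are illustrative.

```python
import itertools
import numpy as np

def best_weights(val_pairs, recognize_ok, step=0.1):
    """Traverse all (a1, a2) combinations on a 0.1 grid and keep the pair
    with the highest face recognition success rate. `val_pairs` holds
    (visible_feature, nir_feature, label) tuples."""
    grid = np.round(np.arange(0.0, 1.0 + step, step), 1)
    best, best_rate = (0.5, 0.5), -1.0
    for a1, a2 in itertools.product(grid, grid):   # 11 x 11 combinations
        hits = sum(recognize_ok(a1 * v1 + a2 * v2, label)
                   for v1, v2, label in val_pairs)
        rate = hits / len(val_pairs)
        if rate > best_rate:
            best, best_rate = (a1, a2), rate
    return best, best_rate

# Toy usage: recognition "succeeds" when the fused vector's argmax matches.
rng = np.random.default_rng(1)
pairs = [(rng.normal(size=4), rng.normal(size=4), i % 4) for i in range(8)]
print(best_weights(pairs, lambda fused, label: int(np.argmax(fused)) == label))
```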
- Optionally, feature fusion may be performed on more than two face images. For example, when the stereo camera assembly captures three face images at the same moment, feature fusion may be performed on the three face images; the number of face images used for feature fusion is not limited in the embodiments of the present application.
- Step 206: Match the target feature matrix against the feature matrices corresponding to the face images in the information base.
- Optionally, the target feature matrix may be compared with the feature matrices corresponding to the face images in the information base to obtain the maximum similarity between the target feature matrix and those feature matrices, and the feature matrix corresponding to the maximum similarity is determined.
- Step 207: When the similarity between the target feature matrix and the feature matrix corresponding to a certain face image in the information base is greater than or equal to a preset similarity threshold, acquire the identity information corresponding to that face image.
- In this case, the target feature matrix is determined to match the feature matrix of a face image in the information base; that is, a face image of the target face is stored in the information base.
- Step 208: Determine the identity information as the identity information corresponding to the target face.
- The identity information corresponding to the matched face image in the information base is determined as the identity information corresponding to the target face; in an attendance scenario, this means the face attendance succeeds. A sketch of steps 206 to 208 follows.
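- The following sketch uses cosine similarity as the similarity measure and 0.8 as the preset threshold; both are assumptions, since the application fixes neither.

```python
import numpy as np

def identify(target, info_base, threshold=0.8):
    """Match a fused target feature against the information base and return
    the identity with maximum similarity when it clears the threshold."""
    best_id, best_sim = None, -1.0
    for identity, feature in info_base.items():
        sim = float(np.dot(target, feature) /
                    (np.linalg.norm(target) * np.linalg.norm(feature)))
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)

# Toy information base with two identities.
base = {"alice": np.array([1.0, 0.0, 0.0]), "bob": np.array([0.0, 1.0, 0.0])}
print(identify(np.array([0.9, 0.1, 0.0]), base))  # ('alice', ~0.994)
```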
- It should be noted that the face recognition method provided by the embodiments of the present application can be applied to the field of face attendance, and can also be applied to security fields such as face-based access management; the application scenario of the face recognition method is not limited in the embodiments of the present application.
- It should also be noted that the order of the steps can be adjusted appropriately: steps 205 to 208 may be performed before steps 201 to 204, that is, the identity of the target face may first be confirmed against the information base before identifying whether the target face is a real face. Steps may also be added or removed as the situation requires. For example, in step 207, when the similarity between the target feature matrix and the feature matrix corresponding to a certain face image in the information base is determined to be greater than or equal to the preset similarity threshold, the access control may simply be released (the door opened) without acquiring the identity information corresponding to that face image and determining it as the identity information of the target face; that is, step 208 need not be performed. Any method that can be easily conceived by those skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application, and details are not repeated here.
- In summary, the face recognition method provided by the embodiments of the present application can determine the depths of the n facial feature points of the target face based on the m face images obtained by capturing the target face with the stereo camera assembly, judge whether the target face is a three-dimensional face, and, when it is, determine that the captured m face images are real face images. Whether a captured image is a real face image can thus be judged without providing a non-contact temperature sensing device, which reduces the complexity of that judgment and the cost of face recognition; moreover, performing face recognition with the target feature matrix obtained by feature fusion of the feature matrices of the m face images improves the accuracy of face recognition.
- FIG. 5 is a schematic structural diagram of a face recognition apparatus 40 according to an embodiment of the present application. As shown in FIG. 5, the apparatus 40 may include:
- a stereo camera assembly 401 configured to capture a target face to obtain m face images of the target face, m ≥ 2;
- a first determining module 402 configured to determine, based on the m face images, the depths of n facial feature points of the target face, n ≥ 2;
- a judging module 403 configured to determine, based on the depths of the n facial feature points, whether the target face is a three-dimensional face;
- a second determining module 404 configured to determine that the m face images are real face images when the target face is a three-dimensional face.
- In summary, with the face recognition apparatus provided by the embodiments of the present application, the first determining module can determine the depths of the n facial feature points of the target face based on the m face images obtained by capturing the target face with the stereo camera assembly, the judging module can judge whether the target face is a three-dimensional face, and the second determining module can determine that the captured m face images are real face images when the target face is a three-dimensional face. Whether a captured image is a real face image can thus be judged without providing a non-contact temperature sensing device, which reduces the complexity of that judgment and reduces the cost of face recognition.
- Optionally, the stereo camera assembly is a binocular camera, and the binocular camera can be configured to capture the target face to obtain two face images taken at the same time; the first determining module can be configured to calculate, based on the two face images, the depths of the n facial feature points using binocular stereo vision technology.
- Optionally, the first determining module is further configured to: determine the position of a first facial feature point in the two face images, the first facial feature point being any one of the n facial feature points; calculate the three-dimensional coordinates of the first facial feature point according to its positions in the two face images in combination with the camera parameters; and determine the depth of the first facial feature point based on the three-dimensional coordinates.
- Optionally, the binocular camera may include a visible light camera and a near-infrared camera, and the two face images include a visible light image captured by the visible light camera and a near-infrared image captured by the near-infrared camera.
- Optionally, as shown in FIG. 6, the judging module 403 may include:
- a calculation submodule 4031 configured to calculate a stereoscopic score of the target face based on the depths of the n facial feature points;
- a comparison submodule 4032 configured to compare the stereoscopic score of the target face with a preset stereoscopic score;
- a second determining submodule 4033 configured to determine that the target face is a three-dimensional face when the stereoscopic score of the target face is greater than or equal to the preset stereoscopic score;
- a third determining submodule 4034 configured to determine that the target face is not a three-dimensional face when the stereoscopic score of the target face is less than the preset stereoscopic score.
- Optionally, as shown in FIG. 7, the apparatus 40 may further include:
- a third determining module 405 configured to determine a target feature matrix based on at least one of the m face images;
- a matching module 406 configured to match the target feature matrix against the feature matrices corresponding to the face images in the information base;
- an acquiring module 407 configured to acquire the identity information corresponding to a certain face image when the similarity between the target feature matrix and the feature matrix corresponding to that face image in the information base is greater than or equal to a preset similarity threshold;
- a fourth determining module 408 configured to determine the identity information as the identity information corresponding to the target face.
- Optionally, as shown in FIG. 8, the third determining module 405 may include:
- a feature extraction submodule 4051 configured to perform feature extraction on each of the m face images to obtain m feature matrices;
- a feature fusion submodule 4052 configured to perform feature fusion on the m feature matrices to obtain the target feature matrix.
- Optionally, the feature fusion submodule can be configured to: perform feature fusion on the m feature matrices using the weighted summation normalization formula to obtain the target feature matrix, where the formula may be expressed as V = (Σ_{i=1}^{m} a_i·V_i) / norm(Σ_{i=1}^{m} a_i·V_i), V is the target feature matrix, V_i is the i-th feature matrix among the m feature matrices, a_i is the weight coefficient of the i-th feature matrix with 0 ≤ a_i ≤ 1, and norm() represents taking the modulus of a vector.
- Optionally, as shown in FIG. 9, the apparatus 40 may further include:
- a selection module 409 configured to select multiple sets of face images collected by the stereo camera assembly, each set of face images including m face images;
- a fifth determining module 410 configured to input the multiple sets of face images into the preset model for data training to determine the weight coefficients.
- the n facial feature points may include at least two of a nose tip, a nose root, a left corner of the eye, a right corner of the eye, a left corner of the mouth, a right corner of the mouth, a center point of the chin, a left earlobe, a right earlobe, a left cheek, and a right cheek.
- In summary, with the face recognition apparatus provided by the embodiments of the present application, the first determining module can determine the depths of the n facial feature points of the target face based on the m face images obtained by the stereo camera assembly capturing the target face, the judging module can judge whether the target face is a three-dimensional face, and the second determining module can determine that the captured m face images are real face images when the target face is a three-dimensional face. Whether a captured image is a real face image can thus be judged without providing a non-contact temperature sensing device, which reduces the complexity of that judgment and reduces the cost of face recognition; moreover, performing face recognition with the target feature matrix obtained by feature fusion of the feature matrices of the m face images improves the accuracy of face recognition.
- An embodiment of the present application provides a computer device. As shown in FIG. 10, the computer device 01 includes at least one processor 12 and at least one memory 16, wherein the memory 16 is configured to store a computer program, and the processor 12 is configured to execute the program stored in the memory 16 to implement the face recognition method described in the foregoing embodiments.
- For example, the method may include: capturing a target face by a stereo camera assembly to obtain m face images of the target face, m ≥ 2; determining, based on the m face images, the depths of n facial feature points of the target face, n ≥ 2; determining, based on the depths of the n facial feature points, whether the target face is a three-dimensional face; and when the target face is a three-dimensional face, determining that the m face images are real face images.
- Optionally, the processor 12 includes one or more processing cores. The processor 12 runs the computer program stored in the memory 16, which includes software programs and units, to perform various functional applications and data processing. The memory 16 can store an operating system 162 and an application unit 164 required for at least one function, where the operating system 162 can be an operating system such as Real Time eXecutive (RTX), LINUX, UNIX, WINDOWS, or OS X.
- The application unit 164 may include a photographing unit 164a, a first determining unit 164b, a judging unit 164c, and a second determining unit 164d, where the photographing unit 164a has the same or similar function as the stereo camera assembly 401, the first determining unit 164b has the same or similar function as the first determining module 402, the judging unit 164c has the same or similar function as the judging module 403, and the second determining unit 164d has the same or similar function as the second determining module 404.
- An embodiment of the present application provides a non-transitory computer-readable storage medium, which stores code instructions that are executed by a processor to perform the face recognition method of the foregoing method embodiments.
- An embodiment of the present application provides a chip, which includes a programmable logic circuit and/or program instructions and, when running, implements the face recognition method of the foregoing method embodiments.
- An embodiment of the present application provides a computer program that, when executed by a processor, implements the face recognition method of the foregoing method embodiments.
- A person of ordinary skill in the art may understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned may be a read-only memory, a magnetic disk, an optical disc, or the like.
Abstract
The present application discloses a face recognition method and apparatus, and a computer device, belonging to the field of computer vision technology. The method includes: capturing a target face by a stereo camera assembly to obtain m face images of the target face, m ≥ 2; determining, based on the m face images, the depths of n facial feature points of the target face, n ≥ 2; determining, based on the depths of the n facial feature points, whether the target face is a three-dimensional face; and when the target face is a three-dimensional face, determining that the m face images are real face images. The present application solves the problem in the related art that determining whether a detected face is a real face requires a complicated and costly scheme. The present application is used for face recognition.
Description
本申请要求于2017年9月25日提交的申请号为201710872594.9、发明名称为“人脸识别方法及装置、计算机设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
本申请涉及计算机视觉技术领域,特别涉及一种人脸识别方法及装置、计算机设备。
人脸识别技术就是通过计算机提取人脸的特征,并根据这些特征进行身份验证的一种技术。人脸与人体的其他生物特征(例如指纹和/或虹膜等)一样与生俱来,它们所具有的唯一性和不易被复制的良好特性为身份鉴别提供了必要的条件。与其他生物特征识别技术相比,人脸识别技术具有操作简单和结果直观的特点。因此,人脸识别技术在信息安全、刑事侦破、出入口控制和考勤等领域具有广泛的应用前景。
相关技术中,为了解决利用高品质的照片或影片替代真实人脸进行人脸识别的问题,一般采用非接触式温度感测装置检测人脸监控区域内的温度是否为正常人体温度,当检测到人脸监控区域内的温度为正常人体温度时,确定在人脸监控区域检测到的人脸为真实人脸。然后再基于该真实人脸的图像进行进一步识别。
但是相关技术中,在判断检测到的人脸是否为真实人脸时,需要另外设置非接触式温度感测装置,方案复杂且成本较高。
发明内容
本申请实施例提供了一种人脸识别方法及装置、计算机设备,可以解决相关技术中在判断检测到的人脸是否为真实人脸时方案复杂且成本较高的问题。所述技术方案如下:
第一方面,提供了一种人脸识别方法,所述方法包括:
通过立体摄像组件对目标人脸进行拍摄,以得到所述目标人脸的m个人脸图像,m≥2;
基于所述m个人脸图像,确定所述目标人脸的n个人脸特征点的深度,n≥2;
基于所述n个人脸特征点的深度,判断所述目标人脸是否为立体人脸;
当所述目标人脸为立体人脸时,确定所述m个人脸图像为真实人脸图像。
可选地,所述通过立体摄像组件对目标人脸进行拍摄,以得到所述目标人脸的m个人脸图像,包括:
通过双目摄像机对所述目标人脸进行拍摄,以得到同一时刻拍摄的两个人脸图像;
所述基于所述m个人脸图像,确定所述目标人脸的n个人脸特征点的深度,包括:
基于所述两个人脸图像,采用双目立体视觉技术计算所述n个人脸特征点的深度。
可选地,所述基于所述两个人脸图像,采用双目立体视觉技术计算所述n个人脸特征点的深度,包括:
确定第一人脸特征点在所述两个人脸图像中的位置,所述第一人脸特征点为所述n个人脸特征点中的任一特征点;
根据第一人脸特征点在所述两个人脸图像中的位置,结合相机参数,计算所述第一人脸特征点的三维坐标;
基于所述三维坐标,确定所述第一人脸特征点的深度。
可选地,所述双目摄像机包括一可见光摄像头和一近红外摄像头,所述两个人脸图像包括所述可见光摄像头拍摄的可见光图像以及所述近红外摄像头拍摄的近红外图像。
可选地,所述基于所述n个人脸特征点的深度,判断所述目标人脸是否为立体人脸,包括:
基于所述n个人脸特征点的深度,计算所述目标人脸的立体度评分;
比较所述目标人脸的立体度评分与预设的立体度分值的大小;
当所述目标人脸的立体度评分大于或等于所述预设的立体度分值时,确定所述目标人脸为立体人脸;
当所述目标人脸的立体度评分小于所述预设的立体度分值时,确定所述目标人脸不为立体人脸。
可选地,所述方法还包括:
基于所述m个人脸图像中的至少一个人脸图像确定目标特征矩阵;
将所述目标特征矩阵与信息库中的人脸图像所对应的特征矩阵进行匹配;
当所述目标特征矩阵与所述信息库中的某一人脸图像所对应的特征矩阵的相似度大于或等于预设的相似度阈值时,获取所述某一人脸图像所对应的身份信息;
将所述身份信息确定为所述目标人脸对应的身份信息。
可选地,所述基于所述m个人脸图像中的至少一个人脸图像确定目标特征矩阵,包括:
对所述m个人脸图像中的每个人脸图像进行特征提取,以得到m个特征矩阵;
将所述m个特征矩阵进行特征融合,得到所述目标特征矩阵。
可选地,所述将所述m个特征矩阵进行特征融合,得到所述目标特征矩阵,包括:
采用加权求和归一化公式,将所述m个特征矩阵进行特征融合,得到所述目标特征矩阵;
可选地,在所述采用加权求和归一化公式,将所述m个特征矩阵进行特征融合,得到所述目标特征矩阵之前,所述方法还包括:
选取所述立体摄像组件采集的多组人脸图像,每组人脸图像包括m个人脸图像;
将所述多组人脸图像输入预设模型进行数据训练,以确定所述权重系数。
可选地,所述n个人脸特征点包括鼻尖、鼻根、左眼角、右眼角、左嘴角、右嘴角、下巴中心点、左耳垂、右耳垂、左脸颊和右脸颊中的至少两个。
第二方面,提供了一种人脸识别装置,所述装置包括:
立体摄像组件,用于对目标人脸进行拍摄,以得到所述目标人脸的m个人脸图像,m≥2;
第一确定模块,用于基于所述m个人脸图像,确定所述目标人脸的n个人脸特征点的深度,n≥2;
判断模块,用于基于所述n个人脸特征点的深度,判断所述目标人脸是否为立体人脸;
第二确定模块,用于当所述目标人脸为立体人脸时,确定所述m个人脸图像为真实人脸图像。
可选地,所述立体摄像组件为双目摄像机,所述双目摄像机用于:
对所述目标人脸进行拍摄,以得到同一时刻拍摄的两个人脸图像;
所述第一确定模块,用于:
基于所述两个人脸图像,采用双目立体视觉技术计算所述n个人脸特征点的深度。
可选地,所述第一确定模块,还用于:
确定第一人脸特征点在所述两个人脸图像中的位置,所述第一人脸特征点为所述n个人脸特征点中的任一特征点;
根据第一人脸特征点在所述两个人脸图像中的位置,结合相机参数,计算所述第一人脸特征点的三维坐标;
基于所述三维坐标,确定所述第一人脸特征点的深度。
可选地,所述双目摄像机包括一可见光摄像头和一近红外摄像头,所述两个人脸图像包括所述可见光摄像头拍摄的可见光图像以及所述近红外摄像头拍摄的近红外图像。
可选地,所述判断模块,包括:
计算子模块,用于基于所述n个人脸特征点的深度,计算所述目标人脸的立体度评分;
比较子模块,用于比较所述目标人脸的立体度评分与预设的立体度分值的大小;
第二确定子模块,用于当所述目标人脸的立体度评分大于或等于所述预设的立体度分值时,确定所述目标人脸为立体人脸;
第三确定子模块,用于当所述目标人脸的立体度评分小于所述预设的立体度分值时,确定所述目标人脸不为立体人脸。
可选地,所述装置还包括:
第三确定模块,用于基于所述m个人脸图像中的至少一个人脸图像确定目标特征矩阵;
匹配模块,用于将所述目标特征矩阵与信息库中的人脸图像所对应的特征矩阵进行匹配;
获取模块,用于当所述目标特征矩阵与所述信息库中的某一人脸图像所对应的特征矩阵的相似度大于或等于预设的相似度阈值时,获取所述某一人脸图像所对应的身份信息;
第四确定模块,用于将所述身份信息确定为所述目标人脸对应的身份信息。
可选地,所述第三确定模块,包括:
特征提取子模块,用于对所述m个人脸图像中的每个人脸图像进行特征提取,以得到m个特征矩阵;
特征融合子模块,用于将所述m个特征矩阵进行特征融合,得到所述目标特征矩阵。
可选地,所述特征融合子模块,用于:
采用加权求和归一化公式,将所述m个特征矩阵进行特征融合,得到所述目标特征矩阵;
可选地,所述装置还包括:
选取模块,用于选取所述立体摄像组件采集的多组人脸图像,每组人脸图像包括m个人脸图像;
第五确定模块,用于将所述多组人脸图像输入预设模型进行数据训练,以确定所述权重系数。
可选地,所述n个人脸特征点包括鼻尖、鼻根、左眼角、右眼角、左嘴角、右嘴角、下巴中心点、左耳垂、右耳垂、左脸颊和右脸颊中的至少两个。
第三方面,提供了一种计算机设备,包括至少一个处理器和至少一个存储 器,
其中,
所述存储器,用于存放计算机程序;
所述处理器,用于执行所述存储器上所存放的程序,实现第一方面任一所述的人脸识别方法。
第四方面,提供了一种非易失性的计算机可读存储介质,所述计算机可读存储介质中存储有代码指令,所述代码指令由处理器执行,以执行第一方面任一所述的人脸识别方法。
本申请实施例提供的技术方案带来的有益效果包括:
本申请实施例提供的人脸识别方法及装置、计算机设备,可以基于通过立体摄像组件对目标人脸进行拍摄得到的m个人脸图像,确定目标人脸的n个人脸特征点的深度,并判断目标人脸是否为立体人脸,当目标人脸为立体人脸时,确定拍摄得到的m个人脸图像为真实人脸图像,无需设置非接触式温度感测装置即可判断拍摄的图像是否为真实人脸图像,降低了判断拍摄到的图像是否为真实人脸图像的复杂度,且降低了人脸识别的成本。
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
FIG. 1 is a flowchart of a face recognition method according to an embodiment of the present application;
FIG. 2 is a flowchart of another face recognition method according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for determining the depth of a facial feature point according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for determining whether a target face is a three-dimensional face according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a judging module according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of another face recognition apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a third determining module according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of yet another face recognition apparatus according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application.
To make the objectives, technical solutions, and advantages of the present application clearer, the implementations of the present application are described in further detail below with reference to the accompanying drawings.
FIG. 1 is a flowchart of a face recognition method according to an embodiment of the present application. As shown in FIG. 1, the method may include:
Step 101: Photograph a target face by a stereo camera assembly to obtain m face images of the target face, m ≥ 2.
Step 102: Determine depths of n facial feature points of the target face based on the m face images, n ≥ 2.
Step 103: Determine, based on the depths of the n facial feature points, whether the target face is a three-dimensional face.
Step 104: When the target face is a three-dimensional face, determine that the m face images are real face images.
In summary, with the face recognition method provided in the embodiments of the present application, the depths of n facial feature points of a target face can be determined based on m face images obtained by photographing the target face with a stereo camera assembly, and whether the target face is a three-dimensional face can be determined. When the target face is a three-dimensional face, the m captured face images are determined to be real face images. Whether a captured image is a real face image can thus be determined without a non-contact temperature sensing device, which reduces the complexity of the determination and lowers the cost of face recognition.
FIG. 2 is a flowchart of another face recognition method according to an embodiment of the present application. As shown in FIG. 2, the method may include:
Step 201: Photograph a target face by a stereo camera assembly to obtain m face images of the target face, m ≥ 2.
Optionally, the stereo camera assembly may be a binocular camera (also called a binocular stereo camera). In that case, photographing the target face by the stereo camera assembly to obtain the m face images of the target face may include: photographing the target face by the binocular camera to obtain two face images captured at the same moment.
A binocular camera typically includes two cameras. Because the two cameras are located at different positions, the binocular camera can photograph the target face from different viewing angles at the same moment, thereby obtaining two face images from different viewing angles.
In this embodiment of the present application, the binocular camera may include one visible-light camera and one near-infrared camera; correspondingly, the two face images may include a visible-light image captured by the visible-light camera and a near-infrared image captured by the near-infrared camera. Optionally, the binocular camera may instead include two visible-light cameras or two near-infrared cameras; this embodiment of the present application does not limit the types of cameras in the binocular camera.
It should be noted that visible light is electromagnetic radiation with wavelengths of 400 to 760 nm, and near-infrared light is electromagnetic radiation with wavelengths of 780 to 2526 nm. Therefore, by photographing the target face with one visible-light camera and one near-infrared camera, it can be ensured that, under different illumination conditions, at least one of the two face images collected by the binocular camera at the same moment is usable for subsequent face recognition, which improves the reliability of face image collection by the binocular camera. For example, under strong illumination (ambient light dominated by the 400-760 nm visible band), the visible-light image collected by the visible-light camera is sharp, and since the texture of a visible-light image is finer than that of a near-infrared image, subsequent face recognition can rely mainly on the visible-light image; under weak illumination (ambient light dominated by the 780-2526 nm near-infrared band), the visible-light image is less sharp, while the near-infrared image is unaffected by illumination, so subsequent face recognition can rely mainly on the near-infrared image, thereby improving the reliability of face recognition.
Optionally, before shooting, the binocular camera needs to determine whether a face is present in the current shooting area, and shoots when a face is present. For example, the binocular camera and/or a computer device connected to it determines whether a face exists in the binocular camera's shooting area using a face detection algorithm, which may include at least one of a face detection algorithm based on coarse histogram segmentation and singular-value features, a face detection algorithm based on the dyadic wavelet transform, a face detection algorithm based on the AdaBoost algorithm, a face detection algorithm based on the structural features of the two eyes, and the like; this embodiment of the present application does not limit the manner of determining whether a face exists in the shooting area.
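As an illustration of one of the options listed above (an AdaBoost-based detector), the minimal sketch below gates capture on face presence using OpenCV's Haar-cascade face detector. The camera index and the detector parameters are assumptions for the sketch, not values specified by this application.

```python
# Minimal sketch: shoot only when a face is present, using OpenCV's
# Haar-cascade detector (an AdaBoost-based face detection method).
# The cascade file ships with OpenCV; camera index 0 is an assumption.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_present(frame) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok and face_present(frame):
    pass  # trigger the synchronized binocular capture here
cap.release()
```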
It should be noted that when the binocular camera captures face images, the user needs to cooperate by adjusting the position of the face within the shooting area and the facial expression, so that the binocular camera can capture two face images that meet the face recognition requirements. A face image meeting the requirements may be a complete face image, or a face image containing certain specific facial feature points. When the target face does not meet the shooting requirements of the binocular camera, for example when the binocular camera cannot collect a complete face image or an image containing certain specific facial feature points, the binocular camera may issue a prompt asking the user to adjust position and expression; for example, it may issue a voice prompt whose content is "Please retake" (请重新拍摄).
Optionally, the stereo camera assembly may also consist of multiple cameras arranged in an array; this embodiment of the present application does not limit the type of the stereo camera assembly. For example, the stereo camera assembly may include three cameras that photograph the same area from different viewing angles at the same moment to obtain three face images from different viewing angles. For example, when the stereo camera assembly includes multiple independent cameras, the multiple cameras may be connected to a shooting trigger device that controls them to shoot at the same moment.
In this embodiment of the present application, the face recognition process may include a real-face determination process and an identity-information confirmation process. After the stereo camera assembly captures the m face images of the target face, whether the target face is a real face can be determined based on the m face images (see steps 202 to 204 for the specific process), and the identity information corresponding to the target face can also be determined based on the m face images (see steps 205 to 208 for the specific process).
Step 202: Determine depths of n facial feature points of the target face based on the m face images, n ≥ 2.
The n facial feature points may include at least two of the nose tip, nasion, left eye corner, right eye corner, left mouth corner, right mouth corner, chin center, left earlobe, right earlobe, left cheek, and right cheek.
Optionally, when the m face images are two face images of the target face captured at the same moment by a binocular camera, the depths of the n facial feature points of the target face may be determined by computing the depths of the n facial feature points based on the two face images using binocular stereo vision technology.
Optionally, when the stereo camera assembly captures multiple face images at the same moment, the multiple images may be screened or fused to obtain two face images, and the depths of the n facial feature points are then computed based on those two face images; this computation may likewise use binocular stereo vision technology. For example, when the stereo camera assembly captures three face images at the same moment, two face images meeting the face recognition requirements, such as images in which the n facial feature points are sharp, may be screened from the three, and the depths of the n facial feature points are then computed using binocular stereo vision technology; alternatively, two of the three face images may be fused into one face image, and the depths of the n facial feature points are computed using binocular stereo vision technology based on the fused face image and the remaining one of the three face images.
Binocular stereo vision is an important form of machine vision. Based on the parallax principle, it uses imaging devices to acquire two images of the measured object from different positions and obtains the three-dimensional geometric information of the object by computing the positional disparity between corresponding points in the two images.
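As a hedged aside on the parallax principle just described: for a rectified binocular pair, the depth of a point follows directly from its disparity between the two images. The focal length, baseline, and disparity values below are illustrative assumptions, not parameters from this application.

```python
# Sketch of the parallax principle for a rectified binocular pair:
# depth Z = f * B / d, where f is the focal length in pixels, B the
# baseline between the two cameras, and d the disparity (in pixels)
# between corresponding points.
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("corresponding points must have positive disparity")
    return f_px * baseline_m / disparity_px

# e.g. f = 800 px, baseline = 6 cm, disparity = 24 px -> Z = 2.0 m
print(depth_from_disparity(800.0, 0.06, 24.0))
```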
In this embodiment of the present application, the flowchart of the method for computing the depths of the n facial feature points using binocular stereo vision technology may be as shown in FIG. 3, including:
Step 2021: Determine positions of a first facial feature point in the two face images.
The first facial feature point is any one of the n facial feature points. Optionally, the n facial feature points may each be numbered; for example, the n facial feature points may be denoted p1, p2, ..., pn. The two face images include a first face image and a second face image.
Step 2022: Compute three-dimensional coordinates of the first facial feature point according to its positions in the two face images in combination with camera parameters.
The camera parameters include intrinsic parameters (intrinsics) and extrinsic parameters (extrinsics). The camera intrinsics include the focal length, the image center, the distortion coefficients, and the like; the camera extrinsics include the pitch angle, the tilt angle, and the height.
After the image coordinate system is established, the plane coordinates of the first facial feature point in each of the two images can be determined according to its positions in the two images in combination with the camera parameters. Using the same method as for the first facial feature point, the plane coordinates of the n facial feature points in the first face image can be determined, for example as (u1, v1), (u2, v2), ..., (un, vn), and the plane coordinates of the n facial feature points in the second face image can be determined, for example as (u1', v1'), (u2', v2'), ..., (un', vn').
Optionally, the three-dimensional coordinates of the first facial feature point in the camera coordinate system can be computed from its plane coordinates in the first face image and its plane coordinates in the second face image. Further, the n facial feature points can be matched by their numbers to determine the plane coordinates of the same facial feature point in the two face images, and the three-dimensional coordinates of the n facial feature points in the camera coordinate system can be computed one by one; for example, in the camera coordinate system, the three-dimensional coordinates of the n facial feature points may be (x1, y1, z1), (x2, y2, z2), ..., (xn, yn, zn),
in which the depth coordinates of the n facial feature points are z1, z2, ..., zn.
Further, from the three-dimensional coordinates of the n facial feature points in the camera coordinate system, combined with the camera extrinsics, a rigid transformation (in three-dimensional space, a motion that rotates and translates a geometric object without deforming it is called a rigid transformation) can be applied to compute the three-dimensional coordinates of the n facial feature points in the world coordinate system. In this embodiment of the present application, either the three-dimensional coordinates of the n facial feature points in the camera coordinate system or their three-dimensional coordinates in the world coordinate system can be used to judge whether the face is a three-dimensional face.
The image coordinate system is a coordinate system established with reference to the two-dimensional image captured by the camera; the camera coordinate system is the coordinate system in which the camera measures objects from its own viewpoint, with its origin at the optical center of the camera and its z-axis parallel to the camera's optical axis; the world coordinate system is the real physical coordinate system, that is, the reference frame for the position of the target object.
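As an informal illustration of step 2022, the sketch below triangulates a feature point's camera-frame coordinates from its pixel positions in the two images using OpenCV. The intrinsic matrix K, the 6 cm baseline, and the pixel values are placeholder assumptions, not calibrated values from this application; a real system would use the intrinsics and extrinsics obtained from camera calibration.

```python
# Hedged sketch: triangulate a feature point's 3D camera-frame
# coordinates from its positions in the two images, given the 3x4
# projection matrices P1 and P2 of the calibrated binocular camera.
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])  # placeholder intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                 # first camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.06], [0.0], [0.0]])])  # 6 cm baseline

pt1 = np.array([[320.0], [240.0]])  # feature point in first image (pixels)
pt2 = np.array([[296.0], [240.0]])  # same point in second image

Xh = cv2.triangulatePoints(P1, P2, pt1, pt2)  # homogeneous 4x1 result
X = (Xh[:3] / Xh[3]).ravel()                  # (x, y, z) in the camera frame
depth = X[2]                                  # the depth coordinate z (= 2.0 m here)
```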
Step 2023: Determine the depth of the first facial feature point based on the three-dimensional coordinates.
Optionally, the depth of the first facial feature point may be determined based on its depth coordinate. Correspondingly, the depths of the other facial feature points among the n facial feature points can be obtained by referring to the manner of determining the depth of the first facial feature point, and details are not repeated here.
Step 203: Determine, based on the depths of the n facial feature points, whether the target face is a three-dimensional face.
Optionally, the flowchart of the method for determining, based on the depths of the n facial feature points, whether the target face is a three-dimensional face may be as shown in FIG. 4, including:
Step 2031: Compute a stereoscopy score of the target face based on the depths of the n facial feature points.
The stereoscopy score is a value obtained by scoring the depths of the facial feature points; the higher the stereoscopy score, the more three-dimensional the target face. The stereoscopy score t and the depth coordinates of the facial feature points may satisfy a preset functional relationship, for example t = f(z1, z2, ..., zn), and this functional relationship is the scoring rule for the stereoscopy score. The scoring rule must reflect the three-dimensional shape of a real face and conform to the contour requirements of a real face. The stereoscopy score is positively correlated with the summed depth differences of multiple pairs of facial feature points, where each pair may consist of two of the n facial feature points, and the summed depth differences of multiple pairs refer to the sum of the absolute depth differences over all the pairs.
In this embodiment of the present application, since the stereoscopy score is computed from the depths of the n facial feature points, the stereoscopy score of the target face may also be called the stereoscopy score of the facial feature points.
As an optional embodiment of the present application, five feature points, namely the nose tip, nasion, chin center, left earlobe, and right earlobe, may be selected as the facial feature points for computing the stereoscopy score, where the depth coordinate of the nose tip is z1, that of the nasion is z2, that of the chin center is z3, that of the left earlobe is z4, and that of the right earlobe is z5. The summed depth-coordinate differences of three pairs of feature points, namely nose tip and nasion, chin center and left earlobe, and chin center and right earlobe, may be used as the scoring rule, so the functional expression between the stereoscopy score t of the face and the depth coordinates of the five facial feature points may be:
t = |z1 - z2| + |z3 - z4| + |z3 - z5|
The advantages of computing the stereoscopy score with these five distinctive facial feature points are that the number of feature points is small, the feature information is distinct so the positions of the points in the images are easy to determine, and the depth-coordinate differences between the point pairs are large, making it easy to judge whether the face is a real face.
As another optional embodiment of the present application, five feature points, namely the left earlobe, right earlobe, nose tip, left mouth corner, and right mouth corner, may be selected as the facial feature points for computing the stereoscopy score, where the depth coordinate of the left earlobe is z1, that of the right earlobe is z2, that of the nose tip is z3, that of the left mouth corner is z4, and that of the right mouth corner is z5. The summed depth-coordinate differences of four pairs of feature points, namely nose tip and left earlobe, nose tip and right earlobe, nose tip and left mouth corner, and nose tip and right mouth corner, may be used as the scoring rule, so the functional expression between the stereoscopy score t of the face and the depth coordinates of the five facial feature points may be:
t = |z3 - z1| + |z3 - z2| + |z3 - z4| + |z3 - z5|
It should be noted that the above scoring rules are merely simple examples. The actual scoring rule can be designed according to the face pose, and different face poses may correspond to different scoring rules. After the face images of the target face are acquired, the selected facial feature points and the corresponding scoring rule may be determined according to the face pose in the face images, so as to compute the stereoscopy score of the target face; this embodiment of the present application does not limit this.
Step 2032: Compare the stereoscopy score of the target face with a preset stereoscopy threshold.
The choice of the preset stereoscopy threshold is related to the facial feature points selected for computing the stereoscopy score and to the scoring rule. The larger the preset stereoscopy threshold, the fewer non-real faces are misjudged as real, and the more real faces are missed.
For example, referring to the other optional embodiment in step 2031, when the left earlobe, right earlobe, nose tip, left mouth corner, and right mouth corner are selected as the facial feature points for computing the stereoscopy score, 0.4 times the physical distance between the centers of the left and right eyes may be used as the preset stereoscopy threshold.
Step 2033: When the stereoscopy score of the target face is greater than or equal to the preset stereoscopy threshold, determine that the target face is a three-dimensional face.
Step 2034: When the stereoscopy score of the target face is less than the preset stereoscopy threshold, determine that the target face is not a three-dimensional face.
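The following is a minimal sketch of steps 2031 to 2034 under the second example above: the scoring rule sums the absolute depth differences between the nose tip and the other four points, and the threshold is taken as 0.4 times the interocular distance. The specific depth values are illustrative assumptions.

```python
# Hedged sketch of steps 2031-2034 with the second example scoring rule:
# depths of left earlobe (z1), right earlobe (z2), nose tip (z3),
# left mouth corner (z4), right mouth corner (z5), in meters.
def stereoscopy_score(z1: float, z2: float, z3: float, z4: float, z5: float) -> float:
    return abs(z3 - z1) + abs(z3 - z2) + abs(z3 - z4) + abs(z3 - z5)

def is_three_dimensional(depths, interocular_dist: float) -> bool:
    threshold = 0.4 * interocular_dist  # preset stereoscopy threshold
    return stereoscopy_score(*depths) >= threshold

# A flat photo yields nearly equal depths, so its score stays near zero;
# the real-face depths below give a score of 0.22 >= 0.026, printing True.
print(is_three_dimensional((2.05, 2.05, 1.98, 2.02, 2.02), 0.065))
```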
Further, when it is determined that the target face is not a three-dimensional face, it can be determined that the m face images are not real face images. In this case, the stereo camera assembly may upload the m face images obtained by photographing the target face to a "blacklist" database of the attendance system; when querying the attendance system, an administrator can identify from the face images in the "blacklist" the employees whose attendance was clocked in by proxy, for subsequent handling. Alternatively, when it is determined that the target face is not a three-dimensional face, the stereo camera assembly may immediately issue an alarm to prompt administrators or security personnel to take corresponding measures.
Step 204: When the target face is a three-dimensional face, determine that the m face images are real face images.
It should be noted that when the target face is a three-dimensional face, the possibility that the target face undergoing face recognition is a face photo or face video can be ruled out, so the m face images captured by the stereo camera assembly can be determined to be real face images. Determining that the m face images are real face images amounts to determining that the target face is a real face.
Step 205: Determine a target feature matrix based on at least one of the m face images.
Optionally, feature extraction may be performed on any one of the m face images and the resulting feature matrix determined as the target feature matrix; alternatively, feature extraction is performed on each of the m face images to obtain m feature matrices, and the m feature matrices are fused to obtain the target feature matrix. A facial feature is discriminative representation data extracted from a face image; representation methods include knowledge-based representation and statistical representation, and facial features mainly include visual features, geometric features, pixel statistical features, image algebraic features, and the like. Commonly used feature extraction algorithms include the local binary pattern (LBP) algorithm, the scale-invariant feature transform (SIFT) algorithm, the histograms of oriented gradients (HOG) algorithm, and deep neural network learning algorithms; this embodiment of the present application does not limit the specific implementation of feature extraction.
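As an illustration of one of the listed feature extraction algorithms (LBP), the sketch below computes a histogram of local binary pattern codes with scikit-image as a simple per-image feature vector. The parameters P = 8 and R = 1 are common defaults, not values specified by this application.

```python
# Hedged sketch of LBP-based feature extraction: a histogram of
# "uniform" LBP codes serves as a compact feature vector per face image.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature(gray_face: np.ndarray, P: int = 8, R: int = 1) -> np.ndarray:
    codes = local_binary_pattern(gray_face, P, R, method="uniform")
    # "uniform" codes take values 0 .. P+1, hence P+2 histogram bins.
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist
```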
As an optional embodiment of the present application, the method of fusing the m feature matrices to obtain the target feature matrix may include: performing feature fusion on the m feature matrices using a weighted-sum normalization formula to obtain the target feature matrix.
The weighted-sum normalization formula is:
V = (a1*V1 + a2*V2 + ... + am*Vm) / norm(a1*V1 + a2*V2 + ... + am*Vm)
where V is the target feature matrix, Vi is the i-th of the m feature matrices, ai is the weight coefficient of the i-th feature matrix with 0 ≤ ai ≤ 1, and norm() denotes taking the norm of the vector. Fusing the m feature matrices with the weighted-sum normalization formula does not increase the feature dimensionality while fusing the features, so after the features of multiple face images are fused, the accuracy of face recognition can be improved at the same computational complexity.
For example, assuming the m face images are one visible-light image and one near-infrared image captured by the binocular camera, feature extraction on the visible-light image yields a visible-light feature matrix V1 with weight coefficient a1, and feature extraction on the near-infrared image yields a near-infrared feature matrix V2 with weight coefficient a2. The target feature matrix V obtained by fusing the visible-light feature matrix V1 and the near-infrared feature matrix V2 is then V = (a1*V1 + a2*V2) / norm(a1*V1 + a2*V2). Fusing the visible-light and near-infrared feature matrices combines the insensitivity of the near-infrared image to illumination with the fine facial texture of the visible-light image, improving the accuracy of face recognition.
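A minimal sketch of the weighted-sum normalization fusion for m = 2, assuming the feature matrices are flattened to vectors; the weights a1 and a2 are placeholders to be determined by the training procedure described below.

```python
# Hedged sketch of weighted-sum normalization fusion for two feature
# vectors (visible-light V1 and near-infrared V2).
import numpy as np

def fuse_features(V1: np.ndarray, V2: np.ndarray, a1: float, a2: float) -> np.ndarray:
    weighted = a1 * V1 + a2 * V2
    return weighted / np.linalg.norm(weighted)  # the norm() in the formula above

V = fuse_features(np.array([0.2, 0.5, 0.1]), np.array([0.3, 0.1, 0.4]), 0.7, 0.3)
# The fused vector has the same dimensionality as each input vector.
```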
It should be noted that before the m feature matrices are fused, multiple groups of face images collected by the stereo camera assembly are selected, each group including m face images, that is, the number of face images in each group is the same as the number of feature matrices to be fused; the multiple groups of face images are then input into a preset model for data training to determine the weight coefficients. The collected groups of face images may include face images collected at different moments.
For example, assuming the stereo camera assembly is a binocular camera including one visible-light camera and one near-infrared camera, the multiple groups of face images include groups collected at different moments, each group including one visible-light image and one near-infrared image; the groups collected at different moments are separately input into the preset model for data training to determine the weight coefficients.
Optionally, the groups of face images collected at the same moment include different face poses and facial expressions. Setting the step size of a1 and a2 to 0.1, the possible values of a1 and a2 are {0, 0.1, 0.2, ..., 0.9, 1}, so there are 11 × 11 possible combinations of a1 and a2. All combinations are traversed, the face recognition success rate under each combination is counted, and the values of a1 and a2 in the combination with the highest face recognition success rate are determined as the weight coefficients for that moment. Since the illumination intensity differs at different moments, the weight coefficients can be adjusted according to the actual illumination intensity to bias the selection between the visible-light and near-infrared feature matrices; for example, when the illumination is strong, the weight coefficient a1 of the visible-light feature matrix V1 is made larger than the weight coefficient a2 of the near-infrared feature matrix V2, and when the illumination is weak, a1 is made smaller than a2. This embodiment of the present application does not limit the specific values of the weight coefficients.
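A minimal sketch of the traversal just described, assuming a hypothetical helper evaluate_success_rate that runs recognition over the training groups for a given weight pair and returns the success rate:

```python
# Hedged sketch of the 11 x 11 grid search over (a1, a2) weight pairs.
# evaluate_success_rate(groups, a1, a2) is a hypothetical stand-in for
# running recognition on the training groups with the given weights.
import itertools

def train_weights(groups, evaluate_success_rate):
    grid = [round(0.1 * k, 1) for k in range(11)]  # {0, 0.1, ..., 1.0}
    best = max(itertools.product(grid, grid),
               key=lambda ab: evaluate_success_rate(groups, ab[0], ab[1]))
    return best  # (a1, a2) with the highest recognition success rate
```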
Optionally, when the stereo camera assembly captures multiple face images at the same moment, feature fusion may be performed on those face images. For example, when the stereo camera assembly captures three face images at the same moment, feature fusion may be performed on the three face images; this embodiment of the present application does not limit the number of face images subjected to feature fusion.
Step 206: Match the target feature matrix against the feature matrices corresponding to the face images in the information database.
Optionally, the target feature matrix may be compared one by one with the feature matrices corresponding to the face images in the information database to obtain the maximum similarity between the target feature matrix and those feature matrices, and to determine the feature matrix to which the maximum similarity corresponds.
Step 207: When the similarity between the target feature matrix and the feature matrix corresponding to a certain face image in the information database is greater than or equal to a preset similarity threshold, acquire the identity information corresponding to that face image.
When the similarity between the target feature matrix and the feature matrix corresponding to a certain face image in the information database is greater than or equal to the preset similarity threshold, the target feature matrix is determined to match the feature matrix of that face image; that is, the information database stores a face image of the target face.
Step 208: Determine the identity information as the identity information corresponding to the target face.
In a face-based attendance process, determining the identity information corresponding to a certain face image in the information database as the identity information corresponding to the target face means that the attendance check succeeds.
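A minimal sketch of steps 206 to 208. The application does not name the similarity measure, so cosine similarity is assumed here, and the repository layout (a dict mapping identity information to stored feature vectors) is a hypothetical convenience:

```python
# Hedged sketch of matching the target feature vector against the
# information database and returning the matched identity, or None.
import numpy as np

def identify(V: np.ndarray, repository: dict, threshold: float = 0.8):
    best_id, best_sim = None, -1.0
    for identity, feat in repository.items():
        sim = float(np.dot(V, feat) /
                    (np.linalg.norm(V) * np.linalg.norm(feat)))  # cosine similarity
        if sim > best_sim:
            best_id, best_sim = identity, sim
    # Only accept the best match if it clears the preset similarity threshold.
    return best_id if best_sim >= threshold else None
```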
Optionally, the face recognition method provided in the embodiments of the present application can be applied not only to face-based attendance but also to security fields such as face-based access control; this embodiment of the present application does not limit the application scenarios of the face recognition method.
It should be noted that the order of the steps of the face recognition method provided in the embodiments of the present application can be adjusted appropriately. For example, steps 205 to 208 may be performed before steps 201 to 204, that is, the target face is first identified against the information database, and it is then determined whether the target face is a real face. Steps may also be added or removed as appropriate. For example, when the method is applied to face-based access control, once it is determined in step 207 that the similarity between the target feature matrix and the feature matrix corresponding to a certain face image in the information database is greater than or equal to the preset similarity threshold, the access control can simply be released (the door opened), without acquiring the identity information corresponding to that face image or determining the identity information corresponding to the target face, that is, without performing step 208. Any variation readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application, and details are not repeated here.
In summary, with the face recognition method provided in the embodiments of the present application, the depths of n facial feature points of a target face can be determined based on m face images obtained by photographing the target face with a stereo camera assembly, and whether the target face is a three-dimensional face can be determined. When the target face is a three-dimensional face, the m captured face images are determined to be real face images; whether a captured image is a real face image can thus be determined without a non-contact temperature sensing device, which reduces the complexity of the determination and lowers the cost of face recognition. In addition, performing face recognition with the target feature matrix obtained by fusing the feature matrices of the m face images improves the accuracy of face recognition.
FIG. 5 is a schematic structural diagram of a face recognition apparatus 40 according to an embodiment of the present application. As shown in FIG. 5, the apparatus 40 may include:
a stereo camera assembly 401, configured to photograph a target face to obtain m face images of the target face, m ≥ 2;
a first determining module 402, configured to determine depths of n facial feature points of the target face based on the m face images, n ≥ 2;
a judging module 403, configured to determine, based on the depths of the n facial feature points, whether the target face is a three-dimensional face;
a second determining module 404, configured to determine, when the target face is a three-dimensional face, that the m face images are real face images.
In summary, the face recognition apparatus provided in the embodiments of the present application can, based on the m face images obtained by photographing the target face with the stereo camera assembly, determine the depths of the n facial feature points of the target face through the first determining module, determine whether the target face is a three-dimensional face through the judging module, and determine through the second determining module, when the target face is a three-dimensional face, that the m captured face images are real face images. Whether a captured image is a real face image can thus be determined without a non-contact temperature sensing device, which reduces the complexity of the determination and lowers the cost of face recognition.
Optionally, the stereo camera assembly is a binocular camera, and the binocular camera may be configured to:
photograph the target face to obtain two face images captured at the same moment.
Correspondingly, the first determining module may be configured to:
compute the depths of the n facial feature points based on the two face images using binocular stereo vision technology.
Further, the first determining module may also be configured to:
determine positions of a first facial feature point in the two face images, the first facial feature point being any one of the n facial feature points; compute three-dimensional coordinates of the first facial feature point according to its positions in the two face images in combination with camera parameters; and determine the depth of the first facial feature point based on the three-dimensional coordinates.
Optionally, the binocular camera may include one visible-light camera and one near-infrared camera, and the two face images include a visible-light image captured by the visible-light camera and a near-infrared image captured by the near-infrared camera.
Optionally, as shown in FIG. 6, the judging module 403 may include:
a computing submodule 4031, configured to compute a stereoscopy score of the target face based on the depths of the n facial feature points;
a comparing submodule 4032, configured to compare the stereoscopy score of the target face with a preset stereoscopy threshold;
a second determining submodule 4033, configured to determine that the target face is a three-dimensional face when the stereoscopy score of the target face is greater than or equal to the preset stereoscopy threshold;
a third determining submodule 4034, configured to determine that the target face is not a three-dimensional face when the stereoscopy score of the target face is less than the preset stereoscopy threshold.
Further, as shown in FIG. 7, the apparatus 40 may further include:
a third determining module 405, configured to determine a target feature matrix based on at least one of the m face images;
a matching module 406, configured to match the target feature matrix against the feature matrices corresponding to the face images in the information database;
an acquiring module 407, configured to acquire the identity information corresponding to a certain face image in the information database when the similarity between the target feature matrix and the feature matrix corresponding to that face image is greater than or equal to a preset similarity threshold;
a fourth determining module 408, configured to determine the identity information as the identity information corresponding to the target face.
Optionally, as shown in FIG. 8, the third determining module 405 may include:
a feature extraction submodule 4051, configured to perform feature extraction on each of the m face images to obtain m feature matrices;
a feature fusion submodule 4052, configured to perform feature fusion on the m feature matrices to obtain the target feature matrix.
The feature fusion submodule may be configured to:
perform feature fusion on the m feature matrices using the weighted-sum normalization formula to obtain the target feature matrix, where the weighted-sum normalization formula is V = (a1*V1 + a2*V2 + ... + am*Vm) / norm(a1*V1 + a2*V2 + ... + am*Vm), V is the target feature matrix, Vi is the i-th of the m feature matrices, ai is the weight coefficient of the i-th feature matrix with 0 ≤ ai ≤ 1, and norm() denotes taking the norm of the vector.
Correspondingly, as shown in FIG. 9, the apparatus 40 may further include:
a selecting module 409, configured to select multiple groups of face images collected by the stereo camera assembly, each group of face images including m face images;
a fifth determining module 410, configured to input the multiple groups of face images into a preset model for data training to determine the weight coefficients.
Optionally, the n facial feature points may include at least two of the nose tip, nasion, left eye corner, right eye corner, left mouth corner, right mouth corner, chin center, left earlobe, right earlobe, left cheek, and right cheek.
In summary, the face recognition apparatus provided in the embodiments of the present application can, based on the m face images obtained by photographing the target face with the stereo camera assembly, determine the depths of the n facial feature points of the target face through the first determining module, determine whether the target face is a three-dimensional face through the judging module, and determine through the second determining module, when the target face is a three-dimensional face, that the m captured face images are real face images. Whether a captured image is a real face image can thus be determined without a non-contact temperature sensing device, which reduces the complexity of the determination and lowers the cost of face recognition; performing face recognition with the target feature matrix obtained by fusing the feature matrices of the m face images improves the accuracy of face recognition.
With regard to the apparatus in the foregoing embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
An embodiment of the present application provides a computer device. As shown in FIG. 10, the computer device 01 includes at least one processor 12 and at least one memory 16,
wherein
the memory 16 is configured to store a computer program; and
the processor 12 is configured to execute the program stored in the memory 16 to implement the face recognition method described in the foregoing embodiments. For example, the method may include:
photographing a target face by a stereo camera assembly to obtain m face images of the target face, m ≥ 2;
determining depths of n facial feature points of the target face based on the m face images, n ≥ 2;
determining, based on the depths of the n facial feature points, whether the target face is a three-dimensional face;
when the target face is a three-dimensional face, determining that the m face images are real face images.
Specifically, the processor 12 includes one or more processing cores. The processor 12 performs various functional applications and data processing by running the computer program stored in the memory 16, the computer program including software programs and units.
The computer program stored in the memory 16 includes software programs and units. Specifically, the memory 16 may store an operating system 162 and an application program unit 164 required by at least one function. The operating system 162 may be an operating system such as Real Time eXecutive (RTX), LINUX, UNIX, WINDOWS, or OS X. The application program unit 164 may include a shooting unit 164a, a first determining unit 164b, a judging unit 164c, and a second determining unit 164d.
The shooting unit 164a has the same or similar functions as the stereo camera assembly 401.
The first determining unit 164b has the same or similar functions as the first determining module 402.
The judging unit 164c has the same or similar functions as the judging module 403.
The second determining unit 164d has the same or similar functions as the second determining module 404.
An embodiment of the present application provides a non-volatile computer-readable storage medium storing code instructions that are executed by a processor to perform the face recognition method involved in the foregoing method embodiments.
An embodiment of the present application provides a chip including a programmable logic circuit and/or program instructions; when the chip runs, it is used to implement the face recognition method involved in the foregoing method embodiments.
An embodiment of the present application provides a computer program; when the computer program is executed by a processor, the face recognition method involved in the foregoing method embodiments is implemented.
A person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely optional embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included in the protection scope of the present application.
Claims (22)
- A face recognition method, the method comprising: photographing a target face by a stereo camera assembly to obtain m face images of the target face, m ≥ 2; determining depths of n facial feature points of the target face based on the m face images, n ≥ 2; determining, based on the depths of the n facial feature points, whether the target face is a three-dimensional face; and when the target face is a three-dimensional face, determining that the m face images are real face images.
- The method according to claim 1, wherein the photographing a target face by a stereo camera assembly to obtain m face images of the target face comprises: photographing the target face by a binocular camera to obtain two face images captured at the same moment; and the determining depths of n facial feature points of the target face based on the m face images comprises: computing the depths of the n facial feature points based on the two face images using binocular stereo vision technology.
- The method according to claim 2, wherein the computing the depths of the n facial feature points based on the two face images using binocular stereo vision technology comprises: determining positions of a first facial feature point in the two face images, the first facial feature point being any one of the n facial feature points; computing three-dimensional coordinates of the first facial feature point according to the positions of the first facial feature point in the two face images in combination with camera parameters; and determining the depth of the first facial feature point based on the three-dimensional coordinates.
- The method according to claim 2 or 3, wherein the binocular camera comprises one visible-light camera and one near-infrared camera, and the two face images comprise a visible-light image captured by the visible-light camera and a near-infrared image captured by the near-infrared camera.
- The method according to claim 1, wherein the determining, based on the depths of the n facial feature points, whether the target face is a three-dimensional face comprises: computing a stereoscopy score of the target face based on the depths of the n facial feature points; comparing the stereoscopy score of the target face with a preset stereoscopy threshold; when the stereoscopy score of the target face is greater than or equal to the preset stereoscopy threshold, determining that the target face is a three-dimensional face; and when the stereoscopy score of the target face is less than the preset stereoscopy threshold, determining that the target face is not a three-dimensional face.
- The method according to claim 1, further comprising: determining a target feature matrix based on at least one of the m face images; matching the target feature matrix against feature matrices corresponding to face images in an information database; when the similarity between the target feature matrix and the feature matrix corresponding to a certain face image in the information database is greater than or equal to a preset similarity threshold, acquiring identity information corresponding to that face image; and determining the identity information as the identity information corresponding to the target face.
- The method according to claim 6, wherein the determining a target feature matrix based on at least one of the m face images comprises: performing feature extraction on each of the m face images to obtain m feature matrices; and performing feature fusion on the m feature matrices to obtain the target feature matrix.
- The method according to claim 8, wherein before the performing feature fusion on the m feature matrices using the weighted-sum normalization formula to obtain the target feature matrix, the method further comprises: selecting multiple groups of face images collected by the stereo camera assembly, each group of face images including m face images; and inputting the multiple groups of face images into a preset model for data training to determine the weight coefficients.
- The method according to claim 1, wherein the n facial feature points comprise at least two of the nose tip, nasion, left eye corner, right eye corner, left mouth corner, right mouth corner, chin center, left earlobe, right earlobe, left cheek, and right cheek.
- A face recognition apparatus, the apparatus comprising: a stereo camera assembly, configured to photograph a target face to obtain m face images of the target face, m ≥ 2; a first determining module, configured to determine depths of n facial feature points of the target face based on the m face images, n ≥ 2; a judging module, configured to determine, based on the depths of the n facial feature points, whether the target face is a three-dimensional face; and a second determining module, configured to determine, when the target face is a three-dimensional face, that the m face images are real face images.
- The apparatus according to claim 11, wherein the stereo camera assembly is a binocular camera, and the binocular camera is configured to: photograph the target face to obtain two face images captured at the same moment; and the first determining module is configured to: compute the depths of the n facial feature points based on the two face images using binocular stereo vision technology.
- The apparatus according to claim 12, wherein the first determining module is further configured to: determine positions of a first facial feature point in the two face images, the first facial feature point being any one of the n facial feature points; compute three-dimensional coordinates of the first facial feature point according to the positions of the first facial feature point in the two face images in combination with camera parameters; and determine the depth of the first facial feature point based on the three-dimensional coordinates.
- The apparatus according to claim 12 or 13, wherein the binocular camera comprises one visible-light camera and one near-infrared camera, and the two face images comprise a visible-light image captured by the visible-light camera and a near-infrared image captured by the near-infrared camera.
- The apparatus according to claim 11, wherein the judging module comprises: a computing submodule, configured to compute a stereoscopy score of the target face based on the depths of the n facial feature points; a comparing submodule, configured to compare the stereoscopy score of the target face with a preset stereoscopy threshold; a second determining submodule, configured to determine that the target face is a three-dimensional face when the stereoscopy score of the target face is greater than or equal to the preset stereoscopy threshold; and a third determining submodule, configured to determine that the target face is not a three-dimensional face when the stereoscopy score of the target face is less than the preset stereoscopy threshold.
- The apparatus according to claim 11, further comprising: a third determining module, configured to determine a target feature matrix based on at least one of the m face images; a matching module, configured to match the target feature matrix against feature matrices corresponding to face images in an information database; an acquiring module, configured to acquire identity information corresponding to a certain face image in the information database when the similarity between the target feature matrix and the feature matrix corresponding to that face image is greater than or equal to a preset similarity threshold; and a fourth determining module, configured to determine the identity information as the identity information corresponding to the target face.
- The apparatus according to claim 16, wherein the third determining module comprises: a feature extraction submodule, configured to perform feature extraction on each of the m face images to obtain m feature matrices; and a feature fusion submodule, configured to perform feature fusion on the m feature matrices to obtain the target feature matrix.
- The apparatus according to claim 18, further comprising: a selecting module, configured to select multiple groups of face images collected by the stereo camera assembly, each group of face images including m face images; and a fifth determining module, configured to input the multiple groups of face images into a preset model for data training to determine the weight coefficients.
- The apparatus according to claim 11, wherein the n facial feature points comprise at least two of the nose tip, nasion, left eye corner, right eye corner, left mouth corner, right mouth corner, chin center, left earlobe, right earlobe, left cheek, and right cheek.
- A computer device, comprising at least one processor and at least one memory, wherein the memory is configured to store a computer program; and the processor is configured to execute the program stored in the memory to implement the face recognition method according to any one of claims 1 to 10.
- A non-volatile computer-readable storage medium, the computer-readable storage medium storing code instructions that are executed by a processor to perform the face recognition method according to any one of claims 1 to 10.
Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201710872594.9 | 2017-09-25 | |
CN201710872594.9A CN109558764B (zh) | 2017-09-25 | 2017-09-25 | Face recognition method and apparatus, and computer device