WO2021017286A1 - Face recognition method and apparatus, electronic device, and computer non-volatile readable storage medium


Info

Publication number
WO2021017286A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2019/117656
Other languages
English (en)
French (fr)
Inventor
赵莫言
王红伟
Original Assignee
平安科技(深圳)有限公司
Application filed by 平安科技(深圳)有限公司
Priority to US17/266,587 (US11734954B2)
Priority to JP2021504210A (JP7106742B2)
Priority to SG11202103132XA
Publication of WO2021017286A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification

Definitions

  • This application relates to the field of face recognition technology, and in particular to a face recognition method, device, electronic equipment, and computer non-volatile readable storage medium.
  • Face recognition is a kind of biometric recognition technology based on human facial feature information.
  • The recognition rate of face recognition is usually high enough in a normal evaluation environment where the face is complete, but that only holds for evaluations under such normal conditions.
  • The inventor of the present application realized that in scenarios where the face is deliberately decorated so that the captured face image is incomplete, the recognition rate drops sharply; that is, the collected incomplete face image usually cannot be matched to the preset faces in the database, so face recognition cannot be completed.
  • an object of the present application is to provide a face recognition method, device, electronic equipment, and computer non-volatile readable storage medium.
  • In a first aspect, a face recognition method includes: when a first face image in which some facial features are occluded is received, detecting all key points on the first face image to obtain multiple detectable key points and the coordinates of each detectable key point; based on each detectable key point, obtaining the recognition score of each detectable key point and the numbers of the missing key points from a preset key point information database; according to the recognition score of each detectable key point, obtaining an impact score of the occluded area of the partial facial features on recognizing the face; when the impact score is higher than a predetermined score threshold, obtaining, based on the numbers of the missing key points, multiple target key points among the multiple detectable key points that have a predetermined facial feature association relationship with the missing key points; according to the coordinates of the multiple target key points, obtaining, from a predetermined facial feature template library, a target facial feature template whose position combination degree with the multiple target key points is greater than a predetermined combination degree threshold; and stitching the target facial feature template with the multiple target key points on the first face image to obtain a second face image, so that face recognition is performed after all key points are detected on the second face image.
  • In a second aspect, a face recognition device includes: a detection module, configured to, when a first face image in which some facial features are occluded is received, detect all key points on the first face image to obtain multiple detectable key points and the coordinates of each detectable key point; a first acquisition module, configured to obtain, based on each detectable key point, the recognition score of each detectable key point and the numbers of the missing key points from a preset key point information database; a second acquisition module, configured to obtain, according to the recognition score of each detectable key point, the impact score of the occluded area of the partial facial features on recognizing the face; a third acquisition module, configured to, when the impact score is higher than a predetermined score threshold, obtain, based on the numbers of the missing key points, multiple target key points among the multiple detectable key points that have a predetermined facial feature association relationship with the missing key points; a fourth acquisition module, configured to obtain, according to the coordinates of the multiple target key points, a target facial feature template from a predetermined facial feature template library whose position combination degree with the multiple target key points is greater than a predetermined combination degree threshold; and a stitching module, configured to stitch the target facial feature template with the multiple target key points on the first face image to obtain a second face image, so that face recognition is performed after all key points are detected on the second face image.
  • In a third aspect, an electronic device includes: a processing unit; and a storage unit for storing a face recognition program of the processing unit; wherein the processing unit is configured to, by executing the face recognition program, perform the method in the first aspect or any possible implementation of the first aspect.
  • In a fourth aspect, a computer non-volatile readable storage medium stores a face recognition program that, when executed by a processing unit, implements the method in the first aspect or any possible implementation of the first aspect.
  • In the above technical solutions, when the face image is occluded, the missing features can be accurately supplemented, so that face recognition can be performed efficiently and accurately.
  • Fig. 1 schematically shows a flow chart of a face recognition method.
  • Fig. 2 schematically shows an example diagram of an application scenario of a face recognition method.
  • Fig. 3 schematically shows a flow chart of a method for acquiring multiple target key points of predetermined facial feature association relationships.
  • Fig. 4 schematically shows a block diagram of a face recognition device.
  • Fig. 5 shows a block diagram of an electronic device for implementing the aforementioned face recognition method according to an exemplary embodiment.
  • Fig. 6 shows a schematic diagram of a computer non-volatile readable storage medium for implementing the aforementioned face recognition method according to an exemplary embodiment.
  • This example embodiment first provides a face recognition method. The face recognition method can be run on a server, a server cluster, or a cloud server, etc.; of course, those skilled in the art may also run the method of this application on other platforms as required, which is not specifically limited in this exemplary embodiment. As shown in FIG. 1, the face recognition method may include the following steps: Step S110, when a first face image in which some facial features are occluded is received, detect all key points on the first face image to obtain multiple detectable key points and the coordinates of each detectable key point; Step S120, based on each detectable key point, obtain the recognition score of each detectable key point and the numbers of the missing key points from a preset key point information database; Step S130, according to the recognition score of each detectable key point, obtain the impact score of the area where the partial facial features are occluded on recognizing the face; Step S140, when the impact score is higher than a predetermined score threshold, obtain, based on the numbers of the missing key points, multiple target key points among the multiple detectable key points that have a predetermined facial feature association relationship with the missing key points; Step S150, according to the coordinates of the multiple target key points, obtain, from a predetermined facial feature template library, a target facial feature template whose position combination degree with the multiple target key points is greater than a predetermined combination degree threshold; Step S160, stitch the target facial feature template with the multiple target key points on the first face image to obtain a second face image, so that face recognition is performed after all key points are detected on the second face image.
  • In the above face recognition method, first, when a first face image in which some facial features are occluded is received, all key points are detected on the first face image to obtain multiple detectable key points and the coordinates of each detectable key point; in this way, the detectable key points that are not occluded in the first face image can be found. Then, based on each detectable key point, the recognition score of each detectable key point and the numbers of the missing key points are obtained from the preset key point information database; in this way, a recognition score describing each detectable key point's influence on face recognition is obtained, which can be used to accurately evaluate the recognizability of the occluded face, and the numbers of the missing key points on the first face image can be used to efficiently locate the missing key points. Furthermore, according to the recognition score of each detectable key point, the impact score of the occluded area of the partial facial features on recognizing the face is obtained; when the impact score is higher than a predetermined score threshold, multiple target key points among the detectable key points that have a predetermined facial feature association relationship with the missing key points are obtained based on the numbers of the missing key points; in this way, when the missing key points affect recognition of the first face image beyond the acceptable limit, the target key points that lie on the same facial feature as the missing key points are obtained. Then, according to the coordinates of the multiple target key points, a target facial feature template whose position combination degree with the multiple target key points is greater than a predetermined combination degree threshold can be obtained from the predetermined facial feature template library; in this way, the position combination degree with the multiple target key points allows the target facial feature template that fits the occluded area to be obtained accurately. Finally, the target facial feature template can be stitched with the multiple target key points on the first face image to obtain a second face image, and face recognition is performed after all key points are detected on the second face image, so that when the face image is occluded, the missing features are accurately supplemented and face recognition is carried out efficiently and accurately.
  • In step S110, when a first face image in which some facial features are occluded is received, all key points are detected on the first face image to obtain multiple detectable key points and the coordinates of each detectable key point.
  • In the implementation of this example, referring to FIG. 2, when the server 201 receives the first face image, in which some facial features are occluded, sent by the server 202, the server 201 performs detection of all key points on the first face image to obtain multiple detectable key points and the coordinates of each detectable key point. In this way, in the subsequent steps, the server 201 can analyze, based on the multiple detectable key points and their coordinates, the influence of the occluded facial feature area in the first face image and take corresponding supplementary recognition measures. It can be understood that, if conditions permit, the server 202 can also directly detect all key points on the first face image to obtain the multiple detectable key points and their coordinates, and then proceed with the subsequent steps. The server 201 can be any device with processing capability, such as a computer or a micro-processing unit, which is not particularly limited here; the server 202 can be any device capable of sending instructions and storing data, such as a mobile phone or a computer, which is likewise not particularly limited here.
  • The first face image in which part of the facial features are occluded is, for example, a face image in which part of the eye region is occluded. The first face image is obtained by cropping the face area out of a picture containing the face, using a coordinate frame given by face detection, and scaling it to a fixed size. In this way, a first face image of uniform size is obtained, so that position information such as the coordinates of key points on the face image can be obtained accurately. Detecting all key points on the first face image means using existing face key point detection technology to locate a predetermined set of key points on the first face image, such as points at the corners of the eyes and along the eye, mouth, and nose contours, and obtaining the coordinates of each key point at the same time. The multiple detectable key points are the key points that can still be detected on the first face image in which part of the facial features are occluded: when all key points are detected, the multiple unoccluded key points are found and the coordinates of each detectable key point are obtained. In this way, the information about the missing key points can be obtained accurately in the subsequent steps.
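  • The following is a minimal sketch of step S110, not part of the original disclosure: it assumes dlib's publicly available 68-point landmark predictor stands in for the "existing face key point detection technology", and the binary occlusion mask used to decide which landmarks count as detectable is a hypothetical placeholder, since the text does not specify how occluded key points are identified.

```python
# Sketch of step S110: detect face landmarks and keep only the unoccluded ones.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_keypoints(image: np.ndarray, occlusion_mask: np.ndarray) -> dict:
    """Return {keypoint_number: (x, y)} for the key points that are not occluded.

    `occlusion_mask` is a hypothetical binary image (1 = occluded pixel) aligned
    with `image`; key points falling inside it are treated as undetectable.
    """
    faces = detector(image, 1)
    if not faces:
        return {}
    shape = predictor(image, faces[0])        # fixed-size face crop assumed
    detectable = {}
    for number in range(shape.num_parts):     # key points are numbered in advance
        p = shape.part(number)
        if occlusion_mask[p.y, p.x] == 0:     # keep only unoccluded key points
            detectable[number] = (p.x, p.y)
    return detectable
```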
  • In step S120, based on each detectable key point, the recognition score of each detectable key point and the numbers of the missing key points are obtained from a preset key point information database. In the implementation of this example, the preset key point information database stores, for all face key points, information such as the number of each key point and a recognition score describing how much that key point contributes to face recognition. The higher a key point's recognition score, the more important it is for recognition; for example, if a recognition rule pays close attention to features around the eyes, the key points on the eyes have higher recognition scores. The missing key points are all key points that are expected to be detected on a face except for the key points that can actually be detected, i.e., the key points located in the occluded area of the face image. By looking up each detectable key point in the preset key point information database, the recognition score of each detectable key point and the numbers of the missing key points can be obtained accurately, so that the degree to which the occluded area of partial facial features affects face recognition can be determined accurately in the subsequent steps.
  • In one implementation of this example, obtaining the recognition score of each detectable key point and the numbers of the missing key points from the preset key point information database based on each detectable key point includes: looking up each detectable key point in turn in the preset key point information database to obtain the recognition score stored in association with each detectable key point; and taking the key points in the preset key point information database other than the detectable key points as the missing key points. The preset key point information database stores all the key point information corresponding to a face image; by looking up each detectable key point in turn, the recognition score stored in association with each detectable key point can be obtained accurately, and at the same time the key points of a face image other than the detectable key points can be obtained accurately and efficiently as the missing key points.
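  • As an illustration of this lookup, and assuming the key point information database can be represented as a simple mapping from key point number to facial feature and recognition score (the actual schema is not given in the text), a sketch might look like this:

```python
# Assumed schema of the preset key point information database (illustrative values only).
KEYPOINT_DB = {
    36: {"feature": "left_eye", "score": 0.9},
    37: {"feature": "left_eye", "score": 0.8},
    48: {"feature": "mouth",    "score": 0.7},
    # ... one entry per predefined face key point
}

def lookup_scores_and_missing(detectable: dict):
    """Return the recognition score of each detectable key point and the numbers of
    the missing key points (all database key points that were not detected)."""
    scores = {n: KEYPOINT_DB[n]["score"] for n in detectable if n in KEYPOINT_DB}
    missing = [n for n in KEYPOINT_DB if n not in detectable]
    return scores, missing
```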
  • In step S130, according to the recognition score of each detectable key point, the impact score of the area where the partial facial features are occluded on recognizing the face is obtained. In the implementation of this example, this impact score is a score for the degree of difficulty that the occluded area of the partial facial features brings to face recognition; the higher the impact score, the more difficult face recognition is. The impact score may be obtained, for example, by summing the recognition scores of the detectable key points to obtain a total recognition score and then taking the reciprocal of that total as the impact score. By obtaining the impact score, whether the first face image can still be recognized can be determined accurately.
  • In one implementation of this example, obtaining the impact score of the occluded area of the partial facial features on recognizing the face according to the recognition score of each detectable key point includes: summing the recognition scores of the detectable key points to obtain a total recognition score; and subtracting a predetermined recognition score threshold from the total recognition score, taking the difference between the total recognition score and the predetermined recognition score threshold as the impact score of the occluded area on recognizing the face. Subtracting the predetermined recognition score threshold, which characterizes whether the face is recognizable, from the total recognition score yields this difference; a negative difference indicates that the face cannot be recognized, and the smaller the value, the harder recognition is. In this way, the difference accurately characterizes the impact of the occluded area of the partial facial features on recognizing the face.
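  • A small sketch of this difference-based impact calculation follows; the threshold value is a made-up placeholder, and reading "impact higher than the predetermined score threshold" as "the margin above is negative" is an interpretation of the sign convention described here, not something the text states explicitly.

```python
# Sketch of step S130 under the difference-based variant described above.
RECOGNITION_SCORE_THRESHOLD = 20.0   # hypothetical value characterizing recognizability

def impact_of_occlusion(scores: dict) -> float:
    total = sum(scores.values())                   # sum of detectable key point scores
    return total - RECOGNITION_SCORE_THRESHOLD     # negative => face cannot be recognized

def needs_feature_supplement(scores: dict) -> bool:
    # Supplementation is triggered when the occlusion impact is too large; here this
    # is read as the margin above being negative.
    return impact_of_occlusion(scores) < 0.0
```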
  • In step S140, when the impact score is higher than a predetermined score threshold, multiple target key points among the multiple detectable key points that have a predetermined facial feature association relationship with the missing key points are obtained based on the numbers of the missing key points. In the implementation of this example, the predetermined facial feature association relationship indicates whether key points come from the same feature on the face: if two key points come from the same facial feature, they have the predetermined facial feature association relationship; for example, the multiple key points located on an eyebrow are associated in this way. Each face key point is numbered in advance in order, so the numbers of the key points connected to a missing key point can be obtained accurately from the missing key point's number. Then, using the numbers recorded for each facial feature in the preset key point information database, the multiple target key points among the detectable key points that have the predetermined facial feature association relationship with the missing key points can be found accurately. When the impact score is higher than the predetermined score threshold, it indicates that the occluded area of the partial facial features prevents the first face image from being recognized. By obtaining the multiple target key points associated with the missing key points, the positions of the missing key points can be judged from the positions of the target key points, and the missing key points can then be analyzed in the subsequent steps to derive supplementary features for the missing features.
  • In one implementation of this example, referring to FIG. 3, when the impact score is higher than the predetermined score threshold, obtaining, based on the numbers of the missing key points, the multiple target key points among the detectable key points that have the predetermined facial feature association relationship with the missing key points includes: step 310, when the impact score is higher than the predetermined score threshold, grouping the missing key points that come from the same facial feature into one group; step 320, searching the preset key point information database for the numbers of all same-group key points, i.e., those that come from the same facial feature as the numbers of each group of missing key points; and step 330, taking the key points among those numbers, excluding the numbers of each group of missing key points, as the target key points corresponding to each group of missing key points. When the impact score is higher than the predetermined score threshold, i.e., when the missing key points affect face recognition too much, grouping the missing key points that come from the same facial feature allows the numbers of all same-group key points to be found efficiently in the preset key point information database, and the key points other than each group's missing key points can then be taken accurately as the target key points corresponding to that group. Grouping the missing key points first makes the target key points easier to find and manage, which ensures the accuracy and efficiency of the analysis based on the missing key points in the subsequent steps.
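  • A sketch of this grouping-based lookup, reusing the assumed number-to-feature database schema from the earlier snippet, might look as follows:

```python
# Sketch of steps 310-330: group missing key points by facial feature, then take the
# remaining (unoccluded) key points of each feature as that group's target key points.
from collections import defaultdict

def target_keypoints_by_group(missing: list, detectable: dict, keypoint_db: dict) -> dict:
    """keypoint_db: {number: {"feature": str, "score": float}} (assumed schema)."""
    # Step 310: group missing key points that come from the same facial feature.
    groups = defaultdict(list)
    for n in missing:
        groups[keypoint_db[n]["feature"]].append(n)

    # Steps 320-330: for each group, look up all key point numbers of that feature
    # and keep the ones that are not missing and were actually detected.
    targets = {}
    for feature, group in groups.items():
        same_feature = [n for n, info in keypoint_db.items() if info["feature"] == feature]
        targets[feature] = [n for n in same_feature if n not in group and n in detectable]
    return targets
```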
  • In another implementation of this example, obtaining the multiple target key points that have the predetermined facial feature association relationship with the missing key points includes: when the impact score is higher than the predetermined score threshold, searching the preset key point information database, based on the numbers of the missing key points, for the numbers of all key points that come from the same facial feature as the number of each missing key point; and, according to those numbers, obtaining from the multiple detectable key points the multiple target key points that have the predetermined facial feature association relationship with each missing key point. By looking up, for each missing key point in turn, the numbers of all key points that come from the same facial feature, the numbers of all key points belonging to each missing key point's facial feature are obtained accurately; then, by searching those numbers for the detectable key points, the multiple target key points that have the predetermined facial feature association relationship with each missing key point can be obtained accurately.
  • In one implementation of this example, the numbers of the multiple detectable key points are obtained, and obtaining, from the multiple detectable key points, the multiple target key points that have the predetermined facial feature association relationship with each missing key point according to the numbers of all key points from the same facial feature includes: searching the numbers of the multiple detectable key points for the numbers of all key points that come from the same facial feature as the number of each missing key point, to obtain the numbers of the found detectable key points; and taking the detectable key points corresponding to the found numbers as the multiple target key points that have the predetermined facial feature association relationship with each missing key point. In this way, based on the numbers of all the detectable key points already found, comparing them with the numbers of all key points from the same facial feature as each missing key point yields the numbers of the detectable key points corresponding to each missing key point, and the detectable key points corresponding to those numbers are obtained as the multiple target key points that have the predetermined facial feature association relationship with each missing key point.
  • In step S150, according to the coordinates of the multiple target key points, a target facial feature template whose position combination degree with the multiple target key points is greater than a predetermined combination degree threshold is obtained from a predetermined facial feature template library. In the implementation of this example, the predetermined facial feature template library stores facial feature templates of different sizes, for example of human eyes and eyebrows. From the coordinates of each target key point, the coordinates of the several target key points that come from the same facial feature can be obtained; for example, the coordinates of the several target key points that come from a person's eyebrow. Since the target key points are the key points of the part of a facial feature that remains after partial occlusion, the distances between these target key points can be calculated from their coordinates and then compared, in terms of similarity, with the mutual distances between the calibrated key points on the facial feature templates of different sizes stored in the predetermined facial feature template library. A template whose similarity exceeds a predetermined threshold matches the facial feature from which the several target key points come to a high degree, i.e., its position combination degree with the multiple target key points is greater than the predetermined combination degree threshold. In this way, a target facial feature template whose position combination degree with the multiple target key points is greater than the predetermined combination degree threshold can be obtained. Using the acquired target facial feature template for face recognition in the subsequent steps effectively guarantees the recognizability of a face image with an occluded area.
  • In one implementation of this example, obtaining, from the predetermined facial feature template library according to the coordinates of the multiple target key points, a target facial feature template whose position combination degree with the multiple target key points is greater than the predetermined combination degree threshold includes: obtaining, according to the coordinates of each target key point, the coordinates of the multiple target key points that come from the target facial feature; calculating, according to those coordinates, the first distance between every two of the multiple target key points; obtaining, from the predetermined facial feature template library, the predetermined facial feature template set corresponding to the target facial feature, and the second distance between every two of the multiple preset key points that have the same numbers as the multiple target key points; calculating, according to the first distances and the second distances, the position combination degree between the multiple target key points and each template in the predetermined facial feature template set; and taking, from the predetermined facial feature template set, a template whose position combination degree with the multiple target key points is greater than the predetermined combination degree threshold as the target facial feature template.
  • The target facial feature is, for example, a facial feature such as an eye or a mouth that is to be analyzed. By obtaining the coordinates of the several target key points that come from the same target facial feature, the coordinates of the multiple target key points of that feature are obtained. Then, the first distance between every two of the multiple target key points is calculated; at the same time, the predetermined facial feature template set corresponding to the target facial feature is obtained from the predetermined facial feature template library, together with the second distance between every two of the multiple preset key points that have the same numbers as the target key points. By comparing the differences between the first distances and the second distances, the position combination degree between the multiple target key points of the target facial feature and the correspondingly numbered key points of each template in the predetermined facial feature template set can be calculated accurately. Then, a template in the predetermined facial feature template set whose combination degree with the multiple target key points on the target facial feature is greater than the predetermined combination degree threshold is taken as the target facial feature template, to be used for face recognition in the subsequent steps.
  • The predetermined combination degree threshold is a preset threshold characterizing the degree to which a facial feature template can be combined with the face image; a position combination degree greater than this threshold indicates that the template cannot be combined or combines poorly, and a position combination degree less than this threshold indicates that the template can be combined well.
  • The method for calculating, from the first distances and the second distances, the position combination degree between the multiple target key points and each template in the predetermined facial feature template set may be: for each pair of key point numbers shared by a first distance and a second distance, take the difference between that first distance and that second distance, and take the sum of all such differences as the position combination degree. Alternatively, take the sum of all such differences and use its difference from a predetermined similarity threshold as the position combination degree.
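  • The following sketch illustrates the distance comparison: it computes the pairwise first and second distances and sums their differences as the position combination degree, and simply returns the closest-fitting template, since the exact threshold convention for selecting a template is left open by the text; the layout of the template set is an assumption.

```python
# Sketch of step S150: match the visible target key points against templates by
# comparing pairwise distances; a smaller summed difference means a closer fit.
from itertools import combinations
import numpy as np

def pairwise_distances(points: dict) -> dict:
    # points: {keypoint_number: (x, y)} -> {(n1, n2): distance}
    return {
        (a, b): float(np.linalg.norm(np.subtract(points[a], points[b])))
        for a, b in combinations(sorted(points), 2)
    }

def best_template(target_points: dict, template_set: list) -> dict:
    """template_set: list of {keypoint_number: (x, y)} templates of different sizes,
    assumed to use the same key point numbering as the face image."""
    first = pairwise_distances(target_points)
    scored = []
    for template in template_set:
        second = pairwise_distances({n: template[n] for n in target_points})
        degree = sum(abs(first[pair] - second[pair]) for pair in first)  # combination degree
        scored.append((degree, template))
    return min(scored, key=lambda t: t[0])[1]   # smallest summed difference = best fit
```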
  • In step S160, the target facial feature template is stitched with the multiple target key points on the first face image to obtain a second face image, so that face recognition is performed after all key points are detected on the second face image. In the implementation of this example, by moving the preset key points on the target facial feature template to the positions of the correspondingly numbered target key points on the first face image, the target facial feature template is stitched with the multiple target key points on the first face image to obtain the second face image. Key point detection can then be performed on the second face image, the relative positions and relative sizes of representative facial parts (such as the eyebrows, eyes, nose, and mouth) can be extracted as features, supplemented by the shape information of the face contour, and face recognition can be performed on these features. In this way, when some features of the face image are missing, face recognition is performed efficiently and accurately to obtain the user's identity.
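  • A sketch of the stitching step follows, assuming OpenCV and NumPy are available: a similarity transform estimated from the matching key point numbers moves the template onto the face, and the occluded pixels are then replaced by the warped template; the mask-based copy is an illustrative simplification of the stitching described above.

```python
# Sketch of step S160: warp the matched template onto the first face image and fill
# in the occluded region to obtain the second face image.
import cv2
import numpy as np

def stitch_template(first_face: np.ndarray, template_img: np.ndarray,
                    template_points: dict, target_points: dict,
                    occlusion_mask: np.ndarray) -> np.ndarray:
    numbers = sorted(set(template_points) & set(target_points))
    src = np.float32([template_points[n] for n in numbers])
    dst = np.float32([target_points[n] for n in numbers])

    # Similarity/affine transform aligning the template's preset key points with the
    # correspondingly numbered target key points on the face image.
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)
    h, w = first_face.shape[:2]
    warped = cv2.warpAffine(template_img, matrix, (w, h))

    # Second face image: original pixels where visible, warped template where occluded.
    second_face = first_face.copy()
    second_face[occlusion_mask > 0] = warped[occlusion_mask > 0]
    return second_face
```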
  • Referring to FIG. 4, the face recognition device may include a detection module 410, a first acquisition module 420, a second acquisition module 430, a third acquisition module 440, a fourth acquisition module 450, and a stitching module 460. The detection module 410 can be used to, when a first face image in which some facial features are occluded is received, detect all key points on the first face image to obtain multiple detectable key points and the coordinates of each detectable key point; the first acquisition module 420 can be used to obtain, based on each detectable key point, the recognition score of each detectable key point and the numbers of the missing key points from a preset key point information database; the second acquisition module 430 can be used to obtain, according to the recognition score of each detectable key point, the impact score of the area where the partial facial features are occluded on recognizing the face; the third acquisition module 440 can be used to, when the impact score is higher than a predetermined score threshold, obtain, based on the numbers of the missing key points, the multiple target key points among the detectable key points that have a predetermined facial feature association relationship with the missing key points; the fourth acquisition module 450 can be used to obtain, according to the coordinates of the multiple target key points, a target facial feature template from a predetermined facial feature template library whose position combination degree with the multiple target key points is greater than a predetermined combination degree threshold; and the stitching module 460 can be used to stitch the target facial feature template with the multiple target key points on the first face image to obtain a second face image, so that face recognition is performed after all key points are detected on the second face image.
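  • As an illustration only, the six modules of FIG. 4 could be wired together roughly as follows, with the step functions sketched earlier injected as callables; the interfaces are assumptions chosen for the sketch rather than the patent's actual module signatures.

```python
# Sketch of the module wiring in FIG. 4; each field corresponds to one module.
from dataclasses import dataclass
from typing import Callable

@dataclass
class FaceRecognitionDevice:
    detect: Callable          # detection module 410
    lookup: Callable          # first acquisition module 420
    impact: Callable          # second acquisition module 430
    targets: Callable         # third acquisition module 440
    match_template: Callable  # fourth acquisition module 450
    stitch: Callable          # stitching module 460

    def recognize(self, first_face, occlusion_mask, recognize_full_face: Callable):
        detectable = self.detect(first_face, occlusion_mask)
        scores, missing = self.lookup(detectable)
        if self.impact(scores) >= 0:                 # occlusion is tolerable, recognize directly
            return recognize_full_face(first_face)
        per_feature_targets = self.targets(missing, detectable)
        second_face = first_face
        for feature, target_nums in per_feature_targets.items():
            pts = {n: detectable[n] for n in target_nums}
            template_img, template_pts = self.match_template(feature, pts)
            second_face = self.stitch(second_face, template_img, template_pts,
                                      pts, occlusion_mask)
        return recognize_full_face(second_face)      # recognition on the supplemented image
```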
  • The technical solution according to the embodiments of the present application can thus be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions that cause a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
  • the electronic device 500 according to this embodiment of the present application will be described below with reference to FIG. 5.
  • the electronic device 500 shown in FIG. 5 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present application.
  • the electronic device 500 is represented in the form of a general-purpose computing device.
  • the components of the electronic device 500 may include, but are not limited to: the aforementioned at least one processing unit 510, the aforementioned at least one storage unit 520, and a bus 530 connecting different system components (including the storage unit 520 and the processing unit 510).
  • the storage unit stores program code, and the program code can be executed by the processing unit 510, so that the processing unit 510 executes the various exemplary methods described in the “exemplary method” section of this specification.
  • For example, the processing unit 510 may perform step S110 as shown in FIG. 1: when a first face image in which some facial features are occluded is received, detect all key points on the first face image to obtain multiple detectable key points and the coordinates of each detectable key point; step S120: based on each detectable key point, obtain the recognition score of each detectable key point and the numbers of the missing key points from a preset key point information database; step S130: according to the recognition score of each detectable key point, obtain the impact score of the area where the partial facial features are occluded on recognizing the face; step S140: when the impact score is higher than a predetermined score threshold, obtain, based on the numbers of the missing key points, multiple target key points among the detectable key points that have a predetermined facial feature association relationship with the missing key points; step S150: according to the coordinates of the multiple target key points, obtain, from a predetermined facial feature template library, a target facial feature template whose position combination degree with the multiple target key points is greater than a predetermined combination degree threshold; step S160: stitch the target facial feature template with the multiple target key points on the first face image to obtain a second face image, so that face recognition is performed after all key points are detected on the second face image.
  • the storage unit 520 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 5201 and/or a cache storage unit 5202, and may further include a read-only storage unit (ROM) 5203.
  • The storage unit 520 may also include a program/utility tool 5204 having a set of (at least one) program modules 5205; such program modules 5205 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each of these examples, or some combination thereof, may include an implementation of a network environment.
  • The bus 530 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
  • The electronic device 500 can also communicate with one or more external devices 700 (such as keyboards, pointing devices, Bluetooth devices, etc.), with one or more devices that enable a user to interact with the electronic device 500, and/or with any device (such as a router or modem) that enables the electronic device 500 to communicate with one or more other computing devices. Such communication can be performed through an input/output (I/O) interface 550.
  • the electronic device 500 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet) through the network adapter 560. As shown in the figure, the network adapter 560 communicates with other modules of the electronic device 500 through the bus 530.
  • The exemplary embodiments described herein can be implemented by software, or by software combined with the necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions that cause a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present application.
  • In the exemplary embodiments of the present application, referring to FIG. 6, a computer-readable storage medium is also provided, which stores a program product capable of implementing the above-mentioned methods of this specification; the computer-readable storage medium may be a computer non-volatile readable storage medium. In some possible implementations, the various aspects of the present application can also be implemented in the form of a program product, which includes program code; when the program product runs on a terminal device, the program code causes the terminal device to execute the steps according to the various exemplary embodiments of the present application described in the "Exemplary Method" section above. Referring to FIG. 6, a program product 600 for implementing the above methods according to an embodiment of the present application is described; it may adopt a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer.
  • the program product of this application is not limited to this.
  • the readable storage medium can be any tangible medium that contains or stores a program, and the program can be used by or combined with an instruction execution system, device, or device.
  • the program product can use any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • The readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or component, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and readable program code is carried therein.
  • This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the readable signal medium may also be any readable medium other than a readable storage medium, and the readable medium may send, propagate, or transmit a program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the readable medium can be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the foregoing.
  • the program code used to perform the operations of this application can be written in any combination of one or more programming languages.
  • the programming languages include object-oriented programming languages-such as Java, C++, etc., as well as conventional procedural programming languages.
  • the program code can be executed entirely on the client computing device, partly executed on the client device, executed as a stand-alone software package, partly executed on the client computing device and partly executed on the remote computing device, or entirely on the remote computing device or server Executed on.
  • The remote computing device can be connected to the client computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computing device (for example, through the Internet using an Internet service provider).


Abstract

This application relates to a face recognition method and apparatus, an electronic device, and a computer non-volatile readable storage medium, and belongs to the technical field of face recognition. The method includes: when a first face image is received, performing key point detection; obtaining the recognition score of each detectable key point and the numbers of the missing key points; obtaining the impact score of the area where some facial features are occluded on recognizing the face; when the impact score is higher than a predetermined score threshold, obtaining, among the multiple detectable key points, multiple target key points that have a predetermined facial feature association relationship with the missing key points; obtaining a target facial feature template whose position combination degree with the multiple target key points is greater than a predetermined combination degree threshold; and stitching the target facial feature template with the first face image to obtain a second face image, so that face recognition is performed after all key points are detected on the second face image. When facial features are missing, this application supplements the features accurately, so that face recognition can be performed efficiently and accurately.

Description

人脸识别方法、装置、电子设备及计算机非易失性可读存储介质 技术领域
本申请要求2019年08月01日递交、发明名称为“人脸识别方法、装置、存储介质及电子设备”的中国专利申请201910709169.7的优先权,在此通过引用将其全部内容合并于此。
本申请涉及人脸识别技术领域,尤其涉及一种人脸识别方法、装置、电子设备及计算机非易失性可读存储介质。
背景技术
人脸识别是基于人的脸部特征信息进行身份识别的一种生物识别技术。
目前人脸识别识别率在人脸完整的正常测评环境下通常足够高,但是那是在正常环境下的测评。
发明概述
技术问题
本申请的发明人意识到,在一些故意装饰人脸图像不完整的场景下,识别率会大大降低,即利用采集到的不完整人脸图像通常匹配不到数据库中的预设人脸,从而完成人脸识别。
问题的解决方案
技术解决方案
为了解决上述技术问题,本申请的一个目的在于提供一种人脸识别方法、装置、电子设备及计算机非易失性可读存储介质。
其中,本申请所采用的技术方案为:
第一方面,一种人脸识别方法,包括:当接收到存在部分人脸特征被遮挡的第一人脸图像,从所述第一人脸图像上进行所有关键点检测,得到多个可检测关键点及每个所述可检测关键点的坐标;基于每个所述可检测关键点,从预设关 键点信息数据库中,获取每个所述可检测关键点的识别分数及缺失关键点的编号;根据每个所述可检测关键点的识别分数,获取所述部分人脸特征被遮挡区域对识别人脸的影响分数;当所述影响分数高于预定分数阈值时,基于所述缺失关键点的编号,获取所述多个可检测关键点中与所述缺失关键点具有预定人脸特征关联关系的多个目标关键点;根据所述多个目标关键点的坐标,从预定人脸特征模板库中,获取与所述多个目标关键点的位置结合度大于预定结合度阈值的目标人脸特征模板;将所述目标人脸特征模板与所述第一人脸图像上的所述多个目标关键点缝合,得到第二人脸图像,以根据所述第二人脸图像检测所有关键点后进行人脸识别。
第二方面,一种人脸识别装置,包括:检测模块,用于当接收到存在部分人脸特征被遮挡的第一人脸图像,从所述第一人脸图像上进行所有关键点检测,得到多个可检测关键点及每个所述可检测关键点的坐标;第一获取模块,用于基于每个所述可检测关键点,从预设关键点信息数据库中,获取每个所述可检测关键点的识别分数及缺失关键点的编号;第二获取模块,用于根据每个所述可检测关键点的识别分数,获取所述部分人脸特征被遮挡区域对识别人脸的影响分数;第三获取模块,用于当所述影响分数高于预定分数阈值时,基于所述缺失关键点的编号,获取所述多个可检测关键点中与所述缺失关键点具有预定人脸特征关联关系的多个目标关键点;第四获取模块,用于根据所述多个目标关键点的坐标,从预定人脸特征模板库中,获取与所述多个目标关键点的位置结合度大于预定结合度阈值的目标人脸特征模板;缝合模块,用于将所述目标人脸特征模板与所述第一人脸图像上的所述多个目标关键点缝合,得到第二人脸图像,以根据所述第二人脸图像检测所有关键点后进行人脸识别。
第三方面,一种电子设备,包括:处理单元;以及存储单元,用于存储所述处理单元的人脸识别程序;其中,所述处理单元配置为经由执行所述人脸识别程序来执行第一方面或第一方面任意可能的实现方式中的方法。
第四方面,一种计算机非易失性可读存储介质,其上存储有人脸识别程序,所述人脸识别程序被处理单元执行时实现第一方面或第一方面任意可能的实现方式中的方法。
在上述技术方案中,可以在人脸图像被遮挡时,通过进行特征准确补充,进而高效、准确地进行人脸识别。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本申请。
发明的有益效果
对附图的简要说明
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并于说明书一起用于解释本申请的原理。
图1示意性示出一种人脸识别方法的流程图。
图2示意性示出一种人脸识别方法的应用场景示例图。
图3示意性示出一种预定人脸特征关联关系的多个目标关键点获取方法流程图。
图4示意性示出一种人脸识别装置的方框图。
图5示出根据示例性实施例的用于实现上述人脸识别方法的电子设备的框图。
图6示出根据示例性实施例的用于实现上述人脸识别方法的计算机非易失性可读存储介质的示意图。
通过上述附图,已示出本申请明确的实施例,后文中将有更详细的描述,这些附图和文字描述并不是为了通过任何方式限制本申请构思的范围,而是通过参考特定实施例为本领域技术人员说明本申请的概念。
发明实施例
本发明的实施方式
这里将详细地对示例性实施例执行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。
现在将参考附图更全面地描述示例实施方式。然而,示例实施方式能够以多种形式实施,且不应被理解为限于在此阐述的范例;相反,提供这些实施方式使得本申请将更加全面和完整,并将示例实施方式的构思全面地传达给本领域的技术人员。所描述的特征、结构或特性可以以任何合适的方式结合在一个或更多实施方式中。
本示例实施方式中首先提供了人脸识别方法,该人脸识别方法可以运行于服务器,也可以运行于服务器集群或云服务器等,当然,本领域技术人员也可以根据需求在其他平台运行本申请的方法,本示例性实施例中对此不做特殊限定。参考图1所示,该人脸识别方法可以包括以下步骤:步骤S110,当接收到存在部分人脸特征被遮挡的第一人脸图像,从所述第一人脸图像上进行所有关键点检测,得到多个可检测关键点及每个所述可检测关键点的坐标;步骤S120,基于每个所述可检测关键点,从预设关键点信息数据库中,获取每个所述可检测关键点的识别分数及缺失关键点的编号;步骤S130,根据每个所述可检测关键点的识别分数,获取所述部分人脸特征被遮挡的区域对识别人脸的影响分数;步骤S140,当所述影响分数高于预定分数阈值时,基于所述缺失关键点的编号,获取所述多个可检测关键点中与所述缺失关键点具有预定人脸特征关联关系的多个目标关键点;步骤S150,根据所述多个目标关键点的坐标,从预定人脸特征模板库中,获取与所述多个目标关键点的位置结合度大于预定结合度阈值的目标人脸特征模板;步骤S160,将所述目标人脸特征模板与所述第一人脸图像上的所述多个目标关键点缝合,得到第二人脸图像,已根据所述第二人脸图像检测所有关键点后进行人脸识别。
上述人脸识别方法中,首先,当接收到存在部分人脸特征被遮挡的第一人脸图像,从所述第一人脸图像上进行所有关键点检测,得到多个可检测关键点及每个所述可检测关键点的坐标;这样可以检测出存在部分人脸特征被遮挡区域的第一人脸图像中没有被遮挡的可检测关键点。然后,基于每个所述可检测关键点,从预设关键点信息数据库中,获取每个所述可检测关键点的识别分数及缺失关键点的编号;这样可以获得每个可检测关键点的对人脸识别的影响力的识别分数,进而可以用于准确评估被遮挡的人脸的可识别性,同时获取第一人脸 图像上的缺失关键点的编号可以用于高效获取缺失关键点的位置。进而,根据每个所述可检测关键点的识别分数,获取所述部分人脸特征被遮挡区域对识别人脸的影响分数;当所述影响分数高于预定分数阈值时,基于所述缺失关键点的编号,获取所述多个可检测关键点中与所述缺失关键点具有预定人脸特征关联关系的多个目标关键点;这样可以在缺失关键点对第一人图像的识别影响超过限值时获取与缺失关键点关联的同一人脸特征上的目标关键点。进而可以根据所述多个目标关键点的坐标,从预定人脸特征模板库中,获取与所述多个目标关键点的位置结合度大于预定结合度阈值的目标人脸特征模板;这样可以通过与多个目标关键点的位置结合度准确地获取到适配被遮挡区域的目标人脸特征模板。进而,可以将所述目标人脸特征模板与所述第一人脸图像上的所述多个目标关键点缝合,得到第二人脸图像,已根据所述第二人脸图像检测所有关键点后进行人脸识别,这样可以在人脸图像被遮挡时,通过进行特征准确补充,进而高效、准确地进行人脸识别。
下面,将结合附图对本示例实施方式中上述人脸识别方法中的各步骤进行详细的解释以及说明。
在步骤S110中,当接收到存在部分人脸特征被遮挡的第一人脸图像,从所述第一人脸图像上进行所有关键点检测,得到多个可检测关键点及每个所述可检测关键点的坐标。本示例的实施方式中,参考图2所示,当服务器201接收到服务器202发送的存在部分人脸特征被遮挡的第一人脸图像,服务器201从所述第一人脸图像上进行所有关键点检测,得到多个可检测关键点及每个所述可检测关键点的坐标。这样可以在后续步骤中,由服务器201基于多个可检测关键点及每个所述可检测关键点的坐标分析第一人脸图像中人脸特征被遮挡区域的影响及采取相应的识别补充措施。可以理解,在后续步骤中,在条件允许的情况下,也可以由服务器202直接从所述第一人脸图像上进行所有关键点检测,得到多个可检测关键点及每个所述可检测关键点的坐标,然后进行后续步骤。其中,服务器201可以是任何具有处理能力的设备,例如,电脑、微处理单元等,在此不做特殊限定,服务器202可以是任何具有指令发送、数据存储能力的设备,例如手机、电脑等,在此不做特殊限定。存在部分人脸特征被遮挡区域的第一人 脸图像就是例如人脸的部分眼睛部位被遮挡的人脸图像。该第一人脸图像是从包括人脸图像的图片上,经过基于人脸检测的坐标框,将人脸区域抠取出来,缩放到固定尺寸后得到的。这样可以得到统一大小的第一人脸图像,进而,可以准确地获取人脸图像上关键点的例如坐标等位置信息。从第一人脸图像上进行所有关键点检测,就是利用现有的人脸关键点检测技术,定位第一人脸图像上的预定数目个眼、口、鼻轮廓等人脸上例如眼角、眼角轮廓上的点等预定的多个关键点,同时获得每个关键点的坐标。多个可检测关键点就是存在部分人脸特征被遮挡区域的第一人脸图像上可以检测到的关键点。在检测所有关键点时,可以检测到没有被遮挡的多个可检测关键点,同时得到每个可检测关键点的坐标。这样可以在后续步骤中,准确地获取缺失的关键点的信息。
在步骤S120中,基于每个所述可检测关键点,从预设关键点信息数据库中,获取每个所述可检测关键点的识别分数及缺失关键点的编号。本示例的实施方式中,预设关键点信息数据库存储有所有人脸关键点的如每个关键点的编号及每个关键点在人脸识别时对人脸识别的贡献程度的识别分数。
关键点的识别分数越高说明该关键点在人脸识别时越重要,例如,某个识别规则中眼睛上的特征关注度高,则眼睛上的关键点的识别分数高;或者眼睛上某几个关键点比较重要,则对应识别分数越高。缺失关键点就是人脸上所有预定检测的关键点中除去可检测关键点之外的关键点,也就是人脸图像上被遮挡区域的存在的关键点。通过从预设关键点信息数据库中,查找每个可检测关键点,然后就可以准确地得到每个可检测关键点的识别分数及缺失关键点的编号。进而可以在后续步骤中准确地判断部分人脸特征被遮挡区域对识别人脸的影响程度。
本示例的一种实施方式中,基于每个所述可检测关键点,从预设关键点信息数据库中,获取每个所述可检测关键点的识别分数及缺失关键点的编号,包括:从预设关键点信息数据库中,依次查找每个所述可检测关键点,以获取每个所述可检测关键点关联存储的识别分数;将所述预设关键点信息数据库中,除所述可检测关键点之外的其它关键点作为所述缺失关键点。预设关键点信息数据库中存储有对应于一副人脸图像的所有关键点信息,通过依次查找每个所述可 检测关键点,可以准确地获取每个可检测关键点关联存储的识别分数;同时,可以准确、高效地得到一副人脸图像中除可检测关键点之外的其它关键点作为所述缺失关键点。
在步骤S130中,根据每个所述可检测关键点的识别分数,获取所述部分人脸特征被遮挡的区域对识别人脸的影响分数。本示例的实施方式中,部分人脸特征被遮挡区域对识别人脸的影响分数,就是在进行人脸识别时,部分人脸特征被遮挡区域对人脸识别带来的困难程度的分数。该影响分数越高,人脸识别越困难。影响分数的获取方法可以是例如通过每个可检测关键点的识别分数求和,得到识别分数总和,然后,求该识别分数总和的倒数作为影响分数。通过获取该影响分数,可以准确判断第一人脸图像是否可以被识别。
本示例的一种实施方式中,根据每个所述可检测关键点的识别分数,获取所述部分人脸特征被遮挡的区域对识别人脸的影响分数,包括:将每个所述可检测关键点的识别分数的求和,得到识别分数总和;用所述识别分数总和减去预定识别分数阈值,得到所述识别分数总和与所述预定识别分数阈值的差值,作为所述部分人脸特征被遮挡区域对识别人脸的影响分数。用所述识别分数总和减去用于表征人脸可识别的预定识别分数阈值,可以得到识别分数总和与所述预定识别分数阈值的差值,该差值为负值说明不可被识别,且负值越小,说明越难被识别。这样基于该差值可以准确地表征部分人脸特征被遮挡区域对识别人脸的影响分数。
在步骤S140中,当所述影响分数高于预定分数阈值时,基于所述缺失关键点的编号,获取所述多个可检测关键点中与所述缺失关键点具有预定人脸特征关联关系的多个目标关键点。本示例的实施方式中,预定人脸特征关联关系就是是否是来自于人脸上同一个特征,如果两个关键点来在于同一个人脸特征说明这两个关键点具有预定人脸特征关联关系,例如,来源于眉毛上的多个关键点具有预定人脸特征关联关系。每个人脸关键点按照顺序事先编号,通过缺失关键点的编号可以准确地获取到与其相连接的几个编号的关键点。进而,根据预设关键点信息数据库,通过同一人脸特征上的编号记录,可以准确地查找到多个可检测关键点中与缺失关键点具有预定人脸特征关联关系的多个目标关键点 。当影响分数高于预定分数阈值时,说明部分人脸特征被遮挡区域导致第一人脸图像无法进行人脸识别。通过获取与缺失关键点具有预定人脸特征关联关系的多个目标关键点,可以根据目标关键点的位置判断缺失关键点的位置,进而在后续步骤中进行缺失关键点的分析计算,得到缺失特征的补充特征。
本示例的一种实施方式中,参考图3所示,当所述影响分数高于预定分数阈值时,基于所述缺失关键点的编号,获取所述多个可检测关键点中与所述缺失关键点具有预定人脸特征关联关系的多个目标关键点,包括:步骤310,当所述影响分数高于预定分数阈值时,将来源于同一人脸特征的所述缺失关键点分为一组;步骤320,从预设关键点信息数据库中查找与每组所述缺失关键点的编号来源于同一人脸特征的所有同组关键点的编号;步骤330,将所述所有同组关键点的编号中除去每组所述缺失关键点的编号的其它关键点,作为每组缺失关键点对应的所述目标关键点。当所述影响分数高于预定分数阈值时,即当缺失关键点对人脸识别的影响过大时,将来源于同一人脸特征的缺失关键点分为一组,进而,可以从基于分组,高效地从预设关键点信息数据库中查找与每组缺失关键点的编号来源于同一人脸特征的所有同组关键点的编号;然后,可以准确地将所有同组关键点的编号中除去每组缺失关键点的编号的其它关键点,作为每组缺失关键点对应的目标关键点。这样可以通过首先对缺失关键点进行分组,获取到每组的目标关键点,便于目标关键点的查找,同时,便于目标关键点的管理,保证后续步骤中,基于缺失关键点进行分析的准确性和效率。
本示例的一种实施方式中,当所述影响分数高于预定分数阈值时,基于所述缺失关键点的编号,获取所述多个可检测关键点中与所述缺失关键点具有预定人脸特征关联关系的多个目标关键点,包括:当所述影响分数高于预定分数阈值时,基于所述缺失关键点的编号,从预设关键点信息数据库中查找与每个所述缺失关键点的编号来源于同一人脸特征的所有关键点的编号;根据与每个所述缺失关键点的编号来源于同一人脸特征的所有关键点的编号,从所述多个可检测关键点中获取与每个所述缺失关键点具有预定人脸特征关联关系的多个目标关键点。这样可以通过从预设关键点信息数据库中,依次查找与每个缺失关键点的编号来源于同一人脸特征的所有关键点的编号,准确获取到每个关键点对 应的来源于同一人脸特征的所有关键点的编号。进而,从与每个缺失关键点的编号来源于同一人脸特征的所有关键点的编号中,通过查找可检测关键点,可以准确地获取到与每个缺失关键点具有预定人脸特征关联关系的多个目标关键点。
本示例的一种实施方式中,根据所述与每个所述缺失关键点的编号来源于同一人脸特征的所有关键点的编号,从所述多个可检测关键点中获取与每个所述缺失关键点具有预定人脸特征关联关系的多个目标关键点,包括:从所述多个可检测关键点的编号中,查找与每个所述缺失关键点的编号来源于同一人脸特征的所有关键点的编号,得到查找到的可检测关键点的编号;将所述查找到的可检测关键点的编号对应的可检测关键点,作为与每个所述缺失关键点具有预定人脸特征关联关系的多个目标关键点。
这样可以基于已经检测到的所有可检测关键点的编号,通过利用每个缺失关键点的编号来源于同一人脸特征的所有关键点的编号进行对比,查找到与每个缺失关键点对应的可检测关键点的编号。进而,获取到查找到的可检测关键点的编号对应的可检测关键点,作为与每个缺失关键点具有预定人脸特征关联关系的多个目标关键点。
在步骤S150中,根据所述多个目标关键点的坐标,从预定人脸特征模板库中,获取与所述多个目标关键点的位置结合度大于预定结合度阈值的目标人脸特征模板。本示例的实施方式中,预定人脸特征模板库中存储了不同大小的例如人的眼睛、眉毛的人脸特征模板。根据每个目标关键点的坐标,可以给出来源于同一人脸特征的几个目标关键点的坐标,例如,可以给出到来源于人的眉毛的几个目标关键点的坐标。由于目标关键点是人脸特征中部分被遮挡后剩余部分的关键点,通过来源于同一人脸特征的几个目标关键点的坐标,可以计算这几个目标关键点相互之间的距离,然后,可以与预定人脸特征模板库中存储的不同大小的人脸特征模板上标定的关键点之间的相互距离,进行相似度计算。相似度超过预定阈值的模板可以与几个目标关键点来源的人脸特征具有很高的匹配度,也就是与多个目标关键点的位置结合度大于预定结合度阈值。这样可以获取到与多个目标关键点的位置结合度大于预定结合度阈值的目标人脸特征 模板。在后续步骤中利用获取到的目标人脸特征模板进行人脸识别,有效保证存在被遮挡区域的人脸图像的可识别性。
本示例的一种实施方式中,根据所述多个目标关键点的坐标,从预定人脸特征模板库中,获取与所述多个目标关键点的位置结合度大于预定结合度阈值的目标人脸特征模板,包括:根据每个所述目标关键点的坐标,获取来源于目标人脸特征的多个目标关键点的坐标;根据所述来源于目标人脸特征的多个目标关键点的坐标,计算所述多个目标关键点中每两个关键点之间的第一距离;从预定人脸特征模板库中,获取与所述目标人脸特征对应的预定人脸特征模板集中,与所述多个目标关键点相同编号的多个预设关键点中每两个关键点之间的第二距离;根据所述第一距离和所述第二距离,计算所述多个目标关键点与所述预定人脸特征模板集中每个模板的位置结合度;将所述预定人脸特征模板集中,与所述多个目标关键点的位置结合度大于预定结合度阈值的模板,作为所述目标人脸特征模板。
目标人脸特征就是例如准备进行分析处理的眼睛、嘴巴等人脸特征。通过获取每个目标关键点中来源于同一目标人脸特征的几个目标关键点的坐标,可以获取到来源于目标人脸特征的多个目标关键点的坐标。进而,通过计算多个目标关键点中每两个关键点之间的第一距离;同时从预定人脸特征模板库中,获取与目标人脸特征对应的预定人脸特征模板集中,与多个目标关键点相同编号的多个预设关键点中每两个关键点之间的第二距离;通过第一距离和第二距离的差值的对比,可以准确地计算目标人脸特征中的多个目标关键点与预定人脸特征模板集中每个模板的对应编号的关键点的位置结合度。然后,就可以将预定人脸特征模板集中,与目标人脸特征上的多个目标关键点的位置结合度大于预定结合度阈值的模板,作为目标人脸特征模板,用于在后续步骤中进行人脸识别。其中,预定结合度阈值为预设的表征人脸特征模板的人脸图像可结合度的阈值,位置结合度大于该阈值说明不可结合或者结合情况不佳,位置结合度小于该阈值数目可以良好的结合。
根据所述第一距离和所述第二距离,计算所述多个目标关键点与所述预定人脸特征模板集中每个模板的位置结合度的方法可以是将第一距离和第二距离来源 的关键点编号相同的第一距离与第二距离求差,得到所有的第一距离与第二距离的差值的和,作为所述位置结合度。或者,将第一距离和第二距离来源的关键点编号相同的第一距离与第二距离求差,得到所有的第一距离与第二距离的差值的和与预定相似度阈值的差值,作为所述位置结合度。
在步骤S160中,将所述目标人脸特征模板与所述第一人脸图像上的所述多个目标关键点缝合,得到第二人脸图像,已根据所述第二人脸图像检测所有关键点后进行人脸识别。
本示例的实施方式中,通过将目标人脸特征模板上的预设关键点移动到与第一人脸图像上的多个相同编号的目标关键点的位置上,可以实现将目标人脸特征模板与第一人脸图像上的多个目标关键点缝合,得到第二人脸图像。进而可以对第二人脸图像进行关键点检测,然后提取人脸面部具有代表性的部位(例如眉毛、眼睛、鼻子、嘴巴等)的相对位置和相对大小作为特征,再辅以人脸轮廓的形状信息作为特征,进行人脸识别。这样通过在人脸图像存在部分特征缺失时,高效、准确地进行人脸识别,获得用户身份。
本申请还提供了一种人脸识别装置。参考图4所示,该人脸识别装置可以包括检测模块410、第一获取模块420、第二获取模块430、第三获取模块440、第四获取模块450及缝合模块460。其中:检测模块410可以用于当接收到存在部分人脸特征被遮挡的第一人脸图像,从所述第一人脸图像上进行所有关键点检测,得到多个可检测关键点及每个所述可检测关键点的坐标;第一获取模块420可以用于基于每个所述可检测关键点,从预设关键点信息数据库中,获取每个所述可检测关键点的识别分数及缺失关键点的编号;第二获取模块430可以用于根据每个所述可检测关键点的识别分数,获取所述部分人脸特征被遮挡的区域对识别人脸的影响分数;第三获取模块440可以用于当所述影响分数高于预定分数阈值时,基于所述缺失关键点的编号,获取所述多个可检测关键点中与所述缺失关键点具有预定人脸特征关联关系的多个目标关键点;第四获取模块450可以用于根据所述多个目标关键点的坐标,从预定人脸特征模板库中,获取与所述多个目标关键点的位置结合度大于预定结合度阈值的目标人脸特征模板;缝合模块460可以用于将所述目标人脸特征模板与所述第一人脸图像上的所述多个目标 关键点缝合,得到第二人脸图像,已根据所述第二人脸图像检测所有关键点后进行人脸识别。
上述人脸识别装置中各模块的具体细节已经在对应的人脸识别方法中进行了详细的描述,因此此处不再赘述。
应当注意,尽管在上文详细描述中提及了用于动作执行的设备的若干模块或者单元,但是这种划分并非强制性的。实际上,根据本申请的实施方式,上文描述的两个或更多模块或者单元的特征和功能可以在一个模块或者单元中具体化。反之,上文描述的一个模块或者单元的特征和功能可以进一步划分为由多个模块或者单元来具体化。此外,尽管在附图中以特定顺序描述了本申请中方法的各个步骤,但是,这并非要求或者暗示必须按照该特定顺序来执行这些步骤,或是必须执行全部所示的步骤才能实现期望的结果。附加的或备选的,可以省略某些步骤,将多个步骤合并为一个步骤执行,以及/或者将一个步骤分解为多个步骤执行等。通过以上的实施方式的描述,本领域的技术人员易于理解,这里描述的示例实施方式可以通过软件实现,也可以通过软件结合必要的硬件的方式来实现。因此,根据本申请实施方式的技术方案可以以软件产品的形式体现出来,该软件产品可以存储在一个非易失性存储介质(可以是CD-ROM,U盘,移动硬盘等)中或网络上,包括若干指令以使得一台计算设备(可以是个人计算机、服务器、移动终端、或者网络设备等)执行根据本申请实施方式的方法。
在本申请的示例性实施例中,还提供了一种能够实现上述方法的电子设备。
所属技术领域的技术人员能够理解,本申请的各个方面可以实现为系统、方法或程序产品。因此,本申请的各个方面可以具体实现为以下形式,即:完全的硬件实施方式、完全的软件实施方式(包括固件、微代码等),或硬件和软件方面结合的实施方式,这里可以统称为“电路”、“模块”或“系统”。
下面参照图5来描述根据本申请的这种实施方式的电子设备500。图5显示的电子设备500仅仅是一个示例,不应对本申请实施例的功能和使用范围带来任何限制。
如图5所示,电子设备500以通用计算设备的形式表现。电子设备500的组件可 以包括但不限于:上述至少一个处理单元510、上述至少一个存储单元520、连接不同系统组件(包括存储单元520和处理单元510)的总线530。
其中,所述存储单元存储有程序代码,所述程序代码可以被所述处理单元510执行,使得所述处理单元510执行本说明书上述“示例性方法”部分中描述的根据本申请各种示例性实施方式的步骤。例如,所述处理单元510可以执行如图1中所示的步骤S110:当接收到存在部分人脸特征被遮挡的第一人脸图像,从所述第一人脸图像上进行所有关键点检测,得到多个可检测关键点及每个所述可检测关键点的坐标;S120:基于每个所述可检测关键点,从预设关键点信息数据库中,获取每个所述可检测关键点的识别分数及缺失关键点的编号;步骤S130:根据每个所述可检测关键点的识别分数,获取所述部分人脸特征被遮挡的区域对识别人脸的影响分数;步骤S140:当所述影响分数高于预定分数阈值时,基于所述缺失关键点的编号,获取所述多个可检测关键点中与所述缺失关键点具有预定人脸特征关联关系的多个目标关键点;步骤S150:根据所述多个目标关键点的坐标,从预定人脸特征模板库中,获取与所述多个目标关键点的位置结合度大于预定结合度阈值的目标人脸特征模板;步骤S160:将所述目标人脸特征模板与所述第一人脸图像上的所述多个目标关键点缝合,得到第二人脸图像,已根据所述第二人脸图像检测所有关键点后进行人脸识别。
存储单元520可以包括易失性存储单元形式的可读介质,例如随机存取存储单元(RAM)5201和/或高速缓存存储单元5202,还可以进一步包括只读存储单元(ROM)5203。
存储单元520还可以包括具有一组(至少一个)程序模块5205的程序/实用工具5204,这样的程序模块5205包括但不限于:操作系统、一个或者多个应用程序、其它程序模块以及程序数据,这些示例中的每一个或某种组合中可能包括网络环境的实现。
总线530可以为表示几类总线结构中的一种或多种,包括存储单元总线或者存储单元控制器、外围总线、图形加速端口、处理单元或者使用多种总线结构中的任意总线结构的局域总线。
电子设备500也可以与一个或多个外部设备700(例如键盘、指向设备、蓝牙设 备等)通信,还可与一个或者多个使得客户能与该电子设备500交互的设备通信,和/或与使得该电子设备500能与一个或多个其它计算设备进行通信的任何设备(例如路由器、调制解调器等等)通信。这种通信可以通过输入/输出(I/O)接口550进行。并且,电子设备500还可以通过网络适配器560与一个或者多个网络(例如局域网(LAN),广域网(WAN)和/或公共网络,例如因特网)通信。如图所示,网络适配器560通过总线530与电子设备500的其它模块通信。应当明白,尽管图中未示出,可以结合电子设备500使用其它硬件和/或软件模块,包括但不限于:微代码、设备驱动器、冗余处理单元、外部磁盘驱动阵列、RAID系统、磁带驱动器以及数据备份存储系统等。
通过以上的实施方式的描述,本领域的技术人员易于理解,这里描述的示例实施方式可以通过软件实现,也可以通过软件结合必要的硬件的方式来实现。因此,根据本申请实施方式的技术方案可以以软件产品的形式体现出来,该软件产品可以存储在一个非易失性存储介质(可以是CD-ROM,U盘,移动硬盘等)中或网络上,包括若干指令以使得一台计算设备(可以是个人计算机、服务器、终端装置、或者网络设备等)执行根据本申请实施方式的方法。
在本申请的示例性实施例中,参考图6所示,还提供了一种计算机可读存储介质,其上存储有能够实现本说明书上述方法的程序产品,该计算机可读存储介质可以为计算机非易失性可读存储介质。在一些可能的实施方式中,本申请的各个方面还可以实现为一种程序产品的形式,其包括程序代码,当所述程序产品在终端设备上运行时,所述程序代码用于使所述终端设备执行本说明书上述“示例性方法”部分中描述的根据本申请各种示例性实施方式的步骤。
参考图6所示,描述了根据本申请的实施方式的用于实现上述方法的程序产品600,其可以采用便携式紧凑盘只读存储器(CD-ROM)并包括程序代码,并可以在终端设备,例如个人电脑上运行。然而,本申请的程序产品不限于此,在本文件中,可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。所述程序产品可以采用一个或多个可读介质的任意组合。可读介质可以是可读信号介质或者可读存储介质。可读存储介质例如可以为但不限于电、磁、光、电磁、红外线、或半导体 的系统、装置或器件,或者任意以上的组合。可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了可读程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。可读信号介质还可以是可读存储介质以外的任何可读介质,该可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于无线、有线、光缆、RF等等,或者上述的任意合适的组合。可以以一种或多种程序设计语言的任意组合来编写用于执行本申请操作的程序代码,所述程序设计语言包括面向对象的程序设计语言-诸如Java、C++等,还包括常规的过程式程序设计语言-诸如“C”语言或类似的程序设计语言。程序代码可以完全地在客户计算设备上执行、部分地在客户设备上执行、作为一个独立的软件包执行、部分在客户计算设备上部分在远程计算设备上执行、或者完全在远程计算设备或服务器上执行。在涉及远程计算设备的情形中,远程计算设备可以通过任意种类的网络,包括局域网(LAN)或广域网(WAN),连接到客户计算设备,或者,可以连接到外部计算设备(例如利用因特网服务提供商来通过因特网连接)。此外,上述附图仅是根据本申请示例性实施例的方法所包括的处理的示意性说明,而不是限制目的。易于理解,上述附图所示的处理并不表明或限制这些处理的时间顺序。另外,也易于理解,这些处理可以是例如在多个模块中同步或异步执行的。本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本申请的其他实施例。本申请旨在涵盖本申请的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本申请的一般性原理并包括本申请未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本申请的真正范围和精神由权利要求指出。

Claims (22)

  1. A face recognition method, comprising:
    when a first face image in which some facial features are occluded is received, performing detection of all key points on the first face image to obtain a plurality of detectable key points and coordinates of each of the detectable key points;
    based on each of the detectable key points, obtaining, from a preset key point information database, a recognition score of each of the detectable key points and numbers of missing key points;
    obtaining, according to the recognition score of each of the detectable key points, an impact score of a region in which the facial features are occluded on face recognition;
    when the impact score is higher than a predetermined score threshold, obtaining, based on the numbers of the missing key points, a plurality of target key points among the plurality of detectable key points that have a predetermined facial feature association with the missing key points;
    obtaining, according to coordinates of the plurality of target key points, from a predetermined facial feature template library, a target facial feature template whose positional combination degree with the plurality of target key points is greater than a predetermined combination degree threshold;
    stitching the target facial feature template with the plurality of target key points on the first face image to obtain a second face image, so that face recognition is performed after all key points are detected from the second face image.
  2. The method according to claim 1, wherein the obtaining, based on each of the detectable key points, from a preset key point information database, a recognition score of each of the detectable key points and numbers of missing key points comprises:
    looking up each of the detectable key points in turn in the preset key point information database to obtain the recognition score stored in association with each of the detectable key points;
    taking the key points in the preset key point information database other than the detectable key points as the missing key points.
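By way of illustration only, the lookup in the preceding claim can be read as a dictionary lookup plus a set difference over the numbered key points in the preset database. The dictionary layout below (number mapped to a score and a facial feature) is an assumption made for the example, not a structure specified by the application.

```python
# Hypothetical layout: keypoint_db maps key-point number -> {"score": ..., "feature": ...}
def split_scores_and_missing(detectable_numbers, keypoint_db):
    # recognition scores stored in association with each detectable key point
    scores = {n: keypoint_db[n]["score"] for n in detectable_numbers}
    # every key point in the database that was not detected is a missing key point
    missing = sorted(set(keypoint_db) - set(detectable_numbers))
    return scores, missing

# Example: the database holds points 1-5, but only 1, 2 and 4 were detected
db = {n: {"score": 10 + n, "feature": "eye" if n < 3 else "nose"} for n in range(1, 6)}
print(split_scores_and_missing([1, 2, 4], db))   # missing -> [3, 5]
```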
  3. The method according to claim 1, wherein the obtaining, according to the recognition score of each of the detectable key points, an impact score of the region in which the facial features are occluded on face recognition comprises:
    summing the recognition scores of all the detectable key points to obtain a recognition score sum;
    subtracting a predetermined recognition score threshold from the recognition score sum to obtain a difference between the recognition score sum and the predetermined recognition score threshold as the impact score of the region in which the facial features are occluded on face recognition.
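The impact score of the preceding claim reduces to a single subtraction; the snippet below simply spells it out with made-up numbers so the sign convention is explicit.

```python
def impact_score(detectable_scores, recognition_score_threshold):
    # sum of the recognition scores of all detectable key points,
    # minus the predetermined recognition score threshold
    return sum(detectable_scores) - recognition_score_threshold

# e.g. the visible points score 20 + 25 + 27 = 72; with a preset threshold of 60
# the occluded region is assigned an impact score of 12
print(impact_score([20, 25, 27], 60))   # 12
```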
  4. The method according to claim 1, wherein the obtaining, when the impact score is higher than the predetermined score threshold, based on the numbers of the missing key points, a plurality of target key points among the plurality of detectable key points that have a predetermined facial feature association with the missing key points comprises:
    when the impact score is higher than the predetermined score threshold, grouping the missing key points originating from the same facial feature into one group;
    looking up, in the preset key point information database, the numbers of all same-group key points originating from the same facial feature as the numbers of each group of missing key points;
    taking the key points whose numbers are among the numbers of all the same-group key points but not among the numbers of each group of missing key points as the target key points corresponding to each group of missing key points.
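As a sketch of the grouping in the preceding claim: missing key points are bucketed by the facial feature they come from, and for each bucket the remaining same-feature numbers that were actually detected become that group's target key points. The feature_of mapping is a hypothetical stand-in for the preset key point information database.

```python
from collections import defaultdict

def target_points_by_group(missing_numbers, detectable_numbers, feature_of):
    groups = defaultdict(set)
    for m in missing_numbers:                     # group missing points by facial feature
        groups[feature_of[m]].add(m)
    targets = {}
    for feature, missing_group in groups.items():
        same_group = {n for n, f in feature_of.items() if f == feature}
        # same-group numbers minus the missing ones (kept only if actually detected)
        targets[feature] = sorted((same_group - missing_group) & set(detectable_numbers))
    return targets

# Example: points 2 and 3 of the left eye are occluded; 1, 4 and 5 were detected
feature_of = {1: "left_eye", 2: "left_eye", 3: "left_eye", 4: "nose", 5: "nose"}
print(target_points_by_group([2, 3], [1, 4, 5], feature_of))   # {'left_eye': [1]}
```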
  5. The method according to claim 1, wherein the obtaining, when the impact score is higher than the predetermined score threshold, based on the numbers of the missing key points, a plurality of target key points among the plurality of detectable key points that have a predetermined facial feature association with the missing key points comprises:
    when the impact score is higher than the predetermined score threshold, looking up, based on the numbers of the missing key points, in the preset key point information database, the numbers of all key points originating from the same facial feature as the number of each of the missing key points;
    obtaining, according to the numbers of all key points originating from the same facial feature as the number of each of the missing key points, from the plurality of detectable key points, a plurality of target key points that have a predetermined facial feature association with each of the missing key points.
  6. The method according to claim 5, wherein the obtaining, according to the numbers of all key points originating from the same facial feature as the number of each of the missing key points, from the plurality of detectable key points, a plurality of target key points that have a predetermined facial feature association with each of the missing key points comprises:
    searching, among the plurality of detectable key points, for the numbers of all key points originating from the same facial feature as the number of each of the missing key points, to obtain the numbers of the detectable key points that are found;
    taking the detectable key points corresponding to the numbers of the detectable key points that are found as the plurality of target key points that have a predetermined facial feature association with each of the missing key points.
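Claims 5 and 6 describe the same lookup performed per missing key point rather than per group; a compact sketch follows, again with a hypothetical feature_of mapping standing in for the preset key point information database.

```python
def targets_per_missing_point(missing_numbers, detectable_numbers, feature_of):
    detectable = set(detectable_numbers)
    targets = {}
    for m in missing_numbers:
        # numbers of all key points from the same facial feature as missing point m ...
        same_feature = {n for n, f in feature_of.items() if f == feature_of[m]}
        # ... restricted to the ones that were actually detected
        targets[m] = sorted(same_feature & detectable)
    return targets

# Example: point 3 of the mouth is missing; mouth points 1 and 2 were detected
feature_of = {1: "mouth", 2: "mouth", 3: "mouth", 4: "nose"}
print(targets_per_missing_point([3], [1, 2, 4], feature_of))   # {3: [1, 2]}
```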
  7. The method according to claim 1, wherein the obtaining, according to the coordinates of each of the target key points, from the predetermined facial feature template library, a target facial feature template whose positional combination degree with the plurality of target key points is greater than the predetermined combination degree threshold comprises:
    obtaining, according to the coordinates of each of the target key points, coordinates of a plurality of target key points originating from a target facial feature;
    calculating, according to the coordinates of the plurality of target key points originating from the target facial feature, a first distance between every two key points among the plurality of target key points;
    obtaining, from the predetermined facial feature template library, in a predetermined facial feature template set corresponding to the target facial feature, a second distance between every two key points among a plurality of preset key points having the same numbers as the plurality of target key points;
    calculating, according to the first distances and the second distances, the positional combination degree between the plurality of target key points and each template in the predetermined facial feature template set;
    taking a template in the predetermined facial feature template set whose positional combination degree with the plurality of target key points is greater than the predetermined combination degree threshold as the target facial feature template.
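Claim 7 compares pairwise distances between the target key points (the first distances) with the corresponding distances between the identically numbered preset key points of each template (the second distances), but leaves the exact combination-degree formula open. The sketch below uses one plausible choice, 1 / (1 + mean absolute difference of corresponding distances), purely for illustration; the template data structure is likewise assumed, not taken from the application.

```python
from itertools import combinations
import math

def pairwise_distances(points):
    # points: {number: (x, y)} -> {(n1, n2): Euclidean distance}; needs >= 2 points
    return {(a, b): math.dist(points[a], points[b])
            for a, b in combinations(sorted(points), 2)}

def positional_combination(target_points, template_points):
    # Illustrative formula only: high when the first and second distances agree.
    # Assumes the template provides a point for every target key-point number.
    first = pairwise_distances(target_points)
    second = pairwise_distances({n: template_points[n] for n in target_points})
    diffs = [abs(first[k] - second[k]) for k in first]
    return 1.0 / (1.0 + sum(diffs) / len(diffs))

def pick_template(target_points, template_set, combination_threshold):
    # template_set: hypothetical list of {"keypoints": {number: (x, y)}, ...} entries
    best = max(template_set,
               key=lambda t: positional_combination(target_points, t["keypoints"]))
    degree = positional_combination(target_points, best["keypoints"])
    return best if degree > combination_threshold else None

# Example: a template whose layout is a pure translation of the targets matches exactly
targets = {10: (0.0, 0.0), 11: (4.0, 0.0), 12: (0.0, 3.0)}
template = {"keypoints": {10: (1.0, 1.0), 11: (5.0, 1.0), 12: (1.0, 4.0)}}
print(positional_combination(targets, template["keypoints"]))   # 1.0
```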
  8. A face recognition apparatus, comprising:
    a detection module, configured to, when a first face image in which some facial features are occluded is received, perform detection of all key points on the first face image to obtain a plurality of detectable key points and coordinates of each of the detectable key points;
    a first obtaining module, configured to obtain, based on each of the detectable key points, from a preset key point information database, a recognition score of each of the detectable key points and numbers of missing key points;
    a second obtaining module, configured to obtain, according to the recognition score of each of the detectable key points, an impact score of the region in which the facial features are occluded on face recognition;
    a third obtaining module, configured to, when the impact score is higher than a predetermined score threshold, obtain, based on the numbers of the missing key points, a plurality of target key points among the plurality of detectable key points that have a predetermined facial feature association with the missing key points;
    a fourth obtaining module, configured to obtain, according to the coordinates of the plurality of target key points, from a predetermined facial feature template library, a target facial feature template whose positional combination degree with the plurality of target key points is greater than a predetermined combination degree threshold;
    a stitching module, configured to stitch the target facial feature template with the plurality of target key points on the first face image to obtain a second face image, so that face recognition is performed after all key points are detected from the second face image.
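The apparatus claim packages the same steps into named modules. A hypothetical object layout in that spirit is sketched below; the class, field names, and call signatures are illustrative only and do not appear in the application.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

Point = Tuple[float, float]

@dataclass
class FaceRecognitionApparatus:
    # Each field plays the role of one claimed module; the callables are
    # stand-ins for the detection, lookup, template-matching and stitching logic.
    detection_module: Callable[[object], Dict[int, Point]]
    first_obtaining_module: Callable[[Dict[int, Point]], tuple]     # -> (scores, missing)
    second_obtaining_module: Callable[[dict], float]                # -> impact score
    third_obtaining_module: Callable[[list, Dict[int, Point]], Dict[int, Point]]
    fourth_obtaining_module: Callable[[Dict[int, Point]], object]   # -> template
    stitching_module: Callable[[object, object, Dict[int, Point]], object]

    def run(self, first_image, impact_threshold, recognize):
        detectable = self.detection_module(first_image)
        scores, missing = self.first_obtaining_module(detectable)
        impact = self.second_obtaining_module(scores)
        if impact <= impact_threshold:
            return recognize(first_image)
        targets = self.third_obtaining_module(missing, detectable)
        template = self.fourth_obtaining_module(targets)
        second_image = self.stitching_module(first_image, template, targets)
        return recognize(second_image)
```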
  9. The apparatus according to claim 8, wherein the first obtaining module is configured to:
    look up each of the detectable key points in turn in the preset key point information database to obtain the recognition score stored in association with each of the detectable key points;
    take the key points in the preset key point information database other than the detectable key points as the missing key points.
  10. The apparatus according to claim 8, wherein the second obtaining module is configured to:
    sum the recognition scores of all the detectable key points to obtain a recognition score sum;
    subtract a predetermined recognition score threshold from the recognition score sum to obtain a difference between the recognition score sum and the predetermined recognition score threshold as the impact score of the region in which the facial features are occluded on face recognition.
  11. The apparatus according to claim 8, wherein the third obtaining module is configured to:
    when the impact score is higher than the predetermined score threshold, group the missing key points originating from the same facial feature into one group;
    look up, in the preset key point information database, the numbers of all same-group key points originating from the same facial feature as the numbers of each group of missing key points;
    take the key points whose numbers are among the numbers of all the same-group key points but not among the numbers of each group of missing key points as the target key points corresponding to each group of missing key points.
  12. The apparatus according to claim 8, wherein the third obtaining module is configured to:
    when the impact score is higher than the predetermined score threshold, look up, based on the numbers of the missing key points, in the preset key point information database, the numbers of all key points originating from the same facial feature as the number of each of the missing key points;
    obtain, according to the numbers of all key points originating from the same facial feature as the number of each of the missing key points, from the plurality of detectable key points, a plurality of target key points that have a predetermined facial feature association with each of the missing key points.
  13. The apparatus according to claim 12, wherein the third obtaining module is configured to:
    search, among the plurality of detectable key points, for the numbers of all key points originating from the same facial feature as the number of each of the missing key points, to obtain the numbers of the detectable key points that are found;
    take the detectable key points corresponding to the numbers of the detectable key points that are found as the plurality of target key points that have a predetermined facial feature association with each of the missing key points.
  14. The apparatus according to claim 8, wherein the fourth obtaining module is configured to:
    obtain, according to the coordinates of each of the target key points, coordinates of a plurality of target key points originating from a target facial feature;
    calculate, according to the coordinates of the plurality of target key points originating from the target facial feature, a first distance between every two key points among the plurality of target key points;
    obtain, from the predetermined facial feature template library, in a predetermined facial feature template set corresponding to the target facial feature, a second distance between every two key points among a plurality of preset key points having the same numbers as the plurality of target key points;
    calculate, according to the first distances and the second distances, the positional combination degree between the plurality of target key points and each template in the predetermined facial feature template set;
    take a template in the predetermined facial feature template set whose positional combination degree with the plurality of target key points is greater than the predetermined combination degree threshold as the target facial feature template.
  15. An electronic device, comprising: a processing unit; and a storage unit configured to store a face recognition program of the processing unit; wherein the processing unit is configured to perform, by executing the face recognition program, the following processing:
    when a first face image in which some facial features are occluded is received, performing detection of all key points on the first face image to obtain a plurality of detectable key points and coordinates of each of the detectable key points;
    based on each of the detectable key points, obtaining, from a preset key point information database, a recognition score of each of the detectable key points and numbers of missing key points;
    obtaining, according to the recognition score of each of the detectable key points, an impact score of a region in which the facial features are occluded on face recognition;
    when the impact score is higher than a predetermined score threshold, obtaining, based on the numbers of the missing key points, a plurality of target key points among the plurality of detectable key points that have a predetermined facial feature association with the missing key points;
    obtaining, according to the coordinates of the plurality of target key points, from a predetermined facial feature template library, a target facial feature template whose positional combination degree with the plurality of target key points is greater than a predetermined combination degree threshold;
    stitching the target facial feature template with the plurality of target key points on the first face image to obtain a second face image, so that face recognition is performed after all key points are detected from the second face image.
  16. The electronic device according to claim 15, wherein the obtaining, based on each of the detectable key points, from a preset key point information database, a recognition score of each of the detectable key points and numbers of missing key points comprises:
    looking up each of the detectable key points in turn in the preset key point information database to obtain the recognition score stored in association with each of the detectable key points;
    taking the key points in the preset key point information database other than the detectable key points as the missing key points.
  17. The electronic device according to claim 15, wherein the obtaining, according to the recognition score of each of the detectable key points, an impact score of the region in which the facial features are occluded on face recognition comprises:
    summing the recognition scores of all the detectable key points to obtain a recognition score sum;
    subtracting a predetermined recognition score threshold from the recognition score sum to obtain a difference between the recognition score sum and the predetermined recognition score threshold as the impact score of the region in which the facial features are occluded on face recognition.
  18. The electronic device according to claim 15, wherein the obtaining, when the impact score is higher than the predetermined score threshold, based on the numbers of the missing key points, a plurality of target key points among the plurality of detectable key points that have a predetermined facial feature association with the missing key points comprises:
    when the impact score is higher than the predetermined score threshold, grouping the missing key points originating from the same facial feature into one group;
    looking up, in the preset key point information database, the numbers of all same-group key points originating from the same facial feature as the numbers of each group of missing key points;
    taking the key points whose numbers are among the numbers of all the same-group key points but not among the numbers of each group of missing key points as the target key points corresponding to each group of missing key points.
  19. The electronic device according to claim 15, wherein the obtaining, when the impact score is higher than the predetermined score threshold, based on the numbers of the missing key points, a plurality of target key points among the plurality of detectable key points that have a predetermined facial feature association with the missing key points comprises:
    when the impact score is higher than the predetermined score threshold, looking up, based on the numbers of the missing key points, in the preset key point information database, the numbers of all key points originating from the same facial feature as the number of each of the missing key points;
    obtaining, according to the numbers of all key points originating from the same facial feature as the number of each of the missing key points, from the plurality of detectable key points, a plurality of target key points that have a predetermined facial feature association with each of the missing key points.
  20. The electronic device according to claim 19, wherein the obtaining, according to the numbers of all key points originating from the same facial feature as the number of each of the missing key points, from the plurality of detectable key points, a plurality of target key points that have a predetermined facial feature association with each of the missing key points comprises:
    searching, among the plurality of detectable key points, for the numbers of all key points originating from the same facial feature as the number of each of the missing key points, to obtain the numbers of the detectable key points that are found;
    taking the detectable key points corresponding to the numbers of the detectable key points that are found as the plurality of target key points that have a predetermined facial feature association with each of the missing key points.
  21. The electronic device according to claim 15, wherein the obtaining, according to the coordinates of each of the target key points, from the predetermined facial feature template library, a target facial feature template whose positional combination degree with the plurality of target key points is greater than the predetermined combination degree threshold comprises:
    obtaining, according to the coordinates of each of the target key points, coordinates of a plurality of target key points originating from a target facial feature;
    calculating, according to the coordinates of the plurality of target key points originating from the target facial feature, a first distance between every two key points among the plurality of target key points;
    obtaining, from the predetermined facial feature template library, in a predetermined facial feature template set corresponding to the target facial feature, a second distance between every two key points among a plurality of preset key points having the same numbers as the plurality of target key points;
    calculating, according to the first distances and the second distances, the positional combination degree between the plurality of target key points and each template in the predetermined facial feature template set;
    taking a template in the predetermined facial feature template set whose positional combination degree with the plurality of target key points is greater than the predetermined combination degree threshold as the target facial feature template.
  22. A computer non-volatile readable storage medium having a face recognition program stored thereon, wherein the face recognition program, when executed by a processing unit, performs the method according to any one of claims 1 to 7.
PCT/CN2019/117656 2019-08-01 2019-11-12 人脸识别方法、装置、电子设备及计算机非易失性可读存储介质 WO2021017286A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/266,587 US11734954B2 (en) 2019-08-01 2019-11-12 Face recognition method, device and electronic equipment, and computer non-volatile readable storage medium
JP2021504210A JP7106742B2 (ja) 2019-08-01 2019-11-12 顔認識方法、装置、電子機器及びコンピュータ不揮発性読み取り可能な記憶媒体
SG11202103132XA SG11202103132XA (en) 2019-08-01 2019-11-12 Face recognition method, device and electronic equipment, and computer non-volatile readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910709169.7 2019-08-01
CN201910709169.7A CN110610127B (zh) 2019-08-01 2019-08-01 人脸识别方法、装置、存储介质及电子设备

Publications (1)

Publication Number Publication Date
WO2021017286A1 true WO2021017286A1 (zh) 2021-02-04

Family

ID=68889857

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117656 WO2021017286A1 (zh) 2019-08-01 2019-11-12 人脸识别方法、装置、电子设备及计算机非易失性可读存储介质

Country Status (5)

Country Link
US (1) US11734954B2 (zh)
JP (1) JP7106742B2 (zh)
CN (1) CN110610127B (zh)
SG (1) SG11202103132XA (zh)
WO (1) WO2021017286A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111213153A (zh) * 2019-01-30 2020-05-29 深圳市大疆创新科技有限公司 目标物体运动状态检测方法、设备及存储介质
CN111160221B (zh) * 2019-12-26 2023-09-01 深圳云天励飞技术有限公司 一种人脸采集方法及相关装置
CN111768478B (zh) * 2020-07-13 2023-05-30 腾讯科技(深圳)有限公司 一种图像合成方法、装置、存储介质和电子设备
WO2023272725A1 (zh) * 2021-07-02 2023-01-05 华为技术有限公司 人脸图像处理方法、装置和车辆
CN113689457A (zh) * 2021-10-26 2021-11-23 阿里巴巴达摩院(杭州)科技有限公司 数据处理方法、装置、计算机设备及计算机可读存储介质
CN114519866A (zh) * 2022-01-29 2022-05-20 阿里巴巴(中国)有限公司 人体测量数据的获取方法、处理方法及设备

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3970573B2 (ja) 2001-10-19 2007-09-05 アルパイン株式会社 顔画像認識装置および方法
JP4999731B2 (ja) 2008-02-29 2012-08-15 セコム株式会社 顔画像処理装置
JP5480532B2 (ja) 2009-04-30 2014-04-23 グローリー株式会社 画像処理装置、画像処理方法、及び同方法をコンピュータに実行させるプログラム
US8379940B2 (en) * 2009-06-02 2013-02-19 George Mason Intellectual Properties, Inc. Robust human authentication using holistic anthropometric and appearance-based features and boosting
JP2013196294A (ja) * 2012-03-19 2013-09-30 Toshiba Corp 人物画像処理装置、及び人物画像処理方法
CN106778574A (zh) * 2016-12-06 2017-05-31 广州视源电子科技股份有限公司 用于人脸图像的检测方法和装置
CN109117797A (zh) * 2018-08-17 2019-01-01 浙江捷尚视觉科技股份有限公司 一种基于人脸质量评价的人脸抓拍识别方法
US11163981B2 (en) * 2018-09-11 2021-11-02 Apple Inc. Periocular facial recognition switching
CN109063695A (zh) * 2018-09-18 2018-12-21 图普科技(广州)有限公司 一种人脸关键点检测方法、装置及其计算机存储介质
CN110070017B (zh) * 2019-04-12 2021-08-24 北京迈格威科技有限公司 一种人脸假眼图像生成方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170140211A1 (en) * 2014-03-28 2017-05-18 Nec Corporation Face comparison device, method, and recording medium
CN107679450A (zh) * 2017-08-25 2018-02-09 珠海多智科技有限公司 基于深度学习的遮挡条件下人脸识别方法
CN108985212A (zh) * 2018-07-06 2018-12-11 深圳市科脉技术股份有限公司 人脸识别方法及装置
CN109299658A (zh) * 2018-08-21 2019-02-01 腾讯科技(深圳)有限公司 脸部检测方法、脸部图像渲染方法、装置及存储介质

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GE, SHIMING ET AL.: "Detecting Masked Faces in the Wild with LLE-CNNs", 2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 9 November 2017 (2017-11-09), pages 426 - 434, XP033249379, ISSN: 1063-6919 *
GUO, WEI ET AL.: "Face Recognition Algorithm of Occlusion Location Based on PCANet", JOURNAL OF FRONTIERS OF COMPUTER SCIENCE AND TECHNOLOGY, 30 January 2019 (2019-01-30), pages 2149 - 2160, XP055778188, ISSN: 1673-9418, DOI: 10.3778/j.issn.1673-9418.1811031 *
ZHAO, SHUHUAN: "Pixel-level occlusion detection based on sparse representation for face recognition", OPTIK, vol. 168, 30 September 2018 (2018-09-30), pages 920 - 930, XP055778170, ISSN: 0030-4026 *

Also Published As

Publication number Publication date
US20210312163A1 (en) 2021-10-07
US11734954B2 (en) 2023-08-22
SG11202103132XA (en) 2021-04-29
JP2021534480A (ja) 2021-12-09
CN110610127A (zh) 2019-12-24
CN110610127B (zh) 2023-10-27
JP7106742B2 (ja) 2022-07-26

Similar Documents

Publication Publication Date Title
WO2021017286A1 (zh) 人脸识别方法、装置、电子设备及计算机非易失性可读存储介质
CN110135246B (zh) 一种人体动作的识别方法及设备
CN110147717B (zh) 一种人体动作的识别方法及设备
CN108875833B (zh) 神经网络的训练方法、人脸识别方法及装置
WO2018028546A1 (zh) 一种关键点的定位方法及终端、计算机存储介质
WO2019080411A1 (zh) 电子装置、人脸图像聚类搜索方法和计算机可读存储介质
WO2019196205A1 (zh) 外语教学评价信息生成方法以及装置
US20140241574A1 (en) Tracking and recognition of faces using selected region classification
WO2018090641A1 (zh) 识别保险单号码的方法、装置、设备及计算机可读存储介质
US11126827B2 (en) Method and system for image identification
WO2022001106A1 (zh) 关键点检测方法、装置、电子设备及存储介质
WO2019033567A1 (zh) 眼球动作捕捉方法、装置及存储介质
US20220027606A1 (en) Human behavior recognition method, device, and storage medium
US20210174104A1 (en) Finger vein comparison method, computer equipment, and storage medium
TW202201275A (zh) 手部作業動作評分裝置、方法及電腦可讀取存儲介質
CN113557546B (zh) 图像中关联对象的检测方法、装置、设备和存储介质
US20190122020A1 (en) Latent fingerprint pattern estimation
CN112084103B (zh) 界面测试方法、装置、设备和介质
CN108334602B (zh) 数据标注方法和装置、电子设备、计算机存储介质
CN112308055B (zh) 人脸检索系统的评价方法、装置、电子设备和存储介质
CN111368674B (zh) 图像识别方法及装置
US20220222967A1 (en) Retrieval device, control method, and non-transitory storage medium
CN109814712B (zh) 一种基于虚拟交互装置的手势识别系统
JP7470069B2 (ja) 指示物体検出装置、指示物体検出方法及び指示物体検出システム
CN111061367B (zh) 一种自助设备手势鼠标的实现方法

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021504210

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19939201

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19939201

Country of ref document: EP

Kind code of ref document: A1