WO2019223339A1 - Method, device, apparatus, and storage medium for facial matching - Google Patents

Method, device, apparatus, and storage medium for facial matching

Info

Publication number
WO2019223339A1
Authority
WO
WIPO (PCT)
Prior art keywords
matching result
matching
sample
image
weight value
Prior art date
Application number
PCT/CN2019/071350
Other languages
English (en)
French (fr)
Inventor
张亚男
吴昊
张晓萍
郑仰利
谢晓波
孙兴盼
郭宝磊
王灵国
樊斌
印思琪
孙宏轩
姜永强
许梦兴
郑财
毛先峰
周震国
Original Assignee
京东方科技集团股份有限公司
北京京东方光电科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 and 北京京东方光电科技有限公司
Priority to US16/488,704 (published as US11321553B2)
Publication of WO2019223339A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/50Maintenance of biometric data or enrolment thereof

Definitions

  • Embodiments of the present disclosure relate to a method, device, apparatus, and storage medium for face matching.
  • facial recognition technology has received widespread attention.
  • in order to improve the accuracy of face recognition, good lighting conditions are required when collecting facial images, and the collected face images should be frontal and clear.
  • if the face of the person to be identified changes, for example, if the shooting angle or lighting conditions change and/or some external features of the face (makeup, hairstyle, beard, scars, glasses, etc.) change, the accuracy of recognition is reduced and misidentification may occur.
  • An embodiment of the present disclosure provides a method for facial matching, including: collecting an image to be matched; matching the image to be matched based on at least one of an original sample library and an associative sample library; and outputting a final matching result, wherein the original sample library includes original sample images, and the associative sample library includes associative sample images in which associative features have been added to the original sample images.
  • obtaining the original sample library and the associative sample library includes: collecting the original sample image; obtaining the original sample library based on the original sample image; and adding associative features to the original sample images in the original sample library to generate associative sample images, so as to obtain the associative sample library.
  • the associative feature includes at least one of scene, glasses, accessories, clothing, and hairstyle.
  • the method for face matching further includes: matching the image to be matched based on a feature sample library, the feature sample library including feature sample images, wherein obtaining the feature sample library includes: changing local features of the associative sample images in the associative sample library and generating feature sample images to obtain the feature sample library.
  • changing the local features of the associative sample images in the associative sample library includes: removing local features from the associative sample images and/or changing at least one of the size and shape of the local features, wherein the local features include at least one of a mole, scar, beard, eyebrow shape, mouth, nose, eyes, and ears.
  • matching the image to be matched includes: matching the image to be matched with the original sample images in the original sample library to determine a first matching result; and when the first matching result is greater than or equal to a first threshold, generating a final matching result based on the first matching result.
  • when the first matching result is less than the first threshold, matching the image to be matched further includes: matching the image to be matched with the associative sample images in the associative sample library to determine a second matching result; determining a third matching result based on the first matching result and the second matching result; and when the third matching result is greater than or equal to a second threshold, generating a final matching result based on the third matching result, where the second threshold may be the same as or different from the first threshold.
  • determining the third matching result includes: setting a first weight value for the first matching result; setting a second weight value for the second matching result; and determining the third matching result based on the product of the first matching result and the first weight value and the product of the second matching result and the second weight value, wherein the first weight value is greater than the second weight value.
  • when the third matching result is less than the second threshold, matching the image to be matched further includes: matching the image to be matched with the feature sample images in the feature sample library to determine a fourth matching result; determining a fifth matching result based on the first matching result, the second matching result, and the fourth matching result; and when the fifth matching result is greater than or equal to a third threshold, generating a final matching result based on the fifth matching result, wherein the third threshold may be the same as or different from the second threshold.
  • determining the fifth matching result includes: setting a third weight value for the first matching result; setting a fourth weight value for the second matching result; setting a fifth weight value for the fourth matching result; and determining the fifth matching result based on the product of the first matching result and the third weight value, the product of the second matching result and the fourth weight value, and the product of the fourth matching result and the fifth weight value, wherein the third weight value is greater than both the fourth weight value and the fifth weight value.
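  • As an illustration only (not part of the disclosure), the two weighted combinations above can be written compactly as follows; the numeric weight values are hypothetical examples chosen only to satisfy the stated constraints that the first weight exceeds the second, and the third exceeds the fourth and fifth:

```python
# Hedged sketch of the weighted matching results; all weight values are
# illustrative assumptions, not values prescribed by the disclosure.
def third_matching_result(r1, r2, w1=0.7, w2=0.3):
    # R3 = W1*R1 + W2*R2, with W1 > W2 because the original-library match
    # is treated as more reliable than the associative-library match.
    return w1 * r1 + w2 * r2

def fifth_matching_result(r1, r2, r4, w3=0.5, w4=0.3, w5=0.2):
    # R5 = W3*R1 + W4*R2 + W5*R4, with W3 greater than W4 and W5.
    return w3 * r1 + w4 * r2 + w5 * r4
```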
  • the method for face matching further includes updating at least one of the original sample database, the associative sample database, and the feature sample database based on the image to be matched.
  • An embodiment of the present disclosure further provides a device for facial recognition, including: an input module for collecting an image to be matched; a weight analysis module for matching the image to be matched based on at least one of an original sample library and an associative sample library; and an output module for outputting the final matching result, wherein the original sample library includes original sample images, and the associative sample library includes associative sample images obtained by adding associative features to the original sample images.
  • the input module is further configured to collect an original sample image, and the device further includes an algorithm processing module configured to obtain an original sample library based on the original sample image, and to add associative features to the original sample images in the original sample library and generate associative sample images, so as to obtain an associative sample library;
  • a sample library storage module is used to store the sample libraries, wherein the associative features include at least one of scene, glasses, accessories, clothing, and hairstyle.
  • the weight analysis module is further configured to match the image to be matched based on a feature sample library, wherein the algorithm processing module is further configured to change the local features of the associative sample images in the associative sample library and generate feature sample images to obtain a feature sample library, wherein the local features include at least one of moles, scars, beards, eyebrow shapes, mouths, noses, eyes, and ears.
  • the weight analysis module is further configured to: match the image to be matched with the original sample images in the original sample library to determine a first matching result; and when the first matching result is greater than or equal to a first threshold, generate a final matching result based on the first matching result.
  • in the case where the first matching result is less than the first threshold, the weight analysis module is further configured to: match the image to be matched with the associative sample images in the associative sample library to determine a second matching result; determine a third matching result based on the first matching result and the second matching result; and, in the case where the third matching result is greater than or equal to a second threshold, generate a final matching result based on the third matching result, where the second threshold may be the same as or different from the first threshold.
  • the weight analysis module determining the third matching result includes: setting a first weight value for the first matching result; setting a second weight value for the second matching result; and determining the third matching result based on the product of the first matching result and the first weight value and the product of the second matching result and the second weight value, wherein the first weight value is greater than the second weight value.
  • in the case where the third matching result is less than the second threshold, the weight analysis module is further configured to: match the image to be matched with the feature sample images in the feature sample library to determine a fourth matching result; determine a fifth matching result based on the first matching result, the second matching result, and the fourth matching result; and, in the case where the fifth matching result is greater than or equal to a third threshold, generate a final matching result based on the fifth matching result, wherein the third threshold may be the same as or different from the second threshold.
  • the weight analysis module determining the fifth matching result includes: setting a third weight value for the first matching result; setting a fourth weight value for the second matching result; setting a fifth weight value for the fourth matching result; and determining the fifth matching result based on the product of the first matching result and the third weight value, the product of the second matching result and the fourth weight value, and the product of the fourth matching result and the fifth weight value, wherein the third weight value is greater than both the fourth weight value and the fifth weight value.
  • the algorithm processing module is further configured to update at least one of the original sample database, the associative sample database, and the feature sample database based on the image to be matched.
  • An embodiment of the present disclosure further provides an apparatus for face matching, including: at least one processor; and at least one memory, wherein the memory stores computer-readable code which, when run by the at least one processor, performs the method for face matching as described above or implements the device for face matching as described above.
  • Embodiments of the present disclosure also provide a non-transitory computer-readable storage medium in which computer-readable code is stored; the computer-readable code, when executed by one or more processors, performs the method for face matching as described above.
  • FIG. 1 shows a flowchart of a method for face matching according to an embodiment of the present disclosure
  • FIG. 2 shows a flowchart of obtaining the original sample library and the associative sample library according to an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of an association feature for generating an association sample library according to an embodiment of the present disclosure
  • FIGS. 4A and 4B are schematic diagrams showing local features according to an embodiment of the present disclosure.
  • FIG. 5 shows another schematic diagram of local features according to an embodiment of the present disclosure
  • FIG. 6A illustrates a schematic diagram of an embodiment of image matching based on a sample library according to an embodiment of the present disclosure
  • FIG. 6B shows a schematic diagram of another embodiment of image matching based on a sample library according to an embodiment of the present disclosure
  • FIG. 7 is a schematic structural diagram of a device for face matching according to an embodiment of the present disclosure.
  • FIG. 8 shows a schematic diagram of a device for face matching according to an embodiment of the present disclosure.
  • a flowchart is used in this disclosure to illustrate the steps of a method according to an embodiment of the present application. It should be understood that the preceding or following steps are not necessarily performed in order. Instead, the various steps can be processed in reverse order or simultaneously. At the same time, other actions can be added to or removed from these processes.
  • facial recognition technology is also increasingly applied to other fields, such as intelligent security, public security criminal investigation, sign-in time, virtual reality, and augmented reality.
  • the public security criminal investigation department can use facial recognition technology to identify criminal suspects in the image library.
  • the accuracy of recognition has become a key factor affecting the effect of the product.
  • face matching can be performed on the collected face images and existing images in the sample library, and face recognition can be performed based on the results of the face matching. Therefore, the accuracy of face matching will affect the results of facial recognition.
  • the accuracy of face matching is high, that is, the correct recognition result can be obtained.
  • the above-mentioned facial recognition products may produce mismatches; for example, a mobile phone may fail to identify the user during facial recognition unlocking, with the result that the user cannot use the phone normally, which degrades the user experience.
  • the above products may require that the images collected for face matching include as many facial features as possible during design. For example, when capturing images, the mobile phone user may be reminded to face the camera of the mobile phone in a well-lit environment to improve the accuracy of recognition.
  • the user's facial characteristics will constantly change at different times. For example, for female users, changes in makeup, hairstyle, eyebrow shape, etc. will affect the overall effect of the face. For male users, changes in beard, glasses and other characteristics will also affect the overall contour of the face. Changes in these features will affect the accuracy of face matching and even lead to misidentification.
  • the present disclosure provides a method for face matching.
  • the original sample database is constructed by collecting the original sample images of human faces as comprehensively and in high definition as possible.
  • the association sample database is obtained by adding certain association features to the original sample image.
  • certain features of the association sample image are changed to obtain a feature sample database.
  • the final matching result is calculated by setting weight values for the matching results based on the original sample library, the associative sample library, and the feature sample library, thereby improving the accuracy of facial matching.
  • FIG. 1 shows a flowchart of a method for face matching according to an embodiment of the present disclosure.
  • an image I to be matched is acquired.
  • the image to be matched I obtained here can basically accurately reflect the features of the face.
  • a frontal image of a human face can be collected, and the illumination intensity and the collection angle are appropriate.
  • the to-be-matched image I collected in this way is beneficial to obtain more accurate recognition results.
  • step S102 matching is performed on the image I to be matched based on at least one of the original sample library B1 and the associative sample database B2.
  • the original sample database includes original sample images
  • the association sample database includes associative sample images added with association features to the original sample images.
  • an image matching algorithm may be used to first match the image I to be matched with the original sample image in the original sample library B1 to obtain a matching result A. Then, the image matching algorithm is used to match the image I with the associative sample image in the associative sample database B2 to obtain a matching result B. It should be noted that the image matching process may also be performed in other orders. For example, the image I to be matched may be matched with the original sample image in the original sample database B1 and the associative sample image in the association sample database B2 at the same time.
  • the image I to be matched can be matched with the associative sample image in the associative sample database B2, and then the image I to be matched is matched with the original sample image in the original sample database B1.
  • This does not constitute a limitation on the present disclosure, and will not be described in detail here.
  • a final matching result D is output.
  • the final matching result D may be generated based on at least a part of the matching results A, B obtained in step S102.
  • the final matching result D indicates whether the image I to be matched passes the verification of a product supporting facial recognition. For example, a camera of a mobile phone may be used to collect a user's image I to be matched.
  • if the final matching result D indicates that the user has passed facial recognition verification, the user may unlock or use the product; if the final matching result D indicates that the verification has failed, a verification failure is displayed, the product cannot be unlocked or used, and the user is prompted whether to restart the collection and verification process for the image I to be matched.
  • FIG. 2 shows a flowchart of obtaining the original sample library and the associative sample library according to an embodiment of the present disclosure.
  • an original sample image is collected.
  • an image acquisition device can be used to collect high-definition original sample images containing as many facial features as possible, such as facial contours, ears, and special marked points such as moles and scars.
  • an original sample library B1 is obtained based on the collected original sample images as described above.
  • the collected facial images may be stored in the original sample database B1 as a whole, or the collected facial features may be classified, and the collected facial features may be stored in different categories.
  • Image matching algorithms can be trained based on different types of facial features to improve matching accuracy.
  • a deep learning algorithm can be trained based on a certain category of facial features, such as eyes, to improve the accuracy of the algorithm's recognition of eye features.
  • an association feature is added to the original sample image in the original sample database B1 to generate an association sample image to obtain the association sample database B2.
  • the associative features may include scenes, glasses, accessories, clothing, hairstyles, and the like.
  • the associative feature may be a place the user frequently visits, an accessory that is often worn, and the like.
  • FIG. 3 shows a schematic diagram of an association feature for generating an association sample library according to an embodiment of the present disclosure.
  • a hairstyle is taken as an example of the associative feature to describe the associative sample library. It should be noted that the associative features may also include other features, which does not constitute a limitation on the present disclosure.
  • hairstyle features can be added to the original sample images in the original sample library. For example, when the original sample library B1 is generated, the hairstyle of the user is the hairstyle A shown in FIG. 3. After a period of time, the user's hairstyle may change to hairstyle B.
  • the hairstyle A included in all or part of the original sample images in the original sample library B1 may be changed to the hairstyle B to obtain the associative sample database B2. Therefore, when the user's hairstyle is changed, more accurate matching results can be obtained when the captured facial image of the user is used to match the image with the associative sample database B2, thereby avoiding the result of recognition errors caused by the user's hairstyle change.
  • the method for face matching further includes matching the image to be matched based on a feature sample library, which includes feature sample images.
  • in step S204, local features of the associative sample images in the associative sample library B2 are changed to generate feature sample images, so as to obtain the feature sample library B3.
  • the local features may include moles, scars, beards, eyebrow shapes, mouths, noses, eyes, ears, and the like.
  • changing the local features of the associative sample images in the associative sample library B2 may refer to removing some local features from the associative sample images, or changing the size, shape, etc. of some local features.
  • the changes to the local features here can be performed in a controlled manner, for example, the above-mentioned changes to the local features can be performed in a supervised manner by setting rules.
  • the associative sample images with changed local features in the feature sample database can be used to obtain more accurate face matching results.
  • FIGS. 4A and 4B are schematic diagrams of local features according to an embodiment of the present disclosure.
  • the feature sample library B3 is described by taking the eyebrow shape as an example of the local feature. It should be noted that the local feature may also be any other local feature, and this does not constitute a limitation on the present disclosure.
  • the change of the eyebrow shape greatly affects the overall structure of a person's face; for example, when the user changes from eyebrow shape A to eyebrow shape B, or from eyebrow shape C to eyebrow shape D, if only the original sample images and associative sample images in the original or associative sample libraries are used for image matching, an incorrect matching result may be obtained, so that the user who changed her eyebrow shape cannot use the product normally.
  • the feature sample library B3 can be obtained by changing local features such as eyebrow shapes in the associative sample images on the basis of the associative sample library B2. For example, the eyebrow shapes that multiple users may use can first be determined, as shown in FIG. 5, and then an image processing algorithm is used to apply the eyebrow shapes in FIG. 5 to all or some of the associative sample images.
  • the feature sample images in the feature sample library B3 include the various eyebrow shapes that the user may have; therefore, when the feature sample library B3 is used to match the collected image I to be matched, a more accurate recognition result can be obtained.
  • the local feature may also be a scar. An image processing algorithm can be used to remove or lighten the scars in some of the associative sample images in the associative sample library B2 to build the feature sample library B3, so that when the user's scar has faded, the feature sample library can still be used to obtain an accurate recognition result.
  • the changes in the local characteristics may also be changes in the size of eyes, the length of beards, etc., which are not listed here one by one. It should be noted that the changes in the local features are limited changes.
  • the original sample library, the associative sample library, and the feature sample library obtained above can be used in products supporting facial recognition to improve the accuracy of matching.
  • the process of performing face matching using the sample libraries B1, B2, and B3 to output the final matching result D can be achieved by setting weight values for the matching results of the sample libraries B1, B2, and B3, respectively.
  • FIG. 6A illustrates a schematic diagram of an embodiment of image matching based on a sample library according to an embodiment of the present disclosure
  • FIG. 6B illustrates a schematic diagram of another embodiment of image matching based on a sample library according to an embodiment of the present disclosure.
  • the image matching methods shown in FIGS. 6A and 6B are only two embodiments according to the present disclosure, and do not constitute a limitation on the present disclosure.
  • image matching using the sample libraries based on the embodiments of the present disclosure may also be performed in other ways.
  • the collected image I to be matched is first matched with the original sample image in the original sample library B1, and a first matching result R1 is obtained.
  • the first matching result R1 indicates the matching degree of the image I to be matched with the original sample image in the original sample library B1.
  • whether the first matching result R1 passes recognition can be determined by comparing the first matching result R1 with a set first threshold T1. For example, if the first matching result R1 is greater than or equal to the set first threshold T1, the image I to be matched is considered to have passed verification, which means that the user has the right to unlock or use the product.
  • a first matching result R1 that is greater than or equal to the first threshold T1 can be directly determined as the final matching result, without performing the subsequent matching processes of the associative and feature sample libraries. Since the original sample images in the original sample library B1 are generated from directly collected original sample images, the matching result of the original sample library B1 can be considered more accurate and reliable than the matching results of the associative and feature sample libraries. For example, when matching against the original sample library succeeds, the subsequent matching processes may be omitted to save computation time.
  • when the first matching result R1 is less than the first threshold T1, it can be considered that the image I to be matched has not been matched successfully against the original sample library B1. This may be caused by a change in the user's associative features or local features; for example, the user has changed glasses or adopted a new eyebrow shape.
  • the image I to be matched may be matched with the associative sample image in the associative sample library B2, so as to determine the second matching result R2.
  • the third matching result R3 determined by combining the first matching result R1 and the second matching result R2 may be used to determine whether the image I to be matched is successfully identified. For example, the foregoing judgment may be performed by setting a second threshold T2 in advance.
  • the third matching result R3 is determined as the final matching result D.
  • the third matching result R3 indicates that the user's to-be-matched image I has passed the face recognition verification, and the matching process of the feature sample database B3 may no longer be performed.
  • the second threshold value T2 may be the same as or different from the first threshold value T1.
  • the second threshold value T2 may be set reasonably according to the accuracy requirement of facial recognition. For example, if the accuracy requirement for recognition is high, the value of the second threshold T2 may be increased.
  • a first weight value W1 may be set for the first matching result R1, and a second weight value W2 may be set for the second matching result R2; the third matching result R3 is then determined based on the product of the first matching result R1 and the first weight value W1 and the product of the second matching result R2 and the second weight value W2.
  • the third matching result R3 may be a weighted sum of the first matching result R1 and the second matching result R2.
  • the contribution of R1 and R2 to the third matching result R3 may be allocated by the weight values W1 and W2.
  • the first weight value W1 may be set to be greater than the second weight value W2.
  • matching the image I to be matched may further include matching the image I to be matched with the feature sample image in the feature sample database B3 to determine a fourth matching result R4. Then, a fifth matching result R5 is determined based on the first matching result R1, the second matching result R2, and the fourth matching result R4. When the fifth matching result R5 is greater than or equal to the third threshold T3, the fifth matching result R5 is determined as the final matching result D.
  • the fifth matching result indicates whether the collected user's to-be-matched image I has passed the verification of facial recognition.
  • the third threshold value T3 may be the same as or different from the second threshold value T2, and may be set reasonably according to the accuracy requirement of facial recognition. For example, if the accuracy requirement for recognition is high, the value of the third threshold T3 may be increased.
  • the above-mentioned process of determining the fifth matching result R5 may be implemented by setting a third weight value W3 for the first matching result R1, a fourth weight value W4 for the second matching result R2, and a fifth weight value W5 for the fourth matching result R4.
  • a fifth matching result may be determined based on a product of the first matching result R1 and the third weight value W3, a product of the second matching result R2 and the fourth weight value W4, and a product of the fourth matching result R4 and the fifth weight value W5.
  • the fifth matching result R5 may be a weighted sum of the first matching result R1, the second matching result R2, and the fourth matching result R4.
  • the contributions of R1, R2, and R4 to the fifth matching result R5 may be allocated through the set weight values W3, W4, and W5. Since the associative sample images in the associative sample library B2 are generated by adding associative features to the original sample images in the original sample library B1, and the feature sample images in the feature sample library B3 are generated by changing local features of the associative sample images in the associative sample library B2, it can be considered that the reliability of the second matching result R2 is lower than that of the first matching result R1, and the reliability of the fourth matching result R4 is lower than that of the second matching result R2.
  • the third weight value W3 may be set to be larger than the fourth weight value W4 and the fifth weight value W5.
  • the influence of the second matching result may not be considered when determining the fifth matching result R5, for example, this may be achieved by setting the fourth weight value W4 to 0.
  • the face recognition of the image I to be matched can be realized based on the sample libraries B1, B2, and B3. Even if there are some changes in the user's facial features, using the matching method shown in FIG. 6A can avoid the occurrence of misrecognition and obtain accurate recognition results.
  • the collected image I to be matched can also be matched with the sample libraries B1, B2, and B3 one by one, thereby obtaining three matching results A, B, and C; the three matching results obtained are then used to calculate the final matching result D.
  • the matching result D may be determined by setting weight values respectively.
  • the final matching result D may be a weighted result of the matching results A, B, and C. This process is similar to the process shown in FIG. 6A, and is not repeated here.
  • the collected image I to be matched may also be used to update the above-mentioned sample database.
  • the to-be-matched image I may be stored in the original sample library as the original sample image in the original sample library B1.
  • the sample images in the sample databases B1, B2, and B3 are continuously updated and supplemented during the application of facial recognition.
  • the updated sample database is helpful to obtain more accurate matching results when performing matching.
  • an image I to be matched that has passed facial recognition may be used to update the sample library.
  • the above-mentioned process of calculating the final matching result D may further include setting feature weight values for certain facial features.
  • in the matching process with the original sample library B1, a higher feature weight value can be set for distinct facial features such as eyebrows, eyes, ears, nose, mouth, and moles, while a lower feature weight value is set for features prone to change, such as scars, beards, and glasses.
  • different feature weight values may be set based on gender. For example, it is possible to reduce the feature weight value of a facial feature that is easily changed based on gender. For example, a lower feature weight value is set for male beards, and a higher feature weight value is set for female eyebrows, eye areas, and other parts.
  • feature weight values can also be set for certain specific features respectively. For example, as described above, since the associative sample images in the associative sample library B2 are generated based on the original sample library B1, the reliability of the matching result of the associative sample library B2 is lower than that of the original sample library B1; the first weight value W1 can then be set higher than the second weight value W2. On this basis, feature weight values can be assigned to the associative features added in the associative sample library B2; for example, the feature weight value of the hairstyle can be set lower than the feature weight value of the glasses, and so on.
  • the weighting values W3, W4, and W5 can be set for the matching results A, B, and C of the original sample database, the association sample database, and the feature sample database, respectively.
  • feature weight values can also be set for the local features changed in the feature sample library B3. For example, the feature weight value of the changed eyebrow shape in the feature sample library B3 can be increased, and the feature weight value of the eyebrow shape in the original sample library B1 can be correspondingly reduced, so as to increase the influence of the matching result of the changed local features on the final matching result D.
  • in the case where the feature sample images in the feature sample library B3 are obtained by removing some local features from the associative sample images, for example where feature sample images are obtained after removing features such as moles and scars from the associative sample images in the associative sample library B2, the feature weight value of the removed feature can be correspondingly increased in the fourth matching result R4 and correspondingly reduced in the second matching result R2.
  • FIG. 7 shows a schematic structural diagram of a device for face matching according to an embodiment of the present disclosure.
  • the device may include an input module 701, a weight analysis module 704, and an output module 705.
  • the input module 701 may be used to collect an image I to be matched.
  • the weight analysis module 704 may be configured to match the to-be-matched image I collected by the input module 701 based on at least one of the original sample library and the association sample library.
  • the output module 705 may be used to output a final matching result.
  • the original sample database includes original sample images
  • the association sample database includes associative sample images added with association features to the original sample images.
  • the input module is further configured to acquire an original sample image.
  • the device provided by the present disclosure further includes an algorithm processing module 702 and a sample library storage module 703.
  • the algorithm processing module 702 is configured to obtain an original sample database based on the original sample image, and is further configured to add an association feature to the original sample image in the original sample database to generate an association sample image to obtain an association sample database.
  • the association feature includes at least one of a scene, glasses, accessories, clothing, and hairstyle.
  • the weight analysis module 704 is further configured to match the image to be matched based on a feature sample library.
  • the algorithm processing module 702 is further configured to change a local feature of the associative sample image in the associative sample database to generate a feature sample image to obtain a feature sample database.
  • the algorithm processing module 702 changing the local features of the associative sample images in the associative sample library includes at least one of removing local features from the associative sample images and changing the size or shape of the local features.
  • the local features may include at least one of a mole, a scar, a beard, an eyebrow shape, a mouth, a nose, an eye, and an ear.
  • the sample database storage module 703 is configured to store a sample database, such as the original sample database, the associative sample database, and the feature sample database as described above.
  • the facial recognition application for mobile phone unlocking is taken as an example to introduce the functions of the device for facial matching.
  • the input module 701 collects an image I to be matched.
  • the input module 701 may include an image acquisition unit, which may be, for example, a front camera of a mobile phone.
  • the weight analysis module 704 matches the collected image to be matched I based on the sample library stored in the sample library storage module 703, and sets a weight value for the matching result of each sample database.
  • the output module 705 then outputs the final matching result D.
  • the above-mentioned process of performing matching and setting the weight value is shown in FIGS. 6A and 6B, and is not repeated here.
  • the final matching result D may indicate whether the user is verified by face recognition.
  • the final matching result D may also indicate the percentage of the degree of facial matching based on the matching results of the sample library, for example, the output result may include 70% similarity.
  • the final matching result D may also indicate the specific features that failed image matching, for example that the matching coefficient of the eyes is low, so that at the next facial recognition the user can adjust the shooting angle of the eyes or the illumination angle during image collection, which helps to obtain an accurate recognition result.
  • the algorithm processing module 702 in the image matching device as shown in FIG. 7 may also be used to update the above-mentioned sample library with the collected image I to be matched during the process of using the facial recognition application.
  • the algorithm processing module 702 may store the collected image I to be matched in the sample library storage module 703 as the original sample image.
  • the algorithm processing module 702 may further perform processing as shown in FIG. 2 on the to-be-matched image I, for example, adding association features or changing local features to generate corresponding association sample images and feature sample images.
  • FIG. 8 illustrates a schematic diagram of an apparatus 800 for face matching according to an embodiment of the present disclosure.
  • the apparatus 800 for face matching may include at least one processor 801 and at least one memory 802.
  • the memory 802 stores computer-readable code, which when executed by the at least one processor 801 executes the method for face matching as described above.
  • a non-transitory computer-readable storage medium in which computer-readable code is stored; the computer-readable code, when executed by one or more processors, performs the method for face matching described above.
  • the present disclosure provides a method for face matching, comprising: collecting an image to be matched; matching the image to be matched based on at least one of an original sample library and an associative sample library; and outputting a final matching result, wherein the original sample library includes original sample images, and the associative sample library includes associative sample images obtained by adding associative features to the original sample images.
  • Obtaining the original sample database and the association sample database includes: collecting the original sample image; obtaining the original sample database based on the original sample image; adding association features to the original sample image in the original sample database to generate the association sample image to obtain the association sample database .
  • the method further includes: matching the image to be matched based on a feature sample library, the feature sample library including feature sample images, wherein obtaining the feature sample library includes: changing local features of the associative sample images in the associative sample library to generate feature sample images, so as to obtain the feature sample library.
  • the final matching result can be determined by setting weight values and thresholds for the matching results obtained based on different sample libraries.
  • the recognition accuracy of the images to be matched can be improved.
  • the above method for facial recognition can be used in the field of facial recognition, as well as the fields of intelligent security related to facial recognition, public security criminal investigation, smart card swiping, smart glasses, AR / VR, and the like.
  • aspects of this application can be illustrated and described through several patentable categories or situations, including any new and useful process, machine, product, or composition of matter, or any new and useful improvement thereof. Accordingly, various aspects of the present application can be executed entirely by hardware, entirely by software (including firmware, resident software, microcode, etc.), or by a combination of hardware and software.
  • the above hardware or software can be called “data block”, “module”, “engine”, “unit”, “component” or “system”.
  • aspects of the present application may manifest as a computer product located in one or more computer-readable media, the product including computer-readable program code.

Abstract

A method, device, apparatus, and storage medium for facial matching. The method includes: collecting an image to be matched (S101); matching the image to be matched based on at least one of an original sample library and an associative sample library (S102); and outputting a final matching result (S103), wherein the original sample library includes original sample images, and the associative sample library includes associative sample images obtained by adding associative features to the original sample images. Obtaining the original sample library and the associative sample library includes: collecting original sample images (S201); obtaining the original sample library based on the original sample images (S202); and adding associative features to the original sample images in the original sample library to generate associative sample images, so as to obtain the associative sample library (S203).

Description

Method, device, apparatus, and storage medium for facial matching
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 201810516485.8, filed on May 25, 2018; the entire disclosure of that Chinese patent application is incorporated herein by reference as part of the present disclosure.
Technical Field
Embodiments of the present disclosure relate to a method, device, apparatus, and storage medium for facial matching.
Background
In recent years, facial recognition technology has received widespread attention. When performing facial recognition, in order to improve its accuracy, good lighting conditions are required when collecting facial images, and the collected face images should be frontal and clear. In some cases, if the face of the person to be identified changes, for example, if the shooting angle or lighting conditions change and/or some external features of the face (makeup, hairstyle, beard, scars, glasses, etc.) change, the accuracy of facial recognition is reduced and misidentification may occur.
Summary
Embodiments of the present disclosure provide a method for facial matching, including: collecting an image to be matched; matching the image to be matched based on at least one of an original sample library and an associative sample library; and outputting a final matching result, wherein the original sample library includes original sample images, and the associative sample library includes associative sample images obtained by adding associative features to the original sample images.
For example, obtaining the original sample library and the associative sample library includes: collecting original sample images; obtaining the original sample library based on the original sample images; and adding associative features to the original sample images in the original sample library and generating associative sample images, so as to obtain the associative sample library.
For example, the associative features include at least one of scenes, glasses, accessories, clothing, and hairstyles.
For example, the method for facial matching further includes: matching the image to be matched based on a feature sample library, the feature sample library including feature sample images, wherein obtaining the feature sample library includes: changing local features of the associative sample images in the associative sample library and generating feature sample images, so as to obtain the feature sample library.
For example, changing the local features of the associative sample images in the associative sample library includes: removing local features from the associative sample images and/or changing at least one of the size and shape of the local features, wherein the local features include at least one of moles, scars, beards, eyebrow shapes, mouths, noses, eyes, and ears.
For example, matching the image to be matched includes: matching the image to be matched with the original sample images in the original sample library to determine a first matching result; and when the first matching result is greater than or equal to a first threshold, generating the final matching result based on the first matching result.
For example, when the first matching result is less than the first threshold, matching the image to be matched further includes: matching the image to be matched with the associative sample images in the associative sample library to determine a second matching result; determining a third matching result based on the first matching result and the second matching result; and when the third matching result is greater than or equal to a second threshold, generating the final matching result based on the third matching result, wherein the second threshold may be the same as or different from the first threshold.
For example, determining the third matching result includes: setting a first weight value for the first matching result; setting a second weight value for the second matching result; and determining the third matching result based on the product of the first matching result and the first weight value and the product of the second matching result and the second weight value, wherein the first weight value is greater than the second weight value.
For example, when the third matching result is less than the second threshold, matching the image to be matched further includes: matching the image to be matched with the feature sample images in the feature sample library to determine a fourth matching result; determining a fifth matching result based on the first matching result, the second matching result, and the fourth matching result; and when the fifth matching result is greater than or equal to a third threshold, generating the final matching result based on the fifth matching result, wherein the third threshold may be the same as or different from the second threshold.
For example, determining the fifth matching result includes: setting a third weight value for the first matching result; setting a fourth weight value for the second matching result; setting a fifth weight value for the fourth matching result; and determining the fifth matching result based on the product of the first matching result and the third weight value, the product of the second matching result and the fourth weight value, and the product of the fourth matching result and the fifth weight value, wherein the third weight value is greater than both the fourth weight value and the fifth weight value.
For example, the method for facial matching further includes updating at least one of the original sample library, the associative sample library, and the feature sample library based on the image to be matched.
Embodiments of the present disclosure further provide a device for facial recognition, including: an input module for collecting an image to be matched; a weight analysis module for matching the image to be matched based on at least one of an original sample library and an associative sample library; and an output module for outputting a final matching result, wherein the original sample library includes original sample images, and the associative sample library includes associative sample images obtained by adding associative features to the original sample images.
For example, the input module is further used to collect original sample images, and the device further includes: an algorithm processing module used to obtain the original sample library based on the original sample images, and to add associative features to the original sample images in the original sample library and generate associative sample images, so as to obtain the associative sample library; and a sample library storage module used to store the sample libraries, wherein the associative features include at least one of scenes, glasses, accessories, clothing, and hairstyles.
For example, the weight analysis module is further used to match the image to be matched based on a feature sample library, wherein the algorithm processing module is further used to change local features of the associative sample images in the associative sample library and generate feature sample images, so as to obtain the feature sample library, wherein the local features include at least one of moles, scars, beards, eyebrow shapes, mouths, noses, eyes, and ears.
For example, the weight analysis module is further used to: match the image to be matched with the original sample images in the original sample library to determine a first matching result; and when the first matching result is greater than or equal to a first threshold, generate the final matching result based on the first matching result.
For example, when the first matching result is less than the first threshold, the weight analysis module is further used to: match the image to be matched with the associative sample images in the associative sample library to determine a second matching result; determine a third matching result based on the first matching result and the second matching result; and when the third matching result is greater than or equal to a second threshold, generate the final matching result based on the third matching result, wherein the second threshold may be the same as or different from the first threshold; and the weight analysis module determining the third matching result includes: setting a first weight value for the first matching result; setting a second weight value for the second matching result; and determining the third matching result based on the product of the first matching result and the first weight value and the product of the second matching result and the second weight value, wherein the first weight value is greater than the second weight value.
For example, when the third matching result is less than the second threshold, the weight analysis module is further used to: match the image to be matched with the feature sample images in the feature sample library to determine a fourth matching result; determine a fifth matching result based on the first matching result, the second matching result, and the fourth matching result; and when the fifth matching result is greater than or equal to a third threshold, generate the final matching result based on the fifth matching result, wherein the third threshold may be the same as or different from the second threshold; and the weight analysis module determining the fifth matching result includes: setting a third weight value for the first matching result; setting a fourth weight value for the second matching result; setting a fifth weight value for the fourth matching result; and determining the fifth matching result based on the product of the first matching result and the third weight value, the product of the second matching result and the fourth weight value, and the product of the fourth matching result and the fifth weight value, wherein the third weight value is greater than both the fourth weight value and the fifth weight value.
For example, the algorithm processing module is further used to update at least one of the original sample library, the associative sample library, and the feature sample library based on the image to be matched.
Embodiments of the present disclosure further provide an apparatus for facial matching, including: at least one processor; and at least one memory, wherein the memory stores computer-readable code which, when run by the at least one processor, performs the method for facial matching as described above or implements the device for facial matching as described above.
Embodiments of the present disclosure further provide a non-transitory computer-readable storage medium storing computer-readable code which, when run by one or more processors, performs the method for facial matching as described above.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present disclosure or of the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a flowchart of a method for facial matching according to an embodiment of the present disclosure;
FIG. 2 shows a flowchart of obtaining the original sample library and the associative sample library according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of associative features for generating an associative sample library according to an embodiment of the present disclosure;
FIGS. 4A and 4B show schematic diagrams of local features according to an embodiment of the present disclosure;
FIG. 5 shows another schematic diagram of local features according to an embodiment of the present disclosure;
FIG. 6A shows a schematic diagram of one embodiment of image matching based on the sample libraries according to an embodiment of the present disclosure;
FIG. 6B shows a schematic diagram of another embodiment of image matching based on the sample libraries according to an embodiment of the present disclosure;
FIG. 7 shows a schematic structural diagram of a device for facial matching according to an embodiment of the present disclosure;
FIG. 8 shows a schematic diagram of an apparatus for facial matching according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
Terms such as "first", "second", and the like used in the present disclosure do not denote any order, quantity, or importance, but are only used to distinguish different components. Likewise, words such as "include" or "comprise" mean that the element or object preceding the word covers the elements or objects listed after the word and their equivalents, without excluding other elements or objects.
Flowcharts are used in the present disclosure to illustrate the steps of methods according to embodiments of the present application. It should be understood that the preceding or following steps are not necessarily performed in exact order; instead, the various steps may be processed in reverse order or simultaneously. Other operations may also be added to these processes, or one or more steps may be removed from them.
With the development of image recognition technology, computer hardware, and related technologies, more and more devices and products adopt facial recognition solutions in place of traditional verification schemes such as card verification and fingerprint recognition. For example, a mobile phone can use facial recognition to unlock the screen instead of traditional fingerprint recognition, which both increases the convenience and entertainment value of the product and saves the space occupied by the button used for fingerprint recognition. In addition, facial recognition technology is increasingly applied to other fields, such as intelligent security, public security criminal investigation, sign-in and attendance, virtual reality, and augmented reality. For example, a public security criminal investigation department can use facial recognition technology to identify criminal suspects in an image library.
In the above products applying facial recognition technology, the accuracy of recognition has become a key factor affecting the effectiveness of the product. In some cases, the collected facial image can be matched against the existing images in a sample library, and facial recognition can be performed based on the result of the facial matching. Therefore, the accuracy of facial matching affects the result of facial recognition. In general, if the lighting conditions during image collection are good and the collected image is frontal and clear, the accuracy of facial matching is high, that is, a correct recognition result can be obtained. In practice, however, it is difficult to guarantee that every collected image contains the complete, frontal, and clear facial features of the person to be matched. In such cases, the above facial recognition products may produce mismatches; for example, a mobile phone may fail to identify the user during facial recognition unlocking, with the result that the user cannot use the phone normally, which degrades the user experience.
For this reason, the above products may be designed to require that the images collected for facial matching contain as many facial features as possible. For example, when capturing images, the mobile phone user may be reminded to face the phone's camera directly in a well-lit environment to improve the accuracy of recognition.
In addition, a user's facial features change over time. For example, for female users, changes in makeup, hairstyle, eyebrow shape, and the like affect the overall appearance of the face; for male users, changes in features such as beard and glasses also affect the overall contour of the face. Changes in these features all affect the accuracy of facial matching and may even lead to misidentification.
Therefore, in order to improve the accuracy of facial matching, the present disclosure provides a method for facial matching. First, an original sample library is constructed by collecting original sample images of the face as comprehensively and in as high definition as possible. Then, on the basis of the original sample library, an associative sample library is obtained by adding certain associative features to the original sample images. Further, on the basis of the associative sample library, certain features of the associative sample images are changed to obtain a feature sample library. Based on the above original, associative, and feature sample libraries, in the process of matching a collected facial image, the final matching result is calculated by setting weight values for the matching results based on the original sample library, the associative sample library, and the feature sample library respectively, thereby improving the accuracy of facial matching.
FIG. 1 shows a flowchart of a method for facial matching according to an embodiment of the present disclosure. First, in step S101, an image I to be matched is collected. The image I to be matched obtained here can essentially accurately reflect the features of the face. For example, when collecting the image I to be matched, a frontal image of the face can be collected under suitable illumination intensity and at a suitable collection angle. An image I to be matched collected in this way is conducive to obtaining a more accurate recognition result.
Next, in step S102, the image I to be matched is matched based on at least one of the original sample library B1 and the associative sample library B2, wherein the original sample library includes original sample images, and the associative sample library includes associative sample images obtained by adding associative features to the original sample images.
According to an embodiment of the present disclosure, an image matching algorithm may be used to first match the image I to be matched with the original sample images in the original sample library B1 to obtain a matching result A, and then to match the image I with the associative sample images in the associative sample library B2 to obtain a matching result B. It should be noted that the above image matching process may also be performed in other orders. For example, the image I to be matched may be matched with the original sample images in the original sample library B1 and the associative sample images in the associative sample library B2 at the same time. As another example, the image I to be matched may first be matched with the associative sample images in the associative sample library B2 and then with the original sample images in the original sample library B1. This does not constitute a limitation on the present disclosure and will not be detailed here.
Finally, in step S103, a final matching result D is output. In embodiments of the present disclosure, the final matching result D may be generated based on at least a part of the matching results A and B obtained in step S102. The final matching result D indicates whether the image I to be matched passes the verification of a product supporting facial recognition. For example, a camera of a mobile phone may be used to collect the image I to be matched of a user. If the final matching result D output based on the matching results A and B of that image indicates that the user has passed facial recognition verification, the user may unlock or use the product; if the final matching result D indicates that facial recognition verification has failed, a verification failure is displayed, the product cannot be unlocked or used, and the user is prompted whether to restart the collection and verification process for the image I to be matched.
FIG. 2 shows a flowchart of obtaining the original sample library and the associative sample library according to an embodiment of the present disclosure. First, in step S201, original sample images are collected. Here, an image acquisition device can be used to collect high-definition original sample images containing as many facial features as possible, for example, including the facial contour, the ears, and special marked points such as moles and scars.
Next, in step S202, the original sample library B1 is obtained based on the original sample images collected as described above. For example, the collected facial images may be stored in the original sample library B1 as a whole, or the collected facial features may be classified and stored by category. Image matching algorithms can be trained based on different categories of facial features to improve matching precision. For example, a deep learning algorithm can be trained on a certain category of facial features, such as eyes, to improve the accuracy of the algorithm in recognizing eye features.
Next, in step S203, associative features are added to the original sample images in the original sample library B1 to generate associative sample images, so as to obtain the associative sample library B2. The associative features may include scenes, glasses, accessories, clothing, hairstyles, and the like. For example, an associative feature may be a place the user frequently visits, an accessory the user often wears, and so on. Adding these associative features to the original sample images enriches the features contained in them. When these associative sample images including associative features are used for facial recognition, the accuracy of matching can be improved.
FIG. 3 shows a schematic diagram of associative features for generating an associative sample library according to an embodiment of the present disclosure. Illustratively, a hairstyle is taken here as an example of the associative feature to describe the associative sample library; it should be noted that the associative features may also include other features, which does not constitute a limitation on the present disclosure.
As shown in FIG. 3, a female user may change her hairstyle frequently. When the hairstyle changes significantly, a facial recognition product may be unable to accurately identify the user's features and may even produce a misidentification, for example, outputting an unsuccessful matching result. This prevents the user from using the product normally and greatly degrades the user experience. With the method provided by embodiments of the present disclosure, hairstyle features can be added to the original sample images in the original sample library. For example, when the original sample library B1 is generated, the user's hairstyle is hairstyle A shown in FIG. 3; after a period of time, the user's hairstyle may change to hairstyle B. In this case, hairstyle A in all or some of the original sample images in the original sample library B1 can be changed to hairstyle B to obtain the associative sample library B2. Therefore, when the user's hairstyle changes, a more accurate matching result can be obtained when the collected facial image of the user is matched against the associative sample library B2, avoiding recognition errors caused by the change of hairstyle.
Returning to FIG. 2, according to an embodiment of the present disclosure, the method for facial matching further includes matching the image to be matched based on a feature sample library, the feature sample library including feature sample images. As shown in FIG. 2, in step S204, local features of the associative sample images in the associative sample library B2 are changed to generate feature sample images, so as to obtain the feature sample library B3.
According to embodiments of the present disclosure, the local features may include moles, scars, beards, eyebrow shapes, mouths, noses, eyes, ears, and the like. For example, changing the local features of the associative sample images in the associative sample library B2 may refer to removing certain local features from the associative sample images, or changing the size, shape, and so on of certain local features. The changes to the local features here can be performed in a controlled manner; for example, the changes can be performed in a supervised manner by setting rules. Thus, even when some of the user's local features (for example, the eyebrow shape) change, the associative sample images with changed local features in the feature sample library can be used to obtain more accurate facial matching results.
FIGS. 4A and 4B show schematic diagrams of local features according to an embodiment of the present disclosure. The feature sample library B3 is described here by taking the eyebrow shape as the local feature; it should be noted that the local feature may also be any other local feature, which does not constitute a limitation on the present disclosure.
As shown in FIG. 4A or 4B, a change of eyebrow shape considerably affects the overall structure of a person's face. For example, when a user changes from eyebrow shape A to eyebrow shape B, or from eyebrow shape C to eyebrow shape D, if only the original sample images and associative sample images in the original or associative sample libraries are used for image matching, an incorrect matching result may be obtained, so that the user who changed her eyebrow shape cannot use the product normally. According to embodiments of the present disclosure, the feature sample library B3 can be obtained on the basis of the associative sample library B2 by changing local features such as the eyebrow shape in the associative sample images. For example, the eyebrow shapes that multiple users may use can first be determined, as shown in FIG. 5, and an image processing algorithm can then be used to apply the eyebrow shapes of FIG. 5 to all or some of the associative sample images. That is, the feature sample images in the feature sample library B3 cover the various eyebrow shapes a user may have; therefore, when the feature sample library B3 is used to match the collected image I to be matched, a more accurate recognition result can be obtained.
Similarly, the local feature may also be a scar. An image processing algorithm can be used to remove or lighten the scars in some of the associative sample images in the associative sample library B2 to build the feature sample library B3, so that an accurate recognition result can still be obtained from the feature sample library when the user's scar has faded. As another example, the change of local features may also be changing the size of the eyes, the length of the beard, and so on, which are not listed one by one here. It should be noted that the changes to the local features are bounded changes.
According to embodiments of the present disclosure, the original sample library, the associative sample library, and the feature sample library obtained above can be used in products supporting facial recognition to improve the accuracy of matching.
According to an embodiment of the present disclosure, the process of performing facial matching using the sample libraries B1, B2, and B3 to output the final matching result D can be implemented by setting weight values for the matching results of the sample libraries B1, B2, and B3 respectively.
FIG. 6A shows a schematic diagram of one embodiment of image matching based on the sample libraries according to an embodiment of the present disclosure, and FIG. 6B shows a schematic diagram of another such embodiment. The image matching methods shown in FIGS. 6A and 6B are only two embodiments according to the present disclosure and do not constitute a limitation on it; image matching using the sample libraries according to embodiments of the present disclosure may also be performed in other ways.
As shown in FIG. 6A, the collected image I to be matched is first matched with the original sample images in the original sample library B1 to obtain a first matching result R1. The first matching result R1 indicates the degree of matching between the image I to be matched and the original sample images in the original sample library B1. Whether the first matching result R1 passes recognition can be determined by comparing it with a set first threshold T1. For example, if the first matching result R1 is greater than or equal to the set first threshold T1, the image I to be matched is considered to have passed verification, which means that the user has the right to unlock or use the product. In this case, according to embodiments of the present disclosure, a first matching result R1 that is greater than or equal to the first threshold T1 can be directly determined as the final matching result, without performing the subsequent matching processes of the associative and feature sample libraries. Since the original sample images in the original sample library B1 are generated from directly collected original sample images, the matching result of the original sample library B1 can be considered more accurate and reliable than the matching results of the associative and feature sample libraries; for example, when matching against the original sample library succeeds, the subsequent matching processes may be omitted to save computation time.
When the first matching result R1 is less than the first threshold T1, the image I to be matched can be considered not to have matched the original sample library B1 successfully. This may be caused by a change in the user's associative features or local features; for example, the user has changed glasses or adopted a new eyebrow shape. In this case, the image I to be matched can be matched with the associative sample images in the associative sample library B2 to determine a second matching result R2. A third matching result R3, determined by combining the first matching result R1 and the second matching result R2, can be used to judge whether the image I to be matched is successfully recognized. For example, this judgment can be made by setting a second threshold T2 in advance: when the third matching result R3 is greater than or equal to the second threshold T2, the third matching result R3 is determined as the final matching result D. In this case, the third matching result R3 indicates that the user's image I to be matched has passed facial recognition verification, and the matching process of the feature sample library B3 need not be performed. The second threshold T2 may be the same as or different from the first threshold T1 and may, for example, be set reasonably according to the accuracy requirements of facial recognition; if the accuracy requirement is high, the value of the second threshold T2 can be increased.
According to an embodiment of the present disclosure, a first weight value W1 can be set for the first matching result, a second weight value W2 can be set for the second matching result R2, and the third matching result R3 can be determined based on the product of the first matching result R1 and the first weight value W1 and the product of the second matching result R2 and the second weight value W2. For example, the third matching result R3 may be a weighted sum of the first matching result R1 and the second matching result R2. The contributions of R1 and R2 to the third matching result R3 can be allocated through the weight values W1 and W2. For example, since the associative sample images in the associative sample library B2 are generated by adding associative features to the original sample images in the original sample library B1, the second matching result R2 can be considered less reliable than the first matching result R1; the first weight value W1 can therefore be set to be greater than the second weight value W2.
When the third matching result R3 is less than the second threshold T2, matching the image I to be matched may further include matching the image I to be matched with the feature sample images in the feature sample library B3 to determine a fourth matching result R4. Then, a fifth matching result R5 is determined based on the first matching result R1, the second matching result R2, and the fourth matching result R4. When the fifth matching result R5 is greater than or equal to a third threshold T3, the fifth matching result R5 is determined as the final matching result D. The fifth matching result indicates whether the collected image I to be matched of the user has passed facial recognition verification. The third threshold T3 may be the same as or different from the second threshold T2 and can be set reasonably according to the accuracy requirements of facial recognition; for example, if the accuracy requirement is high, the value of the third threshold T3 can be increased.
The above process of determining the fifth matching result R5 can be implemented by setting a third weight value W3 for the first matching result R1, a fourth weight value W4 for the second matching result R2, and a fifth weight value W5 for the fourth matching result R4. The fifth matching result R5 can then be determined based on the product of the first matching result R1 and the third weight value W3, the product of the second matching result R2 and the fourth weight value W4, and the product of the fourth matching result R4 and the fifth weight value W5. For example, the fifth matching result R5 may be a weighted sum of the first matching result R1, the second matching result R2, and the fourth matching result R4.
According to an embodiment of the present disclosure, the contributions of R1, R2, and R4 to the fifth matching result R5 can be allocated through the set weight values W3, W4, and W5. Since the associative sample images in the associative sample library B2 are generated by adding associative features to the original sample images in the original sample library B1, and the feature sample images in the feature sample library B3 are generated by changing local features of the associative sample images in the associative sample library B2, the second matching result R2 can be considered less reliable than the first matching result R1, and the fourth matching result R4 less reliable than the second matching result R2. For example, the third weight value W3 can be set to be greater than the fourth weight value W4 and the fifth weight value W5. In other embodiments of the present disclosure, the influence of the second matching result may be ignored when determining the fifth matching result R5; for example, this can be achieved by setting the fourth weight value W4 to 0.
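For illustration only, the three-stage cascade of FIG. 6A described above can be sketched as follows; `similarity` stands in for whatever image matching algorithm an implementation uses, and the threshold and weight values are hypothetical examples chosen only to respect the stated orderings W1 > W2 and W3 > W4, W5:

```python
def similarity(image, sample):
    """Placeholder for the pairwise score of any image matching algorithm."""
    raise NotImplementedError  # implementation-specific

def match_library(image, library):
    """Best similarity score (assumed in [0, 1]) against one sample library."""
    return max(similarity(image, sample) for sample in library)

def face_match_cascade(image, B1, B2, B3,
                       T1=0.90, T2=0.85, T3=0.80,   # illustrative thresholds
                       W1=0.7, W2=0.3,               # W1 > W2
                       W3=0.5, W4=0.3, W5=0.2):      # W3 > W4, W5
    R1 = match_library(image, B1)       # stage 1: original sample library
    if R1 >= T1:
        return R1                       # final matching result D; stop early
    R2 = match_library(image, B2)       # stage 2: associative sample library
    R3 = W1 * R1 + W2 * R2              # third matching result
    if R3 >= T2:
        return R3
    R4 = match_library(image, B3)       # stage 3: feature sample library
    R5 = W3 * R1 + W4 * R2 + W5 * R4    # fifth matching result
    return R5 if R5 >= T3 else None     # None: verification failed
```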
Using the matching method shown in FIG. 6A, facial recognition of the image I to be matched can be achieved based on the sample libraries B1, B2, and B3. Even if some changes occur in the user's facial features, the matching method shown in FIG. 6A can avoid misrecognition and obtain an accurate recognition result.
As shown in FIG. 6B, according to another embodiment of the present disclosure, the collected image I to be matched can also be matched against the sample libraries B1, B2, and B3 one by one to obtain three matching results A, B, and C, and the final matching result D is then calculated based on these three matching results, for example by setting weight values respectively; for example, the final matching result D may be a weighted result of the matching results A, B, and C. This process is similar to that shown in FIG. 6A and is not repeated here.
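A corresponding sketch of the FIG. 6B variant, reusing match_library from the sketch above; the weights and threshold are again assumed values, not part of the disclosure:

```python
def face_match_parallel(image, B1, B2, B3,
                        weights=(0.5, 0.3, 0.2), threshold=0.85):
    """FIG. 6B variant: match against B1, B2, and B3 one by one, then
    compute the final matching result D as a weighted result of A, B, C."""
    A, B, C = (match_library(image, lib) for lib in (B1, B2, B3))
    wa, wb, wc = weights
    D = wa * A + wb * B + wc * C
    return D >= threshold, D
```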
According to embodiments of the present disclosure, while the sample libraries B1, B2, B3 are being used for facial matching, the captured image to be matched I can also be used to update them. For example, after one round of facial matching on the captured image to be matched I, the image can be stored in the original sample library as an original sample image of B1. In other words, the sample images in the sample libraries B1, B2, B3 are continuously updated and supplemented as the facial recognition application runs, and the updated libraries help produce more accurate matching results. In some embodiments, only images to be matched I that have passed facial recognition are used to update the sample libraries.
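A minimal sketch of this update step is given below; the two derivation helpers are assumed to be augmentation routines of the kind sketched earlier, not functions named in the disclosure.

```python
# A minimal sketch of the library-update step: a verified image is appended to
# B1, and derived variants are generated for B2 and B3.
def update_libraries(image, passed: bool, b1: list, b2: list, b3: list,
                     add_associative_features, alter_local_features) -> None:
    if not passed:            # some embodiments only enroll verified images
        return
    b1.append(image)
    associative = add_associative_features(image)  # e.g. new hairstyle, glasses
    b2.append(associative)
    b3.append(alter_local_features(associative))   # e.g. faded scar, new brows
```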
In other embodiments of the present disclosure, the computation of the final matching result D may further include setting feature weight values for certain facial features. For example, when matching against the original sample library B1, higher feature weight values can be set for salient facial features such as the eyebrows, eyes, ears, nose, mouth, and moles, while lower feature weight values are set for easily changed features such as scars, beards, and glasses. Alternatively, different feature weight values can be set by gender, correspondingly lowering the weights of features that tend to change for that gender: for example, a lower feature weight value for a man's beard, and higher feature weight values for a woman's eyebrows, eye area, and similar regions.
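One way to realize such feature weight values is a weighted sum of per-feature similarities, sketched below; the feature set, base weights, and gender adjustments are illustrative assumptions, not values from the disclosure.

```python
# A minimal sketch of per-feature weighting, assuming the matcher supplies a
# per-feature similarity dict. All weights shown here are illustrative.
BASE_WEIGHTS = {"eyebrows": 0.20, "eyes": 0.25, "nose": 0.20, "mouth": 0.15,
                "ears": 0.10, "beard": 0.05, "scar": 0.05}

def weighted_similarity(per_feature_scores: dict, gender: str = "") -> float:
    weights = dict(BASE_WEIGHTS)
    if gender == "male":
        weights["beard"] = 0.01      # beards change easily; down-weight them
    elif gender == "female":
        weights["eyebrows"] += 0.05  # weight salient, stable regions higher
    present = {f: s for f, s in per_feature_scores.items() if f in weights}
    if not present:
        return 0.0
    total = sum(weights[f] for f in present)
    return sum(weights[f] * s for f, s in present.items()) / total

# weighted_similarity({"eyes": 0.9, "nose": 0.8, "beard": 0.2}, gender="male")
```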
In addition, when matching against the associative sample library B2, feature weight values can likewise be set for particular features. For example, as described above, because the associative sample images in the associative sample library B2 are generated from the original sample library B1, the matching result of B2 is less reliable than that of B1, so the first weight value W1 can be set higher than the second weight value W2. On this basis, feature weight values can be assigned to the associative features added in the associative sample library B2; for example, the feature weight value of the hairstyle can be set lower than that of the glasses.

When matching against the feature sample library B3, the weight values W3, W4, and W5 can be set, as described above, for the matching results of the original, associative, and feature sample libraries respectively. On this basis, feature weight values can also be set for the local features altered in the feature sample library B3: for example, the feature weight value of the altered eyebrow shape in B3 can be raised, and that of the eyebrow shape in the original sample library B1 correspondingly lowered, so as to increase the influence of the altered local feature's matching result on the final matching result D. According to other embodiments of the present disclosure, when a feature sample image in B3 is obtained by removing certain local features from an associative sample image, for example by removing moles, scars, and the like from associative sample images in B2, the feature weight value of the removed feature can be increased in the fourth matching result R4 and correspondingly decreased in the second matching result R2.

By setting weight values for the matching results of the sample libraries and feature weight values for particular features as described above, more accurate facial matching results can be obtained from the original, associative, and feature sample libraries. Even when some facial features of the subject have changed, an accurate matching result can still be obtained from the generated associative or feature sample library, avoiding mismatches and improving the user experience.
FIG. 7 is a schematic structural diagram of a device for facial matching according to embodiments of the present disclosure. According to an embodiment of the present disclosure, the device may include an input module 701, a weight analysis module 704, and an output module 705. The input module 701 is configured to capture the image to be matched I. The weight analysis module 704 is configured to match the image to be matched I captured by the input module 701 against at least one of the original sample library and the associative sample library. The output module 705 is configured to output the final matching result. The original sample library includes original sample images, and the associative sample library includes associative sample images obtained by adding associative features to the original sample images. In some embodiments, the input module is further configured to capture original sample images.

In some embodiments, the device provided by the present disclosure further includes an algorithm processing module 702 and a sample library storage module 703. The algorithm processing module 702 is configured to obtain the original sample library from the original sample images, and to add associative features to the original sample images in the original sample library to generate associative sample images, thereby obtaining the associative sample library. The associative features include at least one of a scene, glasses, accessories, clothing, and a hairstyle.

In some embodiments, the weight analysis module 704 is further configured to match the image to be matched against the feature sample library.

The algorithm processing module 702 is further configured to alter local features of the associative sample images in the associative sample library to generate feature sample images, thereby obtaining the feature sample library. The alteration of the local features of the associative sample images by the algorithm processing module 702 includes at least one of removing a local feature from an associative sample image and changing the size or shape of a local feature, where the local features may include at least one of a mole, a scar, a beard, an eyebrow shape, a mouth, a nose, an eye, and an ear.

The sample library storage module 703 is configured to store the sample libraries, for example the original, associative, and feature sample libraries described above.
Here, the functions of the device for facial matching are described by taking a facial recognition application for unlocking a mobile phone as an example. First, the input module 701 captures the image to be matched I. For example, the input module 701 may include an image capture unit, such as the front camera of the phone. After the user's image to be matched I is captured, the weight analysis module 704 matches it against the sample libraries stored in the sample library storage module 703 and sets weight values for the matching result of each library. The output module 705 then outputs the final matching result D. The matching and weight-setting processes are as shown in FIGS. 6A and 6B and are not repeated here. The final matching result D may indicate whether the user has passed facial recognition verification. In some embodiments, the final matching result D may also express the degree of facial match as a percentage based on the matching results of the sample libraries; for example, the output may include "70% similar". In other embodiments, the final matching result D may further indicate the specific features that failed the image match, for example that the matching coefficient of the eyes is low, so that at the next facial recognition attempt the user can adjust the shooting angle of the eyes or the lighting angle during image capture and thus obtain an accurate recognition result.
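A minimal sketch of such an output payload, with assumed field names, might look like this:

```python
# A minimal sketch of a result payload: a pass/fail flag, an overall
# similarity percentage, and per-feature diagnostics for features whose
# matching coefficient is low. Field names are illustrative assumptions.
def build_result(passed: bool, similarity: float, per_feature_scores: dict,
                 low_threshold: float = 0.5) -> dict:
    return {
        "passed": passed,
        "similarity": f"{similarity:.0%} similar",
        "weak_features": [f for f, s in per_feature_scores.items()
                          if s < low_threshold],
    }

# build_result(False, 0.70, {"eyes": 0.4, "nose": 0.9})
# -> {'passed': False, 'similarity': '70% similar', 'weak_features': ['eyes']}
```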
In addition, the algorithm processing module 702 in the image matching device shown in FIG. 7 can use the captured image to be matched I to update the sample libraries while the facial recognition application is in use. For example, the algorithm processing module 702 may store the captured image to be matched I in the sample library storage module 703 as an original sample image. As another example, the algorithm processing module 702 may also process the image to be matched I as shown in FIG. 2, for example by adding associative features or altering local features, to generate the corresponding associative sample images and feature sample images.

FIG. 8 is a schematic diagram of an apparatus 800 for facial matching according to embodiments of the present disclosure. The apparatus 800 for facial matching may include at least one processor 801 and at least one memory 802.

The memory 802 stores computer-readable code that, when executed by the at least one processor 801, performs the method for facial matching described above. According to embodiments of the present disclosure, a non-transitory computer-readable storage medium is also provided, storing computer-readable code that, when executed by one or more processors, performs the method for facial matching described above.

The present disclosure provides a method for facial matching, comprising: capturing an image to be matched; matching the image to be matched against at least one of an original sample library and an associative sample library; and outputting a final matching result, where the original sample library includes original sample images and the associative sample library includes associative sample images obtained by adding associative features to the original sample images. Obtaining the original sample library and the associative sample library includes: capturing original sample images; obtaining the original sample library from the original sample images; and adding associative features to the original sample images in the original sample library to generate associative sample images, thereby obtaining the associative sample library. The method further comprises matching the image to be matched against a feature sample library comprising feature sample images, where obtaining the feature sample library includes altering local features of the associative sample images in the associative sample library to generate feature sample images.

During image matching based on the sample libraries obtained above, the final matching result can be determined by setting weight values and thresholds for the matching results obtained from the different sample libraries. Because associative features that may appear are added to the original sample images, and local features that may change are altered in the associative sample images, the recognition accuracy for the image to be matched can be improved. The facial matching method described above can be applied to the field of facial recognition, as well as to related fields such as intelligent security, criminal investigation, smart card access, smart glasses, and AR/VR.
Furthermore, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present application may be implemented entirely in hardware, entirely in software (including firmware, resident software, microcode, etc.), or in a combination of hardware and software. Such hardware or software may be referred to as a "data block", "module", "engine", "unit", "component", or "system". In addition, aspects of the present application may take the form of a computer program product embodied in one or more computer-readable media, the product including computer-readable program code.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

The foregoing is a description of the present disclosure and is not to be construed as limiting it. Although several exemplary embodiments of the present disclosure have been described, those skilled in the art will readily appreciate that many modifications can be made to the exemplary embodiments without departing from the novel teachings and advantages of the present invention. Accordingly, all such modifications are intended to be included within the scope of the present disclosure as defined by the claims. It should be understood that the above is a description of the present disclosure, which is not to be construed as limited to the particular embodiments disclosed, and that modifications to the disclosed embodiments and other embodiments are intended to be included within the scope of the appended claims. The present invention is defined by the claims and their equivalents.

Claims (20)

  1. A method for facial matching, comprising:
    capturing an image to be matched;
    matching the image to be matched against at least one of an original sample library and an associative sample library; and
    outputting a final matching result,
    wherein the original sample library comprises original sample images, and the associative sample library comprises associative sample images obtained by adding associative features to the original sample images.
  2. The method according to claim 1, wherein obtaining the original sample library and the associative sample library comprises:
    capturing original sample images;
    obtaining the original sample library based on the original sample images; and
    adding associative features to the original sample images in the original sample library to generate associative sample images, so as to obtain the associative sample library.
  3. The method according to claim 2, wherein
    the associative features comprise at least one of a scene, glasses, an accessory, clothing, and a hairstyle.
  4. The method according to any one of claims 1-3, further comprising:
    matching the image to be matched against a feature sample library, the feature sample library comprising feature sample images, wherein obtaining the feature sample library comprises:
    altering local features of the associative sample images in the associative sample library to generate feature sample images, so as to obtain the feature sample library.
  5. The method according to claim 4, wherein altering the local features of the associative sample images in the associative sample library comprises:
    removing a local feature from an associative sample image and/or changing at least one of a size and a shape of a local feature, wherein the local features comprise at least one of a mole, a scar, a beard, an eyebrow shape, a mouth, a nose, an eye, and an ear.
  6. The method according to any one of claims 1-5, wherein matching the image to be matched comprises:
    matching the image to be matched against the original sample images in the original sample library to determine a first matching result; and
    in a case where the first matching result is greater than or equal to a first threshold, generating the final matching result based on the first matching result.
  7. The method according to claim 6, wherein, in a case where the first matching result is less than the first threshold, matching the image to be matched further comprises:
    matching the image to be matched against the associative sample images in the associative sample library to determine a second matching result;
    determining a third matching result based on the first matching result and the second matching result; and
    in a case where the third matching result is greater than or equal to a second threshold, generating the final matching result based on the third matching result,
    wherein the second threshold is the same as or different from the first threshold.
  8. The method according to claim 7, wherein determining the third matching result comprises:
    setting a first weight value for the first matching result;
    setting a second weight value for the second matching result; and
    determining the third matching result based on the product of the first matching result and the first weight value and the product of the second matching result and the second weight value,
    wherein the first weight value is greater than the second weight value.
  9. The method according to claim 7, wherein, in a case where the third matching result is less than the second threshold, matching the image to be matched further comprises:
    matching the image to be matched against the feature sample images in the feature sample library to determine a fourth matching result;
    determining a fifth matching result based on the first matching result, the second matching result, and the fourth matching result; and
    in a case where the fifth matching result is greater than or equal to a third threshold, generating the final matching result based on the fifth matching result,
    wherein the third threshold is the same as or different from the second threshold.
  10. The method according to claim 9, wherein determining the fifth matching result comprises:
    setting a third weight value for the first matching result;
    setting a fourth weight value for the second matching result;
    setting a fifth weight value for the fourth matching result; and
    determining the fifth matching result based on the product of the first matching result and the third weight value, the product of the second matching result and the fourth weight value, and the product of the fourth matching result and the fifth weight value,
    wherein the third weight value is greater than the fourth weight value and the fifth weight value.
  11. The method according to claim 4, further comprising updating at least one of the original sample library, the associative sample library, and the feature sample library based on the image to be matched.
  12. A device for facial recognition, comprising:
    an input module configured to capture an image to be matched;
    a weight analysis module configured to match the image to be matched against at least one of an original sample library and an associative sample library; and
    an output module configured to output a final matching result,
    wherein the original sample library comprises original sample images, and the associative sample library comprises associative sample images obtained by adding associative features to the original sample images.
  13. The device according to claim 12, wherein the input module is further configured to capture original sample images, the device further comprising:
    an algorithm processing module configured to:
    obtain the original sample library based on the original sample images, and
    add associative features to the original sample images in the original sample library to generate associative sample images, so as to obtain the associative sample library; and
    a sample library storage module configured to store the sample libraries,
    wherein the associative features comprise at least one of a scene, glasses, an accessory, clothing, and a hairstyle.
  14. The device according to claim 12, wherein the weight analysis module is further configured to match the image to be matched against a feature sample library, and the algorithm processing module is further configured to:
    alter local features of the associative sample images in the associative sample library to generate feature sample images, so as to obtain the feature sample library,
    wherein the local features comprise at least one of a mole, a scar, a beard, an eyebrow shape, a mouth, a nose, an eye, and an ear.
  15. The device according to any one of claims 12-14, wherein the weight analysis module is further configured to:
    match the image to be matched against the original sample images in the original sample library to determine a first matching result; and
    in a case where the first matching result is greater than or equal to a first threshold, generate the final matching result based on the first matching result.
  16. The device according to claim 15, wherein, in a case where the first matching result is less than the first threshold, the weight analysis module is further configured to:
    match the image to be matched against the associative sample images in the associative sample library to determine a second matching result;
    determine a third matching result based on the first matching result and the second matching result; and
    in a case where the third matching result is greater than or equal to a second threshold, generate the final matching result based on the third matching result,
    wherein the second threshold is the same as or different from the first threshold, and
    the weight analysis module determining the third matching result comprises:
    setting a first weight value for the first matching result;
    setting a second weight value for the second matching result; and
    determining the third matching result based on the product of the first matching result and the first weight value and the product of the second matching result and the second weight value,
    wherein the first weight value is greater than the second weight value.
  17. The device according to claim 16, wherein, in a case where the third matching result is less than the second threshold, the weight analysis module is further configured to:
    match the image to be matched against the feature sample images in the feature sample library to determine a fourth matching result;
    determine a fifth matching result based on the first matching result, the second matching result, and the fourth matching result; and
    in a case where the fifth matching result is greater than or equal to a third threshold, generate the final matching result based on the fifth matching result,
    wherein the third threshold is the same as or different from the second threshold, and
    the weight analysis module determining the fifth matching result comprises:
    setting a third weight value for the first matching result;
    setting a fourth weight value for the second matching result;
    setting a fifth weight value for the fourth matching result; and
    determining the fifth matching result based on the product of the first matching result and the third weight value, the product of the second matching result and the fourth weight value, and the product of the fourth matching result and the fifth weight value,
    wherein the third weight value is greater than the fourth weight value and the fifth weight value.
  18. The device according to claim 14, wherein the algorithm processing module is further configured to update at least one of the original sample library, the associative sample library, and the feature sample library based on the image to be matched.
  19. An apparatus for facial matching, comprising:
    at least one processor; and
    at least one memory,
    wherein the memory stores computer-readable code that, when executed by the at least one processor, performs the method for facial matching according to any one of claims 1-11, or implements the device for facial matching according to any one of claims 12-18.
  20. A non-transitory computer-readable storage medium storing computer-readable code that, when executed by one or more processors, performs the method for facial matching according to any one of claims 1-11.
PCT/CN2019/071350 2018-05-25 2019-01-11 Method, device, apparatus and storage medium for facial matching WO2019223339A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/488,704 US11321553B2 (en) 2018-05-25 2019-01-11 Method, device, apparatus and storage medium for facial matching

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810516485.8 2018-05-25
CN201810516485.8A CN108805046B (zh) 2018-05-25 Method, device, apparatus and storage medium for facial matching

Publications (1)

Publication Number Publication Date
WO2019223339A1 true WO2019223339A1 (zh) 2019-11-28

Family

ID=64089083

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/071350 WO2019223339A1 (zh) 2018-05-25 2019-01-11 Method, device, apparatus and storage medium for facial matching

Country Status (3)

Country Link
US (1) US11321553B2 (zh)
CN (1) CN108805046B (zh)
WO (1) WO2019223339A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805046B (zh) 2018-05-25 2022-11-04 京东方科技集团股份有限公司 Method, device, apparatus and storage medium for facial matching
CN110032852A (zh) * 2019-04-15 2019-07-19 维沃移动通信有限公司 Screen unlocking method and terminal device
CN113435280A (zh) * 2021-06-18 2021-09-24 上海熙瑾信息技术有限公司 Person-ID verification method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850828A (zh) * 2015-04-29 2015-08-19 小米科技有限责任公司 Person identification method and device
CN105095829A (zh) * 2014-04-29 2015-11-25 华为技术有限公司 Face recognition method and system
CN107016370A (zh) * 2017-04-10 2017-08-04 电子科技大学 Partially occluded face recognition method based on data augmentation
CN107766824A (zh) * 2017-10-27 2018-03-06 广东欧珀移动通信有限公司 Face recognition method, mobile terminal, and computer-readable storage medium
CN108805046A (zh) * 2018-05-25 2018-11-13 京东方科技集团股份有限公司 Method, device, apparatus and storage medium for facial matching

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7379627B2 (en) * 2003-10-20 2008-05-27 Microsoft Corporation Integrated solution to digital image similarity searching
US7519200B2 (en) * 2005-05-09 2009-04-14 Like.Com System and method for enabling the use of captured images through recognition
US8311294B2 (en) * 2009-09-08 2012-11-13 Facedouble, Inc. Image classification and information retrieval over wireless digital networks and the internet
US7450740B2 (en) * 2005-09-28 2008-11-11 Facedouble, Inc. Image classification and information retrieval over wireless digital networks and the internet
US8369570B2 (en) * 2005-09-28 2013-02-05 Facedouble, Inc. Method and system for tagging an image of an individual in a plurality of photos
JP4720880B2 (ja) * 2008-09-04 2011-07-13 ソニー株式会社 Image processing device, imaging device, image processing method, and program
CN102622613B (zh) * 2011-12-16 2013-11-06 彭强 Hairstyle design method based on eye positioning and face shape recognition
TWI479435B (zh) * 2012-04-03 2015-04-01 Univ Chung Hua 人臉辨識方法
CN104537389B (zh) * 2014-12-29 2018-03-27 生迪光电科技股份有限公司 Face recognition method and device
US20160275518A1 (en) * 2015-03-19 2016-09-22 ecoATM, Inc. Device recycling systems with facial recognition
CN105488478B (zh) * 2015-12-02 2020-04-07 深圳市商汤科技有限公司 Face recognition system and method
CN106803106B (zh) * 2017-02-27 2020-04-21 民政部国家减灾中心 SAR image automatic classification system and method
KR102558741B1 (ko) * 2017-12-12 2023-07-24 삼성전자주식회사 User registration apparatus and method


Also Published As

Publication number Publication date
CN108805046A (zh) 2018-11-13
US11321553B2 (en) 2022-05-03
US20210334518A1 (en) 2021-10-28
CN108805046B (zh) 2022-11-04

Similar Documents

Publication Publication Date Title
US20190042866A1 (en) Process for updating templates used in facial recognition
Masupha et al. Face recognition techniques, their advantages, disadvantages and performance evaluation
US20060158307A1 (en) System and method for face recognition
US11113510B1 (en) Virtual templates for facial recognition
WO2019223339A1 (zh) Method, device, apparatus and storage medium for facial matching
JP7107598B2 (ja) Authentication face image candidate determination device, authentication face image candidate determination method, program, and recording medium
CN110866466A (zh) Face recognition method, device, storage medium, and server
CN106228133B (zh) User verification method and device
US11711215B2 (en) Methods, systems, and media for secure authentication of users based on a biometric identifier and knowledge-based secondary information
US10229311B2 (en) Face template balancing
Vishi et al. Multimodal biometric authentication using fingerprint and iris recognition in identity management
Haji et al. Real time face recognition system (RTFRS)
CN108875549B (zh) Image recognition method, device, system, and computer storage medium
KR20190070179A (ko) User registration apparatus and method
US20220157078A1 (en) Adaptive learning and matching of face modalities
Findling et al. Towards pan shot face unlock: Using biometric face information from different perspectives to unlock mobile devices
Beton et al. Biometric secret path for mobile user authentication: A preliminary study
TWI325568B (en) A method for face varification
JP2018128736A (ja) Face authentication system, face authentication method, and face authentication program
US11935327B1 (en) On the fly enrollment for facial recognition
Stragapede et al. Mobile passive authentication through touchscreen and background sensor data
Bartuzi et al. Mobibits: Multimodal mobile biometric database
Geetha et al. 3D face recognition using Hadoop
WO2022000334A1 (zh) Biometric recognition method, apparatus, device, and storage medium
Swearingen et al. Synthesizing face images from match scores

Legal Events

Date Code Title Description
121   Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19808507; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: DE)
32PN  Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 21.04.2021))
122   Ep: pct application non-entry in european phase (Ref document number: 19808507; Country of ref document: EP; Kind code of ref document: A1)