WO2022179046A1 - Face recognition method and apparatus, computer device, and storage medium - Google Patents

Face recognition method and apparatus, computer device, and storage medium (人脸识别方法、装置、计算机设备和存储介质) Download PDF

Info

Publication number
WO2022179046A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
age
face feature
feature
gender
Prior art date
Application number
PCT/CN2021/109467
Other languages
English (en)
French (fr)
Inventor
熊玮
Original Assignee
深圳壹账通智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2022179046A1 publication Critical patent/WO2022179046A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Definitions

  • the present application relates to the technical field of artificial intelligence, and in particular, to a face recognition method, device, computer equipment and storage medium.
  • In existing face recognition methods, after the user's face feature data is extracted, the collected face features must be uploaded to the back end and compared one by one against all face feature templates stored in the back-end face database to generate the corresponding face recognition result.
  • Because the face database usually stores the face template features of all members, the number of template features that need to be compared with the collected face features is very large, so a great deal of processing time is spent comparing all feature templates stored in the back-end face database with the face features to obtain the final face recognition result. Existing face recognition methods therefore have the technical problems of low face recognition processing efficiency and a slow generation rate of face recognition results.
  • The main purpose of this application is to provide a face recognition method, device, computer equipment and storage medium, aiming to solve the technical problems that the existing face recognition method has low face recognition processing efficiency and a slow generation rate of face recognition results.
  • This application proposes a face recognition method, which includes the steps of: acquiring an image to be processed; performing face detection on the image to be processed based on a preset face detection algorithm to obtain a corresponding face image; extracting corresponding face features from the face image; calculating and generating gender information and age information corresponding to the face features based on locally pre-stored gender and age feature databases; screening out, based on the gender information and the age information, a designated face feature database corresponding to both from all locally pre-stored face feature databases; and using the face features to perform matching processing with all face feature templates in the designated face feature database, to obtain a face recognition result corresponding to the face image.
  • the present application also provides a computer device, including a memory and a processor, where a computer program is stored in the memory, and the processor implements the steps of the above method when executing the computer program.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the above method are implemented.
  • FIG. 1 is a schematic flowchart of a face recognition method according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a face recognition device according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
  • the execution body of this embodiment of the method is a face recognition device.
  • The above face recognition device can be implemented as a virtual device, such as software code, or as a physical device into which the relevant execution code is written or integrated, and it can interact with the user through a keyboard, mouse, remote control, touchpad, or voice-controlled device.
  • the above-mentioned face recognition device may specifically be a front-end terminal with a camera.
  • the face recognition apparatus in this embodiment can effectively improve the processing efficiency of face feature recognition processing, improve the generation efficiency of face recognition results, and improve the flexibility and adaptability of face recognition processing.
  • a face recognition method includes:
  • the above image to be processed may be an image collected by the camera of the face recognition device.
  • The camera can capture a variety of face images, including static images, dynamic images, and faces in different positions and with different expressions.
  • the camera will automatically search for and shoot image data with the user's face, so as to perform subsequent face recognition processing based on the image data.
  • S2 Perform face detection on the to-be-processed image based on a preset face detection algorithm to obtain a corresponding face image.
  • the selection of the above face detection algorithm is not specifically limited, and can be selected according to actual needs.
  • the face detection algorithm may be an adaboost algorithm, a dlib algorithm, an opencv algorithm, and the like.
  • image preprocessing can be further performed on the face image.
  • Face image preprocessing is the process of processing the image based on the face detection result so that it ultimately serves feature extraction.
  • Because of various constraints and random interference, the original image acquired by the device often cannot be used directly and must undergo preprocessing such as grayscale correction and noise filtering in the early stage of image processing.
  • The preprocessing mainly includes light compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening of the face image.
  • The features that can be used in the face recognition process are generally classified into visual features, pixel statistical features, face image transform coefficient features, face image algebraic features, and the like. Face feature extraction is carried out for certain features of the face; face feature extraction, also known as face representation, is the process of modeling the features of a human face.
  • the feature extraction method of the above face features is not specifically limited, for example, two types of face feature extraction methods can be used: one is a knowledge-based representation method; the other is a representation method based on algebraic features or statistical learning.
  • The face features extracted from the face image correspond to multiple feature vectors. Extracting the corresponding face features from the face image facilitates subsequent intelligent, accurate, and timely face recognition processing of the face image based on those features and the locally pre-stored face feature libraries.
  • S5 Calculate and generate age information corresponding to the face feature based on a locally pre-stored age feature database, where the age feature database stores age face features.
  • The gender feature database stores gender face features, which may include the same number of male face features and female face features. Specifically, the sum of the distances between the extracted face features and the male face features and the sum of the distances between the extracted face features and the female face features can be calculated and compared: if the sum of the distances to the male face features is greater than the sum of the distances to the female face features, the gender information corresponding to the face features is identified as female; if it is smaller, the gender information corresponding to the extracted face features is identified as male.
  • Age face features are stored in the age feature database; the age face features may include face feature data for each age group, with the number of age face features equal to the number of age groups. The age group corresponding to the age face feature with the smallest distance to the extracted face features is the age group to which the face features belong, from which the age information corresponding to the face features can be obtained.
  • S6 Based on the gender information and the age information, screen out a specified face feature database corresponding to the age information and the gender information from all locally pre-stored face feature databases.
  • a corresponding database label is preset for each face feature database, and the database label at least includes an age label and a gender label corresponding to the set of face feature templates in the database.
  • the above-mentioned gender information and age information are matched with the database labels at the same time, so that the designated face feature database corresponding to the above-mentioned face features can be found.
  • S7 Use the face features to respectively perform matching processing with all face feature templates in the designated face feature database to obtain a face recognition result corresponding to the face image.
  • The similarity between the face features and each face feature template in the designated face feature database can be calculated, the magnitudes of all calculated similarities can be analyzed, and the face recognition result corresponding to the face image can then be generated accurately according to the analysis result.
  • A similarity threshold may be preset: if all calculated similarities are smaller than the similarity threshold, a face recognition result indicating that face recognition has failed is generated; if there is at least one specified similarity greater than the similarity threshold, the user associated with the largest of all specified similarities is taken as the matching object corresponding to the face image, and a face recognition result indicating successful face recognition is generated.
  • the face features can be matched with all face feature templates in the designated face feature database at the same time.
  • the above-mentioned parallel feature comparison instruction may specifically be a single instruction multiple data stream (single instruction multiple data, SIMD) instruction.
  • Multiple face feature databases containing face feature templates are generated locally in advance. When the face features of the face image corresponding to the image to be processed are obtained, the gender information and age information corresponding to the face features are first calculated, the designated face feature database corresponding to both the gender information and the age information is then screened out from all face feature databases, and the face features then only need to be matched against the face feature templates contained in the designated face feature database to accurately generate the face recognition result corresponding to the face image, without performing feature matching between the face features and all face feature templates included in every face feature database. This effectively improves the processing efficiency of feature matching, reduces system consumption, and improves the efficiency of generating face matching results.
  • Because the designated face feature database is stored locally, the face recognition process is not affected by network conditions, so the face features can be matched against all face feature templates in the designated face feature database even without a network, the face recognition result corresponding to the face image can be obtained quickly and accurately, and the user experience is effectively improved.
  • step S4 includes:
  • S400 Acquire gender facial features from the gender feature database, wherein the gender facial features include the same number of male facial features and female facial features;
  • S402 Calculate the first distances between the facial features and each of the male facial features respectively, and calculate the first sum of all the first distances; and,
  • S403 Calculate the second distance between the face feature and each of the female face features respectively, and calculate the second sum of all the second distances;
  • The step of calculating and generating gender information corresponding to the face features based on a locally pre-stored gender feature database may specifically include: first acquiring gender face features from the gender feature database, wherein the gender face features include the same number of male face features and female face features.
  • the number of male/female facial features is not specifically limited, and can be set according to actual needs.
  • The face features and all the gender face features are then mapped into the same multidimensional space: the feature vectors of the face features extracted from the face image and the feature vectors of the equal numbers of male face features and female face features are mapped into the same multidimensional space, with each feature vector corresponding to one dimension of the space.
  • It is then judged whether the first sum is greater than the second sum. If the first sum is greater than the second sum, female gender information corresponding to the face features is generated; if the first sum is not greater than the second sum, male gender information corresponding to the face features is generated. In other words, by calculating and comparing the sums of the distances between the extracted face features and the male face features and between the extracted face features and the female face features, the gender information corresponding to the face features is identified as female when the sum of the distances to the male face features is greater than the sum of the distances to the female face features, and as male when it is smaller.
  • The gender information corresponding to the face features is calculated and generated based on the locally pre-stored gender feature database, which facilitates subsequently screening out, accurately and quickly, the designated face feature database corresponding to the age information and the gender information from all locally pre-stored face feature databases based on this gender information and the age information corresponding to the face features, so that only the face features and the designated face feature database need to undergo feature matching. This greatly improves the processing efficiency of feature matching, reduces system consumption, and improves the efficiency of generating face matching results.
  • Because the designated face feature database is stored locally, the face recognition process is not affected by network conditions, and the face features can be matched against all face feature templates in the designated face feature database even without a network, so the face recognition result corresponding to the face image can be obtained quickly and accurately and the user experience is improved.
  • step S5 includes:
  • S500 Acquire age face features from the age feature database, wherein the age face features include face feature data for each age group, and the number of age face features is the same as the number of age groups;
  • S502 Calculate the third distance between the facial features and each of the age facial features
  • S503 Based on the third distance, call a preset calculation formula to calculate and obtain a face feature of a specified age that matches the face feature;
  • S504 Acquire the specified age group information corresponding to the specified age face feature, and use the specified age group information as the age information.
  • The step of calculating and generating age information corresponding to the face features based on the locally pre-stored age feature database specifically includes: first acquiring age face features from the age feature database, wherein the age face features include face feature data for each age group and the number of age face features is the same as the number of age groups, i.e., only one corresponding age face feature is set for each age group.
  • the division method of the above-mentioned age groups is not specifically limited, and can be set according to actual needs.
  • the facial features and all the age facial features are then mapped into the same multidimensional space. Then, a third distance between the face feature and each of the age face features is calculated.
  • The calculation of the third distance between the face features and each age face feature may refer to the above calculation of the distance between the face features and the gender face features, and is not repeated here.
  • Based on the third distance, a preset calculation formula is invoked to obtain the specified-age face feature that matches the face feature. The calculation formula is Z = min(d_1, d_2, ..., d_j), where d_j is the distance between the face feature and the j-th age face feature and j is a natural number. By mapping the feature vector of the face feature extracted from the face image and the age face features of each age group into the same multidimensional space, where each feature vector corresponds to one dimension of the space, the distances between the face feature and each age face feature are calculated and compared, and the age group corresponding to the specified-age face feature with the smallest distance to the face feature is obtained as the age group to which the face feature belongs. Because only one corresponding age face feature is set for each age group, the time spent on feature calculation is effectively reduced and the specified-age face feature is generated faster.
  • obtain the specified age group information corresponding to the specified age face feature and use the specified age group information as the age information.
  • The age information corresponding to the face features is calculated and generated based on the locally pre-stored age feature database, which facilitates subsequently screening out, accurately and quickly, the designated face feature database corresponding to the age information and the gender information from all locally pre-stored face feature databases based on this age information and the gender information corresponding to the face features, so that only the face features and the designated face feature database need to undergo feature matching. This greatly improves the processing efficiency of feature matching, reduces system consumption, and improves the efficiency of generating face matching results.
  • Because the designated face feature database is stored locally, the face recognition process is not affected by network conditions, and the face features can be matched against all face feature templates in the designated face feature database even without a network, so the face recognition result corresponding to the face image can be obtained quickly and accurately and the user experience is improved.
  • step S7 includes:
  • S700 Calculate the similarity between the face feature and each face feature template in the designated face feature library
  • S701 Determine whether all the calculated similarities are smaller than a preset similarity threshold
  • S705 Acquire the designated user information associated with the face feature template corresponding to the second similarity, and generate, based on the designated user information, a second face recognition result indicating that face recognition has succeeded, wherein the second face recognition result carries at least the designated user information.
  • The step of using the face features to perform matching processing with all face feature templates in the designated face feature database to obtain the face recognition result corresponding to the face image may specifically include: first calculating the similarity between the face features and each face feature template in the designated face feature database.
  • The formula for calculating the similarity between the face features and each feature template in the designated face feature database may specifically be: S = (Σ M_k N_k) / (sqrt(Σ (M_k)²) · sqrt(Σ (N_k)²)) − Σ abs(M_k − N_k), where S is the similarity value, M is the face feature, N is the feature template, M_k is the k-th component vector of the face feature, N_k is the k-th component vector of the feature template, p is the number of component vectors, the summations traverse k from 1 to p, and abs denotes the absolute value. It is then judged whether all calculated similarities are smaller than the preset similarity threshold.
  • the value of the above-mentioned preset similarity threshold is not specifically limited, and can be set according to actual needs, for example, it can be set to 0.95. If all the calculated similarities are smaller than the preset similarity threshold, then generate the first face recognition result of face recognition failure. And if all the calculated similarities are not smaller than the preset similarity threshold, further determine whether there are at least a specified number of first similarities greater than or equal to the preset similarity threshold in all the similarities, wherein , the specified number is a positive integer greater than 1. If there are at least a specified number of the first similarities in all the similarities, the second similarity with the largest value is selected from all the first similarities.
  • Finally, the designated user information associated with the face feature template corresponding to the second similarity is acquired, and a second face recognition result indicating successful recognition is generated based on the designated user information, wherein the second face recognition result carries at least the designated user information.
  • By calculating the similarity between the face features and each face feature template in the designated face feature database and analyzing the magnitudes of all calculated similarities, the face recognition result corresponding to the face image can be generated accurately from the analysis result. Because only the similarities between the face features and the face feature templates in the designated face feature database need to be calculated, rather than the similarities between the face features and every face feature template in all face feature databases, the processing efficiency of the similarity calculation is greatly improved, system consumption is reduced, and the efficiency of generating face matching results is greatly increased.
  • Before the above step S6, the method includes:
  • S601 Acquire an age label and a gender label corresponding to each of the designated face feature templates
  • S602 Based on the age tag and the gender tag, perform classification and combination processing on all the specified face feature templates to obtain a plurality of processed face feature template sets;
  • S603 locally generate multiple databases with the same number as the face feature template sets
  • S605 Based on the mapping relationship, correspondingly store each of the face feature template sets in each of the databases, and generate a plurality of corresponding face feature libraries;
  • S606 Based on the age tag and the gender tag of the face feature template set included in the face feature library, generate a corresponding database tag for each of the face feature libraries.
  • Before performing the step of screening out, from all locally pre-stored face feature databases, the designated face feature database corresponding to both the age information and the gender information, a generating step for generating the above face feature databases may also be included. The generating step may specifically include: first acquiring pre-collected designated face feature templates.
  • The designated face feature templates may be all face feature templates corresponding to all staff contained in the all-staff face database stored on the back-end server or in the cloud, or the designated face feature templates may include only the face feature templates corresponding to some people. Then, the age label and the gender label corresponding to each designated face feature template are acquired.
  • The age label refers to the label information corresponding to the age group in which the designated face feature template falls, with the age groups divided as described above; the gender label refers to the gender label information corresponding to the designated face feature template, i.e., a male label or a female label.
  • The age labels and gender labels may be generated by manual annotation, or some of the face feature templates may first be manually annotated with age and gender labels and a machine learning method may then learn from the manual annotations to complete the labeling of the remaining face feature templates, effectively improving the efficiency of label generation.
  • the above-mentioned method of classification and combination processing refers to placing the face feature templates containing the same age label and gender label into the same set, so as to realize the classification of all the specified face feature templates.
  • After the face feature template sets are obtained, multiple databases equal in number to the face feature template sets are generated locally, and a one-to-one mapping relationship is established between the face feature template sets and the databases. Subsequently, based on the mapping relationship, each face feature template set is stored in the corresponding database, and multiple corresponding face feature libraries are generated.
  • each face feature database stores a corresponding set of face feature templates consisting of face feature templates containing the same age label and gender label.
  • A corresponding database label is generated for each of the face feature libraries.
  • the above-mentioned database labels at least include age labels and gender labels corresponding to the facial feature template set in the database.
  • Multiple face feature databases corresponding to age labels and gender labels are generated from the designated face feature templates and stored locally, which makes it possible to quickly screen out locally the designated face feature database corresponding to the face features later on, and to match the face features against all face feature templates in the designated face feature database in a timely manner even without a network, so that the face recognition result corresponding to the face image can be obtained quickly and accurately.
  • In addition, only the face features and the designated face feature database need to undergo feature matching; it is not necessary to perform feature matching between the face features and all face feature databases, which can greatly improve the processing efficiency of feature matching, reduce system consumption, and improve the efficiency of generating face recognition results.
  • Before the above step S600, the method includes:
  • S6002 Perform a screening process on the face feature template of all staff based on the region information, and filter out a target face feature template corresponding to the target region information;
  • The determining step may specifically include: first acquiring the all-staff face feature templates in a preset all-staff face feature database, and acquiring target region information at the same time.
  • the above-mentioned full-staff face feature templates are all face-feature templates corresponding to all staff included in the full-staff face database stored in the background server or in the cloud.
  • the above-mentioned target area information may refer to local area information.
  • the target face feature template is the face feature template data required to perform face recognition processing on the person locally, that is, the face feature template of the relevant person corresponding to the target area information.
  • the target face feature template is used as the designated face feature template.
  • The target face feature templates required for local face recognition processing are selected, based on the target region information, from all the face feature templates in the all-staff face feature database and stored locally, instead of downloading and storing all face feature templates in that database locally, which can effectively save the local resource usage of the face recognition device, improve its performance, and help increase the processing speed of its face recognition processing.
  • step S2 includes:
  • S200 Use the face detection algorithm to perform face detection on the to-be-processed image to obtain a corresponding face detection result
  • S201 Determine the position coordinates of the face image included in the to-be-processed image based on the face detection result
  • S203 Use the intercepted image as the face image.
  • The step of performing face detection on the to-be-processed image based on a preset face detection algorithm to obtain a corresponding face image may specifically include: first using the face detection algorithm to perform face detection on the to-be-processed image to obtain a corresponding face detection result.
  • face detection is mainly used for the preprocessing of face recognition in practical applications, that is, the position and size of the face are accurately calibrated in the image.
  • the pattern features contained in face images are very rich, such as histogram features, color features, template features, structural features and Haar features. Face detection is to pick out the useful feature information of the above pattern features, and use these features to realize face detection.
  • the above-mentioned face detection algorithm can be the adaboost algorithm.
  • In the face detection process, the adaboost algorithm is used to select the rectangular features (weak classifiers) that best represent the face, the weak classifiers are combined into a strong classifier by weighted voting, and several trained strong classifiers are then connected in series into a cascaded classifier with a cascade structure, which effectively improves the detection speed of the classifier. Then, the position coordinates of the face image included in the to-be-processed image are determined based on the face detection result.
  • the face image in the to-be-processed image can be located, that is, the position coordinates of the face image can be obtained. Then, the image of the region to which the position coordinates belong is cut out from the to-be-processed image to obtain a corresponding cut-out image. Finally, after the above-mentioned intercepted image is obtained, the intercepted image is used as the face image.
  • By performing the face crop operation, the corresponding face image can be cut out from the image to be processed and the background information in the image can be removed, reducing noise interference.
  • A corresponding face image is obtained by performing face detection on the to-be-processed image based on a preset face detection algorithm, which facilitates subsequent extraction of the corresponding face features from the face image and matching of those features against the face feature templates in the associated designated face feature database, so that the face recognition result corresponding to the face image can be obtained quickly and accurately.
  • the face recognition method in the embodiment of the present application can also be applied to the blockchain field, for example, the above-mentioned data such as the face recognition result is stored on the blockchain.
  • the security and immutability of the above face recognition results can be effectively guaranteed.
  • the above-mentioned blockchain is a new application mode of computer technology such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • the underlying platform of the blockchain can include processing modules such as user management, basic services, smart contracts, and operation monitoring.
  • the user management module is responsible for the identity information management of all blockchain participants, including maintenance of public and private key generation (account management), key management, and maintenance of the corresponding relationship between the user's real identity and blockchain address (authority management), etc.
  • The basic service module is deployed on all blockchain node devices to verify the validity of business requests and, after consensus on valid requests is completed, record them in storage. For a new business request, the basic service first performs interface adaptation, parsing, and authentication (interface adaptation), then encrypts the business information through the consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger after encryption (network communication), and stores the record.
  • The smart contract module is responsible for the registration and issuance of contracts, as well as contract triggering and contract execution: developers can define contract logic in a programming language and publish it to the blockchain (contract registration), and execution is triggered by keys or other events according to the logic of the contract terms to complete the contract logic; functions for contract upgrade and cancellation are also provided.
  • The operation monitoring module is mainly responsible for deployment during product release, configuration modification, contract settings, and cloud adaptation, as well as the visualized output of real-time status during product operation, such as alarms, monitoring of network conditions, and monitoring of node device health.
  • an embodiment of the present application further provides a face recognition device, including:
  • the first acquisition module 1 is used to acquire the image to be processed
  • a detection module 2 configured to perform face detection on the to-be-processed image based on a preset face detection algorithm to obtain a corresponding face image
  • Extraction module 3 for extracting corresponding face features from the face image
  • the first computing module 4 is configured to calculate and generate gender information corresponding to the facial features based on a locally pre-stored gender feature database, wherein the gender feature database stores gender facial features;
  • the second computing module 5 is configured to calculate and generate age information corresponding to the face features based on a locally pre-stored age feature database, wherein the age feature database stores age face features;
  • the first screening module 6 is used for, based on the gender information and the age information, to screen out the specified face feature database corresponding to the age information and the gender information from the locally pre-stored all face feature databases at the same time ;
  • the first generating module 7 is configured to use the face features to perform matching processing with all face feature templates in the designated face feature database, respectively, to obtain a face recognition result corresponding to the face image.
  • The implementation of the functions and roles of the first acquisition module, detection module, extraction module, first calculation module, second calculation module, first screening module, and first generation module in the above face recognition device is described in detail in the implementation of the corresponding steps S1 to S7 of the above face recognition method and is not repeated here.
  • an embodiment of the present application further provides a computer device.
  • the computer device may be a server, and its internal structure may be as shown in FIG. 3 .
  • the computer equipment includes a processor, memory, a network interface, a display screen, an input device and a database connected by a system bus. Among them, the processor of the computer equipment is designed to provide computing and control capabilities.
  • the memory of the computer device includes a storage medium and an internal memory.
  • the storage medium stores an operating system, a computer program and a database.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the storage medium.
  • the database of the computer device is used to store face images, face features, gender information, age information, face feature templates and face recognition results.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the display screen of the computer equipment is an indispensable graphic and text output device in the computer, which is used to convert digital signals into optical signals, so that text and graphics can be displayed on the screen of the display screen.
  • the input device of the computer equipment is the main device for information exchange between the computer and the user or other devices, and is used to transmit data, instructions and certain flag information to the computer.
  • An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the above-mentioned face recognition method is implemented.
  • any reference to memory, storage, database or other medium provided in this application and used in the embodiments may include non-volatile and/or volatile memory.
  • Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in various forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double-rate SDRAM (SSRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Collating Specific Patterns (AREA)

Abstract

This application relates to the field of artificial intelligence and provides a face recognition method, apparatus, computer device, and storage medium. The method includes: acquiring an image to be processed; performing face detection on the image to be processed to obtain a corresponding face image; extracting face features from the face image; generating gender information corresponding to the face features based on a pre-stored gender feature database; generating age information corresponding to the face features based on a pre-stored age feature database; screening out, from all locally pre-stored face feature libraries, a designated face feature library corresponding to both the age information and the gender information; and matching the face features against all face feature templates in the designated face feature library to obtain a corresponding face recognition result.

Description

Face recognition method and apparatus, computer device, and storage medium
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on February 26, 2021, with application number 202110220518.6 and entitled "人脸识别方法、装置、计算机设备和存储介质" (Face recognition method and apparatus, computer device, and storage medium), the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of artificial intelligence, and in particular to a face recognition method and apparatus, a computer device, and a storage medium.
Background
The inventor realized that in existing face recognition methods, after a user's face feature data is extracted, the collected face feature data must be uploaded to the back end and compared one by one against all face feature templates stored in the back-end face database to generate a corresponding face recognition result. However, because the face database usually stores the face template features of all members, the number of template features that must be compared against the collected face features is very large, and a great deal of processing time is needed to compare all feature templates stored in the back-end face database against the face features to obtain the final face recognition result. Existing face recognition methods therefore suffer from the technical problems of low face recognition processing efficiency and a slow generation rate of face recognition results.
Technical Problem
The main purpose of this application is to provide a face recognition method and apparatus, a computer device, and a storage medium, aiming to solve the technical problems that existing face recognition methods have low face recognition processing efficiency and a slow generation rate of face recognition results.
Technical Solution
This application proposes a face recognition method, the method comprising the steps of:
acquiring an image to be processed;
performing face detection on the image to be processed based on a preset face detection algorithm to obtain a corresponding face image;
extracting corresponding face features from the face image;
calculating and generating gender information corresponding to the face features based on a locally pre-stored gender feature database, wherein gender face features are stored in the gender feature database; and,
calculating and generating age information corresponding to the face features based on a locally pre-stored age feature database, wherein age face features are stored in the age feature database;
based on the gender information and the age information, screening out, from all locally pre-stored face feature libraries, a designated face feature library corresponding to both the age information and the gender information;
matching the face features against all face feature templates in the designated face feature library to obtain a face recognition result corresponding to the face image.
This application also provides a computer device, comprising a memory and a processor, wherein a computer program is stored in the memory and the processor implements the steps of the above method when executing the computer program.
This application also provides a computer-readable storage medium on which a computer program is stored, wherein the steps of the above method are implemented when the computer program is executed by a processor.
Beneficial Effects
In this application, there is no need to perform feature matching between the face features and all face feature templates included in every face feature library, which effectively improves the processing efficiency of feature matching, reduces system consumption, and increases the efficiency of generating face matching results. In addition, because the designated face feature library is stored locally, the face recognition process is not affected by network conditions, so the face features can be matched against all face feature templates in the designated face feature library even without a network, and the face recognition result corresponding to the face image can be obtained quickly and accurately, effectively improving the user experience.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a face recognition method according to an embodiment of this application;
FIG. 2 is a schematic structural diagram of a face recognition apparatus according to an embodiment of this application;
FIG. 3 is a schematic structural diagram of a computer device according to an embodiment of this application.
Embodiments of the Invention
The execution body of this method embodiment is a face recognition apparatus. In practical applications, the face recognition apparatus may be implemented as a virtual apparatus, for example software code, or as a physical apparatus into which the relevant execution code is written or integrated, and it can interact with the user through a keyboard, mouse, remote control, touchpad, voice control device, or the like. The face recognition apparatus may specifically be a front-end terminal with a camera. The face recognition apparatus in this embodiment can effectively improve the processing efficiency of face feature recognition, increase the generation efficiency of face recognition results, and improve the flexibility and adaptability of face recognition processing.
Specifically, referring to FIG. 1, a face recognition method according to an embodiment of this application includes:
S1: acquiring an image to be processed.
As described in step S1, the image to be processed may be an image captured by the camera of the face recognition apparatus. The camera can capture a variety of face images, including static images, dynamic images, and faces in different positions and with different expressions. When the user is within the camera's shooting range, the camera automatically searches for and captures image data containing the user's face, so that subsequent face recognition processing can be performed based on that image data.
S2: performing face detection on the image to be processed based on a preset face detection algorithm to obtain a corresponding face image.
As described in step S2, the choice of face detection algorithm is not specifically limited and can be selected according to actual needs; for example, it may be the adaboost algorithm, the dlib algorithm, the opencv algorithm, and so on. In addition, after the face image is obtained, image preprocessing may further be performed on it. Face image preprocessing is the process of processing the image based on the face detection result so that it ultimately serves feature extraction. Because of various constraints and random interference, the original image acquired by the apparatus often cannot be used directly and must undergo preprocessing such as grayscale correction and noise filtering in the early stage of image processing. For a face image, the preprocessing mainly includes light compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening.
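As an illustration of this preprocessing stage, the following is a minimal sketch using OpenCV; the library choice, function names, and parameter values are assumptions for illustration and not part of the original disclosure.

```python
import cv2
import numpy as np

def preprocess_face(face_bgr, size=(112, 112)):
    """Minimal preprocessing sketch: grayscale, equalization, filtering, sharpening, normalization."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)          # grayscale transformation
    gray = cv2.equalizeHist(gray)                               # histogram equalization
    gray = cv2.GaussianBlur(gray, (3, 3), 0)                    # noise filtering
    # Simple sharpening kernel to restore edge detail after smoothing.
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    sharp = cv2.filter2D(gray, -1, kernel)
    face = cv2.resize(sharp, size)                              # geometric normalization to a fixed size
    return face.astype(np.float32) / 255.0                      # intensity normalization to [0, 1]
```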
S3: extracting corresponding face features from the face image.
As described in step S3, the features usable in face recognition processing are generally classified into visual features, pixel statistical features, face image transform coefficient features, face image algebraic features, and so on. Face feature extraction is carried out for certain features of the face. Face feature extraction, also called face representation, is the process of modeling the features of a face. The feature extraction method is not specifically limited; for example, two broad categories of face feature extraction methods can be used: knowledge-based representation methods, and representation methods based on algebraic features or statistical learning. In addition, the face features extracted from the face image correspond to multiple feature vectors. Extracting the corresponding face features from the face image facilitates subsequent intelligent, accurate, and timely face recognition processing of the face image based on those face features and the locally pre-stored face feature libraries.
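The disclosure does not fix a particular extractor, so the sketch below only illustrates the shape of this step: it assumes some embedding function (here a hypothetical `embed` callable, e.g. a pretrained descriptor or CNN) that maps a preprocessed face to a fixed-length feature vector.

```python
import numpy as np

def extract_face_feature(face, embed):
    """Map a preprocessed face image to an L2-normalized feature vector.

    `embed` is a hypothetical stand-in for whichever extractor is used in
    practice (knowledge-based descriptor, algebraic/statistical model, CNN, ...).
    """
    vec = np.asarray(embed(face), dtype=np.float32).ravel()
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec
```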
S4: calculating and generating gender information corresponding to the face features based on a locally pre-stored gender feature database, wherein gender face features are stored in the gender feature database; and,
S5: calculating and generating age information corresponding to the face features based on a locally pre-stored age feature database, wherein age face features are stored in the age feature database.
As described in steps S4 to S5, gender face features are stored in the gender feature database, and they may include the same number of male face features and female face features. Specifically, the sums of the distances between the extracted face features and the male face features and between the extracted face features and the female face features can be calculated and compared: if the sum of the distances to the male face features is greater than the sum of the distances to the female face features, the gender information corresponding to the face features is identified as female; if it is smaller, the gender information corresponding to the extracted face features is identified as male. In addition, age face features are stored in the age feature database, and they may include face feature data for each age group, with the number of age face features equal to the number of age groups. Specifically, by calculating and comparing the distances between the face features and each age face feature, the age group corresponding to the specified-age face feature with the smallest distance to the face features is obtained as the age group to which the face features belong, from which the age information corresponding to the face features can be derived.
S6: based on the gender information and the age information, screening out, from all locally pre-stored face feature libraries, a designated face feature library corresponding to both the age information and the gender information.
As described in step S6, a corresponding database label is preset for each face feature library; the database label includes at least an age label and a gender label corresponding to the set of face feature templates inside the database. By matching the gender information and the age information against the database labels at the same time, the designated face feature library corresponding to the face features can be found.
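A minimal sketch of this label lookup follows, assuming each library is keyed by an (age label, gender label) pair; the dictionary layout is an illustrative assumption rather than the patent's storage format.

```python
def find_designated_library(libraries, age_label, gender_label):
    """Return the face feature library whose database label matches both the
    predicted age label and gender label, or None if no such library exists.

    `libraries` is assumed to map (age_label, gender_label) -> list of templates.
    """
    return libraries.get((age_label, gender_label))

# Illustrative layout only:
# libraries = {("21-30", "male"): [tpl_a, tpl_b], ("21-30", "female"): [tpl_c], ...}
```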
S7: matching the face features against all face feature templates in the designated face feature library to obtain a face recognition result corresponding to the face image.
As described in step S7, the similarity between the face features and each face feature template in the designated face feature library can be calculated, the magnitudes of all calculated similarities can be analyzed, and the face recognition result corresponding to the face image can then be generated accurately according to the analysis result. Specifically, a similarity threshold can be preset: if all calculated similarities are smaller than the threshold, a face recognition result indicating recognition failure is generated; if there is at least one specified similarity greater than the threshold, the user associated with the largest of all specified similarities is taken as the matching object corresponding to the face image, and a face recognition result indicating successful recognition is generated. In addition, based on a preset parallel feature comparison instruction, the face features can be matched against all face feature templates in the designated face feature library at the same time; the parallel feature comparison instruction may specifically be a single instruction multiple data (SIMD) instruction. Using the parallel computing capability of SIMD instructions to match the face features against all face feature templates in the designated face feature library simultaneously further increases the comparison speed of feature data and effectively increases the generation rate of face recognition results.
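As a rough illustration of the batch comparison idea, the sketch below vectorizes the whole library comparison with NumPy (whose array operations typically compile down to SIMD instructions on modern CPUs); it is an assumption-laden stand-in rather than the patent's implementation, and it uses plain cosine similarity for brevity instead of the formula given later in step S700.

```python
import numpy as np

def batch_cosine_similarity(feature, templates):
    """Compare one face feature against every template in the library at once.

    feature:   shape (p,)   query feature vector
    templates: shape (n, p) matrix, one template per row
    Returns an array of n similarity scores computed in a single vectorized pass.
    """
    feature = feature / np.linalg.norm(feature)
    templates = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    return templates @ feature  # one matrix-vector product instead of n sequential loops
```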
In this embodiment, multiple face feature libraries containing face feature templates are generated locally in advance. When the face features of the face image corresponding to the image to be processed are obtained, the gender information and age information corresponding to the face features are first calculated, and the designated face feature library corresponding to both the gender information and the age information is then screened out from all face feature libraries. Subsequently, only the face features and the face feature templates contained in the designated face feature library need to undergo feature matching to accurately generate the face recognition result corresponding to the face image, without performing feature matching between the face features and all face feature templates included in every face feature library. This effectively improves the processing efficiency of feature matching, reduces system consumption, and increases the efficiency of generating face matching results. In addition, because the designated face feature library is stored locally, the face recognition process is not affected by network conditions, so the face features can be matched against all face feature templates in the designated face feature library even without a network, and the face recognition result corresponding to the face image can thus be obtained quickly and accurately, effectively improving the user experience.
Further, in an embodiment of this application, the above step S4 includes:
S400: acquiring gender face features from the gender feature database, wherein the gender face features include the same number of male face features and female face features;
S401: mapping the face features and all the gender face features into the same multidimensional space;
S402: calculating first distances between the face features and each of the male face features, and calculating a first sum of all the first distances; and,
S403: calculating second distances between the face features and each of the female face features, and calculating a second sum of all the second distances;
S404: judging whether the first sum is greater than the second sum;
S405: if the first sum is greater than the second sum, generating female gender information corresponding to the face features;
S406: if the first sum is not greater than the second sum, generating male gender information corresponding to the face features.
As described in steps S400 to S406, the step of calculating and generating gender information corresponding to the face features based on the locally pre-stored gender feature database may specifically include: first acquiring gender face features from the gender feature database, wherein the gender face features include the same number of male face features and female face features; the number of male/female face features is not specifically limited and can be set according to actual needs. The face features and all the gender face features are then mapped into the same multidimensional space: the feature vectors of the face features extracted from the face image and the feature vectors of the equal numbers of male and female face features are mapped into the same multidimensional space, with each feature vector corresponding to one dimension of the space. Next, the first distances between the face features and each male face feature are calculated and summed to obtain a first sum, and the second distances between the face features and each female face feature are calculated and summed to obtain a second sum. The distance between feature vectors can be calculated by the formula d(x, y) = sqrt(Σ(x_i − y_i)²), where d(x, y) is the distance between point x and point y; x may be the face feature and y a gender face feature (a male or female face feature); point x is a point in n-dimensional space expressed as (x_1, x_2, ..., x_n), where x_i (i = 1, 2, ..., n) is a real number called the i-th coordinate of x, and similarly point y is expressed as (y_1, y_2, ..., y_n) with y_i (i = 1, 2, ..., n) a real number called the i-th coordinate of y; i is a natural number, n is the dimension of the space, the summation traverses i from 1 to n, and sqrt denotes the square root. Finally, it is judged whether the first sum is greater than the second sum: if so, female gender information corresponding to the face features is generated; otherwise, male gender information corresponding to the face features is generated. In other words, by calculating and comparing the sums of the distances between the extracted face features and the male and female face features, the gender information corresponding to the face features is identified as female when the sum of the distances to the male face features is greater than the sum of the distances to the female face features, and as male when it is smaller. In this embodiment, the gender information corresponding to the face features is calculated and generated based on the locally pre-stored gender feature database, which facilitates subsequently screening out, accurately and quickly, the designated face feature library corresponding to both the age information and the gender information from all locally pre-stored face feature libraries, so that only the face features and the designated face feature library need to undergo feature matching, greatly improving the processing efficiency of feature matching, reducing system consumption, and increasing the efficiency of generating face matching results. In addition, because the designated face feature library is stored locally, the face recognition process is not affected by network conditions, and the face features can be matched against all face feature templates in the designated face feature library even without a network, so the face recognition result corresponding to the face image can be obtained quickly and accurately, improving the user experience.
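A minimal sketch of the S400-S406 decision follows, assuming the face feature and the stored gender face features are already available as equal-length NumPy vectors; the function and variable names are illustrative.

```python
import numpy as np

def euclidean(x, y):
    """d(x, y) = sqrt(sum_i (x_i - y_i)^2)"""
    return float(np.sqrt(np.sum((np.asarray(x) - np.asarray(y)) ** 2)))

def classify_gender(face_feature, male_features, female_features):
    """Return 'female' if the summed distance to the male features is larger
    than the summed distance to the female features, otherwise 'male'."""
    first_sum = sum(euclidean(face_feature, m) for m in male_features)     # S402
    second_sum = sum(euclidean(face_feature, f) for f in female_features)  # S403
    return "female" if first_sum > second_sum else "male"                  # S404-S406
```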
Further, in an embodiment of this application, the above step S5 includes:
S500: acquiring age face features from the age feature database, wherein the age face features include face feature data for each age group, and the number of age face features is the same as the number of age groups;
S501: mapping the face features and all the age face features into the same multidimensional space;
S502: calculating third distances between the face features and each of the age face features;
S503: based on the third distances, invoking a preset calculation formula to obtain a specified-age face feature matching the face features;
S504: acquiring specified age group information corresponding to the specified-age face feature, and using the specified age group information as the age information.
As described in steps S500 to S504, the step of calculating and generating age information corresponding to the face features based on the locally pre-stored age feature database specifically includes: first acquiring age face features from the age feature database, wherein the age face features include face feature data for each age group and the number of age face features is the same as the number of age groups, i.e., only one corresponding age face feature is set for each age group. The way the age groups are divided is not specifically limited and can be set according to actual needs; for example, with 10 years as the dividing interval and 100 years as the upper limit, the ages 0-100 can be divided into 10 corresponding age groups: 0-10, 11-20, ..., 91-100. The face features and all the age face features are then mapped into the same multidimensional space, and the third distance between the face features and each age face feature is calculated; the calculation of this third distance may refer to the calculation of the distance between the face features and the gender face features described above and is not repeated here. After the third distances are obtained, a preset calculation formula is invoked based on the third distances to obtain a specified-age face feature matching the face features. The calculation formula is Z = min(d_1, d_2, ..., d_j), where d_j is the distance between the face features and the j-th age face feature and j is a natural number. By mapping the feature vectors of the face features extracted from the face image and the age face features of each age group into the same multidimensional space, with each feature vector corresponding to one dimension of the space, and by calculating and comparing the distances between the face features and each age face feature, the age group corresponding to the specified-age face feature with the smallest distance to the face features is obtained as the age group to which the face features belong; and because only one corresponding age face feature is set for each age group, the time spent on feature calculation is effectively reduced and the specified-age face feature is generated faster. Finally, the specified age group information corresponding to the specified-age face feature is acquired and used as the age information. In this embodiment, the age information corresponding to the face features is calculated and generated based on the locally pre-stored age feature database, which facilitates subsequently screening out, accurately and quickly, the designated face feature library corresponding to both the age information and the gender information from all locally pre-stored face feature libraries, so that only the face features and the designated face feature library need to undergo feature matching, greatly improving the processing efficiency of feature matching, reducing system consumption, and increasing the efficiency of generating face matching results. Moreover, because the designated face feature library is stored locally, the face recognition process is not affected by network conditions, and the face features can be matched against all face feature templates in the designated face feature library even without a network, so the face recognition result corresponding to the face image can be obtained quickly and accurately, improving the user experience.
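A minimal sketch of S500-S504 follows, assuming one representative feature vector per age group; the ten 10-year bands mentioned above are used only as illustrative labels.

```python
import numpy as np

def classify_age_group(face_feature, age_features):
    """`age_features` maps an age-group label (e.g. '21-30') to its single
    representative age face feature; the group with the smallest distance wins,
    i.e. Z = min(d_1, ..., d_j)."""
    face_feature = np.asarray(face_feature)
    distances = {
        label: float(np.linalg.norm(face_feature - np.asarray(vec)))  # third distance, S502
        for label, vec in age_features.items()
    }
    return min(distances, key=distances.get)  # S503-S504: label of the nearest age face feature
```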
Further, in an embodiment of this application, the above step S7 includes:
S700: calculating the similarity between the face features and each face feature template in the designated face feature library;
S701: judging whether all calculated similarities are smaller than a preset similarity threshold;
S702: if all calculated similarities are smaller than the preset similarity threshold, generating a first face recognition result indicating that face recognition has failed;
S703: if not all calculated similarities are smaller than the preset similarity threshold, judging whether there are at least a specified number of first similarities greater than or equal to the preset similarity threshold among all the similarities, wherein the specified number is a positive integer greater than 1;
S704: if there are at least the specified number of first similarities among all the similarities, selecting a second similarity with the largest value from all the first similarities;
S705: acquiring designated user information associated with the face feature template corresponding to the second similarity, and generating, based on the designated user information, a second face recognition result indicating that face recognition has succeeded, wherein the second face recognition result carries at least the designated user information.
As described in steps S700 to S705, the step of matching the face features against all face feature templates in the designated face feature library to obtain the face recognition result corresponding to the face image may specifically include: first calculating the similarity between the face features and each face feature template in the designated face feature library. The formula for calculating the similarity between the face features and each feature template in the designated face feature library may specifically be:
S = (Σ M_k N_k) / (sqrt(Σ (M_k)²) · sqrt(Σ (N_k)²)) − Σ abs(M_k − N_k), where S is the similarity value, M is the face feature, N is the feature template, M_k is the k-th component vector of the face feature, N_k is the k-th component vector of the feature template, p is the number of component vectors, the summations traverse k from 1 to p, and abs denotes the absolute value. It is then judged whether all calculated similarities are smaller than a preset similarity threshold. The value of the preset similarity threshold is not specifically limited and can be set according to actual needs, for example 0.95. If all calculated similarities are smaller than the preset similarity threshold, a first face recognition result indicating recognition failure is generated. If not all calculated similarities are smaller than the preset similarity threshold, it is further judged whether there are at least a specified number of first similarities greater than or equal to the preset similarity threshold among all the similarities, where the specified number is a positive integer greater than 1. If there are at least the specified number of first similarities, the second similarity with the largest value is selected from all the first similarities. Finally, the designated user information associated with the face feature template corresponding to the second similarity is acquired, and a second face recognition result indicating successful recognition is generated based on the designated user information, where the second face recognition result carries at least the designated user information. In this embodiment, by calculating the similarity between the face features and each face feature template in the designated face feature library and comparing the magnitudes of all calculated similarities, the face recognition result corresponding to the face image can be generated accurately from the analysis result; and because only the similarities between the face features and the face feature templates in the designated face feature library need to be calculated, rather than the similarities between the face features and every face feature template in all face feature libraries, the processing efficiency of the similarity calculation is greatly improved, system consumption is reduced, and the efficiency of generating face matching results is greatly increased.
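The sketch below implements the similarity formula and the S700-S705 threshold logic as written above; the template/user record layout is an illustrative assumption, and the branch in which fewer than the specified number of similarities exceed the threshold is not spelled out in the source, so it is treated as a failure here.

```python
import numpy as np

def similarity(m, n):
    """S = (sum_k M_k*N_k) / (sqrt(sum_k M_k^2) * sqrt(sum_k N_k^2)) - sum_k |M_k - N_k|"""
    m, n = np.asarray(m, dtype=float), np.asarray(n, dtype=float)
    cosine = np.dot(m, n) / (np.sqrt(np.sum(m ** 2)) * np.sqrt(np.sum(n ** 2)))
    return float(cosine - np.sum(np.abs(m - n)))

def recognize(face_feature, library, threshold=0.95, specified_number=2):
    """`library` is assumed to be a list of (template_vector, user_info) pairs
    belonging to the designated face feature library."""
    scores = [(similarity(face_feature, tpl), user) for tpl, user in library]  # S700
    if all(s < threshold for s, _ in scores):                                  # S701-S702
        return {"success": False}                      # first face recognition result
    first = [(s, user) for s, user in scores if s >= threshold]                # S703
    if len(first) < specified_number:
        return {"success": False}   # branch not specified in the source; treated as failure
    best_score, best_user = max(first, key=lambda item: item[0])               # S704
    return {"success": True, "user": best_user, "score": best_score}           # S705
```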
Further, in an embodiment of this application, before the above step S6, the method includes:
S600: acquiring pre-collected designated face feature templates;
S601: acquiring an age label and a gender label corresponding to each of the designated face feature templates;
S602: based on the age labels and the gender labels, classifying and combining all the designated face feature templates to obtain multiple processed face feature template sets;
S603: locally generating multiple databases equal in number to the face feature template sets;
S604: establishing a one-to-one mapping relationship between the face feature template sets and the databases;
S605: based on the mapping relationship, storing each of the face feature template sets in the corresponding database to generate multiple corresponding face feature libraries;
S606: based on the age label and the gender label of the face feature template set contained in each face feature library, generating a corresponding database label for each of the face feature libraries.
As described in steps S600 to S606, before performing the step of screening out, based on the gender information and the age information, the designated face feature library corresponding to both the age information and the gender information from all locally pre-stored face feature libraries, the method may further include a generating step for generating the face feature libraries. The generating step may specifically include: first acquiring pre-collected designated face feature templates. The designated face feature templates may be all face feature templates corresponding to all staff contained in the all-staff face database stored on the back-end server or in the cloud, or they may include only the face feature templates corresponding to some people. Then the age label and the gender label corresponding to each designated face feature template are acquired. The age label refers to the label information corresponding to the age group in which the designated face feature template falls, with the age groups divided as described above; the gender label refers to the gender label information corresponding to the designated face feature template, i.e., a male label or a female label. The age labels and gender labels may be generated by manual annotation, or some of the face feature templates may first be manually annotated with age and gender labels and a machine learning method may then learn from the manual annotations to complete the labeling of the remaining face feature templates, effectively improving the efficiency of label generation. Then, based on the age labels and gender labels, all designated face feature templates are classified and combined to obtain multiple processed face feature template sets; classification and combination means placing the face feature templates with the same age label and gender label into the same set, thereby classifying all the designated face feature templates. After the face feature template sets are obtained, multiple databases equal in number to the face feature template sets are generated locally, and a one-to-one mapping relationship is established between the face feature template sets and the databases. Subsequently, based on the mapping relationship, each face feature template set is stored in the corresponding database to generate multiple corresponding face feature libraries; each face feature library stores a corresponding face feature template set composed of face feature templates with the same age label and gender label. Finally, after the face feature libraries are generated, a corresponding database label is generated for each face feature library based on the age label and gender label of the face feature template set it contains; the database label includes at least the age label and gender label corresponding to the face feature template set inside the database. In this embodiment, by generating from the designated face feature templates multiple face feature libraries corresponding to age labels and gender labels and storing them locally, the designated face feature library corresponding to the face features can later be quickly screened out locally, and the face features can be matched against all face feature templates in the designated face feature library in a timely manner even without a network, so that the face recognition result corresponding to the face image can be obtained quickly and accurately. In addition, because only the face features and the designated face feature library need to undergo feature matching, rather than the face features and all face feature libraries, the processing efficiency of feature matching is greatly improved, system consumption is reduced, and the efficiency of generating face recognition results is increased.
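The following sketch groups labeled templates into per-(age label, gender label) libraries as described in S600-S606; the record fields are illustrative assumptions rather than the patent's data model.

```python
from collections import defaultdict

def build_face_feature_libraries(designated_templates):
    """`designated_templates`: iterable of dicts with illustrative fields
    {'vector': ..., 'user': ..., 'age_label': '21-30', 'gender_label': 'male'}.

    Returns a mapping from the database label (age_label, gender_label) to the
    face feature template set stored in that library (S602-S606)."""
    libraries = defaultdict(list)
    for tpl in designated_templates:
        key = (tpl["age_label"], tpl["gender_label"])  # classification and combination, S602
        libraries[key].append(tpl)                     # one database per template set, S603-S605
    return dict(libraries)                             # keys serve as the database labels, S606
```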
Further, in an embodiment of this application, before the above step S600, the method includes:
S6000: acquiring all-staff face feature templates from a preset all-staff face feature library; and,
S6001: acquiring target region information;
S6002: screening the all-staff face feature templates based on the region information to screen out target face feature templates corresponding to the target region information;
S6003: using the target face feature templates as the designated face feature templates.
As described in steps S6000 to S6003, before performing the step of acquiring the pre-collected designated face feature templates, the method may further include a step of determining the designated face feature templates, which may specifically include: first acquiring the all-staff face feature templates from the preset all-staff face feature library and, at the same time, acquiring target region information. The all-staff face feature templates are all face feature templates corresponding to all staff contained in the all-staff face database stored on the back-end server or in the cloud, and the target region information may refer to local region information. The all-staff face feature templates are then screened based on the region information to screen out the target face feature templates corresponding to the target region information; the target face feature templates are the face feature template data needed for performing face recognition processing on people locally, i.e., the face feature templates of the relevant people corresponding to the target region information. Finally, the target face feature templates are used as the designated face feature templates. In this embodiment, the target face feature templates needed for local face recognition processing are selected from all the all-staff face feature templates in the all-staff face feature library based on the target region information and stored locally, rather than downloading and storing all the all-staff face feature templates locally, which effectively saves local resources of the face recognition apparatus, improves its performance, and helps increase the processing speed of its face recognition processing.
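A minimal sketch of the S6000-S6003 screening step follows; the `region` field on each template record is an illustrative assumption.

```python
def screen_templates_by_region(all_staff_templates, target_region):
    """Keep only the face feature templates whose region matches the target
    region information, so that only these are downloaded and stored locally."""
    return [tpl for tpl in all_staff_templates if tpl.get("region") == target_region]
```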
Further, in an embodiment of this application, the above step S2 includes:
S200: using the face detection algorithm to perform face detection on the image to be processed to obtain a corresponding face detection result;
S201: determining position coordinates of the face image contained in the image to be processed based on the face detection result;
S202: cropping the image of the region to which the position coordinates belong from the image to be processed to obtain a corresponding cropped image;
S203: using the cropped image as the face image.
As described in steps S200 to S203, the step of performing face detection on the image to be processed based on the preset face detection algorithm to obtain the corresponding face image may specifically include: first using the face detection algorithm to perform face detection on the image to be processed to obtain a corresponding face detection result. In practical applications, face detection is mainly used as preprocessing for face recognition, i.e., accurately calibrating the position and size of the face in the image. Face images contain very rich pattern features, such as histogram features, color features, template features, structural features, and Haar features; face detection picks out the useful feature information from these pattern features and uses it to detect faces. The face detection algorithm may be the adaboost algorithm: during face detection, the adaboost algorithm is used to select the rectangular features (weak classifiers) that best represent the face, the weak classifiers are combined into a strong classifier by weighted voting, and several trained strong classifiers are then connected in series into a cascaded classifier with a cascade structure, which effectively improves the detection speed of the classifier. Then the position coordinates of the face image contained in the image to be processed are determined based on the face detection result; after the face detection result is obtained, the face image in the image to be processed can be located, i.e., its position coordinates can be obtained. The image of the region to which the position coordinates belong is then cropped from the image to be processed to obtain a corresponding cropped image, and finally the cropped image is used as the face image. By performing the face crop operation, the corresponding face image can be cut out from the image to be processed and the background information in the image can be removed, reducing noise interference. In this embodiment, the corresponding face image is obtained by performing face detection on the image to be processed based on the preset face detection algorithm, which facilitates subsequent extraction of corresponding face features from the face image and matching of the obtained face features against the face feature templates in the associated designated face feature library, so that the face recognition result corresponding to the face image can be obtained quickly and accurately.
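As an illustration of S200-S203, the sketch below uses OpenCV's bundled Haar cascade (a boosted cascade in the AdaBoost family) to locate the face and crop it; the specific cascade file and detection parameters are assumptions for illustration, not the patent's configuration.

```python
import cv2

def detect_and_crop_face(image_bgr):
    """Detect the largest face in the image and return the cropped face region,
    or None if no face is found."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)  # S200-S201
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])  # position coordinates of the face
    return image_bgr[y:y + h, x:x + w]                        # S202-S203: cropped face image
```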
The face recognition method in the embodiments of this application can also be applied to the blockchain field, for example by storing data such as the face recognition results on a blockchain. Using a blockchain to store and manage the face recognition results can effectively guarantee their security and immutability.
A blockchain is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. A blockchain can include a blockchain underlying platform, a platform product service layer, and an application service layer.
The blockchain underlying platform can include processing modules such as user management, basic services, smart contracts, and operation monitoring. The user management module is responsible for identity information management of all blockchain participants, including maintaining public and private key generation (account management), key management, and maintaining the correspondence between users' real identities and blockchain addresses (authority management), and, where authorized, supervising and auditing the transactions of certain real identities and providing rule configuration for risk control (risk control auditing). The basic service module is deployed on all blockchain node devices to verify the validity of business requests and, after consensus on valid requests is completed, record them in storage; for a new business request, the basic service first performs interface adaptation, parsing, and authentication (interface adaptation), then encrypts the business information via the consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger after encryption (network communication), and stores the record. The smart contract module is responsible for contract registration and issuance as well as contract triggering and execution: developers can define contract logic in a programming language and publish it to the blockchain (contract registration), and execution is triggered by keys or other events according to the logic of the contract terms to complete the contract logic, with functions for contract upgrade and cancellation also provided. The operation monitoring module is mainly responsible for deployment during product release, configuration modification, contract settings, and cloud adaptation, as well as the visualized output of real-time status during product operation, such as alarms, monitoring of network conditions, and monitoring of node device health.
Referring to FIG. 2, an embodiment of this application further provides a face recognition apparatus, including:
a first acquisition module 1, configured to acquire an image to be processed;
a detection module 2, configured to perform face detection on the image to be processed based on a preset face detection algorithm to obtain a corresponding face image;
an extraction module 3, configured to extract corresponding face features from the face image;
a first calculation module 4, configured to calculate and generate gender information corresponding to the face features based on a locally pre-stored gender feature database, wherein gender face features are stored in the gender feature database; and,
a second calculation module 5, configured to calculate and generate age information corresponding to the face features based on a locally pre-stored age feature database, wherein age face features are stored in the age feature database;
a first screening module 6, configured to screen out, based on the gender information and the age information, a designated face feature library corresponding to both the age information and the gender information from all locally pre-stored face feature libraries;
a first generation module 7, configured to match the face features against all face feature templates in the designated face feature library to obtain a face recognition result corresponding to the face image.
In this embodiment, the implementation of the functions and roles of the first acquisition module, detection module, extraction module, first calculation module, second calculation module, first screening module, and first generation module in the above face recognition apparatus is described in detail in the implementation of the corresponding steps S1 to S7 of the above face recognition method and is not repeated here.
参照图3,本申请实施例中还提供一种计算机设备,该计算机设备可以是服务器,其内部结构可以如图3所示。该计算机设备包括通过系统总线连接的处理器、存储器、网络接口、显示屏、输入装置和数据库。其中,该计算机设备设计的处理器用于提供计算和控制能力。该计算机设备的存储器包括存储介质、内存储器。该存储介质存储有操作系统、计算机程序和数据库。该内存储器为存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的数据库用于存储人脸图像、人脸特征、性别信息、年龄信息、人脸特征模板以及人脸识别结果。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机设备的显示屏是计算机中必不可少的一种图文输出设备,用于将数字信号转换为光信号,使文字与图形在显示屏的屏幕上显示出来。该计算机设备的输入装置是计算机与用户或其他设备之间进行信息交换的主要装置,用于把数据、指令及某些标志信息等输送到计算机中去。该计算机程序被处理器执行时以实现上述人脸识别方法。
本申请一实施例还提供一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现上述人脸识别方法。
本领域普通技术人员可以理解，实现上述实施例方法中的全部或部分流程，可以通过计算机程序指令相关的硬件来完成，所述计算机程序可存储于计算机可读取存储介质中，该计算机程序在执行时，可包括如上述各方法实施例的流程。其中，本申请所提供的和实施例中所使用的对存储器、存储、数据库或其它介质的任何引用，均可包括非易失性和/或易失性存储器。非易失性存储器可以包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限，RAM有多种形式，诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双倍数据率SDRAM(DDR SDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)以及存储器总线动态RAM(RDRAM)等。

Claims (20)

  1. 一种人脸识别方法,其中,包括:
    获取待处理图像;
    基于预设的人脸检测算法对所述待处理图像进行人脸检测,得到对应的人脸图像;
    从所述人脸图像中提取出对应的人脸特征;
    基于本地预存储的性别特征数据库,计算生成与所述人脸特征对应的性别信息,其中,所述性别特征数据库内存储有性别人脸特征;以及,
    基于本地预存储的年龄特征数据库,计算生成与所述人脸特征对应的年龄信息,其中,所述年龄特征数据库内存储有年龄人脸特征;
    基于所述性别信息与所述年龄信息,从本地预存储的所有人脸特征库中筛选出与所述年龄信息及所述性别信息同时对应的指定人脸特征库;
    使用所述人脸特征分别与所述指定人脸特征库中的所有人脸特征模板进行匹配处理,得到与所述人脸图像对应的人脸识别结果。
  2. 根据权利要求1所述的人脸识别方法,其中,所述基于本地预存储的性别特征数据库,计算生成与所述人脸特征对应的性别信息的步骤,包括:
    从所述性别特征数据库内获取性别人脸特征,其中,所述性别人脸特征包括数量相同的男性人脸特征与女性人脸特征;
    将所述人脸特征与所有所述性别人脸特征映射至相同的多维空间内;
    分别计算所述人脸特征与每一个所述男性人脸特征的第一距离,并计算所有所述第一距离的第一和值;以及,
    分别计算所述人脸特征与每一个所述女性人脸特征的第二距离,并计算所有所述第二距离的第二和值;
    判断所述第一和值是否大于所述第二和值;
    若所述第一和值大于所述第二和值,则生成与所述人脸特征对应的女性性别信息;
    若所述第一和值不大于所述第二和值,则生成与所述人脸特征对应的男性性别信息。
  3. 根据权利要求1所述的人脸识别方法,其中,所述基于本地预存储的年龄特征数据库,计算生成与所述人脸特征对应的年龄信息的步骤,包括:
    从所述年龄特征数据库内获取年龄人脸特征,其中,所述年龄人脸特征包括每一个年龄段的人脸特征数据,且所述年龄人脸特征的数量与所述年龄段的数量相同;
    将所述人脸特征与所有所述年龄人脸特征映射至相同的多维空间内;
    计算所述人脸特征与每一个所述年龄人脸特征之间的第三距离;
    基于所述第三距离,调用预设的计算公式计算得到与所述人脸特征匹配的指定年龄人脸特征;
    获取与所述指定年龄人脸特征对应的指定年龄段信息,并将所述指定年龄段信息作为所述年龄信息。
  4. 根据权利要求1所述的人脸识别方法,其中,所述使用所述人脸特征分别与所述指定人脸特征库中的所有人脸特征模板进行匹配处理,得到与所述人脸图像对应的人脸识别结果的步骤,包括:
    计算所述人脸特征与所述指定人脸特征库中每一个人脸特征模板之间的相似度;
    判断计算得到的所有相似度是否均小于预设相似度阈值;
    若计算得到的所有相似度均小于所述预设相似度阈值,生成人脸识别失败的第一人脸识别结果;
    若计算得到的所有相似度并非均小于所述预设相似度阈值，判断所有所述相似度中是否存在至少指定数量个大于或等于所述预设相似度阈值的第一相似度，其中，所述指定数量为大于1的正整数；
    若所有所述相似度中存在至少指定数量个所述第一相似度,从所有所述第一相似度中筛选出数值最大的第二相似度;
    获取与所述第二相似度对应的人脸特征模板相关联的指定用户信息,并基于所述指定用户信息生成人脸识别成功的第二人脸识别结果,其中,所述第二人脸识别结果至少携带所述指定用户信息。
  5. 根据权利要求1所述的人脸识别方法,其中,所述基于所述性别信息与所述年龄信息,从本地预存储的所有人脸特征库中筛选出与所述年龄信息及所述性别信息同时对应的指定人脸特征库的步骤之前,包括:
    获取预先采集的指定人脸特征模板;
    获取与各所述指定人脸特征模板分别对应的年龄标签与性别标签;
    基于所述年龄标签与所述性别标签,对所有所述指定人脸特征模板进行分类组合处理,得到处理后的多个人脸特征模板集合;
    在本地生成与所述人脸特征模板集合的数量相同的多个数据库;
    为所述人脸特征模板集合与所述数据库之间建立一一对应的映射关系;
    基于所述映射关系,将各所述人脸特征模板集合对应存储至各所述数据库内,生成对应的多个人脸特征库;
    基于所述人脸特征库中包含的人脸特征模板集合的年龄标签与性别标签,为每一个所述人脸特征库生成对应的数据库标签。
  6. 根据权利要求5所述的人脸识别方法,其中,所述获取预先采集的指定人脸特征模板的步骤之前,包括:
    获取预设的全员人脸特征库中的全员人脸特征模板;以及,
    获取目标区域信息;
    基于所述目标区域信息对所述全员人脸特征模板进行筛选处理，筛选出与所述目标区域信息对应的目标人脸特征模板；
    将所述目标人脸特征模板作为所述指定人脸特征模板。
  7. 根据权利要求1所述的人脸识别方法,其中,所述基于预设的人脸检测算法对所述待处理图像进行人脸检测,得到对应的人脸图像的步骤,包括:
    采用所述人脸检测算法对所述待处理图像进行人脸检测,得到对应的人脸检测结果;
    基于所述人脸检测结果确定出所述待处理图像中包含的人脸图像的位置坐标;
    将所述位置坐标所属区域的图像从所述待处理图像中截取出来,得到对应的截取图像;
    将所述截取图像作为所述人脸图像。
  8. 一种人脸识别装置,其中,包括:
    第一获取模块,用于获取待处理图像;
    检测模块,用于基于预设的人脸检测算法对所述待处理图像进行人脸检测,得到对应的人脸图像;
    提取模块,用于从所述人脸图像中提取出对应的人脸特征;
    第一计算模块,用于基于本地预存储的性别特征数据库,计算生成与所述人脸特征对应的性别信息,其中,所述性别特征数据库内存储有性别人脸特征;以及,
    第二计算模块,用于基于本地预存储的年龄特征数据库,计算生成与所述人脸特征对应的年龄信息,其中,所述年龄特征数据库内存储有年龄人脸特征;
    第一筛选模块,用于基于所述性别信息与所述年龄信息,从本地预存储的所有人脸特征库中筛选出与所述年龄信息及所述性别信息同时对应的指定人脸特征库;
    第一生成模块,用于使用所述人脸特征分别与所述指定人脸特征库中的所有人脸特征模板进行匹配处理,得到与所述人脸图像对应的人脸识别结果。
  9. 一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机程序,其中,所述处理器执行所述计算机程序时实现如下步骤:
    获取待处理图像;
    基于预设的人脸检测算法对所述待处理图像进行人脸检测,得到对应的人脸图像;
    从所述人脸图像中提取出对应的人脸特征;
    基于本地预存储的性别特征数据库,计算生成与所述人脸特征对应的性别信息,其中,所述性别特征数据库内存储有性别人脸特征;以及,
    基于本地预存储的年龄特征数据库,计算生成与所述人脸特征对应的年龄信息,其中,所述年龄特征数据库内存储有年龄人脸特征;
    基于所述性别信息与所述年龄信息,从本地预存储的所有人脸特征库中筛选出与所述年龄信息及所述性别信息同时对应的指定人脸特征库;
    使用所述人脸特征分别与所述指定人脸特征库中的所有人脸特征模板进行匹配处理,得到与所述人脸图像对应的人脸识别结果。
  10. 根据权利要求9所述的计算机设备,其中,所述基于本地预存储的性别特征数据库,计算生成与所述人脸特征对应的性别信息的步骤,包括:
    从所述性别特征数据库内获取性别人脸特征,其中,所述性别人脸特征包括数量相同的男性人脸特征与女性人脸特征;
    将所述人脸特征与所有所述性别人脸特征映射至相同的多维空间内;
    分别计算所述人脸特征与每一个所述男性人脸特征的第一距离,并计算所有所述第一距离的第一和值;以及,
    分别计算所述人脸特征与每一个所述女性人脸特征的第二距离,并计算所有所述第二距离的第二和值;
    判断所述第一和值是否大于所述第二和值;
    若所述第一和值大于所述第二和值,则生成与所述人脸特征对应的女性性别信息;
    若所述第一和值不大于所述第二和值,则生成与所述人脸特征对应的男性性别信息。
  11. 根据权利要求9所述的计算机设备,其中,所述基于本地预存储的年龄特征数据库,计算生成与所述人脸特征对应的年龄信息的步骤,包括:
    从所述年龄特征数据库内获取年龄人脸特征,其中,所述年龄人脸特征包括每一个年龄段的人脸特征数据,且所述年龄人脸特征的数量与所述年龄段的数量相同;
    将所述人脸特征与所有所述年龄人脸特征映射至相同的多维空间内;
    计算所述人脸特征与每一个所述年龄人脸特征之间的第三距离;
    基于所述第三距离,调用预设的计算公式计算得到与所述人脸特征匹配的指定年龄人脸特征;
    获取与所述指定年龄人脸特征对应的指定年龄段信息,并将所述指定年龄段信息作为所述年龄信息。
  12. 根据权利要求9所述的计算机设备,其中,所述使用所述人脸特征分别与所述指定人脸特征库中的所有人脸特征模板进行匹配处理,得到与所述人脸图像对应的人脸识别结果的步骤,包括:
    计算所述人脸特征与所述指定人脸特征库中每一个人脸特征模板之间的相似度;
    判断计算得到的所有相似度是否均小于预设相似度阈值;
    若计算得到的所有相似度均小于所述预设相似度阈值,生成人脸识别失败的第一人脸识别结果;
    若计算得到的所有相似度并非均小于所述预设相似度阈值，判断所有所述相似度中是否存在至少指定数量个大于或等于所述预设相似度阈值的第一相似度，其中，所述指定数量为大于1的正整数；
    若所有所述相似度中存在至少指定数量个所述第一相似度,从所有所述第一相似度中筛选出数值最大的第二相似度;
    获取与所述第二相似度对应的人脸特征模板相关联的指定用户信息,并基于所述指定用户信息生成人脸识别成功的第二人脸识别结果,其中,所述第二人脸识别结果至少携带所述指定用户信息。
  13. 根据权利要求9所述的计算机设备,其中,所述基于所述性别信息与所述年龄信息,从本地预存储的所有人脸特征库中筛选出与所述年龄信息及所述性别信息同时对应的指定人脸特征库的步骤之前,包括:
    获取预先采集的指定人脸特征模板;
    获取与各所述指定人脸特征模板分别对应的年龄标签与性别标签;
    基于所述年龄标签与所述性别标签,对所有所述指定人脸特征模板进行分类组合处理,得到处理后的多个人脸特征模板集合;
    在本地生成与所述人脸特征模板集合的数量相同的多个数据库;
    为所述人脸特征模板集合与所述数据库之间建立一一对应的映射关系;
    基于所述映射关系,将各所述人脸特征模板集合对应存储至各所述数据库内,生成对应的多个人脸特征库;
    基于所述人脸特征库中包含的人脸特征模板集合的年龄标签与性别标签,为每一个所述人脸特征库生成对应的数据库标签。
  14. 根据权利要求13所述的计算机设备,其中,所述获取预先采集的指定人脸特征模板的步骤之前,包括:
    获取预设的全员人脸特征库中的全员人脸特征模板;以及,
    获取目标区域信息;
    基于所述目标区域信息对所述全员人脸特征模板进行筛选处理，筛选出与所述目标区域信息对应的目标人脸特征模板；
    将所述目标人脸特征模板作为所述指定人脸特征模板。
  15. 根据权利要求9所述的计算机设备,其中,所述基于预设的人脸检测算法对所述待处理图像进行人脸检测,得到对应的人脸图像的步骤,包括:
    采用所述人脸检测算法对所述待处理图像进行人脸检测,得到对应的人脸检测结果;
    基于所述人脸检测结果确定出所述待处理图像中包含的人脸图像的位置坐标;
    将所述位置坐标所属区域的图像从所述待处理图像中截取出来,得到对应的截取图像;
    将所述截取图像作为所述人脸图像。
  16. 一种计算机可读存储介质,其上存储有计算机程序,其中,所述计算机程序被处理器执行时实现如下步骤:
    获取待处理图像;
    基于预设的人脸检测算法对所述待处理图像进行人脸检测,得到对应的人脸图像;
    从所述人脸图像中提取出对应的人脸特征;
    基于本地预存储的性别特征数据库,计算生成与所述人脸特征对应的性别信息,其中,所述性别特征数据库内存储有性别人脸特征;以及,
    基于本地预存储的年龄特征数据库,计算生成与所述人脸特征对应的年龄信息,其中,所述年龄特征数据库内存储有年龄人脸特征;
    基于所述性别信息与所述年龄信息,从本地预存储的所有人脸特征库中筛选出与所述年龄信息及所述性别信息同时对应的指定人脸特征库;
    使用所述人脸特征分别与所述指定人脸特征库中的所有人脸特征模板进行匹配处理,得到与所述人脸图像对应的人脸识别结果。
  17. 根据权利要求16所述的计算机可读存储介质,其中,所述基于本地预存储的性别特征数据库,计算生成与所述人脸特征对应的性别信息的步骤,包括:
    从所述性别特征数据库内获取性别人脸特征,其中,所述性别人脸特征包括数量相同的男性人脸特征与女性人脸特征;
    将所述人脸特征与所有所述性别人脸特征映射至相同的多维空间内;
    分别计算所述人脸特征与每一个所述男性人脸特征的第一距离,并计算所有所述第一距离的第一和值;以及,
    分别计算所述人脸特征与每一个所述女性人脸特征的第二距离,并计算所有所述第二距离的第二和值;
    判断所述第一和值是否大于所述第二和值;
    若所述第一和值大于所述第二和值,则生成与所述人脸特征对应的女性性别信息;
    若所述第一和值不大于所述第二和值,则生成与所述人脸特征对应的男性性别信息。
  18. 根据权利要求16所述的计算机可读存储介质,其中,所述基于本地预存储的年龄特征数据库,计算生成与所述人脸特征对应的年龄信息的步骤,包括:
    从所述年龄特征数据库内获取年龄人脸特征,其中,所述年龄人脸特征包括每一个年龄段的人脸特征数据,且所述年龄人脸特征的数量与所述年龄段的数量相同;
    将所述人脸特征与所有所述年龄人脸特征映射至相同的多维空间内;
    计算所述人脸特征与每一个所述年龄人脸特征之间的第三距离;
    基于所述第三距离,调用预设的计算公式计算得到与所述人脸特征匹配的指定年龄人脸特征;
    获取与所述指定年龄人脸特征对应的指定年龄段信息,并将所述指定年龄段信息作为所述年龄信息。
  19. 根据权利要求16所述的计算机可读存储介质,其中,所述使用所述人脸特征分别与所述指定人脸特征库中的所有人脸特征模板进行匹配处理,得到与所述人脸图像对应的人脸识别结果的步骤,包括:
    计算所述人脸特征与所述指定人脸特征库中每一个人脸特征模板之间的相似度;
    判断计算得到的所有相似度是否均小于预设相似度阈值;
    若计算得到的所有相似度均小于所述预设相似度阈值,生成人脸识别失败的第一人脸识别结果;
    若计算得到的所有相似度并非均小于所述预设相似度阈值，判断所有所述相似度中是否存在至少指定数量个大于或等于所述预设相似度阈值的第一相似度，其中，所述指定数量为大于1的正整数；
    若所有所述相似度中存在至少指定数量个所述第一相似度,从所有所述第一相似度中筛选出数值最大的第二相似度;
    获取与所述第二相似度对应的人脸特征模板相关联的指定用户信息,并基于所述指定用户信息生成人脸识别成功的第二人脸识别结果,其中,所述第二人脸识别结果至少携带所述指定用户信息。
  20. 根据权利要求16所述的计算机可读存储介质,其中,所述基于所述性别信息与所述年龄信息,从本地预存储的所有人脸特征库中筛选出与所述年龄信息及所述性别信息同时对应的指定人脸特征库的步骤之前,包括:
    获取预先采集的指定人脸特征模板;
    获取与各所述指定人脸特征模板分别对应的年龄标签与性别标签;
    基于所述年龄标签与所述性别标签,对所有所述指定人脸特征模板进行分类组合处理,得到处理后的多个人脸特征模板集合;
    在本地生成与所述人脸特征模板集合的数量相同的多个数据库;
    为所述人脸特征模板集合与所述数据库之间建立一一对应的映射关系;
    基于所述映射关系,将各所述人脸特征模板集合对应存储至各所述数据库内,生成对应的多个人脸特征库;
    基于所述人脸特征库中包含的人脸特征模板集合的年龄标签与性别标签,为每一个所述人脸特征库生成对应的数据库标签。
PCT/CN2021/109467 2021-02-26 2021-07-30 人脸识别方法、装置、计算机设备和存储介质 WO2022179046A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110220518.6 2021-02-26
CN202110220518.6A CN112949468A (zh) 2021-02-26 2021-02-26 人脸识别方法、装置、计算机设备和存储介质

Publications (1)

Publication Number Publication Date
WO2022179046A1 true WO2022179046A1 (zh) 2022-09-01

Family

ID=76246659

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/109467 WO2022179046A1 (zh) 2021-02-26 2021-07-30 人脸识别方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN112949468A (zh)
WO (1) WO2022179046A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949468A (zh) * 2021-02-26 2021-06-11 深圳壹账通智能科技有限公司 人脸识别方法、装置、计算机设备和存储介质
CN115223286A (zh) * 2022-06-14 2022-10-21 潮趴部落(杭州)科技有限公司 一种客户特征感知的智能门框
CN116895093B (zh) * 2023-09-08 2024-01-23 苏州浪潮智能科技有限公司 一种人脸识别方法、装置、设备及计算机可读存储介质
CN116912918B (zh) * 2023-09-08 2024-01-23 苏州浪潮智能科技有限公司 一种人脸识别方法、装置、设备及计算机可读存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950358A (zh) * 2010-09-30 2011-01-19 冠捷显示科技(厦门)有限公司 智能电视自动年龄估计与性别判别的方法
CN107273796A (zh) * 2017-05-05 2017-10-20 珠海数字动力科技股份有限公司 一种基于人脸特征的快速人脸识别搜索方法
CN109815775A (zh) * 2017-11-22 2019-05-28 深圳市祈飞科技有限公司 一种基于人脸属性的人脸识别方法及系统
CN112312210A (zh) * 2020-10-30 2021-02-02 深圳创维-Rgb电子有限公司 电视字号声音自动调节处理方法、装置、智能终端及介质
CN112949468A (zh) * 2021-02-26 2021-06-11 深圳壹账通智能科技有限公司 人脸识别方法、装置、计算机设备和存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117409470A (zh) * 2023-12-15 2024-01-16 千巡科技(深圳)有限公司 一种人脸识别特征数据动态匹配方法、系统、装置和介质
CN117409470B (zh) * 2023-12-15 2024-03-15 千巡科技(深圳)有限公司 一种人脸识别特征数据动态匹配方法、系统、装置和介质

Also Published As

Publication number Publication date
CN112949468A (zh) 2021-06-11

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21927475

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.11.2023)