WO2023275606A1 - Face recognition method, system, device, electronic device and storage medium - Google Patents

Face recognition method, system, device, electronic device and storage medium

Info

Publication number
WO2023275606A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
face
preset
features
network node
Prior art date
Application number
PCT/IB2021/060012
Other languages
English (en)
French (fr)
Inventor
孙栋梁
崔盛平
张帅
Original Assignee
商汤国际私人有限公司
Priority date
Filing date
Publication date
Application filed by 商汤国际私人有限公司
Publication of WO2023275606A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Definitions

  • Face recognition method, system, device, electronic device and storage medium. This application claims priority to Chinese patent application No. 202110737919.9, filed on June 30, 2021 and entitled "Face recognition method, system, device, electronic device and storage medium", the entire content of which is incorporated herein by reference.
  • Technical Field The present disclosure relates to the technical field of image processing, and in particular, to a face recognition method, device, system, electronic equipment, storage medium, and computer program product.
  • Face recognition based on deep learning is widely used in various scenarios, such as security, face-based access control, Internet entertainment, and payment.
  • Face recognition technology can acquire a user's face image, match the face image with pre-stored images, and perform corresponding processing according to the matching result. For example, in an access-control scenario, if the matching result indicates a match, the door can be controlled to open so that the user can pass through.
  • The present disclosure provides a face recognition method, including: in response to an operation request initiated by a target device, acquiring a target face image of a target user collected by the target device, and extracting a target face feature of the target face image; determining a target face sub-library associated with the target device, wherein the preset face features stored in the target face sub-library are a part of the total face features stored in a total face library; matching the target face feature with the preset face features, and determining a first matching result corresponding to the target face image; and determining, based on the first matching result, a first response result corresponding to the operation request.
  • By determining the target face sub-library associated with the target device, and because the preset face features stored in the target face sub-library are only a part of the total face features stored in the total face library, the number of preset face features to be searched is smaller than that of the total face library. Therefore, when the preset face features stored in the target face sub-library are matched with the target face feature corresponding to the target face image, the first matching result corresponding to the target face image can be determined more quickly and more accurately; furthermore, based on the first matching result, the first response result corresponding to the operation request can be determined more efficiently.
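  • As a minimal illustration of the matching step above (a sketch only; the in-memory dictionary of preset features and the 0.6 similarity threshold are assumptions, although the description later does mention cosine similarity as the matching measure):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two face feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_sublibrary(target_feature: np.ndarray,
                             sub_library: dict,
                             threshold: float = 0.6):
    """sub_library: user_id -> preset face feature (the device's sub-library).
    Returns (best_user_id, best_score) on a hit, (None, best_score) otherwise."""
    best_id, best_score = None, -1.0
    for user_id, preset_feature in sub_library.items():
        score = cosine_similarity(target_feature, preset_feature)
        if score > best_score:
            best_id, best_score = user_id, score
    if best_score >= threshold:
        return best_id, best_score    # first matching result: match found
    return None, best_score           # first matching result: no match
```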
  • In an optional implementation, acquiring the target face image of the target user collected by the target device in response to the operation request initiated by the target device includes: in response to the operation request initiated by the target device, controlling the target device to collect multiple frames of candidate face images of the target user; and acquiring, from the multiple frames of candidate face images, a target face image corresponding to the target user based on at least one of the position of the face in the candidate face image, the orientation of the face in the candidate face image, and the illumination information of the candidate face image.
  • In this way, the target face image corresponding to the target user can be selected from the multiple frames of candidate face images according to at least one set condition, so that the selected target face image has better image quality; when face recognition is performed based on this higher-quality target face image, the recognition accuracy can be improved.
  • In an optional implementation, determining the target face sub-library associated with the target device includes: acquiring historical operation information of the target device; determining, according to the historical operation information, preset face features used by the target device from the total face features pre-stored in the total face library; and determining the target face sub-library associated with the target device based on the preset face features used by the target device.
  • In this way, the preset face features used by the target device can be determined from the total face library according to the acquired historical operation records of the target device, so that the selected preset face features are associated with the target device and the preset face features stored in the target face sub-library are more likely to be accessed on the target device; based on the selected preset face features, the target face sub-library associated with the target device can be determined more accurately. Meanwhile, while the face feature matching requirements of the target device are met, the number of preset face features in the target face sub-library is reduced, so that the first matching result corresponding to the target face image can be determined relatively quickly when the target face feature is matched with the preset face features.
  • In an optional implementation, after determining the target face sub-library associated with the target device, the method further includes: performing a deletion operation on the preset face features included in the target face sub-library according to the storage time of the preset face features, to generate a target face sub-library after the deletion operation.
  • In this way, the preset face features included in the target face sub-library are deleted according to their storage time, a target face sub-library after the deletion operation is generated, and the preset face features that no longer meet the requirements are removed; this reduces the number of preset face features in the target face sub-library, so that the matching efficiency can be improved when the preset face features stored in the target face sub-library are subsequently matched with the target face feature.
  • In an optional implementation, performing the deletion operation on the preset face features included in the target face sub-library according to the storage time of the preset face features includes: when the data volume of the preset face features stored in the target face sub-library is greater than or equal to a storage capacity threshold of the target face sub-library, deleting the preset face features included in the target face sub-library in order of storage time from earliest to latest; and/or deleting the preset face features included in the target face sub-library according to the storage period of at least one preset face feature and the storage time of that preset face feature.
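  • The two pruning strategies above could be sketched as follows (a sketch under assumed data structures; the record layout, the capacity threshold, and the 7-day default period are illustrative assumptions):

```python
from datetime import datetime, timedelta
from typing import Optional

def prune_sublibrary(records: list, capacity_threshold: int,
                     storage_period: timedelta = timedelta(days=7),
                     now: Optional[datetime] = None) -> list:
    """records: dicts like {"user_id": ..., "feature": ..., "stored_at": datetime}.
    Applies both deletion strategies and returns the pruned sub-library."""
    now = now or datetime.now()
    # Strategy 2: drop features whose storage period (e.g. 7 days) has expired.
    kept = [r for r in records if now - r["stored_at"] < storage_period]
    # Strategy 1: if the sub-library is still at or above its capacity threshold,
    # delete the earliest-stored features until it is below the threshold.
    if len(kept) >= capacity_threshold:
        kept.sort(key=lambda r: r["stored_at"])      # earliest first
        excess = len(kept) - capacity_threshold + 1
        kept = kept[excess:]
    return kept
```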
  • In an optional implementation, determining the first response result corresponding to the operation request based on the first matching result includes: when the first matching result indicates that the target face sub-library includes a preset face feature matching the target face feature, determining account information corresponding to the target user based on the preset face feature matching the target face feature; and determining the first response result corresponding to the operation request based on the account information corresponding to the target user.
  • In an optional implementation, determining the first response result corresponding to the operation request based on the first matching result includes: when the first matching result indicates that the target face sub-library does not include a preset face feature matching the target face feature, matching the target face feature with the total face features included in the total face library to obtain a second matching result; and when the second matching result indicates that the total face library includes a first face feature matching the target face feature, determining a second response result corresponding to the operation request based on the second matching result, and synchronizing the first face feature to the target face sub-library.
  • In an optional implementation, matching the target face feature with the total face features included in the total face library to obtain the second matching result includes: controlling the target device to display an operation interface for acquiring identification information of the target user; acquiring, based on the acquired identification information of the target user, a first face feature corresponding to the identification information from the total face library; and matching the first face feature with the target face feature to obtain the second matching result.
  • In this way, through the operation interface used to acquire the identification information of the target user, the first face feature corresponding to the identification information can be acquired from the total face library more accurately based on the acquired identification information; the first face feature is then matched with the target face feature to obtain the second matching result, which improves the accuracy of face feature matching.
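  • A hedged sketch of the fallback path described above (the dict-based libraries keyed by user identifiers, the identification-keyed lookup, and the 0.6 threshold are illustrative assumptions, not the disclosure's specified data model):

```python
import numpy as np

def _cos(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def respond_with_fallback(target_feature, sub_library: dict, total_library: dict,
                          identification_info: str, threshold: float = 0.6) -> dict:
    """sub_library / total_library: identifier -> preset face feature.
    identification_info: e.g. a phone number entered on the operation interface."""
    # First matching: search the device's target face sub-library.
    for user_id, feature in sub_library.items():
        if _cos(target_feature, feature) >= threshold:
            return {"result": "accepted", "user_id": user_id}          # first response result

    # Second matching: fetch the first face feature by identification information.
    first_feature = total_library.get(identification_info)
    if first_feature is not None and _cos(target_feature, first_feature) >= threshold:
        sub_library[identification_info] = first_feature               # synchronize to sub-library
        return {"result": "accepted", "user_id": identification_info}  # second response result
    return {"result": "rejected"}
```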
  • In an optional implementation, the target device is associated with at least one target face sub-library, and the preset face features in the target face sub-library are stored in network nodes. The method further includes: acquiring service performance information of at least one network node, where the service performance information includes load performance information and/or hardware configuration information; and allocating the preset face features to at least one network node according to the service performance information of multiple network nodes. Matching the target face feature with the preset face features to determine the first matching result corresponding to the target face image includes: calling a target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image.
  • In this way, the preset face features are distributed across at least one network node, which reduces the probability that some network nodes are under heavy storage and computation pressure while others are under light pressure and that the load across the network nodes is therefore unbalanced; the load of at least one network node becomes more balanced, and the processing efficiency of the network nodes is improved.
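  • The performance-weighted allocation could be sketched as follows (illustrative only; collapsing load performance and hardware configuration into a single per-node `capacity_score` is an assumption, since the disclosure only says the allocation follows the service performance information):

```python
def allocate_features(features: list, nodes: list) -> dict:
    """nodes: dicts like {"name": "node-1", "capacity_score": 4.0}.
    Nodes with a higher capacity score receive proportionally more preset features."""
    total_score = sum(n["capacity_score"] for n in nodes)
    assignment = {n["name"]: [] for n in nodes}
    start = 0
    for i, node in enumerate(nodes):
        # Each node gets a contiguous share sized by its score; the last node
        # takes whatever remains so every feature is assigned exactly once.
        share = round(len(features) * node["capacity_score"] / total_score)
        end = len(features) if i == len(nodes) - 1 else min(start + share, len(features))
        assignment[node["name"]] = features[start:end]
        start = end
    return assignment
```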
  • In an optional implementation, before acquiring the service performance information of the at least one network node, the method further includes: judging, according to the number of preset face features included in the target face sub-library, whether the capacity of multiple preset network nodes meets the storage requirement of the preset face features; and expanding a new network node if the capacity of the multiple network nodes does not meet the storage requirement of the preset face features.
  • In this way, new network nodes are expanded so that the expanded set of network nodes can store the preset face features included in the target face sub-library, which reduces the probability of overloading the network nodes and guarantees their processing efficiency.
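  • A minimal sketch of the capacity check and scale-out decision (the per-node `max_features` field and the caller-supplied `provision_node` callable are assumptions made for illustration):

```python
def ensure_capacity(num_features: int, nodes: list, provision_node) -> list:
    """Expand new network nodes until the combined capacity of all nodes
    can hold every preset face feature of the target face sub-library."""
    total_capacity = sum(n["max_features"] for n in nodes)
    while total_capacity < num_features:
        new_node = provision_node()          # expand a new network node
        nodes.append(new_node)
        total_capacity += new_node["max_features"]
    return nodes
```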
  • In an optional implementation, calling the target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image includes: judging whether the target network node is working normally; if the target network node is not working normally, determining, from the multiple network nodes corresponding to the preset face features, other network nodes except the target network node, and determining an updated target network node from the other network nodes; and calling the updated target network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image.
  • At least one preset face feature is stored in multiple network nodes, so that when any network node among the multiple network nodes storing the preset face features fails to work normally, the other network nodes can still perform matching based on the preset face features, which ensures the high availability of the network nodes and improves the matching efficiency for the target face feature corresponding to the target face image.
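  • A sketch of the failover just described (the `is_healthy` probe and per-node `match` method are hypothetical names; the random choice among the remaining replicas mirrors the network node 1/2/3 example given later in the description):

```python
import random

def match_with_failover(target_feature, replica_nodes: list):
    """replica_nodes: node objects that all hold the same preset face features;
    the first entry is the preferred (target) network node."""
    target_node, *others = replica_nodes
    if target_node.is_healthy():
        return target_node.match(target_feature)
    if not others:
        raise RuntimeError("no healthy replica holds the preset face features")
    updated_target = random.choice(others)    # determine an updated target network node
    return updated_target.match(target_feature)
```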
  • In an optional implementation, the preset face features are stored in an external memory of the target network node; calling the target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image includes: calling a processor in the target network node to acquire at least one of the preset face features included in the target face sub-library from the external memory, and matching the acquired at least one preset face feature with the target face feature to determine the first matching result corresponding to the target face image.
  • In an optional implementation, the processor includes a graphics processing unit (GPU) and/or a central processing unit (CPU).
  • In an optional implementation, the preset face features are stored in the external memory of the target network node; the method further includes: loading at least one preset face feature stored in the external memory of the target network node into the memory of the target network node.
  • Calling the target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image includes: calling the processor in the target network node to match at least one preset face feature included in the target face sub-library and stored in the memory with the target face feature, and determining the first matching result corresponding to the target face image.
  • In this way, the processor can directly match the at least one preset face feature of the target face sub-library stored in the memory with the target face feature, without fetching the preset face features from the external memory; the processor's face feature matching process is therefore simpler and faster, which ensures the real-time performance of face recognition.
  • In an optional implementation, loading at least one preset face feature stored in the external memory of the target network node into the memory of the target network node includes: determining, based on the matching times or matching frequency of at least one preset face feature, the preset face features to be loaded from the at least one preset face feature stored in the external memory of the target network node; and loading the determined preset face features to be loaded into the memory of the target network node.
  • In this way, the preset face features to be loaded are determined, based on the matching times or matching frequency of at least one preset face feature, from the at least one preset face feature stored in the external memory of the target network node; when the loaded preset face features are matched with the target face feature, the time and resources consumed in fetching preset face features from the external memory can be reduced, and the efficiency of face feature matching is improved.
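  • A minimal sketch of the frequency-based preloading (the `match_count` field, the memory budget expressed as a feature count, and the list-of-dicts view of the external store are illustrative assumptions):

```python
def preload_hot_features(external_store: list, memory_budget: int) -> dict:
    """external_store: dicts like {"user_id": ..., "feature": ..., "match_count": int}.
    Load the most frequently matched preset face features into an in-memory dict."""
    hottest = sorted(external_store, key=lambda r: r["match_count"], reverse=True)
    in_memory = {}
    for record in hottest[:memory_budget]:
        in_memory[record["user_id"]] = record["feature"]
    return in_memory
```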
  • In an optional implementation, the preset face features are stored in an external memory of the target network node; the method further includes: generating a corresponding target index for at least one preset face feature stored in the external memory; and loading the generated target index corresponding to the at least one preset face feature into the memory of the target network node, where the target index is an index used to search for the preset face features stored in the external memory.
  • In this way, a corresponding target index can be generated for at least one preset face feature stored in the external memory, or only for some of the preset face features stored there, for example for preset face features with a large number of matching times or a high matching frequency. The generated target index corresponding to the at least one preset face feature can be loaded into the memory of the target network node, so that the target network node can acquire the preset face feature corresponding to the target index without traversing the at least one preset face feature stored in the external memory, which improves the efficiency of acquiring preset face features.
  • In an optional implementation, calling the target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image includes: calling the processor in the target network node to search the memory for the target index corresponding to the preset face features included in the target face sub-library; acquiring, from the external memory of the target network node, the preset face feature corresponding to the target index; and matching the preset face feature corresponding to the target index with the target face feature to determine the first matching result corresponding to the target face image.
  • In this way, the processor in the target network node can be called to search the memory for the target index corresponding to the preset face features included in the target face sub-library, acquire the corresponding preset face features from the external memory of the target network node more accurately according to the target index, and match them with the target face feature to determine the first matching result corresponding to the target face image, which improves the efficiency of face feature matching.
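  • A hedged sketch of an in-memory target index over an external feature store (the mapping from user IDs to byte offsets, the fixed-size float32 record layout, and the 512-dimensional embedding are assumptions made for illustration):

```python
import numpy as np

FEATURE_DIM = 512                  # assumed embedding size
RECORD_BYTES = FEATURE_DIM * 4     # float32 features

def build_index(user_ids: list) -> dict:
    # Target index kept in memory: user_id -> byte offset in the external file.
    return {uid: i * RECORD_BYTES for i, uid in enumerate(user_ids)}

def fetch_feature(external_path: str, index: dict, user_id: str) -> np.ndarray:
    """Use the in-memory target index to read one preset face feature from
    external storage without scanning the whole file."""
    with open(external_path, "rb") as f:
        f.seek(index[user_id])
        raw = f.read(RECORD_BYTES)
    return np.frombuffer(raw, dtype=np.float32)
```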
  • The present disclosure provides a face recognition system, including a target device and a background server connected to the target device; the target device is configured to initiate an operation request and acquire a target face image of a target user based on the operation request; and the background server is configured to perform, based on the acquired target face image, the face recognition method described in the first aspect or any implementation of the first aspect.
  • In an optional implementation, the system further includes at least one network node connected to the background server; the background server is further configured to control the at least one network node to store the preset face features and to match the stored preset face features with the target face feature corresponding to the target face image.
  • The present disclosure provides a face recognition apparatus, including: an acquisition module, configured to acquire, in response to an operation request initiated by a target device, a target face image of a target user collected by the target device and extract a target face feature of the target face image; a first determining module, configured to determine a target face sub-library associated with the target device, where the preset face features stored in the target face sub-library are a part of the total face features stored in a total face library; a second determining module, configured to match the target face feature with the preset face features and determine a first matching result corresponding to the target face image; and a third determining module, configured to determine, based on the first matching result, a first response result corresponding to the operation request.
  • In an optional implementation, the acquisition module, when acquiring the target face image of the target user collected by the target device in response to the operation request initiated by the target device, is configured to: control, in response to the operation request initiated by the target device, the target device to collect multiple frames of candidate face images of the target user; and acquire, from the multiple frames of candidate face images, a target face image corresponding to the target user based on at least one of the position of the face in the candidate face image, the orientation of the face in the candidate face image, and the illumination information of the candidate face image.
  • In an optional implementation, the first determining module, when determining the target face sub-library associated with the target device, is configured to: acquire historical operation information of the target device; determine, according to the historical operation information, the preset face features used by the target device from the total face features pre-stored in the total face library; and determine the target face sub-library associated with the target device based on the preset face features used by the target device.
  • In an optional implementation, after the target face sub-library associated with the target device is determined, the apparatus further includes a deletion module configured to: perform a deletion operation on the preset face features included in the target face sub-library according to the storage time of the preset face features, to generate a target face sub-library after the deletion operation.
  • In an optional implementation, the deletion module, when performing the deletion operation on the preset face features included in the target face sub-library according to the storage time of the preset face features, is configured to: when the data volume of the preset face features stored in the target face sub-library is greater than or equal to the storage capacity threshold of the target face sub-library, delete the preset face features included in the target face sub-library in order of storage time from earliest to latest; and/or delete the preset face features included in the target face sub-library according to the storage period of at least one preset face feature and the storage time of that preset face feature.
  • In an optional implementation, the third determining module, when determining the first response result corresponding to the operation request based on the first matching result, is configured to: when the first matching result indicates that the target face sub-library includes a preset face feature matching the target face feature, determine account information corresponding to the target user based on the preset face feature matching the target face feature; and determine the first response result corresponding to the operation request based on the account information corresponding to the target user.
  • In an optional implementation, the third determining module, when determining the first response result corresponding to the operation request based on the first matching result, is configured to: when the first matching result indicates that the target face sub-library does not include a preset face feature matching the target face feature, match the target face feature with the total face features included in the total face library to obtain a second matching result; and when the second matching result indicates that the total face library includes a first face feature matching the target face feature, determine a second response result corresponding to the operation request based on the second matching result and synchronize the first face feature to the target face sub-library.
  • In an optional implementation, the third determining module, when matching the target face feature with the total face features included in the total face library to obtain the second matching result, is configured to: control the target device to display an operation interface for acquiring the identification information of the target user; acquire, based on the acquired identification information of the target user, the first face feature corresponding to the identification information from the total face library; and match the first face feature with the target face feature to obtain the second matching result.
  • In an optional implementation, the target device is associated with at least one target face sub-library, and the preset face features in the target face sub-library are stored in network nodes; the apparatus further includes an allocation module configured to: acquire service performance information of at least one network node, where the service performance information includes load performance information and/or hardware configuration information; and allocate the preset face features to at least one network node according to the service performance information of multiple network nodes. The second determining module, when matching the target face feature with the preset face features and determining the first matching result corresponding to the target face image, is configured to: call a target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image.
  • In an optional implementation, before the service performance information of the at least one network node is acquired, the apparatus further includes a judging module configured to: judge, according to the number of preset face features included in the target face sub-library, whether the capacity of multiple preset network nodes meets the storage requirement of the preset face features; and expand a new network node if the capacity of the multiple network nodes does not meet the storage requirement of the preset face features.
  • In an optional implementation, the second determining module, when calling the target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image, is configured to: judge whether the target network node is working normally; if the target network node is not working normally, determine, from the multiple network nodes corresponding to the preset face features, other network nodes except the target network node, and determine an updated target network node from the other network nodes; and call the updated target network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image.
  • In an optional implementation, the preset face features are stored in the external memory of the target network node; the second determining module, when calling the target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image, is configured to: call the processor in the target network node to acquire at least one of the preset face features included in the target face sub-library from the external memory, and match the acquired at least one preset face feature with the target face feature to determine the first matching result corresponding to the target face image.
  • In an optional implementation, the processor includes a graphics processing unit (GPU) and/or a central processing unit (CPU).
  • In an optional implementation, the preset face features are stored in an external memory of the target network node; the apparatus further includes a loading module configured to load at least one preset face feature stored in the external memory of the target network node into the memory of the target network node. The second determining module, when calling the target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image, is configured to: call the processor in the target network node to match at least one preset face feature included in the target face sub-library and stored in the memory with the target face feature, and determine the first matching result corresponding to the target face image.
  • In an optional implementation, the loading module, when loading at least one preset face feature stored in the external memory of the target network node into the memory of the target network node, is configured to: determine, based on the matching times or matching frequency of at least one preset face feature, the preset face features to be loaded from the at least one preset face feature stored in the external memory of the target network node; and load the determined preset face features to be loaded into the memory of the target network node.
  • In an optional implementation, the preset face features are stored in an external memory of the target network node; the apparatus further includes a generating module configured to: generate a corresponding target index for at least one preset face feature stored in the external memory; and load the generated target index corresponding to the at least one preset face feature into the memory of the target network node, where the target index is an index used to search for the preset face features stored in the external memory.
  • In an optional implementation, the second determining module, when calling the target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image, is configured to: call the processor in the target network node to search the memory for the target index corresponding to the preset face features included in the target face sub-library; acquire, from the external memory of the target network node, the preset face feature corresponding to the target index; and match the preset face feature corresponding to the target index with the target face feature to determine the first matching result corresponding to the target face image.
  • The present disclosure provides an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the face recognition method described in the first aspect or any implementation of the first aspect are performed.
  • The present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the face recognition method described in the first aspect or any implementation of the first aspect are performed.
  • The present disclosure further provides a computer program product, including computer-readable code or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above method.
  • FIG. 1 shows a schematic flowchart of a face recognition method provided by an embodiment of the present disclosure;
  • FIG. 2 shows a schematic flowchart of a specific method of determining the first response result corresponding to an operation request in a face recognition method provided by an embodiment of the present disclosure;
  • FIG. 3 shows a schematic flowchart of a specific method of determining the first matching result corresponding to the target face image in a face recognition method provided by an embodiment of the present disclosure;
  • FIG. 4 shows a schematic flowchart of a face recognition method provided by an embodiment of the present disclosure;
  • FIG. 5 shows a schematic architecture diagram of a face recognition system provided by an embodiment of the present disclosure;
  • FIG. 6 shows a schematic diagram provided by an embodiment of the present disclosure;
  • FIG. 7 shows a schematic structural diagram of a face recognition apparatus provided by an embodiment of the present disclosure;
  • FIG. 8 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • Embodiments of the present disclosure provide a face recognition method, device, system, electronic equipment, and storage medium. It should be noted that similar reference signs and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings.
  • a face recognition method disclosed in the embodiments of the present disclosure is first introduced in detail.
  • the face recognition method provided by the embodiments of the present disclosure may be executed by a background server. Referring to FIG. 1, it is a schematic flowchart of a face recognition method provided by an embodiment of the present disclosure.
  • the method includes S101-S104, wherein:
  • S101: in response to an operation request initiated by a target device, acquire a target face image of a target user collected by the target device, and extract a target face feature of the target face image;
  • S102: determine a target face sub-library associated with the target device, where the preset face features stored in the target face sub-library are a part of the total face features stored in a total face library;
  • S103: match the target face feature with the preset face features, and determine a first matching result corresponding to the target face image;
  • S104: determine, based on the first matching result, a first response result corresponding to the operation request.
  • By adopting the above method, the target face sub-library associated with the target device is determined, and the preset face features stored in the target face sub-library are only a part of the total face features stored in the total face library, so the number of preset face features to be searched is smaller than that of the total face library; when the preset face features stored in the target face sub-library are matched with the target face feature corresponding to the target face image, the first matching result can be determined more quickly and more accurately, and based on the first matching result, the first response result corresponding to the operation request can be determined more efficiently.
  • For example, the face recognition method can be applied to a facial recognition payment scenario.
  • The target device may be a facial recognition payment device placed in any place such as a supermarket, a convenience store, or a clothing store.
  • The facial recognition payment device can initiate an operation request, and the target face image of the target user collected by the target device is then acquired in response to the operation request of the facial recognition payment device.
  • In an optional implementation, acquiring the target face image of the target user collected by the target device in response to the operation request initiated by the target device may include Step A1 and Step A2, wherein: Step A1, in response to the operation request initiated by the target device, controlling the target device to collect multiple frames of candidate face images of the target user; Step A2, acquiring, from the multiple frames of candidate face images, the target face image corresponding to the target user based on at least one of the position of the face in the candidate face image, the orientation of the face in the candidate face image, and the illumination information of the candidate face image.
  • The target device may be controlled to collect multiple frames of candidate face images of the target user through its mounted camera, and the target face image corresponding to the target user can then be obtained from the multiple frames of candidate face images.
  • For example, the position of the target user's face in each frame of candidate face image may be determined, and the candidate face image in which the face is at the center position may be selected as the acquired target face image of the target user.
  • the orientation of the human face in the candidate face image may be determined, for example, Euler angles may be used to represent the orientation of the human face.
  • the optimal orientation can be set, the orientation deviation between the orientation of the human face in the candidate face image and the optimal orientation can be determined, and the candidate face image with the smallest orientation deviation can be determined as the acquired target face image of the target user.
  • The optimal orientation may be the orientation information of a frontal face in the image. A trained first neural network may be used to determine the position and/or orientation of the face in the candidate face image.
  • The illumination information of the candidate face image can also be determined; the optimal illumination information can be set, the illumination deviation between the illumination information of the candidate face image and the optimal illumination information can be determined, and the candidate face image with the smallest illumination deviation is determined as the acquired target face image of the target user.
  • In this way, the target face image corresponding to the target user can be selected from the multiple frames of candidate face images according to at least one set condition, so that the selected target face image has better image quality; when face recognition is performed based on this higher-quality target face image, the recognition accuracy can be improved.
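  • The frame-selection rule above could be sketched as follows (a sketch only; the equal scoring weights, the frontal reference orientation of zero Euler angles, and the mean-brightness proxy for illumination are assumptions made for illustration):

```python
import numpy as np

def frame_quality(face_center, image_center, euler_angles, mean_brightness,
                  best_brightness: float = 128.0) -> float:
    """Lower is better: combines distance of the face from the image center,
    deviation from a frontal orientation (Euler angles near zero), and
    deviation from an assumed optimal brightness."""
    position_dev = np.linalg.norm(np.asarray(face_center) - np.asarray(image_center))
    orientation_dev = np.linalg.norm(np.asarray(euler_angles))  # yaw/pitch/roll
    lighting_dev = abs(mean_brightness - best_brightness)
    return position_dev + orientation_dev + lighting_dev

def select_target_face_image(candidates: list):
    """candidates: dicts with keys 'image', 'face_center', 'image_center',
    'euler_angles', 'mean_brightness' for each candidate frame."""
    best = min(candidates, key=lambda c: frame_quality(
        c["face_center"], c["image_center"], c["euler_angles"], c["mean_brightness"]))
    return best["image"]
```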
  • the trained face feature extraction network may be used to extract target face features corresponding to the target face image.
  • the target face feature and the preset face feature may be extracted by using the same face feature extraction network.
  • a corresponding target face sub-database may be determined for each target device, and different target devices may correspond to different target face sub-databases.
  • the preset face features stored in the target face sub-database are part of the total face features stored in the total face database.
  • In an optional implementation, determining the target face sub-library associated with the target device may include: Step B1, acquiring historical operation information of the target device; Step B2, determining, according to the historical operation information, the preset face features used by the target device from the total face features pre-stored in the total face library; Step B3, determining the target face sub-library associated with the target device based on the preset face features used by the target device.
  • The total face features are pre-stored in the total face library and may include the face features corresponding to each user, for example the face features corresponding to each registered user who wants to use the facial recognition payment function.
  • The historical operation information of the target device is acquired; the historical operation information may be information generated after operations are performed on the target device, for example the recorded history of facial recognition payments completed on the target device.
  • The preset face features used by the target device can be determined from the total face features pre-stored in the total face library according to the historical facial recognition payment records. The preset face features used by the target device are then stored in a constructed database, and the target face sub-library associated with the target device is generated.
  • the target face sub-library associated with each target device may also be determined according to the scene information corresponding to the real scene where the target device is located.
  • The scene information may include the consumption records of the place where the facial recognition payment device is installed, and the WiFi information and Bluetooth probe information configured in that place.
  • When the scene information includes the consumption records of the place where the facial recognition payment device is installed, the account information corresponding to each consumption record can be determined; the preset face features corresponding to each piece of account information can then be determined from the total face library, stored in the constructed database, and used to generate the target face sub-library associated with the target device.
  • When the scene information includes the WiFi information configured in the place where the facial recognition payment device is installed, the mobile devices connected to that WiFi and the user images registered through those mobile devices can be determined; the preset face features matching the user images can then be determined from the total face library, stored in the constructed database, and used to generate the target face sub-library associated with the target device.
  • In this way, the preset face features used by the target device can be determined from the total face library according to the acquired historical operation records of the target device, so that the selected preset face features are associated with the target device and the preset face features stored in the target face sub-library are more likely to be accessed on the target device; based on the selected preset face features, the target face sub-library associated with the target device can be determined more accurately. Meanwhile, while the face feature matching requirements of the target device are met, the number of preset face features in the target face sub-library is reduced, so that the first matching result corresponding to the target face image can be determined relatively quickly when the target face feature is matched with the preset face features.
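  • A minimal sketch of building the device-specific sub-library from historical operation records (the dict-based total library and the list of user IDs derived from the device's history are illustrative assumptions):

```python
def build_target_sublibrary(history: list, total_library: dict) -> dict:
    """history: account/user IDs appearing in the target device's historical
    facial-payment records (or derived from scene information such as
    consumption records or WiFi-connected users).
    total_library: user_id -> preset face feature for all registered users."""
    sub_library = {}
    for user_id in history:
        feature = total_library.get(user_id)
        if feature is not None:
            sub_library[user_id] = feature   # keep only features used on this device
    return sub_library
```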
  • Basic modules of the face sub-library can be provided.
  • The basic modules can include an interface module for creating a face sub-library, a retrieval logic module for the preset face features stored in the face sub-library, and a module for managing the preset face features stored in the face sub-library.
  • By calling the basic modules of the face sub-library, the network node can determine the corresponding target face sub-library for the target device.
  • In an optional implementation, after the target face sub-library associated with the target device is determined, the method further includes: performing a deletion operation on the preset face features included in the target face sub-library according to the storage time of the preset face features, and generating a target face sub-library after the deletion operation.
  • Here, performing the deletion operation on the preset face features included in the target face sub-library may include the following two methods. Method 1: when the data volume of the preset face features stored in the target face sub-library is greater than or equal to the storage capacity threshold of the target face sub-library, the preset face features included in the target face sub-library are deleted in order of storage time from earliest to latest. Method 2: the preset face features included in the target face sub-library are deleted according to the storage period of at least one preset face feature and the storage time of that preset face feature.
  • In this way, the preset face features included in the target face sub-library are deleted according to their storage time, a target face sub-library after the deletion operation is generated, and the preset face features that no longer meet the requirements are removed, which reduces the number of preset face features in the target face sub-library and improves the matching efficiency when the preset face features stored in the target face sub-library are subsequently matched with the target face feature.
  • The target face sub-library can be a dynamic face library, that is, the preset face features included in the dynamic face library can be updated, where the update can include adding and/or deleting preset face features.
  • The storage time of each preset face feature can be determined; the storage time is the time at which the preset face feature was stored in the target face sub-library.
  • According to the storage time, the preset face features included in the target face sub-library can be deleted to generate the target face sub-library after the deletion operation.
  • The first face feature matching the target face feature can also be obtained from the total face library and added to the target face sub-library to update the target face sub-library. In Method 1, the storage capacity threshold corresponding to the target face sub-library can be set according to the actual situation; when the data volume of the preset face features stored in the target face sub-library is greater than or equal to the storage capacity threshold, the preset face features included in the target face sub-library are deleted in order of storage time from earliest to latest, until the amount of data stored in the target face sub-library after the deletion operation is less than the storage capacity threshold.
  • In Method 2, the storage period of the target face sub-library can be set. For example, if the preset face features included in the target face sub-library are the face features of users who have made face payments on the face payment device (the target device) within the last 7 days, the storage period may be 7 days. A preset face feature may be deleted after its storage period expires.
  • For example, if the storage time of preset face feature 1 is 08:00 on November 1 and the storage period is 7 days, preset face feature 1 can be deleted from the target face sub-library at 08:00 on November 8.
  • Each time a face feature library deletes the face features to be deleted, it needs to call a delete command, which makes this way of deleting face features cumbersome and time-consuming. Therefore, a strategy module corresponding to Method 1 and/or Method 2 and an interface of the strategy module can be set; the target face sub-library can call the strategy module through its interface, and by calling the strategy module, Method 1 and/or Method 2 can be used to delete preset face features automatically, which improves the deletion efficiency of preset face features.
  • For S103 and S104, exemplarily, the cosine similarity between the target face feature of the target face image and each preset face feature can be determined, and the first matching result corresponding to the target face image can be determined according to each cosine similarity; the first response result corresponding to the operation request can then be determined according to the first matching result. In an optional implementation, as shown in FIG. 2, in S104, determining the first response result corresponding to the operation request based on the first matching result may include Step C1 and Step C2, wherein: Step C1, when the first matching result indicates that the target face sub-library does not include a preset face feature matching the target face feature, matching the target face feature with the total face features included in the total face library to obtain a second matching result; Step C2, when the second matching result indicates that the total face library includes a first face feature matching the target face feature, determining a second response result corresponding to the operation request based on the second matching result, and synchronizing the first face feature to the target face sub-library.
  • When the first matching result indicates that the target face sub-library does not include a preset face feature matching the target face feature, the target face feature can be matched with the total face features included in the total face library to obtain the second matching result.
  • In an optional implementation, in Step C1, matching the target face feature with the total face features included in the total face library to obtain the second matching result may include: Step C11, controlling the target device to display an operation interface for acquiring the identification information of the target user; Step C12, acquiring, based on the acquired identification information of the target user, the first face feature corresponding to the identification information from the total face library; Step C13, matching the first face feature with the target face feature to obtain the second matching result.
  • the target device may be controlled to display an operation interface for acquiring the identification information of the target user.
  • the identification information of the target user may be information representing the identity of the target user, and different target users correspond to different identification information.
  • the identification information may be an ID card number, a phone number, a member number generated for the target user, and the like.
  • The identification information corresponding to each preset face feature in the total face library can be determined in advance, so that after the identification information of the target user is acquired, the first face feature corresponding to that identification information can be acquired from the total face library according to the identification information.
  • In this way, through the operation interface used to acquire the identification information of the target user, the first face feature corresponding to the identification information can be acquired from the total face library more accurately based on the acquired identification information; the first face feature is then matched with the target face feature to obtain the second matching result, which improves the accuracy of face feature matching.
  • In an optional implementation, the target device is associated with at least one target face sub-library, and the preset face features in the target face sub-library are stored in network nodes. Before the target face image of the target user is acquired in response to the operation request initiated by the target device, the method may further include: acquiring service performance information of at least one network node, where the service performance information includes load performance information and/or hardware configuration information; and allocating the preset face features to at least one network node according to the service performance information of multiple network nodes. Furthermore, a target network node among the at least one network node may be called to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image.
  • Here, the target device is associated with at least one target face sub-library, and the preset face features in the target face sub-library can be stored in network nodes; that is, the network nodes are used to store preset face features and to match the target face feature with the stored preset face features.
  • Service performance information of at least one network node is acquired, where the service performance information may include load performance information and/or hardware configuration information; the preset face features can then be allocated to at least one network node according to the load performance information and/or hardware configuration information of each network node.
  • For example, more preset face features can be allocated to network nodes with better load performance and fewer to network nodes with poorer load performance; or more preset face features can be allocated to network nodes with a higher hardware configuration. In this way, the preset face features are allocated to at least one network node, which reduces the probability that some network nodes are under heavy storage and computation pressure while others are under light pressure and that the load across the network nodes is therefore unbalanced; the load of each network node becomes relatively balanced, and the processing efficiency of the network nodes is improved.
  • In an optional implementation, before the service performance information of the at least one network node is acquired, the method may further include: judging, according to the number of preset face features included in the target face sub-library, whether the capacity of the multiple preset network nodes meets the storage requirement of the preset face features; and expanding a new network node if the capacity of the multiple network nodes does not meet the storage requirement. Since the number of preset face features that each network node can load is limited, when the number of preset face features is greater than the load capacity of the configured network nodes, the network nodes need to be expanded, that is, a new network node is added, so that the expanded set of network nodes can load the preset face features included in the target face sub-library.
  • In this way, new network nodes are expanded so that the expanded set of network nodes can store the preset face features included in the target face sub-library, which reduces the probability of overloading the network nodes and guarantees their processing efficiency.
• Calling the target network node among the at least one network node, matching the target face features with the preset face features, and determining the first matching result corresponding to the target face image may include: judging whether the target network node is working normally; if the target network node cannot work normally, determining, from the multiple network nodes corresponding to the preset face features, other network nodes besides the target network node, and determining an updated target network node from those other network nodes; and calling the updated target network node to match the target face features with the preset face features and determine the first matching result corresponding to the target face image.
• Each preset face feature is stored in multiple network nodes, so that when any one of the network nodes storing a preset face feature fails to work normally, the other network nodes can still perform matching based on that preset face feature. This ensures high availability of the network nodes and improves the matching efficiency for the target face features corresponding to the target face image.
• A priority network node can be set for each preset face feature; this priority node is the target network node. When the priority node cannot work normally, an automatic failover is performed so that the other network nodes corresponding to the preset face feature can perform the matching instead.
• For example, if preset face feature 1 is stored in network node 1, network node 2, and network node 3, network node 1 is the configured target network node.
• When network node 1 cannot work normally, network node 2 can be randomly selected from network node 2 and network node 3 as the updated target network node; network node 2 is then used to match preset face feature 1 with the target face feature, as sketched below.
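A sketch of this failover behaviour; the health check is a stand-in callable, since the disclosure does not specify how node health is probed:

```python
import random

def pick_target_node(feature_replicas: list, preferred: str, is_healthy) -> str:
    """Return the node to query: the preferred (priority) node if healthy,
    otherwise a randomly chosen healthy replica holding the same features."""
    if is_healthy(preferred):
        return preferred
    others = [n for n in feature_replicas if n != preferred and is_healthy(n)]
    if not others:
        raise RuntimeError("no healthy replica holds the preset face features")
    return random.choice(others)

# Example mirroring the text: feature 1 replicated on nodes 1-3, node 1 down.
updated = pick_target_node(["node-1", "node-2", "node-3"], "node-1",
                           is_healthy=lambda n: n != "node-1")
```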
• The preset face features are stored in the external memory of the target network node. Calling the target network node among the at least one network node, matching the target face features with the preset face features, and determining the first matching result corresponding to the target face image may include: calling the processor in the target network node to obtain, from the external memory, at least one preset face feature contained in the target face sub-library; and matching the obtained at least one preset face feature with the target face feature to determine the first matching result corresponding to the target face image.
• The processor may include a graphics processing unit (GPU) and/or a central processing unit (CPU).
• In implementation, the processor may be a GPU or a CPU. For example, when there are many preset face features and/or the real-time requirements on matching are high, the GPU may be selected as the processor to perform face feature matching; when there are few preset face features and/or the real-time requirements on matching are low, the CPU may be selected as the processor to perform face feature matching. One possible selection heuristic is sketched below.
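One possible selection heuristic, with illustrative cut-off values that are not part of the disclosure:

```python
def choose_processor(num_preset_features: int,
                     latency_budget_ms: float,
                     feature_threshold: int = 1_000_000,
                     latency_threshold_ms: float = 50.0) -> str:
    """Pick GPU for large libraries or tight real-time budgets, CPU otherwise."""
    if num_preset_features >= feature_threshold or latency_budget_ms <= latency_threshold_ms:
        return "GPU"
    return "CPU"
```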
• The preset face features are stored in the external memory of the target network node; the method further includes: loading at least one preset face feature stored in the external memory of the target network node into the memory of the target network node.
  • the processor in the target network node can be called to match each preset face feature contained in the target face sub-library stored in the memory with the target face feature, and determine the first matching result corresponding to the target face image.
  • at least one preset facial feature stored in the external memory of the target network node may be loaded into the memory of the target network node.
  • the processor in the target network node may be invoked to match each preset face feature contained in the target face sub-library stored in the memory with the target face feature, and determine the first matching result corresponding to the target face image.
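A minimal in-memory matching sketch; cosine similarity is used here because the disclosure mentions it as one way to compare features, and the 0.7 threshold is an assumption:

```python
import numpy as np

def match_in_memory(target_feature: np.ndarray,
                    loaded_features: dict,
                    threshold: float = 0.7) -> dict:
    """Compare the target feature against every preset feature loaded in memory
    and return the best hit as the first matching result."""
    target = target_feature / np.linalg.norm(target_feature)
    best_id, best_score = None, -1.0
    for fid, feat in loaded_features.items():
        score = float(np.dot(target, feat / np.linalg.norm(feat)))
        if score > best_score:
            best_id, best_score = fid, score
    matched = best_score >= threshold
    return {"matched": matched, "feature_id": best_id if matched else None, "score": best_score}
```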
• Loading at least one preset face feature stored in the external memory of the target network node into the memory of the target network node may be done in the following two ways. Way 1: all the preset face features stored in the external memory of the target network node are loaded into the memory of the target network node.
• Way 2: based on the matching times or matching frequency of each preset face feature, the preset face features to be loaded are determined from the at least one preset face feature stored in the external memory of the target network node, and the determined preset face features to be loaded are then loaded into the memory of the target network node.
• In the first way, when the memory capacity of the target network node is greater than the total size of all the preset face features stored in the external memory, all the preset face features stored in the external memory of the target network node can be loaded into the memory of the target network node.
• In the second way, only some of the preset face features stored in the external memory of the target network node are loaded into the memory of the target network node.
• According to the matching times or matching frequency of each preset face feature, the preset face features with more matching times or a higher matching frequency are determined, from the at least one preset face feature stored in the external memory of the target network node, as the preset face features to be loaded; the determined preset face features to be loaded are then loaded into the memory of the target network node.
• Determining the features to be loaded in this way means that, when matching is performed based on the loaded preset face features and the target face feature, the time and resources consumed in fetching preset face features from the external memory are reduced, which improves the efficiency of face feature matching. A simple frequency-based loading step is sketched below.
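A sketch of this frequency-based loading, assuming per-feature match counts are tracked and an in-memory budget is given (both are assumptions made for illustration):

```python
def select_features_to_load(match_counts: dict, max_in_memory: int) -> list:
    """Return the ids of the most frequently matched features, up to the budget."""
    ranked = sorted(match_counts, key=match_counts.get, reverse=True)
    return ranked[:max_in_memory]

def load_into_memory(external_store: dict, ids_to_load: list) -> dict:
    """Copy the selected preset face features from external storage into an
    in-memory cache so that matching no longer touches the external memory."""
    return {fid: external_store[fid] for fid in ids_to_load if fid in external_store}
```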
• The preset face features are stored in the external memory of the target network node; the method further includes: generating a corresponding target index for at least one preset face feature stored in the external memory, and loading the target index corresponding to the at least one preset face feature into the memory of the target network node, where the target index is an index used to look up the preset face features stored in the external memory.
  • a corresponding target index can be generated for each preset face feature stored in the external memory, and a corresponding target index can also be generated for some preset face features stored in the external memory.
• For example, corresponding target indexes can be generated for the preset face features with more matching times or a higher matching frequency.
• The generated target index corresponding to the at least one preset face feature can be loaded into the memory of the target network node, so that the target network node can obtain, according to the target index, the corresponding preset face feature more accurately without traversing every preset face feature stored in the external memory; this improves the efficiency of obtaining preset face features.
• Calling the target network node among the at least one network node, matching the target face feature with the preset face features, and determining the first matching result corresponding to the target face image may include: calling the processor in the target network node to search for the target index corresponding to the preset face features contained in the target face sub-library stored in the memory; obtaining, from the external memory of the target network node, the preset face features corresponding to the target index; and matching the preset face features corresponding to the target index with the target face features to determine the first matching result corresponding to the target face image.
• The processor in the target network node can be called to search for the target index corresponding to the preset face features contained in the target face sub-library stored in the memory, and to obtain, from the external memory of the target network node, the preset face feature corresponding to the target index more accurately.
• The preset face features obtained according to the target index can then be matched with the target face features to determine the first matching result corresponding to the target face image, which improves the efficiency of face feature matching; this index-based retrieval is sketched below.
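A sketch of index-based retrieval in which only a lightweight index (feature id to byte offset) is kept in memory and feature vectors are read from a flat binary file on demand; the file layout and the 512-dimensional float32 features are illustrative assumptions:

```python
import numpy as np

FEATURE_DIM = 512                 # assumed embedding size
RECORD_BYTES = FEATURE_DIM * 4    # float32

def build_index(feature_ids: list) -> dict:
    """Target index kept in memory: maps a feature id to its record offset."""
    return {fid: i * RECORD_BYTES for i, fid in enumerate(feature_ids)}

def fetch_feature(path: str, offset: int) -> np.ndarray:
    """Read a single preset face feature record from the external store."""
    with open(path, "rb") as f:
        f.seek(offset)
        return np.frombuffer(f.read(RECORD_BYTES), dtype=np.float32)

def match_via_index(target: np.ndarray, index: dict, store_path: str,
                    threshold: float = 0.7) -> dict:
    target = target / np.linalg.norm(target)
    best_id, best_score = None, -1.0
    for fid, offset in index.items():              # search the in-memory index
        feat = fetch_feature(store_path, offset)   # read only this record from external memory
        score = float(np.dot(target, feat / np.linalg.norm(feat)))
        if score > best_score:
            best_id, best_score = fid, score
    return {"matched": best_score >= threshold, "feature_id": best_id, "score": best_score}
```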
• With reference to Figure 4, an example of applying the face recognition method to a facial recognition payment scenario is described. First, when the target user performs facial recognition payment on the facial recognition payment device, the facial recognition payment device can initiate a face-scanning payment request.
• Second, according to the acquired historical operation information of the facial recognition payment device and the configured business sub-library logic, the massive dynamic small-library basic module (that is, the basic module of the face sub-library) can be called through the initiated small-library face-scanning request, and the target face sub-library corresponding to the facial recognition payment device is determined based on the total face library. Third, the target network node storing the preset face features of the target face sub-library is determined and called to match those preset face features with the target face features of the target face image, determining the first matching result corresponding to the target face image.
• Based on the same concept, the embodiment of the present disclosure also provides a face recognition system, as shown in FIG. 5.
• FIG. 5 is a schematic diagram of the architecture of the face recognition system provided by the embodiment of the present disclosure, including a target device 501 and a background server 502.
  • the background server 502 is connected to the target device 501;
  • the target device 501 is used to initiate an operation request, and based on the operation request, obtain a target face image of the target user;
• The background server 502 is used to perform, based on the acquired target face image, the face recognition method described in the first aspect or any implementation of the first aspect.
• The system further includes at least one network node 503; the background server 502 is connected to the at least one network node 503; the background server 502 is also used to control the at least one network node 503 to store preset face features and to match the stored preset face features with the target face features corresponding to the target face image.
  • the target device may be a face recognition payment device, as shown in FIG. 6, the face recognition system performs the following steps:
  • the facial recognition payment device acquires the target face image of the target user when the target user initiates a payment request.
• The facial recognition payment device sends the acquired face image to the background server.
  • the background server extracts the target face features of the target face image; and determines the target face sub-library associated with the target device, And determine the target network node storing the preset face features in the target face sub-library.
  • the background server sends the target face feature to the target network node.
  • the target network node matches the target face features with the stored preset face features in the target face sub-library, and determines the first matching result corresponding to the target face image.
  • the target network node sends the first matching result to the background server.
  • the background server determines a first response result corresponding to the payment request based on the first matching result.
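The sequence above can be sketched end to end as follows. All components are plain stand-ins (no real devices, RPC calls, or payment backend), and the feature extractor is stubbed; only the ordering of the steps mirrors the example:

```python
import numpy as np

class TargetNetworkNode:
    def __init__(self, sub_library: dict, threshold: float = 0.7):
        self.sub_library, self.threshold = sub_library, threshold

    def match(self, target_feature: np.ndarray) -> dict:
        """Match the target feature against the stored sub-library features."""
        best_id, best = None, -1.0
        t = target_feature / np.linalg.norm(target_feature)
        for fid, feat in self.sub_library.items():
            s = float(np.dot(t, feat / np.linalg.norm(feat)))
            if s > best:
                best_id, best = fid, s
        return {"matched": best >= self.threshold, "feature_id": best_id}

class BackgroundServer:
    def __init__(self, node: TargetNetworkNode, accounts: dict):
        self.node, self.accounts = node, accounts

    def extract_feature(self, face_image: np.ndarray) -> np.ndarray:
        # Stub extractor: a real system would run a face feature network here.
        return np.resize(face_image, 512).astype(np.float32)

    def handle_payment_request(self, face_image: np.ndarray) -> dict:
        feature = self.extract_feature(face_image)        # server extracts features
        result = self.node.match(feature)                 # target node matches
        if result["matched"]:
            return {"status": "paid",
                    "account": self.accounts.get(result["feature_id"])}
        return {"status": "rejected"}
```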
• The acquisition module 701 is configured to: in response to an operation request initiated by the target device, acquire the target face image of the target user collected by the target device, and extract the target face features of the target face image.
• The first determining module 702 is configured to determine the target face sub-library associated with the target device, where the preset face features stored in the target face sub-library belong to a part of the total face features stored in the total face library.
• The second determination module 703 is configured to match the target face features with the preset face features and determine the first matching result corresponding to the target face image.
  • a third determining module 704 configured to determine a first response result corresponding to the operation request based on the first matching result.
• The acquiring module 701, when acquiring the target face image of the target user collected by the target device in response to the operation request initiated by the target device, is configured to: in response to the operation request initiated by the target device, control the target device to collect multiple frames of candidate face images of the target user; and acquire, based on at least one of the position of the face in the candidate face image, the orientation of the face in the candidate face image, and the illumination information of the candidate face image, a target face image corresponding to the target user from the multiple frames of candidate face images.
• The first determining module 702, when determining the target face sub-library associated with the target device, is configured to: acquire historical operation information of the target device; determine, according to the historical operation information, the preset face features used by the target device from the total face features pre-stored in the total face library; and determine, based on the preset face features used by the target device, the target face sub-library associated with the target device (a sketch of this step is given below).
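A sketch of deriving the target face sub-library from a device's historical operation records, assuming those records reduce to a list of user ids that transacted on the device (the disclosure also mentions other signals such as consumption records or Wi-Fi information):

```python
def build_target_sub_library(history_user_ids: list, total_library: dict) -> dict:
    """Keep only the preset face features of users the device has actually seen."""
    return {uid: total_library[uid] for uid in set(history_user_ids) if uid in total_library}
```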
• After the target face sub-library associated with the target device is determined, the apparatus further includes a pruning module 705, configured to: perform a pruning operation on the preset face features included in the target face sub-library according to the storage time of the preset face features, and generate the pruned target face sub-library.
• The pruning module 705, when performing the pruning operation on the preset face features included in the target face sub-library according to the storage time of the preset face features, is configured to: when the data volume of the preset face features stored in the target face sub-library is greater than or equal to the storage capacity threshold of the target face sub-library, prune the preset face features included in the target face sub-library in order of their storage time from earliest to latest; and/or prune the preset face features included in the target face sub-library according to the storage period of at least one preset face feature and the storage time of the preset face features. Both rules are sketched below.
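Both pruning rules can be sketched as follows; the capacity threshold is a parameter and the 7-day retention period follows the example mentioned in the disclosure:

```python
from datetime import datetime, timedelta
from typing import Optional

def prune_sub_library(entries: dict,
                      capacity_threshold: int,
                      retention: timedelta = timedelta(days=7),
                      now: Optional[datetime] = None) -> dict:
    """`entries` maps a feature id to its storage (enrolment) time."""
    now = now or datetime.utcnow()
    # Rule 2: drop features whose storage period has expired.
    kept = {fid: t for fid, t in entries.items() if now - t < retention}
    # Rule 1: if still at or over the capacity threshold, drop the oldest first.
    if len(kept) >= capacity_threshold:
        newest_first = sorted(kept.items(), key=lambda kv: kv[1], reverse=True)
        kept = dict(newest_first[:capacity_threshold - 1])
    return kept
```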
• The third determining module 704, when determining the first response result corresponding to the operation request based on the first matching result, is configured to: in the case where the first matching result indicates that the target face sub-library includes a preset face feature matching the target face features, determine the account information corresponding to the target user based on the preset face feature matching the target face features; and determine, based on the account information corresponding to the target user, a first response result corresponding to the operation request.
• The third determining module 704, when determining the first response result corresponding to the operation request based on the first matching result, is configured to: in the case where the first matching result indicates that the target face sub-library does not include a preset face feature matching the target face features, match the target face features with the total face features included in the total face library to obtain a second matching result; and if the second matching result indicates that the total face library includes a first face feature matching the target face features, determine, based on the second matching result, a second response result corresponding to the operation request, and synchronize the first face feature to the target face sub-library.
• The third determining module 704, when matching the target face features with the total face features included in the total face library to obtain the second matching result, is configured to: control the target device to display an operation interface for acquiring the identification information of the target user; acquire, based on the acquired identification information of the target user, the first face feature corresponding to the identification information from the total face library; and match the first face feature with the target face features to obtain the second matching result.
  • the target device is associated with at least one target face sub-library, and the preset face features in the target face sub-library are stored in a network node;
• The device further includes an allocation module 706, configured to: acquire service performance information of at least one network node, where the service performance information includes load performance information and/or hardware configuration information; and assign the preset face features to at least one network node according to the service performance information of the multiple network nodes. The second determination module 703, when matching the target face features with the preset face features and determining the first matching result corresponding to the target face image, is configured to: call the target network node among the at least one network node, match the target face features with the preset face features, and determine the first matching result corresponding to the target face image.
• Before the service performance information of at least one network node is obtained, the device further includes a judging module 707, configured to: judge, according to the number of preset face features included in the target face sub-library, whether the capacity of the preset multiple network nodes meets the storage requirements of the preset face features; and if the capacity of the multiple network nodes does not meet those storage requirements, expand a new network node.
• The second determination module 703, when calling the target network node among the at least one network node, matching the target face features with the preset face features, and determining the first matching result corresponding to the target face image, is configured to: determine whether the target network node is working normally; if the target network node cannot work normally, determine, from the multiple network nodes corresponding to the preset face features, other network nodes besides the target network node, and determine an updated target network node from those other network nodes; and call the updated target network node to match the target face features with the preset face features and determine the first matching result corresponding to the target face image.
• The preset face features are stored in the external memory of the target network node; the second determination module 703, when calling the target network node among the at least one network node, matching the target face features with the preset face features, and determining the first matching result corresponding to the target face image, is configured to: call the processor in the target network node to obtain, from the external memory, at least one preset face feature contained in the target face sub-library; and match the obtained at least one preset face feature with the target face features to determine the first matching result corresponding to the target face image.
  • the processor includes a graphics processing unit GPU and/or a central processing unit CPU.
• The preset face features are stored in the external memory of the target network node; the device further includes a loading module 708, configured to load at least one preset face feature stored in the external memory of the target network node into the memory of the target network node. The second determination module 703, when calling the target network node among the at least one network node, matching the target face features with the preset face features, and determining the first matching result corresponding to the target face image, is configured to: call the processor in the target network node to match at least one preset face feature contained in the target face sub-library stored in the memory with the target face features, and determine the first matching result corresponding to the target face image.
• The loading module 708, when loading at least one preset face feature stored in the external memory of the target network node into the memory of the target network node, is configured to: determine, based on the matching times or matching frequency of at least one preset face feature, the preset face features to be loaded from the at least one preset face feature stored in the external memory of the target network node; and load the determined preset face features to be loaded into the memory of the target network node.
• The preset face features are stored in the external memory of the target network node; the device further includes a generation module 709, configured to: generate a corresponding target index for at least one preset face feature stored in the external memory, and load the generated target index corresponding to the at least one preset face feature into the memory of the target network node, where the target index is an index used to look up the preset face features stored in the external memory.
• The second determination module 703, when calling the target network node among the at least one network node, matching the target face features with the preset face features, and determining the first matching result corresponding to the target face image, is configured to: call the processor in the target network node to search for the target index corresponding to the preset face features contained in the target face sub-library stored in the memory; obtain, from the external memory of the target network node, the preset face features corresponding to the target index; and match the preset face features corresponding to the target index with the target face features to determine the first matching result corresponding to the target face image.
• The functions of the device provided by the embodiments of the present disclosure, or the modules it contains, can be used to execute the methods described in the above method embodiments; for the specific implementation, reference may be made to the descriptions of the above method embodiments, which are not repeated here for brevity.
• The functions or modules included in the device provided by the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments; for the specific implementation and technical effects, reference may be made to the descriptions of the above method embodiments, which are omitted here for brevity. Based on the same technical idea, an embodiment of the present disclosure also provides an electronic device, referring to FIG. 8.
• FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure, including a processor 801, a memory 802, and a bus 803.
• The memory 802 is used to store execution instructions and includes an internal memory 8021 and an external memory 8022. The internal memory 8021 is used to temporarily store operation data in the processor 801 and data exchanged with the external memory 8022, such as a hard disk; the processor 801 exchanges data with the external memory 8022 through the internal memory 8021.
• When the electronic device runs, the processor 801 communicates with the memory 802 through the bus 803, so that the processor 801 executes the following instructions: in response to an operation request initiated by the target device, acquire the target face image of the target user collected by the target device, and extract the target face features of the target face image; determine the target face sub-library associated with the target device, where the preset face features stored in the target face sub-library belong to a part of the total face features stored in the total face library; match the target face features with the preset face features to determine the first matching result corresponding to the target face image; and determine, based on the first matching result, a first response result corresponding to the operation request.
• An embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the face recognition method described in the above method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
• The embodiment of the present disclosure also provides a computer program product that carries program code; the instructions included in the program code can be used to execute the steps of the face recognition method described in the above method embodiments. For details, refer to the above method embodiments, which are not repeated here.
  • the above-mentioned computer program product may be specifically implemented by means of hardware, software or a combination thereof.
• In an optional embodiment, the computer program product is embodied as a computer storage medium.
• In another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK), and so on.
• The division of the units is only a logical function division; in actual implementation there may be other division methods. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be through some communication interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may physically exist separately, or two or more units may be integrated into one unit. If the functions are realized in the form of software function units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
• The technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
• The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and other media that can store program code.

Abstract

The present disclosure provides a face recognition method, system, apparatus, electronic device, and storage medium. The method includes: in response to an operation request initiated by a target device, acquiring a target face image of a target user collected by the target device, and extracting a target face feature of the target face image; determining a target face sub-library associated with the target device, where the preset face features stored in the target face sub-library belong to a part of the total face features stored in a total face library; matching the target face feature with the preset face features to determine a first matching result corresponding to the target face image; and determining, based on the first matching result, a first response result corresponding to the operation request.

Description

人脸识别方法、 系统、 装置、 电子设备及存储介质 本申请要求 2021年 06月 30日提交、申请号为 202110737919.9,发明名称为“人脸识别方法、系统、 装置、 电子设备及存储介质” 的中国专利申请的优先权, 其全部内容通过引用结合在本申请中。 技术领域 本公开涉及图像处理技术领域, 具体而言, 涉及一种人脸识别方法、 装置、 系统、 电子设备、 存 储介质及计算机程序产品。 背景技术 随着深度学习技术的发展, 利用深度学习技术进行人脸识别被广泛应用于各种场景中, 比如, 安 防场景、 人脸门禁场景、 互联网娱乐、 支付场景等。 一般的, 人脸识别技术可以获取用户的人脸图像, 将该人脸图像与预先存储的图像进行匹配, 根 据匹配结果进行相应的处理, 比如, 在人脸门禁场景中, 在匹配结果为匹配成功时, 可以控制门禁开 启, 以便用户可以通过门禁。 发明内容 有鉴于此, 本公开至少提供一种人脸识别方法、 装置、 系统、 电子设备、 存储介质和计算机程序 产品。 根据本公开的一方面, 本公开提供了一种人脸识别方法, 包括: 响应于目标设备发起的操作请求, 获取所述目标设备采集的目标用户的目标人脸图像, 并提取所 述目标人脸图像的目标人脸特征; 确定与所述目标设备关联的目标人脸子库; 其中, 所述目标人脸子库中存储的预设人脸特征属于 人脸总库中存储的总人脸特征的一部分; 将所述目标人脸特征与所述预设人脸特征进行匹配,确定所述目标人脸图像对应的第一匹配结果 基于所述第一匹配结果, 确定所述操作请求对应的第一响应结果。 上述方法中, 可以确定与目标设备关联的目标人脸子库, 目标人脸子库中存储的预设人脸特征属 于人脸总库中存储的总人脸特征的一部分,可知该目标人脸子库中存储的预设人脸特征的数量相比人 脸总库较少,使得将目标人脸子库中存储的预设人脸特征与目标人脸图像对应的目标人脸特征进行匹 配时, 能够较为快速和较为精准的确定目标人脸图像对应的第一匹配结果; 进而基于第一匹配结果, 能够较为高效的确定操作请求对应的第一响应结果。 一种可能的实施方式中, 所述响应于目标设备发起的操作请求, 获取所述目标设备采集的目标用 户的目标人脸图像, 包括: 响应于目标设备发起的操作请求, 控制所述目标设备采集目标用户的多帧候选人脸图像; 基于人脸在候选人脸图像中的位置、人脸在候选人脸图像中的朝向、候选人脸图像的光照信息中 的至少一者, 从所述多帧候选人脸图像中, 获取所述目标用户对应的目标人脸图像。 这里, 在控制目标设备采集目标用户的多帧候选人脸图像之后, 可以根据设置的至少一种条件, 从多帧候选人脸图像中, 获取目标用户对应的目标人脸图像, 使得选取的目标人脸图像的图像质量较 好, 进而基于质量较好的目标人脸图像进行人脸识别时, 可以提高识别的准确度。 一种可能的实施方式中, 所述确定与所述目标设备关联的目标人脸子库, 包括: 获取所述目标设备的历史操作信息; 根据所述历史操作信息, 从所述人脸总库预先存储的总人脸特征中, 确定所述目标设备使用过的 预设人脸特征; 基于所述目标设备使用过的预设人脸特征, 确定与所述目标设备关联的目标人脸子库。 上述实施方式中, 可以从人脸总库中, 根据获取的目标设备的历史操作记录, 确定目标设备使用 过的预设人脸特征, 使得选取的预设人脸特征为与目标设备存在关联的人脸特征, 该目标人脸子库中 存储的预设人脸特征在目标设备上被访问的可能性较高, 进而基于选取的预设人脸特征, 能够较为精 准的确定与目标设备关联的目标人脸子库。 同时, 在保障了目标设备的人脸特征匹配需求的情况下, 减少了目标人脸子库中预设人脸特征的 数量, 以便在将目标人脸特征与预设人脸特征进行匹配时, 能够较为快速的确定目标人脸图像对应的 第一匹配结果。 一种可能的实施方式中, 在确定与所述目标设备关联的目标人脸子库之后, 所述方法还包括: 根据所述预设人脸特征的入库时间, 对所述目标人脸子库中包括的预设人脸特征进行删减操作, 生成删减操作后的目标人脸子库。 这里, 根据预设人脸特征的入库时间, 对目标人脸子库中包括的预设人脸特征进行删减操作, 生 成删减操作后的目标人脸子库, 将不符合要求的预设人脸特征删除, 减少了目标人脸子库中预设人脸 特征的数量, 进而在后续将目标人脸子库中存储的预设人脸特征与目标人脸特征进行匹配时, 可以提 高匹配的效率。 一种可能的实施方式中, 所述根据所述预设人脸特征的入库时间, 对所述目标人脸子库中包括的 预设人脸特征进行删减操作, 包括: 在所述目标人脸子库存储的预设人脸特征的数据量大于或等于所述目标人脸子库的库容阈值时, 按照所述预设人脸特征的入库时间从早到晚的顺序,对所述目标人脸子库中包括的预设人脸特征进行 删减操作; 和 /或, 根据至少一个预设人脸特征的存储期限、 以及所述预设人脸特征的入库时间, 对所述目标人脸子 库中包括的预设人脸特征进行删减操作。 这里, 通过设置多种删减方式, 根据预设人脸特征的入库时间, 对目标人脸子库中包括的预设人 脸特征进行删减操作, 删减方式较为灵活。 一种可能的实施方式中, 所述基于所述第一匹配结果, 确定所述操作请求对应的第一响应结果, 包括: 在所述第一匹配结果指示所述目标人脸子库中包括与所述目标人脸特征匹配的预设人脸特征的 情况下, 基于与所述目标人脸特征匹配的预设人脸特征, 确定所述目标用户对应的账号信息; 基于所述目标用户对应的账号信息, 确定所述操作请求对应的第一响应结果。 一种可能的实施方式中, 所述基于所述第一匹配结果, 确定所述操作请求对应的第一响应结果, 包括: 在所述第一匹配结果表明所述目标人脸子库中不包括与所述目标人脸特征匹配的预设人脸特征 的情况下, 将所述目标人脸特征与所述人脸总库中包括的所述总人脸特征进行匹配, 获得第二匹配结 果; 若所述第二匹配结果表明所述人脸总库中包括与所述目标人脸特征匹配的第一人脸特征时,基于 所述第二匹配结果, 确定所述操作请求对应的第二响应结果, 并将所述第一人脸特征同步至所述目标 人脸子库。 一种可能的实施方式中,所述将所述目标人脸特征与所述人脸总库中包括的所述总人脸特征进行 匹配, 获得第二匹配结果, 包括: 控制目标设备展示用于获取所述目标用户的标识信息的操作界面; 基于获取的所述目标用户的标识信息,从所述人脸总库中获取所述标识信息对应的第一人脸特征 将所述第一人脸特征与所述目标人脸特征进行匹配, 得到所述第二匹配结果。 考虑到目标人脸总库中存在不包括与目标人脸特征匹配的预设人脸特征的情况,为了缓解上述情 况, 提高匹配的精准度和效率, 可以在发生上述情况时, 控制目标设备展示用于获取目标用户的标识 信息的操作界面, 再可以基于获取的目标用户的标识信息, 从人脸总库中较为精准的获取标识信息对 应的第一人脸特征。 再将第一人脸特征与目标人脸特征进行匹配, 得到第二匹配结果, 提高人脸特征 匹配的精准度。 一种可能的实施方式中, 所述目标设备关联至少一个所述目标人脸子库, 所述目标人脸子库中的 所述预设人脸特征被存储在网络节点中; 在所述响应于目标设备发起的操作请求, 获取所述目标设备 采集的目标用户的目标人脸图像之前, 所述方法还包括: 获取至少一个网络节点的服务性能信息; 所述服务性能信息包括: 负载性能信息和 /或硬件配置 信息; 根据多个所述网络节点的服务性能信息, 将所述预设人脸特征分配至至少一个网络节点; 所述将所述目标人脸特征与所述预设人脸特征进行匹配,确定所述目标人脸图像对应的第一匹配 结果, 包括: 调用所述至少一个网络节点中的目标网络节点,将所述目标人脸特征与所述预设人脸特征进行匹 配, 确定所述目标人脸图像对应的第一匹配结果。 这里, 根据设置的多个网络节点的负载信息和 /或硬件配置, 将预设人脸特征分配到至少一个网 络节点, 降低由于有些网络节点的存储和计算压力较大、有些网络节点的存储和计算压力较小而造成 网络节点的负载不均衡的情况发生的概率, 使得至少一个网络节点的负载较为均衡, 提高了网络节点 的处理效率。 一种可能的实施方式中, 在获取至少一个网络节点的服务性能信息之前, 所述方法还包括: 根据所述目标人脸子库中包括的预设人脸特征的数量,判断预设的多个网络节点的容量是否满足 所述预设人脸特征的存储需求; 若多个所述网络节点的容量不满足所述预设人脸特征的存储需求, 则扩展新的网络节点。 这里, 
在多个网络节点的容量不满足预设人脸特征的存储需求时, 扩展新的网络节点, 使得扩展 后的多个网络节点能够存储目标人脸子库中包括的预设人脸特征,降低了网络节点负载过大的情况发 生的概率, 保障了网络节点的处理效率。 一种可能的实施方式中, 在预设人脸特征被分配至多个网络节点的情况下, 所述调用所述至少一 个网络节点中的目标网络节点, 将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述目标人 脸图像对应的第一匹配结果, 包括: 判断所述目标网络节点是否正常工作; 若所述目标网络节点不能正常工作, 则从所述预设人脸特征对应的多个网络节点中, 确定除所述 目标网络节点之外的其他网络节点, 并从所述其他网络节点中, 确定更新后的目标网络节点; 调用所述更新后的目标网络节点, 将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述 目标人脸图像对应的第一匹配结果。 本实施方式中, 通过将至少一个预设人脸特征存储在多个网络节点中, 以便在存储预设人脸特征 的多个网络节点中的任一网络节点无法正常工作时,除任一网络节点之外的其他网络节点能够基于预 设人脸特征进行匹配, 保障了网络节点的高可用能力, 提高了目标人脸图像对应的目标人脸特征的匹 配效率。 一种可能的实施方式中, 所述预设人脸特征存储在目标网络节点的外存储器中; 所述调用所述至 少一个网络节点中的目标网络节点, 将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述目 标人脸图像对应的第一匹配结果, 包括: 调用所述目标网络节点中的处理器,从所述外存储器中获取所述目标人脸子库包含的至少一个所 述预设人脸特征; 以及 将获取到的至少一个所述预设人脸特征与所述目标人脸特征进行匹配,确定所述目标人脸图像对 应的第一匹配结果。 一种可能的实施方式中, 所述处理器包括图形处理器 GPU和 /或中央处理器 CPU。 一种可能的实施方式中,所述预设人脸特征存储在目标网络节点的外存储器中;所述方法还包括: 将所述目标网络节点的外存储器中存储的至少一个预设人脸特征,加载至所述目标网络节点的内 存中; 所述调用所述至少一个网络节点中的目标网络节点,将所述目标人脸特征与所述预设人脸特征进 行匹配, 确定所述目标人脸图像对应的第一匹配结果, 包括: 调用所述目标网络节点中的处理器,将内存中存储的所述目标人脸子库包含的至少一个所述预设 人脸特征、 与所述目标人脸特征进行匹配, 确定所述目标人脸图像对应的第一匹配结果。 这里, 通过将目标网络节点的外存储器中存储的至少一个预设人脸特征, 加载至目标网络节点的 内存中, 以便处理器可以直接基于内存中存储的目标人脸子库包含的至少一个预设人脸特征与目标人 脸特征进行匹配,无需从外存储器中获取预设人脸特征,处理器的人脸特征匹配过程较为简便、快速, 保障了人脸识别的实时性。 一种可能的实施方式中, 所述将所述目标网络节点的外存储器中存储的至少一个预设人脸特征, 加载至所述目标网络节点的内存中, 包括: 基于至少一个预设人脸特征的匹配次数或匹配频率,从所述目标网络节点的外存储器存储的至少 一个预设人脸特征中, 确定待加载的预设人脸特征; 将确定的所述待加载的预设人脸特征, 加载至所述目标网络节点的内存中。 这里, 通过基于至少一个预设人脸特征的匹配次数或匹配频率, 从目标网络节点的外存储器存储 的至少一个预设人脸特征中, 确定待加载的预设人脸特征, 以便在基于待加载的预设人脸特征和目标 人脸特征进行匹配时, 可以减少从外存中获取预设人脸特征所消耗的时间和资源, 提高了人脸特征匹 配的效率。 一种可能的实施方式中,所述预设人脸特征存储在目标网络节点的外存储器中;所述方法还包括: 为所述外存储器中存储的至少一个预设人脸特征生成对应的目标索引; 将生成的至少一个预设人脸特征对应的目标索引, 加载至所述目标网络节点的内存中; 其中, 所 述目标索引为用于查找所述外存储器中存储的预设人脸特征的索引。 实施时, 可以为外存储器中存储的至少一个待比对对象的预设人脸特征生成对应的目标索引, 也 可以为外存储器中存储的部分待比对对象的预设人脸特生成对应的目标索引, 比如, 可以为匹配次数 较多或匹配频率较高的预设人脸特生成对应的目标索引。并可以将生成的至少一个待比对对象的预设 人脸特征对应的目标索引,加载至目标网络节点的内存中,以便使得目标网络节点能够根据目标索引, 较为精准的获取目标索引对应的预设人脸特征,无需对外存储器中存储的至少一个预设人脸特征进行 遍历, 提高了获取预设人脸特征的效率。 一种可能的实施方式中, 所述调用所述至少一个网络节点中的目标网络节点, 将所述目标人脸特 征与所述预设人脸特征进行匹配, 确定所述目标人脸图像对应的第一匹配结果, 包括: 调用所述目标网络节点中的处理器,搜索内存中存储的所述目标人脸子库包含的预设人脸特征对 应的所述目标索引; 以及 从所述目标网络节点的所述外存储器中, 获取所述目标索引对应的预设人脸特征; 将所述目标索引对应的所述预设人脸特征、与所述目标人脸特征进行匹配, 确定所述目标人脸图 像对应的第一匹配结果。 这里, 可以调用目标网络节点中的处理器, 搜索内存中存储的目标人脸子库包含的预设人脸特征 对应的目标索引, 并从目标网络节点的外存储器中, 较为精准的获取目标索引对应的预设人脸特征; 再可以将根据目标索引获取的预设人脸特征与目标人脸特征进行匹配,确定目标人脸图像对应的第一 匹配结果, 提高了人脸特征匹配的效率。 以下系统、 装置、 电子设备等的效果描述参见上述方法的说明, 这里不再赘述。 根据本公开的一方面, 本公开提供了一种人脸识别系统, 包括: 目标设备、 后台服务器; 所述后 台服务器与所述目标设备相连; 所述目标设备, 用于发起操作请求, 并基于所述操作请求, 获取目标用户的目标人脸图像; 所述后台服务器, 用于基于获取的所述目标人脸图像, 执行如第一方面或第一方面任一实施方式 所述的人脸识别方法。 一种可能的实施方式中, 所述系统还包括: 至少一个网络节点; 所述后台服务器与所述至少一个 网络节点相连; 所述后台服务器, 还用于控制所述至少一个网络节点存储预设人脸特征, 以及将存储的所述预设 人脸特征与目标人脸图像对应的目标人脸特征进行匹配。 根据本公开的一方面, 本公开提供了一种人脸识别装置, 包括: 获取模块, 用于响应于目标设备发起的操作请求, 获取所述目标设备采集的目标用户的目标人脸 图像, 并提取所述目标人脸图像的目标人脸特征; 第一确定模块, 用于确定与所述目标设备关联的目标人脸子库; 其中, 所述目标人脸子库中存储 的预设人脸特征属于人脸总库中存储的总人脸特征的一部分; 第二确定模块, 用于将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述目标人脸图像 对应的第一匹配结果; 第三确定模块, 用于基于所述第一匹配结果, 确定所述操作请求对应的第一响应结果。 一种可能的实施方式中, 所述获取模块, 在响应于目标设备发起的操作请求, 获取所述目标设备 采集的目标用户的目标人脸图像时, 用于: 响应于目标设备发起的操作请求, 控制所述目标设备采集目标用户的多帧候选人脸图像; 基于人脸在候选人脸图像中的位置、人脸在候选人脸图像中的朝向、候选人脸图像的光照信息中 的至少一者, 从所述多帧候选人脸图像中, 获取所述目标用户对应的目标人脸图像。 一种可能的实施方式中, 所述第一确定模块, 在确定与所述目标设备关联的目标人脸子库时, 用 于: 获取所述目标设备的历史操作信息; 根据所述历史操作信息, 从所述人脸总库预先存储的总人脸特征中, 确定所述目标设备使用过的 预设人脸特征; 基于所述目标设备使用过的预设人脸特征, 确定与所述目标设备关联的目标人脸子库。 一种可能的实施方式中, 在确定与所述目标设备关联的目标人脸子库之后, 所述装置还包括: 删 减模块, 用于: 根据所述预设人脸特征的入库时间, 对所述目标人脸子库中包括的预设人脸特征进行删减操作, 生成删减操作后的目标人脸子库。 一种可能的实施方式中, 所述删减模块, 在根据所述预设人脸特征的入库时间, 对所述目标人脸 子库中包括的预设人脸特征进行删减操作时, 用于: 在所述目标人脸子库存储的预设人脸特征的数据量大于或等于所述目标人脸子库的库容阈值时, 按照所述预设人脸特征的入库时间从早到晚的顺序,对所述目标人脸子库中包括的预设人脸特征进行 删减操作; 和 /或, 根据至少一个预设人脸特征的存储期限、 
以及所述预设人脸特征的入库时间, 对所述目标人脸子 库中包括的预设人脸特征进行删减操作。 一种可能的实施方式中, 所述第三确定模块, 在基于所述第一匹配结果, 确定所述操作请求对应 的第一响应结果时, 用于: 在所述第一匹配结果指示所述目标人脸子库中包括与所述目标人脸特征匹配的预设人脸特征的 情况下, 基于与所述目标人脸特征匹配的预设人脸特征, 确定所述目标用户对应的账号信息; 基于所述目标用户对应的账号信息, 确定所述操作请求对应的第一响应结果。 一种可能的实施方式中, 所述第三确定模块, 在基于所述第一匹配结果, 确定所述操作请求对应 的第一响应结果时, 用于: 在所述第一匹配结果表明所述目标人脸子库中不包括与所述目标人脸特征匹配的预设人脸特征 的情况下, 将所述目标人脸特征与所述人脸总库中包括的所述总人脸特征进行匹配, 获得第二匹配结 果; 若所述第二匹配结果表明所述人脸总库中包括与所述目标人脸特征匹配的第一人脸特征时,基于 所述第二匹配结果, 确定所述操作请求对应的第二响应结果, 并将所述第一人脸特征同步至所述目标 人脸子库。 一种可能的实施方式中, 所述第三确定模块, 在将所述目标人脸特征与所述人脸总库中包括的所 述总人脸特征进行匹配, 获得第二匹配结果时, 用于: 控制目标设备展示用于获取所述目标用户的标识信息的操作界面; 基于获取的所述目标用户的标识信息,从所述人脸总库中获取所述标识信息对应的第一人脸特征; 将所述第一人脸特征与所述目标人脸特征进行匹配, 得到所述第二匹配结果。 一种可能的实施方式中, 所述目标设备关联至少一个所述目标人脸子库, 所述目标人脸子库中的 所述预设人脸特征被存储在网络节点中; 在所述响应于目标设备发起的操作请求, 获取所述目标设备 采集的目标用户的目标人脸图像之前, 所述装置还包括: 分配模块, 用于: 获取至少一个网络节点的服务性能信息; 所述服务性能信息包括: 负载性能信息和 /或硬件配置 信息; 根据多个所述网络节点的服务性能信息, 将所述预设人脸特征分配至至少一个网络节点; 所述第二确定模块, 在将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述目标人脸图 像对应的第一匹配结果时, 用于: 调用所述至少一个网络节点中的目标网络节点,将所述目标人脸特征与所述预设人脸特征进行匹 配, 确定所述目标人脸图像对应的第一匹配结果。 一种可能的实施方式中, 在获取至少一个网络节点的服务性能信息之前, 所述装置还包括: 判断 模块, 用于: 根据所述目标人脸子库中包括的预设人脸特征的数量,判断预设的多个网络节点的容量是否满足 所述预设人脸特征的存储需求; 若多个所述网络节点的容量不满足所述预设人脸特征的存储需求, 则扩展新的网络节点。 一种可能的实施方式中, 在预设人脸特征被分配至多个网络节点的情况下, 所述第二确定模块, 在调用所述至少一个网络节点中的目标网络节点,将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述目标人脸图像对应的第一匹配结果时, 用于: 判断所述目标网络节点是否正常工作; 若所述目标网络节点不能正常工作, 则从所述预设人脸特征对应的多个网络节点中, 确定除所述 目标网络节点之外的其他网络节点, 并从所述其他网络节点中, 确定更新后的目标网络节点; 调用所述更新后的目标网络节点, 将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述 目标人脸图像对应的第一匹配结果。 一种可能的实施方式中, 所述预设人脸特征存储在目标网络节点的外存储器中; 所述第二确定模 块, 在调用所述至少一个网络节点中的目标网络节点, 将所述目标人脸特征与所述预设人脸特征进行 匹配, 确定所述目标人脸图像对应的第一匹配结果时, 用于: 调用所述目标网络节点中的处理器,从所述外存储器中获取所述目标人脸子库包含的至少一个所 述预设人脸特征; 以及 将获取到的至少一个所述预设人脸特征与所述目标人脸特征进行匹配,确定所述目标人脸图像对 应的第一匹配结果。 一种可能的实施方式中, 所述处理器包括图形处理器 GPU和 /或中央处理器 CPU。 一种可能的实施方式中,所述预设人脸特征存储在目标网络节点的外存储器中;所述装置还包括: 加载模块, 用于: 将所述目标网络节点的外存储器中存储的至少一个预设人脸特征,加载至所述目标网络节点的内 存中; 所述第二确定模块, 在调用所述至少一个网络节点中的目标网络节点, 将所述目标人脸特征与所 述预设人脸特征进行匹配, 确定所述目标人脸图像对应的第一匹配结果时, 用于: 调用所述目标网络节点中的处理器,将内存中存储的所述目标人脸子库包含的至少一个所述预设 人脸特征、 与所述目标人脸特征进行匹配, 确定所述目标人脸图像对应的第一匹配结果。 一种可能的实施方式中, 所述加载模块, 在将所述目标网络节点的外存储器中存储的至少一个预 设人脸特征, 加载至所述目标网络节点的内存中时, 用于: 基于至少一个预设人脸特征的匹配次数或匹配频率,从所述目标网络节点的外存储器存储的至少 一个预设人脸特征中, 确定待加载的预设人脸特征; 将确定的所述待加载的预设人脸特征, 加载至所述目标网络节点的内存中。 一种可能的实施方式中,所述预设人脸特征存储在目标网络节点的外存储器中;所述装置还包括: 生成模块, 用于: 为所述外存储器中存储的至少一个预设人脸特征生成对应的目标索引; 将生成的至少一个预设人脸特征对应的目标索引, 加载至所述目标网络节点的内存中; 其中, 所 述目标索引为用于查找所述外存储器中存储的预设人脸特征的索引。 一种可能的实施方式中, 所述第二确定模块, 在调用所述至少一个网络节点中的目标网络节点, 将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述目标人脸图像对应的第一匹配结果时, 用于: 调用所述目标网络节点中的处理器,搜索内存中存储的所述目标人脸子库包含的预设人脸特征对 应的所述目标索引; 以及 从所述目标网络节点的所述外存储器中, 获取所述目标索引对应的预设人脸特征; 将所述目标索引对应的所述预设人脸特征、与所述目标人脸特征进行匹配, 确定所述目标人脸图 像对应的第一匹配结果。 根据本公开的一方面, 本公开提供一种电子设备, 包括: 处理器、 存储器和总线, 所述存储器存 储有所述处理器可执行的机器可读指令, 当电子设备运行时, 所述处理器与所述存储器之间通过总线 通信,所述机器可读指令被所述处理器执行时执行如上述第一方面或任一实施方式所述的人脸识别方 法的步骤。 根据本公开的一方面, 本公开提供一种计算机可读存储介质, 该计算机可读存储介质上存储有计 算机程序,该计算机程序被处理器运行时执行如上述第一方面或任一实施方式所述的人脸识别方法的 步骤。 根据本公开的一方面, 提供了一种计算机程序产品, 包括计算机可读代码, 或者承载有计算机可 读代码的非易失性计算机可读存储介质, 当所述计算机可读代码在电子设备的处理器中运行时, 所述 电子设备中的处理器用于实现上述方法。为使本公开的上述目的、 特征和优点能更明显易懂, 下文特 举较佳实施例, 并配合所附附图, 作详细说明如下。 附图说明 为了更清楚地说明本公开实施例的技术方案,下面将对实施例中所需要使用的附图作简单地介绍, 此处的附图被并入说明书中并构成本说明书中的一部分, 这些附图示出了符合本公开的实施例, 并与 说明书一起用于说明本公开的技术方案。 应当理解, 以下附图仅示出了本公开的某些实施例, 因此不 应被看作是对范围的限定, 对于本领域普通技术人员来讲, 在不付出创造性劳动的前提下, 还可以根 据这些附图获得其他相关的附图。 图 1示出了本公开实施例所提供的一种人脸识别方法的流程示意图; 图 2示出了本公开实施例所提供的一种人脸识别方法中, 确定操作请求对应的第一响应结果的具 体方式的流程示意图; 图 3示出了本公开实施例所提供的一种人脸识别方法中, 确定目标人脸图像对应的第一匹配结果 的具体方式的流程示意图; 图 4示出了本公开实施例所提供的一种人脸识别方法的流程示意图; 图 5示出了本公开实施例所提供的一种人脸识别系统的架构示意图; 图 6示出了本公开实施例所提供的一种人脸识别系统的执行过程的流程示意图; 图 7示出了本公开实施例所提供的一种人脸识别装置的架构示意图; 图 8示出了本公开实施例所提供的一种电子设备的结构示意图。 具体实齡 式 为使本公开实施例的目的、 技术方案和优点更加清楚, 下面将结合本公开实施例中的附图, 对本 
公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分实施例, 而不是全部的实施例。通常在此处附图中描述和示出的本公开实施例的组件可以以各种不同的配置来 布置和设计。 因此, 以下对在附图中提供的本公开的实施例的详细描述并非旨在限制要求保护的本公 开的范围, 而是仅仅表示本公开的选定实施例。 基于本公开的实施例, 本领域技术人员在没有做出创 造性劳动的前提下所获得的所有其他实施例, 都属于本公开保护的范围。 由于人脸具有独特性, 即每个用户对应不同的人脸图像, 故可以将人脸识别技术应用于人脸支付 中, 使得用户可以不用携带钱包、 手机等外部支付凭证, 通过对用户人脸进行识别, 在识别成功后可 以完成支付过程, 提升了支付的便捷性和支付效率。 本公开实施例提供了一种人脸识别方法、 装置、 系统、 电子设备及存储介质。 应注意到: 相似的标号和字母在下面的附图中表示类似项, 因此, 一旦某一项在一个附图中被定 义, 则在随后的附图中不需要对其进行进一步定义和解释。 为便于对本公开实施例进行理解,首先对本公开实施例所公开的一种人脸识别方法进行详细介绍。 本公开实施例所提供的人脸识别方法的执行主体可以为后台服务器。 参见图 1所示, 为本公开实施例所提供的人脸识别方法的流程示意图, 该方法包括 S101-S104, 其 中:
S101, 响应于目标设备发起的操作请求, 获取目标设备采集的目标用户的目标人脸图像, 并提取 目标人脸图像的目标人脸特征;
5102, 确定与目标设备关联的目标人脸子库; 其中, 目标人脸子库中存储的预设人脸特征属于人 脸总库中存储的总人脸特征的一部分;
5103, 将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述目标人脸图像对应的第一匹 配结果;
5104, 基于第一匹配结果, 确定操作请求对应的第一响应结果。 上述方法中, 可以确定与目标设备关联的目标人脸子库, 目标人脸子库中存储的预设人脸特征属 于人脸总库中存储的总人脸特征的一部分,可知该目标人脸子库中存储的预设人脸特征的数量相比人 脸总库较少,使得将目标人脸子库中存储的预设人脸特征与目标人脸图像对应的目标人脸特征进行匹 配时, 能够较为快速和较为精准的确定目标人脸图像对应的第一匹配结果; 进而基于第一匹配结果, 能够较为高效的确定操作请求对应的第一响应结果。 下述对 S101-S104进行具体说明。 针对 S 101 : 实施时, 该人脸识别方法可以应用于刷脸支付场景中, 在该刷脸支付场景中, 目标设备可以为超 市、 便利店、 服装店等任一场所内放置的刷脸支付设备。 在目标用户需要进行刷脸支付时, 刷脸支付 设备可以发起操作请求, 进而响应于刷脸支付设备的操作请求, 获取目标设备采集的目标用户的目标 人脸图像。 一种可选实施方式中, 响应于目标设备发起的操作请求, 获取目标设备采集的目标用户的目标人 脸图像, 可以包括步骤 A1和步骤 A2, 其中: 步骤 A1, 响应于目标设备发起的操作请求, 控制目标设备采集目标用户的多帧候选人脸图像; 步骤 A2, 基于人脸在候选人脸图像中的位置、人脸在候选人脸图像中的朝向、候选人脸图像的光 照信息中的至少一者, 从多帧候选人脸图像中, 获取目标用户对应的目标人脸图像。 响应于目标设备发起的操作请求, 可以控制目标设备通过装载的摄像头, 采集目标用户的多帧候 选人脸图像。 再可以根据人脸在候选人脸图像中的位置、 人脸在候选人脸图像中的朝向、 候选人脸图 像的光照信息中的至少一者, 从多帧候选人脸图像中, 获取目标用户对应的目标人脸图像。 示例性的, 可以确定目标用户的人脸在每帧候选人脸图像中的位置, 选取人脸处于中心位置的候 选人脸图像, 确定为获取的目标用户的目标人脸图像。 和 /或, 可以确定人脸在候选人脸图像中的朝向, 比如, 可以使用欧拉角表征人脸的朝向。 再可 以设置最佳朝向, 确定人脸在候选人脸图像中的朝向与最佳朝向之间的朝向偏差, 将朝向偏差最小的 候选人脸图像, 确定为获取的目标用户的目标人脸图像。 其中, 最佳朝向可以为人脸正面在图像中时 的朝向信息。其中, 可以使用训练后的第一祌经网络, 确定人脸在候选人脸图像中的位置和 /或朝向。 和 /或, 可以确定候选人脸图像的关照信息, 再可以设置最佳光照信息, 确定候选人脸图像的光 照信息与最佳光照信息之间的光照偏差, 将光照偏差最小的候选人脸图像, 确定为获取的目标用户的 目标人脸图像。 这里, 在控制目标设备采集目标用户的多帧候选人脸图像之后, 可以根据设置的至少一种条件, 从多帧候选人脸图像中, 获取目标用户对应的目标人脸图像, 使得选取的目标人脸图像的图像质量较 好, 进而基于质量较好的目标人脸图像进行人脸识别时, 可以提高识别的准确度。 实施时, 在获取到目标人脸图像之后, 可以使用训练后的人脸特征提取网络, 提取目标人脸图像 对应的目标人脸特征。其中, 目标人脸特征和预设人脸特征可以为使用同一人脸特征提取网络提取得 到的。 针对 S 102: 实施时, 可以为每个目标设备确定对应的目标人脸子库, 不同的目标设备可以对应不同的目标人 脸子库。 其中, 目标人脸子库中存储的预设人脸特征属于人脸总库中存储的总人脸特征的一部分。 一种可选实施方式中, 确定与目标设备关联的目标人脸子库, 可以包括: 步骤 B1, 获取目标设备的历史操作信息; 步骤 B2, 根据历史操作信息, 从人脸总库预先存储的总人脸特征中, 确定目标设备使用过的预设 人脸特征; 步骤 B3, 基于目标设备使用过的预设人脸特征, 确定与目标设备关联的目标人脸子库。 人脸总库中预先存储有总人脸特征, 总人脸特征可以包括每个用户的对应的人脸特征; 比如, 在 该方法应用于刷脸支付场景时,总人脸特征中可以包括多个要使用刷脸支付功能的注册用户分别对应 的人脸特征。 获取目标设备的历史操作信息, 该历史操作信息可以为在目标设备上执行了操作后生成的信息; 比如, 在目标设备为刷脸支付设备时, 历史操作信息可以包括在刷脸支付设备上进行的历史刷脸支付 T己录。 在历史操作信息包括在刷脸支付设备上进行的历史刷脸支付记录时,可以根据历史刷脸支付记录, 从人脸总库预先存储的总人脸特征中, 确定目标设备使用过的预设人脸特征。再将目标设备使用过的 预设人脸特征存储至构建的数据库中, 生成与目标设备关联的目标人脸子库。 实施时, 还可以根据目标设备所处的现实场景对应的场景信息, 确定每个目标设备关联的目标人 脸子库。 比如场景信息可以包括在安装刷脸支付设备的场所内的消费记录、安装刷脸支付设备的场所 内配置的 WiFi信息、 蓝牙探针信息等。 再比如, 根据安装刷脸支付设备的场所内的消费记录, 可以确定每条消费记录对应的账号信息; 再可以从人脸总库中, 确定每个账号信息对应的预设人脸特征, 将各个账号信息对应的预设人脸特征 存储至构建的数据库中, 生成与目标设备关联的目标人脸子库。 再比如, 在场景信息包括安装刷脸支付设备的场所内配置的 WiFi信息时, 可以确定连接该 WiFi 信息的移动设备、 以及通过移动设备注册的用户图像, 再可以从设置的人脸总库中, 确定与该用户图 像匹配的预设人脸特征, 将与该用户图像匹配的预设人脸特征存储至构建的数据库中, 生成与目标设 备关联的目标人脸子库。 上述实施方式中, 可以从设置的人脸总库中, 根据获取的目标设备的历史操作记录, 确定目标设 备使用过的预设人脸特征, 使得选取的预设人脸特征为与目标设备存在关联的人脸特征, 该目标人脸 子库中存储的预设人脸特征在目标设备上被访问的可能性较高, 进而基于选取的预设人脸特征, 能够 较为精准的确定与目标设备关联的目标人脸子库。 同时, 在保障了目标设备的人脸特征匹配需求的情况下, 减少了目标人脸子库中预设人脸特征的 数量, 以便在将目标人脸特征与预设人脸特征进行匹配时, 能够较为快速的确定目标人脸图像对应的 第一匹配结果。 具体实施时,可以提供人脸子库的基础模块,比如,基础模块可以包括创建人脸子库的接口模块、 人脸子库内存储的预设人脸特征的检索逻辑模块、存储人脸子库中预设人脸特征的网络节点的扩展逻 辑模块等。 通过提供人脸子库的基础模块, 实现将基础模块与业务分库逻辑的解耦, 可以使得提供的 人脸子库的基础模块, 可以适用于不同业务场景的分库逻辑中, 提高基础模块的利用率。 实施时, 网 络节点可以通过调用人脸子库的基础模块, 实现为目标设备确定对应目标人脸子库。 一种可选实施方式中, 在确定与所述目标设备关联的目标人脸子库之后, 所述方法还包括: 根据 预设人脸特征的入库时间, 对目标人脸子库中包括的预设人脸特征进行删减操作, 生成删减操作后的 目标人脸子库。 其中, 根据预设人脸特征的入库时间, 对目标人脸子库中包括的预设人脸特征进行删减操作, 包 括: 方式一、 在目标人脸子库存储的预设人脸特征的数据量大于或等于目标人脸子库的库容阈值时, 按照预设人脸特征的入库时间从早到晚的顺序,对目标人脸子库中包括的预设人脸特征进行删减操作 方式二、 根据至少一个预设人脸特征的存储期限、 以及预设人脸特征的入库时间, 对目标人脸子 库中包括的预设人脸特征进行删减操作。 这里, 根据预设人脸特征的入库时间, 对目标人脸子库中包括的预设人脸特征进行删减操作, 生 成删减操作后的目标人脸子库, 将不符合要求的预设人脸特征删除, 减少了目标人脸子库中预设人脸 特征的数量, 进而在后续将目标人脸子库中存储的预设人脸特征与目标人脸特征进行匹配时, 可以提 高匹配的效率。 目标人脸子库可以为动态人脸库, 即动态人脸库中包括的预设人脸特征可以进行更新, 其中, 对 动态人脸库的更新可以包括预设人脸特征的增加和 /或删减。 在对目标人脸子库中包括的预设人脸特 征进行删减时, 可以确定存储的每个预设人脸特征的入库时间, 入库时间为目标人脸子库存储该预设 人脸特征的初始时间; 进而可以根据至少一个预设人脸特征的入库时间, 对目标人脸子库中包括的预 设人脸特征进行删减操作, 生成删减操作后的目标人脸子库。 在对目标人脸子库进行预设特征的增加时, 比如, 可以在目标人脸子库中不存在与目标人脸特征 匹配的预设人脸特征时, 
从人脸总库中获取与目标人脸特征匹配的第一人脸特征, 将该第一人脸特征 添加至目标人脸子库中, 对目标人脸子库进行更新。 在方式一中, 目标人脸子库对应的库容阈值可以根据实际情况进行设置。进而可以在目标人脸子 库存储的预设人脸特征的数据量大于或等于目标人脸子库的库容阈值时,按照预设人脸特征的入库时 间从早到晚的顺序, 对目标人脸子库中包括的预设人脸特征进行删减操作, 直至删减操作后的目标人 脸子库存储的数据量小于目标人脸子库的库容阈值。 在方式二中, 可以设置目标人脸子库的存储期限, 比如, 在目标人脸子库包括的预设人脸特征为 最近 7天内在刷脸支付设备(目标设备)上进行刷脸支付的人脸特征时, 该存储期限可以为 7天。 在确 定了预设人脸特征的入库时间和存储期限时, 可以在预设人脸特征的存储期限到期之后, 将存储期限 到期的预设人脸特征删除。 比如, 在预设人脸特征一的入库时间为 11月 1日 08:00, 存储期限为 7天, 则可以在 11月 8日 08:00时将预设人脸特征一从目标人脸子库中删除。 考虑到, 人脸特征库在每次删除待删除人脸特征时, 需要调用删除指令对待删除人脸特征进行删 减, 在待删除人脸特征的数量较多时, 上述通过调用删除指令对待删除人脸特征进行删减的方式较为 繁琐和耗时。 基于此, 本公开实施方式中, 可以设置方式一和 /或方式二对应的策略模块和策略模块 的接口, 目标人脸子库可以通过调用策略模块的接口实现策略模块的调用, 通过调用策略模块, 可以 使用方式一和 /或方式二, 实现预设人脸特征的自动删除, 提高了预设人脸特征的删除效率。 针对 S103和 S104: 示例性的, 可以确定目标人脸图像的目标人脸特征与每个预设人脸特征之间的余弦相似度, 根据 各个余弦相似度, 确定目标人脸图像对应的第一匹配结果。再可以根据目标人脸图像对应的第一匹配 结果, 确定操作请求对应的第一响应结果。 一种可选实施方式中, 参见图 2所示, 在 S104中, 基于第一匹配结果, 确定操作请求对应的第一 响应结果, 可以包括:
S1041,在第一匹配结果指示目标人脸子库中包括与目标人脸特征匹配的预设人脸特征的情况下, 基于与目标人脸特征匹配的预设人脸特征, 确定目标用户对应的账号信息;
S1042, 基于目标用户对应的账号信息, 确定操作请求对应的第一响应结果。 比如, 在刷脸支付场景中, 在第一匹配结果指示目标人脸子库中包括与目标人脸特征匹配的预设 人脸特征时, 可以基于与目标人脸特征匹配的预设人脸特征, 确定目标用户对应的账号信息, 该账号 信息可以为目标用户对应的支付账号信息。再可以从目标用户对应的账号信息中, 扣除支付操作请求 对应的支付金额, 完成刷脸支付。 另一种可选实施方式中, 基于第一匹配结果, 确定操作请求对应的第一响应结果, 可以包括步骤
C1和 C2, 其中: 步骤 C1,在第一匹配结果表明目标人脸子库中不包括与目标人脸特征匹配的预设人脸特征的情况 下, 将目标人脸特征与人脸总库中包括的总人脸特征进行匹配, 获得第二匹配结果; 步骤 C2,若第二匹配结果表明人脸总库中包括与目标人脸特征匹配的第一人脸特征时,基于第二 匹配结果, 确定操作请求对应的第二响应结果, 并将第一人脸特征同步至目标人脸子库。 在第一匹配结果表明目标人脸子库中不包括与目标人脸特征匹配的预设人脸特征时,可以将目标 人脸特征与人脸总库中包括的总人脸特征进行匹配, 获取第二匹配结果。若第二匹配结果指示人脸总 库中包括与目标人脸特征匹配的第一人脸特征时, 可以基于第二匹配结果, 确定操作请求对应的第二 响应结果, 并将第一人脸特征同步至目标人脸子库, 实现对目标人脸子库的更新。 一种可选实施方式中, 步骤 C1中, 将目标人脸特征与人脸总库中包括的总人脸特征进行匹配, 获 得第二匹配结果, 可以包括: 步骤 C11, 控制目标设备展示用于获取目标用户的标识信息的操作界面; 步骤 C12, 基于获取的目标用户的标识信息, 从人脸总库中获取标识信息对应的第一人脸特征; 步骤 C13, 将第一人脸特征与目标人脸特征进行匹配, 得到第二匹配结果。 实施时, 在第一匹配结果指示目标人脸子库中不存在与目标人脸特征匹配的预设人脸特征时, 可 以控制目标设备展示用于获取目标用户的标识信息的操作界面。其中, 目标用户的标识信息可以为表 征目标用户身份的信息, 不同的目标用户对应不同的标识信息。 比如, 标识信息可以为身份证号码、 电话号码、 为该目标用户生成的会员号码等。 可以预先确定人脸总库中的每个预设人脸特征对应的标识信息, 以便在获取到目标用户的标识信 息之后, 可以根据该标识信息, 从人脸总库中获取标识信息对应的第一人脸特征; 并将第一人脸特征 与目标人脸特征进行匹配, 得到目标人脸图像对应的第二匹配结果。 考虑到目标人脸总库中存在不包括与目标人脸特征匹配的预设人脸特征的情况,为了缓解上述情 况, 提高匹配的精准度和效率, 可以在发生上述情况时, 控制目标设备展示用于获取目标用户的标识 信息的操作界面, 再可以基于获取的目标用户的标识信息, 从人脸总库中较为精准的获取标识信息对 应的第一人脸特征。 再将第一人脸特征与目标人脸特征进行匹配, 得到第二匹配结果, 提高人脸特征 匹配的精准度。 一种可选实施方式中, 目标设备关联至少一个目标人脸子库, 目标人脸子库中的预设人脸特征被 存储在网络节点中; 在响应于目标设备发起的操作请求, 获取目标设备采集的目标用户的目标人脸图 像之前, 该方法还可以包括: 获取至少一个网络节点的服务性能信息; 所述服务性能信息包括: 负载性能信息和 /或硬件配置 信息; 根据多个网络节点的服务性能信息, 将预设人脸特征分配至至少一个网络节点。 进而,可以调用至少一个网络节点中的目标网络节点,将目标人脸特征与预设人脸特征进行匹配, 确定目标人脸图像对应的第一匹配结果。 这里, 目标设备关联至少一个目标人脸子库, 目标人脸子库中的预设人脸特征能够被存储在网络 节点中。 即网络节点用于存储预设人脸特征, 以及用于将目标人脸特征与存储的预设人脸特征进行匹 配。 获取至少一个网络节点的服务性能信息, 其中服务性能信息可以包括负载性能信息和 /或硬件配 置信息。 再可以根据每个网络节点的负载性能信息和 /或硬件配置信息, 将预设人脸特征分配至至少 一个网络节点。 比如, 可以为负载性能较好的网络节点分配较多的预设人脸特征, 为负载性能较差的 网络节点分配较少的预设人脸特征; 或者, 为硬件配置较高的网络节点分配较多的预设人脸特征, 为 硬件配置较低的网络节点分配较少的预设人脸特征。 这里, 根据设置的多个网络节点的负载信息和 /或硬件配置, 将预设人脸特征分配至至少一个网 络节点, 减少了由于有些网络节点的存储和计算压力较大、有些网络节点的存储和计算压力较小而造 成网络节点的负载不均衡的情况发生的概率, 使得每个网络节点的负载较为均衡, 提高了网络节点的 处理效率。 一种可选实施方式中, 在获取至少一个网络节点的服务性能信息之前, 该方法还可以包括: 根据 目标人脸子库中包括的预设人脸特征的数量,判断预设的多个网络节点的容量是否满足预设人脸特征 的存储需求; 若多个网络节点的容量不满足预设人脸特征的存储需求, 则扩展新的网络节点。 由于每个网络节点能够负载的预设人脸特征的数量是有限的,故在预设人脸特征的数量大于设置 的多个网络节点的负载能力时, 需要对网络节点进行扩展, 即需要扩展新的网络节点, 以便扩展后的 多个网络节点能够负载目标人脸子库中包括的预设人脸特征。 这里, 在多个网络节点的容量不满足预设人脸特征的存储需求时, 扩展新的网络节点, 使得扩展 后的多个网络节点能够存储目标人脸子库中包括的预设人脸特征,降低了网络节点负载过大的情况发 生的概率, 保障了网络节点的处理效率。 一种可选实施方式中, 在预设人脸特征被分配至多个网络节点的情况下, 调用至少一个网络节点 中的目标网络节点, 将目标人脸特征与预设人脸特征进行匹配, 确定目标人脸图像对应的第一匹配结 果, 包括: 判断目标网络节点是否正常工作; 若目标网络节点不能正常工作, 则从预设人脸特征对应 的多个网络节点中, 确定除目标网络节点之外的其他网络节点, 并从其他网络节点中, 确定更新后的 目标网络节点; 并调用更新后的目标网络节点, 将目标人脸特征与预设人脸特征进行匹配, 确定目标 人脸图像对应的第一匹配结果。 本实施方式中, 通过将每个预设人脸特征存储在多个网络节点中, 以便在存储预设人脸特征的多 个网络节点中的任一网络节点无法正常工作时,除任一网络节点之外的其他网络节点能够基于预设人 脸特征进行匹配, 保障了网络节点的高可用能力, 提高了目标人脸图像对应的目标人脸特征的匹配效 率。 实施时, 可以针对每个预设人脸特征, 设置优先处理的网络节点, 该优先处理的网络节点即为目 标网络节点, 在优先处理的网络节点无法正常工作时, 进行故障自动迁移, 控制预设人脸特征对应的 其他网络节点能够基于预设人脸特征进行匹配。 比如, 若预设人脸特征一被存储在网络节点一、 网络 节点二、 和网络节点三中, 网络节点一为设置的目标网络节点, 在网络节点一无法正常工作时, 可以 从网络节点二和网络节点三中, 随机选择网络节点二作为更新后的目标网络节点, 再可以使用网络节 点二, 对预设人脸特征一和目标人脸特征进行匹配。 一种可选实施方式中, 参见图 3所示, 预设人脸特征存储在目标网络节点的外存储器中; 调用至 少一个网络节点中的目标网络节点, 将目标人脸特征与预设人脸特征进行匹配, 确定目标人脸图像对 应的第一匹配结果, 可以包括:
S301, 调用目标网络节点中的处理器, 从外存储器中获取目标人脸子库包含的至少一个预设人脸 特征;
S302, 将获取到的至少一个预设人脸特征与目标人脸特征进行匹配, 确定目标人脸图像对应的第 一匹配结果。 其中, 处理器可以包括图形处理器 (graphics processing unit, GPU), 和 /或, 中央处理器 (central processing unit, CPU)。 在预设人脸特征存储在目标网络节点的外存储器中时, 在进行人脸特征匹配时, 需要调用目标网 络节点中的处理器, 从外存储器中获取目标人脸子库包含的至少一个预设人脸特征。在从外存储器中 获取至少一个预设人脸特征时, 需要对外存储器中存储的多个预设人脸特征进行遍历, 确定目标人脸 子库包含的预设人脸特征。再将获取到的每个预设人脸特征与目标人脸特征进行匹配, 确定目标人脸 图像对应的第一匹配结果。 具体实施时, 该处理器可以是 GPU, 也可以是 CPU, 比如, 在预设人脸特征较多, 和 /或, 匹配的 实时性要求较高时, 可以选择 GPU作为处理器, 进行人脸特征的匹配; 在预设人脸特征较少, 和 /或, 匹配的实时性要求较低时, 可以选择 CPU作为处理器, 进行人脸特征的匹配。 一种可选实施方式中, 预设人脸特征存储在目标网络节点的外存储器中; 该方法还包括: 将目标 网络节点的外存储器中存储的至少一个预设人脸特征, 加载至目标网络节点的内存中。 进而可以调用目标网络节点中的处理器,将内存中存储的目标人脸子库包含的每个预设人脸特征、 与目标人脸特征进行匹配, 确定目标人脸图像对应的第一匹配结果。 实施时, 可以先将目标网络节点的外存储器中存储的至少一个预设人脸特征, 加载至目标网络节 点的内存中。 进而, 可以调用目标网络节点中的处理器, 将内存中存储的目标人脸子库包含的每个预 设人脸特征与目标人脸特征进行匹配, 确定目标人脸图像对应的第一匹配结果。 这里, 通过将目标网络节点的外存储器中存储的至少一个预设人脸特征, 加载至目标网络节点的 内存中, 以便处理器可以直接基于内存中存储的目标人脸子库包含的每个预设人脸特征与目标人脸特 征进行匹配, 无需从外存储器中获取预设人脸特征, 处理器的人脸特征匹配过程较为简便、 快速, 保 障了人脸识别的实时性。 一种可选实施方式中, 将目标网络节点的外存储器中存储的至少一个预设人脸特征, 加载至目标 网络节点的内存中, 可以包括下述两种方式: 方式一、 将目标网络节点的外存中存储的全部预设人脸特征, 加载至目标网络节点的内存中。 方式二、基于每个预设人脸特征的匹配次数或匹配频率, 从目标网络节点的外存储器存储的至少 一个预设人脸特征中, 确定待加载的预设人脸特征; 将确定的待加载的预设人脸特征, 加载至目标网 络节点的内存中。 在方式一中, 在目标网络节点的内存容量大于外存储器中存储的全部预设人脸特征的容量时, 可 以将目标网络节点的外存储器中存储的全部预设人脸特征, 加载至目标网络节点的内存中。 在方式二中, 可以将目标网络节点的外存储器中存储的部分预设人脸特征, 加载至目标网络节点 的内存中。 实施时, 可以根据每个预设人脸特征的匹配次数或匹配频率, 从目标网络节点的外存储器 存储的至少一个预设人脸特征中, 将匹配次数较多或者匹配频率较高的预设人脸特征, 确定为待加载 的预设人脸特征; 并将确定的待加载的预设人脸特征, 加载至目标网络节点的内存中。 这里, 通过基于每个预设人脸特征的匹配次数或匹配频率, 从目标网络节点的外存储器存储的至 少一个预设人脸特征中, 确定待加载的预设人脸特征, 以便在基于待加载的预设人脸特征和目标人脸 特征进行匹配时, 可以减少从外存中获取预设人脸特征所消耗的时间和资源, 提高了人脸特征匹配的 效率。 一种可选实施方式中, 预设人脸特征存储在目标网络节点的外存储器中; 该方法还包括: 为外存 储器中存储的至少一个预设人脸特征生成对应的目标索引;将生成的至少一个预设人脸特征对应的目 标索引, 加载至目标网络节点的内存中; 其中, 所述目标索引为用于查找所述外存储器中存储的预设 人脸特征的索引。 实施时, 可以为外存储器中存储的每个预设人脸特征生成对应的目标索引, 也可以为外存储器中 存储的部分预设人脸特生成对应的目标索引, 比如, 可以为匹配次数较多或匹配频率较高的预设人脸 特生成对应的目标索引。并可以将生成的至少一个预设人脸特征对应的目标索引, 加载至目标网络节 点的内存中, 以便使得目标网络节点能够根据目标索引, 较为精准的获取目标索引对应的预设人脸特 征, 无需对外存储器中存储的每个预设人脸特征进行遍历, 提高了获取预设人脸特征的效率。 一种可选实施方式中, 调用至少一个网络节点中的目标网络节点, 将目标人脸特征与预设人脸特 征进行匹配, 确定目标人脸图像对应的第一匹配结果, 可以包括: 调用目标网络节点中的处理器, 搜 索内存中存储的目标人脸子库包含的预设人脸特征对应的目标索引; 以及从目标网络节点的外存储器 中,获取目标索引对应的预设人脸特征;将目标索引对应的预设人脸特征、与目标人脸特征进行匹配, 确定目标人脸图像对应的第一匹配结果。 这里, 可以调用目标网络节点中的处理器, 搜索内存中存储的目标人脸子库包含的预设人脸特征 对应的目标索引, 并从目标网络节点的外存储器中, 较为精准的获取目标索引对应的预设人脸特征; 再可以将根据目标索引获取的预设人脸特征与目标人脸特征进行匹配,确定目标人脸图像对应的第一 匹配结果, 提高了人脸特征匹配的效率。 结合图 4所示, 对人脸识别方法应用于刷脸支付场景为例进行说明: 第一, 目标用户在刷脸支付设备上进行刷脸支付时, 刷脸支付设备可以发起支付刷脸请求。 第二, 可以根据获取的刷脸支付设备的历史操作信息、 和设置的业务分库逻辑, 通过发起的小库 刷脸请求调用海量动态小库基础模块 (即人脸子库的基础模块), 基于人脸总库, 确定刷脸支付设备 对应的目标人脸子库。 第三, 确定存储目标人脸子库中预设人脸特征的目标网络节点; 并调用该目标网络节点, 将目标 人脸子库中存储的预设人脸特征与目标人脸图像对应的目标人脸特征进行匹配,确定目标人脸图像对 应的第一匹配结果。 基于相同的构思, 本公开实施例还提供了一种人脸识别系统, 参见图 5所示, 为本公开实施例提 供的人脸识别系统的架构示意图, 包括目标设备 501、 后台服务器 502; 所述后台服务器 502与所述目 标设备 501相连; 所述目标设备 501, 用于发起操作请求, 并基于所述操作请求, 获取目标用户的目标人脸图像; 所述后台服务器 502, 用于基于获取的所述目标人脸图像, 执行如第一方面或第一方面任一实施 方式所述的人脸识别方法。 一种可能的实施方式中, 所述系统还包括: 至少一个网络节点 503; 所述后台服务器 502与所述至 少一个网络节点 503相连; 所述后台服务器 502, 还用于控制所述至少一个网络节点 503存储预设人脸特征, 以及将存储的所 述预设人脸特征与目标人脸图像对应的目标人脸特征进行匹配。 示例性的, 在人脸识别系统应用于刷脸支付场景时, 目标设备可以为刷脸支付设备, 参见图 6所 示, 人脸识别系统执行以下步骤:
S601, 刷脸支付设备在目标用户发起支付请求时, 获取目标用户的目标人脸图像。
5602, 刷脸支付设备将获取的人脸图像发送给后台服务器。
5603, 后台服务器提取目标人脸图像的目标人脸特征; 以及确定目标设备关联的目标人脸子库, 并确定存储目标人脸子库中的预设人脸特征的目标网络节点。
5604, 后台服务器将目标人脸特征发送给目标网络节点。
5605, 目标网络节点将目标人脸特征与存储的目标人脸子库中的预设人脸特征进行匹配, 确定目 标人脸图像对应的第一匹配结果。
S606, 目标网络节点将第一匹配结果发送给后台服务器。
S607, 后台服务器基于第一匹配结果, 确定支付请求对应的第一响应结果。 本领域技术人员可以理解, 在具体实施方式的上述方法中, 各步骤的撰写顺序并不意味着严格的 执行顺序而对实施过程构成任何限定, 各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。 基于相同的构思, 本公开实施例还提供了一种人脸识别装置, 参见图 7所示, 为本公开实施例提 供的人脸识别装置的架构示意图, 包括获取模块 701、 第一确定模块 702、 第二确定模块 703、 第三确 定模块 704, 具体的: 获取模块 701, 用于响应于目标设备发起的操作请求, 获取所述目标设备采集的目标用户的目标 人脸图像, 并提取所述目标人脸图像的目标人脸特征; 第一确定模块 702, 用于确定与所述目标设备关联的目标人脸子库; 其中, 所述目标人脸子库中 存储的预设人脸特征属于人脸总库中存储的总人脸特征的一部分; 第二确定模块 703, 用于将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述目标人脸 图像对应的第一匹配结果; 第三确定模块 704, 用于基于所述第一匹配结果, 确定所述操作请求对应的第一响应结果。 一种可能的实施方式中, 所述获取模块 701, 在响应于目标设备发起的操作请求, 获取所述目标 设备采集的目标用户的目标人脸图像时, 用于: 响应于目标设备发起的操作请求, 控制所述目标设备采集目标用户的多帧候选人脸图像; 基于人脸在候选人脸图像中的位置、人脸在候选人脸图像中的朝向、候选人脸图像的光照信息中 的至少一者, 从所述多帧候选人脸图像中, 获取所述目标用户对应的目标人脸图像。 一种可能的实施方式中,所述第一确定模块 702,在确定与所述目标设备关联的目标人脸子库时, 用于: 获取所述目标设备的历史操作信息; 根据所述历史操作信息, 从所述人脸总库预先存储的总人脸特征中, 确定所述目标设备使用过的 预设人脸特征; 基于所述目标设备使用过的预设人脸特征, 确定与所述目标设备关联的目标人脸子库。 一种可能的实施方式中, 在确定与所述目标设备关联的目标人脸子库之后, 所述装置还包括: 删 减模块 705, 用于: 根据所述预设人脸特征的入库时间, 对所述目标人脸子库中包括的预设人脸特征进行删减操作, 生成删减操作后的目标人脸子库。 一种可能的实施方式中, 所述删减模块 705, 在根据所述预设人脸特征的入库时间, 对所述目标 人脸子库中包括的预设人脸特征进行删减操作时, 用于: 在所述目标人脸子库存储的预设人脸特征的数据量大于或等于所述目标人脸子库的库容阈值时, 按照所述预设人脸特征的入库时间从早到晚的顺序,对所述目标人脸子库中包括的预设人脸特征进行 删减操作; 和 /或, 根据至少一个预设人脸特征的存储期限、 以及所述预设人脸特征的入库时间, 对所述目标人脸子 库中包括的预设人脸特征进行删减操作。 一种可能的实施方式中, 所述第三确定模块 704, 在基于所述第一匹配结果, 确定所述操作请求 对应的第一响应结果时, 用于: 在所述第一匹配结果指示所述目标人脸子库中包括与所述目标人脸特征匹配的预设人脸特征的 情况下, 基于与所述目标人脸特征匹配的预设人脸特征, 确定所述目标用户对应的账号信息; 基于所述目标用户对应的账号信息, 确定所述操作请求对应的第一响应结果。 一种可能的实施方式中, 所述第三确定模块 704, 在基于所述第一匹配结果, 确定所述操作请求 对应的第一响应结果时, 用于: 在所述第一匹配结果表明所述目标人脸子库中不包括与所述目标人脸特征匹配的预设人脸特征 的情况下, 将所述目标人脸特征与所述人脸总库中包括的所述总人脸特征进行匹配, 获得第二匹配结 果; 若所述第二匹配结果表明所述人脸总库中包括与所述目标人脸特征匹配的第一人脸特征时,基于 所述第二匹配结果, 确定所述操作请求对应的第二响应结果, 并将所述第一人脸特征同步至所述目标 人脸子库。 一种可能的实施方式中, 所述第三确定模块 704, 在将所述目标人脸特征与所述人脸总库中包括 的所述总人脸特征进行匹配, 获得第二匹配结果时, 用于: 控制目标设备展示用于获取所述目标用户的标识信息的操作界面; 基于获取的所述目标用户的标识信息,从所述人脸总库中获取所述标识信息对应的第一人脸特征; 将所述第一人脸特征与所述目标人脸特征进行匹配, 得到所述第二匹配结果。 一种可能的实施方式中, 所述目标设备关联至少一个所述目标人脸子库, 所述目标人脸子库中的 所述预设人脸特征被存储在网络节点中; 在所述响应于目标设备发起的操作请求, 获取所述目标设备 采集的目标用户的目标人脸图像之前, 所述装置还包括: 分配模块 706, 用于: 获取至少一个网络节点的服务性能信息; 所述服务性能信息包括: 负载性能信息和 /或硬件配置 信息; 根据多个所述网络节点的服务性能信息, 将所述预设人脸特征分配至至少一个网络节点; 所述第二确定模块 703, 在将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述目标人 脸图像对应的第一匹配结果时, 用于: 调用所述至少一个网络节点中的目标网络节点,将所述目标人脸特征与所述预设人脸特征进行匹 配, 确定所述目标人脸图像对应的第一匹配结果。 一种可能的实施方式中, 在获取至少一个网络节点的服务性能信息之前, 所述装置还包括: 判断 模块 707, 用于: 根据所述目标人脸子库中包括的预设人脸特征的数量,判断预设的多个网络节点的容量是否满足 所述预设人脸特征的存储需求; 若多个所述网络节点的容量不满足所述预设人脸特征的存储需求, 则扩展新的网络节点。 一种可能的实施方式中,在预设人脸特征被分配至多个网络节点的情况下,所述第二确定模块 703, 在调用所述至少一个网络节点中的目标网络节点,将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述目标人脸图像对应的第一匹配结果时, 用于: 判断所述目标网络节点是否正常工作; 若所述目标网络节点不能正常工作, 则从所述预设人脸特征对应的多个网络节点中, 确定除所述 目标网络节点之外的其他网络节点, 并从所述其他网络节点中, 确定更新后的目标网络节点; 调用所述更新后的目标网络节点, 将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述 目标人脸图像对应的第一匹配结果。 一种可能的实施方式中, 所述预设人脸特征存储在目标网络节点的外存储器中; 所述第二确定模 块 703, 在调用所述至少一个网络节点中的目标网络节点, 将所述目标人脸特征与所述预设人脸特征 进行匹配, 确定所述目标人脸图像对应的第一匹配结果时, 用于: 调用所述目标网络节点中的处理器,从所述外存储器中获取所述目标人脸子库包含的至少一个所 述预设人脸特征; 以及 将获取到的至少一个所述预设人脸特征与所述目标人脸特征进行匹配,确定所述目标人脸图像对 应的第一匹配结果。 一种可能的实施方式中, 所述处理器包括图形处理器 GPU和 /或中央处理器 CPU。 一种可能的实施方式中,所述预设人脸特征存储在目标网络节点的外存储器中;所述装置还包括: 加载模块 708, 用于: 将所述目标网络节点的外存储器中存储的至少一个预设人脸特征,加载至所述目标网络节点的内 存中; 所述第二确定模块 703, 在调用所述至少一个网络节点中的目标网络节点, 将所述目标人脸特征 与所述预设人脸特征进行匹配, 确定所述目标人脸图像对应的第一匹配结果时, 用于: 调用所述目标网络节点中的处理器,将内存中存储的所述目标人脸子库包含的至少一个所述预设 人脸特征、 与所述目标人脸特征进行匹配, 确定所述目标人脸图像对应的第一匹配结果。 一种可能的实施方式中, 所述加载模块 708, 在将所述目标网络节点的外存储器中存储的至少一 个预设人脸特征, 加载至所述目标网络节点的内存中时, 用于: 基于至少一个预设人脸特征的匹配次数或匹配频率,从所述目标网络节点的外存储器存储的至少 一个预设人脸特征中, 确定待加载的预设人脸特征; 将确定的所述待加载的预设人脸特征, 加载至所述目标网络节点的内存中。 一种可能的实施方式中,所述预设人脸特征存储在目标网络节点的外存储器中;所述装置还包括: 生成模块 709, 用于: 为所述外存储器中存储的至少一个预设人脸特征生成对应的目标索引; 将生成的至少一个预设人脸特征对应的目标索引, 加载至所述目标网络节点的内存中; 其中, 所 述目标索引为用于查找所述外存储器中存储的预设人脸特征的索引。 
一种可能的实施方式中, 所述第二确定模块 703, 在调用所述至少一个网络节点中的目标网络节 点, 将所述目标人脸特征与所述预设人脸特征进行匹配, 确定所述目标人脸图像对应的第一匹配结果 时, 用于: 调用所述目标网络节点中的处理器,搜索内存中存储的所述目标人脸子库包含的预设人脸特征对 应的所述目标索引; 以及 从所述目标网络节点的所述外存储器中, 获取所述目标索引对应的预设人脸特征; 将所述目标索引对应的所述预设人脸特征、与所述目标人脸特征进行匹配, 确定所述目标人脸图 像对应的第一匹配结果。 在一些实施例中,本公开实施例提供的装置具有的功能或包含的模板可以用于执行上文方法实施 例描述的方法, 其具体实现可以参照上文方法实施例的描述, 为了简洁, 这里不再赘述。 在本公开的一些实施例中,本公开实施例提供的装置具有的功能或包含的模块可以用于执行上文 方法实施例描述的方法, 其具体实现和技术效果可参照上文方法实施例的描述, 为了简洁, 这里不再 赘述。 基于同一技术构思, 本公开实施例还提供了一种电子设备。 参照图 8所示, 为本公开实施例提供 的电子设备的结构示意图, 包括处理器 801、 存储器 802、 和总线 803。 其中, 存储器 802用于存储执行 指令, 包括内存 8021和外部存储器 8022; 这里的内存 8021也称内存储器, 用于暂时存放处理器 801中 的运算数据, 以及与硬盘等外部存储器 8022交换的数据, 处理器 801通过内存 8021与外部存储器 8022 进行数据交换, 当电子设备 800运行时,处理器 801与存储器 802之间通过总线 803通信,使得处理器 801 在执行以下指令: 响应于目标设备发起的操作请求, 获取所述目标设备采集的目标用户的目标人脸图像, 并提取所 述目标人脸图像的目标人脸特征; 确定与所述目标设备关联的目标人脸子库; 其中, 所述目标人脸子库中存储的预设人脸特征属于 人脸总库中存储的总人脸特征的一部分; 将所述目标人脸特征与所述预设人脸特征进行匹配,确定所述目标人脸图像对应的第一匹配结果 基于所述第一匹配结果, 确定所述操作请求对应的第一响应结果。 其中, 处理器 801的具体处理流程可以参照上述方法实施例的记载, 这里不再赘述。 此外, 本公开实施例还提供一种计算机可读存储介质, 该计算机可读存储介质上存储有计算机程 序, 该计算机程序被处理器运行时执行上述方法实施例中所述的人脸识别方法的步骤。 其中, 该存储 介质可以是易失性或非易失的计算机可读取存储介质。 本公开实施例还提供一种计算机程序产品, 该计算机程序产品承载有程序代码, 所述程序代码包 括的指令可用于执行上述方法实施例中所述的人脸识别方法的步骤, 具体可参见上述方法实施例, 在 此不再赘述。 其中,上述计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一个可选实施例中, 所述计算机程序产品具体体现为计算机存储介质, 在另一个可选实施例中, 计算机程序产品具体体现 为软件产品, 例如软件开发包 (Software Development Kit, SDK) 等等。 所属领域的技术人员可以清楚地了解到, 为描述的方便和简洁, 上述描述的系统和装置的具体工 作过程, 可以参考前述方法实施例中的对应过程, 在此不再赘述。 在本公开所提供的几个实施例中, 应该理解到, 所揭露的系统、 装置和方法, 可以通过其它的方式实现。 以上所描述的装置实施例仅仅 是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式, 又例如, 多个单元或组件可以结合或者可以集成到另一个系统, 或一些特征可以忽略, 或不执行。 另 一点, 所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口, 装置或单元 的间接耦合或通信连接, 可以是电性, 机械或其它的形式。 所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是 或者也可以不是物理单元, 即可以位于一个地方, 或者也可以分布到多个网络单元上。 可以根据实际 的需要选择其中的部分或者全部单元来实现本实施例方案的目的。 另外, 在本公开各个实施例中的各功能单元可以集成在一个处理单元中, 也可以是各个单元单独 物理存在, 也可以两个或两个以上单元集成在一个单元中。 所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理 器可执行的非易失的计算机可读取存储介质中。基于这样的理解, 本公开的技术方案本质上或者说对 现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品 存储在一个存储介质中, 包括若干指令用以使得一台计算机设备 (可以是个人计算机, 服务器, 或者 网络设备等) 执行本公开各个实施例所述方法的全部或部分步骤。 而前述的存储介质包括: U盘、 移 动硬盘、只读存储器 (Read-Only Memory, ROM)、随机存取存储器 (Random Access Memory, RAM)、 磁碟或者光盘等各种可以存储程序代码的介质。 以上仅为本公开的具体实施方式, 但本公开的保护范围并不局限于此, 任何熟悉本技术领域的技 术人员在本公开揭露的技术范围内, 可轻易想到变化或替换, 都应涵盖在本公开的保护范围之内。 因 此, 本公开的保护范围应以权利要求的保护范围为准。

Claims

权 利 要 求 书
1、 一种人脸识别方法, 包括: 响应于目标设备发起的操作请求, 获取所述目标设备采集的目标用户的目标人脸图像, 并提取所 述目标人脸图像的目标人脸特征; 确定与所述目标设备关联的目标人脸子库; 其中, 所述目标人脸子库中存储的预设人脸特征属于 人脸总库中存储的总人脸特征的一部分; 将所述目标人脸特征与所述预设人脸特征进行匹配,确定所述目标人脸图像对应的第一匹配结果 基于所述第一匹配结果, 确定所述操作请求对应的第一响应结果。
2、 根据权利要求 1所述的方法, 其中, 所述响应于目标设备发起的操作请求, 获取所述目标设备 采集的目标用户的目标人脸图像, 包括: 响应于目标设备发起的操作请求, 控制所述目标设备采集目标用户的多帧候选人脸图像; 基于人脸在候选人脸图像中的位置、人脸在候选人脸图像中的朝向、候选人脸图像的光照信息中 的至少一者, 从所述多帧候选人脸图像中, 获取所述目标用户对应的目标人脸图像。
3. The method according to claim 1 or 2, wherein the determining the target face sub-library associated with the target device comprises: acquiring historical operation information of the target device; determining, according to the historical operation information, preset face features that have been used by the target device from the total face features pre-stored in the total face library; and determining, based on the preset face features that have been used by the target device, the target face sub-library associated with the target device.

4. The method according to any one of claims 1 to 3, wherein after the target face sub-library associated with the target device is determined, the method further comprises: performing a pruning operation on the preset face features included in the target face sub-library according to a storage time of the preset face features, to generate a pruned target face sub-library.
5. The method according to claim 4, wherein the performing the pruning operation on the preset face features included in the target face sub-library according to the storage time of the preset face features comprises: when a data volume of the preset face features stored in the target face sub-library is greater than or equal to a capacity threshold of the target face sub-library, performing the pruning operation on the preset face features included in the target face sub-library in order of their storage time from earliest to latest; and/or performing the pruning operation on the preset face features included in the target face sub-library according to a storage period of at least one preset face feature and the storage time of the preset face features.
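Claim 5 prunes the sub-library either when a capacity threshold is reached (earliest-stored entries first) or when an entry's storage period has expired. The following is a minimal sketch of both rules, assuming each preset feature carries a stored_at timestamp; the data layout and the retention_days parameter are assumptions for illustration only.

```python
from datetime import datetime, timedelta

def prune_sublibrary(entries, capacity_threshold, retention_days=None, now=None):
    """entries: list of dicts with 'feature' and 'stored_at' (datetime).
    Returns the pruned target face sub-library."""
    now = now or datetime.now()

    # Rule 1: drop entries whose storage period has expired.
    if retention_days is not None:
        deadline = now - timedelta(days=retention_days)
        entries = [e for e in entries if e['stored_at'] >= deadline]

    # Rule 2: if still at or over capacity, evict the earliest-stored entries first.
    if len(entries) >= capacity_threshold:
        entries = sorted(entries, key=lambda e: e['stored_at'])
        entries = entries[len(entries) - capacity_threshold + 1:]

    return entries
```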
6. The method according to any one of claims 1 to 5, wherein the determining, based on the first matching result, the first response result corresponding to the operation request comprises: in a case where the first matching result indicates that the target face sub-library includes a preset face feature matching the target face feature, determining account information corresponding to the target user based on the preset face feature matching the target face feature; and determining, based on the account information corresponding to the target user, the first response result corresponding to the operation request.

7. The method according to any one of claims 1 to 6, wherein the determining, based on the first matching result, the first response result corresponding to the operation request comprises: in a case where the first matching result indicates that the target face sub-library does not include a preset face feature matching the target face feature, matching the target face feature with the total face features included in the total face library to obtain a second matching result; and if the second matching result indicates that the total face library includes a first face feature matching the target face feature, determining, based on the second matching result, a second response result corresponding to the operation request, and synchronizing the first face feature to the target face sub-library.

8. The method according to claim 7, wherein the matching the target face feature with the total face features included in the total face library to obtain the second matching result comprises: controlling the target device to display an operation interface for acquiring identification information of the target user; acquiring, based on the acquired identification information of the target user, the first face feature corresponding to the identification information from the total face library; and matching the first face feature with the target face feature to obtain the second matching result.
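Claim 8 narrows the fallback search of the total library to a single first face feature fetched by the user's identification information (for example, an identifier entered on the target device's operation interface). Below is a hedged, illustrative sketch of that one-to-one verification step; fetch_feature_by_id and the 0.6 threshold are placeholders, not interfaces defined in this publication.

```python
import numpy as np

def verify_against_total_library(target_feature, user_id, fetch_feature_by_id, threshold=0.6):
    """fetch_feature_by_id: callable mapping identification info -> the first face feature
    stored in the total face library (or None if the identifier is unknown).
    Returns the 'second matching result' as a (matched, score) pair."""
    first_feature = fetch_feature_by_id(user_id)
    if first_feature is None:
        return False, 0.0
    t = target_feature / np.linalg.norm(target_feature)
    f = first_feature / np.linalg.norm(first_feature)
    score = float(np.dot(t, f))
    return score >= threshold, score
```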
9. The method according to any one of claims 1 to 8, wherein the target device is associated with at least one target face sub-library, and the preset face features in the target face sub-library are stored in network nodes; before the acquiring, in response to the operation request initiated by the target device, the target face image of the target user collected by the target device, the method further comprises: acquiring service performance information of at least one network node, the service performance information comprising load performance information and/or hardware configuration information; and allocating the preset face features to at least one network node according to the service performance information of multiple network nodes; and the matching the target face feature with the preset face features to determine the first matching result corresponding to the target face image comprises: invoking a target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image.
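Claim 9 distributes the preset face features across network nodes according to each node's service performance information. One simple way to realise that, sketched below purely for illustration, is weighted assignment proportional to a per-node capacity score derived from load and hardware configuration; the scoring heuristic and field names are assumptions.

```python
def allocate_features(feature_ids, nodes):
    """feature_ids: identifiers of preset face features to distribute.
    nodes: list of dicts like {'name': 'node-a', 'cpu_cores': 16, 'load': 0.3}.
    Returns a mapping: node name -> list of assigned feature ids."""
    feature_ids = list(feature_ids)

    # Assumed capacity heuristic: more cores and lower current load earn a larger share.
    def capacity(node):
        return node['cpu_cores'] * (1.0 - min(node['load'], 0.99))

    total = sum(capacity(n) for n in nodes)
    assignment = {n['name']: [] for n in nodes}

    # Cumulative capacity thresholds in [0, 1]; features are dealt out proportionally.
    bounds, cumulative = [], 0.0
    for n in nodes:
        cumulative += capacity(n) / total
        bounds.append((cumulative, n['name']))

    for i, fid in enumerate(feature_ids):
        u = (i + 0.5) / len(feature_ids)      # evenly spaced positions in (0, 1)
        for bound, name in bounds:
            if u <= bound:
                assignment[name].append(fid)
                break
        else:
            assignment[bounds[-1][1]].append(fid)
    return assignment
```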
10. The method according to claim 9, wherein before the acquiring the service performance information of at least one network node, the method further comprises: judging, according to a number of the preset face features included in the target face sub-library, whether a capacity of multiple preset network nodes meets a storage requirement of the preset face features; and if the capacity of the multiple network nodes does not meet the storage requirement of the preset face features, expanding a new network node.

11. The method according to claim 9 or 10, wherein in a case where the preset face features are allocated to multiple network nodes, the invoking the target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image comprises: judging whether the target network node is working normally; if the target network node cannot work normally, determining, from the multiple network nodes corresponding to the preset face features, other network nodes than the target network node, and determining an updated target network node from the other network nodes; and invoking the updated target network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image.
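Claim 11 falls back to another node holding the same preset face features when the target network node is unhealthy. A minimal sketch of that selection follows, assuming a health-check callable and a list of nodes storing the relevant feature partition; both are placeholders rather than interfaces defined in this publication.

```python
def choose_matching_node(target_node, replica_nodes, is_healthy):
    """target_node: the node initially chosen to run the match.
    replica_nodes: all nodes that store the relevant preset face features.
    is_healthy: callable(node) -> bool, e.g. a heartbeat or RPC ping.
    Returns the node that should run the match, or raises if none is available."""
    if is_healthy(target_node):
        return target_node
    # Determine the other nodes and pick an updated target node among them.
    for node in replica_nodes:
        if node != target_node and is_healthy(node):
            return node
    raise RuntimeError("no healthy network node holds the required preset face features")
```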
12. The method according to any one of claims 9 to 11, wherein the preset face features are stored in an external memory of the target network node; and the invoking the target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image comprises: invoking a processor in the target network node to acquire, from the external memory, at least one preset face feature included in the target face sub-library; and matching the acquired at least one preset face feature with the target face feature to determine the first matching result corresponding to the target face image.

13. The method according to claim 12, wherein the processor comprises a graphics processing unit (GPU) and/or a central processing unit (CPU).

14. The method according to any one of claims 9 to 11, wherein the preset face features are stored in an external memory of the target network node; the method further comprises: loading at least one preset face feature stored in the external memory of the target network node into an internal memory of the target network node; and the invoking the target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image comprises: invoking a processor in the target network node to match the at least one preset face feature, included in the target face sub-library and stored in the internal memory, with the target face feature to determine the first matching result corresponding to the target face image.

15. The method according to claim 14, wherein the loading the at least one preset face feature stored in the external memory of the target network node into the internal memory of the target network node comprises: determining, based on a number of matches or a matching frequency of at least one preset face feature, preset face features to be loaded from the at least one preset face feature stored in the external memory of the target network node; and loading the determined preset face features to be loaded into the internal memory of the target network node.
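Claim 15 promotes preset face features from the node's external storage into memory according to how often they have matched. The sketch below keeps the most frequently matched features resident, similar to a frequency-based cache; the match_count field and the memory budget are assumptions made for the example.

```python
def features_to_load(catalog, memory_budget):
    """catalog: list of dicts like {'feature_id': 'f1', 'match_count': 42} describing
    preset face features stored in the node's external storage.
    memory_budget: how many features fit in the node's internal memory.
    Returns the feature ids to load, most frequently matched first."""
    ranked = sorted(catalog, key=lambda e: e['match_count'], reverse=True)
    return [e['feature_id'] for e in ranked[:memory_budget]]

# Example: with a budget of 2, the two most matched features are loaded into memory.
# features_to_load([{'feature_id': 'a', 'match_count': 3},
#                   {'feature_id': 'b', 'match_count': 9},
#                   {'feature_id': 'c', 'match_count': 5}], 2) -> ['b', 'c']
```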
16. The method according to any one of claims 9 to 11, wherein the preset face features are stored in an external memory of the target network node; the method further comprises: generating a corresponding target index for at least one preset face feature stored in the external memory; and loading the generated target index corresponding to the at least one preset face feature into an internal memory of the target network node, wherein the target index is an index used to look up the preset face features stored in the external memory.

17. The method according to claim 16, wherein the invoking the target network node among the at least one network node to match the target face feature with the preset face features and determine the first matching result corresponding to the target face image comprises: invoking a processor in the target network node to search the internal memory for the target index corresponding to the preset face features included in the target face sub-library; acquiring, from the external memory of the target network node, the preset face features corresponding to the target index; and matching the preset face features corresponding to the target index with the target face feature to determine the first matching result corresponding to the target face image.
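Claim 17 keeps only a target index in memory and reads the actual preset face features from external storage on demand. The hedged sketch below uses a memory-resident dict that maps feature ids to record numbers in a flat file of float32 vectors; the file layout, feature dimension, and threshold are assumptions chosen only for the example.

```python
import numpy as np

FEATURE_DIM = 512                      # assumed feature length
RECORD_BYTES = FEATURE_DIM * 4         # float32 vectors stored back to back

def match_via_index(target_feature, index, feature_file, threshold=0.6):
    """index: in-memory dict {feature_id: record_number} for the sub-library's features.
    feature_file: open binary file in external storage holding the float32 vectors.
    Returns (best_id or None, best_score)."""
    t = target_feature / np.linalg.norm(target_feature)
    best_id, best_score = None, -1.0
    for fid, record_no in index.items():
        # Search the in-memory index, then fetch only that record from external storage.
        feature_file.seek(record_no * RECORD_BYTES)
        vec = np.frombuffer(feature_file.read(RECORD_BYTES), dtype=np.float32)
        vec = vec / np.linalg.norm(vec)
        score = float(np.dot(t, vec))
        if score > best_score:
            best_id, best_score = fid, score
    return (best_id if best_score >= threshold else None), best_score
```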
18. A face recognition system, comprising a target device and a backend server, the backend server being connected to the target device, wherein: the target device is configured to initiate an operation request and acquire, based on the operation request, a target face image of a target user; and the backend server is configured to execute, based on the acquired target face image, the face recognition method according to any one of claims 1 to 17.

19. The system according to claim 18, wherein the system further comprises at least one network node, and the backend server is connected to the at least one network node; and the backend server is further configured to control the at least one network node to store preset face features and to match the stored preset face features with a target face feature corresponding to the target face image.

20. A face recognition apparatus, comprising: an acquisition module, configured to, in response to an operation request initiated by a target device, acquire a target face image of a target user collected by the target device, and extract a target face feature of the target face image; a first determination module, configured to determine a target face sub-library associated with the target device, wherein preset face features stored in the target face sub-library are a part of total face features stored in a total face library; a second determination module, configured to match the target face feature with the preset face features to determine a first matching result corresponding to the target face image; and a third determination module, configured to determine, based on the first matching result, a first response result corresponding to the operation request.

21. An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus; and the machine-readable instructions, when executed by the processor, execute the face recognition method according to any one of claims 1 to 17.

22. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when run by a processor, executes the face recognition method according to any one of claims 1 to 17.

23. A computer program product, comprising computer-readable code or a non-volatile computer-readable storage medium carrying computer-readable code, wherein when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device is configured to implement the method according to any one of claims 1 to 17.
PCT/IB2021/060012 2021-06-30 2021-10-29 Face recognition method, system, device, electronic device and storage medium WO2023275606A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110737919.9A CN113326810A (zh) 2021-06-30 2021-06-30 Face recognition method, system, device, electronic device and storage medium
CN202110737919.9 2021-06-30

Publications (1)

Publication Number Publication Date
WO2023275606A1 true WO2023275606A1 (zh) 2023-01-05

Family

ID=77423498

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2021/060012 WO2023275606A1 (zh) 2021-10-29 Face recognition method, system, device, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN113326810A (zh)
WO (1) WO2023275606A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114648798A (zh) * 2022-03-18 2022-06-21 成都商汤科技有限公司 Face recognition method and apparatus, electronic device, and storage medium
CN115798023B (zh) * 2023-02-13 2023-04-18 成都睿瞳科技有限责任公司 Face recognition authentication method and apparatus, storage medium, and processor

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140330729A1 (en) * 2013-05-03 2014-11-06 Patrick Colangelo Payment processing using biometric identification
CN107657222A (zh) * 2017-09-12 2018-02-02 广东欧珀移动通信有限公司 Face recognition method and related product
CN110399763A (zh) * 2018-04-24 2019-11-01 深圳奥比中光科技有限公司 Face recognition method and system
CN111292460A (zh) * 2020-02-27 2020-06-16 广州羊城通有限公司 Control method and apparatus based on subway face-scanning authentication

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107808118A (zh) * 2017-09-28 2018-03-16 平安科技(深圳)有限公司 Identity recognition method, electronic apparatus, and computer-readable storage medium
CN110442773B (zh) * 2019-08-13 2023-07-18 深圳市网心科技有限公司 Node caching method, system and apparatus in a distributed system, and computer medium
CN110457281A (zh) * 2019-08-14 2019-11-15 北京博睿宏远数据科技股份有限公司 Data processing method, apparatus, device, and medium
CN111275448A (zh) * 2020-02-22 2020-06-12 腾讯科技(深圳)有限公司 Face data processing method and apparatus, and computer device
CN111666443A (zh) * 2020-06-03 2020-09-15 腾讯科技(深圳)有限公司 Service processing method and apparatus, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN113326810A (zh) 2021-08-31

Similar Documents

Publication Publication Date Title
WO2018219178A1 (zh) Data synchronization method, apparatus, server and storage medium
US20200142928A1 (en) Mobile video search
WO2023275606A1 (zh) Face recognition method, system, device, electronic device and storage medium
US20230244730A1 (en) Matchmaking Video Chatting Partners
WO2021203823A1 (zh) Image classification method and apparatus, storage medium, and electronic device
WO2021043064A1 (zh) Community discovery method and apparatus, computer device, and storage medium
WO2023273058A1 (zh) Identity recognition method, system and apparatus, computer device, and storage medium
CN102915350A (zh) Method, apparatus and device for querying contact information
CN108664914B (zh) Face retrieval method and apparatus, and server
WO2020087950A1 (zh) Database update method and apparatus, electronic device, and computer storage medium
JP2021034003A (ja) Person identification method, apparatus, electronic device, storage medium, and program
CN103609098B (zh) Method and apparatus for registration in a telepresence system
US20190364196A1 (en) Method and Apparatus for Generating Shot Information
WO2021169811A1 (zh) Special effect generation method, apparatus, system, device, and storage medium
US10873662B2 (en) Identifying a media item to present to a user device via a communication session
JP2019133347A (ja) Authentication system and authentication method
CN115082999A (zh) Method and apparatus for analyzing persons in a group photo image, computer device, and storage medium
CN114222028A (zh) Speech recognition method and apparatus, computer device, and storage medium
US9940948B2 (en) Systems and methods for enabling information exchanges between devices
CN113094530B (zh) Image data retrieval method and apparatus, electronic device, and storage medium
CN111339107A (zh) Comparison source data synchronization method and apparatus, electronic device, and storage medium
US10467259B2 (en) Method and system for classifying queries
CN112992152B (zh) Individual voiceprint recognition system and method, storage medium, and electronic device
CN113409051B (zh) Risk identification method and apparatus for a target service
RU2750642C2 (ru) System and method for registering a unique mobile device identifier

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21948220

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE