WO2019047897A1 - Face unlocking and information registration method and apparatus, device, and medium - Google Patents

Face unlocking and information registration method and apparatus, device, and medium

Info

Publication number
WO2019047897A1
WO2019047897A1 PCT/CN2018/104408 CN2018104408W WO2019047897A1 WO 2019047897 A1 WO2019047897 A1 WO 2019047897A1 CN 2018104408 W CN2018104408 W CN 2018104408W WO 2019047897 A1 WO2019047897 A1 WO 2019047897A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
face
angle
detection
feature
Prior art date
Application number
PCT/CN2018/104408
Other languages
English (en)
French (fr)
Inventor
吴立威
金啸
秦红伟
张瑞
暴天鹏
宋广录
苏鑫
闫俊杰
Original Assignee
北京市商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司 filed Critical 北京市商汤科技开发有限公司
Priority to SG11202001349XA priority Critical patent/SG11202001349XA/en
Priority to JP2020512794A priority patent/JP7080308B2/ja
Priority to KR1020207006153A priority patent/KR102324706B1/ko
Publication of WO2019047897A1 publication Critical patent/WO2019047897A1/zh
Priority to US16/790,703 priority patent/US20200184059A1/en

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
            • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
              • G06F 21/31 User authentication
                • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 5/00 Image enhancement or restoration
            • G06T 5/90 Dynamic range modification of images or parts thereof
              • G06T 5/92 Dynamic range modification of images or parts thereof based on global image properties
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 Arrangements for image or video recognition or understanding
            • G06V 10/10 Image acquisition
              • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
                • G06V 10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
                  • G06V 10/141 Control of illumination
          • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
              • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
                • G06V 40/161 Detection; Localisation; Normalisation
                  • G06V 40/166 Detection; Localisation; Normalisation using acquisition arrangements
                • G06V 40/168 Feature extraction; Face representation
                  • G06V 40/169 Holistic features and representations, i.e. based on the facial image taken as a whole
                • G06V 40/172 Classification, e.g. identification
            • G06V 40/40 Spoof detection, e.g. liveness detection
              • G06V 40/45 Detection of the body part being alive

Definitions

  • the present disclosure relates to artificial intelligence technology, and in particular, to a face unlocking and information registration method and apparatus, device, program, and medium thereof.
  • In the information age, terminal applications (APPs) of all kinds keep emerging. When using these applications, each user needs to register user information so that user data can be retained and protected.
  • Terminal devices can provide more and more functions for users, such as communication, photo storage, and installation of various applications. Many users lock their terminal devices to prevent the user data in them from leaking. Protecting private data in terminal devices and applications has therefore gradually become a focus of attention.
  • Embodiments of the present disclosure provide a technical solution for face unlocking.
  • A face unlocking method, including:
  • authenticating extracted facial features based on stored facial features, wherein the stored facial features include face features of face images of at least two different angles corresponding to the same identification (ID); and
  • performing an unlocking operation at least in response to the extracted facial features passing the authentication.
  • A face unlocking information registration method, including:
  • Face feature extraction is performed on an image in which a human face of each angle is detected
  • a face unlocking apparatus including:
  • a face detection module for performing face detection on an image
  • a feature extraction module configured to perform face feature extraction on the image of the detected face
  • an authentication module configured to authenticate the extracted facial features based on stored facial features, wherein the stored facial features include face features of face images of at least two different angles corresponding to the same identification (ID); and
  • a control module configured to perform an unlocking operation at least in response to the extracted facial features passing the authentication.
  • an electronic device, including: a processor and the face unlocking apparatus according to any of the embodiments of the present disclosure;
  • when the processor runs the face unlocking apparatus, the units in the face unlocking apparatus according to any of the embodiments of the present disclosure are operated.
  • another electronic device, including: a memory configured to store executable instructions; and
  • one or more processors in communication with the memory to execute the executable instructions so as to perform the operations of the steps in the method of any of the embodiments of the present disclosure.
  • a computer program, comprising computer readable code which, when run on a device, causes a processor in the device to execute instructions for implementing the steps in the method of any of the embodiments of the present disclosure.
  • a computer readable medium for storing computer readable instructions that, when executed, implement the operations of the steps in the method of any of the embodiments of the present disclosure.
  • With the face unlocking and information registration methods and apparatuses, devices, and media provided by the foregoing embodiments of the present disclosure, face features of face images of at least two different angles corresponding to the same ID may be pre-stored through the registration process. When face unlocking is performed, face detection is performed on an image, face features are extracted from the image in which a face is detected, the extracted face features are authenticated based on the stored face features, and the unlocking operation is performed after the extracted face features pass the authentication, thereby implementing face-based authentication and unlocking. The unlocking mode of the embodiments of the present disclosure is simple to operate, highly convenient, and highly secure. Moreover, because face features of face images of at least two different angles corresponding to the same ID are pre-stored through the registration process, face unlocking based on the user's face can succeed as long as a face image of the user at any angle matching the same ID as the stored face features is obtained, which improves the success rate of face unlocking and reduces the possibility of authentication failure caused by a difference between the user's face angle at authentication and the face angle at registration.
  • FIG. 1 is a flow chart of an embodiment of a method for unlocking a face according to the present disclosure.
  • FIG. 2 is a flow chart of another embodiment of a method for unlocking a face of the present disclosure.
  • FIG. 3 is a flow chart of still another embodiment of a method for unlocking a face according to the present disclosure.
  • FIG. 4 is a flowchart of an embodiment of a method for registering face unlock information according to the present disclosure.
  • FIG. 5 is a flowchart of another embodiment of a method for registering face unlock information according to the present disclosure.
  • FIG. 6 is a flowchart of still another embodiment of a method for registering face unlock information according to the present disclosure.
  • FIG. 7 is a flowchart of still another embodiment of a method for registering face unlock information according to the present disclosure.
  • FIG. 8 is a schematic structural diagram of an embodiment of a face unlocking device according to the present disclosure.
  • FIG. 9 is a schematic structural diagram of another embodiment of a face unlocking device according to the present disclosure.
  • FIG. 10 is a schematic structural diagram of an embodiment of an electronic device according to the present disclosure.
  • Embodiments of the present disclosure may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, mainframe computer systems, and distributed cloud computing environments including any of the above, and the like.
  • Electronic devices such as terminal devices, computer systems, servers, etc., can be described in the general context of computer system executable instructions (such as program modules) being executed by a computer system.
  • program modules may include routines, programs, target programs, components, logic, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • the computer system/server can be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communication network.
  • program modules may be located on a local or remote computing system storage medium including storage devices.
  • FIG. 1 is a flow chart of an embodiment of a method for unlocking a face according to the present disclosure. As shown in FIG. 1, the method for unlocking a face of this embodiment includes:
  • the operation 102 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a face detection module executed by the processor.
  • the operation 104 may be performed by a processor invoking a corresponding instruction stored in a memory or by a feature extraction module executed by the processor.
  • The stored facial features include face features of face images of at least two different angles corresponding to the same identification (ID).
  • ID indicates user information corresponding to the stored face feature, and may be, for example, a user name, a number, a nickname, or the like.
  • The face images of at least two different angles corresponding to the same ID may include, but are not limited to, face images of two or more of the following angles corresponding to the same ID: a frontal face image, a face image with the head raised, a face image with the head lowered, a face image turned to the left, a face image turned to the right, and so on.
  • the operation 106 may be performed by a processor invoking a corresponding instruction stored in a memory or by an authentication module executed by the processor.
  • the operation 108 may be performed by a processor invoking a corresponding instruction stored in a memory or by a control module executed by the processor.
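  • A minimal end-to-end sketch of operations 102 to 108 is given below. It assumes OpenCV's bundled Haar cascade as an example face detector and a toy stand-in in place of the (unspecified) face feature extractor; the similarity measure, threshold value, and helper names are illustrative assumptions, not the patent's implementation.

```python
# Hedged sketch of operations 102-108: detect -> extract -> authenticate -> unlock.
# extract_features() is a toy stand-in for a real face embedding model.
import cv2
import numpy as np

SIM_THRESHOLD = 0.6  # assumed similarity threshold

def detect_face(image_bgr):
    """Operation 102: face detection (OpenCV Haar cascade as an example detector)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return tuple(boxes[0]) if len(boxes) else None  # (x, y, w, h) or None

def extract_features(face_crop):
    """Operation 104: face feature extraction (placeholder: normalized pixel vector)."""
    vec = cv2.resize(face_crop, (64, 64)).astype(np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-8)

def authenticate(feature, stored_features):
    """Operation 106: pass if the similarity to any stored feature of the same ID
    exceeds the set threshold (dot product of unit vectors, i.e. cosine similarity)."""
    return any(float(np.dot(feature, f)) > SIM_THRESHOLD for f in stored_features)

def try_unlock(image_bgr, stored_features):
    """Operation 108: perform the unlocking operation only if authentication passes."""
    box = detect_face(image_bgr)
    if box is None:
        return False
    x, y, w, h = box
    return authenticate(extract_features(image_bgr[y:y + h, x:x + w]), stored_features)
```

  • Here, stored_features would hold the face features of the at least two different-angle face images registered under the same ID.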
  • In this embodiment, the face features of face images of at least two different angles corresponding to the same ID may be pre-stored through the registration process. When face unlocking is performed, face detection is performed on the image, face features are extracted from the image in which a face is detected, the extracted face features are authenticated based on the stored face features, and the unlocking operation is performed after the extracted face features pass the authentication, thereby implementing face-based authentication and unlocking. The unlocking mode of the embodiments of the present disclosure is simple to operate, highly convenient, and highly secure.
  • Because the embodiments of the present disclosure pre-store, through the registration process, the face features of face images of at least two different angles corresponding to the same ID, face unlocking based on the user's face can succeed as long as a face image of the user at any angle matching the same ID as the stored face features is obtained, which improves the success rate of face unlocking and reduces the possibility of authentication failure caused by a difference between the face angle at authentication and the face angle at registration for the same user.
  • In some implementations, authenticating the extracted facial features based on the stored facial features in operation 106 may be implemented as follows:
  • The similarity between the extracted face features and the stored face features of each angle may be compared one by one; as long as the similarity between the extracted face features and the stored face features of any angle is greater than a set threshold, it can be determined that the extracted face features pass the authentication. In other words, this implementation may need to compare the extracted face features only with the stored face features of one angle, or of some of the angles, to determine that the authentication passes, so that the similarity between the extracted face features and the remaining stored face features no longer needs to be computed, which helps improve authentication efficiency.
  • In other implementations, authenticating the extracted facial features based on the stored facial features in operation 106 may also be implemented as follows:
  • It is determined that the extracted facial features pass the authentication in response to the maximum of a plurality of similarities between the extracted facial features and a plurality of stored facial features being greater than a set threshold.
  • The plurality of stored face features may be the stored face features of all angles, or the stored face features of only some of the angles.
  • If the plurality of stored face features are the stored face features of some of the angles, and the maximum of the plurality of similarities between the extracted face features and those face features is greater than the set threshold, it is determined that the extracted face features pass the authentication.
  • If the maximum of the plurality of similarities between the extracted face features and the face features of those angles is not greater than the set threshold, it is not immediately determined that the extracted face features fail the authentication. Instead, face features of several further angles may be selected from the stored face features of the remaining angles and compared in the same way, until either the maximum of the obtained similarities is greater than the set threshold, in which case it is determined that the extracted face features pass the authentication, or the similarities between the extracted face features and the stored face features of all angles have been compared without any of them exceeding the set threshold, in which case it is determined that the extracted face features fail the authentication.
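  • The two comparison strategies above can be sketched as follows. Cosine similarity and the threshold value are assumptions, since the patent does not fix a particular similarity measure.

```python
# Sketch of the two authentication strategies described above (assumed cosine similarity).
import numpy as np

SIM_THRESHOLD = 0.6  # assumed set threshold

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def authenticate_one_by_one(feature, stored_by_angle):
    """Strategy 1: compare against each stored angle in turn and stop as soon as any
    similarity exceeds the threshold, so the remaining angles need not be compared."""
    for stored_feature in stored_by_angle.values():
        if cosine(feature, stored_feature) > SIM_THRESHOLD:
            return True
    return False

def authenticate_by_batches(feature, stored_by_angle, batch_size=2):
    """Strategy 2: compare against a batch of stored angles and pass if the maximum
    similarity in the batch exceeds the threshold; otherwise continue with the
    remaining angles until all stored features have been tried."""
    features = list(stored_by_angle.values())
    for start in range(0, len(features), batch_size):
        batch = features[start:start + batch_size]
        if max(cosine(feature, f) for f in batch) > SIM_THRESHOLD:
            return True
    return False
```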
  • FIG. 2 is a flow chart of another embodiment of a method for unlocking a face of the present disclosure. As shown in FIG. 2, the method for unlocking a face of this embodiment includes:
  • the operation 202 can be performed by the processor invoking the camera or by a receiving module that is executed by the processor.
  • In operation 204, the light balance adjustment processing may be performed directly on the acquired image.
  • Alternatively, before operation 204, it may first be determined whether the quality of the acquired image satisfies a predetermined face detection condition. For an image whose quality does not satisfy the predetermined face detection condition, operation 204 is performed to apply the light balance adjustment processing; for an image whose quality satisfies the predetermined face detection condition, operation 204 is not performed and face detection is performed directly on the image via operation 206.
  • In this way, the light balance adjustment processing need not be performed on images whose quality already satisfies the predetermined face detection condition, which helps improve face unlocking efficiency.
  • The predetermined face detection condition may include, for example, but is not limited to, at least one of the following: the pixel value distribution of the image does not conform to a preset distribution range; an attribute value of the image is not within a preset value range; and the like. The attribute values of the image include, for example, chromaticity, brightness, contrast, and saturation.
  • the operation 204 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a light processing module executed by the processor.
  • If a face is detected from the image, operation 208 is performed. If no face is detected from the image, execution may optionally return to operation 202, that is, the operation of acquiring an image is continued.
  • the operation 206 may be performed by a processor invoking a corresponding instruction stored in a memory or by a face detection module executed by the processor.
  • the operation 208 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a feature extraction module executed by the processor.
  • The stored facial features include face features of face images of at least two different angles corresponding to the same ID.
  • the operation 210 may be performed by a processor invoking a corresponding instruction stored in a memory or by an authentication module executed by the processor.
  • the ID corresponding to the extracted facial features may be acquired and displayed, so that the user knows the user information currently authenticated.
  • If the extracted facial features do not pass the authentication, the unlocking operation is not performed.
  • A prompt message indicating that the face unlocking has failed may also be output.
  • the operation 212 may be performed by a processor invoking a corresponding instruction stored in a memory or by a control module executed by the processor.
  • In a dark scene, the pixel values are concentrated in a lower numerical range, the texture gradients are small, and the information in the whole image is very blurred, so that it is difficult to detect useful information, especially a face.
  • Backlight and glare scenes have a similar overall brightness problem: because the background light is very bright, the contours and details of the face become very blurred, which makes face feature extraction difficult.
  • The present inventors have found through research that, in complex illumination scenes such as backlight, glare, and dim light, the pixel values of the image tend to exhibit a locality that does not conform to the preset distribution range, and/or the attribute values of the image are not within the preset value range. For example, in a dark scene the pixel values tend to be concentrated in areas with lower values; the contrast, chromaticity, and so on of the image are then low, and it is difficult for a detector to handle the faces in such images, or false alarms occur.
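  • A hedged sketch of such a check is given below; the concrete ranges and ratios are illustrative assumptions standing in for the preset distribution range and preset value range mentioned above.

```python
# Sketch: decide whether the (assumed) predetermined face detection condition is violated,
# i.e. whether light balance adjustment should be applied before face detection.
import cv2

def needs_light_adjustment(image_bgr,
                           brightness_range=(60, 190),  # assumed preset value range (mean gray)
                           min_contrast=30.0,           # assumed minimum std-dev of gray values
                           dark_ratio_limit=0.7):       # assumed preset distribution limit
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    mean, std = float(gray.mean()), float(gray.std())
    # Attribute check: brightness outside the preset value range, or contrast too low.
    if not (brightness_range[0] <= mean <= brightness_range[1]) or std < min_contrast:
        return True
    # Distribution check: too many pixels concentrated in the dark end (dim-light scene).
    return float((gray < 50).mean()) > dark_ratio_limit
```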
  • In some implementations, performing the light balance adjustment processing on the acquired image may include: acquiring a grayscale image of the image; and performing at least histogram equalization processing on the grayscale image of the image.
  • The histogram equalization processing enables the pixel value distribution of the grayscale image of the image to spread uniformly over the entire pixel value space while preserving the relative distribution of the original pixel values, so that subsequent operations are performed on the histogram-equalized grayscale image of the image.
  • In other implementations, performing the light balance adjustment processing on the acquired image may include: performing at least image illumination transformation on the image to convert the image into an image that satisfies a preset illumination condition.
  • In an optional example, the quality of the acquired image is detected. When the quality of the image does not satisfy the predetermined face detection condition, for example when the brightness of the image does not satisfy a preset brightness condition, histogram equalization processing is performed on the grayscale image of the image, so that the pixel value distribution of the grayscale image can spread uniformly over the entire pixel value space while the relative distribution of the original pixel values is preserved, and face detection is then performed on the histogram-equalized image.
  • The histogram-equalized grayscale image has more pronounced features and clearer texture, which makes the face easier to detect. Alternatively, the image is subjected to image illumination transformation to convert it into an image that satisfies the preset illumination condition, and face detection is then performed, which likewise facilitates detecting the face.
  • Therefore, the embodiments of the present disclosure can still detect the face in an image fairly accurately under extreme lighting conditions such as dim light and backlight, in particular in practical scenes where indoor or night-time illumination is so dark that the image is almost black, or where the background illumination at night is strong while the face is dim and its texture blurred, so that the present disclosure can better support face unlocking applications.
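  • A minimal sketch of the histogram-equalization variant of the light balance adjustment, using OpenCV, is shown below. This is only one possible realisation; image illumination transformation is an alternative contemplated above, and the Haar cascade is an example detector rather than the patent's detector.

```python
# Sketch: light balance adjustment by histogram equalization of the grayscale image,
# followed by face detection on the equalized result.
import cv2

def light_balance_adjust(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Spread pixel values over the whole value space while keeping their relative order.
    return cv2.equalizeHist(gray)

def detect_face_with_adjustment(image_bgr):
    equalized = light_balance_adjust(image_bgr)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = cascade.detectMultiScale(equalized, scaleFactor=1.1, minNeighbors=5)
    return tuple(boxes[0]) if len(boxes) else None
```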
  • In still another embodiment of the face unlocking method, the method further includes: performing living body detection on the acquired image. Accordingly, in this embodiment, the unlocking operation is performed in response to the extracted facial features passing the authentication and the image passing the living body detection.
  • The living body detection may be performed on the image after the image is acquired; or the living body detection may be performed on the image in which a face is detected, in response to a face being detected from the image; or the living body detection may be performed, after the extracted facial features pass the authentication, on the image through which the extracted facial features were authenticated.
  • performing live detection on an image may include:
  • Image feature extraction is performed on the image, and it is detected whether the extracted image features contain at least one piece of forged clue information; based on the detection result for the at least one piece of forged clue information, it is determined whether the image passes the living body detection. If the extracted image features do not contain any forged clue information, the image passes the living body detection; otherwise, if the extracted image features contain any one or more pieces of forged clue information, the image does not pass the living body detection.
  • The image features in the embodiments of the present disclosure may include, but are not limited to, any one or more of the following: a local binary pattern (LBP) feature, a sparsely encoded histogram (HSC) feature, a panorama (LARGE) feature, a face map (SMALL) feature, and a face detail map (TINY) feature.
  • LBP: local binary pattern
  • HSC: sparsely encoded histogram
  • LARGE: panorama
  • SMALL: face map
  • TINY: face detail map
  • the feature items included in the image feature to be extracted may be updated according to the forged clue information that may occur.
  • The LBP feature can highlight edge information in the image to be detected; the HSC feature can reflect reflection and blur information in the image to be detected more clearly; the LARGE feature is a panorama feature, based on which the most obvious forgery clues in the image to be detected can be extracted.
  • The face map (SMALL) is a crop of the image to be detected at a multiple of the face frame size (for example, 1.5 times the size), containing the face together with part of the surrounding background; based on the SMALL feature, forgery clues such as reflections, the moiré of a remade screen, and the edge of a model or mask can be extracted.
  • The face detail map (TINY) is a crop of the size of the face frame, containing the face; based on the TINY feature, forgery clues such as image editing (e.g., Photoshop) traces, the moiré of a remade screen, and the texture of a model or mask can be extracted.
  • The forgery clues of forged faces contained in the above features may be learned in advance by training a neural network; after an image is input into the trained neural network, an image containing such forged clues is judged to be a forged face image, and otherwise it is judged to be a real face image, thereby realizing living body detection of the face.
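  • The sketch below shows how the LARGE / SMALL / TINY inputs described above could be assembled and fed to a clue classifier. The classifier itself is a hypothetical stand-in, since the patent only states that a neural network is trained to recognise the forged clues.

```python
# Sketch: build the LARGE (whole image), SMALL (~1.5x face box) and TINY (face box)
# regions and pass the image only if no forged clue is found in any of them.
import cv2

def crop_liveness_regions(image_bgr, face_box, small_scale=1.5):
    x, y, w, h = face_box
    img_h, img_w = image_bgr.shape[:2]
    tiny = image_bgr[y:y + h, x:x + w]            # face details (texture, moire, edit traces)
    cx, cy = x + w / 2.0, y + h / 2.0
    sw, sh = w * small_scale, h * small_scale
    x0, y0 = max(0, int(cx - sw / 2)), max(0, int(cy - sh / 2))
    x1, y1 = min(img_w, int(cx + sw / 2)), min(img_h, int(cy + sh / 2))
    small = image_bgr[y0:y1, x0:x1]               # face plus part of the surrounding background
    large = image_bgr                             # panorama: the most obvious forgeries
    return large, small, tiny

def passes_liveness(image_bgr, face_box, clue_classifier):
    """clue_classifier is an assumed model returning True when a forged clue is detected
    in the given region (e.g. paper edge, screen moire, mask seam)."""
    regions = crop_liveness_regions(image_bgr, face_box)
    return not any(clue_classifier(cv2.resize(r, (224, 224))) for r in regions)
```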
  • the at least one forged clue information in the embodiment of the present disclosure may include, but is not limited to, any one or more of the following: 2D-type forged clue information, 2.5D-type forged clue information, and 3D-type forged clue information.
  • the multiple dimension forged clue information may be updated according to the possible forged clue information.
  • the forged clue information in the embodiment of the present disclosure can be observed by the human eye.
  • the dimensions of the forged clue information can be divided into 2D class, 2.5D class and 3D class forged clues.
  • the 2D-type forged face refers to a face image printed by a paper-like material
  • the 2D-type forged clue information generally includes forged information such as a paper face edge, a paper material, a paper surface reflection, and a paper edge.
  • the 2.5D-type forged face refers to a face image carried by a carrier device such as a video remake device.
  • the 2.5D-type forged clue information generally includes forged information such as screen moiré, screen reflection, and screen edge of a carrier device such as a video remake device.
  • 3D-type forged faces refer to physically fabricated fake faces, such as masks, models, sculptures, and 3D prints.
  • Such 3D-type forged faces also carry corresponding forgery information, such as the seams of a mask, or the overly abstract appearance or overly smooth skin of a model.
  • In the embodiments of the present disclosure, whether an image is a forged face image can be detected from multiple dimensions, and forged face images of different dimensions and of various types can be detected, which improves the accuracy of forged face detection and effectively prevents criminals from using photos or videos of the user to be verified to carry out forgery attacks during living body detection.
  • In addition, by performing face anti-counterfeiting detection with a neural network, the forged clue information of various face forgery methods can be learned through training. When a new forgery method appears, the neural network can be retrained and fine-tuned based on the new forged clue information, so that the neural network is quickly updated without changing the hardware structure and new face anti-counterfeiting detection requirements can be met quickly and effectively.
  • FIG. 3 is a flow chart of still another embodiment of a method for unlocking a face according to the present disclosure.
  • This embodiment of the present disclosure is described by taking as an example the case where living body detection is performed on the image after the image is acquired.
  • According to the description of the present disclosure, a person skilled in the art can derive the implementation in which living body detection is performed on the image in which a face is detected, in response to a face being detected from the image, and the implementation in which living body detection is performed, after the extracted facial features pass the authentication, on the image through which the extracted facial features were authenticated; details are not described here again.
  • the method for unlocking a face of this embodiment includes:
  • Operations 304 and 308 are then performed separately.
  • the operation 302 can be performed by the processor invoking the camera or by a receiving module that is executed by the processor.
  • A quality requirement standard can be set in advance so that a high-quality image is selected for living body detection.
  • The quality requirement standard may include any one or more of the following: whether the face orientation is frontal, the image clarity, the exposure level, and so on; a higher-quality image is selected for living body detection according to the corresponding standard.
  • Operation 306 is performed for the image in response to the image meeting the preset quality requirements. Otherwise, in response to the image not meeting the preset quality requirements, operation 302 is re-executed to acquire the image.
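  • A rough sketch of such a quality check is shown below. The sharpness and exposure thresholds are assumptions, and a face-orientation check would additionally need a landmark or pose estimator.

```python
# Sketch: preset quality requirement check before living body detection
# (clarity via variance of the Laplacian, exposure via mean grayscale value).
import cv2

def meets_quality_requirement(image_bgr,
                              min_sharpness=100.0,        # assumed clarity threshold
                              exposure_range=(50, 200)):  # assumed acceptable mean gray range
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    exposure_ok = exposure_range[0] <= float(gray.mean()) <= exposure_range[1]
    return sharpness >= min_sharpness and exposure_ok
```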
  • the operation 304 may be performed by a processor invoking a corresponding instruction stored in a memory or by a light processing module executed by the processor.
  • the operation 306 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a biometric detection module executed by the processor.
  • the operation 308 may include: performing light balance adjustment processing on the image when the quality of the acquired image does not satisfy the predetermined face detection condition, and then performing face detection on the image after the light balance adjustment processing. . If the quality of the acquired image satisfies a predetermined face detection condition, the face detection can be directly performed on the image.
  • the operation 308 may be performed by a processor invoking a corresponding instruction stored in a memory or by a face detection module executed by the processor.
  • In response to detecting a human face from the image, operation 312 is performed. Otherwise, in response to no human face being detected from the image, operation 302 may continue to be performed, i.e., the image is reacquired and the subsequent processes are performed.
  • the operation 310 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a face detection module executed by the processor.
  • The stored facial features include face features of face images of at least two different angles corresponding to the same ID.
  • the operation 312 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a feature extraction module executed by the processor.
  • Operation 316 is performed in response to the extracted facial features passing the authentication and the acquired image passing the living body detection. Otherwise, in response to the extracted facial features not passing the authentication and/or the acquired image not passing the living body detection, the subsequent flow of this embodiment is not performed, or alternatively, operation 318 is performed.
  • the operation 314 may be performed by a processor invoking a corresponding instruction stored in a memory or by an authentication module executed by the processor.
  • the ID corresponding to the authenticated facial features may also be acquired from the pre-stored correspondence and displayed.
  • the operation 316 may be performed by a processor invoking a corresponding instruction stored in a memory or by a control module executed by the processor.
  • the reason for the authentication failure may be, for example, that no face is detected, the face feature fails to pass the authentication, the living body is not detected (for example, detected as a photo, etc.), and the like.
  • the operation 318 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by an authentication module or an interaction module executed by the processor.
  • the method further includes:
  • The face unlocking method of the embodiments of the present disclosure may be applied to unlocking the screen of an electronic device, unlocking an application (APP), face unlocking within an application, and the like. For example, when a mobile terminal is activated, the screen may be unlocked with the face unlocking method of the embodiments of the present disclosure; an application may be unlocked with the face unlocking method of the embodiments of the present disclosure; and, in a payment application, the face unlocking method of the embodiments of the present disclosure may be used for face unlocking in the payment step, and so on.
  • The face unlocking method of the embodiments of the present disclosure may be triggered in response to receiving a face authentication request sent by the user, or in response to receiving a face authentication request sent by an application or the operating system.
  • For example, after successful unlocking, an electronic device (such as a mobile terminal) that requires face unlocking can be used normally;
  • an APP that requires face unlocking (for example, various shopping clients, a bank client, or an album on a terminal) can be used normally;
  • and in the payment step of various APPs that require face unlocking, the payment can be completed after the unlocking succeeds.
  • the method further includes: acquiring, by the face unlocking information registration process, the stored face features of the at least two different angle face images corresponding to the same ID.
  • the above-mentioned face unlocking information registration process can be implemented by the embodiment of the face unlocking information registration method in the following embodiments of the present disclosure.
  • FIG. 4 is a flowchart of an embodiment of a method for registering face unlock information according to the present disclosure. As shown in FIG. 4, the method for registering a face unlocking information in this embodiment includes:
  • Output prompt information indicating a face image of at least two different angles of the same ID.
  • the operation 402 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by an interaction module executed by the processor.
  • the operation 404 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a face detection module executed by the processor.
  • the operation 406 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a feature extraction module executed by the processor.
  • The stored facial features include face features of face images of at least two different angles corresponding to the same ID.
  • the ID therein indicates user information corresponding to the stored face feature, and may be, for example, a user name, a number, or the like.
  • The face images of at least two different angles corresponding to the same ID may include, but are not limited to, face images of two or more of the following angles corresponding to the same ID: a frontal face image, a face image with the head raised, a face image with the head lowered, a face image turned to the left, a face image turned to the right, and so on.
  • the operation 408 may be performed by a processor invoking a corresponding instruction stored in a memory or by a memory module being executed by the processor.
  • In this embodiment, the face features of face images of at least two different angles corresponding to the same ID may be pre-stored through the registration process, so that face unlocking can subsequently be performed based on the face features of the at least two different-angle face images corresponding to the same ID, which helps improve the success rate of face unlocking and reduces the possibility of authentication failure caused by a difference between the face angle at authentication and the face angle at registration.
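  • A minimal sketch of operation 408, i.e. storing the extracted face features of each angle together with their correspondence to the same ID, is given below; the in-memory layout is an assumption, and any persistent store would serve equally well.

```python
# Sketch: store face features keyed by ID and angle (operation 408).
from collections import defaultdict
import numpy as np

class FaceFeatureStore:
    def __init__(self):
        # registry[user_id][angle_label] -> feature vector
        self._registry = defaultdict(dict)

    def register(self, user_id, angle_label, feature):
        self._registry[user_id][angle_label] = np.asarray(feature, dtype=np.float32)

    def features_for(self, user_id):
        """All stored face features of the given ID (at least two angles after registration)."""
        return dict(self._registry[user_id])

store = FaceFeatureStore()
store.register("user_0001", "frontal", np.random.rand(128))
store.register("user_0001", "left_turn", np.random.rand(128))
```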
  • FIG. 5 is a flowchart of another embodiment of a method for registering face unlock information according to the present disclosure. As shown in FIG. 5, the method for registering face unlocking information in this embodiment includes:
  • Output prompt information indicating a face image of at least two different angles of the same ID.
  • the operation 502 can be performed by a processor invoking a corresponding instruction stored in a memory, or can be performed by an interaction module executed by the processor.
  • the operation 504 can be performed by the processor invoking the camera or by the face detection module being executed by the processor.
  • In operation 506, the light balance adjustment processing may be performed directly on the acquired image.
  • Alternatively, before operation 506, it may first be determined whether the quality of the acquired image satisfies a predetermined face detection condition. For an image whose quality does not satisfy the predetermined face detection condition, operation 506 is performed to apply the light balance adjustment processing to the acquired image; for an image whose quality satisfies the predetermined face detection condition, operation 506 is not performed and face detection is performed directly on the image via operation 508.
  • In this way, the light balance adjustment processing need not be performed on images whose quality already satisfies the predetermined face detection condition, which helps improve face unlocking efficiency.
  • The predetermined face detection condition may include, but is not limited to, at least one of the following: the pixel value distribution of the image does not conform to a preset distribution range; an attribute value of the image is not within a preset value range; and the like.
  • The attribute values of the image include, for example, chromaticity, brightness, contrast, and saturation.
  • In some implementations, performing the light balance adjustment processing on the acquired image in operation 506 may include: acquiring a grayscale image of the image; and performing at least histogram equalization processing on the grayscale image of the image.
  • The histogram equalization enables the pixel value distribution of the grayscale image of the image to spread uniformly over the entire pixel value space while preserving the relative distribution of the original pixel values, so that subsequent operations are performed on the histogram-equalized grayscale image of the image.
  • In other implementations, performing the light balance adjustment processing on the acquired image in operation 506 may include: performing at least image illumination transformation on the image to convert the image into an image that satisfies a preset illumination condition.
  • the operation 506 can be performed by a processor invoking a corresponding instruction stored in a memory, or by a light processing module executed by the processor.
  • the operation 508 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a face detection module executed by the processor.
  • In response to detecting a human face from the image, operation 512 is performed. Otherwise, in response to no human face being detected from the image, execution returns to operation 504 to reacquire the image.
  • the operation 510 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a face detection module executed by the processor.
  • the operation 512 may be performed by a processor invoking a corresponding instruction stored in a memory or by a feature extraction module executed by the processor.
  • Operation 514: store the extracted face features of the face images of the respective angles and their correspondence with the same ID.
  • The operation 514 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a storage module executed by the processor.
  • In this embodiment, the acquired image is first subjected to the light balance adjustment processing and face detection is then performed, so that the face is easier to detect, and faces in images captured under extreme lighting conditions such as dim light and backlight can still be detected accurately, in particular in practical scenes where indoor or night-time illumination is so dark that the image is almost black, or where the background illumination at night is strong while the face is dim and its texture blurred.
  • Therefore, the present disclosure can better support face unlocking applications.
  • FIG. 6 is a flowchart of still another embodiment of a method for registering a face unlocking information according to the present disclosure. As shown in FIG. 6, in the face unlocking information registration method of the present embodiment, before the operation 514, for example, before, after, or at the same time as the operation 512, the following operations may be performed:
  • the operation 602 may be performed by a processor invoking a corresponding instruction stored in a memory or by a memory module being executed by the processor.
  • If the detected angle does not match the angle corresponding to the prompt information, new prompt information indicating that the face image of that angle should be re-entered may be output, so that the user adjusts the face angle and the flow of the face unlocking information registration method of the embodiment of the present disclosure is performed again.
  • the operation 604 may be performed by a processor invoking a corresponding instruction stored in a memory or by a memory module being executed by the processor.
  • Detecting the angle of the face included in the image in operation 602 may include: performing face key point detection on the image; and calculating the angle of the face included in the image according to the detected face key points.
  • Since the user's face can subsequently be unlocked based on the face features saved during the face unlocking information registration process, and in order to prevent face unlocking from failing because the face angle used for unlocking differs from the face angle at registration, thereby improving the success rate of face unlocking, the embodiments of the present disclosure may store, for the same user, the face features of face images at multiple angles (for example, five angles). The faces at different angles may be, for example, the frontal face and the faces with the head raised, the head lowered, turned to the left, and turned to the right.
  • The left/right angle and the up/down angle of the face may be used to represent the face angle; for a frontal face, both the left/right angle and the up/down angle are set to zero.
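  • The sketch below estimates the left/right and up/down face angles from five facial keypoints. This geometric approximation is an illustrative assumption, not the patent's actual angle computation.

```python
# Sketch: rough yaw (left/right) and pitch (up/down) angles in degrees from 5 keypoints,
# with a frontal face mapping to approximately (0, 0) as in the convention above.
import numpy as np

def estimate_face_angles(landmarks):
    """landmarks: dict with 'left_eye', 'right_eye', 'nose', 'mouth_left', 'mouth_right'
    mapping to (x, y) pixel coordinates."""
    le, re = np.asarray(landmarks["left_eye"]), np.asarray(landmarks["right_eye"])
    nose = np.asarray(landmarks["nose"])
    mouth = (np.asarray(landmarks["mouth_left"]) + np.asarray(landmarks["mouth_right"])) / 2.0
    eye_mid = (le + re) / 2.0
    eye_dist = np.linalg.norm(re - le) + 1e-6
    # Yaw: horizontal offset of the nose tip from the eye midpoint, relative to eye distance.
    yaw = np.degrees(np.arctan2(nose[0] - eye_mid[0], eye_dist))
    # Pitch: where the nose sits vertically between the eye line and the mouth line.
    face_height = np.linalg.norm(mouth - eye_mid) + 1e-6
    pitch = np.degrees(np.arctan2((nose[1] - eye_mid[1]) - 0.5 * face_height, face_height))
    return float(yaw), float(pitch)
```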
  • Outputting the prompt information indicating acquisition of face images of at least two different angles of the same ID may include: selecting a preset angle according to a preset multi-angle parameter and prompting the user to enter a face image of the preset angle.
  • The multi-angle parameter includes the information of the multiple angles of the face images that need to be acquired.
  • After the face features of the face image of each angle and their correspondence with the same ID are stored, the method may further include: identifying whether all the preset angles corresponding to the multi-angle parameter have been selected; in response to not all the preset angles corresponding to the multi-angle parameter having been selected, selecting the next preset angle and performing the flow of the embodiment shown in FIG. 5 or FIG. 6 for the next preset angle; and, if all the preset angles corresponding to the multi-angle parameter have been selected, completing the registration of the face unlocking information. A loop of this kind is sketched below.
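  • The registration loop described above might look roughly as follows, reusing the hypothetical FaceFeatureStore from the earlier sketch. The preset angle list and the helper callables (capture_image, detect_face, detected_angle_matches, extract_features) are all assumptions, not the patent's implementation.

```python
# Sketch: iterate over the preset multi-angle parameter, prompt for each angle, and finish
# once face features for every preset angle are stored under the same ID.
PRESET_ANGLES = ["frontal", "head_up", "head_down", "left_turn", "right_turn"]  # assumed

def register_face_unlock_info(user_id, store, capture_image, detect_face,
                              detected_angle_matches, extract_features):
    for angle in PRESET_ANGLES:                                # select the next preset angle
        while angle not in store.features_for(user_id):
            print(f"Please show your face at angle: {angle}")  # output prompt information
            image = capture_image()
            box = detect_face(image)
            if box is None:
                continue                                       # no face: reacquire the image
            if not detected_angle_matches(image, box, angle):
                continue                                       # wrong angle: prompt again
            store.register(user_id, angle, extract_features(image, box))
    print("Face unlocking information registration completed.")
```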
  • prompt information for prompting the user to input the same ID may also be output.
  • Storing the extracted face features of the face images of the respective angles and their correspondence with the same ID may include: storing the extracted face features of the face images of the at least two angles and the ID input by the user, and establishing the correspondence between the ID and the face features of the face images of the at least two angles.
  • the method further includes: performing a living body detection on the image.
  • Correspondingly, in response to the images in which the faces of the respective angles are detected passing the living body detection, the operation of storing the extracted face features of the face images of the respective angles and their correspondence with the same ID is performed.
  • The living body detection may be performed on the acquired image after the image is acquired, or may be performed on the images in which the faces of the respective angles are detected.
  • FIG. 7 is a flowchart of still another embodiment of a method for registering face unlock information according to the present disclosure.
  • This embodiment of the present disclosure is described by taking as an example the case where the living body detection is performed on the image after the image is acquired.
  • According to the description of the present disclosure, a person skilled in the art can derive the implementations in which the living body detection is performed on the image in which a face is detected, in which the living body detection is performed on the image in response to the detected face angle matching the preset angle, and in which the living body detection is performed on the images of the faces of the respective angles after the face features are extracted; details are not described here again.
  • the method for registering face unlocking information in this embodiment includes:
  • Output prompt information indicating a face image of at least two different angles of the same ID.
  • the operation 702 can be performed by a processor invoking a corresponding instruction stored in a memory or by an interaction module executed by the processor.
  • Operation 706 is performed in response to the image passing the living body detection. Otherwise, if the image does not pass the living body detection, the subsequent flow of this embodiment is not performed.
  • the operation 704 may be performed by a processor invoking a corresponding instruction stored by the camera and the memory, or by a biometric detection module executed by the processor.
  • the operation 706 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a face detection module executed by the processor.
  • If a face is detected from the image, operation 710 is performed. If no face is detected from the image, operation 702 continues to be performed, or the image continues to be acquired and operation 704 is performed.
  • the operation 708 may be performed by a processor invoking a corresponding instruction stored in a memory or by a face detection module executed by the processor.
  • the operation 710 may be performed by a processor invoking a corresponding instruction stored in a memory or by a memory module being executed by the processor.
  • In response to the detected angle matching the angle corresponding to the prompt information, operation 714 is performed. Otherwise, if the detected angle does not match the angle corresponding to the prompt information, operation 702 is re-executed.
  • the operation 712 may be performed by a processor invoking a corresponding instruction stored in a memory or by a memory module being executed by the processor.
  • the operation 714 may be performed by a processor invoking a corresponding instruction stored in a memory or by a feature detection module executed by the processor.
  • In an optional example of operation 704 of the embodiment shown in FIG. 7, it may first be recognized whether the acquired image meets the preset quality requirement; the living body detection is performed on the image in response to the image meeting the preset quality requirement; otherwise, operation 702 or 704 continues to be performed in response to the image not meeting the preset quality requirement.
  • the operation 716 may be performed by a processor invoking a corresponding instruction stored in a memory or by a memory module being executed by the processor.
  • The above embodiments of the present disclosure can detect, from multiple dimensions, whether an image is a forged face image, and can detect forged face images of different dimensions and of various types, which improves the accuracy of forged face detection, effectively prevents criminals from using photos or videos of the user to be verified to carry out forgery attacks during living body detection, and ensures that the images used when registering face unlocking information are images of the real user.
  • In addition, by performing face anti-counterfeiting detection with a neural network, the forged clue information of various face forgery methods can be learned through training; when a new face forgery method appears, the neural network can be retrained and fine-tuned based on the new forged clue information, so that the neural network is quickly updated without changing the hardware structure and new face anti-counterfeiting detection requirements can be met quickly and effectively.
  • the face unlocking information registration method of the above embodiments of the present disclosure may be started in response to receiving an input face request sent by the user, or may be started in response to receiving an input face request sent by the application or the operating system.
  • any of the face unlocking method and the face unlocking information registration method provided by the embodiments of the present disclosure may be performed by any suitable device having data processing capabilities, including but not limited to: a terminal device, a server, and the like.
  • Any of the face unlocking methods and face unlocking information registration methods provided by the embodiments of the present disclosure may also be executed by a processor; for example, the processor executes any of the face unlocking methods or face unlocking information registration methods mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory. This is not repeated below.
  • The foregoing program may be stored in a computer readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
  • FIG. 8 is a schematic structural diagram of an embodiment of a face unlocking device according to the present disclosure.
  • the face unlocking device of this embodiment can be used to implement the various method embodiments of the present disclosure.
  • the face unlocking device of this embodiment includes: a face detection module, a feature extraction module, an authentication module, and a control module. among them:
  • a face detection module for performing face detection on an image.
  • the feature extraction module is configured to perform face feature extraction on the image of the detected face.
  • An authentication module is configured to authenticate the extracted facial features based on the stored facial features.
  • The stored facial features include face features of face images of at least two different angles corresponding to the same ID.
  • The face images of at least two different angles corresponding to the same ID may include, but are not limited to, face images of two or more of the following angles corresponding to the same ID: a frontal face image, a face image with the head raised, a face image with the head lowered, a face image turned to the left, a face image turned to the right, and so on.
  • The control module is configured to perform an unlocking operation at least in response to the extracted facial features passing the authentication.
  • In some implementations, the authentication module is configured to obtain the similarity between the extracted facial features and at least one stored facial feature, and to determine, in response to any obtained similarity being greater than a set threshold, that the extracted facial features pass the authentication.
  • In other implementations, the authentication module is configured to respectively obtain the similarities between the extracted facial features and a plurality of stored facial features, and to determine, in response to the maximum of the plurality of obtained similarities being greater than the set threshold, that the extracted facial features pass the authentication.
  • Based on the face unlocking device provided by the above embodiment of the present disclosure, face detection is performed on an image, face features are extracted from the image in which a face is detected, the extracted face features are authenticated based on the stored face features, and the unlocking operation is performed after the extracted face features pass the authentication, thereby implementing face-based authentication and unlocking.
  • The unlocking mode of the embodiments of the present disclosure is simple to operate, highly convenient, and highly secure. Because the disclosed embodiments pre-store, through the registration process, the face features of face images of at least two different angles corresponding to the same ID, face unlocking based on the user's face can succeed as long as a face image of the user at any angle matching the same ID as the stored face features is obtained, which improves the success rate of face unlocking and reduces the possibility of authentication failure caused by a difference between the face angle at authentication and the face angle at registration for the same user.
  • FIG. 9 is a schematic structural diagram of another embodiment of a face unlocking device according to the present disclosure. As shown in FIG. 9, the face unlocking device of this embodiment further includes: an obtaining module and a light processing module, as compared with the embodiment shown in FIG. 8. among them:
  • the acquisition module can be, for example, a camera or other image acquisition device.
  • a light processing module for performing light balance adjustment processing on an image.
  • the face detection module is configured to perform face detection on the image after the light balance adjustment processing.
  • the light processing module is configured to acquire a grayscale image of the image and to perform histogram equalization processing on at least the grayscale image of the image.
  • Alternatively, the light processing module is configured to perform at least image illumination transformation on the image to transform the image into an image that satisfies a preset illumination condition.
  • the light processing module is configured to determine that the quality of the image does not satisfy the predetermined face detection condition, and perform a light balance adjustment process on the image.
  • the predetermined face detection condition may include, but is not limited to, at least one of the following: the pixel value distribution of the image does not conform to the preset distribution range, and the attribute value of the image is not within the preset value range.
  • an interaction module and a storage module may further be included.
  • the interaction module is configured to output prompt information indicating that face images of at least two different angles of the same ID are to be acquired.
  • the storage module is configured to store the facial feature of each angle's face image extracted by the feature extraction module and its correspondence with the same ID.
  • the storage module is configured to detect the angle of the face included in the image, and, upon determining that the detected angle matches the angle corresponding to the prompt information, store the facial feature of each angle's face image extracted by the feature extraction module and its correspondence with the same ID.
  • when detecting the angle of the face included in the image, the storage module is configured to perform face key point detection on the image and calculate the angle of the face included in the image according to the detected face key points.
  • the storage module is further configured to, when the detected angle does not match the angle corresponding to the prompt information, request the interaction module to output new prompt information indicating that the face image of that angle is to be re-entered.
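The disclosure computes the face angle from detected key points but does not give a formula. The sketch below is a crude stand-in that estimates only the left-right turn (yaw) from three landmarks and checks it against the prompted angle; the landmark names, target offsets, and tolerance are all illustrative assumptions, and the up/down angles would need an analogous pitch estimate.

```python
import numpy as np

def estimate_yaw(landmarks: dict) -> float:
    # Rough left/right head-turn estimate: the nose tip's horizontal offset
    # from the eye midpoint, normalised by the inter-eye distance.
    left_eye = np.asarray(landmarks["left_eye"], dtype=float)
    right_eye = np.asarray(landmarks["right_eye"], dtype=float)
    nose = np.asarray(landmarks["nose"], dtype=float)
    eye_mid = (left_eye + right_eye) / 2.0
    eye_dist = np.linalg.norm(right_eye - left_eye) + 1e-6
    return float((nose[0] - eye_mid[0]) / eye_dist)  # near 0 for a frontal face

def matches_prompt(yaw: float, prompted: str, tol: float = 0.15) -> bool:
    # Compare the estimated angle with the angle the prompt asked for;
    # the target offsets and the tolerance are illustrative only.
    targets = {"front": 0.0, "turn_left": -0.35, "turn_right": 0.35}
    return abs(yaw - targets[prompted]) <= tol
```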
  • the storage module is configured to identify whether the facial features of at least two different-angle face images of the same ID have been completely stored; in response to the facial features of at least two different-angle face images of the same ID not having been completely stored, request the interaction module to output the prompt information indicating that face images of at least two different angles of the same ID are to be acquired; and, in response to the facial features of at least two different-angle face images of the same ID having been completely stored, request the interaction module to output prompt information prompting the user to input the same ID, store the extracted facial features of the at least two angles' face images together with the ID input by the user, and establish the correspondence between that ID and those facial features.
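Put together, the interaction and storage modules implement a registration loop over the prompted angles. The sketch below is a hypothetical outline of that loop; the prompt, capture, detection, pose check, feature extraction, and storage steps are passed in as placeholder callables because none of their names or signatures come from the disclosure.

```python
from typing import Any, Callable, Optional, Sequence

DEFAULT_ANGLES: Sequence[str] = ("front", "up", "down", "turn_left", "turn_right")

def register_face(user_id: str,
                  prompt: Callable[[str], None],
                  capture: Callable[[], Any],
                  detect_face: Callable[[Any], Optional[Any]],
                  pose_matches: Callable[[Any, str], bool],
                  extract_feature: Callable[[Any], Any],
                  store: Callable[[str, str, Any], None],
                  angles: Sequence[str] = DEFAULT_ANGLES,
                  max_attempts: int = 5) -> bool:
    # For every prompted angle: capture an image, detect the face, check that
    # the detected pose matches the prompt, then extract and store the feature
    # together with its correspondence to user_id.
    for angle in angles:
        for _ in range(max_attempts):
            prompt(f"Please provide a {angle} face image")
            face = detect_face(capture())
            if face is None:
                continue
            if not pose_matches(face, angle):
                prompt(f"Angle mismatch, please re-enter the {angle} face image")
                continue
            store(user_id, angle, extract_feature(face))
            break
        else:
            return False  # could not register this angle within max_attempts
    return True
```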
  • the device further includes a living body detection module configured to perform living body detection on the image.
  • the control module is configured to perform an unlocking operation at least in response to the extracted facial features being authenticated and the image passing the living body detection.
  • the living body detection module is configured to perform a living body detection on the image in response to the image meeting a preset quality requirement.
  • the living body detection module can be implemented via a neural network.
  • the neural network is configured to: perform image feature extraction on the image; detect whether the extracted image features include at least one type of forgery cue information; and determine whether the image passes the living body detection based on the detection result for the at least one type of forgery cue information.
  • the image features extracted from the image by the neural network may include, but are not limited to, any one or more of the following: an LBP (local binary pattern) feature, an HSC (histogram of sparse coding) feature, a LARGE (panorama) feature, a SMALL (face-region) feature, and a TINY (face-detail) feature.
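Of these, the LBP feature is a standard texture descriptor that highlights edge information useful for spotting print or screen re-capture artefacts; the LARGE/SMALL/TINY crops are specific to this disclosure and are not reproduced here. A minimal LBP-histogram extractor, assuming scikit-image is available, might look as follows.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    # Uniform LBP codes emphasise local edge/texture patterns; their histogram
    # is a compact feature that a spoof classifier can consume.
    codes = local_binary_pattern(gray, P=points, R=radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist.astype(np.float32)
```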
  • the at least one type of forgery cue information may include, but is not limited to, any one or more of the following: 2D-type forged face information, 2.5D-type forged face information, and 3D-type forged face information.
  • the 2D-type forged face information includes forgery information of a face image printed on a paper-like material; and/or the 2.5D-type forged face information includes forgery information of a face image carried by a carrier device (such as a screen re-capture device); and/or the 3D-type forged face information includes information of a physically forged face (such as a mask or a model).
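The disclosure does not specify the network that maps these features to a liveness decision. The PyTorch sketch below is one assumed arrangement: a small head that scores the three cue families independently, with the image declared live only if no cue fires; the layer sizes and the 0.5 threshold are illustrative, not taken from the disclosure.

```python
import torch
import torch.nn as nn

class SpoofCueClassifier(nn.Module):
    # Maps an extracted image-feature vector to per-cue scores for
    # 2D (print), 2.5D (screen re-capture) and 3D (mask/model) forgery cues.
    def __init__(self, feature_dim: int = 512, num_cues: int = 3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feature_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_cues),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(features))  # independent cue probabilities

def passes_liveness(cue_scores: torch.Tensor, threshold: float = 0.5) -> bool:
    # The image passes living body detection only if no forgery cue is detected.
    return bool((cue_scores < threshold).all())
```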
  • An embodiment of the present disclosure further provides an electronic device, including: the face unlocking device of any of the above embodiments of the present disclosure.
  • the embodiment of the present disclosure further provides another electronic device, including:
  • a processor and the face unlocking device of any of the above embodiments of the present disclosure; when the processor runs the face unlocking device, the modules in the face unlocking device of any of the above embodiments are run.
  • the embodiment of the present disclosure further provides another electronic device, including:
  • a memory storing executable instructions, and one or more processors in communication with the memory to execute the executable instructions so as to complete the operations of the steps of the face unlocking method or the face unlocking information registration method of any of the above embodiments of the present disclosure.
  • an embodiment of the present disclosure further provides a computer program comprising computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the steps of the face unlocking method or the face unlocking information registration method of any of the above embodiments of the present disclosure.
  • an embodiment of the present disclosure further provides a computer-readable medium for storing computer-readable instructions; when the instructions are executed, the operations of the steps of the face unlocking method or the face unlocking information registration method of any of the above embodiments of the present disclosure are implemented.
  • FIG. 10 is a schematic structural diagram of an embodiment of an electronic device according to the present disclosure.
  • the electronic device includes one or more processors, a communication unit, and the like, the one or more processors being, for example, one or more central processing units (CPUs) 801 and/or one or more graphics processing units (GPUs) 813; the processor may perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 802 or executable instructions loaded from a storage portion 808 into a random access memory (RAM) 803.
  • the communication portion 812 may include, but is not limited to, a network card, and the network card may include, but is not limited to, an IB (InfiniBand) network card; the processor may communicate with the read-only memory 802 and/or the random access memory 803 to execute executable instructions, is connected to the communication portion 812 via the bus 804, and communicates with other target devices via the communication portion 812, thereby completing the operations corresponding to any method provided by the embodiments of the present application, for example: performing face detection on an image; performing facial feature extraction on the image in which a face is detected; authenticating the extracted facial features based on the stored facial features, where the stored facial features include at least facial features of at least two different-angle face images corresponding to the same identifier (ID); and performing an unlocking operation at least in response to the extracted facial features being authenticated.
  • or: outputting prompt information indicating that face images of at least two different angles of the same ID are to be acquired; performing face detection on the acquired images; performing facial feature extraction on the image in which the face of each angle is detected; and storing the extracted facial features of each angle's face image and the correspondence between those facial features and the same ID.
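The unlock branch of these operations can be summarised as a single pipeline. The sketch below is a hypothetical outline; the detector, feature extractor, and liveness check are placeholder callables, and `is_authenticated()` is the similarity/threshold sketch shown earlier in this section.

```python
def try_unlock(image,
               detect_face,          # callable: image -> detected face (or None)
               extract_feature,      # callable: detected face -> feature vector
               stored_features,      # features of the two or more angles registered for one ID
               check_liveness=None,  # optional callable: image -> bool
               threshold: float = 0.6) -> bool:
    # End-to-end unlock path: detect -> (optional) living body check ->
    # extract -> compare against every stored angle -> unlock on success.
    face = detect_face(image)
    if face is None:
        return False
    if check_liveness is not None and not check_liveness(image):
        return False
    feature = extract_feature(face)
    return is_authenticated(feature, stored_features, threshold)
```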
  • in addition, the RAM 803 may also store various programs and data required for the operation of the device.
  • the CPU 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804.
  • ROM 802 is an optional module.
  • the RAM 803 stores executable instructions, or writes executable instructions to the ROM 802 at runtime, and the executable instructions cause the central processing unit 801 to perform operations corresponding to the above methods.
  • An input/output (I/O) interface 805 is also coupled to bus 804.
  • the communication portion 812 may be provided in an integrated manner, or may be provided with a plurality of sub-modules (for example, a plurality of IB network cards) linked to the bus.
  • the following components are connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, and the like; an output portion 807 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 808 including a hard disk and the like; and a communication portion 809 including a network interface card such as a LAN card or a modem. The communication portion 809 performs communication processing via a network such as the Internet.
  • A drive 810 is also connected to the I/O interface 805 as needed.
  • a removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as needed, so that a computer program read therefrom is installed into the storage portion 808 as needed.
  • the architecture shown in FIG. 10 is only an optional implementation.
  • in practice, the number and types of the components in FIG. 10 may be selected, reduced, added, or replaced according to actual needs;
  • in the arrangement of different functional components, separate or integrated implementations may also be adopted;
  • for example, the GPU 813 and the CPU 801 may be provided separately, or the GPU 813 may be integrated on the CPU 801; the communication portion may be provided separately, or may be integrated on the CPU 801 or the GPU 813; and so on. These alternative implementations all fall within the scope of protection of the present disclosure.
  • an embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium; the computer program contains program code for executing the method illustrated in the flowchart, and the program code may include instructions corresponding to the method steps provided by the embodiments of the present application, for example: performing face detection on an image; performing facial feature extraction on the image in which a face is detected; authenticating the extracted facial features based on the stored facial features, where the stored facial features include at least facial features of at least two different-angle face images corresponding to the same identifier (ID); and performing an unlocking operation at least in response to the extracted facial features being authenticated.
  • the methods, apparatuses, and devices of the present disclosure may be implemented in many ways.
  • for example, the methods, apparatuses, and devices of the present disclosure may be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above-described order of the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order described above unless otherwise specifically stated.
  • the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine readable instructions for implementing a method in accordance with the present disclosure.
  • the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A face unlocking method, an information registration method therefor, apparatuses, a device, and a medium. The face unlocking method includes: performing face detection on an image (102); performing facial feature extraction on the image in which a face is detected (104); authenticating the extracted facial features based on stored facial features (106), where the stored facial features include at least facial features of at least two different-angle face images corresponding to the same identifier (ID); and performing an unlocking operation at least in response to the extracted facial features being authenticated (108). The method implements face-based unlocking; the authentication is simple to operate, convenient, and secure, and the success rate of face unlocking is high.

Description

人脸解锁及其信息注册方法和装置、设备、介质
本公开要求在2017年09月07日提交中国专利局、申请号为CN201710802146.1、发明名称为“人脸解锁及其信息注册方法和装置、设备、程序、介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本公开涉及人工智能技术,尤其是一种人脸解锁及其信息注册方法和装置、设备、程序、介质。
背景技术
在信息化时代,各种终端应用(APP)层出不穷,每个用户使用各种应用时,都需要注册用户信息以保留和保护用户数据。另外,随着互联网技术的发展,终端设备可以为用户提供越来越多的功能,例如通信、照片存储、安装各种应用等,很多用户会对自己的终端设备进行锁定以防其中的用户数据泄漏。因此,保护终端设备和应用中的私密数据逐渐成为关注焦点。
随着人工智能技术的发展,计算机视觉技术已在安全监控、金融、乃至无人驾驶等领域都具有巨大的应用价值。
发明内容
本公开实施例提供一种人脸解锁的技术方案。
根据本公开实施例的一个方面,提供的一种人脸解锁方法,包括:
对图像进行人脸检测;
对检测到人脸的图像进行人脸特征提取;
基于存储的人脸特征对提取到的人脸特征进行认证;其中,所述存储的人脸特征至少包括对应同一标识ID的至少二个不同角度人脸图像的人脸特征;
至少响应于所述提取到的人脸特征通过认证,进行解锁操作。
根据本公开实施例的另一个方面,提供的一种人脸解锁信息注册方法,包括:
输出表示获取同一ID的至少二个不同角度的人脸图像的提示信息;
对获取到的图像进行人脸检测;
对检测到各角度人脸的图像进行人脸特征提取;
存储提取到的各角度人脸图像的人脸特征、以及所述各角度人脸图像的人脸特征与所述同一ID之间的对应关系。
根据本公开实施例的又一个方面,提供的一种人脸解锁装置,包括:
人脸检测模块,用于对图像进行人脸检测;
特征提取模块,用于对检测到人脸的图像进行人脸特征提取;
认证模块,用于基于存储的人脸特征对提取到的人脸特征进行认证;其中,所述存储的人脸特征至少包括对应同一标识ID的至少二个不同角度人脸图像的人脸特征;
控制模块,用于至少响应于所述提取到的人脸特征通过认证,进行解锁操作。
根据本公开实施例的再一个方面,提供的一种电子设备,包括:
处理器和本公开任一实施例所述的人脸解锁装置;
在处理器运行所述认证装置时,本公开任一实施例所述的人脸解锁装置中的单元被运行。
根据本公开实施例的再一个方面,提供的一种电子设备,包括:
存储器,存储可执行指令;
一个或多个处理器,与存储器通信以执行可执行指令从而完成本公开任一实施例所述方法中各步骤的操作。
根据本公开实施例的再一个方面,提供的一种计算机程序,包括计算机可读代码,当所述计算机可读代码在设备上运行时,所述设备中的处理器执行用于实现本公开任一实施例所述方法中各步骤的指令。
根据本公开实施例的再一个方面,提供的一种计算机可读介质,用于存储计算机可读取的指令,所述指令被执行时实现本公开任一实施例所述方法中各步骤的操作。
基于本公开上述实施例提供的人脸解锁及其信息注册方法和装置、设备、介质,可以通过注册流程预先存储对应同一ID的至少二个不同角度人脸图像的人脸特征,进行人脸解锁时,对图像进行人脸检测,对检测到人脸的图像进行人脸特征提取,并基于存储的人脸特征对该提取到的人脸特征进行认证, 在该提取到的人脸特征通过认证后,进行解锁操作,从而实现了基于人脸的认证解锁,本公开实施例的解锁方式操作简单,便利性较高,且安全性较高;并且,由于本公开实施例通过注册流程预先存储对应同一ID的至少二个不同角度人脸图像的人脸特征,可以在获取到上述同一ID对应用户与存储的人脸特征对应的任一角度人脸图像时,均可成功实现基于该用户的人脸解锁,提高了人脸解锁的成功率,降低了由于同一用户认证时人脸角度与注册时人脸角度的差异而导致认证失败的可能性。
下面通过附图和实施例,对本公开的技术方案做进一步的详细描述。
附图说明
构成说明书的一部分的附图描述了本公开的实施例,并且连同描述一起用于解释本公开的原理。
参照附图,根据下面的详细描述,可以更加清楚地理解本公开,其中:
图1为本公开人脸解锁方法一个实施例的流程图。
图2为本公开人脸解锁方法另一个实施例的流程图。
图3为本公开人脸解锁方法又一个实施例的流程图。
图4为本公开人脸解锁信息注册方法一个实施例的流程图。
图5为本公开人脸解锁信息注册方法另一个实施例的流程图。
图6为本公开人脸解锁信息注册方法又一个实施例的流程图。
图7为本公开人脸解锁信息注册方法再一个实施例的流程图。
图8为本公开人脸解锁装置一个实施例的结构示意图。
图9为本公开人脸解锁装置另一个实施例的结构示意图。
图10为本公开电子设备一个实施例的结构示意图。
具体实施方式
现在将参照附图来详细描述本公开的各种示例性实施例。应注意到:除非另外可选说明,否则在这些实施例中阐述的部件和步骤的相对布置、数字表达式和数值不限制本公开的范围。
以下对至少一个示例性实施例的描述实际上仅仅是说明性的,决不作为对本公开及其应用或使用的任何限制。
对于相关领域普通技术人员已知的技术、方法和设备可能不作详细讨论,但在适当情况下,所述技术、方法和设备应当被视为说明书的一部分。
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步讨论。
本公开实施例可以应用于终端设备、计算机系统、服务器等电子设备,其可与众多其它通用或专用计算系统环境或配置一起操作。适于与终端设备、计算机系统、服务器等电子设备一起使用的众所周知的终端设备、计算系统、环境和/或配置的例子包括但不限于:个人计算机系统、服务器计算机系统、瘦客户机、厚客户机、手持或膝上设备、基于微处理器的系统、机顶盒、可编程消费电子产品、网络个人电脑、小型计算机系统﹑大型计算机系统和包括上述任何系统的分布式云计算技术环境,等等。
终端设备、计算机系统、服务器等电子设备可以在由计算机系统执行的计算机系统可执行指令(诸如程序模块)的一般语境下描述。通常,程序模块可以包括例程、程序、目标程序、组件、逻辑、数据结构等等,它们执行特定的任务或者实现特定的抽象数据类型。计算机系统/服务器可以在分布式云计算环境中实施,分布式云计算环境中,任务是由通过通信网络链接的远程处理设备执行的。在分布式云计算环境中,程序模块可以位于包括存储设备的本地或远程计算系统存储介质上。
图1为本公开人脸解锁方法一个实施例的流程图。如图1所示,该实施例的人脸解锁方法包括:
102,对图像进行人脸检测。
在一个可选示例中,该操作102可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的人脸检测模块执行。
104,对检测到人脸的图像进行人脸特征提取。
在一个可选示例中,该操作104可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的特征提取模块执行。
106,基于存储的人脸特征对提取到的人脸特征进行认证。
其中,本公开各实施例中,存储的人脸特征至少包括对应同一标识(ID)的至少二个不同角度人脸图像的人脸特征。其中的ID表示对应于存储人脸特征的用户信息,例如可以是用户姓名、编号、昵称等。
在本公开各实施例的一个可选示例中,上述对应同一ID的至少二个不同角度人脸图像例如可以包 括但不限于对应上述同一ID的以下二个或二个以上角度的人脸图像:正面的人脸图像,仰头的人脸图像,低头的人脸图像,左转头的人脸图像,右转头的人脸图像,等等。
在一个可选示例中,该操作106可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的认证模块执行。
108,至少响应于提取到的人脸特征通过认证,进行解锁操作。
在一个可选示例中,该操作108可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的控制模块执行。
基于本公开上述实施例提供的人脸解锁方法,可以通过注册流程预先存储对应同一ID的至少二个不同角度人脸图像的人脸特征,进行人脸解锁时,对图像进行人脸检测,对检测到人脸的图像进行人脸特征提取,并基于存储的人脸特征对该提取到的人脸特征进行认证,在该提取到的人脸特征通过认证后,进行解锁操作,从而实现了基于人脸的认证解锁,本公开实施例的解锁方式操作简单,便利性较高,且安全性较高;并且,由于本公开实施例通过注册流程预先存储对应同一ID的至少二个不同角度人脸图像的人脸特征,可以在获取到上述同一ID对应用户与存储的人脸特征对应的任一角度人脸图像时,均可成功实现基于该用户的人脸解锁,提高了人脸解锁的成功率,降低了由于同一用户认证时人脸角度与注册时人脸角度的差异而导致认证失败的可能性。
在本公开人脸解锁方法各实施例的一个可选示例中,操作108中基于存储的人脸特征对提取到的人脸特征进行认证,可以通过如下方式实现:
获取提取到的人脸特征与至少一个存储的人脸特征之间的相似度;
响应于提取到的人脸特征与任一存储的人脸特征之间的相似度大于设定阈值,确定提取到的人脸特征通过认证。
否则,若提取到的人脸特征与存储的所有角度的人脸特征之间的相似度均不大于设定阈值,则确定提取到的人脸特征未通过认证。
基于本实施例,可以逐一比对提取到的人脸特征与存储的各角度的人脸特征之间的相似度,只要提取到的人脸特征与存储的任一角度的人脸特征之间的相似度大于设定阈值,即可确定提取到的人脸特征通过认证,即:本实施例可能只比对提取到的人脸特征与存储的一个角度或部分角度的人脸特征之间的相似度便可确定提取到的人脸特征通过认证,便无需再比对提取到的人脸特征与存储的其余角度的人脸特征之间的相似度,从而有利于提升认证效率。
或者,在本公开人脸解锁方法各实施例的另一个可选示例中,操作108中基于存储的人脸特征对提取到的人脸特征进行认证,还可以通过如下方式实现:
分别获取提取到的人脸特征与多个存储的人脸特征之间的相似度;
响应于提取到的人脸特征与多个存储的人脸特征之间的多个相似度中的最大值大于设定阈值,确定提取到的人脸特征通过认证。
其中,上述多个存储的人脸特征可以是存储的所有角度的人脸特征或者其中部分角度的人脸特征。上述多个存储的人脸特征是存储的部分角度的人脸特征时,在提取到的人脸特征与该部分角度的人脸特征之间的多个相似度中的最大值大于设定阈值时,即可确定提取到的人脸特征通过认证,便无需再比对提取到的人脸特征与其余角度的人脸特征之间的相似度,从而有利于提升认证效率。在提取到的人脸特征与该部分角度的人脸特征之间的多个相似度中的最大值不大于设定阈值时,确定提取到的人脸特征未通过认证,可以从存储其余角度的人脸特征中再选取多个角度的人脸特征,采取类似方式,获取再选取的多个角度的人脸特征与该提取到的人脸特征之间的多个相似度中的最大值大于设定阈值,直至获取到的多个相似度中的最大值大于设定阈值,确定提取到的人脸特征通过认证,或者完成提取到的人脸特征与存储的所有角度的人脸特征之间的相似度的比对,均不存在最大值大于设定阈值的相似度,则确定提取到的人脸特征未通过认证。
图2为本公开人脸解锁方法另一个实施例的流程图。如图2所示,该实施例的人脸解锁方法包括:
202,获取图像。
在一个可选示例中,该操作202可以由处理器调用摄像头执行,也可以由被处理器运行的一个接收模块执行。
204,对获取到的图像进行光线均衡调整处理。
在本公开各实施例的一个可选示例中,可以直接执行该操作204,对获取到的图像进行光线均衡调整处理。
或者,在本公开各实施例的另一个可选示例中,也可以在该操作204之前,先确定获取到的图像的质量是否满足预定的人脸检测条件,在图像的质量不满足预定的人脸检测条件时再执行该操作204,对图像进行光线均衡调整处理,而对于质量满足预定的人脸检测条件的图像不再执行操作204、而直接通 过操作206对图像进行人脸检测,该实施例可以对质量满足预定的人脸检测条件的图像不再执行光线均衡调整处理操作,从而有利于提升人脸解锁的效率。
其中,预定的人脸检测条件例如可以包括但不限于以下至少一项:图像的像素值分布不符合预设分布范围,图像的属性值不在预设数值范围内,等等。其中,图像的属性值例如图像的色度、亮度、对比度和饱和度等属性值。
在一个可选示例中,该操作204可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的光线处理模块执行。
206,对光线均衡调整处理后的图像进行人脸检测。
本公开各实施例中,若从图像中未检测到人脸,可以选择性地返回执行操作202,即继续执行获取图像的操作。若从图像中未检测到人脸,执行操作208。
在一个可选示例中,该操作206可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的人脸检测模块执行。
208,对检测到人脸的图像进行人脸进行特征提取。
在一个可选示例中,该操作208可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的特征提取模块执行。
210,基于存储的人脸特征对提取到的人脸特征进行认证。
其中,本公开各实施例中,存储的人脸特征至少包括对应同一ID的至少二个不同角度人脸图像的人脸特征。
在一个可选示例中,该操作210可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的认证模块执行。
212,至少响应于提取到的人脸特征通过认证,进行解锁操作。
在基于本实施例的一个可选实施例中,若提取到的人脸特征通过认证,还可以获取该提取的人脸特征对应的ID并显示,以便用户知晓当前通过认证的用户信息。
若提取到的人脸特征未通过认证,不执行解锁操作。或者,在本公开人脸解锁方法的一个可选实施例中,也可以输出人脸解锁失败的提示消息。
在一个可选示例中,该操作212可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的控制模块执行。
在实际情况中,经常会遇到逆光、强光、暗光等复杂场景,例如夜晚在外背后射来灯光或是室内光线昏暗等情况,此时对拍摄到的图像中人脸进行检测,背景过于突出而引起人脸检测的困难,或者即使检测到人脸,从图像中提取的人脸特征非常模糊。相比于一般场景的人脸检测,暗光场景的像素值集中在较低的数值区域,纹理梯度较小,图像整体的信息特征都十分模糊,要检测出有效信息尤其是人脸比较困难;而逆光、强光场景相对于一般场景而言,虽然整体亮度差不多,但是由于背景光线非常亮,导致人脸部分的轮廓和细节纹理等都十分模糊,从而导致人脸特征提取具有较高的难度。
本公开人通过研究发现,对于逆光、强光、暗光等复杂光照场景,这些场景的图像中,其像素值分布往往会有一定的局域性不符合预设分布范围,和/或图像的属性值不在预设数值范围内。例如在暗光场景中,像素值往往都集中在较低数值的区域,此时图像的对比度、色度等都会很低,检测器很难处理这些图像中的人脸或是会产生误报的情况。
在图2所示实施例的一个可选示例中,操作204中,对获取到的图像进行光线均衡调整处理,可以包括:获取图像的灰度图;至少对该图像的灰度图进行直方图均衡化处理,使图像的灰度图的像素值分布能够均匀的扩展到整个像素值空间,同时保留原图像像素值的相对分布,以便对经过直方图均衡化处理的图像的灰度图执行后续操作。
在图2所示实施例的另一个可选示例中,操作204中,对获取到的图像进行光线均衡调整处理,可以包括:至少对图像进行图像光照变换,以将图像变换为满足预设光照条件的图像。
本公开实施例的一个可选示例中,对获取到的图像的质量进行检测,在图像的质量不满足预定的人脸检测条件时,例如在图像的亮度不满足预设亮度条件时,对图像的灰度图进行直方图均衡化处理,即:首先对图像的灰度图在像素值上进行直方图均衡化,使图像的灰度图的像素值分布能够均匀的扩展到整个像素值空间,同时保留原图像像素值的相对分布,对经过直方图均衡化处理的图像再进行人脸检测,进行过直方图均衡化处理的图像的灰度图中特征更加明显,纹理更加清晰而易于检测人脸;或者,对图像进行图像光照变换,将图像变换为满足预设光照条件的图像,再进行人脸检测,从而易于检测人脸。本公开实施例遇到暗光、逆光等极端的光照条件下,依然能够较为准确地检测到图像中的人脸,尤其对于实际场景中那些室内或是夜晚光照非常暗几乎接近全黑的情况,或是在夜晚背景光照强烈,人脸昏暗纹理模糊的情况下也都能够检测出人脸,从而使得本公开可以更好的实现人脸解锁应用。
另外,在基于本公开上述各实施例的人脸解锁方法的又一个实施例中,还可以包括:对获取到的图像进行活体检测。相应地,在该实施例中,响应于提取到的人脸特征通过认证、且该图像通过活体检测,进行解锁操作。
示例性地,本公开各实施例的人脸解锁方法中,可以在获取图像之后,对图像进行活体检测;或者,也可以响应于从图像中检测到人脸,对该检测到人脸的图像进行活体检测;或者,还可以响应于提取到的人脸特征通过认证,对提取到的人脸特征通过认证的图像进行活体检测。
在本公开各实施例的一个可选示例中,对图像进行活体检测,可以包括:
利用神经网络,对图像进行图像特征提取,检测提取的图像特征是否包含至少一种伪造线索信息;基于该至少一种伪造线索信息的检测结果,确定图像是否通过活体检测。若提取的图像特征未包含任意一种伪造线索信息,该图像通过活体检测;否则,若提取的图像特征包含任意一种或多种伪造线索信息,该图像未通过活体检测。
示例性地,本公开各实施例中的图像特征,例如可以包括但不限于以下任意一项多项:局部二值模式(LBP)特征、稀疏编码的柱状图(HSC)特征、全景图(LARGE)特征、人脸图(SMALL)特征、人脸细节图(TINY)特征。实际应用中,可以根据可能出现的伪造线索信息对需要提取的图像特征包括的特征项进行更新。
其中,通过LBP特征,可以突出待检测图像中的边缘信息;通过HSC特征,可以更明显的反映待检测图像中的反光与模糊信息;LARGE特征是全景图特征,基于LARGE特征,可以提取到待检测图像中最明显的伪造线索(hack);人脸图(SMALL)是待检测图像中人脸框若干倍大小(例如1.5倍大小)的区域切图,包含人脸、人脸与背景切合的部分,基于SMALL特征,可以提取到反光、翻拍设备屏幕摩尔纹与模特或者面具的边缘等伪造线索;人脸细节图(TINY)是取人脸框大小的区域切图,包含人脸,基于TINY特征,可以提取到图像PS(photoshop编辑)、翻拍屏幕摩尔纹与模特或者面具的纹理等伪造线索。上述各项特征中包含的伪造人脸的伪造线索,可以预先通过训练神经网络,被神经网络学习到,之后包含这些伪造线索的图像输入神经网络后均会被检测出来,就可以判断该图像是伪造人脸图像,否则为真实人脸图像,从而实现人脸的活体检测。
示例性地,本公开实施例中的上述至少一种伪造线索信息,例如可以包括但不限于以下任意一项或多项:2D类伪造线索信息、2.5D类伪造线索信息和3D类伪造线索信息,可选可以根据可能出现的伪造线索信息对该多个维度伪造线索信息进行更新。
本公开实施例中的伪造线索信息能被人眼观测到。伪造线索信息的维度可以划分为2D类、2.5D类和3D类伪造线索。其中,2D类伪造人脸指的是纸质类材料打印出的人脸图像,该2D类伪造线索信息一般包含纸质人脸的边缘、纸张材质、纸面反光、纸张边缘等伪造信息。2.5D类伪造人脸指的是视频翻拍设备等载体设备承载的人脸图像,该2.5D类伪造线索信息一般包含视频翻拍设备等载体设备的屏幕摩尔纹、屏幕反光、屏幕边缘等伪造信息。3D类伪造人脸指的是真实存在的伪造人脸,例如面具、模特、雕塑、3D打印等,该3D类伪造人脸同样具备相应的伪造信息,例如面具的缝合处、模特的较为抽象或过于光滑的皮肤等伪造信息。
基于本公开上述实施例,可以从多个维度来检测图像是否伪造人脸图像,可以检测出不同维度、各种类型的伪造人脸图像,提高了伪造人脸检测的精确度,有效避免了活体检测过程中不法分子利用待验证用户的照片或视频进行伪造攻击;此外,通过神经网络进行人脸防伪检测,可以针对各种伪造人脸方式的伪造线索信息进行训练学习,在出现新的伪造人脸方式时,基于新的伪造线索信息对神经网络进行训练、微调即可快速更新神经网络,而无需改进硬件结构,从而可以快速有效的响应新的人脸防伪检测需求。
图3为本公开人脸解锁方法又一个实施例的流程图。本公开实施例中,以在获取图像之后对图像进行活体检测为例对本公开实施例进行说明,本领域技术人员根据本公开的记载可以知晓,响应于从图像中检测到人脸,对该检测到人脸的图像进行活体检测的实现方案;以及响应于提取到的人脸特征通过认证,对提取到的人脸特征通过认证的图像进行活体检测的实现方案,此处不再赘述。如图3所示,该实施例的人脸解锁方法包括:
302,获取图像。
之后分别执行操作304和308。
在一个可选示例中,该操作302可以由处理器调用摄像头执行,也可以由被处理器运行的一个接收模块执行。
304,识别获取到的图像是否满足预设质量要求。
其中,可以预先设置质量要求的标准,以选取高质量的图像进行活体检测。其中的质量要求的标准例如可以包括以下任意一项或多项:人脸朝向是否正面朝向、图像清晰度的高低、曝光度高低等,依据 相应的标准选取综合质量较高的图像进行活体检测。
响应于图像满足预设质量要求,针对该图像执行操作306。否则,响应于图像不满足预设质量要求,重新执行操作302获取图像。
在一个可选示例中,该操作304可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的光线处理模块执行。
306,对获取到的图像进行活体检测。
之后,执行操作314。
在一个可选示例中,该操作306可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的活体检测模块执行。
308,对获取到的图像进行人脸检测。
可选地,该操作308可以包括:在获取到的图像的质量不满足预定的人脸检测条件时,先对图像进行光线均衡调整处理,然后再对光线均衡调整处理后的图像进行人脸检测。若获取到的图像的质量满足预定的人脸检测条件时,可以直接对该图像进行人脸检测。
在一个可选示例中,该操作308可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的人脸检测模块执行。
310,识别是否从图像中检测到人脸。
响应于从图像中检测到人脸,执行操作312。否则,响应于从图像中未检测到人脸,可以继续执行操作302,即重新获取图像并进行后续流程。
在一个可选示例中,该操作310可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的人脸检测模块执行。
312,对检测到人脸的图像进行特征提取,并基于存储的人脸特征对提取到的人脸特征进行认证。
其中,本公开各实施例中,存储的人脸特征至少包括对应同一ID的至少二个不同角度人脸图像的人脸特征。
在一个可选示例中,该操作312可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的特征提取模块执行。
314,确定提取到的人脸特征是否通过认证,以及获取到的图像是否通过活体检测。
响应于提取到的人脸特征通过认证、且获取到的图像通过活体检测,执行操作316。否则,响应于提取到的人脸特征未通过认证和/或获取到的图像未通过活体检测,不执行本实施例的后续流程,或者,可选地执行操作318。
在一个可选示例中,该操作314可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的认证模块执行。
316,进行解锁操作。
可选地,在本公开另一实施例中,响应于提取到的人脸特征通过认证,还可以从预先存储的对应关系中获取该通过认证的人脸特征对应的ID并显示。
在一个可选示例中,该操作316可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的控制模块执行。
之后,不执行本实施例的后续流程。
318,输出认证失败的提示消息和/或者认证失败原因提示消息。
其中,认证失败原因例如可以是未检测到人脸、人脸特征未通过认证、未通过活体检测(例如,检测为照片等),等等。
在一个可选示例中,该操作318可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的认证模块或者交互模块执行。
另外,在本公开又一实施例的人脸解锁方法中,还可以包括:
响应于提取到的人脸特征未通过认证,获取预先设置的允许重复次数信息,对本次人脸解锁方法流程中的认证次数进行累计,并识别当前累计的认证次数是否达到允许重复次数;
若未达到允许重复次数,提示用户是否重新认证;
响应于接收到用户发送的重新认证请求,返回执行操作102、202或302,继续获取图像,重新执行本实施例的人脸解锁流程;
响应于当前累计的认证次数达到允许重复次数,执行输出认证失败的提示消息或者认证失败原因提示消息的操作。
本公开各实施例的人脸解锁方法可以应用于电子设备屏幕解锁、应用程序(APP)的解锁、应用程序中的人脸解锁等一切需要解锁的场景,例如,在移动终端启动时可以采用本公开各实施例的人脸解锁 方法解锁屏幕,在移动终端的APP中可以通过本公开各实施例的人脸解锁方法进行应用程序的解锁,在支付应用程序中通过本公开各实施例的人脸解锁方法进行人脸解锁等。由此,本公开各实施例的人脸解锁方法可以响应于接收到用户发送的刷脸认证请求,或者响应于接收到应用或操作系统发送的刷脸认证请求等,触发执行。解锁之后,可以正常操作设备、应付程序等,或者正常进行后续流程。例如,需要进行人脸解锁的电子设备解锁后可以正常使用、操作电子设备(例如移动终端等);需要进行人脸解锁的APP(例如各种购物客户端、银行客户端、终端中的相册等)在解锁后可以进入该APP,正常使用该APP;在各种APP的支付环节需要进行人脸解锁时,解锁成功后可以完成支付等等。
本公开上述各实施例的人脸解锁方法流程之前,还可以包括:通过人脸解锁信息注册流程获取存储的对应同一ID的至少二个不同角度人脸图像的人脸特征。
示例性地,上述人脸解锁信息注册流程可以通过本公开以下各实施例的人脸解锁信息注册方法实施例实现。
图4为本公开人脸解锁信息注册方法一个实施例的流程图。如图4所示,本实施例的人脸解锁信息注册方法包括:
402,输出表示获取同一ID的至少二个不同角度的人脸图像的提示信息。
在一个可选示例中,该操作402可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的交互模块执行。
404,对获取到的图像进行人脸检测。
在一个可选示例中,该操作404可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的人脸检测模块执行。
406,对检测到各角度人脸的图像进行人脸特征提取。
在一个可选示例中,该操作406可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的特征提取模块执行。
408,存储提取到的各角度人脸图像的人脸特征、以及各角度人脸图像的人脸特征与上述同一ID之间的对应关系。
其中,本公开各实施例中,存储的人脸特征至少包括对应同一ID的至少二个不同角度人脸图像的人脸特征。其中的ID表示对应于存储人脸特征的用户信息,例如可以是用户姓名、编号等。
在本公开各实施例的一个可选示例中,上述对应同一ID的至少二个不同角度人脸图像例如可以包括但不限于对应上述同一ID的以下二个或二个以上角度的人脸图像:正面的人脸图像,仰头的人脸图像,低头的人脸图像,左转头的人脸图像,右转头的人脸图像,等等。
在一个可选示例中,该操作408可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的存储模块执行。
基于本公开上述实施例提供的人脸解锁信息注册方法,可以通过注册流程预先存储对应同一ID的至少二个不同角度人脸图像的人脸特征,以便后续基于该对应同一ID的至少二个不同角度人脸图像的人脸特征进行人脸解锁,有利于提高人脸解锁的成功率,降低了由于同一用户认证时人脸角度与注册时人脸角度的差异而导致认证失败的可能性。
图5为本公开人脸解锁信息注册方法另一个实施例的流程图。如图5所示,本实施例的人脸解锁信息注册方法包括:
502,输出表示获取同一ID的至少二个不同角度的人脸图像的提示信息。
在一个可选示例中,该操作502可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的交互模块执行。
504,获取图像。
在一个可选示例中,该操作504可以由处理器调用摄像头执行,也可以由被处理器运行的人脸检测模块执行。
506,对获取到的图像进行光线均衡调整处理。
在本公开各实施例的一个可选示例中,可以直接执行该操作506,对获取到的图像进行光线均衡调整处理。
或者,在本公开各实施例的另一个可选示例中,也可以在该操作506之前,先确定获取到的图像的质量是否满足预定的人脸检测条件,在图像的质量不满足预定的人脸检测条件时再执行操作506,对获取到的图像进行光线均衡调整处理;对于质量满足预定的人脸检测条件的图像不再执行操作506,直接通过操作508对图像进行人脸检测,该实施例可以对质量满足预定的人脸检测条件的图像不再执行光线均衡调整处理操作,从而有利于提升人脸解锁的效率。
其中,预定的人脸检测条件例如可以包括但不限于以下至少一项:图像的像素值分布不符合预设分 布范围,图像的属性值不在预设数值范围内,等。其中,图像的属性值例如图像的色度、亮度、对比度和饱和度等属性值,等等。
在本实施例的一个可选示例中,操作506中,对获取到的图像进行光线均衡调整处理,可以包括:获取图像的灰度图;至少对该图像的灰度图进行直方图均衡化处理,使图像的灰度图的像素值分布能够均匀的扩展到整个像素值空间,同时保留原图像像素值的相对分布,以便对经过直方图均衡化处理的图像的灰度图执行后续操作。
在本实施例的另一个可选示例中,操作506中,对获取到的图像进行光线均衡调整处理,可以包括:至少对图像进行图像光照变换,以将图像变换为满足预设光照条件的图像。
在一个可选示例中,该操作506可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的光线处理模块执行。
508,对获取到的图像进行人脸检测。
在一个可选示例中,该操作508可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的人脸检测模块执行。
510,识别是否从图像中检测到人脸。
响应于从图像中检测到人脸,执行操作512。否则,响应于从图像中未检测到人脸,返回执行操作504,重新获取图像。
在一个可选示例中,该操作510可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的人脸检测模块执行。
512,对检测到各角度人脸的图像进行人脸特征提取。
在一个可选示例中,该操作512可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的特征提取模块执行。
514,存储提取到的各角度人脸图像的人脸特征及其与上述同一ID之间的对应关系。
在一个可选示例中,该操作514可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的人脸检测模块执行。
本公开实施例中,先对获取到的图像进行光线均衡调整处理,再进行人脸检测,从而易于检测人脸,遇到暗光、逆光等极端的光照条件下,依然能够较为准确地检测到图像中的人脸,尤其对于实际场景中那些室内或是夜晚光照非常暗几乎接近全黑的情况,或是在夜晚背景光照强烈,人脸昏暗纹理模糊的情况下也都能够检测出人脸,从而使得本公开可以更好的实现人脸解锁应用。
图6为本公开人脸解锁信息注册方法又一个实施例的流程图。如图6所示,与图5所示实施例相比,本实施例的人脸解锁信息注册方法中,在操作514之前,例如可以在操作512之前、之后或同时,执行如下操作:
602,检测图像包括的人脸的角度。
在一个可选示例中,该操作602可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的存储模块执行。
604,确定检测出的角度与提示信息对应的角度是否相匹配。确定检测出的角度与提示信息对应的角度相匹配时,执行操作512对检测到各角度人脸的图像进行人脸特征提取、或者514存储提取到的各角度人脸图像的人脸特征、以及各角度人脸图像的人脸特征与上述同一ID之间的对应关系。
可选地,在另一实施例中,响应于检测出的角度与提示信息对应的角度不匹配,还可以输出表示重新输入该角度的人脸图像的新提示信息,以便调整人脸角度,重新执行本公开实施例的人脸解锁信息注册方法流程。
在一个可选示例中,该操作604可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的存储模块执行。
在图6所示实施例的一个可选示例中,操作602检测图像包括的人脸的角度,可以包括:
对人脸进行关键点检测;
根据检测到的关键点计算人脸角度,例如人脸的左右角度和上下角度;
根据计算出的人脸角度,确定检测出的角度与提示信息对应的角度是否相匹配。
本公开实施例中,后续可以基于在人脸解锁信息注册流程中保存的人脸特征对用户进行人脸解锁,为了避免后续进行人脸解锁时由于参与人脸解锁的人脸角度与注册时不同而人脸解锁失败,提高人脸解锁的成功率,本公开实施例可以针对同一用户存储多个角度(例如五个角度)人脸图像的人脸特征。其中,不同角度的人脸例如可以是正面、仰头、低头、左转头、右转头五种角度的人脸。本公开实施例中,可以人脸(即:人头)的左右角度和上下角度表示人脸角度,可以设定正面人脸时,人脸的左右角度和上下角度均为零。
相应地,在图6所示实施例的另一个可选示例中,操作502中,输出表示获取同一ID的至少二个不同角度的人脸图像的提示信息,可以包括:根据预先设置的多角度参数,选取一预设角度并提示用户录入该一预设角度的人脸图像。其中的多角度参数包括需要获取的人脸图像的多个角度信息。相应地,该示例中,存储该一预设角度人脸图像的人脸特征及其与上述同一ID之间的对应关系之后,还可以包括:识别是否选取完多角度参数对应的所有预设角度;响应于未选取完多角度参数对应的所有预设角度,选取下一预设角度并针对该下一预设角度执行上述图5或图6所示实施例。若选取完多角度参数对应的所有预设角度,则完成该本次人脸解锁信息注册。
可选地,响应于选取完多角度参数对应的所有预设角度或者每次提取到一个角度的人脸特征后,还可以输出用于提示用户输入所述同一ID的提示信息。相应地,存储提取到的各角度人脸图像的人脸特征及其与所述同一ID之间的对应关系,包括:存储提取到的至少二个角度人脸图像的人脸特征与用户输入的ID,并建立该ID和上述至少二个角度人脸图像的人脸特征之间的对应关系。
基于上述示例,实现了针对同一用户存储多个不同角度人脸的人脸特征。
在本公开上述各实施例的人脸解锁信息注册方法中,还可以包括:对图像进行活体检测。相应地,在本公开上述各人脸解锁方法实施例中,响应于该图像通过活体检测,执行上述存储提取到的各角度人脸图像的人脸特征及其与上述同一ID之间的对应关系的操作。
示例性地,本公开各实施例的人脸解锁方法中,对图像进行活体检测,可以在获取图像之后,对获取到的图像进行活体检测;或者,也可以对检测到各角度人脸的图像进行活体检测;或者,响应于检测到的人脸的角度与预设角度匹配,对图像进行活体检测;或者,还可以对人脸进行特征提取之后,对图像进行活体检测。
本公开各人脸解锁信息注册方法实施例中对图像进行活体检测的实现方式,可以参考本公开上述各人脸解锁方法实施例中对图像进行活体检测的实现方式,此处不再赘述。
图7为本公开人脸解锁信息注册方法再一个实施例的流程图。本公开实施例中,以在获取图像之后对图像进行活体检测为例对本公开实施例进行说明,本领域技术人员根据本公开的记载可以知晓,对检测到各角度人脸的图像进行活体检测、响应于检测到的人脸的角度与预设角度匹配对图像进行活体检测、检测到各角度人脸的图像进行人脸提取之后对图像进行活体检测的实现方案,此处不再赘述。如图7所示,本实施例的人脸解锁信息注册方法包括:
702,输出表示获取同一ID的至少二个不同角度的人脸图像的提示信息。
在一个可选示例中,该操作702可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的交互模块执行。
704,获取图像,并对获取到的图像进行活体检测。
响应于该图像通过活体检测,执行操作706。否则,若该图像未通过活体检测,不执行本实施例的后续流程。
在一个可选示例中,该操作704可以由处理器调用摄像头和存储器存储的相应指令执行,也可以由被处理器运行的活体检测模块执行。
706,对获取到的图像进行人脸检测。
在一个可选示例中,该操作706可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的人脸检测模块执行。
708,识别是否从图像中检测到人脸。
响应于从图像中检测到人脸,执行操作710。若从图像中未检测到人脸,继续执行操作702,或者继续获取图像并执行操作704。
在一个可选示例中,该操作708可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的人脸检测模块执行。
710,检测图像包括的人脸的角度。
在一个可选示例中,该操作710可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的存储模块执行。
712,确定检测出的角度与提示信息对应的角度是否相匹配。
响应于检测出的角度与提示信息对应的角度相匹配,执行操作714。否则,若检测出的角度与提示信息对应的角度不相匹配,重新执行操作702。
在一个可选示例中,该操作712可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的存储模块执行。
714,对检测到各角度人脸的图像进行人脸特征提取。
在一个可选示例中,该操作714可以由处理器调用存储器存储的相应指令执行,也可以由被处理器 运行的特征检测模块执行。
716,存储提取到的各角度人脸图像的人脸特征、以及各角度人脸图像的人脸特征与上述同一ID之间的对应关系。
另外,作为本公开人脸解锁信息注册方法的又一实施例,在图7所示实施例的操作704中,可以识别获取到的图像是否满足预设质量要求;响应于图像满足预设质量要求,对图像进行活体检测;否则,响应于图像不满足预设质量要求,继续执行操作702或704。
在一个可选示例中,该操作716可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的存储模块执行。
本公开上述实施例可以从多个维度来检测图像是否伪造人脸图像,可以检测出不同维度、各种类型的伪造人脸图像,提高了伪造人脸检测的精确度,有效避免了活体检测过程中不法分子利用待验证用户的照片或视频进行伪造攻击,确保人脸解锁信息注册时的图像即为真实的用户图像;此外,通过神经网络进行人脸防伪检测,可以针对各种伪造人脸方式的伪造线索信息进行训练学习,在出现新的伪造人脸方式时,基于新的伪造线索信息对神经网络进行训练、微调即可快速更新神经网络,而无需改进硬件结构,可以快速有效的响应新的人脸防伪检测需求。
本公开上述各实施例的人脸解锁信息注册方法,可以响应于接收到用户发送的录入人脸请求开始执行,或者响应于接收到应用或操作系统发送的录入人脸请求开始执行。
本公开实施例提供的任一种人脸解锁方法和人脸解锁信息注册方法可以由任意适当的具有数据处理能力的设备执行,包括但不限于:终端设备和服务器等。或者,本公开实施例提供的任一种人脸解锁方法和人脸解锁信息注册方法可以由处理器执行,如处理器通过调用存储器存储的相应指令来执行本公开实施例提及的任一种人脸解锁方法和人脸解锁信息注册方法。下文不再赘述。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
图8为本公开人脸解锁装置一个实施例的结构示意图。该实施例的人脸解锁装置可用于实现本公开上述各方法实施例。如图8所示,该实施例的人脸解锁装置包括:人脸检测模块,特征提取模块,认证模块和控制模块。其中:
人脸检测模块,用于对图像进行人脸检测。
特征提取模块,用于对检测到人脸的图像进行人脸特征提取。
认证模块,用于基于存储的人脸特征对提取到的人脸特征进行认证。
其中,存储的人脸特征至少包括对应同一ID的至少二个不同角度人脸图像的人脸特征。示例性地,上述对应同一ID的至少二个不同角度人脸图像例如可以包括但不限于对应同一ID的以下二个或二个以上角度的人脸图像:正面的人脸图像,仰头的人脸图像,低头的人脸图像,左转头的人脸图像,右转头的人脸图像,等等。
控制模块,用于至少响应于提取到的人脸特征通过认证,进行解锁操作。
在其中一个可选示例中,认证模块用于获取提取到的人脸特征与至少一个存储的人脸特征之间的相似度;以及响应于获取到任一相似度大于设定阈值,确定提取到的人脸特征通过认证。在另一个可选示例中,认证模块用于分别获取提取到的人脸特征与多个存储的人脸特征之间的相似度;以及响应于获取到的多个相似度中的最大值大于设定阈值,确定提取到的人脸特征通过认证。
本公开实施例提供的人脸解锁装置,对图像进行人脸检测,对检测到人脸的图像进行人脸特征提取,并基于存储的人脸特征对该提取到的人脸特征进行认证,在该提取到的人脸特征通过认证后,进行解锁操作,从而实现了基于人脸的认证解锁,本公开实施例的解锁方式操作简单,便利性较高,且安全性较高;并且,由于本公开实施例通过注册流程预先存储对应同一ID的至少二个不同角度人脸图像的人脸特征,可以在获取到上述同一ID对应用户与存储的人脸特征对应的任一角度人脸图像时,均可成功实现基于该用户的人脸解锁,提高了人脸解锁的成功率,降低了由于同一用户认证时人脸角度与注册时人脸角度的差异而导致认证失败的可能性。
图9为本公开人脸解锁装置另一个实施例的结构示意图。如图9所示,与图8所示实施例相比,该实施例的人脸解锁装置还包括:获取模块和光线处理模块。其中:
获取模块,用于获取图像。该获取模块例如可以是一个摄像头或其他图像采集设备。
光线处理模块,用于在对图像进行光线均衡调整处理。
相应地,人脸检测模块用于对光线均衡调整处理后的图像进行人脸检测。
在其中一个可选示例中,光线处理模块用于获取图像的灰度图,以及至少对图像的灰度图进行直方图均衡化处理。在另一个可选示例中,光线处理模块用于至少对图像进行图像光照变换,以将图像变换 为满足预设光照条件的图像。在又一个可选示例中,光线处理模块用于确定图像的质量不满足预定的人脸检测条件,对图像进行光线均衡调整处理。其中,预定的人脸检测条件例如可以包括但不限于以下至少一项:图像的像素值分布不符合预设分布范围,图像的属性值不在预设数值范围内。
进一步地,再参见图9,在本公开人脸解锁装置的又一个实施例中,还可以包括:交互模块和存储模块。其中:交互模块,用于输出表示获取上述同一ID的至少二个不同角度的人脸图像的提示信息。存储模块,用于存储特征提取模块提取到的各角度人脸图像的人脸特征及其与上述同一ID之间的对应关系。
在其中一个可选示例中,存储模块用于检测图像包括的人脸的角度;以及确定检测出的角度与提示信息对应的角度相匹配,存储特征提取模块提取到的各角度人脸图像的人脸特征及其与同一ID之间的对应关系。
在另一个可选示例中,存储模块检测图像包括的人脸的角度时,用于对图像进行人脸关键点检测;以及根据检测到的人脸关键点计算图像包括的人脸的角度。
另外,在本公开人脸解锁装置的再一个实施例中,存储模块还可用于在检测出的角度与提示信息对应的角度不匹配时,请求交互模块输出表示重新输入该角度的人脸图像的新提示信息。
在又一个可选示例中,存储模块用于识别是否存储完成同一ID的至少二个不同角度的人脸图像的人脸特征;响应于未存储完成同一ID的至少二个不同角度的人脸图像的,请求交互模块输出表示获取同一ID的至少二个不同角度的人脸图像的提示信息的操作;响应于存储完成同一ID的至少二个不同角度的人脸图像的人脸特征,请求交互模块输出用于提示用户输入同一ID的提示信息;存储提取到的至少二个角度人脸图像的人脸特征与用户输入的同一ID,并建立同一ID和至少二个角度人脸图像的人脸特征之间的对应关系。
进一步地,再参见图9,在本公开人脸解锁装置的再一个实施例中,还可以包括:活体检测模块,用于对图像进行活体检测。相应地,该实施例中,控制模块用于至少响应于提取到的人脸特征通过认证、且图像通过活体检测,进行解锁操作。
在其中一个可选示例中,活体检测模块,用于响应于图像满足预设质量要求,对图像进行活体检测。
在另一个可选示例中,活体检测模块可以通过神经网络实现。该神经网络用于:对图像进行图像特征提取;检测提取的图像特征是否包含至少一种伪造线索信息;以及基于至少一种伪造线索信息的检测结果,确定图像是否通过活体检测。
其中,利用神经网络对图像提取的图像特征例如可以包括但不限于以下任意一项或多项:LBP特征、HSC特征、LARGE特征、SMALL特征、TINY特征。
上述至少一种伪造线索信息例如可以包括但不限于以下任意一项或多项:2D类伪造人脸信息、2.5D类伪造人脸信息和3D类伪造人脸信息。
其中,2D类伪造人脸信息包括纸质类材料打印人脸图像的伪造信息;和/或,2.5D类伪造人脸信息包括载体设备承载人脸图像的伪造信息;和/或,3D类伪造人脸信息包括伪造人脸的信息。
本公开实施例还提供了一种电子设备,包括:本公开上述任一实施例的人脸解锁装置。
另外,本公开实施例还提供了另一种电子设备,包括:
处理器和本公开上述任一实施例的人脸解锁;
在处理器运行该人脸解锁时,上述任一实施例的人脸解锁中的模块被运行。
另外,本公开实施例还提供了又一种电子设备,包括:
存储器,存储可执行指令;
一个或多个处理器,与存储器通信以执行可执行指令从而本公开上述任一实施例的人脸解锁方法或者中人脸解锁信息注册方法步骤的操作。
另外,本公开实施例还提供了一种计算机程序,包括计算机可读代码,当该计算机可读代码在设备上运行时,设备中的处理器执行用于实现本公开上述任一实施例的人脸解锁方法或者中人脸解锁信息注册方法中步骤的指令。
另外,本公开实施例还提供了一种计算机可读介质,用于存储计算机可读取的指令,该指令被执行时实现本公开上述任一实施例的人脸解锁方法或者中人脸解锁信息注册方法中步骤的操作。
图10为本公开电子设备一个实施例的结构示意图。下面参考图10,其示出了适于用来实现本申请实施例的终端设备或服务器的电子设备的结构示意图。如图10所示,该电子设备包括一个或多个处理器、通信部等,所述一个或多个处理器例如:一个或多个中央处理单元(CPU)801,和/或一个或多个图像处理器(GPU)813等,处理器可以根据存储在只读存储器(ROM)802中的可执行指令或者从存储部分808加载到随机访问存储器(RAM)803中的可执行指令而执行各种适当的动作和处理。通信部812可包括但不限于网卡,所述网卡可包括但不限于IB(Infiniband)网卡,处理器可与只读存储器802 和/或随机访问存储器803中通信以执行可执行指令,通过总线804与通信部812相连、并经通信部812与其他目标设备通信,从而完成本申请实施例提供的任一方法对应的操作,例如,对图像进行人脸检测;对检测到人脸的图像进行人脸特征提取;基于存储的人脸特征对提取到的人脸特征进行认证,其中,所述存储的人脸特征至少包括对应同一标识ID的至少二个不同角度人脸图像的人脸特征;至少响应于所述提取到的人脸特征通过认证,进行解锁操作。或者,输出表示获取同一ID的至少二个不同角度的人脸图像的提示信息;对获取到的图像进行人脸检测;对检测到各角度人脸的图像进行人脸特征提取;存储提取到的各角度人脸图像的人脸特征、以及各角度人脸图像的人脸特征与所述同一ID之间的对应关系。
此外,在RAM 803中,还可存储有装置操作所需的各种程序和数据。CPU801、ROM802以及RAM803通过总线804彼此相连。在有RAM803的情况下,ROM802为可选模块。RAM803存储可执行指令,或在运行时向ROM802中写入可执行指令,可执行指令使中央处理单元801执行上述方法对应的操作。输入/输出(I/O)接口805也连接至总线804。通信部812可以集成设置,也可以设置为具有多个子模块(例如多个IB网卡),并在总线链接上。
以下部件连接至I/O接口805:包括键盘、鼠标等的输入部分806;包括诸如阴极射线管(CRT)、液晶显示器(LCD)等以及扬声器等的输出部分807;包括硬盘等的存储部分808;以及包括诸如LAN卡、调制解调器等的网络接口卡的通信部分809。通信部分809经由诸如因特网的网络执行通信处理。驱动器810也根据需要连接至I/O接口805。可拆卸介质811,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器811上,以便于从其上读出的计算机程序根据需要被安装入存储部分808。
需要说明的,如图10所示的架构仅为一种可选实现方式,在可选实践过程中,可根据实际需要对上述图10的部件数量和类型进行选择、删减、增加或替换;在不同功能部件设置上,也可采用分离设置或集成设置等实现方式,例如GPU813和CPU801可分离设置或者可将GPU813集成在CPU801上,通信部可分离设置,也可集成设置在CPU801或GPU813上,等等。这些可替换的实施方式均落入本公开公开的保护范围。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括有形地包含在机器可读介质上的计算机程序,计算机程序包含用于执行流程图所示的方法的程序代码,程序代码可包括对应执行本申请实施例提供的方法步骤对应的指令,例如,对图像进行人脸检测;对检测到人脸的图像进行人脸特征提取;基于存储的人脸特征对提取到的人脸特征进行认证,其中,所述存储的人脸特征至少包括对应同一标识ID的至少二个不同角度人脸图像的人脸特征;至少响应于所述提取到的人脸特征通过认证,进行解锁操作。或者,输出表示获取同一ID的至少二个不同角度的人脸图像的提示信息;对获取到的图像进行人脸检测;对检测到各角度人脸的图像进行人脸特征提取;存储提取到的各角度人脸图像的人脸特征及其与所述同一ID之间的对应关系。
本说明书中各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同或相似的部分相互参见即可。对于系统实施例而言,由于其与方法实施例基本对应,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
可能以许多方式来实现本公开的方法和装置、设备。例如,可通过软件、硬件、固件或者软件、硬件、固件的任何组合来实现本公开的方法和装置、设备。用于所述方法的步骤的上述顺序仅是为了进行说明,本公开的方法的步骤不限于以上可选描述的顺序,除非以其它方式特别说明。此外,在一些实施例中,还可将本公开实施为记录在记录介质中的程序,这些程序包括用于实现根据本公开的方法的机器可读指令。因而,本公开还覆盖存储用于执行根据本公开的方法的程序的记录介质。
本公开的描述是为了示例和描述起见而给出的,而并不是无遗漏的或者将本公开限于所公开的形式。很多修改和变化对于本领域的普通技术人员而言是显然的。选择和描述实施例是为了更好说明本公开的原理和实际应用,并且使本领域的普通技术人员能够理解本公开从而设计适于特定用途的带有各种修改的各种实施例。

Claims (65)

  1. 一种人脸解锁方法,其特征在于,包括:
    对图像进行人脸检测;
    对检测到人脸的图像进行人脸特征提取;
    基于存储的人脸特征对提取到的人脸特征进行认证;其中,所述存储的人脸特征至少包括对应同一标识ID的至少二个不同角度人脸图像的人脸特征;
    至少响应于所述提取到的人脸特征通过认证,进行解锁操作。
  2. 根据权利要求1所述的方法,其特征在于,所述对应同一ID的至少二个不同角度人脸图像包括对应同一ID的以下二个或二个以上角度的人脸图像:正面的人脸图像,仰头的人脸图像,低头的人脸图像,左转头的人脸图像,右转头的人脸图像。
  3. 根据权利要求1或2所述的方法,其特征在于,所述对图像进行人脸检测之前,还包括:对图像进行光线均衡调整处理;
    所述对图像进行人脸检测,包括:对光线均衡调整处理后的图像进行人脸检测。
  4. 根据权利要求3所述的方法,其特征在于,所述对图像进行光线均衡调整处理,包括:
    获取所述图像的灰度图;
    至少对所述图像的灰度图进行直方图均衡化处理。
  5. 根据权利要求3所述的方法,其特征在于,所述对图像进行光线均衡调整处理,包括:
    至少对所述图像进行图像光照变换,以将所述图像变换为满足预设光照条件的图像。
  6. 根据权利要求3-5任一所述的方法,其特征在于,所述对图像进行光线均衡调整处理之前,还包括:
    确定所述图像的质量不满足预定的人脸检测条件。
  7. 根据权利要求6所述的方法,其特征在于,所述预定的人脸检测条件包括以下任意一项或多项:所述图像的像素值分布不符合预设分布范围,所述图像的属性值不在预设数值范围内。
  8. 根据权利要求1-7任一所述的方法,其特征在于,所述基于存储的人脸特征对提取到的人脸特征进行认证,包括:
    获取所述提取到的人脸特征与至少一个存储的人脸特征之间的相似度;
    响应于所述提取到的人脸特征与任一存储的人脸特征之间的相似度大于设定阈值,确定所述提取到的人脸特征通过认证。
  9. 根据权利要求1-7任一所述的方法,其特征在于,所述基于存储的人脸特征对提取到的人脸特征进行认证,包括:
    分别获取所述提取到的人脸特征与多个存储的人脸特征之间的相似度;
    响应于所述提取到的人脸特征与多个存储的人脸特征之间的相似度中的最大值大于设定阈值,确定所述提取到的人脸特征通过认证。
  10. 根据权利要求1-9任一所述的方法,其特征在于,还包括:对所述图像进行活体检测;
    至少响应于所述提取到的人脸特征通过认证,进行解锁操作,包括:响应于所述提取到的人脸特征通过认证、且所述图像通过活体检测,进行解锁操作。
  11. 根据权利要求10所述的方法,其特征在于,对所述图像进行活体检测,包括:
    获取所述图像之后,对所述图像进行活体检测;或者,
    响应于从所述图像中检测到人脸,对所述图像进行活体检测;或者,
    响应于所述提取到的人脸特征通过认证,对所述图像进行活体检测。
  12. 根据权利要求10或11所述的方法,其特征在于,对所述图像进行活体检测,包括:
    响应于所述图像满足预设质量要求,对所述图像进行活体检测。
  13. 根据权利要求10-12任一所述的方法,其特征在于,对所述图像进行活体检测,包括:
    利用神经网络对所述图像进行图像特征提取;
    检测提取的图像特征是否包含至少一种伪造线索信息;
    基于所述至少一种伪造线索信息的检测结果,确定所述图像是否通过活体检测。
  14. 根据权利要求12所述的方法,其特征在于,利用所述神经网络对所述图像提取的图像特征包括以下任意一项或多项:局部二值模式LBP特征、稀疏编码的柱状图HSC特征、全景图LARGE特征、人脸图SMALL特征、人脸细节图TINY特征。
  15. 根据权利要求13或14所述的方法,其特征在于,所述至少一种伪造线索信息包括以下任意一项或多项:2D类伪造线索信息、2.5D类伪造线索信息和3D类伪造线索信息。
  16. 根据权利要求15所述的方法,其特征在于,所述2D类伪造线索信息包括纸质类材料打印人脸图像的信息;和/或,
    所述2.5D类伪造线索信息包括载体设备承载人脸图像的信息;和/或,
    所述3D类伪造线索信息包括伪造人脸的信息。
  17. 根据权利要求1-16任一所述的方法,其特征在于,所述基于存储的人脸特征对提取到的人脸特征进行认证之前,还包括:
    通过人脸解锁信息注册流程,获取存储的对应所述同一ID的至少二个不同角度人脸图像的人脸特征。
  18. 根据权利要求17所述的方法,其特征在于,所述人脸解锁信息注册流程包括:
    输出表示获取同一ID的至少二个不同角度的人脸图像的提示信息;
    对获取到的图像进行人脸检测;
    对检测到各角度人脸的图像进行人脸特征提取;
    存储提取到的各角度人脸图像的人脸特征、以及所述各角度人脸图像的人脸特征与所述同一ID之间的对应关系。
  19. 根据权利要求18所述的方法,其特征在于,所述对获取到的图像进行人脸检测之前,还包括:对获取到的图像进行光线均衡调整处理;
    所述对获取到的图像进行人脸检测,包括:对光线均衡调整处理后的图像进行人脸检测。
  20. 根据权利要求19所述的方法,其特征在于,所述对获取到的图像进行光线均衡调整处理之前,还包括:
    确定所述图像的质量不满足预定的人脸检测条件。
  21. 根据权利要求18-20任一所述的方法,其特征在于,存储提取到的任一角度人脸图像的人脸特征之前,还包括:
    检测所述图像包括的人脸的角度;
    确定检测出的角度与提示信息对应的角度相匹配。
  22. 根据权利要求21所述的方法,其特征在于,所述检测所述图像包括的人脸的角度,包括:
    对所述图像进行人脸关键点检测;
    根据检测到的人脸关键点计算所述图像包括的人脸的角度。
  23. 根据权利要求18-22任一所述的方法,其特征在于,还包括:
    对所述图像进行活体检测;
    响应于所述图像通过活体检测,执行所述存储提取到的各角度人脸图像的人脸特征、以及所述各角度人脸图像的人脸特征与所述同一ID之间的对应关系的操作。
  24. 一种人脸解锁信息注册方法,其特征在于,包括:
    输出表示获取同一ID的至少二个不同角度的人脸图像的提示信息;
    对获取到的图像进行人脸检测;
    对检测到各角度人脸的图像进行人脸特征提取;
    存储提取到的各角度人脸图像的人脸特征、以及所述各角度人脸图像的人脸特征与所述同一ID之间的对应关系。
  25. 根据权利要求24所述的方法,其特征在于,所述同一ID的至少二个不同角度的人脸图像包括对应同一ID的以下二个或二个以上人脸图像:正面的人脸图像,仰头的人脸图像,低头的人脸图像,左转头的人脸图像,右转头的人脸图像。
  26. 根据权利要求24或25所述的方法,其特征在于,所述对获取到的图像进行人脸检测之前,还包括:对获取到的图像进行光线均衡调整处理;
    所述对获取到的图像进行人脸检测,包括:对光线均衡调整处理后的图像进行人脸检测。
  27. 根据权利要求26所述的方法,其特征在于,所述对获取到的图像进行光线均衡调整处理,包括:获取所述图像的灰度图;
    至少对所述图像的灰度图进行直方图均衡化处理。
  28. 根据权利要求26所述的方法,其特征在于,所述对获取到的图像进行光线均衡调整处理,包括:
    至少对所述图像进行图像光照变换,以将所述图像变换为满足预设光照条件的图像。
  29. 根据权利要求26-28任一所述的方法,其特征在于,所述对获取到的图像进行光线均衡调整处理之前,还包括:
    确定所述图像的质量不满足预定的人脸检测条件。
  30. 根据权利要求29所述的方法,其特征在于,所述预定的人脸检测条件包括以下任意一项或多项:所述图像的像素值分布不符合预设分布范围,所述图像的属性值不在预设数值范围内。
  31. 根据权利要求24-30任一所述的方法,其特征在于,存储提取到的任一角度人脸图像的人脸特征之前,还包括:
    检测所述图像包括的人脸的角度;
    确定检测出的角度与提示信息对应的角度相匹配。
  32. 根据权利要求31所述的方法,其特征在于,所述检测所述图像包括的人脸的角度,包括:
    对所述图像进行人脸关键点检测;
    根据检测到的人脸关键点计算所述图像包括的人脸的角度。
  33. 根据权利要求31或32所述的方法,其特征在于,还包括:
    响应于检测出的角度与提示信息对应的角度不匹配,输出表示重新输入该角度的人脸图像的新提示信息。
  34. 根据权利要求24-33任一所述的方法,其特征在于,所述存储提取到的各角度人脸图像的人脸特征之后,还包括:
    识别是否存储完成所述同一ID的至少二个不同角度的人脸图像的人脸特征;
    响应于未存储完成所述同一ID的至少二个不同角度的人脸图像的,执行所述输出表示获取同一ID的至少二个不同角度的人脸图像的提示信息的操作。
  35. 根据权利要求34所述的方法,其特征在于,还包括:响应于存储完成所述同一ID的至少二个不同角度的人脸图像的人脸特征,输出用于提示用户输入所述同一ID的提示信息;
    所述存储提取到的各角度人脸图像的人脸特征、以及所述各角度人脸图像的人脸特征与所述同一ID之间的对应关系,包括:存储提取到的所述至少二个角度人脸图像的人脸特征与用户输入的所述同一ID,并建立所述同一ID和所述至少二个角度人脸图像的人脸特征之间的对应关系。
  36. 根据权利要求24-35任一所述的方法,其特征在于,还包括:
    对所述图像进行活体检测;
    响应于所述图像通过活体检测,执行所述存储提取到的各角度人脸图像的人脸特征、以及所述各角度人脸图像的人脸特征与所述同一ID之间的对应关系的操作。
  37. 根据权利要求36所述的方法,其特征在于,对所述图像进行活体检测,包括:
    对所述获取到的图像进行活体检测;或者,
    对检测到各角度人脸的图像进行活体检测;或者,
    响应于检测到的人脸的角度与所述选取的预设角度匹配,对所述图像进行活体检测;或者,
    对检测到各角度人脸的图像进行特征提取之后,对所述各角度的人脸的图像进行活体检测。
  38. 根据权利要求36或37所述的方法,其特征在于,对所述图像进行活体检测,包括:
    响应于所述图像满足预设质量要求,对所述图像进行活体检测。
  39. 根据权利要求36-38任一所述的方法,其特征在于,对所述图像进行活体检测,包括:
    利用神经网络对所述图像进行图像特征提取;
    检测提取的图像特征是否包含至少一种伪造线索信息;
    基于所述至少一种伪造线索信息的检测结果,确定所述图像是否通过活体检测。
  40. 根据权利要求39所述的方法,其特征在于,利用所述神经网络对所述图像提取的图像特征包括以下任意一项或多项:局部二值模式LBP特征、稀疏编码的柱状图HSC特征、全景图LARGE特征、人脸图SMALL特征、人脸细节图TINY特征。
  41. 根据权利要求39或40所述的方法,其特征在于,所述至少一种伪造线索信息包括以下任意一项或多项:2D类伪造线索信息、2.5D类伪造线索信息和3D类伪造线索信息。
  42. 根据权利要求41所述的方法,其特征在于,所述2D类伪造线索信息包括纸质类材料打印人脸图像的信息;和/或,
    所述2.5D类伪造线索信息包括载体设备承载人脸图像的信息;和/或,
    所述3D类伪造线索信息包括伪造人脸的信息。
  43. 一种人脸解锁装置,其特征在于,包括:
    人脸检测模块,用于对图像进行人脸检测;
    特征提取模块,用于对检测到人脸的图像进行人脸特征提取;
    认证模块,用于基于存储的人脸特征对提取到的人脸特征进行认证;其中,所述存储的人脸特征至少包括对应同一标识ID的至少二个不同角度人脸图像的人脸特征;
    控制模块,用于至少响应于所述提取到的人脸特征通过认证,进行解锁操作。
  44. 根据权利要求43所述的装置,其特征在于,所述对应同一ID的至少二个不同角度人脸图像包括对应同一ID的以下二个或二个以上角度的人脸图像:正面的人脸图像,仰头的人脸图像,低头的人脸图像,左转头的人脸图像,右转头的人脸图像。
  45. 根据权利要求43或44所述的装置,其特征在于,还包括:
    光线处理模块,用于在对图像进行光线均衡调整处理;
    所述人脸检测模块,用于对光线均衡调整处理后的图像进行人脸检测。
  46. 根据权利要求45所述的装置,其特征在于,所述光线处理模块,用于获取所述图像的灰度图,以及至少对所述图像的灰度图进行直方图均衡化处理。
  47. 根据权利要求45所述的装置,其特征在于,所述光线处理模块,用于至少对所述图像进行图像光照变换,以将所述图像变换为满足预设光照条件的图像。
  48. 根据权利要求45-47任一所述的装置,其特征在于,所述光线处理模块,用于确定所述图像的质量不满足预定的人脸检测条件,对图像进行光线均衡调整处理。
  49. 根据权利要求48所述的装置,其特征在于,所述预定的人脸检测条件包括以下任意一项或多项:所述图像的像素值分布不符合预设分布范围,所述图像的属性值不在预设数值范围内。
  50. 根据权利要求43-49任一所述的装置,其特征在于,所述认证模块,用于获取所述提取到的人脸特征与至少一个存储的人脸特征之间的相似度;以及响应于所述提取到的人脸特征与任一存储的人脸特征之间的相似度大于设定阈值,确定所述提取到的人脸特征通过认证。
  51. 根据权利要求43-49任一所述的装置,其特征在于,所述认证模块,用于分别获取所述提取到的人脸特征与多个存储的人脸特征之间的相似度;以及响应于所述提取到的人脸特征与多个存储的人脸特征之间的相似度中的最大值大于设定阈值,确定所述提取到的人脸特征通过认证。
  52. 根据权利要求43-51任一所述的装置,其特征在于,还包括:
    交互模块,用于输出表示获取同一ID的至少二个不同角度的人脸图像的提示信息;
    存储模块,用于存储提取到的各角度人脸图像的人脸特征、以及所述各角度人脸图像的人脸特征与所述同一ID之间的对应关系。
  53. 根据权利要求52所述的装置,其特征在于,所述存储模块,用于检测所述图像包括的人脸的角度;以及确定检测出的角度与提示信息对应的角度相匹配,存储提取到的各角度人脸图像的人脸特征、以及所述各角度人脸图像的人脸特征与所述同一ID之间的对应关系。
  54. 根据权利要求53所述的装置,其特征在于,所述存储模块检测所述图像包括的人脸的角度时,用于对所述图像进行人脸关键点检测;以及根据检测到的人脸关键点计算所述图像包括的人脸的角度。
  55. 根据权利要求53或54所述的装置,其特征在于,所述存储模块,还用于在检测出的角度与提示信息对应的角度不匹配时,请求所述交互模块输出表示重新输入该角度的人脸图像的新提示信息。
  56. 根据权利要求53-55任一所述的装置,其特征在于,所述存储模块,用于识别是否存储完成所述同一ID的至少二个不同角度的人脸图像的人脸特征;响应于未存储完成所述同一ID的至少二个不同角度的人脸图像的,请求所述交互模块输出表示获取同一ID的至少二个不同角度的人脸图像的提示信息的操作;响应于存储完成所述同一ID的至少二个不同角度的人脸图像的人脸特征,请求所述交互模块输出用于提示用户输入所述同一ID的提示信息;存储提取到的所述至少二个角度人脸图像的人脸特征与用户输入的所述同一ID,并建立所述同一ID和所述至少二个角度人脸图像的人脸特征之间的对应关系。
  57. 根据权利要求43-56任一所述的装置,其特征在于,还包括:
    活体检测模块,用于对所述图像进行活体检测;
    所述控制模块,用于至少响应于所述提取到的人脸特征通过认证、且所述图像通过活体检测,进行解锁操作。
  58. 根据权利要求57所述的装置,其特征在于,所述活体检测模块,用于响应于所述图像满足预设质量要求,对所述图像进行活体检测。
  59. 根据权利要求57或58所述的装置,其特征在于,所述活体检测模块包括神经网络,用于:
    对所述图像进行图像特征提取;
    检测提取的图像特征是否包含至少一种伪造线索信息;以及
    基于所述至少一种伪造线索信息的检测结果,确定所述图像是否通过活体检测。
  60. 根据权利要求59所述的装置,其特征在于,利用所述神经网络对所述图像提取的图像特征包括以下任意一项或多项:局部二值模式LBP特征、稀疏编码的柱状图HSC特征、全景图LARGE特征、人脸图SMALL特征、人脸细节图TINY特征。
  61. 根据权利要求59或60所述的装置,其特征在于,所述至少一种伪造线索信息包括以下任意一 项或多项:2D类伪造人脸信息、2.5D类伪造人脸信息和3D类伪造人脸信息。
  62. 根据权利要求61所述的装置,其特征在于,所述2D类伪造人脸信息包括纸质类材料打印人脸图像的伪造信息;和/或,所述2.5D类伪造人脸信息包括载体设备承载人脸图像的伪造信息;和/或,所述3D类伪造人脸信息包括伪造人脸的信息。
  63. 一种电子设备,其特征在于,包括:
    处理器和权利要求43-62任一所述的人脸解锁装置;
    在处理器运行所述认证装置时,权利要求43-62任一所述的人脸解锁装置中的单元被运行。
  64. 一种电子设备,其特征在于,包括:
    存储器,存储可执行指令;
    一个或多个处理器,与存储器通信以执行可执行指令从而完成权利要求1-42任一所述方法中各步骤的操作。
  65. 一种计算机可读介质,用于存储计算机可读取的指令,其特征在于,所述指令被执行时实现权利要求1-42任一所述方法中各步骤的操作。
PCT/CN2018/104408 2017-09-07 2018-09-06 人脸解锁及其信息注册方法和装置、设备、介质 WO2019047897A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
SG11202001349XA SG11202001349XA (en) 2017-09-07 2018-09-06 Facial unlocking method and information registration method and apparatus, device, and medium
JP2020512794A JP7080308B2 (ja) 2017-09-07 2018-09-06 顔ロック解除方法、その情報登録方法及び装置、機器並びに媒体
KR1020207006153A KR102324706B1 (ko) 2017-09-07 2018-09-06 얼굴인식 잠금해제 방법 및 장치, 기기, 매체
US16/790,703 US20200184059A1 (en) 2017-09-07 2020-02-13 Face unlocking method and apparatus, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710802146.1 2017-09-07
CN201710802146.1A CN108229120B (zh) 2017-09-07 2017-09-07 人脸解锁及其信息注册方法和装置、设备、程序、介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/790,703 Continuation US20200184059A1 (en) 2017-09-07 2020-02-13 Face unlocking method and apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2019047897A1 true WO2019047897A1 (zh) 2019-03-14

Family

ID=62655208

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/104408 WO2019047897A1 (zh) 2017-09-07 2018-09-06 人脸解锁及其信息注册方法和装置、设备、介质

Country Status (6)

Country Link
US (1) US20200184059A1 (zh)
JP (1) JP7080308B2 (zh)
KR (1) KR102324706B1 (zh)
CN (1) CN108229120B (zh)
SG (1) SG11202001349XA (zh)
WO (1) WO2019047897A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022507315A (ja) * 2019-04-08 2022-01-18 ▲騰▼▲訊▼科技(深▲セン▼)有限公司 アイデンティティ検証方法並びにその、装置、コンピュータプログラムおよびコンピュータ機器
CN115063873A (zh) * 2022-08-15 2022-09-16 珠海翔翼航空技术有限公司 基于静态和动态人脸检测的飞行数据获取方法、设备

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229326A (zh) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 人脸防伪检测方法和系统、电子设备、程序和介质
CN108229120B (zh) * 2017-09-07 2020-07-24 北京市商汤科技开发有限公司 人脸解锁及其信息注册方法和装置、设备、程序、介质
SG11202008549SA (en) * 2018-08-13 2020-10-29 Beijing Sensetime Technology Development Co Ltd Identity authentication method and apparatus, electronic device, and storage medium
CN109359502A (zh) * 2018-08-13 2019-02-19 北京市商汤科技开发有限公司 防伪检测方法和装置、电子设备、存储介质
CN109255299A (zh) * 2018-08-13 2019-01-22 北京市商汤科技开发有限公司 身份认证方法和装置、电子设备和存储介质
CN109344703B (zh) * 2018-08-24 2021-06-25 深圳市商汤科技有限公司 对象检测方法及装置、电子设备和存储介质
CN109194834B (zh) * 2018-09-27 2021-07-13 重庆辉烨物联科技有限公司 手机节电方法、装置、设备及存储介质
CN109558794B (zh) * 2018-10-17 2024-06-28 平安科技(深圳)有限公司 基于摩尔纹的图像识别方法、装置、设备和存储介质
CN109543611A (zh) * 2018-11-22 2019-03-29 珠海市蓝云科技有限公司 一种基于人工智能的图像匹配的方法
CN109740503A (zh) * 2018-12-28 2019-05-10 北京旷视科技有限公司 人脸认证方法、图像底库录入方法、装置及处理设备
CN109819114B (zh) * 2019-02-20 2021-11-30 北京市商汤科技开发有限公司 锁屏处理方法及装置、电子设备及存储介质
CN111783505A (zh) * 2019-05-10 2020-10-16 北京京东尚科信息技术有限公司 伪造人脸的识别方法、装置和计算机可读存储介质
CN110175572A (zh) * 2019-05-28 2019-08-27 深圳市商汤科技有限公司 人脸图像处理方法及装置、电子设备及存储介质
CN110309805A (zh) * 2019-07-08 2019-10-08 业成科技(成都)有限公司 脸部辨识装置
EP4030747A4 (en) * 2019-09-12 2022-11-02 NEC Corporation IMAGE ANALYSIS DEVICE, CONTROL METHOD AND PROGRAM
US20210334348A1 (en) * 2020-04-24 2021-10-28 Electronics And Telecommunications Research Institute Biometric authentication apparatus and operation method thereof
CN111723655B (zh) * 2020-05-12 2024-03-08 五八有限公司 人脸图像处理方法、装置、服务器、终端、设备及介质
CN112215084B (zh) * 2020-09-17 2024-09-03 中国银联股份有限公司 识别对象确定方法、装置、设备及存储介质
KR102393543B1 (ko) * 2020-11-02 2022-05-03 김효린 안면 인증 딥러닝 모델을 스마트폰 디바이스 내에서 학습하기 위한 안면 데이터 수집 및 처리 방법과 그 방법을 수행하는 디바이스
CN112667984A (zh) * 2020-12-31 2021-04-16 上海商汤临港智能科技有限公司 一种身份认证方法及装置、电子设备和存储介质
CN113762227B (zh) * 2021-11-09 2022-02-08 环球数科集团有限公司 一种多姿态人脸识别方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103377365A (zh) * 2012-04-25 2013-10-30 华晶科技股份有限公司 人脸识别的方法及使用该方法的人脸识别系统
CN104200146A (zh) * 2014-08-29 2014-12-10 华侨大学 一种结合视频人脸和数字唇动密码的身份验证方法
CN105654048A (zh) * 2015-12-30 2016-06-08 四川川大智胜软件股份有限公司 一种多视角人脸比对方法
CN105844227A (zh) * 2016-03-21 2016-08-10 湖南君士德赛科技发展有限公司 面向校车安全的司机身份认证方法
CN108229120A (zh) * 2017-09-07 2018-06-29 北京市商汤科技开发有限公司 人脸解锁及其信息注册方法和装置、设备、程序、介质

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100456619B1 (ko) * 2001-12-05 2004-11-10 한국전자통신연구원 에스.브이.엠(svm)을 이용한 얼굴 등록/인증 시스템 및방법
JP2005056004A (ja) * 2003-08-07 2005-03-03 Omron Corp 顔照合装置、顔照合方法、および顔照合プログラム
JPWO2009107237A1 (ja) * 2008-02-29 2011-06-30 グローリー株式会社 生体認証装置
JP5766564B2 (ja) * 2011-09-15 2015-08-19 株式会社東芝 顔認証装置及び顔認証方法
WO2014032162A1 (en) * 2012-08-28 2014-03-06 Solink Corporation Transaction verification system
CN103593598B (zh) * 2013-11-25 2016-09-21 上海骏聿数码科技有限公司 基于活体检测和人脸识别的用户在线认证方法及系统
CN104734852B (zh) * 2013-12-24 2018-05-08 中国移动通信集团湖南有限公司 一种身份认证方法及装置
CN103679158B (zh) * 2013-12-31 2017-06-16 北京天诚盛业科技有限公司 人脸认证方法和装置
KR102257897B1 (ko) * 2014-05-09 2021-05-28 삼성전자주식회사 라이브니스 검사 방법과 장치,및 영상 처리 방법과 장치
CN111898108B (zh) * 2014-09-03 2024-06-04 创新先进技术有限公司 身份认证方法、装置、终端及服务器
KR20160043425A (ko) * 2014-10-13 2016-04-21 엘지전자 주식회사 이동 단말기 및 그의 화면 잠금 해제 방법
EP3218844A4 (en) * 2014-11-13 2018-07-04 Intel Corporation Spoofing detection in image biometrics
US9922238B2 (en) * 2015-06-25 2018-03-20 West Virginia University Apparatuses, systems, and methods for confirming identity
JP6507046B2 (ja) * 2015-06-26 2019-04-24 株式会社東芝 立体物検知装置及び立体物認証装置
CN111144293A (zh) * 2015-09-25 2020-05-12 北京市商汤科技开发有限公司 带交互式活体检测的人脸身份认证系统及其方法
CN105930761A (zh) * 2015-11-30 2016-09-07 中国银联股份有限公司 一种基于眼球跟踪的活体检测的方法、装置及系统
US11098914B2 (en) * 2016-09-09 2021-08-24 Carrier Corporation System and method for operating a HVAC system by determining occupied state of a structure via IP address
KR102299847B1 (ko) * 2017-06-26 2021-09-08 삼성전자주식회사 얼굴 인증 방법 및 장치
CN110909695B (zh) * 2017-07-29 2023-08-18 Oppo广东移动通信有限公司 防伪处理方法及相关产品

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103377365A (zh) * 2012-04-25 2013-10-30 华晶科技股份有限公司 人脸识别的方法及使用该方法的人脸识别系统
CN104200146A (zh) * 2014-08-29 2014-12-10 华侨大学 一种结合视频人脸和数字唇动密码的身份验证方法
CN105654048A (zh) * 2015-12-30 2016-06-08 四川川大智胜软件股份有限公司 一种多视角人脸比对方法
CN105844227A (zh) * 2016-03-21 2016-08-10 湖南君士德赛科技发展有限公司 面向校车安全的司机身份认证方法
CN108229120A (zh) * 2017-09-07 2018-06-29 北京市商汤科技开发有限公司 人脸解锁及其信息注册方法和装置、设备、程序、介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022507315A (ja) * 2019-04-08 2022-01-18 ▲騰▼▲訊▼科技(深▲セン▼)有限公司 アイデンティティ検証方法並びにその、装置、コンピュータプログラムおよびコンピュータ機器
JP7142778B2 (ja) 2019-04-08 2022-09-27 ▲騰▼▲訊▼科技(深▲セン▼)有限公司 アイデンティティ検証方法並びにその、装置、コンピュータプログラムおよびコンピュータ機器
US11936647B2 (en) 2019-04-08 2024-03-19 Tencent Technology (Shenzhen) Company Limited Identity verification method and apparatus, storage medium, and computer device
CN115063873A (zh) * 2022-08-15 2022-09-16 珠海翔翼航空技术有限公司 基于静态和动态人脸检测的飞行数据获取方法、设备

Also Published As

Publication number Publication date
US20200184059A1 (en) 2020-06-11
CN108229120B (zh) 2020-07-24
KR20200032206A (ko) 2020-03-25
SG11202001349XA (en) 2020-03-30
CN108229120A (zh) 2018-06-29
JP2020532802A (ja) 2020-11-12
KR102324706B1 (ko) 2021-11-10
JP7080308B2 (ja) 2022-06-03

Similar Documents

Publication Publication Date Title
WO2019047897A1 (zh) 人脸解锁及其信息注册方法和装置、设备、介质
US11482040B2 (en) Face anti-counterfeiting detection methods and systems, electronic devices, programs and media
JP7165746B2 (ja) Id認証方法および装置、電子機器並びに記憶媒体
Boulkenafet et al. OULU-NPU: A mobile face presentation attack database with real-world variations
RU2733115C1 (ru) Способ и устройство для верифицирования сертификатов и идентичностей
US11244152B1 (en) Systems and methods for passive-subject liveness verification in digital media
US9652602B2 (en) Method, system and computer program for comparing images
US9652663B2 (en) Using facial data for device authentication or subject identification
US10924476B2 (en) Security gesture authentication
CN106663157A (zh) 用户认证方法、执行该方法的装置及存储该方法的记录介质
US11093770B2 (en) System and method for liveness detection
US11373449B1 (en) Systems and methods for passive-subject liveness verification in digital media
Galdi et al. Exploring new authentication protocols for sensitive data protection on smartphones
CN109063442B (zh) 业务实现、相机实现的方法和装置
US20240046709A1 (en) System and method for liveness verification

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18852925

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20207006153

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020512794

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09/09/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18852925

Country of ref document: EP

Kind code of ref document: A1