WO2019047897A1 - Face unlocking and information registration method and apparatus, device, and medium - Google Patents
Face unlocking and information registration method and apparatus, device, and medium
- Publication number
- WO2019047897A1 (PCT/CN2018/104408)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- face
- angle
- detection
- feature
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/141—Control of illumination
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/169—Holistic features and representations, i.e. based on the facial image taken as a whole
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Definitions
- the present disclosure relates to artificial intelligence technology, and in particular, to a face unlocking and information registration method and apparatus, device, program, and medium thereof.
- In the information age, terminal applications (apps) of all kinds emerge in an endless stream. When using these applications, each user needs to register user information so that user data can be retained and protected.
- Terminal devices provide more and more functions for users, such as communication, photo storage, and installation of various applications. Many users lock their terminal devices to prevent leakage of the user data stored in them. Protecting private data in terminal devices and applications has therefore gradually become a focus of attention.
- Embodiments of the present disclosure provide a technical solution for face unlocking.
- A face unlocking method includes: performing face detection on an image; performing face feature extraction on the image in which a face is detected; authenticating the extracted face features based on stored face features, where the stored face features include face features of at least two different-angle face images corresponding to the same identification (ID); and performing an unlocking operation at least in response to the extracted face features passing authentication.
- A face unlocking information registration method includes: outputting prompt information indicating acquisition of face images of at least two different angles corresponding to the same ID; performing face feature extraction on each image in which a face of the corresponding angle is detected; and storing the extracted face features of the respective angles in correspondence with the same ID.
- a face unlocking apparatus including:
- a face detection module for performing face detection on an image
- a feature extraction module configured to perform face feature extraction on the image of the detected face
- an authentication module configured to authenticate the extracted face features based on the stored face features, where the stored face features include face features of at least two different-angle face images corresponding to the same identification (ID);
- a control module configured to perform an unlocking operation at least in response to the extracted face features passing authentication.
- an electronic device including:
- a processor and the face unlocking apparatus according to any embodiment of the present disclosure, where, when the processor runs the face unlocking apparatus, the units in the face unlocking apparatus according to any embodiment of the present disclosure are operated.
- an electronic device including: a memory configured to store executable instructions; and
- one or more processors in communication with the memory to execute the executable instructions so as to perform the operations of the steps in the method of any embodiment of the present disclosure.
- a computer program comprising computer readable code, where, when the computer readable code runs on a device, a processor in the device executes instructions for implementing the steps of the method of any embodiment of the present disclosure.
- a computer readable medium for storing computer readable instructions that, when executed, implement the operations of the steps in the method of any embodiment of the present disclosure.
- In the face unlocking and information registration methods, apparatuses, devices, and media provided by the foregoing embodiments of the present disclosure, face features of at least two different-angle face images corresponding to the same ID may be pre-stored through a registration process. During face unlocking, face detection is performed on an image, face features are extracted from the image in which a face is detected, the extracted face features are authenticated based on the stored face features, and the unlocking operation is performed after the extracted face features pass authentication, thereby implementing face-based authentication and unlocking. The unlocking mode of the embodiments of the present disclosure is simple to operate, convenient, and secure. In addition, because face features of at least two different-angle face images corresponding to the same ID are pre-stored through the registration process, face unlocking based on the user's face can succeed as long as a face image of the user at any of the stored angles is acquired, which improves the success rate of face unlocking and reduces the possibility of authentication failure caused by a difference between the face angle at authentication and the face angle at registration.
- FIG. 1 is a flow chart of an embodiment of a method for unlocking a face according to the present disclosure.
- FIG. 2 is a flow chart of another embodiment of a method for unlocking a face of the present disclosure.
- FIG. 3 is a flow chart of still another embodiment of a method for unlocking a face according to the present disclosure.
- FIG. 4 is a flowchart of an embodiment of a method for registering face unlock information according to the present disclosure.
- FIG. 5 is a flowchart of another embodiment of a method for registering face unlock information according to the present disclosure.
- FIG. 6 is a flowchart of still another embodiment of a method for registering face unlock information according to the present disclosure.
- FIG. 7 is a flowchart of still another embodiment of a method for registering face unlock information according to the present disclosure.
- FIG. 8 is a schematic structural diagram of an embodiment of a face unlocking device according to the present disclosure.
- FIG. 9 is a schematic structural diagram of another embodiment of a face unlocking device according to the present disclosure.
- FIG. 10 is a schematic structural diagram of an embodiment of an electronic device according to the present disclosure.
- Embodiments of the present disclosure may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general purpose or special purpose computing system environments or configurations.
- Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, mainframe computer systems, distributed cloud computing environments including any of the above, and the like.
- Electronic devices such as terminal devices, computer systems, servers, etc., can be described in the general context of computer system executable instructions (such as program modules) being executed by a computer system.
- program modules may include routines, programs, target programs, components, logic, data structures, and the like that perform particular tasks or implement particular abstract data types.
- the computer system/server can be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communication network.
- program modules may be located on a local or remote computing system storage medium including storage devices.
- FIG. 1 is a flow chart of an embodiment of a method for unlocking a face according to the present disclosure. As shown in FIG. 1, the method for unlocking a face of this embodiment includes:
- the operation 102 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a face detection module executed by the processor.
- the operation 104 may be performed by a processor invoking a corresponding instruction stored in a memory or by a feature extraction module executed by the processor.
- the stored face features include face features of at least two different-angle face images corresponding to the same identification (ID).
- ID indicates user information corresponding to the stored face feature, and may be, for example, a user name, a number, a nickname, or the like.
- the at least two different-angle face images corresponding to the same ID may include, but are not limited to, face images of two or more of the following angles corresponding to the same ID: a frontal face image, a face image with the head raised, a face image with the head lowered, a face image turned to the left, a face image turned to the right, and so on.
- the operation 106 may be performed by a processor invoking a corresponding instruction stored in a memory or by an authentication module executed by the processor.
- the operation 108 may be performed by a processor invoking a corresponding instruction stored in a memory or by a control module executed by the processor.
- In this embodiment, face features of at least two different-angle face images corresponding to the same ID may be pre-stored through the registration process. During face unlocking, face detection is performed on the image, face feature extraction is performed on the image in which a face is detected, the extracted face features are authenticated based on the stored face features, and the unlocking operation is performed after the extracted face features pass authentication, thereby implementing face-based authentication and unlocking. The unlocking mode of the embodiments of the present disclosure is simple to operate, convenient, and secure. Because face features of at least two different-angle face images corresponding to the same ID are pre-stored, face unlocking based on the user's face can succeed when a face image of the user at any of the stored angles corresponding to that ID is acquired, which improves the success rate of face unlocking and reduces the possibility of authentication failure caused by a difference between the face angle at authentication and the face angle at registration.
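As a minimal sketch of this flow under stated assumptions: detect_face(), extract_features(), and authenticate() below are hypothetical stand-ins for the face detection, feature extraction, and authentication steps, and the stored features are assumed to be kept as a mapping from ID to per-angle feature vectors.

```python
# A minimal sketch of the unlock flow; the helper callables are hypothetical
# stand-ins, not components named in the disclosure.
def try_unlock(image, stored_features_by_id, detect_face, extract_features, authenticate):
    """stored_features_by_id: {user_id: {angle_name: feature_vector}}."""
    face = detect_face(image)                 # face detection (operation 102)
    if face is None:
        return False, None                    # no face found: do not unlock
    probe = extract_features(face)            # face feature extraction (operation 104)
    for user_id, angle_features in stored_features_by_id.items():
        # authentication against the stored multi-angle features (operation 106)
        if authenticate(probe, list(angle_features.values())):
            return True, user_id              # authenticated: perform the unlock (operation 108)
    return False, None
```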
- Authenticating the extracted face features based on the stored face features in operation 106 may be implemented as follows:
- The similarity between the extracted face features and the stored face features of each angle may be compared one by one. As soon as the similarity between the extracted face features and the stored face features of any angle is greater than a set threshold, it can be determined that the extracted face features pass authentication. That is, in this embodiment, authentication may be decided after comparing the extracted face features with the stored face features of only one angle or some of the angles, so that it is no longer necessary to compare the extracted face features with the remaining stored face features, which helps improve authentication efficiency.
- Alternatively, authenticating the extracted face features based on the stored face features in operation 106 may be implemented as follows:
- the extracted facial features are determined to be authenticated in response to the maximum of the plurality of similarities between the extracted facial features and the plurality of stored facial features being greater than a set threshold.
- the plurality of stored face features may be the stored face features of all angles, or the face features of only some of the angles.
- When the plurality of stored face features are only some of the stored face features (i.e., face features of only some of the angles), the extracted face features are determined to pass authentication if the maximum of the similarities between the extracted face features and those partial face features is greater than the set threshold. If that maximum is not greater than the set threshold, instead of immediately determining that the extracted face features fail authentication, face features of several more angles may be selected from the remaining stored face features and compared in a similar manner. This continues until either the maximum of the similarities obtained is greater than the set threshold, in which case the extracted face features are determined to pass authentication, or the comparison with the stored face features of all angles has been completed without any maximum similarity exceeding the set threshold, in which case the extracted face features are determined to fail authentication.
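The two comparison strategies just described can be sketched as follows, assuming face features are L2-normalized vectors so that a dot product serves as the similarity score; the threshold value and batching scheme are illustrative assumptions, not values from the patent.

```python
import numpy as np

SIM_THRESHOLD = 0.6  # placeholder threshold

def authenticate_early_exit(probe, stored_features, threshold=SIM_THRESHOLD):
    """Compare angle by angle and pass as soon as any similarity exceeds the threshold."""
    return any(float(np.dot(probe, f)) > threshold for f in stored_features)

def authenticate_batched_max(probe, stored_features, threshold=SIM_THRESHOLD, batch=2):
    """Compare against a few angles at a time; move on to the remaining angles
    only while the maximum similarity so far does not exceed the threshold."""
    for start in range(0, len(stored_features), batch):
        group = np.stack(stored_features[start:start + batch])   # (k, d)
        if float(np.max(group @ probe)) > threshold:
            return True
    return False  # compared against all angles without exceeding the threshold
```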
- FIG. 2 is a flow chart of another embodiment of a method for unlocking a face of the present disclosure. As shown in FIG. 2, the method for unlocking a face of this embodiment includes:
- the operation 202 can be performed by the processor invoking the camera or by a receiving module that is executed by the processor.
- Operation 204 (light balance adjustment processing of the acquired image) may be performed directly on the acquired image. Alternatively, before operation 204, it may first be determined whether the quality of the acquired image satisfies a predetermined face detection condition: operation 204 is performed only on an image whose quality does not satisfy the predetermined face detection condition, while an image whose quality satisfies the condition skips operation 204 and proceeds directly to face detection in operation 206. In this way, the light balance adjustment processing need not be performed on images whose quality already meets the predetermined face detection condition, which helps improve face unlocking efficiency.
- The predetermined face detection condition may relate to, for example but not limited to, at least one of the following: the pixel value distribution of the image does not conform to a preset distribution range, an attribute value of the image is not within a preset value range, and so on. Attribute values of the image include, for example, chromaticity, brightness, contrast, and saturation.
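As a rough illustration of such a check, the sketch below treats the condition as simple brightness and contrast ranges on the grayscale image; the concrete thresholds are placeholders rather than values given in the disclosure.

```python
import numpy as np

def needs_light_adjustment(gray, mean_range=(60, 190), min_std=25):
    """Return True if the grayscale image looks too dark/bright or too flat (low contrast)."""
    mean, std = float(gray.mean()), float(gray.std())
    outside_brightness_range = not (mean_range[0] <= mean <= mean_range[1])
    too_flat = std < min_std   # e.g. dim or strongly backlit scenes
    return outside_brightness_range or too_flat

# usage: run the light balance adjustment only when needs_light_adjustment(gray) is True
```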
- the operation 204 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a light processing module executed by the processor.
- If no face is detected from the image, execution may optionally return to operation 202, that is, image acquisition continues. If a face is detected from the image, operation 208 is performed.
- the operation 206 may be performed by a processor invoking a corresponding instruction stored in a memory or by a face detection module executed by the processor.
- the operation 208 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a feature extraction module executed by the processor.
- the stored face features include face features of at least two different-angle face images corresponding to the same ID.
- the operation 210 may be performed by a processor invoking a corresponding instruction stored in a memory or by an authentication module executed by the processor.
- the ID corresponding to the extracted facial features may be acquired and displayed, so that the user knows the user information currently authenticated.
- If the extracted face features fail authentication, the unlocking operation is not performed. A prompt message indicating that the face unlocking has failed may also be output.
- the operation 212 may be performed by a processor invoking a corresponding instruction stored in a memory or by a control module executed by the processor.
- In a dark scene, pixel values are concentrated in the lower numerical range, texture gradients are smaller, and the information in the whole image is very blurred, making it difficult to detect effective information, especially a face. In backlit and glare scenes the overall brightness may be adequate, but because the background light is very bright, the contours and details of the face become blurred, which makes face feature extraction difficult.
- The present inventors have found through research that, for complex illumination scenes such as backlight, glare, and dim light, the pixel value distributions of these scenes tend to exhibit a certain locality that does not conform to the preset distribution range, and/or the attribute values of the image are not within the preset value range. For example, in a dark scene, pixel values tend to be concentrated in the lower-value region; the contrast, chromaticity, and so on of the image will then be low, and it is difficult for a detector to handle the faces in such images, or false alarms may occur.
- Performing light balance adjustment processing on the acquired image may include: acquiring a grayscale image of the image, and performing at least histogram equalization processing on the grayscale image. Histogram equalization spreads the pixel value distribution of the grayscale image uniformly over the entire pixel value space while preserving the relative distribution of the original pixel values, so that subsequent operations can be performed on the histogram-equalized grayscale image.
- Alternatively, performing light balance adjustment processing on the acquired image may include: performing at least image illumination transformation on the image, to convert the image into an image that satisfies a preset illumination condition.
- In this embodiment, the quality of the acquired image is detected, and when the quality of the image does not satisfy the predetermined face detection condition, for example when the brightness of the image does not satisfy a preset brightness condition, the grayscale image of the image is subjected to histogram equalization processing. That is, histogram equalization is first performed on the grayscale image so that its pixel value distribution is spread uniformly over the entire pixel value space while the relative distribution of the original pixel values is preserved, and face detection is then performed on the equalized image.
- The histogram-equalized grayscale image has more prominent features and clearer texture, which makes the face easier to detect. Alternatively, the image is subjected to image illumination transformation so that it satisfies the preset illumination condition before face detection is performed, which likewise facilitates detecting the face.
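A minimal sketch of this light balance adjustment, using OpenCV's global histogram equalization on the grayscale image; the use of cv2 is an implementation choice, not something specified by the disclosure.

```python
import cv2

def light_balance(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)  # grayscale image of the input
    return cv2.equalizeHist(gray)                       # spread pixel values over [0, 255]

# face detection is then run on the equalized grayscale image
```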
- Therefore, the embodiments of the present disclosure can still detect the face in an image relatively accurately under extreme lighting conditions such as dim light and backlight, in particular in actual scenes where indoor or nighttime illumination is so dark that the image is almost black, or where the background illumination at night is strong while the face is dim and its texture is blurred, so that the present disclosure can better support the face unlocking application.
- The method may further include: performing living body detection on the acquired image. Accordingly, in this embodiment, the unlocking operation is performed in response to the extracted face features passing authentication and the image passing the living body detection.
- The living body detection may be performed on the image after the image is acquired; or, in response to a face being detected from the image, the living body detection may be performed on the image in which the face is detected; or, the living body detection may be performed, after the extracted face features pass authentication, on the image whose extracted face features passed authentication.
- performing live detection on an image may include:
- performing image feature extraction on the image, and detecting whether the extracted image features contain at least one type of forged clue information; and determining, based on the detection result of the at least one type of forged clue information, whether the image passes the living body detection. If the extracted image features do not contain any forged clue information, the image passes the living body detection; otherwise, if the extracted image features contain any one or more types of forged clue information, the image does not pass the living body detection.
- The image features in the embodiments of the present disclosure may include, but are not limited to, any one or more of the following: a local binary pattern (LBP) feature, a sparsely encoded histogram (HSC) feature, a panorama (LARGE) feature, a face map (SMALL) feature, and a face detail map (TINY) feature.
- LBP: local binary pattern
- HSC: sparsely encoded histogram
- LARGE: panorama feature
- SMALL: face map feature
- TINY: face detail map feature
- the feature items included in the image feature to be extracted may be updated according to the forged clue information that may occur.
- The LBP feature can highlight edge information in the image to be detected; the HSC feature can more clearly reflect reflection and blur information in the image to be detected; the LARGE feature is a panorama feature, based on which the most obvious forgery clues in the image to be detected can be extracted. The face map (SMALL) feature is a crop of a region whose size is a multiple of the face frame in the image to be detected (for example, 1.5 times the face frame), containing the face and part of the background; based on the SMALL feature, forgery clues such as reflections, moiré patterns of a re-captured screen, and the edges of a model or mask can be extracted. The face detail map (TINY) feature is a crop of the face frame itself, containing the face; based on the TINY feature, forgery clues such as image editing (e.g., Photoshop retouching), moiré patterns of a re-captured screen, and the texture of a model or mask can be extracted.
- The forgery clues of forged faces contained in the above features may be learned in advance by training a neural network. After an image is input into the neural network, if forgery clues are detected, the image is judged to be a forged face image; otherwise it is judged to be a real face image, thereby realizing living body detection of the face.
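The sketch below stands in for such a network with a toy binary classifier in PyTorch that outputs the probability that forgery clues are present; the architecture, input size, and decision threshold are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class SpoofDetector(nn.Module):
    """Toy binary classifier: outputs the probability that forgery clues are present."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 28 * 28, 1)   # assumes 112x112 input face crops

    def forward(self, x):                         # x: (N, 3, 112, 112)
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

# an image passes the living body detection when the predicted probability of
# forgery clues is below a chosen threshold (e.g. 0.5)
```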
- the at least one forged clue information in the embodiment of the present disclosure may include, but is not limited to, any one or more of the following: 2D-type forged clue information, 2.5D-type forged clue information, and 3D-type forged clue information.
- The forged clue information of the multiple dimensions may be updated according to forged clue information that may newly appear.
- the forged clue information in the embodiment of the present disclosure can be observed by the human eye.
- the dimensions of the forged clue information can be divided into 2D class, 2.5D class and 3D class forged clues.
- the 2D-type forged face refers to a face image printed by a paper-like material
- the 2D-type forged clue information generally includes forged information such as a paper face edge, a paper material, a paper surface reflection, and a paper edge.
- the 2.5D-type forged face refers to a face image carried by a carrier device such as a video remake device.
- the 2.5D-type forged clue information generally includes forged information such as screen moiré, screen reflection, and screen edge of a carrier device such as a video remake device.
- 3D-type forged faces refer to physically fabricated fake faces, such as masks, models, sculptures, and 3D-printed faces. 3D-type forged faces also have corresponding forged clue information, such as the seams of a mask, the abstract appearance of a model, or overly smooth skin.
- In this way, whether an image contains a forged face can be detected from multiple dimensions, and forged face images of different dimensions and various types can be detected, which improves the accuracy of forged face detection and effectively prevents criminals from using photos or videos of the user to be verified to mount forgery attacks during living body detection. In addition, because face anti-spoofing detection is performed through a neural network, the forged clue information of various forgery methods can be learned through training; when a new forgery method appears, the neural network can be fine-tuned based on the new forged clue information to update it quickly, without changing the hardware, so that new face anti-spoofing detection requirements can be responded to quickly and effectively.
- FIG. 3 is a flow chart of still another embodiment of a method for unlocking a face according to the present disclosure.
- This embodiment of the present disclosure is described by taking as an example performing the living body detection on the image after the image is acquired. Based on the description of the present disclosure, a person skilled in the art can derive the implementation in which the living body detection is performed, in response to a face being detected from the image, on the image in which the face is detected, and the implementation in which the living body detection is performed, after the extracted face features pass authentication, on the image whose extracted face features passed authentication; details are not described here again.
- the method for unlocking a face of this embodiment includes:
- Operations 304 and 308 are then performed separately.
- the operation 302 can be performed by the processor invoking the camera or by a receiving module that is executed by the processor.
- the quality requirement standard can be set in advance to select a high quality image for living body detection.
- The quality requirement standard may include any one or more of the following: whether the face orientation is frontal, image sharpness, exposure level, and so on; a higher quality image is selected for living body detection according to the corresponding standard.
- Operation 306 is performed for the image in response to the image meeting the preset quality requirements. Otherwise, in response to the image not meeting the preset quality requirements, operation 302 is re-executed to acquire the image.
- the operation 304 may be performed by a processor invoking a corresponding instruction stored in a memory or by a light processing module executed by the processor.
- the operation 306 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a biometric detection module executed by the processor.
- the operation 308 may include: performing light balance adjustment processing on the image when the quality of the acquired image does not satisfy the predetermined face detection condition, and then performing face detection on the image after the light balance adjustment processing. . If the quality of the acquired image satisfies a predetermined face detection condition, the face detection can be directly performed on the image.
- the operation 308 may be performed by a processor invoking a corresponding instruction stored in a memory or by a face detection module executed by the processor.
- In response to a face being detected from the image, operation 312 is performed. Otherwise, in response to no face being detected from the image, operation 302 may continue to be performed, i.e., the image is reacquired and the subsequent process is performed.
- the operation 310 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a face detection module executed by the processor.
- the stored face features include face features of at least two different-angle face images corresponding to the same ID.
- the operation 312 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a feature extraction module executed by the processor.
- Operation 316 is performed in response to the extracted face features passing authentication and the acquired image passing the living body detection. Otherwise, in response to the extracted face features failing authentication and/or the acquired image failing the living body detection, the subsequent flow of this embodiment is not performed, or, optionally, operation 318 is performed.
- the operation 314 may be performed by a processor invoking a corresponding instruction stored in a memory or by an authentication module executed by the processor.
- the ID corresponding to the authenticated facial features may also be acquired from the pre-stored correspondence and displayed.
- the operation 316 may be performed by a processor invoking a corresponding instruction stored in a memory or by a control module executed by the processor.
- the reason for the authentication failure may be, for example, that no face is detected, the face feature fails to pass the authentication, the living body is not detected (for example, detected as a photo, etc.), and the like.
- the operation 318 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by an authentication module or an interaction module executed by the processor.
- The face unlocking method of the embodiments of the present disclosure may be applied to unlocking the screen of an electronic device, unlocking an application (app), face unlocking within an application, and the like. For example, the face unlocking method of the embodiments of the present disclosure may be used to unlock the screen when a mobile terminal is activated, to unlock an application, or to perform face unlocking in the payment step of a payment application.
- The face unlocking method of the embodiments of the present disclosure may be triggered in response to receiving a face authentication request sent by the user, or in response to receiving a face authentication request sent by an application or the operating system.
- After successful unlocking, an electronic device (such as a mobile terminal) that requires face unlocking can be used normally; an app that requires face unlocking (for example, various shopping clients, a bank client, or the photo album in a terminal) can be used normally; and, where face unlocking is required in the payment step of various apps, the payment can be completed after the unlocking succeeds.
- the method further includes: acquiring, by the face unlocking information registration process, the stored face features of the at least two different angle face images corresponding to the same ID.
- the above-mentioned face unlocking information registration process can be implemented by the embodiment of the face unlocking information registration method in the following embodiments of the present disclosure.
- FIG. 4 is a flowchart of an embodiment of a method for registering face unlock information according to the present disclosure. As shown in FIG. 4, the method for registering a face unlocking information in this embodiment includes:
- Output prompt information indicating acquisition of face images of at least two different angles corresponding to the same ID.
- the operation 402 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by an interaction module executed by the processor.
- the operation 404 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a face detection module executed by the processor.
- the operation 406 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a feature extraction module executed by the processor.
- the stored face features include face features of at least two different-angle face images corresponding to the same ID.
- the ID therein indicates user information corresponding to the stored face feature, and may be, for example, a user name, a number, or the like.
- the at least two different-angle face images corresponding to the same ID may include, but are not limited to, face images of two or more of the following angles corresponding to the same ID: a frontal face image, a face image with the head raised, a face image with the head lowered, a face image turned to the left, a face image turned to the right, and so on.
- the operation 408 may be performed by a processor invoking a corresponding instruction stored in a memory or by a memory module being executed by the processor.
- In this embodiment, face features of at least two different-angle face images corresponding to the same ID may be pre-stored through the registration process, so that face unlocking can later be performed based on the face features of the at least two different-angle face images corresponding to the same ID. This helps improve the success rate of face unlocking and reduces the possibility of authentication failure caused by a difference between the face angle at authentication and the face angle at registration.
- FIG. 5 is a flowchart of another embodiment of a method for registering face unlock information according to the present disclosure. As shown in FIG. 5, the method for registering face unlocking information in this embodiment includes:
- Output prompt information indicating acquisition of face images of at least two different angles corresponding to the same ID.
- the operation 502 can be performed by a processor invoking a corresponding instruction stored in a memory, or can be performed by an interaction module executed by the processor.
- the operation 504 can be performed by the processor invoking the camera or by the face detection module being executed by the processor.
- Operation 506 (light balance adjustment processing of the acquired image) may be performed directly on the acquired image. Alternatively, before operation 506, it may first be determined whether the quality of the acquired image satisfies the predetermined face detection condition: operation 506 is performed only on an image whose quality does not satisfy the predetermined face detection condition, while an image whose quality satisfies the condition skips operation 506 and proceeds directly to face detection in operation 508. In this way, the light balance adjustment processing need not be performed on images whose quality already meets the predetermined face detection condition, which helps improve processing efficiency.
- The predetermined face detection condition may relate to, for example but not limited to, at least one of the following: the pixel value distribution of the image does not conform to a preset distribution range, an attribute value of the image is not within a preset value range, and so on. Attribute values of the image include, for example, chromaticity, brightness, contrast, and saturation.
- performing a light balance adjustment process on the acquired image may include: acquiring a grayscale image of the image; and performing at least histogram equalization processing on the grayscale image of the image.
- the pixel value distribution of the grayscale image of the image can be uniformly spread to the entire pixel value space while preserving the relative distribution of the original image pixel values to perform subsequent operations on the grayscale image of the image subjected to histogram equalization processing.
- Alternatively, performing light balance adjustment processing on the acquired image in operation 506 may include: performing at least image illumination transformation on the image to transform the image into an image satisfying a preset illumination condition.
- the operation 506 can be performed by a processor invoking a corresponding instruction stored in a memory, or by a light processing module executed by the processor.
- the operation 508 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a face detection module executed by the processor.
- In response to a face being detected from the image, operation 512 is performed. Otherwise, in response to no face being detected from the image, execution returns to operation 504 to reacquire the image.
- the operation 510 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a face detection module executed by the processor.
- the operation 512 may be performed by a processor invoking a corresponding instruction stored in a memory or by a feature extraction module executed by the processor.
- Operation 514: Store the extracted face features of the face images of the respective angles and their correspondence with the same ID.
- the operation 514 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a face detection module executed by the processor.
- In this embodiment, the acquired image is first subjected to the light balance adjustment processing and face detection is then performed, which makes the face easier to detect. Faces in the image can still be detected accurately under extreme lighting conditions such as dim light and backlight, in particular in actual scenes where indoor or nighttime illumination is so dark that the image is almost black, or where the background illumination at night is strong while the face is dim and its texture is blurred, so that the present disclosure can better support the face unlocking application.
- FIG. 6 is a flowchart of still another embodiment of a method for registering a face unlocking information according to the present disclosure. As shown in FIG. 6, in the face unlocking information registration method of the present embodiment, before the operation 514, for example, before, after, or at the same time as the operation 512, the following operations may be performed:
- the operation 602 may be performed by a processor invoking a corresponding instruction stored in a memory or by a memory module being executed by the processor.
- If the detected angle does not match the angle corresponding to the prompt information, new prompt information indicating that the face image of that angle should be re-entered may be output, so that the user adjusts the face angle and the flow of the face unlocking information registration method of the embodiments of the present disclosure is performed again.
- the operation 604 may be performed by a processor invoking a corresponding instruction stored in a memory or by a memory module being executed by the processor.
- Detecting the angle of the face included in the image in operation 602 may include: performing face key point detection on the image, and calculating the angle of the face included in the image according to the detected face key points.
- The user's face can later be used for unlocking based on the face features saved in the face unlocking information registration process. To avoid face unlocking failure caused by the face angle during unlocking differing from the face angle at registration, and to improve the success rate of face unlocking, the embodiments of the present disclosure may store, for the same user, face features of face images at multiple angles (for example, five angles). The faces of different angles may be, for example, a frontal face, a face with the head raised, a face with the head lowered, a face turned to the left, and a face turned to the right.
- The left-right (yaw) angle and the up-down (pitch) angle of the face may be used to represent the face angle; for a frontal face, both the left-right angle and the up-down angle are defined as zero.
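The following sketch estimates the left-right and up-down angles from detected face key points with cv2.solvePnP and cv2.RQDecomp3x3; the generic 3D model points, the landmark order, and the approximate camera matrix are assumptions, since the disclosure only states that the angle is calculated from detected key points.

```python
import cv2
import numpy as np

# Generic 3D model points: nose tip, chin, left/right eye corners, left/right mouth corners.
MODEL_POINTS = np.array([
    [0.0, 0.0, 0.0], [0.0, -63.6, -12.5], [-43.3, 32.7, -26.0],
    [43.3, 32.7, -26.0], [-28.9, -28.9, -24.1], [28.9, -28.9, -24.1],
], dtype=np.float64)

def face_angles(landmarks_2d, image_size):
    """landmarks_2d: (6, 2) key points in the same order as MODEL_POINTS."""
    h, w = image_size
    camera = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, _tvec = cv2.solvePnP(MODEL_POINTS,
                                   np.asarray(landmarks_2d, dtype=np.float64),
                                   camera, np.zeros(4))
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)
    angles, *_ = cv2.RQDecomp3x3(rotation)   # Euler angles in degrees
    pitch, yaw, _roll = angles               # rotations about the x (up-down) and y (left-right) axes
    return yaw, pitch                        # both roughly zero for a frontal face
```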
- Outputting the prompt information indicating acquisition of face images of at least two different angles corresponding to the same ID may include: selecting a preset angle according to a preset multi-angle parameter and prompting the user to enter a face image of that preset angle. The multi-angle parameter includes information about the multiple angles of the face images that need to be acquired.
- The method may further include: identifying whether all preset angles corresponding to the multi-angle parameter have been selected; in response to not all preset angles having been selected, selecting the next preset angle and performing, for that preset angle, the embodiment shown in FIG. 5 or FIG. 6 described above. If all preset angles corresponding to the multi-angle parameter have been selected, the registration of the face unlocking information is completed.
- prompt information for prompting the user to input the same ID may also be output.
- Storing the extracted face features of the face images of the respective angles and their correspondence with the same ID may include: storing the extracted face features of the at least two angles of face images and the ID input by the user, and establishing a correspondence between the ID and the face features of the at least two angles of face images.
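A minimal in-memory sketch of this correspondence between an ID and multi-angle face features is given below; register(), detect_angle(), and extract_features() are hypothetical helpers, and a real implementation would keep the features in secure persistent storage.

```python
PRESET_ANGLES = ["front", "head_up", "head_down", "turn_left", "turn_right"]

def register(user_id, captures, extract_features, detect_angle):
    """captures: {angle_name: face_image} gathered by the prompt/acquire loop."""
    stored, retry = {}, []
    for angle in PRESET_ANGLES:
        image = captures.get(angle)
        if image is None or detect_angle(image) != angle:
            retry.append(angle)                # missing or mismatched pose: prompt the user again
            continue
        stored[angle] = extract_features(image)
    registry = {user_id: stored}               # correspondence between the ID and its multi-angle features
    return registry, retry                     # registration completes once retry is empty
```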
- the method further includes: performing a living body detection on the image.
- Correspondingly, in response to the image passing the living body detection, the operation of storing the extracted face features of the face images of the respective angles and their correspondence with the same ID is performed.
- The living body detection may be performed on the acquired image after the image is acquired; or it may be performed on the image in which the face of each angle is detected.
- FIG. 7 is a flowchart of still another embodiment of a method for registering face unlock information according to the present disclosure.
- This embodiment of the present disclosure is described by taking as an example performing the living body detection on the image after the image is acquired. Based on the description of the present disclosure, a person skilled in the art can derive the implementation in which the living body detection is performed on the image in which the face of each angle is detected, and the implementation in which the living body detection is performed on the image in response to the detected face angle matching the preset angle; details are not described here again.
- the method for registering face unlocking information in this embodiment includes:
- Output prompt information indicating acquisition of face images of at least two different angles corresponding to the same ID.
- the operation 702 can be performed by a processor invoking a corresponding instruction stored in a memory or by an interaction module executed by the processor.
- Operation 706 is performed in response to the image passing the living body detection. Otherwise, if the image does not pass the living body detection, the subsequent flow of this embodiment is not performed.
- Operation 704 may be performed by a processor invoking a camera and a corresponding instruction stored in a memory, or by a living body detection module executed by the processor.
- the operation 706 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a face detection module executed by the processor.
- If a face is detected from the image, operation 710 is performed. If no face is detected from the image, execution may return to operation 702, or the image may continue to be acquired and operation 704 performed.
- the operation 708 may be performed by a processor invoking a corresponding instruction stored in a memory or by a face detection module executed by the processor.
- the operation 710 may be performed by a processor invoking a corresponding instruction stored in a memory or by a memory module being executed by the processor.
- In response to the detected angle matching the angle corresponding to the prompt information, operation 714 is performed. Otherwise, if the detected angle does not match the angle corresponding to the prompt information, operation 702 is re-executed.
- the operation 712 may be performed by a processor invoking a corresponding instruction stored in a memory or by a memory module being executed by the processor.
- Operation 714 may be performed by a processor invoking a corresponding instruction stored in a memory, or by a feature extraction module executed by the processor.
- In operation 704 of the embodiment shown in FIG. 7, it may further be recognized whether the acquired image meets the preset quality requirement; the living body detection is performed on the image in response to the image meeting the preset quality requirement; otherwise, in response to the image not meeting the preset quality requirement, operation 702 or 704 continues to be performed.
- the operation 716 may be performed by a processor invoking a corresponding instruction stored in a memory or by a memory module being executed by the processor.
- The above embodiments of the present disclosure can detect from multiple dimensions whether an image contains a forged face, and can detect forged face images of different dimensions and various types, which improves the accuracy of forged face detection, effectively prevents criminals from using photos or videos of the user to be verified to mount forgery attacks during living body detection, and ensures that the images used when registering face unlock information are images of the real user. In addition, because face anti-spoofing detection is performed through a neural network, the forged clue information of various forgery methods can be learned through training; when a new forgery method appears, the neural network can be fine-tuned based on the new forged clue information to update it quickly, without changing the hardware, so that new face anti-spoofing detection requirements can be met quickly and effectively.
- The face unlocking information registration method of the above embodiments of the present disclosure may be started in response to receiving a face entry request sent by the user, or in response to receiving a face entry request sent by an application or the operating system.
- any of the face unlocking method and the face unlocking information registration method provided by the embodiments of the present disclosure may be performed by any suitable device having data processing capabilities, including but not limited to: a terminal device, a server, and the like.
- Any face unlocking method and face unlocking information registration method provided by the embodiments of the present disclosure may be executed by a processor; for example, the processor executes any face unlocking method or face unlocking information registration method mentioned in the embodiments of the present disclosure by invoking corresponding instructions stored in a memory. This will not be repeated below.
- The foregoing program may be stored in a computer readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
- FIG. 8 is a schematic structural diagram of an embodiment of a face unlocking device according to the present disclosure.
- the face unlocking device of this embodiment can be used to implement the various method embodiments of the present disclosure.
- the face unlocking device of this embodiment includes: a face detection module, a feature extraction module, an authentication module, and a control module, where:
- a face detection module for performing face detection on an image.
- the feature extraction module is configured to perform face feature extraction on the image of the detected face.
- An authentication module is configured to authenticate the extracted facial features based on the stored facial features.
- the stored face features include face features of at least two different-angle face images corresponding to the same ID.
- the at least two different-angle face images corresponding to the same ID may include, but are not limited to, face images of two or more of the following angles corresponding to the same ID: a frontal face image, a face image with the head raised, a face image with the head lowered, a face image turned to the left, a face image turned to the right, and so on.
- a control module configured to perform an unlocking operation at least in response to the extracted face features passing authentication.
- In one optional example, the authentication module is configured to obtain the similarity between the extracted face features and at least one stored face feature, and to determine, in response to any obtained similarity being greater than a set threshold, that the extracted face features pass authentication.
- In another optional example, the authentication module is configured to respectively obtain the similarities between the extracted face features and a plurality of stored face features, and to determine, in response to the maximum of the obtained similarities being greater than the set threshold, that the extracted face features pass authentication.
- Based on the face unlocking device of this embodiment, face detection is performed on an image, face feature extraction is performed on the image in which a face is detected, the extracted face features are authenticated based on the stored face features, and an unlocking operation is performed after the extracted face features pass authentication, thereby implementing face-based authentication and unlocking. The unlocking mode of the embodiments of the present disclosure is simple to operate, convenient, and secure. In addition, because face features of at least two different-angle face images corresponding to the same ID are pre-stored through the registration process, face unlocking based on the user's face can succeed when a face image of the user at any of the stored angles corresponding to that ID is acquired, which improves the success rate of face unlocking and reduces the possibility of authentication failure caused by a difference between the face angle at authentication and the face angle at registration.
- FIG. 9 is a schematic structural diagram of another embodiment of a face unlocking device according to the present disclosure. As shown in FIG. 9, compared with the embodiment shown in FIG. 8, the face unlocking device of this embodiment further includes an acquisition module and a light processing module. Specifically:
- The acquisition module may be, for example, a camera or another image acquisition device.
- The light processing module is configured to perform light balance adjustment processing on the image.
- the face detection module is configured to perform face detection on the image after the light balance adjustment processing.
- the light processing module is configured to acquire a grayscale image of the image and to perform histogram equalization processing on at least the grayscale image of the image.
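- A minimal sketch of this light balance adjustment, assuming OpenCV is used for the grayscale conversion and the histogram equalization (the patent does not prescribe a particular library):

```python
import cv2

def light_balance(image_bgr):
    """Minimal sketch of the light balance adjustment described above:
    convert the image to grayscale and apply histogram equalization.
    This is an illustrative approximation, not the patent's exact method."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    equalized = cv2.equalizeHist(gray)   # spreads the gray-level histogram
    return equalized

# Usage (the file path is hypothetical):
# img = cv2.imread("face.jpg")
# balanced = light_balance(img)
```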
- Alternatively, the light processing module is configured to perform at least an illumination transformation on the image, so as to transform the image into an image that satisfies a predetermined illumination condition.
- the light processing module is configured to determine that the quality of the image does not satisfy the predetermined face detection condition, and perform a light balance adjustment process on the image.
- The predetermined face detection condition may include, but is not limited to, at least one of the following: the pixel value distribution of the image does not conform to a preset distribution range; an attribute value of the image is not within a preset value range.
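- The check below illustrates one way such a predetermined condition could be evaluated before deciding to apply the light balance adjustment; the brightness range and contrast threshold are hypothetical values, not taken from the patent.

```python
import numpy as np

def needs_light_balance(gray: np.ndarray,
                        brightness_range=(60, 190),
                        min_contrast=30.0) -> bool:
    """Illustrative check of a 'predetermined face detection condition':
    here the image attribute values are mean brightness and contrast
    (standard deviation); the ranges are assumed, not from the patent."""
    mean_brightness = float(gray.mean())
    contrast = float(gray.std())
    out_of_range = not (brightness_range[0] <= mean_brightness <= brightness_range[1])
    low_contrast = contrast < min_contrast
    return out_of_range or low_contrast
```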
- The device may further include an interaction module and a storage module.
- The interaction module is configured to output prompt information indicating acquisition of face images of at least two different angles for the same ID.
- The storage module is configured to store the facial features of each angle's face image extracted by the feature extraction module, together with the correspondence between those facial features and the same ID.
- The storage module is configured to detect the angle of the face included in the image and, upon determining that the detected angle matches the angle corresponding to the prompt information, store the facial features of each angle's face image extracted by the feature extraction module and their correspondence with the same ID.
- When detecting the angle of the face included in the image, the storage module is configured to perform face key point detection on the image and calculate the angle of the face included in the image according to the detected face key points.
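- The following rough sketch shows one possible way to derive a yaw/pitch angle from detected key points and to match it against the prompted angle; the landmark names, the heuristic formulas, and the target angles are illustrative assumptions only.

```python
import numpy as np

def estimate_yaw_pitch(landmarks: dict) -> tuple:
    """Rough sketch of computing a face angle from detected key points.
    `landmarks` maps names to (x, y) pixel coordinates; the heuristics
    below are illustrative assumptions, not the patent's formula."""
    left_eye = np.array(landmarks["left_eye"], dtype=float)
    right_eye = np.array(landmarks["right_eye"], dtype=float)
    nose = np.array(landmarks["nose_tip"], dtype=float)
    mouth = np.array(landmarks["mouth_center"], dtype=float)

    eye_center = (left_eye + right_eye) / 2.0
    eye_dist = np.linalg.norm(right_eye - left_eye) + 1e-6

    # Yaw: horizontal offset of the nose from the eye midpoint,
    # normalized by the inter-eye distance (negative = turned left).
    yaw = np.degrees(np.arctan2(nose[0] - eye_center[0], eye_dist))

    # Pitch: vertical position of the nose relative to the eye line and
    # mouth (negative = head raised, positive = head lowered).
    face_height = np.linalg.norm(mouth - eye_center) + 1e-6
    pitch = np.degrees(np.arctan2(nose[1] - eye_center[1] - 0.5 * face_height,
                                  face_height))
    return float(yaw), float(pitch)

def matches_prompt(yaw, pitch, prompted_angle, tol=15.0):
    """Check whether the detected angle matches the prompted angle."""
    targets = {"front": (0, 0), "left": (-30, 0), "right": (30, 0),
               "up": (0, -20), "down": (0, 20)}   # assumed target angles
    ty, tp = targets[prompted_angle]
    return abs(yaw - ty) <= tol and abs(pitch - tp) <= tol
```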
- The storage module is further configured to, when the detected angle does not match the angle corresponding to the prompt information, request the interaction module to output new prompt information indicating that the face image of that angle should be re-entered.
- The storage module is configured to identify whether the facial features of face images of at least two different angles of the same ID have been completely stored; in response to the face images of at least two different angles of the same ID not having been completely stored, request the interaction module to output the prompt information indicating acquisition of face images of at least two different angles for the same ID; and, in response to the facial features of face images of at least two different angles of the same ID having been completely stored, request the interaction module to output prompt information prompting the user to input the same ID.
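- A hypothetical sketch of this registration flow is given below; the angle list and the callables (capture_image, detect_face_angle, extract_feature) are placeholders for whatever acquisition, detection, and feature-extraction modules are actually used.

```python
ANGLES = ["front", "up", "down", "left", "right"]  # at least two different angles

def register_face(user_id, capture_image, detect_face_angle, extract_feature, store):
    """Hypothetical sketch of the registration flow: prompt for each angle,
    verify the detected angle matches the prompt, then extract and store
    the feature together with its correspondence to the ID."""
    for angle in ANGLES:
        while True:
            print(f"Please face the camera: {angle}")          # prompt information
            image = capture_image()
            detected = detect_face_angle(image)                # e.g. key-point based
            if detected != angle:
                print(f"Detected '{detected}'; please re-enter the '{angle}' image")
                continue                                       # new prompt information
            store.setdefault(user_id, []).append((angle, extract_feature(image)))
            break
    print(f"Stored {len(store[user_id])} angle features for ID '{user_id}'")
```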
- The device may further include a living body detection module, configured to perform living body detection on the image.
- In this case, the control module is configured to perform the unlocking operation at least in response to the extracted facial features being authenticated and the image passing the living body detection.
- the living body detection module is configured to perform a living body detection on the image in response to the image meeting a preset quality requirement.
- The living body detection module may be implemented via a neural network.
- the neural network is configured to: perform image feature extraction on the image; detect whether the extracted image feature includes at least one forged clue information; and determine whether the image passes the living body detection based on the detection result of the at least one forged clue information.
- The image features extracted from the image by the neural network may include, but are not limited to, any one or more of the following: a local binary pattern (LBP) feature, a histogram of sparse codes (HSC) feature, a panorama (LARGE) feature, a face map (SMALL) feature, and a face detail map (TINY) feature.
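- As one concrete example of these hand-crafted features, the sketch below computes a basic LBP histogram; the HSC, LARGE, SMALL, and TINY features are not reproduced here, and this is not claimed to match the patent's exact feature definitions.

```python
import numpy as np

def lbp_histogram(gray: np.ndarray) -> np.ndarray:
    """Minimal local binary pattern (LBP) histogram, as one example of the
    hand-crafted features listed above."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    codes = np.zeros_like(center)
    # 8 neighbours, clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((neighbour >= center).astype(np.int32) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)   # normalized 256-bin LBP histogram
```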
- the at least one forged clue information may include, but is not limited to, any one or more of the following: 2D-type forged face information, 2.5D-type forged face information, and 3D-type forged face information.
- The 2D-type forged face information includes forged information of a face image printed on a paper-type material; and/or the 2.5D-type forged face information includes forged information of a face image carried by a carrier device; and/or the 3D-type forged face information includes information of a forged face.
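- The sketch below shows a hypothetical multi-label classifier head over extracted image features that scores the three forged-clue types and gates the liveness decision; the architecture, feature dimension, and threshold are assumptions, not the patent's network.

```python
import torch
import torch.nn as nn

class ForgedClueDetector(nn.Module):
    """Hypothetical liveness head: given a feature vector extracted from the
    image, predict scores for three forged-clue types (2D / 2.5D / 3D).
    The architecture and sizes are illustrative assumptions only."""
    def __init__(self, feature_dim=256, num_clue_types=3):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(feature_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_clue_types),
        )

    def forward(self, features):
        # Independent probability per clue type (sigmoid, multi-label).
        return torch.sigmoid(self.classifier(features))

def passes_liveness(clue_scores: torch.Tensor, threshold: float = 0.5) -> bool:
    """The image passes living body detection only if no forged clue
    is detected above the (assumed) threshold."""
    return bool((clue_scores < threshold).all())

# Usage sketch:
detector = ForgedClueDetector()
features = torch.randn(1, 256)            # stand-in for extracted image features
print(passes_liveness(detector(features)[0]))
```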
- An embodiment of the present disclosure further provides an electronic device, including: the face unlocking device of any of the above embodiments of the present disclosure.
- the embodiment of the present disclosure further provides another electronic device, including:
- a processor and the face unlocking device of any of the above embodiments of the present disclosure, wherein when the processor runs the face unlocking device, the modules in the face unlocking device of any of the above embodiments are executed.
- the embodiment of the present disclosure further provides another electronic device, including:
- a memory storing executable instructions, and one or more processors in communication with the memory to execute the executable instructions, so as to complete the operations of the steps of the face unlocking method or the face unlocking information registration method of any of the above embodiments of the present disclosure.
- Embodiments of the present disclosure also provide a computer program comprising computer readable code; when the computer readable code runs on a device, a processor in the device executes instructions for implementing the steps of the face unlocking method or the face unlocking information registration method of any of the above embodiments of the present disclosure.
- An embodiment of the present disclosure further provides a computer readable medium for storing computer readable instructions; when the instructions are executed, the operations of the steps of the face unlocking method or the face unlocking information registration method of any of the above embodiments of the present disclosure are implemented.
- FIG. 10 is a schematic structural diagram of an embodiment of an electronic device according to the present disclosure.
- The electronic device includes one or more processors and a communication unit; the processors include, for example, one or more central processing units (CPUs) 801 and/or one or more graphics processing units (GPUs) 813, and may perform various appropriate actions and processing according to executable instructions stored in a read only memory (ROM) 802 or executable instructions loaded from a storage portion 808 into a random access memory (RAM) 803.
- The communication portion 812 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card. The processor may communicate with the read only memory 802 and/or the random access memory 803 to execute executable instructions, is connected to the communication portion 812 through the bus 804, and communicates with other target devices via the communication portion 812, thereby completing operations corresponding to any method provided by the embodiments of the present application, for example: performing face detection on an image; performing facial feature extraction on the image in which a face is detected; authenticating the extracted facial features based on stored facial features, wherein the stored facial features include facial features of at least two different-angle face images corresponding to the same identifier (ID); and performing an unlocking operation at least in response to the extracted facial features being authenticated.
- Alternatively, the operations include: outputting prompt information indicating acquisition of face images of at least two different angles for the same ID; performing face detection on the acquired images; performing facial feature extraction on the images in which a face of each angle is detected; and storing the extracted facial features of each angle's face image, together with the correspondence between the facial features of each angle's face image and the same ID.
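- Putting the pieces together, the following hypothetical sketch mirrors the unlock flow listed above; every callable is a placeholder for the corresponding module of the device.

```python
def try_unlock(image, detect_face, light_balance, extract_feature,
               is_live, stored_features, is_authenticated):
    """Hypothetical end-to-end sketch of the unlock flow listed above; all
    callables stand in for the corresponding modules of the device."""
    balanced = light_balance(image)                  # optional pre-processing
    face = detect_face(balanced)
    if face is None:
        return False                                 # no face, stay locked
    feature = extract_feature(face)
    if not is_authenticated(feature, stored_features):
        return False                                 # similarity below threshold
    if not is_live(face):
        return False                                 # forged clue detected
    return True                                      # perform the unlock operation
```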
- In the RAM 803, various programs and data required for the operation of the device may also be stored.
- the CPU 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804.
- The ROM 802 is an optional module.
- The RAM 803 stores executable instructions, or executable instructions are written into the ROM 802 at runtime, and the executable instructions cause the central processing unit 801 to perform the operations corresponding to the above methods.
- An input/output (I/O) interface 805 is also coupled to bus 804.
- The communication portion 812 may be integrated, or may be provided with a plurality of sub-modules (for example, a plurality of IB network cards) linked on the bus.
- The following components are connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, and the like; an output portion 807 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), and the like; a storage portion 808 including a hard disk and the like; and a communication portion 809 including a network interface card such as a LAN card or a modem. The communication portion 809 performs communication processing via a network such as the Internet.
- Driver 810 is also coupled to I/O interface 805 as needed.
- A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as needed, so that a computer program read therefrom is installed into the storage portion 808 as needed.
- FIG. 10 is only an optional implementation manner.
- the number and type of components in the foregoing FIG. 10 may be selected, deleted, added, or replaced according to actual needs;
- Separated or integrated arrangements of these components may also be adopted.
- For example, the GPU 813 and the CPU 801 may be separately provided, or the GPU 813 may be integrated on the CPU 801; the communication portion may be separately provided, or may be integrated on the CPU 801 or the GPU 813; and so on.
- An embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine readable medium; the computer program comprises program code for executing the method illustrated in the flowchart, and the program code may include instructions corresponding to the method steps provided by the embodiments of the present application, for example: performing face detection on an image; performing facial feature extraction on the image in which a face is detected; authenticating the extracted facial features based on stored facial features, wherein the stored facial features include facial features of at least two different-angle face images corresponding to the same identification (ID); and performing an unlocking operation at least in response to the extracted facial features being authenticated.
- The methods, apparatuses, and devices of the present disclosure may be implemented in many ways.
- For example, the methods, apparatuses, and devices of the present disclosure may be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware.
- The above-described order of the steps of the method is for illustrative purposes only, and the steps of the method of the present disclosure are not limited to that order unless otherwise specifically stated.
- the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine readable instructions for implementing a method in accordance with the present disclosure.
- the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Computer Security & Cryptography (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Collating Specific Patterns (AREA)
Abstract
Description
Claims (65)
- A face unlocking method, comprising: performing face detection on an image; performing facial feature extraction on the image in which a face is detected; authenticating the extracted facial features based on stored facial features, wherein the stored facial features include at least facial features of at least two different-angle face images corresponding to the same identifier (ID); and performing an unlocking operation at least in response to the extracted facial features being authenticated.
- The method according to claim 1, wherein the at least two different-angle face images corresponding to the same ID include face images of two or more of the following angles corresponding to the same ID: a frontal face image, a face image with the head raised, a face image with the head lowered, a face image with the head turned to the left, and a face image with the head turned to the right.
- The method according to claim 1 or 2, wherein before performing face detection on the image, the method further comprises: performing light balance adjustment processing on the image; and performing face detection on the image comprises: performing face detection on the image after the light balance adjustment processing.
- The method according to claim 3, wherein performing light balance adjustment processing on the image comprises: acquiring a grayscale image of the image; and performing histogram equalization processing on at least the grayscale image of the image.
- The method according to claim 3, wherein performing light balance adjustment processing on the image comprises: performing at least an illumination transformation on the image, so as to transform the image into an image that satisfies a preset illumination condition.
- The method according to any one of claims 3-5, wherein before performing light balance adjustment processing on the image, the method further comprises: determining that the quality of the image does not satisfy a predetermined face detection condition.
- The method according to claim 6, wherein the predetermined face detection condition comprises any one or more of the following: the pixel value distribution of the image does not conform to a preset distribution range; an attribute value of the image is not within a preset value range.
- The method according to any one of claims 1-7, wherein authenticating the extracted facial features based on the stored facial features comprises: obtaining a similarity between the extracted facial features and at least one stored facial feature; and in response to the similarity between the extracted facial features and any stored facial feature being greater than a set threshold, determining that the extracted facial features are authenticated.
- The method according to any one of claims 1-7, wherein authenticating the extracted facial features based on the stored facial features comprises: respectively obtaining similarities between the extracted facial features and a plurality of stored facial features; and in response to the maximum of the similarities between the extracted facial features and the plurality of stored facial features being greater than a set threshold, determining that the extracted facial features are authenticated.
- The method according to any one of claims 1-9, further comprising: performing living body detection on the image; wherein performing an unlocking operation at least in response to the extracted facial features being authenticated comprises: performing the unlocking operation in response to the extracted facial features being authenticated and the image passing the living body detection.
- The method according to claim 10, wherein performing living body detection on the image comprises: performing living body detection on the image after the image is acquired; or performing living body detection on the image in response to a face being detected in the image; or performing living body detection on the image in response to the extracted facial features being authenticated.
- The method according to claim 10 or 11, wherein performing living body detection on the image comprises: performing living body detection on the image in response to the image satisfying a preset quality requirement.
- The method according to any one of claims 10-12, wherein performing living body detection on the image comprises: performing image feature extraction on the image by using a neural network; detecting whether the extracted image features contain at least one type of forged clue information; and determining, based on the detection result of the at least one type of forged clue information, whether the image passes the living body detection.
- The method according to claim 12, wherein the image features extracted from the image by using the neural network comprise any one or more of the following: a local binary pattern (LBP) feature, a histogram of sparse codes (HSC) feature, a panorama (LARGE) feature, a face map (SMALL) feature, and a face detail map (TINY) feature.
- The method according to claim 13 or 14, wherein the at least one type of forged clue information comprises any one or more of the following: 2D-type forged clue information, 2.5D-type forged clue information, and 3D-type forged clue information.
- The method according to claim 15, wherein the 2D-type forged clue information comprises information of a face image printed on a paper-type material; and/or the 2.5D-type forged clue information comprises information of a face image carried by a carrier device; and/or the 3D-type forged clue information comprises information of a forged face.
- The method according to any one of claims 1-16, wherein before authenticating the extracted facial features based on the stored facial features, the method further comprises: obtaining, through a face unlocking information registration process, the stored facial features of the at least two different-angle face images corresponding to the same ID.
- The method according to claim 17, wherein the face unlocking information registration process comprises: outputting prompt information indicating acquisition of face images of at least two different angles for the same ID; performing face detection on the acquired images; performing facial feature extraction on the images in which a face of each angle is detected; and storing the extracted facial features of each angle's face image and the correspondence between the facial features of each angle's face image and the same ID.
- The method according to claim 18, wherein before performing face detection on the acquired images, the method further comprises: performing light balance adjustment processing on the acquired images; and performing face detection on the acquired images comprises: performing face detection on the images after the light balance adjustment processing.
- The method according to claim 19, wherein before performing light balance adjustment processing on the acquired images, the method further comprises: determining that the quality of the image does not satisfy a predetermined face detection condition.
- The method according to any one of claims 18-20, wherein before storing the extracted facial features of a face image of any angle, the method further comprises: detecting the angle of the face included in the image; and determining that the detected angle matches the angle corresponding to the prompt information.
- The method according to claim 21, wherein detecting the angle of the face included in the image comprises: performing face key point detection on the image; and calculating the angle of the face included in the image according to the detected face key points.
- The method according to any one of claims 18-22, further comprising: performing living body detection on the image; and in response to the image passing the living body detection, performing the operation of storing the extracted facial features of each angle's face image and the correspondence between the facial features of each angle's face image and the same ID.
- A face unlocking information registration method, comprising: outputting prompt information indicating acquisition of face images of at least two different angles for the same ID; performing face detection on the acquired images; performing facial feature extraction on the images in which a face of each angle is detected; and storing the extracted facial features of each angle's face image and the correspondence between the facial features of each angle's face image and the same ID.
- The method according to claim 24, wherein the face images of at least two different angles of the same ID include two or more of the following face images corresponding to the same ID: a frontal face image, a face image with the head raised, a face image with the head lowered, a face image with the head turned to the left, and a face image with the head turned to the right.
- The method according to claim 24 or 25, wherein before performing face detection on the acquired images, the method further comprises: performing light balance adjustment processing on the acquired images; and performing face detection on the acquired images comprises: performing face detection on the images after the light balance adjustment processing.
- The method according to claim 26, wherein performing light balance adjustment processing on the acquired images comprises: acquiring a grayscale image of the image; and performing histogram equalization processing on at least the grayscale image of the image.
- The method according to claim 26, wherein performing light balance adjustment processing on the acquired images comprises: performing at least an illumination transformation on the image, so as to transform the image into an image that satisfies a preset illumination condition.
- The method according to any one of claims 26-28, wherein before performing light balance adjustment processing on the acquired images, the method further comprises: determining that the quality of the image does not satisfy a predetermined face detection condition.
- The method according to claim 29, wherein the predetermined face detection condition comprises any one or more of the following: the pixel value distribution of the image does not conform to a preset distribution range; an attribute value of the image is not within a preset value range.
- The method according to any one of claims 24-30, wherein before storing the extracted facial features of a face image of any angle, the method further comprises: detecting the angle of the face included in the image; and determining that the detected angle matches the angle corresponding to the prompt information.
- The method according to claim 31, wherein detecting the angle of the face included in the image comprises: performing face key point detection on the image; and calculating the angle of the face included in the image according to the detected face key points.
- The method according to claim 31 or 32, further comprising: in response to the detected angle not matching the angle corresponding to the prompt information, outputting new prompt information indicating that the face image of that angle should be re-entered.
- The method according to any one of claims 24-33, wherein after storing the extracted facial features of each angle's face image, the method further comprises: identifying whether the facial features of face images of at least two different angles of the same ID have been completely stored; and in response to the face images of at least two different angles of the same ID not having been completely stored, performing the operation of outputting prompt information indicating acquisition of face images of at least two different angles for the same ID.
- The method according to claim 34, further comprising: in response to the facial features of face images of at least two different angles of the same ID having been completely stored, outputting prompt information for prompting the user to input the same ID; wherein storing the extracted facial features of each angle's face image and the correspondence between the facial features of each angle's face image and the same ID comprises: storing the extracted facial features of the at least two angles' face images and the same ID input by the user, and establishing the correspondence between the same ID and the facial features of the at least two angles' face images.
- The method according to any one of claims 24-35, further comprising: performing living body detection on the image; and in response to the image passing the living body detection, performing the operation of storing the extracted facial features of each angle's face image and the correspondence between the facial features of each angle's face image and the same ID.
- The method according to claim 36, wherein performing living body detection on the image comprises: performing living body detection on the acquired image; or performing living body detection on the images in which a face of each angle is detected; or performing living body detection on the image in response to the detected angle of the face matching the selected preset angle; or performing living body detection on the images of the faces of each angle after feature extraction is performed on the images in which a face of each angle is detected.
- The method according to claim 36 or 37, wherein performing living body detection on the image comprises: performing living body detection on the image in response to the image satisfying a preset quality requirement.
- The method according to any one of claims 36-38, wherein performing living body detection on the image comprises: performing image feature extraction on the image by using a neural network; detecting whether the extracted image features contain at least one type of forged clue information; and determining, based on the detection result of the at least one type of forged clue information, whether the image passes the living body detection.
- The method according to claim 39, wherein the image features extracted from the image by using the neural network comprise any one or more of the following: a local binary pattern (LBP) feature, a histogram of sparse codes (HSC) feature, a panorama (LARGE) feature, a face map (SMALL) feature, and a face detail map (TINY) feature.
- The method according to claim 39 or 40, wherein the at least one type of forged clue information comprises any one or more of the following: 2D-type forged clue information, 2.5D-type forged clue information, and 3D-type forged clue information.
- The method according to claim 41, wherein the 2D-type forged clue information comprises information of a face image printed on a paper-type material; and/or the 2.5D-type forged clue information comprises information of a face image carried by a carrier device; and/or the 3D-type forged clue information comprises information of a forged face.
- A face unlocking device, comprising: a face detection module, configured to perform face detection on an image; a feature extraction module, configured to perform facial feature extraction on the image in which a face is detected; an authentication module, configured to authenticate the extracted facial features based on stored facial features, wherein the stored facial features include at least facial features of at least two different-angle face images corresponding to the same identifier (ID); and a control module, configured to perform an unlocking operation at least in response to the extracted facial features being authenticated.
- The device according to claim 43, wherein the at least two different-angle face images corresponding to the same ID include face images of two or more of the following angles corresponding to the same ID: a frontal face image, a face image with the head raised, a face image with the head lowered, a face image with the head turned to the left, and a face image with the head turned to the right.
- The device according to claim 43 or 44, further comprising: a light processing module, configured to perform light balance adjustment processing on the image; wherein the face detection module is configured to perform face detection on the image after the light balance adjustment processing.
- The device according to claim 45, wherein the light processing module is configured to acquire a grayscale image of the image and perform histogram equalization processing on at least the grayscale image of the image.
- The device according to claim 45, wherein the light processing module is configured to perform at least an illumination transformation on the image, so as to transform the image into an image that satisfies a preset illumination condition.
- The device according to any one of claims 45-47, wherein the light processing module is configured to determine that the quality of the image does not satisfy a predetermined face detection condition and then perform the light balance adjustment processing on the image.
- The device according to claim 48, wherein the predetermined face detection condition comprises any one or more of the following: the pixel value distribution of the image does not conform to a preset distribution range; an attribute value of the image is not within a preset value range.
- The device according to any one of claims 43-49, wherein the authentication module is configured to obtain a similarity between the extracted facial features and at least one stored facial feature, and, in response to the similarity between the extracted facial features and any stored facial feature being greater than a set threshold, determine that the extracted facial features are authenticated.
- The device according to any one of claims 43-49, wherein the authentication module is configured to respectively obtain similarities between the extracted facial features and a plurality of stored facial features, and, in response to the maximum of the similarities between the extracted facial features and the plurality of stored facial features being greater than a set threshold, determine that the extracted facial features are authenticated.
- The device according to any one of claims 43-51, further comprising: an interaction module, configured to output prompt information indicating acquisition of face images of at least two different angles for the same ID; and a storage module, configured to store the extracted facial features of each angle's face image and the correspondence between the facial features of each angle's face image and the same ID.
- The device according to claim 52, wherein the storage module is configured to detect the angle of the face included in the image, and, upon determining that the detected angle matches the angle corresponding to the prompt information, store the extracted facial features of each angle's face image and the correspondence between the facial features of each angle's face image and the same ID.
- The device according to claim 53, wherein when detecting the angle of the face included in the image, the storage module is configured to perform face key point detection on the image and calculate the angle of the face included in the image according to the detected face key points.
- The device according to claim 53 or 54, wherein the storage module is further configured to, when the detected angle does not match the angle corresponding to the prompt information, request the interaction module to output new prompt information indicating that the face image of that angle should be re-entered.
- The device according to any one of claims 53-55, wherein the storage module is configured to identify whether the facial features of face images of at least two different angles of the same ID have been completely stored; in response to the face images of at least two different angles of the same ID not having been completely stored, request the interaction module to perform the operation of outputting prompt information indicating acquisition of face images of at least two different angles for the same ID; in response to the facial features of face images of at least two different angles of the same ID having been completely stored, request the interaction module to output prompt information for prompting the user to input the same ID; and store the extracted facial features of the at least two angles' face images and the same ID input by the user, and establish the correspondence between the same ID and the facial features of the at least two angles' face images.
- The device according to any one of claims 43-56, further comprising: a living body detection module, configured to perform living body detection on the image; wherein the control module is configured to perform the unlocking operation at least in response to the extracted facial features being authenticated and the image passing the living body detection.
- The device according to claim 57, wherein the living body detection module is configured to perform living body detection on the image in response to the image satisfying a preset quality requirement.
- The device according to claim 57 or 58, wherein the living body detection module comprises a neural network configured to: perform image feature extraction on the image; detect whether the extracted image features contain at least one type of forged clue information; and determine, based on the detection result of the at least one type of forged clue information, whether the image passes the living body detection.
- The device according to claim 59, wherein the image features extracted from the image by using the neural network comprise any one or more of the following: a local binary pattern (LBP) feature, a histogram of sparse codes (HSC) feature, a panorama (LARGE) feature, a face map (SMALL) feature, and a face detail map (TINY) feature.
- The device according to claim 59 or 60, wherein the at least one type of forged clue information comprises any one or more of the following: 2D-type forged face information, 2.5D-type forged face information, and 3D-type forged face information.
- The device according to claim 61, wherein the 2D-type forged face information comprises forged information of a face image printed on a paper-type material; and/or the 2.5D-type forged face information comprises forged information of a face image carried by a carrier device; and/or the 3D-type forged face information comprises information of a forged face.
- An electronic device, comprising: a processor and the face unlocking device according to any one of claims 43-62; wherein when the processor runs the authentication device, the units in the face unlocking device according to any one of claims 43-62 are executed.
- An electronic device, comprising: a memory storing executable instructions; and one or more processors in communication with the memory to execute the executable instructions, so as to complete the operations of the steps of the method according to any one of claims 1-42.
- A computer readable medium for storing computer readable instructions, wherein when the instructions are executed, the operations of the steps of the method according to any one of claims 1-42 are implemented.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG11202001349XA SG11202001349XA (en) | 2017-09-07 | 2018-09-06 | Facial unlocking method and information registration method and apparatus, device, and medium |
JP2020512794A JP7080308B2 (ja) | 2017-09-07 | 2018-09-06 | 顔ロック解除方法、その情報登録方法及び装置、機器並びに媒体 |
KR1020207006153A KR102324706B1 (ko) | 2017-09-07 | 2018-09-06 | 얼굴인식 잠금해제 방법 및 장치, 기기, 매체 |
US16/790,703 US20200184059A1 (en) | 2017-09-07 | 2020-02-13 | Face unlocking method and apparatus, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710802146.1 | 2017-09-07 | ||
CN201710802146.1A CN108229120B (zh) | 2017-09-07 | 2017-09-07 | 人脸解锁及其信息注册方法和装置、设备、程序、介质 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/790,703 Continuation US20200184059A1 (en) | 2017-09-07 | 2020-02-13 | Face unlocking method and apparatus, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019047897A1 true WO2019047897A1 (zh) | 2019-03-14 |
Family
ID=62655208
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/104408 WO2019047897A1 (zh) | 2017-09-07 | 2018-09-06 | 人脸解锁及其信息注册方法和装置、设备、介质 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20200184059A1 (zh) |
JP (1) | JP7080308B2 (zh) |
KR (1) | KR102324706B1 (zh) |
CN (1) | CN108229120B (zh) |
SG (1) | SG11202001349XA (zh) |
WO (1) | WO2019047897A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2022507315A (ja) * | 2019-04-08 | 2022-01-18 | ▲騰▼▲訊▼科技(深▲セン▼)有限公司 | アイデンティティ検証方法並びにその、装置、コンピュータプログラムおよびコンピュータ機器 |
CN115063873A (zh) * | 2022-08-15 | 2022-09-16 | 珠海翔翼航空技术有限公司 | 基于静态和动态人脸检测的飞行数据获取方法、设备 |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108229326A (zh) * | 2017-03-16 | 2018-06-29 | 北京市商汤科技开发有限公司 | 人脸防伪检测方法和系统、电子设备、程序和介质 |
CN108229120B (zh) * | 2017-09-07 | 2020-07-24 | 北京市商汤科技开发有限公司 | 人脸解锁及其信息注册方法和装置、设备、程序、介质 |
SG11202008549SA (en) * | 2018-08-13 | 2020-10-29 | Beijing Sensetime Technology Development Co Ltd | Identity authentication method and apparatus, electronic device, and storage medium |
CN109359502A (zh) * | 2018-08-13 | 2019-02-19 | 北京市商汤科技开发有限公司 | 防伪检测方法和装置、电子设备、存储介质 |
CN109255299A (zh) * | 2018-08-13 | 2019-01-22 | 北京市商汤科技开发有限公司 | 身份认证方法和装置、电子设备和存储介质 |
CN109344703B (zh) * | 2018-08-24 | 2021-06-25 | 深圳市商汤科技有限公司 | 对象检测方法及装置、电子设备和存储介质 |
CN109194834B (zh) * | 2018-09-27 | 2021-07-13 | 重庆辉烨物联科技有限公司 | 手机节电方法、装置、设备及存储介质 |
CN109558794B (zh) * | 2018-10-17 | 2024-06-28 | 平安科技(深圳)有限公司 | 基于摩尔纹的图像识别方法、装置、设备和存储介质 |
CN109543611A (zh) * | 2018-11-22 | 2019-03-29 | 珠海市蓝云科技有限公司 | 一种基于人工智能的图像匹配的方法 |
CN109740503A (zh) * | 2018-12-28 | 2019-05-10 | 北京旷视科技有限公司 | 人脸认证方法、图像底库录入方法、装置及处理设备 |
CN109819114B (zh) * | 2019-02-20 | 2021-11-30 | 北京市商汤科技开发有限公司 | 锁屏处理方法及装置、电子设备及存储介质 |
CN111783505A (zh) * | 2019-05-10 | 2020-10-16 | 北京京东尚科信息技术有限公司 | 伪造人脸的识别方法、装置和计算机可读存储介质 |
CN110175572A (zh) * | 2019-05-28 | 2019-08-27 | 深圳市商汤科技有限公司 | 人脸图像处理方法及装置、电子设备及存储介质 |
CN110309805A (zh) * | 2019-07-08 | 2019-10-08 | 业成科技(成都)有限公司 | 脸部辨识装置 |
EP4030747A4 (en) * | 2019-09-12 | 2022-11-02 | NEC Corporation | IMAGE ANALYSIS DEVICE, CONTROL METHOD AND PROGRAM |
US20210334348A1 (en) * | 2020-04-24 | 2021-10-28 | Electronics And Telecommunications Research Institute | Biometric authentication apparatus and operation method thereof |
CN111723655B (zh) * | 2020-05-12 | 2024-03-08 | 五八有限公司 | 人脸图像处理方法、装置、服务器、终端、设备及介质 |
CN112215084B (zh) * | 2020-09-17 | 2024-09-03 | 中国银联股份有限公司 | 识别对象确定方法、装置、设备及存储介质 |
KR102393543B1 (ko) * | 2020-11-02 | 2022-05-03 | 김효린 | 안면 인증 딥러닝 모델을 스마트폰 디바이스 내에서 학습하기 위한 안면 데이터 수집 및 처리 방법과 그 방법을 수행하는 디바이스 |
CN112667984A (zh) * | 2020-12-31 | 2021-04-16 | 上海商汤临港智能科技有限公司 | 一种身份认证方法及装置、电子设备和存储介质 |
CN113762227B (zh) * | 2021-11-09 | 2022-02-08 | 环球数科集团有限公司 | 一种多姿态人脸识别方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103377365A (zh) * | 2012-04-25 | 2013-10-30 | 华晶科技股份有限公司 | 人脸识别的方法及使用该方法的人脸识别系统 |
CN104200146A (zh) * | 2014-08-29 | 2014-12-10 | 华侨大学 | 一种结合视频人脸和数字唇动密码的身份验证方法 |
CN105654048A (zh) * | 2015-12-30 | 2016-06-08 | 四川川大智胜软件股份有限公司 | 一种多视角人脸比对方法 |
CN105844227A (zh) * | 2016-03-21 | 2016-08-10 | 湖南君士德赛科技发展有限公司 | 面向校车安全的司机身份认证方法 |
CN108229120A (zh) * | 2017-09-07 | 2018-06-29 | 北京市商汤科技开发有限公司 | 人脸解锁及其信息注册方法和装置、设备、程序、介质 |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100456619B1 (ko) * | 2001-12-05 | 2004-11-10 | 한국전자통신연구원 | 에스.브이.엠(svm)을 이용한 얼굴 등록/인증 시스템 및방법 |
JP2005056004A (ja) * | 2003-08-07 | 2005-03-03 | Omron Corp | 顔照合装置、顔照合方法、および顔照合プログラム |
JPWO2009107237A1 (ja) * | 2008-02-29 | 2011-06-30 | グローリー株式会社 | 生体認証装置 |
JP5766564B2 (ja) * | 2011-09-15 | 2015-08-19 | 株式会社東芝 | 顔認証装置及び顔認証方法 |
WO2014032162A1 (en) * | 2012-08-28 | 2014-03-06 | Solink Corporation | Transaction verification system |
CN103593598B (zh) * | 2013-11-25 | 2016-09-21 | 上海骏聿数码科技有限公司 | 基于活体检测和人脸识别的用户在线认证方法及系统 |
CN104734852B (zh) * | 2013-12-24 | 2018-05-08 | 中国移动通信集团湖南有限公司 | 一种身份认证方法及装置 |
CN103679158B (zh) * | 2013-12-31 | 2017-06-16 | 北京天诚盛业科技有限公司 | 人脸认证方法和装置 |
KR102257897B1 (ko) * | 2014-05-09 | 2021-05-28 | 삼성전자주식회사 | 라이브니스 검사 방법과 장치,및 영상 처리 방법과 장치 |
CN111898108B (zh) * | 2014-09-03 | 2024-06-04 | 创新先进技术有限公司 | 身份认证方法、装置、终端及服务器 |
KR20160043425A (ko) * | 2014-10-13 | 2016-04-21 | 엘지전자 주식회사 | 이동 단말기 및 그의 화면 잠금 해제 방법 |
EP3218844A4 (en) * | 2014-11-13 | 2018-07-04 | Intel Corporation | Spoofing detection in image biometrics |
US9922238B2 (en) * | 2015-06-25 | 2018-03-20 | West Virginia University | Apparatuses, systems, and methods for confirming identity |
JP6507046B2 (ja) * | 2015-06-26 | 2019-04-24 | 株式会社東芝 | 立体物検知装置及び立体物認証装置 |
CN111144293A (zh) * | 2015-09-25 | 2020-05-12 | 北京市商汤科技开发有限公司 | 带交互式活体检测的人脸身份认证系统及其方法 |
CN105930761A (zh) * | 2015-11-30 | 2016-09-07 | 中国银联股份有限公司 | 一种基于眼球跟踪的活体检测的方法、装置及系统 |
US11098914B2 (en) * | 2016-09-09 | 2021-08-24 | Carrier Corporation | System and method for operating a HVAC system by determining occupied state of a structure via IP address |
KR102299847B1 (ko) * | 2017-06-26 | 2021-09-08 | 삼성전자주식회사 | 얼굴 인증 방법 및 장치 |
CN110909695B (zh) * | 2017-07-29 | 2023-08-18 | Oppo广东移动通信有限公司 | 防伪处理方法及相关产品 |
-
2017
- 2017-09-07 CN CN201710802146.1A patent/CN108229120B/zh active Active
-
2018
- 2018-09-06 WO PCT/CN2018/104408 patent/WO2019047897A1/zh active Application Filing
- 2018-09-06 JP JP2020512794A patent/JP7080308B2/ja active Active
- 2018-09-06 SG SG11202001349XA patent/SG11202001349XA/en unknown
- 2018-09-06 KR KR1020207006153A patent/KR102324706B1/ko active IP Right Grant
-
2020
- 2020-02-13 US US16/790,703 patent/US20200184059A1/en not_active Abandoned
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103377365A (zh) * | 2012-04-25 | 2013-10-30 | 华晶科技股份有限公司 | 人脸识别的方法及使用该方法的人脸识别系统 |
CN104200146A (zh) * | 2014-08-29 | 2014-12-10 | 华侨大学 | 一种结合视频人脸和数字唇动密码的身份验证方法 |
CN105654048A (zh) * | 2015-12-30 | 2016-06-08 | 四川川大智胜软件股份有限公司 | 一种多视角人脸比对方法 |
CN105844227A (zh) * | 2016-03-21 | 2016-08-10 | 湖南君士德赛科技发展有限公司 | 面向校车安全的司机身份认证方法 |
CN108229120A (zh) * | 2017-09-07 | 2018-06-29 | 北京市商汤科技开发有限公司 | 人脸解锁及其信息注册方法和装置、设备、程序、介质 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2022507315A (ja) * | 2019-04-08 | 2022-01-18 | ▲騰▼▲訊▼科技(深▲セン▼)有限公司 | アイデンティティ検証方法並びにその、装置、コンピュータプログラムおよびコンピュータ機器 |
JP7142778B2 (ja) | 2019-04-08 | 2022-09-27 | ▲騰▼▲訊▼科技(深▲セン▼)有限公司 | アイデンティティ検証方法並びにその、装置、コンピュータプログラムおよびコンピュータ機器 |
US11936647B2 (en) | 2019-04-08 | 2024-03-19 | Tencent Technology (Shenzhen) Company Limited | Identity verification method and apparatus, storage medium, and computer device |
CN115063873A (zh) * | 2022-08-15 | 2022-09-16 | 珠海翔翼航空技术有限公司 | 基于静态和动态人脸检测的飞行数据获取方法、设备 |
Also Published As
Publication number | Publication date |
---|---|
US20200184059A1 (en) | 2020-06-11 |
CN108229120B (zh) | 2020-07-24 |
KR20200032206A (ko) | 2020-03-25 |
SG11202001349XA (en) | 2020-03-30 |
CN108229120A (zh) | 2018-06-29 |
JP2020532802A (ja) | 2020-11-12 |
KR102324706B1 (ko) | 2021-11-10 |
JP7080308B2 (ja) | 2022-06-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019047897A1 (zh) | 人脸解锁及其信息注册方法和装置、设备、介质 | |
US11482040B2 (en) | Face anti-counterfeiting detection methods and systems, electronic devices, programs and media | |
JP7165746B2 (ja) | Id認証方法および装置、電子機器並びに記憶媒体 | |
Boulkenafet et al. | OULU-NPU: A mobile face presentation attack database with real-world variations | |
RU2733115C1 (ru) | Способ и устройство для верифицирования сертификатов и идентичностей | |
US11244152B1 (en) | Systems and methods for passive-subject liveness verification in digital media | |
US9652602B2 (en) | Method, system and computer program for comparing images | |
US9652663B2 (en) | Using facial data for device authentication or subject identification | |
US10924476B2 (en) | Security gesture authentication | |
CN106663157A (zh) | 用户认证方法、执行该方法的装置及存储该方法的记录介质 | |
US11093770B2 (en) | System and method for liveness detection | |
US11373449B1 (en) | Systems and methods for passive-subject liveness verification in digital media | |
Galdi et al. | Exploring new authentication protocols for sensitive data protection on smartphones | |
CN109063442B (zh) | 业务实现、相机实现的方法和装置 | |
US20240046709A1 (en) | System and method for liveness verification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18852925 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20207006153 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2020512794 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09/09/2020) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18852925 Country of ref document: EP Kind code of ref document: A1 |