CN112712073A - Eye change feature-based living body identification method and device and electronic equipment


Info

Publication number
CN112712073A
Authority
CN
China
Prior art keywords
eye
change
display content
recognized
feature vector
Prior art date
2021-03-29
Legal status
Pending
Application number
CN202110330169.3A
Other languages
Chinese (zh)
Inventor
白世杰
吴富章
赵宇航
王秋明
Current Assignee
Beijing Yuanjian Information Technology Co Ltd
Original Assignee
Beijing Yuanjian Information Technology Co Ltd
Priority date
Filing date
2021-03-29
Publication date
2021-04-27
Application filed by Beijing Yuanjian Information Technology Co Ltd
Priority to CN202110330169.3A
Publication of CN112712073A


Classifications

    • G06V 40/161 - Human faces: detection; localisation; normalisation
    • G06N 3/045 - Neural network architectures: combinations of networks
    • G06N 3/084 - Neural network learning methods: backpropagation, e.g. using gradient descent
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06V 40/45 - Spoof detection: detection of the body part being alive

Abstract

The application provides a living body identification method, an identification device, and an electronic device based on eye change characteristics. The display content of a detection screen is changed randomly while multiple eye images of a user and multiple display content images of the detection screen are acquired at the same frame rate. Each eye image and each display content image is input into a pre-trained time sequence neural network model to obtain the corresponding eye feature vectors and display content feature vectors; a pre-trained recurrent neural network model then produces a first change feature vector and a second change feature vector, which represent the change processes of the eye images and of the display content images respectively. Whether the object to be recognized is a living body is determined by judging whether the change processes corresponding to the two change feature vectors are consistent. Since a user face forged by a printed photo or similar means cannot stay consistent with randomly changing screen content, the accuracy of living body recognition and the security of identity verification are improved.

Description

Eye change feature-based living body identification method and device and electronic equipment
Technical Field
The present application relates to the field of identity verification technologies, and in particular, to a living body identification method, an identification device, and an electronic device based on eye change characteristics.
Background
With the popularization of smart devices, face recognition has become one of the most convenient operations in processes such as phone unlocking, mobile payment, and remote identity verification. Compared with traditional password verification, face recognition can more accurately confirm that the verification process is performed by the user in person. At present, in order to prevent a user's face from being forged by means such as printed photos or screen photos so as to obtain the corresponding login permissions, the existing face recognition process usually requires the object to be recognized to perform actions such as blinking, nodding, or shaking the head to confirm that the object to be recognized is a living body. However, this method has a low recognition success rate, the user's face is easily substituted by printed photos, screen photos, and the like, and its security is poor.
Disclosure of Invention
In view of the above, the present application provides a living body recognition method, a recognition device, and an electronic device based on eye change characteristics. When the object to be recognized faces the detection screen to perform a face recognition operation, the detection screen changes its display content at a preset frame rate within a preset time, while eye images of the object to be recognized are acquired at the same preset frame rate. Whether the object to be recognized is a living body is determined by comparing the change process of the collected eye images with that of the display content images of the detection screen: if the two change processes are consistent, the object to be recognized is determined to be a living body; if they are inconsistent, the object to be recognized is determined to be a non-living body. Since a user face forged by a printed photo or similar means cannot stay consistent with the randomly changing screen content, the living body recognition success rate is higher and the security is better.
The embodiment of the application provides a living body identification method, an identification device, and an electronic device based on eye change characteristics, wherein the identification method comprises the following steps:
changing the display content of a detection screen within a preset data acquisition time, and simultaneously respectively acquiring a plurality of eye images of an object to be identified and a plurality of display content images of the detection screen at the same preset frame rate, wherein the eye images are obtained after verification passes;
inputting each acquired eye image into a pre-trained time sequence neural network model to obtain an eye feature vector corresponding to each eye image, and inputting a plurality of eye feature vectors into the pre-trained cyclic neural network model according to the acquisition time sequence corresponding to each eye feature vector to obtain a first change feature vector, wherein the first change feature vector represents the change process of the eye image;
inputting each acquired display content image into a pre-trained time sequence neural network model to obtain a display content feature vector corresponding to each display content image, and inputting a plurality of display content feature vectors into a pre-trained cyclic neural network model according to an acquisition time sequence corresponding to each display content feature vector to obtain a second change feature vector, wherein the second change feature vector represents a change process of the display content image;
and judging whether the change processes corresponding to the first change characteristic vector and the second change characteristic vector are consistent, and if so, determining that the object to be identified is a living body.
Further, whether the change processes corresponding to the first change feature vector and the second change feature vector are consistent is judged by the following method:
inputting the spliced first variation characteristic vector and the spliced second variation characteristic vector into a trained two-classification network model;
and outputting a judgment result whether the change processes corresponding to the first change characteristic vector and the second change characteristic vector are consistent.
Further, the identification method further comprises:
acquiring a face image of the object to be recognized while the object directly faces the detection screen;
judging whether the face image contains the complete face of the object to be identified;
if yes, determining the pixel area occupied by the complete face of the object to be recognized in the detection screen;
and if not, prompting the object to be identified to place the face in the range of the detection screen.
Further, the identification method further comprises:
judging whether the pixel area occupied by the complete face of the object to be recognized in the detection screen is smaller than a preset pixel area threshold;
and if the pixel area of the complete face of the object to be recognized in the detection screen is smaller than the preset pixel area threshold, prompting the object to be recognized to approach the detection screen until the pixel area of the complete face of the object to be recognized in the detection screen is greater than or equal to the preset pixel area threshold.
Further, the identification method further comprises:
determining an eye region of the object to be recognized according to the face image;
and judging whether the object to be identified wears glasses or not in the eye area, if so, prompting the object to be identified to take off the glasses, and continuously detecting whether the object to be identified wears the glasses or not until the object to be identified takes off the glasses.
Further, the identification method further comprises:
and after the object to be identified is determined to be a living body, prompting that the object to be identified passes the authentication, and opening the login authority for the object to be identified.
The embodiment of the present application further provides a living body recognition device based on eye change characteristics, the recognition device includes:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for changing the display content of a detection screen within a preset data acquisition time, and simultaneously respectively acquiring a plurality of eye images of an object to be identified and a plurality of display content images of the detection screen at the same preset frame rate, wherein the eye images are obtained after verification passes;
the first processing module is used for inputting each acquired eye image into a pre-trained time sequence neural network model to obtain an eye feature vector corresponding to each eye image, and inputting a plurality of eye feature vectors into a pre-trained cyclic neural network model according to the acquisition time sequence corresponding to each eye feature vector to obtain a first change feature vector, wherein the first change feature vector represents the change process of the eye images;
the second processing module is used for inputting each acquired display content image into a pre-trained time sequence neural network model to obtain a display content feature vector corresponding to each display content image, and inputting a plurality of display content feature vectors into a pre-trained cyclic neural network model according to the acquisition time sequence corresponding to each display content feature vector to obtain a second change feature vector, wherein the second change feature vector represents the change process of the display content images;
and the judging module is used for judging whether the change processes corresponding to the first change characteristic vector and the second change characteristic vector are consistent, and if so, determining that the object to be identified is a living body.
Further, the identification device further includes:
the first checking module is used for acquiring a face image of the object to be recognized while the object directly faces the detection screen;
judging whether the face image contains the complete face of the object to be identified;
if yes, determining the pixel area occupied by the complete face of the object to be recognized in the detection screen;
and if not, prompting the object to be identified to place the face in the range of the detection screen.
An embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine readable instructions when executed by the processor performing the steps of the eye change feature based living body identification method as described above.
Embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the eye change feature-based living body identification method described above.
According to the living body identification method, the identification device, and the electronic device based on eye change characteristics, when the object to be recognized faces the detection screen to perform a face recognition operation, the detection screen changes its display content at a preset frame rate within a preset time, while eye images of the object to be recognized are acquired at the same preset frame rate. Whether the object to be recognized is a living body is determined by comparing the change process of the collected eye images with that of the display content images of the detection screen: if the two change processes are consistent, the object to be recognized is determined to be a living body; if they are inconsistent, it is determined to be a non-living body. The recognition success rate is high and the security is good.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart illustrating a living body identification method based on eye change characteristics according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating another method for identifying a living body based on eye change characteristics according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating another method for identifying a living body based on eye change characteristics according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a living body recognition device based on eye change characteristics according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of another living body recognition device based on eye change characteristics according to an embodiment of the present application;
fig. 6 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. Every other embodiment that can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present application falls within the protection scope of the present application.
First, an application scenario to which the present application is applicable will be described. The method and the device can be applied to the technical field of identity authentication.
Research shows that, in the existing face recognition process, in order to prevent a user's face from being forged by printed photos or screen photos so as to obtain the corresponding login permissions, the process usually requires the object to be recognized to perform actions such as blinking, nodding, or shaking the head to confirm that the object to be recognized is a living body. However, this method has a low recognition success rate, is easy to forge, and offers poor security.
Based on this, the embodiment of the application provides a living body identification method based on eye change characteristics. When the object to be recognized faces the detection screen to perform a face recognition operation, the detection screen changes its display content at a preset frame rate within a preset time, while eye images of the object to be recognized are acquired at the same preset frame rate. Whether the object to be recognized is a living body is determined by comparing the change process of the collected eye images with that of the display content images of the detection screen: if the two change processes are consistent, the object to be recognized is determined to be a living body; if they are inconsistent, the object to be recognized is determined to be a non-living body. Since a user face forged by a printed photo or similar means cannot stay consistent with the randomly changing screen content, the living body recognition success rate is higher and the security is better.
Referring to fig. 1, fig. 1 is a flowchart of a living body identification method based on eye change characteristics according to an embodiment of the present application. As shown in fig. 1, the identification method provided in the embodiment of the present application includes:
s101, changing display content of a detection screen within preset data acquisition time, and simultaneously respectively acquiring a plurality of eye images of an object to be identified and a plurality of display content images of the detection screen at the same preset frame rate, wherein the eye images are obtained after verification passes.
In this step, after the object to be recognized faces the detection screen and the face recognition process is started, the display content of the detection screen is changed within a preset data acquisition time, and a plurality of display content images shown on the detection screen and a plurality of eye images of the object to be recognized observing the detection screen are acquired at a preset frame rate.
The content displayed on the detection screen changes randomly.
Here, during the recognition process, both eyes of the object to be recognized need to watch the detection screen; when the displayed content changes, the eye images of a living object to be recognized change correspondingly. In addition, the acquired eye images of the user under detection need to pass position and size verification, so that each acquired eye image contains the complete eye features of the object to be recognized and its size reaches a preset proportion of the detection screen area, which facilitates feature extraction.
The preset data acquisition time can be selected according to actual needs and is not particularly limited herein; optionally, the preset data acquisition time is 2 s.
As a possible implementation, the manner in which the display content of the detection screen changes includes changes of the display content itself as well as the change rate and change acceleration of the display content. The change of the display content itself is optionally: a color change, such as from red to green; a shape change, such as from circular to triangular; or a simultaneous change of shape and color, such as a red triangle changing to a green circle.
The change rate and change acceleration of the display content on the detection screen can be selected according to actual needs and are not particularly limited herein; optionally, they are determined according to the preset acquisition frame rate.
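As an illustration only, the following sketch shows one way such a random stimulus sequence might be generated. The color set, shape set, and frame count are hypothetical choices for the example, not values specified by the application.

```python
import random

COLORS = ["red", "green", "blue"]           # hypothetical palette
SHAPES = ["circle", "triangle", "square"]   # hypothetical shape set

def random_stimulus_sequence(n_frames=10):
    """Generate a random sequence of (color, shape) display states,
    ensuring each frame differs from the previous one."""
    seq = []
    prev = None
    for _ in range(n_frames):
        state = (random.choice(COLORS), random.choice(SHAPES))
        while state == prev:
            state = (random.choice(COLORS), random.choice(SHAPES))
        seq.append(state)
        prev = state
    return seq
```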
Here, the number of collected display content images is the same as the number of eye images of the object to be recognized; optionally, each number is 10.
As a possible implementation, the eye image of the object to be recognized may be captured by a screen camera included in the detection screen.
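A minimal sketch of the synchronized acquisition step, assuming OpenCV access to the screen-side camera and reusing random_stimulus_sequence from the sketch above. The render_stimulus callback, the 5 fps rate (10 frames over 2 s, matching the optional values mentioned above), and the frame count are illustrative assumptions.

```python
import time
import cv2  # assumes OpenCV is available

def acquire_pairs(render_stimulus, n_frames=10, fps=5):
    """Capture eye images and display content images at the same frame rate.

    render_stimulus(state) is a hypothetical callback that draws the given
    stimulus state on the detection screen and returns it as an image.
    """
    cam = cv2.VideoCapture(0)  # screen-side camera facing the user
    eye_images, display_images = [], []
    try:
        for state in random_stimulus_sequence(n_frames):
            display_images.append(render_stimulus(state))
            ok, frame = cam.read()  # capture the user at the same moment
            if ok:
                eye_images.append(frame)  # eye cropping/verification happens later
            time.sleep(1.0 / fps)
    finally:
        cam.release()
    return eye_images, display_images
```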
S102, inputting each acquired eye image into a pre-trained time sequence neural network model to obtain an eye feature vector corresponding to each eye image, and inputting a plurality of eye feature vectors into a pre-trained cyclic neural network model according to an acquisition time sequence corresponding to each eye feature vector to obtain a first change feature vector, wherein the first change feature vector represents a change process of the eye image.
In this step, each of the acquired eye images is input into the trained time sequence neural network model in acquisition time order, and after processing by the time sequence neural network, the eye feature vector of the object to be recognized corresponding to each eye image is output. Further, all the eye feature vectors are input into the trained recurrent neural network model according to the acquisition time order corresponding to each eye feature vector, and after processing by the recurrent neural network, the first change feature vector representing the change process of the eye images of the object to be recognized is output.
Here, the role of the time sequence neural network is to perform feature extraction on each of the obtained eye images to obtain a plurality of eye feature vectors; the number of eye feature vectors is consistent with the number of collected eye images.
Optionally, the feature extractor for the eye images is ResNet-18.
Here, the recurrent neural network outputs the first change feature vector according to the input sequence of eye feature vectors of the object to be recognized.
The first change feature vector reflects the change process of the eye feature vectors of the object to be recognized, that is, the change process of the eye images during the time in which the display content of the detection screen changes. Optionally, the recurrent neural network is composed of two layers of Long Short-Term Memory networks (LSTM).
S103, inputting each acquired display content image into a pre-trained time sequence neural network model to obtain a display content feature vector corresponding to each display content image, and inputting a plurality of display content feature vectors into the pre-trained cyclic neural network model according to the acquisition time sequence corresponding to each display content feature vector to obtain a second change feature vector, wherein the second change feature vector represents the change process of the display content image.
In this step, each of the acquired display content images is input into the trained time sequence neural network model in acquisition time order, and after processing by the time sequence neural network, the display content feature vector of the detection screen corresponding to each display content image is output.
Here, the role of the time sequence neural network is to perform feature extraction on each of the obtained display content images to obtain a plurality of display content feature vectors; the number of display content feature vectors is consistent with the number of acquired display content images.
Optionally, the feature extractor for the display content images is ResNet-18.
Here, the recurrent neural network outputs the second change feature vector according to the input sequence of display content feature vectors of the detection screen.
The second change feature vector reflects the change process of the display content feature vectors of the detection screen, that is, the change process of the display content images during the time in which the display content of the detection screen changes. Optionally, the recurrent neural network is composed of two layers of Long Short-Term Memory networks (LSTM).
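Taken together, S102 and S103 describe the same per-branch pipeline. The following PyTorch sketch illustrates one plausible reading of it: a ResNet-18 backbone as the frame-level feature extractor and a two-layer LSTM whose final hidden state serves as the change feature vector. The feature and hidden dimensions are illustrative assumptions; the application does not fix them.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ChangeFeatureExtractor(nn.Module):
    """Per-frame ResNet-18 features -> two-layer LSTM -> change feature vector."""
    def __init__(self, feat_dim=128, hidden_dim=128):
        super().__init__()
        backbone = resnet18(weights=None)  # torchvision >= 0.13 API
        backbone.fc = nn.Linear(backbone.fc.in_features, feat_dim)
        self.backbone = backbone
        self.lstm = nn.LSTM(feat_dim, hidden_dim, num_layers=2, batch_first=True)

    def forward(self, frames):  # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))  # (batch*time, feat_dim)
        feats = feats.view(b, t, -1)                 # restore the time axis
        _, (h_n, _) = self.lstm(feats)
        return h_n[-1]  # final hidden state as the change feature vector
```

In this sketch the eye branch and the display content branch would each apply this module to their image sequence, yielding the first and second change feature vectors respectively; whether the two branches share weights is not stated in the application.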
And S104, judging whether the change processes corresponding to the first change characteristic vector and the second change characteristic vector are consistent, and if so, determining that the object to be identified is a living body.
In this step, a judgment result of whether the change processes corresponding to the first change feature vector and the second change feature vector are consistent is determined, and this judgment result serves as the recognition result of whether the object to be recognized is a living body. If the change processes corresponding to the two change feature vectors are consistent, the object to be recognized is a living body; if they are inconsistent, the object to be recognized is determined to be a non-living body, such as a printed paper photo or a screen photo.
The printed paper photo is a face photo of the object to be recognized printed on paper; the screen photo is a face photo of the object to be recognized displayed on a screen.
As a possible implementation manner, the determining whether the change processes corresponding to the first change feature vector and the second change feature vector are consistent specifically includes:
inputting the spliced first variation characteristic vector and the spliced second variation characteristic vector into a trained two-classification network model;
and outputting a judgment result whether the change processes corresponding to the first change characteristic vector and the second change characteristic vector are consistent.
Here, the first change feature vector and the second change feature vector are spliced to obtain an intermediate vector that facilitates comparing whether their change processes are consistent. The intermediate vector is input into the trained two-class network; optionally, the output result is "0" or "1".
Here, the two-class network compares the first change feature vector and the second change feature vector not only in terms of the displayed content itself, but also in terms of the change process, such as the change rate and the change acceleration.
If the classification result output by the two-class network is "0", the object to be recognized is a non-living body; if the classification result is "1", the object to be recognized is a living body.
As a possible implementation, the two-class network consists of three fully connected layers.
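A sketch of the consistency judgment under the stated structure: the two change feature vectors are spliced (concatenated) and passed through three fully connected layers that output the binary result. Layer widths and the ReLU activations are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConsistencyClassifier(nn.Module):
    """Three fully connected layers over the spliced change feature vectors."""
    def __init__(self, vec_dim=128, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * vec_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # logits: index 0 = non-living, 1 = living
        )

    def forward(self, first_change_vec, second_change_vec):
        spliced = torch.cat([first_change_vec, second_change_vec], dim=-1)
        return self.mlp(spliced)

# usage sketch: a classification result of 1 means the two change processes
# are consistent and the object to be recognized is judged a living body
# logits = ConsistencyClassifier()(v_eye, v_screen)
# is_living = logits.argmax(dim=-1) == 1
```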
Therefore, since the eye region in a printed paper photo or screen photo of the object to be recognized cannot change along with the display content of the detection screen, and since swapping paper photos or screen photos in step with the changing display content cannot fully reproduce the change process of the display content, such forgeries are rejected.
According to the living body recognition method based on eye change characteristics provided by the embodiment of the application, when the object to be recognized faces the detection screen to perform a face recognition operation, the detection screen changes its display content at a preset frame rate within a preset time, while eye images of the object to be recognized are acquired at the same preset frame rate. Whether the object to be recognized is a living body is determined by comparing the change process of the collected eye images with that of the display content images of the detection screen: if the two change processes are consistent, the object to be recognized is determined to be a living body; if they are inconsistent, it is determined to be a non-living body. Since a user face forged by a printed photo or similar means cannot stay consistent with the randomly changing screen content, the living body recognition success rate is higher and the security is better.
Referring to fig. 2, fig. 2 is a flowchart of another living body identification method based on eye change characteristics according to an embodiment of the present application. As shown in fig. 2, the identification method provided in the embodiment of the present application includes:
s201, acquiring a face image of the object to be recognized, which faces the detection screen in the forward direction, and judging whether the face image contains a complete face of the object to be recognized.
In this step, a face image of the object to be recognized directly facing the detection screen is captured by the screen camera on the detection screen, and it is determined whether the captured face image contains the complete face of the object to be recognized.
As a possible implementation, a preset position adjustment time is reserved before the screen camera captures the face image of the object to be recognized. During this time, the screen camera displays the picture acquired in real time on the detection screen until the object to be recognized finishes adjusting its position; the photo is taken after a complete face image is recognized on the detection screen, so as to determine the face image of the object to be recognized.
The preset position adjustment time may be set as required, and is not particularly limited herein.
S202, if yes, determining the pixel area occupied by the complete face of the object to be recognized in the detection screen, and if not, prompting the object to be recognized to place the face in the range of the detection screen.
In this step, during the acquisition of the face image of the object to be recognized, the detection screen displays the acquired face image so that the object to be recognized can observe the acquisition process in real time, which facilitates adjusting its position and ensures that the acquired face image is complete and clear.
Here, if the image on the screen is detected to contain the complete face of the object to be recognized, the pixel area occupied by the complete face of the object to be recognized on the detection screen is calculated; if the image on the screen is detected not to contain the complete face, the object to be recognized is prompted to adjust its position, such as the distance and angle of the face relative to the detection screen, until the detection screen contains the complete face of the object to be recognized.
The object to be recognized can adjust its position according to the real-time image displayed on the detection screen.
S203, judging whether the pixel area occupied by the complete face of the object to be recognized in the detection screen is smaller than a preset pixel area threshold value.
In this step, after the pixel area occupied by the complete face of the object to be recognized on the detection screen is determined, the pixel area is compared with the preset pixel area threshold.
Here, the preset pixel area threshold may be determined according to the total pixel area of the detection screen, and is not particularly limited herein.
S204, if the pixel area is smaller than the preset pixel area threshold, prompting the object to be recognized to approach the detection screen until the pixel area occupied by the complete face of the object to be recognized on the detection screen is greater than or equal to the preset pixel area threshold.
In this step, if the pixel area occupied by the complete face of the object to be recognized on the detection screen is smaller than the preset pixel area threshold, the object to be recognized is prompted to approach the detection screen so as to increase that pixel area, and the prompt is maintained until the pixel area is greater than or equal to the preset pixel area threshold.
Here, in order to improve recognition accuracy and facilitate acquisition of subsequent eye images, the area of the face of the object to be recognized on the detection screen needs to be large enough, but cannot exceed the entire pixel area of the detection screen.
Here, if the pixel area occupied by the complete face of the object to be recognized in the detection screen is greater than or equal to the preset pixel area threshold, the next operation is performed.
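A sketch of this face-size check, assuming an off-the-shelf face detector (OpenCV's Haar cascade here, purely as an example) and expressing the preset pixel area threshold as a hypothetical fraction of the screen's pixel area; the application leaves both choices to the implementer.

```python
import cv2

def face_area_ok(frame, screen_px_area, ratio_threshold=0.15):
    """Return (ok, prompt) for the face-size check in S201 through S204.

    ratio_threshold is a hypothetical fraction of the screen pixel area.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return False, "place your face within the detection screen"
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detected face
    if w * h < ratio_threshold * screen_px_area:
        return False, "please move closer to the detection screen"
    return True, ""
```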
S205, determining the eye region of the object to be recognized according to the face image.
In this step, after the face image of the object to be recognized is determined, the eye region is identified within the face image.
As a possible implementation, after the eye region is identified, it is judged whether the pixel area occupied by the eye region of the object to be recognized on the detection screen is smaller than a preset eye pixel area threshold; if so, the object to be recognized is prompted to approach the detection screen until that pixel area is greater than or equal to the preset eye pixel area threshold.
Here, if the pixel area of the eye region of the object to be recognized in the detection screen is greater than or equal to the preset eye pixel area threshold, the next operation is performed.
The preset eye pixel area threshold may be determined according to the total pixel area of the detection screen, and is not limited herein. Optionally, the pixel area threshold is the same as the pixel area threshold in S203.
S206, judging whether the object to be identified wears glasses or not in the eye area, if yes, prompting the object to be identified to take off the glasses, and continuously detecting whether the object to be identified wears the glasses or not until the object to be identified takes off the glasses.
In this step, it is identified whether a glasses image exists in the determined eye region, so as to judge whether the object to be recognized wears glasses. If the object to be recognized wears glasses, a prompt is issued asking the object to be recognized to remove them; during this process, the system continuously detects whether the eye region contains a glasses image and keeps prompting the object to be recognized until no glasses image can be detected in the eye region, that is, until the object to be recognized has removed the glasses.
As a possible implementation, the prompt may be a voice prompt or a text prompt displayed on the detection screen.
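Purely as an illustration, this check can be framed as a loop over a binary classifier on the eye region. Every callback below (get_eye_crop, glasses_model, show_prompt) is a hypothetical stand-in, since the application does not specify how glasses are detected or how prompts are delivered.

```python
def prompt_until_glasses_removed(get_eye_crop, glasses_model, show_prompt):
    """Keep prompting until no glasses are detected in the eye region.

    get_eye_crop()    - returns the current eye region image
    glasses_model(im) - returns True if glasses are present in the image
    show_prompt(msg)  - shows a voice or on-screen text prompt
    """
    while glasses_model(get_eye_crop()):
        show_prompt("Please remove your glasses")
```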
Referring to fig. 3, fig. 3 is a flowchart of another living body identification method based on eye change characteristics according to an embodiment of the present application.
S301, in a preset data acquisition time, changing display content of a detection screen, and simultaneously respectively acquiring a plurality of eye images of an object to be identified and a plurality of display content images of the detection screen at the same preset frame rate, wherein the eye images are obtained after verification passes.
S302, inputting each acquired eye image into a pre-trained time sequence neural network model to obtain an eye feature vector corresponding to each eye image, and inputting a plurality of eye feature vectors into a pre-trained cyclic neural network model according to an acquisition time sequence corresponding to each eye feature vector to obtain a first change feature vector, wherein the first change feature vector represents a change process of the eye image.
And S303, inputting each acquired display content image into a pre-trained time sequence neural network model to obtain a display content feature vector corresponding to each display content image, and inputting a plurality of display content feature vectors into the pre-trained cyclic neural network model according to the acquisition time sequence corresponding to each display content feature vector to obtain a second change feature vector, wherein the second change feature vector represents the change process of the display content image.
S304, judging whether the change processes corresponding to the first change characteristic vector and the second change characteristic vector are consistent, and if so, determining that the object to be identified is a living body.
S305, after the object to be recognized is determined to be a living body, prompting that the object to be recognized passes the authentication, and opening the login authority for the object to be recognized.
In this step, after the object to be recognized is determined to be a living body, it is confirmed that the recognition process was operated by the user in person. A prompt can be sent to the object to be recognized to indicate that the living body authentication process has ended and that authentication has passed, and the corresponding login or payment permissions are then opened for the user.
As a possible implementation, after the object to be recognized is determined to be a non-living body, it is concluded that the recognition process was not operated by the user in person but was attempted by disguising a printed paper photo or screen photo as the user, and a prompt is sent to indicate that the living body authentication process has ended and authentication has failed.
The descriptions of S301 to S304 may refer to the descriptions of S101 to S104, and the same technical effects can be achieved, which are not described in detail.
According to the living body recognition method based on eye change characteristics provided by the embodiment of the application, when the object to be recognized faces the detection screen to perform a face recognition operation, the detection screen changes its display content at a preset frame rate within a preset time, while eye images of the object to be recognized are acquired at the same preset frame rate. Whether the object to be recognized is a living body is determined by comparing the change process of the collected eye images with that of the display content images of the detection screen: if the two change processes are consistent, the object to be recognized is determined to be a living body; if they are inconsistent, it is determined to be a non-living body. Since a user face forged by a printed photo or similar means cannot stay consistent with the randomly changing screen content, the living body recognition success rate is higher and the security is better.
Referring to fig. 4 and 5, fig. 4 is a schematic structural diagram of a living body recognition device based on eye change characteristics provided in an embodiment of the present application, and fig. 5 is a schematic structural diagram of another living body recognition device based on eye change characteristics provided in an embodiment of the present application. As shown in fig. 4, the recognition apparatus 400 includes:
the acquisition module 410 is configured to change display content of a detection screen within a preset data acquisition time, and simultaneously acquire a plurality of eye images of an object to be identified and a plurality of display content images of the detection screen at the same preset frame rate, respectively, where the eye images are obtained after verification passes;
the first processing module 420 is configured to input each acquired eye image into a pre-trained time sequence neural network model to obtain an eye feature vector corresponding to each eye image, and input a plurality of eye images into a pre-trained cyclic neural network model according to an acquisition time sequence corresponding to each eye feature vector to obtain a first change feature vector, where the first change feature vector represents a change process of the eye image;
the second processing module 430 is configured to input each acquired display content image into a pre-trained time sequence neural network model to obtain a display content feature vector corresponding to each display content image, and input a plurality of display content images into a pre-trained cyclic neural network model according to an acquisition time sequence corresponding to each display content feature vector to obtain a second variation feature vector, where the second variation feature vector represents a variation process of the display content image;
the determining module 440 is configured to determine whether the change processes corresponding to the first change feature vector and the second change feature vector are consistent, and if so, determine that the object to be identified is a living body.
Further, as shown in fig. 5, the identification apparatus 400 further includes:
The first checking module 450 is configured to obtain a face image of the object to be recognized directly facing the detection screen;
judging whether the face image contains the complete face of the object to be identified;
if yes, determining the pixel area occupied by the complete face of the object to be recognized in the detection screen;
and if not, prompting the object to be identified to place the face in the range of the detection screen.
Further, the identification apparatus 400 further includes:
the second checking module is used for judging whether the pixel area occupied by the complete face of the object to be recognized in the detection screen is smaller than a preset pixel area threshold value or not;
and if the pixel area of the complete face of the object to be recognized is smaller than the preset pixel area threshold, prompting the object to be recognized to approach the detection screen until the pixel area of the complete face of the object to be recognized in the detection screen is larger than or equal to the preset pixel area threshold.
Further, the identification apparatus 400 further includes:
the third checking module is used for determining the eye region of the object to be recognized according to the face image;
and judging whether the object to be identified wears glasses or not in the eye area, if so, prompting the object to be identified to take off the glasses, and continuously detecting whether the object to be identified wears the glasses or not until the object to be identified takes off the glasses.
Further, the identification apparatus 400 further includes:
and the authority management module is used for prompting that the object to be identified passes the authentication after the object to be identified is determined to be a living body, and opening the login authority for the object to be identified.
According to the living body recognition device based on eye change characteristics provided by the embodiment of the application, when the object to be recognized faces the detection screen to perform a face recognition operation, the detection screen changes its display content at a preset frame rate within a preset time, while eye images of the object to be recognized are acquired at the same preset frame rate. Whether the object to be recognized is a living body is determined by comparing the change process of the collected eye images with that of the display content images of the detection screen: if the two change processes are consistent, the object to be recognized is determined to be a living body; if they are inconsistent, it is determined to be a non-living body. Since a user face forged by a printed photo or similar means cannot stay consistent with the randomly changing screen content, the living body recognition success rate is higher and the security is better.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 6, the electronic device 600 includes a processor 610, a memory 620, and a bus 630.
The memory 620 stores machine-readable instructions executable by the processor 610. When the electronic device 600 runs, the processor 610 communicates with the memory 620 through the bus 630; when the machine-readable instructions are executed by the processor 610, the steps of the eye change feature-based living body identification method in the method embodiments shown in fig. 1 to 3 may be performed.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the eye change feature-based living body identification method in the method embodiments shown in fig. 1 to 3 may be performed.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are only specific embodiments of the present application, used to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the technical field can still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features within the technical scope disclosed in the present application; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present application and shall all be covered by its protection scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A living body identification method based on eye change characteristics is characterized by comprising the following steps:
changing the display content of a detection screen within a preset data acquisition time, and simultaneously respectively acquiring a plurality of eye images of an object to be identified and a plurality of display content images of the detection screen at the same preset frame rate, wherein the eye images are obtained after verification passes;
inputting each acquired eye image into a pre-trained time sequence neural network model to obtain an eye feature vector corresponding to each eye image, and inputting a plurality of eye feature vectors into the pre-trained cyclic neural network model according to the acquisition time sequence corresponding to each eye feature vector to obtain a first change feature vector, wherein the first change feature vector represents the change process of the eye image;
inputting each acquired display content image into a pre-trained time sequence neural network model to obtain a display content feature vector corresponding to each display content image, and inputting a plurality of display content feature vectors into a pre-trained cyclic neural network model according to an acquisition time sequence corresponding to each display content feature vector to obtain a second change feature vector, wherein the second change feature vector represents a change process of the display content image;
and judging whether the change processes corresponding to the first change characteristic vector and the second change characteristic vector are consistent, and if so, determining that the object to be identified is a living body.
2. The identification method according to claim 1, wherein whether the variation process of the first variation feature vector is consistent with the variation process of the second variation feature vector is determined by:
inputting the spliced first variation characteristic vector and the spliced second variation characteristic vector into a trained two-classification network model;
and outputting a judgment result whether the change processes corresponding to the first change characteristic vector and the second change characteristic vector are consistent.
3. The identification method according to claim 1, characterized in that the identification method further comprises:
acquiring a face image of the object to be recognized directly facing the detection screen, and judging whether the face image contains the complete face of the object to be recognized;
if yes, determining the pixel area occupied by the complete face of the object to be recognized in the detection screen, and if not, prompting the object to be recognized to place the face in the range of the detection screen.
4. The identification method according to claim 3, characterized in that it further comprises:
judging whether the pixel area occupied by the complete face of the object to be recognized in the detection screen is smaller than a preset pixel area threshold value or not;
and if the pixel area of the complete face of the object to be recognized is smaller than the preset pixel area threshold, prompting the object to be recognized to approach the detection screen until the pixel area of the complete face of the object to be recognized in the detection screen is larger than or equal to the preset pixel area threshold.
5. The identification method according to claim 3, characterized in that it further comprises:
determining an eye region of the object to be recognized according to the face image;
and judging whether the object to be identified wears glasses or not in the eye area, if so, prompting the object to be identified to take off the glasses, and continuously detecting whether the object to be identified wears the glasses or not until the object to be identified takes off the glasses.
6. The identification method according to claim 1, characterized in that the identification method further comprises:
and after the object to be identified is determined to be a living body, prompting that the object to be identified passes the authentication, and opening the login authority for the object to be identified.
7. A living body recognition apparatus based on eye change features, the recognition apparatus comprising:
an acquisition module, configured to change the display content of a detection screen within a preset data acquisition duration and, at the same time, acquire a plurality of eye images of an object to be recognized and a plurality of display content images of the detection screen at the same preset frame rate, wherein the images are acquired after the verification passes;
a first processing module, configured to input each acquired eye image into a pre-trained time-sequence neural network model to obtain an eye feature vector corresponding to each eye image, and to input the plurality of eye feature vectors into a pre-trained recurrent neural network model in the order of the acquisition times corresponding to the eye feature vectors to obtain a first change feature vector, wherein the first change feature vector represents the change process of the eye images;
a second processing module, configured to input each acquired display content image into a pre-trained time-sequence neural network model to obtain a display content feature vector corresponding to each display content image, and to input the plurality of display content feature vectors into a pre-trained recurrent neural network model in the order of the acquisition times corresponding to the display content feature vectors to obtain a second change feature vector, wherein the second change feature vector represents the change process of the display content images;
and a judging module, configured to judge whether the change processes corresponding to the first change feature vector and the second change feature vector are consistent, and if so, to determine that the object to be recognized is a living body.
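For the acquisition module of claim 7, one hedged reading is a loop that renders randomly changing display content while grabbing camera frames at a fixed rate. The solid-color pattern, the webcam index, and the frame count are all illustrative assumptions; the claim requires only that the display content change and that both image sequences be captured at the same preset frame rate.

```python
import random
import time

import cv2
import numpy as np


def acquire_sequences(num_frames: int = 16, fps: float = 8.0):
    """Change the detection screen's display content at random while grabbing
    camera frames at the same preset rate, returning both image sequences."""
    cam = cv2.VideoCapture(0)  # assumed camera index
    eye_images, screen_images = [], []
    for _ in range(num_frames):
        # Randomly changed display content; a solid random color is the
        # simplest stand-in for whatever pattern the screen shows.
        color = [random.randint(0, 255) for _ in range(3)]
        content = np.full((480, 640, 3), color, dtype=np.uint8)
        cv2.imshow("detection screen", content)
        cv2.waitKey(1)
        ok, frame = cam.read()
        if ok:
            screen_images.append(content)  # what the screen displayed
            eye_images.append(frame)       # eye region is cropped downstream
        time.sleep(1.0 / fps)              # same preset frame rate for both
    cam.release()
    cv2.destroyAllWindows()
    return eye_images, screen_images
```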
8. The recognition apparatus according to claim 7, further comprising:
a first checking module, configured to acquire a face image of the object to be recognized while the object directly faces the detection screen;
judge whether the face image contains the complete face of the object to be recognized;
if yes, determine the pixel area occupied by the complete face of the object to be recognized on the detection screen;
and if not, prompt the object to be recognized to position its face within the range of the detection screen.
9. An electronic device, comprising: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device is running, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the identification method according to any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, performs the steps of the identification method according to any one of claims 1 to 6.
CN202110330169.3A (priority date 2021-03-29, filing date 2021-03-29): Eye change feature-based living body identification method and device and electronic equipment. Status: Pending. Publication: CN112712073A.

Priority Applications (1)

Application Number: CN202110330169.3A. Priority Date: 2021-03-29. Filing Date: 2021-03-29. Title: Eye change feature-based living body identification method and device and electronic equipment.

Publications (1)

Publication Number: CN112712073A. Publication Date: 2021-04-27.

Family ID: 75550388

Family Applications (1)

Application Number: CN202110330169.3A. Status: Pending. Publication: CN112712073A.

Country Status (1)

Country: CN. Publication: CN112712073A.

Cited By (1)

CN113239887A (Oppo广东移动通信有限公司; priority 2021-06-04, published 2021-08-10): Living body detection method and apparatus, computer-readable storage medium, and electronic device

Citations (5)

WO2016197389A1 (北京释码大华科技有限公司; priority 2015-06-12, published 2016-12-15): Method and device for detecting living object, and mobile terminal
CN109726396A (泰康保险集团股份有限公司; priority 2018-12-20, published 2019-05-07): Semantic matching method, apparatus, medium and electronic device for question-and-answer text
CN110213668A (北京三快在线科技有限公司; priority 2019-04-29, published 2019-09-06): Method, apparatus, electronic device and storage medium for generating video titles
US20190282191A1 (The Board of Trustees of the Leland Stanford Junior University; priority 2017-04-13, published 2019-09-19): Systems and Methods for Detecting Complex Networks in MRI Image Data
CN110414335A (北京奇艺世纪科技有限公司; priority 2019-06-20, published 2019-11-05): Video identification method, device and computer-readable storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2021-04-27)