CN112580615B - Living body authentication method and device and electronic equipment - Google Patents

Living body authentication method and device and electronic equipment

Info

Publication number
CN112580615B
CN112580615B (application CN202110213304.6A)
Authority
CN
China
Prior art keywords
living body
optical flow
authentication data
authentication
characteristic matrix
Prior art date
Legal status
Active
Application number
CN202110213304.6A
Other languages
Chinese (zh)
Other versions
CN112580615A (en)
Inventor
白世杰
吴富章
赵宇航
王秋明
Current Assignee
Beijing Yuanjian Information Technology Co Ltd
Original Assignee
Beijing Yuanjian Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yuanjian Information Technology Co Ltd
Priority claimed from application CN202110213304.6A
Publication of CN112580615A
Application granted
Publication of CN112580615B
Current legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 40/40: Spoof detection, e.g. liveness detection
    • G06V 40/45: Detection of the body part being alive
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a living body authentication method, an authentication device, and an electronic device. Authentication data of an object to be authenticated is acquired, and it is judged whether the authentication data includes both image authentication data and audio authentication data. If both are included, a plurality of video frames are acquired from the image authentication data, and an optical flow map reflecting the lower-half facial muscle changes of the living body is determined for each video frame. An optical flow feature matrix corresponding to the image authentication data is then determined from the resulting optical flow maps. Finally, the optical flow feature matrix is input into a living body authentication model trained on the mapping relationship between the lower-half facial muscle change characteristics in historical living body samples and the sample audio authentication data, and a classification result indicating whether the object to be authenticated is a living body is output. In this way, passing authentication with a photograph can be effectively prevented, and living body recognition capability and recognition accuracy are improved.

Description

Living body authentication method and device and electronic equipment
Technical Field
The present application relates to the field of security verification technologies, and in particular, to an authentication method and an authentication device for a living body, and an electronic device.
Background
In key login scenarios such as mobile phone unlocking, mobile payment, and remote identity authentication, face recognition is one of the most convenient authentication modes. Compared with password authentication or authentication using personal identity information, it is safer and better represents an operation by the user himself, making it an effective means of preventing hacker attacks. However, current face recognition verification methods carry the risk of photo attack, in which a photograph of the user's face is presented in place of the user, so their security needs to be strengthened. Effectively distinguishing a real person's face from a photograph held up by a user, and thereby preventing photo attacks, is important for the application of identity authentication technology.
At present, the prior art mostly adds hardware such as an infrared sensor or a depth sensor and judges the three-dimensionality of the face from a depth image. This approach can directly defend against two-dimensional photo attacks from non-bendable display screens, such as those of mobile phones and computers, but works poorly against bendable printed photographs. There are also verification methods that judge, from characteristics of the image collected by the camera such as texture, color, material, and quality, the difference between human skin and a screen or paper, so as to decide whether the image shows a real user. However, these methods still leave open the possibility of photo attack, and their living body recognition capability and recognition accuracy are low.
Disclosure of Invention
In view of the above, an object of the present application is to provide a living body authentication method, an authentication device, and an electronic apparatus. Image authentication data and audio authentication data of an object to be authenticated are acquired; for each video frame in the image authentication data, an optical flow map reflecting the change characteristics of the lower-half facial muscles is determined; a corresponding feature matrix is determined from the optical flow maps; and the feature matrix is input into a living body authentication model trained on the mapping relationship between the lower-half facial muscle change characteristics in historical living body samples and sample audio authentication data. A deep learning technique is used to recognize the degree of matching between the muscle change characteristics and the audio authentication data, and a living body classification result is output. Because a photo attack cannot simulate the lower-half facial muscle changes corresponding to the audio authentication data, photo attacks can be prevented while the living body recognition capability and recognition accuracy are improved.
The embodiment of the application provides an authentication method of a living body, which comprises the following steps:
acquiring authentication data of an object to be authenticated, and judging whether the authentication data comprises image authentication data and audio authentication data;
if the authentication data comprises image authentication data and audio authentication data, acquiring a plurality of video frames from the image authentication data, and determining an optical flow map corresponding to each video frame, wherein the optical flow map reflects the lower-half facial muscle change characteristics of the living body corresponding to the image authentication data;
determining an optical flow characteristic matrix corresponding to the image authentication data according to the determined multiple optical flow graphs corresponding to the image authentication data;
and inputting the optical flow characteristic matrix into a trained living body authentication model, and outputting a classification result of whether the object to be authenticated is a living body, wherein the living body authentication model is obtained by training based on the mapping relation of the muscle variation characteristics of the lower half part of the living body in the historical living body sample and the sample audio authentication data.
Further, the optical flow feature matrix is determined according to the following method:
scaling the optical flow diagram corresponding to each video frame to a preset size to obtain a first optical flow diagram matrix corresponding to the optical flow diagram;
splicing optical flow diagram matrixes corresponding to the optical flow diagrams in two mutually perpendicular directions to obtain a second optical flow diagram matrix corresponding to the optical flow diagrams;
and determining an optical flow characteristic matrix corresponding to the image authentication data according to a second optical flow graph matrix corresponding to each optical flow graph corresponding to the image authentication data.
Further, the classification result of whether the object to be authenticated is a living body is output according to the following method:
inputting the optical flow feature matrix into the trained living body authentication model, performing spatio-temporal feature extraction on the optical flow feature matrix, and determining the spatio-temporal feature matrix corresponding to the optical flow feature matrix;
extracting spatial features from the spatio-temporal feature matrix, and determining the spatial feature matrix corresponding to the spatio-temporal feature matrix;
extracting temporal features from the spatial feature matrix, and determining the temporal feature matrix corresponding to the spatial feature matrix;
and performing softmax (normalized exponential function) and fully connected mapping classification processing on the temporal feature matrix, and outputting the classification result of whether the object to be authenticated is a living body.
Further, after the optical flow feature matrix is input into the trained living body authentication model and the classification result of whether the object to be authenticated is a living body is output, wherein the living body authentication model is trained based on the mapping relationship between sample image authentication data and sample audio authentication data in historical living body samples, the authentication method further includes:
if the output living body authentication result is a living body, prompting that the authentication is passed;
if the output living body authentication result is a non-living body, the authentication fails and the user is prompted to verify again.
Further, for each video frame, determining an optical flow graph corresponding to the video frame according to the following steps:
determining an authentication area corresponding to the object to be authenticated from the image authentication data;
determining the feature point coordinates corresponding to the authentication area;
and processing the feature point coordinates by using an optical flow method, and determining an optical flow graph corresponding to the video frame.
An embodiment of the present application also provides an authentication apparatus of a living body, the authentication apparatus including:
the judging module is used for acquiring authentication data of an object to be authenticated and judging whether the authentication data comprises image authentication data and audio authentication data;
the device comprises a first determining module, a second determining module and a third determining module, wherein the first determining module is used for acquiring a plurality of video frames from image authentication data if the image authentication data and the audio authentication data are contained, and determining a light flow graph corresponding to each video frame, wherein the light flow graph reflects the muscle change characteristics of the lower half part of the living body corresponding to the image authentication data;
the second determining module is used for determining an optical flow characteristic matrix corresponding to the image authentication data according to the determined multiple optical flow graphs corresponding to the image authentication data;
and the output module is used for inputting the optical flow characteristic matrix into a trained living body authentication model and outputting a classification result of whether the object to be authenticated is a living body, wherein the living body authentication model is obtained by training based on the mapping relation between the muscle change characteristics of the lower half part of the living body in a historical living body sample and the sample audio authentication data.
Further, the second determining module is further configured to:
scaling the optical flow diagram corresponding to each video frame to a preset size to obtain a first optical flow diagram matrix corresponding to the optical flow diagram;
splicing optical flow diagram matrixes corresponding to the optical flow diagrams in two mutually perpendicular directions to obtain a second optical flow diagram matrix corresponding to the optical flow diagrams;
and determining an optical flow characteristic matrix corresponding to the image authentication data according to a second optical flow graph matrix corresponding to each optical flow graph corresponding to the image authentication data.
Further, the authentication apparatus further includes a prompt module, and the prompt module is configured to:
if the output living body authentication result is a living body, prompting that the authentication is passed;
if the output living body authentication result is a non-living body, the authentication fails and the user is prompted to verify again.
An embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the authentication method of a living body as described above.
An embodiment of the present application also provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, performs the steps of the authentication method of a living body as described above.
The living body authentication method, authentication device, and electronic equipment provided by the application acquire image authentication data and audio authentication data of an object to be authenticated; determine, for each video frame in the image authentication data, an optical flow map reflecting the lower-half facial muscle change characteristics of the living body; determine a corresponding feature matrix from the optical flow maps; and input the feature matrix into a living body authentication model trained on the mapping relationship between the lower-half facial muscle change characteristics in historical living body samples and sample audio authentication data. A deep learning technique is used to recognize the degree of matching between the muscle change characteristics and the audio authentication data, and a living body classification result is output. Because a photo attack cannot simulate the lower-half facial muscle changes corresponding to the audio authentication data, photo attacks can be prevented while the living body recognition capability and recognition accuracy are improved.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart illustrating an authentication method of a living body according to an embodiment of the present application;
fig. 2 is a flowchart illustrating another living body authentication method provided in an embodiment of the present application;
fig. 3 is a flowchart illustrating another living body authentication method provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram illustrating an authentication apparatus of a living body according to an embodiment of the present application;
fig. 5 is a second schematic structural diagram of an authentication device of a living body according to an embodiment of the present application;
fig. 6 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. Every other embodiment that can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present application falls within the protection scope of the present application.
First, an application scenario to which the present application is applicable will be described. The application can be applied to the technical field of security verification.
Research shows that, at present, the prior art mostly adds hardware such as an infrared sensor or a depth sensor and judges the three-dimensionality of the face from a depth image. This approach can directly defend against two-dimensional photo attacks from non-bendable display screens, such as those of mobile phones and computers, but works poorly against bendable printed photographs. There are also verification methods that judge, from characteristics of the image collected by the camera such as texture, color, material, and quality, the difference between human skin and a screen or paper, so as to decide whether the image shows a real user. However, these methods still leave open the possibility of photo attack, and their living body recognition capability and recognition accuracy are low.
Based on this, the embodiment of the application provides an authentication method of a living body, which improves the identification capability and the identification accuracy of the living body while preventing photo attack.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for authenticating a living body according to an embodiment of the present application. As shown in fig. 1, an authentication method of a living body provided in an embodiment of the present application includes:
s101, acquiring authentication data of an object to be authenticated, and judging whether the authentication data comprises image authentication data and audio authentication data.
In this step, authentication data of the object to be authenticated is acquired, and it is judged whether the authentication data includes image authentication data and audio authentication data. If both are included, the subsequent living body authentication process continues; if the authentication data includes only one of the two, no subsequent authentication is performed and the object to be authenticated is directly determined to be a non-living body.
The object to be authenticated can be the user himself or a photograph of the user's face.
In one possible embodiment, the authentication data includes living body data and attack data. The living body data is collected when a real user faces the screen, camera, and microphone in the authentication scene and reads aloud the random digit sequence displayed on the screen; the captured video of the real face and the recorded reading voice constitute the authentication data of a living user. The attack data is collected when a photograph of the user's face is placed in front of the screen, camera, and microphone, held still or shaken randomly to imitate a real user, while someone else reads aloud the digits displayed on the screen; the captured video of the photograph imitating the real user and the recorded voice of the non-user constitute the authentication data of a non-living user.
The number of Arabic digits read aloud by the user may be selected according to actual needs and is not specifically limited here.
S102, if the image authentication data and the audio authentication data are contained, acquiring a plurality of video frames from the image authentication data, and determining an optical flow map corresponding to each video frame, wherein the optical flow map reflects the lower-half facial muscle change characteristics of the living body corresponding to the image authentication data.
In this step, living body authentication is further performed on the authentication data screened in S101 as containing both image authentication data and audio authentication data: all video frames contained in the video are acquired from the image authentication data, and an optical flow map is determined for each video frame.
Here, when the user reads a random digit sequence aloud, the corresponding lower-half facial muscle movements differ for each digit read; more specifically, the living body's lower-half facial muscle movement characteristics around the mouth and chin differ from digit to digit. Since the optical flow map can reflect these lower-half facial muscle changes, whether the user is actually reading the random digit sequence displayed on the screen can be judged by analyzing the degree of matching between the user's lower-half facial muscle changes during authentication and the muscle changes expected while reading the digits.
In one possible embodiment, the optical flow map corresponding to the video frame is determined according to the following steps:
(1) and determining an authentication area corresponding to the object to be authenticated from the image authentication data.
In this step, optionally, an open-source computer vision library is used to read each frame of the image authentication data, and in each frame a face detector is used to detect the object to be authenticated and confirm the face portion of the object to be detected.
The authentication area is the face area of the user in each frame of picture of the image authentication data.
When the object to be authenticated is a real user, the face region of the real user is confirmed in each frame of the image authentication data; when the object to be authenticated is not a real user, for example a photograph of the user, the face region within the photograph is determined in each frame of the image authentication data.
(2) And determining the characteristic point coordinates corresponding to the authentication area.
In this step, after the region of the user's face in each frame is confirmed, a face detector is used to extract feature points of the user's face. Since the main moving parts of the face while the user reads digits are concentrated in the lower half of the face, the authentication area is preferably determined as the lower-half region of the user's face.
Here, since the muscles that mainly move when the user reads aloud the random digit sequence displayed on the screen are those of the mouth and chin, together with part of the facial muscles, the feature points are preferably acquired at the relevant positions of the user's lower face.
In one possible implementation, determining the feature point coordinates corresponding to the authentication area includes: from the top-left vertex coordinates (x1, y1) and bottom-right vertex coordinates (x2, y2) of the user's face bounding box, intercepting the lower-half region of the face, whose top-left vertex coordinates are (x1, (y1 + y2)/2) and whose bottom-right vertex coordinates are (x2, y2).
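The crop above can be sketched as a small helper. This is an illustrative sketch, not the patent's code: it assumes image coordinates with y increasing downward and reads the lower-half region as starting at the vertical midpoint of the face box; the function name is not from the patent.

```python
def lower_half_box(x1, y1, x2, y2):
    """Given a face bounding box with top-left (x1, y1) and bottom-right
    (x2, y2), return the bounding box of the lower half of the face.
    Assumes y grows downward, so the lower half runs from the vertical
    midpoint of the face box down to its bottom edge."""
    y_mid = (y1 + y2) // 2
    return (x1, y_mid, x2, y2)
```

For a face box from (100, 40) to (260, 280), this yields the mouth-and-chin region from (100, 160) to (260, 280).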
The selection positions of the feature points are related feature point positions capable of reflecting muscle positions of the lower half of the user, and the number of the feature points can be specifically determined according to needs, which is not specifically limited herein.
(3) And processing the feature point coordinates by using an optical flow method, and determining an optical flow graph corresponding to the video frame.
In this step, an optical flow method from an open-source computer vision library is used to determine the optical flow maps in the x direction and the y direction for the lower-half face region of the user in the video frame.
S103, determining an optical flow characteristic matrix corresponding to the image authentication data according to the determined multiple optical flow graphs corresponding to the image authentication data.
In one possible embodiment, the optical-flow feature matrix is determined according to the following method:
(1) and scaling the optical flow diagram corresponding to each video frame to a preset size to obtain a first optical flow diagram matrix corresponding to the optical flow diagram.
In one possible embodiment, the optical flow map is scaled to a preset size, resulting in a first optical flow map matrix of dimension [3 × 64 × 64] for each optical flow map.
(2) And splicing the optical flow diagram matrixes corresponding to the optical flow diagrams in two mutually perpendicular directions to obtain a second optical flow diagram matrix corresponding to the optical flow diagram.
In this step, the optical flow map matrices of dimension [3 × 64 × 64] in the x direction and the y direction are concatenated to obtain a second optical flow map matrix of dimension [6 × 64 × 64].
(3) And determining an optical flow characteristic matrix corresponding to the image authentication data according to a second optical flow graph matrix corresponding to each optical flow graph corresponding to the image authentication data.
In one possible implementation, the total number m of video frames collected over the whole digit-reading process is determined, and all video frames corresponding to one piece of image authentication data are processed by the same steps, yielding m second optical flow map matrices of dimension [6 × 64 × 64]; these are stacked into an optical flow feature matrix of dimension [m × 6 × 64 × 64].
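The stacking described above can be sketched as follows. The helper name and the 3-channel encoding of each per-direction flow map (e.g. a color-coded flow image) are assumptions, but the shapes follow the text: [3 × 64 × 64] per direction, [6 × 64 × 64] per frame, [m × 6 × 64 × 64] overall.

```python
import numpy as np

def build_feature_matrix(flow_maps_x, flow_maps_y):
    """Stack per-frame optical flow maps into the [m, 6, 64, 64] feature
    matrix described in the text. Each input is a list of m arrays of
    shape [3, 64, 64] (one direction's flow map, already scaled to the
    preset 64x64 size). The x- and y-direction maps of each frame are
    concatenated along the channel axis to [6, 64, 64], then the m
    frames are stacked along a new leading axis."""
    per_frame = [np.concatenate([fx, fy], axis=0)      # [6, 64, 64]
                 for fx, fy in zip(flow_maps_x, flow_maps_y)]
    return np.stack(per_frame, axis=0)                 # [m, 6, 64, 64]
```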
S104: Input the optical flow feature matrix into a trained living body authentication model, and output a classification result of whether the object to be authenticated is a living body, wherein the living body authentication model is trained on the mapping relationship between the muscle change features of the lower half of the face of living bodies in historical living body samples and sample audio authentication data.
In this step, the obtained optical flow feature matrix is input into the living body authentication model, which is trained on that mapping relationship, and the model outputs the classification result of whether the object to be authenticated is a living body.
In a specific implementation, the user faces a camera and a microphone and reads aloud a random digit sequence shown on the screen. The lower half of the user's face is extracted from each frame of the video, optical flow maps in the mutually perpendicular x and y directions are computed, and the corresponding optical flow feature matrix is determined and input into the trained deep learning model. The model analyses this matrix in both the time domain and the spatial domain and learns how the muscles of the lower half of the face change while the digit sequence is read aloud, so that it outputs a live or non-live recognition result; as the model is updated, its recognition accuracy continues to improve.
As a possible implementation, before the optical flow feature matrix is input into the trained living body authentication model, the authentication method further includes training the living body authentication model through the following steps:
(1) feeding the input data into the living body authentication model in batches;
(2) setting the training parameters of the living body authentication model, such as the batch size, learning rate, loss function, and optimizer;
(3) starting model training.
Optionally, the batch size of the living body authentication model is set to 64, the learning rate to 0.0001, the loss function to the cross-entropy loss, and the optimizer to stochastic gradient descent (SGD).
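As a sketch of these training choices, the toy example below trains a stand-in linear classifier with mini-batch cross-entropy loss and plain stochastic gradient descent. The data, model, and the larger demo learning rate are illustrative, not the patent's actual network:

```python
import numpy as np

BATCH_SIZE = 64        # batch size stated above
LEARNING_RATE = 1e-4   # learning rate stated above

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    """Mean cross-entropy loss over a batch."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def sgd_step(w, grad, lr):
    """One plain stochastic-gradient-descent update."""
    return w - lr * grad

# Toy stand-in data: one batch of features and binary live/non-live labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(BATCH_SIZE, 10))
y = (X[:, 0] > 0).astype(int)
W = np.zeros((10, 2))

def loss_and_grad(W):
    probs = softmax(X @ W)
    loss = cross_entropy(probs, y)
    delta = probs.copy()
    delta[np.arange(len(y)), y] -= 1.0          # softmax + CE gradient
    return loss, X.T @ delta / len(y)

loss_before, _ = loss_and_grad(W)
for _ in range(100):
    _, g = loss_and_grad(W)
    W = sgd_step(W, g, lr=0.1)   # larger rate than 1e-4 so the toy demo converges quickly
loss_after, _ = loss_and_grad(W)
```

With the zero-initialised weights, the starting loss is ln 2 (both classes equally likely); the SGD steps then drive the loss down on this batch.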
The living body authentication method provided by the present application acquires image authentication data and audio authentication data of an object to be authenticated, determines optical flow maps that correspond to the video frames in the image authentication data and reflect the muscle change features of the lower half of the living body's face, and determines a corresponding feature matrix from those optical flow maps. The feature matrix is input into a living body authentication model trained on the mapping relationship between the lower-face muscle change features of living bodies in historical living body samples and sample audio authentication data, so that deep learning is used to recognise the degree of match between the lower-face muscle changes and the audio authentication data and to output the living body classification result. Because a photo attack cannot simulate the lower-face muscle changes corresponding to the audio authentication data, photo attacks can be prevented, and the capability and accuracy of living body recognition are improved.
Referring to fig. 2, fig. 2 is a flowchart illustrating another living body authentication method according to an embodiment of the present application. As shown in fig. 2, the living body authentication method provided in the embodiment of the present application includes:
S201: Input the optical flow feature matrix into the trained living body authentication model, perform spatio-temporal feature extraction on the optical flow feature matrix, and determine the spatio-temporal feature matrix corresponding to it.
In this step, a deep learning model is established and a convolution operation is applied to the optical flow feature matrix to obtain its spatio-temporal features and determine the corresponding spatio-temporal feature matrix.
Optionally, the convolution operation is performed by a three-dimensional convolution module, which includes a linear rectification (ReLU) operation and a three-dimensional max-pooling operation.
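A minimal NumPy sketch of such a module follows: a single-channel valid 3-D convolution, then linear rectification (ReLU), then 3-D max pooling. A real implementation would use a framework layer such as torch.nn.Conv3d, and all shapes here are illustrative:

```python
import numpy as np

def relu(x):
    """Linear rectification (ReLU)."""
    return np.maximum(x, 0.0)

def conv3d(x, k):
    """Naive single-channel 'valid' 3-D convolution of volume x with kernel k."""
    D, H, W = x.shape
    d, h, w = k.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                out[i, j, l] = np.sum(x[i:i + d, j:j + h, l:l + w] * k)
    return out

def maxpool3d(x, s=2):
    """Non-overlapping 3-D max pooling with cubic window s."""
    D, H, W = (n // s * s for n in x.shape)
    x = x[:D, :H, :W]
    return x.reshape(D // s, s, H // s, s, W // s, s).max(axis=(1, 3, 5))

# Stand-in spatio-temporal input volume (time x height x width).
volume = np.arange(6 * 6 * 6, dtype=float).reshape(6, 6, 6)
kernel = np.ones((2, 2, 2)) / 8.0           # averaging kernel
feat = maxpool3d(relu(conv3d(volume, kernel)), s=2)
```

The [6 × 6 × 6] volume becomes [5 × 5 × 5] after the valid convolution and [2 × 2 × 2] after pooling, illustrating how the module condenses spatio-temporal structure.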
S202: Perform spatial feature extraction on the spatio-temporal feature matrix, and determine the spatial feature matrix corresponding to the spatio-temporal feature matrix.
In this step, spatial features are extracted from the spatio-temporal feature matrix determined in step S201, and the spatial feature matrix corresponding to the spatio-temporal feature matrix is determined.
Optionally, a residual network is used to extract the spatial features corresponding to the spatio-temporal feature matrix.
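The core idea of a residual network can be sketched in a few lines; this is an illustrative stand-in for a full 2-D residual network such as ResNet, not the patent's actual architecture:

```python
import numpy as np

def residual_block(x, W):
    """Minimal residual mapping y = x + F(x). The identity shortcut means the
    block only has to learn the residual F, which is what keeps very deep
    spatial feature extractors trainable."""
    fx = np.maximum(x @ W, 0.0)   # F(x): linear transform + ReLU
    return x + fx                 # skip connection
```

With W initialised to zero the block is exactly the identity, so stacking many such blocks cannot degrade the signal; a non-zero W adds a learned correction on top of it.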
S203: Perform temporal feature extraction on the spatial feature matrix, and determine the temporal feature matrix corresponding to the spatial feature matrix.
In this step, the time-series variation features of the spatial feature matrix determined in step S202 are extracted, and the temporal feature matrix corresponding to the spatial feature matrix is determined.
Optionally, the spatial feature matrix is input into a two-layer bidirectional Gated Recurrent Unit (GRU) to extract its variation features over the time sequence, and the output features of the bidirectional GRU are concatenated to obtain the temporal feature matrix corresponding to the spatial feature matrix.
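The temporal branch can be sketched as follows: a single-layer, bias-free bidirectional GRU in NumPy rather than the two-layer framework implementation, with illustrative names and shapes:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_cell(x, h, Wz, Wr, Wh):
    """One GRU step (biases omitted for brevity); gates act on concatenated [h, x]."""
    hx = np.concatenate([h, x])
    z = sigmoid(Wz @ hx)                                # update gate
    r = sigmoid(Wr @ hx)                                # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r * h, x]))  # candidate state
    return (1 - z) * h + z * h_tilde

def bidirectional_gru(seq, params, hidden):
    """Run the sequence forwards and backwards through the same cell and
    concatenate the two output streams at each time step, as described above."""
    fwd, h = [], np.zeros(hidden)
    for x in seq:
        h = gru_cell(x, h, *params)
        fwd.append(h)
    bwd, h = [], np.zeros(hidden)
    for x in reversed(seq):
        h = gru_cell(x, h, *params)
        bwd.append(h)
    bwd.reverse()
    return np.stack([np.concatenate([f, b]) for f, b in zip(fwd, bwd)])

hidden, dim = 3, 4
rng = np.random.default_rng(1)
params = [rng.normal(scale=0.1, size=(hidden, hidden + dim)) for _ in range(3)]
seq = [rng.normal(size=dim) for _ in range(5)]
out = bidirectional_gru(seq, params, hidden)   # shape (sequence length, 2 * hidden)
```

Concatenating the forward and backward streams is what doubles the feature width per time step, matching the "splicing" of the bidirectional GRU outputs described above.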
S204: Apply a normalized exponential function (softmax) and fully connected mapping classification to the temporal feature matrix, and output the classification result of whether the object to be authenticated is a living body.
In this step, the temporal feature matrix is processed by three fully connected layers and a softmax function, and the output classification result is that the object to be authenticated is either live or non-live.
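A sketch of this classification head in NumPy: three fully connected layers (ReLU between them) followed by a softmax over the two classes. The weights, input, and the convention that index 0 is the "live" class are all illustrative assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify(t, weights):
    """Three fully connected layers then softmax; index 0 is taken to be the
    'live' class here (an assumption for illustration)."""
    h = t
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)    # FC + ReLU
    probs = softmax(weights[-1] @ h)  # final FC produces the two class logits
    return ("live" if probs[0] >= probs[1] else "non-live"), probs

label, probs = classify(np.array([2.0, 0.0]),
                        [np.eye(2), np.eye(2), np.eye(2)])
```

The softmax guarantees the two class scores sum to one, so the larger probability directly gives the live / non-live decision.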
Referring to fig. 3, fig. 3 is a flowchart illustrating another living body authentication method according to an embodiment of the present application. As shown in fig. 3, the living body authentication method provided by the embodiment of the present application includes:
S301: Acquire authentication data of an object to be authenticated, and judge whether the authentication data includes image authentication data and audio authentication data.
S302: If the image authentication data and the audio authentication data are included, acquire a plurality of video frames from the image authentication data, and determine, for each video frame, the optical flow map corresponding to that video frame, wherein the optical flow map reflects the muscle change features of the lower half of the face of the living body corresponding to the image authentication data.
S303: Determine the optical flow feature matrix corresponding to the image authentication data according to the determined plurality of optical flow maps corresponding to the image authentication data.
S304: Input the optical flow feature matrix into a trained living body authentication model, and output a classification result of whether the object to be authenticated is a living body, wherein the living body authentication model is trained on the mapping relationship between the lower-face muscle change features of living bodies in historical living body samples and sample audio authentication data.
S305: If the output living body authentication result is a living body, prompt that authentication has passed; if the output result is a non-living body, authentication fails and the user is prompted to verify again.
In this step, the optical flow feature matrix is input into the trained model for living body classification. If the classification result is that the object to be authenticated is a living body, subsequent operations may proceed; if the result is non-living, authentication fails and must be performed again.
If the classification result is that the object to be authenticated is a living body, identity authentication is considered successful and the whole authentication process is complete: the user is notified that authentication has passed, and subsequent processes such as granting login or payment rights can be carried out.
For S301 to S304, reference may be made to the descriptions of S101 to S104; the same technical effects can be achieved and are not repeated here.
The living body authentication method provided by the present application acquires image authentication data and audio authentication data of an object to be authenticated, determines optical flow maps that correspond to the video frames in the image authentication data and reflect the muscle change features of the lower half of the living body's face, and determines a corresponding feature matrix from those optical flow maps. The feature matrix is input into a living body authentication model trained on the mapping relationship between the lower-face muscle change features of living bodies in historical living body samples and sample audio authentication data, so that deep learning is used to recognise the degree of match between the lower-face muscle changes and the audio authentication data and to output the living body classification result. Because a photo attack cannot simulate the lower-face muscle changes corresponding to the audio authentication data, photo attacks can be prevented, and the capability and accuracy of living body recognition are improved.
Referring to fig. 4 and 5, fig. 4 shows a first schematic structural diagram of an authentication device for living bodies according to an embodiment of the present application, and fig. 5 shows a second schematic structural diagram of an authentication device for living bodies according to an embodiment of the present application. As shown in fig. 4, the authentication apparatus 400 includes:
the judging module 410 is configured to acquire authentication data of an object to be authenticated and judge whether the authentication data includes image authentication data and audio authentication data;
the first determining module 420 is configured to, if the image authentication data and the audio authentication data are included, acquire a plurality of video frames from the image authentication data, and determine, for each video frame, the optical flow map corresponding to that video frame, where the optical flow map reflects the muscle change features of the lower half of the face of the living body corresponding to the image authentication data;
the second determining module 430 is configured to determine, according to the determined plurality of optical flow maps corresponding to the image authentication data, the optical flow feature matrix corresponding to the image authentication data;
and the output module 440 is configured to input the optical flow feature matrix into a trained living body authentication model and output a classification result of whether the object to be authenticated is a living body, where the living body authentication model is trained on the mapping relationship between the lower-face muscle change features of living bodies in historical living body samples and sample audio authentication data.
Further, the second determining module 430 is further configured to:
scaling the optical flow map corresponding to each video frame to a preset size to obtain a first optical flow map matrix for the optical flow map;
concatenating the optical flow map matrices for the two mutually perpendicular directions to obtain a second optical flow map matrix for the optical flow map;
and determining the optical flow feature matrix corresponding to the image authentication data according to the second optical flow map matrices of the optical flow maps corresponding to the image authentication data.
Further, the output module 440 is further configured to:
inputting the optical flow feature matrix into the trained living body authentication model, performing spatio-temporal feature extraction on the optical flow feature matrix, and determining the spatio-temporal feature matrix corresponding to the optical flow feature matrix;
performing spatial feature extraction on the spatio-temporal feature matrix, and determining the spatial feature matrix corresponding to the spatio-temporal feature matrix;
performing temporal feature extraction on the spatial feature matrix, and determining the temporal feature matrix corresponding to the spatial feature matrix;
and applying a normalized exponential function (softmax) and fully connected mapping classification to the temporal feature matrix, and outputting the classification result of whether the object to be authenticated is a living body.
Further, the second determining module 430 is further configured to:
determining an authentication area corresponding to the object to be authenticated from the image authentication data;
determining the coordinates of feature points corresponding to the authentication area;
and processing the feature point coordinates with an optical flow method to determine the optical flow map corresponding to the video frame.
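As an illustrative sketch of such an optical flow method, the classic Lucas–Kanade least-squares estimate at given feature point coordinates can be written in NumPy as follows. The function name and toy images are hypothetical; production code would typically use a library routine such as OpenCV's calcOpticalFlowPyrLK:

```python
import numpy as np

def lucas_kanade(prev, curr, points, win=2):
    """Estimate the (dx, dy) flow at each feature point with the Lucas-Kanade
    least-squares solution over a small window around the point."""
    Iy, Ix = np.gradient(prev)   # spatial intensity gradients
    It = curr - prev             # temporal gradient between the two frames
    flows = []
    for (r, c) in points:
        sl = (slice(r - win, r + win + 1), slice(c - win, c + win + 1))
        A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
        b = -It[sl].ravel()
        v, *_ = np.linalg.lstsq(A, b, rcond=None)  # solve A [dx, dy] = b
        flows.append(v)          # (dx, dy) at this feature point
    return np.array(flows)

prev = np.tile(np.arange(20.0), (20, 1))   # horizontal intensity ramp
curr = prev - 1.0                          # the same ramp shifted one pixel right
flow = lucas_kanade(prev, curr, [(10, 10)])
```

For the toy ramp shifted one pixel to the right, the recovered flow at the feature point is approximately (1, 0).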
Further, as shown in fig. 5, the authentication apparatus 400 further includes: a prompt module 450, the prompt module 450 being configured to:
if the output living body authentication result is a living body, prompting that authentication has passed;
and if the output living body authentication result is a non-living body, the authentication fails and the user is prompted to verify again.
The living body authentication device provided by the present application acquires image authentication data and audio authentication data of an object to be authenticated, determines optical flow maps that correspond to the video frames in the image authentication data and reflect the muscle change features of the lower half of the living body's face, and determines a corresponding feature matrix from those optical flow maps. The feature matrix is input into a living body authentication model trained on the mapping relationship between the lower-face muscle change features of living bodies in historical living body samples and sample audio authentication data, so that deep learning is used to recognise the degree of match between the lower-face muscle changes and the audio authentication data and to output the living body classification result. Because a photo attack cannot simulate the lower-face muscle changes corresponding to the audio authentication data, photo attacks can be prevented, and the capability and accuracy of living body recognition are improved.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 6, the electronic device 600 includes a processor 610, a memory 620, and a bus 630.
The memory 620 stores machine-readable instructions executable by the processor 610, when the electronic device 600 runs, the processor 610 communicates with the memory 620 through the bus 630, and when the machine-readable instructions are executed by the processor 610, the steps of the living body authentication method in the method embodiments shown in fig. 1 to fig. 3 may be performed.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the living body authentication method in the method embodiments shown in fig. 1 to 3 may be executed.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present application, and are used for illustrating the technical solutions of the present application, but not limiting the same, and the scope of the present application is not limited thereto, and although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application, and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. An authentication method of a living body, characterized by comprising:
acquiring authentication data of an object to be authenticated, and judging whether the authentication data comprises image authentication data and audio authentication data;
if the authentication data comprises image authentication data and audio authentication data, acquiring a plurality of video frames from the image authentication data, and determining, for each video frame, an optical flow graph corresponding to the video frame, wherein the optical flow graph reflects the muscle change characteristics of the lower half of the living body corresponding to the image authentication data;
determining an optical flow characteristic matrix corresponding to the image authentication data according to the determined multiple optical flow graphs corresponding to the image authentication data;
inputting the optical flow characteristic matrix into a trained living body authentication model, and outputting a classification result of whether the object to be authenticated is a living body, wherein the living body authentication model is obtained by training based on the mapping relation of the muscle variation characteristics of the lower half part of the living body in a historical living body sample and sample audio authentication data;
outputting a classification result of whether the object to be authenticated is a living body according to the following method:
inputting the optical flow characteristic matrix into a trained living body authentication model, performing space-time characteristic extraction on the optical flow characteristic matrix, and determining a space-time characteristic matrix corresponding to the optical flow characteristic matrix;
extracting spatial features of the space-time feature matrix, and determining a spatial feature matrix corresponding to the space-time feature matrix;
extracting time characteristics of the space characteristic matrix, and determining a time characteristic matrix corresponding to the space characteristic matrix;
carrying out normalized exponential function and full-connection mapping classification processing on the time characteristic matrix, and outputting a classification result of whether the object to be authenticated is a living body;
determining a time characteristic matrix corresponding to the space characteristic matrix according to the following method:
and inputting the spatial characteristic matrix into a two-layer bidirectional gated recurrent unit to extract the variation characteristics of the spatial characteristic matrix over the time sequence, and splicing the output features of the bidirectional gated recurrent unit to obtain a time characteristic matrix corresponding to the spatial characteristic matrix.
2. The authentication method according to claim 1, wherein the optical flow feature matrix is determined according to the following method:
scaling the optical flow graph corresponding to each video frame to a preset size to obtain a first optical flow graph matrix corresponding to the optical flow graph;
splicing the optical flow graph matrices corresponding to the optical flow graph in two mutually perpendicular directions to obtain a second optical flow graph matrix corresponding to the optical flow graph;
and determining the optical flow characteristic matrix corresponding to the image authentication data according to the second optical flow graph matrix corresponding to each optical flow graph corresponding to the image authentication data.
3. The authentication method according to claim 1, wherein after the inputting of the optical flow feature matrix into the trained living body authentication model and the outputting of the result of classification of whether the object to be authenticated is a living body, the authentication method comprises:
if the output living body authentication result is a living body, prompting that the authentication is passed;
if the output living body authentication result is a non-living body, the authentication fails, and the verification is prompted to be carried out again.
4. The authentication method according to claim 1, wherein for each of said video frames, the optical flow graph corresponding to said video frame is determined according to the following steps:
determining an authentication area corresponding to the object to be authenticated from the image authentication data;
determining the feature point coordinates corresponding to the authentication area;
and processing the feature point coordinates by using an optical flow method, and determining an optical flow graph corresponding to the video frame.
5. An authentication apparatus of a living body, characterized by comprising:
the judging module is used for acquiring authentication data of an object to be authenticated and judging whether the authentication data comprises image authentication data and audio authentication data;
a first determining module, configured to acquire, if the image authentication data and the audio authentication data are included, a plurality of video frames from the image authentication data, and to determine, for each video frame, an optical flow graph corresponding to the video frame, wherein the optical flow graph reflects the muscle change characteristics of the lower half of the living body corresponding to the image authentication data;
the second determining module is used for determining an optical flow characteristic matrix corresponding to the image authentication data according to the determined multiple optical flow graphs corresponding to the image authentication data;
the output module is used for inputting the optical flow characteristic matrix into a trained living body authentication model and outputting a classification result of whether the object to be authenticated is a living body, wherein the living body authentication model is obtained by training based on the mapping relation of the muscle change characteristics of the lower half part of the living body in a historical living body sample and sample audio authentication data;
the output module is further configured to:
inputting the optical flow characteristic matrix into a trained living body authentication model, performing space-time characteristic extraction on the optical flow characteristic matrix, and determining a space-time characteristic matrix corresponding to the optical flow characteristic matrix;
extracting spatial features of the space-time feature matrix, and determining a spatial feature matrix corresponding to the space-time feature matrix;
extracting time characteristics of the space characteristic matrix, and determining a time characteristic matrix corresponding to the space characteristic matrix;
carrying out normalized exponential function and full-connection mapping classification processing on the time characteristic matrix, and outputting a classification result of whether the object to be authenticated is a living body;
determining a time characteristic matrix corresponding to the space characteristic matrix according to the following method:
and inputting the spatial characteristic matrix into a two-layer bidirectional gated recurrent unit to extract the variation characteristics of the spatial characteristic matrix over the time sequence, and splicing the output features of the bidirectional gated recurrent unit to obtain a time characteristic matrix corresponding to the spatial characteristic matrix.
6. The authentication apparatus of claim 5, wherein the second determining module is further configured to:
scaling the optical flow graph corresponding to each video frame to a preset size to obtain a first optical flow graph matrix corresponding to the optical flow graph;
splicing the optical flow graph matrices corresponding to the optical flow graph in two mutually perpendicular directions to obtain a second optical flow graph matrix corresponding to the optical flow graph;
and determining the optical flow characteristic matrix corresponding to the image authentication data according to the second optical flow graph matrix corresponding to each optical flow graph corresponding to the image authentication data.
7. The authentication device of claim 5, further comprising a prompting module to:
if the output living body authentication result is a living body, prompting that the authentication is passed;
if the output living body authentication result is a non-living body, the authentication fails, and the verification is prompted to be carried out again.
8. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the authentication method of a living body according to any one of claims 1 to 4.
9. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when being executed by a processor, performs the steps of the authentication method of a living body according to any one of claims 1 to 4.
CN202110213304.6A 2021-02-26 2021-02-26 Living body authentication method and device and electronic equipment Active CN112580615B (en)

Publications (2)

Publication Number Publication Date
CN112580615A CN112580615A (en) 2021-03-30
CN112580615B (en) 2021-06-18


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103605958A (en) * 2013-11-12 2014-02-26 北京工业大学 Living body human face detection method based on gray scale symbiosis matrixes and wavelet analysis
CN109241834A (en) * 2018-07-27 2019-01-18 中山大学 A kind of group behavior recognition methods of the insertion based on hidden variable
CN109858381A (en) * 2019-01-04 2019-06-07 深圳壹账通智能科技有限公司 Biopsy method, device, computer equipment and storage medium
CN110765839A (en) * 2019-09-02 2020-02-07 合肥工业大学 Multi-channel information fusion and artificial intelligence emotion monitoring method for visible light facial image
CN111792034A (en) * 2015-05-23 2020-10-20 深圳市大疆创新科技有限公司 Method and system for estimating state information of movable object using sensor fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005352900A (en) * 2004-06-11 2005-12-22 Canon Inc Device and method for information processing, and device and method for pattern recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Survey of Micro-expression Recognition Research; Zhang Ren, He Ning; Computer Engineering and Applications; 2021-01-01; Vol. 57, No. 1; pp. 39, 41-42 *

Also Published As

Publication number Publication date
CN112580615A (en) 2021-03-30

Similar Documents

Publication Publication Date Title
JP6878572B2 (en) Authentication based on face recognition
CN109948408B (en) Activity test method and apparatus
CN106599772B (en) Living body verification method and device and identity authentication method and device
CN108804884B (en) Identity authentication method, identity authentication device and computer storage medium
US20200380279A1 (en) Method and apparatus for liveness detection, electronic device, and storage medium
US9652663B2 (en) Using facial data for device authentication or subject identification
EP2580711B1 (en) Distinguishing live faces from flat surfaces
CN110163053B (en) Method and device for generating negative sample for face recognition and computer equipment
CN112580615B (en) Living body authentication method and device and electronic equipment
EP2870562A1 (en) Continuous multi-factor authentication
KR20170002892A (en) Method and apparatus for detecting fake fingerprint, method and apparatus for recognizing fingerprint
CN107609364B (en) User identity confirmation method and device
CN112507889A (en) Method and system for verifying certificate and certificate holder
CN108389053B (en) Payment method, payment device, electronic equipment and readable storage medium
US9965612B2 (en) Method and system for visual authentication
CN111091031A (en) Target object selection method and face unlocking method
CN113642639A (en) Living body detection method, living body detection device, living body detection apparatus, and storage medium
KR101725219B1 (en) Method for digital image judging and system tereof, application system, and authentication system thereof
JP7264308B2 (en) Systems and methods for adaptively constructing a three-dimensional face model based on two or more inputs of two-dimensional face images
CN110163164B (en) Fingerprint detection method and device
WO2022222957A1 (en) Method and system for identifying target
CN112712073A (en) Eye change feature-based living body identification method and device and electronic equipment
CN111680191A (en) Information display method, device, equipment and storage device
KR20170076894A (en) Method for digital image judging and system tereof, application system, and authentication system thereof
KR20200127818A (en) Liveness test method and liveness test apparatus, biometrics authentication method and face authentication apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant