CN115035608A - Living body detection method, device, equipment and system

Living body detection method, device, equipment and system

Info

Publication number
CN115035608A
CN115035608A (application CN202210579985.2A)
Authority
CN
China
Prior art keywords
brain wave
model
living body
emotion
target detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210579985.2A
Other languages
Chinese (zh)
Inventor
曹佳炯
丁菁汀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202210579985.2A priority Critical patent/CN115035608A/en
Publication of CN115035608A publication Critical patent/CN115035608A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70 Multimodal biometrics, e.g. combining information from different biometric modalities

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The present specification provides a living body detection method, device, equipment and system. A biometric image of a target detection object is collected, a brain wave signal of the target detection object is predicted from the collected biometric image, an emotional characteristic of the target detection object is computed from the brain wave signal, and whether the target detection object is a living object is identified based on the brain wave signal and the emotional characteristic. The method exploits an essential difference between a real user and an attack medium, namely that only a real user produces brain wave signals and emotional states, and can thereby fundamentally improve the security capability of living body anti-attack methods: even an unseen attack type is intercepted when the predicted brain wave and emotional state signals are abnormal. This improves the accuracy of living body detection and thus the security of biometric identification.

Description

Living body detection method, device, equipment and system
Technical Field
The present disclosure relates to computer technologies, and in particular, to a method, an apparatus, a device, and a system for detecting a living body.
Background
With the development of computer and Internet technology, applications of face recognition and other biometric technologies continue to grow, such as face-brushing payment, face-brushing login, face-brushing attendance checking, and face-brushing identity authentication for travel. Biometric identification requires collecting a biometric image and then performing identity authentication on that image.
As face recognition becomes more widespread, the social impact of its potential safety hazards also becomes more serious. One of the main risks threatening the security of face recognition is the liveness attack: face recognition is bypassed through media such as photos, screens and masks, and the victim's account is stolen.
Therefore, how to design a living body detection scheme that can prevent liveness attacks during biometric identification and improve the security of biometric identification is a technical problem to be solved in this field.
Disclosure of Invention
An object of the embodiments of the present specification is to provide a method, an apparatus, a device, and a system for living body detection that improve the accuracy of living body detection and thereby the security of biometric identification.
In one aspect, embodiments of the present specification provide a method for detecting a living body, the method including:
collecting a biological characteristic image of a target detection object during biological identification;
predicting brain wave signals of the target detection object based on the biological feature image;
determining emotional characteristics of the target detection object according to the brain wave signal;
and determining whether the target detection object is a living object according to the emotional characteristic.
In another aspect, the present specification provides a living body detection apparatus comprising:
the image acquisition module is used for acquiring a biological characteristic image of a target detection object during biological identification;
the brain wave information prediction module is used for predicting brain wave signals of the target detection object based on the biological characteristic image;
the emotion characteristic recognition module is used for determining the emotion characteristics of the target detection object according to the brain wave signals;
and the living body detection module is used for determining whether the target detection object is a living body object according to the emotional characteristics.
In another aspect, embodiments of the present specification provide a living body detection device, including at least one processor and a memory storing processor-executable instructions, wherein the instructions, when executed by the processor, implement the living body detection method described above.
In another aspect, an embodiment of the present specification provides a living body detection system including an image acquisition device and an image processing device. The image acquisition device collects a biometric image of a target detection object for biometric identification; the image processing device includes at least one processor and a memory storing processor-executable instructions which, when executed by the processor, implement the living body detection method described above.
The living body detection method, device, equipment and system provided by the present specification collect a biometric image of a target detection object, predict a brain wave signal of the target detection object from the collected biometric image, compute an emotional characteristic of the target detection object from the brain wave signal, and identify whether the target detection object is a living object based on the brain wave signal and the emotional characteristic. The method exploits an essential difference between a real user and an attack medium, namely that only a real user produces brain wave signals and emotional states, and can thereby fundamentally improve the security capability of living body anti-attack methods: even an unseen attack type is intercepted when the predicted brain wave and emotional state signals are abnormal. This improves the accuracy of living body detection and thus the security of biometric identification.
Drawings
To more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present specification; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a method for detecting a living body according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart illustrating the principle of liveness detection in an example scenario of the present disclosure;
FIG. 3 is a block diagram of an embodiment of a living body detection apparatus provided in the present specification;
fig. 4 is a block diagram of a hardware configuration of the liveness detection server in one embodiment of the present specification.
Detailed Description
To help those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present specification, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step shall fall within the scope of protection of the present specification.
With the continuing popularization of biometric technology, its security receives more and more attention. The main threats to biometric security are liveness attacks, for example disguising with a photo, a screen or a mask during face recognition in order to steal the information of a face recognition user. Typical biometric systems such as face recognition systems integrate an anti-attack algorithm to perform living body detection, but such algorithms often use only a simple living body detection model to classify face recognition users, which generally has weak security capability and weak interception capability for attack types that do not appear in the training data. Alternatively, some algorithms require the user to perform a series of actions, such as blinking or nodding, and raise the attack difficulty through the completion of these actions. However, such methods are typically used in more private, high-security scenarios (e.g., banks) and offer a poor experience in high-traffic commercial scenarios (e.g., vending machines), so they are difficult to apply broadly.
Some embodiments of the present specification provide a living body detection method, mainly applied to living body detection during face recognition, to prevent liveness attacks from causing theft of user information and the like. The embodiments predict a brain wave signal of a target detection object based on a collected biometric image, compute an emotional characteristic of the target detection object from the brain wave signal, and identify whether the target detection object is a living object based on the brain wave signal and the emotional characteristic. This approach exploits the essential difference (namely brain wave signals and emotional state) between a real user and an attack medium (such as a static photo), can fundamentally improve the security capability of living body anti-attack methods, and intercepts even unseen attacks through abnormal brain wave and emotional state signals, thereby improving the security of biometric identification.
In addition, it should be noted that in the embodiments of the present specification, the acquisition, storage, use and processing of data involved in face recognition and living body detection all comply with the relevant provisions of national laws and regulations.
Fig. 1 is a schematic flow chart of an embodiment of a living body detection method provided in an embodiment of the present specification. Although the present specification provides method steps or apparatus structures as shown in the following examples or figures, more or fewer steps or modules may be included based on conventional or non-inventive effort. For steps or structures that have no logically necessary causal relationship, the execution order of the steps or the module structure of the apparatus is not limited to that shown in the embodiments or drawings of this specification. When the described method or module structure is applied in practice to a device, a server, or an end product, it may be executed sequentially or in parallel according to the embodiments or the figures (for example, in a parallel-processor or multi-threaded environment, or even in a distributed or server-cluster environment).
In a specific embodiment of the living body detection method provided in this specification, as shown in fig. 1, the method may be applied to a server, a computer, a tablet computer, a smart phone, a smart wearable device, an in-vehicle device, a smart home device, or other devices capable of image processing, and may include the following steps:
and 102, acquiring a biological characteristic image of the target detection object during biological identification.
In a specific implementation, during biometric identification such as face recognition, a camera in the biometric device can collect a biometric image of the target detection object being identified; the target detection object is the object undergoing biometric identification. The biometric image is determined by the type of biometric identification and may be image information containing a human face, pupil information, fingerprint information, and so on. It may be a picture or a video, as determined by actual needs; the embodiments of the present specification are not specifically limited in this respect.
In some embodiments of the present specification, the acquiring a biometric image of a target detection object during biometric identification includes:
and acquiring image information of the target detection object within a specified time after the start of biological identification, and selecting a specified number of images from the image information as the biological characteristic images of the target detection object.
In a specific implementation process, when acquiring the biometric image of the target detection object, image information within a specified time after biometric identification starts may be acquired. For example, when the user performs face recognition through an IoT device, about 5 s of face data may be acquired from the start of recognition; this may be 5 s of video data or 5 s of continuously shot photos. A specified number of images, for example 10, are then selected from this image information as the biometric images of the target detection object. High-definition biometric images may be acquired, for example with a face-region resolution greater than 400 × 400; the face region used by a typical face recognition algorithm is generally about 100 × 100 and carries less information, so collecting high-definition biometric images can improve the accuracy of living body detection and biometric identification.
By acquiring image information over a fixed time window and selecting a specified number of images as biometric images, the uniformity and comprehensiveness of the biological sampling is ensured, providing a more accurate data basis for living body detection and biometric identification and improving their accuracy.
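The frame-selection step above (a fixed capture window, then a specified number of evenly spaced images) can be sketched as follows; `select_frames` is a hypothetical helper name, and the placeholder integers stand in for real image frames:

```python
def select_frames(frames, count=10):
    """Evenly sample `count` frames from a capture window so the
    selected biometric images cover the whole acquisition period
    uniformly. `frames` is the full list of captured images,
    e.g. every frame of ~5 s of video."""
    if len(frames) <= count:
        return list(frames)
    step = len(frames) / count  # fractional stride keeps spacing even
    return [frames[int(i * step)] for i in range(count)]
```

For a 30 fps, 5 s capture (150 frames), this yields one frame every 15 frames, giving the uniform coverage the passage describes.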
Step 104: predict the brain wave signal of the target detection object based on the biometric image.
In a specific implementation process, after the biometric image of the target detection object is acquired, the brain wave signal of the target detection object can be predicted from the biometric image. Brain waves (electroencephalogram signals) are bioelectrical signals generated when information is transmitted between neuronal cells; they are electrical signals produced by the ion exchange that occurs when the synapses of pyramidal cells in the cerebral cortex are activated. Here, a brain wave signal is understood as a signal representing the brain activity of the subject; it may be a signal curve or a numerically represented fluctuation signal. The predicted brain wave signal serves as a basis for subsequently judging whether the target detection object is a living object. The prediction can be realized with an intelligent learning algorithm: for example, biometric images of different users and their corresponding brain wave signals are collected in advance for model training, and the trained brain wave prediction model is then used to predict the brain wave signal from the biometric images of the target detection object. Alternatively, expert experience may be utilized: experts analyze and classify the brain wave signals corresponding to the biometric images of different users to obtain the relation between images and signals, which is then used to predict the brain wave signal of the target detection object. The prediction method is not specifically limited in the embodiments of the present specification.
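As a rough illustration of the image-to-signal interface such a trained brain wave prediction model would expose, here is a toy stand-in. The class name is hypothetical, and the pixel-averaging "model" is purely illustrative; the specification's model would be a trained network, not this heuristic:

```python
class BrainWavePredictor:
    """Toy stand-in for a trained brain wave prediction model: maps a
    sequence of biometric images to a fixed-length predicted brain
    wave signal. Each output sample here is just a per-frame mean
    pixel value stretched to `signal_len` samples, which is enough
    to show the image -> signal shape of the interface."""

    def __init__(self, signal_len=64):
        self.signal_len = signal_len

    def predict(self, images):
        # images: list of 2-D pixel grids (lists of lists of floats)
        if not images:
            return [0.0] * self.signal_len
        means = [sum(sum(row) for row in img) / (len(img) * len(img[0]))
                 for img in images]
        # stretch the per-frame means into signal_len signal samples
        return [means[int(i * len(means) / self.signal_len)]
                for i in range(self.signal_len)]
```

Note that a uniform zero image maps to an all-zero signal, matching the later observation that attack media (photos, screens) should yield a zero predicted signal.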
Step 106: determine the emotional characteristics of the target detection object according to the brain wave signal.
In a specific implementation process, different brain wave signals reflect different user emotions: for example, when the user is happy, angry or nervous, the brain wave signal fluctuates strongly, whereas when the user is calm, the fluctuation is small. An emotional characteristic is understood as feature data capable of characterizing the emotion of the detected object, and may include an emotion category or other feature data; the embodiments of the present specification are not specifically limited. The emotional characteristic of the target detection object may be determined from the brain wave signal, for example by training an emotion perception model with an intelligent learning algorithm, or by using expert experience to interpret the brain wave signal obtained in the previous step. For instance, brain wave signals corresponding to several emotion types may be obtained in advance; the brain wave signal of the target detection object is compared with the reference signals of the different emotion types, and the type with the highest similarity is taken as the emotion type of the target detection object. In the embodiments of the present specification, an affective computing algorithm may be used to perform emotion computation on the detected object: affective computing generally means that a computer perceives and responds to human emotions and states algorithmically, and here the emotion and state of a user are specifically perceived from the face image.
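The reference-signal comparison described above (take the emotion type whose brain wave reference is most similar) can be sketched as a nearest-reference classifier. The function name and the squared-distance similarity metric are illustrative assumptions, not details given by the specification:

```python
def classify_emotion(signal, references):
    """Nearest-reference emotion classification: compare the predicted
    brain wave signal against per-emotion reference signals and return
    the closest category. `references` maps emotion name -> reference
    signal of the same length as `signal`."""
    def dist(a, b):
        # squared Euclidean distance; smaller means more similar
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda name: dist(signal, references[name]))
```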
In some embodiments of the present specification, before determining the emotional characteristic of the target detection object according to the brain wave signal, the method further includes:
and if the brain wave signal of the target detection object is predicted to be 0 based on the biological characteristic image, determining that the target detection object is not a living object.
In a specific implementation process, after the brain wave signal of the target detection object is predicted, if the predicted signal is 0, the target detection object has no brain wave signal and may be an object without brain activity, such as a photograph, a mask, or a screen. In that case it is directly determined that the target detection object is not a living object, the subsequent living body detection process is not required, and a warning can be issued. This early interception through brain wave prediction improves living body detection efficiency and the security of biometric identification.
Step 108: determine whether the target detection object is a living object according to the emotional characteristics.
In a specific implementation process, after the emotional characteristic of the target detection object is obtained from the brain wave signal, whether the target detection object is a living object may be determined from that characteristic: for example, if the emotional fluctuation of the target detection object is relatively large, it is determined to be a living object; if the fluctuation is relatively small, it is determined to be a non-living object. Alternatively, the brain wave signal and the emotional characteristic may be evaluated together, or a living body detection model may be trained with an intelligent learning algorithm and then used to perform living body detection based on the emotional characteristic, the brain wave signal, and so on, predicting the probability that the target detection object is a liveness attack and deciding from that probability whether the object is a living object.
In the living body detection method provided in the embodiments of the present specification, a biometric image of a target detection object is acquired, a brain wave signal of the target detection object is predicted from the acquired biometric image, an emotional characteristic of the target detection object is computed from the brain wave signal, and whether the target detection object is a living object is identified based on the brain wave signal and the emotional characteristic. The method exploits an essential difference between a real user and an attack medium, namely that only a real user produces brain wave signals and emotional states, and can thereby fundamentally improve the security capability of living body anti-attack methods: even an unseen attack type is intercepted when the predicted brain wave and emotional state signals are abnormal. This improves the accuracy of living body detection and thus the security of biometric identification.
In some embodiments of the present description, the method further comprises:
pre-training and constructing a brain wave prediction model and an emotion perception model;
taking the output of the brain wave prediction model and the output of the emotion perception model as training input data of a living body detection model, and training and constructing the living body detection model;
predicting brain wave signals of the target detection object based on the biological characteristic images by using a trained brain wave prediction model;
determining the emotional characteristics of the target detection object according to the brain wave signals by using a trained emotional perception model;
and determining whether the target detection object is a living object or not according to the brain wave signal and the emotional characteristic by using the trained living body detection model.
In a specific implementation process, a brain wave prediction model and an emotion perception model may be trained in advance. For example, biometric images of a certain number of historical users during biometric identification, together with the corresponding brain wave signals and emotional states, are collected as sample data, and the two models are trained on them. The living body detection model is then trained using the trained brain wave prediction model and emotion perception model: the outputs of these two trained models serve as the training-sample inputs of the living body detection model. After all models are trained, living body detection of a target detection object proceeds as follows: the acquired biometric image of the target detection object is input into the trained brain wave prediction model to predict the brain wave signal; the brain wave signal is input into the trained emotion perception model to obtain the emotional characteristic; finally, the emotional characteristic (or the emotional characteristic together with the brain wave signal) is input into the trained living body detection model, which outputs the probability that the target detection object is a living object, from which it is determined whether the object is a living object, completing living body detection.
Model training can be completed offline at a server. With the pre-trained intelligent learning models, the brain wave signal and emotional characteristic of a detected object, and its probability of being a living object, can be obtained rapidly, which improves data processing efficiency.
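The three-stage inference described above can be sketched as a pipeline over three model callables. All names and the 0.5 decision threshold are illustrative assumptions rather than details from the specification:

```python
def liveness_pipeline(images, predict_signal, perceive_emotion,
                      attack_probability, threshold=0.5):
    """Chain the three trained models described above.

    predict_signal:     images -> predicted brain wave signal
    perceive_emotion:   signal -> emotional characteristic vector
    attack_probability: (signal, emotion feature) -> attack probability
    Returns True when the target detection object is judged live."""
    signal = predict_signal(images)
    # Early rejection: a zero brain wave signal means no brain activity.
    if all(abs(x) < 1e-6 for x in signal):
        return False
    emotion_feature = perceive_emotion(signal)
    # The living body detection model outputs a liveness-attack probability.
    return attack_probability(signal, emotion_feature) < threshold
```

A usage sketch with trivial stand-in models: a zero predicted signal is rejected at the guard, while a fluctuating signal proceeds through emotion perception to the final threshold decision.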
In some embodiments of the present specification, the method for constructing the brain wave prediction model includes:
setting model parameters of a brain wave prediction model, wherein the model parameters comprise a network structure and a loss function of the brain wave prediction model;
acquiring a living body biological characteristic sample image of a living body object and a corresponding living body brain wave sample signal;
acquiring attack biological characteristic sample images of different types of attack objects and corresponding attack brain wave sample signals, wherein the attack brain wave sample signals are 0;
and taking the living body biological characteristic sample image and the attack biological characteristic sample image as the input of the brain wave prediction model, taking the living body brain wave sample signal and the attack brain wave sample signal as the corresponding output of the brain wave prediction model, and carrying out model training until the loss function of the brain wave prediction model is converged, thereby finishing the training of the brain wave prediction model.
In a specific implementation process, the model parameters of the brain wave prediction model may be set first, such as the model structure and the type of loss function; these can be set according to actual needs and are not specifically limited in the embodiments of the present specification. A certain number of living biometric sample images of living objects (real users) and the corresponding living brain wave sample signals are then collected: real users wear a brain wave collector while performing biometric identification such as face recognition, and the biometric images together with the brain wave signals recorded by the worn collector serve as the living biometric sample images and living brain wave sample signals. Next, samples are collected from different types of attack objects, such as photos, screens and masks, during biometric identification, yielding the attack biometric sample images and attack brain wave sample signals; since an attack sample has no brain activity, its attack brain wave sample signal is 0. The brain wave prediction model can then be trained with the real-user samples as white (positive) samples and the attack-object samples as black (negative) samples.
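The white/black sample assembly described above can be sketched as follows; the helper name, the placeholder sample data, and the fixed signal length are illustrative assumptions:

```python
def build_training_set(live_samples, attack_images, signal_len=8):
    """Assemble the supervised pairs described above: each live sample
    pairs a biometric image with the brain wave signal recorded by the
    worn collector (white sample); each attack image is paired with an
    all-zero target signal (black sample), since attack media such as
    photos, screens and masks produce no brain activity."""
    pairs = [(image, signal) for image, signal in live_samples]
    pairs += [(image, [0.0] * signal_len) for image in attack_images]
    return pairs
```

The resulting (image, target signal) pairs are what the brain wave prediction model would be trained on until its loss converges.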
In some embodiments of the present description, the acquiring a living body biological characteristic sample image of a living body object and a corresponding living body brain wave sample signal includes:
selecting living objects of different ages as sample objects, wherein each sample object is worn with a brain wave collector;
and collecting a living body biological sample image of each sample object when biological identification is carried out under different lighting conditions and a living body brain wave sample signal collected by a brain wave collector.
In a specific implementation process, when acquiring the living biometric sample images and corresponding living brain wave sample signals, living objects of different ages can be selected as sample objects, each wearing a brain wave collector during biometric identification. The living biometric sample image of each sample object under different lighting conditions, and the living brain wave sample signal recorded by the collector, are then acquired. Because the sample objects cover different age groups and lighting conditions, the samples better match the conditions of real users, which improves the accuracy of model training.
By collecting the biometric images and brain wave signals of real users and attack objects during biometric identification as sample data, training the brain wave prediction model on them, and then using the trained model to predict the brain wave signal of a detection object, a data foundation is laid for living body detection.
In some embodiments of the present specification, the method for training the emotion perception model includes:
setting model parameters of an emotion perception model, wherein the model parameters comprise a network structure and a loss function of the emotion perception model;
collecting a living brain wave sample signal of a living object, and determining an emotion type corresponding to the living object;
and taking the living brain wave sample signal as the input of the emotion perception model, taking the emotion category and the corresponding emotion characteristic as the corresponding output of the emotion perception model, and performing model training until the loss function of the emotion perception model is converged, thereby finishing the training of the emotion perception model.
In a specific implementation process, the emotion perception model may be trained in a manner similar to the training process of the brain wave prediction model: the model parameters of the emotion perception model are set first, living brain wave sample signals of living objects are collected, and the emotion category corresponding to each living object is determined, such as: happy, nervous, angry, or calm. A living subject may be asked to perform biological recognition under a pre-designated emotion category, so that the corresponding brain wave sample signal is acquired. The collected living brain wave sample signals are used as the input of the emotion perception model, the corresponding emotion categories and emotion features as its output, and model training is performed until the loss function of the emotion perception model converges, completing the training of the emotion perception model. An emotion feature can be understood as the feature vector produced by the emotion perception model one step before the emotion category is output: the model processes the input data into an emotion feature vector and identifies the corresponding emotion category based on that vector. For example, the emotion feature may be a 512-dimensional vector. The trained emotion perception model can quickly perceive the emotion of the target detection object, laying a data foundation for subsequent living body detection.
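A minimal, untrained sketch of such an emotion perception head follows; the input size is an assumption, the 512-dimensional feature matches the example above, and the weights are random placeholders rather than trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: brain wave sequence length, emotion-feature width,
# and the four illustrative categories (happy, nervous, angry, calm).
SEQ_LEN, FEAT_DIM, N_EMOTIONS = 1250, 512, 4

W1 = rng.standard_normal((SEQ_LEN, FEAT_DIM)) * 0.01
W2 = rng.standard_normal((FEAT_DIM, N_EMOTIONS)) * 0.01

def perceive_emotion(brainwave):
    """One forward pass: the penultimate activation is the 512-dimensional
    'emotion feature' vector described above; the argmax over the final
    logits gives the emotion category."""
    feature = np.maximum(np.asarray(brainwave) @ W1, 0.0)  # ReLU hidden layer
    logits = feature @ W2
    return feature, int(np.argmax(logits))
```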
In some other embodiments of the present specification, the method for training the emotion perception model may further include:
setting model parameters of an emotion perception model, wherein the model parameters comprise a network structure and a loss function of the emotion perception model;
inputting the collected biological characteristic sample images into the trained brain wave prediction model, and predicting corresponding predicted brain wave signals by using the brain wave prediction model;
clustering the predicted brain wave signals to determine the emotion category corresponding to each predicted brain wave signal;
and taking the predicted brainwave signal as the input of the emotion perception model, taking the emotion category and the corresponding emotion characteristic as the corresponding output of the emotion perception model, and carrying out model training until the loss function of the emotion perception model is converged, thereby finishing the training of the emotion perception model.
In a specific implementation process, in some embodiments of the present specification, the input sample data for training the emotion perception model may be generated with the trained brain wave prediction model: a collected biological characteristic sample image, such as a face region image, is input into the trained brain wave prediction model, which predicts the corresponding predicted brain wave signal. Biometric images of subjects for which no brain wave signals were recorded may be used, such as face images of real users collected during face recognition without a worn brain wave collector; the trained brain wave prediction model then predicts the brain wave signal corresponding to each face image. Alternatively, the biological characteristic images acquired during the training of the brain wave prediction model may be reused directly as training samples, with the brain wave prediction model predicting the corresponding brain wave signals. Cluster analysis is then performed on the predicted brain wave signals output by the brain wave prediction model to obtain a number of emotion categories, and the emotion category corresponding to each predicted brain wave signal is used as a label for the subsequent emotion perception model training. The predicted brain wave signals serve as the input of the emotion perception model, the corresponding emotion categories and emotion features as its output, and model training is performed until the loss function of the emotion perception model converges, completing the training of the emotion perception model. The meaning of the emotion feature here is the same as that of the emotion feature output during the training of the emotion perception model in the above embodiment and is not described again.
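The clustering step that derives pseudo-emotion labels can be sketched as follows. This is a plain k-means stand-in (the scenario examples later in this specification use hierarchical clustering), and the number of emotion classes is an illustrative assumption; any stable partition of the signals yields usable labels:

```python
import numpy as np

def cluster_emotions(signals, n_emotions=4, n_iter=20, seed=0):
    """Group predicted brain wave signals into pseudo-emotion classes whose
    cluster indices serve as training labels for the emotion perception
    model."""
    signals = np.asarray(signals, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize centers from random records, then iterate assign/update.
    centers = signals[rng.choice(len(signals), n_emotions, replace=False)].copy()
    for _ in range(n_iter):
        dists = ((signals[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_emotions):
            if np.any(labels == k):
                centers[k] = signals[labels == k].mean(axis=0)
    return labels  # one pseudo-emotion label per predicted signal
```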
In the embodiments of the present specification, the trained brain wave prediction model is used to train the emotion perception model. Subsequently, during living body detection, the brain wave signal of the target detection object is predicted by the brain wave prediction model, and the emotion perception model then performs emotion recognition on the brain wave signal predicted by the brain wave prediction model.
In some embodiments of the present specification, the training, with the output of the brain wave prediction model and the output of the emotion perception model as training input data of a living body detection model, to construct the living body detection model includes:
setting model parameters of a living body detection model, wherein the model parameters comprise a network structure and a loss function of the living body detection model;
collecting living body biological characteristic sample images of living body objects and attack biological characteristic sample images of attack objects of different classes, and marking attack probability labels corresponding to the sample images;
respectively predicting the living biological characteristic sample image and a predicted brain wave signal corresponding to the attack biological characteristic sample image by using a trained brain wave prediction model;
determining the living biological characteristic sample image and the emotional characteristic corresponding to the attack biological characteristic sample image by utilizing a predicted brain wave signal predicted by a trained emotion perception model based on the brain wave prediction model;
and taking the predicted brain wave signal output by the brain wave prediction model and the corresponding emotion characteristic output by the emotion perception model as the input of the living body detection model, taking the attack probability label corresponding to each sample image as the output of the living body detection model, and performing model training until the loss function of the living body detection model is converged, thereby finishing the training of the living body detection model.
In a specific implementation process, after the brain wave prediction model and the emotion perception model are trained, their outputs can be used to train the living body detection model. After the model parameters of the living body detection model are set, living body biological characteristic sample images of living objects and attack biological characteristic sample images of different types of attack objects are collected, and the attack probability label corresponding to each sample image is marked, such as: 0 for a living body biological characteristic sample image and 1 for an attack biological characteristic sample image. The biological characteristic images acquired during brain wave prediction model training may also be reused directly as training samples for the living body detection model. The collected living body and attack biological characteristic sample images are input into the brain wave prediction model, which predicts the corresponding predicted brain wave signals; the emotion perception model then performs emotion recognition on the predicted brain wave signals to determine the emotion feature corresponding to each living body and attack biological characteristic sample image.
After the predicted brain wave signal and the emotion feature are obtained, the predicted brain wave signal and the emotion feature can be used as the input of a living body detection model, the attack probability labels corresponding to the sample images are used as the output of the living body detection model, model training is carried out until the loss function of the living body detection model is converged, and the training of the living body detection model is completed.
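A minimal sketch of this training step follows, assuming a logistic-regression head in place of the MLP described later; the feature layout (predicted signal concatenated with the emotion feature) follows the text above, while the hyperparameters are illustrative:

```python
import numpy as np

def train_liveness_head(X, y, lr=0.1, epochs=200, seed=0):
    """SGD training of a logistic-regression stand-in for the living body
    detection model. Each row of X is a predicted brain wave signal
    concatenated with its emotion feature; y holds the attack probability
    labels (0 = live, 1 = attack). Training runs until the fixed epoch
    budget, standing in for 'until the loss function converges'."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))  # attack probability
            g = p - y[i]  # gradient of cross-entropy loss w.r.t. the logit
            w -= lr * g * X[i]
            b -= lr * g
    return w, b

def attack_probability(x, w, b):
    """Score one [signal ‖ emotion-feature] row with the trained head."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))
```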
The embodiments of the present specification train the living body detection model with the trained brain wave prediction model and emotion perception model. Because this training is consistent with the living body detection process itself, it better matches the requirements of the living body detection scenario, yields higher model training accuracy, and improves the accuracy of living body detection.
In some embodiments of the present disclosure, the model structure of the brain wave prediction model adopts a lightweight network structure, and the model structures of the emotion sensing model and the living body detection model adopt a multilayer perceptron structure.
In a specific implementation process, the model structure of the brain wave prediction model may adopt a lightweight network structure such as MobileNetV2. The MobileNetV2 structure is based on inverted residual blocks: in a classic residual bottleneck, the main branch has three convolutions and the two point-wise convolutions carry a large number of channels; an inverted residual block is just the opposite, with a large number of channels in the middle convolution (which still uses a depthwise separable convolution structure) and a small number at the two ends. The model structures of the emotion perception model and the living body detection model may adopt a multilayer perceptron (MLP) structure, a feed-forward artificial neural network model that maps a set of input data onto a set of outputs. The brain wave prediction model is thus structurally simple and lightweight, which reduces computation cost, while the emotion perception model and the living body detection model use slightly more complex structures that can improve the accuracy of emotion perception and living body detection.
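The channel-width contrast described above can be illustrated with a small helper; the concrete input channel count and expansion ratio here are assumed values for illustration, not taken from the specification:

```python
def inverted_residual_shapes(c_in=24, expand_ratio=6):
    """Channel widths along the main branch of a MobileNetV2-style inverted
    residual block: narrow at both ends (pointwise expand / project) and
    wide in the middle (depthwise 3x3) — the reverse of a classic residual
    bottleneck, which is wide at the ends and narrow inside."""
    c_mid = c_in * expand_ratio
    # 1x1 expand output, depthwise 3x3 (width unchanged), 1x1 project output
    return [c_in, c_mid, c_mid, c_in]
```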
Fig. 2 is a schematic flow chart of a principle of live body detection in a scenario example of the present specification, and as shown in fig. 2, in some scenario examples of the present specification, a process of live body detection mainly includes 5 steps, and specific processes may refer to the following:
1. training data acquisition: brain wave data and face region changes during the face recognition stage are collected using a brain wave data acquisition instrument and an IoT device carrying a high-definition camera;
2. brain wave prediction model training: the user's face images are used as input to fit and predict the user's brain wave data;
3. emotion perception model training: a brain-wave-based emotion classification model is trained as a basis for living body detection;
4. living body model training: a living body detection model is trained based on the results of the emotion perception model from the previous step;
5. living body detection: living body detection is performed on the object of biological recognition using each trained model.
The training process of the brain wave prediction model can refer to the following steps:
when a user performs face recognition verification, the brain wave acquisition device and an IoT device carrying a high-definition camera are used to acquire the user's face data and the corresponding brain wave data.
A data acquisition process: in some scenario examples of the present specification, 500 users may be selected (spread across various age groups to ensure coverage of various users); each user wears a brain wave collector and performs face recognition 5 times with an IoT device under different light conditions; each face recognition session lasts about 5 s. During each acquisition, about 5 seconds of face data and the corresponding brain wave data are collected with the high-precision camera carried by the IoT device. After the data acquisition for real users is finished, data acquisition is performed with 500 attack materials of various kinds, following the same process as for real users; because no brain wave data can be collected from an attack, the brain wave data is replaced by an all-0 sequence.
Training a brain wave prediction model:
data preprocessing: face detection is first performed on the collected real-person/attack data, and the face region is cropped out for later use; mean filtering with a window length of 25 is then applied to the brain wave data to filter out noise;
model structure: MobileNetV2 (a lightweight network structure) is used as the backbone to regress the brain wave signal;
model input and output: the model takes as input 10 images uniformly sampled at random from the 5 s face image sequence and outputs the corresponding brain wave signal; note that the brain wave signals of attacks are all-0 sequences;
loss function: the loss function is the L2 distance between the model output and the real brain wave sequence;
training method: model training is performed with the above network structure and loss function based on SGD (stochastic gradient descent) until the loss function converges, completing the training of the brain wave prediction model.
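Under the preprocessing and loss described above (length-25 mean filter, L2 distance between model output and the real brain wave sequence), these two pieces can be sketched as:

```python
import numpy as np

def denoise_brainwave(signal, k=25):
    """Length-25 mean filter applied to the collected brain wave sequence
    to suppress noise before it is used as the regression target."""
    kernel = np.ones(k) / k
    return np.convolve(np.asarray(signal, dtype=float), kernel, mode="same")

def l2_loss(predicted, target):
    """Squared L2 distance between the model output and the real (filtered)
    brain wave sequence, matching the loss function described above."""
    predicted, target = np.asarray(predicted), np.asarray(target)
    return float(np.sum((predicted - target) ** 2))
```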
The training process of the emotion perception model can be referred to as follows:
the emotion perception model is built mainly on the brain wave prediction model obtained in the previous step, on which further unsupervised emotion perception model training is performed.
extracting brain wave sequences: for face image sequences without corresponding brain wave signals, brain wave prediction is performed with the trained brain wave prediction model; about 50,000-100,000 brain wave records of living body data are generated for training the emotion perception model;
brain wave-emotion clustering: hierarchical clustering is performed on the 50,000-100,000 brain wave records to obtain N different emotion classes; the emotion class corresponding to each record is used as the label for subsequent emotion recognition;
model structure: the model may be a 5-layer MLP (Multilayer Perceptron) model;
input and output: inputting brain wave sequences and outputting emotion categories and corresponding emotion characteristics;
loss function: the loss function is a classification loss function;
model training: model training is carried out by utilizing the network structure and the loss function based on the SGD until the loss function is converged, and training of the emotion perception model is completed.
The training of the living body detection model can refer to the following processes:
training a living body classification model by using the output of the trained brain wave prediction model and the emotion perception model;
model structure: the model is 5-layer MLP;
input and output: the input is emotion characteristics and brain wave signals, and the output is attack probability;
loss function: a live/attack classification function;
the model training method comprises the following steps: model training is carried out based on the SGD by utilizing the network structure and the loss function until the loss function is converged.
The living body detection procedure can refer to the following:
the target detection object starts face recognition through interaction with an IoT device;
about 5 s of face data are collected, and face detection is performed to obtain the face region;
the 5 s of face images are uniformly sampled, and 10 images are input into the brain wave prediction model to obtain a predicted brain wave signal; if the predicted brain wave signal is an all-0 signal, the object is directly judged to be a non-living body and face recognition fails; otherwise, proceed to the next step;
inputting the brain wave signal into an emotion perception model to obtain an emotion category and emotion characteristics;
the emotion features and the brain wave signal are input into the living body detection model to obtain a living body attack probability P; if P is larger than a preset threshold value T, the object is judged to be an attack, and otherwise a living body object.
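The inference steps above can be sketched end to end; the three model arguments are stand-ins for the trained brain wave prediction, emotion perception, and living body detection models, and the threshold value is an illustrative assumption:

```python
import numpy as np

THRESHOLD_T = 0.5  # hypothetical preset attack-probability threshold

def detect_liveness(frames, predict_brainwave, perceive_emotion, liveness_model):
    """Mirror the inference steps: uniformly sample 10 frames from the
    ~5 s capture, predict the brain wave signal, reject an all-0 signal
    immediately, then score the emotion feature and signal with the
    living body detection model."""
    idx = np.linspace(0, len(frames) - 1, 10).astype(int)
    signal = np.asarray(predict_brainwave([frames[i] for i in idx]))
    if not signal.any():            # all-0 brain wave: judged non-living
        return False
    feature, _category = perceive_emotion(signal)
    p_attack = liveness_model(signal, feature)
    return p_attack <= THRESHOLD_T  # P > T means attack
```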
The embodiments of the present specification predict the brain wave signal of a user from multiple frames of high-definition face images, further infer the state and emotion of the user, and judge whether the user is a living body user according to that state and emotion. The method exploits the essential difference (namely emotion and state) between a real user and an attack (a static photo and the like), so it can fundamentally improve security: even a previously unseen attack is intercepted because its emotion and state signals are abnormal. Both safety and user experience are greatly improved.
In the present specification, each embodiment of the method is described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. The relevant points can be obtained by referring to the partial description of the method embodiment.
Based on the living body detection method described above, one or more embodiments of the present specification further provide a system for living body detection. The system may include systems (including distributed systems), software (applications), modules, components, servers, clients, and the like that use the methods described in the embodiments of the present specification, combined with the necessary hardware implementation devices. Based on the same innovative conception, the embodiments of the present specification provide an apparatus as described in the following embodiments. Since the implementation scheme by which the apparatus solves the problem is similar to that of the method, the specific apparatus implementation in the embodiments of the present specification may refer to the implementation of the foregoing method, and repeated details are not described again. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Specifically, fig. 3 is a schematic block diagram of an embodiment of a living body detection apparatus provided in the present specification, which can be applied to a biometric device such as a face-scanning device or a face-scanning payment device. As shown in fig. 3, the living body detection apparatus provided in the present specification may include:
the image acquisition module 31 is used for acquiring a biological characteristic image of the target detection object during biological identification;
a brain wave information prediction module 32, configured to predict a brain wave signal of the target detection object based on the biometric image;
an emotion feature recognition module 33, configured to determine an emotion feature of the target detection object according to the brain wave signal;
and the living body detection module 34 is used for determining whether the target detection object is a living body object according to the emotional characteristic.
The living body detection device provided by the embodiments of the present specification predicts the brain wave signal of a user from multiple frames of high-definition face images, further infers the state and emotion of the user, and determines whether the user is a living body user according to that state and emotion. Such a method exploits the essential differences (i.e. emotion and state) between a live user and an attack (e.g. a still photograph), and can fundamentally improve security: even a previously unseen attack is intercepted because its emotion and state signals are abnormal. Both safety and user experience are greatly improved.
It should be noted that the above-mentioned apparatus may also include other embodiments according to the description of the corresponding method embodiment. The specific implementation manner may refer to the description of the above corresponding method embodiment, and is not described in detail herein.
An embodiment of the present specification further provides a living body detection apparatus, including: at least one processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements the living body detection method of the above embodiments, such as:
acquiring a biological characteristic image of a target detection object;
predicting brain wave signals of the target detection object based on the biological feature image;
determining emotional characteristics of the target detection object according to the brain wave signal;
and determining whether the target detection object is a living object according to the emotional characteristic.
An embodiment of the present specification further provides a living body detection system, including: an image acquisition device and an image processing device, wherein the image acquisition device is used for acquiring a biological characteristic image of a target detection object for biological recognition, the image processing device comprises at least one processor and a memory for storing processor-executable instructions, and when the processor executes the instructions, living body detection is performed on the biological characteristic image acquired by the image acquisition device according to the living body detection method of the above embodiments, such as:
acquiring a biological characteristic image of a target detection object;
predicting brain wave signals of the target detection object based on the biological feature image;
determining emotional characteristics of the target detection object according to the brain wave signal;
and determining whether the target detection object is a living object according to the emotional characteristic.
It should be noted that the above description of the apparatus and system according to the method embodiments may also include other embodiments. The specific implementation manner may refer to the description of the related method embodiment, and is not described in detail herein.
The living body detection device provided by the present specification can also be applied to various data analysis and processing systems. The system, server, terminal, or device may be a single server, or may include a server cluster, a system (including a distributed system), software (applications), actual operating devices, logic gate devices, quantum computers, and the like that use one or more of the methods or one or more of the embodiments of the present specification, combined with terminal devices implementing the necessary hardware. The living body detection system may comprise at least one processor and a memory storing computer-executable instructions that, when executed by the processor, implement the steps of the method of any one or more of the embodiments described above.
The method embodiments provided in the embodiments of the present specification may be executed in a mobile terminal, a computer terminal, a server, or a similar computing device. Taking execution on a server as an example, fig. 4 is a block diagram of the hardware structure of a living body detection server in one embodiment of the present specification, and the computer terminal may be the living body detection server or the living body detection device in the above embodiments. As shown in fig. 4, the server 10 may include one or more (only one is shown) processors 100 (the processors 100 may include, but are not limited to, processing devices such as a microprocessor MCU or a programmable logic device FPGA), a non-volatile memory 200 for storing data, and a transmission module 300 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 4 is only an illustration and does not limit the structure of the electronic device. For example, the server 10 may include more or fewer components than shown in fig. 4, may include other processing hardware such as a database, a multi-level cache, or a GPU, or may have a configuration different from that shown in fig. 4.
The non-volatile memory 200 may be used to store software programs and modules of application software, such as program instructions/modules corresponding to the living body detection method in the embodiments of the present specification, and the processor 100 executes various functional applications and resource data updates by running the software programs and modules stored in the non-volatile memory 200. Non-volatile memory 200 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the non-volatile memory 200 may further include memory located remotely from the processor 100, which may be connected to a computer terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission module 300 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal. In one example, the transmission module 300 includes a Network adapter (NIC) that can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission module 300 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The method or apparatus provided in this specification and described in the foregoing embodiments may implement the service logic through a computer program and record the service logic on a storage medium, where the storage medium may be read and executed by a computer, and implement the effects of the solutions described in the embodiments of this specification, such as:
collecting a biological characteristic image of a target detection object;
predicting brain wave signals of the target detection object based on the biological feature image;
determining emotional characteristics of the target detection object according to the brain wave signal;
and determining whether the target detection object is a living object according to the emotional characteristic.
The above-mentioned living body detection method or apparatus provided in this specification may be implemented by a processor executing the corresponding program instructions in a computer, for example, implemented on a PC in the C++ language on a Windows operating system or on a Linux system, implemented on an intelligent terminal in the Android or iOS system programming languages, or implemented with processing logic based on a quantum computer.
It should be noted that descriptions of the apparatus, the computer storage medium, and the system described above according to the related method embodiments may also include other embodiments, and specific implementations may refer to descriptions of corresponding method embodiments, which are not described in detail herein.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and the relevant points can be referred to the partial description of the method embodiment.
The embodiments of this specification are not limited to implementations that must comply with industry communication standards, standard computer resource data update and data storage rules, or the situations described in one or more embodiments of this specification. Implementations slightly modified from certain industry standards, or from the embodiments described using custom modes or examples, may also achieve the same, equivalent, similar, or otherwise expectable effects as the above-described embodiments. Embodiments using such modified or transformed data acquisition, storage, judgment, processing, and the like can still fall within the scope of the optional implementations of the embodiments of this specification.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although one or more embodiments of the present description provide method operational steps as described in the embodiments or flowcharts, more or fewer operational steps may be included based on conventional or non-inventive approaches. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of sequences, and does not represent a unique order of performance. When the device or the end product in practice executes, it can execute sequentially or in parallel according to the method shown in the embodiment or the figures (for example, in the environment of parallel processors or multi-thread processing, even in the environment of distributed resource data update). The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded. The terms first, second, etc. are used to denote names, but not any particular order.
For convenience of description, the above devices are described as being divided into various modules by function, and the modules are described separately. Of course, when implementing one or more embodiments of the present specification, the functions of the modules may be implemented in one or more pieces of software and/or hardware, or a module implementing one function may be realized by a combination of multiple sub-modules or sub-units, and so on. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions may be used in practice: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connections shown or discussed may be indirect coupling or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, which include permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, graphene storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in this specification are described in a progressive manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant points, reference may be made to the corresponding parts of the description of the method embodiments. In this specification, reference to "one embodiment," "some embodiments," "an example," "a specific example," "some examples," or the like means that a particular feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the specification. The schematic uses of these terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine features of different embodiments or examples described in this specification, provided they do not contradict each other.
The above description is merely exemplary of one or more embodiments of the present disclosure and is not intended to limit the scope of one or more embodiments of the present disclosure. Various modifications and alterations to one or more embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present specification should be included in the scope of the claims.

Claims (13)

1. A living body detection method, the method comprising:
collecting a biological characteristic image of a target detection object during biological identification;
predicting a brain wave signal of the target detection object based on the biometric image;
determining emotional characteristics of the target detection object according to the brain wave signal;
and determining whether the target detection object is a living object according to the emotional characteristic.
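The four claimed steps can be read as a small inference pipeline. The following is a minimal sketch only: `predict_brainwave`, `extract_emotion`, and `classify_liveness` are hypothetical stand-ins for the patent's networks, and the all-zero check mirrors the condition of claim 2.

```python
import numpy as np

def predict_brainwave(image: np.ndarray) -> np.ndarray:
    """Stand-in brain wave predictor: an all-zero signal for a flat
    (spoof-like) image, otherwise a synthetic waveform."""
    if image.std() < 1e-6:  # e.g. a blank or uniform attack sample
        return np.zeros(128)
    rng = np.random.default_rng(int(image.sum()) % 2**32)
    return rng.standard_normal(128)

def extract_emotion(signal: np.ndarray) -> np.ndarray:
    """Stand-in emotion feature extractor: simple signal statistics."""
    return np.array([signal.mean(), signal.std(), np.abs(signal).max()])

def classify_liveness(signal: np.ndarray, emotion: np.ndarray) -> bool:
    """Stand-in liveness classifier over signal + emotion features."""
    return bool(signal.any() and emotion[1] > 0)

def detect(image: np.ndarray) -> bool:
    signal = predict_brainwave(image)
    if not signal.any():  # claim 2: all-zero prediction => not a living object
        return False
    emotion = extract_emotion(signal)
    return classify_liveness(signal, emotion)
```

A uniform (spoof-like) input short-circuits at the zero-signal check; any textured input flows through all three stand-in stages.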
2. The method of claim 1, prior to determining an emotional characteristic of the target detection object from the brain wave signal, the method further comprising:
and if the brain wave signal of the target detection object is predicted to be 0 based on the biological characteristic image, determining that the target detection object is not a living object.
3. The method of claim 1, further comprising:
pre-training and constructing a brain wave prediction model and an emotion perception model;
taking the output of the brain wave prediction model and the output of the emotion perception model as training input data of a living body detection model, and training and constructing the living body detection model;
predicting brain wave signals of the target detection object based on the biological characteristic image by using a trained brain wave prediction model;
determining the emotional characteristics of the target detection object according to the brain wave signals by using a trained emotional perception model;
and determining whether the target detection object is a living object or not according to the brain wave signal and the emotional characteristic by using the trained living body detection model.
4. The method of claim 3, wherein the brain wave prediction model is constructed by:
setting model parameters of a brain wave prediction model, wherein the model parameters comprise a network structure and a loss function of the brain wave prediction model;
acquiring a living body biological characteristic sample image of a living body object and a corresponding living body brain wave sample signal;
acquiring attack biological characteristic sample images of different types of attack objects and corresponding attack brain wave sample signals, wherein the attack brain wave sample signals are 0;
and taking the living body biological characteristic sample image and the attack biological characteristic sample image as the input of the brain wave prediction model, taking the living body brain wave sample signal and the attack brain wave sample signal as the corresponding output of the brain wave prediction model, and carrying out model training until the loss function of the brain wave prediction model is converged, thereby finishing the training of the brain wave prediction model.
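One way to picture the claim-4 setup: a single regression model is fit on live images paired with recorded signals and on attack images paired with an all-zero target. The linear map and synthetic data below are illustrative stand-ins for the patent's unspecified network and training set.

```python
import numpy as np

rng = np.random.default_rng(42)
D_IMG, D_SIG, N = 32, 16, 200

# live samples: image features paired with a recorded brain wave stand-in
live_x = rng.standard_normal((N, D_IMG))
live_y = 0.1 * live_x @ rng.standard_normal((D_IMG, D_SIG))
# attack samples: the target brain wave signal is all zeros, per the claim
attack_x = rng.standard_normal((N, D_IMG))
attack_y = np.zeros((N, D_SIG))

X = np.vstack([live_x, attack_x])
Y = np.vstack([live_y, attack_y])

W = np.zeros((D_IMG, D_SIG))
init_loss = np.mean((X @ W - Y) ** 2)  # MSE before training
for _ in range(500):                   # plain gradient descent on the MSE loss
    W -= 0.05 * X.T @ (X @ W - Y) / len(X)
final_loss = np.mean((X @ W - Y) ** 2)
```

Because live and attack inputs share one model, training drives predictions toward the recorded signal on live data and toward zero on attack data, which is exactly the behavior claim 2 later exploits.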
5. The method of claim 3, the method of training the emotion perception model comprising:
setting model parameters of an emotion perception model, wherein the model parameters comprise a network structure and a loss function of the emotion perception model;
collecting a living brain wave sample signal of a living object, and determining the corresponding emotion category of the living object;
and taking the living brain wave sample signal as the input of the emotion perception model, taking the emotion category and the corresponding emotion characteristic as the output corresponding to the emotion perception model, and carrying out model training until the loss function of the emotion perception model is converged, thereby finishing the training of the emotion perception model.
6. The method of claim 3, the method of training the emotion perception model comprising:
setting model parameters of an emotion perception model, wherein the model parameters comprise a network structure and a loss function of the emotion perception model;
inputting the collected biological characteristic sample images into the trained brain wave prediction model, and predicting corresponding predicted brain wave signals by using the brain wave prediction model;
clustering the predicted brain wave signals to determine emotion types corresponding to the predicted brain wave signals;
and taking the predicted brain wave signals as the input of the emotion perception model, taking the emotion types and the corresponding emotion characteristics as the corresponding output of the emotion perception model, and performing model training until the loss function of the emotion perception model converges, thereby finishing the training of the emotion perception model.
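The clustering step in this claim can be pictured with any unsupervised grouping of the predicted signals. Below, a tiny hand-rolled k-means (with a crude deterministic min/max initialisation) stands in for whatever clustering the patent intends, and the cluster index plays the role of the pseudo emotion category.

```python
import numpy as np

def kmeans_labels(signals: np.ndarray, k: int = 2, iters: int = 10) -> np.ndarray:
    """Assign each predicted signal a cluster index (pseudo emotion label)."""
    # crude deterministic init: spread centers between feature-wise min and max
    lo, hi = signals.min(axis=0), signals.max(axis=0)
    centers = np.linspace(lo, hi, k)
    for _ in range(iters):
        # distance of every signal to every center, then nearest-center labels
        d = np.linalg.norm(signals[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = signals[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
# two well-separated synthetic "emotion" groups of predicted signals
sig = np.vstack([rng.normal(0.0, 0.1, (50, 8)), rng.normal(3.0, 0.1, (50, 8))])
labels = kmeans_labels(sig, k=2)
```

The labels produced this way substitute for human emotion annotation, which is the point of this claim relative to claim 5.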
7. The method according to claim 3, wherein the training of the living body detection model by using the output of the brain wave prediction model and the output of the emotion perception model as training input data of the living body detection model comprises:
setting model parameters of a living body detection model, wherein the model parameters comprise a network structure and a loss function of the living body detection model;
collecting living body biological characteristic sample images of living body objects and attack biological characteristic sample images of attack objects of different classes, and marking attack probability labels corresponding to the sample images;
predicting the biological characteristic sample image of the living body and a predicted brain wave signal corresponding to the attack biological characteristic sample image respectively by using a trained brain wave prediction model;
determining, by using a trained emotion perception model, the emotional characteristics corresponding to the living body biological characteristic sample image and the attack biological characteristic sample image, based on the predicted brain wave signals predicted by the brain wave prediction model;
and taking the predicted brain wave signal output by the brain wave prediction model and the corresponding emotion characteristic output by the emotion perception model as the input of the living body detection model, taking the attack probability label corresponding to each sample image as the output of the living body detection model, and performing model training until the loss function of the living body detection model is converged, thereby finishing the training of the living body detection model.
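The final stage can be pictured as an ordinary binary classifier over the concatenated (predicted signal, emotion feature) vector, trained against the attack-probability labels. Logistic regression on synthetic, linearly separable features is an illustrative stand-in for the patent's detection network.

```python
import numpy as np

rng = np.random.default_rng(7)
N, D = 200, 10

# stand-in for concat(predicted brain wave signal, emotion feature)
live_feat = rng.normal(1.0, 0.5, (N, D))
attack_feat = rng.normal(-1.0, 0.5, (N, D))
X = np.vstack([live_feat, attack_feat])
y = np.concatenate([np.zeros(N), np.ones(N)])  # attack-probability label

w, b = np.zeros(D), 0.0
for _ in range(300):  # gradient descent on the logistic (log) loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(X)
    b -= 0.1 * np.mean(p - y)

pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
accuracy = np.mean(pred == (y == 1))
```

At inference time the same classifier consumes the outputs of the two upstream models, matching the flow of claim 3.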
8. The method according to claim 3, wherein the model structure of the brain wave prediction model is a lightweight network structure, and the model structures of the emotion perception model and the living body detection model are multilayer perceptron structures.
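Claim 8 only names the architectures. As one plausible reading of "multilayer perceptron structure", here is a bare two-layer perceptron forward pass; the layer sizes and the two-logit output are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def mlp_forward(x, w1, b1, w2, b2):
    """Two-layer perceptron: linear -> ReLU -> linear."""
    h = np.maximum(0.0, x @ w1 + b1)  # hidden layer with ReLU activation
    return h @ w2 + b2

rng = np.random.default_rng(0)
w1, b1 = rng.standard_normal((16, 8)), np.zeros(8)  # 16-d input, 8 hidden units
w2, b2 = rng.standard_normal((8, 2)), np.zeros(2)   # 2 outputs (e.g. live/attack logits)
out = mlp_forward(rng.standard_normal((4, 16)), w1, b1, w2, b2)
```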
9. The method of claim 1, wherein the acquiring of the biometric image of the target detection object during biometric identification comprises:
and acquiring image information of the target detection object within a specified time after the start of biological identification, and selecting a specified number of images from the image information as the biological characteristic images of the target detection object.
10. The method of claim 4, the acquiring a living biometric sample image of a living subject and a corresponding living brain wave sample signal comprising:
selecting living objects of different ages as sample objects, each sample object wearing a brain wave collector;
and collecting, while biological identification is performed on the sample objects under different lighting conditions, living body biological sample images of each sample object together with the living body brain wave sample signals acquired by the brain wave collector.
11. A living body detection apparatus, the apparatus comprising:
the image acquisition module is used for acquiring a biological characteristic image of a target detection object during biological identification;
the brain wave information prediction module is used for predicting brain wave signals of the target detection object based on the biological characteristic image;
the emotion feature recognition module is used for determining the emotion feature of the target detection object according to the brain wave signal;
and the living body detection module is used for determining whether the target detection object is a living body object according to the emotional characteristics.
12. A living body detection device, comprising: at least one processor and a memory for storing processor-executable instructions, wherein the processor implements the method of any one of claims 1-10 when executing the instructions.
13. A living body detection system comprising: an image acquisition device for acquiring a biometric image of a target detection object for biometric identification, and an image processing device comprising at least one processor and a memory for storing processor-executable instructions which, when executed by the processor, implement the method of any one of claims 1-10 for in vivo detection of the biometric image acquired by the image acquisition device.
CN202210579985.2A 2022-05-26 2022-05-26 Living body detection method, device, equipment and system Pending CN115035608A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210579985.2A CN115035608A (en) 2022-05-26 2022-05-26 Living body detection method, device, equipment and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210579985.2A CN115035608A (en) 2022-05-26 2022-05-26 Living body detection method, device, equipment and system

Publications (1)

Publication Number Publication Date
CN115035608A true CN115035608A (en) 2022-09-09

Family

ID=83120280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210579985.2A Pending CN115035608A (en) 2022-05-26 2022-05-26 Living body detection method, device, equipment and system

Country Status (1)

Country Link
CN (1) CN115035608A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107491716A (en) * 2016-06-13 2017-12-19 腾讯科技(深圳)有限公司 A kind of face authentication method and device
CN108512995A (en) * 2018-03-01 2018-09-07 广东欧珀移动通信有限公司 electronic device, brain wave control method and related product
JP2018187287A (en) * 2017-05-11 2018-11-29 学校法人 芝浦工業大学 Sensitivity estimation device, sensitivity estimation system, sensitivity estimation method and program
CN109697831A (en) * 2019-02-25 2019-04-30 湖北亿咖通科技有限公司 Fatigue driving monitoring method, device and computer readable storage medium
CN110141258A (en) * 2019-05-16 2019-08-20 深兰科技(上海)有限公司 A kind of emotional state detection method, equipment and terminal
WO2020048140A1 (en) * 2018-09-07 2020-03-12 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device, and computer readable storage medium
CN113208633A (en) * 2021-04-07 2021-08-06 北京脑陆科技有限公司 Emotion recognition method and system based on EEG brain waves
CN113869253A (en) * 2021-09-29 2021-12-31 北京百度网讯科技有限公司 Living body detection method, living body training device, electronic apparatus, and medium
US20220015685A1 (en) * 2020-07-14 2022-01-20 Dhiraj JEYANANDARAJAN Systems and methods for brain state capture and referencing
CN114424945A (en) * 2021-12-08 2022-05-03 中国科学院深圳先进技术研究院 Brain wave biological feature recognition system and method based on random graphic image flash


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG, Guanhua et al.: "A review of EEG features for emotion recognition", SCIENTIA SINICA Informationis, vol. 49, no. 09, 20 September 2019 (2019-09-20) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115438703A (en) * 2022-10-24 2022-12-06 广州河东科技有限公司 Smart home biological identification system and identification method
CN115438703B (en) * 2022-10-24 2023-04-07 广州河东科技有限公司 Smart home biological identification system and identification method

Similar Documents

Publication Publication Date Title
Wang et al. Simulating human saccadic scanpaths on natural images
CN109766755B (en) Face recognition method and related product
CN107590473B (en) Human face living body detection method, medium and related device
JP2022521038A (en) Face recognition methods, neural network training methods, devices and electronic devices
CN106295501A (en) The degree of depth based on lip movement study personal identification method
CN112668519A (en) Abnormal face recognition living body detection method and system based on MCCAE network and Deep SVDD network
CN111539320B (en) Multi-view gait recognition method and system based on mutual learning network strategy
Busey et al. Characterizing human expertise using computational metrics of feature diagnosticity in a pattern matching task
KR102187831B1 (en) Control method, device and program of congestion judgment system using cctv
Lucio et al. Simultaneous iris and periocular region detection using coarse annotations
CN115035608A (en) Living body detection method, device, equipment and system
CN115439927A (en) Gait monitoring method, device, equipment and storage medium based on robot
Viedma et al. Relevant features for gender classification in NIR periocular images
CN116205726B (en) Loan risk prediction method and device, electronic equipment and storage medium
CN112818899A (en) Face image processing method and device, computer equipment and storage medium
CN111814738A (en) Human face recognition method, human face recognition device, computer equipment and medium based on artificial intelligence
CN111723869A (en) Special personnel-oriented intelligent behavior risk early warning method and system
CN115731620A (en) Method for detecting counter attack and method for training counter attack detection model
CN111222374A (en) Lie detection data processing method and device, computer equipment and storage medium
CN109359543A (en) A kind of portrait search method and device based on Skeleton
CN114387674A (en) Living body detection method, living body detection system, living body detection apparatus, storage medium, and program product
Dhar et al. Detecting deepfake images using deep convolutional neural network
Veiga et al. A Federated Learning Approach for Authentication and User Identification based on Behavioral Biometrics
CN115116146A (en) Living body detection method, device, equipment and system
Ali et al. Applying computational intelligence to community policing and forensic investigations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination