CN114495291A - Method, system, electronic device and storage medium for in vivo detection - Google Patents


Info

Publication number
CN114495291A
CN114495291A (application CN202210337902.9A)
Authority
CN
China
Prior art keywords: sample, error-prone, samples, value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210337902.9A
Other languages
Chinese (zh)
Other versions
CN114495291B (en)
Inventor
邵琦琦
王东
张江峰
王月平
Current Assignee
Hangzhou Moredian Technology Co ltd
Original Assignee
Hangzhou Moredian Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Moredian Technology Co ltd filed Critical Hangzhou Moredian Technology Co ltd
Priority to CN202210337902.9A priority Critical patent/CN114495291B/en
Publication of CN114495291A publication Critical patent/CN114495291A/en
Application granted granted Critical
Publication of CN114495291B publication Critical patent/CN114495291B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The application relates to a method, a system, an electronic device and a storage medium for in vivo detection. The method obtains the prediction results and feature vectors of a living body detection model on a sample set and divides the sample set into correctly classified and incorrectly classified samples according to the prediction results. A first mean is computed over the prediction class feature values of all correctly classified samples, and a second mean over those of all incorrectly classified samples. When the difference between the first mean and the second mean is greater than a prediction class feature difference threshold, any sample whose prediction class feature value is smaller than the second mean is a first error-prone sample. The living body detection model is trained on the first error-prone samples to obtain an updated model, which is then used to detect faces and obtain a living body detection result. This solves the problems in the related art that an ordinary living body detection model has low robustness and that selecting error-prone samples by applying data augmentation to train the model carries a high time cost.

Description

Method, system, electronic device and storage medium for in vivo detection
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, a system, an electronic device, and a storage medium for detecting a living body.
Background
In recent years, with the development of face recognition technology, face scanning can be applied in more and more scenarios, such as face-scanning payment, attendance check-in, unlocking electronic devices and unlocking access control, all of which are convenient and fast to operate. As an important part of face recognition technology, living body detection plays a key role in distinguishing real images from fake ones, resisting spoofing attacks and protecting the security of the whole face recognition system. An ordinary living body detection model is usually trained on genuine and fake samples, but the resulting model has low robustness and misjudges error-prone samples. In the related art, a batch of samples is generated by applying data augmentation to one sample; if the model predicts poorly on that batch, the sample is deemed error-prone, and the model is then trained on the collected error-prone samples to improve its robustness. Selecting error-prone samples through data augmentation in this way carries a high time cost.
At present, no effective solution has been proposed for the problems that an ordinary living body detection model in the related art has low robustness, and that selecting error-prone samples by applying data augmentation to the samples in order to train the model carries a high time cost.
Disclosure of Invention
The embodiments of the application provide a method, a system, an electronic device and a storage medium for in-vivo detection, so as to at least solve the problems that an ordinary living body detection model in the related art has low robustness, and that selecting error-prone samples by applying data augmentation to the samples in order to train the model carries a high time cost.
In a first aspect, an embodiment of the present application provides a method for in-vivo detection, where the method includes:
s101, obtaining a prediction result and a feature vector of a living body detection model on a sample set, wherein the feature vector comprises a prediction class feature value and a non-prediction class feature value;
s102, dividing a sample set into correctly classified samples and incorrectly classified samples according to the prediction result of the sample set, obtaining the mean value of all correctly classified sample prediction class characteristic values, recording the mean value as a first mean value, obtaining the mean value of all incorrectly classified sample prediction class characteristic values, and recording the mean value as a second mean value;
s103, under the condition that the difference between the first average value and the second average value is larger than a prediction class characteristic difference threshold value, if the prediction class characteristic value of a sample is smaller than the second average value, the sample is a first error-prone sample, and all the first error-prone samples are obtained;
s104, training the living body detection model according to all the first error-prone samples to obtain an updated living body detection model;
and S105, detecting the human face according to the updated living body detection model to obtain a living body detection result.
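Steps S101 to S105 can be sketched as one round of the following illustrative pipeline; the four callables are placeholders for the model operations described above, not part of the patent:

```python
def liveness_training_round(predict, select_error_prone, train, detect, samples):
    """One round of steps S101-S105 (illustrative skeleton).

    predict            -> (predictions, feature vectors) of the model   (S101)
    select_error_prone -> the error-prone subset of the samples         (S102-S103)
    train              -> an updated model trained on that subset       (S104)
    detect             -> the liveness result of the updated model      (S105)
    """
    preds, feats = predict(samples)           # S101: predictions and feature vectors
    hard = select_error_prone(preds, feats)   # S102-S103: pick error-prone samples
    model = train(hard)                       # S104: retrain on error-prone samples
    return detect(model)                      # S105: detect faces with updated model
```

Any concrete model can be plugged in through the four callables; the skeleton only fixes the order of the steps.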
In some embodiments, after dividing the samples into the correctly classified samples and the incorrectly classified samples according to the prediction result of the sample set, the method further comprises:
judging whether the number of the samples with the classification errors is larger than a preset threshold value or not;
if the judgment result is yes, executing the step S102 to the step S105, and if the judgment result is no, training the living body detection model through the sample set to obtain an updated living body detection model.
In some embodiments, after training the in-vivo detection model according to all the first error-prone samples, the method further comprises:
counting the training times of the living body detection model, and judging whether the training times reach preset times;
if the judgment result is negative, circularly executing the steps S101 to S105 until the training frequency reaches the preset frequency, finishing the training and obtaining an updated living body detection model;
if the judgment result is yes, the training is ended, and the updated living body detection model is obtained.
In some embodiments, after all of the first error-prone samples are obtained, the method further comprises:
obtaining the mean of the differences between the prediction class feature value and the non-prediction class feature value of the correctly classified samples, recorded as a third mean, and obtaining the mean of the same differences for the incorrectly classified samples, recorded as a fourth mean;
when the difference between the third mean and the fourth mean is greater than a feature difference threshold, if the difference between a sample's prediction class feature value and its non-prediction class feature value is smaller than the fourth mean, the sample is a second error-prone sample, and all the second error-prone samples are obtained;
and training the living body detection model according to all the first error-prone samples and all the second error-prone samples to obtain an updated living body detection model.
In some embodiments, after all of the first error-prone samples and all of the second error-prone samples are obtained, the method further comprises:
equalizing the prediction class characteristic value and the non-prediction class characteristic value of the error-prone sample to obtain the corrected characteristic vector of the error-prone sample, wherein the error-prone sample comprises the first error-prone sample and the second error-prone sample;
and training the in-vivo detection model according to the feature vector of the common sample and the feature vector corrected by the error-prone sample to obtain an updated in-vivo detection model.
In some of these embodiments, equalizing the prediction class feature value and the non-prediction class feature value of an error-prone sample comprises: setting the prediction class feature value of the error-prone sample equal to its non-prediction class feature value.
In some embodiments, before obtaining the prediction result of the living body detection model on the sample set and the feature vector, the method includes:
and training the model according to a sample set until a trained in vivo detection model is obtained, wherein the sample set comprises in vivo samples and prosthesis samples.
In a second aspect, the present application provides a system for in vivo detection, the system including an acquisition module, a dividing module, a comparison module, a training module and a detection module,
the obtaining module is used for obtaining a prediction result and a feature vector of the living body detection model on the sample set, wherein the feature vector comprises a prediction class feature value and a non-prediction class feature value;
the dividing module is used for dividing the sample set into correctly classified samples and incorrectly classified samples according to the prediction result of the sample set, obtaining the mean value of all the correctly classified sample prediction class characteristic values, recording the mean value as a first mean value, obtaining the mean value of all the incorrectly classified sample prediction class characteristic values, and recording the mean value as a second mean value;
the comparison module is configured to, when a difference between the first average value and the second average value is greater than a prediction class feature difference threshold, if a prediction class feature value of a sample is smaller than the second average value, the sample is a first error-prone sample, and all the first error-prone samples are obtained;
the training module is used for training the in-vivo detection model according to all the first error-prone samples to obtain an updated in-vivo detection model;
and the detection module is used for detecting the human face according to the updated in-vivo detection model to obtain an in-vivo detection result.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor, when executing the computer program, implements the method for detecting a living body according to the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium having a computer program stored thereon, which when executed by a processor, implement the method for detecting a living body as described in the first aspect above.
Compared with the related art, the living body detection method provided by the embodiments of the application obtains the prediction results and feature vectors of a living body detection model on a sample set, where each feature vector includes a prediction class feature value and a non-prediction class feature value. The sample set is divided into correctly classified and incorrectly classified samples according to the prediction results; the mean of the prediction class feature values of all correctly classified samples is recorded as the first mean, and that of all incorrectly classified samples as the second mean. When the difference between the first mean and the second mean is greater than the prediction class feature difference threshold, a sample whose prediction class feature value is smaller than the second mean is a first error-prone sample. The living body detection model is trained on all first error-prone samples to obtain an updated model, which is used to detect faces and obtain the living body detection result. This solves the problems that an ordinary living body detection model in the related art has low robustness and that selecting error-prone samples through data augmentation to train the model carries a high time cost.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of a method of in vivo detection according to an embodiment of the present application;
FIG. 2 is a flow chart of another method of liveness detection according to an embodiment of the present application;
FIG. 3 is a flow chart of a third method of in vivo testing according to an embodiment of the present application;
fig. 4 is a block diagram of a system for in-vivo detection according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference herein to "a plurality" means greater than or equal to two. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
The present embodiment provides a method for in-vivo detection, and fig. 1 is a flowchart of a method for in-vivo detection according to an embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
step S101, obtaining a prediction result and a feature vector of a living body detection model on a sample set, wherein the feature vector comprises a prediction class feature value V1 and a non-prediction class feature value V2. In this embodiment, the living body detection model is used to determine whether a human face is a living body or a prosthesis; it is a binary classification model, so it outputs a two-dimensional feature vector, the larger value of which is the prediction class feature value V1 and the other the non-prediction class feature value V2.
Optionally, the model is trained in advance on a sample set until the trained living body detection model is obtained, wherein the sample set includes living body samples and prosthesis samples. Specifically, for a sample x with label y, the model outputs a prediction f(x). A warm-up period is set; during the warm-up period the sample set is fed into the model, the loss is calculated through forward propagation, and the model parameters are updated through backward propagation until the warm-up period ends, yielding the trained living body detection model.
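As a rough illustration of the warm-up phase just described (forward pass, loss, backward update, repeated until the warm-up period ends), here is a toy stand-in that uses logistic regression in place of the liveness network; all names and hyperparameters are illustrative, not from the patent:

```python
import numpy as np

def warmup_train(X, y, epochs=50, lr=0.1):
    """Toy warm-up loop: forward propagation, cross-entropy gradient,
    backward update, repeated for a fixed warm-up period."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])   # model parameters (illustrative)
    b = 0.0
    for _ in range(epochs):                        # the warm-up period
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # forward propagation
        grad = p - y                               # dLoss/dlogit for cross-entropy
        w -= lr * (X.T @ grad) / len(y)            # backward propagation: update w
        b -= lr * grad.mean()                      # backward propagation: update b
    return w, b
```

A real implementation would of course use the binary-classification liveness network itself; the loop structure is the point here.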
Step S102, dividing the sample set into correctly classified samples and incorrectly classified samples according to the prediction result, obtaining the mean of the prediction class feature values of all correctly classified samples, recorded as the first mean μ1, and obtaining the mean of the prediction class feature values of all incorrectly classified samples, recorded as the second mean μ2. In this embodiment, whether a sample is classified correctly can be determined by comparing its prediction result with its label.
Step S103, when the difference between the first mean μ1 and the second mean μ2 is greater than the prediction class feature difference threshold T1, if the prediction class feature value of a sample is smaller than the second mean μ2, the sample is a first error-prone sample; all the first error-prone samples are obtained.
In practical application, an ordinary sample is easy to classify correctly, so its prediction class feature value V1 is large and even a small perturbation does not cause a classification error; an error-prone sample is the opposite, with V1 and V2 relatively close, so its prediction result changes after a small perturbation is applied. For example, assume the feature vectors of two samples xa and xb are (50, -7) and (1, -7) respectively, and the prediction class confidence E is obtained by the following Equation 1:

E = e^V1 / (e^V1 + e^V2)    (Equation 1)

The prediction class confidences of xa and xb are then both close to 1, yet the prediction result of xb changes once a small perturbation is applied, so xb is a high-confidence error-prone sample.
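Assuming Equation 1 is the usual two-class softmax (a reconstruction from context, not confirmed by the original), the example values can be checked numerically:

```python
import math

def prediction_confidence(v1, v2):
    """Softmax confidence of the predicted class over a two-dimensional
    feature vector (V1, V2), as Equation 1 is reconstructed here."""
    return math.exp(v1) / (math.exp(v1) + math.exp(v2))

# Ordinary sample (50, -7) and error-prone sample (1, -7) from the text:
# both confidences come out close to 1, yet (1, -7) sits near the decision
# boundary in logit space and flips under a small perturbation.
```

This illustrates why confidence alone cannot identify the high-confidence error-prone samples: both samples look equally certain after the softmax.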
When the number of samples is large enough, by the central limit theorem the prediction class feature values of the error-prone samples and of the ordinary samples each approximately follow a normal distribution. Therefore, when the two distributions differ, i.e. when μ1 - μ2 is greater than the threshold T1, the distribution characteristics of the misclassified samples can be used to judge which samples are error-prone: a sample whose prediction class feature value V1 is smaller than the second mean μ2 (the mean over all misclassified samples) is an error-prone sample, which may be a high-confidence or a low-confidence error-prone sample.
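A minimal sketch of the first selection rule (steps S102-S103), assuming the means and threshold are computed over NumPy arrays; function and variable names are illustrative:

```python
import numpy as np

def select_first_error_prone(pred_class_vals, is_correct, diff_threshold):
    """Split by correctness, compare the two means, and pick the samples
    whose prediction class feature value V1 falls below the second mean."""
    mu1 = pred_class_vals[is_correct].mean()    # first mean (correct samples)
    mu2 = pred_class_vals[~is_correct].mean()   # second mean (misclassified)
    if mu1 - mu2 <= diff_threshold:
        # distributions not separated enough -> select nothing
        return np.zeros_like(is_correct)
    return pred_class_vals < mu2                # first error-prone samples
```

The boolean mask it returns can index directly into the sample set to build the retraining batch of step S104.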
Step S104, training the in-vivo detection model according to all the first error-prone samples to obtain an updated in-vivo detection model;
and step S105, detecting the human face according to the updated living body detection model to obtain a living body detection result.
In contrast to the related art, where an ordinary living body detection model has low robustness and selecting error-prone samples through data augmentation carries a high time cost, this embodiment obtains the prediction results and feature vectors of the living body detection model on the sample set, splits the set into correctly and incorrectly classified samples, computes the first mean over the prediction class feature values of the correctly classified samples and the second mean over those of the incorrectly classified samples, and, when the first mean exceeds the second mean by more than the prediction class feature difference threshold, marks every sample whose prediction class feature value is below the second mean as a first error-prone sample. Training the living body detection model on all first error-prone samples yields an updated model, and detecting faces with the updated model yields the living body detection result. Since no data augmentation is needed to find the error-prone samples, the time-cost problem of the related art is avoided as well.
If a certain sample set contains few misclassified samples, the computed mean of their prediction class feature values cannot represent the central tendency of the error-prone samples' prediction class feature values; the distribution difference between error-prone samples and ordinary samples is then hard to determine, and the selected error-prone samples are inaccurate.
Specifically, if in one round of training a certain sample set contains few misclassified samples, the error-prone-sample selection is skipped and the living body detection model is trained directly on the sample set. If in the next round the number of misclassified samples in the new sample set is greater than the preset threshold, steps S102 to S105 are executed: the error-prone samples are selected and the living body detection model is trained on them, reducing the model's misjudgment rate on error-prone samples.
In some embodiments, after the in-vivo detection model is trained according to all the first error-prone samples, counting the training times of the in-vivo detection model, and judging whether the training times reach a preset number;
if the judgment result is negative, circularly executing the steps S101 to S105 until the training frequency reaches a preset frequency, finishing the training and obtaining an updated living body detection model;
if the judgment result is yes, the training is ended, and the updated living body detection model is obtained.
In this embodiment, the model becomes more accurate through multiple rounds of training. Therefore, while the training count has not reached the preset number, steps S101 to S105 are executed in a loop: the living body detection model outputs the prediction results and feature vectors of a new batch of samples, error-prone samples are selected, and the model is trained again on them, until the training count reaches the preset number.
In some embodiments, fig. 2 is a flowchart of another method for in vivo testing according to an embodiment of the present application, and after all first error-prone samples are obtained, as shown in fig. 2, the method further includes the following steps:
step S201, obtaining all sample prediction class characteristic values V with correct classification1And a non-prediction class eigenvalue V2The mean value of the difference of (2) is recorded as the third mean value
Figure 281388DEST_PATH_IMAGE011
Obtaining the sample prediction class characteristic value V of all classification errors1And a non-prediction class eigenvalue V2Is taken as the fourth mean value
Figure DEST_PATH_IMAGE012
Step S202, when the difference between the third mean μ3 and the fourth mean μ4 is greater than the feature difference threshold T2, if the difference V1 - V2 of a sample is smaller than the fourth mean μ4, the sample is a second error-prone sample; all second error-prone samples are obtained.
Because V1 - V2 measures the similarity between the predicted class and the non-predicted class, a sample with a small V1 - V2 has low prediction class confidence and belongs to the low-confidence error-prone samples; that is, the second error-prone samples selected here are low-confidence error-prone samples. When the distributions of the error-prone samples and the ordinary samples differ, i.e. μ3 - μ4 is greater than T2, the criterion V1 - V2 < μ4 can sort out the low-confidence error-prone samples.
And step S203, training the in-vivo detection model according to all the first error-prone samples and all the second error-prone samples to obtain an updated in-vivo detection model.
Through the embodiment shown in fig. 1, error-prone samples with high or low confidence can be selected; this embodiment additionally selects low-confidence error-prone samples that were not picked up before, so more error-prone samples are obtained and the efficiency is higher.
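The second selection rule (steps S201-S202) can be sketched the same way, using the V1 - V2 gap; function and variable names are illustrative:

```python
import numpy as np

def select_second_error_prone(v1, v2, is_correct, feat_diff_threshold):
    """Pick low-confidence error-prone samples: those whose V1 - V2 gap
    falls below the fourth mean, provided the distributions differ."""
    gap = v1 - v2
    mu3 = gap[is_correct].mean()     # third mean (correct samples)
    mu4 = gap[~is_correct].mean()    # fourth mean (misclassified samples)
    if mu3 - mu4 <= feat_diff_threshold:
        return np.zeros_like(is_correct)
    return gap < mu4                 # second error-prone samples
```

Combining this mask with the first rule's mask yields the full error-prone set used for retraining.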
In some embodiments, after all the first error-prone samples and all the second error-prone samples are obtained, the prediction class feature value and the non-prediction class feature value of each error-prone sample are made equal to obtain the corrected feature vector of the error-prone sample, where the error-prone samples comprise the first error-prone samples and the second error-prone samples; the living body detection model is then trained on the feature vectors of the ordinary samples together with the corrected feature vectors of the error-prone samples to obtain an updated living body detection model.
In this embodiment, the prediction class feature value and the non-prediction class feature value of each error-prone sample are forced to be equal. From the viewpoint of the decision boundary, the error-prone sample then lies exactly on the boundary, i.e. it becomes an absolutely hard sample. Because updating the model is equivalent to adjusting the decision boundary, and the adjustment must keep more samples correctly classified, the error-prone samples receive more consideration during the adjustment. From the loss point of view, correcting the feature vectors increases the loss of the error-prone samples and therefore their contribution to the gradient, which avoids training with a large number of samples and enhances the robustness of the living body detection model.
Optionally, if the feature vector of an error-prone sample is (V1, V2), one can either set V2 = V1, correcting the feature vector to (V1, V1), or set V1 = V2, correcting it to (V2, V2). Either choice places the error-prone sample on the decision boundary, but since V1 > V2, setting V1 = V2 further widens the gap between the error-prone samples and the ordinary samples, so the living body detection model learns better.
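A minimal sketch of the feature correction (V1 set equal to V2 for the error-prone samples), with illustrative names:

```python
import numpy as np

def correct_features(feats, error_prone_mask):
    """Place error-prone samples on the decision boundary by setting the
    prediction class value equal to the non-prediction class value.

    feats: (n, 2) array of (V1, V2) feature vectors;
    error_prone_mask: boolean mask over the n samples."""
    out = feats.copy()                                     # leave originals intact
    out[error_prone_mask, 0] = out[error_prone_mask, 1]    # V1 := V2
    return out
```

Ordinary samples keep their vectors; only the masked rows are corrected before the retraining pass.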
The in-vivo detection model optimization problem can be described by the following equation 2:

min_θ [ Σ_{x ∉ Xe} L(ŷ(V(x; θ)), y) + Σ_{x ∈ Xe} L(ŷ'(V'(x; θ)), y) ]    equation 2

wherein θ represents the model parameters, L represents the loss function, x represents a sample, Xe represents the set of error-prone samples, V represents the feature vector, y represents the label of the sample, V' represents the corrected feature vector, ŷ and ŷ' represent the prediction results, and min_θ expresses minimizing the loss over the model parameters θ.
In some embodiments, fig. 3 is a flowchart of a third method for in-vivo detection according to an embodiment of the present application, and as shown in fig. 3, the method includes the following steps:
step S301, starting, inputting a sample set into a living body detection model;
step S302, outputting a prediction result and a characteristic vector V of a sample by a living body detection model;
step S303, counting the number of misclassified samples, and judging whether the number is greater than a preset threshold M; if yes, executing step S304 and step S305, otherwise executing step S309;
step S304, judging whether the difference between the first mean value and the second mean value is greater than the prediction class feature difference threshold; if yes, executing step S306, otherwise executing step S309;
step S305, judging whether the difference between the third mean value and the fourth mean value is greater than the feature difference threshold; if yes, executing step S307, otherwise executing step S309;
step S306, judging whether the prediction class feature value V1 is less than the second mean value; if yes, executing step S308, otherwise executing step S309;
step S307, judging whether the difference V1 - V2 between the prediction class feature value and the non-prediction class feature value is less than the fourth mean value; if yes, executing step S308, otherwise executing step S309;
step S308, correcting the feature vector of the error-prone sample to (V2, V2);
step S309, calculating the loss after the feature vectors pass through a Softmax layer, and back-propagating to update the model;
step S310, judging whether the number of training cycles has reached the preset number; if yes, executing step S311, otherwise executing step S302, where the living body detection model outputs the prediction results and feature vectors V of a new batch of samples;
step S311, ending.
Through the above steps S301 to S311, the problems in the related art of low robustness of common in-vivo detection models, and of the high time cost incurred when error-prone samples are selected for training by performing data enhancement on the samples, are solved, and the robustness of the in-vivo detection model is improved.
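Steps S303 to S308 above can be sketched as a single selection-and-correction routine. All names, the dict layout, and the numeric values below are illustrative assumptions; the patent does not prescribe an implementation, and it leaves open whether misclassified samples themselves are screened, so this sketch screens every sample in the batch:

```python
def select_and_correct(samples, m_threshold, t1, t2):
    """Sketch of steps S303-S308. Each sample is a dict with keys 'v1'
    (prediction class value), 'v2' (non-prediction class value) and
    'correct' (whether the prediction matched the label). t1/t2 stand in
    for the prediction class feature difference threshold and the feature
    difference threshold; m_threshold is the preset error count M."""
    wrong = [s for s in samples if not s['correct']]
    right = [s for s in samples if s['correct']]
    if len(wrong) <= m_threshold or not right:
        return []  # S303: too few errors -> train on the batch as-is

    mean1 = sum(s['v1'] for s in right) / len(right)            # first mean
    mean2 = sum(s['v1'] for s in wrong) / len(wrong)            # second mean
    mean3 = sum(s['v1'] - s['v2'] for s in right) / len(right)  # third mean
    mean4 = sum(s['v1'] - s['v2'] for s in wrong) / len(wrong)  # fourth mean

    error_prone = []
    for s in samples:
        first = mean1 - mean2 > t1 and s['v1'] < mean2              # S304/S306
        second = mean3 - mean4 > t2 and s['v1'] - s['v2'] < mean4   # S305/S307
        if first or second:
            s['v1'] = s['v2']        # S308: correct the vector to (V2, V2)
            error_prone.append(s)
    return error_prone

# Hypothetical batch of four samples.
batch = [
    {'v1': 4.0, 'v2': 0.5, 'correct': True},
    {'v1': 2.0, 'v2': 1.8, 'correct': True},
    {'v1': 2.2, 'v2': 2.0, 'correct': False},
    {'v1': 2.4, 'v2': 2.1, 'correct': False},
]
prone = select_and_correct(batch, m_threshold=1, t1=0.5, t2=0.5)
print(len(prone))  # 2
```

Here the first mean is 3.0 and the second mean is 2.3, so the two samples whose prediction class value falls below 2.3 are selected and corrected, while the confident sample and the remaining misclassified sample are left unchanged.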
It should be noted that the steps illustrated in the above-described flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as by a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases the steps illustrated or described may be performed in an order different from the one here.
The present embodiment further provides a system for living body detection, which is used to implement the foregoing embodiments and preferred embodiments; details that have already been described are not repeated here. As used hereinafter, the terms "module," "unit," "subunit," and the like may implement a combination of software and/or hardware with a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a block diagram of a system for living body detection according to an embodiment of the present application. As shown in fig. 4, the system includes an obtaining module 41, a dividing module 42, a comparing module 43, a training module 44, and a detecting module 45. The obtaining module 41 is configured to obtain a prediction result of a living body detection model on a sample set and a feature vector, where the feature vector includes a prediction class feature value and a non-prediction class feature value. The dividing module 42 is configured to divide the sample set into correctly classified samples and misclassified samples according to the prediction result, obtain the mean value of the prediction class feature values of all correctly classified samples, recorded as a first mean value, and obtain the mean value of the prediction class feature values of all misclassified samples, recorded as a second mean value. The comparing module 43 is configured to, when the difference between the first mean value and the second mean value is greater than the prediction class feature difference threshold, mark each sample whose prediction class feature value is smaller than the second mean value as a first error-prone sample, and obtain all the first error-prone samples. The training module 44 is configured to train the in-vivo detection model on all the first error-prone samples to obtain an updated in-vivo detection model. The detecting module 45 is configured to detect a human face with the updated in-vivo detection model to obtain an in-vivo detection result. This solves the problems in the related art of low robustness of common in-vivo detection models, and of the high time cost of selecting error-prone samples for training by performing data enhancement on the samples, and improves the robustness of the in-vivo detection model.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. Modules implemented by hardware may be located in the same processor, or distributed among different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the method of the living body detection in the above embodiments, the embodiments of the present application may be implemented by providing a storage medium. The storage medium having stored thereon a computer program; the computer program, when executed by a processor, implements any of the methods of in vivo detection in the above embodiments.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of liveness detection. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
It should be understood by those skilled in the art that various features of the above-described embodiments can be combined in any combination, and for the sake of brevity, all possible combinations of features in the above-described embodiments are not described in detail, but rather, all combinations of features which are not inconsistent with each other should be construed as being within the scope of the present disclosure.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A method of in-vivo detection, the method comprising:
s101, obtaining a prediction result and a feature vector of a living body detection model on a sample set, wherein the feature vector comprises a prediction class feature value and a non-prediction class feature value;
s102, dividing a sample set into correctly classified samples and incorrectly classified samples according to the prediction result of the sample set, obtaining the mean value of all correctly classified sample prediction class characteristic values, recording the mean value as a first mean value, obtaining the mean value of all incorrectly classified sample prediction class characteristic values, and recording the mean value as a second mean value;
s103, under the condition that the difference between the first average value and the second average value is larger than a prediction class characteristic difference threshold value, if the prediction class characteristic value of a sample is smaller than the second average value, the sample is a first error-prone sample, and all the first error-prone samples are obtained;
s104, training the living body detection model according to all the first error-prone samples to obtain an updated living body detection model;
and S105, detecting the human face according to the updated living body detection model to obtain a living body detection result.
2. The method of claim 1, wherein after dividing the samples into correctly classified samples and incorrectly classified samples according to the prediction result of the sample set, the method further comprises:
judging whether the number of the samples with the classification errors is larger than a preset threshold value or not;
if the judgment result is yes, executing the step S102 to the step S105, and if the judgment result is no, training the living body detection model through the sample set to obtain an updated living body detection model.
3. The method of claim 1, wherein after training the in-vivo detection model based on all of the first error-prone samples, the method further comprises:
counting the training times of the living body detection model, and judging whether the training times reach preset times;
if the judgment result is negative, circularly executing the steps S101 to S105 until the training frequency reaches the preset frequency, finishing the training and obtaining an updated living body detection model;
if the judgment result is yes, the training is ended, and the updated living body detection model is obtained.
4. The method of claim 1, wherein after all of the first error-prone samples are obtained, the method further comprises:
obtaining the mean value of the difference between the prediction class characteristic value and the non-prediction class characteristic value of the correctly classified samples, recorded as a third mean value, and obtaining the mean value of the difference between the prediction class characteristic value and the non-prediction class characteristic value of the misclassified samples, recorded as a fourth mean value;
under the condition that the difference between the third mean value and the fourth mean value is greater than a feature difference threshold value, if the difference between a sample prediction class feature value and a sample non-prediction class feature value is smaller than the fourth mean value, the sample is a second error-prone sample, and all the second error-prone samples are obtained;
and training the living body detection model according to all the first error-prone samples and all the second error-prone samples to obtain an updated living body detection model.
5. The method of claim 4, wherein after all of the first error-prone samples and all of the second error-prone samples are obtained, the method further comprises:
equalizing the prediction class characteristic value and the non-prediction class characteristic value of the error-prone sample to obtain the corrected characteristic vector of the error-prone sample, wherein the error-prone sample comprises the first error-prone sample and the second error-prone sample;
and training the in-vivo detection model according to the feature vector of the common sample and the feature vector corrected by the error-prone sample to obtain an updated in-vivo detection model.
6. The method of claim 5, wherein equating the predicted class eigenvalue and the non-predicted class eigenvalue of the error-prone sample comprises: and making the prediction class characteristic value of the error-prone sample equal to the non-prediction class characteristic value.
7. The method of claim 1, wherein before obtaining the prediction of the set of samples and the feature vector by the in-vivo detection model, the method comprises:
and training the model according to a sample set until a trained in vivo detection model is obtained, wherein the sample set comprises in vivo samples and prosthesis samples.
8. A living body detection system is characterized by comprising an acquisition module, a division module, a comparison module, a training module and a detection module,
the obtaining module is used for obtaining a prediction result and a feature vector of the living body detection model on the sample set, wherein the feature vector comprises a prediction class feature value and a non-prediction class feature value;
the dividing module is used for dividing the sample set into correctly classified samples and incorrectly classified samples according to the prediction result of the sample set, obtaining the mean value of all the correctly classified sample prediction class characteristic values, recording the mean value as a first mean value, obtaining the mean value of all the incorrectly classified sample prediction class characteristic values, and recording the mean value as a second mean value;
the comparison module is configured to, when a difference between the first average value and the second average value is greater than a prediction class feature difference threshold, if a prediction class feature value of a sample is smaller than the second average value, the sample is a first error-prone sample, and all the first error-prone samples are obtained;
the training module is used for training the in-vivo detection model according to all the first error-prone samples to obtain an updated in-vivo detection model;
and the detection module is used for detecting the human face according to the updated in-vivo detection model to obtain an in-vivo detection result.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of in vivo detection as defined in any one of claims 1 to 7.
10. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of living body detection of any one of claims 1 to 7 when executed.
CN202210337902.9A 2022-04-01 2022-04-01 Method, system, electronic device and storage medium for in vivo detection Active CN114495291B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210337902.9A CN114495291B (en) 2022-04-01 2022-04-01 Method, system, electronic device and storage medium for in vivo detection

Publications (2)

Publication Number Publication Date
CN114495291A true CN114495291A (en) 2022-05-13
CN114495291B CN114495291B (en) 2022-07-12

Family

ID=81487834


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009078096A1 (en) * 2007-12-18 2009-06-25 Fujitsu Limited Generating method of two class classification prediction model, program for generating classification prediction model and generating device of two class classification prediction model
US20170011523A1 (en) * 2015-07-06 2017-01-12 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
CN106611193A (en) * 2016-12-20 2017-05-03 太极计算机股份有限公司 Image content information analysis method based on characteristic variable algorithm
CN108805185A (en) * 2018-05-29 2018-11-13 腾讯科技(深圳)有限公司 Training method, device, storage medium and the computer equipment of model
CN110084271A (en) * 2019-03-22 2019-08-02 同盾控股有限公司 A kind of other recognition methods of picture category and device
CN110083728A (en) * 2019-04-03 2019-08-02 上海联隐电子科技合伙企业(有限合伙) A kind of methods, devices and systems of optimization automation image data cleaning quality
CN110705717A (en) * 2019-09-30 2020-01-17 支付宝(杭州)信息技术有限公司 Training method, device and equipment of machine learning model executed by computer
CN112580734A (en) * 2020-12-25 2021-03-30 深圳市优必选科技股份有限公司 Target detection model training method, system, terminal device and storage medium
CN113052144A (en) * 2021-04-30 2021-06-29 平安科技(深圳)有限公司 Training method, device and equipment of living human face detection model and storage medium
CN113283388A (en) * 2021-06-24 2021-08-20 中国平安人寿保险股份有限公司 Training method, device and equipment of living human face detection model and storage medium
CN114120452A (en) * 2021-09-02 2022-03-01 北京百度网讯科技有限公司 Living body detection model training method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIN MAO et al.: "Samples Selective Updating Mechanism for Object Tracking", 2019 Chinese Control Conference (CCC) *
REN Qinchai: "Research on improving classifier performance based on data selection methods", China Master's Theses Full-text Database *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant