CN113743220A - Biometric liveness detection method and device, and computer equipment - Google Patents

Biometric liveness detection method and device, and computer equipment

Info

Publication number
CN113743220A
CN113743220A (application CN202110889466.1A)
Authority
CN
China
Prior art keywords
biological, feature, original, data, biological characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110889466.1A
Other languages
Chinese (zh)
Inventor
黄觉坤
包慧东
陈俏钢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shangzhou Zhilian Technology Co ltd
Original Assignee
Shenzhen Shangzhou Zhilian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shangzhou Zhilian Technology Co ltd filed Critical Shenzhen Shangzhou Zhilian Technology Co ltd
Priority to CN202110889466.1A priority Critical patent/CN113743220A/en
Publication of CN113743220A publication Critical patent/CN113743220A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Abstract

The invention relates to a biometric liveness detection method and device and computer equipment. The method comprises the following steps: S1, randomly scribbling graffiti onto original biometric data to generate graffiti biometric data; S2, inputting the original biometric data and the graffiti biometric data into a feature extractor, and extracting the original features of the original biometric data and the graffiti features of the graffiti biometric data; S3, calculating the similarity between the original features and the graffiti features, and constraining it with a loss function so that the feature distance between them tends to a minimum; S4, repeating the above steps until the feature extractor completes training; and S5, embedding the trained feature extractor into a biometric recognition system. By establishing a graffiti-restoration pre-training task, the invention enables the feature extractor to capture more biometric features, and uses those richer features to achieve higher-quality biometric liveness detection.

Description

Biometric liveness detection method and device, and computer equipment
Technical Field
The present invention relates to the field of biometric liveness detection, and more particularly to a biometric liveness detection method, apparatus and computer device.
Background
Artificial neural networks are increasingly applied to anti-counterfeiting, where they offer clear advantages over traditional target-detection methods, for example in identifying whether a fingerprint image or other biometric image is genuine. One of the main challenges for neural-network-based biometric anti-counterfeiting is weight initialization during training. Networks are commonly initialized with parameters pre-trained on ImageNet; however, the ImageNet data set consists largely of natural images, so weights trained on it lack specificity for the liveness-detection problem. A model pre-trained on ImageNet therefore often performs poorly: training accuracy drops substantially, and a fingerprint authenticity classifier built on such a network cannot reliably distinguish genuine fingerprint images from fakes, leaving its discrimination ability low and its anti-counterfeiting capability weak.
Disclosure of Invention
To overcome the above-mentioned drawbacks of the prior art, the present invention provides a biometric liveness detection method, apparatus and computer device.
The technical solution adopted by the invention to solve the technical problem is to construct a biometric liveness detection method comprising the following steps:
S1, selecting one piece of original biometric data from a biometric sample set, which contains a plurality of pieces of original biometric data, and randomly scribbling graffiti onto it to generate graffiti biometric data;
S2, inputting the original biometric data and the graffiti biometric data into a feature extractor, and extracting the original features of the original biometric data and the graffiti features of the graffiti biometric data;
S3, calculating the similarity between the original features and the graffiti features, and constraining it with a loss function so that the feature distance between them tends to a minimum;
S4, repeating steps S1 to S3, training the feature extractor on the original biometric data in the biometric sample set one piece at a time, until every piece of original biometric data in the set has been used and the feature extractor finishes training; and
S5, embedding the trained feature extractor into a biometric recognition system, which calls the feature extractor to perform biometric liveness detection.
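The S1-S4 training loop can be sketched end to end. Everything below is a hypothetical stand-in: `extract_features` is a simple linear map substituting for the real neural-network feature extractor, and `random_graffiti` paints a random band rather than a free-form line.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_graffiti(image: np.ndarray) -> np.ndarray:
    """S1 stand-in: overwrite a random horizontal band with a random color."""
    out = image.copy()
    h = image.shape[0]
    width = int(rng.integers(5, 36))             # patent range: 5-35 pixels wide
    y = int(rng.integers(0, max(1, h - width)))
    out[y:y + width] = rng.integers(0, 256, 3)   # patent range: RGB channels 0-255
    return out

def extract_features(image: np.ndarray, w: np.ndarray) -> np.ndarray:
    """S2 stand-in: a single linear map, NOT the patent's neural network."""
    return image.reshape(-1) @ w

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """S3: 1 - cosine similarity; training drives this distance toward 0."""
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# S4: iterate over every sample in the biometric sample set
samples = [rng.integers(0, 256, (64, 64, 3)).astype(np.float64) for _ in range(4)]
w = rng.standard_normal((64 * 64 * 3, 128))
losses = [cosine_distance(extract_features(x, w),
                          extract_features(random_graffiti(x), w)) for x in samples]
print(len(losses), all(0.0 <= l <= 2.0 for l in losses))
```

In a real implementation each loss would be backpropagated through the feature extractor; here the loop only illustrates the data flow of steps S1 to S4.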
Further, in the biometric liveness detection method of the present invention, the method further includes, after step S3 and before step S4: S31, adding constraint conditions on the original features and the graffiti features, and calculating the similarity of the original and graffiti features under the added constraints;
the repeated execution of steps S1 to S3 in step S4 then comprises repeatedly executing steps S1 to S31.
Further, in the biometric liveness detection method of the present invention, randomly scribbling graffiti onto the original biometric data in step S1 to generate graffiti biometric data comprises: scribbling random lines onto the original biometric data to generate the graffiti biometric data.
Further, in the biometric liveness detection method of the present invention, the shape of each random line is randomly generated, its width is randomly selected from 5 to 35 pixels, and each of its three color-channel values is randomly selected from 0 to 255.
Further, in the biometric liveness detection method of the present invention, the feature extractor is a neural network structure;
the biometric features are human faces or fingerprints.
Further, in the biometric liveness detection method of the present invention, the neural network structure is ResNet-18.
Further, in the biometric liveness detection method of the present invention, extracting the original features of the original biometric data and the graffiti features of the graffiti biometric data in step S2 comprises: extracting both sets of features after all neurons of the feature extractor have been activated.
Further, in the biometric liveness detection method of the present invention, the biometric recognition system calling the feature extractor to perform biometric liveness detection in step S5 comprises: the biometric recognition system calls a classifier to identify whether the biometric image captured by the camera is a live biometric image, and if so, calls the feature extractor to perform biometric liveness detection.
In addition, the present invention also provides a biometric liveness detection apparatus, comprising:
a graffiti unit for selecting one piece of original biometric data from a biometric sample set, which contains a plurality of pieces of original biometric data, and randomly scribbling graffiti onto it to generate graffiti biometric data;
a feature extraction unit for inputting the original biometric data and the graffiti biometric data into a feature extractor, and extracting the original features of the original biometric data and the graffiti features of the graffiti biometric data;
a similarity calculation unit for calculating the similarity between the original features and the graffiti features, and constraining it with a loss function so that the feature distance between them tends to a minimum;
a repeated training unit for repeatedly executing the graffiti unit, the feature extraction unit and the similarity calculation unit, training the feature extractor on the original biometric data in the sample set one piece at a time, until every piece of original biometric data in the set has been used and the feature extractor finishes training; and
a biometric recognition unit for embedding the trained feature extractor into a biometric recognition system, which calls the feature extractor to perform biometric liveness detection.
In addition, the invention also provides a computer device comprising a processor and a memory communicatively connected to the processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory to implement the biometric liveness detection method described above.
The biometric liveness detection method, device and computer equipment of the present invention have the following beneficial effects: by establishing the graffiti-restoration pre-training task, the invention enables the feature extractor to capture more biometric features, and uses those richer features to achieve higher-quality biometric liveness detection.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
fig. 1 is a flowchart of a face liveness detection method according to an embodiment of the present invention;
FIGS. 2a and 2b are face images before and after graffiti according to an embodiment of the invention;
FIG. 3 is a flowchart of a face liveness detection method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a face liveness detection apparatus according to an embodiment of the present invention.
Detailed Description
For a clearer understanding of the technical features, objects and effects of the present invention, embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
The biometric features of the invention may be human faces or fingerprints; faces are used below for illustration, and the fingerprint case can be implemented by analogy. Referring to fig. 1, the face liveness detection method of this embodiment comprises the following steps:
S1, selecting original face data from a face sample set, which contains a plurality of pieces of original face data, and randomly scribbling graffiti onto it to generate graffiti face data.
Specifically, a large amount of original face data is prepared in advance as the face sample set. Note that this data need not be the data later used for face liveness detection; real face data is generally much easier to obtain than forgery-attack data, so the sample set is easy to prepare. After the pre-training task is established, a piece of original face data is selected from the set and randomly scribbled over to generate graffiti face data. Random graffiti means the graffiti pattern is not fixed in advance but is an unknown pattern composed randomly on the fly; this is closer to real-world graffiti and makes the training of the feature extractor more principled.
Optionally, the graffiti patterns generated by random graffiti in this embodiment include, but are not limited to, single points, dot matrices, characters, lines and figures; random lines are used below to explain the principle, and the other patterns can be implemented by analogy. Step S1 then becomes: scribbling random lines onto the original face data to generate graffiti face data, where the shape of each line is randomly generated, its width is randomly selected from 5 to 35 pixels, and each of its three color-channel (RGB) values, i.e. the red, green and blue components, is randomly selected from 0 to 255. Fig. 2a shows original face data, and fig. 2b shows the graffiti face data after random lines have been added.
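A minimal NumPy-only sketch of the random-line graffiti described above. The square "brush" and the choice of a few linearly interpolated control points are illustrative assumptions; the patent only specifies the width and color ranges, not how the random shape is composed.

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_random_line(img: np.ndarray) -> np.ndarray:
    """Scribble one random polyline onto an RGB image, per the patent's ranges:
    width drawn from 5-35 px, each RGB channel drawn from 0-255, shape random."""
    out = img.copy()
    h, w, _ = img.shape
    width = int(rng.integers(5, 36))
    color = rng.integers(0, 256, 3)
    # random control points; linear interpolation approximates a free-form stroke
    pts = rng.integers(0, [h, w], size=(4, 2))
    for (y0, x0), (y1, x1) in zip(pts[:-1], pts[1:]):
        for t in np.linspace(0.0, 1.0, num=max(h, w)):
            cy, cx = int(y0 + t * (y1 - y0)), int(x0 + t * (x1 - x0))
            r = width // 2
            # square brush centered on the interpolated point (slices self-clip)
            out[max(0, cy - r):cy + r + 1, max(0, cx - r):cx + r + 1] = color
    return out

face = np.full((128, 128, 3), 200, dtype=np.uint8)   # flat stand-in "face" image
graffiti_face = draw_random_line(face)
changed = int((graffiti_face != face).any(axis=2).sum())
print(changed > 0)
```

A real pipeline would apply this to the loaded face image (fig. 2a) to produce the graffiti image (fig. 2b) before feeding both to the feature extractor.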
S2, inputting the original face data and the graffiti face data into the feature extractor, and extracting the original features of the original face data and the graffiti features of the graffiti face data.
Specifically, a feature extractor is set up in advance as a neural network structure. It is composed of a deep convolutional neural network whose feature-extraction capability comes from stacking convolutional layers, and it can be adapted to the deep-learning network of a specific task. The features it extracts cover texture, context and semantic information. Context information means, for example, that in a face recognition system an obvious boundary defect or frame around the face indicates with high probability that the input sample is an attack sample, so the contents of different receptive fields must be fused simultaneously. Semantic information refers to semantic features of the face, such as double or single eyelids, which are features abstracted away from low-dimensional pixel information. Optionally, the neural network structure is ResNet-18, an 18-layer residual network.
The original face data and the graffiti face data are input into the feature extractor, which extracts their original and graffiti features respectively. Note that during extraction the neurons of the feature extractor's network should be activated as completely as possible; that is, the original and graffiti features are extracted after all neurons of the feature extractor have been activated.
S3, calculating the similarity between the original features and the graffiti features, and constraining it with a loss function so that the feature distance tends to a minimum. Specifically, after the original and graffiti features are extracted, their similarity is computed and constrained with a loss function. Optionally, this embodiment uses cosine similarity to measure the distance between the original and graffiti features, and trains the neural network by minimizing that distance.
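The cosine-similarity objective of step S3 can be written out directly. This is a small illustrative sketch; `restoration_loss` is a hypothetical name for the "1 - cosine similarity" distance being minimized.

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def restoration_loss(orig_feat, graffiti_feat) -> float:
    """Loss = 1 - cos(orig, graffiti): minimizing it pulls the two features together."""
    return 1.0 - cosine_similarity(orig_feat, graffiti_feat)

f_orig = np.array([1.0, 2.0, 3.0])
print(restoration_loss(f_orig, f_orig))   # identical features: loss ~ 0.0 (up to fp error)
print(restoration_loss(f_orig, -f_orig))  # opposite features: loss ~ 2.0
```

Gradient descent on this loss teaches the extractor to produce the same features for a face whether or not it has been scribbled over, which is the restoration behavior the pre-training task targets.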
S4, repeating steps S1 to S3, training the feature extractor on the original face data in the face sample set one piece at a time, until every piece of original face data in the set has been used and the feature extractor finishes training.
S5, embedding the trained feature extractor into a face recognition system, which calls the feature extractor to perform face liveness detection.
Specifically, some parameters of the neural network model behind the feature extractor must be obtained through continued training, so the model's initialization point matters greatly. When the trained feature extractor is embedded into the face recognition system, it is re-adapted and adjusted for the specific liveness detection task; that is, the network model containing the trained initialization parameters is embedded into the system, while the normal training, loss function, data setup and model setup for the liveness task do not differ from ordinary deep-learning-based liveness detection. Once embedded, the face recognition system can use the feature extractor for face liveness detection.
To distinguish live faces from non-live ones, this embodiment sets up a classifier for liveness judgment: the classifier outputs 1 if the input face image is a live face, and 0 if it is non-live, for example a face mask or a face shown on a display screen. When face liveness detection is requested, the face recognition system first calls the classifier to identify whether the face image captured by the camera is a live face image; if so, it calls the feature extractor to perform face liveness detection, and if not, it issues a reminder that the image is not a live face image.
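The classifier-then-extractor flow just described can be sketched as follows. The brightness-threshold classifier and mean-pool extractor are toy stand-ins, not the patent's models; only the 1/0 gating logic mirrors the text.

```python
from typing import Callable, Union
import numpy as np

def liveness_pipeline(image: np.ndarray,
                      classifier: Callable[[np.ndarray], int],
                      feature_extractor: Callable[[np.ndarray], np.ndarray]
                      ) -> Union[np.ndarray, str]:
    """Classifier outputs 1 (live) or 0 (non-live); only live images are
    passed on to the pretrained feature extractor for liveness detection."""
    if classifier(image) == 1:
        return feature_extractor(image)       # proceed with liveness detection
    return "not a live face image"            # the reminder-message branch

# toy stand-ins (hypothetical): threshold classifier, per-channel mean extractor
clf = lambda img: 1 if img.mean() > 100 else 0
ext = lambda img: img.mean(axis=(0, 1))

live = np.full((8, 8, 3), 150, dtype=np.uint8)
spoof = np.full((8, 8, 3), 20, dtype=np.uint8)
print(liveness_pipeline(live, clf, ext))    # feature vector branch
print(liveness_pipeline(spoof, clf, ext))   # reminder branch
```

In deployment the two callables would be the trained liveness classifier and the graffiti-pretrained ResNet-18 extractor embedded in the face recognition system.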
By establishing the graffiti-restoration pre-training task, this embodiment enables the feature extractor to capture more facial features and uses those richer features to achieve higher-quality face liveness detection.
In the face liveness detection method of some embodiments, referring to fig. 3, the method further includes, after step S3 and before step S4: S31, adding constraint conditions on the original features and the graffiti features, and calculating their similarity under the added constraints. Specifically, the embodiment above performs graffiti restoration on a single face image, but relying on restoration alone causes a model-collapse problem: the output becomes constant no matter what image is input. To solve this, constraint conditions are added. Given a face, the feature extractor extracts a feature vector, recorded as the Anchor vector. Another face is then randomly sampled, taken as input, and passed through the feature extractor to obtain a feature vector recorded as the Negative vector. Computing the cosine similarity between the Anchor and Negative vectors and minimizing it as an additional loss function prevents model collapse. In other words, unlike the similarity constraint of step S3, which acts on original and graffiti features extracted from the same face before and after scribbling, step S31 uses cosine similarity to measure the feature distance between different faces and then maximizes that distance, ensuring maximal differentiation between faces. Correspondingly, the repeated execution of steps S1 to S3 in step S4 becomes the repeated execution of steps S1 to S31.
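The S3 pull term and the S31 anti-collapse push term can be combined into one illustrative loss. This is a sketch with random vectors standing in for the Anchor, graffiti and Negative features; the real values would come from the feature extractor.

```python
import numpy as np

def cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def combined_loss(anchor: np.ndarray, graffiti: np.ndarray,
                  negative: np.ndarray) -> float:
    """Pull the anchor toward its own graffiti version (S3) while pushing it
    away from a randomly sampled different face (S31); the push term is what
    prevents a collapsed constant output."""
    pull = 1.0 - cos(anchor, graffiti)   # minimize distance to same-face graffiti feature
    push = cos(anchor, negative)         # minimize similarity to a different face
    return pull + push

rng = np.random.default_rng(1)
anchor   = rng.standard_normal(128)
graffiti = anchor + 0.05 * rng.standard_normal(128)  # nearly the same face
negative = rng.standard_normal(128)                  # a different, random face
loss = combined_loss(anchor, graffiti, negative)
print(loss)
```

Minimizing this combined loss keeps same-face features close while spreading different faces apart, which is exactly the differentiation guarantee the embodiment describes.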
By adding constraint conditions, this embodiment increases the differentiation between distinct faces, prevents model collapse, and improves the success rate of face liveness detection.
The biometric features of the invention may be human faces or fingerprints; faces are used below for illustration, and the fingerprint case can be implemented by analogy. Referring to fig. 4, the face liveness detection apparatus of this embodiment comprises a graffiti unit, a feature extraction unit, a similarity calculation unit, a repeated training unit and a biometric recognition unit, each described below.
The graffiti unit selects original face data from a face sample set, which contains a plurality of pieces of original face data, and randomly scribbles graffiti onto it to generate graffiti face data.
Specifically, a large amount of original face data is prepared in advance as the face sample set. Note that this data need not be the data later used for face liveness detection; real face data is generally much easier to obtain than forgery-attack data, so the sample set is easy to prepare. After the pre-training task is established, a piece of original face data is selected from the set and randomly scribbled over to generate graffiti face data. Random graffiti means the graffiti pattern is not fixed in advance but is an unknown pattern composed randomly on the fly; this is closer to real-world graffiti and makes the training of the feature extractor more principled.
Optionally, the graffiti patterns generated by random graffiti in this embodiment include, but are not limited to, single points, dot matrices, characters, lines and figures; random lines are used below to explain the principle, and the other patterns can be implemented by analogy. Random graffiti then consists of scribbling random lines onto the original face data to generate graffiti face data, where the shape of each line is randomly generated, its width is randomly selected from 5 to 35 pixels, and each of its three color-channel (RGB) values, i.e. the red, green and blue components, is randomly selected from 0 to 255. Fig. 2a shows original face data, and fig. 2b shows the graffiti face data after random lines have been added.
The feature extraction unit inputs the original face data and the graffiti face data into the feature extractor, and extracts the original features of the original face data and the graffiti features of the graffiti face data.
Specifically, a feature extractor is set up in advance as a neural network structure. It is composed of a deep convolutional neural network whose feature-extraction capability comes from stacking convolutional layers, and it can be adapted to the deep-learning network of a specific task. The features it extracts cover texture, context and semantic information. Context information means, for example, that in a face recognition system an obvious boundary defect or frame around the face indicates with high probability that the input sample is an attack sample, so the contents of different receptive fields must be fused simultaneously. Semantic information refers to semantic features of the face, such as double or single eyelids, which are features abstracted away from low-dimensional pixel information. Optionally, the neural network structure is ResNet-18, an 18-layer residual network.
The original face data and the graffiti face data are input into the feature extractor, which extracts their original and graffiti features respectively. Note that during extraction the neurons of the feature extractor's network should be activated as completely as possible; that is, the original and graffiti features are extracted after all neurons of the feature extractor have been activated.
The similarity calculation unit calculates the similarity between the original features and the graffiti features, and constrains it with a loss function so that the feature distance tends to a minimum. Specifically, after the original and graffiti features are extracted, their similarity is computed and constrained with a loss function. Optionally, this embodiment uses cosine similarity to measure the distance between the original and graffiti features, and trains the neural network by minimizing that distance.
The repeated training unit repeatedly executes the graffiti unit, the feature extraction unit and the similarity calculation unit, training the feature extractor on the original face data in the face sample set one piece at a time, until every piece of original face data in the set has been used and the feature extractor finishes training.
The biometric recognition unit embeds the trained feature extractor into a face recognition system, which calls the feature extractor to perform face liveness detection.
Specifically, some parameters of the neural network model behind the feature extractor must be obtained through continued training, so the model's initialization point matters greatly. When the trained feature extractor is embedded into the face recognition system, it is re-adapted and adjusted for the specific liveness detection task; that is, the network model containing the trained initialization parameters is embedded into the system, while the normal training, loss function, data setup and model setup for the liveness task do not differ from ordinary deep-learning-based liveness detection. Once embedded, the face recognition system can use the feature extractor for face liveness detection.
To distinguish live faces from non-live ones, this embodiment sets up a classifier for liveness judgment: the classifier outputs 1 if the input face image is a live face, and 0 if it is non-live, for example a face mask or a face shown on a display screen. When face liveness detection is requested, the face recognition system first calls the classifier to identify whether the face image captured by the camera is a live face image; if so, it calls the feature extractor to perform face liveness detection, and if not, it issues a reminder that the image is not a live face image.
By establishing the graffiti-restoration pre-training task, this embodiment enables the feature extractor to capture more facial features and uses those richer features to achieve higher-quality face liveness detection.
In a preferred embodiment, the computer device comprises a processor and a memory communicatively connected to the processor. The memory stores a computer program; the processor executes the computer program stored in the memory to implement the face liveness detection method described above. By establishing the graffiti-restoration pre-training task, the computer device of this embodiment enables the feature extractor to capture more facial features and uses those richer features to achieve higher-quality face liveness detection.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill will further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments merely illustrate the technical ideas and features of the present invention; they are intended to enable those skilled in the art to understand and implement the invention, not to limit its scope. All equivalent changes and modifications made within the scope of the claims of the present invention should be covered by those claims.

Claims (10)

1. A biometric liveness detection method, characterized by comprising the following steps:
S1, selecting one piece of original biometric data from a biometric sample set, and applying random graffiti to the original biometric data to generate graffiti biometric data, wherein the biometric sample set comprises a plurality of pieces of original biometric data;
S2, inputting the original biometric data and the graffiti biometric data into a feature extractor, and extracting an original feature of the original biometric data and a graffiti feature of the graffiti biometric data;
S3, calculating the similarity between the original feature and the graffiti feature, and constraining the similarity with a loss function so that the similarity tends to a minimum;
S4, repeating steps S1 to S3 to train the feature extractor with the original biometric data in the biometric sample set, one piece at a time, until all of the original biometric data in the sample set have been used for training, at which point the feature extractor has finished training;
and S5, embedding the trained feature extractor into a biometric recognition system, the biometric recognition system calling the feature extractor to perform biometric liveness detection.
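Steps S1 to S4 above can be sketched as a toy training loop. Everything here is an illustrative assumption rather than the patented implementation: the "feature extractor" is a plain linear map instead of a neural network, the graffiti is a single random stripe, and the translated claim's constraint on similarity is read as minimizing a squared-distance loss between the original and graffiti features.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_graffiti(img):
    """S1 (toy stand-in): overwrite one random row with a random intensity."""
    out = img.copy()
    out[rng.integers(0, img.shape[0]), :] = rng.uniform(0.0, 1.0)
    return out

# Toy linear "feature extractor": feature = W @ flattened image.
W = rng.normal(scale=0.1, size=(8, 16))
sample_set = [rng.uniform(0.0, 1.0, size=(4, 4)) for _ in range(20)]
lr = 0.05

for img in sample_set:                       # S4: one sample at a time
    x = img.ravel()
    x_g = random_graffiti(img).ravel()       # S1: graffiti version
    f, f_g = W @ x, W @ x_g                  # S2: both features
    diff = f - f_g
    loss = float(diff @ diff)                # S3: squared-distance loss
    # Analytic gradient of ||W x - W x_g||^2 with respect to W.
    W -= lr * 2.0 * np.outer(diff, x - x_g)

print(loss >= 0.0)  # True: the loss is a squared distance
```

After the loop, the extractor has seen every sample once (the claim's stopping condition); in practice one would run multiple epochs and then freeze the weights before embedding the extractor in the recognition system (S5).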
2. The biometric liveness detection method according to claim 1, further comprising, after step S3 and before step S4: S31, adding constraint conditions to the original feature and the graffiti feature, and calculating the similarity between the original feature and the graffiti feature after the constraint conditions are added;
the repeating of steps S1 to S3 in step S4 then comprising: repeating steps S1 to S31.
3. The biometric liveness detection method according to claim 1, wherein applying random graffiti to the original biometric data to generate graffiti biometric data in step S1 comprises: applying random graffiti to the original biometric data with a random line to generate the graffiti biometric data.
4. The biometric liveness detection method according to claim 3, wherein the shape of the random line is randomly generated, the width of the random line is randomly selected from 5 to 35 pixels, and each of the three color-channel values of the random line is randomly selected from 0 to 255.
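The parameter ranges in this claim — a randomly shaped line, a width drawn from 5 to 35 pixels, each color channel from 0 to 255 — can be sampled as below. The polyline control points and the 224×224 canvas size are illustrative assumptions, not part of the claim.

```python
import random

def random_line_params():
    """Sample graffiti-line parameters in the claimed ranges:
    width 5-35 px, each color channel 0-255. The 'shape' is
    represented here as a hypothetical random polyline."""
    return {
        "width": random.randint(5, 35),                      # inclusive bounds
        "color": tuple(random.randint(0, 255) for _ in range(3)),
        "points": [(random.randint(0, 223), random.randint(0, 223))
                   for _ in range(random.randint(3, 6))],    # assumed 224x224 canvas
    }

p = random_line_params()
print(5 <= p["width"] <= 35)                   # True
print(all(0 <= c <= 255 for c in p["color"]))  # True
```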
5. The biometric liveness detection method according to claim 1, wherein the feature extractor is a neural network structure;
and the biometric feature is a human face or a fingerprint.
6. The biometric liveness detection method according to claim 5, wherein the neural network structure is ResNet-18.
7. The biometric liveness detection method according to claim 5, wherein extracting the original feature of the original biometric data and the graffiti feature of the graffiti biometric data in step S2 comprises: after all neurons of the feature extractor are activated, extracting the original feature of the original biometric data and the graffiti feature of the graffiti biometric data.
8. The biometric liveness detection method according to claim 1, wherein the biometric recognition system calling the feature extractor to perform biometric liveness detection in step S5 comprises: the biometric recognition system calling a classifier to identify whether a biometric image captured by a camera is a live biometric image, and, if it is, calling the feature extractor to perform biometric liveness detection.
9. A biometric liveness detection device, characterized by comprising:
a graffiti unit, configured to select one piece of original biometric data from a biometric sample set and apply random graffiti to the original biometric data to generate graffiti biometric data, wherein the biometric sample set comprises a plurality of pieces of original biometric data;
a feature extraction unit, configured to input the original biometric data and the graffiti biometric data into a feature extractor and to extract an original feature of the original biometric data and a graffiti feature of the graffiti biometric data;
a similarity calculation unit, configured to calculate the similarity between the original feature and the graffiti feature and to constrain the similarity with a loss function so that the similarity tends to a minimum;
a repeated training unit, configured to repeatedly execute the graffiti unit, the feature extraction unit, and the similarity calculation unit, training the feature extractor with the original biometric data in the biometric sample set one piece at a time, until all of the original biometric data in the sample set have been used for training, at which point the feature extractor has finished training;
and a biometric recognition unit, configured to embed the trained feature extractor into a biometric recognition system, the biometric recognition system calling the feature extractor to perform biometric liveness detection.
10. A computer device, comprising a processor and a memory, the processor being communicatively coupled to the memory;
the memory is used to store a computer program;
and the processor is configured to execute the computer program stored in the memory to implement the biometric liveness detection method of any one of claims 1 to 8.
CN202110889466.1A 2021-08-04 2021-08-04 Biological characteristic in-vivo detection method and device and computer equipment Pending CN113743220A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110889466.1A CN113743220A (en) 2021-08-04 2021-08-04 Biological characteristic in-vivo detection method and device and computer equipment

Publications (1)

Publication Number Publication Date
CN113743220A true CN113743220A (en) 2021-12-03

Family ID: 78729990

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633165A (en) * 2017-10-26 2018-01-26 深圳奥比中光科技有限公司 3D face identity authentications and device
CN109492550A (en) * 2018-10-25 2019-03-19 腾讯科技(深圳)有限公司 The related system of biopsy method, device and application biopsy method
CN109492551A (en) * 2018-10-25 2019-03-19 腾讯科技(深圳)有限公司 The related system of biopsy method, device and application biopsy method
CN109886167A (en) * 2019-02-01 2019-06-14 中国科学院信息工程研究所 One kind blocking face identification method and device
CN110443102A (en) * 2018-05-04 2019-11-12 北京眼神科技有限公司 Living body faces detection method and device
CN111460939A (en) * 2020-03-20 2020-07-28 深圳市优必选科技股份有限公司 Deblurring face recognition method and system and inspection robot
CN112215043A (en) * 2019-07-12 2021-01-12 普天信息技术有限公司 Human face living body detection method
CN112766162A (en) * 2021-01-20 2021-05-07 北京市商汤科技开发有限公司 Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
CN113052144A (en) * 2021-04-30 2021-06-29 平安科技(深圳)有限公司 Training method, device and equipment of living human face detection model and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination