CN111985423A - Living body detection method, living body detection device, living body detection equipment and readable storage medium - Google Patents


Info

Publication number
CN111985423A
Authority
CN
China
Prior art keywords
frame difference
data
living body
frame
body detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010866143.6A
Other languages
Chinese (zh)
Inventor
谭圣琦
吴泽衡
徐倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202010866143.6A
Publication of CN111985423A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20224 Image subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a living body detection method, apparatus, device, and readable storage medium. The living body detection method includes: acquiring frame difference data corresponding to video data to be detected; determining frame difference noise feature data corresponding to the frame difference data; and performing living body detection on the video data to be detected based on the frame difference noise feature data to obtain a living body detection result. The application addresses the technical problem of low living body detection accuracy.

Description

Living body detection method, living body detection device, living body detection equipment and readable storage medium
Technical Field
The present application relates to the field of artificial intelligence in financial technology (Fintech), and more particularly, to a method, an apparatus, a device, and a readable storage medium for detecting a living body.
Background
With the continuous development of financial technology, especially internet finance, more and more technologies (such as distributed computing, blockchain, and artificial intelligence) are being applied in the financial field, but the financial industry also places higher requirements on these technologies.
With the continuous development of computer software and artificial intelligence, the application fields of artificial intelligence are becoming ever broader; for example, artificial intelligence is often applied to face recognition. Because face data is easy to obtain, living body detection is usually required during face recognition to ensure its accuracy. At present, living body detection is performed by analyzing motion changes of the face (action-based liveness), voice information and mouth motion changes (digit-reading liveness), or the three-dimensional structure of the face; that is, living body detection is performed using the video feature information in the video data. However, since video data can be forged through video editing techniques, if a malicious attacker attacks a face recognition system with forged video data, current living body detection methods can hardly identify whether the video data is forged, which results in low living body detection accuracy.
Disclosure of Invention
The application mainly aims to provide a living body detection method, a living body detection apparatus, living body detection equipment, and a readable storage medium, so as to solve the technical problem of low living body detection accuracy in the prior art.
To achieve the above object, the present application provides a living body detecting method applied to a living body detecting apparatus, the living body detecting method including:
acquiring frame difference data corresponding to video data to be detected, and determining frame difference noise characteristic data corresponding to the frame difference data;
and performing living body detection on the video data to be detected based on the frame difference noise characteristic data to obtain a living body detection result.
Optionally, the frame difference noise characteristic data comprises spatial domain frame difference noise characteristic data,
the step of determining the frame difference noise characteristic data corresponding to the frame difference data comprises:
and filtering the frame difference data to amplify the noise signals in the frame difference data to obtain space domain frame difference noise characteristic data.
Optionally, the frame difference noise characteristic data comprises frequency domain frame difference noise characteristic data,
the step of determining the frame difference noise characteristic data corresponding to the frame difference data comprises:
and performing Fourier transform on the frame difference data to transform the frame difference data from a spatial domain to a frequency domain, so as to obtain the frequency domain frame difference noise characteristic data corresponding to the frame difference data.
Optionally, the step of performing living body detection on the video data to be detected based on the frame difference noise characteristic data to obtain a living body detection result includes:
inputting the frame difference noise characteristic data into a preset classification model, classifying each frame difference noise characteristic in the frame difference noise characteristic data, and obtaining a frame difference noise characteristic classification result;
and performing living body detection on the video data to be detected based on the frame difference noise characteristic classification result to obtain a living body detection result.
Optionally, the frame difference noise feature classification result comprises living body frame difference noise features,
the step of performing living body detection on the video data to be detected based on the frame difference noise feature classification result to obtain the living body detection result includes:
calculating the feature quantity ratio of the living body frame difference noise features in the frame difference noise feature classification result;
comparing the feature quantity ratio with a preset quantity ratio threshold, and if the feature quantity ratio is greater than or equal to the preset quantity ratio threshold, judging that the living body detection result is a pass;
and if the feature quantity ratio is smaller than the preset quantity ratio threshold, judging that the living body detection result is a fail.
Optionally, the step of acquiring frame difference data corresponding to the video data to be detected includes:
acquiring continuous frame data corresponding to the video data to be detected;
and calculating the frame difference between each two continuous frame images in the continuous frame data to obtain the frame difference data.
Optionally, the frame difference data comprises at least a frame difference image matrix, and the continuous frame images comprise a first continuous frame image and a second continuous frame image,
the step of calculating the frame difference between each two consecutive frames of images in the consecutive frame data to obtain the frame difference data includes:
acquiring a first pixel matrix corresponding to the first continuous frame image and a second pixel matrix corresponding to the second continuous frame image;
and carrying out subtraction operation on the first pixel matrix and the second pixel matrix to obtain the frame difference image matrix.
The present application further provides a living body detection apparatus; the living body detection apparatus is a virtual apparatus and is applied to the living body detection device. The living body detection apparatus includes:
the extraction module is used for acquiring frame difference data corresponding to video data to be detected and determining frame difference noise characteristic data corresponding to the frame difference data;
and the living body detection module is used for carrying out living body detection on the video data to be detected based on the frame difference noise characteristic data to obtain a living body detection result.
Optionally, the extraction module comprises:
and the filtering unit is used for filtering the frame difference data so as to amplify the noise signals in the frame difference data and obtain space domain frame difference noise characteristic data.
Optionally, the extraction module further comprises:
and the transformation module is used for carrying out Fourier transformation on the frame difference data so as to transform the frame difference data from a spatial domain to a frequency domain and obtain the frequency domain frame difference noise characteristic data corresponding to the frame difference data.
Optionally, the liveness detection module comprises:
the classification unit is used for inputting the frame difference noise characteristic data into a preset classification model, classifying each frame difference noise characteristic in the frame difference noise characteristic data and obtaining a frame difference noise characteristic classification result;
and the living body detection unit is used for carrying out living body detection on the video data to be detected based on the frame difference noise characteristic classification result to obtain a living body detection result.
Optionally, the living body detecting unit includes:
the first calculating subunit is used for calculating the feature quantity ratio of the living body frame difference noise features in the frame difference noise feature classification result;
the first judgment subunit is configured to compare the feature quantity ratio with a preset quantity ratio threshold, and if the feature quantity ratio is greater than or equal to the preset quantity ratio threshold, judge that the in-vivo detection result passes;
and the second judging subunit is configured to judge that the in-vivo detection result is failed if the feature quantity ratio is smaller than the preset quantity ratio threshold.
Optionally, the extraction module further comprises:
the acquisition unit is used for acquiring continuous frame data corresponding to the video data to be detected;
and the calculating unit is used for calculating the frame difference between each two continuous frame images in the continuous frame data to obtain the frame difference data.
Optionally, the computing unit comprises:
an obtaining subunit, configured to obtain a first pixel matrix corresponding to the first continuous frame image and a second pixel matrix corresponding to the second continuous frame image;
and the second calculating subunit is configured to perform subtraction on the first pixel matrix and the second pixel matrix to obtain the frame difference image matrix.
The application also provides a living body detection device; the living body detection device is a physical device and includes: a memory, a processor, and a program of the living body detection method stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the living body detection method as described above.
The present application also provides a readable storage medium having stored thereon a program for implementing a living body detection method, the program for implementing the living body detection method implementing the steps of the living body detection method as described above when executed by a processor.
The application provides a living body detection method, device, and readable storage medium. Compared with the prior-art technique of performing living body detection based on the video feature information of video data, the application first obtains frame difference data corresponding to the video data to be detected and then extracts frame difference noise feature data corresponding to the frame difference data. It should be noted that the frame difference noise feature data of manually edited, forged video data differs from that of naturally shot video data, so by performing living body detection on the video data to be detected based on the frame difference noise feature data, it can be identified whether the video data corresponding to the frame difference noise feature data is forged, and a more accurate living body detection result is obtained. This overcomes the technical defect in the prior art that the video feature information of video data can be forged, which makes living body detection based on such information inaccurate, and thus improves the accuracy of living body detection.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of the living body detection method of the present application;
FIG. 2 is a schematic diagram of the second order difference matrix in the living body detection method of the present application;
FIG. 3 is a schematic diagram of the edge difference matrix in the living body detection method of the present application;
FIG. 4 is a schematic diagram of the square difference matrix in the living body detection method of the present application;
FIG. 5 is a schematic flow chart of a second embodiment of the living body detection method of the present application;
FIG. 6 is a schematic diagram of the device structure of a hardware operating environment according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the living body detection method of the present application, referring to FIG. 1, the living body detection method includes:
step S10, acquiring frame difference data corresponding to video data to be detected, and determining frame difference noise characteristic data corresponding to the frame difference data;
in this embodiment, it should be noted that the living body detection method is applied to a face recognition system, the video data to be detected is video data including a face, the video data to be detected at least includes one continuous frame image, the frame difference data includes a frame difference image between the continuous frame images, and the frame difference noise feature data is noise feature information data possessed by the frame difference data and used for distinguishing a manually edited video from a naturally shot video, where it is noted that the frame difference noise feature data is used for manually editing a video obtained by editing an image and a video, and when editing is performed, an original frame difference noise feature is changed, and further the frame difference noise feature of the manually edited video is different from the frame difference noise feature of the naturally shot video, for example, assuming that the frame difference noise feature is a frame difference noise in the frame difference image, the frame difference noise corresponding to the naturally shot video should conform to gaussian distribution or poisson distribution, and clipping the video will destroy the distribution condition of the frame difference noise, so that the frame difference noise corresponding to the manually edited video will not conform to gaussian distribution or poisson distribution.
Frame difference data corresponding to the video data to be detected is obtained, and frame difference noise feature data corresponding to the frame difference data is determined. Specifically, the video data to be detected is extracted from a preset database and decoded, restoring the video data from its compressed storage state into continuous frame data, where the continuous frame data consists of the continuous frame images in the video data to be detected and each continuous frame image is the image of one time frame. The frame differences between the continuous frame images of adjacent time frames are then calculated to obtain the frame difference data, and feature extraction is performed on the frame difference data to obtain the frame difference noise feature data.
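The flow described above (decode to frames, compute frame differences, extract a noise feature, aggregate a decision) can be sketched end to end as follows. This is a hypothetical illustration only: decoding is assumed to have already produced the frame arrays (e.g. via a video library, not shown), and the standard-deviation "noise feature" and voting rule are simplified placeholders, not the patent's actual feature extraction or classification model.

```python
import numpy as np

def detect_liveness(frames, threshold=0.5):
    """Hypothetical sketch of the patented flow: frame differences ->
    per-difference noise feature -> per-difference live/non-live vote ->
    aggregated result. Feature and vote rule are illustrative only."""
    signed = [f.astype(np.int16) for f in frames]          # avoid uint8 wrap-around
    diffs = [b - a for a, b in zip(signed, signed[1:])]
    # Placeholder noise feature: spread of each frame difference. Natural
    # sensor noise keeps this nonzero; a frozen or re-pasted (edited)
    # region tends to produce flat, near-zero differences.
    votes = [float(np.std(d)) > 0.0 for d in diffs]
    live_score = sum(votes) / len(votes)
    return "pass" if live_score >= threshold else "fail"

rng = np.random.default_rng(0)
base = rng.integers(0, 255, size=(8, 8))
noisy = [(base + rng.integers(0, 3, size=(8, 8))).astype(np.uint8) for _ in range(4)]
static = [base.astype(np.uint8)] * 4   # identical frames, as in a frozen clip
print(detect_liveness(noisy))   # "pass"
print(detect_liveness(static))  # "fail"
```

The 0.5 threshold is an assumed default; the patent leaves the preset score threshold unspecified.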
The step of acquiring frame difference data corresponding to the video data to be detected comprises:
step S11, acquiring continuous frame data corresponding to the video data to be detected;
in this embodiment, it should be noted that the continuous frame data is an image set on each continuous time frame corresponding to the video data to be detected, and the continuous frame data at least includes a continuous frame image, where the continuous frame image is an image on a time frame in the video data to be detected.
Step S12, calculating a frame difference between each of the consecutive frame images in the consecutive frame data, and obtaining the frame difference data.
In this embodiment, the frame difference between each pair of consecutive frame images in the continuous frame data is calculated to obtain the frame difference data. Specifically, based on the time order of the time frames corresponding to the continuous frame images, the frame difference between the continuous frame images of every two adjacent time frames is calculated. For example, if the continuous frame images are (f1, f2, …, fN), then each frame difference is dN−1 = fN − fN−1, and the frame difference data is (d1, d2, …, dN−1).
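The differencing step above can be sketched in a few lines of NumPy; the cast to a signed dtype is needed because typical uint8 frame data would otherwise wrap around on negative differences:

```python
import numpy as np

def frame_differences(frames):
    """Compute d_k = f_{k+1} - f_k for k = 1..N-1. Cast to a signed dtype
    first so negative differences are not wrapped by uint8 arithmetic."""
    signed = [f.astype(np.int16) for f in frames]
    return [signed[k + 1] - signed[k] for k in range(len(signed) - 1)]

frames = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 12, 11)]
diffs = frame_differences(frames)
print(len(diffs))      # N - 1 = 2 difference matrices
print(diffs[1][0, 0])  # 11 - 12 = -1, preserved by the signed cast
```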
Wherein the frame difference data comprises at least a frame difference image matrix, and the continuous frame images comprise a first continuous frame image and a second continuous frame image,
the step of calculating the frame difference between each two consecutive frames of images in the consecutive frame data to obtain the frame difference data includes:
step S121, acquiring a first pixel matrix corresponding to the first continuous frame image and a second pixel matrix corresponding to the second continuous frame image;
in this embodiment, it should be noted that the first continuous frame image and the second continuous frame image are images in adjacent time frames, and the time frame corresponding to the first continuous frame image is before the time frame corresponding to the second continuous frame image, the first pixel matrix is an image representation matrix formed by pixel values of each pixel in the first continuous frame image, and the second pixel matrix is an image representation matrix formed by pixel values of each pixel in the second continuous frame image.
Step S122, performing subtraction on the first pixel matrix and the second pixel matrix to obtain the frame difference image matrix.
In this embodiment, the first pixel matrix and the second pixel matrix are subtracted to obtain the frame difference image matrix, and specifically, the first pixel matrix is subtracted from the second pixel matrix to obtain the frame difference image matrix, where the frame difference image matrix is an image representation matrix of a frame difference image between consecutive frame images corresponding to the second pixel matrix and consecutive frame images corresponding to the first pixel matrix.
Wherein the frame difference noise characteristic data comprises spatial domain frame difference noise characteristic data,
the step of determining the frame difference noise characteristic data corresponding to the frame difference data comprises:
and A10, filtering the frame difference data to amplify the noise signal in the frame difference data and obtain spatial domain frame difference noise characteristic data.
In this embodiment, it should be noted that if there were no noise signal in the continuous frame images, most pixel values in the frame difference image matrix of the frame difference image would be 0. A naturally shot video always contains a noise signal, and that noise is distributed throughout the frame difference image. In a manually edited image, however, some regions have been edited, so the noise signal in the edited regions disappears; consequently, the noise signal is absent from those regions of the frame difference image, and the corresponding pixel values of the frame difference image matrix are 0. The spatial domain frame difference noise feature data is the frame difference data with its noise signal amplified, that is, the frame difference image matrix with amplified pixel values.
The frame difference data is filtered to amplify the noise signal in it and obtain the spatial domain frame difference noise feature data. Specifically, each frame difference image matrix in the frame difference data is convolved with a preset filtering kernel to amplify the pixel values of the matrix, yielding a filtering convolution processing matrix for each frame difference image matrix; each filtering convolution processing matrix is then taken as the spatial domain frame difference noise feature data. The preset filtering kernel is a convolution kernel used for the convolution processing and includes a second order difference matrix, an edge difference matrix, a square difference matrix, and the like, where FIG. 2 is a schematic diagram of the second order difference matrix, FIG. 3 of the edge difference matrix, and FIG. 4 of the square difference matrix.
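Since the exact kernels of FIGS. 2-4 are not reproduced in this text, the sketch below substitutes a common discrete-Laplacian second-order difference kernel as an assumed stand-in. It shows the behaviour the paragraph describes: a constant (noise-free, e.g. edited) region yields zero response, while an isolated noise pixel is amplified into a strong residue:

```python
import numpy as np

# Assumed stand-in for the patent's "second order difference matrix"
# (FIG. 2 is not available here); a discrete Laplacian responds only
# to local variation, so flat regions map to zero.
SECOND_ORDER_KERNEL = np.array([[0,  1, 0],
                                [1, -4, 1],
                                [0,  1, 0]], dtype=np.float64)

def filter_frame_diff(diff, kernel=SECOND_ORDER_KERNEL):
    """Valid-mode 2-D correlation of a frame-difference matrix with a
    high-pass kernel, amplifying the noise residue."""
    kh, kw = kernel.shape
    h, w = diff.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(diff[i:i + kh, j:j + kw] * kernel)
    return out

flat = np.zeros((5, 5))          # a noise-free (e.g. edited) region
impulse = flat.copy()
impulse[2, 2] = 1.0              # a single noise pixel
print(filter_frame_diff(flat)[1, 1])     # 0.0: no residue in the flat region
print(filter_frame_diff(impulse)[1, 1])  # -4.0: the noise pixel is amplified
```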
Wherein the frame difference noise characteristic data comprises frequency domain frame difference noise characteristic data,
the step of determining the frame difference noise characteristic data corresponding to the frame difference data comprises:
and B10, performing Fourier transform on the frame difference data to transform the frame difference data from a spatial domain to a frequency domain, and obtaining the frequency domain frame difference noise characteristic data corresponding to the frame difference data.
In this embodiment, it should be noted that the frame difference map is an image in a spatial domain, and the frame difference map includes spatial information of the image, where the spatial information reflects characteristics of a position, a shape, a size, and the like of an object in the image, and the frequency-domain frame difference noise feature data is a frame difference spectrogram corresponding to the frame difference map, where the frame difference spectrogram is an image corresponding to the frame difference map in a frequency domain and is used to represent an image frequency of the frame difference map, where the image frequency is a severity of a pixel value change of the image, that is, a gradient of the image.
Additionally, it should be noted that the main component of the image is low-frequency information, which forms the basic gray scale of the image and has little determining effect on the image structure, the medium-frequency information determines the basic structure of the image and forms the main edge structure of the image, the high-frequency information forms the edge and detail of the image, and the noise signal is usually middle-high frequency information in the image.
Additionally, it should be noted that the noise signal and the low frequency signal in the frame difference spectrogram of a naturally shot video should be distributed uniformly over every region of the spectrogram, while in the frame difference spectrogram of a manually edited video the noise signal has been eliminated by the editing, which appears as the disappearance of medium- and high-frequency information in part of the spectrogram.
Performing fourier transform on the frame difference data to transform the frame difference data from a spatial domain to a frequency domain, so as to obtain frequency domain frame difference noise characteristic data corresponding to the frame difference data, specifically, performing fourier transform on each frame difference map to transform each frame difference map from the spatial domain to the frequency domain, so as to obtain a frame difference spectrogram corresponding to each frame difference map, and further taking each frame difference spectrogram as the frequency domain frame difference noise characteristic data.
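This transform step can be sketched with NumPy's 2-D FFT. The log-magnitude rendering below is one common way to produce a "frame difference spectrogram" for inspection; the patent does not fix a particular representation, so the shifting and log compression are assumed conveniences:

```python
import numpy as np

def frame_diff_spectrum(diff):
    """Map a frame-difference image from the spatial domain to the frequency
    domain. fftshift centres the DC (low-frequency) component; log1p
    compresses the dynamic range of the magnitudes for inspection."""
    spectrum = np.fft.fftshift(np.fft.fft2(diff))
    return np.log1p(np.abs(spectrum))

rng = np.random.default_rng(0)
diff = rng.normal(size=(8, 8))   # sensor-like noise in a frame difference
spec = frame_diff_spectrum(diff)
print(spec.shape)                # (8, 8): same grid, now in the frequency domain
```

For natural noise like this, the energy spreads across all frequencies; an edited region with its noise removed would instead leave low-energy gaps in the medium and high frequencies, as the paragraph above describes.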
And step S20, performing living body detection on the video data to be detected based on the frame difference noise characteristic data to obtain a living body detection result.
In this embodiment, it should be noted that the frame difference noise feature data includes a frame difference noise feature corresponding to each frame difference map in the frame difference data, where the frame difference noise feature includes a frame difference spectrogram and a filtering convolution processing matrix.
Living body detection is performed on the video data to be detected based on the frame difference noise feature data to obtain a living body detection result. Specifically, living body detection is performed on the video data to be detected based on the frame difference noise feature of each frame difference map, yielding a living body detection sub-result for each frame difference map, and the living body detection result is then generated from the sub-results. In one implementable scheme, the number of sub-results judged to be a living body is counted, and the ratio of this count to the number of frame difference maps is calculated as a living body score; if the living body score is greater than or equal to a preset living body score threshold, the living body detection result is a pass, and if it is below the threshold, the result is a fail. It should be noted that in the embodiment of the present application, living body detection is performed based on the frame difference noise features, so a manually edited video can be distinguished from a naturally shot video: the sub-result for each frame difference noise feature of a manually edited video will, with high probability, be judged non-living. Thus even if a malicious attacker attacks the face recognition system with forged video data, the forged video data can be identified and the face in it judged non-living, which improves the accuracy of living body detection and, in turn, of face recognition.
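The aggregation rule described in this scheme (count live votes over the frame difference maps, compare the ratio to a threshold) is simple enough to sketch directly; the 0.5 default is an assumption, since the patent leaves the preset score threshold unspecified:

```python
def liveness_result(sub_results, ratio_threshold=0.5):
    """Aggregate per-frame-difference sub-results (True = judged live):
    pass when the live ratio meets the preset threshold, else fail."""
    live_score = sum(sub_results) / len(sub_results)
    return "pass" if live_score >= ratio_threshold else "fail"

print(liveness_result([True, True, False]))   # 2/3 >= 0.5 -> "pass"
print(liveness_result([False, False, True]))  # 1/3 <  0.5 -> "fail"
```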
The embodiment of the application provides a living body detection method. Compared with the prior-art technique of performing living body detection based on the video feature information of video data, this embodiment first obtains frame difference data corresponding to the video data to be detected and then extracts frame difference noise feature data corresponding to the frame difference data. Since the frame difference noise feature data of manually edited, forged video data differs from that of naturally shot video data, performing living body detection on the video data to be detected based on the frame difference noise feature data makes it possible to identify whether the corresponding video data is forged, yielding a more accurate living body detection result. This overcomes the technical defect in the prior art that the video feature information of video data can be forged, which makes living body detection based on such information inaccurate, and thus improves the accuracy of living body detection.
Further, referring to fig. 5, based on the first embodiment in the present application, in another embodiment of the present application, the step of performing live detection on the video data to be detected based on the frame difference noise characteristic data to obtain a live detection result includes:
step S21, inputting the frame difference noise characteristic data into a preset classification model, classifying each frame difference noise characteristic in the frame difference noise characteristic data, and obtaining a frame difference noise characteristic classification result;
in this embodiment, it should be noted that the preset classification model is a trained neural network model configured to classify frame difference noise features, where the types of frame difference noise features include a living body frame difference noise feature type and a non-living body frame difference noise feature type. A frame difference noise feature includes a frame difference spectrogram and a filtering convolution processing matrix, and can be represented by a coding matrix; the coding matrix corresponding to a frame difference noise feature is a frame difference noise feature representation matrix.
The frame difference noise characteristic data is input into the preset classification model, and each frame difference noise feature in the frame difference noise characteristic data is classified to obtain the frame difference noise feature classification result. Specifically, the frame difference noise feature representation matrix corresponding to the frame difference noise feature of each frame difference map is input into the preset classification model; feature extraction is performed on each representation matrix to obtain a corresponding feature extraction matrix; full connection is then performed on each feature extraction matrix to obtain a classification label vector corresponding to each frame difference noise feature; and based on the classification label in each classification label vector, each frame difference noise feature is judged to be of the living body frame difference noise feature type or the non-living body frame difference noise feature type, yielding the frame difference noise feature classification result. The living body frame difference noise feature type indicates that the frame difference map corresponding to the frame difference noise feature corresponds to a living body, and the non-living body frame difference noise feature type indicates that the corresponding frame difference map corresponds to a non-living body.
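The classification step just described — feature extraction followed by a fully connected layer producing a classification label vector — can be sketched as follows. The matrix shapes and the random weights are assumptions for illustration only; they do not stand in for the trained preset classification model of the application.

```python
import numpy as np

# Hedged sketch of the classification step: each frame difference noise
# feature representation matrix goes through feature extraction, then a
# fully connected layer yielding a two-entry classification label vector
# (living body vs. non-living body). Weights are random stand-ins.

rng = np.random.default_rng(0)
W_feat = rng.standard_normal((16, 8))  # feature-extraction weights (assumed shape)
W_fc = rng.standard_normal((8, 2))     # full connection -> 2 class labels

def classify_feature(repr_matrix):
    """repr_matrix: (n, 16) frame difference noise feature representation matrix."""
    feat = np.maximum(repr_matrix @ W_feat, 0.0)  # feature extraction matrix (ReLU)
    logits = feat.mean(axis=0) @ W_fc             # classification label vector
    return "living" if logits[0] >= logits[1] else "non-living"

result = classify_feature(rng.standard_normal((4, 16)))
print(result)  # one of "living" / "non-living"
```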
Additionally, it should be noted that, when the preset classification model is trained, naturally shot face video data and manually edited face video data may be obtained separately. The frame difference noise features corresponding to the naturally shot face video data are used as the first type of training data and given a first classification label corresponding to the living body frame difference noise feature type; the frame difference noise features corresponding to the manually edited face video data are used as the second type of training data and given a second classification label corresponding to the non-living body frame difference noise feature type. The preset classification model may then be trained based on the first type of training data, the first classification label, the second type of training data, and the second classification label.
And step S22, performing living body detection on the video data to be detected based on the frame difference noise characteristic classification result to obtain the living body detection result.
In this embodiment, living body detection is performed on the video data to be detected based on the frame difference noise feature classification result to obtain the living body detection result. Specifically, based on the frame difference noise feature classification result, the number of frame difference noise features belonging to the living body frame difference noise feature type is counted, and based on this living body frame difference noise feature quantity, it is determined whether the human face in the video data to be detected corresponds to a living body, so as to obtain the living body detection result.
Wherein the frame difference noise feature classification result comprises a live frame difference noise feature,
the step of performing in-vivo detection on the video data to be detected based on the frame difference noise feature classification result to obtain the in-vivo detection result comprises the following steps:
step S221, calculating the feature quantity ratio of the living body frame difference noise features in the frame difference noise feature classification result;
in this embodiment, it should be noted that the living body frame difference noise feature is a frame difference noise feature belonging to the living body frame difference noise feature type, and the frame difference noise feature classification result includes a living body frame difference noise feature and a non-living body frame difference noise feature, where the non-living body frame difference noise feature is a frame difference noise feature belonging to the non-living body frame difference noise feature type.
The feature quantity ratio of the living body frame difference noise features in the frame difference noise feature classification result is calculated. Specifically, the living body frame difference noise features in the frame difference noise feature classification result are counted to obtain the living body frame difference noise feature quantity, and the proportion of that quantity among all frame difference noise features is then calculated to obtain the feature quantity ratio.
Step S222, comparing the characteristic quantity ratio with a preset quantity ratio threshold, and if the characteristic quantity ratio is greater than or equal to the preset quantity ratio threshold, judging that the living body detection result is passed;
in this embodiment, the feature quantity ratio is compared with a preset quantity ratio threshold, and if the feature quantity ratio is greater than or equal to the preset quantity ratio threshold, the living body detection result is judged to be passed. Specifically, if the feature quantity ratio is greater than or equal to the preset quantity ratio threshold, the frame difference noise features of the time frames in the video data to be detected support the conclusion that the video data is not manually edited video data; the face in the video data to be detected is therefore a living body face, and the living body detection result is judged to be passed.
In step S223, if the ratio of the feature quantity is smaller than the preset quantity ratio threshold, it is determined that the living body detection result does not pass.
In this embodiment, if the feature quantity ratio is smaller than the preset quantity ratio threshold, the living body detection result is judged to be failed. Specifically, if the feature quantity ratio is smaller than the preset quantity ratio threshold, the frame difference noise features of the time frames in the video data to be detected support the conclusion that the video data is manually edited video data; the face in the video data to be detected is therefore not a living body face, and the living body detection result is judged to be failed.
Additionally, it should be noted that the living body detection method can be combined with an existing living body detection method and applied to a face recognition system. Existing living body detection methods include the action living body, the digital living body, the flash living body and the like, and these methods also need to collect user face video data. Based on the living body detection method in the embodiment of the present application, existing face recognition systems using the action living body, the digital living body, the flash living body and the like can therefore be upgraded directly, with no additional hardware or user learning cost. Because the present application can effectively prevent malicious participants from attacking the face recognition system by manually editing video data, the security of the face recognition system is improved.
The embodiment of the present application provides a method for performing living body detection based on the frame difference noise features of the video data to be detected and a preset classification model. Compared with the technical means in the prior art of performing living body detection based on the video characteristic information of video data, after the frame difference noise characteristic data of the video data to be detected is collected, the frame difference noise features in the frame difference noise characteristic data are classified based on the preset classification model to obtain a frame difference noise feature classification result, and the feature quantity ratio of living body frame difference noise features in that classification result is then determined. The higher the feature quantity ratio, the higher the probability that the video data to be detected is not forged; it can thus be determined whether the video data to be detected is forged, so that living body detection of the video data can be realized and the living body detection result obtained. This overcomes the technical defect in the prior art that the video characteristic information of video data can be forged, which makes living body detection based on such information inaccurate, and the accuracy of living body detection is thereby improved.
Referring to fig. 6, fig. 6 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 6, the living body detecting apparatus may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the living body detection device may further include a user interface, a network interface, a camera, RF (Radio Frequency) circuitry, sensors, audio circuitry, a WiFi module, and the like. The user interface may comprise a display screen (Display) and an input sub-module such as a keyboard (Keyboard), and may optionally also comprise a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
Those skilled in the art will appreciate that the living body detection device configuration shown in FIG. 6 does not constitute a limitation of the living body detection device, which may include more or fewer components than shown, combine some components, or arrange the components differently.
As shown in fig. 6, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, and a living body detection program. The operating system is a program that manages and controls the hardware and software resources of the liveness detection device, supporting the operation of the liveness detection program as well as other software and/or programs. The network communication module is used to enable communication between the various components within the memory 1005, as well as with other hardware and software in the liveness detection system.
In the living body detecting apparatus shown in fig. 6, the processor 1001 is configured to execute a living body detecting program stored in the memory 1005, and implement the steps of the living body detecting method described in any one of the above.
The specific implementation of the living body detection device of the present application is substantially the same as that of the embodiments of the living body detection method described above, and is not repeated here.
The embodiment of the present application further provides a living body detection device, the living body detection device being applied to living body detection equipment, and the living body detection device comprising:
the extraction module is used for acquiring frame difference data corresponding to video data to be detected and determining frame difference noise characteristic data corresponding to the frame difference data;
and the living body detection module is used for carrying out living body detection on the video data to be detected based on the frame difference noise characteristic data to obtain a living body detection result.
Optionally, the extraction module comprises:
and the filtering unit is used for filtering the frame difference data so as to amplify the noise signals in the frame difference data and obtain space domain frame difference noise characteristic data.
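The filtering unit's noise amplification can be illustrated with a simple high-pass filter. This is a sketch only: the specific 3×3 Laplacian-style kernel is an assumption, since the application does not fix a particular filter.

```python
import numpy as np

# Illustrative sketch: a high-pass kernel suppresses smooth content in a
# frame difference map and leaves the residual noise signal, which is the
# space domain frame difference noise characteristic data described above.

KERNEL = np.array([[0, -1,  0],
                   [-1, 4, -1],
                   [0, -1,  0]], dtype=float)

def highpass(frame_diff):
    """Apply the 3x3 kernel to a 2-D frame difference map (valid region only)."""
    h, w = frame_diff.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(frame_diff[i:i + 3, j:j + 3] * KERNEL)
    return out

smooth = np.ones((5, 5))                 # a perfectly smooth patch...
print(np.allclose(highpass(smooth), 0))  # ...gives zero response: True
```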
Optionally, the extraction module further comprises:
and the transformation module is used for carrying out Fourier transformation on the frame difference data so as to transform the frame difference data from a spatial domain to a frequency domain and obtain the frequency domain frame difference noise characteristic data corresponding to the frame difference data.
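The transformation module's spatial-to-frequency conversion can be sketched with a 2-D discrete Fourier transform; `numpy.fft` is used purely for illustration and is not prescribed by the application.

```python
import numpy as np

# Sketch of the transformation module: a 2-D Fourier transform moves the
# frame difference data from the spatial domain to the frequency domain,
# yielding the frequency domain frame difference noise characteristic data.

def frame_diff_spectrum(frame_diff):
    """Return the magnitude spectrum with low frequencies centered."""
    spectrum = np.fft.fft2(frame_diff)
    return np.abs(np.fft.fftshift(spectrum))

diff = np.zeros((8, 8))
diff[3:5, 3:5] = 1.0               # a small spatial blob in the frame difference
mag = frame_diff_spectrum(diff)
print(mag.shape)                   # (8, 8): one coefficient per pixel
```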
Optionally, the liveness detection module comprises:
the classification unit is used for inputting the frame difference noise characteristic data into a preset classification model, classifying each frame difference noise characteristic in the frame difference noise characteristic data and obtaining a frame difference noise characteristic classification result;
and the living body detection unit is used for carrying out living body detection on the video data to be detected based on the frame difference noise characteristic classification result to obtain a living body detection result.
Optionally, the living body detecting unit includes:
the first calculating subunit is used for calculating the feature quantity ratio of the living body frame difference noise features in the frame difference noise feature classification result;
the first judgment subunit is configured to compare the feature quantity ratio with a preset quantity ratio threshold, and if the feature quantity ratio is greater than or equal to the preset quantity ratio threshold, judge that the in-vivo detection result passes;
and the second judging subunit is configured to judge that the in-vivo detection result is failed if the feature quantity ratio is smaller than the preset quantity ratio threshold.
Optionally, the extraction module further comprises:
the acquisition unit is used for acquiring continuous frame data corresponding to the video data to be detected;
and the calculating unit is used for calculating the frame difference between each two continuous frame images in the continuous frame data to obtain the frame difference data.
Optionally, the computing unit comprises:
an obtaining subunit, configured to obtain a first pixel matrix corresponding to the first continuous frame image and a second pixel matrix corresponding to the second continuous frame image;
and the second calculating subunit is configured to perform subtraction on the first pixel matrix and the second pixel matrix to obtain the frame difference image matrix.
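The subtraction performed by the second calculating subunit can be sketched as follows; grayscale frames are assumed for simplicity, and the pixel values are widened to a signed type so that unsigned wrap-around does not corrupt negative differences.

```python
import numpy as np

# Sketch of the calculating unit: the frame difference image matrix is the
# element-wise subtraction of the pixel matrices of two consecutive frames.

def frame_difference(first_frame, second_frame):
    """Subtract the pixel matrices of two consecutive frame images."""
    a = np.asarray(first_frame, dtype=np.int16)   # widen to avoid uint8 wrap-around
    b = np.asarray(second_frame, dtype=np.int16)
    return b - a                                   # frame difference image matrix

f1 = np.array([[10, 20], [30, 40]], dtype=np.uint8)
f2 = np.array([[12, 18], [30, 45]], dtype=np.uint8)
print(frame_difference(f1, f2))   # [[ 2 -2]
                                  #  [ 0  5]]
```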
The specific implementation of the living body detection device of the present application is substantially the same as that of the embodiments of the living body detection method described above, and is not repeated here.
The embodiment of the application provides a readable storage medium, and the readable storage medium stores one or more programs, which can be executed by one or more processors for implementing the steps of the living body detection method described in any one of the above.
The specific implementation of the readable storage medium of the present application is substantially the same as that of the embodiments of the living body detection method described above, and is not repeated here.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. A living body detection method, the method comprising:
acquiring frame difference data corresponding to video data to be detected, and determining frame difference noise characteristic data corresponding to the frame difference data;
and performing living body detection on the video data to be detected based on the frame difference noise characteristic data to obtain a living body detection result.
2. The living body detection method of claim 1, wherein the frame difference noise characteristic data comprises spatial domain frame difference noise characteristic data,
the step of determining the frame difference noise characteristic data corresponding to the frame difference data comprises:
and filtering the frame difference data to amplify the noise signals in the frame difference data to obtain space domain frame difference noise characteristic data.
3. The living body detection method of claim 1, wherein the frame difference noise characteristic data comprises frequency domain frame difference noise characteristic data,
the step of determining the frame difference noise characteristic data corresponding to the frame difference data comprises:
and performing Fourier transform on the frame difference data to transform the frame difference data from a spatial domain to a frequency domain, so as to obtain the frequency domain frame difference noise characteristic data corresponding to the frame difference data.
4. The living body detection method of claim 1, wherein the step of performing living body detection on the video data to be detected based on the frame difference noise characteristic data to obtain a living body detection result comprises:
inputting the frame difference noise characteristic data into a preset classification model, classifying each frame difference noise characteristic in the frame difference noise characteristic data, and obtaining a frame difference noise characteristic classification result;
and performing living body detection on the video data to be detected based on the frame difference noise characteristic classification result to obtain a living body detection result.
5. The living body detection method of claim 4, wherein the frame difference noise feature classification result includes a living body frame difference noise feature,
the step of performing living body detection on the video data to be detected based on the frame difference noise feature classification result to obtain the living body detection result comprises:
calculating the feature quantity ratio of the living body frame difference noise features in the frame difference noise feature classification result;
comparing the feature quantity ratio with a preset quantity ratio threshold, and if the feature quantity ratio is greater than or equal to the preset quantity ratio threshold, judging that the living body detection result is passed;
and if the feature quantity ratio is smaller than the preset quantity ratio threshold, judging that the living body detection result is failed.
6. The living body detection method of claim 1, wherein the step of acquiring frame difference data corresponding to the video data to be detected comprises:
acquiring continuous frame data corresponding to the video data to be detected;
and calculating the frame difference between each two continuous frame images in the continuous frame data to obtain the frame difference data.
7. The living body detection method of claim 6, wherein the frame difference data comprises at least a frame difference image matrix, and each two consecutive frame images comprise a first consecutive frame image and a second consecutive frame image,
the step of calculating the frame difference between each two consecutive frames of images in the consecutive frame data to obtain the frame difference data includes:
acquiring a first pixel matrix corresponding to the first continuous frame image and a second pixel matrix corresponding to the second continuous frame image;
and carrying out subtraction operation on the first pixel matrix and the second pixel matrix to obtain the frame difference image matrix.
8. A living body detecting device, characterized in that the living body detecting device comprises:
the extraction module is used for acquiring frame difference data corresponding to video data to be detected and determining frame difference noise characteristic data corresponding to the frame difference data;
and the living body detection module is used for carrying out living body detection on the video data to be detected based on the frame difference noise characteristic data to obtain a living body detection result.
9. A living body detection equipment, the living body detection equipment comprising: a memory, a processor, and a program stored on the memory for implementing the living body detection method,
the memory being used for storing the program for implementing the living body detection method;
the processor being configured to execute the program for implementing the living body detection method, so as to implement the steps of the living body detection method according to any one of claims 1 to 7.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a program for implementing a living body detection method, the program being executed by a processor to implement the steps of the living body detection method according to any one of claims 1 to 7.
CN202010866143.6A 2020-08-25 2020-08-25 Living body detection method, living body detection device, living body detection equipment and readable storage medium Pending CN111985423A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010866143.6A CN111985423A (en) 2020-08-25 2020-08-25 Living body detection method, living body detection device, living body detection equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010866143.6A CN111985423A (en) 2020-08-25 2020-08-25 Living body detection method, living body detection device, living body detection equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN111985423A true CN111985423A (en) 2020-11-24

Family

ID=73443908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010866143.6A Pending CN111985423A (en) 2020-08-25 2020-08-25 Living body detection method, living body detection device, living body detection equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111985423A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4080470A3 (en) * 2021-07-08 2022-12-14 Beijing Baidu Netcom Science Technology Co., Ltd. Method and apparatus for detecting living face


Similar Documents

Publication Publication Date Title
CN108229526B (en) Network training method, network training device, image processing method, image processing device, storage medium and electronic equipment
CN108875676B (en) Living body detection method, device and system
CN111369545B (en) Edge defect detection method, device, model, equipment and readable storage medium
Raja et al. Video presentation attack detection in visible spectrum iris recognition using magnified phase information
Rosin Thresholding for change detection
CN112561080B (en) Sample screening method, sample screening device and terminal equipment
CN111985427A (en) Living body detection method, living body detection apparatus, and readable storage medium
CN110827249A (en) Electronic equipment backboard appearance flaw detection method and equipment
CN112784835B (en) Method and device for identifying authenticity of circular seal, electronic equipment and storage medium
CN111027450A (en) Bank card information identification method and device, computer equipment and storage medium
CN111192241B (en) Quality evaluation method and device for face image and computer storage medium
CN110827246A (en) Electronic equipment frame appearance flaw detection method and equipment
CN112001362A (en) Image analysis method, image analysis device and image analysis system
CN114445768A (en) Target identification method and device, electronic equipment and storage medium
CN111080665A (en) Image frame identification method, device and equipment and computer storage medium
CN111985423A (en) Living body detection method, living body detection device, living body detection equipment and readable storage medium
CN112287905A (en) Vehicle damage identification method, device, equipment and storage medium
CN116152191A (en) Display screen crack defect detection method, device and equipment based on deep learning
CN111860261B (en) Passenger flow value statistical method, device, equipment and medium
CN111178340B (en) Image recognition method and training method of image recognition model
CN113095272A (en) Living body detection method, living body detection apparatus, living body detection medium, and computer program product
CN112907206A (en) Service auditing method, device and equipment based on video object identification
CN113420716B (en) Illegal behavior identification and early warning method based on improved Yolov3 algorithm
CN113709563B (en) Video cover selecting method and device, storage medium and electronic equipment
CN113537199B (en) Image boundary box screening method, system, electronic device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination