CN111985427A - Living body detection method, living body detection apparatus, and readable storage medium

Info

Publication number
CN111985427A
CN111985427A (application CN202010876192.8A)
Authority
CN
China
Prior art keywords
data
face
living body detection
image
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010876192.8A
Other languages
Chinese (zh)
Inventor
谭圣琦
吴泽衡
徐倩
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Application filed by WeBank Co Ltd
Priority to CN202010876192.8A
Publication of CN111985427A
Legal status: Pending


Classifications

    • G06V 40/45 Detection of the body part being alive (under G06V 40/40, Spoof detection, e.g. liveness detection)
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/269 Analysis of motion using gradient-based methods
    • G06V 40/172 Human faces: classification, e.g. identification
    • G06T 2207/10016 Video; image sequence
    • G06T 2207/30201 Face (subject of image: human being / person)


Abstract

The application discloses a living body detection method, a living body detection device, and a readable storage medium. The living body detection method includes: acquiring face data to be detected and the shooting equipment motion information corresponding to the face data to be detected; determining the face region data and the corresponding background region data for each time frame image in the face data to be detected; performing face three-dimensional reconstruction on the face region data to obtain face three-dimensional point cloud data; performing living body detection on the face data to be detected based on the face three-dimensional point cloud data to obtain a first living body detection result; calculating the optical flow information corresponding to the background region data and performing living body detection on the face data to be detected based on the shooting equipment motion information and the optical flow information to obtain a second living body detection result; and determining a target living body detection result based on the first living body detection result and the second living body detection result. The method and the device solve the technical problem of the low security of face recognition systems.

Description

Living body detection method, living body detection apparatus, and readable storage medium
Technical Field
The present application relates to the field of artificial intelligence in financial technology (Fintech), and more particularly, to a living body detection method, a living body detection device, and a readable storage medium.
Background
With the continuous development of financial technologies, especially internet technology and finance, more and more technologies (such as distributed computing, blockchain, and artificial intelligence) are applied in the financial field; at the same time, the financial industry also places higher requirements on these technologies.
With the continuous development of computer software and artificial intelligence, the application fields of artificial intelligence are increasingly extensive; for example, artificial intelligence is often applied to face recognition. Because face data is easily obtained, living body detection is usually required during face recognition to ensure its accuracy. At present, living body detection is usually performed by analyzing the motion changes of a face in a face video or image sequence (action liveness), voice information together with mouth motion changes (digital liveness), or the three-dimensional structure information of the face; that is, living body detection is performed based on the surface-layer features of a face image sequence or face video. However, as video and image editing technologies develop, if a malicious attacker attacks a face recognition system with a forged face image sequence or forged face video, current face recognition systems find it increasingly difficult to resist such illegal attacks, and their security is therefore difficult to guarantee.
Disclosure of Invention
The main purpose of the present application is to provide a living body detection method, a living body detection device, and a readable storage medium, aiming to solve the technical problem in the prior art that face recognition systems have low security.
To achieve the above object, the present application provides a living body detection method applied to a living body detection device, the living body detection method including:
acquiring face data to be detected and shooting equipment motion information corresponding to the face data to be detected, and determining face area data corresponding to each time frame image in the face data to be detected and corresponding background area data;
performing human face three-dimensional reconstruction on the human face region data to obtain human face three-dimensional point cloud data;
performing living body detection on the face data to be detected based on the face three-dimensional point cloud data to obtain a first living body detection result;
calculating optical flow information corresponding to the background area data, and performing living body detection on the face data to be detected based on the motion information of the shooting equipment and the optical flow information to obtain a second living body detection result;
determining a target living body detection result based on the first living body detection result and the second living body detection result.
The present application further provides a living body detection apparatus, where the living body detection apparatus is a virtual apparatus applied to a living body detection device, and the living body detection apparatus includes:
the first determining module is used for acquiring the face data to be detected and the shooting equipment motion information corresponding to the face data to be detected, and determining the face area data corresponding to each time frame image in the face data to be detected and the corresponding background area data;
the three-dimensional reconstruction module is used for performing human face three-dimensional reconstruction on the human face region data to obtain human face three-dimensional point cloud data;
the first living body detection module is used for carrying out living body detection on the human face data to be detected based on the human face three-dimensional point cloud data to obtain a first living body detection result;
the second living body detection module is used for calculating optical flow information corresponding to the background area data, and carrying out living body detection on the face data to be detected based on the shooting equipment motion information and the optical flow information to obtain a second living body detection result;
a second determining module for determining a target living body detection result based on the first living body detection result and the second living body detection result.
The application also provides a living body detection device, where the living body detection device is a physical device, and the living body detection device includes: a memory, a processor, and a program of the living body detection method stored on the memory and executable on the processor, where the program, when executed by the processor, implements the steps of the living body detection method described above.
The present application also provides a readable storage medium having stored thereon a program for implementing a living body detection method, the program for implementing the living body detection method implementing the steps of the living body detection method as described above when executed by a processor.
Compared with the technical means adopted in the prior art of performing living body detection based on the surface-layer features of a face image sequence or face video, the method, device, and readable storage medium of the present application first acquire the face data to be detected and the shooting equipment motion information corresponding to it, and determine the face region data and the corresponding background region data for each time frame image in the face data to be detected. Face three-dimensional reconstruction is then performed on the face region data to obtain face three-dimensional point cloud data. Because regular face three-dimensional features are difficult to extract from a forged face region image sequence, forged face image sequences can be accurately identified, preventing a malicious attacker from attacking the face recognition system with a forged face image sequence; living body detection is performed on the face data to be detected based on the face three-dimensional point cloud data to obtain a first living body detection result aimed at identifying forged face image sequences. The optical flow information corresponding to the background region data is then calculated. It should be noted that for a genuinely shot video the optical flow information should match the shooting equipment motion information, while for a forged face video it does not; living body detection is therefore performed on the face data to be detected based on the shooting equipment motion information and the optical flow information, so that forged face videos can be accurately identified and a malicious attacker prevented from attacking the face recognition system with a forged face video, obtaining a second living body detection result aimed at identifying forged face videos. By combining the first living body detection result and the second living body detection result, whether the face data to be detected corresponds to a living face can be judged, the target living body detection result obtained, and whether the face recognition system is under attack determined. This overcomes the technical defect in the prior art that living body detection based on the surface-layer features of a face image sequence or face video can hardly defend against a malicious attacker attacking the face recognition system with a forged face image sequence or face video, and thus improves the security of the face recognition system.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; for those skilled in the art, other drawings can also be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of the living body detection method of the present application;
FIG. 2 is a schematic diagram of a user shooting a face video to be detected in the living body detection method of the present application;
FIG. 3 is a schematic comparison diagram of the semantic image and the time frame image in the living body detection method of the present application;
FIG. 4 is a schematic flow chart of a second embodiment of the living body detection method of the present application;
FIG. 5 is a schematic diagram of the second-order difference matrix in the living body detection method of the present application;
FIG. 6 is a schematic diagram of the edge difference matrix in the living body detection method of the present application;
FIG. 7 is a schematic diagram of the square difference matrix in the living body detection method of the present application;
FIG. 8 is a schematic structural diagram of a device in the hardware operating environment according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the living body detection method of the present application, referring to FIG. 1, the living body detection method includes:
step S10, acquiring the face data to be detected and the shooting equipment motion information corresponding to the face data to be detected, and determining the face region data corresponding to each time frame image in the face data to be detected and the corresponding background region data;
In this embodiment, it should be noted that the face data to be detected is a face video to be detected. The living body detection method is applied to a face recognition system and is used to perform living body detection on the face video to be detected, so as to determine whether the face video to be detected corresponds to a living face. If the face video to be detected corresponds to a living face, this proves that the face video to be detected was shot of a living face; if not, this proves that the face video to be detected is a forged face video or face image sequence.
Additionally, it should be noted that the face video to be detected is a face video shot while moving, and the shooting equipment motion information is the motion trajectory data of the preset shooting equipment, used to represent the motion trajectory of the preset shooting equipment during the moving shot. FIG. 2 is a schematic diagram of a user shooting the face video to be detected, where the dotted arrow indicates that the direction of movement during the moving shot is horizontal, from left to right.
The face data to be detected and the shooting equipment motion information corresponding to it are acquired, and the face region data and the corresponding background region data for each time frame image in the face data to be detected are determined. Specifically, the preset shooting equipment shoots the face video to be detected while moving, and the gyroscope data corresponding to the preset shooting equipment is collected synchronously, where the gyroscope data is the acceleration data of the movement of the preset shooting equipment. The gyroscope data is then integrated to obtain the motion trajectory data of the preset shooting equipment, and this motion trajectory data is taken as the shooting equipment motion information; a sketch of the integration follows. Further, based on a preset image segmentation model, each time frame image corresponding to the face video to be detected is segmented into a face region and a background region, obtaining the face region data and the background region data.
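As an illustration only, a minimal sketch of recovering a motion trajectory by double-integrating the collected acceleration samples. The variable names (accel, dt), the evenly spaced samples, and the zero initial velocity are assumptions, and real preprocessing such as bias removal and axis alignment is not shown:

```python
import numpy as np

def trajectory_from_acceleration(accel: np.ndarray, dt: float) -> np.ndarray:
    """Double-integrate acceleration samples (N, 3) into positions (N, 3).

    accel: per-axis acceleration collected alongside the video;
    dt: sampling interval in seconds.
    """
    # First integration: acceleration -> velocity (trapezoid rule, cumulative).
    velocity = np.cumsum((accel[:-1] + accel[1:]) / 2.0 * dt, axis=0)
    velocity = np.vstack([np.zeros(3), velocity])   # device assumed to start at rest
    # Second integration: velocity -> position.
    position = np.cumsum((velocity[:-1] + velocity[1:]) / 2.0 * dt, axis=0)
    return np.vstack([np.zeros(3), position])       # trajectory starts at the origin
```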
The face region data at least includes the face region corresponding to each time frame image, and the background region data at least includes the background region corresponding to each time frame image. The step of determining the face region data corresponding to each time frame image in the face data to be detected and the corresponding background region data includes:
step S11, based on a preset image segmentation model, performing semantic segmentation on each time frame image, to obtain the face region corresponding to each time frame image and the corresponding background region.
In this embodiment, it should be noted that the preset image segmentation model is a pre-trained neural network model for performing semantic segmentation, and the time frame image is a constituent image of a time frame in the face video to be detected.
Semantic segmentation is performed on each time frame image based on the preset image segmentation model to obtain the face region and the corresponding background region of each time frame image. Specifically, the following steps are performed for each time frame image:
The time frame image is input into the preset image segmentation model and downsampled a preset number of times, where the downsampling methods include max pooling, average pooling, stochastic pooling, and sum-region pooling, to obtain a downsampled image. The downsampled image contains the high-level semantic information of the time frame image, where the high-level semantic information includes abstract feature information of the image subject, such as subject position information and subject contour information. The downsampled image is then upsampled, where the upsampling methods include bilinear interpolation, deconvolution, and unpooling, to gradually restore the spatial information of the downsampled image, that is, to gradually restore the position information of the pixels in the downsampled image, until the upsampled image matches the resolution of the input time frame image, yielding a semantic image that contains the face region and the background region. FIG. 3 shows a schematic comparison between the semantic image and the time frame image, where the left image is a time frame image, the right image is a semantic image, region 1 is the face region, and region 2 is the background region.
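As an illustration of the encoder-decoder structure just described, a minimal segmentation network in PyTorch that downsamples with max pooling and upsamples with bilinear interpolation back to the input resolution; the depth, channel widths, and two-class output (face vs. background) are assumptions, not the patent's disclosed model:

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Downsample with max pooling, then upsample bilinearly back to the
    input resolution, producing a per-pixel face/background score map."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(                 # downsampling path
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 1/2 resolution
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 1/4 resolution
        )
        self.decoder = nn.Sequential(                 # upsampling path
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(16, num_classes, 1),            # per-pixel class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# One 3x128x128 frame in, a 2x128x128 face/background logit map out.
logits = TinySegNet()(torch.randn(1, 3, 128, 128))
```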
Step S20, performing human face three-dimensional reconstruction on the human face region data to obtain human face three-dimensional point cloud data;
In this embodiment, it should be noted that the face three-dimensional point cloud data includes face three-dimensional sparse point cloud data and face three-dimensional dense point cloud data, where the sparse point cloud data is obtained by three-dimensional reconstruction based on the SfM (Structure from Motion) reconstruction technique, and the dense point cloud data is obtained by three-dimensional reconstruction based on the MVS (Multi-View Stereo) reconstruction technique.
Additionally, it should be noted that the time frame images form a time frame image sequence along the time axis, each time frame image being one frame of the face video to be detected. Because the face video to be detected is shot while moving, the time frame images are face images shot by the preset shooting equipment at different shooting angles, so the two-dimensional face feature points of the time frame images can be mapped to the same three-dimensional face feature points in three-dimensional space; the face three-dimensional point cloud data reconstructed from the time frame images should therefore conform to the three-dimensional contour features of the face in the face video to be detected. For a forged image sequence, because each face image is a face image collected by a malicious attacker through various channels, there is no such association between the face images, and the face three-dimensional point cloud data obtained by three-dimensional reconstruction from a forged image sequence usually does not conform to the three-dimensional contour features of a face.
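For illustration, a two-view fragment of an SfM-style sparse reconstruction using OpenCV, assuming matched face feature points pts1/pts2 from two time frame images and a known camera intrinsic matrix K (all hypothetical inputs). A full pipeline would chain such pairs across all frames and refine the result, and the MVS densification step is not shown:

```python
import cv2
import numpy as np

def two_view_sparse_points(pts1: np.ndarray, pts2: np.ndarray, K: np.ndarray):
    """Triangulate matched 2D face points from two frames into 3D points.

    pts1, pts2: (N, 2) float arrays of matched feature points; K: 3x3 intrinsics.
    """
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)      # relative camera motion
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
    P2 = K @ np.hstack([R, t])                          # second camera pose
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T                     # (N, 3) sparse point cloud
```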
Step S30, performing living body detection on the human face data to be detected based on the human face three-dimensional point cloud data to obtain a first living body detection result;
In this embodiment, living body detection is performed on the face data to be detected based on the face three-dimensional point cloud data to obtain a first living body detection result. Specifically, the face three-dimensional point cloud data is input into a preset point cloud data classification model and classified to obtain a point cloud data classification label; then, based on the point cloud data classification label, whether the face video to be detected corresponds to a living face is determined, obtaining the first living body detection result.
The step of performing living body detection on the face data to be detected based on the face three-dimensional point cloud data to obtain a first living body detection result includes the following steps:
step S31, classifying the human face three-dimensional point cloud data based on a preset point cloud data classification model to obtain a point cloud data classification result;
in this embodiment, the human face three-dimensional point cloud data is classified based on a preset point cloud data classification model to obtain a point cloud data classification result, specifically, a point cloud data feature representation matrix corresponding to the human face three-dimensional point cloud data is input into the preset point cloud data classification model, extracting the characteristic of the point cloud data characteristic representation matrix to obtain a point cloud data category characteristic representation matrix corresponding to the point cloud data characteristic representation matrix, wherein the point cloud data feature representation matrix is a coding matrix corresponding to the human face three-dimensional point cloud data and is used for representing the human face three-dimensional point cloud data through coding, and then carrying out full connection on the point cloud data category feature representation matrix to obtain a point cloud data classification label vector, and taking the point cloud data classification label vector as the point cloud data classification result.
And step S32, performing living body detection on the face data to be detected based on the point cloud data classification result to obtain a first living body detection result.
In this embodiment, living body detection is performed on the face data to be detected based on the point cloud data classification result to obtain the first living body detection result. Specifically, the classification label code and the first classification probability are extracted from the point cloud data classification label vector, where the classification label code is the identifier of the data category of the face three-dimensional point cloud data, and the first classification probability is the probability that the face three-dimensional point cloud data belongs to the data category corresponding to the classification label vector. Living body detection is then performed on the face video to be detected based on the classification label code and the first classification probability, generating a first living body score corresponding to the face video to be detected, and the first living body score is taken as the first living body detection result, where the first living body score represents the probability that the face video to be detected corresponds to a living face. For example, assuming the classification label code is 1, indicating that the face three-dimensional point cloud data belongs to the forged video data category, and the first classification probability is 40%, the first living body score is 60.
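A sketch of that scoring rule under the convention implied by the example (label code 1 means the forged-video category, scores on a 0-100 scale); the convention is inferred from the example, not stated as a general rule:

```python
def first_liveness_score(label_code: int, class_prob: float) -> float:
    """Map a (label, probability) pair to a 0-100 liveness score.

    label_code 1 means the point cloud was classified as forged, so the
    liveness score is the complement of the forgery probability.
    """
    prob_live = 1.0 - class_prob if label_code == 1 else class_prob
    return 100.0 * prob_live

score = first_liveness_score(1, 0.40)   # 60.0, matching the example above
```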
Step S40, calculating optical flow information corresponding to the background area data, and performing living body detection on the face data to be detected based on the shooting equipment motion information and the optical flow information to obtain a second living body detection result;
In this embodiment, it should be noted that the optical flow information is the information of the positional shifts of pixels across the time frame images and includes at least one optical flow value, where an optical flow value is the positional shift of a particular pixel in the x direction and the y direction from one time frame image to another. The optical flow includes sparse optical flow and dense optical flow, where the sparse optical flow contains the optical flow values of a preset number of pixels and the dense optical flow contains the optical flow value of every pixel in the time frame image; methods for calculating sparse optical flow include the Lucas-Kanade algorithm and the like, and methods for calculating dense optical flow include the Farneback algorithm and the like, as sketched below.
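For illustration, dense optical flow between consecutive grayscale frames via OpenCV's Farneback implementation, averaged over the background region; the background mask comes from the segmentation step, and the parameter values shown are common defaults rather than values given in the text:

```python
import cv2
import numpy as np

def background_flow(prev_frame, next_frame, background_mask):
    """Mean (dx, dy) of the dense optical flow over the background region.

    prev_frame, next_frame: grayscale uint8 images of the same size;
    background_mask: boolean (H, W) array, True where the segmentation
    model marked the pixel as background.
    """
    flow = cv2.calcOpticalFlowFarneback(
        prev_frame, next_frame, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
    )                                    # (H, W, 2) per-pixel (dx, dy)
    return flow[background_mask].mean(axis=0)
```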
In addition, it should be noted that because the face video to be detected is shot while moving, the preset shooting equipment traces a motion trajectory, and the background region pixels in each time frame image of the face video to be detected can be regarded as the background region pixels of the previous time frame image shifted by that motion; the movement trajectory of the background pixels should therefore be highly similar to the movement trajectory of the preset shooting equipment, that is, the shooting equipment motion information and the optical flow information should be highly similar. For a forged face video, because the video has been edited, the movement trajectory of the pixels across the time frame images deviates from the movement trajectory of the shooting equipment; that is, the optical flow information corresponding to a forged face video is not highly similar to the equipment motion information of the shooting equipment.
The optical flow information corresponding to the background region data is calculated, and living body detection is performed on the face data to be detected based on the shooting equipment motion information and the optical flow information to obtain a second living body detection result. Specifically, the optical flow information corresponding to the background region of each time frame image is calculated with a preset optical flow calculation method, where the preset optical flow calculation methods include the Lucas-Kanade algorithm, the Farneback algorithm, and the like. The optical flow information is then compared with the shooting equipment motion information to verify whether the motion trajectory of the pixels corresponding to the optical flow information is consistent with the motion trajectory of the preset shooting equipment, obtaining a comparison result, and living body detection is performed on the face video to be detected based on the comparison result to obtain the second living body detection result.
The step of performing living body detection on the face data to be detected based on the shooting equipment motion information and the optical flow information to obtain a second living body detection result comprises the following steps:
Step S41, calculating the information matching degree between the shooting equipment motion information and the optical flow information;
In this embodiment, the information matching degree between the shooting equipment motion information and the optical flow information is calculated. Specifically, a first motion trajectory corresponding to the shooting equipment motion information and a second motion trajectory corresponding to the optical flow information are generated, the trajectory similarity between the first motion trajectory and the second motion trajectory is calculated, and the trajectory similarity is taken as the information matching degree, as sketched below.
And step S42, comparing the information matching degree with a preset matching degree threshold value to perform living body detection on the face data to be detected, so as to obtain a second living body detection result.
In this embodiment, the information matching degree is compared with a preset matching degree threshold to perform living body detection on the face data to be detected and obtain the second living body detection result. Specifically, if the information matching degree is greater than or equal to the preset matching degree threshold, it is determined that the face video to be detected corresponds to a living face, a second living body score corresponding to the information matching degree is generated, and the second living body score is taken as the second living body detection result; if the information matching degree is less than the preset matching degree threshold, it is determined that the face video to be detected does not correspond to a living face, that is, the second living body score is set to 0 and taken as the second living body detection result.
Step S50, determining a target living body detection result based on the first living body detection result and the second living body detection result.
In this embodiment, a target living body detection result is determined based on the first living body detection result and the second living body detection result. Specifically, a weighted average of the first living body score and the second living body score is taken to obtain a target living body score, and based on the target living body score, whether the face video to be detected corresponds to a living face is determined, obtaining the target living body detection result; a fusion sketch follows.
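A sketch of this score fusion under assumed equal weights and an assumed decision threshold, neither of which is specified at this point in the text:

```python
def fuse_liveness_scores(scores, weights, threshold=50.0):
    """Weighted average of per-check liveness scores plus a live/not-live call."""
    target = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    return target, target >= threshold   # (target living body score, is a live face)

target_score, is_live = fuse_liveness_scores([60.0, 85.0], weights=[0.5, 0.5])
```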
Wherein the step of determining a target living body detection result based on the first living body detection result and the second living body detection result includes:
step S51, respectively carrying out image recognition on each time frame image based on a preset image recognition model to obtain an image recognition result corresponding to each time frame image;
In this embodiment, it should be noted that because a forged face video or face image sequence must be edited, the images of the frames of forged face videos can be taken as the first type of training data and given a first type label, and the images of the frames of unforged face videos can be taken as the second type of training data and given a second type label; based on the first type label, the first type training data, the second type label, and the second type training data, a preset image recognition model for recognizing the frame images of forged face videos can then be trained.
Image recognition is performed on each time frame image based on the preset image recognition model to obtain the image recognition result corresponding to each time frame image. Specifically, each time frame image is input into the preset image recognition model and classified to judge whether the time frame image is an image in an image sequence corresponding to a forged face video, obtaining an image classification result for each time frame image, and each image classification result is taken as the image recognition result.
Step S52, performing living body detection on the face data to be detected based on the image recognition result to obtain a third living body detection result;
In this embodiment, living body detection is performed on the face data to be detected based on the image recognition result to obtain a third living body detection result. Specifically, based on each image classification result, the proportion of time frame images that do not belong to an image sequence corresponding to a forged face video among all time frame images is calculated, a third living body score corresponding to this image proportion is generated, and the third living body score is taken as the third living body detection result.
Step S53, generating a second target living body detection result based on the first living body detection result, the second living body detection result, and the third living body detection result.
In this embodiment, a second target living body detection result is generated based on the first, second, and third living body detection results. Specifically, a weighted average of the first, second, and third living body scores is taken to obtain a second target living body score, and based on the second target living body score, whether the face video to be detected corresponds to a living face is determined, obtaining the second target living body detection result.
Compared with the technical means in the prior art of performing living body detection based on the surface-layer features of a face image sequence or face video, this embodiment, after acquiring the face data to be detected and the shooting equipment motion information corresponding to it, determines the face region data and the corresponding background region data for each time frame image in the face data to be detected, and then performs face three-dimensional reconstruction on the face region data to obtain face three-dimensional point cloud data. Because regular face three-dimensional features are difficult to extract from a forged face region image sequence, forged face image sequences can be accurately identified and a malicious attacker prevented from attacking the face recognition system with a forged face image sequence; based on the face three-dimensional point cloud data, living body detection is performed on the face data to be detected to obtain a first living body detection result aimed at identifying forged face image sequences. The optical flow information corresponding to the background region data is then calculated; for a genuinely shot video the optical flow information should match the shooting equipment motion information, while for a forged face video it does not, so living body detection is performed on the face data to be detected based on the shooting equipment motion information and the optical flow information, accurately identifying forged face videos, preventing attacks with forged face videos, and obtaining a second living body detection result aimed at identifying forged face videos. By combining the first living body detection result and the second living body detection result, whether the face data to be detected corresponds to a living face can be judged and the target living body detection result obtained, so that whether the face recognition system is under attack can be determined. This overcomes the technical defect in the prior art that living body detection based on the surface-layer features of a face image sequence or face video can hardly defend against a malicious attacker attacking the face recognition system with a forged face image sequence or face video, and thus improves the security of the face recognition system.
Further, referring to FIG. 4, based on the first embodiment of the present application, in another embodiment of the present application, the step of determining the target living body detection result based on the first living body detection result and the second living body detection result includes:
Step A10, acquiring the frame difference noise feature data corresponding to each time frame image;
In this embodiment, it should be noted that the frame difference noise feature data is the feature data of the noise information in the frame difference data and is used to distinguish manually edited videos from naturally shot videos, where a manually edited video is a forged face video and a naturally shot video is an unforged face video. Because a manually edited video is edited as needed, the editing changes the original frame difference noise characteristics, so the frame difference noise features of a manually edited video differ from those of a naturally shot video. For example, the frame difference noise of a naturally shot video should follow a Gaussian or Poisson distribution; editing the video damages this distribution, so the frame difference noise of a manually edited video does not follow a Gaussian or Poisson distribution.
The frame difference noise feature data corresponding to each time frame image is acquired. Specifically, the frame difference between the time frame images of adjacent time frames is calculated to obtain the frame difference data, and feature extraction is then performed on the frame difference data to obtain the frame difference noise feature data.
The frame difference noise feature data includes spatial domain frame difference noise feature data and frequency domain frame difference noise feature data, and the step of acquiring the frame difference noise feature data corresponding to each time frame image includes:
Step A11, calculating the adjacent frame differences between the time frame images to obtain the frame difference maps;
In this embodiment, the adjacent frame differences between the time frame images are calculated to obtain the frame difference maps. Specifically, based on the time order of the time frames corresponding to the time frame images, the frame difference between the time frame images of every two adjacent time frames is calculated to obtain each frame difference map. For example, if the time frame images are (f1, f2, ..., fN), then the frame differences are d1 = f2 - f1, ..., dN-1 = fN - fN-1, and the frame difference maps are (d1, d2, ..., dN-1).
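A direct sketch of this step, assuming the frames are already loaded as a list of equally sized grayscale arrays:

```python
import numpy as np

def frame_difference_maps(frames):
    """Frame differences d_i = f_(i+1) - f_i for a list of grayscale frames."""
    stack = np.stack(frames).astype(np.int16)   # signed, so differences keep sign
    return stack[1:] - stack[:-1]               # (N-1, H, W) frame difference maps
```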
Step A12, filtering each frame difference map to amplify the noise signal in the frame difference data and obtain the spatial domain frame difference noise feature data;
In this embodiment, it should be noted that if there were no noise signal in the time frame images, most of the pixel values in the frame difference map matrix corresponding to a frame difference map would be 0. A naturally shot video always contains a noise signal, and that noise signal is distributed throughout the frame difference map. In a manually edited image, however, some regions of the image have been edited, so the noise signal in the edited regions disappears; consequently, in the frame difference map the noise signal is absent from those regions, and the pixel values of the corresponding parts of the frame difference map matrix are 0.
Each frame difference map is filtered to amplify the noise signal in the frame difference data and obtain the spatial domain frame difference noise feature data. Specifically, based on a preset filter kernel, convolution is performed on the frame difference map matrix corresponding to each frame difference map to amplify the pixel values in the frame difference map, and thus the noise signal, obtaining a convolution processing matrix for each frame difference map matrix; each convolution processing matrix is then taken as the spatial domain frame difference noise feature data. The preset filter kernel is a convolution kernel used for the convolution processing and includes a second-order difference matrix, an edge difference matrix, a square difference matrix, and the like; FIG. 5 is a schematic diagram of the second-order difference matrix, FIG. 6 of the edge difference matrix, and FIG. 7 of the square difference matrix.
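As an illustration, applying a second-order difference kernel to a frame difference map with OpenCV; the 3x3 discrete Laplacian below is an assumed stand-in for the kernel of FIG. 5, since the patent's exact matrices are given only in the drawings:

```python
import cv2
import numpy as np

# Second-order difference (discrete Laplacian) kernel; an assumed stand-in
# for the preset filter kernel shown in FIG. 5.
SECOND_ORDER_KERNEL = np.array([[0,  1, 0],
                                [1, -4, 1],
                                [0,  1, 0]], dtype=np.float32)

def amplify_noise(frame_diff: np.ndarray) -> np.ndarray:
    """Convolve a frame difference map to amplify its residual noise signal."""
    return cv2.filter2D(frame_diff.astype(np.float32), -1, SECOND_ORDER_KERNEL)
```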
Step A13, performing Fourier transform on each frame difference map to transform it from the spatial domain to the frequency domain and obtain the frequency domain frame difference noise feature data.
In this embodiment, it should be noted that the frequency domain frame difference noise feature data includes frame difference spectrograms. The frame difference map is an image in the spatial domain and contains the spatial information of the image, where the spatial information reflects characteristics such as the position, shape, and size of objects in the image; the frame difference spectrogram is the image corresponding to the frame difference map in the frequency domain and represents the image frequency of the frame difference map, where the image frequency is the severity of pixel value change in the image, that is, the gradient of the image.
Additionally, it should be noted that the main component of an image is low-frequency information, which forms the basic gray scale of the image and has little determining effect on the image structure; mid-frequency information determines the basic structure of the image and forms its main edge structure; high-frequency information forms the edges and details of the image. A noise signal is usually mid-to-high-frequency information in the image.
Additionally, it should be noted that the noise signal and the low-frequency signal in the frame difference spectrogram corresponding to a naturally shot video should be uniformly distributed across the regions of the frame difference spectrogram, whereas in the frame difference spectrogram corresponding to a manually edited video the noise signal has been eliminated by the manual editing, which appears as the disappearance of high-frequency information in parts of the frame difference spectrogram.
Fourier transform is performed on each frame difference map to transform it from the spatial domain to the frequency domain and obtain the frequency domain frame difference noise feature data. Specifically, Fourier transform is performed on each frame difference map to obtain the frame difference spectrogram corresponding to each frame difference map, and each frame difference spectrogram is taken as the frequency domain frame difference noise feature data, as sketched below.
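A minimal sketch of this transform with NumPy, producing a log-magnitude spectrogram with the zero frequency centered; the log scaling and centering are conventional choices, not steps stated in the text:

```python
import numpy as np

def frame_difference_spectrogram(frame_diff: np.ndarray) -> np.ndarray:
    """2-D Fourier transform of a frame difference map as a log-magnitude image."""
    spectrum = np.fft.fftshift(np.fft.fft2(frame_diff))   # zero frequency centered
    return np.log1p(np.abs(spectrum))                     # compress dynamic range
```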
Step A20, performing living body detection on the face data to be detected based on the frame difference noise feature data to obtain a fourth living body detection result;
In this embodiment, living body detection is performed on the face data to be detected based on the frame difference noise feature data to obtain a fourth living body detection result. Specifically, living body detection is performed on each frame difference map based on its frame difference noise feature data to obtain a living body detection sub-result for each frame difference map, a fourth living body score is calculated based on the living body detection sub-results, and the fourth living body score is taken as the fourth living body detection result.
Additionally, it should be noted that in the embodiment of the present application, living body detection is performed based on the frame difference noise feature data, so manually edited videos and naturally shot videos can be distinguished: the living body detection sub-results for the frame difference noise feature data of a manually edited video will, with high probability, be determined as non-living. Even if a malicious attacker attacks the face recognition system with forged video data, the forged video data can therefore be identified and the face in it determined to be non-living, which improves the accuracy of living body detection and, in turn, the accuracy of face recognition.
The step of performing living body detection on the face data to be detected based on the frame difference noise feature data to obtain a fourth living body detection result includes the following steps:
Step A21, inputting the frame difference noise feature data into a preset frame difference noise feature classification model, and classifying each frame difference noise feature in the frame difference noise feature data to obtain a frame difference noise feature classification result;
In this embodiment, it should be noted that the frame difference noise feature data at least includes the frame difference noise feature corresponding to each frame difference map, where the frame difference noise feature includes the convolution processing matrix and the frame difference spectrogram, and the preset frame difference noise feature classification model is a trained neural network model used to classify the frame difference noise features, where the types of frame difference noise feature include the living frame difference noise feature type and the non-living frame difference noise feature type.
The frame difference noise feature data is input into the preset frame difference noise feature classification model, and each frame difference noise feature in it is classified to obtain the frame difference noise feature classification result. Specifically, the frame difference noise feature representation matrix corresponding to each frame difference map is input into the preset frame difference noise feature classification model, where the frame difference noise feature representation matrix is the encoding matrix of the frame difference noise feature. Feature extraction is performed on each frame difference noise feature representation matrix to obtain the corresponding feature extraction matrix, a fully connected layer is applied to each feature extraction matrix to obtain the classification label vector corresponding to each frame difference noise feature, and based on the classification label in each classification label vector it is determined whether each frame difference noise feature is of the living or the non-living frame difference noise feature type, obtaining the frame difference noise feature classification result. The living frame difference noise feature type indicates that the frame difference map corresponding to the frame difference noise feature corresponds to a living face, and the non-living frame difference noise feature type indicates that it corresponds to a non-living face.
Additionally, it should be noted that when training the preset frame difference noise feature classification model, naturally shot face video data and manually edited face video data can be obtained separately; the frame difference noise features corresponding to the naturally shot face video data are taken as the first type of training data and given the first classification label corresponding to the living frame difference feature type, the frame difference noise features corresponding to the manually edited face video data are taken as the second type of training data and given the second classification label corresponding to the non-living frame difference feature type, and the preset frame difference noise feature classification model can then be trained based on the first type of training data, the first classification label, the second type of training data, and the second classification label.
Step A22, performing living body detection on the face data to be detected based on the frame difference noise feature classification result to obtain a fourth living body detection result.
In this embodiment, living body detection is performed on the face data to be detected based on the frame difference noise feature classification result to obtain the fourth living body detection result. Specifically, based on the frame difference noise feature classification result, the number of frame difference noise features belonging to the living frame difference feature type is counted, a fourth living body score is calculated from this count and the number of frame difference maps, and the fourth living body score is taken as the fourth living body detection result.
Step A30, generating a third target living body detection result based on the first, second, and fourth living body detection results.
In this embodiment, a third target living body detection result is generated based on the first, second, and fourth living body detection results. Specifically, a weighted average of the first, second, and fourth living body scores is taken to obtain a third target living body score, and based on the third target living body score, whether the face video to be detected corresponds to a living face is determined, obtaining the third target living body detection result.
Compared with the technical means in the prior art of performing living body detection based on the surface-layer features of a face image sequence or face video, this embodiment, after acquiring the frame difference noise feature data corresponding to each time frame image, performs living body detection on the face data to be detected based on the fact that the frame difference noise features of manually edited video data differ from those of naturally shot video data; that is, it recognizes whether the face video to be detected corresponding to the frame difference noise feature data is forged video data, obtaining a more accurate fourth living body detection result. Whether the face data to be detected corresponds to a living face is then judged comprehensively based on the first, second, and fourth living body detection results, generating a third target living body detection result, from which it can be determined whether the face recognition system is under attack. This overcomes the technical defect in the prior art that living body detection based on the surface-layer features of a face image sequence or face video can hardly defend against a malicious attacker attacking the face recognition system with a forged face image sequence or face video, and thus further improves the security of the face recognition system.
Referring to FIG. 8, FIG. 8 is a schematic structural diagram of a device in the hardware operating environment according to an embodiment of the present application.
As shown in FIG. 8, the living body detection device may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to realize connection and communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the living body detection device may further include a user interface, a network interface, a camera, RF (Radio Frequency) circuitry, sensors, audio circuitry, a WiFi module, and the like. The user interface may include a display screen (Display) and an input sub-module such as a keyboard (Keyboard), and the optional user interface may also include standard wired and wireless interfaces. The network interface may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
Those skilled in the art will appreciate that the living body detection device structure shown in FIG. 8 does not constitute a limitation of the living body detection device, and the device may include more or fewer components than shown, combine some components, or arrange the components differently.
As shown in FIG. 8, the memory 1005, which is a computer storage medium, may include an operating system, a network communication module, and a living body detection program. The operating system is a program that manages and controls the hardware and software resources of the living body detection device, supporting the operation of the living body detection program as well as other software and/or programs. The network communication module is used to enable communication between the components within the memory 1005, as well as with other hardware and software in the living body detection system.
In the living body detection device shown in FIG. 8, the processor 1001 is configured to execute the living body detection program stored in the memory 1005 and implement the steps of the living body detection method described in any one of the above.
The specific implementation of the living body detection device of the present application is substantially the same as the embodiments of the living body detection method described above, and is not described herein again.
An embodiment of the present application further provides a living body detection apparatus, the living body detection apparatus being applied to a living body detection device, and comprising the following modules (an illustrative pipeline sketch follows this list):
the first determining module is used for acquiring the face data to be detected and the shooting equipment motion information corresponding to the face data to be detected, and determining the face area data corresponding to each time frame image in the face data to be detected and the corresponding background area data;
the three-dimensional reconstruction module is used for performing human face three-dimensional reconstruction on the human face region data to obtain human face three-dimensional point cloud data;
the first living body detection module is used for carrying out living body detection on the human face data to be detected based on the human face three-dimensional point cloud data to obtain a first living body detection result;
the second living body detection module is used for calculating optical flow information corresponding to the background area data, and carrying out living body detection on the face data to be detected based on the shooting equipment motion information and the optical flow information to obtain a second living body detection result;
and the second determining module is used for determining a target living body detection result based on the first living body detection result and the second living body detection result.
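As noted above, here is a minimal sketch of how the five modules might compose into one pipeline. Each helper is passed in as a callable because the patent leaves the corresponding model unspecified; every name below is hypothetical:

```python
from typing import Callable, List
import numpy as np

def detect_liveness(
    frames: List[np.ndarray],           # time frame images of the face video
    device_motion: np.ndarray,          # shooting equipment motion information
    segment_regions: Callable,          # first determining module
    reconstruct_face_3d: Callable,      # three-dimensional reconstruction module
    classify_point_cloud: Callable,     # first living body detection module
    estimate_background_flow: Callable, # optical flow over background data
    match_motion: Callable,             # motion/flow consistency check
) -> bool:
    # Split each time frame image into face region data and background data.
    face_regions, background_regions = segment_regions(frames)

    # Reconstruct the face in 3-D and classify the resulting point cloud.
    point_cloud = reconstruct_face_3d(face_regions)
    first_result = classify_point_cloud(point_cloud)

    # Compare background optical flow with the reported device motion.
    flow = estimate_background_flow(background_regions)
    second_result = match_motion(device_motion, flow)

    # Second determining module: combine both branch decisions.
    return first_result and second_result
```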
Optionally, the first living body detection module comprises:
the classification unit is used for classifying the human face three-dimensional point cloud data based on a preset point cloud data classification model to obtain a point cloud data classification result;
and the first living body detection unit is used for carrying out living body detection on the face data to be detected based on the point cloud data classification result to obtain a first living body detection result.
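The "preset point cloud data classification model" is not specified further, so the following PointNet-style binary classifier is only one plausible shape for it — a sketch assuming PyTorch and an input of (batch, num_points, 3) face point clouds:

```python
import torch
import torch.nn as nn

class PointCloudLivenessClassifier(nn.Module):
    """Illustrative stand-in for the preset point cloud classification
    model: a shared per-point MLP, an order-invariant max-pool over the
    points, and a small head producing live-vs-forged logits."""

    def __init__(self):
        super().__init__()
        self.per_point = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),                 # live vs. forged
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) face three-dimensional point cloud
        features = self.per_point(points)     # (batch, num_points, 128)
        pooled = features.max(dim=1).values   # symmetric pooling over points
        return self.head(pooled)              # (batch, 2) class logits
```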
Optionally, the second living body detection module comprises:
the calculation unit is used for calculating the information matching degree between the shooting equipment motion information and the optical flow information;
and the second living body detection unit is used for comparing the information matching degree with a preset matching degree threshold value, so as to perform living body detection on the face data to be detected and obtain a second living body detection result.
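A sketch of the calculation unit and the threshold comparison, assuming OpenCV's Farnebäck dense optical flow, grayscale background crops, and a 2-D device motion vector; cosine similarity is one possible "information matching degree" among others:

```python
import cv2
import numpy as np

def background_flow_matches_motion(prev_bg: np.ndarray,
                                   next_bg: np.ndarray,
                                   device_motion: np.ndarray,
                                   threshold: float = 0.8) -> bool:
    """Dense Farnebäck flow on two consecutive grayscale background
    crops, compared against the device's reported (dx, dy) motion."""
    flow = cv2.calcOpticalFlowFarneback(prev_bg, next_bg, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_flow = flow.reshape(-1, 2).mean(axis=0)   # average (dx, dy)
    flow_norm = np.linalg.norm(mean_flow)
    motion_norm = np.linalg.norm(device_motion)
    if flow_norm < 1e-6 and motion_norm < 1e-6:
        return True    # camera and background both essentially static
    if flow_norm < 1e-6 or motion_norm < 1e-6:
        return False   # one moved while the other did not
    # Cosine similarity serves as the information matching degree.
    matching_degree = float(mean_flow @ device_motion) / (flow_norm * motion_norm)
    return matching_degree >= threshold
```

A replayed or injected video tends to break the consistency between background motion and the motion actually reported by the shooting equipment, which is why this branch is informative.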
Optionally, the first determining module includes:
and the semantic segmentation unit is used for performing semantic segmentation on each time frame image based on a preset image segmentation model to obtain the face region corresponding to each time frame image and the corresponding background region.
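The "preset image segmentation model" is likewise left open. As a loose stand-in, the sketch below uses an off-the-shelf DeepLabV3 (assuming torchvision ≥ 0.13) whose "person" class approximates the foreground, then splits each time frame image into face region data and background region data; the model choice and class index are assumptions:

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Stand-in model; index 15 is 'person' in its Pascal-VOC label set.
model = deeplabv3_resnet50(weights="DEFAULT").eval()

def split_regions(frame_chw: torch.Tensor):
    """frame_chw: a normalized (3, H, W) float tensor of one time frame.
    Returns (face_region, background_region) with the other part zeroed."""
    with torch.no_grad():
        logits = model(frame_chw.unsqueeze(0))["out"][0]  # (21, H, W)
    mask = logits.argmax(dim=0) == 15                     # person pixels
    face_region = frame_chw * mask
    background_region = frame_chw * ~mask
    return face_region, background_region
```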
Optionally, the second determining module includes:
the image recognition unit is used for respectively carrying out image recognition on each time frame image based on a preset image recognition model to obtain an image recognition result corresponding to each time frame image;
the third living body detection unit is used for carrying out living body detection on the face data to be detected based on the image recognition result to obtain a third living body detection result;
and the first determination unit is used for generating a second target living body detection result based on the first living body detection result, the second living body detection result and the third living body detection result.
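One way to realize the image recognition unit and the subsequent fusion, sketched under the assumption that `recognize_frame` wraps the (unspecified) preset image recognition model and that majority voting plus AND-fusion are acceptable aggregation rules:

```python
from typing import Callable, Sequence

def third_living_body_result(frames: Sequence,
                             recognize_frame: Callable[..., bool]) -> bool:
    """`recognize_frame` returns True when a time frame image is
    recognized as a real, unforged face; majority voting over all
    frames yields the third living body detection result."""
    votes = [recognize_frame(frame) for frame in frames]
    return sum(votes) > len(votes) / 2

def second_target_result(first: bool, second: bool, third: bool) -> bool:
    # Conservative fusion of the three branch decisions.
    return first and second and third
```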
Optionally, the second determining module further includes:
the acquisition unit is used for acquiring frame difference noise feature data corresponding to each time frame image;
the fourth living body detection unit is used for performing living body detection on the face data to be detected based on the frame difference noise feature data to obtain a fourth living body detection result;
and the second determination unit is used for generating a third target living body detection result based on the first living body detection result, the second living body detection result and the fourth living body detection result.
Optionally, the obtaining unit includes:
the calculating subunit is used for calculating the adjacent frame difference between the time frame images to obtain each frame difference image;
the filtering subunit is used for respectively performing filtering processing on each frame difference image, so as to amplify the noise signal in the frame difference data and obtain the spatial domain frame difference noise feature data;
and the Fourier transform subunit is used for respectively carrying out Fourier transform on each frame difference image so as to transform the frame difference image from a spatial domain to a frequency domain and obtain the frequency domain frame difference noise characteristic data.
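A sketch of these three subunits, assuming grayscale frames, a Laplacian as the noise-amplifying filter, and a magnitude spectrum as the frequency-domain feature; the patent requires only "filtering processing" and a Fourier transform, so these specific choices are illustrative:

```python
import numpy as np
from scipy import ndimage

def frame_difference_noise_features(frames):
    """frames: list of grayscale frames as float arrays of equal shape.
    Returns per-pair spatial-domain and frequency-domain noise features."""
    spatial_feats, frequency_feats = [], []
    for prev, curr in zip(frames[:-1], frames[1:]):
        diff = curr.astype(np.float32) - prev.astype(np.float32)
        # Filtering subunit: a Laplacian amplifies high-frequency noise.
        spatial_feats.append(ndimage.laplace(diff))
        # Fourier transform subunit: spatial domain -> frequency domain.
        spectrum = np.fft.fftshift(np.abs(np.fft.fft2(diff)))
        frequency_feats.append(spectrum)
    return spatial_feats, frequency_feats
```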
Optionally, the fourth living body detecting unit includes:
the classification subunit is used for inputting the frame difference noise feature data into a preset frame difference noise feature classification model, and classifying each frame difference noise feature in the frame difference noise feature data to obtain a frame difference noise feature classification result;
and the living body detection subunit is used for carrying out living body detection on the human face data to be detected based on the frame difference noise feature classification result to obtain a fourth living body detection result.
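Finally, the "preset frame difference noise feature classification model" could be any classifier over the noise maps; the minimal CNN below is a sketch with an assumed architecture (PyTorch, single-channel spatial- or frequency-domain input):

```python
import torch
import torch.nn as nn

class FrameDiffNoiseClassifier(nn.Module):
    """Illustrative stand-in for the preset frame difference noise
    feature classification model, ending in live-vs-forged logits."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)

    def forward(self, noise_map: torch.Tensor) -> torch.Tensor:
        # noise_map: (batch, 1, H, W) frame difference noise feature map
        return self.head(self.features(noise_map).flatten(1))
```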
The specific implementation of the living body detection apparatus of the present application is substantially the same as the embodiments of the living body detection method described above, and is not described herein again.
An embodiment of the present application further provides a readable storage medium storing one or more programs, which can be executed by one or more processors to implement the steps of the living body detection method described in any one of the above.
The specific implementation of the readable storage medium of the present application is substantially the same as the embodiments of the living body detection method described above, and is not described herein again.
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application; any equivalent structural or process modification made using the contents of the specification and the drawings, or any direct or indirect application in other related technical fields, is likewise included in the scope of the present application.

Claims (10)

1. A living body detection method, the method comprising:
acquiring face data to be detected and shooting equipment motion information corresponding to the face data to be detected, and determining face area data corresponding to each time frame image in the face data to be detected and corresponding background area data;
performing human face three-dimensional reconstruction on the human face region data to obtain human face three-dimensional point cloud data;
performing living body detection on the face data to be detected based on the face three-dimensional point cloud data to obtain a first living body detection result;
calculating optical flow information corresponding to the background area data, and performing living body detection on the face data to be detected based on the motion information of the shooting equipment and the optical flow information to obtain a second living body detection result;
determining a target living body detection result based on the first living body detection result and the second living body detection result.
2. The living body detection method according to claim 1, wherein the step of performing living body detection on the face data to be detected based on the face three-dimensional point cloud data to obtain a first living body detection result comprises:
classifying the three-dimensional face point cloud data based on a preset point cloud data classification model to obtain a point cloud data classification result;
and performing living body detection on the face data to be detected based on the point cloud data classification result to obtain a first living body detection result.
3. The living body detection method according to claim 1, wherein the step of performing living body detection on the face data to be detected based on the shooting equipment motion information and the optical flow information to obtain a second living body detection result comprises:
calculating the information matching degree of the motion information of the shooting equipment and the optical flow information;
and comparing the information matching degree with a preset matching degree threshold value to perform living body detection on the face data to be detected, so as to obtain a second living body detection result.
4. The living body detection method according to claim 1, wherein the face region data includes at least the face region corresponding to each time frame image, and the background region data includes at least the corresponding background region,
the step of determining the face region data corresponding to each time frame image in the face data to be detected and the corresponding background region data includes:
and performing semantic segmentation on each time frame image based on a preset image segmentation model to obtain the face region corresponding to each time frame image and the corresponding background region.
5. The living body detection method according to claim 1, wherein the step of determining the target living body detection result based on the first living body detection result and the second living body detection result comprises:
respectively carrying out image recognition on each time frame image based on a preset image recognition model to obtain an image recognition result corresponding to each time frame image;
performing living body detection on the face data to be detected based on the image recognition result to obtain a third living body detection result;
generating a second target living body detection result based on the first living body detection result, the second living body detection result and the third living body detection result.
6. The living body detection method according to claim 1, wherein the step of determining the target living body detection result based on the first living body detection result and the second living body detection result comprises:
acquiring frame difference noise characteristic data corresponding to each time frame image;
performing living body detection on the face data to be detected based on the frame difference noise characteristic data to obtain a fourth living body detection result;
generating a third target living body detection result based on the first living body detection result, the second living body detection result and the fourth living body detection result.
7. The living body detection method according to claim 6, wherein the frame difference noise characteristic data comprises spatial domain frame difference noise characteristic data and frequency domain frame difference noise characteristic data,
the step of obtaining frame difference noise characteristic data corresponding to each time frame image includes:
calculating adjacent frame differences among the time frame images to obtain frame difference images;
respectively carrying out filtering processing on each frame difference image so as to amplify noise signals in the frame difference data and obtain the spatial domain frame difference noise characteristic data;
and respectively carrying out Fourier transform on each frame difference image so as to transform the frame difference image from a spatial domain to a frequency domain and obtain the frequency domain frame difference noise characteristic data.
8. The live body detection method according to claim 6, wherein the step of performing live body detection on the face data to be detected based on the frame difference noise feature data to obtain a fourth live body detection result comprises:
inputting the frame difference noise characteristic data into a preset frame difference noise characteristic classification model, and classifying each frame difference noise characteristic in the frame difference noise characteristic data to obtain a frame difference noise characteristic classification result;
and performing living body detection on the face data to be detected based on the frame difference noise feature classification result to obtain a fourth living body detection result.
9. A living body detection device, the living body detection device comprising: a memory, a processor, and a program stored on the memory for implementing the living body detection method, wherein
the memory is used for storing a program for realizing the living body detection method;
the processor is configured to execute the program for implementing the living body detection method, so as to implement the steps of the living body detection method according to any one of claims 1 to 8.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a program for implementing a living body detection method, the program being executed by a processor to implement the steps of the living body detection method according to any one of claims 1 to 8.
CN202010876192.8A 2020-08-25 2020-08-25 Living body detection method, living body detection apparatus, and readable storage medium Pending CN111985427A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010876192.8A CN111985427A (en) 2020-08-25 2020-08-25 Living body detection method, living body detection apparatus, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010876192.8A CN111985427A (en) 2020-08-25 2020-08-25 Living body detection method, living body detection apparatus, and readable storage medium

Publications (1)

Publication Number Publication Date
CN111985427A true CN111985427A (en) 2020-11-24

Family

ID=73439855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010876192.8A Pending CN111985427A (en) 2020-08-25 2020-08-25 Living body detection method, living body detection apparatus, and readable storage medium

Country Status (1)

Country Link
CN (1) CN111985427A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022126914A1 (en) * 2020-12-18 2022-06-23 平安科技(深圳)有限公司 Living body detection method and apparatus, electronic device, and storage medium
CN112433228A (en) * 2021-01-05 2021-03-02 中国人民解放军国防科技大学 Multi-laser radar decision-level fusion method and device for pedestrian detection
CN113822841A (en) * 2021-01-29 2021-12-21 深圳信息职业技术学院 Sewage impurity caking detection method and device and related equipment
CN113822841B (en) * 2021-01-29 2022-05-20 深圳信息职业技术学院 Sewage impurity caking detection method and device and related equipment
CN113095272A (en) * 2021-04-23 2021-07-09 深圳前海微众银行股份有限公司 Living body detection method, living body detection apparatus, living body detection medium, and computer program product
CN113095272B (en) * 2021-04-23 2024-03-29 深圳前海微众银行股份有限公司 Living body detection method, living body detection device, living body detection medium and computer program product
TWI824892B (en) * 2022-08-03 2023-12-01 大陸商中國銀聯股份有限公司 A face manipulation detection method and its detection device based on optical flow analysis
CN116310146A (en) * 2023-05-16 2023-06-23 北京邃芒科技有限公司 Face image replay method, system, electronic device and storage medium
CN116310146B (en) * 2023-05-16 2023-10-27 北京邃芒科技有限公司 Face image replay method, system, electronic device and storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination