CN112183357A - Deep learning-based multi-scale in-vivo detection method and system - Google Patents

Deep learning-based multi-scale in-vivo detection method and system

Info

Publication number
CN112183357A
Authority
CN
China
Prior art keywords
scale
image
information
features
vivo detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011047776.0A
Other languages
Chinese (zh)
Other versions
CN112183357B (en)
Inventor
朱鑫懿
魏文应
安欣赏
张伟民
李革
张世雄
李楠楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Instritute Of Intelligent Video Audio Technology Longgang Shenzhen
Original Assignee
Instritute Of Intelligent Video Audio Technology Longgang Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Instritute Of Intelligent Video Audio Technology Longgang Shenzhen
Priority to CN202011047776.0A
Publication of CN112183357A
Application granted
Publication of CN112183357B
Active (current legal status)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A deep learning-based multi-scale in-vivo detection method comprises the following steps: step one, inputting a picture, and extracting a multi-scale image; step two, extracting multi-scale features from the multi-scale image: extracting multi-scale features from the multi-scale image by using a deep learning model to obtain face image information features, environment information features and behavior information features; step three, obtaining multi-scale fusion features: performing feature fusion on the extracted multi-scale features by adopting different constraints to obtain multi-scale fusion features; and step four, inputting the multi-scale fusion features into a classification network, outputting the living body score, and obtaining the in-vivo detection result according to a threshold value. Compared with existing in-vivo detection methods based on a single face region, the method provided by the invention has better scene adaptability and higher detection accuracy in poor imaging environments.

Description

Deep learning-based multi-scale in-vivo detection method and system
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a deep learning-based multi-scale in-vivo detection method and system.
Background
Biometric identification, and in particular face recognition, has long been a research hotspot in the field of computer vision. With the development of deep learning and advances in hardware computing power, face recognition technology is now widely applied in many fields, such as face unlocking on mobile phones, face-based attendance at access control terminals, and online face recognition payment. However, existing face recognition technology carries the security risk that identity information can be stolen: lawbreakers can pass identity verification with forged living-face information and then carry out illegal activities such as stealing property or endangering public safety. Face recognition applications therefore need a living body (liveness) detection method to identify whether a given face is a living body, i.e. a living subject; common non-living attack means include printed face paper attacks, video face attacks and 3D face mask attacks. Many RGB-image face liveness detection algorithms based on traditional vision and on deep learning already exist; however, a liveness detection algorithm based on face information alone is easily affected by the environment and the device, such as the illumination conditions and the imaging quality of the equipment, and in some poor environments such an algorithm has difficulty distinguishing living bodies from non-living bodies.
Disclosure of Invention
The deep learning-based multi-scale in-vivo detection method and system provided by the invention start from a multi-scale feature approach: a feature-extraction neural network extracts face features under multi-scale conditions together with environment information features and behavior information features near the face, and then fuses these features, which enhances the adaptability of the algorithm to different environments and improves the detection accuracy in poor imaging environments.
The technical scheme provided by the invention is as follows:
according to one aspect of the invention, the deep learning-based multi-scale in-vivo detection method comprises the following steps: inputting a picture, and extracting a multi-scale image; step two, extracting multi-scale features from the multi-scale image: extracting multi-scale features from the multi-scale image by using a deep learning model to obtain face image information features, environment information features and behavior information features; step three, obtaining multi-scale fusion characteristics: performing feature fusion on the extracted multi-scale features by adopting different constraints to obtain multi-scale fusion features; and step four, inputting the multi-scale fusion characteristics to a classification network, outputting the living body score, and obtaining a living body test result according to a threshold value.
Preferably, in the deep learning-based multi-scale in-vivo detection method, in step one, a multi-scale image is extracted from the target to be detected in the input picture; the multi-scale image includes a low-scale face image information image, a medium-scale environment information image and a high-scale behavior information image, and each of them is an RGB image.
Preferably, in the deep learning-based multi-scale in-vivo detection method, in step three, the multi-scale fusion features can be expressed by formula (2):
L(G_l, G_m, G_h) = λ_1·F_l + λ_2·F_m + λ_3·F_h    (2)
In formula (2), G_l, G_m and G_h are respectively the acquired low-, medium- and high-scale images; F_l is the extracted low-scale imaging feature and λ_1 its constraint; F_m is the extracted medium-scale environment information feature and λ_2 its constraint; F_h is the extracted high-scale behavior information feature and λ_3 its constraint.
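For illustration only, the data flow of the four steps can be sketched in Python as follows; the function and parameter names are assumptions introduced for this sketch (the patent does not specify an implementation), and every stage is supplied by the caller as a callable.

```python
def multiscale_liveness(picture, extract_fn, extractor_nets, lambdas, classifier, alpha=0.5):
    """Hypothetical orchestration of the four steps; only the claimed data flow is fixed here."""
    # Step one: low-, medium- and high-scale images extracted from the input picture.
    crops = extract_fn(picture)                                   # e.g. {"low": G_l, "mid": G_m, "high": G_h}
    # Step two: per-scale feature extraction with dedicated deep-learning models.
    feats = {name: extractor_nets[name](image) for name, image in crops.items()}
    # Step three: multi-scale fusion under different constraint weights, as in formula (2).
    fused = sum(lambdas[name] * feats[name] for name in feats)
    # Step four: the classification network outputs a living body score, thresholded by alpha.
    score = classifier(fused)
    return score, score >= alpha
```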
According to another aspect of the invention, a deep learning-based multi-scale in-vivo detection system using the above deep learning-based multi-scale in-vivo detection method is provided. The multi-scale in-vivo detection system comprises an adaptive multi-scale image acquisition module, a convolutional neural network creation module and a multi-scale in-vivo detection module, wherein the convolutional neural network creation module is used for designing a convolutional neural network model, performing living-body judgment on an input target to be detected and outputting a liveness detection score; and the multi-scale in-vivo detection module is used for extracting the multi-scale image information of the target to be detected, inputting it into the convolutional neural network model, fusing the multi-scale liveness scores output by the model and obtaining the in-vivo detection result of the target to be detected.
Preferably, in the above deep learning-based multi-scale in-vivo detection system, the adaptive multi-scale image acquisition module comprises: a low-scale biological information acquisition unit for acquiring a face information image according to the position information of the target to be detected;
a medium-scale environment information acquisition unit for acquiring an image containing the environmental background, using an adaptive method independent of camera resolution and image size, according to the position information of the target to be detected; and a high-scale behavior information acquisition unit for acquiring an image containing the behavior information of the target to be detected, using an adaptive method independent of camera resolution and image size, according to the position information of the target to be detected.
Preferably, in the above deep learning-based multi-scale in-vivo detection system, the convolutional neural network creation module comprises: an RGB image feature information extraction network, which constructs a multi-level deep neural network and extracts multi-level semantic feature information of the target to be detected at different scales; and an RGB image feature information classification network, which constructs a multi-level semantic feature information fusion network, fuses the extracted semantic information of the target to be detected and outputs a living body score between 0 and 1, the network outputting 1 if the target to be detected is a living body and 0 if it is a non-living body.
Preferably, in the above deep learning-based multi-scale in-vivo detection system, the multi-scale in-vivo detection module comprises: a low-scale biological imaging feature constraint unit for assigning a constraint weight to the facial imaging features extracted from the low-scale image; a medium-scale environment information feature constraint unit for assigning a constraint weight to the environment features extracted from the medium-scale image; and a high-scale behavior information feature constraint unit for assigning a constraint weight to the behavior features of the target to be detected extracted from the high-scale image.
Compared with the prior art, the invention has the beneficial effects that:
by utilizing the technical scheme provided by the invention, the living body detection is carried out in the process of face identity identification, a multi-scale feature and feature fusion method is adopted, and multi-scale visual information is utilized. Compared with the existing in-vivo detection method based on a single face area, the method provided by the invention has better scene adaptability and higher detection accuracy under a poorer imaging environment.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below.
FIG. 1 is a flow chart of a deep learning based multi-scale in vivo detection method of the present invention;
FIG. 2 is a network structure diagram of the deep learning-based multi-scale in vivo detection system of the present invention;
FIG. 3 is a diagram of a multi-scale image extraction structure of the deep learning-based multi-scale in vivo detection method.
Detailed Description
The deep learning-based multi-scale in-vivo detection method provided by the invention adopts a deep learning framework, designs a multi-scale fusion feature method, and performs in-vivo detection on this basis.
The principle of the invention is as follows: 1) The liveness detection problem is formulated as a multi-scale feature detection model in which each scale focuses on a different kind of visual information. The low scale focuses on the imaging features of the human face, i.e. the imaging characteristics of different media (paper, screen and mask), such as the moiré pattern produced when a screen is re-imaged and the facial distortion in printed-paper attacks. The medium scale focuses on environmental features in the area near the face, such as paper boundaries and screen boundaries. The high scale focuses on the behavior information of the target to be detected, such as hand movements. 2) The features of different scales are fused: an attention mechanism is applied to the different types of feature information (different degrees of attention are assigned), which gives the method adaptability to different scenes.
The deep learning-based multi-scale in-vivo detection method comprises the following parts: multi-scale image extraction for the target to be detected in the input picture; feature extraction from the images of different scales using a deep learning model; fusion of the multi-scale features; and scoring of the fused features with a classification network, a threshold deciding whether the target to be detected is a living body. As shown in FIG. 1, the deep learning-based multi-scale in-vivo detection method of the present invention, from picture input to output of the in-vivo detection result, comprises the following steps:
Step one, inputting a picture and extracting a multi-scale image (s1): target detection is performed on the input image, the position information of the target to be detected is obtained, and adaptive multi-scale image information acquisition is carried out according to that position information. The multi-scale images comprise a low-scale face image information image, a medium-scale environment information image and a high-scale behavior information image; the target detection method is not limited to traditional computer vision or deep learning methods.
Step two, extracting multi-scale features from the multi-scale image (s2): a deep learning model extracts multi-scale features from the multi-scale image to obtain the features of the target to be detected at different scales, including face image information features, environment information features and behavior information features;
Step three, obtaining the multi-scale fusion feature (s3): the extracted multi-scale features are fused under different feature constraints to obtain the multi-scale fusion feature;
Step four, inputting the multi-scale fusion feature into the classification network, outputting the living body score, and obtaining the in-vivo detection result according to a threshold value (s4): the fused feature is input into the classification network to obtain the liveness detection score, and the in-vivo detection result is obtained according to a set score threshold.
The specific implementation steps of the deep learning-based multi-scale in-vivo detection method of the present invention are described below with reference to FIG. 2 and FIG. 3; the overall operation flow is as follows:
Step one, inputting a picture and extracting the multi-scale image (s1). Given an RGB image 12 (shown in FIG. 3) collected by a device, denoted G_R, with width w and height h, the face position information (x_l, y_l, w_l, h_l) is obtained by a general face detection method (not limited to traditional vision or deep learning methods), where (x_l, y_l) are the coordinates of the center of the face region and w_l and h_l are its width and height; this face region is the low-scale image acquisition area 15 (shown in FIG. 3). Based on the image width and height and the position information of the face region, the medium-scale image acquisition area 14 and the high-scale image acquisition area 13 (shown in FIG. 3) are obtained with an adaptive calculation method, as shown in formula (1):
[Formula (1), the adaptive calculation of the medium- and high-scale acquisition areas, appears only as an image (Figure BDA0002708538170000041) in the original publication and is not reproduced here.]
where S_i1 and S_i2 are scale parameters that can be adjusted according to the device and the actual scene. The collected low-scale image is denoted G_l, the collected medium-scale image is denoted G_m, and the collected high-scale image is denoted G_h.
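Formula (1) itself is only available as an image in the original publication, so the exact adaptive rule is not reproduced; the sketch below (NumPy, with hypothetical parameters s_mid and s_high standing in for the scale parameters S_i1 and S_i2) only illustrates the general idea of enlarging the detected face region and clipping it to the image boundaries.

```python
import numpy as np

def extract_multiscale_regions(image: np.ndarray, face_box, s_mid=2.0, s_high=4.0):
    """Assumed illustration of the three acquisition areas: the face box gives the low-scale
    region, and enlarged, boundary-clipped boxes give the medium- and high-scale regions."""
    h, w = image.shape[:2]
    xc, yc, fw, fh = face_box                # (x_l, y_l, w_l, h_l): center and size of the face region

    def crop(scale):
        cw, ch = fw * scale, fh * scale
        x0, y0 = int(max(0, xc - cw / 2)), int(max(0, yc - ch / 2))
        x1, y1 = int(min(w, xc + cw / 2)), int(min(h, yc + ch / 2))
        return image[y0:y1, x0:x1]

    return {"low": crop(1.0),                # G_l: face image information
            "mid": crop(s_mid),              # G_m: environment information near the face
            "high": crop(s_high)}            # G_h: behavior information (e.g. hands)
```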
Step two, extracting the multi-scale features from the images of different scales (s2). In the feature-extraction stage, the images of different scales, namely the high-scale image 1, the medium-scale image 2 and the low-scale image 3 in FIG. 2, are input into their respective feature-extraction convolutional neural networks to obtain the corresponding multi-scale features. As shown in FIG. 2, the high-scale image 1 is input into the high-scale image feature-extraction convolutional neural network 4 to obtain the high-scale feature 7; the medium-scale image 2 is input into the medium-scale image feature-extraction convolutional neural network 5 to obtain the medium-scale feature 8; and the low-scale image 3 is input into the low-scale image feature-extraction convolutional neural network 6 to obtain the low-scale feature 9.
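The patent does not specify the architecture of the three feature-extraction convolutional neural networks (items 4, 5 and 6 in FIG. 2); the PyTorch sketch below uses a deliberately small, assumed architecture purely to show each scale being encoded by its own network into a fixed-length feature vector.

```python
import torch
import torch.nn as nn

def make_extractor(out_dim: int = 128) -> nn.Module:
    """A small stand-in for one per-scale feature-extraction CNN; the real networks are unspecified."""
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, out_dim),
    )

# One independent extractor per scale, as described in step two.
extractors = {"low": make_extractor(), "mid": make_extractor(), "high": make_extractor()}

# Placeholder RGB crops (batch of 1, 112x112) standing in for the resized scale images.
crops = {name: torch.randn(1, 3, 112, 112) for name in extractors}
features = {name: extractors[name](crops[name]) for name in extractors}   # F_l, F_m, F_h, each of shape (1, 128)
```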
Step three, obtaining the multi-scale fusion feature (s3). The obtained multi-scale features are fused in the multi-scale fusion feature unit 10 under different constraints to obtain the multi-scale fusion feature. The fusion feature is denoted L(G_l, G_m, G_h), where the features corresponding to G_l, G_m and G_h are denoted F_l, F_m and F_h respectively, and the expression is shown in formula (2):
L(G_l, G_m, G_h) = λ_1·F_l + λ_2·F_m + λ_3·F_h    (2)
In formula (2), F_l is the extracted low-scale face image feature and λ_1 its constraint; F_m is the extracted medium-scale environment information feature and λ_2 its constraint; F_h is the extracted high-scale behavior information feature and λ_3 its constraint. The fusion feature is obtained with a weighted-average method, so that different degrees of attention are applied to the features of different scales. λ_1, λ_2 and λ_3 are not fixed values and can be adjusted according to the environment and the device in the actual scene.
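As a concrete sketch of formula (2) (PyTorch tensors assumed; the λ values 0.5 / 0.3 / 0.2 are the ones reported in the evaluation below, not mandatory defaults), the weighted fusion can be written as:

```python
import torch

def fuse(features: dict, lambdas=(0.5, 0.3, 0.2)) -> torch.Tensor:
    """L(G_l, G_m, G_h) = lambda_1 * F_l + lambda_2 * F_m + lambda_3 * F_h  -- formula (2).
    The lambdas are not fixed in general; they are tuned per device and scene."""
    l1, l2, l3 = lambdas
    return l1 * features["low"] + l2 * features["mid"] + l3 * features["high"]

# Example with the per-scale features from step two:  fused = fuse(features)
```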
Step four, inputting the fusion feature into the classification network, outputting the living body score, and obtaining the in-vivo detection result according to a threshold value (s4). The fusion feature L is input into the classification network module 11, the output is constrained to [0, 1] with a Sigmoid function, and a probability value P is obtained, i.e. the liveness detection score of the target to be detected. According to the set liveness detection threshold α, a score greater than or equal to α is classified as a living body, and a score smaller than α as a non-living body.
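The classification network module 11 is likewise not specified beyond its Sigmoid-bounded output; a minimal assumed stand-in (a single linear layer in PyTorch) that produces the probability P and applies the threshold α could look like this:

```python
import torch
import torch.nn as nn

class LivenessHead(nn.Module):
    """Maps the fused feature L to one logit and squashes it to [0, 1] with a Sigmoid."""
    def __init__(self, in_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(in_dim, 1)

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.fc(fused)).squeeze(-1)     # probability P in [0, 1]

def is_live(p: torch.Tensor, alpha: float = 0.5) -> bool:
    """Classified as a living body if the score P is greater than or equal to the threshold alpha."""
    return bool((p >= alpha).item())

# Example:  head = LivenessHead();  p = head(fuse(features));  decision = is_live(p)
```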
The deep learning-based multi-scale in-vivo detection system comprises: an adaptive multi-scale image acquisition module, a convolutional neural network creation module and a multi-scale in-vivo detection module, which are described below.
an adaptive multi-scale image acquisition module comprising: the low-scale biological information acquisition unit is used for acquiring face image information according to the position information of the target to be detected; the mesoscale environmental information acquisition unit is used for acquiring an image containing environmental information by using an adaptive method independent of the resolution of a camera and the image size according to the position information of the target to be detected; and the high-scale behavior information acquisition unit is used for acquiring the image containing the behavior information by using an adaptive method independent of the resolution of the camera and the image size according to the position information of the target to be detected.
The convolutional neural network creation module is used to design the convolutional neural network model, perform living-body judgment on the input target to be detected and output a liveness detection score. It comprises: an RGB image feature information extraction network, i.e. a multi-level deep neural network constructed to extract the multi-scale features of the target to be detected at different scales; and an RGB image feature information classification network, i.e. a multi-level semantic feature information fusion network constructed to fuse the extracted semantic information of the target to be detected and to output a living body score between 0 and 1, the network outputting 1 if the target to be detected is a living body and 0 if it is a non-living body.
The multi-scale in-vivo detection module is used to extract the multi-scale image information of the target to be detected, input it into the convolutional neural network module, fuse the multi-scale liveness scores output by the model and obtain the in-vivo detection result of the target to be detected. It comprises: a low-scale biological imaging feature constraint unit for assigning a constraint weight to the face imaging information features extracted from the low-scale image; a medium-scale environment information feature constraint unit for assigning a constraint weight to the environment information features extracted from the medium-scale image; a high-scale behavior information feature constraint unit for assigning a constraint weight to the behavior information features of the target to be detected extracted from the high-scale image; and a multi-scale fusion feature unit for fusing the multi-scale constrained features, inputting them into the classification network and outputting the in-vivo detection result. The multi-scale fusion feature is expressed by formula (2) above.
The method of the invention is trained and evaluated on the liveness detection datasets SiW and CelebA-Spoof and on self-collected application-scene data; the evaluation uses FAR (False Acceptance Rate) and FRR (False Rejection Rate). With the threshold set to 0.5, the method of the invention outperforms a single-scale deep learning liveness detection method.
With λ_1, λ_2 and λ_3 set to 0.5, 0.3 and 0.2 respectively and the threshold set to 0.5, the FAR and FRR test results are shown in the table below. FAR is the proportion of fake faces judged to be real faces, and FRR is the proportion of real faces judged to be fake faces. As the table shows, the multi-scale method outperforms the single-scale method on both FAR and FRR.

Method                FAR (%)   FRR (%)
Single-scale method   1.0       5.0
Multi-scale method    0.8       4.2
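For reference, FAR and FRR as defined above can be computed from a batch of scores and ground-truth labels as in the following plain-Python sketch; the example values in the comment are illustrative only and are not taken from the patent's experiments.

```python
def far_frr(scores, labels, alpha=0.5):
    """FAR: proportion of fake (non-living) samples accepted as living.
    FRR: proportion of real (living) samples rejected as non-living.
    `scores` are liveness probabilities; `labels` are 1 for living, 0 for fake."""
    accepted = [s >= alpha for s in scores]
    fakes = [a for a, y in zip(accepted, labels) if y == 0]
    reals = [a for a, y in zip(accepted, labels) if y == 1]
    far = sum(fakes) / max(len(fakes), 1)
    frr = sum(not a for a in reals) / max(len(reals), 1)
    return far, frr

# Example: far_frr([0.9, 0.2, 0.7], [1, 0, 0], alpha=0.5) returns (0.5, 0.0)
```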
The invention discloses a deep learning-based multi-scale in-vivo detection method that uses convolutional neural networks in deep learning to design a multi-scale face liveness detection method based on background semantic information. Compared with a liveness detection method based on a single face region, it achieves better detection accuracy.
Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A deep learning-based multi-scale in vivo detection method is characterized by comprising the following steps:
step one, inputting a picture, and extracting a multi-scale image;
step two, extracting multi-scale features from the multi-scale image: extracting multi-scale features from the multi-scale image by using a deep learning model to obtain face image information features, environment information features and behavior information features;
step three, obtaining multi-scale fusion features: performing feature fusion on the extracted multi-scale features by adopting different constraints to obtain multi-scale fusion features;
and step four, inputting the multi-scale fusion features into a classification network, outputting a living body score, and obtaining a living body detection result according to a threshold value.
2. The deep learning-based multi-scale in-vivo detection method according to claim 1, wherein in the first step, a multi-scale image is extracted from the target to be detected in the input picture, the multi-scale image includes a low-scale face image information image, a medium-scale environment information image and a high-scale behavior information image, and the multi-scale image is an RGB image.
3. The deep learning based multi-scale in-vivo detection method according to claim 1, wherein in the step three, the multi-scale fusion features can be expressed by equation (2):
L(G_l, G_m, G_h) = λ_1·F_l + λ_2·F_m + λ_3·F_h    (2)
in equation (2), G_l, G_m and G_h are respectively the acquired low-, medium- and high-scale images; F_l is the extracted low-scale imaging feature and λ_1 its constraint; F_m is the extracted medium-scale environment information feature and λ_2 its constraint; F_h is the extracted high-scale behavior information feature and λ_3 its constraint.
4. A deep learning based multi-scale in vivo detection system using the deep learning based multi-scale in vivo detection method of claim 1, 2 or 3, wherein the multi-scale in vivo detection system comprises an adaptive multi-scale image acquisition module, a convolutional neural network creation module, and a multi-scale in vivo detection module,
the convolutional neural network creating module is used for designing a convolutional neural network model, carrying out living body judgment on an input target to be detected and outputting a living body detection score;
the multi-scale in-vivo detection module is used for extracting multi-scale image information of the target to be detected, inputting the multi-scale image information into the convolutional neural network model, fusing multi-scale in-vivo detection scores output by the convolutional neural network model, and obtaining an in-vivo detection result of the target to be detected.
5. The deep learning based multi-scale in vivo detection system according to claim 4, wherein said adaptive multi-scale image acquisition module comprises:
the low-scale biological information acquisition unit is used for acquiring a facial information image according to the position information of the target to be detected;
the medium-scale environment information acquisition unit is used for acquiring an image containing the environmental background by using an adaptive method independent of the camera resolution and the image size according to the position information of the target to be detected;
and the high-scale behavior information acquisition unit is used for acquiring the image containing the behavior information of the target to be detected by using an adaptive method independent of the resolution of a camera and the image size according to the position information of the target to be detected.
6. The deep learning based multi-scale in vivo detection system according to claim 4, wherein said convolutional neural network creation module comprises:
the RGB image characteristic information extraction network is used for constructing a multi-level deep neural network and extracting multi-level semantic characteristic information of the target to be detected under different scales;
the RGB image characteristic information classification network is used for constructing a multi-level semantic characteristic information fusion network, fusing the extracted semantic information of the target to be detected and outputting a living body score, wherein the score is between 0 and 1, if the target to be detected is a living body, the network output result is 1, and if the target to be detected is a non-living body, the network output result is 0.
7. The deep learning based multi-scale in vivo detection system according to claim 4, wherein the multi-scale in vivo detection module comprises:
the low-scale biological imaging feature constraint unit is used for assigning a constraint weight to the facial imaging features extracted from the low-scale image;
the medium-scale environment information feature constraint unit is used for assigning a constraint weight to the environment features extracted from the medium-scale image;
and the high-scale behavior information feature constraint unit is used for assigning a constraint weight to the behavior features of the target to be detected extracted from the high-scale image.
CN202011047776.0A 2020-09-29 2020-09-29 Multi-scale living body detection method and system based on deep learning Active CN112183357B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011047776.0A CN112183357B (en) 2020-09-29 2020-09-29 Multi-scale living body detection method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011047776.0A CN112183357B (en) 2020-09-29 2020-09-29 Multi-scale living body detection method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN112183357A true CN112183357A (en) 2021-01-05
CN112183357B CN112183357B (en) 2024-03-26

Family

ID=73946353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011047776.0A Active CN112183357B (en) 2020-09-29 2020-09-29 Multi-scale living body detection method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN112183357B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109670430A (en) * 2018-12-11 2019-04-23 浙江大学 A kind of face vivo identification method of the multiple Classifiers Combination based on deep learning
WO2020125623A1 (en) * 2018-12-20 2020-06-25 上海瑾盛通信科技有限公司 Method and device for live body detection, storage medium, and electronic device
CN110569808A (en) * 2019-09-11 2019-12-13 腾讯科技(深圳)有限公司 Living body detection method and device and computer equipment
CN111144277A (en) * 2019-12-25 2020-05-12 东南大学 Face verification method and system with living body detection function
CN111401134A (en) * 2020-02-19 2020-07-10 北京三快在线科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium
CN111696080A (en) * 2020-05-18 2020-09-22 江苏科技大学 Face fraud detection method, system and storage medium based on static texture

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155591A (en) * 2021-12-02 2022-03-08 海南伍尔索普电子商务有限公司 Human face silence living body detection method based on multi-scale information fusion

Also Published As

Publication number Publication date
CN112183357B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
KR102596897B1 (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
Shao et al. Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing
JP4743823B2 (en) Image processing apparatus, imaging apparatus, and image processing method
Chugh et al. Fingerprint spoof detection using minutiae-based local patches
CN110008813B (en) Face recognition method and system based on living body detection technology
CN108021892B (en) Human face living body detection method based on extremely short video
WO2020195732A1 (en) Image processing device, image processing method, and recording medium in which program is stored
JP2013178816A (en) Image processing apparatus, imaging apparatus and image processing method
CN111783629B (en) Human face in-vivo detection method and device for resisting sample attack
CN111783748A (en) Face recognition method and device, electronic equipment and storage medium
CN113269010B (en) Training method and related device for human face living body detection model
CN112883941A (en) Facial expression recognition method based on parallel neural network
CN115240280A (en) Construction method of human face living body detection classification model, detection classification method and device
Barni et al. Iris deidentification with high visual realism for privacy protection on websites and social networks
CN111767879A (en) Living body detection method
JP2009169518A (en) Area identification apparatus and content identification apparatus
CN112183357B (en) Multi-scale living body detection method and system based on deep learning
Huang et al. Multi-Teacher Single-Student Visual Transformer with Multi-Level Attention for Face Spoofing Detection.
CN112613430B (en) Gait recognition method based on deep migration learning
CN112329518B (en) Fingerprint activity detection method based on edge texture reinforcement and symmetrical differential statistics
Angadi et al. Human identification using histogram of oriented gradients (HOG) and non-maximum suppression (NMS) for atm video surveillance
CN113435315A (en) Expression recognition method based on double-path neural network feature aggregation
CN114373205A (en) Face detection and recognition method based on convolution width network
CN113723215A (en) Training method of living body detection network, living body detection method and device
Singh et al. LBP and CNN feature fusion for face anti-spoofing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant