WO2020177226A1 - Method for face liveness detection based on an improved Resnet and related device - Google Patents

Method for face liveness detection based on an improved Resnet and related device

Info

Publication number
WO2020177226A1
Authority
WO
WIPO (PCT)
Prior art keywords
single frame
frame image
image
detected
face image
Prior art date
Application number
PCT/CN2019/089163
Other languages
English (en)
Chinese (zh)
Inventor
庞烨
王义文
王健宗
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020177226A1 publication Critical patent/WO2020177226A1/fr

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Definitions

  • This application relates to the field of liveness detection, and in particular to a method and related equipment for face liveness detection based on an improved Resnet.
  • To that end, this application provides a face liveness detection method based on an improved Resnet, together with related equipment.
  • A method for face liveness detection based on an improved Resnet includes: obtaining a single-frame image to be detected that contains a face image; for each face image in the single-frame image to be detected, obtaining, based on the improved Resnet, the probability value that the face image is directly derived from a living body; and determining, based on the result of matching the probability value against a preset threshold, whether the face image is directly derived from a living body.
  • An apparatus for face liveness detection based on an improved Resnet includes: a first acquisition module, configured to acquire a single-frame image to be detected that contains a face image; a second acquisition module, configured to acquire, for each face image in the single-frame image to be detected, the probability value that the face image is directly derived from a living body, based on the improved Resnet; and a determination module, configured to determine, based on the result of matching the probability value against a preset threshold, whether the face image is directly derived from a living body.
  • An electronic device for face liveness detection based on an improved Resnet includes: a memory configured to store executable instructions; and a processor configured to execute the instructions stored in the memory so as to perform the method described above.
  • A computer non-volatile readable storage medium stores computer program instructions that, when executed by a computer, cause the computer to perform the method described above.
  • The embodiments of the present disclosure use an improved Resnet to perform liveness detection on face images, which reduces hardware requirements and improves the accuracy of face liveness detection.
  • Fig. 1 shows a flow chart of the steps of face liveness detection based on an improved Resnet according to an exemplary embodiment of the present disclosure.
  • Fig. 2 shows a flow chart of partial steps of face liveness detection based on an improved Resnet according to an exemplary embodiment of the present disclosure.
  • Fig. 3 shows a flow chart of partial steps of face liveness detection based on an improved Resnet according to an exemplary embodiment of the present disclosure.
  • Fig. 4 shows a flow chart of partial steps of face liveness detection based on an improved Resnet according to an exemplary embodiment of the present disclosure.
  • Fig. 5 shows a block diagram of a device for face liveness detection based on an improved Resnet according to an exemplary embodiment of the present disclosure.
  • Fig. 6 shows a system architecture diagram of face liveness detection based on an improved Resnet according to an exemplary embodiment of the present disclosure.
  • Fig. 7 shows a diagram of an electronic device for face liveness detection based on an improved Resnet according to an exemplary embodiment of the present disclosure.
  • Fig. 8 shows a diagram of a computer non-volatile readable storage medium for face liveness detection based on an improved Resnet according to an exemplary embodiment of the present disclosure.
  • A method for face liveness detection based on an improved Resnet includes: acquiring a single-frame image to be detected that contains a face image; for each face image in the single-frame image to be detected, obtaining, based on the improved Resnet, the probability value that the face image is directly derived from a living body; and determining, based on the result of matching the probability value against a preset threshold, whether the face image is directly derived from a living body.
  • Fig. 1 shows a flow chart of face liveness detection based on an improved Resnet according to an exemplary embodiment of the present disclosure. Step S100: obtain a single-frame image to be detected that contains a face image. Step S110: for each face image in the single-frame image to be detected, obtain, based on the improved Resnet, the probability value that the face image is directly derived from a living body. Step S120: based on the result of matching the probability value against a preset threshold, determine whether the face image is directly derived from a living body.
  • The deep residual network Resnet used for face liveness detection has been structurally improved in advance, and the improved Resnet achieves better performance in face liveness detection.
  • To perform face liveness detection: obtain a single-frame image to be detected that contains a face image, apply the improved Resnet to each face image in that single-frame image, and determine the probability value that each face is directly derived from a living body. Based on the probability value, judge whether the corresponding face image is directly derived from a living body.
  • step S100 a single frame image to be detected containing a human face image is obtained.
  • the single-frame image to be detected refers to the image obtained by decomposing the to-be-detected video into single frames.
  • In some embodiments, step S100 includes: step S1001: obtaining a video to be detected; step S1002: decomposing the video to be detected into single-frame images; step S1003: based on the dlib framework, obtaining from the single-frame images a single-frame image to be detected that contains a face image.
  • The video to be detected is a video obtained by the server for which it must be determined whether the face images appearing in it are directly derived from a living body.
  • dlib is a toolkit containing machine learning algorithms that can determine the area of the face in the image, that is, recognize the face image in a single frame of image.
  • The server obtains, from a video recording terminal such as a camera, the video for which it must be detected whether the face images appearing in it are directly derived from a living body.
  • The video to be detected may have been obtained by the video recording terminal directly filming the actions of a living body, or by the video recording terminal filming a video played back on an electronic device. Therefore, liveness detection must be performed on the acquired video to determine whether the face images appearing in it are directly derived from a living body.
  • the video to be detected is decomposed into single frame images. Based on the dlib framework, face detection is performed on a single frame image, and the single frame image containing the face image is determined as the single frame image to be detected. In this way, the single frame image to be detected containing the face image is extracted, so that the server can further perform live detection on the single frame image to be detected containing the face image.
  • In some embodiments, step S1003 includes: step S10031: randomly extracting one image from the single-frame images as the original single-frame image; step S10032: confirming, based on the dlib framework, whether the original single-frame image contains a human face image; step S10033: if the original single-frame image is confirmed to contain a human face image, using it as the single-frame image to be detected; if it is confirmed not to contain a human face image, randomly selecting another image from the single-frame images as the original single-frame image until one is confirmed to contain a human face image, and using that image as the single-frame image to be detected.
  • The server decomposes the video to be detected into single-frame images frame by frame. It randomly selects an image from the single-frame images and determines, based on the dlib framework, whether the image contains a human face image. If so, that image is used as the single-frame image to be detected for liveness detection; if not, another image is randomly selected until a single-frame image containing a human face image is obtained, which is then used as the single-frame image to be detected.
  • the purpose of obtaining a single frame image to be detected containing a face image is achieved.
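The random-selection loop of steps S10031 to S10033 can be sketched as follows. This is a minimal illustration: the function names and the detector callback are assumptions, with the callback standing in for a dlib-based face check.

```python
import random

def pick_frame_with_face(frames, contains_face):
    """Randomly sample decoded single-frame images until one contains a face.

    `frames` is a list of decoded frames and `contains_face` is a detector
    callback (in practice a wrapper around dlib.get_frontal_face_detector());
    both names are illustrative, not from the patent.
    """
    candidates = list(frames)
    while candidates:
        frame = random.choice(candidates)
        if contains_face(frame):
            return frame          # the single-frame image to be detected
        candidates.remove(frame)  # avoid re-testing a rejected frame
    return None                   # no frame in the video contains a face
```

With a stub detector this can be exercised without dlib installed; the loop terminates either with a face-bearing frame or, unlike the patent's unbounded retry, with `None` once every frame has been rejected.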
  • step S110 for each face image in the single frame image to be detected, the probability value that the face image is directly derived from a living body is obtained based on the improved Resnet.
  • Resnet refers to a deep residual network that uses residual learning to alleviate the vanishing-gradient problem during training.
  • After the server obtains the single-frame image to be detected containing the face image, it extracts each face image contained in that single-frame image separately and inputs it into the improved Resnet, obtaining for each face image the probability value, output by Resnet, that the image is directly derived from a living body.
  • In some embodiments, obtaining each face image in the single-frame image to be detected in step S110 includes: step S1101: extracting, based on the dlib framework, the face feature points in the single-frame image to be detected; step S1102: using the image of the predetermined-shape, predetermined-size area where each group of face feature points is located as the face image.
  • For each face, a group of face feature points is obtained, and each group of face feature points corresponds to one face image. The image of the predetermined-size square area where each group of face feature points is located is determined as the face image corresponding to those feature points. In this way, each face image in the single-frame image to be detected is obtained.
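Step S1102's "predetermined shape and size area" can be illustrated with a fixed-size square crop centred on a group of feature points. This is a sketch under assumptions: the 224-pixel size, the (row, column) point layout, and the function name are illustrative, not from the patent.

```python
import numpy as np

def square_crop_around_points(image, points, size=224):
    """Crop a fixed-size square region centred on a group of face
    feature points. `points` is assumed to be a sequence of
    (row, column) pairs; the crop is clamped to the image bounds."""
    points = np.asarray(points)
    cy, cx = points.mean(axis=0).astype(int)  # centre of the landmark group
    half = size // 2
    h, w = image.shape[:2]
    top = min(max(cy - half, 0), max(h - size, 0))
    left = min(max(cx - half, 0), max(w - size, 0))
    return image[top:top + size, left:left + size]
```

Cropping one such square per group of feature points yields the per-face inputs that are then fed to the network.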
  • the improved Resnet includes: adding a dropout layer after the Resnet pooling layer; and using a sigmoid function to output the probability value that the face image is directly derived from a living body.
  • The dropout layer makes the neural network randomly ignore half of the feature detectors in each training batch, thereby reducing overfitting during training.
  • The sigmoid function is a special case of the logistic function. Its curve is S-shaped, and it is used for binary classification problems.
  • adding a dropout layer after the Resnet pooling layer can effectively prevent the occurrence of overfitting.
  • The Resnet originally designed for multi-class problems is adapted to the binary classification problem: the sigmoid function is used to output the probability value that the face image is directly derived from a living body, instead of the softmax function used in the prior art.
  • The probability value output by the sigmoid function is better suited to a further binary decision, so the sigmoid function outperforms the softmax function, which is designed for multi-class problems, on this two-class task.
  • Therefore, the Resnet improved in this way performs better on the two-class liveness detection problem ("living" vs. "non-living").
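The two structural changes (a dropout layer after the pooling layer, and a single sigmoid output in place of softmax) can be sketched in plain NumPy. This is an illustrative head only; a real implementation would graft these layers onto a deep-learning framework's Resnet, and the 0.5 dropout rate is an assumption consistent with "ignore half of the feature detectors".

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    """During training, randomly zero features and rescale the rest,
    so roughly half the feature detectors are ignored per batch."""
    if not training:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def liveness_head(pooled, weights, bias, training=False):
    """Head appended after the Resnet pooling layer: dropout, then a
    single sigmoid unit giving P(face is directly from a living body)."""
    return sigmoid(dropout(pooled, training=training) @ weights + bias)
```

Because the last layer is a single sigmoid unit rather than a two-way softmax, the network emits one scalar in (0, 1) that can be compared directly against the preset threshold.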
  • the improved Resnet is trained in the following manner:
  • Face images labeled "living" or "non-living" in advance, according to whether they are directly derived from a living body, are taken as samples and randomly divided into a training set and a validation set. Based on the gradient descent algorithm, the training set is used to train the improved Resnet: for each input training sample, the improved Resnet outputs the probability value that the sample is directly derived from a living body; samples whose probability value is greater than or equal to the preset standard value are labeled "living", and samples whose probability value is less than the preset standard value are labeled "non-living", so as to check whether the improved Resnet correctly judges whether the training samples are directly derived from a living body. If the judgment accuracy on the training set is less than the preset expected value, the improved Resnet is updated and trained again on the training set, until the judgment accuracy on the training set is greater than or equal to the preset expected value. The validation set is then used to verify the improved Resnet whose labeling accuracy on the training set has reached the preset expected value.
  • the preset standard value is 97%, and the preset expected value is 99%.
  • The preset standard value measures how likely a training sample is to be directly derived from a living body; the preset expected value measures the accuracy of Resnet's judgment on the training samples. That is, a training sample is labeled "living" only when the probability value that Resnet outputs for the sample being directly derived from a living body is greater than or equal to 97%. Because Resnet has some error, labels assigned in this way may be wrong; that is, the judgment of the training samples may not be accurate.
  • The purpose of training on the training set is to make the judgment accuracy on the training samples greater than or equal to the preset expected value, namely 99%.
  • After training on the training set meets this goal, Resnet must still be verified. This is because the training process repeatedly uses the same set of samples, so sample bias exists.
  • That the judgment accuracy on the training samples is greater than or equal to the preset expected value does not mean that the accuracy on samples outside the training set is as well. Therefore, the validation set samples are used for verification and adjustment, so that Resnet's judgment accuracy on both the training samples and the validation samples is greater than or equal to the preset expected value of 99%. At that point, Resnet's training is complete. This further reduces overfitting, so that in practical applications Resnet can correctly determine whether an input face image is directly derived from a living body.
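The label-and-check loop above can be sketched in plain Python, using the 97% standard value and 99% expected value from the text. The gradient-descent update itself is abstracted into a callback, and all names here are illustrative:

```python
STANDARD_VALUE = 0.97   # preset standard value: prob >= 0.97 -> "living"
EXPECTED_VALUE = 0.99   # preset expected value: required judgment accuracy

def predict_label(prob):
    """Label a sample from its output probability, as in training."""
    return "living" if prob >= STANDARD_VALUE else "non-living"

def judgment_accuracy(probs, true_labels):
    """Fraction of samples whose assigned label matches the true label."""
    hits = sum(predict_label(p) == t for p, t in zip(probs, true_labels))
    return hits / len(true_labels)

def train_until_accurate(model, training_set, update_step, max_rounds=100):
    """Repeat gradient-descent updates (via `update_step`) until training
    accuracy reaches the preset expected value."""
    for _ in range(max_rounds):
        probs = [model(x) for x, _ in training_set]
        labels = [y for _, y in training_set]
        if judgment_accuracy(probs, labels) >= EXPECTED_VALUE:
            return model
        model = update_step(model, training_set)
    return model
```

The same `judgment_accuracy` check would then be repeated on the validation set before training is considered complete.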
  • In some embodiments, obtaining the probability value that each face image is directly derived from a living body includes: inputting the face images into the improved Resnet one by one, from left to right according to the area in which each face image is located, and obtaining the probability value output by the improved Resnet for each. In this way, the probability value that each face image in the single-frame image to be detected is directly derived from a living body is determined.
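The left-to-right ordering can be realized by sorting the detected face regions by their left edge before feeding them to the network. This is a minimal sketch; the rectangle layout is an assumption, matching dlib's left/top/right/bottom convention:

```python
def order_faces_left_to_right(face_regions):
    """Sort face regions by the x-coordinate of their left edge, so the
    leftmost face in the frame is processed first. Each region is
    assumed to be a (left, top, right, bottom) tuple."""
    return sorted(face_regions, key=lambda region: region[0])
```

The sorted regions are then cropped and passed to the improved Resnet in order, so each probability value can be associated with a specific face position.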
  • In some embodiments, determining whether the face image is directly derived from a living body includes: if the probability value is greater than or equal to the preset threshold, determining that the face image is directly derived from a living body; if the probability value is less than the preset threshold, determining that the face image is derived from a non-living body.
  • For example, the preset threshold is 98.7%; that is, only a face image whose probability value of being directly derived from a living body is greater than or equal to 98.7% is determined to be directly derived from a living body.
  • After face image A is input into the improved Resnet, the probability value output by Resnet that it is directly derived from a living body is 99.1%, which is greater than the preset threshold, so face image A is determined to be directly derived from a living body. After face image B is input into the improved Resnet, the output probability value is 95.3%, which is less than the preset threshold, so face image B is determined to be derived from a non-living body.
  • By matching the probability value against the preset threshold, it is determined whether the face image is directly derived from a living body, thereby achieving the purpose of liveness detection.
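The threshold match in step S120 reduces to a single comparison, shown here with the 98.7% threshold and the example probability values for faces A and B from the text:

```python
PRESET_THRESHOLD = 0.987  # preset threshold of 98.7%

def is_directly_from_living_body(prob):
    """Liveness decision: living iff the probability meets the threshold."""
    return prob >= PRESET_THRESHOLD

print(is_directly_from_living_body(0.991))  # face image A: True, living
print(is_directly_from_living_body(0.953))  # face image B: False, non-living
```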
  • In some embodiments, a face liveness detection device 20 based on an improved Resnet specifically includes: a first acquisition module 201, configured to acquire a single-frame image to be detected that contains a face image;
  • a second acquisition module 202, configured to acquire, for each face image in the single-frame image to be detected, the probability value that the face image is directly derived from a living body, based on the improved Resnet; and
  • a determination module 203, configured to determine, based on the result of matching the probability value against the preset threshold, whether the face image is directly derived from a living body.
  • In some embodiments, the first acquisition module 201 in the improved-Resnet-based face liveness detection device 20 includes: a to-be-detected video acquisition module 2011, configured to acquire a video to be detected; a decomposition module 2012, configured to decompose the video to be detected into single-frame images; and a to-be-detected single-frame image acquisition module 2013, configured to acquire, based on the dlib framework, a single-frame image to be detected containing a face image from the single-frame images.
  • In some embodiments, the to-be-detected single-frame image acquisition module 2013 in the improved-Resnet-based face liveness detection device 20 includes: a single-frame image extraction module 20131, configured to randomly extract one image from the single-frame images as the original single-frame image; a face image detection module 20132, configured to confirm, based on the dlib framework, whether the original single-frame image contains a face image; and a discrimination module 20133, configured to use the original single-frame image as the single-frame image to be detected if it is confirmed to contain a human face image, and otherwise to randomly select another image from the single-frame images as the original single-frame image until one is confirmed to contain a human face image and use it as the single-frame image to be detected.
  • In some embodiments, the second acquisition module 202 in the improved-Resnet-based face liveness detection device 20 includes: a face image acquisition module 2021, configured to acquire each face image in the single-frame image to be detected; and a probability value acquisition module 2022, configured to acquire, based on the improved Resnet, the probability value that the face image is directly derived from a living body.
  • In some embodiments, the face image acquisition module 2021 in the improved-Resnet-based face liveness detection device 20 includes: a face feature point extraction module 20111, configured to extract, based on the dlib framework, the face feature points in the single-frame image to be detected; and a face feature point combination module 20112, configured to use each group of images of the predetermined-shape, predetermined-size area where the face feature points are located as the face image.
  • Although modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory.
  • the features and functions of two or more modules or units described above may be embodied in one module or unit.
  • the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.
  • The exemplary embodiments described herein can be implemented by software, or by software combined with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure can be embodied as a software product, which can be stored in a non-volatile storage medium (a CD-ROM, USB flash drive, removable hard disk, etc.) or on a network, and includes several instructions that cause a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
  • Fig. 6 shows a system architecture diagram of face liveness detection based on an improved Resnet according to an exemplary embodiment of the present disclosure.
  • the system architecture includes: a video recording terminal 310, a server 320, and a management terminal 330.
  • The management terminal 330 sends the parameters required for Resnet training (the preset standard value and the preset expected value) to the server 320, so that the server 320 can complete the Resnet training.
  • the server 320 obtains the video to be detected uploaded from the video recording terminal 310, and obtains a single frame image after framing the video to be detected. After obtaining the single-frame image to be detected containing the face image therefrom, input each face image in the single-frame image to be detected into the improved Resnet, thereby determining whether each face image is directly derived from a living body.
  • the server 320 sends the recognition result to the management terminal 330, so that the management terminal 330 performs corresponding service processing based on the recognition result.
  • an electronic device capable of implementing the above method is also provided.
  • the electronic device 400 according to this embodiment of the present application will be described below with reference to FIG. 7.
  • the electronic device 400 shown in FIG. 7 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present application.
  • the electronic device 400 takes the form of a general-purpose computing device.
  • the components of the electronic device 400 may include, but are not limited to: the aforementioned at least one processing unit 410, the aforementioned at least one storage unit 420, and a bus 430 connecting different system components (including the storage unit 420 and the processing unit 410).
  • the storage unit stores program code, and the program code can be executed by the processing unit 410, so that the processing unit 410 executes the various exemplary methods described in the "Exemplary Method" section of this specification.
  • For example, the processing unit 410 may perform step S100 shown in Fig. 1: obtain a single-frame image to be detected containing a face image; step S110: for each face image in the single-frame image to be detected, obtain, based on the improved Resnet, the probability value that the face image is directly derived from a living body; and step S120: based on the result of matching the probability value against a preset threshold, determine whether the face image is directly derived from a living body.
  • the storage unit 420 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 4201 and/or a cache storage unit 4202, and may further include a read-only storage unit (ROM) 4203.
  • the storage unit 420 may also include a program/utility tool 4204 having a set of (at least one) program module 4205.
  • The program module 4205 includes, but is not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
  • The bus 430 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus structures.
  • the electronic device 400 can also communicate with one or more external devices 500 (such as keyboards, pointing devices, Bluetooth devices, etc.), and can also communicate with one or more devices that enable a user to interact with the electronic device 400, and/or communicate with Any device (such as a router, modem, etc.) that enables the electronic device 400 to communicate with one or more other computing devices. This communication can be performed through an input/output (I/O) interface 450.
  • the electronic device 400 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet) through the network adapter 460.
  • The network adapter 460 communicates with the other modules of the electronic device 400 through the bus 430. It should be understood that, although not shown in the figure, other hardware and/or software modules can be used in conjunction with the electronic device 400, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
  • The exemplary embodiments described herein can be implemented by software, or by software combined with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure can be embodied as a software product, which can be stored in a non-volatile storage medium (a CD-ROM, USB flash drive, removable hard disk, etc.) or on a network, and includes several instructions that cause a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
  • a computer non-volatile readable storage medium on which is stored a program product capable of implementing the above-mentioned method in this specification.
  • various aspects of the present application can also be implemented in the form of a program product, which includes program code.
  • When the program product runs on a terminal device, the program code causes the terminal device to execute the steps according to the various exemplary embodiments of the present application described in the "Exemplary Method" section of this specification.
  • A program product 600 for implementing the above method according to an embodiment of the present application may take the form of a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer.
  • the program product of this application is not limited to this.
  • The non-volatile readable storage medium can be any tangible medium that contains or stores a program, and the program can be used by, or in combination with, an instruction execution system, apparatus, or device.
  • the program product can use any combination of one or more readable media.
  • the non-volatile readable storage medium may be a readable signal medium or a readable storage medium.
  • The non-volatile readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
  • More specific examples of non-volatile readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the readable signal medium may also be any readable medium other than a non-volatile readable storage medium, and the readable medium may send, propagate, or transmit a program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the readable medium can be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the foregoing.
  • the program code for performing the operations of this application can be written in any combination of one or more programming languages.
  • The programming languages include object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar.
  • The program code can be executed entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • the remote computing device can be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computing device (for example, via the Internet using an Internet service provider).

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a face liveness detection method based on an improved Resnet and a related device, in the field of liveness detection. The method comprises: obtaining a single frame image to be detected that contains a face image; for each face image in the single frame image to be detected, obtaining, by means of the improved Resnet, a probability value that the face image comes directly from a human body; and judging whether the face image comes directly from a human body according to the result of matching the probability value against a preset threshold. The method improves the accuracy of face liveness detection.
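The abstract's pipeline reduces to a simple decision rule: score each detected face with a probability that it comes directly from a live person, then match that probability against a preset threshold. The sketch below stubs out the "improved Resnet" entirely (the `face_probs` list stands in for its per-face outputs); all function and variable names are illustrative, not taken from the application.

```python
# Minimal sketch of the threshold-matching step described in the abstract.
# The "improved Resnet" is stubbed out: face_probs stands in for the
# probability values it would assign to each face in a single frame image.

def liveness_decision(prob_live, threshold=0.5):
    """Judge one face image as live when its probability meets the preset threshold."""
    return prob_live >= threshold

def detect_frame(face_probs, threshold=0.5):
    """Apply the decision rule to every face detected in one single frame image."""
    return [liveness_decision(p, threshold) for p in face_probs]

# Three faces scored by the (stubbed) network:
print(detect_frame([0.92, 0.31, 0.55]))  # [True, False, True]
```

The threshold here (0.5) is a placeholder; the application only says the threshold is preset, not what value it takes.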
PCT/CN2019/089163 2019-03-04 2019-05-30 Face liveness detection method based on an improved Resnet and related device WO2020177226A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910160807.4A CN110059542A (zh) 2019-03-04 2019-03-04 Face liveness detection method based on improved Resnet and related device
CN201910160807.4 2019-03-04

Publications (1)

Publication Number Publication Date
WO2020177226A1 true WO2020177226A1 (fr) 2020-09-10

Family

ID=67316559

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089163 WO2020177226A1 (fr) 2019-03-04 2019-05-30 Face liveness detection method based on an improved Resnet and related device

Country Status (2)

Country Link
CN (1) CN110059542A (fr)
WO (1) WO2020177226A1 (fr)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111079606B (zh) * 2019-12-06 2023-05-26 北京爱笔科技有限公司 Face anti-spoofing method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106960177A (zh) * 2015-02-15 2017-07-18 北京旷视科技有限公司 Live face verification method and system, and live face verification device
CN109101871A (zh) * 2018-08-07 2018-12-28 北京华捷艾米科技有限公司 Liveness detection device based on depth and near-infrared information, detection method and application thereof


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364724A (zh) * 2020-10-27 2021-02-12 北京地平线信息技术有限公司 Liveness detection method and apparatus, storage medium, and electronic device
CN112329730A (zh) * 2020-11-27 2021-02-05 上海商汤智能科技有限公司 Video detection method, apparatus, device, and computer-readable storage medium
CN112329730B (zh) * 2020-11-27 2024-06-11 上海商汤智能科技有限公司 Video detection method, apparatus, device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN110059542A (zh) 2019-07-26

Similar Documents

Publication Publication Date Title
  • CN108875833B Neural network training method, face recognition method and device
US11996091B2 (en) Mixed speech recognition method and apparatus, and computer-readable storage medium
  • CN109564618B Method and system for facial image analysis
US9183429B2 (en) Method and apparatus for facial recognition
  • WO2018121737A1 Keypoint prediction, network training and image processing methods, apparatus, and electronic device
  • WO2022105118A1 Image-based health status identification method and apparatus, device, and storage medium
AU2011318719B2 (en) Method and apparatus for recognizing an emotion of an individual based on facial action units
  • CN108288051B Pedestrian re-identification model training method and apparatus, electronic device, and storage medium
  • WO2020253127A1 Facial feature extraction model training method and apparatus, facial feature extraction method and apparatus, device, and storage medium
  • WO2020177226A1 Face liveness detection method based on an improved Resnet and related device
  • CN111476309A Image processing method, model training method, apparatus, device, and readable medium
  • WO2020024484A1 Method and device for generating data
  • CN112071322B End-to-end voiceprint recognition method, apparatus, storage medium, and device
  • WO2020019591A1 Method and device for generating information
US11734954B2 (en) Face recognition method, device and electronic equipment, and computer non-volatile readable storage medium
  • WO2020006964A1 Image detection method and device
  • CN111242291A Method, apparatus, and electronic device for detecting neural network backdoor attacks
  • CN109118420B Watermark recognition model establishment and recognition method, apparatus, medium, and electronic device
  • CN109214501B Method and apparatus for identifying information
  • CN111291902B Method, apparatus, and electronic device for detecting backdoor samples
  • WO2022127480A1 Face recognition method and related device
  • WO2023019927A1 Face recognition method and apparatus, storage medium, and electronic device
  • CN113140012B Image processing method, apparatus, medium, and electronic device
  • CN112651467B Training method and system for a convolutional neural network, and prediction method and system
  • WO2020215682A1 Fundus image sample expansion method and apparatus, electronic device, and computer non-volatile readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19917559

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 26.10.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19917559

Country of ref document: EP

Kind code of ref document: A1