WO2018166515A1 - Face anti-counterfeiting detection method and system, electronic device, program and medium (人脸防伪检测方法和系统、电子设备、程序和介质) - Google Patents

Face anti-counterfeiting detection method and system, electronic device, program and medium

Info

Publication number
WO2018166515A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
video
image
neural network
terminal device
Prior art date
Application number
PCT/CN2018/079247
Other languages
English (en)
French (fr)
Inventor
吴立威
暴天鹏
于萌
车英慧
赵晨旭
Original Assignee
北京市商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司
Publication of WO2018166515A1
Priority to US16/451,208 (US11080517B2)
Priority to US17/203,435 (US11482040B2)

Classifications

    • G06V 40/161: Recognition of human faces in image or video data; Detection; Localisation; Normalisation
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning; using neural networks
    • G06F 18/214: Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22: Pattern recognition; Matching criteria, e.g. proximity measures
    • G06T 7/50: Image analysis; Depth or shape recovery
    • G06T 7/80: Image analysis; Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V 30/19173: Character recognition using electronic means; Classification techniques
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Human faces; Feature extraction; Face representation
    • G06V 40/172: Human faces; Classification, e.g. identification
    • G06V 40/45: Spoof detection, e.g. liveness detection; Detection of the body part being alive
    • G06T 2207/30201: Indexing scheme for image analysis; Subject of image: Human being; Face

Definitions

  • The present application relates to computer vision technology, and in particular to a face anti-counterfeiting detection method and system, an electronic device, a program, and a medium.
  • Living-body (liveness) detection refers to the use of computer vision technology to determine whether the face image in front of the camera comes from a real person.
  • Face anti-counterfeiting detection focuses on detecting whether the face is genuine, while liveness detection focuses on detecting whether the face is live.
  • A live face is not necessarily a genuine (non-forged) face, and a non-forged face is not necessarily live.
  • the embodiment of the present application provides a technical solution for performing face anti-counterfeiting detection.
  • A face anti-counterfeiting detection method includes:
  • the extracted features include one or more of the following: a local binary pattern feature, a sparsely encoded histogram feature, a panorama feature, a face map feature, and a face detail map feature.
  • the forged face cue information has human eye observability under visible light conditions.
  • The forged face cue information includes any one or more of the following: forged cue information of the imaging medium, forged cue information of the display medium (imaging carrier), and cue information of a physically present fake face.
  • The forged cue information of the imaging medium includes: edge information, reflection information, and/or material information of the imaging medium; and/or,
  • the forged cue information of the display medium includes: a screen edge of the display device, screen reflection, and/or screen moiré; and/or,
  • the cue information of a physically present fake face includes: characteristics of a mask face, characteristics of a model face, and characteristics of a sculpture face.
  • The extracting of features of the image or video to be detected and detecting whether the extracted features include at least one piece of forged face cue information includes: inputting the image or video to be detected into a neural network, extracting its features by the neural network, and outputting a detection result indicating whether the image or video to be detected includes at least one piece of forged face cue information, wherein the neural network is pre-trained based on a training image set that includes forged face cue information.
  • the training image set includes: a plurality of face images that can be used as positive samples for training and a plurality of images that can be used as negative samples for training;
  • the method for acquiring a training image set including forged face cue information includes:
  • Image processing for simulating forged face cue information is performed on at least a portion of the acquired at least one face image to generate at least one image that can be used as a negative sample for training.
  • the acquiring an image or a video to be detected including a human face includes:
  • the image or video to be detected including the face is acquired by the visible light camera of the terminal device.
  • the neural network includes: a first neural network located in the terminal device;
  • the determining, according to the detection result, whether the face passes the face anti-counterfeiting detection comprises: determining, by the terminal device, whether the face passes the face anti-counterfeiting detection according to the detection result output by the first neural network.
  • the acquiring an image or a video to be detected including a human face includes:
  • the server receives the image or video to be detected including the face sent by the terminal device.
  • the neural network includes: a second neural network located in the server.
  • The determining, according to the detection result, whether the image or video to be detected passes the face anti-counterfeiting detection comprises: the server determining, according to the detection result output by the second neural network, whether the face passes the face anti-counterfeiting detection, and returning to the terminal device the determination result of whether the face passes the face anti-counterfeiting detection.
  • the neural network further includes: a first neural network located in the terminal device; the size of the first neural network is smaller than a size of the second neural network;
  • the method further includes:
  • a part of the video or at least one image is selected from the video including the face as the video or image to be detected and transmitted to the server.
  • The selecting of a part of the video or at least one image from the video including the face as the video or image to be detected and sending it to the server includes:
  • a part of the video acquired by the terminal device is selected as the video to be detected and sent to the server; and/or,
  • at least one image that meets a preset criterion is selected from the video acquired by the terminal device as the image to be detected and sent to the server.
  • When the video to be detected is sent to the server and input into the second neural network, the detecting, by the second neural network, of a detection result indicating whether the video to be detected includes at least one piece of forged face cue information includes:
  • the server selects at least one image from the video to be detected as the image to be detected and inputs it into the second neural network, and the second neural network outputs a detection result indicating whether the image to be detected includes at least one piece of forged face cue information.
  • the determining, according to the detection result, whether the face passes the face anti-counterfeiting detection includes: the terminal device determines, according to the detection result output by the first neural network, that the face does not pass the face anti-counterfeiting detection.
  • The method further includes: the server returning the detection result output by the second neural network to the terminal device;
  • the determining, according to the detection result, whether the face passes the face anti-counterfeiting detection comprises: determining, by the terminal device, whether the face passes the face anti-counterfeiting detection according to the detection result output by the second neural network.
  • The determining, according to the detection result, whether the face passes the face anti-counterfeiting detection comprises: the server determining, according to the detection result output by the second neural network, whether the face passes the face anti-counterfeiting detection, and sending the determination result of whether the face passes the face anti-counterfeiting detection to the terminal device.
  • the method further includes:
  • living-body detection is performed on the video acquired by the terminal device by using the neural network, and the face anti-counterfeiting detection method described in any of the above embodiments of the present application is executed in response to the living-body detection passing.
  • The performing of living-body detection on the video acquired by the terminal device by using the neural network includes: performing living-body detection on the video acquired by the terminal device by the first neural network;
  • the performing of living-body detection on the video acquired by the terminal device by using the neural network includes: performing validity detection of a prescribed action on the video acquired by the terminal device;
  • the living-body detection passes at least in response to the detection result that the validity of the prescribed action satisfies a predetermined condition.
  • The prescribed action includes any one or more of the following: blinking, opening the mouth, closing the mouth, smiling, nodding, shaking the head, turning the head left, turning the head right, tilting the head left, tilting the head right, looking down, and looking up.
  • the predetermined action is a predetermined action set in advance or a predetermined action randomly selected.
  • a face anti-counterfeiting detecting apparatus including:
  • a first acquiring module configured to acquire an image or a video to be detected that includes a human face;
  • the anti-counterfeiting detection module is configured to extract features of the image or video to be detected, and detect whether the extracted features include forged face cue information.
  • a determining module configured to determine, according to the detection result, whether the face passes the face anti-counterfeiting detection.
  • The features extracted by the anti-counterfeiting detection module include one or more of the following: a local binary pattern feature, a sparsely encoded histogram feature, a panorama feature, a face map feature, and a face detail map feature.
  • the forged face cue information has human eye observability under visible light conditions.
  • The forged face cue information includes any one or more of the following: forged cue information of the imaging medium, forged cue information of the display medium (imaging carrier), and cue information of a physically present fake face.
  • The forged cue information of the imaging medium includes: edge information, reflection information, and/or material information of the imaging medium; and/or,
  • the forged cue information of the display medium includes: a screen edge of the display device, screen reflection, and/or screen moiré; and/or,
  • the cue information of a physically present fake face includes: characteristics of a mask face, characteristics of a model face, and characteristics of a sculpture face.
  • the anti-counterfeiting detection module includes:
  • a neural network configured to receive the input image or video to be detected and output a detection result indicating whether the image or video to be detected includes at least one piece of forged face cue information, wherein the neural network is pre-trained based on a training image set that includes forged face cue information.
  • the training image set includes: a plurality of face images that can be used as positive samples for training and a plurality of images that can be used as negative samples for training;
  • the device also includes:
  • a second acquiring module configured to acquire a plurality of face images that can be used as positive samples for training, and to perform image processing for simulating forged face cue information on at least part of the acquired face images to generate at least one image that can be used as a negative sample for training.
  • the first acquiring module includes:
  • a visible light camera of the terminal device.
  • the neural network includes: a first neural network located in the terminal device;
  • the determining module is located in the terminal device, and is configured to: determine, according to the detection result output by the first neural network, whether the face passes the face anti-counterfeiting detection.
  • the first acquiring module is located on a server, and is configured to receive an image or a video to be detected, including a human face, sent by the terminal device;
  • the neural network includes: a second neural network located in the server.
  • the neural network further includes: a first neural network located in the terminal device, configured to receive the input image or video to be detected and output a detection result indicating whether the video including the face includes at least one piece of forged face cue information; the size of the first neural network is smaller than the size of the second neural network.
  • the device also includes:
  • a first sending module located on the terminal device, configured to, in response to the detection result output by the first neural network indicating that the video including the face does not contain forged face cue information, select a part of the video or at least one image from the video including the face as the video or image to be detected and send it to the server.
  • the first sending module is configured to:
  • a part of the video obtained from the terminal device is selected as the to-be-detected video and sent to the server; and/or,
  • at least one image that meets a preset criterion is selected from the video acquired by the terminal device as the image to be detected and sent to the server.
  • When a part of the video is selected as the video to be detected and sent to the server, the server further includes:
  • a selection module configured to select at least one image from the video to be detected as the image to be detected and input it into the second neural network.
  • The determining module is located on the terminal device, and is further configured to: in response to the detection result indicating that the video including the human face contains forged face cue information, determine, according to the detection result output by the first neural network, that the face does not pass the face anti-counterfeiting detection.
  • The device further includes:
  • a second sending module located on the server, for returning the detection result output by the second neural network to the terminal device;
  • the determining module is located on the terminal device, and is configured to determine, according to the detection result output by the second neural network, whether the face passes the face anti-counterfeiting detection.
  • the determining module is located on the server, and is configured to determine, according to the detection result output by the second neural network, whether the face passes the face anti-counterfeiting detection ;
  • the device also includes:
  • the second sending module is located on the server, and is configured to send, to the terminal device, a determination result of whether the face passes the face anti-counterfeiting detection.
  • the neural network is further configured to perform live detection on the video acquired by the terminal device.
  • The neural network is configured to perform living-body detection on the video acquired by the terminal device via the first neural network, and to extract features of the video acquired by the terminal device and detect whether the extracted features include forged face cue information; or
  • the neural network is configured to perform living-body detection on the video acquired by the terminal device via the first neural network, and, in response to the living-body detection passing, to receive the image or video to be detected sent by the first sending module located on the terminal device and output a detection result indicating whether the image or video to be detected contains at least one piece of forged face cue information.
  • When performing living-body detection on the video acquired by the terminal device, the neural network is configured to: perform validity detection of a prescribed action on the video acquired by the terminal device;
  • the living-body detection passes at least in response to the detection result that the validity of the prescribed action satisfies a predetermined condition.
  • The prescribed action includes any one or more of the following: blinking, opening the mouth, closing the mouth, smiling, nodding, shaking the head, turning the head left, turning the head right, tilting the head left, tilting the head right, looking down, and looking up.
  • the predetermined action is a predetermined action set in advance or a predetermined action randomly selected.
  • an electronic device including the face anti-counterfeiting detection system of any of the above embodiments of the present application.
  • another electronic device including:
  • a memory for storing executable instructions
  • a processor configured to communicate with the memory to execute the executable instruction to complete an operation of the face anti-counterfeiting detection method according to any one of the embodiments of the present application.
  • a computer program comprising computer readable code; when the computer readable code runs on a device, a processor in the device executes instructions for implementing the face anti-counterfeiting detection method of any of the embodiments of the present application.
  • a computer readable storage medium for storing computer readable instructions that, when executed, perform the operations of the steps in the method of any of the embodiments of the present application.
  • With the face anti-counterfeiting detection method and system provided by the above embodiments of the present application, after the image or video to be detected including the face is acquired, features of the image or video to be detected are extracted, whether the extracted features include forged face cue information is detected, and whether the image or video to be detected passes the face anti-counterfeiting detection is determined according to the detection result.
  • The embodiments of the present application can thus realize effective face anti-counterfeiting detection without relying on special multi-spectral devices and without special hardware equipment, thereby reducing the hardware cost, and can be conveniently applied to various face detection scenarios.
  • FIG. 1 is a flowchart of an embodiment of the face anti-counterfeiting detection method of the present application.
  • FIG. 2 is a flowchart of another embodiment of the face anti-counterfeiting detection method of the present application.
  • FIG. 3 is a flowchart of still another embodiment of the face anti-counterfeiting detection method of the present application.
  • FIG. 4 is a flowchart of still another embodiment of the face anti-counterfeiting detection method of the present application.
  • FIG. 5 is a flowchart of still another embodiment of the face anti-counterfeiting detection method of the present application.
  • FIG. 6 is a schematic structural diagram of an embodiment of the face anti-counterfeiting detection system of the present application.
  • FIG. 7 is a schematic structural diagram of another embodiment of the face anti-counterfeiting detection system of the present application.
  • FIG. 8 is a schematic structural diagram of still another embodiment of the face anti-counterfeiting detection system of the present application.
  • FIG. 9 is a schematic structural diagram of an application embodiment of an electronic device according to the present application.
  • Embodiments of the present application can be applied to electronic devices such as computer systems/servers that can operate with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well-known computing systems, environments, and/or configurations suitable for use with electronic devices such as computer systems/servers include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop Devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, mainframe computer systems, and distributed cloud computing technology environments including any of the above, and the like.
  • Electronic devices such as computer systems/servers can be described in the general context of computer system executable instructions (such as program modules) being executed by a computer system.
  • program modules may include routines, programs, target programs, components, logic, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • Electronic devices such as computer systems/servers can be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located on a local or remote computing system storage medium including storage devices.
  • FIG. 1 is a flowchart of an embodiment of the face anti-counterfeiting detection method of the present application. As shown in FIG. 1, the face anti-counterfeiting detection method of this embodiment includes:
  • 102. Acquire an image or a video to be detected that includes a human face.
  • The operation 102 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first acquisition module executed by the processor.
  • The features extracted in the embodiments of the present application may include, but are not limited to, any of the following: a local binary pattern (LBP) feature, a sparsely encoded histogram (HSC) feature, a panorama (LARGE) feature, a face map (SMALL) feature, and a face detail map (TINY) feature.
  • LBP: local binary pattern
  • HSC: sparsely encoded histogram
  • LARGE: panorama (full image)
  • SMALL: face map
  • TINY: face detail map
  • the feature items included in the extracted features may be updated based on the fake face cue information that may appear.
  • The edge information in the image can be highlighted by the LBP feature; the reflection and blur information in the image can be reflected more clearly by the HSC feature; the LARGE feature is a full-image feature, based on which the most obvious forgery cues in the image can be extracted; the face map (SMALL) is a region crop of the image at a multiple of the face box size (for example, 1.5 times), containing the face and part of the background, based on which forgery cues such as reflections, the screen moiré of a remake (re-shooting) device, and the edges of a model or mask can be extracted; the face detail map (TINY) is a crop of the image at the size of the face box, containing the face, based on which forgery cues such as image editing (for example, Photoshop) artifacts, remake screen moiré, and the texture of a model or mask can be extracted.
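  • For illustration only, the following Python sketch shows one plausible way to compute the LBP feature and the LARGE / SMALL / TINY crops described above; the helper names, the 1.5x crop handling, and the use of OpenCV and scikit-image are assumptions made for the example, not the implementation of the present application.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, points=8, radius=1):
    """Local binary pattern histogram; LBP highlights edge-like forgery cues."""
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def crop_regions(image, face_box, small_scale=1.5):
    """Return the LARGE (full image), SMALL (enlarged face box) and TINY (face box) crops."""
    x, y, w, h = face_box
    h_img, w_img = image.shape[:2]
    cx, cy = x + w / 2, y + h / 2
    sw, sh = w * small_scale, h * small_scale
    x0, y0 = int(max(cx - sw / 2, 0)), int(max(cy - sh / 2, 0))
    x1, y1 = int(min(cx + sw / 2, w_img)), int(min(cy + sh / 2, h_img))
    large = image                    # full image
    small = image[y0:y1, x0:x1]      # face plus part of the background
    tiny = image[y:y + h, x:x + w]   # face region only
    return large, small, tiny

# Hypothetical usage:
# frame = cv2.imread("frame.jpg")
# gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# edge_feature = lbp_histogram(gray)
# large, small, tiny = crop_regions(frame, (120, 80, 160, 160))
```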
  • The forged face cue information in the embodiments of the present application is observable to the human eye under visible light conditions, that is, the human eye can observe these forged face cues under visible light. Based on this property of the forged face cue information, anti-counterfeiting detection can be realized by using a still image or a dynamic video captured by a visible light camera (such as an RGB camera), avoiding the introduction of a special camera and reducing the hardware cost.
  • The forged face cue information may include, for example but not limited to, any one or more of the following: forged cue information of the imaging medium, forged cue information of the display medium (imaging carrier), and cue information of a physically present fake face.
  • The forged cue information of the imaging medium is also referred to as 2D-type forged face cue information, the forged cue information of the display medium may be referred to as 2.5D-type forged face cue information, and the cue information of a physically present fake face may be referred to as 3D-type forged face cue information.
  • the fake face cue information that needs to be detected may be updated correspondingly according to a possible fake face manner.
  • In this way, the electronic device can "discover" the boundaries between various real faces and fake faces, realize various types of anti-counterfeiting detection with general hardware such as a visible light camera, resist "hack" attacks, and improve security.
  • The forged cue information of the imaging medium may include, but is not limited to, edge information, reflection information, and/or material information of the imaging medium.
  • The forged cue information of the display medium may include, for example but not limited to, a screen edge of the display device, screen reflection, and/or screen moiré.
  • The cue information of a physically present fake face may include, but is not limited to, characteristics of a mask face, characteristics of a model face, and characteristics of a sculpture face.
  • the forged face cue information in the embodiment of the present application can be observed by the human eye under visible light conditions.
  • In terms of dimensionality, forged faces can be divided into 2D, 2.5D, and 3D classes.
  • A 2D-type forged face refers to a face image printed on a paper-like material, and the 2D-type forged face cue information may include, for example, cue information such as a paper face edge, paper material, paper reflection, and paper edges.
  • the 2.5D-type forged face refers to a face image carried by a carrier device such as a video remake device
  • the 2.5D-type forged face cues information may include, for example, a screen moiré of a carrier device such as a video remake device, a screen reflection, a screen edge, and the like.
  • 3D-type forged faces refer to physically present fake faces, such as masks, models, sculptures, and 3D-printed faces.
  • The 3D-type forged faces also carry corresponding forgery cue information, such as mask seams, the overly abstract appearance of a model, or overly smooth skin.
  • the operation 104 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by an anti-counterfeiting detection module executed by the processor.
  • When the features extracted from the video or image to be detected include any forged face cue information, it is determined that the image or video to be detected is a forged face image and does not pass the face anti-counterfeiting detection.
  • When the features extracted from the video or image to be detected do not include any forged face cue information, it is determined that the video or image to be detected is not a forged face image but a genuine face image, and that the image or video to be detected passes the face anti-counterfeiting detection.
  • the operation 106 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a determination module executed by the processor.
  • With the face anti-counterfeiting detection method of this embodiment, after an image or video to be detected including a face is acquired, features of the image or video to be detected are extracted, whether the extracted features include forged face cue information is detected, and whether the image or video to be detected passes the face anti-counterfeiting detection is determined according to the detection result.
  • The embodiments of the present application can realize effective face anti-counterfeiting detection without relying on special multi-spectral devices, for example under visible light conditions, and without special hardware equipment, thereby reducing the hardware cost; the method can be conveniently applied to various face detection scenarios and is especially suitable for general mobile applications.
  • In an optional example, the operation 104 may be implemented by inputting the image or video to be detected into a neural network, extracting features of the image or video to be detected by the neural network, detecting whether the extracted features include forged face cue information, and outputting a detection result indicating whether the image or video to be detected includes at least one piece of forged face cue information, wherein the neural network is pre-trained based on a training image set that includes forged face cue information.
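  • As a minimal sketch of how operation 104 could look as a single forward pass, the example below assumes a PyTorch two-class network (real vs. forged) and a 0.5 decision threshold; the model, preprocessing, and threshold are illustrative assumptions rather than the model of the present application.

```python
import torch
import torch.nn.functional as F

def detect_forgery_cues(model, frames, threshold=0.5):
    """frames: float tensor of shape (N, 3, H, W), already resized and normalized.
    Returns (forged?, per-frame probability of containing forged face cue information)."""
    model.eval()
    with torch.no_grad():
        logits = model(frames)                  # (N, 2) scores: [real, forged]
        probs = F.softmax(logits, dim=1)[:, 1]  # probability that a forgery cue is present
    return bool((probs > threshold).any()), probs
```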
  • the neural network of various embodiments of the present application may be a deep neural network, which refers to a multi-layer neural network, such as a multi-layer convolutional neural network.
  • the above training image set may include: a plurality of face images which can be used as positive samples for training and a plurality of images which can be used as negative samples for training.
  • a training image set including forged face cue information may be obtained by:
  • Image processing for simulating forged face cue information is performed on at least a portion of the acquired at least one face image to generate at least one image that can be used as a negative sample for training.
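  • The sketch below illustrates one way such negative samples could be simulated by overlaying screen-moiré-like interference and a reflection-like highlight on genuine face images; the exact transforms, parameters, and function names are assumptions for illustration, not the augmentation pipeline of the present application.

```python
import numpy as np

def add_moire(image, period=6, strength=12):
    """Overlay a sinusoidal interference pattern to mimic screen moiré (image: HxWx3 uint8)."""
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    pattern = strength * np.sin(2 * np.pi * xx / period) * np.sin(2 * np.pi * yy / period)
    return np.clip(image.astype(np.float32) + pattern[..., None], 0, 255).astype(np.uint8)

def add_reflection(image, strength=60):
    """Add a bright diagonal band to mimic screen or paper reflection."""
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    band = np.exp(-((xx + yy - (h + w) / 2) ** 2) / (2 * (0.1 * (h + w)) ** 2))
    return np.clip(image.astype(np.float32) + strength * band[..., None], 0, 255).astype(np.uint8)

# Hypothetical usage: negatives = [add_moire(img) for img in positive_face_images]  # label: forged
```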
  • the operation 102 includes: acquiring, by the visible light camera of the terminal device, an image or a video to be detected including a human face.
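  • A hedged sketch of this acquisition step on the terminal device follows, grabbing a single frame from an ordinary visible-light (RGB) camera and checking that it contains a face; the Haar-cascade detector, camera index, and function name are assumptions made for the example.

```python
import cv2

def capture_face_frame(camera_index=0):
    """Grab one frame from the visible-light camera and locate faces in it."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None, []
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return frame, faces  # frame can serve as the image to be detected if faces were found
```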
  • the neural network may include a first neural network located in the terminal device, that is, the operation 104 in the above embodiments is performed by the first neural network located in the terminal device.
  • the operation 102 includes: the server receiving an image or a video to be detected including a face sent by the terminal device.
  • the neural network can include a second neural network located in the server, i.e., performing operations 104 in the various embodiments described above by a second neural network located in the server.
  • This example may further include: the server sending, to the terminal device, the detection result of whether the extracted features include forged face cue information, or the determination result of whether the image or video to be detected passes the face anti-counterfeiting detection.
  • the neural network includes a first neural network located in the terminal device and a second neural network located in the server as an example for description.
  • the size of the first neural network is smaller than the size of the second neural network.
  • the first neural network may be smaller than the second neural network in the network layer and/or the number of parameters.
  • The first neural network and the second neural network may each be a multi-layer neural network (that is, a deep neural network), such as a multi-layer convolutional neural network, for example any neural network model such as LeNet, AlexNet, GoogLeNet, VGG, or ResNet.
  • the first neural network and the second neural network may employ a neural network of the same type and structure, or a neural network of different types and structures.
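  • As an illustrative sketch of the two-network split, the example below builds a compact model for the terminal device and a deeper model for the server; using torchvision ResNet-18 and ResNet-50 with a two-class head is an assumption, since the embodiments only require the first neural network to be smaller than the second.

```python
import torch.nn as nn
from torchvision import models

def build_first_network():
    net = models.resnet18(weights=None)        # fewer layers / parameters: suited to the terminal device
    net.fc = nn.Linear(net.fc.in_features, 2)  # two classes: [real, forged]
    return net

def build_second_network():
    net = models.resnet50(weights=None)        # deeper model: suited to the server
    net.fc = nn.Linear(net.fc.in_features, 2)
    return net
```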
  • the face anti-counterfeiting detection method of this embodiment includes:
  • the terminal device acquires a video including a face.
  • a video including a human face can be acquired by the terminal device via its visible light camera.
  • the operation 202 can be performed by a visible light camera on the terminal device.
  • The video including the human face acquired by the terminal device is input into the first neural network in the terminal device, and the first neural network extracts features of the video including the human face, detects whether the extracted features include forged face cue information, and outputs a detection result indicating whether the video including the face includes at least one piece of forged face cue information.
  • the forged face cue information therein has human eye observability under visible light conditions.
  • the first neural network is pre-trained based on a training image set including forged face cue information.
  • By training the first neural network in advance, the forged face cues contained in the features extracted in the embodiments of the present application can be learned by the first neural network; then, after any image containing forged face cue information is input into the first neural network, the cues will be detected and the image can be judged to be a forged face image; otherwise it is a genuine face image.
  • the operation 204 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first neural network in a terminal device operated by the processor.
  • The terminal device selects a part of the video or at least one image from the video including the face as the video or image to be detected and sends it to the server.
  • The selected part may be a video stream including one or more images to be detected, or only one or more images to be detected; it may be selected according to a preset setting, or adjusted in real time according to actual conditions.
  • If the detection result output by the first neural network indicates that the video including the face contains forged face cue information, it may be determined that the image or video to be detected does not pass the face anti-counterfeiting detection; in this case the terminal device determines, according to the detection result output by the first neural network, that the face in the image or video to be detected does not pass the face anti-counterfeiting detection, and the subsequent flow of this embodiment is not performed.
  • In an optional example, the operation 206 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by the terminal device or the first sending module therein executed by the processor.
  • After receiving the image or video to be detected including the face sent by the terminal device, the server inputs the image or video to be detected into the second neural network; the second neural network extracts features of the image or video to be detected, detects whether the extracted features include forged face cue information, and outputs a detection result indicating whether the image or video to be detected includes at least one piece of forged face cue information.
  • the forged face cue information therein has human eye observability under visible light conditions.
  • the second neural network is pre-trained based on a training image set including forged face cue information.
  • By training the second neural network in advance, the forged face cues contained in the features extracted in the embodiments of the present application can be learned by the second neural network; then, after any image containing forged face cue information is input into the second neural network, the cues will be detected and the image can be judged to be a forged face image; otherwise it is a genuine face image.
  • If the terminal device sends a video to the server,
  • the server may select at least one image from the received video as the image to be detected and input to the second neural network.
  • the second neural network extracts the feature of the image to be detected, detects whether the extracted feature includes forged face cue information, and outputs a detection result indicating whether the image to be detected includes at least one forged face cue information.
  • If the terminal device sends images to the server, the server may input all the received images into the second neural network, or select at least one image from the received images and input it into the second neural network;
  • the second neural network extracts features of the received image, detects whether the extracted features include forged face cue information, and outputs a detection result indicating whether the image includes at least one piece of forged face cue information.
  • the operation 208 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first acquisition module and a second neural network in a server operated by the processor.
  • the server determines, according to the detection result, whether the face in the to-be-detected image or video passes the face anti-counterfeiting detection, and sends a determination result of whether the face in the image to be detected or the video passes the face anti-counterfeiting detection to the terminal device.
  • the operation 210 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a determining module in the server.
  • In another optional example, the server may instead return the detection result output by the second neural network, which indicates whether the image or video to be detected includes at least one piece of forged face cue information, to the terminal device.
  • In this case, the terminal device determines, according to the detection result output by the first neural network and the detection result output by the second neural network, whether the face in the image or video to be detected passes the face anti-counterfeiting detection, rather than the server determining whether the face in the image or video to be detected passes the face anti-counterfeiting detection and sending the determination result to the terminal device.
  • If the detection result output by the first neural network is that the image or video to be detected includes at least one piece of forged face cue information, it is determined that the face in the image or video to be detected does not pass the face anti-counterfeiting detection. If the detection result output by the first neural network is that the image or video to be detected does not contain any forged face cue information, but the detection result output by the second neural network is that it contains at least one piece of forged face cue information, it is determined that the face in the image or video to be detected does not pass the face anti-counterfeiting detection. If the detection results output by both the first neural network and the second neural network are that the image or video to be detected does not contain any forged face cue information, it is determined that the face in the image or video to be detected passes the face anti-counterfeiting detection.
  • A neural network that performs more feature extraction and detection requires more computing and storage resources, while the computing and storage resources of the terminal device are relatively limited compared with a cloud server. In order to save the computing and storage resources occupied by the neural network on the terminal device side while ensuring effective face anti-counterfeiting detection, a first neural network with a smaller network size (fewer network layers and/or fewer network parameters) is set in the terminal device; it may extract HSC features, LARGE features, TINY features, and other features that may contain forged face cue information. The second neural network in the server is used to perform more accurate and comprehensive face anti-counterfeiting detection, thereby improving the accuracy of the detection result. When the first neural network has already determined that the face in the acquired video does not pass the face anti-counterfeiting detection, there is no need to perform face anti-counterfeiting detection through the second neural network, which improves the efficiency of the face anti-counterfeiting detection.
  • In an optional example, the terminal device selecting a part of the video or at least one image from the video including the face as the video or image to be detected and sending it to the server may include: detecting the network condition currently used by the terminal device (for example, the type of network used, the network bandwidth, and the like); this detection may be performed by the terminal device before performing operation 202, or at any time before operation 206.
  • When the network condition currently used by the terminal device satisfies a first preset condition, for example, when the network currently used by the terminal device is a wireless local area network (for example, WiFi) and the bandwidth is greater than a first preset bandwidth, the terminal device selects a part of the video from the video including the face as the video to be detected and sends it to the server. Because a video contains more images, sending video to the server for face anti-counterfeiting detection when the network condition permits allows a more comprehensive detection.
  • When the network condition currently used by the terminal device does not satisfy the first preset condition but satisfies a second preset condition, for example, when the network currently used by the terminal device is a mobile data network and the bandwidth is greater than a second preset bandwidth, or when the network currently used by the terminal device is a wireless local area network (for example, WiFi) and the bandwidth is smaller than the first preset bandwidth, the terminal device selects one or more images that meet a preset criterion from the video including the face as the images to be detected and sends them to the server, so that face anti-counterfeiting detection can also be realized when the network condition is poorer.
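  • The routing rule described above can be summarized by the following hedged sketch: send a video clip over a fast wireless LAN, send a few selected frames over mobile data or a slower wireless LAN, and otherwise fall back to on-device detection; the bandwidth thresholds and names are assumptions, since the first and second preset conditions are left to the implementation.

```python
FIRST_PRESET_BANDWIDTH_MBPS = 10   # assumed value of the first preset bandwidth
SECOND_PRESET_BANDWIDTH_MBPS = 2   # assumed value of the second preset bandwidth

def choose_upload_strategy(network_type, bandwidth_mbps):
    if network_type == "wlan" and bandwidth_mbps > FIRST_PRESET_BANDWIDTH_MBPS:
        return "send_video"             # first preset condition satisfied
    if (network_type == "mobile" and bandwidth_mbps > SECOND_PRESET_BANDWIDTH_MBPS) or \
       (network_type == "wlan" and bandwidth_mbps < FIRST_PRESET_BANDWIDTH_MBPS):
        return "send_selected_frames"   # second preset condition satisfied
    return "detect_on_device_or_fail"   # neither condition satisfied
```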
  • The method may further include: when the network condition currently used by the terminal device satisfies neither the first preset condition nor the second preset condition, the terminal device may output a prompt message indicating that the detection failed, or may extract, by the first neural network in the terminal device, features of the video including the face, detect whether the extracted features include forged face cue information, output a detection result indicating whether the video including the face includes at least one piece of forged face cue information, and determine, according to the detection result, whether the face passes the face anti-counterfeiting detection.
  • In an optional example, extracting features of the video to be detected by the second neural network and detecting whether the extracted features include forged face cue information includes: the server selecting at least one image from the video to be detected as the image to be detected and inputting it into the second neural network, and the second neural network outputting a detection result indicating whether the image to be detected includes at least one piece of forged face cue information.
  • When the terminal device selects a part of the video or at least one image from the video including the face as the video or image to be detected and sends it to the server, or when the server selects at least one image from the video or images to be detected sent by the terminal device, a higher-quality image may be selected for detecting forged face cue information according to a preset selection criterion.
  • The selection criterion may be any one or more of the following: whether the face orientation is frontal, the image sharpness, the exposure level, and the like; an image with higher overall quality is selected according to the corresponding criterion for face anti-counterfeiting detection, so as to improve the feasibility of the face anti-counterfeiting detection and the accuracy of the detection results.
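  • A possible frame-selection sketch based on the criteria listed above (frontal face orientation, sharpness, exposure) is shown below; the weights and the Laplacian-variance sharpness measure are assumptions, since the embodiments only state that images of higher overall quality are preferred.

```python
import cv2

def frame_quality(gray, face_yaw_deg):
    """Score one grayscale frame given an estimated face yaw angle in degrees."""
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()          # higher variance means a sharper image
    exposure = 1.0 - abs(float(gray.mean()) - 128.0) / 128.0   # 1.0 at mid exposure
    frontal = max(0.0, 1.0 - abs(face_yaw_deg) / 45.0)         # 1.0 for a fully frontal face
    return 0.5 * min(sharpness / 300.0, 1.0) + 0.25 * exposure + 0.25 * frontal

def select_best_frames(frames_with_yaw, k=3):
    """frames_with_yaw: list of (grayscale image, estimated yaw in degrees)."""
    scored = sorted(frames_with_yaw, key=lambda f: frame_quality(*f), reverse=True)
    return [f[0] for f in scored[:k]]
```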
  • In the embodiments of the present application, it is possible to focus on detecting whether there are forgery clues (that is, forged face cue information) in the image or video to be detected and to verify liveness in a nearly non-interactive manner, which may be called silent living-body detection. There is basically no interaction in the whole process of silent living-body detection, which greatly simplifies the liveness detection flow.
  • The detected person only needs to face the video or image acquisition device (for example, a visible light camera) of the device where the neural network is located and adjust the lighting and position appropriately; no action-based interaction is required.
  • The neural network in the embodiments of the present application learns in advance, through training, the forged face cue information that can be "observed" by the human eye in multiple dimensions, so that in subsequent application it can determine whether a face image comes from a real living person. If the video or image to be detected contains any face forgery cue information, these cues will be captured by the neural network, and the user will be prompted that the face image is a forged face image. For example, for a forged face of the video-remake type, it can be determined that the face is not a living body by judging characteristics such as screen reflection or screen edges in the face image.
  • In any of the foregoing embodiments of the present application, living-body detection (302) may first be performed on the video acquired by the terminal device by using the neural network.
  • In an optional example, the operation 302 may include: performing, by using the neural network, validity detection of a prescribed action on the video acquired by the terminal device; the living-body detection passes at least in response to the detection result that the validity of the prescribed action satisfies a predetermined condition.
  • The prescribed action may be a preset action or a randomly selected action; that is, the user may be required to perform a preset prescribed action within a preset time, or to perform, within a preset time, a prescribed action randomly selected from an action set.
  • The prescribed action may include, but is not limited to, any one or more of the following: blinking, opening the mouth, closing the mouth, smiling, nodding, shaking the head, turning the head left, turning the head right, tilting the head left, tilting the head right, looking down, looking up, and the like.
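  • A sketch of a randomly selected action challenge for this step is given below; the action set mirrors the list above, while the timeout value and the per-frame action_detected(frame, action) helper are hypothetical placeholders rather than parts of the present application.

```python
import random
import time

ACTION_SET = ["blink", "open_mouth", "close_mouth", "smile", "nod", "shake_head",
              "turn_left", "turn_right", "tilt_left", "tilt_right", "look_down", "look_up"]

def run_action_challenge(get_frame, action_detected, timeout_s=5.0):
    """Ask for one randomly selected prescribed action within a preset time window."""
    action = random.choice(ACTION_SET)       # prescribed action, randomly selected from the set
    deadline = time.time() + timeout_s       # preset time window
    while time.time() < deadline:
        frame = get_frame()
        if action_detected(frame, action):   # validity detection of the prescribed action
            return True, action              # living-body detection passes
    return False, action                     # validity not confirmed in time
```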
  • In response to the living-body detection passing, the flow of the face anti-counterfeiting detection method embodiment is executed, for example starting from operation 102 in the embodiment shown in FIG. 1 or operation 202 in the embodiment shown in FIG. 2, to perform the face anti-counterfeiting detection.
  • FIG. 3 is a flowchart of still another embodiment of the face anti-counterfeiting detection method of the present application.
  • In this embodiment, living-body detection is performed first and the face anti-counterfeiting detection is then performed, which can resist forgery attacks and solves the problem that, during living-body detection alone, criminals can easily use a photo or video of the user to be verified to forge the user's actions; the security of face authentication technology is thereby improved. In addition, the hardware cost caused by special hardware devices is reduced, and the solution can be conveniently applied to various face detection scenarios, with a wide application range, and is especially suitable for general mobile applications.
  • FIG. 4 is a flowchart of still another embodiment of a method for detecting a face anti-counterfeiting according to an embodiment of the present application.
  • the face anti-counterfeiting detection method of this embodiment includes:
  • the terminal device acquires a video.
  • the operation 402 can be performed by a terminal device.
  • the first neural network can determine whether the living body detection passes by detecting whether the user in the video makes a valid prescribed action within a preset time.
  • In response to the detection result that the validity of the prescribed action satisfies the predetermined condition, the living-body detection passes and operation 406 is performed; otherwise, in response to the detection result that the validity of the prescribed action does not satisfy the predetermined condition, the living-body detection does not pass and the subsequent flow of this embodiment is not performed.
  • the operation 404 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first neural network on a terminal device operated by the processor.
  • The terminal device selects a video or at least one image including a human face from the acquired video as the video or image to be detected and inputs it into the first neural network of the terminal device.
  • the operation 406 can be performed by the terminal device or the first transmitting module therein.
  • The first neural network extracts features of the image or video to be detected, detects whether the extracted features include forged face cue information, and outputs a detection result indicating whether the image or video to be detected includes at least one piece of forged face cue information.
  • In an optional example, the operation 408 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by the first neural network executed by the processor.
  • The terminal device determines, according to the detection result output by the first neural network, whether the image or video to be detected passes the face anti-counterfeiting detection.
  • In operation 410, when the features extracted from the video or image to be detected include any forged face cue information, it is determined that the image or video to be detected is a forged face image and does not pass the face anti-counterfeiting detection.
  • Otherwise, when the extracted features do not include any forged face cue information, operation 410 determines that the video or image to be detected is not a forged face image but a genuine face image, and that the image or video to be detected passes the face anti-counterfeiting detection.
  • the operation 410 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a determination module in a terminal device being executed by the processor.
  • FIG. 5 is a flowchart of still another embodiment of a method for detecting a face anti-counterfeiting according to an embodiment of the present application.
  • the face anti-counterfeiting detection method of this embodiment includes:
  • the terminal device acquires a video.
  • the operation 502 can be performed by a terminal device.
  • 504. Perform validity detection of the prescribed action on the acquired video by using the first neural network on the terminal device.
  • In response to the detection result that the validity of the prescribed action satisfies the predetermined condition, the living-body detection passes and operation 506 is performed; otherwise, in response to the detection result that the validity of the prescribed action does not satisfy the predetermined condition, the living-body detection does not pass and the subsequent flow of this embodiment is not performed.
  • the operation 504 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first neural network on a terminal device operated by the processor.
  • the terminal device selects a video or an image including a human face from the obtained video as the to-be-detected video or image, and sends the video to the server.
  • the operation 506 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a terminal device operated by the processor or a first transmitting module therein.
  • After receiving the video or image to be detected sent by the terminal device, the server inputs the video or image to be detected into the second neural network located on the server.
  • the operation 508 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first acquisition module and a second neural network in a server operated by the processor.
  • The second neural network extracts features of the image or video to be detected, detects whether the extracted features include forged face cue information, and outputs a detection result indicating whether the image or video to be detected includes at least one piece of forged face cue information.
  • the operation 510 may be performed by a processor invoking a corresponding instruction stored in a memory or by a second neural network in a server being executed by the processor.
  • the server determines, according to the detection result, whether the image or video to be detected passes the face anti-counterfeiting detection, and sends a determination result of whether the to-be-detected image or video passes the face anti-counterfeiting detection to the terminal device.
  • in operation 510 of this embodiment, when the features extracted from the video or image to be detected contain any item of forged face cue information, operation 512 determines that the image to be detected is a forged face image and that the image or video to be detected fails the face anti-counterfeiting detection.
  • when the features extracted from the video or image to be detected contain no forged face cue information, operation 512 determines that the video or image to be detected is not a forged face image but a real face image, and that the image or video to be detected passes the face anti-counterfeiting detection.
  • the operation 512 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a determination module and a second sending module in the server run by the processor.
  • in addition, in yet another embodiment of the face anti-counterfeiting detection method of the present application, liveness detection may be performed on the video acquired by the terminal device by the first neural network through operations 402-404 or operations 502-504; after the liveness detection passes, a video including a face is obtained from the video acquired by the terminal device, and operations 204-210 are then performed.
  • based on the foregoing face anti-counterfeiting detection method embodiments, liveness detection may be performed on the video first to check whether the face in the video is live, and the face anti-counterfeiting detection is performed only after the video passes the liveness detection. This can resist forgery attacks and solves the problem that, when only liveness detection is performed on a video, a criminal can easily use a photo or video of the user to be authenticated to forge that user's actions.
  • any face anti-counterfeiting detection method provided by the embodiments of the present application may be performed by any suitable device having data processing capability, including but not limited to a terminal device, a server, and the like.
  • alternatively, any face anti-counterfeiting detection method provided by the embodiments of the present application may be executed by a processor, for example, the processor executes any face anti-counterfeiting detection method mentioned in the embodiments of the present application by invoking corresponding instructions stored in a memory. Details are not repeated below.
  • a person of ordinary skill in the art will understand that all or some of the steps of the foregoing method embodiments may be implemented by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the foregoing method embodiments; the foregoing storage medium includes media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
  • FIG. 6 is a schematic structural diagram of an embodiment of the face anti-counterfeiting detection system of the present application.
  • the face anti-counterfeiting detection system of this embodiment may be used to implement the foregoing face anti-counterfeiting detection method embodiments of the present application.
  • as shown in FIG. 6, the face anti-counterfeiting detection system of this embodiment includes a first acquisition module, an anti-counterfeiting detection module, and a determination module, wherein:
  • the first acquisition module is configured to acquire an image or video to be detected that includes a human face.
  • the first acquisition module may be a visible light camera of the terminal device.
  • the anti-counterfeiting detection module is configured to extract features of the image or video to be detected, and detect whether the extracted features include forged face cue information, wherein the forged face cue information has human eye observability under visible light conditions.
  • the features extracted in the embodiments of the present application may include, but are not limited to, any one or more of the following: a local binary pattern (LBP) feature, a histogram of sparse codes (HSC) feature, a full-image (LARGE) feature, a face-region (SMALL) feature, and a face-detail (TINY) feature.
  • the feature items included in the extracted features may be updated according to the forged face cue information that may appear; an illustrative computation of the LBP feature is sketched below.
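For illustration, a plain 8-neighbour LBP histogram (one of the feature types listed above) can be computed as in the sketch below; the present application does not prescribe a particular LBP variant, window size, or normalisation, so those choices are assumptions:

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour Local Binary Pattern histogram.
    `gray` is a 2-D grayscale array; returns a 256-bin, L1-normalised histogram.
    LBP emphasises edge-like micro-structure, which is why it is listed among
    the features used to expose cues such as paper or screen edges."""
    g = np.asarray(gray, dtype=np.float32)
    c = g[1:-1, 1:-1]                      # centre pixels
    # eight neighbours, clockwise from the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((neigh >= c).astype(np.uint8) << bit)
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float32)
    return hist / max(hist.sum(), 1.0)

# usage: feature = lbp_histogram(np.random.randint(0, 256, (64, 64)))
```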
  • the forged face cue information in various embodiments of the present application has human eye observability under visible light conditions.
  • the forged face cue information may include, for example but not limited to, any one or more of the following: forged cue information of the imaging medium such as printed paper (also called 2D forged face cue information in this application), forged cue information of the imaging medium such as a display device used for recapture (also called 2.5D forged face cue information), and cue information of a physically existing forged face (also called 3D forged face cue information).
  • the forged cue information of the imaging medium such as printed paper may include, but is not limited to, edge information, reflection information, and/or material information of the imaging medium.
  • the forged cue information of the imaging medium that is a display device may include, for example but not limited to, a screen edge of the display device, screen reflection, and/or screen moiré patterns.
  • the cue information of a physically existing forged face may include, but is not limited to, characteristics of a masked face, characteristics of a model (mannequin) face, and characteristics of a sculpture face.
  • the determining module is configured to determine, according to the detection result, whether the face passes the face anti-counterfeiting detection.
  • with the face anti-counterfeiting detection system provided by the above embodiment of the present application, after an image or video to be detected including a human face is acquired, features of the image or video to be detected are extracted, whether the extracted features include forged face cue information is detected, and whether the image or video to be detected passes the face anti-counterfeiting detection is determined according to the detection result.
  • the embodiments of the present application can realize effective face anti-counterfeiting detection under visible light conditions without relying on special multi-spectral devices or other special hardware, which reduces the resulting hardware cost and allows the solution to be conveniently applied to various face detection scenarios, especially general mobile applications.
  • the anti-counterfeiting detection module may be implemented by a neural network configured to receive the input image or video to be detected and to output a detection result indicating whether the image or video to be detected includes at least one piece of forged face cue information, wherein the neural network is pre-trained based on a training image set including forged face cue information.
  • the above-described training image set may include: a plurality of face images that can be used as positive samples for training and a plurality of images that can be used as negative samples for training.
  • the face anti-counterfeiting detection system of this embodiment may further include a second acquisition module, configured to acquire a plurality of face images that can be used as positive samples for training, and to perform, on at least a local region of at least one acquired face image, image processing that simulates forged face cue information, so as to generate at least one image that can be used as a negative sample for training; a sketch of such negative-sample synthesis is given below.
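A minimal sketch of such negative-sample synthesis follows, assuming the simulated cues are a display-screen border and a screen-moiré-like pattern; the specific operations and parameter values are illustrative assumptions rather than the image processing prescribed by the present application:

```python
import numpy as np

def simulate_screen_recapture(face_rgb, border=6, moire_period=7.0, moire_strength=12.0):
    """Turn a genuine face image (training positive) into a synthetic negative
    sample by adding two forged-face cues: a dark frame imitating a display
    edge and a low-amplitude diagonal sinusoid imitating screen moiré."""
    img = np.asarray(face_rgb, dtype=np.float32).copy()
    h, w = img.shape[:2]
    # 1) darken a frame around the image to mimic the edge of a display device
    img[:border, :] *= 0.3
    img[-border:, :] *= 0.3
    img[:, :border] *= 0.3
    img[:, -border:] *= 0.3
    # 2) add a diagonal interference pattern to mimic screen moiré
    yy, xx = np.mgrid[0:h, 0:w]
    moire = moire_strength * np.sin(2 * np.pi * (xx + yy) / moire_period)
    img += moire[..., None] if img.ndim == 3 else moire
    return np.clip(img, 0, 255).astype(np.uint8)

# usage: negative_sample = simulate_screen_recapture(positive_face_image)
```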
  • the neural network includes: a first neural network located in the terminal device.
  • the first obtaining module and the determining module are located in the terminal device.
  • the determining module is configured to: determine, according to the detection result of the output of the first neural network, whether the face passes the face anti-counterfeiting detection.
  • FIG. 7 is a schematic structural diagram of a human face anti-counterfeiting detection system according to the embodiment of the present application.
  • the first obtaining module is located on the server, and is configured to receive an image or a video to be detected, including a human face, sent by the terminal device.
  • the neural network includes: a second neural network located in the server.
  • the neural network may further include a first neural network located in the terminal device, configured to receive the input image or video to be detected and to output a detection result indicating whether the video including the face contains at least one piece of forged face cue information, wherein the size of the first neural network is smaller than the size of the second neural network; a sketch contrasting such small and large networks is given below.
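As an illustration of the size relationship, the sketch below builds a shallow, narrow network for the terminal device and a deeper, wider one for the server from the same constructor. PyTorch, the exact depths, channel counts, and the cue count of 8 are all assumptions; the present application only names LeNet, AlexNet, GoogLeNet, VGG, and ResNet as example architectures and requires that the first network be smaller (fewer layers and/or parameters) than the second.

```python
import torch.nn as nn

def make_cue_classifier(widths, num_cues=8):
    """Simple CNN over forged-face cues; `widths` sets the depth and channel
    counts, so the same constructor yields either a small on-device network
    or a larger server-side network."""
    layers, in_ch = [], 3
    for ch in widths:
        layers += [nn.Conv2d(in_ch, ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True),
                   nn.MaxPool2d(2)]
        in_ch = ch
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, num_cues)]
    return nn.Sequential(*layers)

first_net = make_cue_classifier([16, 32])              # smaller: runs on the terminal device
second_net = make_cue_classifier([64, 128, 256, 512])  # larger: runs on the server
```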
  • FIG. 8 is a schematic diagram of one possible structure of the face anti-counterfeiting detection system of the embodiment of the present application.
  • the face anti-counterfeiting detection system may further include a first sending module, located on the terminal device, configured to, according to the detection result output by the first neural network and in response to a detection result that the video including the face contains no forged face cue information, select part of the video or images from the video including the face as the to-be-detected image or video and send it to the server.
  • the first sending module may be configured to: acquire the network condition currently used by the terminal device; when the network condition currently used by the terminal device satisfies a first preset condition, select part of the video from the video acquired by the terminal device as the to-be-detected video and send it to the server; and/or, when the network condition currently used by the terminal device does not satisfy the first preset condition but satisfies a second preset condition, select at least one image that satisfies a preset standard from the video acquired by the terminal device as the to-be-detected image and send it to the server. A minimal sketch of this network-condition branching is given below.
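Sketched below is one way the first sending module's branching could look; the attribute names on `network`, the bandwidth thresholds, and the helper `pick_best_frames` are hypothetical stand-ins for the first/second preset conditions and the preset image-quality standard:

```python
def choose_payload(video_frames, network, pick_best_frames):
    """Decide what the terminal device uploads, based on the current network.
    `network.kind` is one of 'wifi', 'cellular', 'none'; `network.bandwidth`
    is in Mbit/s.  `pick_best_frames` selects images meeting preset quality
    criteria such as frontal pose, sharpness, and exposure."""
    if network.kind == "wifi" and network.bandwidth >= 10:      # first preset condition
        return ("video", video_frames[:75])                      # send a short video clip
    if network.kind != "none" and network.bandwidth >= 1:        # second preset condition
        return ("images", pick_best_frames(video_frames, k=3))   # send a few good frames
    return ("local_only", None)  # neither condition met: detect on-device or report failure
```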
  • when the first sending module selects part of the video as the to-be-detected video and sends it to the server, the server may further include a selection module, configured to select at least one image from the to-be-detected video as the to-be-detected image and input it into the second neural network.
  • the determination module is located on the terminal device, and is further configured to, in response to a detection result that the video including the face contains forged face cue information, determine according to the detection result output by the first neural network that the face fails the face anti-counterfeiting detection.
  • the system further includes a second sending module, located on the server, configured to return the detection result output by the second neural network to the terminal device.
  • the determining module is located on the terminal device, and is configured to determine whether the face passes the face anti-counterfeiting detection according to the detection result output by the second neural network.
  • the determining module is located on the server, and is configured to determine whether the face passes the face anti-counterfeiting detection according to the detection result output by the second neural network.
  • the face anti-counterfeiting detection system of the embodiment further includes: a second sending module, located on the server, configured to send, to the terminal device, a determination result of whether the face passes the face anti-counterfeiting detection.
  • the neural network can also be used for performing live detection on the video acquired by the terminal device.
  • when the neural network performs liveness detection on the video acquired by the terminal device, the neural network may be used to perform validity detection of a specified action on the video acquired by the terminal device; at least in response to the detection result of the validity of the specified action satisfying a predetermined condition, the liveness detection passes.
  • the neural network may be configured to perform liveness detection on the video acquired by the terminal device by using the first neural network, and, in response to the liveness detection passing, to extract features of the video acquired by the terminal device by using the first neural network and detect whether the extracted features include forged face cue information; or, the neural network may be configured to perform liveness detection on the video acquired by the terminal device by using the first neural network, and, in response to the liveness detection passing, to receive the to-be-detected image or video sent by the first sending module located on the terminal device and output a detection result indicating whether the image or video to be detected contains at least one piece of forged face cue information.
  • the specified action may be a preset specified action or a randomly selected specified action; that is, the user may be required to perform a preset specified action within a preset time, or the user may be required to perform, within a preset time, a specified action randomly selected from a set of specified actions.
  • the specified action may include, for example but not limited to, any one or more of the following: blinking, opening the mouth, closing the mouth, smiling, nodding upward, nodding downward, turning the head left, turning the head right, tilting the head left, tilting the head right, lowering the head, raising the head, and the like. A minimal sketch of issuing such an action challenge is given below.
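A minimal sketch of issuing such a challenge follows; the action identifiers and the 5-second timeout are illustrative assumptions, not values fixed by the present application:

```python
import random

SPECIFIED_ACTIONS = ["blink", "open_mouth", "close_mouth", "smile",
                     "nod_up", "nod_down", "turn_left", "turn_right",
                     "tilt_left", "tilt_right", "look_down", "look_up"]

def issue_action_challenge(preset=None, timeout_s=5.0, rng=random):
    """Pick the specified action the user must perform within a preset time:
    either a fixed preset action or one drawn at random from the action set."""
    action = preset if preset is not None else rng.choice(SPECIFIED_ACTIONS)
    return {"action": action, "timeout_s": timeout_s}

print(issue_action_challenge())                # e.g. {'action': 'turn_left', 'timeout_s': 5.0}
print(issue_action_challenge(preset="blink"))  # fixed, pre-set specified action
```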
  • the embodiment of the present application further provides an electronic device, which may include the face anti-counterfeiting detection system of any of the above embodiments.
  • the electronic device may be, for example, a terminal device or a server.
  • another electronic device provided by the embodiment of the present application includes:
  • a memory for storing executable instructions
  • a processor configured to communicate with the memory to execute the executable instruction to complete the operation of the face anti-counterfeiting detection method of any of the above embodiments of the present application.
  • FIG. 9 is a schematic structural diagram of an application embodiment of an electronic device according to the present application.
  • the electronic device includes one or more processors, a communication part, and the like; the one or more processors are, for example, one or more central processing units (CPUs) and/or one or more graphics processing units (GPUs); the processor may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) or executable instructions loaded from a storage portion into a random access memory (RAM).
  • the communication part may include, but is not limited to, a network card, which may include, but is not limited to, an IB (Infiniband) network card; the processor may communicate with the read-only memory and/or the random access memory to execute the executable instructions, is connected to the communication part through a bus, and communicates with other target devices via the communication part, so as to complete the operations corresponding to any method provided by the embodiments of the present application, for example: acquiring an image or video to be detected that includes a human face; extracting features of the image or video to be detected, and detecting whether the extracted features include forged face cue information; and determining, according to the detection result, whether the face passes the face anti-counterfeiting detection.
  • the CPU, ROM, and RAM are connected to each other through a bus.
  • when a RAM is present, the ROM is an optional module.
  • the RAM stores executable instructions, or executable instructions are written into the ROM at runtime, and the executable instructions cause the processor to perform the operations corresponding to any of the methods described above.
  • An input/output (I/O) interface is also connected to the bus.
  • the communication part may be integrated, or may be provided with multiple sub-modules (for example, multiple IB network cards) linked on the bus.
  • the following components are connected to the I/O interface: an input portion including a keyboard, a mouse, and the like; an output portion including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion including a hard disk and the like; and a communication portion including a network interface card such as a LAN card or a modem.
  • the communication section performs communication processing via a network such as the Internet.
  • the drive is also connected to the I/O interface as needed.
  • a removable medium such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like is mounted on the drive as needed so that a computer program read therefrom is installed into the storage portion as needed.
  • it should be noted that the architecture shown in FIG. 9 is only an optional implementation.
  • in practice, the number and types of the components in FIG. 9 may be selected, reduced, added, or replaced according to actual needs; different functional components may be provided separately or in an integrated manner, for example, the GPU and the CPU may be provided separately, or the GPU may be integrated on the CPU, and the communication part may be provided separately or integrated on the CPU or the GPU, and so on. These alternative implementations all fall within the protection scope disclosed by the present application.
  • in particular, according to an embodiment of the present disclosure, the process described above with reference to the flowcharts may be implemented as a computer software program; for example, an embodiment of the present disclosure includes a computer program product, which includes a computer program tangibly embodied on a machine-readable medium, the computer program including program code for executing the method illustrated in the flowcharts, and the program code may include instructions corresponding to the steps of the face anti-counterfeiting detection method provided by the embodiments of the present application.
  • in such an embodiment, the computer program may be downloaded and installed from a network via the communication portion, and/or installed from a removable medium; when the computer program is executed by the CPU, the above functions defined in the method of the present application are performed.
  • the embodiments of the present application further provide a computer program, including computer-readable code; when the computer-readable code is run on a device, a processor in the device executes instructions for implementing the steps of the method of any embodiment of the present application.
  • the embodiments of the present application further provide a computer-readable storage medium for storing computer-readable instructions that, when executed, perform the operations of the steps in the method of any embodiment of the present application.
  • the methods and apparatus of the present application may be implemented in a number of ways.
  • the methods and apparatus of the present application can be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above sequence of steps of the method is for illustrative purposes only, and the steps of the method of the present application are not limited to the sequence described above unless otherwise specifically stated.
  • in some embodiments, the present application may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the method according to the present application.
  • the present application also covers a recording medium storing a program for executing the method according to the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application disclose a face anti-counterfeiting detection method and system, an electronic device, and a computer storage medium. The face anti-counterfeiting detection method includes: acquiring an image or video to be detected that includes a human face; extracting features of the image or video to be detected, and detecting whether the extracted features contain forged face cue information; and determining, according to the detection result, whether the face passes the face anti-counterfeiting detection. The embodiments of the present application can realize effective face anti-counterfeiting detection without relying on special multi-spectral devices.

Description

人脸防伪检测方法和系统、电子设备、程序和介质
本申请要求在2017年03月16日提交中国专利局、申请号为CN 201710157715.1、发明名称为“人脸防伪检测方法和装置、系统、电子设备”的中国专利申请的优先权,以及在2017年12月01日提交中国专利局、申请号为CN 201711251762.9、发明名称为“人脸防伪检测方法和系统、电子设备、程序和介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机视觉技术,尤其是一种人脸防伪检测方法和系统、电子设备、程序和介质。
背景技术
活体检测是指使用计算机视觉的技术,判定在摄像头前的人脸图像是否来自真实的人。活体检测通常有两种实现思路:一是人脸活性检测,二是人脸防伪检测,这两种思路各有侧重。其中,人脸防伪侧重检测人脸是否具有真实性;活性检测侧重检测人脸是否具备活性。具备活性的人脸并不一定是非伪造人脸,同样,非伪造人脸不一定具备活性。
发明内容
本申请实施例提供一种用于进行人脸防伪检测的技术方案。
根据本申请实施例的一个方面,提供的一种人脸防伪检测方法,包括:
获取包括人脸的待检测图像或视频;
提取所述待检测图像或视频的特征、并检测提取的特征中是否包含伪造人脸线索信息;
根据检测结果确定所述人脸是否通过人脸防伪检测。
可选地,在本申请上述各实施例的方法中,提取的所述特征包括以下一项或任意多项:局部二值模式特征、稀疏编码的柱状图特征、全景图特征、人脸图特征、人脸细节图特征。
可选地,在本申请上述各实施例的方法中,所述伪造人脸线索信息具有可见光条件下的人眼可观测性。
可选地,在本申请上述各实施例的方法中,所述伪造人脸线索信息包括以下任意一项或多项:成像介质的伪造线索信息、成像媒介的伪造线索信息、真实存在的伪造人脸的线索信息。
可选地,在本申请上述各实施例的方法中,所述成像介质的伪造线索信息包括:成像介质的边缘信息、反光信息和/或材质信息;和/或,
所述成像媒介的伪造线索信息包括:显示设备的屏幕边缘、屏幕反光和/或屏幕摩尔纹;和/或,
所述真实存在的伪造人脸的线索信息包括:带面具人脸的特性、模特类人脸的特性、雕塑类人脸的特性。
可选地,在本申请上述各实施例的方法中,所述提取所述待检测图像或视频的特征、并检测提取的特征中是否包含至少一伪造人脸线索信息,包括:
将所述待检测图像或视频输入神经网络,并经所述神经网络输出用于表示所述待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果,其中,所述神经网络基于包括有伪造人脸线索信息的训练用图像集预先训练完成。
可选地,在本申请上述各实施例的方法中,所述训练用图像集包括:可作为训练用正样本的多张人脸图像和可作为训练用负样本的多张图像;
所述包括有伪造人脸线索信息的训练用图像集的获取方法,包括:
获取可作为训练用正样本的多张人脸图像;
对获取的至少一张人脸图像的至少局部进行用于模拟伪造人脸线索信息的图像处理,以生成至少一张可作为训练用负样本的图像。
可选地,在本申请上述各实施例的方法中,所述获取包括人脸的待检测图像或视频,包括:
经终端设备的可见光摄像头获取包括人脸的待检测图像或视频。
可选地,在本申请上述各实施例的方法中,所述神经网络包括:位于终端设备中的第一神经网络;
所述根据检测结果确定所述人脸是否通过人脸防伪检测,包括:所述终端设备根据所述第一神经网络输出的检测结果确定所述人脸是否通过人脸防伪检测。
可选地,在本申请上述各实施例的方法中,所述获取包括人脸的待检测图像或视频,包括:
服务器接收终端设备发送的包括人脸的待检测图像或视频。
可选地,在本申请上述各实施例的方法中,所述神经网络包括:位于服务器中的第二神经网络。
可选地,在本申请上述各实施例的方法中,所述根据检测结果确定所述待检测图像或视频是否通过人脸防伪检测,包括:所述服务器根据所述第二神经网络输出的检测结果确定所述人脸是否通过人脸防伪检测,并向所述终端设备返回所述人脸是否通过人脸防伪检测的确定结果。
可选地,在本申请上述各实施例的方法中,所述神经网络还包括:位于终端设备中的第一神经网络;所述第一神经网络的大小小于所述第二神经网络的大小;
所述方法还包括:
将终端设备获取的包括人脸的视频输入第一神经网络,并经所述第一神经网络输出用于表示所述包括人脸的视频是否包含至少一伪造人脸线索信息的检测结果;
响应于所述包括人脸的视频未包含伪造人脸线索信息的检测结果,从所述包括人脸的视频中选取部分视频或图像作为所述待检测图像或视频发送给所述服务器。
可选地,在本申请上述各实施例的方法中,从所述包括人脸的视频中选取部分视频或图像作为所述待检测图像或视频发送给所述服务器,包括:
获取所述终端设备当前使用的网络状况;
在所述终端设备当前使用的网络状况满足第一预设条件时,从所述终端设备获取的视频中选取部分视频作为所述待检测视频发送给所述服务器;和/或,
在所述终端设备当前使用的网络状况不满足第一预设条件、满足第二预设条件时,从所述终端设备获取的视频中选取至少一张满足预设标准的图像作为所述待检测图像发送给所述服务器。
可选地,在本申请上述各实施例的方法中,从所述终端设备获取的视频中选取部分视频作为所述待检测视频发送给所述服务器时,将所述待检测视频输入第二神经网络,并经所述第二神经网络输出用于表示所述待检测视频是否包含至少一伪造人脸线索信息的检测结果,包括:
服务器从所述待检测视频中选取至少一张图像作为待检测图像输入至所述第二神经网络,并经所述第二神经网络输出用于表示所述待检测图像是否包含至少一伪造人脸线索信息的检测结果。
可选地,在本申请上述各实施例的方法中,响应于所述包括人脸的视频包含伪造人脸线索信息的检测结果,所述根据检测结果确定所述人脸是否通过人脸防伪检测,包括:所述终端设备根据所述第一神经网络输出的检测结果确定所述人脸未通过人脸防伪检测。
可选地,在本申请上述各实施例的方法中,还包括:所述服务器将所述第二神经网络输出的检测结果返回给所述终端设备;
所述根据检测结果确定所述人脸是否通过人脸防伪检测,包括:所述终端设备根据所述第二神经网络输出的检测结果确定所述人脸是否通过人脸防伪检测。
可选地,在本申请上述各实施例的方法中,所述根据检测结果确定所述人脸是否通过人脸防伪检测,包括:所述服务器根据所述第二神经网络输出的检测结果,确定所述人脸是否通过人脸防伪检测,并向所述终端设备发送所述人脸是否通过人脸防伪检测的确定结果。
可选地,在本申请上述各实施例的方法中,还包括:
利用所述神经网络对所述终端设备获取的视频进行活体检测;
响应于活体检测通过,执行本申请上述任一实施例所述的人脸防伪检测方法。
可选地,在本申请上述各实施例的方法中,利用所述神经网络对所述终端设备获取的视频进行活体检测,包括:由所述第一神经网络对所述终端设备获取的视频进行活体检测;
响应于活体检测通过,执行本申请上述任一实施例所述的人脸防伪检测方法,包括:
响应于活体检测通过,执行所述将终端设备获取的视频输入第一神经网络,由所述第一神经网络提取所述终端设备获取的视频的特征、检测提取的特征中是否包含伪造人脸线索信息的操作;或者
响应于活体检测通过,从所述终端设备获取的视频中选取部分视频或图像作为所述待检测图像或视频,执行所述将所述待检测图像或视频输入神经网络,并经所述神经网络输出用于表示所述待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果的操作。
可选地,在本申请上述各实施例的方法中,利用所述神经网络对所述终端设备获取的视频进行活体检测,包括:
利用所述神经网络对所述终端设备获取的视频进行规定动作的有效性检测;
至少响应于所述规定动作的有效性的检测结果满足预定条件,所述活体检测通过。
可选地,在本申请上述各实施例的方法中,所述规定动作包括以下任意一项或几项:眨眼、张嘴、闭嘴、微笑、上点头、下点头、左转头、右转头、左歪头、右歪头、俯头、仰头。
可选地,在本申请上述各实施例的方法中,所述规定动作为预先设置的规定动作或者随机选择的规定动作。
根据本申请实施例的另一个方面,提供的一种人脸防伪检测装置,包括:
第一获取模块,用于获取包括人脸的待检测图像或视频;
防伪检测模块,用于提取所述待检测图像或视频的特征、并检测提取的特征中是否包含伪造人脸线索信息。
确定模块,用于根据检测结果确定所述人脸是否通过人脸防伪检测。
可选地,在本申请上述各实施例的装置中,所述防伪检测模块提取的所述特征包括以下一项或任意多项:局部二值模式特征、稀疏编码的柱状图特征、全景图特征、人脸图特征、人脸细节图特征。
可选地,在本申请上述各实施例的装置中,所述伪造人脸线索信息具有可见光条件下的人眼可观测性。
可选地,在本申请上述各实施例的装置中,所述伪造人脸线索信息包括以下任意一项或多项:成像介质的伪造线索信息、成像媒介的伪造线索信息、真实存在的伪造人脸的线索信息。
可选地,在本申请上述各实施例的装置中,所述成像介质的伪造线索信息包括:成像介质的边缘信息、反光信息和/或材质信息;和/或,
所述成像媒介的伪造线索信息包括:显示设备的屏幕边缘、屏幕反光和/或屏幕摩尔纹;和/或,
所述真实存在的伪造人脸的线索信息包括:带面具人脸的特性、模特类人脸的特性、雕塑类人脸的特性。
可选地,在本申请上述各实施例的装置中,所述防伪检测模块包括:
神经网络,用于接收输入的所述待检测图像或视频,并输出用于表示所述待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果,其中,所述神经网络基于包括有伪造人脸线索信息的训练用图像集预先训练完成。
可选地,在本申请上述各实施例的装置中,所述训练用图像集包括:可作为训练用正样本的多张人脸图像和可作为训练用负样本的多张图像;
所述装置还包括:
第二获取模块,用于获取可作为训练用正样本的多张人脸图像;以及对获取的至少一张人脸图像的至少局部进行用于模拟伪造人脸线索信息的图像处理,以生成至少一张可作为训练用负样本的图像。
可选地,在本申请上述各实施例的装置中,所述第一获取模块包括:
终端设备的可见光摄像头。
可选地,在本申请上述各实施例的装置中,所述神经网络包括:位于终端设备中的第一神经网络;
所述确定模块位于所述终端设备中,用于:根据所述第一神经网络输出的检测结果确定所述人脸是否通过人脸防伪检测。
可选地,在本申请上述各实施例的装置中,所述第一获取模块位于服务器上,用于接收终端设备发送的包括人脸的待检测图像或视频;
可选地,在本申请上述各实施例的装置中,所述神经网络包括:位于服务器中的第二神经网络。
可选地,在本申请上述各实施例的装置中,所述神经网络还包括:位于终端设备中的第一神经网络,用于接收输入的所述待检测图像或视频,并输出用于表示所述包括人脸的视频是否包含至少一伪造人脸线索信息的检测结果;所述第一神经网络的大小小于所述第二神经网络的大小
所述装置还包括:
第一发送模块,位于所述终端设备上,用于根据所述第一神经网络输出的检测结果,响应于所述包括人脸的视频未包含伪造人脸线索信息的检测结果,从所述包括人脸的视频中选取部分视频或图像作为所述待检测图像或视频发送给所述服务器。
可选地,在本申请上述各实施例的装置中,所述第一发送模块用于:
获取所述终端设备当前使用的网络状况;
在所述终端设备当前使用的网络状况满足第一预设条件时,从所述终端设备获取的视频中选取部分视频作为所述待检测视频发送给所述服务器;和/或,
在所述终端设备当前使用的网络状况不满足第一预设条件、满足第二预设条件时,从所述终端设备获取的视频中选取至少一张满足预设标准的图像作为所述待检测图像发送给所述服务器。
可选地,在本申请上述各实施例的装置中,从所述终端设备获取的视频中选取部分视频作为所述待检测视频发送给所述服务器时,所述服务器还包括:
选取模块,用于从所述待检测视频中选取至少一张图像作为待检测图像输入至所述第二神经网络。
可选地,在本申请上述各实施例的装置中,所述确定模块位于所述终端设备上,还用于响应于所述包括人脸的视频包含伪造人脸线索信息的检测结果,根据所述第一神经网络输出的检测结果确定所述人脸未通过人脸防伪检测。
可选地,在本申请上述各实施例的装置中,还包括:
第二发送模块,位于所述服务器上,用于将所述第二神经网络输出的检测结果返回给所述终端设备;
所述确定模块位于所述终端设备上,用于根据所述第二神经网络输出的检测结果确定所述人脸是否通过人脸防伪检测。
可选地,在本申请上述各实施例的装置中,所述确定模块位于所述服务器上,用于根据所述第二神经网络输出的检测结果,确定所述人脸是否通过人脸防伪检测;
所述装置还包括:
第二发送模块,位于所述服务器上,用于向所述终端设备发送所述人脸是否通过人脸防伪检测的确定结果。
可选地,在本申请上述各实施例的装置中,所述神经网络,还用于对所述终端设备获取的视频进行活体检测。
可选地,在本申请上述各实施例的装置中,所述神经网络,用于利用第一神经网络对所述终端设备获取的视频进行活体检测;以及响应于活体检测通过,利用所述第一神经网络提取所述终端设备获取的视频的特征、检测提取的特征中是否包含伪造人脸线索信息的操作;或者
所述神经网络,用于利用第一神经网络对所述终端设备获取的视频进行活体检测;以及响应于活体检测通过,接收位于所述终端设备上的第一发送模块发送的所述待检测图像或视频,并输出用于表示所述待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果。
可选地,在本申请上述各实施例的装置中,所述神经网络对所述终端设备获取的视频进行活体检测时,用于:对所述终端设备获取的视频进行规定动作的有效性检测;
至少响应于所述规定动作的有效性的检测结果满足预定条件,所述活体检测通过。
可选地,在本申请上述各实施例的装置中,所述规定动作包括以下任意一项或几项:眨眼、张嘴、闭嘴、微笑、上点头、下点头、左转头、右转头、左歪头、右歪头、俯头、仰头。
可选地,在本申请上述各实施例的装置中,所述规定动作为预先设置的规定动作或者随机选择的规定动作。
根据本申请实施例的又一个方面,提供的一种电子设备,包括本申请上述任一实施例的人脸防伪检测系统。
根据本申请实施例的又一个方面,提供的另一种电子设备,包括:
存储器,用于存储可执行指令;以及
处理器,用于与所述存储器通信以执行所述可执行指令从而完成本申请任一实施例所述人脸防伪检测方法的操作。
根据本申请实施例的再一个方面,提供的一种计算机程序,包括计算机可读代码,当所述计算机可读代码在设备上运行时,所述设备中的处理器执行用于实现本申请任一实施例所述方法中各步骤的指令。
根据本申请实施例的再一个方面,提供的一种计算机可读存储介质,用于存储计算机可读取的指令,所述指令被执行时执行本申请任一实施例所述方法中各步骤的操作。
基于本申请上述实施例提供的人脸防伪检测方法和系统、电子设备、程序和介质,获取包括人脸的待检测图像或视频后,提取该待检测图像或视频的特征、并检测提取的特征中是否包含伪造人脸线索信息,根据检测结果确定该待检测图像或视频是否通过人脸防伪检测。本申请实施例无需依赖于特殊的多光谱设备,便可以实现有效人脸防伪检测,且无需借助于特殊的硬件设备,降低了由此导致的硬件成本,可方便应用于各种人脸检测场景。
下面通过附图和实施例,对本申请的技术方案做进一步的详细描述。
附图说明
构成说明书的一部分的附图描述了本申请的实施例,并且连同描述一起用于解释本申请的原理。
参照附图,根据下面的详细描述,可以更加清楚地理解本申请,其中:
图1为本申请人脸防伪检测方法一个实施例的流程图。
图2为本申请人脸防伪检测方法另一个实施例的流程图。
图3为本申请人脸防伪检测方法又一个实施例的流程图。
图4为本申请人脸防伪检测方法再一个实施例的流程图。
图5为本申请人脸防伪检测方法还一个实施例的流程图。
图6为本申请人脸防伪检测系统一个实施例的结构示意图。
图7为本申请人脸防伪检测系统另一个实施例的结构示意图。
图8为本申请人脸防伪检测系统又一个实施例的结构示意图。
图9为本申请电子设备一个应用实施例的结构示意图。
具体实施方式
现在将参照附图来详细描述本申请的各种示例性实施例。应注意到:除非另外具体说明,否则在这些实施例中阐述的部件和步骤的相对布置、数字表达式和数值不限制本申请的范围。
同时,应当明白,为了便于描述,附图中所示出的各个部分的尺寸并不是按照实际的比例关系绘制的。
以下对至少一个示例性实施例的描述实际上仅仅是说明性的,决不作为对本申请及其应用或使用的任何限制。
对于相关领域普通技术人员已知的技术、方法和设备可能不作详细讨论,但在适当情况下,所述技术、方法和设备应当被视为说明书的一部分。
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步讨论。
本申请实施例可以应用于计算机系统/服务器等电子设备,其可与众多其它通用或专用计算系统环境或配置一起操作。适于与计算机系统/服务器等电子设备一起使用的众所周知的计算系统、环境和/或配置的例子包括但不限于:个人计算机系统、服务器计算机系统、瘦客户机、厚客户机、手持或膝上设备、基于微处理器的系统、机顶盒、可编程消费电子产品、网络个人电脑、小型计算机系统、大型计算机系统和包括上述任何系统的分布式云计算技术环境,等等。
计算机系统/服务器等电子设备可以在由计算机系统执行的计算机系统可执行指令(诸如程序模块)的一般语境下描述。通常,程序模块可以包括例程、程序、目标程序、组件、逻辑、数据结构等等,它们执行特定的任务或者实现特定的抽象数据类型。计算机系统/服务器等电子设备可以在分布式云计算环境中实施,分布式云计算环境中,任务是由通过通信网络链接的远程处理设备执行的。在分布式云计算环境中,程序模块可以位于包括存储设备的本地或远程计算系统存储介质上。
图1为本申请人脸防伪检测方法一个实施例的流程图。如图1所示,该实施例的人脸防伪检测方法包括:
102,获取包括人脸的待检测图像或视频。
在一个可选示例中,该操作102可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第一获取模块执行。
104,提取上述待检测图像或视频的特征、并检测提取的特征中是否包含伪造人脸线索信息。
在本申请各实施例的一个可选示例中,本申请各实施例中提取的特征,例如可以包括但不限于以下任意多项:局部二值模式(LBP)特征、稀疏编码的柱状图(HSC)特征、全景图(LARGE)特征、人脸图(SMALL)特征、人脸细节图(TINY)特征。在一些应用中,可以根据可能出现的伪造人脸线索信息对该提取的特征包括的特征项进行更新。
其中,通过LBP特征,可以突出图像中的边缘信息;通过HSC特征,可以更明显的反映图像中的反光与模糊信息;LARGE特征是全图特征,基于LARGE特征,可以提取到图像中最明显的伪造线索(hack);人脸图(SMALL)是图像中人脸框若干倍大小(例如1.5倍大小)的区域切图,其包含人脸、人脸与背景切合的部分,基于SMALL特征,可以提取到反光、翻拍设备屏幕摩尔纹与模特或者面具的边缘等伪造线索;人脸细节图(TINY)是取人脸框大小的区域切图,包含人脸,基于TINY特征,可以提取到图像PS(基于图像编辑软件photoshop编辑)、翻拍屏幕摩尔纹与模特或者面具的纹理等伪造线索。
在本申请各实施例的一个可选示例中,本申请实施例中的伪造人脸线索信息具有可见光条件下的人眼可观测性,也即,人眼在可见光条件下是可以观测到这些伪造人脸线索信息的。基于伪造人脸线索信息具有的该特性,使得在采用可见光摄像头(如RGB摄像头)采集的静态图像或动态视频实现防伪检测成为可能,避免额外引入特定摄像头,降低硬件成本。伪造人脸线索信息例如可以包括但不限于以下任意一项或多项:成像介质的伪造线索信息、成像媒介的伪造线索信息、真实存在的伪造人脸的线索信息。其中,成像介质的伪造线索信息也称为2D类伪造人脸线索信息,成像媒介的伪造线索信息可以称为2.5D类伪造人脸线索信息,真实存在的伪造人脸的线索信息可以称为3D类伪造人脸线索信息,可以根据可能出现的伪造人脸方式对需要检测的伪造人脸线索信息进行相应更新。通过对这些线索信息的检测,使得电子设备可以“发现”各式各样的真实人脸和伪造人脸之间的边界,在可见光摄像头这样通用的硬件设备条件下实现各种不同类型的防伪检测,抵御“hack”攻击,提高安全性。
其中,成像介质的伪造线索信息例如可以包括但不限于:成像介质的边缘信息、反光信息和/或材质信息。成像媒介的伪造线索信息例如可以包括但不限于:显示设备的屏幕边缘、屏幕反光和/或屏幕 摩尔纹。真实存在的伪造人脸的线索信息例如可以包括但不限于:带面具人脸的特性、模特类人脸的特性、雕塑类人脸的特性。
本申请实施例中的伪造人脸线索信息在可见光条件下能被人眼观测到。伪造人脸线索信息从维度上可以划分为2D类、2.5D类和3D类伪造人脸。其中,2D类伪造人脸指的是纸质类材料打印出的人脸图像,该2D类伪造人脸线索信息例如可以包含纸质人脸的边缘、纸张材质、纸面反光、纸张边缘等伪造线索信息。2.5D类伪造人脸指的是视频翻拍设备等载体设备承载的人脸图像,该2.5D类伪造人脸线索信息例如可以包含视频翻拍设备等载体设备的屏幕摩尔纹、屏幕反光、屏幕边缘等伪造线索信息。3D类伪造人脸指的是真实存在的伪造人脸,例如面具、模特、雕塑、3D打印等,该3D类伪造人脸同样具备相应的伪造线索信息,例如面具的缝合处、模特的较为抽象或过于光滑的皮肤等伪造线索信息。
在一个可选示例中,该操作104可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的防伪检测模块执行。
106,根据检测结果确定上述人脸是否通过人脸防伪检测。
在本申请各实施例的操作104中,从待检测视频或图像中提取的特征中包含任意一项伪造人脸线索信息时,确定待检测图像为伪造人脸图像,确定待检测图像或视频未通过人脸防伪检测。从待检测视频或图像中提取的特征未包括任意一项伪造人脸线索信息时,确定待检测视频或图像不是伪造人脸图像,是真实的人脸图像,确定待检测图像或视频通过人脸防伪检测。
在一个可选示例中,该操作106可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的确定模块执行。
基于本申请上述实施例提供的人脸防伪检测方法,获取包括人脸的待检测图像或视频后,提取该待检测图像或视频的特征、并检测提取的特征中是否包含伪造人脸线索信息,根据检测结果确定该待检测图像或视频是否通过人脸防伪检测。本申请实施例无需依赖于特殊的多光谱设备,便可以实现有效人脸防伪检测,例如可以实现在可见光条件下的有效人脸防伪检测,且无需借助于特殊的硬件设备,降低了由此导致的硬件成本,可方便应用于各种人脸检测场景,尤其适用于通用的移动端应用。
在本申请各人脸防伪检测方法实施例的一个可选示例中,操作104可以通过如下方式实现:将待检测图像或视频输入神经网络,由神经网络提取待检测图像或视频的特征、检测提取的特征中是否包含伪造人脸线索信息,并输出用于表示该待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果,其中,该神经网络基于包括有伪造人脸线索信息的训练用图像集预先训练完成。本申请各实施例的神经网络可以是一个深度神经网络,所述深度神经网络是指多层神经网络,例如多层的卷积神经网络。
其中,上述训练用图像集可以包括:可作为训练用正样本的多张人脸图像和可作为训练用负样本的多张图像。
在一个可选示例中,可以通过如下方法获取包括有伪造人脸线索信息的训练用图像集:
获取可作为训练用正样本的多张人脸图像;
对获取的至少一张人脸图像的至少局部进行用于模拟伪造人脸线索信息的图像处理,以生成至少一张可作为训练用负样本的图像。
在本申请各人脸防伪检测方法实施例的一个可选示例中,操作102包括:经终端设备的可见光摄像头获取包括人脸的待检测图像或视频。相应地,在示例中,上述神经网络可以包括:位于终端设备中的第一神经网络,即:由位于终端设备中的第一神经网络执行上述各实施例中的操作104。
在本申请各人脸防伪检测方法实施例的另一个可选示例中,操作102包括:服务器接收终端设备发送的包括人脸的待检测图像或视频。该示例中,神经网络可以包括:位于该服务器中的第二神经网络,即:由位于服务器中的第二神经网络执行上述各实施例中的操作104。相应地,该示例还可以包括:该服务器向终端设备发送提取的特征中是否包含伪造人脸线索信息的检测结果、或者上述待检测图像或视频是否通过人脸防伪检测的确定结果。
图2为本申请人脸防伪检测方法另一个实施例的流程图。该实施例中以神经网络包括位于终端设备中的第一神经网络和位于服务器中的第二神经网络为例进行说明。其中,第一神经网络的大小小于第二神经网络的大小,可选来说,可以是第一神经网络在网络层和/或参数数量上小于第二神经网络。在本申请各实施例中,第一神经网络、第二神经网络,分别可以是一个多层神经网络(即:深度神经网络),例如多层的卷积神经网络,例如可以是LeNet、AlexNet、GoogLeNet、VGG、ResNet等任意神经网络模型。第一神经网络和第二神经网络可以采用相同类型和结构的神经网络,也可以采用不同类型和结构的神经网络。如图2所示,该实施例的人脸防伪检测方法包括:
202,终端设备获取包括人脸的视频。
示例性地,经终端设备可以通过其可见光摄像头获取包括人脸的视频。
在一个可选示例中,该操作202可以由终端设备上的可见光摄像头执行。
204,将终端设备获取的包括人脸的视频输入该终端设备中的第一神经网络,由该第一神经网络提取该包括人脸的视频的特征、检测该提取的特征中是否包含伪造人脸线索信息,并输出用于表示上述包括人脸的视频是否包含至少一伪造人脸线索信息的检测结果。
其中的伪造人脸线索信息具有可见光条件下的人眼可观测性。第一神经网络基于包括有伪造人脸线索信息的训练用图像集预先训练完成。
示例性地,在本申请各实施例中提取的各项特征中包含的伪造人脸线索,可以预先通过训练第一神经网络,被第一神经网络学习到,之后任何包含这些伪造人脸线索信息的图像输入第一神经网络后均会被检测出来,就可以判断为伪造人脸图像,否则为真实人脸图像。
在一个可选示例中,该操作204可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的终端设备中的第一神经网络执行。
206,响应于上述包括人脸的视频未包含伪造人脸线索信息的检测结果,终端设备从上述包括人脸的视频中选取部分视频或图像作为待检测图像或视频发送给服务器。
其中的部分视频可以是一段包括一张或多张待检测图像的视频流,也可以仅仅是一张或多张待检测图像;可以根据预先设置选取,也可以根据实际情况实时调整。
可选地,若第一神经网络输出上述包括人脸的视频包含伪造人脸线索信息的检测结果,可以确定待检测图像或视频未通过人脸防伪检测,可以由终端设备根据该第一神经网络输出的检测结果确定上述待检测图像或视频中的人脸未通过人脸防伪检测的确定结果,不再执行本实施例的后续流程。
在一个可选示例中,该操作206可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的终端设备中的第一发送模块执行。
208,服务器接收到终端设备发送的包括人脸的待检测图像或视频后,将包括人脸的待检测图像或视频输入第二神经网络,由第二神经网络提取待检测图像或视频的特征、检测提取的特征中是否包含伪造人脸线索信息,并输出用于表示该待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果。
其中的伪造人脸线索信息具有可见光条件下的人眼可观测性。第二神经网络基于包括有伪造人脸线索信息的训练用图像集预先训练完成。
示例性地,在本申请各实施例中提取的各项特征中包含的伪造人脸线索,可以预先通过训练第二神经网络,被第二神经网络学习到,之后任何包含这些伪造人脸线索信息的图像输入第二神经网络后均会被检测出来,就可以判断为伪造人脸图像,否则为真实人脸图像。
示例性地,若操作206中,终端设备向服务器发送的是视频,则该操作208中,服务器可以从接收到的该视频中选取至少一张图像作为待检测图像输入第二神经网络,由第二神经网络提取待检测图像的特征、检测提取的特征中是否包含伪造人脸线索信息,并输出用于表示该待检测图像是否包含至少一伪造人脸线索信息的检测结果。
另外,若操作206中,终端设备向服务器发送的是图像,该操作208中,服务器可以将接收到的图像全部输入第二神经网络、或从接收到的图像中选取至少一张图像输入第二神经网络,由第二神经网络提取接收到的图像的特征、检测提取的特征中是否包含伪造人脸线索信息,并输出用于表示该图像是否包含至少一伪造人脸线索信息的检测结果。
在一个可选示例中,该操作208可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的服务器中的第一获取模块和第二神经网络执行。
210,服务器根据检测结果确定上述待检测图像或视频中的人脸是否通过人脸防伪检测,并向终端设备发送上述待检测图像或视频中的人脸是否通过人脸防伪检测的确定结果。
在一个可选示例中,该操作210可以由处理器调用存储器存储的相应指令执行,也可以由服务器中的确定模块执行。
另外,在另一可选实施例中,操作210中,也可以替换性地由服务器将第二神经网络输出的用于表示该待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果发送给终端设备,由终端设备根据第一神经网络输出的检测结果和第二神经网络输出的检测结果确定待检测图像或视频中的人脸是否通过人脸防伪检测,而不由服务器确定上述待检测图像或视频中的人脸是否通过人脸防伪检测、并向终端设备发送确定结果。
可选的,若第一神经网络输出的检测结果为待检测图像或视频包含至少一伪造人脸线索信息,确定待检测图像或视频中的人脸未通过人脸防伪检测;若第一神经网络输出的检测结果为待检测图像或视频未包含任一伪造人脸线索信息,但是第二神经网络输出的检测结果为待检测图像或视频包含至少一伪造人脸线索信息,确定待检测图像或视频中的人脸未通过人脸防伪检测;若第一神经网络输出的检测结果为待检测图像或视频未包含任一伪造人脸线索信息、且第二神经网络输出的检测结果为待检测图像或视频未包含任一伪造人脸线索信息,确定待检测图像或视频中的人脸通过人脸防伪检测。
由于终端设备的硬件性能通常有限,进行更多特征提取和检测的神经网络将需要更多的计算和存储资源,而终端设备的计算、存储资源相对于云端服务器比较有限,为了节省终端设备侧神经网络占用的计算和存储资源、又能保证实现有效的人脸防伪检测,本申请实施例中,在终端设备中设置较小(网络较浅和/或网络参数较少)的第一神经网络,融合较少特征,例如仅从待检测图像或视频中提取LBP特征与人脸SMALL特征、来进行相应的伪造人脸线索信息的检测,在硬件性能较好的云端服务器设置较大(网络较深和/或网络参数较多)的第二神经网络,融合全面的防伪线索特征,使得该第二神经网络更加健壮、检测性能更好,除了从待检测图像或视频中提取LBP特征与人脸SMALL特征,还可以提取HSC特征、LARGE特征、TINY特征等其他可能包含伪造人脸线索信息的特征,在第一神经网络采集到的视频中人脸通过人脸防伪检测时,再通过第二神经网络进行更加精确、全面的人脸防伪检测,提高了检测结果的准确性;在第一神经网络采集到的视频中人脸未通过人脸防伪检测时,便无需通过第二神经网络进行人脸防伪检测,提升了人脸防伪检测的效率。
进一步地,在本申请各实施例的一可选示例中,操作206中终端设备从上述包括人脸的视频中选取部分视频或图像作为待检测图像或视频发送给服务器,可以包括:
获取该终端设备当前使用的网络状况,该操作可以由终端设备在执行操作202之前执行,或者在操作206之前的任意时刻执行;
在终端设备当前使用的网络状况(例如使用的网络类型、网络带宽等)满足第一预设条件时,例如,在终端设备当前使用的网络为无线局域网(例如WiFi)、且带宽大于第一预设带宽时,终端设备从上述获取的包括人脸的视频中选取部分视频作为待检测视频发送给服务器。在终端设备当前使用的网络状况较好时,将上述待检测视频发送给所述服务器或从上述待检测视频中选取部分视频发送给服务器,由于视频包括的图像较多,在网络条件允许的情况下,向服务器发送视频用于人脸防伪检测,可以实现更全面的人脸防伪检测;
在终端设备当前使用的网络状况不满足第一预设条件、但是满足第二预设条件时,例如,在终端设备当前使用的网络为移动数据网络、且带宽大于第二预设带宽时,或者在终端设备当前使用的网络为无线局域网(例如WiFi)、且带宽小于第一预设带宽时,终端设备从上述获取的包括人脸的视频中选取一张或多张满足预设标准的图像作为待检测图像发送给服务器,从而也可以实现在网络状况较差时的人脸防伪检测。
另外,在进一步可选示例中,还可以包括:
在终端设备当前使用的网络状况不满足第二预设条件时,例如,在终端设备当前未接入任何网络时,或者在终端设备当前使用的网络带宽小于第二预设带宽时,终端设备可以输出检测失败的提示消息,也可以仅利用终端设备中的第一神经网络提取该包括人脸的视频的特征、检测该提取的特征中是否包含伪造人脸线索信息,并输出用于表示上述包括人脸的视频是否包含至少一伪造人脸线索信息的检测结果,并由终端设备根据检测结果确定人脸是否通过人脸防伪检测。
在上述实施例的一个可选示例中,从终端设备获取的视频中选取部分视频作为待检测视频发送给第二神经网络时,由第二神经网络提取待检测视频的特征、检测提取的特征中是否包含伪造人脸线索信息,包括:
服务器从待检测视频中选取至少一张图像作为待检测图像输入至第二神经网络,并经该第二神经网络输出用于表示上述待检测图像是否包含至少一伪造人脸线索信息的检测结果。
在本申请各实施例的一个可选示例中,终端设备从上述包括人脸的视频中选取部分视频或图像作为待检测图像或视频发送给服务器时,或者服务器从终端设备发送的待检测图像或视频中选取待检测图像检测伪造人脸线索信息时,可以根据预先设置的选取标准,选取高质量的图像检测伪造人脸线索信息。其中的选取标准例如可以是以下任意一项或多项:人脸朝向是否正面朝向、图像清晰度的高低、曝光度高低等,依据相应的标准选取综合质量较高的图像进行人脸防伪检测,以便提高人脸防伪检测的可行性和检测结果的准确性。
本申请实施例,可着重检测待检测图像或视频中是否具有伪造线索(即:伪造人脸线索信息),使用近乎无交互的方式验证活性,称为静默活体检测。静默活体检测全程基本无交互,极大地简化了活体检测流程,被检测者只需正对神经网络所在设备的视频或图像采集设备(例如:可见光摄像头),调整好光线和位置即可,全程不需要任何动作类的交互。本申请实施例中的神经网络通过学习训练的方法,预先学习出人眼在多个维度,可以“观测”到的伪造人脸线索信息,由此在后续应用中,判断人脸图像是否来源于真实的活体。如果待检测视频或图像包含任意人脸伪造线索信息,这些线索会被神经网络捕获到,就会提示用户其中的人脸图像为伪造人脸图像。例如,视频翻拍类的伪造人脸图像,可以通过判断人脸图像中的屏幕反光或屏幕边缘的特征,判断其中的人脸为非活体。
可选地,本申请上述任一实施例还可以先利用神经网络对终端设备获取的视频进行活体检测(302)。 示例性地,该操作302可以包括:利用神经网络对终端设备获取的视频进行规定动作的有效性检测;至少响应于该规定动作的有效性的检测结果满足预定条件,活体检测通过。
其中的规定动作可以是预先设置的规定动作或者随机选择的规定动作,即:可以要求用户在预设时间内做出预先设置的规定动作,也可以要求用户在预设时间内做出由从规定动作集中随机选择的规定动作。例如可以包括但不限于以下任意一项或几项:眨眼、张嘴、闭嘴、微笑、上点头、下点头、左转头、右转头、左歪头、右歪头、俯头、仰头等。
在活体检测通过后,再执行上述各人脸防伪检测方法实施例的流程,例如开始执行图1所示实施例中操作102或图2所示实施例中操作202的流程,进行人脸防伪检测。如图3所示,为本申请人脸防伪检测方法又一个实施例的流程图。
本申请实施例在进行活体检测的同时还实现了人脸防伪检测,能够抵御伪造攻击的情况,解决了针对进行活体检测时,不法分子易于利用被待验证用户的照片或视频伪造该用户动作的问题,提高了人脸认证技术的安全性;并且,无需借助于特殊的硬件设备,降低了由此导致的硬件成本,且可方便应用于各种人脸检测场景,适用范围广,尤其适用于通用的移动端应用。
图4为本申请实施例人脸防伪检测方法再一个实施例的流程图。如图4所示,该实施例的人脸防伪检测方法包括:
402,终端设备获取视频。
在一个可选示例中,该操作402可以由终端设备执行。
404,利用终端设备上的第一神经网络对上述获取的视频进行规定动作的有效性检测。
示例性地,第一神经网络可以通过检测视频中用户是否在预设时间内做出有效的规定动作,来判断活体检测是否通过。
至少响应于规定动作的有效性的检测结果满足预定条件,活体检测通过,执行操作406。否则,响应于规定动作的有效性的检测结果不满足预定条件,活体检测未通过,不执行本实施例的后续流程。
在一个可选示例中,该操作404可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的终端设备上的第一神经网络执行。
406,终端设备从上述获取的视频中选取包括人脸的视频或图像作为待检测视频或图像输入位于该终端设备的第一神经网络。
在一个可选示例中,该操作406可以由终端设备或其中的第一发送模块执行。
408,第一神经网络提取待检测图像或视频的特征、检测提取的特征中是否包含伪造人脸线索信息,并输出用于表示该待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果。
在一个可选示例中,该操作404可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第一神经网络执行。
410,终端设备根据第一神经网络输出的检测结果确定待检测图像或视频是否通过人脸防伪检测。
在本实施例的操作408中,从待检测视频或图像中提取的特征中包含任意一项伪造人脸线索信息时,该操作410确定待检测图像为伪造人脸图像,确定待检测图像或视频未通过人脸防伪检测。在本实施例的操作408中,从待检测视频或图像中提取的特征未包括任意一项伪造人脸线索信息时,该操作410确定待检测视频或图像不是伪造人脸图像,是真实的人脸图像,确定待检测图像或视频通过人脸防伪检测。
在一个可选示例中,该操作410可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的终端设备中的确定模块执行。
图5为本申请实施例人脸防伪检测方法还一个实施例的流程图。如图5所示,该实施例的人脸防伪检测方法包括:
502,终端设备获取视频。
在一个可选示例中,该操作502可以由终端设备执行。
504,利用终端设备上的第一神经网络对上述获取的视频进行规定动作的有效性检测。
至少响应于规定动作的有效性的检测结果满足预定条件,活体检测通过,执行操作506。否则,响应于规定动作的有效性的检测结果不满足预定条件,活体检测未通过,不执行本实施例的后续流程。
在一个可选示例中,该操作504可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的终端设备上的第一神经网络执行。
506,终端设备从上述获取的视频中选取包括人脸的视频或图像作为待检测视频或图像发送给服务器。
在一个可选示例中,该操作506可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的终端设备或其中的第一发送模块执行。
508,服务器接收到终端设备发送的待检测视频或图像后,将该待检测视频或图像输入位于该服务器上的第二神经网络。
在一个可选示例中,该操作508可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的服务器中的第一获取模块和第二神经网络执行。
510,第二神经网络提取待检测图像或视频的特征、检测提取的特征中是否包含伪造人脸线索信息,并输出用于表示该待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果。
在一个可选示例中,该操作510可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的服务器中的第二神经网络执行。
512,服务器根据检测结果确定待检测图像或视频是否通过人脸防伪检测,并向终端设备发送该待检测图像或视频是否通过人脸防伪检测的确定结果。
在本实施例的操作510中,从待检测视频或图像中提取的特征中包含任意一项伪造人脸线索信息时,该操作512确定待检测图像为伪造人脸图像,确定待检测图像或视频未通过人脸防伪检测。在本实施例的操作510中,从待检测视频或图像中提取的特征未包括任意一项伪造人脸线索信息时,该操作512确定待检测视频或图像不是伪造人脸图像,是真实的人脸图像,确定待检测图像或视频通过人脸防伪检测。
在一个可选示例中,该操作512可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的服务器中的确定模块和第二发送模块执行。
另外,在本申请实施例人脸防伪检测方法又一个实施例中,可以通过操作402-404或者操作502-504,利用第一神经网络对终端设备获取的视频进行活体检测,在活体检测通过后,从终端设备获取的视频中获取包括人脸的视频,之后执行操作204-210。
基于上述各人脸防伪检测方法实施例,可以先对视频进行活性检测,检测视频中的人脸是否具备活性,在视频通过活体检测后再进行人脸防伪检测,能够抵御伪造攻击的情况,解决了针对视频进行活体检测时,不法分子易于利用待验证用户的照片或视频伪造该用户动作的问题。
本申请实施例提供的任一种人脸防伪检测方法可以由任意适当的具有数据处理能力的设备执行,包括但不限于:终端设备和服务器等。或者,本申请实施例提供的任一种人脸防伪检测方法可以由处理器执行,如处理器通过调用存储器存储的相应指令来执行本申请实施例提及的任一种人脸防伪检测方法。下文不再赘述。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
图6为本申请人脸防伪检测系统一个实施例的结构示意图。该实施例的人脸防伪检系统可用于实现本申请上述各人脸防伪检方法实施例。如图6所示,该实施例的人脸防伪检测系统包括:第一获取模块,防伪检测模块和确定模块。其中:
第一获取模块,用于获取包括人脸的待检测图像或视频。在其中一个可选示例中,该第一获取模块可以是终端设备的可见光摄像头。
防伪检测模块,用于提取待检测图像或视频的特征、并检测提取的特征中是否包含伪造人脸线索信息,其中,伪造人脸线索信息具有可见光条件下的人眼可观测性。在本申请各实施例的一个可选示例中,本申请各实施例中提取的特征,例如可以包括但不限于以下任意多项:LBP特征、HSC特征、LARGE特征、SMALL特征、TINY特征。可选应用中,可以根据可能出现的伪造人脸线索信息对该提取的特征包括的特征项进行更新。在本申请各实施例的一个可选示例中,本申请各实施例中的伪造人脸线索信息具有可见光条件下的人眼可观测性。伪造人脸线索信息例如可以包括但不限于以下任意一项或多项:成像介质的伪造线索信息、成像媒介的伪造线索信息、真实存在的伪造人脸的线索信息。其中,成像介质的伪造线索信息例如可以包括但不限于:成像介质的边缘信息、反光信息和/或材质信息。成像媒介的伪造线索信息例如可以包括但不限于:显示设备的屏幕边缘、屏幕反光和/或屏幕摩尔纹。真实存在的伪造人脸的线索信息例如可以包括但不限于:带面具人脸的特性、模特类人脸的特性、雕塑类人脸的特性。
确定模块,用于根据检测结果确定人脸是否通过人脸防伪检测。
基于本申请上述实施例提供的人脸防伪检测系统,获取包括人脸的待检测图像或视频后,提取该待检测图像或视频的特征、并检测提取的特征中是否包含伪造人脸线索信息,根据检测结果确定该待检测图像或视频是否通过人脸防伪检测。本申请实施例无需依赖于特殊的多光谱设备,便可以实现有效人脸防伪检测,便可以实现在可见光条件下的有效人脸防伪检测,且无需借助于特殊的硬件设备,降低了由此导致的硬件成本,可方便应用于各种人脸检测场景,尤其适用于通用的移动端应用。
在本申请各人脸防伪检测系统实施例的一个可选示例中,防伪检测模块可以通过一个神经网络实现,该神经网络用于接收输入的待检测图像或视频,并输出用于表示待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果,其中,该神经网络基于包括有伪造人脸线索信息的训练用图像集预先训练完成。
示例性地,上述训练用图像集可以包括:可作为训练用正样本的多张人脸图像和可作为训练用负样本的多张图像。相应地,该实施例的人脸防伪检测系统还可以包括:第二获取模块,用于获取可作为训练用正样本的多张人脸图像;以及对获取的至少一张人脸图像的至少局部进行用于模拟伪造人脸线索信息的图像处理,以生成至少一张可作为训练用负样本的图像。
在本申请各人脸防伪检测系统实施例的一个可选示例中,上述神经网络包括:位于终端设备中的第一神经网络。相应地,该实施例中,第一获取模块和确定模块位于终端设备中。其中,确定模块可用于:根据第一神经网络输出的检测结果确定人脸是否通过人脸防伪检测。如图7所示,为本申请该实施例下人脸防伪检测系统的一个结构示意图。
在本申请各人脸防伪检测系统实施例的另一个可选示例中,第一获取模块位于服务器上,可用于接收终端设备发送的包括人脸的待检测图像或视频。相应地,该实施例中,上述神经网络包括:位于服务器中的第二神经网络。
另外,在基于上述另一个可选示例的又一个可选示例中,上述神经网络还可以包括:位于终端设备中的第一神经网络,用于接收输入的待检测图像或视频,并输出用于表示包括人脸的视频是否包含至少一伪造人脸线索信息的检测结果,其中,第一神经网络的大小小于第二神经网络的大小。如图8所示,为本申请该实施例下人脸防伪检测系统的其中一个可能的结构示意图。
可选地,在上述又一个可选示例中,人脸防伪检测系统还可以包括:第一发送模块,位于终端设备上,用于根据第一神经网络输出的检测结果,响应于包括人脸的视频未包含伪造人脸线索信息的检测结果,从包括人脸的视频中选取部分视频或图像作为待检测图像或视频发送给服务器。
示例性地,第一发送模块可用于:获取终端设备当前使用的网络状况;在终端设备当前使用的网络状况满足第一预设条件时,从终端设备获取的视频中选取部分视频作为待检测视频发送给服务器;和/或,在终端设备当前使用的网络状况不满足第一预设条件、满足第二预设条件时,从终端设备获取的视频中选取至少一张满足预设标准的图像作为待检测图像发送给服务器。
示例性地,第一发送模块从终端设备获取的视频中选取部分视频作为待检测视频发送给服务器时,服务器还可以包括:选取模块,用于从待检测视频中选取至少一张图像作为待检测图像输入至第二神经网络。
在图8所示各系统实施例的一个可选示例中,确定模块位于终端设备上,还用于响应于包括人脸的视频包含伪造人脸线索信息的检测结果,根据第一神经网络输出的检测结果确定人脸未通过人脸防伪检测。
在图8所示各系统实施例的另一个可选示例中,还可以包括:第二发送模块,位于服务器上,用于将第二神经网络输出的检测结果返回给终端设备。相应地,该实施例中,确定模块位于终端设备上,可用于根据第二神经网络输出的检测结果确定人脸是否通过人脸防伪检测。
在图8所示各系统实施例的又一个可选示例中,确定模块位于服务器上,可用于根据第二神经网络输出的检测结果,确定人脸是否通过人脸防伪检测。相应地,该实施例的人脸防伪检测系统还包括:第二发送模块,位于服务器上,用于向终端设备发送人脸是否通过人脸防伪检测的确定结果。
进一步地,在本申请上述各实施例的人脸防伪检测系统,神经网络还可以用于对终端设备获取的视频进行活体检测。
在其中一个可选示例中,神经网络对终端设备获取的视频进行活体检测时,可用于对终端设备获取的视频进行规定动作的有效性检测。至少响应于规定动作的有效性的检测结果满足预定条件,活体检测通过。
在其中一个可选示例中,神经网络可用于利用第一神经网络对终端设备获取的视频进行活体检测;以及响应于活体检测通过,利用第一神经网络提取终端设备获取的视频的特征、检测提取的特征中是否包含伪造人脸线索信息的操作;或者,神经网络,可用于利用第一神经网络对终端设备获取的视频进行活体检测;以及响应于活体检测通过,接收位于终端设备上的第一发送模块发送的待检测图像或视频,并输出用于表示待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果。
其中的规定动作可以是预先设置的规定动作或者随机选择的规定动作,即:可以要求用户在预设时间内做出预先设置的规定动作,也可以要求用户在预设时间内做出由从规定动作集中随机选择的规定动作。例如可以包括但不限于以下任意一项或几项:眨眼、张嘴、闭嘴、微笑、上点头、下点头、左转头、右转头、左歪头、右歪头、俯头、仰头等。
另外,本申请实施例还提供了一种电子设备,其可以包括如上任一实施例的人脸防伪检测系统。可选地,该电子设备例如可以是终端设备或者服务器等设备。
另外,本申请实施例提供的另一种电子设备,包括:
存储器,用于存储可执行指令;以及
处理器,用于与所述存储器通信以执行所述可执行指令从而完成本申请上述任一实施例人脸防伪检测方法的操作。
图9为本申请电子设备一个应用实施例的结构示意图。下面参考图9,其示出了适于用来实现本申请实施例的终端设备或服务器的电子设备的结构示意图。如图9所示,该电子设备包括一个或多个处理器、通信部等,所述一个或多个处理器例如:一个或多个中央处理单元(CPU),和/或一个或多个图像处理器(GPU)等,处理器可以根据存储在只读存储器(ROM)中的可执行指令或者从存储部分加载到随机访问存储器(RAM)中的可执行指令而执行各种适当的动作和处理。通信部可包括但不限于网卡,所述网卡可包括但不限于IB(Infiniband)网卡,处理器可与只读存储器和/或随机访问存储器中通信以执行可执行指令,通过总线与通信部相连、并经通信部与其他目标设备通信,从而完成本申请实施例提供的任一方法对应的操作,例如,获取包括人脸的待检测图像或视频;提取所述待检测图像或视频的特征、并检测提取的特征中是否包含伪造人脸线索信息;根据检测结果确定所述人脸是否通过人脸防伪检测。
此外,在RAM中,还可存储有装置操作所需的各种程序和数据。CPU、ROM以及RAM通过总线彼此相连。在有RAM的情况下,ROM为可选模块。RAM存储可执行指令,或在运行时向ROM中写入可执行指令,可执行指令使处理器执行本申请上述任一方法对应的操作。输入/输出(I/O)接口也连接至总线。通信部可以集成设置,也可以设置为具有多个子模块(例如多个IB网卡),并在总线链接上。
以下部件连接至I/O接口:包括键盘、鼠标等的输入部分;包括诸如阴极射线管(CRT)、液晶显示器(LCD)等以及扬声器等的输出部分;包括硬盘等的存储部分;以及包括诸如LAN卡、调制解调器等的网络接口卡的通信部分。通信部分经由诸如因特网的网络执行通信处理。驱动器也根据需要连接至I/O接口。可拆卸介质,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器上,以便于从其上读出的计算机程序根据需要被安装入存储部分。
需要说明的,如图9所示的架构仅为一种可选实现方式,在一些实践过程中,可根据实际需要对上述图9的部件数量和类型进行选择、删减、增加或替换;在不同功能部件设置上,也可采用分离设置或集成设置等实现方式,例如GPU和CPU可分离设置或者可将GPU集成在CPU上,通信部可分离设置,也可集成设置在CPU或GPU上,等等。这些可替换的实施方式均落入本申请公开的保护范围。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括有形地包含在机器可读介质上的计算机程序,计算机程序包含用于执行流程图所示的方法的程序代码,程序代码可包括对应执行本申请实施例提供的人脸防伪检测方法步骤对应的指令。在这样的实施例中,该计算机程序可以通过通信部分从网络上被下载和安装,和/或从可拆卸介质被安装。在该计算机程序被CPU执行时,执行本申请的方法中限定的上述功能。
另外,本申请实施例还提供了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在设备上运行时,所述设备中的处理器执行用于实现本申请任一实施例所述方法中各步骤的指令。
另外,本申请实施例还提供了一种计算机可读存储介质,用于存储计算机可读取的指令,所述指令被执行时执行本申请任一实施例所述方法中各步骤的操作。
本说明书中各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同或相似的部分相互参见即可。对于系统实施例而言,由于其与方法实施例基本对应,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
可能以许多方式来实现本申请的方法和装置。例如,可通过软件、硬件、固件或者软件、硬件、固件的任何组合来实现本申请的方法和装置。用于所述方法的步骤的上述顺序仅是为了进行说明,本申请的方法的步骤不限于以上可选描述的顺序,除非以其它方式特别说明。此外,在一些实施例中,还可将本申请实施为记录在记录介质中的程序,这些程序包括用于实现根据本申请的方法的机器可读指令。因而,本申请还覆盖存储用于执行根据本申请的方法的程序的记录介质。
本申请的描述是为了示例和描述起见而给出的,而并不是无遗漏的或者将本申请限于所公开的形式。很多修改和变化对于本领域的普通技术人员而言是显然的。选择和描述实施例是为了更好说明本申请的原理和实际应用,并且使本领域的普通技术人员能够理解本申请从而设计适于特定用途的带有各种修改的各种实施例。

Claims (49)

  1. 一种人脸防伪检测方法,其特征在于,包括:
    获取包括人脸的待检测图像或视频;
    提取所述待检测图像或视频的特征、并检测提取的特征中是否包含伪造人脸线索信息;
    根据检测结果确定所述人脸是否通过人脸防伪检测。
  2. 根据权利要求1所述的方法,其特征在于,提取的所述特征包括以下一项或任意多项:局部二值模式特征、稀疏编码的柱状图特征、全景图特征、人脸图特征、人脸细节图特征。
  3. 根据权利要求1或2所述的方法,其特征在于,所述伪造人脸线索信息具有可见光条件下的人眼可观测性。
  4. 根据权利要求1-3任一所述的方法,其特征在于,所述伪造人脸线索信息包括以下任意一项或多项:成像介质的伪造线索信息、成像媒介的伪造线索信息、真实存在的伪造人脸的线索信息。
  5. 根据权利要求4所述的方法,其特征在于,所述成像介质的伪造线索信息包括:成像介质的边缘信息、反光信息和/或材质信息;和/或,
    所述成像媒介的伪造线索信息包括:显示设备的屏幕边缘、屏幕反光和/或屏幕摩尔纹;和/或,
    所述真实存在的伪造人脸的线索信息包括:带面具人脸的特性、模特类人脸的特性、雕塑类人脸的特性。
  6. 根据权利要求1-5任一所述的方法,其特征在于,所述提取所述待检测图像或视频的特征、并检测提取的特征中是否包含至少一伪造人脸线索信息,包括:
    将所述待检测图像或视频输入神经网络,并经所述神经网络输出用于表示所述待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果,其中,所述神经网络基于包括有伪造人脸线索信息的训练用图像集预先训练完成。
  7. 根据权利要求6所述的方法,其特征在于,所述训练用图像集包括:可作为训练用正样本的多张人脸图像和可作为训练用负样本的多张图像;
    所述包括有伪造人脸线索信息的训练用图像集的获取方法,包括:
    获取可作为训练用正样本的多张人脸图像;
    对获取的至少一张人脸图像的至少局部进行用于模拟伪造人脸线索信息的图像处理,以生成至少一张可作为训练用负样本的图像。
  8. 根据权利要求1-7任一所述的方法,其特征在于,所述获取包括人脸的待检测图像或视频,包括:
    经终端设备的可见光摄像头获取包括人脸的待检测图像或视频。
  9. 根据权利要求6至8任意一项所述的方法,其特征在于,所述神经网络包括:位于终端设备中的第一神经网络;
    所述根据检测结果确定所述人脸是否通过人脸防伪检测,包括:所述终端设备根据所述第一神经网络输出的检测结果确定所述人脸是否通过人脸防伪检测。
  10. 根据权利要求1-8任一所述的方法,其特征在于,所述获取包括人脸的待检测图像或视频,包括:
    服务器接收终端设备发送的包括人脸的待检测图像或视频。
  11. 根据权利要求6、7或10所述的方法,其特征在于,所述神经网络包括:位于服务器中的第二神经网络。
  12. 根据权利要求11所述的方法,其特征在于,所述根据检测结果确定所述待检测图像或视频是否通过人脸防伪检测,包括:所述服务器根据所述第二神经网络输出的检测结果确定所述人脸是否通过人脸防伪检测,并向所述终端设备返回所述人脸是否通过人脸防伪检测的确定结果。
  13. 根据权利要求11所述的方法,其特征在于,所述神经网络还包括:位于终端设备中的第一神经网络;所述第一神经网络的大小小于所述第二神经网络的大小;
    所述方法还包括:
    将终端设备获取的包括人脸的视频输入第一神经网络,并经所述第一神经网络输出用于表示所述包括人脸的视频是否包含至少一伪造人脸线索信息的检测结果;
    响应于所述包括人脸的视频未包含伪造人脸线索信息的检测结果,从所述包括人脸的视频中选取部分视频或图像作为所述待检测图像或视频发送给所述服务器。
  14. 根据权利要求13所述的方法,其特征在于,从所述包括人脸的视频中选取部分视频或图像作 为所述待检测图像或视频发送给所述服务器,包括:
    获取所述终端设备当前使用的网络状况;
    在所述终端设备当前使用的网络状况满足第一预设条件时,从所述终端设备获取的视频中选取部分视频作为所述待检测视频发送给所述服务器;和/或,
    在所述终端设备当前使用的网络状况不满足第一预设条件、满足第二预设条件时,从所述终端设备获取的视频中选取至少一张满足预设标准的图像作为所述待检测图像发送给所述服务器。
  15. 根据权利要求14所述的方法,其特征在于,从所述终端设备获取的视频中选取部分视频作为所述待检测视频发送给所述服务器时,将所述待检测视频输入第二神经网络,并经所述第二神经网络输出用于表示所述待检测视频是否包含至少一伪造人脸线索信息的检测结果,包括:
    服务器从所述待检测视频中选取至少一张图像作为待检测图像输入至所述第二神经网络,并经所述第二神经网络输出用于表示所述待检测图像是否包含至少一伪造人脸线索信息的检测结果。
  16. 根据权利要求13至15任意一项所述的方法,其特征在于,响应于所述包括人脸的视频包含伪造人脸线索信息的检测结果,所述根据检测结果确定所述人脸是否通过人脸防伪检测,包括:所述终端设备根据所述第一神经网络输出的检测结果确定所述人脸未通过人脸防伪检测。
  17. 根据权利要求13至15任意一项所述的方法,其特征在于,还包括:所述服务器将所述第二神经网络输出的检测结果返回给所述终端设备;
    所述根据检测结果确定所述人脸是否通过人脸防伪检测,包括:所述终端设备根据所述第二神经网络输出的检测结果确定所述人脸是否通过人脸防伪检测。
  18. 根据权利要求13至15任意一项所述的方法,其特征在于,所述根据检测结果确定所述人脸是否通过人脸防伪检测,包括:所述服务器根据所述第二神经网络输出的检测结果,确定所述人脸是否通过人脸防伪检测,并向所述终端设备发送所述人脸是否通过人脸防伪检测的确定结果。
  19. 根据权利要求6至18任意一项所述的方法,其特征在于,还包括:
    利用所述神经网络对所述终端设备获取的视频进行活体检测;
    响应于活体检测通过,执行权利要求6至18任意一项所述的人脸防伪检测方法。
  20. 根据权利要求19所述的方法,其特征在于,利用所述神经网络对所述终端设备获取的视频进行活体检测,包括:由所述第一神经网络对所述终端设备获取的视频进行活体检测;
    响应于活体检测通过,执行权利要求6至18任意一项所述的人脸防伪检测方法,包括:
    响应于活体检测通过,执行所述将终端设备获取的视频输入第一神经网络,由所述第一神经网络提取所述终端设备获取的视频的特征、检测提取的特征中是否包含伪造人脸线索信息的操作;或者
    响应于活体检测通过,从所述终端设备获取的视频中选取部分视频或图像作为所述待检测图像或视频,执行所述将所述待检测图像或视频输入神经网络,并经所述神经网络输出用于表示所述待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果的操作。
  21. 根据权利要求19或20所述的方法,其特征在于,利用所述神经网络对所述终端设备获取的视频进行活体检测,包括:
    利用所述神经网络对所述终端设备获取的视频进行规定动作的有效性检测;
    至少响应于所述规定动作的有效性的检测结果满足预定条件,所述活体检测通过。
  22. 根据权利要求21所述的方法,其特征在于,所述规定动作包括以下任意一项或几项:眨眼、张嘴、闭嘴、微笑、上点头、下点头、左转头、右转头、左歪头、右歪头、俯头、仰头。
  23. 根据权利要求21或22所述的方法,其特征在于,所述规定动作为预先设置的规定动作或者随机选择的规定动作。
  24. 一种人脸防伪检测系统,其特征在于,包括:
    第一获取模块,用于获取包括人脸的待检测图像或视频;
    防伪检测模块,用于提取所述待检测图像或视频的特征、并检测提取的特征中是否包含伪造人脸线索信息;
    确定模块,用于根据检测结果确定所述人脸是否通过人脸防伪检测。
  25. 根据权利要求24所述的系统,其特征在于,所述防伪检测模块提取的所述特征包括以下一项或任意多项:局部二值模式特征、稀疏编码的柱状图特征、全景图特征、人脸图特征、人脸细节图特征。
  26. 根据权利要求24或25所述的系统,其特征在于,所述伪造人脸线索信息具有可见光条件下的人眼可观测性。
  27. 根据权利要求24至26任意一项所述的系统,其特征在于,所述伪造人脸线索信息包括以下任意一项或多项:成像介质的伪造线索信息、成像媒介的伪造线索信息、真实存在的伪造人脸的线索信息。
  28. 根据权利要求27所述的系统,其特征在于,所述成像介质的伪造线索信息包括:成像介质的 边缘信息、反光信息和/或材质信息;和/或,
    所述成像媒介的伪造线索信息包括:显示设备的屏幕边缘、屏幕反光和/或屏幕摩尔纹;和/或,
    所述真实存在的伪造人脸的线索信息包括:带面具人脸的特性、模特类人脸的特性、雕塑类人脸的特性。
  29. 根据权利要求24至28任意一项所述的系统,其特征在于,所述防伪检测模块包括:
    神经网络,用于接收输入的所述待检测图像或视频,并输出用于表示所述待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果,其中,所述神经网络基于包括有伪造人脸线索信息的训练用图像集预先训练完成。
  30. 根据权利要求29所述的系统,其特征在于,所述训练用图像集包括:可作为训练用正样本的多张人脸图像和可作为训练用负样本的多张图像;
    所述系统还包括:
    第二获取模块,用于获取可作为训练用正样本的多张人脸图像;以及对获取的至少一张人脸图像的至少局部进行用于模拟伪造人脸线索信息的图像处理,以生成至少一张可作为训练用负样本的图像。
  31. 根据权利要求24至30任意一项所述的系统,其特征在于,所述第一获取模块包括:
    终端设备的可见光摄像头。
  32. 根据权利要求29至31任意一项所述的系统,其特征在于,所述神经网络包括:位于终端设备中的第一神经网络;
    所述确定模块位于所述终端设备中,用于:根据所述第一神经网络输出的检测结果确定所述人脸是否通过人脸防伪检测。
  33. 根据权利要求21至30任意一项所述的系统,其特征在于,所述第一获取模块位于服务器上,用于接收终端设备发送的包括人脸的待检测图像或视频;
  34. 根据权利要求29、30或33所述的系统,其特征在于,所述神经网络包括:位于服务器中的第二神经网络。
  35. 根据权利要求34所述的系统,其特征在于,所述神经网络还包括:位于终端设备中的第一神经网络,用于接收输入的所述待检测图像或视频,并输出用于表示所述包括人脸的视频是否包含至少一伪造人脸线索信息的检测结果;所述第一神经网络的大小小于所述第二神经网络的大小;
    所述系统还包括:
    第一发送模块,位于所述终端设备上,用于根据所述第一神经网络输出的检测结果,响应于所述包括人脸的视频未包含伪造人脸线索信息的检测结果,从所述包括人脸的视频中选取部分视频或图像作为所述待检测图像或视频发送给所述服务器。
  36. 根据权利要求35所述的系统,其特征在于,所述第一发送模块用于:
    获取所述终端设备当前使用的网络状况;
    在所述终端设备当前使用的网络状况满足第一预设条件时,从所述终端设备获取的视频中选取部分视频作为所述待检测视频发送给所述服务器;和/或,
    在所述终端设备当前使用的网络状况不满足第一预设条件、满足第二预设条件时,从所述终端设备获取的视频中选取至少一张满足预设标准的图像作为所述待检测图像发送给所述服务器。
  37. 根据权利要求36所述的系统,其特征在于,所述第一发送模块从所述终端设备获取的视频中选取部分视频作为所述待检测视频发送给所述服务器时,所述服务器还包括:
    选取模块,用于从所述待检测视频中选取至少一张图像作为待检测图像输入至所述第二神经网络。
  38. 根据权利要求35至37任意一项所述的系统,其特征在于,所述确定模块位于所述终端设备上,还用于响应于所述包括人脸的视频包含伪造人脸线索信息的检测结果,根据所述第一神经网络输出的检测结果确定所述人脸未通过人脸防伪检测。
  39. 根据权利要求35至37任意一项所述的系统,其特征在于,还包括:
    第二发送模块,位于所述服务器上,用于将所述第二神经网络输出的检测结果返回给所述终端设备;
    所述确定模块位于所述终端设备上,用于根据所述第二神经网络输出的检测结果确定所述人脸是否通过人脸防伪检测。
  40. 根据权利要求35至37任意一项所述的系统,其特征在于,所述确定模块位于所述服务器上,用于根据所述第二神经网络输出的检测结果,确定所述人脸是否通过人脸防伪检测;
    所述系统还包括:
    第二发送模块,位于所述服务器上,用于向所述终端设备发送所述人脸是否通过人脸防伪检测的确定结果。
  41. 根据权利要求29至40任意一项所述的系统,其特征在于,所述神经网络,还用于对所述终端 设备获取的视频进行活体检测。
  42. 根据权利要求41所述的系统,其特征在于,所述神经网络,用于利用第一神经网络对所述终端设备获取的视频进行活体检测;以及响应于活体检测通过,利用所述第一神经网络提取所述终端设备获取的视频的特征、检测提取的特征中是否包含伪造人脸线索信息的操作;或者
    所述神经网络,用于利用第一神经网络对所述终端设备获取的视频进行活体检测;以及响应于活体检测通过,接收位于所述终端设备上的第一发送模块发送的所述待检测图像或视频,并输出用于表示所述待检测图像或视频是否包含至少一伪造人脸线索信息的检测结果。
  43. 根据权利要求41或42所述的系统,其特征在于,所述神经网络对所述终端设备获取的视频进行活体检测时,用于:对所述终端设备获取的视频进行规定动作的有效性检测;
    至少响应于所述规定动作的有效性的检测结果满足预定条件,所述活体检测通过。
  44. 根据权利要求43所述的系统,其特征在于,所述规定动作包括以下任意一项或几项:眨眼、张嘴、闭嘴、微笑、上点头、下点头、左转头、右转头、左歪头、右歪头、俯头、仰头。
  45. 根据权利要求43或44所述的系统,其特征在于,所述规定动作为预先设置的规定动作或者随机选择的规定动作。
  46. 一种电子设备,其特征在于,包括权利要求24-45任一所述的人脸防伪检测系统。
  47. 一种电子设备,其特征在于,包括:
    存储器,用于存储可执行指令;以及
    处理器,用于与所述存储器通信以执行所述可执行指令从而完成权利要求1-23任一所述方法的操作。
  48. 一种计算机程序,包括计算机可读代码,其特征在于,当所述计算机可读代码在设备上运行时,所述设备中的处理器执行用于实现权利要求1-23任一所述方法中各步骤的指令。
  49. 一种计算机可读存储介质,用于存储计算机可读取的指令,其特征在于,所述指令被执行时执行权利要求1-23任一所述方法中各步骤的操作。
PCT/CN2018/079247 2017-03-16 2018-03-16 人脸防伪检测方法和系统、电子设备、程序和介质 WO2018166515A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/451,208 US11080517B2 (en) 2017-03-16 2019-06-25 Face anti-counterfeiting detection methods and systems, electronic devices, programs and media
US17/203,435 US11482040B2 (en) 2017-03-16 2021-03-16 Face anti-counterfeiting detection methods and systems, electronic devices, programs and media

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201710157715 2017-03-16
CN201710157715.1 2017-03-16
CN201711251762.9A CN108229326A (zh) 2017-03-16 2017-12-01 人脸防伪检测方法和系统、电子设备、程序和介质
CN201711251762.9 2017-12-01

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/451,208 Continuation US11080517B2 (en) 2017-03-16 2019-06-25 Face anti-counterfeiting detection methods and systems, electronic devices, programs and media

Publications (1)

Publication Number Publication Date
WO2018166515A1 true WO2018166515A1 (zh) 2018-09-20

Family

ID=62653294

Family Applications (3)

Application Number Title Priority Date Filing Date
PCT/CN2018/079273 WO2018166525A1 (zh) 2017-03-16 2018-03-16 人脸防伪检测方法和系统、电子设备、程序和介质
PCT/CN2018/079247 WO2018166515A1 (zh) 2017-03-16 2018-03-16 人脸防伪检测方法和系统、电子设备、程序和介质
PCT/CN2018/079267 WO2018166524A1 (zh) 2017-03-16 2018-03-16 人脸检测方法和系统、电子设备、程序和介质

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/079273 WO2018166525A1 (zh) 2017-03-16 2018-03-16 人脸防伪检测方法和系统、电子设备、程序和介质

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/079267 WO2018166524A1 (zh) 2017-03-16 2018-03-16 人脸检测方法和系统、电子设备、程序和介质

Country Status (3)

Country Link
US (2) US11080517B2 (zh)
CN (5) CN108229326A (zh)
WO (3) WO2018166525A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444873A (zh) * 2020-04-02 2020-07-24 北京迈格威科技有限公司 视频中人物真伪的检测方法、装置、电子设备及存储介质
WO2020159437A1 (en) * 2019-01-29 2020-08-06 Agency For Science, Technology And Research Method and system for face liveness detection
CN113095272A (zh) * 2021-04-23 2021-07-09 深圳前海微众银行股份有限公司 活体检测方法、设备、介质及计算机程序产品
CN114760524A (zh) * 2020-12-25 2022-07-15 深圳Tcl新技术有限公司 视频处理方法、装置、智能终端及计算机可读存储介质
CN115063870A (zh) * 2022-07-08 2022-09-16 广东警官学院(广东省公安司法管理干部学院) 一种基于面部动作单元的伪造视频人像检测方法

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10635894B1 (en) * 2016-10-13 2020-04-28 T Stamp Inc. Systems and methods for passive-subject liveness verification in digital media
CN108229326A (zh) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 人脸防伪检测方法和系统、电子设备、程序和介质
US11093771B1 (en) 2018-05-04 2021-08-17 T Stamp Inc. Systems and methods for liveness-verified, biometric-based encryption
US11496315B1 (en) 2018-05-08 2022-11-08 T Stamp Inc. Systems and methods for enhanced hash transforms
CN109034059B (zh) * 2018-07-25 2023-06-06 深圳市中悦科技有限公司 静默式人脸活体检测方法、装置、存储介质及处理器
CN110163053B (zh) 2018-08-02 2021-07-13 腾讯科技(深圳)有限公司 生成人脸识别的负样本的方法、装置及计算机设备
CN109325413A (zh) * 2018-08-17 2019-02-12 深圳市中电数通智慧安全科技股份有限公司 一种人脸识别方法、装置及终端
US11461384B2 (en) * 2018-09-10 2022-10-04 Algoface, Inc. Facial images retrieval system
CN111046703B (zh) * 2018-10-12 2023-04-18 杭州海康威视数字技术股份有限公司 人脸防伪检测方法、装置及多目相机
KR102140340B1 (ko) * 2018-10-18 2020-07-31 엔에이치엔 주식회사 컨볼루션 뉴럴 네트워크를 통해 이미지 위변조를 탐지하는 시스템 및 이를 이용하여 무보정 탐지 서비스를 제공하는 방법
CN111291586B (zh) * 2018-12-06 2024-05-24 北京市商汤科技开发有限公司 活体检测方法、装置、电子设备及计算机可读存储介质
CN109670430A (zh) * 2018-12-11 2019-04-23 浙江大学 一种基于深度学习的多分类器融合的人脸活体识别方法
CN111488756B (zh) * 2019-01-25 2023-10-03 杭州海康威视数字技术股份有限公司 基于面部识别的活体检测的方法、电子设备和存储介质
CN109828668A (zh) * 2019-01-30 2019-05-31 维沃移动通信有限公司 一种显示控制方法及电子设备
CN109858471A (zh) * 2019-04-03 2019-06-07 深圳市华付信息技术有限公司 基于图像质量的活体检测方法、装置及计算机设备
US11301586B1 (en) 2019-04-05 2022-04-12 T Stamp Inc. Systems and processes for lossy biometric representations
CN111783505A (zh) * 2019-05-10 2020-10-16 北京京东尚科信息技术有限公司 伪造人脸的识别方法、装置和计算机可读存储介质
CN110245612A (zh) 2019-06-14 2019-09-17 百度在线网络技术(北京)有限公司 人脸图像的检测方法和装置
CN110248100B (zh) * 2019-07-18 2021-02-19 联想(北京)有限公司 一种拍摄方法、装置及存储介质
CN110414437A (zh) * 2019-07-30 2019-11-05 上海交通大学 基于卷积神经网络模型融合篡改人脸检测分析方法和系统
WO2021038298A2 (en) 2019-08-29 2021-03-04 PXL Vision AG Id verification with a mobile device
CN110555931A (zh) * 2019-08-31 2019-12-10 华南理工大学 一种基于深度学习识别的人脸检测与门禁系统装置
CN110688946A (zh) * 2019-09-26 2020-01-14 上海依图信息技术有限公司 基于图片识别的公有云静默活体检测设备和方法
US11687778B2 (en) 2020-01-06 2023-06-27 The Research Foundation For The State University Of New York Fakecatcher: detection of synthetic portrait videos using biological signals
US11403369B2 (en) 2020-01-21 2022-08-02 Disney Enterprises, Inc. Secure content processing pipeline
CN111339832B (zh) * 2020-02-03 2023-09-12 中国人民解放军国防科技大学 人脸合成图像的检测方法及装置
US11425120B2 (en) 2020-02-11 2022-08-23 Disney Enterprises, Inc. Systems for authenticating digital contents
CN111611873B (zh) * 2020-04-28 2024-07-16 平安科技(深圳)有限公司 人脸替换检测方法及装置、电子设备、计算机存储介质
US11967173B1 (en) 2020-05-19 2024-04-23 T Stamp Inc. Face cover-compatible biometrics and processes for generating and using same
CN111611967A (zh) * 2020-05-29 2020-09-01 哈尔滨理工大学 一种人脸识别的活体检测方法
CN111739046A (zh) * 2020-06-19 2020-10-02 百度在线网络技术(北京)有限公司 用于模型更新和检测图像的方法、装置、设备和介质
CN112085701B (zh) * 2020-08-05 2024-06-11 深圳市优必选科技股份有限公司 一种人脸模糊度检测方法、装置、终端设备及存储介质
CN111986180B (zh) * 2020-08-21 2021-07-06 中国科学技术大学 基于多相关帧注意力机制的人脸伪造视频检测方法
CN112307973B (zh) * 2020-10-30 2023-04-18 中移(杭州)信息技术有限公司 活体检测方法、系统、电子设备和存储介质
CN112689159B (zh) * 2020-12-22 2023-04-18 深圳市九洲电器有限公司 视频文件处理方法、装置、终端设备以及存储介质
CN112580621B (zh) * 2020-12-24 2022-04-29 成都新希望金融信息有限公司 身份证翻拍识别方法、装置、电子设备及存储介质
CN112733760B (zh) * 2021-01-15 2023-12-12 上海明略人工智能(集团)有限公司 人脸防伪检测方法及系统
US12079371B1 (en) 2021-04-13 2024-09-03 T Stamp Inc. Personal identifiable information encoder
CN112906676A (zh) * 2021-05-06 2021-06-04 北京远鉴信息技术有限公司 人脸图像来源的识别方法、装置、存储介质及电子设备
CN113538185A (zh) * 2021-07-15 2021-10-22 山西安弘检测技术有限公司 一种放射卫生检测的模拟训练方法及装置
CN113486853B (zh) * 2021-07-29 2024-02-27 北京百度网讯科技有限公司 视频检测方法及装置、电子设备和介质
CN113869906A (zh) * 2021-09-29 2021-12-31 北京市商汤科技开发有限公司 人脸支付方法及装置、存储介质
CN113869253A (zh) * 2021-09-29 2021-12-31 北京百度网讯科技有限公司 活体检测方法、训练方法、装置、电子设备及介质
CN114359798A (zh) * 2021-12-29 2022-04-15 天翼物联科技有限公司 实人认证的数据稽核方法、装置、计算机设备及存储介质
CN114694209A (zh) * 2022-02-07 2022-07-01 湖南信达通信息技术有限公司 视频处理方法、装置、电子设备及计算机存储介质
CN115174138B (zh) * 2022-05-25 2024-06-07 北京旷视科技有限公司 摄像头攻击检测方法、系统、设备、存储介质及程序产品
TWI807851B (zh) * 2022-06-08 2023-07-01 中華電信股份有限公司 一種領域泛化之人臉防偽的特徵解析系統、方法及其電腦可讀媒介
CN115272130A (zh) * 2022-08-22 2022-11-01 苏州大学 基于多光谱级联归一化的图像去摩尔纹系统及方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105023005A (zh) * 2015-08-05 2015-11-04 王丽婷 人脸识别装置及其识别方法
CN105205455A (zh) * 2015-08-31 2015-12-30 李岩 一种移动平台上人脸识别的活体检测方法及系统
CN105447432A (zh) * 2014-08-27 2016-03-30 北京千搜科技有限公司 一种基于局部运动模式的人脸防伪方法
CN105513221A (zh) * 2015-12-30 2016-04-20 四川川大智胜软件股份有限公司 一种基于三维人脸识别的atm机防欺诈装置及系统

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2398711B (en) * 2003-02-18 2006-04-12 Samsung Electronics Co Ltd Neural networks
US7418128B2 (en) * 2003-07-31 2008-08-26 Microsoft Corporation Elastic distortions for automatic generation of labeled data
WO2006088042A1 (ja) * 2005-02-16 2006-08-24 Matsushita Electric Industrial Co., Ltd. 生体判別装置および認証装置ならびに生体判別方法
CN101833646B (zh) * 2009-03-11 2012-05-02 北京中科虹霸科技有限公司 一种虹膜活体检测方法
CN101669824B (zh) * 2009-09-22 2012-01-25 浙江工业大学 基于生物特征识别的人与身份证同一性检验装置
US9026580B2 (en) * 2009-11-20 2015-05-05 Microsoft Technology Licensing, Llc Validation pipeline
US9202105B1 (en) * 2012-01-13 2015-12-01 Amazon Technologies, Inc. Image analysis for user authentication
US9025830B2 (en) * 2012-01-20 2015-05-05 Cyberlink Corp. Liveness detection system based on face behavior
CN102622588B (zh) * 2012-03-08 2013-10-09 无锡中科奥森科技有限公司 Dual-verification face anti-spoofing method and apparatus
US9177246B2 (en) * 2012-06-01 2015-11-03 Qualcomm Technologies Inc. Intelligent modular robotic apparatus and methods
US10452894B2 (en) * 2012-06-26 2019-10-22 Qualcomm Incorporated Systems and method for facial verification
US9811775B2 (en) * 2012-12-24 2017-11-07 Google Inc. Parallelizing neural networks during training
CN103106397B (zh) * 2013-01-19 2016-09-21 华南理工大学 Face liveness detection method based on the bright pupil effect
CN104143078B (zh) * 2013-05-09 2016-08-24 腾讯科技(深圳)有限公司 Live face recognition method, apparatus, and device
US9460382B2 (en) * 2013-12-23 2016-10-04 Qualcomm Incorporated Neural watchdog
CN103886301B (zh) 2014-03-28 2017-01-18 北京中科奥森数据科技有限公司 Face liveness detection method
US9484022B2 (en) * 2014-05-23 2016-11-01 Google Inc. Training multiple neural networks with different accuracy
US20160005050A1 (en) * 2014-07-03 2016-01-07 Ari Teman Method and system for authenticating user identity and detecting fraudulent content associated with online activities
US10417525B2 (en) * 2014-09-22 2019-09-17 Samsung Electronics Co., Ltd. Object recognition with reduced neural network weight precision
CN105354557B (zh) 2014-11-03 2019-04-16 苏州思源科安信息技术有限公司 Biometric anti-counterfeiting liveness detection method
CN104615997B (zh) * 2015-02-15 2018-06-19 四川川大智胜软件股份有限公司 Multi-camera-based face anti-spoofing method
CN106033435B (zh) * 2015-03-13 2019-08-02 北京贝虎机器人技术有限公司 Object recognition method and apparatus, and indoor map generation method and apparatus
US10079827B2 (en) * 2015-03-16 2018-09-18 Ricoh Company, Ltd. Information processing apparatus, information processing method, and information processing system
CN204481940U (zh) * 2015-04-07 2015-07-15 北京市商汤科技开发有限公司 Mobile terminal with binocular cameras for photographing
US10275672B2 (en) * 2015-04-29 2019-04-30 Beijing Kuangshi Technology Co., Ltd. Method and apparatus for authenticating liveness face, and computer program product thereof
CN104766072A (zh) * 2015-04-29 2015-07-08 深圳市保千里电子有限公司 Live face recognition apparatus and method of using the same
WO2016197298A1 (zh) * 2015-06-08 2016-12-15 北京旷视科技有限公司 Liveness detection method, liveness detection system, and computer program product
US10878320B2 (en) * 2015-07-22 2020-12-29 Qualcomm Incorporated Transfer learning in neural networks
CN105117695B (zh) * 2015-08-18 2017-11-24 北京旷视科技有限公司 Liveness detection device and liveness detection method
CN105005779A (zh) * 2015-08-25 2015-10-28 湖北文理学院 Face verification anti-spoofing recognition method and system based on interactive actions
CN105224924A (zh) * 2015-09-29 2016-01-06 小米科技有限责任公司 Live face recognition method and apparatus
CN106897658B (zh) * 2015-12-18 2021-12-14 腾讯科技(深圳)有限公司 Method and apparatus for discriminating live faces
CN105718863A (zh) * 2016-01-15 2016-06-29 北京海鑫科金高科技股份有限公司 Face liveness detection method, apparatus, and system
CN105718874A (zh) * 2016-01-18 2016-06-29 北京天诚盛业科技有限公司 Method and apparatus for liveness detection and authentication
CN105930797B (zh) * 2016-04-21 2019-03-26 腾讯科技(深圳)有限公司 Face verification method and apparatus
CN106096519A (zh) * 2016-06-01 2016-11-09 腾讯科技(深圳)有限公司 Liveness discrimination method and apparatus
CN106203305B (zh) * 2016-06-30 2020-02-04 北京旷视科技有限公司 Face liveness detection method and apparatus
KR102483642B1 (ko) * 2016-08-23 2023-01-02 삼성전자주식회사 Liveness test method and apparatus
US10664722B1 (en) * 2016-10-05 2020-05-26 Digimarc Corporation Image processing arrangements
CN106372629B (zh) * 2016-11-08 2020-02-07 汉王科技股份有限公司 Liveness detection method and apparatus
CN108229326A (zh) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 Face anti-spoofing detection method and system, electronic device, program, and medium
KR102387571B1 (ko) * 2017-03-27 2022-04-18 삼성전자주식회사 Liveness test method and apparatus
KR102324468B1 (ko) * 2017-03-28 2021-11-10 삼성전자주식회사 Apparatus and method for face verification
CN108229120B (zh) * 2017-09-07 2020-07-24 北京市商汤科技开发有限公司 Face unlocking and information registration method and apparatus therefor, device, program, and medium
US10970571B2 (en) * 2018-06-04 2021-04-06 Shanghai Sensetime Intelligent Technology Co., Ltd. Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447432A (zh) * 2014-08-27 2016-03-30 北京千搜科技有限公司 Face anti-spoofing method based on local motion patterns
CN105023005A (zh) * 2015-08-05 2015-11-04 王丽婷 Face recognition apparatus and recognition method thereof
CN105205455A (zh) * 2015-08-31 2015-12-30 李岩 Liveness detection method and system for face recognition on a mobile platform
CN105513221A (zh) * 2015-12-30 2016-04-20 四川川大智胜软件股份有限公司 ATM anti-fraud apparatus and system based on three-dimensional face recognition

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020159437A1 (en) * 2019-01-29 2020-08-06 Agency For Science, Technology And Research Method and system for face liveness detection
CN111444873A (zh) * 2020-04-02 2020-07-24 北京迈格威科技有限公司 Method and apparatus for detecting the authenticity of persons in a video, electronic device, and storage medium
CN111444873B (zh) * 2020-04-02 2023-12-12 北京迈格威科技有限公司 Method and apparatus for detecting the authenticity of persons in a video, electronic device, and storage medium
CN114760524A (zh) * 2020-12-25 2022-07-15 深圳Tcl新技术有限公司 Video processing method and apparatus, smart terminal, and computer-readable storage medium
CN113095272A (zh) * 2021-04-23 2021-07-09 深圳前海微众银行股份有限公司 Liveness detection method, device, medium, and computer program product
CN113095272B (zh) * 2021-04-23 2024-03-29 深圳前海微众银行股份有限公司 Liveness detection method, device, medium, and computer program product
CN115063870A (zh) * 2022-07-08 2022-09-16 广东警官学院(广东省公安司法管理干部学院) Forged video portrait detection method based on facial action units
CN115063870B (zh) * 2022-07-08 2024-04-30 广东警官学院(广东省公安司法管理干部学院) Forged video portrait detection method based on facial action units

Also Published As

Publication number Publication date
CN108229325A (zh) 2018-06-29
CN108229329A (zh) 2018-06-29
CN108229328A (zh) 2018-06-29
CN108229331A (zh) 2018-06-29
WO2018166524A1 (zh) 2018-09-20
US11080517B2 (en) 2021-08-03
CN108229326A (zh) 2018-06-29
US20210200995A1 (en) 2021-07-01
US11482040B2 (en) 2022-10-25
WO2018166525A1 (zh) 2018-09-20
CN108229329B (zh) 2022-01-28
US20190318156A1 (en) 2019-10-17

Similar Documents

Publication Publication Date Title
WO2018166515A1 (zh) Face anti-spoofing detection method and system, electronic device, program, and medium
WO2020034733A1 (zh) Identity authentication method and apparatus, electronic device, and storage medium
WO2020199577A1 (zh) Liveness detection method and apparatus, device, and storage medium
US11107232B2 (en) Method and apparatus for determining object posture in image, device, and storage medium
Boulkenafet et al. OULU-NPU: A mobile face presentation attack database with real-world variations
US10810423B2 (en) Iris liveness detection for mobile devices
US11354917B2 (en) Detection of fraudulently generated and photocopied credential documents
US11244152B1 (en) Systems and methods for passive-subject liveness verification in digital media
Galbally et al. Three‐dimensional and two‐and‐a‐half‐dimensional face recognition spoofing using three‐dimensional printed models
CN109255299A (zh) Identity authentication method and apparatus, electronic device, and storage medium
WO2020048140A1 (zh) Liveness detection method and apparatus, electronic device, and computer-readable storage medium
CN109359502A (zh) Anti-spoofing detection method and apparatus, electronic device, and storage medium
US11373449B1 (en) Systems and methods for passive-subject liveness verification in digital media
Safaa El‐Din et al. Deep convolutional neural networks for face and iris presentation attack detection: Survey and case study
CN109543635A (zh) Liveness detection method, apparatus, and system, unlocking method, terminal, and storage medium
CN111931544A (zh) Liveness detection method and apparatus, computing device, and computer storage medium
CN111209863B (zh) Liveness model training and face liveness detection method, apparatus, and electronic device
Priyanka et al. Genuine selfie detection algorithm for social media using image quality measures
CN118279952A (zh) Identity verification method and apparatus, and computer-readable storage medium
CN116486202A (zh) Method, apparatus, and device for determining training samples for a live face detection model
CN114743279A (zh) Liveness detection function generation method and apparatus, storage medium, and device

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18767208; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the EP bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 17.12.2019))
122 Ep: PCT application non-entry in European phase (Ref document number: 18767208; Country of ref document: EP; Kind code of ref document: A1)