WO2020083111A1 - Living body detection method and apparatus, electronic device, storage medium, and related systems applying the living body detection method - Google Patents


Info

Publication number
WO2020083111A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
detected
constraint
living body
constraint frame
Prior art date
Application number
PCT/CN2019/111912
Other languages
English (en)
French (fr)
Inventor
罗文寒
王耀东
刘威
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority to EP19876300.5A (EP3872689B1)
Publication of WO2020083111A1
Priority to US17/073,035 (US11551481B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/188 Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 Payment architectures, schemes or protocols
    • G06Q 20/08 Payment architectures
    • G06Q 20/18 Payment architectures involving self-service terminals [SST], vending machines, kiosks or multimedia terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 Payment architectures, schemes or protocols
    • G06Q 20/30 Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06Q 20/32 Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 Payment architectures, schemes or protocols
    • G06Q 20/38 Payment protocols; Details thereof
    • G06Q 20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q 20/401 Transaction verification
    • G06Q 20/4014 Identity check for transactions
    • G06Q 20/40145 Biometric identity checks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/292 Multi-camera tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 Individual registration on entry or exit
    • G07C 9/00174 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C 9/00563 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys using personal physical data of the operator, e.g. fingerprints, retinal images, voice patterns
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 Individual registration on entry or exit
    • G07C 9/10 Movable barriers with registering means
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 Individual registration on entry or exit
    • G07C 9/30 Individual registration on entry or exit not involving the use of a pass
    • G07C 9/32 Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C 9/37 Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30232 Surveillance

Definitions

  • The embodiments of the present application relate to the technical field of biometric recognition, and in particular to a living body detection method and apparatus, an electronic device, a storage medium, and a payment system, video monitoring system, and access control system applying the living body detection method.
  • Biometric recognition is widely used, for example, in face-based payment, face recognition in video surveillance, and fingerprint recognition and iris recognition in access control authorization.
  • At the same time, biometric recognition faces various threats, such as attackers using forged faces, fingerprints, irises, and the like for biometric recognition.
  • Various embodiments of the present application provide a living body detection method and apparatus, an electronic device, a storage medium, and a payment system, video monitoring system, and access control system applying the living body detection method.
  • An embodiment of the present application provides a living body detection method, executed by an electronic device, which includes: acquiring an image of an object to be detected; performing key point detection on the biological features of the object to be detected in the image; constructing a constraint frame in the image according to the detected key points; capturing shape changes of the constraint frame constructed in the image; and, if an abnormal deformation of the constraint frame is captured or no key point is detected, determining that the object to be detected is a prosthesis.
  • An embodiment of the present application provides a living body detection apparatus, including: an image acquisition module for acquiring an image of an object to be detected; a key point detection module for performing key point detection on the biological features of the object to be detected in the image; a constraint frame construction module for constructing a constraint frame in the image according to the detected key points; a deformation capture module for capturing shape changes of the constraint frame constructed in the image; and a prosthesis determination module for determining that the object to be detected is a prosthesis if an abnormal deformation of the constraint frame is captured or no key point is detected.
  • An embodiment of the present application provides an electronic device, including a processor and a memory, where computer-readable instructions are stored on the memory, and when the computer-readable instructions are executed by the processor, the living body detection method described above is implemented.
  • An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the living body detection method as described above is implemented.
  • An embodiment of the present application provides a payment system including a payment terminal and a payment server. The payment terminal is used to collect images of a paying user and includes a living body detection apparatus, which constructs a constraint frame according to the key points detected in the image of the paying user and captures abnormal deformation of the constraint frame in the image; if no abnormal deformation of the constraint frame is captured, the paying user is determined to be a living body. When the paying user is a living body, the payment terminal performs identity verification on the paying user and, if the identity verification passes, initiates a payment request to the payment server.
  • An embodiment of the present application provides a video monitoring system. The video monitoring system includes a monitoring screen, a plurality of cameras, and a monitoring server, where the cameras are used to collect images of a monitored object. The monitoring server includes a living body detection apparatus, which constructs a constraint frame according to the key points detected in the image of the monitored object and captures abnormal deformation of the constraint frame in the image; if no abnormal deformation of the constraint frame is captured, the monitored object is determined to be a living body. When the monitored object is a living body, the monitoring server identifies the monitored object to obtain a tracking target and performs video monitoring of the tracking target through the image picture on the monitoring screen.
  • An embodiment of the present application provides an access control system. The access control system includes a reception device, a recognition server, and an access control device. The reception device is used to collect images of entry-exit objects. The recognition server includes a living body detection apparatus, which constructs a constraint frame according to the key points detected in the image of the entry-exit object and captures abnormal deformation of the constraint frame in the image; if no abnormal deformation of the constraint frame is captured, the entry-exit object is determined to be a living body. When the entry-exit object is a living body, the recognition server performs identity recognition on the entry-exit object, and the access control device configures access permissions for the entry-exit object that is successfully identified, so that the entry-exit object can, according to the configured access permissions, control the access barrier of the designated work area to perform the release action.
  • Fig. 1 is a block diagram of a hardware structure of an electronic device according to an exemplary embodiment
  • Fig. 2 is a flowchart of a method for detecting a living body according to an exemplary embodiment
  • FIG. 3 is a schematic diagram of the key points in the image of the facial features involved in the embodiment corresponding to FIG. 2;
  • FIG. 4 is a schematic diagram of constructing a constraint frame in an image from key points corresponding to facial features involved in the corresponding embodiment of FIG. 2;
  • FIG. 5 is a schematic diagram of the shape change of the constraint frame involved in the embodiment corresponding to FIG. 2 during the mouth-opening process of the object to be detected;
  • FIG. 6 is a flowchart of step 350 in one embodiment in the embodiment corresponding to FIG. 2;
  • FIG. 7 is a flowchart of step 351 in one embodiment in the embodiment corresponding to FIG. 6;
  • FIG. 8 is a flowchart of step 370 in the embodiment corresponding to FIG. 2 in one embodiment
  • FIG. 9 is a flowchart of step 320 in one embodiment in the embodiment corresponding to FIG. 2;
  • Fig. 10 is a flowchart of another method for detecting a living body according to an exemplary embodiment
  • FIG. 11 is a schematic diagram of the key points in the image of the eye involved in the corresponding embodiment of FIG. 10;
  • FIG. 12 is a schematic diagram of a specific implementation of the enqueue/dequeue operations performed by the queue involved in the embodiment corresponding to FIG. 10 on the biometric structure distance ratio corresponding to an image;
  • FIG. 13 is a flowchart of step 503 in one embodiment in the embodiment corresponding to FIG. 10;
  • FIG. 14 is a schematic diagram of the trend of the aspect ratio of the eye involved in the embodiment corresponding to FIG. 13;
  • FIG. 15 is a schematic diagram of an implementation environment based on identity verification in an application scenario;
  • FIG. 16 is a schematic diagram of an implementation environment based on identity recognition in an application scenario;
  • FIG. 17 is a schematic diagram of another implementation environment based on identity recognition in an application scenario;
  • FIG. 18 is a schematic diagram of a specific implementation of a living body detection method in an application scenario
  • Fig. 19 is a block diagram of a living body detection device according to an exemplary embodiment
  • Fig. 20 is a block diagram of an electronic device according to an exemplary embodiment.
  • In one living body detection method, living body detection is performed on the image of the object to be detected by detecting whether the biometric contour of the object to be detected changes in the image; if such a change is detected, the object to be detected is determined to be a living body.
  • For example, the biological feature of the object to be detected in the image may be the eyes or the mouth.
  • When the object to be detected blinks or opens its mouth, the contour of that biological feature changes in the image, so the object to be detected can be determined to be a living body.
  • However, this kind of method cannot defend against an attacker's prosthesis attack behaviors, for example, rotation-bending attacks, multi-angle rotation attacks, and so on.
  • In such attacks, the attacker exploits the eyes or mouth by bending, distorting, or rotationally offsetting a stolen image of the object to be detected, causing the biometric contour in the image to be distorted or the image to deflect laterally; this produces artifacts in which the prosthesis appears to blink or open its mouth, so that the prosthesis is misjudged as a living body.
  • For this reason, the embodiments of the present application specifically propose a living body detection method, which can effectively improve the defense against prosthesis attacks and has high security.
  • This living body detection method is realized by a computer program; correspondingly, the living body detection apparatus thus constructed can be stored in an electronic device with a von Neumann architecture and executed in that electronic device, thereby realizing living body detection of the object to be detected.
  • the electronic device may be a smart phone, tablet computer, notebook computer, desktop computer, server, etc., which is not limited herein.
  • FIG. 1 is a block diagram of an electronic device according to an exemplary embodiment of the present application. It should be noted that this electronic device is only an example adapted to the embodiments of the present application and cannot be considered as limiting their scope of use in any way; nor can it be interpreted as requiring or depending on one or more components of the exemplary electronic device 100 shown in FIG. 1.
  • As shown in FIG. 1, the electronic device 100 includes a memory 101, a storage controller 103, one or more processors 105 (only one is shown in FIG. 1), a peripheral interface 107, a radio frequency module 109, a positioning module 111, a camera module 113, an audio module 115, a touch screen 117, and a key module 119. These components communicate with each other through one or more communication buses/signal lines 121.
  • The memory 101 may be used to store computer programs and modules, such as the computer programs and modules corresponding to the living body detection method and apparatus in the exemplary embodiments of the present application; the processor 105 executes the computer programs stored in the memory 101 to perform various functions and data processing, for example, to complete the living body detection method described in any embodiment of the present application.
  • The memory 101, serving as the carrier for resource storage, may be random access memory, such as high-speed random access memory, or non-volatile memory, such as one or more magnetic storage devices, flash memory, or other solid-state memory.
  • the storage method can be short-term storage or permanent storage.
  • the peripheral interface 107 may include at least one wired or wireless network interface, at least one serial-to-parallel conversion interface, at least one input-output interface, and at least one USB interface, etc., for coupling various external input / output devices to the memory 101 and the processor 105, to achieve communication with various external input / output devices.
  • the radio frequency module 109 is used to send and receive electromagnetic waves to realize the mutual conversion of electromagnetic waves and electrical signals, so as to communicate with other devices through a communication network.
  • the communication network includes a cellular telephone network, a wireless local area network, or a metropolitan area network.
  • the above communication network can use various communication standards, protocols, and technologies.
  • the positioning module 111 is used to obtain the current geographic location of the electronic device 100.
  • Examples of the positioning module 111 include, but are not limited to, global satellite positioning system (GPS), positioning technology based on a wireless local area network or a mobile communication network.
  • The camera module 113 is a camera used to take pictures or videos.
  • The captured pictures or videos can be stored in the memory 101 and can also be sent to a host computer through the radio frequency module 109.
  • the camera module 113 is used to photograph the object to be detected to form an image of the object to be detected.
  • The audio module 115 provides the user with an audio interface, which may include one or more microphone interfaces, one or more speaker interfaces, and one or more headphone interfaces, through which audio data is exchanged with other devices.
  • the audio data may be stored in the memory 101, and may also be sent through the radio frequency module 109.
  • the touch screen 117 provides an input and output interface between the electronic device 100 and the user. Specifically, the user can perform input operations through the touch screen 117, such as gesture operations such as tap, touch, and slide, so that the electronic device 100 responds to the input operations.
  • the electronic device 100 displays and outputs the output content formed in any form or combination of text, pictures or videos to the user through the touch screen 117.
  • the key module 119 includes at least one key to provide an interface for the user to input to the electronic device 100.
  • the user can press the different keys to cause the electronic device 100 to perform different functions.
  • the sound adjustment button can be used by the user to adjust the volume of the sound played by the electronic device 100.
  • FIG. 1 is merely an illustration, and the electronic device 100 may further include more or fewer components than those shown in FIG. 1 or have different components than those shown in FIG. 1.
  • Each component shown in FIG. 1 may be implemented by hardware, software, or a combination thereof.
  • a living body detection method is applicable to an electronic device.
  • the structure of the electronic device may be as shown in FIG. 1.
  • This living body detection method can be executed by an electronic device, and can include the following steps:
  • Step 310 Acquire an image of the object to be detected.
  • The object to be detected may be a paying user of an order to be paid, an entry-exit object that is to pass through an access control system, or a monitored object to be tracked; this embodiment does not specifically limit the object to be detected.
  • Accordingly, different objects to be detected correspond to different application scenarios: for example, a paying user of an order to be paid corresponds to an identity verification scenario, a monitored object to be tracked corresponds to an identity recognition scenario, and an entry-exit object that is to pass through access control also corresponds to an identity recognition scenario.
  • For example, a monitored object may try to avoid being tracked by means of a prosthesis. Therefore, the living body detection method provided in this embodiment is applicable to different application scenarios according to the different objects to be detected.
  • In addition, the image of the object to be detected may be acquired in real time or stored in advance, that is, read from a buffer for a historical time period; this embodiment does not limit this either.
  • In other words, when a camera device collects images of the object to be detected in real time, living body detection can be performed on the images as they are captured, for example, on images of an entry-exit object so that the object can pass through access control in real time; alternatively, the images of the object to be detected can be stored first and living body detection performed later, for example, when tracking a monitored object, reading historical monitoring images of the monitored object for living body detection as instructed by security personnel.
  • the camera device may be a video camera, a video recorder, or other electronic devices with image acquisition functions, such as a smart phone.
  • Step 320 Perform key point detection on the biological features corresponding to the object to be detected in the image.
  • The biological feature of the object to be detected may be, for example, a human face, eyes, mouth, hands, feet, fingerprint, or iris.
  • The biological feature of the object to be detected has a corresponding contour in the image, and this contour is composed of a series of pixels in the image; the key pixels among this series of pixels are therefore regarded as the key points of the biological feature in the image.
  • Taking the facial features of the object to be detected as an example, as shown in FIG. 3 there are 68 key points of the facial features in the image, including the six key points of the left eye (37-42) and of the right eye (43-48) in the image, the twenty key points of the mouth (49-68) in the image, and so on.
  • Each of these key points of the biological feature is uniquely represented in the image by its coordinates (x, y).
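  • As an illustrative aside, the patent does not name a specific key point model; a common off-the-shelf stand-in is dlib's 68-point facial landmark predictor, whose indexing matches the 68-point scheme described here except that it is 0-indexed (key point 37 in the text is index 36 in dlib). A minimal sketch, assuming dlib, OpenCV, and the publicly distributed shape_predictor_68_face_landmarks.dat file:

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_landmarks(frame):
    """Return the 68 facial key points as (x, y) tuples, or None if no face
    (and hence no key point) is detected -- the prosthesis branch of step 370."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale, as in step 321
    faces = detector(gray)
    if len(faces) == 0:
        return None
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```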
  • If a key point is detected, the process jumps to step 330 to construct the constraint frame.
  • Otherwise, if no key point is detected, the process jumps to step 370, and the object to be detected is determined to be a prosthesis.
  • Step 330 Construct a constraint frame in the image according to the detected key points.
  • After the key points of the biological feature are detected in the image, several key points can be further selected from them to construct the constraint frame in the image.
  • Taking the facial features shown in FIG. 3 as an example, the constraint frame is constructed from key point 37, key point 46, and key point 52, as shown by the triangle in FIG. 4.
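  • A minimal sketch of this construction, assuming the 0-indexed landmark list returned by the sketch above (so key points 37, 46, and 52 in the text are indices 36, 45, and 51); which triangle side is labelled a, b, or c is not fixed by the text and is chosen arbitrarily here:

```python
import numpy as np

def constraint_triangle(landmarks):
    """Build the triangle constraint frame from key points 37, 46 and 52
    and return its vertices together with its side lengths (a, b, c)."""
    v1, v2, v3 = (np.asarray(landmarks[i], dtype=float) for i in (36, 45, 51))
    a = np.linalg.norm(v2 - v3)  # side lengths of the triangle described
    b = np.linalg.norm(v1 - v3)  # by the constraint frame in the image
    c = np.linalg.norm(v1 - v2)
    return (v1, v2, v3), (a, b, c)
```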
  • Step 350 Capture the shape change of the constraint frame constructed in the image.
  • As mentioned above, prosthesis attacks include rotation-bending attacks, multi-angle rotation attacks, and so on. It can be understood that a rotation-bending attack causes the image to bend, distort, or rotationally offset, while a multi-angle rotation attack causes the image to rotationally offset; in either case, the constraint frame in the image deforms sharply.
  • For a living body, in contrast, the constraint frame in the image does not substantially change shape while the object to be detected blinks or opens its mouth, as shown in FIG. 5.
  • Therefore, whether the object to be detected is a prosthesis can be determined based on whether the constraint frame undergoes a significant shape change.
  • Specifically, the shape change of the constraint frame includes but is not limited to: a significant change in the shape of the constraint frame in the image, and a significant change in the position of the constraint frame in the image.
  • When the image is bent or distorted, the shape of the constraint frame in the image is significantly distorted, so an abnormal deformation of the constraint frame can be captured.
  • When the image is rotationally offset, the position of the constraint frame in the image is significantly shifted, and an abnormal deformation of the constraint frame is likewise captured.
  • In either case, step 370 is executed to determine that the object to be detected is a prosthesis.
  • Step 370 if it is captured that the constraint frame has abnormal deformation or no key point is detected, it is determined that the object to be detected is a prosthesis.
  • Through the above process, a living body detection scheme based on the constraint frame is implemented, which effectively filters out the attacker's prosthesis attack behaviors, thereby improving the defense of living body detection against prosthesis attacks and providing higher security.
  • step 350 may include the following steps:
  • step 351 shape data is calculated according to the shape of the constraint frame.
  • For example, the shape of the constraint frame is a triangle; if the constraint frame deforms abnormally, the position of the triangle may shift significantly, or the triangle may be significantly distorted.
  • Correspondingly, the shape data may be the coordinates of the constraint frame in the image, representing the position of the constraint frame in the image, or the shape scale value of the constraint frame, representing the shape of the constraint frame in the image.
  • the calculation process of the shape scale value may include the following steps:
  • Step 3511 Calculate the shape scale value of the constraint frame according to the side length of the figure described by the constraint frame in the image.
  • Step 3513 Use the shape scale value of the constraint frame as the shape data.
  • For example, when the shape of the constraint frame is a triangle, the figure described by the constraint frame in the image is a triangle, and the shape scale value is calculated from its side lengths according to formula (1), where P represents the shape scale value of the constraint frame, and a, b, and c represent the side lengths of the triangle described by the constraint frame in the image.
  • Step 353 Compare the shape data with the dynamic change range.
  • The dynamic change range is obtained by testing a large number of living body samples; it reflects the range within which the position of the constraint frame in the image and/or its shape scale varies when the object to be detected is a living body. In other words, if the object to be detected is not a prosthesis, the shape data obtained in this way should fall within the dynamic change range.
  • Step 355 if the shape data is outside the dynamic change range, it is captured that the constraint frame has abnormal deformation.
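  • A minimal sketch of steps 351 to 355, with the caveat that formula (1) is not reproduced in the patent text: the side ratio and the range bounds below are placeholders standing in for the patent's actual shape scale value and dynamic change range:

```python
def shape_scale_value(a: float, b: float, c: float) -> float:
    # Placeholder for formula (1): some shape scale value computed from the
    # side lengths of the triangle described by the constraint frame.
    return (a + b) / c

def is_abnormal(p: float, dynamic_range=(1.5, 2.5)) -> bool:
    # Steps 353/355: shape data outside the dynamic change range (obtained
    # from living body samples; the bounds here are made up) means an
    # abnormal deformation of the constraint frame has been captured.
    lo, hi = dynamic_range
    return not (lo <= p <= hi)
```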
  • In the above process, the calculation and comparison of shape data serve as the data basis for capturing abnormal deformation of the constraint frame, which fully guarantees the accuracy of that capture and in turn helps to improve the accuracy of living body detection.
  • step 370 may include the following steps:
  • Step 371 if it is captured that the constraint frame has abnormal deformation or no key point is detected, the first counter is controlled to accumulate.
  • It should be understood that the resolution, lighting conditions, and shooting angle of the images of the object to be detected may differ, so various complicated situations may arise while the images undergo living body detection, which may lead to false detection.
  • For this reason, in this embodiment a first counter is provided: each time an abnormal deformation of the constraint frame is captured, or the constraint frame disappears (that is, no key point is detected), the first counter is controlled to accumulate.
  • Step 373 Perform live detection on the object to be detected according to the last several frames of the video where the image is located.
  • Step 375 when the count value of the first counter exceeds the first accumulation threshold, it is determined that the object to be detected is a prosthesis.
  • In this embodiment, the images of the object to be detected are acquired from a video; that is, living body detection is performed on a segment of video of the object to be detected. Of course, in other embodiments, living body detection may also be performed on multiple photos of the object to be detected, which is not specifically limited in this embodiment.
  • For the video, living body detection is performed in units of image frames. For this reason, after detection of the current image frame is completed, the subsequent frames of the video are traversed so that living body detection is performed on each traversed image.
  • Each time an abnormal deformation of the constraint frame is captured, or no key point is detected, in a traversed image, the first counter is controlled to accumulate.
  • It should be noted that the frame currently undergoing living body detection is regarded as the current image; once the current image completes living body detection, it becomes a historical image, and the next frame to undergo living body detection is updated to be the current image.
  • Further, living body detection of the object to be detected over the video where the image is located can be performed not only with reference to the foregoing calculation formula (1), but also with reference to the following calculation formula (2).
  • In formula (2), P represents the shape scale value of the constraint frame; a, b, and c represent the side lengths of the triangle described by the constraint frame in the image; and a0 and b0 represent the corresponding side lengths of the triangle described by the constraint frame in the first frame of the video.
  • In this way, the shape scale values of the new constraint frames constructed in subsequent frames are all calculated relative to the first frame of the video, that is, relative to the constraint frame constructed in the first image frame, so as to reflect how the shapes of the several new constraint frames change relative to the shape of the original constraint frame over the video.
  • If the shape of any of the new constraint frames changes abnormally relative to the shape of the original constraint frame over the video, the new constraint frame is considered to have an abnormal deformation relative to the original constraint frame, and the first counter is controlled to accumulate.
  • If any of the new constraint frames fails to be constructed, it means that a prosthesis attack has distorted the image so severely that the biological features of the object to be detected in the image are completely destroyed and key point detection fails; this disappearance of the new constraint frame likewise causes the first counter to accumulate.
  • Only when the count value of the first counter exceeds the first accumulation threshold is the object to be detected determined to be a prosthesis, which eliminates the possibility of false detection caused by the image itself and thereby improves the accuracy of living body detection.
  • In addition, absolute change judgment is replaced by relative change judgment, which further avoids false detection and thereby enhances the robustness and stability of living body detection.
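  • A minimal sketch of this per-frame loop (steps 371 to 375), reusing the hypothetical helpers from the earlier sketches; the relative shape value below merely illustrates normalizing against the first frame, since formula (2) is not reproduced in the text, and the thresholds are placeholders:

```python
def relative_shape_value(sides, baseline_sides) -> float:
    # Illustrative stand-in for formula (2): current side lengths
    # normalized by those of the video's first frame (a0, b0).
    (a, b, _), (a0, b0, _) = sides, baseline_sides
    return (a / a0) / (b / b0)

def is_prosthesis(frames, baseline_sides, first_accumulation_threshold=3) -> bool:
    counter = 0  # the "first counter"
    for frame in frames:
        landmarks = detect_landmarks(frame)
        if landmarks is None:  # constraint frame disappears: no key points
            counter += 1
        else:
            _, sides = constraint_triangle(landmarks)
            if is_abnormal(relative_shape_value(sides, baseline_sides),
                           dynamic_range=(0.8, 1.25)):  # placeholder range
                counter += 1
        if counter > first_accumulation_threshold:
            return True  # step 375: determined to be a prosthesis
    return False
```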
  • In one embodiment, the biological features of the object to be detected are facial features.
  • facial features include, but are not limited to: eyebrows, eyes, nose, mouth, ears, and so on.
  • step 320 may include the following steps:
  • Step 321 Perform grayscale processing on the image to obtain a grayscale image of the image.
  • Step 323 Input the grayscale image of the image into a face key point model for face feature recognition, and obtain key points of the face feature of the object to be detected in the image.
  • the face keypoint model essentially constructs an index relationship for the face features in the image, so that the keypoints of specific face features can be located from the image through the constructed index relationship.
  • Specifically, the key points of the facial features in the image are indexed; as shown in FIG. 3, the six key points of the left eye and of the right eye in the image are marked with indexes 37-42 and 43-48 respectively, and the twenty key points of the mouth in the image are marked with indexes 49-68.
  • At the same time, the coordinates of the indexed key points in the image are stored accordingly, so as to construct, for the facial features, the index relationship between the indexes and the corresponding coordinates in the image.
  • the coordinates of the key points of the facial features in the image can be obtained from the index.
  • the face key point model is generated by performing model training on a specified mathematical model through a large number of image samples.
  • the image sample refers to an image that has been indexed.
  • Model training is essentially to iteratively optimize the parameters of the specified mathematical model so that the specified algorithm function constructed from the parameters satisfies the convergence conditions.
  • Specified mathematical models include but are not limited to machine learning models such as logistic regression, support vector machines, random forests, and neural networks.
  • Specified algorithm functions include but are not limited to the expectation-maximization function, the loss function, and so on.
  • Otherwise, the parameters of the specified mathematical model are updated, and the loss value of the loss function constructed from the updated parameters is calculated on the next image sample.
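  • A schematic sketch of this iterative optimization loop; the gradient-descent update rule, learning rate, and tolerance are illustrative assumptions, since the patent does not prescribe them:

```python
def train(model_params, image_samples, loss_fn, grad_fn,
          lr=0.01, tol=1e-6, max_iter=10000):
    # Iteratively update the parameters until the loss satisfies the
    # convergence condition, as described above.
    prev_loss = float("inf")
    for step in range(max_iter):
        sample = image_samples[step % len(image_samples)]
        loss = loss_fn(model_params, sample)
        if abs(prev_loss - loss) < tol:  # convergence condition met
            break
        model_params = [p - lr * g for p, g in
                        zip(model_params, grad_fn(model_params, sample))]
        prev_loss = loss
    return model_params
```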
  • the living body detection process may further include the following steps:
  • Step 501 Locate and obtain key points of the biological feature of the object to be detected in the image.
  • For example, for facial feature recognition through the face key point model, as shown in FIG. 3 there are 68 key points of the facial features in the image, including the six key points of the left eye (37-42) and of the right eye (43-48) in the image, the twenty key points of the mouth (49-68) in the image, and so on.
  • Step 502 Calculate the distance ratio of the biometric structure corresponding to the image according to the key points of the biometrics of the object to be detected in the image.
  • As mentioned above, the biological feature of the object to be detected may be, for example, a human face, eyes, mouth, hands, feet, fingerprint, or iris. It can be understood that the structure of different biological features in the image differs, so the biometric structure distance ratio corresponding to the image also differs.
  • If the biological feature of the object to be detected is the eye, the biometric structure distance ratio corresponding to the image is the aspect ratio of the eye, describing the structure of the eye of the object to be detected in the image; if the biological feature of the object to be detected is the mouth, the biometric structure distance ratio corresponding to the image is the aspect ratio of the mouth, describing the structure of the mouth of the object to be detected in the image.
  • Other biometric structure distance ratios corresponding to images are not enumerated one by one here. Each object to be detected has its corresponding images, and each image has its corresponding biometric structure distance ratio, so that the structure of the biological features of different objects to be detected can be accurately described in the corresponding images.
  • Taking the eye as an example, with its key points in the image as shown in FIG. 11, the aspect ratio of the eye corresponding to the image can be calculated, thereby reflecting the structure of the eye in the image:

    EAR = (||p2 - p6|| + ||p3 - p5||) / (2 × ||p1 - p4||)

  • Here, EAR is the aspect ratio of the eye; p1 is the coordinate of the key point at the right eye corner; p4 is the coordinate of the key point at the left eye corner; p2 and p3 are the coordinates of the key points on the upper eyelid; and p5 and p6 are the coordinates of the key points on the lower eyelid. ||p1 - p4|| is the norm of the coordinate difference between the pair of key points at the left and right eye corners; in the same way, ||p2 - p6|| and ||p3 - p5|| are the norms of the coordinate differences between the two pairs of key points on the upper and lower eyelids.
  • In the above formula, the numerator represents the vertical distances between the upper and lower eyelids of the eye, and the denominator represents the horizontal distance between the left and right eye corners. It should be noted that since the numerator contains two sets of vertical distances while the denominator contains only one set of horizontal distances, the denominator is weighted by a factor of 2.
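  • A direct implementation of this aspect ratio, assuming key points given as (x, y) coordinate pairs:

```python
import numpy as np

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6) -> float:
    """EAR = (||p2 - p6|| + ||p3 - p5||) / (2 * ||p1 - p4||):
    two vertical eyelid distances over the weighted horizontal corner distance."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, dtype=float)
                              for p in (p1, p2, p3, p4, p5, p6))
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)
```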
  • Step 503 Capture the action behavior of the object to be detected according to the change in the distance ratio of the biometric structure corresponding to the image relative to the distance ratio of the biometric structure in the feature sequence.
  • the distance ratio of the biometric structure in the feature sequence is calculated according to the first several historical images in the video where the image is located.
  • the feature sequence essentially reflects the normal structure of the biological characteristics of the object to be detected in the historical image. It can also be understood that the feature sequence is used to achieve an accurate description of the normal structure of the biological characteristics of the object to be detected during the historical image acquisition period.
  • If the biometric structure distance ratio corresponding to the image changes relative to the biometric structure distance ratios corresponding to the historical images in the feature sequence, it indicates that the structure of the biological feature of the object to be detected in the image has changed relative to the normal structure described by the feature sequence for the historical image acquisition period.
  • For example, if the normal structure is the contour of the eye when it is open, then the changed structure is the contour of the eye when blinking.
  • the action behavior of the object to be detected includes, but is not limited to: blinking behavior, opening mouth behavior, closing mouth behavior, beckoning behavior, stomping behavior, etc.
  • Therefore, if the biometric structure distance ratio corresponding to the image has changed relative to the biometric structure distance ratios corresponding to the preceding historical images, it indicates that the biometric contour of the object to be detected has changed in the image, for example, because the object to be detected blinked; this is regarded as capturing an action behavior of the object to be detected, and the object to be detected can then be determined to be a living body.
  • Step 504 If it is captured that the object to be detected has an action behavior, control the second counter to accumulate.
  • Similar to the first counter, in this embodiment a second counter is provided: only when the count value accumulated in the second counter exceeds the second accumulation threshold is the object to be detected regarded as a living body, that is, step 505 is executed.
  • Step 505 when the count value of the second counter exceeds the second accumulation threshold, it is determined that the object to be detected is a living body.
  • That is, only when the blinking, mouth-opening, mouth-closing, beckoning, or stomping behavior of the object to be detected has been captured enough times can the object to be detected be determined to be a living body.
  • In the above process, a living body detection scheme based on the relative change of the biometric structure distance ratio is further implemented: for the video of the object to be detected, the object is judged to be a living body only if, in some image frame, the biometric structure distance ratio has changed relative to the biometric structure distance ratios of the preceding historical frames.
  • This filters out misjudgments of prostheses caused by abrupt changes of the biometric contour in prosthesis attack samples, further improving the defense of the living body detection method against prosthesis attack samples and providing higher security.
  • Here, a prosthesis attack sample refers to an image of the object to be detected in which the attacker has altered or occluded the contour of the eyes or mouth, thereby producing artifacts in which the prosthesis appears to blink or close its mouth.
  • Step 506: if no action behavior of the object to be detected is captured, the biometric structure distance ratio corresponding to the image is added to the feature sequence.
  • Specifically, the biometric structure distance ratio corresponding to the image is compared with the normal structure interval, and if it falls within the normal structure interval, it is added to the feature sequence.
  • It should be understood that for a living body, the structure of a biological feature is relatively fixed; therefore, the structure of the biological feature of the object to be detected in the image is also relatively fixed and is regarded as the normal structure. For example, if the biological feature of the object to be detected is the eye, the contour of the eye when it is open is regarded as the normal structure.
  • the normal structure interval represents the fluctuation range of the normal structure of the biological characteristics of the object to be detected in the image.
  • This normal structure interval can be flexibly set according to the actual requirements of the application scenario. For example, in an application scenario with high accuracy requirements, a normal structure interval with a narrow fluctuation range is set, which is not limited in this embodiment.
  • Only when the biometric structure distance ratio corresponding to the image falls within the normal structure interval is it allowed to be added to the feature sequence; this prevents abnormal biometric structure distance ratios from entering the feature sequence, fully guarantees the accuracy of the feature sequence, and in turn helps to improve the accuracy of living body detection.
  • the feature sequence is a queue of a specified length.
  • a queue of specified length N includes N storage locations, and each storage location can be used to store a biometric structure distance ratio that satisfies the normal structure interval.
  • The specified length of the queue can be flexibly adjusted according to the actual needs of the application scenario: for example, for application scenarios with high accuracy requirements, where the number of images of the object to be detected is large, a larger specified length is set; for application scenarios with tighter requirements on the storage space of the electronic device, a smaller specified length is set. This is not limited in this embodiment.
  • When the queue is empty, if the first biometric structure distance ratio a1 satisfies the normal structure interval, a1 is stored in the first storage location of the queue.
  • Similarly, if the second biometric structure distance ratio a2 satisfies the normal structure interval, a2 is stored in the second storage location of the queue, completing its enqueue operation, and so on until the queue is full.
  • When the queue is full and the (N+1)-th biometric structure distance ratio a(N+1) satisfies the normal structure interval, the first-in-first-out principle applies: the first biometric structure distance ratio a1 is dequeued from the head of the queue, the second ratio a2 moves one position toward the head into the first storage location, and so on, with the N-th ratio aN moving to the (N-1)-th storage location, thereby completing the dequeue operation.
  • At this point the N-th storage location of the queue is empty, and a(N+1) is stored from the tail of the queue into the N-th storage location, completing the enqueue operation for a(N+1).
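  • In Python, this fixed-length first-in-first-out behaviour maps naturally onto collections.deque with maxlen; the length and interval bounds below are placeholders, not the patent's values:

```python
from collections import deque

N = 10                         # specified queue length (placeholder)
NORMAL_INTERVAL = (0.2, 0.35)  # normal structure interval (placeholder bounds)

feature_sequence = deque(maxlen=N)

def maybe_enqueue(ratio: float) -> None:
    # Step 506: only ratios within the normal structure interval enter the
    # feature sequence. When the deque is full, appending automatically
    # dequeues the oldest ratio from the head, which is exactly the FIFO
    # behaviour described above.
    lo, hi = NORMAL_INTERVAL
    if lo <= ratio <= hi:
        feature_sequence.append(ratio)
```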
  • Under the effect of the above embodiment, a queue-based living body detection method is realized, which not only effectively filters out false judgments of prostheses caused by abrupt changes of the biometric structure in prosthesis attack samples, but also adapts to people with different facial features: the biometric structure distance ratios in different queues reflect the normal structures of different facial features, giving the living body detection method good adaptability and versatility.
  • Step 507 Traverse the last few frames of the video.
  • After the current image completes living body detection, traversal continues over the subsequent frames of the video until the object to be detected is determined to be a prosthesis, or, after all images in the video have been detected, the object to be detected is determined to be a living body.
  • step 503 may include the following steps:
  • Step 5031 Calculate the average value of the distance ratio of the biometric structure in the feature sequence.
  • Step 5033 Calculate the relative change rate of the biometric structure distance ratio corresponding to the image according to the average value and the biometric structure distance ratio corresponding to the image.
  • For a living body with its eyes open, the aspect ratio of the eye is approximately constant, fluctuating only around 0.25; once the eye blinks or closes, the vertical distance drops to almost zero, and the aspect ratio of the eye correspondingly decreases toward zero.
  • When the eye opens again, the aspect ratio rises back to around 0.25; that is, by recording the trend of the aspect ratio of the eye over the image acquisition period, as shown in FIG. 14, it can be determined whether blinking has occurred.
  • In other words, the action behavior of a living body can be sensitively captured through the aspect ratio of the eye.
  • However, with an absolute judgment threshold, when an attacker quickly occludes the eye contour in the image several times in a row, several key points of the eye in the image are destroyed, which easily causes the aspect ratio of the eye to fall below the judgment threshold and leads to a prosthesis being misjudged as a living body.
  • Conversely, for small eyes, the probability of an apparent jump in the aspect ratio of the eye is greatly reduced; in extreme cases, the aspect ratio may already be below the judgment threshold even when the object to be detected has its eyes open, so the apparent jump in the aspect ratio during blinking cannot be detected, and a living body is misjudged as a prosthesis.
  • For this reason, in this embodiment living body detection is implemented through the relative change of the biometric structure of the object to be detected in the image, as shown in the following formula (4):
  • δ = (Ear' − Ear_ave) / Ear_ave    (4)
  • Here, δ represents the relative change rate of the biometric structure distance ratio corresponding to the current image; Ear_ave represents the average of the biometric structure distance ratios in the feature sequence; and Ear' represents the biometric structure distance ratio corresponding to the current image.
  • Ear_ave reflects the normal structure of the biological feature of the object to be detected over the historical image acquisition period, while Ear' reflects the structure of the biological feature of the object to be detected during the current image acquisition period.
  • If the relative change rate δ is not zero, it means that within the acquisition period of the current image, the structure of the biological feature reflected by Ear' has changed relative to the normal structure reflected by Ear_ave; the object to be detected may then have exhibited an action behavior.
  • Step 5035 if the relative change rate of the distance ratio of the biometric structure corresponding to the image is less than the change threshold, it is captured that the object to be detected has action behavior.
  • In this embodiment, a change threshold is set; only when the relative change rate δ is smaller than the set change threshold is it considered that the object to be detected has exhibited an action behavior.
  • It should be understood that the change threshold can be flexibly set according to the actual needs of the application scenario: for example, for application scenarios requiring high detection sensitivity, a smaller change threshold is set. This is not limited in this embodiment.
  • In the above process, relative change judgment is used instead of absolute change judgment, which avoids the defect that differences in biometric structure between individuals, for example, the smaller blinking range of small eyes compared with large eyes, cause a living body to be misjudged as a prosthesis, thereby enhancing the robustness and stability of living body detection.
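  • A minimal sketch of steps 5031 to 5035 under the reconstruction of formula (4) above; the threshold value and its sign convention are assumptions, not the patent's numbers:

```python
def has_action_behavior(feature_sequence, ear_now: float,
                        change_threshold: float = -0.2) -> bool:
    # Step 5031: average of the biometric structure distance ratios
    # accumulated in the feature sequence.
    if not feature_sequence:
        return False
    ear_ave = sum(feature_sequence) / len(feature_sequence)
    # Step 5033: relative change rate, formula (4) as reconstructed above.
    delta = (ear_now - ear_ave) / ear_ave
    # Step 5035: a drop below the change threshold (e.g. a blink collapsing
    # the eye aspect ratio) counts as a captured action behavior.
    return delta < change_threshold
```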
  • Further, the method described above may further include the following step: calling a face recognition model to perform face recognition on the image of the object to be detected.
  • For example, in the identity verification scenario shown in FIG. 15, the implementation environment includes a paying user 510, a smartphone 530, and a payment server 550.
  • The paying user 510 scans his or her face with the camera configured on the smartphone 530, so that the smartphone 530 obtains the corresponding to-be-recognized user image of the paying user 510 and then performs face recognition on the to-be-recognized user image through the face recognition model.
  • Specifically, the user feature of the to-be-recognized user image is extracted through the face recognition model, and the similarity between this user feature and the specified user feature is calculated; if the similarity is greater than the similarity threshold, the paying user 510 passes identity verification.
  • The specified user feature is extracted in advance by the smartphone 530 for the paying user 510 through the face recognition model.
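  • A minimal sketch of this one-to-one comparison, assuming features are vectors and using cosine similarity; the patent specifies only "similarity" and a threshold, so both the metric and the value here are illustrative:

```python
import numpy as np

def verify_identity(user_feature: np.ndarray,
                    enrolled_feature: np.ndarray,
                    similarity_threshold: float = 0.7) -> bool:
    # One-to-one feature comparison: similarity between the feature extracted
    # from the to-be-recognized image and the pre-enrolled specified feature.
    sim = float(np.dot(user_feature, enrolled_feature)
                / (np.linalg.norm(user_feature) * np.linalg.norm(enrolled_feature)))
    return sim > similarity_threshold  # greater than threshold: verification passes
```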
  • After the paying user 510 passes identity verification, the smartphone 530 initiates an order payment request to the payment server 550 for the order to be paid, thereby completing the payment process for that order.
  • FIG. 16 is a schematic diagram of an implementation environment based on identity recognition in an application scenario, for example, video surveillance, in which the tracking target is determined through identity recognition among the multiple face images displayed in the image picture. Many-to-one feature comparison is implemented in this application scenario, which can be regarded as a special case of one-to-one feature comparison.
  • Specifically, the implementation environment includes a monitoring screen 610, cameras 630 arranged at various locations, and a monitoring server 650 that enables interaction between the cameras 630 and the monitoring screen 610.
  • A large number of cameras 630 are arranged to form a video surveillance system, so that video surveillance can be performed at any time through the images collected by the cameras 630; the image picture is obtained through the interaction between the monitoring server 650 and each camera 630 in the video surveillance system, and video surveillance of the tracking target is then realized through the image picture on the monitoring screen 610.
  • Face recognition of the monitored object in the image picture, to determine the tracking target, is completed by the monitoring server 650.
  • the face features of multiple face images in the image frame are extracted through the face recognition model, and the similarity between these face features and the specified target feature is calculated separately.
  • the specified target features are extracted in advance based on the tracking target through the face recognition model.
  • the facial features with the highest similarity and the similarity exceeding the similarity threshold can be obtained, and the identity of the monitored object is determined as the identity associated with the facial features with the largest similarity and the similarity exceeding the similarity threshold.
  • the tracking target is identified in the image frame, so as to facilitate continuous tracking of the tracking target.
  • As shown in Figure 17, the implementation environment includes a reception device 710, a recognition server 730, and an access control device 750.
  • A camera is installed on the reception device 710 to photograph the face of the entry/exit object 770 and send the resulting personnel image to be recognized to the recognition server 730 for face recognition.
  • Entry/exit objects 770 include staff and visitors.
  • The recognition server 730 extracts the personnel feature of the image through the face recognition model and calculates its similarity to multiple specified personnel features to obtain the specified feature of greatest similarity; the identity associated with that feature is determined as the identity of the entry/exit object 770, completing its identification.
  • The specified personnel features are extracted in advance for entry/exit objects 770 by the recognition server 730 through the face recognition model.
  • Once identification is complete, the recognition server 730 sends an access authorization instruction to the access control device 750 for the entry/exit object 770, so that the device configures the corresponding access permissions according to the instruction, allowing the object 770 to use those permissions to make the access barrier of the designated work area perform a release action.
  • The recognition server 730 and the access control device 750 can be deployed as the same server, or the reception device 710 and the access control device 750 can be deployed on the same server; this application scenario does not limit this.
  • In these scenarios, the liveness detection apparatus can serve as a precursor module to face recognition.
  • The object is first screened for prostheses on the basis of the constraint box: whether the constraint box shows an abnormal relative change or disappears, a prosthesis attack is considered present and the object to be detected is determined to be a prosthesis.
  • Through steps 807 to 810, whether the object is a living body is then further judged from whether the structure of its facial features has changed relatively.
  • The liveness detection apparatus can thus accurately judge whether the object to be detected is a living body and so defend against prosthesis attacks, which not only fully guarantees the security of identity verification and identification, but also effectively relieves the workload and traffic pressure on downstream face recognition, better facilitating all kinds of face recognition tasks.
  • The computer program involved in the liveness detection apparatus is lightweight and its hardware requirements on the electronic device are modest: it can be applied not only to smartphones but also to servers running the Windows and Linux operating systems, which fully improves the generality and practicality of the liveness detection method.
  • The following are apparatus embodiments of this application, which can be used to perform the liveness detection method of any embodiment of this application.
  • For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the liveness detection method of this application.
  • A liveness detection apparatus 900 includes, without limitation: an image acquisition module 910, a key point detection module 920, a constraint box construction module 930, a deformation capture module 950, and a prosthesis determination module 970.
  • The image acquisition module 910 is configured to acquire an image of the object to be detected.
  • The key point detection module 920 is configured to perform key point detection on the biometric feature corresponding to the object to be detected in the image.
  • The constraint box construction module 930 is configured to construct a constraint box in the image according to the detected key points.
  • The deformation capture module 950 is configured to capture shape changes of the constraint box constructed in the image.
  • The prosthesis determination module 970 is configured to determine that the object to be detected is a prosthesis if an abnormal deformation of the constraint box is captured or no key point is detected.
  • The deformation capture module includes: a data calculation unit, configured to calculate shape data from the shape of the constraint box; a data comparison unit, configured to compare the shape data with the dynamic change range; and an anomaly capture unit, configured to capture an abnormal deformation of the constraint box if the shape data is outside the dynamic change range.
  • The data calculation unit includes: a proportion calculation subunit, configured to calculate the shape proportion value of the constraint box from the side lengths of the figure described by the constraint box in the image; and a data definition subunit, configured to take the shape proportion value of the constraint box as the shape data.
  • The prosthesis determination module includes: an accumulation unit, configured to control the first counter to accumulate if an abnormal deformation of the constraint box is captured or no key point is detected; an image traversal unit, configured to perform liveness detection on the object to be detected according to the subsequent frames of the video in which the image is located; and a prosthesis detection unit, configured to detect that the object to be detected is a prosthesis when the count value of the first counter exceeds the first accumulation threshold.
  • The image traversal unit includes: a new constraint box construction subunit, configured to construct several new constraint boxes according to the subsequent frames of the video in which the image is located, each new constraint box corresponding to one subsequent frame; a tracking subunit, configured to track the constraint box constructed in the image according to the several new constraint boxes; a relative change monitoring subunit, configured to monitor, through the tracking, the changes of the several new constraint boxes relative to the constraint box in the video; and an accumulation subunit, configured to control the first counter to accumulate if an abnormal relative change is monitored or any one of the new constraint boxes fails to be constructed.
  • The biometric feature of the object to be detected is a facial feature, and the key point detection module includes: a grayscale processing unit, configured to perform grayscale processing on the image to obtain the grayscale map of the image; and a model invocation unit, configured to input the grayscale map of the image into a face key point model for facial feature recognition, obtaining the key points of the facial features of the object to be detected in the image.
  • The apparatus further includes: a distance ratio calculation module, configured to calculate the biometric structure distance ratio corresponding to the image from the several key points in the image corresponding to the biometric feature of the object to be detected, if no abnormal deformation of the constraint box is captured; a behavior capture module, configured to capture the action behavior of the object to be detected according to the change of the biometric structure distance ratio corresponding to the image relative to the historical biometric structure distance ratios in the feature sequence, the historical ratios being calculated from the preceding several frames of historical images in the video in which the image is located; and a liveness determination module, configured to determine that the object to be detected is a living body if an action behavior of the object is captured.
  • The behavior capture module includes: an average calculation unit, configured to calculate the average of the biometric structure distance ratios in the feature sequence; a change rate calculation unit, configured to calculate the relative change rate of the biometric structure distance ratio corresponding to the image from that average and the ratio corresponding to the image; and a judgment unit, configured to capture an action behavior of the object to be detected if the relative change rate of the ratio corresponding to the image is less than the change threshold.
  • The judgment unit includes: an accumulation subunit, configured to control the second counter to accumulate if an action behavior of the object to be detected is captured; and a capture subunit, configured to detect that the object to be detected is a living body when the count value of the second counter exceeds the second accumulation threshold.
  • The apparatus further includes: a distance ratio comparison module, configured to compare the biometric structure distance ratio corresponding to the image with the normal structure interval; and a distance ratio addition module, configured to add the ratio corresponding to the image to the feature sequence if it is within the normal structure interval.
  • The feature sequence is a queue of a specified length, and the distance ratio addition module includes: a first enqueue unit, configured to control the queue to perform an enqueue operation for the biometric structure distance ratio corresponding to the image if the queue is not full; and a second enqueue unit, configured, if the queue is full, to control the queue to perform a dequeue operation at the head of the queue and an enqueue operation for the ratio corresponding to the image at the tail of the queue.
  • The biometric feature of the object to be detected includes eyes and/or a mouth, and the biometric structure distance ratio includes an eye aspect ratio and/or a mouth aspect ratio.
  • The apparatus further includes: a face recognition module, configured to call a face recognition model to perform face recognition on the image of the object to be detected if the object is determined to be a living body.
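The module division above maps naturally onto a small class layout. The following is a minimal Python sketch of that layout, offered purely as an illustration: the class and method names are inventions of this sketch, and the landmark model, reference side lengths, and dynamic change range are assumed to be supplied by the caller (shape_scale is an assumed helper; a concrete sketch of it appears with the formulas later in the document).

```python
class LivenessDetector:
    """Illustrative sketch of the five-module layout: acquisition,
    key point detection, constraint-box construction, deformation
    capture, and prosthesis determination."""

    def __init__(self, landmark_model, ref_lengths, dynamic_range):
        self.landmark_model = landmark_model  # assumed 68-point face landmark model
        self.ref_lengths = ref_lengths        # triangle side averages from live samples
        self.dynamic_range = dynamic_range    # (low, high) shape-value bounds

    def acquire_image(self, source):          # image acquisition module 910
        return source.read()

    def detect_key_points(self, image):       # key point detection module 920
        return self.landmark_model(image)     # dict index -> (x, y), or None

    def build_constraint_box(self, pts):      # constraint box construction module 930
        # Triangle over the two outer eye corners and the philtrum
        # (key points 37, 46, 52 in 1-based numbering; 36, 45, 51 0-based).
        return (pts[36], pts[45], pts[51])

    def capture_deformation(self, box):       # deformation capture module 950
        value = shape_scale(box, self.ref_lengths)  # assumed helper
        lo, hi = self.dynamic_range
        return not (lo <= value <= hi)        # True -> abnormal deformation

    def is_prosthesis(self, image):           # prosthesis determination module 970
        pts = self.detect_key_points(image)
        if pts is None:
            return True                       # key points lost -> prosthesis
        return self.capture_deformation(self.build_constraint_box(pts))
```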
  • When the liveness detection apparatus provided in the above embodiments performs liveness detection, the division into the functional modules above is used only as an example for illustration.
  • In practice, the functions above may be assigned to different functional modules as needed; that is, the internal structure of the liveness detection apparatus is divided into different functional modules to complete all or part of the functions described above.
  • The liveness detection apparatus provided in the above embodiments and the embodiments of the liveness detection method belong to the same concept; the specific way each module operates has been described in detail in the method embodiments and is not repeated here.
  • An electronic device 1000 includes at least one processor 1001, at least one memory 1002, and at least one communication bus 1003.
  • The memory 1002 stores computer-readable instructions, and the processor 1001 reads the computer-readable instructions stored in the memory 1002 through the communication bus 1003; when executed by the processor 1001, the computer-readable instructions implement the liveness detection method of each of the foregoing embodiments.
  • A computer-readable storage medium has a computer program stored thereon; when executed by a processor, the computer program implements the liveness detection method of each of the foregoing embodiments.


Abstract

Embodiments of this application disclose a liveness detection method and apparatus, an electronic device, a storage medium, and a payment system, a video surveillance system, and an access control system applying the liveness detection method, belonging to the field of biometric recognition. The liveness detection method includes: acquiring an image of an object to be detected; performing key point detection on the biometric feature corresponding to the object to be detected in the image; constructing a constraint box in the image according to the detected key points; capturing shape changes of the constraint box constructed in the image; and if an abnormal deformation of the constraint box is captured or no key point is detected, determining that the object to be detected is a prosthesis.

Description

Liveness detection method and apparatus, electronic device, storage medium, and related systems applying the liveness detection method
This application claims priority to Chinese Patent Application No. 201811252616.2, entitled "Liveness detection method and apparatus, and related systems applying the liveness detection method", filed with the State Intellectual Property Office on October 25, 2018, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the field of biometric recognition, and in particular to a liveness detection method and apparatus, an electronic device, a storage medium, and a payment system, a video surveillance system, and an access control system applying the liveness detection method.
Background
With the development of biometric recognition technology, biometric recognition is widely applied: face-scan payment, face recognition in video surveillance, and fingerprint and iris recognition in access authorization, among others. Biometric recognition accordingly faces a variety of threats, for example attackers presenting forged faces, fingerprints, or irises for recognition.
Summary
Embodiments of this application provide a liveness detection method and apparatus, an electronic device, a storage medium, and a payment system, a video surveillance system, and an access control system applying the liveness detection method.
An embodiment of this application provides a liveness detection method, performed by an electronic device, including: acquiring an image of an object to be detected; performing key point detection on the biometric feature corresponding to the object to be detected in the image; constructing a constraint box in the image according to the detected key points; capturing shape changes of the constraint box constructed in the image; and determining that the object to be detected is a prosthesis if an abnormal deformation of the constraint box is captured or no key point is detected.
An embodiment of this application provides a liveness detection apparatus, including: an image acquisition module, configured to acquire an image of an object to be detected; a key point detection module, configured to perform key point detection on the biometric feature corresponding to the object to be detected in the image; a constraint box construction module, configured to construct a constraint box in the image according to the detected key points; a deformation capture module, configured to capture shape changes of the constraint box constructed in the image; and a prosthesis determination module, configured to determine that the object to be detected is a prosthesis if an abnormal deformation of the constraint box is captured or no key point is detected.
An embodiment of this application provides an electronic device, including a processor and a memory, the memory storing computer-readable instructions that, when executed by the processor, implement the liveness detection method described above.
An embodiment of this application provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the liveness detection method described above.
An embodiment of this application provides a payment system including a payment terminal and a payment server. The payment terminal is configured to collect an image of a paying user, and includes a liveness detection apparatus configured to construct a constraint box in the image of the paying user according to detected key points and to capture abnormal deformations of the constraint box in the image; if no abnormal deformation of the constraint box is captured, the paying user is determined to be a living body. When the paying user is a living body, the payment terminal verifies the paying user's identity and, once verification passes, initiates a payment request to the payment server.
An embodiment of this application provides a video surveillance system including a monitoring screen, several cameras, and a monitoring server. The cameras are configured to collect images of a monitored object. The monitoring server includes a liveness detection apparatus configured to construct a constraint box in the image of the monitored object according to detected key points and to capture abnormal deformations of the constraint box in the image; if no abnormal deformation is captured, the monitored object is determined to be a living body. When the monitored object is a living body, the monitoring server identifies the monitored object to obtain a tracking target, and performs video surveillance of the tracking target through image frames on the monitoring screen.
An embodiment of this application provides an access control system including a reception device, a recognition server, and an access control device. The reception device is configured to collect an image of an entry/exit object. The recognition server includes a liveness detection apparatus configured to construct a constraint box in the image of the entry/exit object according to detected key points and to capture abnormal deformations of the constraint box in the image; if no abnormal deformation is captured, the entry/exit object is determined to be a living body. When the entry/exit object is a living body, the recognition server identifies the entry/exit object, so that the access control device configures access permissions for an entry/exit object whose identification succeeds, allowing that object to use the configured permissions to make the access barrier of the designated work area perform a release action.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the embodiments of this application.
Brief Description of the Drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with this application and, together with the specification, serve to explain the principles of the embodiments of this application.
Figure 1 is a block diagram of the hardware structure of an electronic device according to an exemplary embodiment;
Figure 2 is a flowchart of a liveness detection method according to an exemplary embodiment;
Figure 3 is a schematic diagram of the key points of facial features in an image, involved in the embodiment corresponding to Figure 2;
Figure 4 is a schematic diagram of a constraint box constructed in an image from the key points corresponding to facial features, involved in the embodiment corresponding to Figure 2;
Figure 5 is a schematic diagram of the shape change of the constraint box while the object to be detected opens its mouth, involved in the embodiment corresponding to Figure 2;
Figure 6 is a flowchart of one embodiment of step 350 in the embodiment corresponding to Figure 2;
Figure 7 is a flowchart of one embodiment of step 351 in the embodiment corresponding to Figure 6;
Figure 8 is a flowchart of one embodiment of step 370 in the embodiment corresponding to Figure 2;
Figure 9 is a flowchart of one embodiment of step 320 in the embodiment corresponding to Figure 2;
Figure 10 is a flowchart of another liveness detection method according to an exemplary embodiment;
Figure 11 is a schematic diagram of the key points of an eye in an image, involved in the embodiment corresponding to Figure 10;
Figure 12 is a schematic diagram of a specific implementation in which the queue of the embodiment corresponding to Figure 10 performs enqueue/dequeue operations for the biometric structure distance ratio of an image;
Figure 13 is a flowchart of one embodiment of step 503 in the embodiment corresponding to Figure 10;
Figure 14 is a schematic diagram of the variation trend of the eye aspect ratio involved in the embodiment corresponding to Figure 13;
Figure 15 is a schematic diagram of an implementation environment based on identity verification in an application scenario;
Figure 16 is a schematic diagram of one implementation environment based on identity recognition in an application scenario;
Figure 17 is a schematic diagram of another implementation environment based on identity recognition in an application scenario;
Figure 18 is a schematic diagram of a specific implementation of a liveness detection method in an application scenario;
Figure 19 is a block diagram of a liveness detection apparatus according to an exemplary embodiment;
Figure 20 is a block diagram of an electronic device according to an exemplary embodiment.
The above drawings show explicit embodiments of this application, described in more detail below. The drawings and textual description are not intended to limit the scope of the concepts of the embodiments in any way, but to explain those concepts to those skilled in the art by reference to particular embodiments.
Detailed Description
Exemplary embodiments are described in detail here, with examples shown in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the embodiments of this application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the embodiments of this application as detailed in the appended claims.
A liveness detection method performs liveness detection on an image of the object to be detected: it checks whether the biometric contour of the object in the image has changed, and if such a change is detected, the object is determined to be a living body.
For example, when the biometric feature of the object in the image is an eye or a mouth, a blink or a mouth opening changes the biometric contour in the image, from which the object can be judged to be a living body.
As for an attacker's prosthesis attacks, such as rotation-bending attacks and multi-angle rotation attacks, the attacker exploits the motion characteristics of eyes or mouths by bending, twisting, or rotationally offsetting a stolen image of the object to be detected. This distorts the biometric contour in the image or deflects the image sideways, creating the illusion of a prosthesis blinking or opening its mouth, so that the prosthesis is misjudged as a living body.
It follows that existing liveness detection methods still defend poorly against an attacker's prosthesis attacks.
For this reason, the embodiments of this application propose a liveness detection method that effectively improves the defense against prosthesis attacks and offers high security.
This liveness detection method is implemented by a computer program; correspondingly, the constructed liveness detection apparatus can be stored in an electronic device with a von Neumann architecture and executed there, so as to perform liveness detection on the object to be detected. The electronic device may be, without limitation, a smartphone, a tablet computer, a laptop, a desktop computer, a server, and so on.
Please refer to Figure 1, a block diagram of an electronic device according to an exemplary embodiment of this application. Note that this electronic device is merely an example adapted to the embodiments of this application and must not be taken as providing any limitation on their scope of use; nor should it be read as depending on, or having to include, one or more of the components of the exemplary electronic device 100 shown in Figure 1.
As shown in Figure 1, the electronic device 100 includes a memory 101, a storage controller 103, one or more processors 105 (only one is shown in Figure 1), a peripheral interface 107, a radio-frequency module 109, a positioning module 111, a camera module 113, an audio module 115, a touch screen 117, and a key module 119. These components communicate with each other over one or more communication buses/signal lines 121.
The memory 101 can store computer programs and modules, such as those corresponding to the liveness detection method and apparatus in the exemplary embodiments of this application. By executing the computer programs stored in the memory 101, the processor 105 performs various functions and data processing, for example completing the liveness detection method of any embodiment of this application.
As the carrier of resource storage, the memory 101 may be random-access memory, for example high-speed RAM, or non-volatile memory such as one or more magnetic storage devices, flash memory, or other solid-state memory; storage may be transient or permanent.
The peripheral interface 107 may include at least one wired or wireless network interface, at least one serial-parallel conversion interface, at least one input/output interface, at least one USB interface, and so on, for coupling various external input/output devices to the memory 101 and the processor 105 so as to communicate with them.
The radio-frequency module 109 transmits and receives electromagnetic waves, converting between electromagnetic waves and electrical signals, so as to communicate with other devices over a communication network. The network includes cellular telephone networks, wireless LANs, and metropolitan area networks, and may use various communication standards, protocols, and technologies.
The positioning module 111 obtains the current geographic location of the electronic device 100; examples include, without limitation, GPS and positioning technologies based on wireless LANs or mobile communication networks.
The camera module 113 belongs to the camera and is used to take pictures or videos, which can be stored in the memory 101 or sent to an upper-level machine through the radio-frequency module 109. For example, the camera module 113 photographs the object to be detected to form its image.
The audio module 115 provides the user with an audio interface, which may include one or more microphone interfaces, speaker interfaces, and headphone interfaces for exchanging audio data with other devices. Audio data can be stored in the memory 101 and can also be sent through the radio-frequency module 109.
The touch screen 117 provides an input/output interface between the electronic device 100 and the user: the user performs input operations on it, such as tap, touch, and swipe gestures, and the electronic device 100 responds to them; output content formed of text, pictures, or video, in any form or combination, is displayed to the user through the touch screen 117.
The key module 119 includes at least one key providing an interface for user input to the electronic device 100; pressing different keys makes the device perform different functions. For example, a volume key lets the user adjust the playback volume of the electronic device 100.
It will be understood that the structure shown in Figure 1 is only illustrative; the electronic device 100 may include more or fewer components than shown, or components different from those in Figure 1. The components shown may be implemented in hardware, software, or a combination of both.
Please refer to Figure 2. In an exemplary embodiment of this application, a liveness detection method is applicable to an electronic device whose structure may be as shown in Figure 1.
The liveness detection method may be performed by the electronic device and may include the following steps.
Step 310: acquire an image of the object to be detected.
First, the object to be detected may be a user paying a pending order, an entry/exit object about to pass an access gate, or a monitored object to be tracked; this embodiment does not specifically limit it.
Correspondingly, different objects to be detected may correspond to different application scenarios: a user paying an order corresponds to an identity verification scenario, while a monitored object to be tracked and an entry/exit object passing an access gate correspond to identity recognition scenarios.
It will be appreciated that prosthesis attacks may occur in both identity verification and identity recognition; a monitored object might, for instance, evade tracking with a dummy. The liveness detection method of this embodiment therefore suits different application scenarios depending on the object to be detected.
Second, the image of the object may be collected in real time or pre-stored, i.e., read from a cache of images collected over a historical period; this embodiment does not limit this either.
In other words, after a camera device collects the image of the object in real time, liveness detection can run immediately on it, for example on the image of an entry/exit object so that the object can pass the gate in real time; or the image can be stored first and detected later, for example by reading a monitored object's historical surveillance images for liveness detection at security staff's direction.
The camera device may be a video camera, a video recorder, or another electronic device with image-capture capability, such as a smartphone.
Step 320: perform key point detection on the biometric feature corresponding to the object to be detected in the image.
The biometric feature of the object may be, for example, a face, eyes, mouth, hands, feet, fingerprints, irises, and so on.
It will be understood that the biometric feature has a corresponding contour in the image made up of a series of pixels; the key pixels among them are regarded as the key points of the biometric feature in the image.
Taking facial features as the biometric feature: as shown in Figure 3, a face has 68 key points in the image, including the 6 key points of each of the left and right eyes (37-42 and 43-48) and the 20 key points of the mouth (49-68), among others.
In one embodiment of this application, each key point of a biometric feature in the image is uniquely represented by a distinct coordinate pair (x, y).
If key points are detected, execution jumps to step 330 to construct the constraint box.
Conversely, if no key point is detected, this indicates that a prosthesis attack has distorted the image so severely that the biometric feature of the object in the image is completely destroyed; key point detection fails, no constraint box can be constructed, and execution jumps to step 370 to determine that the object is a prosthesis.
Step 330: construct a constraint box in the image according to the detected key points.
After the key points of the biometric feature are detected in the image, several of them can be selected to construct a constraint box in the image.
For example, from the 68 facial key points shown in Figure 3, select key point 46 corresponding to the left eye corner, key point 37 corresponding to the right eye corner, and key point 52 corresponding to the philtrum position of the mouth.
The constraint box is then built from key points 46, 37, and 52, as shown by the triangular box in Figure 4.
Step 350: capture shape changes of the constraint box constructed in the image.
As noted above, prosthesis attacks include rotation-bending attacks, multi-angle rotation attacks, and so on. A rotation-bending attack bends, twists, or rotationally offsets the image, and a multi-angle rotation attack can rotationally offset it; either causes the constraint box in the image to deform drastically.
Equivalently, if no prosthesis attack is present, the constraint box in the image undergoes no very obvious shape change while the object blinks or opens its mouth, as shown in Figure 5.
In this embodiment, therefore, whether the object is a prosthesis can be judged from whether the constraint box has undergone an obvious shape change.
In one embodiment of this application, shape changes of the constraint box include, without limitation: an obvious change in the shape of the box in the image, and an obvious change in its position in the image.
For example, under a rotation-bending attack the shape of the box in the image is clearly distorted, so an abnormal deformation of the box is captured. Likewise, under a multi-angle rotation attack the position of the box in the image shifts clearly, and an abnormal deformation is again captured.
If an abnormal deformation of the constraint box is captured, step 370 is performed and the object is determined to be a prosthesis.
Otherwise, if there is no abnormal deformation of the constraint box, no prosthesis attack is considered present, and detection proceeds to check whether the object is a living body.
Step 370: if an abnormal deformation of the constraint box is captured or no key point is detected, determine that the object to be detected is a prosthesis.
That is, whether the constraint box deforms abnormally or disappears because no key point is detected, a prosthesis attack is considered present and the object is determined to be a prosthesis.
The process above realizes a constraint-box-based liveness detection scheme that effectively filters out an attacker's prosthesis attacks, improving the defense of liveness detection against them with high security.
Please refer to Figure 6. In an exemplary embodiment of this application, step 350 may include the following steps.
Step 351: calculate shape data from the shape of the constraint box.
As Figure 4 shows, the constraint box is a triangle; an abnormal deformation may therefore be an obvious shift of the triangle's position, or an obvious distortion of the triangle itself.
Accordingly, the shape data may be the coordinates of the constraint box in the image, representing its position, or the shape proportion value of the box in the image, representing its shape.
Specifically, as shown in Figure 7, when the shape data is the shape proportion value of the constraint box in the image, that value may be computed through the following steps.
Step 3511: calculate the shape proportion value of the constraint box from the side lengths of the figure it describes in the image.
Step 3513: take the shape proportion value of the constraint box as the shape data.
For example, when the constraint box is a triangle, that is, the figure it describes in the image is a triangle, its shape proportion value is computed with formula (1):
P = (a/ā + b/b̄ + c/c̄) / 3    (1)
where P is the shape proportion value of the constraint box, and a, b, c are the side lengths of the triangle the box describes in the image.
The reference lengths ā, b̄, c̄ are known quantities that can be obtained by testing a large number of living samples; they represent the averages of the corresponding side lengths of the triangles the constraint box describes across those samples.
Step 353: compare the shape data with a dynamic change range.
The dynamic change range is obtained by testing a large number of living samples and reflects, when the object to be detected is a living body, the range over which the constraint box's position in the image and/or its shape proportion can vary. Equivalently, if the object is not a prosthesis, the shape data obtained should fall inside the dynamic change range.
Step 355: if the shape data is outside the dynamic change range, an abnormal deformation of the constraint box is captured.
Through the cooperation of the above embodiments, shape data is computed and compared as the data basis for capturing abnormal deformations of the constraint box, which fully guarantees the accuracy of that capture and in turn improves the accuracy of liveness detection.
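By way of illustration, here is a minimal Python sketch of steps 3511 through 355: it computes the triangle's side lengths, forms a shape proportion value of the kind given by formula (1), and checks it against a dynamic change range. The reference lengths and range bounds are placeholders to be measured on live samples, not values fixed by this application.

```python
import math

def side_lengths(p1, p2, p3):
    """Side lengths a, b, c of the triangle described by the constraint box."""
    d = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
    return d(p1, p2), d(p2, p3), d(p3, p1)

def shape_scale(triangle, ref_lengths):
    """Shape proportion value P per formula (1): each side normalized by
    the corresponding live-sample average, then averaged."""
    a, b, c = side_lengths(*triangle)
    ra, rb, rc = ref_lengths
    return (a / ra + b / rb + c / rc) / 3.0

def abnormal_deformation(triangle, ref_lengths, dynamic_range=(0.8, 1.2)):
    """Step 355: True if the shape data falls outside the dynamic change
    range. The (0.8, 1.2) bounds are illustrative only."""
    lo, hi = dynamic_range
    return not (lo <= shape_scale(triangle, ref_lengths) <= hi)
```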
Please refer to Figure 8. In an exemplary embodiment of this application, step 370 may include the following steps.
Step 371: if an abnormal deformation of the constraint box is captured or no key point is detected, control the first counter to accumulate.
It will be appreciated that differences in the environment of the object and in the camera device can lead to differences in the resolution, lighting conditions, and shooting angle of the captured images, so that various complications may arise during liveness detection and cause false detections.
In this embodiment, therefore, to filter out the influence of noise or sudden jitter on the image of the object, a first counter is set and made to accumulate whenever an abnormal deformation of the constraint box is captured or the box disappears (that is, no key point is detected).
Step 373: perform liveness detection on the object according to the subsequent frames of the video in which the image is located.
Step 375: when the count value of the first counter exceeds the first accumulation threshold, determine that the object to be detected is a prosthesis.
To avoid false detection, image acquisition in this embodiment is video-based: liveness detection proceeds over a segment of video of the object. Of course, in other embodiments it could equally proceed over multiple photographs of the object; this embodiment imposes no specific limitation.
Liveness detection is performed frame by frame, so after the current frame has been checked, the subsequent frames of the video are traversed and checked in turn.
If a traversed frame captures an abnormal deformation of the constraint box, the first counter accumulates; otherwise, traversal continues over the subsequent frames of the video.
Note that, among the frames of the video, the frame currently undergoing liveness detection can be regarded as the current image; once its detection finishes it becomes a historical image, and the next frame to be detected becomes the current image.
It is worth mentioning that liveness detection over the video containing the image can be carried out not only with the aforementioned formula (1) but also with formula (2) below.
Specifically, the shape proportion value of the constraint box under formula (2) is:
P = (a/a₀ + b/b₀ + c/c₀) / 3    (2)
where P is the shape proportion value and a, b, c are the side lengths of the triangle the constraint box describes in the image, while a₀, b₀, c₀ are the corresponding side lengths of the triangle the box describes in the first frame of the video.
Thus, for the subsequent frames of the video, the shape proportion values of the new constraint boxes constructed from them are all computed against the first frame of the video, that is, against the constraint box constructed in the first frame, so they reflect how the shapes of the new boxes change in the video relative to the original box.
If the shapes of the new constraint boxes change smoothly relative to the original box in the video, no abnormal relative deformation is considered captured.
Conversely, if their shapes change abnormally relative to the original box, an abnormal relative deformation is considered captured, and the first counter accumulates.
Of course, if any one of the new constraint boxes fails to be constructed, this indicates a prosthesis attack has distorted the image so severely that the biometric feature in it is completely destroyed; key point detection fails, the new constraint box disappears, and the first counter also accumulates.
Hence, when the count value of the first counter exceeds the first accumulation threshold, the object can be determined to be a prosthesis, ruling out false detections caused by the image itself and improving the accuracy of liveness detection.
With this arrangement, relative-change judgment replaces absolute-change judgment, further avoiding false detection and enhancing the robustness and stability of liveness detection.
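A sketch of this frame-traversal logic follows, reusing side_lengths and shape_scale from the previous sketch. Here the first frame's side lengths serve as the reference in the sense of formula (2), detect_triangle is an assumed per-frame helper returning the constraint-box triangle or None, and the accumulation threshold is illustrative.

```python
def detect_prosthesis(frames, detect_triangle,
                      dynamic_range=(0.8, 1.2), first_accum_threshold=3):
    """Traverse subsequent frames, build a new constraint box per frame,
    and accumulate the first counter on an abnormal relative change or a
    failed construction; exceeding the threshold yields a prosthesis verdict."""
    first = detect_triangle(frames[0])
    if first is None:
        return True                          # no key points at all
    ref = side_lengths(*first)               # first-frame side lengths a0, b0, c0
    first_counter = 0
    lo, hi = dynamic_range
    for frame in frames[1:]:
        tri = detect_triangle(frame)
        if tri is None:                      # new constraint box vanished
            first_counter += 1
        elif not (lo <= shape_scale(tri, ref) <= hi):
            first_counter += 1               # abnormal relative change
        if first_counter > first_accum_threshold:
            return True
    return False
```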
Please refer to Figure 9. In an exemplary embodiment of this application, the biometric feature of the object to be detected is a facial feature, where facial features include, without limitation, eyebrows, eyes, nose, mouth, ears, and so on.
Accordingly, step 320 may include the following steps.
Step 321: perform grayscale processing on the image to obtain the grayscale map of the image.
Step 323: input the grayscale map of the image into a face key point model for facial feature recognition, obtaining the key points of the object's facial features in the image.
The face key point model essentially builds an index relation for the facial features in the image, so that the key points of a specific facial feature can be located in the image through that relation.
Specifically, after the image of the object is fed into the face key point model, the key points of the facial features in the image are index-labeled. As shown in Figure 3, the six key points of each of the left and right eyes carry indices 37-42 and 43-48, and the twenty key points of the mouth carry indices 49-68.
At the same time, the coordinates of the index-labeled key points in the image are stored accordingly, building for the facial features an index relation between indices and coordinates for that image.
Then, based on the index relation, the coordinates of a facial feature's key points in the image can be obtained from the indices.
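For illustration only, the grayscale-then-index flow of steps 321 and 323 lines up with widely available tooling. The sketch below stands in OpenCV and dlib's pre-trained 68-point shape predictor for the face key point model (the text does not name a specific library, and the model file path is an assumption):

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# A pre-trained 68-point landmark model; the file name is an assumption,
# not the model trained in this application.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_key_points(image_bgr):
    """Grayscale the image, then return the index -> coordinate mapping
    of the 68 facial key points, or None if detection fails."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)   # step 321
    faces = detector(gray)
    if not faces:
        return None                                      # no key points detected
    shape = predictor(gray, faces[0])                    # step 323
    # dlib indexes 0..67; the description numbers the same points 1..68.
    return {i: (shape.part(i).x, shape.part(i).y) for i in range(68)}
```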
In one embodiment of this application, the face key point model is generated by training a specified mathematical model with a massive number of image samples, where an image sample is an image that has been index-labeled.
Model training essentially iteratively optimizes the parameters of the specified mathematical model so that the specified algorithm function built from those parameters satisfies a convergence condition.
The specified mathematical model includes, without limitation, machine-learning models such as logistic regression, support vector machines, random forests, and neural networks.
The specified algorithm function includes, without limitation, an expectation-maximization function, a loss function, and so on.
For example: randomly initialize the parameters of the specified mathematical model, and compute, on the current image sample, the loss value of the loss function built from the randomly initialized parameters.
If the loss value has not reached its minimum, update the parameters of the specified mathematical model, and compute, on the next image sample, the loss value of the loss function built from the updated parameters.
Iterate this loop until the loss value reaches its minimum, at which point the loss function is considered converged, the specified mathematical model has converged into the face key point model, and iteration stops.
Otherwise, keep iteratively updating the parameters and computing the loss value over the remaining image samples until the loss function converges.
It is worth mentioning that if the number of iterations reaches the iteration threshold before the loss function converges, iteration also stops, which guarantees the efficiency of model training.
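The loop just described can be sketched generically as follows. The model, loss function, and parameter-update rule are abstract placeholders rather than the specified mathematical model of this application, and the tolerance and iteration threshold are illustrative values:

```python
def train_key_point_model(model, loss_fn, update_params, samples,
                          iteration_threshold=100000, tolerance=1e-6):
    """Iterate over index-labeled image samples, updating parameters until
    the loss converges or the iteration threshold is reached."""
    prev_loss = float("inf")
    for i in range(iteration_threshold):
        sample = samples[i % len(samples)]
        loss = loss_fn(model, sample)            # loss under current parameters
        if abs(prev_loss - loss) < tolerance:    # taken as convergence
            break
        model = update_params(model, loss)       # iterative parameter update
        prev_loss = loss
    return model
```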
As can be seen from the above, with a fully trained face key point model, several key points of the facial features in an image can be obtained quickly and in real time, which fully guarantees the timeliness of liveness detection.
Moreover, based on the face key point model, facial feature recognition is accurate and stable across different facial expressions, fully guaranteeing the accuracy of liveness detection.
After the key points of the object's facial features have been located in the image, several of them can be selected to construct in the image the constraint box that participates in liveness detection.
Please refer to Figure 10. In an exemplary embodiment of this application, the liveness detection process may further include the following steps.
Step 501: locate the key points of the object's biometric feature in the image.
Taking facial features as the biometric feature: through facial feature recognition by the face key point detection model, as shown in Figure 3, a face has 68 key points in the image, including the 6 key points of each of the left and right eyes (37-42 and 43-48) and the 20 key points of the mouth (49-68), among others.
Step 502: calculate the biometric structure distance ratio corresponding to the image from the key points of the object's biometric feature in the image.
The biometric feature of the object may be, for example, a face, eyes, mouth, hands, feet, fingerprints, irises, and so on. It will be understood that for different biometric features, the structure of the feature in the image differs, and so does the corresponding biometric structure distance ratio.
For example, if the biometric feature is the eyes, the image's biometric structure distance ratio is the eye aspect ratio, describing the structure of the object's eyes in the image; if the feature is the mouth, the ratio is the mouth aspect ratio, describing the structure of the object's mouth in the image.
The possible biometric structure distance ratios are not enumerated one by one here: each object to be detected has its corresponding images, and each image its corresponding biometric structure distance ratio, so that the structure of each object's biometric features in its images can be described accurately.
Taking the eyes as the biometric feature: as shown in Figure 11, the eye aspect ratio corresponding to the image can be computed from the eye's six key points in the image, reflecting the structure of the eye in that image.
Specifically, the eye aspect ratio is computed by formula (3):
EAR = (‖p2 − p6‖ + ‖p3 − p5‖) / (2 · ‖p1 − p4‖)    (3)
where EAR is the eye aspect ratio, p1 is the coordinate of the key point at the right eye corner, p2 and p3 are the coordinates of the key points on the upper eyelid, p4 is the coordinate of the key point at the left eye corner, and p5 and p6 are the coordinates of the key points on the lower eyelid.
‖p1 − p4‖ is the norm of the coordinate difference between the pair of key points at the two eye corners; likewise, ‖p2 − p6‖ is the norm of the coordinate difference between one pair of upper/lower eyelid key points, and ‖p3 − p5‖ that of the other pair.
In formula (3), the numerator represents the vertical distance between the upper and lower eyelids and the denominator the horizontal distance between the two eye corners. Since the numerator contains two sets of vertical distances while the denominator contains only one set of horizontal distances, the denominator is weighted by a factor of 2.
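Formula (3) translates directly into code; here is a minimal sketch, with the six eye key points passed in the p1 to p6 order of Figure 11:

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR per formula (3): the two vertical eyelid distances over twice
    the horizontal eye-corner distance. Each p is an (x, y) pair."""
    norm = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
    return (norm(p2, p6) + norm(p3, p5)) / (2.0 * norm(p1, p4))
```

Consistent with the trend in Figure 14, this value stays near 0.25 while the eye is open and falls toward zero during a blink.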
Step 503: capture the action behavior of the object to be detected according to the change of the image's biometric structure distance ratio relative to the biometric structure distance ratios in the feature sequence.
The biometric structure distance ratios in the feature sequence are computed from the preceding several frames of historical images in the video in which the image is located.
The feature sequence therefore essentially reflects the normal structure of the object's biometric feature in the historical images. Equivalently, the feature sequence gives an accurate description of the normal structure of the object's biometric feature over the historical image acquisition period.
Then, if the image's biometric structure distance ratio has changed relative to the ratios of the historical images in the feature sequence, this indicates that the structure of the object's biometric feature in the image has changed relative to the normal structure, described by the feature sequence, of that feature over the historical acquisition period.
Again taking the eyes as the biometric feature: the normal structure is the contour of the open eye, and the changed structure is the contour during a blink.
Next, the action behaviors of the object include, without limitation: blinking, opening the mouth, closing the mouth, waving, stamping, and so on.
From the above, for the video containing the image: when the image's biometric structure distance ratio changes relative to the ratios of the preceding historical frames, the biometric contour of the object in the image has changed (for example, the object blinked); at this point an action behavior of the object is considered captured, and the object can then be determined to be a living body.
Step 504: if an action behavior of the object to be detected is captured, control the second counter to accumulate.
It will be appreciated that when the object's image is captured, the object may simply have had its eyes closed for the photograph rather than actually blinking. Hence this embodiment sets a second counter, and only when its accumulated count exceeds the second accumulation threshold is the object regarded as a living body, i.e., step 505 is performed.
This further rules out false liveness detections caused by structural variation of the biometric feature present in the image itself.
Step 505: when the count value of the second counter exceeds the second accumulation threshold, determine that the object to be detected is a living body.
For example, once a blink, mouth opening, mouth closing, wave, or stamp by the object is captured, the object can be determined to be a living body.
Through the above process, on the premise that the constraint box has ruled the object out as a prosthesis, a liveness detection scheme based on relative change of the biometric structure distance ratio is further realized: for the video of the object, only when the ratio of one frame changes relative to the ratios of the preceding historical frames is the object judged a living body. This filters out prosthesis misjudgments caused by abrupt changes of biometric contour in prosthesis attack samples, further improving the method's defense against such samples with high security.
As a supplementary note, a prosthesis attack sample is an image of the object in which the attacker has painted over or occluded the contour of the eyes or mouth, creating the illusion of a prosthesis blinking or closing its mouth.
Step 506: if no action behavior of the object is captured, add the image's biometric structure distance ratio to the feature sequence.
Specifically, the image's biometric structure distance ratio is compared against the normal structure interval; if it lies within that interval, it is added to the feature sequence.
It should be understood that for a given object, the structure of its biometric feature is relatively fixed; the structure of the feature in the image is therefore also relatively fixed and is regarded as the normal structure. For example, if the biometric feature is the eyes, the contour of the open eye counts as the normal structure.
The normal structure interval represents the fluctuation range of the normal structure of the object's biometric feature in the image. It can be set flexibly according to the actual needs of the application scenario, for example a narrower interval in scenarios demanding higher accuracy; this embodiment does not limit it.
Thus only distance ratios within the normal structure interval are allowed into the feature sequence, preventing abnormal ratios from entering it, which fully guarantees the accuracy of the feature sequence and in turn improves the accuracy of liveness detection.
In a specific implementation of an embodiment of this application, the feature sequence is a queue of specified length.
As shown in Figure 12, a queue of specified length N has N storage positions, each of which can store one biometric structure distance ratio that satisfies the normal structure interval.
The specified length of the queue can be adjusted flexibly to the actual needs of the application scenario: for scenarios demanding higher accuracy, if the object yields many images, a larger length is set; for scenarios where the electronic device's storage space is at a premium, a smaller length is set. This embodiment does not limit it.
Suppose the object has 2n images, the i-th with distance ratio a_i, 1 <= i <= 2n.
When the queue is empty: if the first ratio a_1 satisfies the normal structure interval, it is stored at the first position in the queue.
While the queue is not full: if the second ratio a_2 satisfies the interval, it is stored at the second position, completing its enqueue operation; and so on, until the N-th ratio a_N, if it satisfies the interval, is stored at the N-th position, at which point the queue is full.
Once the queue is full: if the (N+1)-th ratio a_{N+1} satisfies the interval, then on the first-in-first-out principle the first ratio a_1 is removed from the head of the queue, the second ratio a_2 moves head-ward into the first position, and so on, until the N-th ratio a_N moves into the (N−1)-th position, completing the dequeue operation. The N-th position in the queue is now empty, and a_{N+1} is stored into it from the tail, completing its enqueue operation.
As can be seen, as images of the object keep being collected, and given the limited storage positions in the queue, the distance ratios stored in the queue are updated in real time, achieving a sliding-window filtering effect and fully ensuring that the queue accurately describes the normal structure of the object's biometric feature over the historical acquisition period.
With this arrangement, a queue-based liveness detection method is achieved that not only effectively filters prosthesis-to-living-body misjudgments caused by abrupt structural changes in prosthesis attack samples, but also suits populations whose facial features differ: the distance ratios in different queues reflect the normal structures of different faces, giving the method good adaptability and generality.
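Python's collections.deque with a maxlen reproduces exactly this fixed-length, first-in-first-out behavior. A minimal sketch, in which the queue length and the normal structure interval bounds are illustrative assumptions:

```python
from collections import deque

NORMAL_INTERVAL = (0.15, 0.45)        # placeholder bounds for an eye aspect ratio
feature_sequence = deque(maxlen=8)    # queue of specified length N = 8 (assumed)

def maybe_enqueue(distance_ratio):
    """Enqueue only ratios inside the normal structure interval; once the
    queue is full, appending automatically drops the oldest ratio from the
    head, giving the sliding-window filtering described above."""
    lo, hi = NORMAL_INTERVAL
    if lo <= distance_ratio <= hi:
        feature_sequence.append(distance_ratio)
```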
Step 507: traverse the subsequent frames of the video.
If the current image has completed liveness detection, traversal continues over the subsequent frames of the video until either the object is detected to be a prosthesis or all frames of the video have been checked and the object is determined to be a living body.
Further, please refer to Figure 13. In an exemplary embodiment of this application, step 503 may include the following steps.
Step 5031: calculate the average of the biometric structure distance ratios in the feature sequence.
Step 5033: from that average and the image's biometric structure distance ratio, calculate the relative change rate of the image's biometric structure distance ratio.
Taking the eye aspect ratio as an example: as shown in Figure 14, while the eye is open the ratio is roughly constant, fluctuating around 0.25; once a blink or eye closure occurs, the vertical distance is almost zero and the ratio drops correspondingly to zero; when the eye opens again the ratio climbs back to around 0.25. Whether a blink occurred is thus judged by recording the trend of the eye aspect ratio over the image acquisition period.
It will be appreciated that for a living body, the eye aspect ratio captures action behaviors keenly. However, if an attacker rapidly occludes the eye contour in the image several times in succession, several of the eye's key points in the image are destroyed, which easily drives the eye aspect ratio below the decision threshold and causes the prosthesis to be misjudged as a living body.
Misjudgment can also run the other way, with a living body misjudged as a prosthesis: if the eyes in the object's image are themselves small, the probability of an obvious jump in the eye aspect ratio shrinks greatly. In the extreme case, the ratio may already be below the decision threshold while the object's eyes are open, so the obvious jump in the ratio during a blink cannot be detected and the living body is misjudged as a prosthesis.
For this reason, in this embodiment the liveness detection method works through the relative change of the object's biometric structure in the image, as in formula (4):
α = (Ear′ − Ear_ave) / Ear_ave    (4)
where α is the relative change rate of the current image's biometric structure distance ratio, Ear_ave is the average of the biometric structure distance ratios in the feature sequence, and Ear′ is the current image's biometric structure distance ratio.
That is, Ear_ave reflects the normal structure of the object's biometric feature over the historical image acquisition period, while Ear′ reflects the structure of the feature over the current image acquisition period.
Since the historical and current acquisition periods are continuous, a nonzero relative change rate α indicates that, within the same period in which the image to be detected was acquired, the structure reflected by Ear′ has changed relative to the normal structure reflected by Ear_ave; the object may therefore be performing an action.
Step 5035: if the relative change rate of the image's biometric structure distance ratio is less than the change threshold, an action behavior of the object to be detected is captured.
As stated above, when the relative change rate α is nonzero, the object may be performing an action. This embodiment therefore sets a change threshold, and only when α is smaller than the set change threshold is an action behavior of the object considered captured.
Note that the change threshold can be set flexibly to the actual needs of the application scenario, for example a smaller threshold where detection sensitivity demands are high; this embodiment does not limit it.
In the above process, relative-change judgment replaces absolute-change judgment, avoiding living bodies being misjudged as prostheses because biometric features of different structures differ in action amplitude (a small eye's blink amplitude, for example, is smaller than a large eye's), thereby enhancing the robustness and stability of liveness detection.
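Combining formula (4) with the second counter of steps 504 and 505 gives a compact decision rule. In the sketch below, the change threshold and the second accumulation threshold are illustrative choices, not values fixed by this application:

```python
def relative_change_rate(current_ratio, feature_sequence):
    """alpha per formula (4): change of the current distance ratio relative
    to the average of the ratios in the (non-empty) feature sequence."""
    ear_ave = sum(feature_sequence) / len(feature_sequence)
    return (current_ratio - ear_ave) / ear_ave

class SecondCounter:
    """Only repeated action captures yield a living-body verdict."""
    def __init__(self, change_threshold=-0.5, second_accum_threshold=2):
        self.change_threshold = change_threshold   # assumed: a blink roughly halves EAR
        self.second_accum_threshold = second_accum_threshold
        self.count = 0

    def update(self, current_ratio, feature_sequence):
        alpha = relative_change_rate(current_ratio, feature_sequence)
        if alpha < self.change_threshold:          # action behavior captured
            self.count += 1
        return self.count > self.second_accum_threshold   # True -> living body
```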
In an exemplary embodiment of this application, after step 506 the method described above may further include the following step:
if the object to be detected is a living body, call a face recognition model to perform face recognition on the image of the object.
The face recognition process is explained below with reference to specific application scenarios.
Figure 15 is a schematic diagram of an implementation environment based on identity verification in an application scenario. As shown in Figure 15, the implementation environment includes a paying user 510, a smartphone 530, and a payment server 550.
For a pending order, the paying user 510 scans their face with the camera configured on the smartphone 530, so that the smartphone 530 obtains the corresponding user image to be recognized and then performs face recognition on it with the face recognition model.
Specifically, the user feature of the user image to be recognized is extracted through the face recognition model, and its similarity to a specified user feature is calculated; if the similarity is greater than the similarity threshold, the paying user 510 passes identity verification. The specified user feature is extracted in advance for the paying user 510 by the smartphone 530 through the face recognition model.
After the paying user 510 passes identity verification, the smartphone 530 initiates an order payment request to the payment server 550 for the pending order, completing the payment flow of that order.
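The one-to-one comparison in this scenario amounts to a similarity test between two feature vectors. A sketch using cosine similarity, with the face recognition model abstracted away and the threshold illustrative:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def verify_identity(user_feature, specified_user_feature, sim_threshold=0.8):
    """1:1 verification: pass when similarity exceeds the threshold.
    The 0.8 threshold is illustrative, not a value from this application."""
    return cosine_similarity(user_feature, specified_user_feature) > sim_threshold
```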
Figure 16 is a schematic diagram of an implementation environment based on identity recognition in an application scenario. For example, in video surveillance, identity recognition determines the tracking target among the multiple face images displayed on the image frame. This scenario implements many-to-one feature comparison, which can be regarded as a special case of one-to-one feature comparison.
As shown in Figure 16, the implementation environment includes a monitoring screen 610, cameras 630 deployed everywhere, and a monitoring server 650 that enables interaction between the cameras 630 and the monitoring screen 610.
In this scenario, indoors and outdoors alike, a large number of cameras 630 are deployed so that video surveillance is available at any time through the image frames they collect. Specifically, the deployed cameras 630 form a video surveillance system; image frames are obtained through the interaction between the monitoring server 650 and each camera 630 in the system, and video surveillance of the tracking target is then carried out through those frames on the monitoring screen 610.
Face recognition of the monitored objects in the image frame, to determine the tracking target, is completed by the monitoring server 650.
Specifically, the face features of the multiple face images in the frame are extracted through the face recognition model, and the similarity of each to a specified target feature is calculated separately; the specified target feature is extracted in advance from the tracking target through the face recognition model.
This yields the face feature with the greatest similarity whose similarity also exceeds the similarity threshold; the monitored object's identity is determined as the identity associated with that feature, identifying the tracking target in the frame and facilitating its subsequent continuous tracking.
Note that since not every image frame contains the tracking target, the face feature of greatest similarity must still be checked against the similarity threshold, so as to ensure the accuracy of continuous tracking.
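Many-to-one comparison extends the same test: take the best-scoring face and accept it only if it also clears the similarity threshold, as the note above requires. A sketch reusing cosine_similarity from the previous block:

```python
def identify_target(face_features, specified_target_feature, sim_threshold=0.8):
    """N:1 identification: index of the face matching the tracking target,
    or None when even the best match falls below the threshold."""
    if not face_features:
        return None
    scored = [(cosine_similarity(f, specified_target_feature), i)
              for i, f in enumerate(face_features)]
    best_sim, best_idx = max(scored)
    return best_idx if best_sim > sim_threshold else None
```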
Figure 17 is a schematic diagram of another implementation environment based on identity recognition in an application scenario. As shown in Figure 17, the implementation environment includes a reception device 710, a recognition server 730, and an access control device 750.
A camera is installed on the reception device 710 to photograph the face of the entry/exit object 770 and send the resulting personnel image to be recognized to the recognition server 730 for face recognition. In this application scenario, entry/exit objects 770 include staff and visitors.
The recognition server 730 extracts the personnel feature of the image through the face recognition model and calculates its similarity to multiple specified personnel features, obtaining the specified feature of greatest similarity; the identity associated with that feature is determined as the identity of the entry/exit object 770, completing its identification. The specified personnel features are extracted in advance for entry/exit objects 770 by the recognition server 730 through the face recognition model.
Once identification of the entry/exit object 770 is complete, the recognition server 730 sends an access authorization instruction to the access control device 750 for the object, so that the device configures the corresponding access permissions according to the instruction; the object 770 can then use those permissions to make the access barrier of the designated work area perform a release action.
Of course, in different application scenarios deployment can be adapted flexibly to actual needs; for example, the recognition server 730 and the access control device 750 may be deployed as the same server, or the reception device 710 and the access control device 750 on the same server. This application scenario does not limit this.
In the three application scenarios above, the liveness detection apparatus can serve as a precursor module to face recognition.
As shown in Figure 18, by performing steps 801 to 806, the object is first screened for prostheses on the basis of the constraint box: whether the box shows an abnormal relative change or disappears, a prosthesis attack is considered present and the object is determined to be a prosthesis.
Then, by performing steps 807 to 810, whether the object is a living body is further judged from whether the structure of its facial features has changed relatively.
The liveness detection apparatus can thus accurately judge whether the object is a living body and so defend against prosthesis attacks. This not only fully guarantees the security of identity verification and identification, but also effectively relieves the workload and traffic pressure on downstream face recognition, better facilitating all kinds of face recognition tasks.
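Read together with the earlier sketches, the two-stage flow of Figure 18 can be orchestrated roughly as follows. Here detect_triangle and detect_ear are assumed per-frame helpers, while detect_prosthesis, SecondCounter, and the interval bounds come from the illustrative sketches above:

```python
from collections import deque

def liveness_precheck(frames, detect_triangle, detect_ear):
    """Stage 1 (steps 801-806): prosthesis screening on the constraint box.
    Stage 2 (steps 807-810): living-body verdict from relative EAR change.
    Returns True only when the object may be handed to face recognition."""
    if detect_prosthesis(frames, detect_triangle):
        return False                       # prosthesis: stop before recognition
    seq = deque(maxlen=8)
    counter = SecondCounter()
    for frame in frames:
        ear = detect_ear(frame)
        if seq and counter.update(ear, seq):
            return True                    # living body: proceed to recognition
        if 0.15 <= ear <= 0.45:            # placeholder normal-structure interval
            seq.append(ear)
    return False
```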
Moreover, the computer program involved in the liveness detection apparatus is lightweight, and its hardware requirements on the electronic device itself are modest: it can be applied not only to smartphones but also to servers configured with the Windows and Linux operating systems, which fully improves the generality and practicality of the liveness detection method.
The following are apparatus embodiments of this application, which can be used to perform the liveness detection method of any embodiment of this application. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the liveness detection method of this application.
Please refer to Figure 19. In an exemplary embodiment of this application, a liveness detection apparatus 900 includes, without limitation: an image acquisition module 910, a key point detection module 920, a constraint box construction module 930, a deformation capture module 950, and a prosthesis determination module 970.
The image acquisition module 910 is configured to acquire an image of the object to be detected.
The key point detection module 920 is configured to perform key point detection on the biometric feature corresponding to the object to be detected in the image.
The constraint box construction module 930 is configured to construct a constraint box in the image according to the detected key points.
The deformation capture module 950 is configured to capture shape changes of the constraint box constructed in the image.
The prosthesis determination module 970 is configured to determine that the object to be detected is a prosthesis if an abnormal deformation of the constraint box is captured or no key point is detected.
In an exemplary embodiment, the deformation capture module includes: a data calculation unit, configured to calculate shape data from the shape of the constraint box; a data comparison unit, configured to compare the shape data with the dynamic change range; and an anomaly capture unit, configured to capture an abnormal deformation of the constraint box if the shape data is outside the dynamic change range.
In an exemplary embodiment, the data calculation unit includes: a proportion calculation subunit, configured to calculate the shape proportion value of the constraint box from the side lengths of the figure described by the constraint box in the image; and a data definition subunit, configured to take the shape proportion value of the constraint box as the shape data.
In an exemplary embodiment, the prosthesis determination module includes: an accumulation unit, configured to control the first counter to accumulate if an abnormal deformation of the constraint box is captured or no key point is detected; an image traversal unit, configured to perform liveness detection on the object according to the subsequent frames of the video in which the image is located; and a prosthesis detection unit, configured to detect that the object is a prosthesis when the count value of the first counter exceeds the first accumulation threshold.
In an exemplary embodiment, the image traversal unit includes: a new constraint box construction subunit, configured to construct several new constraint boxes according to the subsequent frames of the video in which the image is located, each new box corresponding to one subsequent frame; a tracking subunit, configured to track the constraint box constructed in the image according to the several new boxes; a relative change monitoring subunit, configured to monitor, through the tracking, the changes of the several new boxes relative to the constraint box in the video; and an accumulation subunit, configured to control the first counter to accumulate if an abnormal relative change is monitored or any one of the new boxes fails to be constructed.
In an exemplary embodiment, the biometric feature of the object is a facial feature, and the key point detection module includes: a grayscale processing unit, configured to perform grayscale processing on the image to obtain its grayscale map; and a model invocation unit, configured to input the grayscale map into a face key point model for facial feature recognition, obtaining the key points of the object's facial features in the image.
In an exemplary embodiment, the apparatus further includes: a distance ratio calculation module, configured, if no abnormal deformation of the constraint box is captured, to calculate the image's biometric structure distance ratio from the several key points in the image corresponding to the object's biometric feature; a behavior capture module, configured to capture the object's action behavior according to the change of the image's distance ratio relative to the historical distance ratios in the feature sequence, the historical ratios being calculated from the preceding several frames of historical images in the video; and a liveness determination module, configured to determine that the object is a living body if an action behavior of the object is captured.
In an exemplary embodiment, the behavior capture module includes: an average calculation unit, configured to calculate the average of the distance ratios in the feature sequence; a change rate calculation unit, configured to calculate the relative change rate of the image's distance ratio from that average and the image's distance ratio; and a judgment unit, configured to capture an action behavior of the object if the relative change rate of the image's distance ratio is less than the change threshold.
In an exemplary embodiment, the judgment unit includes: an accumulation subunit, configured to control the second counter to accumulate if an action behavior of the object is captured; and a capture subunit, configured to detect that the object is a living body when the count value of the second counter exceeds the second accumulation threshold.
In an exemplary embodiment, the apparatus further includes: a distance ratio comparison module, configured to compare the image's biometric structure distance ratio with the normal structure interval; and a distance ratio addition module, configured to add the image's distance ratio to the feature sequence if it is within the normal structure interval.
In an exemplary embodiment, the feature sequence is a queue of specified length, and the distance ratio addition module includes: a first enqueue unit, configured to control the queue to perform an enqueue operation for the image's distance ratio if the queue is not full; and a second enqueue unit, configured, if the queue is full, to control the queue to perform a dequeue operation at the head and an enqueue operation for the image's distance ratio at the tail.
In an exemplary embodiment, the biometric feature of the object includes the eyes and/or the mouth, and the biometric structure distance ratio includes the eye aspect ratio and/or the mouth aspect ratio.
In an exemplary embodiment, the apparatus further includes: a face recognition module, configured to call a face recognition model to perform face recognition on the image of the object if the object is determined to be a living body.
Note that when the liveness detection apparatus provided in the above embodiments performs liveness detection, the division into the functional modules above is used only as an example for illustration; in practical applications, the functions can be assigned to different functional modules as needed, i.e., the internal structure of the liveness detection apparatus is divided into different functional modules to complete all or part of the functions described above.
In addition, the liveness detection apparatus provided in the above embodiments and the embodiments of the liveness detection method belong to the same concept; the specific way each module operates has been described in detail in the method embodiments and is not repeated here.
Please refer to Figure 20. In an exemplary embodiment of this application, an electronic device 1000 includes at least one processor 1001, at least one memory 1002, and at least one communication bus 1003.
The memory 1002 stores computer-readable instructions, and the processor 1001 reads the computer-readable instructions stored in the memory 1002 through the communication bus 1003.
When executed by the processor 1001, the computer-readable instructions implement the liveness detection method of each of the foregoing embodiments.
In an exemplary embodiment of this application, a computer-readable storage medium stores a computer program which, when executed by a processor, implements the liveness detection method of each of the foregoing embodiments.
The foregoing is only exemplary embodiments of this application and is not intended to limit their implementations. A person of ordinary skill in the art can very conveniently make corresponding adaptations or modifications according to the main concept and spirit of the embodiments of this application; the protection scope of the embodiments shall therefore be subject to the scope of protection claimed in the claims.

Claims (17)

  1. A liveness detection method, performed by an electronic device, comprising:
    acquiring an image of an object to be detected;
    performing key point detection on a biometric feature corresponding to the object to be detected in the image;
    constructing a constraint box in the image according to the detected key points;
    capturing shape changes of the constraint box constructed in the image;
    if an abnormal deformation of the constraint box is captured or no key point is detected, determining that the object to be detected is a prosthesis.
  2. The method according to claim 1, wherein the capturing shape changes of the constraint box constructed in the image comprises:
    calculating shape data according to the shape of the constraint box;
    comparing the shape data with a dynamic change range;
    if the shape data is outside the dynamic change range, capturing that the constraint box has an abnormal deformation.
  3. The method according to claim 2, wherein the calculating shape data according to the shape of the constraint box comprises:
    calculating a shape proportion value of the constraint box according to the side lengths of the figure described by the constraint box in the image;
    taking the shape proportion value of the constraint box as the shape data.
  4. The method according to claim 1, wherein the determining that the object to be detected is a prosthesis if an abnormal deformation of the constraint box is captured or no key point is detected comprises:
    if an abnormal deformation of the constraint box is captured or no key point is detected, controlling a first counter to accumulate;
    performing liveness detection on the object to be detected according to subsequent frames of the video in which the image is located;
    when the count value of the first counter exceeds a first accumulation threshold, determining that the object to be detected is a prosthesis.
  5. The method according to claim 4, wherein the performing liveness detection on the object to be detected according to subsequent frames of the video in which the image is located comprises:
    constructing several new constraint boxes according to the subsequent frames of the video in which the image is located, each new constraint box corresponding to one subsequent frame;
    tracking the constraint box constructed in the image according to the several new constraint boxes;
    through the tracking, monitoring changes of the several new constraint boxes relative to the constraint box in the video in which the image is located;
    if an abnormal relative change is monitored or any one of the new constraint boxes fails to be constructed, controlling the first counter to accumulate.
  6. The method according to claim 1, wherein the biometric feature of the object to be detected is a facial feature;
    the performing key point detection on the biometric feature corresponding to the object to be detected in the image comprises:
    performing grayscale processing on the image to obtain a grayscale map of the image;
    inputting the grayscale map of the image into a face key point model for facial feature recognition, to obtain the key points of the facial features of the object to be detected in the image.
  7. The method according to any one of claims 1 to 6, further comprising:
    if no abnormal deformation of the constraint box is captured, calculating a biometric structure distance ratio corresponding to the image according to the key points of the biometric feature of the object to be detected in the image;
    capturing an action behavior of the object to be detected according to the change of the biometric structure distance ratio corresponding to the image relative to the biometric structure distance ratios in a feature sequence, the biometric structure distance ratios in the feature sequence being calculated according to the preceding several frames of historical images in the video in which the image is located;
    if an action behavior of the object to be detected is captured, determining that the object to be detected is a living body.
  8. The method according to claim 7, wherein the capturing an action behavior of the object to be detected according to the change of the biometric structure distance ratio corresponding to the image relative to the biometric structure distance ratios in the feature sequence comprises:
    calculating an average of the biometric structure distance ratios in the feature sequence;
    calculating a relative change rate of the biometric structure distance ratio corresponding to the image according to the average and the biometric structure distance ratio corresponding to the image;
    if the relative change rate of the biometric structure distance ratio corresponding to the image is less than a change threshold, capturing that the object to be detected has an action behavior.
  9. The method according to claim 7, further comprising:
    comparing the biometric structure distance ratio corresponding to the image with a normal structure interval;
    if the biometric structure distance ratio corresponding to the image is within the normal structure interval, adding it to the feature sequence.
  10. The method according to claim 9, wherein the feature sequence is a queue of a specified length;
    the adding the biometric structure distance ratio corresponding to the image to the feature sequence comprises:
    if the queue is not full, controlling the queue to perform an enqueue operation for the biometric structure distance ratio corresponding to the image;
    if the queue is full, controlling the queue to perform a dequeue operation at the head of the queue and an enqueue operation for the biometric structure distance ratio corresponding to the image at the tail of the queue.
  11. The method according to claim 7, wherein the biometric feature of the object to be detected comprises eyes and/or a mouth, and the biometric structure distance ratio comprises an eye aspect ratio and/or a mouth aspect ratio.
  12. A liveness detection apparatus, comprising:
    an image acquisition module, configured to acquire an image of an object to be detected;
    a key point detection module, configured to perform key point detection on a biometric feature corresponding to the object to be detected in the image;
    a constraint box construction module, configured to construct a constraint box in the image according to the detected key points;
    a deformation capture module, configured to capture shape changes of the constraint box constructed in the image;
    a prosthesis determination module, configured to determine that the object to be detected is a prosthesis if an abnormal deformation of the constraint box is captured or no key point is detected.
  13. A payment system, comprising a payment terminal and a payment server, wherein
    the payment terminal is configured to collect an image of a paying user;
    the payment terminal comprises a liveness detection apparatus, configured to construct a constraint box in the image of the paying user according to detected key points and to capture abnormal deformations of the constraint box in the image; if no abnormal deformation of the constraint box is captured, the paying user is determined to be a living body;
    when the paying user is a living body, the payment terminal performs identity verification on the paying user, so as to initiate a payment request to the payment server when the paying user passes identity verification.
  14. A video surveillance system, comprising a monitoring screen, several cameras, and a monitoring server, wherein
    the several cameras are configured to collect images of a monitored object;
    the monitoring server comprises a liveness detection apparatus, configured to construct a constraint box in the image of the monitored object according to detected key points and to capture abnormal deformations of the constraint box in the image; if no abnormal deformation of the constraint box is captured, the monitored object is determined to be a living body;
    when the monitored object is a living body, the monitoring server performs identity recognition on the monitored object to obtain a tracking target, and performs video surveillance of the tracking target through image frames on the monitoring screen.
  15. An access control system, comprising a reception device, a recognition server, and an access control device, wherein
    the reception device is configured to collect an image of an entry/exit object;
    the recognition server comprises a liveness detection apparatus, configured to construct a constraint box in the image of the entry/exit object according to detected key points and to capture abnormal deformations of the constraint box in the image; if no abnormal deformation of the constraint box is captured, the entry/exit object is determined to be a living body;
    when the entry/exit object is a living body, the recognition server performs identity recognition on the entry/exit object, so that the access control device configures access permissions for an entry/exit object whose identification succeeds, enabling that object to control, according to the configured permissions, the access barrier of the designated work area to perform a release action.
  16. An electronic device, comprising a processor and a memory connected to the processor, the memory storing computer-readable instructions executable by the processor, the processor executing the computer-readable instructions to perform the method according to any one of claims 1 to 11.
  17. A computer-readable storage medium, storing a computer program executable by a processor to perform the method according to any one of claims 1 to 11.
PCT/CN2019/111912 2018-10-25 2019-10-18 Liveness detection method and apparatus, electronic device, storage medium, and related systems applying the liveness detection method WO2020083111A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19876300.5A EP3872689B1 (en) 2018-10-25 2019-10-18 Liveness detection method and device, electronic apparatus, storage medium and related system using the liveness detection method
US17/073,035 US11551481B2 (en) 2018-10-25 2020-10-16 Living body detection method and apparatus, electronic device, storage medium, and related system to which living body detection method is applied

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811252616.2A CN109492551B (zh) 2018-10-25 2018-10-25 活体检测方法、装置及应用活体检测方法的相关系统
CN201811252616.2 2018-10-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/073,035 Continuation US11551481B2 (en) 2018-10-25 2020-10-16 Living body detection method and apparatus, electronic device, storage medium, and related system to which living body detection method is applied

Publications (1)

Publication Number Publication Date
WO2020083111A1 true WO2020083111A1 (zh) 2020-04-30

Family

ID=65691966

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/111912 WO2020083111A1 (zh) 2018-10-25 2019-10-18 活体检测方法、装置、电子设备、存储介质及应用活体检测方法的相关系统

Country Status (4)

Country Link
US (1) US11551481B2 (zh)
EP (1) EP3872689B1 (zh)
CN (1) CN109492551B (zh)
WO (1) WO2020083111A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116442393A (zh) Intelligent discharging method and system for a mixing station based on video recognition, and control device

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201904265A (zh) Abnormal motion detection method and system
CN109492551B (zh) Liveness detection method and apparatus, and related systems applying the liveness detection method
CN109492550B (zh) Liveness detection method and apparatus, and related systems applying the liveness detection method
WO2020213166A1 (ja) Image processing device, image processing method, and image processing program
CN110175522A (zh) Attendance method and system, and related products
CN111860057A (zh) Face image blur and liveness detection method and apparatus, storage medium, and device
CN111739060A (zh) Recognition method, device, and storage medium
CN110866473B (zh) Target object tracking detection method and apparatus, storage medium, and electronic apparatus
CN112200001A (zh) Deepfake video recognition method for specified scenes
CN113392719A (zh) Intelligent electronic lock unlocking method, electronic device, and storage medium
CN113396880A (zh) Indoor liveness detection system and method
CN113420070B (zh) Pollution discharge monitoring data processing method and apparatus, electronic device, and storage medium
CN113553928B (zh) Face liveness detection method, system, and computer device
CN114187666B (zh) Method and system for recognizing looking at a mobile phone while walking

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794464A (zh) Liveness detection method based on relative attributes
CN104794465A (zh) Liveness detection method based on pose information
CN105023010A (zh) Face liveness detection method and system
CN109492551A (zh) Liveness detection method and apparatus, and related systems applying the liveness detection method

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3614783B2 (ja) Facial appearance classification method
US9406212B2 (en) * 2010-04-01 2016-08-02 Sealed Air Corporation (Us) Automated monitoring and control of contamination activity in a production area
US9294475B2 (en) * 2013-05-13 2016-03-22 Hoyos Labs Ip, Ltd. System and method for generating a biometric identifier
US9003196B2 (en) * 2013-05-13 2015-04-07 Hoyos Labs Corp. System and method for authorizing access to access-controlled environments
AU2015274445B2 (en) * 2014-06-11 2019-05-23 Veridium Ip Limited System and method for facilitating user access to vehicles based on biometric information
WO2016033184A1 (en) * 2014-08-26 2016-03-03 Hoyos Labs Ip Ltd. System and method for determining liveness
US9633269B2 (en) * 2014-09-05 2017-04-25 Qualcomm Incorporated Image-based liveness detection for ultrasonic fingerprints
US9690998B2 (en) * 2014-11-13 2017-06-27 Intel Corporation Facial spoofing detection in image based biometrics
CN106302330B (zh) Identity verification method, apparatus, and system
KR20180000027A (ko) Emotion judgment system using feature points
US10210380B2 (en) * 2016-08-09 2019-02-19 Daon Holdings Limited Methods and systems for enhancing user liveness detection
US10331942B2 (en) * 2017-05-31 2019-06-25 Facebook, Inc. Face liveness detection
CN108427871A (zh) Rapid 3D face identity authentication method and apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794464A (zh) Liveness detection method based on relative attributes
CN104794465A (zh) Liveness detection method based on pose information
CN105023010A (zh) Face liveness detection method and system
CN109492551A (zh) Liveness detection method and apparatus, and related systems applying the liveness detection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3872689A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116442393A (zh) Intelligent discharging method and system for a mixing station based on video recognition, and control device
CN116442393B (zh) Intelligent discharging method and system for a mixing station based on video recognition, and control device

Also Published As

Publication number Publication date
CN109492551B (zh) 2023-03-24
EP3872689A4 (en) 2021-12-01
CN109492551A (zh) 2019-03-19
EP3872689A1 (en) 2021-09-01
US11551481B2 (en) 2023-01-10
EP3872689B1 (en) 2022-11-23
US20210042548A1 (en) 2021-02-11

Similar Documents

Publication Publication Date Title
WO2020083111A1 (zh) Liveness detection method and apparatus, electronic device, storage medium, and related systems applying the liveness detection method
CN109492550B (zh) Liveness detection method and apparatus, and related systems applying the liveness detection method
US10515199B2 (en) Systems and methods for facial authentication
CN108197586B (zh) Face recognition method and apparatus
US10339402B2 (en) Method and apparatus for liveness detection
US10430679B2 (en) Methods and systems for detecting head motion during an authentication transaction
WO2017181769A1 (zh) Face recognition method, apparatus and system, device, and storage medium
WO2016127437A1 (zh) Live face verification method and system, and computer program product
CN106295499B (zh) Age estimation method and apparatus
US10579783B1 (en) Identity authentication verification
CN108875468B (zh) Liveness detection method, liveness detection system, and storage medium
WO2016172923A1 (zh) Video detection method, video detection system, and computer program product
WO2021082045A1 (zh) Smile expression detection method and apparatus, computer device, and storage medium
WO2020164284A1 (zh) Liveness recognition method and apparatus based on plane detection, terminal, and storage medium
CN110705356A (zh) Function control method and related device
Peng et al. Face liveness detection for combating the spoofing attack in face recognition
CN108764153A (zh) Face recognition method, apparatus, system, and storage medium
Fegade et al. Residential security system based on facial recognition
US11600111B2 (en) System and method for face recognition
CN110572618B (zh) Illegal photographing behavior monitoring method, apparatus, and system
RU2815689C1 (ru) Method, terminal, and system for biometric identification
RU2798179C1 (ru) Method, terminal, and system for biometric identification
JP7480841B2 (ja) Event management method, event management device, system, and program
WO2023105635A1 (ja) Determination device, determination method, and determination program
WO2021082006A1 (zh) Monitoring apparatus and control method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19876300

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019876300

Country of ref document: EP

Effective date: 20210525