WO2021197369A1 - Living body detection method, apparatus, device, and computer-readable storage medium - Google Patents


Publication number
WO2021197369A1
Authority
WO
WIPO (PCT)
Prior art keywords
living body
data
movement
target
moving
Prior art date
Application number
PCT/CN2021/084308
Other languages
English (en)
French (fr)
Inventor
葛昊
赵晓辉
陈斌
宋晨
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021197369A1 publication Critical patent/WO2021197369A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G06V40/45 - Detection of the body part being alive
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Definitions

  • This application relates to artificial intelligence, and in particular to a method, device, electronic device, and computer-readable storage medium for living body detection.
  • Facial recognition is tied to a person's identity authentication, and therefore to everyone's vital interests, such as property, accounts, and privacy.
  • The spread of facial recognition applications has created opportunities to profit from attacks on facial recognition systems.
  • In face attendance systems, photos on work badges or photos taken with mobile phones may be used to clock in on behalf of other people; in face recognition online payment, a whole underground industry for defeating face authentication has emerged.
  • Therefore, non-living bodies need to be detected.
  • Existing algorithms fall into three main categories: action liveness, glare (flash) liveness, and silent liveness.
  • Action liveness requires the target to be detected to perform prompted actions, such as blinking or shaking the head.
  • Action liveness defends well against static non-living attacks, such as work badges and printed paper, but is less effective against dynamic attacks, such as video replays on a mobile phone screen.
  • Silent liveness can defend against relatively simple non-living attacks with obvious non-living characteristics, such as work badges, medium/low-resolution screen replays, and printed paper, but its defense against high-definition screen replays is weak.
  • Glare liveness has strong defense and can block most non-living attacks, but it lacks stability and is easily affected by ambient light.
  • the present application provides a living body detection method, device, electronic equipment, and computer-readable storage medium, the main purpose of which is to improve the overall living body detection accuracy.
  • a living body detection method includes:
  • a moving target sensor instruction is generated according to the living body detection request of the target to be tested sent by the mobile terminal, where the moving target sensor instruction is used to instruct the target to be tested to move the mobile terminal in a preset direction while maintaining face information collection;
  • a data change range calculation is performed on the feedback data of the moving target sensor instruction through a preset data change range formula to obtain the feedback data change range, where the feedback data includes movement angle data, movement speed data, and movement amplitude data;
  • living body data judgment is performed on the change ranges of the movement angle data, the movement speed data, and the movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold, to obtain living body judgment results for the movement angle, the movement speed, and the movement amplitude;
  • a detection result that the target to be tested is a living body is generated when the living body judgment results for the movement angle, the movement speed, and the movement amplitude are all living body judgment information.
  • the present application also provides a living body detection device, which includes:
  • the instruction generation module is used to generate a moving target sensor instruction according to the living body detection request of the target to be tested sent by the mobile terminal, where the instruction is used to instruct the target to be tested to move the mobile terminal in a preset direction while maintaining face information collection;
  • the data change range calculation module is used to perform a data change range calculation on the feedback data of the moving target sensor instruction through a preset data change range formula to obtain the feedback data change range, where the feedback data includes movement angle data, movement speed data, and movement amplitude data;
  • the data living body judgment module is used to perform living body data judgment on the change ranges of the movement angle data, the movement speed data, and the movement amplitude data according to the preset movement angle threshold, the preset movement speed threshold, and the preset movement amplitude threshold, to obtain living body judgment results for the movement angle, the movement speed, and the movement amplitude;
  • the detection result generation module is used to generate a detection result that the target to be tested is a living body when the obtained living body judgment results for the movement angle, the movement speed, and the movement amplitude are all living body judgment information.
  • the present application also provides an electronic device, the electronic device including:
  • At least one processor and,
  • a memory communicatively connected with the at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the following steps:
  • a moving target sensor instruction is generated according to the living body detection request of the target to be tested sent by the mobile terminal, where the moving target sensor instruction is used to instruct the target to be tested to move the mobile terminal in a preset direction while maintaining face information collection;
  • a data change range calculation is performed on the feedback data of the moving target sensor instruction through a preset data change range formula to obtain the feedback data change range, where the feedback data includes movement angle data, movement speed data, and movement amplitude data;
  • living body data judgment is performed on the change ranges of the movement angle data, the movement speed data, and the movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold, to obtain living body judgment results for the movement angle, the movement speed, and the movement amplitude;
  • a detection result that the target to be tested is a living body is generated when the living body judgment results for the movement angle, the movement speed, and the movement amplitude are all living body judgment information.
  • the present application also provides a computer-readable storage medium storing a computer program, which implements the following steps when executed by a processor:
  • a moving target sensor instruction is generated according to the living body detection request of the target to be tested sent by the mobile terminal, where the moving target sensor instruction is used to instruct the target to be tested to move the mobile terminal in a preset direction while maintaining face information collection;
  • a data change range calculation is performed on the feedback data of the moving target sensor instruction through a preset data change range formula to obtain the feedback data change range, where the feedback data includes movement angle data, movement speed data, and movement amplitude data;
  • living body data judgment is performed on the change ranges of the movement angle data, the movement speed data, and the movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold, to obtain living body judgment results for the movement angle, the movement speed, and the movement amplitude;
  • a detection result that the target to be tested is a living body is generated when the living body judgment results for the movement angle, the movement speed, and the movement amplitude are all living body judgment information.
  • The living body detection method, device, electronic equipment, and computer-readable storage medium proposed in this application compare the change ranges of the feedback data obtained by the target sensor with the preset thresholds for the corresponding feedback data types to obtain living body judgment results, and generate the detection result that the target to be tested is a living body when all judgments agree.
  • This application uses the pattern of information changes in the sensors built into mobile devices such as smartphones to determine whether the user, that is, the target to be tested, is a living body, with simple operation and high accuracy.
  • FIG. 1 is a schematic flowchart of a living body detection method provided by an embodiment of this application.
  • FIG. 2 is a schematic diagram of the modules of a living body detection device provided by an embodiment of this application.
  • FIG. 3 is a schematic diagram of the internal structure of an electronic device implementing a living body detection method provided by an embodiment of this application.
  • As shown in FIG. 1, it is a schematic flowchart of a living body detection method provided by an embodiment of this application.
  • the method can be executed by a device, and the device can be implemented by software and/or hardware.
  • the living body detection method includes:
  • S110 Generate a moving target sensor instruction according to the living body detection request of the target to be tested sent by the mobile terminal, where the moving target sensor instruction is used to instruct the target to be tested to move the mobile terminal in a preset direction while maintaining face information collection.
  • The target to be tested (i.e., the user) opens software that needs to perform liveness detection, such as software used for online payment, face-scanning attendance, or face unlocking of smart devices, so that liveness detection is required.
  • The client issues a liveness detection prompt, and the target to be tested confirms whether to perform liveness detection.
  • After confirmation, a liveness detection request of the target to be tested is generated; the processor obtains the request and generates a moving target sensor instruction, where the instruction may require the target to move the target sensor while undergoing liveness detection, for example, "Please move your detection device to the left" or "Please move your detection device to the right".
  • The target sensor may be a separate sensor or a mobile device with built-in sensors (iOS or Android platform), for example a smartphone, tablet, or smartwatch.
  • Generating a moving target sensor instruction according to the living body detection request of the target to be tested sent by the mobile terminal, where the instruction is used to instruct the target to be tested to move the mobile terminal in a preset direction while maintaining face information collection, includes:
  • a moving target sensor instruction is generated, and the moving target sensor instruction is used to instruct the target to be measured to move the mobile terminal in a preset direction while maintaining face information collection.
  • The living body detection request of the target to be tested is pre-associated with a camera-opening instruction of the mobile terminal.
  • When the processor receives the living body detection request, the mobile terminal turns on the camera device, displays the image display area, and prompts the target to adjust the position of the face so that the complete face image appears in the image display area.
  • Once the complete face image of the target to be tested is detected, a moving target sensor instruction is generated, and the target is instructed by voice or text to keep the face within the image display area.
  • the mobile terminal can also display a vertical and horizontal centerline in the image display area.
  • the vertical and horizontal centerline is a reference line for the target to be measured to move the mobile terminal in the preset direction while maintaining face information collection.
  • S120 Perform a data change range calculation on the feedback data of the moving target sensor command by a preset data change range formula to obtain the feedback data change range, where the feedback data includes movement angle data, movement speed data, and movement amplitude data.
  • When the target to be tested responds to the moving target sensor instruction, the target sensor generates feedback data corresponding to the sensor type. For example, when a user uses a smartphone to unlock via the face, the generated moving target sensor instruction is "Please move your phone to the left".
  • The built-in sensors of a smartphone may include acceleration sensors, direction sensors, gyroscopes, etc.; the sensors may differ by phone brand. For current smart devices, however, it is easy to obtain the movement angle, movement speed, and movement amplitude of the device through the built-in sensors.
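As a rough illustration of how movement angle, speed, and amplitude might be derived from raw built-in sensor samples, the sketch below integrates a gyroscope rate into a rotation angle and an accelerometer reading into speed and displacement. The sample layout, axis choice, and units are our assumptions for illustration, not this application's specification; real readings would also need filtering and drift correction.

```python
def movement_features(samples, dt):
    """Derive (angle, speed, amplitude) from raw sensor samples.

    `samples` is a list of (gyro_z, accel_x) tuples taken every `dt` seconds:
    gyro_z is the angular rate around the vertical axis in degrees/second,
    accel_x is the lateral acceleration in m/s^2 (illustrative layout).
    """
    angle = 0.0      # integrate angular rate -> rotation angle (degrees)
    speed = 0.0      # integrate acceleration -> lateral speed (m/s)
    position = 0.0   # integrate speed -> lateral displacement (m)
    positions = []
    for gyro_z, accel_x in samples:
        angle += gyro_z * dt
        speed += accel_x * dt
        position += speed * dt
        positions.append(position)
    # amplitude: total lateral displacement spread during the prompt
    amplitude = max(positions) - min(positions)
    return angle, speed, amplitude
```

One second of steady rotation at 10 degrees/second with constant 1 m/s^2 lateral acceleration yields an angle of about 10 degrees, a final speed of 1 m/s, and a displacement spread of roughly half a metre.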
  • the preset data change range formula is:
  • V_out = |max(v_i) - min(v_i)|
  • where V_out is the feedback data change range, max(v_i) is the maximum value of one type of feedback data, and min(v_i) is the minimum value of that feedback data.
  • The change range of each type of data refers to the absolute value of the difference between the maximum value and the minimum value of the corresponding sensor readings generated during the process of moving the target sensor.
  • a certain APP on a smart phone uses visual animation to prompt the user to hold the device and move the device horizontally. During the movement, the screen faces the user and requires the user to look at the phone. During this process, the user will hold the device to make an approximate arc trajectory.
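The preset data change range formula amounts to taking the absolute spread of one sensor type's readings. A minimal sketch (the function name is ours, not the application's):

```python
def change_range(values):
    """V_out = |max(v_i) - min(v_i)| over one type of sensor feedback data."""
    if not values:
        raise ValueError("no sensor readings collected")
    return abs(max(values) - min(values))
```

For angle readings of [2.0, 5.5, -1.0] degrees, the change range is 6.5.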
  • S130: Perform living body data judgment on the change ranges of the movement angle data, the movement speed data, and the movement amplitude data according to the preset movement angle threshold, the preset movement speed threshold, and the preset movement amplitude threshold, to obtain living body judgment results for the movement angle, the movement speed, and the movement amplitude.
  • The preset movement angle threshold, the preset movement speed threshold, and the preset movement amplitude threshold are compared with the change ranges of the movement angle data, the movement speed data, and the movement amplitude data, respectively, and living body data judgment is performed on each type of movement data according to the comparison results, so as to obtain the living body judgment results for the movement angle, the movement speed, and the movement amplitude.
  • The comparisons between the change ranges and the corresponding thresholds can be carried out in parallel or sequentially. When carried out in parallel, the target to be tested is judged to be a living body only if the judgment result for every type of data is a living body; otherwise it is a non-living body. When carried out sequentially, the next judgment is performed only if the previous judgment result is a living body.
  • In this embodiment, the preset movement angle threshold, the preset movement speed threshold, and the preset movement amplitude threshold are all stored in a blockchain. Performing living body data judgment on the change ranges of the movement angle data, the movement speed data, and the movement amplitude data according to these thresholds, to obtain the living body judgment results for the movement angle, the movement speed, and the movement amplitude respectively, includes:
  • comparing the change range of the movement amplitude data with the preset movement amplitude threshold, where a comparison result in which the change range of the movement amplitude data is greater than the preset movement amplitude threshold is taken as the living body judgment result for the movement amplitude; and
  • comparing the change range of the movement angle data with the preset movement angle threshold, where a comparison result in which the change range of the movement angle data is greater than the preset movement angle threshold indicates that the target to be tested can be judged to be a living body according to the movement angle data.
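The per-type threshold comparisons can be sketched in both modes; the threshold values below are placeholders for illustration (the application stores the real ones in a blockchain), and the dictionary keys are our own naming:

```python
# Placeholder thresholds for illustration only.
THRESHOLDS = {"angle": 15.0, "speed": 0.3, "amplitude": 0.2}

def judge_living_parallel(ranges, thresholds=THRESHOLDS):
    """Parallel mode: every change range must exceed its threshold;
    the target is a living body only if all per-type results agree."""
    per_type = {k: ranges[k] > t for k, t in thresholds.items()}
    return all(per_type.values()), per_type

def judge_living_sequential(ranges, thresholds=THRESHOLDS):
    """Sequential mode: stop at the first non-living result,
    skipping the remaining checks."""
    for k, t in thresholds.items():
        if ranges[k] <= t:
            return False
    return True
```

The sequential mode saves work when an early check already rules out a living body; the parallel mode additionally reports which data type failed.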
  • S140: When the obtained living body judgment results are all living body judgment information, generate the detection result that the target to be tested is a living body.
  • If the living body judgment results for the movement angle, the movement speed, and the movement amplitude are all living bodies, the target to be tested is a living body, and a detection result that the target to be tested is a living body is generated; the detection result can be fed back to the target in the form of text or voice. If even one of the three judgment results is non-living, a detection result that the target to be tested is non-living is generated and likewise fed back in the form of text or voice.
  • In this embodiment, before generating the detection result that the target to be tested is a living body according to the living body judgment results, the method further includes: collecting moving background video information of the target to be tested through the camera device, and performing silent living body judgment on the target to be tested according to the moving background video information; if the silent living body judgment passes, the detection result that the target to be tested is a living body is generated.
  • Performing silent living body judgment on the target to be tested according to the moving background video information includes:
  • performing frame extraction on the moving background video information at preset time intervals to obtain video frames, and performing face recognition on the video frames to obtain face video frames; if no face is recognized during face recognition, frame extraction is repeated until a face video frame is obtained;
  • performing face key point positioning on the face video frames, where the face key points generally include the left pupil, the right pupil, and the mouth;
  • comparing the face positioning coordinates obtained from the key point processing with a pre-saved face image of the target to be tested (for example, the ID card photo, or the original face image entered when phone unlock verification was first set up), and performing alignment to obtain a face-aligned picture; and
  • inputting the face-aligned picture into a classifier to calculate a living body score, and comparing the living body score with the preset silent living body threshold to obtain the silent living body judgment result.
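Two steps of the silent pipeline above can be sketched in isolation: choosing which frames to extract at the preset time interval, and comparing the classifier's living body score against the silent liveness threshold. The face detector, key point locator, and classifier are separate models and are not shown; aggregating per-frame scores by their mean is our assumption, not stated in the application.

```python
def frame_indices(total_frames, fps, interval_s):
    """Indices of the frames to extract at `interval_s`-second spacing."""
    step = max(1, round(fps * interval_s))
    return list(range(0, total_frames, step))

def silent_liveness_passes(frame_scores, threshold):
    """True when the mean per-frame living body score exceeds the preset
    silent liveness threshold (mean aggregation is an assumption)."""
    return sum(frame_scores) / len(frame_scores) > threshold
```

For a 100-frame clip at 30 fps sampled every 0.5 s, frames 0, 15, 30, ... would be extracted.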
  • In this embodiment, before generating the detection result that the target to be tested is a living body according to the silent living body judgment result, the method further includes: obtaining images of the eye gaze of the target to be tested through the camera device and performing a gaze-based living body judgment; if the judgment passes, the detection result that the target to be tested is a living body is generated.
  • The principle is that, while the target sensor data is collected and the device is moved according to the prompts, the eyes of a real person will follow the moving target sensor, whereas the eyes in a screen attack or paper attack cannot; this can be determined by an eye gaze estimation algorithm.
  • The gaze liveness check operates similarly to the silent liveness check above: it can run silently in the background, and its purpose is to increase the probability of detecting non-living bodies.
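One plausible way to decide whether the eyes "follow the moving target sensor" is to correlate the estimated gaze-angle track with the device-position track recorded over the same prompt: a live user's gaze varies with the device, while a replayed face's gaze stays flat. Gaze estimation itself is assumed to come from a separate model, and the 0.8 correlation threshold is a placeholder, not a value from this application.

```python
def gaze_follows_device(device_pos, gaze_angle, min_corr=0.8):
    """Pearson correlation between the device track and the gaze track;
    a strong positive correlation suggests a live user tracking the device."""
    n = len(device_pos)
    if n != len(gaze_angle) or n < 2:
        raise ValueError("need two equally long tracks")
    mx = sum(device_pos) / n
    my = sum(gaze_angle) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(device_pos, gaze_angle))
    vx = sum((x - mx) ** 2 for x in device_pos)
    vy = sum((y - my) ** 2 for y in gaze_angle)
    if vx == 0 or vy == 0:
        return False  # one track never moved -> cannot confirm following
    return cov / (vx * vy) ** 0.5 > min_corr
```

A gaze track that rises with the device position passes; a flat or opposing track (as a screen or paper replay would produce) fails.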
  • As shown in FIG. 2, it is a functional block diagram of a living body detection device according to an embodiment of the present application.
  • the living body detection device 200 described in this application can be installed in an electronic device.
  • the living body detection device may include an instruction generation module 210, a data change range calculation module 220, a data living body judgment module 230, and a detection result generation module 240.
  • The modules described in this application may also be called units, referring to a series of computer program segments that can be executed by the processor of an electronic device to complete fixed functions, and that are stored in the memory of the electronic device.
  • each module/unit is as follows:
  • The instruction generation module 210 is used to generate a moving target sensor instruction according to the living body detection request of the target to be tested sent by the mobile terminal, where the instruction is used to instruct the target to be tested to move the mobile terminal in a preset direction while maintaining face information collection.
  • The target to be tested (i.e., the user) opens software that needs to perform liveness detection, such as software used for online payment, face-scanning attendance, or face unlocking of smart devices, so that liveness detection is required.
  • The client issues a liveness detection prompt, and the target to be tested confirms whether to perform liveness detection.
  • After confirmation, a liveness detection request of the target to be tested is generated; the processor obtains the request and generates a moving target sensor instruction, where the instruction may require the target to move the target sensor while undergoing liveness detection, for example, "Please move your detection device to the left" or "Please move your detection device to the right".
  • The target sensor may be a separate sensor or a mobile device with built-in sensors (iOS or Android platform), for example a smartphone, tablet, or smartwatch.
  • Generating a moving target sensor instruction according to the living body detection request of the target to be tested sent by the mobile terminal, where the instruction is used to instruct the target to be tested to move the mobile terminal in a preset direction while maintaining face information collection, includes:
  • a moving target sensor instruction is generated, and the moving target sensor instruction is used to instruct the target to be measured to move the mobile terminal in a preset direction while maintaining face information collection.
  • The living body detection request of the target to be tested is pre-associated with a camera-opening instruction of the mobile terminal.
  • When the processor receives the living body detection request, the mobile terminal turns on the camera device, displays the image display area, and prompts the target to adjust the position of the face so that the complete face image appears in the image display area.
  • Once the complete face image of the target to be tested is detected, a moving target sensor instruction is generated, and the target is instructed by voice or text to keep the face within the image display area.
  • the mobile terminal can also display a vertical and horizontal centerline in the image display area.
  • the vertical and horizontal centerline is a reference line for the target to be measured to move the mobile terminal in the preset direction while maintaining face information collection.
  • The data change range calculation module 220 is used to perform a data change range calculation on the feedback data of the moving target sensor instruction through a preset data change range formula to obtain the feedback data change range, where the feedback data includes movement angle data, movement speed data, and movement amplitude data.
  • When the target to be tested responds to the moving target sensor instruction, the target sensor generates feedback data corresponding to the sensor type. For example, when a user uses a smartphone to unlock via the face, the generated moving target sensor instruction is "Please move your phone to the left".
  • The built-in sensors of a smartphone may include acceleration sensors, direction sensors, gyroscopes, etc.; the sensors may differ by phone brand. For current smart devices, however, it is easy to obtain the movement angle, movement speed, and movement amplitude of the device through the built-in sensors.
  • the preset data change range formula is:
  • V_out = |max(v_i) - min(v_i)|
  • where V_out is the feedback data change range, max(v_i) is the maximum value of one type of feedback data, and min(v_i) is the minimum value of that feedback data.
  • The change range of each type of data refers to the absolute value of the difference between the maximum value and the minimum value of the corresponding sensor readings generated during the process of moving the target sensor.
  • a certain APP on a smart phone uses visual animation to prompt the user to hold the device and move the device horizontally. During the movement, the screen faces the user and requires the user to look at the phone. During this process, the user will hold the device to make an approximate arc trajectory.
  • The data living body judgment module 230 is used to perform living body data judgment on the change ranges of the movement angle data, the movement speed data, and the movement amplitude data according to the preset movement angle threshold, the preset movement speed threshold, and the preset movement amplitude threshold, to obtain living body judgment results for the movement angle, the movement speed, and the movement amplitude.
  • The preset movement angle threshold, the preset movement speed threshold, and the preset movement amplitude threshold are compared with the change ranges of the movement angle data, the movement speed data, and the movement amplitude data, respectively, and living body data judgment is performed on each type of movement data according to the comparison results, so as to obtain the living body judgment results for the movement angle, the movement speed, and the movement amplitude.
  • The comparisons between the change ranges and the corresponding thresholds can be carried out in parallel or sequentially. When carried out in parallel, the target to be tested is judged to be a living body only if the judgment result for every type of data is a living body; otherwise it is a non-living body. When carried out sequentially, the next judgment is performed only if the previous judgment result is a living body.
  • In this embodiment, the preset movement angle threshold, the preset movement speed threshold, and the preset movement amplitude threshold are all stored in a blockchain. Performing living body data judgment on the change ranges of the movement angle data, the movement speed data, and the movement amplitude data according to these thresholds, to obtain the living body judgment results for the movement angle, the movement speed, and the movement amplitude respectively, includes:
  • comparing the change range of the movement amplitude data with the preset movement amplitude threshold, where a comparison result in which the change range of the movement amplitude data is greater than the preset movement amplitude threshold is taken as the living body judgment result for the movement amplitude; and
  • comparing the change range of the movement angle data with the preset movement angle threshold, where a comparison result in which the change range of the movement angle data is greater than the preset movement angle threshold indicates that the target to be tested can be judged to be a living body according to the movement angle data.
  • The detection result generation module 240 is used to generate a detection result that the target to be tested is a living body when the obtained living body judgment results for the movement angle, the movement speed, and the movement amplitude are all living body judgment information.
  • If the living body judgment results for the movement angle, the movement speed, and the movement amplitude are all living bodies, the target to be tested is a living body, and a detection result that the target to be tested is a living body is generated; the detection result can be fed back to the target in the form of text or voice. If even one of the three judgment results is non-living, a detection result that the target to be tested is non-living is generated and likewise fed back in the form of text or voice.
  • In this embodiment, before generating the detection result that the target to be tested is a living body according to the living body judgment results, the method further includes: collecting moving background video information of the target to be tested through the camera device, and performing silent living body judgment on the target to be tested according to the moving background video information; if the silent living body judgment passes, the detection result that the target to be tested is a living body is generated.
  • Performing silent living body judgment on the target to be tested according to the moving background video information includes:
  • performing frame extraction on the moving background video information at preset time intervals to obtain video frames, and performing face recognition on the video frames to obtain face video frames; if no face is recognized during face recognition, frame extraction is repeated until a face video frame is obtained;
  • performing face key point positioning on the face video frames, where the face key points generally include the left pupil, the right pupil, and the mouth;
  • comparing the face positioning coordinates obtained from the key point processing with a pre-saved face image of the target to be tested (for example, the ID card photo, or the original face image entered when phone unlock verification was first set up), and performing alignment to obtain a face-aligned picture; and
  • inputting the face-aligned picture into a classifier to calculate a living body score, and comparing the living body score with the preset silent living body threshold to obtain the silent living body judgment result.
  • before generating the detection result that the target under test is a living body according to the obtained information that the silent living-body judgment has passed, the method further includes:
  • acquiring eye-gaze pictures of the target under test through the camera device, performing living-body identification on them through gaze estimation, and generating, according to the information that the identification has passed, the detection result that the target under test is a living body.
  • the principle is that, while the target-sensor data is being collected and the device is moved according to the prompts, a real person's eyes can keep following the moving target sensor, whereas the eyes in screen attacks and paper attacks cannot; this can be determined with an eye-gaze-estimation algorithm.
  • the gaze-based liveness check runs similarly to the silent liveness check above: it can run silently in the background, with the aim of increasing the probability of detecting non-living bodies.
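The gaze check above can be sketched as a correlation test between the prompted device movement and the estimated gaze direction. The patent names eye-gaze estimation but no concrete criterion; the per-frame angle inputs, the correlation measure, and the `tol_corr` threshold below are assumptions of this sketch:

```python
import numpy as np

def gaze_follows_target(gaze_angles, device_angles, tol_corr=0.5):
    """A live subject's gaze should track the prompted device movement;
    screen and paper attacks produce gaze that does not follow.
    gaze_angles / device_angles: per-frame horizontal angles (degrees)."""
    gaze = np.asarray(gaze_angles, dtype=float)
    dev = np.asarray(device_angles, dtype=float)
    if gaze.std() == 0 or dev.std() == 0:
        return False  # gaze (or device) never moved: cannot be following
    corr = float(np.corrcoef(gaze, dev)[0, 1])
    return corr >= tol_corr
```

A static photo held in front of the camera would yield a constant gaze angle and fail the check even though the device itself moved.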
  • FIG. 3 is a schematic structural diagram of an electronic device implementing a living-body detection method according to an embodiment of the present application.
  • the electronic device 1 may include a processor 10, a memory 11, and a bus, and may also include a computer program stored in the memory 11 and running on the processor 10, such as a living body detection program 12.
  • the memory 11 includes at least one type of readable storage medium; the computer-readable storage medium may be non-volatile or volatile, and the readable storage medium includes flash memory, mobile hard disks, multimedia cards, card-type memory (for example, SD or DX memory), magnetic memory, magnetic disks, optical disks, etc.
  • in some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, for example a mobile hard disk of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1.
  • the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
  • the memory 11 can be used not only to store application software and various types of data installed in the electronic device 1, such as codes of a living body detection program, etc., but also to temporarily store data that has been output or will be output.
  • in some embodiments, the processor 10 may be composed of integrated circuits, for example a single packaged integrated circuit, or multiple integrated circuits packaged with the same or different functions, including combinations of one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, and various control chips.
  • the processor 10 is the control core of the electronic device; it uses various interfaces and lines to connect the components of the entire electronic device, and executes the various functions of the electronic device 1 and processes data by running or executing programs or modules stored in the memory 11 (such as the living-body detection program) and calling data stored in the memory 11.
  • the bus may be a peripheral component interconnect standard (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the bus can be divided into address bus, data bus, control bus and so on.
  • the bus is configured to implement connection and communication between the memory 11 and at least one processor 10 and the like.
  • FIG. 3 shows only an electronic device with some components; those skilled in the art will understand that the structure shown in FIG. 3 does not limit the electronic device 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
  • the electronic device 1 may also include a power source (such as a battery) for supplying power to the various components; preferably, the power source may be logically connected to the at least one processor 10 through a power management device, which implements functions such as charge management, discharge management, and power-consumption management.
  • the power source may also include any components such as one or more DC or AC power supplies, recharging devices, power-failure detection circuits, power converters or inverters, and power status indicators.
  • the electronic device 1 may also include various sensors, Bluetooth modules, Wi-Fi modules, etc., which will not be repeated here.
  • the electronic device 1 may also include a network interface.
  • the network interface may include a wired interface and/or a wireless interface (such as a Wi-Fi interface or a Bluetooth interface), which is usually used to establish a communication connection between the electronic device 1 and other electronic devices.
  • the electronic device 1 may also include a user interface.
  • the user interface may be a display (Display) and an input unit (such as a keyboard (Keyboard)).
  • the user interface may also be a standard wired interface or a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, etc.
  • the display can also be appropriately called a display screen or a display unit, which is used to display the information processed in the electronic device 1 and to display a visualized user interface.
  • the living-body detection program 12 stored in the memory 11 of the electronic device 1 is a combination of multiple instructions; when run in the processor 10, it can realize:
  • a moving-target-sensor instruction is generated, where the instruction is used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected;
  • the data change range of the feedback data of the moving-target-sensor instruction is calculated by means of a preset data-change-range formula to obtain the feedback-data change range, where the feedback data includes movement angle data, movement speed data, and movement amplitude data;
  • living-body data judgment is performed on the change ranges of the movement angle data, movement speed data, and movement amplitude data to obtain living-body judgment results for movement angle, movement speed, and movement amplitude;
  • if the obtained living-body judgment results are all living-body judgment information, the detection result that the target under test is a living body is generated.
  • the above preset movement angle threshold, preset movement speed threshold, and preset movement amplitude threshold can also be stored in a node of a blockchain.
  • if the integrated modules/units of the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a mobile hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
  • the blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Telephone Function (AREA)

Abstract

Relating to artificial intelligence, disclosed is a living-body detection method, comprising: generating a moving-target-sensor instruction according to a living-body detection request for a target under test sent by a mobile terminal; calculating, by means of a preset data-change-range formula, the data change range of the feedback data of the moving-target-sensor instruction to obtain a feedback-data change range; performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold respectively, to obtain living-body judgment results; and generating, according to the obtained judgment information that the living-body judgment results are all living body, a detection result that the target under test is a living body. Also relating to blockchain technology, the preset movement angle threshold, preset movement speed threshold, and preset movement amplitude threshold are all stored in a blockchain. The overall living-body detection accuracy can be improved.

Description

Living-body detection method, apparatus, device, and computer-readable storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on November 12, 2020, with application number 202011263189.5 and invention title "Living-body detection method, apparatus, device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to artificial intelligence, and in particular to a living-body detection method, apparatus, electronic device, and computer-readable storage medium.
Background
Face recognition technology is applied ever more widely in daily life, for example in face-scan attendance, face-based access control, face unlocking of mobile phones, face-recognition online payment, and face-recognition online identity verification. Face recognition is tied to a person's identity authentication and therefore to everyone's vital interests, such as property, accounts, and privacy. As face-recognition applications have developed, they have spawned attempts to profit by attacking face-recognition systems. For example, in face attendance systems, photos on work badges or photos taken with mobile phones may be used to clock in for others; in face-recognition online payment, an entire black market for forged face authentication has emerged. To defend against such attacks, non-living bodies must be detected. Existing algorithms fall mainly into three categories: action liveness, glare liveness, and silent liveness.
The inventors realized that action liveness requires the target under test to perform corresponding actions on prompt, such as blinking or shaking the head. Action liveness defends well against static non-living attacks such as work badges and printed paper, but poorly against dynamic attacks such as replayed mobile-phone videos. Silent liveness can defend against relatively simple attacks with obvious non-living features, such as work badges, medium/low-resolution screen replays, and printed paper, but offers weak defense against high-definition screen replays. Glare liveness defends strongly against most non-living attacks but lacks stability and is easily affected by ambient lighting.
Summary
This application provides a living-body detection method, apparatus, electronic device, and computer-readable storage medium, whose main purpose is to improve overall living-body detection accuracy.
In a first aspect, to achieve the above purpose, this application provides a living-body detection method, comprising:
generating a moving-target-sensor instruction according to a living-body detection request for a target under test sent by a mobile terminal, the moving-target-sensor instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected;
calculating, by means of a preset data-change-range formula, the data change range of the feedback data of the moving-target-sensor instruction to obtain a feedback-data change range, wherein the feedback data includes movement angle data, movement speed data, and movement amplitude data;
performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude;
generating, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, a detection result that the target under test is a living body.
In a second aspect, to solve the above problems, this application further provides a living-body detection apparatus, the apparatus comprising:
an instruction generation module, configured to generate a moving-target-sensor instruction according to a living-body detection request for a target under test sent by a mobile terminal, the instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected;
a data-change-range calculation module, configured to calculate, by means of a preset data-change-range formula, the data change range of the feedback data of the moving-target-sensor instruction to obtain a feedback-data change range, wherein the feedback data includes movement angle data, movement speed data, and movement amplitude data;
a data living-body judgment module, configured to perform living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude;
a detection-result generation module, configured to generate, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, a detection result that the target under test is a living body.
In a third aspect, to solve the above problems, this application further provides an electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the following steps:
generating a moving-target-sensor instruction according to a living-body detection request for a target under test sent by a mobile terminal, the instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected;
calculating, by means of a preset data-change-range formula, the data change range of the feedback data of the moving-target-sensor instruction to obtain a feedback-data change range, wherein the feedback data includes movement angle data, movement speed data, and movement amplitude data;
performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude;
generating, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, a detection result that the target under test is a living body.
In a fourth aspect, to solve the above problems, this application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the following steps:
generating a moving-target-sensor instruction according to a living-body detection request for a target under test sent by a mobile terminal, the instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected;
calculating, by means of a preset data-change-range formula, the data change range of the feedback data of the moving-target-sensor instruction to obtain a feedback-data change range, wherein the feedback data includes movement angle data, movement speed data, and movement amplitude data;
performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude;
generating, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, a detection result that the target under test is a living body.
The living-body detection method, apparatus, electronic device, and computer-readable storage medium proposed by this application obtain living-body judgment results by comparing the change ranges of the feedback data acquired by the target sensor with preset thresholds for the corresponding kinds of feedback data, synthesize the judgment information, and generate a detection result that the target under test is a living body. This application uses the patterns of information change in the various built-in sensors of mobile devices such as smartphones to judge whether the user, i.e., the target under test, is a living body; it is simple to operate and has high precision and accuracy.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a living-body detection method provided by an embodiment of this application;
FIG. 2 is a schematic module diagram of a living-body detection apparatus provided by an embodiment of this application;
FIG. 3 is a schematic diagram of the internal structure of an electronic device implementing the living-body detection method provided by an embodiment of this application;
The realization of the purpose, functional features, and advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described here are only used to explain this application and are not used to limit it.
This application provides a living-body detection method. Referring to FIG. 1, which is a schematic flowchart of a living-body detection method provided by an embodiment of this application, the method may be executed by an apparatus, and the apparatus may be implemented by software and/or hardware.
In this embodiment, the living-body detection method includes:
S110: generating a moving-target-sensor instruction according to a living-body detection request for a target under test sent by a mobile terminal, the instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected.
Specifically, when the target under test (i.e., the user) uses software that requires living-body detection, such as software for online payment, face-scan attendance, or face unlocking of smart devices, the software issues a living-body detection prompt to the client, and the target under test confirms by selection whether to proceed. If the target under test chooses living-body detection, a living-body detection request for the target under test is generated, and the processor obtains the request and generates a moving-target-sensor instruction. The instruction may require the target under test to move the target sensor while undergoing living-body detection, for example, "Please move your detection device to the left", "Please move your detection device to the right", "Please move your detection device up", or "Please move your detection device down". The target sensor may be a standalone sensor or a mobile device carrying the target sensor (iOS or Android platform), such as a smartphone, tablet computer, or smartwatch.
As a preferred embodiment of this application, generating the moving-target-sensor instruction according to the living-body detection request for the target under test sent by the mobile terminal, the instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected, includes:
receiving the living-body detection request for the target under test sent by the terminal;
displaying, on the mobile terminal, an image display area captured by an activated camera device, and prompting the target under test to adjust head pose and/or the mobile terminal so that the image display area displays a complete face image;
when a complete face image of the target under test is detected, generating the moving-target-sensor instruction, the instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected.
Specifically, the living-body detection request of the target under test is pre-associated with the camera activation instruction of the mobile terminal. When the processor receives the living-body detection request, the mobile terminal displays the image display area captured by the activated camera and prompts the target under test to adjust the face position so that a complete face image appears in the image display area. When a complete face image of the target under test is detected, the moving-target-sensor instruction is generated, and the target under test is instructed by voice or text to move the mobile terminal in the preset direction while face information continues to be collected. The mobile terminal may also display vertical and horizontal center lines in the image display area as reference lines for moving the mobile terminal in the preset direction while face information continues to be collected.
S120: calculating, by means of a preset data-change-range formula, the data change range of the feedback data of the moving-target-sensor instruction to obtain a feedback-data change range, wherein the feedback data includes movement angle data, movement speed data, and movement amplitude data.
Specifically, when the target under test responds to the moving-target-sensor instruction, the target sensor produces feedback data of the kind corresponding to the sensor type. For example, when a user unlocks a smartphone by face, the generated moving-target-sensor instruction is "Please move your phone to the left"; the phone's built-in sensors may include an accelerometer, an orientation sensor, a gyroscope, and so on, and different phone brands may carry different sensors. For current smart devices, however, obtaining the device's movement angle, movement speed, and movement amplitude through built-in sensors is easy to implement.
As a preferred embodiment of this application, the preset data-change-range formula is:
V_out = |max(v_i) - min(v_i)|
where V_out is the feedback-data change range, max(v_i) is the maximum value of a given kind of feedback data, and min(v_i) is the minimum value of that kind of feedback data.
Specifically, the change range of each kind of data is the absolute value of the difference between the maximum and minimum readings of each sensor produced while the target sensor is being moved. For example, a smartphone app may use a visual animation to prompt the user to hold the device and move it horizontally, with the screen facing the user and the user's gaze fixed on the phone; during this process the user traces an approximately circular arc with the hand-held device.
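The change-range computation above amounts to taking |max − min| over one sensor's readings; a minimal sketch (the function name is an assumption for the example):

```python
def change_range(readings):
    """V_out = |max(v_i) - min(v_i)| over one sensor's readings collected
    while the device is being moved (angle, speed, or amplitude data)."""
    return abs(max(readings) - min(readings))
```

For example, angle readings of [1, 5, -2] give a change range of 7, which would then be compared against the preset movement-angle threshold.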
S130: performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to the preset movement angle threshold, preset movement speed threshold, and preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude.
Specifically, the preset movement angle threshold, preset movement speed threshold, and preset movement amplitude threshold are compared with the change ranges of the movement angle data, movement speed data, and movement amplitude data respectively, and living-body data judgment is performed on each kind of movement data according to the comparison results, yielding living-body judgment results for movement angle, movement speed, and movement amplitude. The comparisons of the change ranges with the corresponding thresholds may be performed simultaneously or sequentially. When performed simultaneously, the target under test is a living body only if every kind of data yields a living-body result, and is otherwise a non-living body; when performed sequentially, the next judgment is performed only if the previous result is living body.
As a preferred embodiment of this application, the preset movement angle threshold, preset movement speed threshold, and preset movement amplitude threshold are all stored in a blockchain, and performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to these thresholds respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude respectively, includes:
comparing the change range of the movement angle data with the preset movement angle threshold, and taking the obtained comparison result that the change range of the movement angle data is greater than the preset movement angle threshold as a first living-body judgment result;
according to the first living-body judgment result, comparing the change range of the movement speed data with the preset movement speed threshold, and taking the obtained comparison result that the change range of the movement speed data is greater than the preset movement speed threshold as a second living-body judgment result;
according to the second living-body judgment result, comparing the change range of the movement amplitude data with the preset movement amplitude threshold, and taking the obtained comparison result that the change range of the movement amplitude data is greater than the preset movement amplitude threshold as the living-body judgment result for movement angle, movement speed, and movement amplitude.
Specifically, the change range of the movement angle data is compared with the preset movement angle threshold. If the change range is greater than the threshold, the target under test can be judged a living body on the basis of the movement angle data, and whether the change range of the movement speed data is living-body data is then judged on that basis. If the change range of the movement angle data is less than or equal to the threshold, the target under test can be judged a non-living body on the basis of the movement angle data, and no further judgment of the next data is needed; the movement speed is handled in the same way.
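The sequential judgment described above can be sketched as a fail-fast cascade; the function name is an assumption, and threshold values would come from the stored presets rather than the placeholders used here:

```python
def cascade_liveness(angle_range, speed_range, amp_range,
                     angle_thr, speed_thr, amp_thr):
    """Sequential judgment: each change range must exceed its threshold;
    stop as soon as one comparison indicates a non-living body."""
    if angle_range <= angle_thr:
        return False            # first judgment: non-living, no further checks
    if speed_range <= speed_thr:
        return False            # second judgment, reached only if the first passed
    return amp_range > amp_thr  # third judgment decides the final result
```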
S140: generating, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, a detection result that the target under test is a living body.
Specifically, when the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, the target under test is a living body, and a detection result that the target under test is a living body is generated; this result can be fed back to the target under test in text or voice form. If any one of the three judgment results is non-living, a detection result that the target under test is a non-living body is generated and likewise fed back in text or voice form.
As a preferred embodiment of this application, before generating the detection result that the target under test is a living body according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, the method further includes:
while the feedback data of the moving-target-sensor instruction is being produced, collecting moving-background video information of the target under test through a camera device;
performing a silent living-body judgment on the target under test according to the moving-background video information;
generating, according to the obtained information that the silent living-body judgment has passed, the detection result that the target under test is a living body.
Specifically, to make the living-body detection result more accurate, while the target under test moves the device carrying the target sensor, the device's own camera simultaneously collects moving-background video information of the target under test, the silent living-body judgment is performed on the target according to this video, and, if the silent judgment passes, the detection result that the target under test is a living body is generated.
As a preferred embodiment of this application, performing the silent living-body judgment on the target under test according to the moving-background video information includes:
extracting frames from the moving-background video information at preset time intervals to obtain video frames;
performing face recognition on the video frames to obtain face video frames;
performing face key-point localization on the face video frames to obtain face localization coordinates;
aligning the face localization coordinates with a pre-acquired face image of the target under test to obtain a face-aligned picture;
inputting the face-aligned picture into a classifier to compute a liveness score;
comparing the liveness score with a preset silent-liveness threshold.
Specifically, frames are extracted from the moving-background video at preset time intervals to obtain video frames, and face recognition is performed on the video frames to obtain face video frames. If no face is recognized during face recognition, frame extraction is repeated until a face video frame is obtained. Face key-point localization is then performed on the face video frame; typical face key points include the left pupil, the right pupil, and the mouth. Using a pre-stored face image of the target under test, for example the ID-card photo or the original face image entered when phone-unlock verification was set up, the face localization coordinates obtained from key-point localization are aligned with the face image to obtain a face-aligned picture. The face-aligned picture is then input into a classifier to compute a liveness score, and the silent living-body judgment result is obtained by comparing the liveness score with a preset silent-liveness threshold.
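The silent-liveness pipeline above can be sketched with the face detector, key-point localizer, aligner, and classifier injected as placeholder callables, since the patent does not specify concrete models; all function names and the 0.5 default threshold are assumptions of this sketch:

```python
def silent_liveness_score(video_frames, interval, detect_face, locate_keypoints,
                          align, classify):
    """Sample every `interval`-th frame, skip frames with no face, localize
    key points (left pupil, right pupil, mouth), align against the stored
    reference image, then score the aligned face with a classifier."""
    for idx in range(0, len(video_frames), interval):
        face = detect_face(video_frames[idx])
        if face is None:
            continue  # no face in this frame: keep sampling
        coords = locate_keypoints(face)
        aligned = align(coords)
        return classify(aligned)  # liveness score, e.g. in [0, 1]
    return None  # no face found in any sampled frame

def is_silently_live(score, threshold=0.5):
    """Silent judgment passes when the score exceeds the preset threshold."""
    return score is not None and score > threshold
```

In a real system the callables would wrap an actual detector, landmark model, alignment transform, and trained liveness classifier.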
As a preferred embodiment of this application, before generating the detection result that the target under test is a living body according to the obtained information that the silent living-body judgment has passed, the method further includes:
while the feedback data of the moving-target-sensor instruction is being produced, acquiring eye-gaze pictures of the target under test through the camera device;
performing living-body identification on the eye-gaze pictures of the target under test through gaze estimation;
generating, according to the information that the living-body identification has passed, the detection result that the target under test is a living body.
Specifically, eye-gaze liveness is used for judgment. The principle is that, while the target-sensor data is being collected and the device is moved according to the prompts, a real person's eyes can keep following the moving target sensor, whereas the eyes in screen attacks and paper attacks cannot; this can be determined with an eye-gaze-estimation algorithm. The gaze-based liveness check runs similarly to the silent liveness check above: it can run silently in the background, with the aim of increasing the probability of detecting non-living bodies.
FIG. 2 is a functional module diagram of a living-body detection apparatus according to an embodiment of this application.
The living-body detection apparatus 200 of this application may be installed in an electronic device. According to the implemented functions, the living-body detection apparatus may include an instruction generation module 210, a data-change-range calculation module 220, a data living-body judgment module 230, and a detection-result generation module 240. The modules described in this application may also be called units, referring to a series of computer program segments that can be executed by the processor of an electronic device and can complete fixed functions, and that are stored in the memory of the electronic device.
In this embodiment, the functions of the modules/units are as follows:
The instruction generation module 210 is configured to generate a moving-target-sensor instruction according to a living-body detection request for a target under test sent by a mobile terminal, the instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected.
Specifically, when the target under test (i.e., the user) uses software that requires living-body detection, such as software for online payment, face-scan attendance, or face unlocking of smart devices, the software issues a living-body detection prompt to the client, and the target under test confirms by selection whether to proceed. If the target under test chooses living-body detection, a living-body detection request for the target under test is generated, and the processor obtains the request and generates a moving-target-sensor instruction. The instruction may require the target under test to move the target sensor while undergoing living-body detection, for example, "Please move your detection device to the left", "Please move your detection device to the right", "Please move your detection device up", or "Please move your detection device down". The target sensor may be a standalone sensor or a mobile device carrying the target sensor (iOS or Android platform), such as a smartphone, tablet computer, or smartwatch.
As a preferred embodiment of this application, generating the moving-target-sensor instruction according to the living-body detection request for the target under test sent by the mobile terminal, the instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected, includes:
receiving the living-body detection request for the target under test sent by the terminal;
displaying, on the mobile terminal, an image display area captured by an activated camera device, and prompting the target under test to adjust head pose and/or the mobile terminal so that the image display area displays a complete face image;
when a complete face image of the target under test is detected, generating the moving-target-sensor instruction, the instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected.
Specifically, the living-body detection request of the target under test is pre-associated with the camera activation instruction of the mobile terminal. When the processor receives the living-body detection request, the mobile terminal displays the image display area captured by the activated camera and prompts the target under test to adjust the face position so that a complete face image appears in the image display area. When a complete face image of the target under test is detected, the moving-target-sensor instruction is generated, and the target under test is instructed by voice or text to move the mobile terminal in the preset direction while face information continues to be collected. The mobile terminal may also display vertical and horizontal center lines in the image display area as reference lines for moving the mobile terminal in the preset direction while face information continues to be collected.
The data-change-range calculation module 220 is configured to calculate, by means of a preset data-change-range formula, the data change range of the feedback data of the moving-target-sensor instruction to obtain a feedback-data change range, the feedback data including movement angle data, movement speed data, and movement amplitude data.
Specifically, when the target under test responds to the moving-target-sensor instruction, the target sensor produces feedback data of the kind corresponding to the sensor type. For example, when a user unlocks a smartphone by face, the generated moving-target-sensor instruction is "Please move your phone to the left"; the phone's built-in sensors may include an accelerometer, an orientation sensor, a gyroscope, and so on, and different phone brands may carry different sensors. For current smart devices, however, obtaining the device's movement angle, movement speed, and movement amplitude through built-in sensors is easy to implement.
As a preferred embodiment of this application, the preset data-change-range formula is:
V_out = |max(v_i) - min(v_i)|
where V_out is the feedback-data change range, max(v_i) is the maximum value of a given kind of feedback data, and min(v_i) is the minimum value of that kind of feedback data.
Specifically, the change range of each kind of data is the absolute value of the difference between the maximum and minimum readings of each sensor produced while the target sensor is being moved. For example, a smartphone app may use a visual animation to prompt the user to hold the device and move it horizontally, with the screen facing the user and the user's gaze fixed on the phone; during this process the user traces an approximately circular arc with the hand-held device.
The data living-body judgment module 230 is configured to perform living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude.
Specifically, the preset movement angle threshold, preset movement speed threshold, and preset movement amplitude threshold are compared with the change ranges of the movement angle data, movement speed data, and movement amplitude data respectively, and living-body data judgment is performed on each kind of movement data according to the comparison results, yielding living-body judgment results for movement angle, movement speed, and movement amplitude. The comparisons of the change ranges with the corresponding thresholds may be performed simultaneously or sequentially. When performed simultaneously, the target under test is a living body only if every kind of data yields a living-body result, and is otherwise a non-living body; when performed sequentially, the next judgment is performed only if the previous result is living body.
As a preferred embodiment of this application, the preset movement angle threshold, preset movement speed threshold, and preset movement amplitude threshold are all stored in a blockchain, and performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to these thresholds respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude respectively, includes:
comparing the change range of the movement angle data with the preset movement angle threshold, and taking the obtained comparison result that the change range of the movement angle data is greater than the preset movement angle threshold as a first living-body judgment result;
according to the first living-body judgment result, comparing the change range of the movement speed data with the preset movement speed threshold, and taking the obtained comparison result that the change range of the movement speed data is greater than the preset movement speed threshold as a second living-body judgment result;
according to the second living-body judgment result, comparing the change range of the movement amplitude data with the preset movement amplitude threshold, and taking the obtained comparison result that the change range of the movement amplitude data is greater than the preset movement amplitude threshold as the living-body judgment result for movement angle, movement speed, and movement amplitude.
Specifically, the change range of the movement angle data is compared with the preset movement angle threshold. If the change range is greater than the threshold, the target under test can be judged a living body on the basis of the movement angle data, and whether the change range of the movement speed data is living-body data is then judged on that basis. If the change range of the movement angle data is less than or equal to the threshold, the target under test can be judged a non-living body on the basis of the movement angle data, and no further judgment of the next data is needed; the movement speed is handled in the same way.
The detection-result generation module 240 generates, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, a detection result that the target under test is a living body.
Specifically, when the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, the target under test is a living body, and a detection result that the target under test is a living body is generated; this result can be fed back to the target under test in text or voice form. If any one of the three judgment results is non-living, a detection result that the target under test is a non-living body is generated and likewise fed back in text or voice form.
As a preferred embodiment of this application, before generating the detection result that the target under test is a living body according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, the method further includes:
while the feedback data of the moving-target-sensor instruction is being produced, collecting moving-background video information of the target under test through a camera device;
performing a silent living-body judgment on the target under test according to the moving-background video information;
generating, according to the obtained information that the silent living-body judgment has passed, the detection result that the target under test is a living body.
Specifically, to make the living-body detection result more accurate, while the target under test moves the device carrying the target sensor, the device's own camera simultaneously collects moving-background video information of the target under test, the silent living-body judgment is performed on the target according to this video, and, if the silent judgment passes, the detection result that the target under test is a living body is generated.
As a preferred embodiment of this application, performing the silent living-body judgment on the target under test according to the moving-background video information includes:
extracting frames from the moving-background video information at preset time intervals to obtain video frames;
performing face recognition on the video frames to obtain face video frames;
performing face key-point localization on the face video frames to obtain face localization coordinates;
aligning the face localization coordinates with a pre-acquired face image of the target under test to obtain a face-aligned picture;
inputting the face-aligned picture into a classifier to compute a liveness score;
comparing the liveness score with a preset silent-liveness threshold.
Specifically, frames are extracted from the moving-background video at preset time intervals to obtain video frames, and face recognition is performed on the video frames to obtain face video frames. If no face is recognized during face recognition, frame extraction is repeated until a face video frame is obtained. Face key-point localization is then performed on the face video frame; typical face key points include the left pupil, the right pupil, and the mouth. Using a pre-stored face image of the target under test, for example the ID-card photo or the original face image entered when phone-unlock verification was set up, the face localization coordinates obtained from key-point localization are aligned with the face image to obtain a face-aligned picture. The face-aligned picture is then input into a classifier to compute a liveness score, and the silent living-body judgment result is obtained by comparing the liveness score with a preset silent-liveness threshold.
As a preferred embodiment of this application, before generating the detection result that the target under test is a living body according to the obtained information that the silent living-body judgment has passed, the method further includes:
while the feedback data of the moving-target-sensor instruction is being produced, acquiring eye-gaze pictures of the target under test through the camera device;
performing living-body identification on the eye-gaze pictures of the target under test through gaze estimation;
generating, according to the information that the living-body identification has passed, the detection result that the target under test is a living body.
Specifically, eye-gaze liveness is used for judgment. The principle is that, while the target-sensor data is being collected and the device is moved according to the prompts, a real person's eyes can keep following the moving target sensor, whereas the eyes in screen attacks and paper attacks cannot; this can be determined with an eye-gaze-estimation algorithm. The gaze-based liveness check runs similarly to the silent liveness check above: it can run silently in the background, with the aim of increasing the probability of detecting non-living bodies.
FIG. 3 is a schematic structural diagram of an electronic device implementing the living-body detection method according to an embodiment of this application.
The electronic device 1 may include a processor 10, a memory 11, and a bus, and may also include a computer program stored in the memory 11 and runnable on the processor 10, such as a living-body detection program 12.
The memory 11 includes at least one type of readable storage medium; the computer-readable storage medium may be non-volatile or volatile, and the readable storage medium includes flash memory, mobile hard disks, multimedia cards, card-type memory (for example, SD or DX memory), magnetic memory, magnetic disks, optical disks, etc. In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, for example a mobile hard disk of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1. Further, the memory 11 may include both the internal storage unit of the electronic device 1 and an external storage device. The memory 11 can be used not only to store application software installed in the electronic device 1 and various kinds of data, such as the code of the living-body detection program, but also to temporarily store data that has been output or will be output.
In some embodiments, the processor 10 may be composed of integrated circuits, for example a single packaged integrated circuit, or multiple integrated circuits packaged with the same or different functions, including combinations of one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, and various control chips. The processor 10 is the control core (Control Unit) of the electronic device; it uses various interfaces and lines to connect the components of the entire electronic device, and executes the various functions of the electronic device 1 and processes data by running or executing programs or modules stored in the memory 11 (such as the living-body detection program) and calling data stored in the memory 11.
The bus may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, and so on. The bus is configured to implement connection and communication between the memory 11, the at least one processor 10, and other components.
FIG. 3 shows only an electronic device with some components; those skilled in the art will understand that the structure shown in FIG. 3 does not limit the electronic device 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device 1 may also include a power source (such as a battery) supplying power to the various components; preferably, the power source may be logically connected to the at least one processor 10 through a power management device, which implements functions such as charge management, discharge management, and power-consumption management. The power source may also include any components such as one or more DC or AC power supplies, recharging devices, power-failure detection circuits, power converters or inverters, and power status indicators. The electronic device 1 may also include various sensors, a Bluetooth module, a Wi-Fi module, and the like, which are not repeated here.
Further, the electronic device 1 may also include a network interface; optionally, the network interface may include a wired interface and/or a wireless interface (such as a Wi-Fi interface or a Bluetooth interface), which is usually used to establish a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may also include a user interface; the user interface may be a display (Display) or an input unit (such as a keyboard (Keyboard)); optionally, the user interface may also be a standard wired interface or wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display may also be appropriately called a display screen or display unit, used to display the information processed in the electronic device 1 and to display a visualized user interface.
It should be understood that the embodiments are for illustration only, and the scope of the patent application is not limited by this structure.
The living-body detection program 12 stored in the memory 11 of the electronic device 1 is a combination of multiple instructions; when run in the processor 10, it can realize:
generating a moving-target-sensor instruction according to a living-body detection request for a target under test sent by a mobile terminal, the instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected;
calculating, by means of a preset data-change-range formula, the data change range of the feedback data of the moving-target-sensor instruction to obtain a feedback-data change range, wherein the feedback data includes movement angle data, movement speed data, and movement amplitude data;
performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude;
generating, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, a detection result that the target under test is a living body.
Specifically, for the specific implementation of the above instructions by the processor 10, reference may be made to the description of the relevant steps in the embodiment corresponding to FIG. 1, which is not repeated here. It should be emphasized that, to further ensure the privacy and security of the preset movement angle threshold, preset movement speed threshold, and preset movement amplitude threshold, these thresholds may also be stored in a node of a blockchain.
Further, if the integrated modules/units of the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a mobile hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
In the several embodiments provided in this application, those skilled in the art will appreciate that this application is not limited to the details of the above exemplary embodiments and can be implemented in other specific forms without departing from the spirit or basic features of this application.
The blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms.
The above embodiments are only used to illustrate the technical solutions of this application and not to limit them. Although this application has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of this application can be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of this application.

Claims (20)

  1. A living-body detection method, wherein the method comprises:
    generating a moving-target-sensor instruction according to a living-body detection request for a target under test sent by a mobile terminal, the moving-target-sensor instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected;
    calculating, by means of a preset data-change-range formula, the data change range of the feedback data of the moving-target-sensor instruction to obtain a feedback-data change range, wherein the feedback data comprises movement angle data, movement speed data, and movement amplitude data;
    performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude;
    generating, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, a detection result that the target under test is a living body.
  2. The living-body detection method according to claim 1, wherein generating the moving-target-sensor instruction according to the living-body detection request for the target under test sent by the mobile terminal, the moving-target-sensor instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected, comprises:
    receiving the living-body detection request for the target under test sent by the terminal;
    displaying, on the mobile terminal, an image display area captured by an activated camera device, and prompting the target under test to adjust head pose and/or the mobile terminal so that the image display area displays a complete face image;
    when a complete face image of the target under test is detected, generating the moving-target-sensor instruction, the moving-target-sensor instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected.
  3. The living-body detection method according to claim 1, wherein the preset data-change-range formula is:
    V_out = |max(v_i) - min(v_i)|
    where V_out is the feedback-data change range, max(v_i) is the maximum value of a given kind of feedback data, and min(v_i) is the minimum value of that kind of feedback data.
  4. The living-body detection method according to claim 1, wherein the preset movement angle threshold, the preset movement speed threshold, and the preset movement amplitude threshold are all stored in a blockchain, and performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to the preset movement angle threshold, preset movement speed threshold, and preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude respectively, comprises:
    comparing the change range of the movement angle data with the preset movement angle threshold, and taking the obtained comparison result that the change range of the movement angle data is greater than the preset movement angle threshold as a first living-body judgment result;
    according to the first living-body judgment result, comparing the change range of the movement speed data with the preset movement speed threshold, and taking the obtained comparison result that the change range of the movement speed data is greater than the preset movement speed threshold as a second living-body judgment result;
    according to the second living-body judgment result, comparing the change range of the movement amplitude data with the preset movement amplitude threshold, and taking the obtained comparison result that the change range of the movement amplitude data is greater than the preset movement amplitude threshold as the living-body judgment result for movement angle, movement speed, and movement amplitude.
  5. The living-body detection method according to claim 1, wherein before generating, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, the detection result that the target under test is a living body, the method further comprises:
    while the feedback data of the moving-target-sensor instruction is being produced, collecting moving-background video information of the target under test through a camera device;
    performing a silent living-body judgment on the target under test according to the moving-background video information;
    generating, according to the obtained information that the silent living-body judgment has passed, the detection result that the target under test is a living body.
  6. The living-body detection method according to claim 5, wherein performing the silent living-body judgment on the target under test according to the moving-background video information comprises:
    extracting frames from the moving-background video information at preset time intervals to obtain video frames;
    performing face recognition on the video frames to obtain face video frames;
    performing face key-point localization on the face video frames to obtain face localization coordinates;
    aligning the face localization coordinates with a pre-acquired face image of the target under test to obtain a face-aligned picture;
    inputting the face-aligned picture into a classifier to compute a liveness score;
    comparing the liveness score with a preset silent-liveness threshold.
  7. The living-body detection method according to claim 5, wherein before generating, according to the obtained information that the silent living-body judgment has passed, the detection result that the target under test is a living body, the method further comprises:
    while the feedback data of the moving-target-sensor instruction is being produced, acquiring eye-gaze pictures of the target under test through the camera device;
    performing living-body identification on the eye-gaze pictures of the target under test through gaze estimation;
    generating, according to the information that the living-body identification has passed, the detection result that the target under test is a living body.
  8. A living-body detection apparatus, wherein the apparatus comprises:
    an instruction generation module, configured to generate a moving-target-sensor instruction according to a living-body detection request for a target under test sent by a mobile terminal, the moving-target-sensor instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected;
    a data-change-range calculation module, configured to calculate, by means of a preset data-change-range formula, the data change range of the feedback data of the moving-target-sensor instruction to obtain a feedback-data change range, wherein the feedback data comprises movement angle data, movement speed data, and movement amplitude data;
    a data living-body judgment module, configured to perform living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude;
    a detection-result generation module, configured to generate, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, a detection result that the target under test is a living body.
  9. An electronic device, wherein the electronic device comprises:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the following steps:
    generating a moving-target-sensor instruction according to a living-body detection request for a target under test sent by a mobile terminal, the moving-target-sensor instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected;
    calculating, by means of a preset data-change-range formula, the data change range of the feedback data of the moving-target-sensor instruction to obtain a feedback-data change range, wherein the feedback data comprises movement angle data, movement speed data, and movement amplitude data;
    performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude;
    generating, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, a detection result that the target under test is a living body.
  10. The electronic device according to claim 9, wherein generating the moving-target-sensor instruction according to the living-body detection request for the target under test sent by the mobile terminal, the moving-target-sensor instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected, comprises:
    receiving the living-body detection request for the target under test sent by the terminal;
    displaying, on the mobile terminal, an image display area captured by an activated camera device, and prompting the target under test to adjust head pose and/or the mobile terminal so that the image display area displays a complete face image;
    when a complete face image of the target under test is detected, generating the moving-target-sensor instruction, the moving-target-sensor instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected.
  11. The electronic device according to claim 9, wherein the preset data-change-range formula is:
    V_out = |max(v_i) - min(v_i)|
    where V_out is the feedback-data change range, max(v_i) is the maximum value of a given kind of feedback data, and min(v_i) is the minimum value of that kind of feedback data.
  12. The electronic device according to claim 9, wherein the preset movement angle threshold, the preset movement speed threshold, and the preset movement amplitude threshold are all stored in a blockchain, and performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to the preset movement angle threshold, preset movement speed threshold, and preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude respectively, comprises:
    comparing the change range of the movement angle data with the preset movement angle threshold, and taking the obtained comparison result that the change range of the movement angle data is greater than the preset movement angle threshold as a first living-body judgment result;
    according to the first living-body judgment result, comparing the change range of the movement speed data with the preset movement speed threshold, and taking the obtained comparison result that the change range of the movement speed data is greater than the preset movement speed threshold as a second living-body judgment result;
    according to the second living-body judgment result, comparing the change range of the movement amplitude data with the preset movement amplitude threshold, and taking the obtained comparison result that the change range of the movement amplitude data is greater than the preset movement amplitude threshold as the living-body judgment result for movement angle, movement speed, and movement amplitude.
  13. The electronic device according to claim 9, wherein before generating, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, the detection result that the target under test is a living body, the following are further performed:
    while the feedback data of the moving-target-sensor instruction is being produced, collecting moving-background video information of the target under test through a camera device;
    performing a silent living-body judgment on the target under test according to the moving-background video information;
    generating, according to the obtained information that the silent living-body judgment has passed, the detection result that the target under test is a living body.
  14. The electronic device according to claim 13, wherein performing the silent living-body judgment on the target under test according to the moving-background video information comprises:
    extracting frames from the moving-background video information at preset time intervals to obtain video frames;
    performing face recognition on the video frames to obtain face video frames;
    performing face key-point localization on the face video frames to obtain face localization coordinates;
    aligning the face localization coordinates with a pre-acquired face image of the target under test to obtain a face-aligned picture;
    inputting the face-aligned picture into a classifier to compute a liveness score;
    comparing the liveness score with a preset silent-liveness threshold.
  15. A computer-readable storage medium storing a computer program, wherein, when the computer program is executed by a processor, the following steps are implemented:
    generating a moving-target-sensor instruction according to a living-body detection request for a target under test sent by a mobile terminal, the moving-target-sensor instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected;
    calculating, by means of a preset data-change-range formula, the data change range of the feedback data of the moving-target-sensor instruction to obtain a feedback-data change range, wherein the feedback data comprises movement angle data, movement speed data, and movement amplitude data;
    performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to a preset movement angle threshold, a preset movement speed threshold, and a preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude;
    generating, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, a detection result that the target under test is a living body.
  16. The computer-readable storage medium according to claim 15, wherein generating the moving-target-sensor instruction according to the living-body detection request for the target under test sent by the mobile terminal, the moving-target-sensor instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected, comprises:
    receiving the living-body detection request for the target under test sent by the terminal;
    displaying, on the mobile terminal, an image display area captured by an activated camera device, and prompting the target under test to adjust head pose and/or the mobile terminal so that the image display area displays a complete face image;
    when a complete face image of the target under test is detected, generating the moving-target-sensor instruction, the moving-target-sensor instruction being used to instruct the target under test to move the mobile terminal in a preset direction while face information continues to be collected.
  17. The computer-readable storage medium according to claim 15, wherein the preset data-change-range formula is:
    V_out = |max(v_i) - min(v_i)|
    where V_out is the feedback-data change range, max(v_i) is the maximum value of a given kind of feedback data, and min(v_i) is the minimum value of that kind of feedback data.
  18. The computer-readable storage medium according to claim 15, wherein the preset movement angle threshold, the preset movement speed threshold, and the preset movement amplitude threshold are all stored in a blockchain, and performing living-body data judgment on the change ranges of the movement angle data, movement speed data, and movement amplitude data according to the preset movement angle threshold, preset movement speed threshold, and preset movement amplitude threshold respectively, to obtain living-body judgment results for movement angle, movement speed, and movement amplitude respectively, comprises:
    comparing the change range of the movement angle data with the preset movement angle threshold, and taking the obtained comparison result that the change range of the movement angle data is greater than the preset movement angle threshold as a first living-body judgment result;
    according to the first living-body judgment result, comparing the change range of the movement speed data with the preset movement speed threshold, and taking the obtained comparison result that the change range of the movement speed data is greater than the preset movement speed threshold as a second living-body judgment result;
    according to the second living-body judgment result, comparing the change range of the movement amplitude data with the preset movement amplitude threshold, and taking the obtained comparison result that the change range of the movement amplitude data is greater than the preset movement amplitude threshold as the living-body judgment result for movement angle, movement speed, and movement amplitude.
  19. The computer-readable storage medium according to claim 15, wherein before generating, according to the obtained judgment information that the living-body judgment results for movement angle, movement speed, and movement amplitude are all living body, the detection result that the target under test is a living body, the following are further performed:
    while the feedback data of the moving-target-sensor instruction is being produced, collecting moving-background video information of the target under test through a camera device;
    performing a silent living-body judgment on the target under test according to the moving-background video information;
    generating, according to the obtained information that the silent living-body judgment has passed, the detection result that the target under test is a living body.
  20. The computer-readable storage medium according to claim 19, wherein performing the silent living-body judgment on the target under test according to the moving-background video information comprises:
    extracting frames from the moving-background video information at preset time intervals to obtain video frames;
    performing face recognition on the video frames to obtain face video frames;
    performing face key-point localization on the face video frames to obtain face localization coordinates;
    aligning the face localization coordinates with a pre-acquired face image of the target under test to obtain a face-aligned picture;
    inputting the face-aligned picture into a classifier to compute a liveness score;
    comparing the liveness score with a preset silent-liveness threshold.
PCT/CN2021/084308 2020-11-12 2021-03-31 Living-body detection method, apparatus, device, and computer-readable storage medium WO2021197369A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011263189.5 2020-11-12
CN202011263189.5A CN112380979B (zh) 2020-11-12 2020-11-12 Living-body detection method, apparatus, device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021197369A1 true WO2021197369A1 (zh) 2021-10-07

Family

ID=74583399

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/084308 WO2021197369A1 (zh) 2020-11-12 2021-03-31 Living-body detection method, apparatus, device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112380979B (zh)
WO (1) WO2021197369A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113900526A (zh) * 2021-10-29 2022-01-07 Shenzhen TCL Digital Technology Co., Ltd. Three-dimensional human image display control method and apparatus, storage medium, and display device
CN116883003A (zh) * 2023-07-10 2023-10-13 State Grid Corporation of China Customer Service Center Anti-fraud method and system for mobile-terminal electricity-purchase payment based on biological probe technology
CN117011950A (zh) * 2023-08-29 2023-11-07 Guozhengtong Technology Co., Ltd. Living-body detection method and apparatus

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112380979B (zh) 2020-11-12 2024-05-07 Ping An Technology (Shenzhen) Co., Ltd. Living-body detection method, apparatus, device, and computer-readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170323299A1 (en) * 2016-05-03 2017-11-09 Facebook, Inc. Facial recognition identification for in-store payment transactions
CN107346422A (zh) * 2017-06-30 2017-11-14 Chengdu University Living-body face recognition method based on blink detection
CN109886084A (zh) * 2019-01-03 2019-06-14 Guangdong Shuxiang Intelligent Technology Co., Ltd. Gyroscope-based face authentication method, electronic device, and storage medium
CN112380979A (zh) * 2020-11-12 2021-02-19 Ping An Technology (Shenzhen) Co., Ltd. Living-body detection method, apparatus, device, and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512632B (zh) * 2015-12-09 2019-04-05 Beijing Megvii Technology Co., Ltd. Living-body detection method and apparatus


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113900526A (zh) * 2021-10-29 2022-01-07 Shenzhen TCL Digital Technology Co., Ltd. Three-dimensional human image display control method and apparatus, storage medium, and display device
CN116883003A (zh) * 2023-07-10 2023-10-13 State Grid Corporation of China Customer Service Center Anti-fraud method and system for mobile-terminal electricity-purchase payment based on biological probe technology
CN117011950A (zh) * 2023-08-29 2023-11-07 Guozhengtong Technology Co., Ltd. Living-body detection method and apparatus
CN117011950B (zh) * 2023-08-29 2024-02-02 Guozhengtong Technology Co., Ltd. Living-body detection method and apparatus

Also Published As

Publication number Publication date
CN112380979A (zh) 2021-02-19
CN112380979B (zh) 2024-05-07

Similar Documents

Publication Publication Date Title
WO2021197369A1 (zh) Living-body detection method, apparatus, device, and computer-readable storage medium
US11521423B2 (en) Occlusion detection for facial recognition processes
US10824849B2 (en) Method, apparatus, and system for resource transfer
JP2020194608A (ja) 生体検知装置、生体検知方法、および、生体検知プログラム
EP2869238A1 (en) Methods and systems for determining user liveness
GB2560340A (en) Verification method and system
US20120320181A1 (en) Apparatus and method for security using authentication of face
CN104143086A (zh) 人像比对在移动终端操作系统上的应用技术
US20220139116A1 (en) System and method for liveness detection
EP4033458A2 (en) Method and apparatus of face anti-spoofing, device, storage medium, and computer program product
US11756336B2 (en) Iris authentication device, iris authentication method, and recording medium
CN109543390B (zh) 一种信息安全管理方法和系统
TW202009761A (zh) 身分識別方法、裝置和電腦可讀儲存媒體
US20230306792A1 (en) Spoof Detection Based on Challenge Response Analysis
CN113888500A (zh) 基于人脸图像的炫光程度检测方法、装置、设备及介质
US20230177886A1 (en) Biometric determination device and biometric determination method
WO2021215015A1 (ja) 認証装置、認証方法及び認証プログラム
CN109543389B (zh) 一种信息保护方法和系统
TWI727337B (zh) 電子裝置及人臉識別方法
US12014577B2 (en) Spoof detection using catadioptric spatiotemporal corneal reflection dynamics
KR102627254B1 (ko) 전자 장치 및 얼굴 인식 시스템, 그리고 이의 스푸핑 방지 방법
US20230084760A1 (en) Spoof detection using catadioptric spatiotemporal corneal reflection dynamics
KR20180114605A (ko) 안면인식방법
CN117765621A (zh) 活体检测方法、装置及存储介质
JP2019200722A (ja) 顔画像の適正判定装置、顔画像の適正判定方法、プログラム、および記録媒体

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21779568

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21779568

Country of ref document: EP

Kind code of ref document: A1