WO2023024473A1 - Living body detection method and apparatus, electronic device, computer-readable storage medium and computer program product


Info

Publication number: WO2023024473A1
Application number: PCT/CN2022/079043
Authority: WIPO (PCT)
Prior art keywords: living body, body detection, image, target person, face
Other languages: English (en), French (fr)
Inventors: 舒荣涛, 刘春秋, 谢洪彪, 焦建成
Applicant: 上海商汤智能科技有限公司


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Description

  • the present disclosure relates to, but is not limited to, the field of security technologies, and in particular relates to a living body detection method and device, electronic equipment, a computer-readable storage medium and a computer program product.
  • face recognition technology has been widely used in different application scenarios.
  • liveness detection is used to improve security.
  • the accuracy of living body detection is not high.
  • Embodiments of the present disclosure provide a living body detection method and device, electronic equipment, a computer-readable storage medium, and a computer program product.
  • An embodiment of the present disclosure provides a living body detection method, the method is applied to a living body detection device, and the method includes:
  • acquiring the living body detection state of the environment where the living body detection device is located, where the living body detection state includes a normal state or an abnormal state, the normal state indicates that the environment where the living body detection device is located is not in a state of being attacked by a non-living body, and the abnormal state indicates that the environment where the living body detection device is located is in a state of being attacked by a non-living body;
  • in the case that the living body detection state of the environment where the living body detection device is located includes the abnormal state, acquiring at least two first images to be processed, where the at least two first images to be processed each contain a target person;
  • determining a first living body detection result of the target person based on the at least two first images to be processed.
  • An embodiment of the present disclosure provides a living body detection device, and the living body detection device includes:
  • the acquisition part is configured to acquire the living body detection state of the environment where the living body detection device is located, the living body detection state includes a normal state or an abnormal state, the normal state indicates that the environment where the living body detection device is located is not in a state of being attacked by a non-living body, and the abnormal state indicates that the environment where the living body detection device is located is in a state of being attacked by a non-living body;
  • the acquisition part is further configured to acquire at least two first images to be processed, where the at least two first images to be processed each contain the target person;
  • the first processing part is configured to determine a first living body detection result of the target person based on the at least two first images to be processed.
  • An embodiment of the present disclosure provides an electronic device, including: a processor and a memory, the memory is used to store computer program codes, the computer program codes include computer instructions, and when the processor executes the computer instructions, the electronic device executes the above-mentioned living body detection method.
  • An embodiment of the present disclosure provides another electronic device, including: a processor, a sending device, an input device, an output device, and a memory, the memory is used to store computer program codes, and the computer program codes include computer instructions; when the processor executes the computer instructions, the electronic device executes the above-mentioned living body detection method.
  • An embodiment of the present disclosure provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, the computer program includes program instructions, and when the program instructions are executed by a processor, the processor executes the above-mentioned living body detection method.
  • An embodiment of the present disclosure provides a computer program product, the computer program product includes a computer program or an instruction, and when the computer program or instruction runs on an electronic device, the electronic device is caused to execute the above-mentioned living body detection method.
  • FIG. 1 is a schematic diagram of a pixel coordinate system provided by an embodiment of the present disclosure.
  • FIG. 2 is a schematic flowchart of a living body detection method provided by an embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram of a binocular image provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of a binocular image provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of the composition and structure of a living body detection device provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a hardware structure of a living body detection device provided by an embodiment of the present disclosure.
  • "at least one (item)" means one or more
  • "multiple" means two or more
  • "at least two (items)" means two or more
  • "and/or" is used to describe the association relationship of associated objects, indicating that three kinds of relationships can exist; for example, "A and/or B" can mean: only A exists, only B exists, or both A and B exist, where A and B can be singular or plural.
  • the character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of the following items (pieces)" or similar expressions refer to any combination of these items, including any combination of single items (pieces) or plural items (pieces).
  • at least one item (piece) of a, b or c can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c can be single or multiple.
  • reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present disclosure.
  • the occurrences of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is understood explicitly and implicitly by those skilled in the art that the embodiments described herein can be combined with other embodiments.
  • face recognition technology has been widely used in different application scenarios, among which confirming the identity of a person through face recognition is an important application scenario, for example, real-name authentication and identity authentication through face recognition technology.
  • Face recognition technology obtains face feature data by performing feature extraction processing on the face image obtained by collecting the face area of the person. The identity of the person in the face image is determined by comparing the extracted face feature data with the face feature data in the database.
  • non-living face data includes: paper face photos, electronic face images, and the like.
  • Using non-living face data to attack face recognition technology means replacing the face area of a person with non-living face data to achieve the effect of deceiving face recognition technology.
  • Zhang San puts Li Si's photo in front of Li Si's mobile phone for facial recognition unlocking.
  • the mobile phone shoots Li Si's photo through the camera to obtain a face image including Li Si's face area, and then determines Zhang San's identity as Li Si, and unlocks the mobile phone.
  • Zhang San successfully deceived the face recognition technology of the mobile phone by using Li Si's photo to unlock Li Si's mobile phone.
  • therefore, how to prevent non-living body attacks is of great significance.
  • liveness detection of face data can effectively prevent non-living face data from being used to attack face recognition technology.
  • Traditional liveness detection methods mainly include silent liveness detection methods and video liveness detection methods.
  • the silent living detection method refers to the method of completing living detection based on a single image.
  • the video liveness detection method refers to a method for completing liveness detection based on video. For example, the person to be detected completes corresponding actions (such as shaking the head, blinking, and opening the mouth) according to corresponding instructions during the recording of the liveness detection video.
  • By processing the living body detection video it is determined whether the person to be detected has completed the corresponding action, and then it is determined whether the person to be detected is a living body.
  • because the video liveness detection method takes a long time, it is not suitable for scenarios in which liveness detection needs to be completed in a short time (hereinafter, such a scenario is called a fast detection scenario).
  • for example, before an access control device determines whether the person to be detected may pass, it needs to perform liveness detection on the person to be detected. If the liveness detection takes a long time, it will affect the user experience, while the silent liveness detection method has a lower success rate of liveness detection than the video liveness detection method. Therefore, how to improve user experience while improving the success rate of liveness detection is of great significance.
  • an embodiment of the present disclosure provides a living body detection method, so as to improve user experience and increase the success rate of living body detection.
  • the positions in an image that appear below all refer to positions in the pixel coordinate system of the image.
  • the abscissa of the pixel coordinate system is used to indicate the number of columns where the pixel points are located, and the ordinate in the pixel coordinate system is used to indicate the number of rows where the pixel points are located.
  • FIG. 1 is a schematic diagram of a pixel coordinate system provided by an embodiment of the present disclosure.
  • the direction parallel to the rows of the image is the direction of the X axis, the direction parallel to the columns of the image is the direction of the Y axis, and the pixel coordinate system is constructed as XOY.
  • the units of the abscissa and ordinate are pixels.
  • the coordinates of pixel A11 in Fig. 1 are (1, 1)
  • the coordinates of pixel A23 are (3, 2)
  • the coordinates of pixel A42 are (2, 4)
  • the coordinates of pixel A34 are (4, 3).
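  • As a minimal illustration of this convention (the helper function below is illustrative only and not part of the disclosure), a pixel stored at 1-based row i and column j of an image has pixel coordinates (j, i):

```python
# Illustration of the pixel coordinate convention described above:
# the abscissa is the column number and the ordinate is the row number.
def pixel_coordinates(row: int, col: int) -> tuple:
    """Return the (x, y) pixel coordinates of the pixel at 1-based (row, col)."""
    return (col, row)

# Matches the examples from FIG. 1:
print(pixel_coordinates(1, 1))  # A11 -> (1, 1)
print(pixel_coordinates(2, 3))  # A23 -> (3, 2)
print(pixel_coordinates(4, 2))  # A42 -> (2, 4)
print(pixel_coordinates(3, 4))  # A34 -> (4, 3)
```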
  • the executor of the embodiments of the present disclosure is a living body detection device, wherein the living body detection device may be any electronic device capable of implementing the technical solutions disclosed in the method embodiments of the present disclosure.
  • the living body detection device may be at least one of the following: a mobile phone, a computer, a tablet computer, a wearable smart device, and the like.
  • Fig. 2 is a schematic flowchart of a living body detection method provided by an embodiment of the present disclosure. As shown in Fig. 2, the method includes steps S201 to S203, wherein:
  • Step S201 Obtain the living body detection status of the environment where the living body detection device is located, where the living body detection status includes a normal state or an abnormal state.
  • when the living body detection status of the living body detection environment includes the normal state, it indicates that the living body detection environment is not in a state of being attacked by non-living bodies; when the living body detection status of the living body detection environment includes the abnormal state, it indicates that the living body detection environment is in a state of being attacked by non-living bodies.
  • for example, if the living body detection device determines that the person to be detected is a non-living body in n consecutive living body detections, it indicates that the living body detection device may be under attack by a non-living body.
  • in this case, the living body detection state of the environment in which the device is located includes the abnormal state. Conversely, if the living body detection status of the environment where the living body detection device is located is not the abnormal state, then the living body detection status of the environment where the living body detection device is located includes the normal state.
  • the environment where the living body detection device is located is the above-mentioned living body detection environment, and the living body detection status is used to indicate whether the environment where the living body detection device is located is in a state of being attacked by a non-living body.
  • the living body detection device receives the living body detection status input by the user through the input component.
  • the above-mentioned input components include at least one of the following: keyboard, mouse, touch screen, touch pad, audio input device and the like.
  • the living body detection device receives the living body detection status sent by the terminal.
  • the above-mentioned terminal may be any of the following: mobile phone, computer, tablet computer, server and so on.
  • Step S202 when the living body detection state of the environment where the living body detection device is located includes the abnormal state, acquire at least two first images to be processed, each of which contains the target person.
  • the first images to be processed all include the target person, that is, the living body detection device needs to determine whether the target person in each first image to be processed is a living body.
  • in one embodiment, the living body detection device includes a camera, and the living body detection device uses the camera to collect n images as the at least two first images to be processed, where n is an integer greater than 1. For example, when the living body detection device detects a living body detection instruction, it uses the camera to collect n images as the at least two first images to be processed.
  • in one embodiment, the living body detection device includes a camera, and the living body detection device uses the camera to capture a video of a preset duration, and at least two images in the video are used as the at least two first images to be processed.
  • for example, when the living body detection device detects a living body detection instruction, it uses the camera to collect a 1-minute video. If the video includes 30 frames of images, the living body detection device randomly selects at least two frames of images from the 30 frames of the video as the at least two first images to be processed.
  • in one embodiment, the living body detection device receives at least two images input by the user through the input component as the at least two first images to be processed. In one embodiment, the living body detection device receives at least two images sent by the terminal and uses them as the at least two first images to be processed.
  • Step S203 based on at least two first images to be processed, determine a first living body detection result of the target person.
  • in the embodiments of the present disclosure, the living body detection result includes passing the living body detection or failing the living body detection, where passing the living body detection indicates that the living body detection object is a living body, and failing the living body detection indicates that the living body detection object is a non-living body.
  • if the first living body detection result includes passing the living body detection, the target person is alive; if the first living body detection result includes failing the living body detection, the target person is not alive.
  • the living body detection device can obtain the living body detection result of the living body detection object in the first image to be processed by performing the living body detection process on the first image to be processed.
  • because the probability that the living body detection device suffers a non-living body attack when the living body detection state includes the abnormal state is greater than the probability when the living body detection state includes the normal state, the living body detection standard should be raised when the probability of suffering a non-living body attack is large, so as to reduce the success rate of non-living body attacks.
  • that is, when the living body detection state includes the abnormal state, the living body detection standard for judging whether the person to be detected is alive should be stricter than the living body detection standard used when the living body detection state includes the normal state. In this way, the false detection rate of the living body detection can be reduced, thereby improving the accuracy of the living body detection result.
  • the silent liveness detection method has a lower success rate of liveness detection, that is, the liveness detection standard of the silent liveness detection method is lower than that of the video liveness detection method.
  • the silent liveness detection method determines the liveness detection result based on a single image, while the video liveness detection method determines the liveness detection result based on at least two images, so the liveness detection standard of the video liveness detection method is higher than that of the silent liveness detection method.
  • in other words, if the liveness detection result is determined based on at least two images, the liveness detection standard is high.
  • therefore, in the embodiments of the present disclosure, the living body detection device determines the first living body detection result of the target person based on the at least two first images to be processed, thereby improving the accuracy of the first living body detection result.
  • in one embodiment, the living body detection device performs living body detection processing on a first image to be processed to obtain a first intermediate living body detection result, wherein the first intermediate living body detection result includes the living body probability that the target person is alive.
  • the living body detection device performs living body detection processing on at least two first images to be processed, and can obtain at least two first intermediate living body detection results.
  • the living body detection device calculates an average value of at least two first intermediate living body detection results to obtain a second intermediate living body detection result.
  • in the case that the living body probability of the second intermediate living body detection result is greater than the living body detection threshold, it is determined that the first living body detection result of the target person is that the target person is alive; in the case that the living body probability of the second intermediate living body detection result is less than or equal to the living body detection threshold, it is determined that the first living body detection result of the target person is that the target person is not alive.
  • For example, the at least two first intermediate living body detection results include a first intermediate living body detection result a and a first intermediate living body detection result b, wherein the first intermediate living body detection result a includes a living body probability of 0.8 that the target person is alive, and the first intermediate living body detection result b includes a living body probability of 0.76 that the target person is alive.
  • the second intermediate living body detection result is obtained by calculating the mean value of the first intermediate living body detection result a and the first intermediate living body detection result b, wherein the second intermediate living body detection result includes a living body probability of 0.78 for the target person.
  • if the living body detection threshold is 0.79, since the living body probability of the second intermediate living body detection result is less than the living body detection threshold (0.78 is less than 0.79), the first living body detection result indicates that the target person is not alive. If the liveness detection threshold is 0.7, since the liveness probability of the second intermediate liveness detection result is greater than the liveness detection threshold (0.78 is greater than 0.7), the first liveness detection result indicates that the target person is alive.
  • in this way, the living body detection device obtains the first living body detection result by calculating the mean value of the at least two first intermediate living body detection results, which can reduce the false detection rate caused by performing living body detection processing on a single image, thereby improving the accuracy of the first living body detection result.
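  • The averaging scheme described above can be summarized with the following sketch (illustrative only: the function and argument names are assumptions, and the per-image living body probabilities would in practice come from a liveness detection model):

```python
def first_liveness_result(liveness_probs, liveness_threshold=0.79):
    """Average the per-image living body probabilities (first intermediate results)
    and compare the mean (second intermediate result) with the threshold.

    liveness_probs: one living body probability of the target person per first
    image to be processed (at least two values).
    Returns True if the target person is judged to be a living body.
    """
    mean_prob = sum(liveness_probs) / len(liveness_probs)
    return mean_prob > liveness_threshold

# Example from the text: probabilities 0.8 and 0.76 give a mean of 0.78.
print(first_liveness_result([0.8, 0.76], liveness_threshold=0.79))  # False: 0.78 <= 0.79
print(first_liveness_result([0.8, 0.76], liveness_threshold=0.7))   # True: 0.78 > 0.7
```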
  • the embodiments of the present disclosure differ from traditional methods, in which the living body detection result is obtained by performing living body detection processing on a single image without judging the living body detection state, or the living body detection is carried out using the video living body detection method without judging the living body detection state.
  • in the embodiments of the present disclosure, when the living body detection state includes the abnormal state, the living body detection device determines the first living body detection result of the target person based on the at least two first images to be processed, which can raise the living body detection standard and thereby improve the accuracy of the liveness detection result.
  • in one embodiment, the living body detection device uses a silent living body detection method to determine the first living body detection result of the target person. For example, the living body detection device determines the first living body detection result based on any one of the at least two first images to be processed.
  • the living body detection device performs the following steps S1 to S5 during the execution of step S201, wherein:
  • Step S1 Obtain the first threshold and at least one second image to be processed.
  • the maximum time stamp in the at least one second image to be processed is smaller than the minimum time stamp in the at least two first images to be processed.
  • the first threshold may be a positive number less than or equal to 1.
  • the living body detection device acquires the first threshold by receiving the first threshold input by the user through the input component.
  • the living body detection device acquires the first threshold by receiving the first threshold sent by the terminal.
  • the maximum time stamp of the at least one second image to be processed is smaller than the minimum time stamp of the at least two first images to be processed, that is, the acquisition time of any second image to be processed is earlier than the acquisition time of any first image to be processed.
  • the living body detection device uses the at least one second image to be processed input by the user through the input component as the at least one second image to be processed. In an implementation manner, the living body detection device receives at least one second image to be processed sent by the terminal as the at least one second image to be processed. In one embodiment, the living body detection device includes a camera. The living body detection device acquires at least one second image to be processed by using the camera.
  • in one embodiment, when the living body detection device detects a living body detection instruction, it uses the camera to acquire at least one second image to be processed. For example, when the living body detection device detects that there is a person a in the living body detection area, it uses the camera to shoot the person a to obtain at least one second image to be processed.
  • for example, when the living body detection device detects that there is a person a in the living body detection area, it uses the camera to shoot the person a to obtain a second image A to be processed.
  • when the living body detection device detects that there is a person b in the living body detection area, it uses the camera to shoot the person b to obtain a second image B to be processed.
  • at least one second image to be processed includes the second image A to be processed and the second image B to be processed.
  • the step of acquiring the first threshold value and the step of acquiring at least one second image to be processed may be performed separately, or may be performed simultaneously.
  • the living body detection device may acquire the first threshold first, and then acquire at least one second image to be processed.
  • the living body detection device may first acquire at least one second image to be processed, and then acquire the first threshold.
  • the living body detection device acquires at least one second image to be processed during the process of acquiring the first threshold, or acquires the first threshold during the process of acquiring at least one second image to be processed.
  • Step S2 performing a living body detection process on at least one second image to be processed, and obtaining at least one second living body detection result.
  • each second image to be processed includes a person to be detected.
  • the persons to be detected contained in different second images to be processed may be the same or different.
  • at least one second image to be processed includes image a and image b. It may be that image a contains Zhang San, and image b contains Li Si. It is also possible that both image a and image b contain Zhang San. It should be understood that the person to be detected may be the same as or different from the target person.
  • the living body detection device performs living body detection processing on a second image to be processed, and can obtain a second living body detection result.
  • the live body detection device can obtain at least one second live body detection result by performing live body detection processing on at least one second image to be processed.
  • at least one second image to be processed includes image a and image b. If the living body detection device obtains the living body detection result A by performing the living body detection process on the image a during the execution of step S2, at this time at least one second living body detection result includes the living body detection result A.
  • the living body detection device obtains the living body detection result A by performing the living body detection processing on the image a, and obtains the living body detection result B by performing the living body detection processing on the image b during the execution of step S2.
  • the at least one second living body detection result includes the living body detection result A and the living body detection result B.
  • Step S3. Determine a first number of first positive results in the at least one second living body detection result, where the first positive result represents the second living body detection result that passed the living body detection.
  • Step S4 determining a first ratio between the first quantity and the second quantity, where the second quantity is the quantity of the second living body detection result.
  • Step S5 based on the above-mentioned first ratio and the above-mentioned first threshold, determine the living body detection state.
  • the first ratio represents the proportion of the first positive result in the second living body detection result
  • the first threshold is the basis for judging whether the first ratio is large or small, that is, whether the first ratio is large or small can be judged by the first threshold. If the first ratio is large, it means that the proportion of first positive results in the at least one second living body detection result is large, and the living body detection state includes the normal state; if the first ratio is small, it means that the proportion of first positive results in the at least one second living body detection result is small, and the living body detection state includes the abnormal state.
  • in one embodiment, the living body detection device calculates the square of the first ratio to obtain a first intermediate value.
  • when the first intermediate value is less than the first threshold, the living body detection device determines that the first ratio is small, and then determines that the liveness detection status is the abnormal status.
  • the living body detection device can calculate the proportion of the first positive result in the at least one second living body detection result based on the first number and the second number after determining the first number and the second number . Furthermore, the living body detection status may be determined according to the ratio of the first positive result in the at least one second living body detection result and the first threshold.
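  • Steps S1 to S5 amount to the following sketch (illustrative only: the earlier detection results would come from previous detections performed by the device, and the comparison at equality follows one possible embodiment):

```python
def liveness_detection_state(second_results, first_threshold=0.5):
    """Determine the living body detection state from earlier (second) results.

    second_results: booleans, True for a first positive result (living body
    detection passed). Returns "normal" if the proportion of positive results
    (the first ratio) is large enough, otherwise "abnormal".
    """
    first_number = sum(second_results)    # number of first positive results
    second_number = len(second_results)   # number of second detection results
    first_ratio = first_number / second_number
    return "normal" if first_ratio >= first_threshold else "abnormal"

# Example: 2 passes out of 5 earlier detections -> first ratio 0.4 -> abnormal.
print(liveness_detection_state([True, False, True, False, False]))  # abnormal
```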
  • in one embodiment, when the living body detection state includes the abnormal state, the living body detection state further includes an abnormal level, wherein the abnormal level is used to characterize the degree of abnormality of the environment where the living body detection device is located.
  • the level of abnormality may include level one, level two, and level three, wherein the level of abnormality represented by level three is higher than that of level two, and the level of abnormality represented by level two is higher than that of level one .
  • the abnormal level may include general, higher and special, wherein the abnormal degree represented by special is higher than the abnormal degree represented by higher, and the abnormal degree represented by higher is higher than the abnormal degree represented by general.
  • the living body detection device determines the abnormal level of the living body detection state by performing the following steps S6 to S7:
  • Step S6 acquiring a first mapping relationship between the abnormal level and the first ratio.
  • the first mapping relationship represents the mapping between the proportion of the first positive result in the at least one second living body detection result and the abnormal level.
  • Step S7 based on the first mapping relationship and the first ratio, determine the abnormal level of the living body detection state.
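  • One possible way to encode such a first mapping relationship is a table of ratio ranges and abnormal levels, as in the sketch below (the concrete ranges and level names are purely illustrative assumptions):

```python
# Hypothetical first mapping relationship between the first ratio and the
# abnormal level: the smaller the proportion of first positive results, the
# higher the degree of abnormality.
FIRST_MAPPING = [
    (0.4, "level one"),    # 0.4 <= first ratio (but still below the first threshold)
    (0.2, "level two"),    # 0.2 <= first ratio < 0.4
    (0.0, "level three"),  # first ratio < 0.2
]

def abnormal_level(first_ratio):
    """Map the first ratio to an abnormal level using the first mapping relationship."""
    for lower_bound, level in FIRST_MAPPING:
        if first_ratio >= lower_bound:
            return level
    return "level three"

print(abnormal_level(0.35))  # level two
print(abnormal_level(0.10))  # level three
```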
  • the living body detection device executes the following step S8 during the execution of step S201, wherein:
  • Step S8 when a first living body detection instruction carrying living body detection state information is detected, use the living body detection state indicated by the first living body detection instruction as the living body detection state of the environment where the living body detection device is located.
  • the first living body detection instruction is used to instruct the living body detection device to perform the living body detection, and the instruction carries the living body detection state information.
  • for example, the living body detection state information carried by the first living body detection instruction is: the living body detection state includes the abnormal state; for another example, the living body detection state information carried by the first living body detection instruction is: the living body detection state includes the normal state.
  • the first living body detection instruction is input by the user to the living body detection device through the input component.
  • the first living body detection instruction is obtained by fusing the first intermediate living body detection instruction with the living body detection state information.
  • the first intermediate living body detection instruction is generated by the living body detection device when it determines that there is a person to be detected in the living body detection area.
  • the living body detection status information is randomly generated by the living body detection device.
  • for example, when the living body detection device determines that there is a person to be detected in the living body detection area, it generates a first intermediate living body detection instruction and randomly generates living body detection status information including the abnormal state.
  • the first living body detection instruction is obtained by fusing the living body detection state information with the first intermediate living body detection instruction.
  • in this case, the living body detection state information carried by the first living body detection instruction is that the living body detection state includes the abnormal state.
  • the living body detection standard adopted by the living body detection device when the living body detection state is the abnormal state is stricter than the living body detection standard adopted when the living body detection state is the normal state, which can increase the difficulty for attackers (i.e., persons who conduct non-living body attacks on the living body detection device) to carry out non-living body attacks on the living body detection device, thereby reducing the success rate of non-living body attacks.
  • however, an attacker may adopt a specific non-living body attack to prevent the living body detection device from determining that the living body detection status is the abnormal state, thereby preventing the living body detection device from adopting the stricter living body detection standard.
  • for example, the living body detection device includes an access control device, and if three consecutive living body detection results include failing the living body detection, the access control device determines that the living body detection status includes the abnormal state.
  • since the liveness detection state information is randomly generated and the liveness detection state is therefore randomly determined, specific non-living body attacks can be effectively avoided, thereby improving the accuracy of the liveness detection.
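  • A minimal sketch of this randomized behaviour is given below (the structure of the instruction and the probability value are assumptions, not part of the disclosure):

```python
import random

def build_first_detection_instruction(abnormal_probability=0.5):
    """Fuse a first intermediate living body detection instruction with randomly
    generated living body detection state information."""
    intermediate_instruction = {"action": "perform_liveness_detection"}
    state_info = "abnormal" if random.random() < abnormal_probability else "normal"
    return {**intermediate_instruction, "detection_state": state_info}

# Because the carried state is chosen at random, an attacker cannot reliably
# steer the device into the branch with the looser detection standard.
print(build_first_detection_instruction())
```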
  • the living body detection device performs the following steps S9 to S12 during the execution of step S201, wherein:
  • Step S9 acquiring a binocular image and a second threshold, the binocular image includes a first image and a second image, and both the first image and the second image include a human face to be detected.
  • the binocular image refers to two images obtained by shooting the same scene from different positions at the same time by two different imaging devices (hereinafter referred to as binocular imaging devices).
  • the binocular imaging device captures the same scene from different positions at the same time to obtain the first image and the second image.
  • in an implementation manner of acquiring a binocular image, the living body detection device receives binocular images input by a user through an input component. In another implementation manner, the living body detection device receives the binocular image sent by the terminal. In another implementation manner, the living body detection device includes a binocular camera and acquires the binocular image through the binocular camera.
  • the role of the second threshold is different from that of the first threshold, and the value of the second threshold may be the same as that of the first threshold, or may be different.
  • the second threshold is a positive number.
  • the life detection device acquires the second threshold by receiving the second threshold input by the user through the input component. In an implementation manner of acquiring the second threshold, the living body detection device acquires the second threshold by receiving the second threshold sent by the terminal.
  • the step of acquiring the second threshold and the step of acquiring the binocular image may be performed separately or simultaneously.
  • the living body detection device may acquire the second threshold first, and then acquire the binocular image.
  • the living body detection device may acquire binocular images first, and then acquire the second threshold.
  • the living body detection device acquires a binocular image during the process of acquiring the second threshold, or acquires the second threshold during the process of acquiring the binocular image.
  • Step S10 determining a first position of the face to be detected in the first image, and determining a second position of the face to be detected in the second image.
  • the face to be detected is the face of a person to be detected.
  • the position of the face to be detected in the image refers to the position of the face to be detected in the pixel coordinate system of the image.
  • the position of the face to be detected in the pixel coordinate system of the first image is the first position
  • the position of the face to be detected in the pixel coordinate system of the second image is the second position.
  • the living body detection device determines the position of the face frame containing the face to be detected in the first image as the first position by performing face detection processing on the first image.
  • the living body detection device determines the position of the face frame containing the face to be detected in the second image as the second position by performing face detection processing on the second image.
  • Step S11 based on the first position and the second position, determine the parallax displacement of the face to be detected in the binocular image.
  • both Figure 3 and Figure 4 are schematic diagrams of a binocular image provided by an embodiment of the present disclosure
  • the image shown in Figure 3 and the image shown in Figure 4 are binocular images
  • the image shown in Figure 3 and the image shown in Figure 4 both contain the face to be detected.
  • the position of the face to be detected in the image shown in FIG. 3 is the position of the first face frame to be detected including the face to be detected in the image shown in FIG. 3 , that is, the position of point A.
  • the position of the face to be detected in the image shown in FIG. 4 is the position of the second face frame to be detected including the face to be detected in the image shown in FIG. 4 , that is, the position of point B.
  • the parallax displacement of the face to be detected between the image shown in Figure 3 and the image shown in Figure 4 is the distance between point A (coordinates (2, 1)) and point B (coordinates (3, 2)).
  • the living body detection device may obtain the parallax displacement of the face to be detected in the binocular image based on the first position and the second position.
  • Step S12 based on the parallax displacement and the second threshold, determine the living body detection status.
  • the binocular image obtained by the binocular imaging device shooting a three-dimensional object is called the first type of binocular image
  • the binocular image obtained by the binocular imaging device shooting a two-dimensional object is called the second type of binocular image. The parallax displacement of the three-dimensional object in the first type of binocular image is smaller than the parallax displacement of the two-dimensional object in the second type of binocular image.
  • on this basis, the liveness detection status can be determined according to the parallax displacement of the face to be detected in the binocular image.
  • the second threshold is the basis for judging whether the parallax displacement of the face to be detected in the binocular image is large or small, that is, whether the parallax displacement is large or small can be judged by the second threshold.
  • if the parallax displacement is large, it can be determined that the binocular image obtained by the living body detection device is the second type of binocular image, that is, the face to be detected is a two-dimensional face, and then it can be determined that the person to be detected is a non-living body; if the parallax displacement is small, it can be determined that the binocular image acquired by the living body detection device is the first type of binocular image, that is, the face to be detected is a three-dimensional face, and then it can be determined that the person to be detected is a living body. If the person to be detected is a living body, it can be determined that the living body detection state includes the normal state; if the person to be detected is not a living body, it can be determined that the living body detection state includes the abnormal state.
  • in one embodiment, when the parallax displacement is less than the second threshold, the living body detection device determines that the parallax displacement is small, and then determines that the living body detection state is the normal state; when the parallax displacement is greater than or equal to the second threshold, the living body detection device determines that the parallax displacement is large, and then determines that the living body detection state is the abnormal state. In another embodiment, when the parallax displacement is less than or equal to the second threshold, the living body detection device determines that the parallax displacement is small, and then determines that the living body detection state is the normal state; when the parallax displacement is greater than the second threshold, the living body detection device determines that the parallax displacement is large, and then determines that the living body detection state is the abnormal state.
  • the living body detection device calculates the square of the parallax displacement to obtain the second intermediate value.
  • when the second intermediate value is less than the second threshold, the living body detection device determines that the parallax displacement is small, and then determines that the living body detection state is the normal state; when the second intermediate value is greater than or equal to the second threshold, the living body detection device determines that the parallax displacement is large, and then determines that the living body detection state is the abnormal state.
  • that is, by performing steps S9 to S12, the living body detection device can determine the parallax displacement of the face to be detected in the binocular image, and then, based on the parallax displacement and the second threshold, determine whether the person to be detected is a living body or a non-living body, so that the living body detection state can be determined.
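  • Steps S10 to S12 can be illustrated with the sketch below (an assumption-laden illustration: face positions are taken to be the centres of the detected face boxes, the face detector itself is not shown, and the comparison at equality follows one of the embodiments above):

```python
import math

def state_from_parallax(first_position, second_position, second_threshold):
    """Determine the living body detection state from the parallax displacement
    of the face to be detected in a binocular image.

    first_position / second_position: (x, y) pixel coordinates of the face in
    the first and second images, e.g. centres of the detected face boxes.
    """
    dx = second_position[0] - first_position[0]
    dy = second_position[1] - first_position[1]
    parallax_displacement = math.hypot(dx, dy)
    # A large parallax indicates a two-dimensional face (non-living body).
    return "abnormal" if parallax_displacement >= second_threshold else "normal"

# Example from FIG. 3 and FIG. 4: A at (2, 1) and B at (3, 2) -> displacement of about 1.41.
print(state_from_parallax((2, 1), (3, 2), second_threshold=2.0))  # normal
```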
  • in one embodiment, when the living body detection state includes the abnormal state, the living body detection device also obtains a second mapping relationship between the parallax displacement and the abnormal level, and determines the abnormal level of the living body detection state based on the parallax displacement and the second mapping relationship.
  • the living body detection device performs the following steps S13 to S15 during the execution of step S201, wherein:
  • Step S13 acquiring a fourth threshold and a third image to be processed.
  • the role of the fourth threshold is different from that of the first threshold and the role of the second threshold.
  • the value of the fourth threshold may be the same as or different from that of the first threshold.
  • the value of the fourth threshold may be the same as or different from the value of the second threshold.
  • the fourth threshold value is a positive number.
  • the living body detection device receives the fourth threshold input by the user through the input component. In an implementation manner of acquiring the fourth threshold, the living body detection device receives the fourth threshold sent by the terminal.
  • the living body detection device receives the third image to be processed input by the user through the input component. In an implementation manner of acquiring a third image to be processed, the living body detection device receives the third image to be processed sent by the terminal. In an implementation manner of acquiring a third image to be processed, the living body detection device includes a camera. The living body detection device acquires the third image to be processed by using the camera.
  • in one embodiment, when the living body detection device detects a living body detection instruction, it uses the camera to acquire the third image to be processed. For example, when the living body detection device detects that there is a person a in the living body detection area, it uses the camera to shoot the person a to obtain the third image to be processed.
  • the step of acquiring the fourth threshold and the step of acquiring the third image to be processed may be performed separately, or may be performed simultaneously.
  • the living body detection device may acquire the fourth threshold first, and then acquire the third image to be processed.
  • the living body detection device may acquire the third image to be processed first, and then acquire the fourth threshold.
  • the living body detection device acquires the third image to be processed in the process of acquiring the fourth threshold, or acquires the fourth threshold in the process of acquiring the third image to be processed.
  • Step S14 determine the distance between the face to be detected in the third image to be processed and the target hand in the third image to be processed.
  • the living body detection device obtains the third position of the face to be detected in the third image to be processed by performing face detection processing on the third image to be processed.
  • the living body detection device obtains the fourth position of the target hand in the third image to be processed by performing object detection processing on the third image to be processed.
  • the living body detection device determines the distance between the face to be detected and the target hand based on the third position and the fourth position.
  • the living body detection device determines that the third image to be processed contains at least two human face regions by performing face detection processing on the third image to be processed.
  • the living body detection device uses the face in the largest face area as the face to be detected.
  • the living body detection device determines that the third image to be processed contains at least two hands by performing object detection processing on the third image to be processed.
  • the living body detection device separately calculates the distance between each hand and the face to be detected to obtain at least two distances to be confirmed. The minimum value of the at least two distances to be confirmed is used as the distance between the face to be detected and the target hand.
  • Step S15 based on the distance and the fourth threshold, determine the living body detection status.
  • based on the size of the distance between the face to be detected and the target hand, it can be determined whether the person to be detected is a living body or a non-living body, and then the living body detection status can be determined.
  • the fourth threshold is the basis for judging whether the distance between the face to be detected and the target hand is large or small, that is, whether the distance is large or small can be judged by the fourth threshold. If the distance is large, it can be determined that the person to be detected is a living body; if the distance is small, it can be determined that the person to be detected is not a living body. If the person to be detected is a living body, it can be determined that the living body detection state includes the normal state; if the person to be detected is not a living body, it can be determined that the living body detection state includes the abnormal state.
  • in one embodiment, when the distance is less than the fourth threshold, the living body detection device determines that the living body detection state is the normal state; when the distance is greater than or equal to the fourth threshold, the living body detection device determines that the living body detection state is the abnormal state. In one embodiment, when the distance is less than or equal to the fourth threshold, the living body detection device determines that the living body detection state is the normal state; when the distance is greater than the fourth threshold, the living body detection device determines that the living body detection state is the abnormal state. In one embodiment, the living body detection device calculates the square of the distance to obtain a third intermediate value.
  • when the third intermediate value is less than the fourth threshold, the living body detection device determines that the living body detection state is the normal state; when the third intermediate value is greater than or equal to the fourth threshold, the living body detection device determines that the living body detection state is the abnormal state.
  • in this way, the living body detection device obtains the distance between the face to be detected and the target hand by performing steps S13 to S15, and can then determine, based on the distance and the fourth threshold, whether the person to be detected is a living body or a non-living body, thereby determining the living body detection state.
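  • Steps S13 to S15 can be sketched as follows (illustrative only: the face and hand boxes are assumed to come from separate detectors and are represented by their centres, and the final comparison follows one of the embodiments described in step S15):

```python
import math

def box_center(box):
    """Centre (x, y) of a box given as (x_min, y_min, x_max, y_max)."""
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def face_hand_distance(face_boxes, hand_boxes):
    """Distance between the face to be detected and the target hand.

    The face in the largest face box is the face to be detected; the target hand
    is the hand with the smallest distance to it (the minimum of the distances
    to be confirmed).
    """
    face = max(face_boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))
    fx, fy = box_center(face)
    return min(math.hypot(fx - hx, fy - hy) for hx, hy in map(box_center, hand_boxes))

def state_from_distance(distance, fourth_threshold):
    # One described embodiment: a distance below the fourth threshold -> normal state.
    return "normal" if distance < fourth_threshold else "abnormal"

faces = [(10, 10, 60, 70), (100, 20, 120, 45)]
hands = [(70, 80, 90, 110), (200, 200, 230, 240)]
d = face_hand_distance(faces, hands)
print(round(d, 1), state_from_distance(d, fourth_threshold=80.0))  # 71.1 normal
```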
  • in one embodiment, when the living body detection state includes the abnormal state, the living body detection device also obtains a third mapping relationship between the distance and the abnormal level, and determines the abnormal level of the living body detection state based on the distance and the third mapping relationship.
  • the living body detection device performs the following steps in the process of executing step S201: acquiring the threshold value of the number of people and the fourth image to be processed.
  • the role of the number of people threshold is different from the role of the first threshold, the role of the second threshold, and the role of the fourth threshold.
  • the value of the number of people threshold may be the same as or different from the value of the first threshold.
  • the value of the number of people threshold and the value of the second threshold may be the same or different.
  • the value of the number threshold and the fourth threshold may be the same or different.
  • the number of people threshold is a positive number.
  • the living body detection device receives the threshold of the number of people input by the user through the input component. In an implementation manner of acquiring the threshold of the number of people, the living body detection device receives the threshold of the number of people sent by the terminal.
  • the living body detection device receives the fourth image to be processed input by the user through the input component. In an implementation manner of acquiring a fourth image to be processed, the living body detection device receives the fourth image to be processed sent by the terminal. In an implementation manner of acquiring a fourth image to be processed, the living body detection device includes a camera. The living body detection device acquires the fourth image to be processed by using the camera.
  • in one embodiment, when the living body detection device detects a living body detection instruction, it uses the camera to acquire the fourth image to be processed. For example, when the living body detection device detects that there is a person a in the living body detection area, it uses the camera to shoot the person a to obtain the fourth image to be processed.
  • the step of obtaining the threshold value of the number of people and the step of obtaining the fourth image to be processed may be performed separately, or may be performed simultaneously.
  • the living body detection device may acquire the threshold value of the number of people first, and then acquire the fourth image to be processed.
  • the living body detection device may first acquire the fourth image to be processed, and then acquire the number threshold.
  • the living body detection device acquires the fourth image to be processed during the process of acquiring the threshold value of the number of people, or acquires the threshold value of the number of people during the process of acquiring the fourth image to be processed.
  • the living body detection device determines the number of people in the fourth image to be processed. When the number of people is greater than the number threshold, it is determined that the living body detection state includes an abnormal state; when the number of people is less than or equal to the number of people threshold, it is determined that the living body detection state includes a normal state.
  • the living body detection device may determine the living body detection status based on the number of people in the fourth image to be processed and the number of people threshold.
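  • The people-count check reduces to a single comparison, as in the sketch below (the person counter itself, e.g. a face or pedestrian detector, is assumed and not shown):

```python
def state_from_people_count(people_count, people_threshold=1):
    """More people in the fourth image to be processed than the threshold allows
    indicates a possible non-living body attack."""
    return "abnormal" if people_count > people_threshold else "normal"

print(state_from_people_count(3, people_threshold=1))  # abnormal
print(state_from_people_count(1, people_threshold=1))  # normal
```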
  • the living body detection device executes the following steps S16 to S17 during the execution of step S203, wherein:
  • Step S16 perform living body detection processing on the at least two first images to be processed to obtain at least two third living body detection results of the target person.
  • the living body detection device performs the living body detection processing on a first image to be processed, and can obtain a third living body detection result of the target person.
  • the living body detection device performs the living body detection processing on the at least two first images to be processed, and can obtain at least two third living body detection results of the target person.
  • the at least two first images to be processed include a first image to be processed a and a first image to be processed b.
  • the living body detection device performs living body detection processing on the first image a to be processed to obtain a third living body detection result A
  • the living body detection device performs living body detection processing on the first image b to be processed to obtain a third living body detection result B.
  • the at least two third living body detection results include the third living body detection result A and the third living body detection result B.
  • Step S17 Based on at least two third living body detection results, determine the first living body detection result of the target person.
  • each third living body detection result includes the living body probability that the target person is alive.
  • in one embodiment, the living body detection device determines the third living body detection result with the highest living body probability as a third intermediate living body detection result.
  • when the living body probability of the third intermediate living body detection result is greater than the living body detection threshold, it is determined that the first living body detection result is that the target person is alive; when the living body probability of the third intermediate living body detection result is less than or equal to the living body detection threshold, it is determined that the first living body detection result is that the target person is not alive.
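  • This variant of step S17, which takes the third result with the highest living body probability as the third intermediate result and compares it with the living body detection threshold, can be sketched as follows (function and argument names are assumptions):

```python
def first_result_from_max_probability(third_liveness_probs, liveness_threshold=0.5):
    """Use the highest per-image living body probability as the third intermediate
    living body detection result and compare it with the living body detection threshold."""
    third_intermediate = max(third_liveness_probs)
    return third_intermediate > liveness_threshold  # True -> the target person is alive

print(first_result_from_max_probability([0.42, 0.61, 0.55], liveness_threshold=0.5))  # True
```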
  • the living body detection device performs the following steps S18 to S21 during the execution of step S17, wherein:
  • Step S18 acquiring a third threshold.
  • the role of the third threshold is different from that of the first threshold, the second threshold, the fourth threshold, and the number of people threshold.
  • the value of the third threshold may be the same as or different from that of the first threshold.
  • the value of the third threshold may be the same as or different from the value of the second threshold.
  • the value of the third threshold and the value of the fourth threshold may be the same or different.
  • the third threshold is a positive number.
  • the living body detection device receives the third threshold input by the user through the input component to obtain the third threshold. In some implementations, the living body detection device obtains the third threshold by receiving the third threshold sent by the terminal.
  • Step S19 determining a third number of second positive results among the at least two third liveness detection results, where the second positive result represents the third liveness detection result that passes the liveness detection.
  • Step S20 determining a second ratio between the third quantity and the fourth quantity, where the fourth quantity is the quantity of the third living body detection result.
  • Step S21 based on the second ratio and the third threshold, determine the first living body detection result of the target person.
  • the second ratio represents the proportion of the second positive results among the at least two third living body detection results.
  • the third threshold is the basis for judging whether the second ratio is large or small, that is, whether the second ratio is large or small can be judged by means of the third threshold. If the second ratio is large, the proportion of the second positive results among the at least two third living body detection results is large, and the target person is a living body; if the second ratio is small, the proportion of the second positive results among the at least two third living body detection results is small, and the target person is a non-living body.
  • in one implementation, in the case where the second ratio is greater than the third threshold, the living body detection device determines that the second ratio is large, and further determines that the first living body detection result is that the target person is alive; in the case where the second ratio is less than or equal to the third threshold, the living body detection device determines that the second ratio is small, and further determines that the first living body detection result is that the target person is not alive.
  • in another implementation, in the case where the second ratio is greater than or equal to the third threshold, the living body detection device determines that the second ratio is large, and further determines that the first living body detection result is that the target person is alive; in the case where the second ratio is less than the third threshold, the living body detection device determines that the second ratio is small, and further determines that the first living body detection result is that the target person is not alive. In some implementations, the living body detection device calculates the square of the second ratio to obtain a fourth intermediate value.
  • in the case where the fourth intermediate value is greater than the third threshold, the living body detection device determines that the second ratio is large, and further determines that the first living body detection result indicates that the target person is alive; in the case where the fourth intermediate value is less than or equal to the third threshold, the living body detection device determines that the second ratio is small, and further determines that the first living body detection result indicates that the target person is not alive.
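  • A minimal Python sketch of this ratio test is shown below (the names and the strict greater-than comparison are illustrative assumptions; the description above also allows greater-than-or-equal and squared-ratio variants).

```python
def decide_by_positive_ratio(third_results: list[bool], third_threshold: float) -> bool:
    """Decide the first living body detection result from per-image pass/fail results.

    third_results: one boolean per third living body detection result
                   (True = second positive result, i.e. the liveness detection passed).
    third_threshold: the basis for judging whether the second ratio is large or small.
    """
    third_number = sum(third_results)        # number of second positive results
    fourth_number = len(third_results)       # number of third living body detection results
    second_ratio = third_number / fourth_number
    return second_ratio > third_threshold    # True -> the target person is judged alive

# Example with assumed values: 2 of 3 images pass and the threshold is 0.5 -> alive
print(decide_by_positive_ratio([True, True, False], 0.5))  # True
```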
  • the living body detection device performs one of the following steps S22 to S23 during the execution of step S17:
  • Step S22 in the case where the number of the second positive results is greater than the number of negative results in the at least two third living body detection results, determining that the first living body detection result of the target person is that the target person is alive, where the second positive result represents a third living body detection result that passes the living body detection, and the negative result represents a third living body detection result that fails the living body detection.
  • Step S23 in the case where the number of the second positive results in the at least two third living body detection results is less than or equal to the number of negative results, determining that the first living body detection result of the target person is that the target person is not alive.
  • For example, the at least two third living body detection results include a third living body detection result a, a third living body detection result b, and a third living body detection result c, where the third living body detection result a is that the living body detection failed (that is, the target person is a non-living body), the third living body detection result b is that the living body detection passed (that is, the target person is alive), and the third living body detection result c is that the living body detection failed (that is, the target person is not alive).
  • In this case, both the third living body detection result a and the third living body detection result c are negative results, and the third living body detection result b is the second positive result. Since the number of second positive results is less than the number of negative results, the living body detection device determines, by performing step S23, that the first living body detection result is that the target person is not alive.
  • As another example, the at least two third living body detection results include a third living body detection result a, a third living body detection result b, and a third living body detection result c, where the third living body detection result a is that the living body detection passed (that is, the target person is alive), the third living body detection result b is that the living body detection passed (that is, the target person is alive), and the third living body detection result c is that the living body detection failed (that is, the target person is not alive).
  • In this case, both the third living body detection result a and the third living body detection result b are second positive results, and the third living body detection result c is a negative result. Since the number of second positive results is greater than the number of negative results, the living body detection device determines, by performing step S22, that the first living body detection result indicates that the target person is alive.
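  • The majority-vote rule of steps S22 to S23 can be sketched in Python as follows (a hedged illustration; the function name is assumed, and the tie case is treated as not alive because the text uses less than or equal to).

```python
def decide_by_majority(third_results: list[bool]) -> bool:
    """Majority vote over the third living body detection results.

    True  -> first living body detection result: the target person is alive (step S22).
    False -> first living body detection result: the target person is not alive (step S23),
             including the tie case, since "less than or equal to" fails the check.
    """
    positives = sum(third_results)              # second positive results
    negatives = len(third_results) - positives  # negative results
    return positives > negatives

# Example matching the text: results a (fail), b (pass), c (fail) -> not alive
print(decide_by_majority([False, True, False]))  # False
```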
  • after determining that the living body detection state includes an abnormal state, the living body detection device further performs the following step S24 before performing step S16, wherein:
  • Step S24 increasing the first living body detection threshold in the living body detection process to obtain a second living body detection threshold.
  • the living body detection device can obtain the living body probability that the person to be detected in the image is a living body by performing living body detection processing on the image.
  • when the probability is greater than the first living body detection threshold, it is determined that the person to be detected is a living body; when the probability is less than or equal to the first living body detection threshold, it is determined that the person to be detected is not a living body.
  • the first living body detection threshold is the basis for judging whether the person to be detected is alive in the living body detection process.
  • the first living body detection threshold is the basis for judging whether the person to be detected is alive when the living body detection device has not yet determined the living body detection state; or the first living body detection threshold is the basis for judging whether the person to be detected is a living body when the living body detection device has determined that the living body detection state includes the normal state.
  • the value of the first living body detection threshold is a positive number less than 1.
  • the living body detection device obtains the second living body detection threshold by increasing the first living body detection threshold, which raises the standard for judging whether the person to be detected is alive.
  • after performing step S24, the living body detection device executes the following step S25 in the process of performing step S16, wherein:
  • Step S25 performing living body detection processing on the at least two first images to be processed based on the second living body detection threshold, and determining at least two third living body detection results of the target person.
  • the living body detection device uses the second living body detection threshold as a basis for judging whether the target person is alive.
  • the living body detection device obtains the living body probability that the target person in the image is alive by performing living body detection processing on the image. When the probability is greater than the second living body detection threshold, it is determined that the target person is alive; when the probability is less than or equal to the second living body detection threshold, it is determined that the target person is not alive.
  • the second living body detection threshold is obtained by increasing the first living body detection threshold, and the second living body detection threshold is used as a basis for judging whether the target person is alive. In this way, by using the second living body detection threshold as a basis for judging whether the target person is alive, the living body detection device can improve the living body detection standard, thereby improving the accuracy of the third living body detection result.
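  • A hedged Python sketch of steps S24 to S25 follows (the increment value and the names are assumptions used only for illustration; the patent does not fix how much the threshold is raised).

```python
def raise_liveness_threshold(first_threshold: float, increment: float = 0.1) -> float:
    """Step S24: increase the first living body detection threshold to obtain the second one.

    The increment of 0.1 is an assumed example; the result is capped below 1 because
    the threshold is described as a positive number less than 1.
    """
    return min(first_threshold + increment, 0.99)

def third_results_with_raised_threshold(liveness_probs: list[float],
                                        first_threshold: float) -> list[bool]:
    """Step S25: judge each first image to be processed against the second threshold."""
    second_threshold = raise_liveness_threshold(first_threshold)
    return [p > second_threshold for p in liveness_probs]

# Example with assumed values: under the raised threshold, 0.75 no longer passes
print(third_results_with_raised_threshold([0.75, 0.9], first_threshold=0.7))  # [False, True]
```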
  • the living body detection device includes an access control device, and at least two first images to be processed both include the face of the target person.
  • when the access control device determines that the living body detection state includes an abnormal state, it also executes the following steps S26 to S29, wherein:
  • Step S26 increasing the first human face similarity threshold of the human face comparison to obtain a second human face similarity threshold.
  • the access control device can determine whether the target person is a registered person by comparing the face image of the target person with at least one registered face image, wherein the registered persons include trustworthy persons, and the registered face images include the face images of the registered persons.
  • the first human face similarity threshold is a basis for judging whether the target person is a registered person in the face comparison.
  • the access control device can obtain the face similarity between the target person and a registered person by comparing the image of the target person with at least one registered face image. When the face similarity is greater than the first face similarity threshold, it is determined that the target person is a registered person; when the face similarity is less than or equal to the first face similarity threshold, it is determined that the target person is not a registered person.
  • the value of the first face similarity threshold is a positive number less than 1.
  • the access control device obtains the second face similarity threshold by increasing the first face similarity threshold, that is, the second face similarity threshold is greater than the first face similarity threshold.
  • Step S27 acquiring at least one registered face image.
  • the registered face images are all face images of trustworthy persons, and at least one registered face image includes the face images of all trustworthy persons.
  • the access control device is the access control device of Company A.
  • Company A has three employees, Zhang San, Li Si, and Wang Wu.
  • the registered face images include Zhang San's face image, Li Si's face image and Wang Wu's face image.
  • the living body detection device uses at least one registered face image input by the user through the input component as the at least one registered face image.
  • the access control device receives at least one registered face image sent by the terminal as at least one registered image.
  • the access control device executes step S26 and step S27 in no particular order.
  • the access control device may first execute step S26 and then step S27, or first execute step S27 and then step S26, or execute step S26 and step S27 at the same time.
  • Step S28 Based on the second face similarity threshold, face comparison is performed between the face image to be detected and the above-mentioned at least one registered face image to obtain a face comparison result.
  • the face image to be detected is any one of the at least two first images to be processed.
  • the access control device uses the second face similarity threshold as a basis for judging whether the target person is a registered person.
  • the access control device can obtain the face similarity between the target person and a registered person by comparing the face image to be detected with at least one registered face image. When the face similarity is greater than the second face similarity threshold, it is determined that the target person is a registered person; when the face similarity is less than or equal to the second face similarity threshold, it is determined that the target person is not a registered person.
  • the value of the second face similarity threshold is a positive number less than 1.
  • Step S29 based on the face comparison result and the first living body detection result, determine the passing state of the target person in the access control device.
  • in the case where the face comparison result includes that there is no image matching the face image to be detected in the at least one registered face image, the access control device determines that the passing state of the target person in the access control device is impassable.
  • the access control device determines that the passing state of the target person in the access control device is impassable when the first living body detection result includes that the target person is not alive.
  • the access control device determines that the passing state of the target person in the access control device is passable when the first living body detection result includes that the target person is alive and the face comparison result includes that there is an image in the at least one registered face image matching the face image to be detected.
  • the access control device obtains the second face similarity threshold by increasing the first face similarity threshold, and uses the second face similarity threshold as the basis for judging whether the target person is a registered person. In this way, by using the second face similarity threshold as the basis for judging whether the target person is a registered person, the access control device can raise the face comparison standard, thereby improving the accuracy of the face comparison result and the recognition accuracy of the access control device, and thus improving security when the living body detection state is abnormal.
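  • Steps S26 to S29 can be illustrated with the following hedged Python sketch (cosine similarity between face feature vectors is an assumed implementation detail; the description only requires some face similarity measure compared against the raised threshold).

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def face_comparison(probe_feature: np.ndarray,
                    registered_features: list[np.ndarray],
                    second_similarity_threshold: float) -> bool:
    """Step S28: compare the face image to be detected with the registered face images.

    Returns True if any registered face matches under the raised (second) threshold.
    """
    return any(cosine_similarity(probe_feature, reg) > second_similarity_threshold
               for reg in registered_features)

def passing_state(is_alive: bool, face_matched: bool) -> bool:
    """Step S29: the target person may pass only if both checks succeed."""
    return is_alive and face_matched

# Example with randomly generated, assumed feature vectors
rng = np.random.default_rng(0)
probe = rng.normal(size=128)
registered = [rng.normal(size=128), probe + 0.01 * rng.normal(size=128)]
matched = face_comparison(probe, registered, second_similarity_threshold=0.9)
print(passing_state(is_alive=True, face_matched=matched))  # True
```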
  • the access control device executes the following steps S30 to S31 during the execution of step S28, wherein:
  • Step S30 selecting at least one reference face image including a reference face from at least one registered face image.
  • when the access control device is in an abnormal state, it may be attacked by non-living bodies at any time, so the access control device only allows authorized persons to pass, which can improve the recognition accuracy of the access control device.
  • the access control device selects the face images of authorized persons (that is, at least one reference face image including a reference face) from the at least one registered face image for subsequent face comparison, so as to improve the recognition accuracy of the access control device.
  • Step S31 performing face comparison between the image of the face to be detected and at least one reference face image, and obtaining a face comparison result.
  • the at least one reference face image contains authorized persons.
  • by performing face comparison between the face image to be detected and the at least one reference face image, it can be determined whether the face image to be detected contains an authorized person, that is, whether the target person is an authorized person. For example, if the at least one reference face image includes Zhang San's face image and Li Si's face image, the authorized persons include Zhang San and Li Si. Comparing the face image to be detected with the reference face images can determine whether the face image to be detected contains Zhang San and whether it contains Li Si.
  • the access control device is the access control device of company A, and company A has three employees, Zhang San, Li Si, and Wang Wu, among which Zhang San is the person in charge of company A.
  • Company A stipulates that only Zhang San is allowed to enter the company when an abnormal situation occurs, and then Zhang San is determined to be an authorized person. Then, when the access control device determines that the living body detection state includes an abnormal state, it selects the face image of Zhang San from the registered face images as the reference face image.
  • when the access control device determines that the living body detection state includes an abnormal state, it selects the face images of authorized persons from the registered face images as the reference face images, and performs face comparison between the reference face images and the face image to be detected to obtain the face comparison result, thereby reducing the risk of non-living attacks on the access control device when the living body detection state is abnormal and reducing potential safety hazards.
  • the access control device obtains the face comparison result through steps S30 and S31. In the case where the face comparison result includes that there is no image matching the face image to be detected in the at least one reference face image, the access control device determines that the passing state of the target person in the access control device is impassable. In the case where the first living body detection result includes that the target person is a non-living body, the access control device determines that the passing state of the target person in the access control device is impassable. When the first living body detection result includes that the target person is alive and the face comparison result includes that there is an image matching the face image to be detected in the at least one reference face image, the access control device determines that the passing state of the target person in the access control device is passable.
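  • A hedged sketch of step S30 follows (the authorized-person whitelist and the feature layout are assumptions used only to illustrate restricting the comparison to reference faces).

```python
import numpy as np

def select_reference_features(registered: dict[str, np.ndarray],
                              authorized_names: set[str]) -> list[np.ndarray]:
    """Step S30: keep only the registered face images (features) of authorized persons."""
    return [feat for name, feat in registered.items() if name in authorized_names]

# Example: in the abnormal state only Zhang San is an authorized person, so the
# face comparison of step S31 is restricted to his reference face image.
registered = {
    "Zhang San": np.array([1.0, 0.0, 0.0]),
    "Li Si":     np.array([0.0, 1.0, 0.0]),
    "Wang Wu":   np.array([0.0, 0.0, 1.0]),
}
reference = select_reference_features(registered, {"Zhang San"})
print(len(reference))  # 1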
  • the access control device also performs the following step S32 before performing step S28, wherein:
  • Step S32 obtaining the effective duration.
  • in one implementation of obtaining the effective duration, the access control device receives the effective duration input by the user through the input component; in another implementation, the access control device receives the effective duration sent by the control terminal.
  • after performing step S32, the access control device executes the following steps S33 to S34 in the process of executing step S28, wherein:
  • Step S33 within the effective time period, obtaining the face comparison result by performing face comparison between the face image to be detected and the at least one reference face image, where the starting time of the effective time period is the time at which the living body detection state is determined to include the abnormal state, and the duration of the effective time period is the effective duration.
  • Step S34 outside the effective time period, perform face comparison between the face image to be detected and at least one registered face image to obtain a face comparison result.
  • For example, the access control device determines at 09:20:15 on June 21, 2021 that the living body detection state includes an abnormal state, and the effective duration is assumed to be 2 hours. Then, from 09:20:15 on June 21, 2021 to 11:20:15 on June 21, 2021, the access control device performs face comparison between the face image to be detected and the at least one reference face image to obtain the face comparison result. Starting from 11:20:16 on June 21, 2021, the face comparison result is obtained by comparing the face image to be detected with the at least one registered face image.
  • performing face comparison between the face image to be detected and the at least one reference face image to obtain the face comparison result can reduce the probability of a non-living attack on the access control device, thereby improving security.
  • the access control device performs face comparison between the face image to be detected and the at least one reference face image within the effective time period to obtain the face comparison result, which can not only reduce potential safety hazards but also improve the recognition accuracy of the access control device, thereby improving the user experience.
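  • A hedged sketch of steps S32 to S34 (timestamps in seconds and the helper names are assumptions):

```python
import time

def gallery_for_comparison(now: float,
                           abnormal_detected_at: float,
                           effective_duration: float,
                           reference_images: list,
                           registered_images: list) -> list:
    """Choose which gallery the face image to be detected is compared against.

    Within the effective time period (which starts when the abnormal state was
    determined), only the reference face images of authorized persons are used
    (step S33); outside it, all registered face images are used (step S34).
    """
    within_effective_period = (now - abnormal_detected_at) <= effective_duration
    return reference_images if within_effective_period else registered_images

# Example: abnormal state detected one hour ago, effective duration two hours
abnormal_at = time.time() - 3600
gallery = gallery_for_comparison(time.time(), abnormal_at, 2 * 3600,
                                 reference_images=["Zhang San"],
                                 registered_images=["Zhang San", "Li Si", "Wang Wu"])
print(gallery)  # ['Zhang San']
```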
  • the living body detection device executes the following step S35 when the living body detection state includes an abnormal level, wherein:
  • Step S35 based on the abnormal level, determine the target abnormal living body detection scheme.
  • the abnormal living body detection scheme is a living body detection scheme executed when the living body detection state includes an abnormal state. Since the probabilities of the living body detection device being attacked by non-living bodies are different at different abnormality levels, the living body detection device determines a target abnormal living body detection scheme based on the abnormality level, which can improve the accuracy of living body detection.
  • the higher the degree of abnormality represented by the abnormality level, the higher the living body detection standard of the target abnormal living body detection scheme.
  • the living body detection device acquires a fourth mapping relationship between the abnormal level and the abnormal living body detection scheme.
  • the living body detection device determines a target abnormal living body detection scheme based on the fourth mapping relationship and the abnormal level of the living body detection state.
  • after performing step S35, the living body detection device executes the following step S36 in the process of executing step S17, wherein:
  • Step S36 based on the target abnormal living body detection scheme and at least two third living body detection results, determine the first living body detection result of the target person.
  • the abnormality level includes one of the following: general, higher, or special, wherein special represents a higher degree of abnormality than higher, and higher represents a higher degree of abnormality than general.
  • the target abnormal living body detection scheme includes: when the third number is greater than the number of negative results, determining that the first living body detection result is that the target person is alive; when the third number is less than or equal to the number of negative results, determining that the first living body detection result indicates that the target person is not alive.
  • the third number represents the number of second positive results among the at least two third living body detection results, the second positive result represents a third living body detection result that passes the living body detection, and the negative result represents a third living body detection result that fails the living body detection.
  • the target abnormal living body detection scheme includes: in the case where the difference between the third number and the number of negative results is greater than the fifth threshold, determining that the first living body detection result is that the target person is alive; in the case where the difference between the third number and the number of negative results is less than or equal to the fifth threshold, determining that the first living body detection result is that the target person is not alive.
  • the fifth threshold may be a positive integer. In some implementations, the fifth threshold may be set to two.
  • the target abnormal living body detection scheme includes: in the case where the difference between the third number and the number of negative results is greater than the sixth threshold, determining that the first living body detection result is that the target person is alive; in the case where the difference between the third number and the number of negative results is less than or equal to the sixth threshold, determining that the first living body detection result is that the target person is not alive.
  • the sixth threshold may be a positive integer, and the sixth threshold is greater than the fifth threshold. In some implementations, the sixth threshold can be set to 3.
  • the living body detection device determines the abnormal living body detection scheme based on the abnormality level, and determines the first living body detection result based on the abnormal living body detection scheme and the at least two third living body detection results, thereby improving the accuracy of the first living body detection result.
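  • A hedged Python sketch of steps S35 to S36 follows (the level-to-scheme mapping and the margin values 2 and 3 follow the fifth- and sixth-threshold examples above, but the exact fourth mapping relationship is an assumption).

```python
def decide_with_abnormal_level(third_results: list[bool], level: str) -> bool:
    """Pick a target abnormal living body detection scheme from the abnormality level
    and apply it to the third living body detection results.

    Assumed mapping: "general" -> simple majority, "higher" -> margin greater than 2
    (the fifth threshold), "special" -> margin greater than 3 (the sixth threshold).
    """
    positives = sum(third_results)
    negatives = len(third_results) - positives
    margin = positives - negatives
    required_margin = {"general": 0, "higher": 2, "special": 3}[level]
    return margin > required_margin

# Example with assumed values: 4 passes out of 5 gives a margin of 3, which is
# enough for "higher" (needs > 2) but not for "special" (needs > 3).
results = [True, True, True, True, False]
print(decide_with_abnormal_level(results, "higher"))   # True
print(decide_with_abnormal_level(results, "special"))  # False
```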
  • the living body detection device also performs the following step S37, wherein:
  • Step S37 in the case where the living body detection state includes an abnormal state, sending a prompt instruction to the management terminal, where the prompt instruction carries information indicating that the living body detection state includes an abnormal state.
  • the prompting instruction may be used to prompt related personnel through the management terminal that the state of the living body detection includes an abnormal state. In this way, relevant personnel can take corresponding measures to prevent the living body detection device from being attacked by non-living bodies.
  • the living body detection device includes a camera.
  • when the living body detection device detects a remote video instruction sent in response to the prompt instruction, it sends the video captured by the camera in real time to the management terminal.
  • the living body detection device includes a speaker.
  • when the living body detection device receives voice data transmitted in response to the prompt instruction, it outputs the voice data through the speaker.
  • the relevant management personnel learn, through the prompt instruction received by the management terminal, that the living body detection state of the living body detection device includes an abnormal state, and then send voice data to the living body detection device through the management terminal to inform the person conducting the non-living attack on the living body detection device.
  • the living body detection device further outputs the voice data through the speaker when receiving the voice data.
  • the prompt instruction is used to instruct the management terminal to output alarm information, so as to remind relevant personnel to deal in a timely manner with the situation in which the living body detection state of the living body detection device includes an abnormal state.
  • when the living body detection state includes an abnormal state, the living body detection device outputs alarm information.
  • the access control device stops performing face recognition on the target person.
  • the access control device stops performing face recognition on the target person, that is, the access control device stops using functions related to face recognition.
  • the abnormality level includes at least one of the following: general, higher, or special, wherein special represents a higher degree of abnormality than higher, and higher represents a higher degree of abnormality than general.
  • the access control device stops performing face recognition on the target person when determining that the abnormal level of the living body detection state is special.
  • when the access control device determines that the abnormality level of the living body detection state is a preset level, it stops using the face recognition function and prohibits anyone from passing, thereby reducing the rate of erroneous passage.
  • wrong passage means that the access control device determines that people other than the target person can pass.
  • when the access control device determines that the living body detection state includes an abnormal state, it stops using the face recognition function and prompts the target person to enter identity information or use a key to enter.
  • inputting the identity information may be placing a card carrying the identity information in the card identification area, wherein the card carrying the identity information includes at least one of the following: an ID card, an access control card, and a work card.
  • the access control device can obtain the identity information of the target person through the identification card, thereby determining whether the target person can pass.
  • inputting the identity information may be placing a two-dimensional code carrying the identity information in the two-dimensional code recognition area.
  • the access control device can obtain the identity information of the target person by identifying the two-dimensional code, thereby determining whether the target person can pass.
  • the access control device stops using the face recognition function when it is determined that the living body detection state includes an abnormal state. In this way, the user cannot pass through face recognition, and thus cannot conduct non-living attacks on the access control device, thereby reducing the false recognition rate of live detection and improving the recognition accuracy of the access control device.
  • the writing order of the steps does not imply a strict execution order and does not constitute any limitation on the implementation process.
  • the specific execution order of each step should be determined based on its function and possible internal logic.
  • the embodiment of the present disclosure provides a device corresponding to the above method. Since the problem-solving principle of the device in the embodiment of the present disclosure is similar to the above-mentioned method in the embodiment of the present disclosure, the implementation of the device can refer to the implementation of the method.
  • FIG. 5 is a schematic structural diagram of a living body detection device provided by an embodiment of the present disclosure.
  • the living body detection device 1 includes an acquisition part 11 and a first processing part 12 .
  • the living body detection device 1 further includes a second processing part 13, wherein:
  • the acquiring part 11 is configured to acquire the living body detection status of the environment where the living body detection device 1 is located, the living body detection status includes a normal state or an abnormal state, the normal state means that the environment where the living body detection device 1 is located is not in a state of being attacked by a non-living body, and the abnormal state The state indicates that the environment where the living body detection device 1 is located is in a state of being attacked by a non-living body;
  • the acquisition part 11 is further configured to acquire at least two first images to be processed, where the at least two first images to be processed both contain the target figure;
  • the first processing part 12 is configured to determine a first living body detection result of the target person based on at least two first images to be processed.
  • the acquisition part 11 is further configured to: acquire the first threshold and at least one second image to be processed, where the maximum time stamp in the at least one second image to be processed is smaller than the minimum time stamp in the at least two first images to be processed; perform liveness detection processing on the at least one second image to be processed to obtain at least one second liveness detection result; determine a first number of first positive results among the at least one second liveness detection result, where the first positive result represents a second liveness detection result that passes the liveness detection; determine a first ratio of the first number to a second number, where the second number is the number of the second liveness detection results; and determine the liveness detection status based on the first ratio and the first threshold.
  • the acquisition part 11 is further configured to: acquire a binocular image and a second threshold, where the binocular image includes a first image and a second image, and both the first image and the second image include a face to be detected; determine a first position of the face to be detected in the first image, and determine a second position of the face to be detected in the second image; determine, based on the first position and the second position, the parallax displacement of the face to be detected in the binocular image; and determine the living body detection status based on the parallax displacement and the second threshold.
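  • As a hedged illustration of this parallax-based status check (face detection is assumed to be available elsewhere; only the displacement computation and the threshold test are sketched, and the example point coordinates follow the description):

```python
import math

def parallax_displacement(first_position: tuple[float, float],
                          second_position: tuple[float, float]) -> float:
    """Euclidean distance between the face position in the first image and the face
    position in the second image of the binocular pair (pixel coordinates)."""
    dx = first_position[0] - second_position[0]
    dy = first_position[1] - second_position[1]
    return math.hypot(dx, dy)

def status_from_parallax(displacement: float, second_threshold: float) -> str:
    """A small displacement suggests a three-dimensional (live) face, so the living
    body detection status is normal; a large one suggests a flat, non-living attack."""
    return "normal" if displacement < second_threshold else "abnormal"

# Example: face at (2, 1) in the first image and (3, 2) in the second image
d = parallax_displacement((2, 1), (3, 2))            # sqrt(2) ≈ 1.41
print(d, status_from_parallax(d, second_threshold=2.0))
```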
  • the first processing part 12 is further configured to: perform living body detection processing on the at least two first images to be processed to obtain at least two third living body detection results of the target person; and determine the first living body detection result of the target person based on the at least two third living body detection results.
  • the first processing part 12 is further configured to: acquire a third threshold; determine a third number of second positive results among the at least two third living body detection results, the second positive The result represents the third living body detection result that passed the living body detection; determine the second ratio between the third number and the fourth number, and the fourth number is the number of the third living body detection result; based on the second The ratio and the third threshold determine the first living body detection result of the target person.
  • the first processing part 12 is further configured to: in the case where the number of the second positive results in the at least two third living body detection results is greater than the number of negative results, determine that the first living body detection result of the target person is that the target person is alive, where the second positive result represents a third living body detection result that passes the living body detection and the negative result represents a third living body detection result that fails the living body detection; and in the case where the number of the second positive results in the at least two third living body detection results is less than or equal to the number of the negative results, determine that the first living body detection result of the target person is that the target person is not alive.
  • the living body detection device 1 further includes: a second processing part 13, configured to, after it is determined that the living body detection state includes the abnormal state and before the living body detection processing is performed on the at least two first images to be processed to obtain the at least two third living body detection results of the target person, increase the first living body detection threshold in the living body detection process to obtain a second living body detection threshold; the first processing part 12 is further configured to: perform living body detection processing on the at least two first images to be processed based on the second living body detection threshold to obtain the at least two third living body detection results of the target person.
  • the living body detection device 1 includes an access control device, and the at least two first images to be processed both include the face of the target person; when it is determined that the living body detection state includes the abnormal state, the first processing part 12 is further configured to increase the first face similarity threshold of the face comparison to obtain a second face similarity threshold; the acquisition part 11 is further configured to acquire at least one registered face image; the first processing part 12 is further configured to perform face comparison between the face image to be detected and the at least one registered face image based on the second face similarity threshold to obtain a face comparison result, where the face image to be detected is any one of the at least two first images to be processed; and the first processing part 12 is further configured to determine the passing state of the target person in the access control device based on the face comparison result and the first living body detection result.
  • the first processing part 12 is further configured to: in the case where the face comparison result includes that there is no image matching the face image to be detected in the at least one registered face image, determine that the passing state of the target person in the access control device is impassable; in the case where the first living body detection result includes that the target person is not alive, determine that the passing state of the target person in the access control device is impassable; and in the case where the first living body detection result includes that the target person is alive and the face comparison result includes that there is an image in the at least one registered face image matching the face image to be detected, determine that the passing state of the target person in the access control device is passable.
  • the acquiring part 11 may be a data interface, and both the first processing part 12 and the second processing part 13 may be processors.
  • the functions or parts included in the apparatus provided by the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments, and for specific implementation, refer to the descriptions of the above method embodiments.
  • a "part" may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course it may also be a unit, a module or a non-modular one.
  • FIG. 6 is a schematic diagram of a hardware structure of a living body detection device provided by an embodiment of the present disclosure.
  • the living body detection device 2 includes a processor 21 , a memory 22 , an input device 23 and an output device 24 .
  • the processor 21 , memory 22 , input device 23 and output device 24 are coupled through a connector 25 , and the connector 25 includes various interfaces, transmission lines or buses, etc., which are not limited in this embodiment of the present disclosure.
  • coupling refers to interconnection in a specific manner, including direct connection or indirect connection through other devices, for example, connection through various interfaces, transmission lines, and buses.
  • the processor 21 can be one or more graphics processing units (Graphics Processing Unit, GPU), and in the case where the processor 21 is a GPU, the GPU can be a single-core GPU or a multi-core GPU.
  • the processor 21 may be a processor group composed of multiple GPUs, and the multiple processors are coupled to each other through one or more buses.
  • the processor may also be other types of processors, etc., which are not limited in this embodiment of the present disclosure.
  • the memory 22 can be used to store computer program instructions and various computer program codes including program codes for implementing the solutions of the present disclosure.
  • the memory includes, but is not limited to, random access memory (Random Access Memory, RAM), read-only memory (Read-only Memory, ROM), erasable programmable read-only memory (Erasable Programmable Read Only Memory , EPROM), or portable read-only memory (Compact Disc Read-only Memory, CD-ROM), which is used for related instructions and data.
  • the input device 23 is used for inputting data and/or signals and the output device 24 is used for outputting data and/or signals.
  • the input device 23 and the output device 24 can be independent devices, or an integrated device.
  • the memory 22 can be used not only to store related instructions, but also to store related data, for example, the memory 22 can be used to store at least two first images to be processed acquired through the input device 23, or the The memory 22 can also be used to store the first living body detection result obtained by the processor 21, etc., and the embodiment of the present disclosure does not limit the specific data stored in the memory.
  • Fig. 6 shows a simplified design of a living body detection device.
  • the living body detection device may also include other necessary components, including but not limited to any number of input/output devices, processors and memories, and all living body detection devices that can implement the embodiments of the present disclosure fall within the protection scope of the present disclosure.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the parts is only a logical function division. In actual implementation, there may be other division methods.
  • multiple parts or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or parts may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in or transmitted via a computer-readable storage medium.
  • the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device including a server, a data center, and the like integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk or a magnetic tape), an optical medium (for example, a digital versatile disc (Digital Versatile Disc, DVD)), a semiconductor medium (for example, a solid state disk (Solid State Disk, SSD)), or the like.
  • all or part of the processes may be completed by a computer program instructing related hardware.
  • the program may be stored in a computer-readable storage medium, and when the program is executed, the processes of the foregoing method embodiments may be included.
  • the aforementioned storage medium includes various media capable of storing program codes such as read-only memory (ROM) or random access memory (Random Access Memory, RAM), magnetic disk or optical disk.
  • Embodiments of the present disclosure provide a living body detection method and device, electronic equipment, a computer-readable storage medium, and a computer program product.
  • the method is applied to a living body detection device and includes: acquiring a living body detection state of an environment where the living body detection device is located, where the living body detection state includes a normal state or an abnormal state, the normal state indicates that the environment where the living body detection device is located is not in a state of being attacked by a non-living body, and the abnormal state indicates that the environment where the living body detection device is located is in a state of being attacked by a non-living body; in the case where the living body detection state of the environment where the living body detection device is located includes the abnormal state, acquiring at least two first images to be processed, where the at least two first images to be processed all contain a target person; and determining a first living body detection result of the target person based on the at least two first images to be processed.
  • the above solution can raise the living body detection standard, thereby improving the accuracy of the living body detection result.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

本公开实施例提供了一种活体检测方法及装置、电子设备、计算机可读存储介质和计算机程序产品。所述方法应用于活体检测装置,所述方法包括:获取所述活体检测装置所处环境的活体检测状态,所述活体检测状态包括正常状态或异常状态,所述正常状态表征所述活体检测装置所处环境未处于被非活体攻击的状态,所述异常状态表征所述活体检测装置所处环境处于被非活体攻击的状态;在所述活体检测装置所处环境的活体检测状态包括所述异常状态的情况下,获取至少两张第一待处理图像,所述至少两张第一待处理图像均包含目标人物;基于所述至少两张第一待处理图像,确定所述目标人物的第一活体检测结果。

Description

活体检测方法及装置、电子设备、计算机可读存储介质和计算机程序产品
相关申请的交叉引用
本公开实施例基于申请号为202110988130.0、申请日为2021年08月26日、申请名称为“活体检测方法及装置、电子设备及计算机可读存储介质”的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本公开作为参考。
技术领域
本公开涉及但不限于安防技术领域,尤其涉及一种活体检测方法及装置、电子设备、计算机可读存储介质和计算机程序产品。
背景技术
随着人脸识别技术的发展,人脸识别技术已广泛应用于不同的应用场景。在人脸识别的过程中,采用活体检测来提高安全性。在相关技术中,活体检测的准确度不高。
发明内容
本公开实施例提供一种活体检测方法及装置、电子设备、计算机可读存储介质和计算机程序产品。
本公开实施例提供了一种活体检测方法,所述方法应用于活体检测装置,所述方法包括:
获取所述活体检测装置所处环境的活体检测状态,所述活体检测状态包括正常状态或异常状态,所述正常状态表征所述活体检测装置所处环境未处于被非活体攻击的状态,所述异常状态表征所述活体检测装置所处环境处于被非活体攻击的状态;
在所述活体检测装置所处环境的活体检测状态包括所述异常状态的情况下,获取至少两张第一待处理图像,所述至少两张第一待处理图像均包含目标人物;
基于所述至少两张第一待处理图像,确定所述目标人物的第一活体检测结果。
本公开实施例提供了一种活体检测装置,所述活体检测装置包括:
获取部分,被配置为获取所述活体检测装置所处环境的活体检测状态,所述活体检测状态包括正常状态或异常状态,所述正常状态表征所述活体检测装置所处环境未处于被非活体攻击的状态,所述异常状态表征所述活体检测装置所处环境处于被非活体攻击的状态;
所述获取部分,还被配置为在所述活体检测装置所处环境的活体检测状态包括所述异常状态的情况下,获取至少两张第一待处理图像,所述至少两张第一待处理图像均包含目标人物;
第一处理部分,被配置为基于所述至少两张第一待处理图像,确定所述目标人物的第一活体检测结果。
本公开实施例提供了一种电子设备,包括:处理器和存储器,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,在所述处理器执行所述计算机指令的情况下,所述电子设备执行上述活体检测方法。
本公开实施例提供了另一种电子设备,包括:处理器、发送装置、输入装置、输出装置和存储器,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,在所述处理器执行所述计算机指令的情况下,所述电子设备执行上述活体检测方法。
本公开实施例提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,所述计算机程序包括程序指令,在所述程序指令被处理器执行的情况下,使所述处理器执行上述活体检测方法。
本公开实施例提供了一种计算机程序产品,所述计算机程序产品包括计算机程序或 指令,在所述计算机程序或指令在电子设备上运行的情况下,使得所述电子设备执行上述活体检测方法。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,而非限制本公开。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对本公开实施例中所需要使用的附图进行说明。
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本公开的实施例,并与说明书一起用于说明本公开的技术方案。
图1为本公开实施例提供的一种像素坐标系的示意图;
图2为本公开实施例提供的一种活体检测方法的流程示意图;
图3为本公开实施例提供的一种双目图像的示意图;
图4为本公开实施例提供的一种双目图像的示意图;
图5为本公开实施例提供的一种活体检测装置的组成结构示意图;
图6为本公开实施例提供的一种活体检测装置的硬件结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本公开的技术方案,下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
本公开的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是还可能包括没有列出的步骤或单元,或可能还包括对于这些过程、方法、产品或设备固有的其他步骤或单元。
应当理解,在本公开中,“至少一个(项)”是指一个或者多个,“多个”是指两个或两个以上,“至少两个(项)”是指两个或三个及三个以上,“和/或”,用于描述关联对象的关联关系,表示可以存在三种关系,例如,“A和/或B”可以表示:只存在A,只存在B以及同时存在A和B三种情况,其中A,B可以是单数或者复数。字符“/”可表示前后关联对象是一种“或”的关系,是指这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b或c中的至少一项(个),可以表示:a,b,c,“a和b”,“a和c”,“b和c”,或“a和b和c”,其中a,b,c可以是单个,也可以是多个。字符“/”还可表示数学运算中的除号,例如,a/b=a除以b;6/3=2。“以下至少一项(个)”或其类似表达。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本公开的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
随着人脸识别技术的发展,人脸识别技术已广泛应用于不同的应用场景,其中,通过人脸识别确认人物的身份即为一个重要的应用场景,例如,通过人脸识别技术进行实名认证、身份认证等等。
人脸识别技术通过对采集人物的脸部区域得到的人脸图像进行特征提取处理,得到人脸特征数据。通过将提取得到的人脸特征数据与数据库中的人脸特征数据进行比对, 以确定人脸图像中的人物的身份。
然而,近来越来越多使用非活体人脸数据攻击人脸识别技术的事件发生。上述非活体人脸数据包括:纸质人脸照片、电子人脸图像等等。使用非活体人脸数据攻击人脸识别技术即使用非活体人脸数据替换人物的脸部区域,达到欺骗人脸识别技术的效果。
例如,张三将李四的照片放置于李四的手机前,以进行人脸识别解锁。手机通过摄像头对李四的照片进行拍摄,得到包含李四的脸部区域的人脸图像,进而确定张三的身份为李四,并对手机进行解锁处理。这样,张三就通过使用李四的照片成功欺骗手机的人脸识别技术,实现对李四的手机的解锁。
因此,如何防止非活体人脸数据对人脸识别技术的攻击(下文将称为非活体攻击)具有非常重要的意义。
通过对人脸数据进行活体检测可有效防止非活体人脸数据对人脸识别技术的攻击。传统的活体检测方法主要包括静默活体检测方法和视频活体检测方法。静默活体检测方法指基于单张图像完成活体检测的方法。视频活体检测方法指基于视频完成活体检测的方法,例如,待检测人物在录制活体检测视频的过程中,按相应指示完成相应的动作(如摇头,眨眼,张嘴)。通过对活体检测视频进行处理,确定待检测人物是否完成相应的动作,进而确定待检测人物是否为活体。
由于视频活体检测方法耗时较长,视频活体检测方法不适用于需要在短时间内完成活体检测的场景(下文将这种场景称为快速检测场景)。例如,在门禁设备确定待检测人物是否可通行时,需要对待检测人物进行活体检测。若活体检测的耗时较长会影响用户体验,而静默活体检测方法相较于视频活体检测方法,活体检测成功率较低。因此,如何提升用户体验并提高活体检测成功率具有非常重要的意义。基于此,本公开实施例提供了一种活体检测方法,以提升用户体验并提高活体检测成功率。
为表述方便,下文中出现的图像中的位置均指图像的像素坐标下的位置。本公开实施例中的像素坐标系的横坐标用于表示像素点所在的列数,像素坐标系下的纵坐标用于表示像素点所在的行数。
图1为本公开实施例提供的一种像素坐标系的示意图,如图1所示,在图1所示的图像中,以图像的左上角为坐标原点O、平行于图像的行的方向为X轴的方向、平行于图像的列的方向为Y轴的方向,构建像素坐标系为XOY。横坐标和纵坐标的单位均为像素点。例如,图1中的像素点A 11的坐标为(1,1),像素点A 23的坐标为(3,2),像素点A 42的坐标为(2,4),像素点A 34的坐标为(4,3)。
本公开实施例的执行主体为活体检测装置,其中,活体检测装置可以是任意一种可执行本公开方法实施例所公开的技术方案的电子设备。在一些实施方式中,活体检测装置可以是以下中的至少一种:手机、计算机、平板电脑、可穿戴智能设备等。
应理解,本公开方法实施例还可以通过处理器执行计算机程序代码的方式实现。下面结合本公开实施例中的附图对本公开实施例进行描述。图2为本公开实施例提供的一种活体检测方法的流程示意图,如图2所示,所述方法包括步骤S201至步骤S203,其中:
步骤S201、获取活体检测装置所处环境的活体检测状态,活体检测状态包括正常状态或异常状态。
其中,在活体检测环境的活体检测状态包括正常状态的情况下,表征活体检测环境未处于被非活体攻击的状态;在活体检测环境的活体检测状态包括异常状态的情况下,表征活体检测环境处于被非活体攻击的状态。
例如,活体检测装置在连续n次活体检测中均确定待检测人物为非活体,说明活体检测装置可能遭受非活体攻击,此时活体检测装置所处环境处于被非活体攻击的状态,即活体检测装置所处的环境的活体检测状态包括异常状态。反之,若活体检测装置所处环境的活体检测状态不是异常状态,则活体检测装置所处环境的活体检测状态包括正常 状态。
本公开实施例中,活体检测装置所处环境即为上述活体检测环境,活体检测状态用于表征活体检测装置所处环境是否处于被非活体攻击的状态。
在一种获取活体检测状态的实施方式中,活体检测装置接收用户通过输入组件输入的活体检测状态。上述输入组件包括以下至少一种:键盘、鼠标、触控屏、触控板、音频输入器等。在另一种获取活体检测状态的实施方式中,活体检测装置接收终端发送的活体检测状态。上述终端可以是以下任意一种:手机、计算机、平板电脑、服务器等。
步骤S202、在活体检测装置所处环境的活体检测状态包括异常状态的情况下,获取至少两张第一待处理图像,至少两张第一待处理图像均包含目标人物。
这里,第一待处理图像均包含目标人物,即活体检测装置需要确定每一第一待处理图像中的目标人物是否为活体。
在一些实施方式中,活体检测装置包括摄像头,活体检测装置使用摄像头采集n张图像,作为至少两张第一待处理图像,其中,n为大于1的整数。例如,活体检测装置在检测到活体检测指令的情况下,使用摄像头采集n张图像,作为至少两张第一待处理图像。
在一种实施方式中,活体检测装置包括摄像头,活体检测装置使用摄像头采集预设时长的视频,将该视频中的至少两张图像作为至少两张第一待处理图像。例如,活体检测装置在检测到活体检测指令的情况下,使用摄像头采集1分钟的视频。若该视频包括30帧图像,那么活体检测装置从该视频中的30帧图像中任取至少两帧图像,作为至少两张第一待处理图像。
在一种实施方式中,活体检测装置接收用户通过输入组件输入的至少两张图像作为至少两张第一待处理图像。在一种实施方式中,活体检测装置通过接收终端发送的至少两张图像作为至少两张第一待处理图像。
步骤S203、基于至少两张第一待处理图像,确定目标人物的第一活体检测结果。
这里,活体检测结果(包括上述第一活体检测结果,以及下文将要提及的第二活体检测结果、第三活体检测结果)包括活体检测通过或活体检测未通过,其中,活体检测通过表征活体检测对象为活体,活体检测未通过表征活体检测对象为非活体。
例如,若第一活体检测结果包括活体检测通过,那么目标人物为活体;若第一活体检测结果包括活体检测未通过,那么目标人物为非活体。
在一些实施方式中,活体检测装置通过对第一待处理图像进行活体检测处理,可得到第一待处理图像中的活体检测对象的活体检测结果。
由于活体检测状态包括异常状态时活体检测装置遭受非活体攻击的概率,比活体检测状态包括正常状态时遭受非活体攻击的概率大,在活体检测状态包括异常状态时,应提高活体检测标准,以降低非活体攻击的成功率。
在一些实施方式中,由于活体检测状态包括异常状态时遭受非活体攻击的概率比活体检测状态包括正常状态时遭受非活体攻击的概率大,可以提高活体检测标准,即:活体检测状态包括异常状态时判断待检测人物是否为活体的活体检测标准,应该比活体检测状态包括正常状态时判断待检测人物是否为活体的活体检测标准严格。这样可降低活体检测的误检测率,从而提高活体检测结果的准确度。
如上所述,静默活体检测方法相较于视频活体检测方法,活体检测成功率较低,即静默活体检测方法的活体检测标准比视频活体检测方法的活体检测标准低。在一些实施方式中,静默活体检测方法是依据单张图像确定活体检测结果,而视频活体检测方法是依据至少两张图像确定活体检测结果,因此视频活体检测方法的活体检测标准比静默活体检测方法的活体检测标准高。
因此,活体检测装置在活体检测状态包括异常状态的情况下,基于至少两张第一待处理图像,确定目标人物的第一活体检测结果。由此,提高第一活体检测结果的准确度。
在一种实施方式中,活体检测装置对一张第一待处理图像进行活体检测处理,可得到一个第一中间活体检测结果,其中,第一中间活体检测结果包括目标人物为活体的活体概率。
活体检测装置对至少两张第一待处理图像进行活体检测处理,可得到至少两个第一中间活体检测结果。活体检测装置计算至少两个第一中间活体检测结果的均值,得到第二中间活体检测结果。在第二中间活体检测结果的活体概率大于活体检测阈值的情况下,确定目标人物的第一活体检测结果为目标人物为活体;在第二中间活体检测结果的活体概率小于或等于活体检测阈值的情况下,确定目标人物的第一活体检测结果为目标人物为非活体。
例如,至少两个第一中间活体检测结果包括第一中间活体检测结果a和第一中间活体检测结果b,其中,第一中间活体检测结果a包括目标人物为活体的活体概率为0.8,第一中间活体检测结果b包括目标人物为活体的活体概率为0.76。通过计算第一中间活体检测结果a和第一中间活体检测结果b的均值得到第二中间活体检测结果,其中,第二中间活体检测结果包括目标人物为活体的活体概率为0.78。
若活体检测阈值为0.79,由于第二中间活体检测结果小于活体检测阈值(0.78小于0.79),第一活体检测结果为目标人物为非活体。若活体检测阈值为0.7,由于第二中间活体检测结果的活体概率大于活体检测阈值(0.78大于0.7),第一活体检测结果为目标人物为活体。
在该种实现方式中,活体检测装置通过计算至少两个第一中间活体检测结果的均值得到第一活体检测结果,可降低通过对单张图像进行活体检测处理导致的误检测率,从而提高第一活体检测结果的准确度。
本公开实施例区别于传统方法,在未对活体检测状态进行判断的情况下通过对单张图像进行活体检测处理得到活体检测结果,或在没对活体检测状态进行判断的情况下采用视频活体检测方法进行活体检测得到活体检测结果。本公开实施方式中的活体检测装置在确定活体检测装置所处环境包括异常状态的情况下,基于至少两张第一待处理图像确定目标人物的第一活体检测结果,可以提高活体检测标准,由此提高活体检测结果的准确度。同时可以避免在活体检测状态为正常状态的情况下采用视频活体检测方法,从而可以提升用户体验。
在一些实施方式中,活体检测装置在活体检测状态包括正常状态的情况下,采用静默活体检测方法确定目标人物的第一活体检测结果。例如,活体检测装置基于至少两张第一待处理图像中的任意一张图像,确定第一活体检测结果。
在一些实施方式中,活体检测装置在执行步骤S201的过程中执行以下步骤S1至步骤S5,其中:
步骤S1、获取第一阈值和至少一张第二待处理图像,至少一张第二待处理图像中的最大时间戳小于至少两张第一待处理图像中的最小时间戳。
在一些实施方式中,第一阈值可以是小于或等于1的正数。在一种获取第一阈值的实施方式中,活体检测装置通过接收用户通过输入组件输入的第一阈值获取第一阈值。
在另一种获取第一阈值的实施方式中,活体检测装置通过接收终端发送的第一阈值获取第一阈值。
本公开实施例中,至少一张第二待处理图像的最大时间戳小于至少一两张第一待处理图像中的最小时间戳,即任意一张第二待处理图像的采集时间均比任意一张第一待处理图像的采集时间早。
在一种实施方式中,活体检测装置将用户通过输入组件输入的至少一张第二待处理图像作为至少一张第二待处理图像。在一种实施方式中,活体检测装置接收终端发送的至少一张第二待处理图像作为至少一张第二待处理图像。在一种实施方式中,活体检测装置包括摄像头。活体检测装置使用该摄像头采集得到至少一张第二待处理图像。
在一些实施方式中,活体检测装置在检测到活体检测指令的情况下,使用该摄像头采集得到至少一张第二待处理图像。例如,活体检测装置在检测到活体检测区域内存在人物a的情况下,使用该摄像头对人物a进行拍摄得到至少一张第二待处理图像。
又例如,活体检测装置在检测到活体检测区域内存在人物a的情况下,使用该摄像头对人物a进行拍摄得到第二待处理图像A。活体检测装置在检测到活体检测区域内存在人物b的情况下,使用该摄像头对人物b进行拍摄得到第二待处理图像B。此时,至少一张第二待处理图像包括第二待处理图像A和第二待处理图像B。
应理解,在本公开实施例中,获取第一阈值的步骤和获取至少一张第二待处理图像的步骤可以分开执行,也可以同时执行。例如,活体检测装置可先获取第一阈值,再获取至少一张第二待处理图像。又例如,活体检测装置可先获取至少一张第二待处理图像,再获取第一阈值。再例如,活体检测装置在获取第一阈值的过程中获取至少一张第二待处理图像,或在获取至少一张第二待处理图像的过程中获取第一阈值。
步骤S2、对至少一张第二待处理图像进行活体检测处理,得到至少一个第二活体检测结果。
这里,每一张第二待处理图像均包含待检测人物。不同的第二待处理图像所包含的待检测人物可以相同,也可以不同。例如,至少一张第二待处理图像包括图像a和图像b。可以是图像a包含张三,图像b包含李四。也可以是图像a和图像b均包含张三。应理解,待检测人物与目标人物可以相同,也可以不同。
在一种实施方式中,活体检测装置对一张第二待处理图像进行活体检测处理,可得到一个第二活体检测结果。活体检测装置通过对至少一张第二待处理图像进行活体检测处理,可得到至少一个第二活体检测结果。例如,至少一张第二待处理图像包括图像a和图像b。若活体检测装置在执行步骤S2的过程中,通过对图像a进行活体检测处理得到活体检测结果A,此时至少一个第二活体检测结果包括活体检测结果A。若活体检测装置在执行步骤S2的过程中,通过对图像a进行活体检测处理得到活体检测结果A,并通过对图像b进行活体检测处理得到活体检测结果B。此时至少一个第二活体检测结果包括活体检测结果A和活体检测结果B。
步骤S3、确定至少一个第二活体检测结果中第一正结果的第一数量,第一正结果表征活体检测通过的第二活体检测结果。
步骤S4、确定第一数量和第二数量的第一比值,第二数量为第二活体检测结果的数量。
步骤S5、基于上述第一比值和上述第一阈值,确定活体检测状态。
这里,至少一个第二活体检测结果中第一正结果的占比越大,说明活体检测装置被非活体攻击的概率越低。在本公开实施例中,第一比值表征第一正结果在第二活体检测结果中的占比,第一阈值为判断第一比值是大或是小的依据,即通过第一阈值可判断第一比值是大或是小。若第一比值大,说明至少一个第二活体检测结果中第一正结果的占比大,则活体检测状态包括正常状态;若第一比值小,说明至少一个第二活体检测结果中第一正结果的占比小,则活体检测状态包括异常状态。
在一种实施方式中,在第一比值大于第一阈值的情况下,确定第一比值大,进而确定活体检测状态为正常状态;在第一比值小于或等于第一阈值的情况下,确定第一比值小,进而确定活体检测状态为异常状态。在一种实施方式中,在第一比值大于或等于第一阈值的情况下,确定第一比值大,进而确定活体检测状态为正常状态;在第一比值小于第一阈值的情况下,确定第一比值小,进而确定活体检测状态为异常状态。在一种实施方式中,活体检测装置计算第一比值的平方得到第一中间数值。在第一中间数值大于第一阈值的情况下,确定第一比值大,进而确定活体检测状态为正常状态;活体检测装置在第一中间数值小于或等于第一阈值的情况下,确定第一比值小,进而确定活体检测状态为异常状态。
活体检测装置通过执行步骤S1至步骤S5,可以在确定第一数量和第二数量的情况下,基于第一数量和第二数量计算得到至少一个第二活体检测结果中第一正结果的占比。进而可依据至少一个第二活体检测结果中第一正结果的占比和第一阈值确定活体检测状态。
在一些实施方式中,在活体检测状态包括异常状态的情况下,活体检测状态还包括异常等级,其中,异常等级用于表征活体检测装置所处环境的异常程度。
例如,异常等级可以包括一级、二级和三级,其中,三级所表征的异常程度高于二级所表征的异常程度,二级所表征的异常程度高于一级所表征的异常程度。又例如,异常等级可以包括一般、较高和特别,其中,特别所表征的异常程度高于较高所表征的异常程度,较高所表征的异常程度高于一般所表征的异常程度。
在活体检测装置所处的环境的活体检测状态包括异常状态的情况下,活体检测装置还通过执行以下步骤S6至步骤S7,确定活体检测状态的异常等级:
步骤S6、获取异常等级与第一比值之间的第一映射关系。
这里,第一映射关系表征第一正结果在至少一个第二活体检测结果中的占比与异常等级之间的映射。
步骤S7、基于第一映射关系和第一比值,确定活体检测状态的异常等级。
在一些实施方式,活体检测装置在执行步骤S201的过程中执行以下步骤S8,其中:
步骤S8、在检测到携带活体检测状态信息的第一活体检测指令的情况下,将第一活体检测指令所指示的活体检测状态作为活体检测装置所处环境的活体检测状态。
本公开实施例中,第一活体检测指令用于指示活体检测装置进行活体检测,该指令携带活体检测状态信息。例如,第一活体检测指令携带的活体检测状态信息为:活体检测状态包括异常状态;又例如,第一活体检测指令携带的活体检测状态信息为:活体检测状态包括正常状态。
在一种实施方式中,第一活体检测指令由用户通过输入组件向活体检测装置输入。
在一种实施方式中,第一活体检测指令通过将第一中间活体检测指令与活体检测状态信息进行融合得到。第一中间活体检测指令由活体检测装置在确定活体检测区域内存在待检测人物的情况下生成。活体检测状态信息由活体检测装置随机生成。
例如,活体检测装置在确定活体检测区域内存在待检测人物的情况下,生成第一中间活体检测指令,并随机生成活体检测状态包括异常状态的活体检测状态信息。将活体检测状态信息与第一中间活体检测指令融合得到第一活体检测指令。此时,第一活体检测指令所携带的活体检测状态信息为,活体检测状态包括异常状态。
为提高活体检测准确度,活体检测装置在活体检测状态为异常状态时所采用的活体检测标准,比在活体检测状态为正常状态时所采用的活体检测标准严格,这样可增加攻击者(即对活体检测装置进行非活体攻击的人)对活体检测装置进行非活体攻击的难度,从而降低非活体攻击的成功率。而攻击者为提高非活体攻击的成功率,可能会采取特定非活体攻击以避免活体检测装置确定活体检测状态为异常状态,进而避免活体检测装置采用更严格的活体检测标准。
例如,活体检测装置包括门禁装置。若连续三次活体检测结果包括活体检测未通过,则门禁装置确定活体检测状态包括异常状态。张三通过对门禁装置进行多次非活体攻击后,了解门禁装置确定活体检测状态的逻辑,采用了特定的非活体攻击,例如:在每三次内进行两次非活体攻击,且在每三次内进行一次活体识别。这样,可避免门禁装置确定活体检测状态包括异常状态,从而可以避免活体检测装置采用更严格的活体检测标准进行活体检测。
在该种实现方式中,由于活体检测状态信息是随机生成的,活体检测状态是随机确定的,这样,可以有效避免特定非活体攻击,从而提高活体检测的准确度。
在一种实施方式中,活体检测装置在执行步骤S201的过程中执行以下步骤S9至步 骤S12,其中:
步骤S9、获取双目图像和第二阈值,双目图像包括第一图像和第二图像,且第一图像和第二图像均包括待检测人脸。
这里,双目图像指通过两个不同的成像设备(下文将称为双目成像设备)在同一时刻从不同位置对同一场景进行拍摄获得的两张图像。在一些实施方式中,双目成像设备在同一时刻从不同位置对同一场景进行拍摄得到第一图像和第二图像。
在一种获取双目图像的实施方式中,活体检测装置接收用户通过输入组件输入的双目图像。在一种获取双目图像的实施方式中,活体检测装置接收终端发送的双目图像。在一种获取双目图像的实施方式中,活体检测装置包括双目摄像头。活体检测装置通过双目摄像头采集得到双目图像。
本公开实施例中,第二阈值的作用与第一阈值的作用不同,第二阈值的取值与第一阈值的取值可以相同,也可以不同。第二阈值为正数。
在一种获取第二阈值的实施方式中,活体检测装置通过接收用户通过输入组件输入的第二阈值获取第二阈值。在一种获取第二阈值的实施方式中,活体检测装置通过接收终端发送的第二阈值获取第二阈值。
应理解,在本公开实施例中,获取第二阈值的步骤和获取双目图像的步骤可以分开执行,也可以同时执行。例如,活体检测装置可先获取第二阈值,再获取双目图像。又例如,活体检测装置可先获取双目图像,再获取第二阈值。再例如,活体检测装置在获取第二阈值的过程中获取双目图像,或在获取双目图像的过程中获取第二阈值。
步骤S10、确定待检测人脸在第一图像中的第一位置,并确定待检测人脸在第二图像中的第二位置。
这里,待检测人脸为待检测人物的脸。待检测人脸在图像(包括第一图像和第二图像)中的位置指,待检测人脸在图像的像素坐标系下的位置。待检测人脸在第一图像的像素坐标系下的位置即为第一位置,待检测人脸在第二图像的像素坐标系下的位置即为第二位置。
在一种实施方式中,活体检测装置通过对第一图像进行人脸检测处理,确定包含待检测人脸的人脸框在第一图像中的位置,作为第一位置。活体检测装置通过对第二图像进行人脸检测处理,确定包含待检测人脸的人脸框在第二图像中的位置,作为第二位置。
步骤S11、基于第一位置和第二位置,确定待检测人脸在双目图像中的视差位移。
这里,视差位移指依据同一对象在双目图像中的位置得到的距离。例如,图3和图4均为本公开实施例提供的一种双目图像的示意图,图3所示的图像和图4所示的图像为双目图像,图3所示的图像和图4所示的图像均包含待检测人脸。待检测人脸在图3所示的图像中的位置为,包含待检测人脸的第一待检测人脸框在图3所示的图像的位置,即点A的位置。待检测人脸在图4所示的图像中的位置为,包含待检测人脸的第二待检测人脸框在图4所示的图像的位置,即点B的位置。
此时待检测人脸在图像3所示的图像和图4所示的图像中的视差位移为点A(坐标为(2,1))与点B(坐标为(3,2))之间的距离:
Figure PCTCN2022079043-appb-000001
在一些实施方式中,活体检测装置在得到第一位置和第二位置后,可以基于第一位置和第二位置得到待检测人脸在双目图像中的视差位移。
步骤S12、基于视差位移和第二阈值,确定活体检测状态。
在一些实施方式中，若将双目成像设备对三维对象进行拍摄得到的双目图像称为第一类双目图像，将双目成像设备对二维对象进行拍摄得到的双目图像称为第二类双目图像，那么三维对象在第一类双目图像中的视差位移，比二维对象在第二类双目图像中的视差位移小。因此，可以通过判断待检测人脸在双目图像中的视差位移的大小，判断待检测人脸为二维人脸或是三维人脸，进而可确定待检测人脸为活体或是非活体，从而可以确定活体检测状态。
本公开实施例中,第二阈值为判断待检测人脸在双目图像中的视差位移是大或是小的依据,即通过第二阈值可判断视差位移是大或是小。若视差位移大,则可以确定活体检测装置获取到的双目图像为第二类双目图像,即待检测人脸为二维人脸,进而可以确定待检测人物为非活体;若视差位移小,则可以确定活体检测装置获取到的双目图像为第一类双目图像,即待检测人脸为三维人脸,进而可以确定待检测人物为活体。若待检测人物为活体,则可以确定活体检测状态包括正常状态;若待检测人物为非活体,则可以确定活体检测状态包括异常状态。
在一种实施方式中,在视差位移小于第二阈值的情况下,活体检测装置确定视差位移小,进而确定活体检测状态为正常状态;在视差位移大于或等于第二阈值的情况下,活体检测装置确定视差位移大,进而确定活体检测状态为异常状态。在一种实施方式中,在视差位移小于或等于第二阈值的情况下,活体检测装置确定视差位移小,进而确定活体检测状态为正常状态;在视差位移大于第二阈值的情况下,活体检测装置确定视差位移大,进而确定活体检测状态为异常状态。在一种实施方式中,活体检测装置计算视差位移的平方得到第二中间数值。在第二中间数值小于第二阈值的情况下,活体检测装置确定视差位移小,进而确定活体检测状态为正常状态;在第二中间数值大于或等于第二阈值的情况下,活体检测装置确定视差位移大,进而确定活体检测状态为异常状态。
本公开实施例中,由于二维对象在双目图像中的视差位移比三维对象在双目图像中的视差位移大,活体检测装置通过执行步骤S9至步骤S12,可以确定待检测人脸在双目图像中的视差位移,进而可以基于该视差位移和第二阈值确定待检测人物是活体或是非活体,从而可以确定活体检测状态。
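下面的 Python 草图按正文给出的欧氏距离定义计算待检测人脸在双目图像中的视差位移，并与第二阈值比较以确定活体检测状态；比较方向对应上文第一种实施方式，函数名为示例性假设。

```python
import math
from typing import Tuple

Point = Tuple[float, float]


def disparity_displacement(first_position: Point, second_position: Point) -> float:
    """基于第一位置与第二位置计算视差位移(两点间的欧氏距离)。"""
    return math.hypot(first_position[0] - second_position[0],
                      first_position[1] - second_position[1])


def state_from_disparity(first_position: Point, second_position: Point, second_threshold: float) -> str:
    # 视差位移小于第二阈值 -> 三维人脸(活体) -> 正常状态; 否则 -> 异常状态
    disparity = disparity_displacement(first_position, second_position)
    return "normal" if disparity < second_threshold else "abnormal"


# 对应正文示例: 点A(2,1)与点B(3,2)之间的视差位移约为1.41
print(round(disparity_displacement((2, 1), (3, 2)), 2))  # 1.41
```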
在一些实施方式中，在活体检测状态包括异常状态的情况下，活体检测装置还获取视差位移与异常等级之间的第二映射关系；基于该视差位移和第二映射关系，确定活体检测状态的异常等级。
在一些实施方式中,活体检测装置在执行步骤S201的过程中执行以下步骤S13至步骤S15,其中:
步骤S13、获取第四阈值和第三待处理图像。
这里,第四阈值的作用与第一阈值的作用、第二阈值的作用均不同。第四阈值的取值与第一阈值的取值可以相同,也可以不同。第四阈值的取值与第二阈值的取值可以相同,也可以不同。第四阈值为正数。
在一种获取第四阈值的实施方式中,活体检测装置接收用户通过输入组件输入的第四阈值。在一种获取第四阈值的实施方式中,活体检测装置接收终端发送的第四阈值。
在一种获取第三待处理图像的实施方式中,活体检测装置接收用户通过输入组件输入的第三待处理图像。在一种获取第三待处理图像的实施方式中,活体检测装置接收终端发送的第三待处理图像。在一种获取第三待处理图像的实施方式中,活体检测装置包括摄像头。活体检测装置使用该摄像头采集得到第三待处理图像。
在一些实施方式中，活体检测装置在检测到活体检测指令的情况下，使用该摄像头采集得到第三待处理图像。例如，活体检测装置在检测到活体检测区域内存在人物a的情况下，使用该摄像头对人物a进行拍摄得到第三待处理图像。
应理解,在本公开实施例中,获取第四阈值的步骤和获取第三待处理图像的步骤可以分开执行,也可以同时执行。例如,活体检测装置可先获取第四阈值,再获取第三待处理图像。又例如,活体检测装置可先获取第三待处理图像,再获取第四阈值。再例如,活体检测装置在获取第四阈值的过程中获取第三待处理图像,或在获取第三待处理图像的过程中获取第四阈值。
步骤S14、确定第三待处理图像中的待检测人脸和第三待处理图像中的目标手之间的距离。
在一种实施方式中,活体检测装置通过对第三待处理图像进行人脸检测处理得到待检测人脸在第三待处理图像中的第三位置。活体检测装置通过对第三待处理图像进行目标检测处理得到目标手在第三待处理图像中的第四位置。活体检测装置基于该第三位置和第四位置,确定待检测人脸和目标手之间的距离。
在一些实施方式中，若活体检测装置通过对第三待处理图像进行人脸检测处理，确定第三待处理图像中包含至少两个人脸区域，则活体检测装置将面积最大的人脸区域内的人脸作为待检测人脸。在一些实施方式中，若活体检测装置通过对第三待处理图像进行目标检测处理，确定第三待处理图像中包含至少两只手，则活体检测装置分别计算每一只手与待检测人脸之间的距离，得到至少两个待确认距离，并将至少两个待确认距离中的最小值作为待检测人脸和目标手之间的距离。
步骤S15、基于距离和第四阈值,确定活体检测状态。
考虑到人在使用非活体数据对活体检测装置进行非活体攻击时,通常会用手拿着非活体数据,而人在进行活体检测时,通常不会将手放置于脸上,通过判断上述距离的大小,可以确定待检测人脸是活体或是非活体,进而可以确定活体检测状态。
其中,第四阈值为判断待检测人脸与目标手之间的距离是大或是小的依据。通过第四阈值判断该距离是大或是小,若该距离大,则可以确定待检测人物为活体;若该距离小,则可以确定待检测人物为非活体。若待检测人物为活体,则可以确定活体检测状态包括正常状态;若待检测人物为非活体,则可以确定活体检测状态包括异常状态。
在一种实施方式中，在距离大于第四阈值的情况下，活体检测装置确定距离大，进而确定活体检测状态为正常状态；在距离小于或等于第四阈值的情况下，活体检测装置确定距离小，进而确定活体检测状态为异常状态。在一种实施方式中，在距离大于或等于第四阈值的情况下，活体检测装置确定距离大，进而确定活体检测状态为正常状态；在距离小于第四阈值的情况下，活体检测装置确定距离小，进而确定活体检测状态为异常状态。在一种实施方式中，活体检测装置计算距离的平方得到第三中间数值。在第三中间数值大于第四阈值的情况下，活体检测装置确定距离大，进而确定活体检测状态为正常状态；在第三中间数值小于或等于第四阈值的情况下，活体检测装置确定距离小，进而确定活体检测状态为异常状态。
在本公开实施例中,活体检测装置通过执行步骤S13至步骤S15,得到待检测人脸与目标手之间的距离,进而可以基于该距离和第四阈值确定待检测人物是活体或是非活体,从而可以确定活体检测状态。
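下面的 Python 草图示意步骤S13至步骤S15的一种可能实现：当第三待处理图像中检测到多只手时，取每只手与待检测人脸距离中的最小值，再与第四阈值比较；其中人脸与手的中心坐标被假设为已由检测模型给出。

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]


def face_hand_distance(face_center: Point, hand_centers: List[Point]) -> float:
    """取每只手与待检测人脸之间距离(待确认距离)中的最小值。"""
    return min(math.hypot(face_center[0] - hand[0], face_center[1] - hand[1])
               for hand in hand_centers)


def state_from_distance(face_center: Point, hand_centers: List[Point], fourth_threshold: float) -> str:
    if not hand_centers:
        # 图像中未检测到手时, 此处假设按正常状态处理
        return "normal"
    distance = face_hand_distance(face_center, hand_centers)
    # 距离大于第四阈值 -> 活体 -> 正常状态; 否则 -> 非活体 -> 异常状态
    return "normal" if distance > fourth_threshold else "abnormal"
```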
在一些实施方式中，在活体检测状态包括异常状态的情况下，活体检测装置还获取距离与异常等级之间的第三映射关系；基于该距离和第三映射关系，确定活体检测状态的异常等级。
在一些实施方式中,活体检测装置在执行步骤S201的过程中执行以下步骤:获取人数阈值和第四待处理图像。
这里,人数阈值的作用与第一阈值的作用、第二阈值的作用、第四阈值的作用均不同。人数阈值的取值与第一阈值的取值可以相同,也可以不同。人数阈值的取值与第二阈值的取值可以相同,也可以不同。人数阈值的取值与第四阈值的取值可以相同,也可以不同。人数阈值为正数。
在一种获取人数阈值的实施方式中，活体检测装置接收用户通过输入组件输入的人数阈值。在一种获取人数阈值的实施方式中，活体检测装置接收终端发送的人数阈值。
在一种获取第四待处理图像的实施方式中,活体检测装置接收用户通过输入组件输入的第四待处理图像。在一种获取第四待处理图像的实施方式中,活体检测装置接收终端发送的第四待处理图像。在一种获取第四待处理图像的实施方式中,活体检测装置包括摄像头。活体检测装置使用该摄像头采集得到第四待处理图像。
在一些实施方式中,活体检测装置在检测到活体检测指令的情况下,使用该摄像头采集得到第四待处理图像。例如,活体检测装置在检测到活体检测区域内存在人物a的情况下,使用该摄像头对人物a进行拍摄得到第四待处理图像。
应理解,在本公开实施例中,获取人数阈值的步骤和获取第四待处理图像的步骤可以分开执行,也可以同时执行。例如,活体检测装置可先获取人数阈值,再获取第四待处理图像。又例如,活体检测装置可先获取第四待处理图像,再获取人数阈值。再例如,活体检测装置在获取人数阈值的过程中获取第四待处理图像,或在获取第四待处理图像的过程中获取人数阈值。
活体检测装置确定第四待处理图像中的人数。在该人数大于人数阈值的情况下,确定活体检测状态包括异常状态;在该人数小于或等于人数阈值的情况下,确定活体检测状态包括正常状态。
在进行活体检测时，待检测人物通常逐个依次出现在活体检测区域进行活体检测；若同时出现在活体检测区域的人数较多，说明活体检测装置处于被非活体攻击状态的概率较大。因此，在该种实施方式中，活体检测装置可以基于第四待处理图像中的人数和人数阈值，确定活体检测状态。
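作为示意，下面的 Python 草图给出基于第四待处理图像中的人数与人数阈值确定活体检测状态的一种最简实现，仅用于说明判断方向。

```python
def state_from_person_count(person_count: int, count_threshold: int) -> str:
    """第四待处理图像中的人数大于人数阈值 -> 异常状态; 否则 -> 正常状态。"""
    return "abnormal" if person_count > count_threshold else "normal"
```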
在一些实施方式中,活体检测装置在执行步骤S203的过程中执行以下步骤S16至步骤S17,其中:
步骤S16、对至少两张第一待处理图像进行活体检测处理,得到目标人物的至少两个第三活体检测结果。
这里,活体检测装置对一张第一待处理图像进行活体检测处理,可以得到目标人物的一个第三活体检测结果。活体检测装置对至少两张第一待处理图像进行活体检测处理,可以得到目标人物的至少两个第三活体检测结果。
例如,至少两张第一待处理图像包括第一待处理图像a和第一待处理图像b。活体检测装置对第一待处理图像a进行活体检测处理得到第三活体检测结果A,活体检测装置对第一待处理图像b进行活体检测处理得到第三活体检测结果B。此时,至少两个第三活体检测结果包括第三活体检测结果A和第三活体检测结果B。
步骤S17、基于至少两个第三活体检测结果,确定上述目标人物的第一活体检测结果。
在一些实施方式中,第三活体检测结果均包含目标人物为活体的活体概率。活体检测装置确定所包含的活体概率最大的第三活体检测结果为第三中间活体检测结果。在第三中间活体检测结果的活体概率大于活体检测阈值的情况下,确定第一活体检测结果为目标人物为活体;在第三中间活体检测结果的活体概率小于或等于活体检测阈值的情况下,确定目标人物的第一活体检测结果为目标人物为非活体。
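作为示意，下面的 Python 草图给出“取活体概率最大的第三活体检测结果作为第三中间活体检测结果，再与活体检测阈值比较”的一种最简实现；活体概率被假设为已由活体检测模型输出。

```python
from typing import List


def first_result_by_max_probability(live_probabilities: List[float], liveness_threshold: float) -> str:
    """live_probabilities: 每个第三活体检测结果所包含的目标人物为活体的活体概率。"""
    intermediate_probability = max(live_probabilities)  # 第三中间活体检测结果的活体概率
    return "live" if intermediate_probability > liveness_threshold else "non-live"
```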
在一些实施方式中,活体检测装置在执行步骤S17的过程中执行以下步骤S18至步骤S21,其中:
步骤S18、获取第三阈值。
这里,第三阈值的作用与第一阈值的作用、第二阈值的作用、第四阈值的作用和人数阈值的作用均不同。第三阈值的取值与第一阈值的取值可以相同,也可以不同。第三阈值的取值与第二阈值的取值可以相同,也可以不同。第三阈值的取值与第四阈值的取值可以相同,也可以不同。第三阈值为正数。
在一些实施方式中,活体检测装置接收用户通过输入组件输入的第三阈值获取第三阈值。在一些实施方式中,活体检测装置通过接收终端发送的第三阈值获取第三阈值。
步骤S19、确定至少两个第三活体检测结果中第二正结果的第三数量,第二正结果表征活体检测通过的第三活体检测结果。
步骤S20、确定第三数量和第四数量的第二比值,第四数量为第三活体检测结果的数量。
步骤S21、基于第二比值和第三阈值,确定目标人物的第一活体检测结果。
这里,至少两个第三活体检测结果中第二正结果的占比越大,说明目标人物为活体的活体概率越高。在本公开实施例中,第二比值表征第二正结果在至少两个第三活体检测结果中的占比,第三阈值为判断第二比值是大或是小的依据,即通过第三阈值可判断第二比值是大或是小。若第二比值大,说明至少两个第三活体检测结果中第二正结果的占比大,则目标人物为活体;若第二比值小,说明至少两个第三活体检测结果中第二正结果的占比小,则目标人物为非活体。
在一些实施方式中,在第二比值大于第三阈值的情况下,活体检测装置确定第二比值大,进而确定第一活体检测结果为:目标人物为活体;在第二比值小于或等于第三阈值的情况下,活体检测装置确定第二比值小,进而确定第一活体检测结果为:目标人物为非活体。在一种实施方式中,在第二比值大于或等于第三阈值的情况下,活体检测装置确定第二比值大,进而确定第一活体检测结果为:目标人物为活体;在第二比值小于第三阈值的情况下,活体检测装置确定第二比值小,进而确定第一活体检测结果为:目标人物为非活体。在一些实施方式中,活体检测装置计算第二比值的平方得到第四中间数值。在第四中间数值大于第三阈值的情况下,活体检测装置确定第二比值大,进而确定第一活体检测结果为目标人物为活体;在第四中间数值小于或等于第三阈值的情况下,活体检测装置确定第二比值小,进而确定第一活体检测结果为目标人物为非活体。
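下面的 Python 草图示意步骤S18至步骤S21中由第二比值与第三阈值确定第一活体检测结果的一种可能实现；比较方向对应上文第一种实施方式，函数名为假设。

```python
from typing import List


def first_result_from_second_ratio(third_results: List[bool], third_threshold: float) -> str:
    """third_results: 至少两个第三活体检测结果, True 表示活体检测通过(即第二正结果)。"""
    third_count = sum(1 for result in third_results if result)  # 第三数量: 第二正结果的数量
    fourth_count = len(third_results)                           # 第四数量: 第三活体检测结果的数量
    second_ratio = third_count / fourth_count                   # 第二比值
    return "live" if second_ratio > third_threshold else "non-live"
```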
在一些实施方式中,活体检测装置在执行步骤S17的过程中执行以下步骤S22至步骤S23中的一个步骤:
步骤S22、在至少两个第三活体检测结果中第二正结果的数量大于负结果的数量的情况下,确定目标人物的第一活体检测结果为目标人物为活体,第二正结果表征活体检测通过的第三活体检测结果,负结果表征活体检测未通过的第三活体检测结果。
步骤S23、在至少两个第三活体检测结果中第二正结果的数量小于或等于负结果的数量的情况下，确定目标人物的第一活体检测结果为目标人物为非活体。
例如，至少两个第三活体检测结果包括第三活体检测结果a、第三活体检测结果b、第三活体检测结果c，其中，第三活体检测结果a为活体检测未通过（即目标人物为非活体），第三活体检测结果b为活体检测通过（即目标人物为活体），第三活体检测结果c为活体检测未通过（即目标人物为非活体）。此时，第三活体检测结果a和第三活体检测结果c均为负结果，第三活体检测结果b为第二正结果。由于第二正结果的数量小于负结果的数量，活体检测装置通过执行步骤S23确定第一活体检测结果为目标人物为非活体。
例如,至少两个第三活体检测结果包括第三活体检测结果a、第三活体检测结果b、第三活体检测结果c,其中,第三活体检测结果a为活体检测通过(即目标人物为活体),第三活体检测结果b为活体检测通过(即目标人物为活体),第三活体检测结果c为活体检测未通过(即目标人物为非活体)。此时,第三活体检测结果a和第三活体检测结果b均为第二正结果,第三活体检测结果c为负结果。由于第二正结果的数量大于负结果的数量,活体检测装置通过执行步骤S22确定第一活体检测结果为目标人物为活体。
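下面的 Python 草图按步骤S22与步骤S23的多数判决逻辑给出一种最简实现，并复现正文中的两个例子；函数名为假设。

```python
from typing import List


def first_result_by_majority(third_results: List[bool]) -> str:
    positives = sum(1 for result in third_results if result)  # 第二正结果的数量
    negatives = len(third_results) - positives                # 负结果的数量
    return "live" if positives > negatives else "non-live"


print(first_result_by_majority([False, True, False]))  # non-live, 对应第一个例子
print(first_result_by_majority([True, True, False]))   # live, 对应第二个例子
```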
在一些实施方式中,在确定活体检测状态包括异常状态之后,活体检测装置在执行步骤S16之前,还执行以下步骤S24,其中:
步骤S24、增大活体检测处理的第一活体检测阈值,得到第二活体检测阈值。
这里,活体检测装置通过对图像进行活体检测处理,可以得到图像中的待检测人物为活体的活体概率。在该概率大于第一活体检测阈值的情况下,确定待检测人物为活体;在该概率小于或等于第一活体检测阈值的情况下,确定待检测人物为非活体。
即在本公开的实施方式中，第一活体检测阈值为活体检测处理中用于判断待检测人物是否为活体的依据。在一些实施方式中，第一活体检测阈值为活体检测装置在未确定活体检测状态的情况下，判断待检测人物是否为活体的依据；或第一活体检测阈值为活体检测装置在确定活体检测状态包括正常状态的情况下，判断待检测人物是否为活体的依据。在一些实施方式中，第一活体检测阈值的取值为小于1的正数。
活体检测装置通过增大第一活体检测阈值得到第二活体检测阈值，即提高了判断待检测人物是否为活体的标准。
在执行完步骤S24后,活体检测装置在执行步骤S16的过程中执行以下步骤S25,其中:
步骤S25、基于第二活体检测阈值对至少两张第一待处理图像进行活体检测处理,确定目标人物的至少两个第三活体检测结果。
本步骤中,活体检测装置将第二活体检测阈值作为判断目标人物是否为活体的依据。在一些实施方式中,活体检测装置通过对图像进行活体检测处理得到图像中的目标人物为活体的活体概率。在该概率大于第二活体检测阈值的情况下,确定目标人物为活体;在该概率小于或等于第二活体检测阈值的情况下,确定目标人物为非活体。
活体检测装置在确定活体检测状态包括异常状态的情况下,通过增大第一活体检测阈值得到第二活体检测阈值,并将第二活体检测阈值作为判断目标人物是否为活体的依据。这样,活体检测装置通过将第二活体检测阈值作为判断目标人物是否为活体的依据,可以提高活体检测标准,从而提高第三活体检测结果的准确度。
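下面的 Python 草图示意“在异常状态下增大第一活体检测阈值得到第二活体检测阈值，并据此判断每张第一待处理图像中的目标人物是否为活体”的一种可能做法；活体概率被假设为已由活体检测模型输出，阈值与增量取值均为示例。

```python
from typing import List


def detect_with_raised_threshold(live_probabilities: List[float],
                                 first_liveness_threshold: float = 0.6,
                                 increment: float = 0.2) -> List[bool]:
    """live_probabilities: 每张第一待处理图像经活体检测处理得到的活体概率。
    返回每张图像对应的第三活体检测结果(True 表示活体检测通过)。"""
    second_liveness_threshold = min(first_liveness_threshold + increment, 0.99)  # 第二活体检测阈值
    return [probability > second_liveness_threshold for probability in live_probabilities]
```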
在一些实施方式中，活体检测装置包括门禁装置，且至少两张第一待处理图像均包括目标人物的人脸。在该种实施方式中，门禁装置在确定活体检测状态包括异常状态的情况下，还执行以下步骤S26至步骤S29，其中：
步骤S26、增大人脸比对的第一人脸相似度阈值,得到第二人脸相似度阈值。
这里,门禁装置通过将目标人物的人脸图像与至少一张已注册人脸图像进行人脸比对,可以确定目标人物是否为已注册人物,其中,已注册人物包括可信任人物,已注册人脸图像包括已注册人物的人脸图像。
在本公开实施方式中,第一人脸相似度阈值为人脸比对中用于判断目标人物是否为已注册人物的依据。在一些实施方式中,门禁装置通过将目标人物的图像与至少一张已注册人脸图像进行人脸比对,可以得到目标人物与已注册人物之间的人脸相似度。在该人脸相似度大于第一人脸相似度阈值的情况下,确定目标人物为已注册人物;在该人脸相似度小于或等于第一人脸相似度阈值的情况下,确定目标人物不是已注册人物。在一些实施方式中,第一人脸相似度阈值的取值为小于1的正数。
门禁装置通过增大第一人脸相似度阈值得到第二人脸相似度阈值,即第二人脸相似度阈值比第一人脸相似度阈值大。
步骤S27、获取至少一张已注册人脸图像。
这里,已注册人脸图像均为可信任人物的人脸图像,至少一张已注册人脸图像包含所有可信任人物的人脸图像。例如,门禁装置为A公司的门禁装置,A公司共有张三、李四、王五这三名员工,已注册人脸图像包括张三的人脸图像、李四的人脸图像和王五的人脸图像。
在一些实施方式中，门禁装置将用户通过输入组件输入的至少一张人脸图像作为至少一张已注册人脸图像。在一些实施方式中，门禁装置将终端发送的至少一张人脸图像作为至少一张已注册人脸图像。
应理解，在本公开实施例中，门禁装置执行步骤S26和执行步骤S27并无先后顺序，门禁装置可先执行步骤S26再执行步骤S27，也可先执行步骤S27再执行步骤S26，还可同时执行步骤S26和步骤S27。
步骤S28、基于第二人脸相似度阈值，对待检测人脸图像与上述至少一张已注册人脸图像进行人脸比对，得到人脸比对结果，待检测人脸图像为至少两张第一待处理图像中的任意一张图像。
这里,门禁装置将第二人脸相似度阈值作为判断目标人物是否为已注册人物的依据。在一些实施方式中,门禁装置通过将待检测人脸图像与至少一张已注册人脸图像进行人脸比对,可以得到目标人物与已注册人物之间的人脸相似度。在该人脸相似度大于第二人脸相似度阈值的情况下,确定目标人物为已注册人物;在该人脸相似度小于或等于第二人脸相似度阈值的情况下,确定目标人物不是已注册人物。在一些实施方式中,第二人脸相似度阈值的取值为小于1的正数。
步骤S29、基于人脸比对结果和第一活体检测结果,确定目标人物在上述门禁装置的通行状态。
在一些实施方式中,门禁装置在人脸比对结果包括至少一张已注册人脸图像中不存在与待检测人脸图像匹配的图像的情况下,确定目标人物在门禁装置的通行状态为不可通行。门禁装置在第一活体检测结果包括目标人物为非活体的情况下,确定目标人物在门禁装置的通行状态为不可通行。门禁装置在第一活体检测结果包括目标人物为活体,且人脸比对结果包括至少一张已注册人脸图像中存在与待检测人脸图像匹配的图像的情况下,确定目标人物在门禁装置的通行状态为可通行。
门禁装置在活体检测状态包括异常状态的情况下,通过增大第一人脸相似度阈值得到第二人脸相似度阈值,并将第二人脸相似度阈值作为判断目标人物是否为已注册人物的依据。这样门禁装置通过将第二人脸相似度阈值作为判断目标人物是否为已注册人物的依据,可以提高人脸比对标准,进而提高人脸比对结果的准确度,从而提高门禁装置的识别准确度,提高在异常活体检测状态下的安全性。
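下面的 Python 草图示意步骤S26至步骤S29的一种可能组合方式：增大第一人脸相似度阈值得到第二人脸相似度阈值，进行人脸比对，再结合第一活体检测结果给出通行状态；其中人脸相似度被假设为已由人脸比对模型给出，阈值与增量取值均为示例。

```python
from typing import Dict


def access_decision(face_similarities: Dict[str, float],
                    target_is_live: bool,
                    first_similarity_threshold: float = 0.7,
                    increment: float = 0.15) -> str:
    """face_similarities: 待检测人脸图像与每张已注册人脸图像之间的人脸相似度。"""
    second_similarity_threshold = first_similarity_threshold + increment  # 第二人脸相似度阈值
    matched = any(similarity > second_similarity_threshold
                  for similarity in face_similarities.values())           # 人脸比对结果
    if target_is_live and matched:
        return "可通行"
    return "不可通行"
```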
在一些实施方式中,门禁装置在执行步骤S28的过程中执行以下步骤S30至步骤S31,其中:
步骤S30、从至少一张已注册人脸图像中选取包含参考人脸的至少一张参考人脸图像。
在一些实施方式中，在活体检测状态为异常状态的情况下，门禁装置可能随时会遭受非活体攻击，此时门禁装置只允许授权人物通行，可以提高门禁装置的识别准确度。门禁装置进而从至少一张已注册人脸图像中选取授权人物的人脸图像（即包含参考人脸的至少一张参考人脸图像）用于进行后续的人脸比对，以提高门禁装置的识别准确度。
步骤S31、将待检测人脸图像与至少一张参考人脸图像中的图像进行人脸比对,得到人脸比对结果。
这里,由于至少一张参考人脸图像包含授权人物,通过将待检测人脸图像与至少一张参考人脸图像进行人脸比对,可以确定待检测人脸图像是否包含授权人物,即确定目标人物是否为授权人物。例如,至少一张参考人脸图像包括张三的人脸图像、李四的人脸图像,那么授权人物包括张三和李四。将待检测人脸图像与参考人脸图像集进行人脸比对,可以确定待检测人脸图像是否包含张三,以及确定待检测人脸图像是否包含李四。
例如,门禁装置为A公司的门禁装置,A公司共有张三、李四、王五这三名员工,其中,张三为A公司的负责人。A公司规定在出现异常情况时,只允许张三进入公司,进而确定张三为授权人物。那么,门禁装置在确定活体检测状态包括异常状态的情况下,从已注册人脸图像中选取张三的人脸图像,作为参考人脸图像。
在该种实施方式中,门禁装置在确定活体检测状态包括异常状态的情况下,从已注册人脸图像中选取授权人物的人脸图像作为参考人脸图像,并通过将参考人脸图像与待检测人脸图像进行人脸比对得到人脸比对结果,由此降低门禁装置在活体检测状态为异常状态时遭受非活体攻击的风险,降低安全隐患。
在一些实施方式中，门禁装置通过步骤S30和步骤S31得到人脸比对结果，在人脸比对结果包括至少一张参考人脸图像中不存在与待检测人脸图像匹配的图像的情况下，门禁装置确定目标人物在门禁装置的通行状态为不可通行。在第一活体检测结果包括目标人物为非活体的情况下，门禁装置确定目标人物在门禁装置的通行状态为不可通行。在第一活体检测结果包括目标人物为活体，且人脸比对结果包括至少一张参考人脸图像中存在与待检测人脸图像匹配的图像的情况下，门禁装置确定目标人物在门禁装置的通行状态为可通行。
在一些实施方式中,门禁装置在执行步骤S28之前,还执行以下步骤S32,其中:
步骤S32、获取有效时长。
在一种获取有效时长的实施方式中,门禁装置接收用户通过输入组件输入的有效时长。在一种获取有效时长的实施方式中,门禁装置接收控制终端发送的有效时长。
在执行完步骤S32的情况下,门禁装置在执行步骤S28的过程中执行以下步骤S33至步骤S34,其中:
步骤S33、在有效时间段内,通过将待检测人脸图像与至少一张参考人脸图像中的图像进行人脸比对,得到人脸比对结果,有效时间段的起始时间为确定活体检测状态包括异常状态的时间,有效时间段的时长为有效时长。
步骤S34、在有效时间段外,通过将待检测人脸图像与至少一张已注册人脸图像进行人脸比对,得到人脸比对结果。
例如,门禁装置在2021年6月21日9点20分15秒确定活体检测状态包括异常状态。假设有效时长为2个小时。那么门禁装置在2021年6月21日9点20分15秒~2021年6月21日11点20分15秒内,通过将待检测人脸图像与至少一张参考人脸图像中的图像进行人脸比对,得到人脸比对结果。从2021年6月21日11点20分16秒开始,通过将待检测人脸图像与至少一张已注册人脸图像进行人脸比对,得到人脸比对结果。
考虑到人对门禁装置进行非活体攻击通常集中在一段时间内,因此在确定活体检测状态包括异常状态的情况下,通过将待检测人脸图像与至少一张参考人脸图像中的图像进行人脸比对得到人脸比对结果,可以降低门禁装置遭受非活体攻击的概率,从而提高安全性。
又考虑到非活体攻击的持续时间较短，若始终只将待检测人脸图像与至少一张参考人脸图像中的图像进行人脸比对得到人脸比对结果，会使除授权人物之外的可信任人物无法正常通过门禁装置。因此，使门禁装置仅在有效时长内将待检测人脸图像与至少一张参考人脸图像中的图像进行人脸比对得到人脸比对结果，不仅可降低安全隐患，还可以提高门禁装置的识别准确度，从而提升用户体验。
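下面的 Python 草图示意步骤S33与步骤S34中按有效时间段选择比对图像集合的一种可能实现；图像以字符串路径代替，时间采用 Unix 时间戳，均为示例性假设。

```python
import time
from typing import List, Optional


def gallery_for_comparison(abnormal_start: float,
                           valid_duration: float,
                           registered_faces: List[str],
                           reference_faces: List[str],
                           now: Optional[float] = None) -> List[str]:
    """有效时间段内仅与参考人脸图像(授权人物)比对, 有效时间段外与全部已注册人脸图像比对。"""
    current_time = time.time() if now is None else now
    in_valid_window = abnormal_start <= current_time <= abnormal_start + valid_duration
    return reference_faces if in_valid_window else registered_faces
```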
在一些实施方式中,活体检测装置在活体检测状态包括异常等级的情况下,执行以下步骤S35,其中:
步骤S35、基于异常等级,确定目标异常活体检测方案。
这里,异常活体检测方案为在活体检测状态包括异常状态的情况下所执行的活体检测方案。由于不同异常等级时活体检测装置遭受非活体攻击的概率不同,活体检测装置基于该异常等级确定目标异常活体检测方案,可以提高活体检测准确度。
在一些实施方式中,异常等级所表征的异常程度越高,目标异常活体检测方案的活体检测标准就越高。
在一些实施方式中,活体检测装置获取异常等级和异常活体检测方案之间的第四映射关系。活体检测装置基于该第四映射关系和活体检测状态的异常等级,确定目标异常活体检测方案。
在执行完步骤S35后,活体检测装置在执行步骤S17的过程中执行以下步骤S36,其中:
步骤S36、基于目标异常活体检测方案和至少两个第三活体检测结果,确定目标人物的第一活体检测结果。
例如，假设异常等级包括以下中的一个：一般、较高、特别，其中，特别所表征的异常程度高于较高所表征的异常程度，较高所表征的异常程度高于一般所表征的异常程度。
在异常等级包括一般的情况下,目标异常活体检测方案包括:在第三数量大于负结果的数量的情况下,确定第一活体检测结果为目标人物为活体;在第三数量小于或等于负结果的数量的情况下,确定第一活体检测结果为目标人物为非活体。其中,第三数量表征至少两个第三活体检测结果中第二正结果的数量,第二正结果表征活体检测通过的第三活体检测结果,负结果为活体检测未通过的第三活体检测结果。
在异常等级包括较高的情况下,目标异常活体检测方案包括:在第三数量与负结果的数量的差值大于第五阈值的情况下,确定第一活体检测结果为目标人物为活体;在第三数量与负结果的数量的差值小于或等于第五阈值的情况下,确定第一活体检测结果为目标人物为非活体。其中,第五阈值可以是正整数。在一些实施方式中,可以将第五阈值设置为2。
在异常等级包括特别的情况下,目标异常活体检测方案包括:在第三数量与负结果的数量的差值大于第六阈值的情况下,确定第一活体检测结果为目标人物为活体;在第三数量与负结果的数量的差值小于或等于第六阈值的情况下,确定第一活体检测结果为目标人物为非活体。其中,第六阈值可以是正整数,且第六阈值大于第五阈值。在一些实施方式中,可以将第六阈值设置为3。
在步骤S35和步骤S36中,活体检测装置基于该异常等级确定异常活体检测方案,并基于该异常活体检测方案和至少两个第三活体检测结果,确定第一活体检测结果,从而提高第一活体检测结果的准确度。
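下面的 Python 草图按正文示例中的等级名称与阈值取值（“一般”“较高”“特别”分别对应 0、2、3），示意步骤S35与步骤S36的一种可能实现；函数名与数据结构均为假设。

```python
from typing import List


def first_result_by_abnormal_level(third_results: List[bool], abnormal_level: str) -> str:
    """按异常等级选择目标异常活体检测方案: 等级越高, 对正结果超出负结果的数量要求越严格。"""
    positives = sum(1 for result in third_results if result)  # 第三数量(第二正结果的数量)
    negatives = len(third_results) - positives                # 负结果的数量
    required_margin = {"一般": 0, "较高": 2, "特别": 3}[abnormal_level]
    return "live" if positives - negatives > required_margin else "non-live"
```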
在一些实施方式中,活体检测装置还执行以下步骤S37,其中:
步骤S37、在活体检测状态包括异常状态的情况下，向管理终端发送提示指令，提示指令携带活体检测状态包括异常状态的信息。
这里,提示指令可以用于通过管理终端提示相关人员活体检测状态包括异常状态。这样,相关人员可以采取相应的措施避免活体检测装置遭受非活体攻击。
在一些实施方式中,活体检测装置包括摄像头。活体检测装置在检测到针对提示指令发送的远程视频指令的情况下,向管理终端发送摄像头实时采集到的视频。
在一些实施方式中,活体检测装置包括扬声器。活体检测装置在接收到针对提示指令发送的语音数据的情况下,通过扬声器输出该语音数据。
例如，相关管理人员通过管理终端所获取到的提示指令获知活体检测装置的活体检测状态包括异常状态，进而通过管理终端向活体检测装置发送语音数据，以警示对活体检测装置进行非活体攻击的人。活体检测装置进而在接收到该语音数据的情况下，通过扬声器输出该语音数据。
在一些实施方式中,提示指令用于指示管理终端输出报警信息,以提示相关人员及时处理活体检测装置的活体检测状态包括异常状态的情况。
在一些实施方式中,在活体检测状态包括异常状态的情况下,活体检测装置输出报警信息。
在一些实施方式中,在确定活体检测状态的异常等级为预设异常等级的情况下,门禁装置停止对目标人物进行人脸识别。在该种实施方式中,门禁装置停止对目标人物进行人脸识别,即门禁装置停止使用人脸识别相关功能。
例如,假设异常等级包括以下中的至少一个:一般、较高、特别,其中,特别所表征的异常程度高于较高所表征的异常程度,较高所表征的异常程度高于一般所表征的异常程度。
若预设异常等级为特别，那么门禁装置在确定活体检测状态的异常等级为特别的情况下，停止对目标人物进行人脸识别。
在该种实施方式中，门禁装置在确定活体检测状态的异常等级为预设异常等级的情况下，停止使用人脸识别功能，进而禁止任何人通行，从而降低误通行率。其中，误通行指门禁装置确定目标人物之外的人物可通行。
在一些实施方式中,门禁装置在确定活体检测状态包括异常状态的情况下,停止使用人脸识别功能,并提示目标人物输入身份信息或使用钥匙进入。
在一些实施方式中,输入身份信息可以是将携带身份信息的卡片放置于卡片识别区域,其中,携带身份信息的卡片包括以下中至少一个:身份证、门禁卡、工牌。这样,门禁装置可以通过识别卡片获取目标人物的身份信息,从而确定目标人物是否可通行。
在一些实施方式中,输入身份信息可以是将携带身份信息的二维码放置于二维码识别区域。这样,门禁装置可以通过识别二维码获取目标人物的身份信息,从而确定目标人物是否可以通行。
在该种实施方式中,门禁装置在确定活体检测状态包括异常状态的情况下,停止使用人脸识别功能。这样,用户无法通过人脸识别实现通行,进而无法对门禁装置进行非活体攻击,从而降低活体检测的误识别率,提高门禁装置的识别准确度。
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
本公开实施例中提供了与上述方法对应的装置,由于本公开实施例中的装置解决问题的原理与本公开实施例上述方法相似,因此装置的实施可以参见方法的实施。
图5为本公开实施例提供的一种活体检测装置的结构示意图,如图5所示,该活体检测装置1包括获取部分11和第一处理部分12。在一些实施方式中,该活体检测装置1还包括第二处理部分13。其中:
获取部分11,被配置为获取活体检测装置1所处环境的活体检测状态,活体检测状态包括正常状态或异常状态,正常状态表征活体检测装置1所处环境未处于被非活体攻击的状态,异常状态表征活体检测装置1所处环境处于被非活体攻击的状态;
所述获取部分11,还被配置为在活体检测装置1所处环境的活体检测状态包括异常状态的情况下,获取至少两张第一待处理图像,至少两张第一待处理图像均包含目标人物;
第一处理部分12,被配置为基于至少两张第一待处理图像,确定目标人物的第一活体检测结果。
在一些实施方式中,所述获取部分11,还被配置为:获取第一阈值和至少一张第二待处理图像,所述至少一张第二待处理图像中的最大时间戳小于所述至少两张第一待处理图像中的最小时间戳;对所述至少一张第二待处理图像进行活体检测处理,得到至少一个第二活体检测结果;确定所述至少一个第二活体检测结果中第一正结果的第一数量,所述第一正结果表征活体检测通过的所述第二活体检测结果;确定所述第一数量和第二数量的第一比值,所述第二数量为所述第二活体检测结果的数量;基于所述第一比值和所述第一阈值,确定所述活体检测状态。
在一些实施方式中,所述获取部分11,还被配置为:获取双目图像和第二阈值,所述双目图像包括第一图像和第二图像,且所述第一图像和所述第二图像均包括待检测人脸;确定所述待检测人脸在所述第一图像中的第一位置,并确定所述待检测人脸在所述第二图像中的第二位置;基于所述第一位置和所述第二位置,确定所述待检测人脸在所述双目图像中的视差位移;基于所述视差位移和所述第二阈值,确定所述活体检测状态。
在一些实施方式中,所述第一处理部分12,还被配置为:对所述至少两张第一待处理图像进行活体检测处理,得到所述目标人物的至少两个第三活体检测结果;基于所述至少两个第三活体检测结果,确定所述目标人物的第一活体检测结果。
在一些实施方式中，所述第一处理部分12，还被配置为：获取第三阈值；确定所述至少两个第三活体检测结果中第二正结果的第三数量，所述第二正结果表征活体检测通过的所述第三活体检测结果；确定所述第三数量和第四数量的第二比值，所述第四数量为所述第三活体检测结果的数量；基于所述第二比值和所述第三阈值，确定所述目标人物的第一活体检测结果。
在一些实施方式中,所述第一处理部分12,还被配置为:在所述至少两个第三活体检测结果中第二正结果的数量大于负结果的数量的情况下,确定所述目标人物的第一活体检测结果为所述目标人物为活体,所述第二正结果表征活体检测通过的所述第三活体检测结果,所述负结果表征活体检测未通过的所述第三活体检测结果;在所述至少两个第三活体检测结果中第二正结果的数量小于或等于所述负结果的数量的情况下,确定所述目标人物的第一活体检测结果为所述目标人物为非活体。
在一些实施方式中,所述活体检测装置1还包括:第二处理部分13,被配置为在确定所述活体检测状态包括所述异常状态之后,在所述对所述至少两张第一待处理图像进行活体检测处理,得到所述目标人物的至少两个第三活体检测结果之前,增大活体检测处理的第一活体检测阈值得到第二活体检测阈值;所述第一处理部分12,还被配置为:基于所述第二活体检测阈值对所述至少两张第一待处理图像进行活体检测处理,得到所述目标人物的至少两个第三活体检测结果。
在一些实施方式中，所述活体检测装置1包括门禁装置，所述至少两张第一待处理图像均包括所述目标人物的人脸；在确定所述活体检测状态包括所述异常状态的情况下，所述第一处理部分12，还被配置为增大人脸比对的第一人脸相似度阈值得到第二人脸相似度阈值；所述获取部分11，还被配置为获取至少一张已注册人脸图像；所述第一处理部分12，还被配置为基于所述第二人脸相似度阈值，对待检测人脸图像与所述至少一张已注册人脸图像进行人脸比对，得到人脸比对结果，所述待检测人脸图像为所述至少两张第一待处理图像中的任意一张图像；所述第一处理部分12，还被配置为基于所述人脸比对结果和所述第一活体检测结果，确定所述目标人物在所述门禁装置的通行状态。
在一些实施方式中,所述第一处理部分12,还被配置为:在所述人脸比对结果包括所述至少一张已注册人脸图像中不存在与所述待检测人脸图像匹配的图像的情况下,确定所述目标人物在所述门禁装置的通行状态为不可通行;在所述第一活体检测结果包括所述目标人物为非活体的情况下,确定所述目标人物在所述门禁装置的通行状态为不可通行;在所述第一活体检测结果包括所述目标人物为活体,且所述人脸比对结果包括所述至少一张已注册人脸图像中存在与所述待检测人脸图像匹配的图像的情况下,确定所述目标人物在所述门禁装置的通行状态为可通行。
本公开实施例中,所述获取部分11可以是数据接口,第一处理部分12和第二处理部分13均可以是处理器。
在一些实施例中,本公开实施例提供的装置具有的功能或包含的部分可以用于执行上文方法实施例描述的方法,其具体实现可以参照上文方法实施例的描述。
在本公开实施例以及其他的实施例中,“部分”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是单元,还可以是模块也可以是非模块化的。
图6为本公开实施例提供的一种活体检测装置的硬件结构示意图。该活体检测装置2包括处理器21,存储器22,输入装置23,输出装置24。该处理器21、存储器22、输入装置23和输出装置24通过连接器25相耦合,该连接器25包括各类接口、传输线或总线等等,本公开实施例对此不作限定。应当理解,本公开的各个实施例中,耦合是指通过特定方式的相互联系,包括直接相连或者通过其他设备间接相连,例如可以通过各类接口、传输线、总线等相连。
处理器21可以是一个或多个图形处理器(Graphics Processing Unit,GPU),在处理器21是一个GPU的情况下,该GPU可以是单核GPU,也可以是多核GPU。在一些实施方式中,处理器21可以是多个GPU构成的处理器组,多个处理器之间通过一个或多个总线彼此耦合。在一些实施方式中,该处理器还可以为其他类型的处理器等等,本公开实施例不作限定。
存储器22可用于存储计算机程序指令,以及用于执行本公开方案的程序代码在内的各类计算机程序代码。在一些实施方式中,存储器包括但不限于是随机存储记忆体(Random Access Memory,RAM)、只读存储器(Read-only Memory,ROM)、可擦除可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、或便携式只读存储器(Compact Disc Read-only Memory,CD-ROM),该存储器用于相关指令及数据。
输入装置23用于输入数据和/或信号,以及输出装置24用于输出数据和/或信号。输入装置23和输出装置24可以是独立的器件,也可以是一个整体的器件。
可理解,本公开实施例中,存储器22不仅可用于存储相关指令,还可用于存储相关数据,如该存储器22可用于存储通过输入装置23获取的至少两张第一待处理图像,又或者该存储器22还可用于存储通过处理器21得到的第一活体检测结果等等,本公开实施例对于该存储器中具体所存储的数据不作限定。
可以理解的是,图6示出了一种活体检测装置的简化设计。在实际应用中,活体检测装置还可以分别包含必要的其他元件,包含但不限于任意数量的输入/输出装置、处理器、存储器等,而所有可以实现本公开实施例的活体检测装置都在本公开的保护范围之内。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本公开的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和部分的具体工作过程,可以参考前述方法实施例中的对应过程。所属领域的技术人员还可以清楚地了解到,本公开各个实施例描述各有侧重,因此,在某一实施例未描述或未详细描述的部分可以参见其他实施例的记载。
在本公开所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述部分的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个部分或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或部分的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本公开实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者通过所述计算机可读存储介质进行传输。所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(Digital Subscriber Line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取 的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,数字通用光盘(Digital Versatile Disc,DVD))、或者半导体介质(例如固态硬盘(Solid State Disk,SSD))等。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,该流程可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:只读存储器(Read-only Memory,ROM)或随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可存储程序代码的介质。
工业实用性
本公开实施例提供了一种活体检测方法及装置、电子设备、计算机可读存储介质和计算机程序产品。该方法应用于活体检测装置,包括:获取活体检测装置所处环境的活体检测状态,活体检测状态包括正常状态或异常状态,正常状态表征活体检测装置所处环境未处于被非活体攻击的状态,异常状态表征活体检测装置所处环境处于被非活体攻击的状态;在活体检测装置所处环境的活体检测状态包括异常状态的情况下,获取至少两张第一待处理图像,至少两张第一待处理图像均包含目标人物;基于至少两张第一待处理图像,确定目标人物的第一活体检测结果。上述方案能够提高活体检测标准,从而提高活体检测结果的准确度。

Claims (21)

  1. 一种活体检测方法,所述方法应用于活体检测装置,所述方法包括:
    获取所述活体检测装置所处环境的活体检测状态,所述活体检测状态包括正常状态或异常状态,所述正常状态表征所述活体检测装置所处环境未处于被非活体攻击的状态,所述异常状态表征所述活体检测装置所处环境处于被非活体攻击的状态;
    在所述活体检测装置所处环境的活体检测状态包括所述异常状态的情况下,获取至少两张第一待处理图像,所述至少两张第一待处理图像均包含目标人物;
    基于所述至少两张第一待处理图像,确定所述目标人物的第一活体检测结果。
  2. 根据权利要求1所述的方法,其中,所述获取所述活体检测装置所处环境的活体检测状态,包括:
    获取第一阈值和至少一张第二待处理图像,所述至少一张第二待处理图像中的最大时间戳小于所述至少两张第一待处理图像中的最小时间戳;
    对所述至少一张第二待处理图像进行活体检测处理,得到至少一个第二活体检测结果;
    确定所述至少一个第二活体检测结果中第一正结果的第一数量,所述第一正结果表征活体检测通过的所述第二活体检测结果;
    确定所述第一数量和第二数量的第一比值,所述第二数量为所述第二活体检测结果的数量;
    基于所述第一比值和所述第一阈值,确定所述活体检测状态。
  3. 根据权利要求1所述的方法,其中,所述获取所述活体检测装置所处环境的活体检测状态,包括:
    获取双目图像和第二阈值,所述双目图像包括第一图像和第二图像,且所述第一图像和所述第二图像均包括待检测人脸;
    确定所述待检测人脸在所述第一图像中的第一位置,并确定所述待检测人脸在所述第二图像中的第二位置;
    基于所述第一位置和所述第二位置,确定所述待检测人脸在所述双目图像中的视差位移;
    基于所述视差位移和所述第二阈值,确定所述活体检测状态。
  4. 根据权利要求1至3中任意一项所述的方法,其中,所述基于至少两张第一待处理图像,确定所述目标人物的第一活体检测结果,包括:
    对所述至少两张第一待处理图像进行活体检测处理,得到所述目标人物的至少两个第三活体检测结果;
    基于所述至少两个第三活体检测结果,确定所述目标人物的第一活体检测结果。
  5. 根据权利要求4所述的方法,其中,所述基于所述至少两个第三活体检测结果,确定所述目标人物的第一活体检测结果,包括:
    获取第三阈值;
    确定所述至少两个第三活体检测结果中第二正结果的第三数量,所述第二正结果表征活体检测通过的所述第三活体检测结果;
    确定所述第三数量和第四数量的第二比值,所述第四数量为所述第三活体检测结果的数量;
    基于所述第二比值和所述第三阈值,确定所述目标人物的第一活体检测结果。
  6. 根据权利要求4所述的方法,其中,所述基于所述至少两个第三活体检测结果,确定所述目标人物的第一活体检测结果,包括:
    在所述至少两个第三活体检测结果中第二正结果的数量大于负结果的数量的情况下,确定所述目标人物的第一活体检测结果为所述目标人物为活体,所述第二正结果表征活体检测通过的所述第三活体检测结果,所述负结果表征活体检测未通过的所述第三活体检测结果;
    在所述至少两个第三活体检测结果中第二正结果的数量小于或等于所述负结果的数量的情况下,确定所述目标人物的第一活体检测结果为所述目标人物为非活体。
  7. 根据权利要求4至6中任意一项所述的方法，其中，在确定所述活体检测状态包括所述异常状态之后，在所述对所述至少两张第一待处理图像进行活体检测处理，得到所述目标人物的至少两个第三活体检测结果之前，所述方法还包括：
    增大活体检测处理的第一活体检测阈值,得到第二活体检测阈值;
    所述对所述至少两张第一待处理图像进行活体检测处理,得到所述目标人物的至少两个第三活体检测结果,包括:
    基于所述第二活体检测阈值对所述至少两张第一待处理图像进行活体检测处理,得到所述目标人物的至少两个第三活体检测结果。
  8. 根据权利要求4至7中任意一项所述的方法,其中,所述活体检测装置包括门禁装置,所述至少两张第一待处理图像均包括所述目标人物的人脸图像;在确定所述活体检测状态包括所述异常状态的情况下,所述方法还包括:
    增大人脸比对的第一人脸相似度阈值,得到第二人脸相似度阈值;
    获取至少一张已注册人脸图像;
    基于所述第二人脸相似度阈值,对待检测人脸图像与所述至少一张已注册人脸图像进行人脸比对,得到人脸比对结果,所述待检测人脸图像为所述至少两张第一待处理图像中的任意一张图像;
    基于所述人脸比对结果和所述第一活体检测结果,确定所述目标人物在所述门禁装置的通行状态。
  9. 根据权利要求8所述的方法,其中,所述基于所述人脸比对结果和所述第一活体检测结果,确定所述目标人物在所述门禁装置的通行状态,包括:
    在所述人脸比对结果包括所述至少一张已注册人脸图像中不存在与所述待检测人脸图像匹配的图像的情况下,确定所述目标人物在所述门禁装置的通行状态为不可通行;
    在所述第一活体检测结果包括所述目标人物为非活体的情况下,确定所述目标人物在所述门禁装置的通行状态为不可通行;
    在所述第一活体检测结果包括所述目标人物为活体,且所述人脸比对结果包括所述至少一张已注册人脸图像中存在与所述待检测人脸图像匹配的图像的情况下,确定所述目标人物在所述门禁装置的通行状态为可通行。
  10. 一种活体检测装置,所述活体检测装置包括:
    获取部分,被配置为获取所述活体检测装置所处环境的活体检测状态,所述活体检测状态包括正常状态或异常状态,所述正常状态表征所述活体检测装置所处环境未处于被非活体攻击的状态,所述异常状态表征所述活体检测装置所处环境处于被非活体攻击的状态;
    所述获取部分,还被配置为在所述活体检测装置所处环境的活体检测状态包括所述异常状态的情况下,获取至少两张第一待处理图像,所述至少两张第一待处理图像均包含目标人物;
    第一处理部分,被配置为基于所述至少两张第一待处理图像,确定所述目标人物的第一活体检测结果。
  11. 根据权利要求10所述的装置,其中,所述获取部分,还被配置为:
    获取第一阈值和至少一张第二待处理图像,所述至少一张第二待处理图像中的最大时间戳小于所述至少两张第一待处理图像中的最小时间戳;对所述至少一张第二待处理图像进行活体检测处理,得到至少一个第二活体检测结果;确定所述至少一个第二活体检测结果中第一正结果的第一数量,所述第一正结果表征活体检测通过的所述第二活体检测结果;确定所述第一数量和第二数量的第一比值,所述第二数量为所述第二活体检测结果的数量;基于所述第一比值和所述第一阈值,确定所述活体检测状态。
  12. 根据权利要求10所述的装置,其中,所述获取部分,还被配置为:
    获取双目图像和第二阈值,所述双目图像包括第一图像和第二图像,且所述第一图像和所述第二图像均包括待检测人脸;确定所述待检测人脸在所述第一图像中的第一位置,并确定所述待检测人脸在所述第二图像中的第二位置;基于所述第一位置和所述第二位置,确定所述待检测人脸在所述双目图像中的视差位移;基于所述视差位移和所述第二阈值,确定所述活体检测状态。
  13. 根据权利要求10至12中任意一项所述的装置，其中，所述第一处理部分，还被配置为：
    对所述至少两张第一待处理图像进行活体检测处理,得到所述目标人物的至少两个第三活体检测结果;基于所述至少两个第三活体检测结果,确定所述目标人物的第一活体检测结果。
  14. 根据权利要求13所述的装置,其中,所述第一处理部分,还被配置为:
    获取第三阈值;确定所述至少两个第三活体检测结果中第二正结果的第三数量,所述第二正结果表征活体检测通过的所述第三活体检测结果;确定所述第三数量和第四数量的第二比值,所述第四数量为所述第三活体检测结果的数量;基于所述第二比值和所述第三阈值,确定所述目标人物的第一活体检测结果。
  15. 根据权利要求13所述的装置,其中,所述第一处理部分,还被配置为:
    在所述至少两个第三活体检测结果中第二正结果的数量大于负结果的数量的情况下,确定所述目标人物的第一活体检测结果为所述目标人物为活体,所述第二正结果表征活体检测通过的所述第三活体检测结果,所述负结果表征活体检测未通过的所述第三活体检测结果;
    在所述至少两个第三活体检测结果中第二正结果的数量小于或等于所述负结果的数量的情况下,确定所述目标人物的第一活体检测结果为所述目标人物为非活体。
  16. 根据权利要求13至15中任意一项所述的装置,其中,所述装置还包括第二处理部分,所述第二处理部分,被配置为在确定所述活体检测状态包括所述异常状态之后,在所述对所述至少两张第一待处理图像进行活体检测处理,得到所述目标人物的至少两个第三活体检测结果之前,增大活体检测处理的第一活体检测阈值,得到第二活体检测阈值;
    所述第一处理部分,还被配置为:基于所述第二活体检测阈值对所述至少两张第一待处理图像进行活体检测处理,得到所述目标人物的至少两个第三活体检测结果。
  17. 根据权利要求13至16中任意一项所述的装置,其中,所述活体检测装置包括门禁装置,所述至少两张第一待处理图像均包括所述目标人物的人脸图像;在确定所述活体检测状态包括所述异常状态的情况下,所述第一处理部分,还被配置为:增大人脸比对的第一人脸相似度阈值,得到第二人脸相似度阈值;
    所述获取部分,还被配置为:获取至少一张已注册人脸图像;
    所述第一处理部分,还被配置为:基于所述第二人脸相似度阈值,对待检测人脸图像与所述至少一张已注册人脸图像进行人脸比对,得到人脸比对结果,所述待检测人脸图像为所述至少两张第一待处理图像中的任意一张图像;基于所述人脸比对结果和所述第一活体检测结果,确定所述目标人物在所述门禁装置的通行状态。
  18. 根据权利要求17所述的装置,其中,所述第一处理部分,还被配置为:
    在所述人脸比对结果包括所述至少一张已注册人脸图像中不存在与所述待检测人脸图像匹配的图像的情况下,确定所述目标人物在所述门禁装置的通行状态为不可通行;
    在所述第一活体检测结果包括所述目标人物为非活体的情况下,确定所述目标人物在所述门禁装置的通行状态为不可通行;
    在所述第一活体检测结果包括所述目标人物为活体,且所述人脸比对结果包括所述至少一张已注册人脸图像中存在与所述待检测人脸图像匹配的图像的情况下,确定所述目标人物在所述门禁装置的通行状态为可通行。
  19. 一种电子设备,包括:处理器和存储器,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,在所述处理器执行所述计算机指令的情况下,所述电子设备执行权利要求1至9中任意一项所述的方法。
  20. 一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,所述计算机程序包括程序指令,在所述程序指令被处理器执行的情况下,使所述处理器执行权利要求1至9中任意一项所述的方法。
  21. 一种计算机程序产品,所述计算机程序产品包括计算机程序或指令,在所述计算机程序或指令在电子设备上运行的情况下,使得所述电子设备执行权利要求1至9中任意一项所述的方法。
PCT/CN2022/079043 2021-08-26 2022-03-03 活体检测方法及装置、电子设备、计算机可读存储介质和计算机程序产品 WO2023024473A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110988130.0A CN113705428A (zh) 2021-08-26 2021-08-26 活体检测方法及装置、电子设备及计算机可读存储介质
CN202110988130.0 2021-08-26

Publications (1)

Publication Number Publication Date
WO2023024473A1 true WO2023024473A1 (zh) 2023-03-02

Family

ID=78655172

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/079043 WO2023024473A1 (zh) 2021-08-26 2022-03-03 活体检测方法及装置、电子设备、计算机可读存储介质和计算机程序产品

Country Status (2)

Country Link
CN (1) CN113705428A (zh)
WO (1) WO2023024473A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705428A (zh) * 2021-08-26 2021-11-26 北京市商汤科技开发有限公司 活体检测方法及装置、电子设备及计算机可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100316261A1 (en) * 2009-06-11 2010-12-16 Fujitsu Limited Biometric authentication device, authentication accuracy evaluation device and biometric authentication method
CN108875688A (zh) * 2018-06-28 2018-11-23 北京旷视科技有限公司 一种活体检测方法、装置、系统及存储介质
CN111160178A (zh) * 2019-12-19 2020-05-15 深圳市商汤科技有限公司 图像处理方法及装置、处理器、电子设备及存储介质
CN111291668A (zh) * 2020-01-22 2020-06-16 北京三快在线科技有限公司 活体检测方法、装置、电子设备及可读存储介质
CN112926464A (zh) * 2021-03-01 2021-06-08 创新奇智(重庆)科技有限公司 一种人脸活体检测方法以及装置
CN113705428A (zh) * 2021-08-26 2021-11-26 北京市商汤科技开发有限公司 活体检测方法及装置、电子设备及计算机可读存储介质

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512632B (zh) * 2015-12-09 2019-04-05 北京旷视科技有限公司 活体检测方法及装置
JP6794692B2 (ja) * 2016-07-19 2020-12-02 富士通株式会社 センサデータ学習方法、センサデータ学習プログラム、及びセンサデータ学習装置
US10776609B2 (en) * 2018-02-26 2020-09-15 Samsung Electronics Co., Ltd. Method and system for facial recognition
CN108460266A (zh) * 2018-03-22 2018-08-28 百度在线网络技术(北京)有限公司 用于认证身份的方法和装置
CN108596041B (zh) * 2018-03-28 2019-05-14 中科博宏(北京)科技有限公司 一种基于视频的人脸活体检测方法
CN109635770A (zh) * 2018-12-20 2019-04-16 上海瑾盛通信科技有限公司 活体检测方法、装置、存储介质及电子设备
CN111767760A (zh) * 2019-04-01 2020-10-13 北京市商汤科技开发有限公司 活体检测方法和装置、电子设备及存储介质
CN110598580A (zh) * 2019-08-25 2019-12-20 南京理工大学 一种人脸活体检测方法
CN110909693B (zh) * 2019-11-27 2023-06-20 深圳华付技术股份有限公司 3d人脸活体检测方法、装置、计算机设备及存储介质
CN111209820B (zh) * 2019-12-30 2024-04-23 新大陆数字技术股份有限公司 人脸活体检测方法、系统、设备及可读存储介质
CN111767788A (zh) * 2020-05-12 2020-10-13 贵阳像树岭科技有限公司 一种非交互式单目活体检测方法
CN112257538A (zh) * 2020-10-15 2021-01-22 杭州锐颖科技有限公司 基于双目深度信息的活体检测方法、设备及存储介质
CN113011385B (zh) * 2021-04-13 2024-07-05 深圳市赛为智能股份有限公司 人脸静默活体检测方法、装置、计算机设备及存储介质

Also Published As

Publication number Publication date
CN113705428A (zh) 2021-11-26

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE