CN109492585B - Living body detection method and electronic equipment - Google Patents

Living body detection method and electronic equipment

Info

Publication number
CN109492585B
Authority
CN
China
Prior art keywords
identified
behavior
frame
user
attack
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811331045.1A
Other languages
Chinese (zh)
Other versions
CN109492585A (en)
Inventor
杨震宇
刘振华
张建邦
师忠超
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201811331045.1A
Publication of CN109492585A
Application granted
Publication of CN109492585B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification

Abstract

The application provides a living body detection method comprising: detecting a received video frame stream to obtain a detection frame, wherein the detection frame contains an object to be identified that satisfies a preset condition, and the video frame stream contains at least two frames of images; acquiring at least one frame of image preceding the detection frame; analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior identification result for the object to be identified; and, if the behavior identification result characterizes the object to be identified as an attacking user, executing a corresponding operation, wherein an attacking user is a user impersonating another person's identity. With this method, whether a user is an attacking user can be determined by analyzing the user's actions. Compared with the massive computation required in the prior art to distinguish real skin from other materials, the method involves a small amount of computation and reduces the data-processing burden.

Description

Living body detection method and electronic equipment
Technical Field
The present invention relates to the field of electronic devices, and more particularly, to a living body detection method and an electronic device.
Background
In the era of artificial intelligence, face detection and recognition technology has been widely applied in fields such as finance, security, education, and medical care, and has become an important means of user identity recognition and authentication. Meanwhile, as with other biometric recognition schemes, face recognition carries the potential safety hazard of counterfeit (spoofing) attacks.
For security reasons, face recognition applications typically require some method of living body (liveness) detection to determine whether a detected face is a real face or a counterfeit attack.
Existing living body detection methods need to analyze pictures or videos acquired from the user. Most image analysis algorithms analyze the image quality, texture, or frequency domain of the face region in order to distinguish real skin from other materials (paper, electronic screens, and the like), but such algorithms have high computational complexity and therefore require the recognition device to have high computing power.
Disclosure of Invention
In view of the above, the invention provides a living body detection method that solves the problem of high computational complexity in the prior art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a living body detection method, comprising:
detecting a received video frame stream to obtain a detection frame, wherein the detection frame comprises an object to be identified meeting a preset condition, and the video frame stream comprises at least two frames of images;
acquiring at least one frame of image before the detection frame;
analyzing the behaviors of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior identification result of the object to be identified;
if the object to be identified is characterized as an attacking user based on the behavior identification result, executing a corresponding operation, wherein an attacking user is a user impersonating another person's identity.
Preferably, the method further comprises:
if the object to be identified is characterized as a real user based on the behavior identification result, identifying the object to be identified in the detection frame according to a preset first identification mode, wherein a real user is a user who does not impersonate another person's identity.
Preferably, the performing the corresponding operation includes:
identifying the object to be identified in the detection frame according to a preset second identification mode to obtain the true identity of the attacking user.
Preferably, the identifying the object to be identified in the detection frame according to the preset second identification mode specifically includes:
acquiring an analysis frame, wherein the moment of the analysis frame is before the moment of the detection frame in the video frame stream;
acquiring a first characteristic of an attacking user in the analysis frame;
and identifying the first characteristic according to a preset first identification mode to obtain the true identity of the attacking user.
Preferably, the identifying the object to be identified in the detection frame according to the preset second identification mode specifically includes:
acquiring a second characteristic of the object to be identified in the detection frame;
and identifying the second characteristic according to a preset second identification mode to obtain the true identity of the attacking user.
Preferably, the method further comprises:
and training a preset deep learning model based on the video frame stream and the corresponding label that the object to be identified is an attacking user, so that a newly acquired video frame stream is processed by the trained deep learning model to obtain the identity of the object to be identified.
Preferably, the analyzing the behaviors of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior identification result of the object to be identified includes:
based on the detection frame and the at least one frame of image, identifying the behavior of the object to be identified;
if the behavior of the object to be identified does not meet the preset behavior condition, judging that the behavior of the object to be identified is an attack behavior;
and the behavior identification result characterizes the object to be identified as an attacking user.
Preferably, the analyzing the behaviors of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior identification result of the object to be identified includes:
based on the detection frame and the at least one frame of image, identifying the behavior of the object to be identified;
analyzing the detection frame and the at least one frame of image to obtain a first object;
determining that the behavior of the object to be identified is an attack behavior based on the relative position relation between the object to be identified and the first object meeting a preset attack condition, and/or the behavior of the object to be identified not meeting a preset action condition;
and the behavior identification result characterizes the object to be identified as an attacking user.
Preferably, the analyzing the behaviors of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior identification result of the object to be identified includes:
based on the detection frame and the at least one frame of image, identifying the behavior of the object to be identified;
analyzing the detection frame to obtain a third characteristic of the object to be identified;
and judging that the behavior of the object to be identified is an attack behavior based on the third characteristic not meeting a preset feature condition, and/or the behavior of the object to be identified not meeting a preset action condition, the behavior identification result characterizing the object to be identified as an attacking user.
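The claim variants above combine their tests with "and/or": a behavior is judged an attack when it violates a preset action condition, when the relative position of the object and a first object satisfies a preset attack condition, or when a third characteristic fails a preset feature condition. As an illustrative sketch (not part of the patent), the decision logic can be expressed as:

```python
# Illustrative sketch of the claims' "and/or" attack decision. Each
# predicate is computed elsewhere (by the behavior-analysis model);
# any single satisfied attack criterion suffices to flag the behavior.

def is_attack_behaviour(violates_action_condition,
                        matches_attack_position=False,
                        fails_feature_condition=False):
    return (violates_action_condition
            or matches_attack_position
            or fails_feature_condition)

def behaviour_identification_result(**criteria):
    """Map the boolean decision to the result named in the claims."""
    return "attack user" if is_attack_behaviour(**criteria) else "real user"
```

Note that the claims are deliberately disjunctive: occlusion by a held photo can be caught by the position test even when the motion itself looks normal.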
An electronic device, comprising:
the camera is used for collecting the video of the image collecting area;
the processor is used for detecting the received video frame stream to obtain a detection frame, wherein the detection frame comprises an object to be identified that meets a preset condition, and the video frame stream comprises at least two frames of images; acquiring at least one frame of image before the detection frame; analyzing the behaviors of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior identification result of the object to be identified; and, if the object to be identified is characterized as an attacking user based on the behavior identification result, executing a corresponding operation, wherein an attacking user is a user impersonating another person's identity.
As can be seen from the above technical solution, compared with the prior art, the present invention provides a living body detection method comprising: detecting a received video frame stream to obtain a detection frame, wherein the detection frame contains an object to be identified that satisfies a preset condition, and the video frame stream contains at least two frames of images; acquiring at least one frame of image preceding the detection frame; analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior identification result for the object to be identified; and, if the behavior identification result characterizes the object to be identified as an attacking user, executing a corresponding operation, wherein an attacking user is a user impersonating another person's identity. With this method, the frames preceding the detection frame that contains the qualifying object are acquired and analyzed to determine the object's behavior, and thus whether the object is an attacking user, thereby realizing living body detection. In this process, whether the object is an attacking user is determined from the user's actions; compared with the massive computation required in the prior art to distinguish real skin from other materials, the amount of computation is small, reducing the data-processing burden.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for the embodiments or the description of the prior art are briefly introduced below. It is apparent that the following drawings depict only some embodiments of the present invention, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart of embodiment 1 of a living body detection method provided herein;
FIG. 2(a), FIG. 2(b), and FIG. 2(c) are schematic diagrams of three attack behaviors in embodiment 1 of the living body detection method provided herein;
FIG. 3 is a flowchart of embodiment 2 of the living body detection method provided herein;
FIG. 4 is a flowchart of embodiment 3 of the living body detection method provided herein;
FIG. 5 is a schematic structural diagram of embodiment 1 of an electronic device provided herein.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
As shown in fig. 1, a flowchart of embodiment 1 of a living body detection method provided in the present application, applied to an electronic device having an image acquisition function, the method includes the following steps:
step S101: detecting a received video frame stream to obtain a detection frame, wherein the detection frame comprises an object to be identified meeting a preset condition, and the video frame stream comprises at least two frames of images.
The video frame stream may be obtained by shooting through a video acquisition device connected with the electronic device, or may be obtained by shooting through a video acquisition device provided in the electronic device.
The video acquisition equipment connected with the electronic device, or the video acquisition device provided in the electronic device, transmits the captured video frame stream to the electronic device, and the electronic device receives it. The received video frame stream may include at least two frames of images, each frame being a single image.
The electronic device may detect the received video frame stream after receiving the video frame stream, and use a video frame in the video frame stream (e.g., in a detection area or within a set period of time) that meets a requirement as a detection frame.
It should be noted that the detection frame includes an object to be identified that satisfies a preset condition. The preset condition may be set according to actual needs; for example, it may require that the object lie in the central region of the image and occupy a sufficiently large area.
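The preset condition just described (object centered in the image and occupying a large area) can be sketched as a simple geometric check. The sketch below is illustrative only: it assumes a face bounding box supplied by any off-the-shelf face detector, and the tolerance and area thresholds are invented for the example, not taken from the patent.

```python
# Hypothetical check for step S101's preset condition: a candidate frame
# becomes the detection frame only if the detected face box sits near the
# image center and covers a sufficiently large fraction of the frame.

def meets_preset_condition(box, frame_w, frame_h,
                           center_tol=0.25, min_area_ratio=0.10):
    """box = (x, y, w, h) of the detected face in pixel coordinates."""
    x, y, w, h = box
    # Face center must lie within `center_tol` of the frame center
    # (as a fraction of the frame dimensions).
    cx, cy = x + w / 2, y + h / 2
    centred = (abs(cx - frame_w / 2) <= center_tol * frame_w and
               abs(cy - frame_h / 2) <= center_tol * frame_h)
    # Face must occupy at least `min_area_ratio` of the frame area.
    large_enough = (w * h) / (frame_w * frame_h) >= min_area_ratio
    return centred and large_enough

def find_detection_frame(face_boxes, frame_w, frame_h):
    """Return the index of the first frame whose face box satisfies the
    preset condition, or None. face_boxes[i] is the box for frame i
    (None if no face was found in that frame)."""
    for i, box in enumerate(face_boxes):
        if box is not None and meets_preset_condition(box, frame_w, frame_h):
            return i
    return None
```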
Step S102: acquiring at least one frame of image before the detection frame.
In this embodiment, at least one frame image before the detection frame may be acquired from the received video frame stream.
Of course, if no image precedes the detection frame in the received video frame stream, the video frame stream received before it may be obtained, and at least one frame of image preceding the detection frame may be acquired from that earlier stream.
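Step S102 presupposes that earlier frames are still available once the detection frame has been found. A minimal sketch of such a rolling buffer follows; the capacity of 30 frames is an illustrative assumption, as the patent does not specify one.

```python
from collections import deque

# Keep a rolling buffer of recent frames so that, when the most recently
# pushed frame turns out to be the detection frame, the frames preceding
# it are still available for behavior analysis (step S102).

class FrameBuffer:
    def __init__(self, max_frames=30):
        self._frames = deque(maxlen=max_frames)  # oldest entries drop off

    def push(self, frame):
        self._frames.append(frame)

    def frames_before_last(self, n):
        """Return up to n frames immediately preceding the most recently
        pushed frame (the detection frame), ordered oldest first."""
        history = list(self._frames)[:-1]  # everything but the detection frame
        return history[-n:]
```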
Step S103: analyzing the behaviors of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior identification result of the object to be identified.
The behavior of the object to be identified can be understood as the action behavior of the object. Analyzing the behavior of the object in the detection frame and the at least one preceding frame determines which actions the object performed (for example, moving an object or blocking the lens) in the detection frame and the frames before it, thereby completing a behavior analysis of the object at different points in time.
The result of analyzing the behaviors of the object to be identified in the detection frame and the at least one frame of image can be used as the behavior identification result of the object to be identified.
Preferably, the behavior of the object to be identified in the detection frame and in at least one frame preceding it may be analyzed by a machine analysis model (e.g., a deep learning model or a neural network model). Effective training of the model ensures its accuracy, and hence the accuracy with which it analyzes the object's behavior.
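The patent leaves the machine analysis model open. As a deliberately simplified, hypothetical stand-in for such a model, the sketch below flags attack-like behavior when the face region changes abruptly between consecutive frames, as when a photo or screen is suddenly raised in front of the lens; a production system would substitute a trained sequence model, and the threshold here is invented for the example.

```python
import numpy as np

# Simplified behavior analysis: compare consecutive face-region crops and
# flag an abrupt appearance change as attack-like. This heuristic stands in
# for the patent's (unspecified) deep learning / neural network model.

def behaviour_result(face_regions, change_threshold=60.0):
    """face_regions: list of equal-sized grayscale arrays, one per frame,
    ordered oldest to newest. Returns 'attack' if any consecutive pair of
    regions differs abruptly, otherwise 'normal'."""
    for prev, cur in zip(face_regions, face_regions[1:]):
        # Cast to a signed type so the subtraction cannot wrap around.
        mean_abs_diff = float(np.mean(np.abs(cur.astype(np.int16) -
                                             prev.astype(np.int16))))
        if mean_abs_diff > change_threshold:
            return "attack"
    return "normal"
```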
Step S104: if the object to be identified is characterized as an attacking user based on the behavior identification result, executing a corresponding operation.
Because the behavior identification result is determined from the behavior analysis of the object at different points in time in step S103, the type of object that the result characterizes is more accurate.
An attacking user can be understood as a user impersonating another person's identity.
It can be understood that, before identity recognition, a user who impersonates another person generally performs an attack action (for example, blocking the face with a picture, displaying another person's image on an electronic display screen, or wearing a lifelike mask) in order to impersonate that person's identity.
The attack behaviors can be described more clearly with reference to the schematic diagrams of figs. 2(a)-(c): blocking the face with a picture is shown in fig. 2(a), displaying another person's image on an electronic display screen in fig. 2(b), and impersonating another person with a lifelike mask in fig. 2(c). However, figs. 2(a)-(c) are only examples of this embodiment and are not intended to enumerate all modes of attack.
If the object to be identified is characterized as an attacking user based on the behavior identification result, a corresponding operation can be executed to counter the attack.
In summary, the living body detection method provided in this embodiment includes: detecting a received video frame stream to obtain a detection frame, wherein the detection frame contains an object to be identified that satisfies a preset condition, and the video frame stream contains at least two frames of images; acquiring at least one frame of image preceding the detection frame; analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior identification result; and, if the result characterizes the object as an attacking user, executing a corresponding operation, wherein an attacking user is a user impersonating another person's identity. The frames preceding the detection frame are acquired and analyzed to determine the object's behavior, and thus whether it is an attacking user, realizing living body detection. Compared with the massive computation required in the prior art to distinguish real skin from other materials, the amount of computation is small, reducing the data-processing burden.
As shown in fig. 3, a flowchart of embodiment 2 of a living body detection method is provided, which includes the following steps:
Step S301: detecting a received video frame stream to obtain a detection frame, wherein the detection frame comprises an object to be identified meeting a preset condition, and the video frame stream comprises at least two frames of images.
Step S302: acquiring at least one frame of image before the detection frame.
Step S303: analyzing the behaviors of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior identification result of the object to be identified.
Step S304: if the object to be identified is characterized as an attacking user based on the behavior identification result, executing a corresponding operation, wherein an attacking user is a user impersonating another person's identity.
Steps S301 to S304 are identical to steps S101 to S104 in embodiment 1, and detailed description is omitted in this embodiment.
Step S305: if the object to be identified is characterized as a real user based on the behavior identification result, identifying the object to be identified in the detection frame according to a preset first identification mode.
A real user can be understood as a user who does not impersonate another person's identity.
It can be understood that a real user generally completes identity recognition with his or her own real identity and, before recognition, performs normal actions (for example, approaching the recognition device normally, without blocking the face in front of it). Therefore, this embodiment can likewise analyze the behavior of the object to be identified to obtain a behavior identification result and determine, based on it, whether the object is a real user.
If the object to be identified is characterized as a real user based on the behavior identification result, this embodiment can identify the object in the detection frame according to the preset first identification mode, thereby completing identification of the real user.
This embodiment does not limit the specific implementation of the preset first identification mode, as long as it can complete identity recognition. For example, the preset first identification mode may be a face recognition mode, an iris recognition mode, or a fingerprint recognition mode.
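A first identification mode realized as face recognition commonly compares an embedding of the detected face against enrolled embeddings. The sketch below is an assumption-laden illustration: how the embeddings are produced is out of scope (any face-embedding model would do), and the enrolled names and the 0.8 similarity threshold are invented for the example.

```python
import numpy as np

# Hypothetical "first identification mode": nearest-neighbor search over
# enrolled face embeddings by cosine similarity, with a reject threshold.

def identify(face_embedding, enrolled, threshold=0.8):
    """enrolled: dict mapping identity name -> 1-D embedding array.
    Returns the best-matching identity, or None if no enrolled identity
    is similar enough."""
    best_name, best_sim = None, -1.0
    for name, emb in enrolled.items():
        sim = float(np.dot(face_embedding, emb) /
                    (np.linalg.norm(face_embedding) * np.linalg.norm(emb)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None
```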
In another embodiment of the present application, a description is given of a procedure for performing the corresponding operation in step S104 in embodiment 1, which may specifically include:
and identifying the object to be identified in the detection frame according to a preset second identification mode to obtain the true identity of the attacking user.
In this embodiment, the preset second recognition mode is used for performing identity recognition on the object to be recognized in the detection frame.
It should be noted that, when the identity of an attacking user is being recognized, the attacking user may not cooperate. Preferably, therefore, the preset second identification mode is set to a mode that can complete recognition without the object's cooperation: for example, a face recognition mode or an iris recognition mode, or a mode that scans the object to obtain physical attributes (such as body shape and height) and matches them against attributes stored in a database.
After the true identity of the attacking user is recognized, that identity and the act of impersonating another person can be recorded as the attacking user's behavior record. Corresponding measures can then be formulated according to this record to stop or punish subsequent attacks by that user.
Of course, performing the corresponding operation may also include:
sending out alarm information indicating that the object to be identified is an attacking user.
When the object to be identified is characterized as an attacking user based on the behavior identification result, alarm information identifying it as an attacking user can be issued immediately, warning the attacking user and reducing attack behavior.
In another embodiment of the present application, the identifying the object to be identified in the detection frame according to the preset second identification mode described in the foregoing embodiment may specifically include:
a11, acquiring an analysis frame, wherein the moment of the analysis frame is before the moment of the detection frame in the video frame stream.
Because an attacking user generally performs an attack action, blocking his or her own face or other parts with a picture of another person, displaying another person's image on a display screen, or keeping his or her own face outside the detection area in order to impersonate that person, valid features belonging to the attacking user may not be detectable in the detection frame. An analysis frame from before the moment of the detection frame can therefore be acquired in order to obtain valid features.
A12, acquiring a first characteristic of the attacking user in the analysis frame.
The first feature can be understood as a valid feature that can characterize the true identity of the attacking user.
The first feature may include, but is not limited to: face features.
A13, identifying the first feature according to a preset first identification mode to obtain the true identity of the attacking user.
In this embodiment, the preset first recognition method is the same as the preset first recognition method described in step S305 in the foregoing embodiment, and will not be described herein.
Under the condition that the first feature is a face feature, a corresponding preset first identification mode is as follows: face recognition mode.
The process of identifying the first feature according to the face recognition mode can refer to the process of identifying the identity by using the face recognition technology in the prior art, and will not be described herein.
It can be understood that identifying the first feature according to the preset first identification mode yields the true identity even when the attacking user covers his or her face with a printed portrait of another person, or displays another person's portrait on a screen while keeping his or her own face outside the detection area, thereby improving the reliability of identity recognition.
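Steps A11-A13 amount to scanning backwards from the detection frame for an earlier frame in which the attacking user's own face was still visible (before the photo or screen was raised). A minimal sketch follows, in which `has_valid_face` is a hypothetical placeholder for whatever face-quality check the system uses:

```python
# Backward scan for the "analysis frame" of steps A11-A13: the most recent
# frame before the detection frame that still contains a usable face.

def find_analysis_frame(frames, has_valid_face):
    """frames: buffered frames ordered oldest to newest, with the detection
    frame last. Scan backwards from just before the detection frame and
    return the most recent earlier frame containing a usable face, or None."""
    for frame in reversed(frames[:-1]):
        if has_valid_face(frame):
            return frame
    return None
```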
In another embodiment of the present application, another process for identifying the object to be identified in the detection frame according to a preset second identification mode may include:
and B11, acquiring a second characteristic of the object to be identified in the detection frame.
In this embodiment, whether the detection frame directly contains a valid feature characterizing the identity of the object to be identified is checked; if so, that valid feature is obtained from the detection frame as the second feature.
The manner of obtaining the second feature may refer to the process of feature extraction in the prior art, and will not be described herein.
Preferably, the second feature may be: iris characteristics.
Since iris features are difficult to forge, the iris feature of the object to be identified can be obtained from the detection frame and used as the feature for recognizing its identity.
B12, identifying the second feature according to a preset second identification mode to obtain the true identity of the attacking user.
The process of identifying the second feature according to the preset second identification manner may refer to the process of identifying the object to be identified in the detection frame according to the preset second identification manner in the foregoing embodiment, which is not described herein again.
In the case that the second feature is an iris feature, the preset second recognition mode may be set correspondingly as: iris recognition mode.
The process of identifying the second feature according to the iris recognition method can refer to the process of identifying the second feature by utilizing the iris recognition technology in the prior art, and will not be described herein.
It can be understood that identifying the second feature according to the preset second identification mode effectively resists the behavior of impersonating another person with a lifelike mask: even when the attacking user wears such a mask, his or her true identity can be recognized, improving the reliability of identity recognition.
As shown in fig. 4, a flowchart of embodiment 3 of a living body detection method is provided, and the method includes the following steps:
step S401, detecting a received video frame stream to obtain a detection frame, where the detection frame includes an object to be identified that satisfies a preset condition, and the video frame stream includes at least two frames of images.
Step S402, at least one frame of image before the detection frame is acquired.
Step S403, analyzing the behaviors of the object to be identified in the detection frame and the at least one frame of image, to obtain a behavior identification result of the object to be identified.
And step S404, if the object to be identified is characterized as an attack user based on the behavior identification result, executing corresponding operation, wherein the attack user is a user masquerading the identity of other people.
Steps S401 to S404 are identical to steps S101 to S104 in embodiment 1, and are not described in detail in this embodiment.
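The S401 to S404 flow recapped above can be sketched as follows. The frame representation and the `satisfies_condition`, `analyze_behavior`, and `on_attack` callables are illustrative assumptions standing in for the patent's detection, behavior-analysis, and response steps.

```python
# Minimal sketch of the S401-S404 liveness pipeline (all names hypothetical).

def find_detection_frame(stream, satisfies_condition):
    """S401: index of the first frame whose object to be identified
    satisfies the preset condition, or None if no frame qualifies."""
    for i, frame in enumerate(stream):
        if satisfies_condition(frame):
            return i
    return None

def liveness_check(stream, satisfies_condition, analyze_behavior, on_attack):
    idx = find_detection_frame(stream, satisfies_condition)
    if idx is None:
        return None
    history = stream[max(0, idx - 5):idx]            # S402: preceding frames
    result = analyze_behavior(stream[idx], history)  # S403: behavior analysis
    if result == "attack":                           # S404: respond to attack
        on_attack(stream[idx])
    return result

# Toy usage: frames are dicts; any face occlusion counts as an attack.
stream = [{"face": False}, {"face": True, "occluded": True}]
res = liveness_check(
    stream,
    satisfies_condition=lambda f: f.get("face", False),
    analyze_behavior=lambda f, hist: "attack" if f.get("occluded") else "real",
    on_attack=lambda f: None,
)
print(res)  # -> attack
```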
Step S405, training a preset deep learning model based on the video frame stream and the corresponding object to be identified as an attacking user, so that the identity of the object to be identified is obtained by processing the video frame stream acquired again based on the trained deep learning model.
After the preset deep learning model is trained on the video frame stream together with the label that the corresponding object to be identified is an attacking user, the trained model can process a newly acquired video stream to obtain the identity of the object to be identified.
This processing may include the following steps: the trained deep learning model analyzes the behavior of the object to be identified in the detection frame of the newly acquired video stream and in at least one frame of image before the detection frame, to obtain a behavior recognition result of the object to be identified; if the behavior recognition result characterizes the object to be identified as an attacking user, identity recognition is performed on the object to be identified to obtain the true identity of the attacking user.
Of course, this embodiment may also train the preset deep learning model on the video frame stream with both kinds of labels (the corresponding object to be identified being an attacking user, and the corresponding object to be identified being a real user), so that the identity of the object to be identified is obtained by processing a newly acquired video frame stream with the trained model.
When trained with both kinds of labels, the model can analyze the behavior of the object to be identified in the detection frame of the video frame stream and in at least one frame of image before the detection frame to obtain a behavior recognition result; if the result characterizes the object as an attacking user, identity recognition is performed to obtain the true identity of the attacking user; if the result characterizes the object as a real user, identity recognition is performed to obtain the real identity of the real user.
Processing a newly acquired video frame stream with the trained deep learning model improves the efficiency of identity recognition, and improving the training accuracy of the model improves the accuracy of identity recognition.
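As a toy illustration of feeding detected attack streams back as labeled training data, the sketch below fits a one-feature logistic classifier by gradient descent. A real system would fine-tune a deep network on the video frames themselves; the single hand-crafted motion feature, the labels, and the learning rate here are illustrative assumptions.

```python
# Toy stand-in for "train a model on streams labeled as attack/real":
# logistic regression on one hypothetical feature (e.g. motion variety).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=200, lr=0.5):
    """samples: list of (feature, label) with label 1 = attacking user."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x   # gradient of the log-loss w.r.t. w
            b -= lr * (p - y)       # gradient of the log-loss w.r.t. b
    return w, b

def predict(w, b, x):
    return "attack" if sigmoid(w * x + b) >= 0.5 else "real"

# Streams flagged as attacks (low motion variety) vs. real users.
data = [(0.1, 1), (0.2, 1), (0.8, 0), (0.9, 0)]
w, b = train(data)
print(predict(w, b, 0.15), predict(w, b, 0.85))  # -> attack real
```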
In another embodiment of the present application, the analyzing the behavior of the object to be identified in the detection frame and the at least one frame of picture in step S103 of embodiment 1 to obtain a behavior identification result of the object to be identified may specifically include:
and C11, identifying the behavior of the object to be identified based on the detection frame and the at least one frame of image.
The embodiment can identify the behavior of the object to be identified in the detection frame and at least one frame of image before the detection frame.
Preferably, the behavior recognition model may be pre-trained. And identifying the behaviors of the object to be identified in the detection frame and at least one frame of image before the detection frame according to the pre-trained behavior identification model.
The process of identifying the behavior of the object to be identified may refer to the process of identifying the behavior in the prior art, which is not described in detail in this embodiment.
And C12, if the behavior of the object to be identified does not meet the preset behavior condition, judging that the behavior of the object to be identified is an attack behavior, wherein the behavior identification result represents that the object to be identified is an attack user.
The preset behavior condition can be understood as a preset normal behavior condition, such as the condition that the subject moves nearer or farther naturally, or moves without any special action (e.g., covering the face).
After the preset behavior condition is set, if the behavior of the object to be identified does not meet the preset behavior condition, the behavior of the object to be identified can be judged to be an attack behavior. The attack behavior is the behavior recognition result of the object to be recognized.
Accordingly, the behavior recognition result of the object to be recognized is an attack behavior, and the behavior recognition result characterizes that the object to be recognized is an attack user.
In this embodiment, by setting reasonable preset behavior conditions, it can be accurately and conveniently determined whether the behavior of the object to be identified is an attack behavior.
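One possible concrete form of such a preset behavior condition is sketched below: the face must never be occluded, and its apparent size must change smoothly as the subject moves nearer or farther. The frame fields and the jump threshold are illustrative assumptions, not the patent's concrete implementation.

```python
# Hypothetical "preset normal behavior condition" check.

def behavior_is_normal(frames, max_size_jump=0.5):
    """frames: list of dicts with 'face_size' (relative bbox area) and an
    optional 'face_occluded' flag. False means: judged an attack behavior."""
    for frame in frames:
        if frame.get("face_occluded", False):  # special action: covering the face
            return False
    sizes = [f["face_size"] for f in frames]
    for prev, cur in zip(sizes, sizes[1:]):
        # an abrupt scale jump is inconsistent with natural approach/recede
        if abs(cur - prev) > max_size_jump * max(prev, 1e-6):
            return False
    return True

walking_up = [{"face_size": s} for s in (0.10, 0.12, 0.15, 0.18)]
print(behavior_is_normal(walking_up))   # -> True

photo_raised = [{"face_size": 0.10}, {"face_size": 0.40}]  # sudden jump
print(behavior_is_normal(photo_raised))  # -> False
```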
In another embodiment of the present application, the analyzing the behavior of the object to be identified in the detection frame and the at least one frame of picture in step S103 of embodiment 1 to obtain another implementation process of the behavior identification result of the object to be identified may specifically include:
and D11, identifying the behavior of the object to be identified based on the detection frame and the at least one frame of image.
Step D11 is the same as step C11 in the foregoing embodiment, and the detailed process of step D11 may be referred to the description related to step C11, which is not repeated here.
And D12, analyzing the detection frame and the at least one frame of image to obtain a first object.
An attacking user generally uses some article to implement the attack when impersonating another person's identity (for example, covering his or her own face with a printed picture of another person's portrait, or showing an electronic picture of another person on an electronic display screen). Accordingly, in this embodiment, a first object that may represent an attack tool can be obtained by analyzing the detection frame and at least one frame of image before it.
And D13, judging that the behavior of the object to be identified is an attack behavior based on the fact that the relative position relation between the object to be identified and the first object meets a preset attack condition and/or the behavior of the object to be identified does not meet a preset action condition, and representing the object to be identified as an attack user by the behavior identification result.
When an attacking user carries out an attack, a characteristic relative positional relationship generally exists between the object to be identified and the attack tool; for example, the subject's hand and the attack tool (such as a picture printed with another person's portrait) lie at the same horizontal position, see fig. 2 (a). The attack condition can therefore be set as: a condition characterizing the relative positional relationship between the object to be identified and the attack tool when the object to be identified implements an attack behavior.
If the relative positional relationship between the object to be identified and the first object meets the preset attack condition, the behavior of the object to be identified can be judged to be an attack behavior.
Of course, it may also be determined that the behavior of the object to be identified is an attack behavior based on that the behavior of the object to be identified does not satisfy a preset action condition. The process of determining that the behavior of the object to be identified is an attack behavior based on that the behavior of the object to be identified does not meet the preset action condition may refer to the description of step C12 in the foregoing embodiment, which is not repeated herein.
In this embodiment, it may further be determined that the behavior of the object to be identified is an attack behavior based on the relative positional relationship between the object to be identified and the first object meeting a preset attack condition and the behavior of the object to be identified not meeting a preset action condition.
Judging whether the behavior of the object to be identified is an attack behavior based on both the relative positional relationship between the object and the first object meeting the preset attack condition and the behavior of the object failing to meet the preset action condition is more accurate than judging on either criterion alone.
Accordingly, the behavior recognition result of the object to be recognized is an attack behavior, and the behavior recognition result characterizes that the object to be recognized is an attack user.
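A minimal sketch of the relative-position attack condition described above follows: the pose "hand and attack tool at the same horizontal level, tool in front of the face" is approximated with bounding boxes. The normalized coordinates and the vertical tolerance are illustrative assumptions.

```python
# Hypothetical relative-position attack condition on detected boxes.

def meets_attack_condition(hand_box, tool_box, y_tolerance=0.1):
    """Boxes are (x_min, y_min, x_max, y_max) in normalized coordinates.
    True when the hand and tool centres lie at about the same vertical
    level and the boxes overlap horizontally (tool held beside the hand)."""
    hand_cy = (hand_box[1] + hand_box[3]) / 2
    tool_cy = (tool_box[1] + tool_box[3]) / 2
    same_level = abs(hand_cy - tool_cy) <= y_tolerance
    overlap_x = hand_box[0] < tool_box[2] and tool_box[0] < hand_box[2]
    return same_level and overlap_x

hand = (0.40, 0.30, 0.55, 0.45)
portrait = (0.35, 0.28, 0.65, 0.48)   # printed portrait held beside the hand
print(meets_attack_condition(hand, portrait))  # -> True
```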
In another embodiment of the present application, the analyzing the behavior of the object to be identified in the detection frame and the at least one frame of picture in step S103 of embodiment 1 to obtain another implementation process of the behavior identification result of the object to be identified may specifically include:
and E11, identifying the behavior of the object to be identified based on the detection frame and the at least one frame of image.
Step E11 is the same as step C11 in the foregoing embodiment, and the detailed process of step E11 may be referred to the related description of step C11, which is not repeated here.
And E12, analyzing the detection frame to obtain a third characteristic of the object to be identified.
The third feature can be understood as: features of a certain body part of the object to be identified, such as facial features.
And E13, judging that the behavior of the object to be identified is an attack behavior based on the fact that the third characteristic does not meet a preset characteristic condition and/or the behavior of the object to be identified does not meet a preset action condition, and representing the object to be identified as an attack user by the behavior identification result.
It is understood that the features of a body part of a living subject should vary spatially (for example, vary in depth), and thus the preset feature condition may be set as a stereoscopic information feature condition. On this basis, whether the behavior of the object to be identified is an attack behavior is determined by judging whether the third feature satisfies the preset feature condition: if the third feature does not satisfy it, the behavior is judged to be an attack behavior.
It should be noted that, under the condition that the preset feature condition is set as the stereo information feature condition, the video capturing device may specifically use a 3D (three-dimensional) camera to ensure that an effective stereo information feature can be extracted from the video shot by the 3D camera.
Of course, it may also be determined that the behavior of the object to be identified is an attack behavior based on that the behavior of the object to be identified does not satisfy a preset action condition. The process of determining that the behavior of the object to be identified is an attack behavior based on that the behavior of the object to be identified does not meet the preset action condition may refer to the description of step C12 in the foregoing embodiment, which is not repeated herein.
In this embodiment, it may further be determined that the behavior of the object to be identified is an attack behavior based on both the third feature failing to meet the preset feature condition and the behavior of the object to be identified failing to meet the preset action condition.
Judging whether the behavior of the object to be identified is an attack behavior based on both the third feature failing to meet the preset feature condition and the behavior failing to meet the preset action condition is more accurate than judging on either criterion alone.
Accordingly, the behavior recognition result of the object to be recognized is an attack behavior, and the behavior recognition result characterizes that the object to be recognized is an attack user.
It can be understood that body-part features in a picture printed with another person's portrait, or in a portrait shown on a display screen, have no three-dimensional variation and therefore fail the preset feature condition. Judging whether the third feature satisfies the preset feature condition thus makes it possible to accurately determine an attack behavior when an attacking user covers his or her face with a printed portrait or displays another person's portrait on a screen.
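The stereoscopic feature condition can be sketched as a simple depth-spread test on the face region of a 3D camera's depth map: a live face shows depth variation (nose closer than cheeks), a flat photo or screen does not. The depth values and the 15 mm threshold are illustrative assumptions.

```python
# Hypothetical stereoscopic-information feature condition on a depth patch.

def has_depth_variation(depth_patch, min_range_mm=15.0):
    """depth_patch: list of rows of per-pixel depth values (mm) covering
    the face region. True when the depth spread exceeds the threshold,
    i.e. the surface is not planar."""
    values = [d for row in depth_patch for d in row]
    return (max(values) - min(values)) >= min_range_mm

live_face = [[480.0, 455.0], [470.0, 440.0]]   # nose closer than cheeks
flat_photo = [[500.0, 501.0], [500.5, 499.8]]  # nearly planar surface

print(has_depth_variation(live_face))   # -> True
print(has_depth_variation(flat_photo))  # -> False
```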
Corresponding to the embodiment of the living body detection method provided by the application, the application also provides an embodiment of the electronic equipment applying the living body detection method.
Fig. 5 is a schematic structural diagram of an embodiment 1 of an electronic device provided in the present application, where the electronic device has an image capture function, and the electronic device includes the following structures:
a camera 501 and a processor 502.
The camera 501 is configured to collect video of an image collection area.
Preferably, the camera 501 may be a 2D (two-dimensional) camera or a 3D camera, so that video capture can meet different requirements.
The processor 502 is configured to detect a received video frame stream to obtain a detection frame, where the detection frame includes an object to be identified that satisfies a preset condition, and the video frame stream includes at least two frames of images; acquiring at least one frame of image before the detection frame; analyzing the behaviors of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior identification result of the object to be identified; and if the object to be identified is characterized as an attack user based on the behavior identification result, executing corresponding operation, wherein the attack user is a user masquerading the identity of other people.
The processor 502 may specifically be a structure or chip with strong information processing capability, such as a CPU (central processing unit).
Preferably, the processor 502 may be further configured to identify the object to be identified in the detection frame according to a preset first identification manner if the object to be identified is characterized as a real user based on the behavior identification result, where the real user is a user who does not impersonate the identity of the other person.
Preferably, the performing the corresponding operation may include:
and identifying the object to be identified in the detection frame according to a preset second identification mode to obtain the true identity of the attacking user.
Preferably, the identifying of the object to be identified in the detection frame according to the preset second identification mode may specifically include:
acquiring an analysis frame, wherein the moment of the analysis frame is before the moment of the detection frame in the video frame stream;
acquiring a first characteristic of an attacking user in the analysis frame;
and identifying the first characteristic according to a preset first identification mode to obtain the true identity of the attack object.
Preferably, the identifying of the object to be identified in the detection frame according to the preset second identification mode may specifically include:
acquiring a second characteristic of the object to be identified in the detection frame;
And identifying the second characteristic according to a preset second identification mode to obtain the true identity of the attack object.
Preferably, the processor 502 may be further configured to:
and training a preset deep learning model based on the video frame stream and the corresponding object to be identified as an attacking user, so that the re-acquired video frame stream is processed based on the trained deep learning model to obtain the identity of the object to be identified.
Preferably, the analyzing the behavior of the object to be identified in the detection frame and the at least one frame of picture to obtain a behavior identification result of the object to be identified may include:
based on the detection frame and the at least one frame of image, identifying the behavior of the object to be identified;
if the behavior of the object to be identified does not meet the preset behavior condition, judging that the behavior of the object to be identified is an attack behavior;
and the behavior recognition result characterizes the object to be recognized as an attack user.
Preferably, the analyzing the behavior of the object to be identified in the detection frame and the at least one frame of picture to obtain a behavior identification result of the object to be identified may include:
Based on the detection frame and the at least one frame of image, identifying the behavior of the object to be identified;
analyzing the detection frame and the at least one frame of image to obtain a first object;
determining that the behavior of the object to be identified is an attack behavior based on the relative position relation of the object to be identified and the first object meeting a preset attack condition and/or the behavior of the object to be identified not meeting a preset action condition;
and the behavior recognition result characterizes the object to be recognized as an attack user.
Preferably, the analyzing the behavior of the object to be identified in the detection frame and the at least one frame of picture to obtain a behavior identification result of the object to be identified includes:
based on the detection frame and the at least one frame of image, identifying the behavior of the object to be identified;
analyzing the detection frame to obtain a third characteristic of the object to be identified;
and judging that the behavior of the object to be identified is an attack behavior based on the fact that the third characteristic does not meet a preset characteristic condition and/or the behavior of the object to be identified does not meet a preset action condition, and representing the object to be identified as an attack user by the behavior identification result.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be referred to one another. Since the device provided in the embodiments corresponds to the method provided in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.
The preceding description of the disclosed embodiments enables any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A living body detection method, comprising:
detecting a received video frame stream to obtain a detection frame, wherein the detection frame comprises an object to be identified meeting a preset condition, and the video frame stream comprises at least two frames of images;
Acquiring at least one frame of image before the detection frame;
analyzing the behaviors of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior identification result of the object to be identified;
if the object to be identified is characterized as an attack user based on the behavior identification result, executing corresponding operation, wherein the attack user is a user masquerading the identity of other people;
the performing the corresponding operations includes:
acquiring an analysis frame, wherein the moment of the analysis frame is before the moment of the detection frame in the video frame stream;
acquiring a first characteristic of an attacking user in the analysis frame;
and identifying the first characteristic according to a preset first identification mode to obtain the true identity of the attacking user.
2. The method of claim 1, further comprising:
if the object to be identified is represented as a real user based on the behavior identification result, identifying the object to be identified in the detection frame according to a preset first identification mode, wherein the real user is a user which does not impersonate the identity of other people.
3. The method of claim 1, further comprising:
and training a preset deep learning model based on the video frame stream and the corresponding object to be identified as an attacking user, so that the re-acquired video frame stream is processed based on the trained deep learning model to obtain the identity of the object to be identified.
4. The method according to claim 1, wherein the analyzing the behavior of the object to be identified in the detection frame and the at least one frame image to obtain the behavior identification result of the object to be identified includes:
based on the detection frame and the at least one frame of image, identifying the behavior of the object to be identified;
if the behavior of the object to be identified does not meet the preset behavior condition, judging that the behavior of the object to be identified is an attack behavior;
and the behavior recognition result characterizes the object to be recognized as an attack user.
5. The method according to claim 1, wherein the analyzing the behavior of the object to be identified in the detection frame and the at least one frame image to obtain the behavior identification result of the object to be identified includes:
based on the detection frame and the at least one frame of image, identifying the behavior of the object to be identified;
analyzing the detection frame and the at least one frame of image to obtain a first object;
determining that the behavior of the object to be identified is an attack behavior based on the relative position relation of the object to be identified and the first object meeting a preset attack condition and/or the behavior of the object to be identified not meeting a preset action condition;
And the behavior recognition result characterizes the object to be recognized as an attack user.
6. The method according to claim 1, wherein the analyzing the behavior of the object to be identified in the detection frame and the at least one frame image to obtain the behavior identification result of the object to be identified includes:
based on the detection frame and the at least one frame of image, identifying the behavior of the object to be identified;
analyzing the detection frame to obtain a third characteristic of the object to be identified;
and judging that the behavior of the object to be identified is an attack behavior based on the fact that the third characteristic does not meet a preset characteristic condition and/or the behavior of the object to be identified does not meet a preset action condition, and representing the object to be identified as an attack user by the behavior identification result.
7. An electronic device, comprising:
the camera is used for collecting the video of the image collecting area;
the processor is used for detecting the received video frame stream to obtain a detection frame, wherein the detection frame comprises an object to be identified which meets a preset condition, and the video frame stream comprises at least two frames of images; acquiring at least one frame of image before the detection frame; analyzing the behaviors of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior identification result of the object to be identified; if the object to be identified is characterized as an attack user based on the behavior identification result, executing corresponding operation, wherein the attack user is a user masquerading the identity of other people;
The performing the corresponding operations includes:
acquiring an analysis frame, wherein the moment of the analysis frame is before the moment of the detection frame in the video frame stream;
acquiring a first characteristic of an attacking user in the analysis frame;
and identifying the first characteristic according to a preset first identification mode to obtain the true identity of the attacking user.
CN201811331045.1A 2018-11-09 2018-11-09 Living body detection method and electronic equipment Active CN109492585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811331045.1A CN109492585B (en) 2018-11-09 2018-11-09 Living body detection method and electronic equipment


Publications (2)

Publication Number Publication Date
CN109492585A CN109492585A (en) 2019-03-19
CN109492585B true CN109492585B (en) 2023-07-25

Family

ID=65694207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811331045.1A Active CN109492585B (en) 2018-11-09 2018-11-09 Living body detection method and electronic equipment

Country Status (1)

Country Link
CN (1) CN109492585B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112997185A (en) * 2019-09-06 2021-06-18 深圳市汇顶科技股份有限公司 Face living body detection method, chip and electronic equipment
CN111062019A (en) * 2019-12-13 2020-04-24 支付宝(杭州)信息技术有限公司 User attack detection method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426815A (en) * 2015-10-29 2016-03-23 北京汉王智远科技有限公司 Living body detection method and device
CN106203235A (en) * 2015-04-30 2016-12-07 腾讯科技(深圳)有限公司 Live body discrimination method and device
CN106778525A (en) * 2016-11-25 2017-05-31 北京旷视科技有限公司 Identity identifying method and device
CN106897658A (en) * 2015-12-18 2017-06-27 腾讯科技(深圳)有限公司 The discrimination method and device of face live body
WO2017139325A1 (en) * 2016-02-09 2017-08-17 Aware, Inc. Face liveness detection using background/foreground motion analysis
CN108182409A (en) * 2017-12-29 2018-06-19 北京智慧眼科技股份有限公司 Biopsy method, device, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Face-based living body detection system; Zhang Gaoming et al.; Computer Systems & Applications; 2017-12-15 (No. 12); pp. 39-44 *

Also Published As

Publication number Publication date
CN109492585A (en) 2019-03-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant