CN109492585A - Liveness detection method and electronic device - Google Patents
Liveness detection method and electronic device
- Publication number
- CN109492585A (Application CN201811331045.1A)
- Authority
- CN
- China
- Prior art keywords
- identified
- behavior
- attack
- detection frame
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
This application provides a liveness detection method: a received video frame stream containing at least two frames of images is examined to obtain a detection frame, the detection frame containing an object to be identified that meets a preset condition; at least one frame of image preceding the detection frame is obtained; the behavior of the object to be identified in the detection frame and the at least one preceding frame is analyzed to obtain a behavior recognition result for the object; and if the behavior recognition result characterizes the object to be identified as an attack user, that is, a user impersonating an identity, a corresponding operation is executed. With this method, whether a user is an attack user can be determined from the user's actions alone. Compared with the massive computation the prior art needs to distinguish real skin from other materials, the amount of calculation is small, reducing the data-processing burden.
Description
Technical field
The present invention relates to the field of electronic devices and, more specifically, to a liveness detection method and an electronic device.
Background technique
In the era of artificial intelligence, face detection and recognition technology has been widely applied in fields such as finance, security, education, and medical care, and has become an important means of user identity recognition and authentication. At the same time, like other biometric identification schemes, face recognition carries the security risk of forgery attacks.

For security reasons, face recognition applications usually need to adopt some liveness detection method to determine whether a detected face is a real human face or a forgery attack.
Existing liveness detection methods need to analyze pictures or video collected from the user. Most picture-analysis algorithms perform image-quality, texture, or frequency-domain analysis on the face region in order to distinguish real skin from other materials (paper, electronic screens, and the like). The computational complexity of such algorithms is high, which requires the recognition device to have relatively strong computing power.
Summary of the invention
In view of this, the present invention provides a liveness detection method that solves the problem of high computational complexity in the prior art.

To achieve the above object, the present invention provides the following technical solutions:
A liveness detection method, comprising:

detecting a received video frame stream to obtain a detection frame, the detection frame containing an object to be identified that meets a preset condition, and the video frame stream containing at least two frames of images;

obtaining at least one frame of image preceding the detection frame;

analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior recognition result for the object to be identified; and

if the behavior recognition result characterizes the object to be identified as an attack user, executing a corresponding operation, the attack user being a user impersonating an identity.
Preferably, the method further comprises:

if the behavior recognition result characterizes the object to be identified as a real user, identifying the object to be identified in the detection frame according to a preset first identification method, the real user being a user not impersonating an identity.
Preferably, executing the corresponding operation comprises:

identifying the object to be identified in the detection frame according to a preset second identification method to obtain the true identity of the attack user.
Preferably, identifying the object to be identified in the detection frame according to the preset second identification method specifically comprises:

obtaining an analysis frame, the moment of the analysis frame in the video frame stream preceding the moment of the detection frame;

obtaining a first feature of the attack user in the analysis frame; and

identifying the first feature according to a preset first identification method to obtain the true identity of the attack user.
Preferably, identifying the object to be identified in the detection frame according to the preset second identification method specifically comprises:

obtaining a second feature of the object to be identified in the detection frame; and

identifying the second feature according to the preset second identification method to obtain the true identity of the attack user.
Preferably, the method further comprises:

training a preset deep learning model based on the video frame stream and the corresponding object to be identified being an attack user, so that a video frame stream obtained subsequently can be processed by the trained deep learning model to obtain the identity of the object to be identified.
Preferably, analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image to obtain the behavior recognition result of the object to be identified comprises:

identifying the behavior of the object to be identified based on the detection frame and the at least one frame of image; and

if the behavior of the object to be identified does not satisfy a preset behavior condition, determining that the behavior of the object to be identified is an attack behavior, the behavior recognition result then characterizing the object to be identified as an attack user.
Preferably, analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image to obtain the behavior recognition result of the object to be identified comprises:

identifying the behavior of the object to be identified based on the detection frame and the at least one frame of image;

analyzing the detection frame and the at least one frame of image to obtain a first object; and

if the relative positional relationship between the object to be identified and the first object meets a preset precondition and/or the behavior of the object to be identified does not satisfy a preset operation condition, determining that the behavior of the object to be identified is an attack behavior, the behavior recognition result then characterizing the object to be identified as an attack user.
Preferably, analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image to obtain the behavior recognition result of the object to be identified comprises:

identifying the behavior of the object to be identified based on the detection frame and the at least one frame of image;

analyzing the detection frame to obtain a third feature of the object to be identified; and

if the third feature does not satisfy a preset feature condition and/or the behavior of the object to be identified does not satisfy a preset operation condition, determining that the behavior of the object to be identified is an attack behavior, the behavior recognition result then characterizing the object to be identified as an attack user.
An electronic device, comprising:

a camera for acquiring video of an image acquisition region; and

a processor for detecting a received video frame stream to obtain a detection frame, the detection frame containing an object to be identified that meets a preset condition and the video frame stream containing at least two frames of images; obtaining at least one frame of image preceding the detection frame; analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior recognition result for the object to be identified; and, if the behavior recognition result characterizes the object to be identified as an attack user, executing a corresponding operation, the attack user being a user impersonating an identity.
As can be seen from the above technical solutions, compared with the prior art, the present invention provides a liveness detection method comprising: detecting a received video frame stream to obtain a detection frame containing an object to be identified that meets a preset condition, the video frame stream containing at least two frames of images; obtaining at least one frame of image preceding the detection frame; analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior recognition result for the object to be identified; and, if the behavior recognition result characterizes the object to be identified as an attack user (a user impersonating an identity), executing a corresponding operation. With this method, the frames preceding the detection frame in which the object to be identified meets the preset condition are obtained, and the corresponding behavior of the object is analyzed to determine whether the object is an attack user, thereby realizing liveness detection. In this process, whether a user is an attack user can be determined from the user's actions alone; compared with the massive computation the prior art needs to distinguish real skin from other materials, the amount of calculation is small, reducing the data-processing burden.
Detailed description of the invention
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of Embodiment 1 of a liveness detection method provided by the present application;

Fig. 2(a) is a schematic diagram of an attack behavior in Embodiment 1 of the liveness detection method provided by the present application, Fig. 2(b) is another schematic diagram of an attack behavior in Embodiment 1, and Fig. 2(c) is a further schematic diagram of an attack behavior in Embodiment 1;

Fig. 3 is a flowchart of Embodiment 2 of the liveness detection method provided by the present application;

Fig. 4 is a flowchart of Embodiment 3 of the liveness detection method provided by the present application;

Fig. 5 is a structural schematic diagram of Embodiment 1 of an electronic device provided by the present application.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below in combination with the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, which is a flowchart of Embodiment 1 of a liveness detection method provided by the present application, the method is applied to an electronic device and includes the following steps:

Step S101: detecting a received video frame stream to obtain a detection frame, the detection frame containing an object to be identified that meets a preset condition, and the video frame stream containing at least two frames of images.
The video frame stream may be captured by a video acquisition apparatus connected to the electronic device, or by a video acquisition apparatus provided in the electronic device itself.

The video acquisition apparatus sends the captured video frame stream to the electronic device, which receives it. The received video frame stream may contain at least two frames of images; each frame may be a picture or an image.

After receiving the video frame stream, the electronic device may examine it and take as the detection frame a video frame that meets the requirements (e.g., a frame within a detection region or within a set time period).

It should be noted that the detection frame contains an object to be identified that meets a preset condition. The preset condition can be set according to actual needs; for example, it may be set as: the object is in the central region of the picture and occupies a relatively large area.
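The preset condition just described (the object sitting in the central region of the picture and occupying a relatively large area) can be expressed as two simple geometric checks. The following is a minimal sketch under assumptions of this kind; the face-box format, thresholds, and function names are illustrative and not taken from the patent.

```python
def meets_preset_condition(face_box, frame_w, frame_h,
                           min_area_ratio=0.15, center_tol=0.25):
    """face_box = (x, y, w, h) of the detected object in pixels."""
    x, y, w, h = face_box
    # Condition 1: the object occupies a sufficiently large area of the frame.
    area_ratio = (w * h) / float(frame_w * frame_h)
    if area_ratio < min_area_ratio:
        return False
    # Condition 2: the object's centre lies near the frame centre.
    cx, cy = x + w / 2.0, y + h / 2.0
    dx = abs(cx - frame_w / 2.0) / frame_w
    dy = abs(cy - frame_h / 2.0) / frame_h
    return dx <= center_tol and dy <= center_tol

def find_detection_frame(frames, frame_w, frame_h):
    """frames: per-frame face boxes (None if no object was detected).
    Returns the index of the first frame meeting the preset condition."""
    for i, face_box in enumerate(frames):
        if face_box is not None and meets_preset_condition(face_box, frame_w, frame_h):
            return i
    return None
```

In a real system the face box would come from a detector running on each frame; here the detector output is simply passed in.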
Step S102: obtaining at least one frame of image preceding the detection frame.

In this embodiment, at least one frame of image preceding the detection frame can be obtained from the received video frame stream. Of course, if the received video frame stream contains no image preceding the detection frame, an earlier video frame stream can be obtained, and at least one frame of image preceding the detection frame can be obtained from that earlier stream.
Step S103: analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior recognition result for the object to be identified.

The behavior of the object to be identified can be understood as the action behavior of the object. By analyzing the behavior of the object in the detection frame and in the at least one frame of image, it can be determined which action the object specifically performed in the detection frame and in the preceding frames (e.g., moving an article to block the camera lens), completing the behavior analysis of the object at different points in time.

The result of this analysis can be used as the behavior recognition result of the object to be identified.

Preferably, the behavior of the object in the detection frame and in the preceding frames can be analyzed by a machine analysis model (e.g., a deep learning model or a neural network model). Effective training of the model guarantees its precision, and thus the accuracy of its analysis of the behavior of the object to be identified.
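Once the per-frame actions have been recognized, the cross-frame analysis of step S103 reduces to a decision over the sequence of action labels. The sketch below assumes a model has already produced one label per frame; the label set and the decision rule are illustrative assumptions, since the patent leaves the analysis model unspecified.

```python
# Action labels that correspond to the attack behaviors the text describes
# (blocking the face with a picture, showing a screen, wearing a mask).
SUSPICIOUS_ACTIONS = {"raise_picture", "hold_screen", "cover_lens", "wear_mask"}

def analyse_behaviour(frame_labels):
    """frame_labels: per-frame action labels for the object to be identified,
    ordered in time (earlier frames first, detection frame last).
    Returns 'attack' if any suspicious action was performed, else 'real'."""
    for label in frame_labels:
        if label in SUSPICIOUS_ACTIONS:
            return "attack"
    return "real"
```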
Step S104: if the behavior recognition result characterizes the object to be identified as an attack user, executing a corresponding operation.

The behavior recognition result determined in step S103 from the analysis of the object's behavior at different points in time ensures that the type of the object characterized by the result is more accurate.

An attack user can be understood as a user impersonating an identity.

It can be understood that a user impersonating an identity will generally perform an attack behavior before identity recognition (e.g., blocking the face with a picture, displaying another person's image on an electronic screen, or impersonating another person with a lifelike mask) in order to achieve the purpose of impersonation. Therefore, this embodiment can analyze the behavior of the object to be identified, obtain the behavior recognition result, and determine from that result whether the object is an attack user.

In this embodiment, reference may be made to the schematic diagrams of Figs. 2(a)-(c) for a clearer introduction of attack behaviors: blocking the face with a picture is shown in Fig. 2(a), displaying another person's image on an electronic screen in Fig. 2(b), and impersonation with a lifelike mask in Fig. 2(c). It should be recognized, however, that Figs. 2(a)-(c) are only examples of this embodiment and are not intended as an exhaustive enumeration of attack behaviors.

If the behavior recognition result characterizes the object to be identified as an attack user, a corresponding operation can be executed to cope with the attack of the attack user.
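The branching just described, together with the identification and warning operations of the later embodiments, can be sketched as a small dispatcher. The callables stand in for the first identification method, the second identification method, and the warning mechanism; their names are assumptions for this sketch.

```python
def handle_recognition_result(result, identify_real, identify_attacker, warn):
    """result: 'real' or 'attack' (the behavior recognition result).
    identify_real / identify_attacker / warn: stand-ins for the preset first
    identification method, the preset second identification method, and the
    warning message mechanism described in the text."""
    if result == "attack":
        warn()                      # alert at once that an attack user was detected
        return identify_attacker()  # identify the attacker's true identity
    return identify_real()          # normal identity recognition for a real user
```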
In summary, the liveness detection method provided in this embodiment comprises: detecting a received video frame stream to obtain a detection frame containing an object to be identified that meets a preset condition, the video frame stream containing at least two frames of images; obtaining at least one frame of image preceding the detection frame; analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior recognition result; and, if the result characterizes the object as an attack user (a user impersonating an identity), executing a corresponding operation. With this method, the frames preceding the detection frame are obtained and the object's behavior is analyzed to determine whether the object is an attack user, realizing liveness detection. In this process, whether a user is an attack user can be determined from the user's actions alone; compared with the massive computation the prior art needs to distinguish real skin from other materials, the amount of calculation is small, reducing the data-processing burden.
As shown in Fig. 3, which is a flowchart of Embodiment 2 of the liveness detection method provided by the present application, the method includes the following steps:

Step S301: detecting a received video frame stream to obtain a detection frame, the detection frame containing an object to be identified that meets a preset condition, and the video frame stream containing at least two frames of images.

Step S302: obtaining at least one frame of image preceding the detection frame.

Step S303: analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior recognition result for the object to be identified.

Step S304: if the behavior recognition result characterizes the object to be identified as an attack user, executing a corresponding operation, the attack user being a user impersonating an identity.

Steps S301-S304 are consistent with steps S101-S104 in Embodiment 1 and are not repeated in this embodiment.
Step S305: if the behavior recognition result characterizes the object to be identified as a real user, identifying the object to be identified in the detection frame according to a preset first identification method.

A real user can be understood as a user not impersonating an identity.

It can be understood that a real user generally completes identity recognition with his or her own true identity and, before identity recognition, performs normal behaviors (e.g., normally approaching the identity recognition device from a distance without blocking the face). Therefore, this embodiment can likewise analyze the behavior of the object to be identified, obtain the behavior recognition result, and determine from it whether the object is a real user.

If the behavior recognition result characterizes the object as a real user, this embodiment can identify the object in the detection frame according to the preset first identification method, completing identity recognition of the real user.

This embodiment does not limit the specific implementation of the preset first identification method, as long as it can complete identity recognition. For example, the preset first identification method may be: a face recognition method, an iris recognition method, or a fingerprint recognition method.
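Because the first identification method is deliberately left open (face, iris, or fingerprint recognition), one natural way to keep it configurable is a dispatch table. This is a sketch only: the recognizer callables below are placeholders returning fixed identities, not real biometric matchers.

```python
# Placeholder recognizers; a real deployment would plug in actual matchers.
RECOGNIZERS = {
    "face": lambda sample: "user-face",        # stand-in for a face matcher
    "iris": lambda sample: "user-iris",        # stand-in for an iris matcher
    "fingerprint": lambda sample: "user-fp",   # stand-in for a fingerprint matcher
}

def identify(sample, method="face"):
    """Run the configured preset identification method on a biometric sample."""
    if method not in RECOGNIZERS:
        raise ValueError("unknown identification method: %s" % method)
    return RECOGNIZERS[method](sample)
```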
In another embodiment of the application, the process of executing the corresponding operation in step S104 of Embodiment 1 is introduced, and may specifically include:

identifying the object to be identified in the detection frame according to a preset second identification method to obtain the true identity of the attack user.

In this embodiment, the preset second identification method is used to perform identity recognition on the object to be identified in the detection frame.

It should be noted that an attack user may not cooperate with identity recognition. It is therefore preferable to set the preset second identification method to one that does not require the cooperation of the object to be identified, e.g., a face recognition method, an iris recognition method, or a method of scanning the object to obtain its physical attributes (e.g., figure, height, and the like) and matching them against the physical attributes stored in a database.

After the true identity of the attack user is identified, the true identity of the attack user and the impersonation behavior can also be recorded as the behavior record of the attack user. Corresponding measures can then be formulated according to the behavior record to prevent or punish subsequent attacks by the attack user.

Of course, executing the corresponding operation may also include:

issuing a warning message that the object to be identified is an attack user.

When the behavior recognition result characterizes the object as an attack user, a warning message can be issued at once, achieving the purpose of warning the attack user and reducing attack behaviors.
In another embodiment of the application, the identification of the object to be identified in the detection frame according to the preset second identification method introduced in the previous embodiment is described, and may specifically include:

A11: obtaining an analysis frame, the moment of the analysis frame in the video frame stream preceding the moment of the detection frame.

Since an attack user generally performs an attack behavior (for example, blocking his or her own face or other body parts with a picture printed with another person's portrait, or displaying another person's portrait on a display screen while keeping his or her own face outside the detection region) in order to impersonate another identity, features that effectively belong to the attack user may not be detectable in the detection frame. An analysis frame preceding the detection frame can therefore be obtained in order to acquire valid features.
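Step A11 amounts to walking backwards from the detection frame through the buffered stream until a frame is found in which the attack user's own face is still visible. The sketch below assumes a buffered frame list and a `has_valid_face` predicate standing in for a real face detector; both are illustrative.

```python
def find_analysis_frame(frames, detection_idx, has_valid_face):
    """frames: buffered video frames; detection_idx: index of the detection
    frame within the buffer. Returns the index of the closest earlier frame
    containing a usable face (the first feature), or None if none exists."""
    for i in range(detection_idx - 1, -1, -1):  # scan backwards in time
        if has_valid_face(frames[i]):
            return i
    return None
```

If no such frame exists in the current buffer, the earlier video frame stream mentioned in step S102 could be searched in the same way.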
A12: obtaining a first feature of the attack user in the analysis frame.

The first feature can be understood as a valid feature capable of characterizing the true identity of the attack user. The first feature may include, but is not limited to, a face feature.
A13: identifying the first feature according to a preset first identification method to obtain the true identity of the attack user.

The preset first identification method here is the same as the preset first identification method introduced for step S305 in the previous embodiment and is not described again.

In the case where the first feature is a face feature, the corresponding preset first identification method is a face recognition method. The process of identifying the first feature by face recognition may refer to the identity recognition process using face recognition technology in the prior art and is not described here.

It can be understood that identifying the first feature according to the preset first identification method to obtain the true identity of the attack user can successfully cope with the case where the attack user blocks his or her own face with a picture printed with another person's portrait, or displays another person's portrait on a display screen with his or her own face outside the detection region: even in these cases, the true identity of the attack user can still be effectively identified, improving the reliability of identity recognition.
In another embodiment of the application, another process of identifying the object to be identified in the detection frame according to the preset second identification method is introduced, and may include:

B11: obtaining a second feature of the object to be identified in the detection frame.

In this embodiment, it can be detected whether the detection frame directly contains a valid feature for characterizing the identity of the object to be identified, i.e., the second feature; if it exists, the second feature is obtained from the detection frame. The way of obtaining the second feature may refer to the feature extraction process in the prior art and is not described here.

Preferably, the second feature may be an iris feature. Since an iris feature is difficult to forge, the iris feature of the object to be identified can be obtained from the detection frame as the feature for identifying the object's identity.
B12: identifying the second feature according to the preset second identification method to obtain the true identity of the attack user.

For the process of identifying the second feature according to the preset second identification method, reference may be made to the process of identifying the object to be identified in the detection frame according to the preset second identification method in the previous embodiment, which is not repeated here.

In the case where the second feature is an iris feature, the preset second identification method can correspondingly be set as an iris recognition method. For the process of identifying the second feature by iris recognition, reference may be made to the identity recognition process using iris recognition technology in the prior art, which is not described here.

It can be understood that identifying the second feature according to the preset second identification method to obtain the true identity of the attack user can successfully cope with an attack user impersonating another person with a lifelike mask: even in that case, the true identity of the attack user can still be effectively identified, improving the reliability of identity recognition.
As shown in Fig. 4, which is a flowchart of Embodiment 3 of the liveness detection method provided by the present application, the method includes the following steps:

Step S401: detecting a received video frame stream to obtain a detection frame, the detection frame containing an object to be identified that meets a preset condition, and the video frame stream containing at least two frames of images.

Step S402: obtaining at least one frame of image preceding the detection frame.

Step S403: analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image to obtain a behavior recognition result for the object to be identified.

Step S404: if the behavior recognition result characterizes the object to be identified as an attack user, executing a corresponding operation, the attack user being a user impersonating an identity.

Steps S401-S404 are consistent with steps S101-S104 in Embodiment 1 and are not repeated in this embodiment.
Step S405: training a preset deep learning model based on the video frame stream and the corresponding object to be identified being an attack user, so that a video frame stream obtained subsequently can be processed by the trained deep learning model to obtain the identity of the object to be identified.

The deep learning model trained on the video frame stream and the corresponding object to be identified being an attack user can process a subsequently obtained video stream to obtain the identity of the object to be identified. That process may include: the trained deep learning model analyzes the behavior of the object to be identified in the detection frame of the newly obtained video stream and in at least one frame of image preceding the detection frame, obtaining the behavior recognition result of the object; if the result characterizes the object as an attack user, identity recognition is performed on the object to obtain the true identity of the attack user.
Of course, this embodiment may also train the preset deep learning model on both streams of video frames whose corresponding objects to be identified are attack users and streams whose corresponding objects to be identified are real users, so that the trained model can process a newly acquired stream of video frames to obtain the identity of the object to be identified.
A model trained on both attack users and real users can analyze the behavior of the object to be identified in the detection frame of a newly acquired stream of video frames and in at least one frame of image preceding the detection frame, obtaining a behavior recognition result of the object to be identified. If the result characterizes the object to be identified as an attack user, identity recognition is performed on the object to be identified to obtain the true identity of the attack user; if the result characterizes the object to be identified as a real user, identity recognition is performed on the object to be identified to obtain the true identity of the real user.
Processing the newly acquired stream of video frames with the trained deep learning model can improve the efficiency of identity recognition, and the accuracy of that recognition can be raised by improving the precision of the model's training.
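The training of the preset deep learning model on streams labelled as attack users or real users can be sketched as supervised binary classification over per-stream behavior features. The sketch below substitutes a tiny logistic regression in NumPy for the deep model and uses synthetic two-dimensional features; every name, the feature layout, and the toy data are illustrative assumptions, not the patent's actual model or training data.

```python
import numpy as np

def train_liveness_model(features, labels, lr=0.1, epochs=500):
    """Fit a minimal logistic-regression stand-in for the deep model.

    features: (n_samples, n_features) per-stream behavior features
    labels:   (n_samples,) with 1 = attack user, 0 = real user
    Returns a weight vector (last entry is the bias).
    """
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid prediction
        w -= lr * X.T @ (p - labels) / len(X)   # gradient step on log-loss
    return w

def predict_attack(features, w):
    """True where the trained model characterizes the stream as an attack."""
    X = np.hstack([features, np.ones((len(features), 1))])
    return (1.0 / (1.0 + np.exp(-X @ w))) > 0.5

# Toy streams: real users show large motion/depth cues, attacks are flat.
rng = np.random.default_rng(0)
real = rng.normal([1.0, 1.0], 0.1, size=(50, 2))
attack = rng.normal([0.0, 0.0], 0.1, size=(50, 2))
X = np.vstack([real, attack])
y = np.concatenate([np.zeros(50), np.ones(50)])
w = train_liveness_model(X, y)
```

Retraining on newly collected attack streams, as step S405 describes, then amounts to calling `train_liveness_model` again on the enlarged feature set.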
In another embodiment of the present application, an implementation of step S103 in Embodiment 1 (analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image, to obtain the behavior recognition result of the object to be identified) is introduced, and may specifically include:
C11: Identify the behavior of the object to be identified based on the detection frame and the at least one frame of image.
This embodiment can identify the behavior of the object to be identified in the detection frame and in the at least one frame of image preceding the detection frame.
Preferably, a behavior recognition model can be trained in advance, and that pre-trained model is then used to identify the behavior of the object to be identified in the detection frame and in the at least one frame of image preceding it.
The process of identifying the behavior of the object to be identified may follow behavior recognition processes in the prior art and is not repeated in this embodiment.
C12: If the behavior of the object to be identified does not meet a preset behavior condition, determine that the behavior of the object to be identified is an attack behavior, and the behavior recognition result characterizes the object to be identified as an attack user.
The preset behavior condition can be understood as a preset normal-behavior condition, e.g., approaching the camera from far to near, or performing no special action (such as blocking the face) while moving.
Once the preset behavior condition is set, if the behavior of the object to be identified does not meet it, the behavior of the object to be identified can be judged to be an attack behavior; the attack behavior is the behavior recognition result of the object to be identified.
Correspondingly, when the behavior recognition result of the object to be identified is an attack behavior, the result characterizes the object to be identified as an attack user.
In this embodiment, by setting a reasonable preset behavior condition, whether the behavior of the object to be identified is an attack behavior can be judged accurately and conveniently.
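A normal-behavior condition such as "move from far to near" can be checked from the detected face bounding-box areas across the frames up to and including the detection frame. The function below is an illustrative sketch only; its name, the 5% frame-to-frame jitter allowance, and the 1.2x growth threshold are assumptions, not values from the patent.

```python
def satisfies_far_to_near(face_areas, min_growth=1.2):
    """Check a preset 'approach from far to near' behavior condition.

    face_areas: face bounding-box areas (in pixels), in temporal order,
    for the frames up to and including the detection frame. A live user
    walking toward the camera shows a roughly monotonically growing face.
    """
    if len(face_areas) < 2:
        return False  # motion cannot be judged from a single frame
    non_decreasing = all(b >= a * 0.95  # allow small detection jitter
                         for a, b in zip(face_areas, face_areas[1:]))
    grew_enough = face_areas[-1] >= face_areas[0] * min_growth
    return non_decreasing and grew_enough
```

A stream whose face stays the same size (e.g., a picture held at a fixed distance) fails the condition and is judged an attack behavior under step C12.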
In another embodiment of the present application, another implementation of step S103 in Embodiment 1 (analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image, to obtain the behavior recognition result of the object to be identified) is introduced, and may specifically include:
D11: Identify the behavior of the object to be identified based on the detection frame and the at least one frame of image.
Step D11 is identical to step C11 in the previous embodiment; for its detailed process, refer to the related introduction of step C11, which is not repeated here.
D12: Analyze the detection frame and the at least one frame of image to obtain a first object.
Since an attack user impersonating an identity generally carries out the attack with certain articles (e.g., blocking their own face with a picture printed with another person's portrait, or displaying another person's electronic photo on a display screen), this embodiment can analyze the detection frame and the at least one frame of image preceding it to obtain a first object that can characterize an attack tool.
D13: If the relative positional relationship between the object to be identified and the first object meets a preset attack condition, and/or the behavior of the object to be identified does not meet a preset action condition, determine that the behavior of the object to be identified is an attack behavior, and the behavior recognition result characterizes the object to be identified as an attack user.
When an attack user implements an attack, there is generally a characteristic relative positional relationship between the object to be identified and the attack tool (for example, the hand of the object to be identified and the attack tool, such as a picture printed with another person's portrait, may be at the same horizontal position; in Fig. 2(a), the user's hand and the picture are at the same horizontal position). Therefore, this embodiment can set an attack condition, which may be: a condition that characterizes the relative positional relationship between the object to be identified and the attack tool when the object to be identified is implementing an attack.
If the relative positional relationship between the object to be identified and the first object meets the preset attack condition, the behavior of the object to be identified can be determined to be an attack behavior.
Of course, the behavior of the object to be identified can also be determined to be an attack behavior based on that behavior failing to meet the preset action condition. For that process, refer to the related introduction of step C12 in the previous embodiment, which is not repeated here.
In this embodiment, the behavior of the object to be identified may also be determined to be an attack behavior when the relative positional relationship between the object to be identified and the first object meets the preset attack condition and the behavior of the object to be identified does not meet the preset action condition.
Determining an attack behavior from both conditions together (the relative positional relationship meeting the preset attack condition and the behavior failing the preset action condition) is more accurate than judging from either condition alone.
Correspondingly, the behavior recognition result of the object to be identified is an attack behavior, and the result characterizes the object to be identified as an attack user.
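The attack condition on relative position (the hand and the attack tool at the same horizontal position, as in Fig. 2(a)) can be expressed as a tolerance test on the vertical centers of the two detected bounding boxes. This is an illustrative sketch; the (x, y, w, h) box convention and the tolerance factor are assumptions, not values from the patent.

```python
def is_attack_position(hand_box, tool_box, tol=0.2):
    """True if the hand and the first object (attack tool) sit at the
    same horizontal level, i.e. the preset attack condition is met.

    Boxes are (x, y, w, h) in pixels, with y growing downward.
    """
    hand_cy = hand_box[1] + hand_box[3] / 2.0   # vertical center of hand
    tool_cy = tool_box[1] + tool_box[3] / 2.0   # vertical center of tool
    ref = max(hand_box[3], tool_box[3])         # scale tolerance by box height
    return abs(hand_cy - tool_cy) <= tol * ref
```

A hand raised to hold a printed picture next to the face satisfies the test, while a tool detected far below the hand does not.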
In another embodiment of the present application, yet another implementation of step S103 in Embodiment 1 (analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image, to obtain the behavior recognition result of the object to be identified) is introduced, and may specifically include:
E11: Identify the behavior of the object to be identified based on the detection frame and the at least one frame of image.
Step E11 is identical to step C11 in the previous embodiment; for its detailed process, refer to the related introduction of step C11, which is not repeated here.
E12: Analyze the detection frame to obtain a third feature of the object to be identified.
The third feature can be understood as a feature of some body part of the object to be identified, e.g., a facial feature.
E13: If the third feature does not meet a preset characteristic condition, and/or the behavior of the object to be identified does not meet a preset action condition, determine that the behavior of the object to be identified is an attack behavior, and the behavior recognition result characterizes the object to be identified as an attack user.
It can be understood that the features of a body part of a living subject should exhibit stereoscopic variation (e.g., depth variation), so the preset characteristic condition can be set as a stereoscopic-information characteristic condition. On this basis, whether the behavior of the object to be identified is an attack behavior is determined by judging whether the third feature meets the preset characteristic condition; if the third feature does not meet the condition, the behavior of the object to be identified can be determined to be an attack behavior.
It should be noted that when the preset characteristic condition is set as a stereoscopic-information characteristic condition, the video acquisition device should specifically use a 3D (three-dimensional) camera, to guarantee that effective stereoscopic-information features can be extracted from the video it shoots.
Of course, the behavior of the object to be identified can also be determined to be an attack behavior based on that behavior failing to meet the preset action condition. For that process, refer to the related introduction of step C12 in the previous embodiment, which is not repeated here.
In this embodiment, the behavior of the object to be identified may also be determined to be an attack behavior when the third feature does not meet the preset characteristic condition and the behavior of the object to be identified does not meet the preset action condition.
Determining an attack behavior from both conditions together (the third feature failing the preset characteristic condition and the behavior failing the preset action condition) is more accurate than judging from either condition alone.
Correspondingly, the behavior recognition result of the object to be identified is an attack behavior, and the result characterizes the object to be identified as an attack user.
It can be understood that because the body-part features in a picture printed with another person's portrait, or in another person's portrait shown on a display screen, exhibit no stereoscopic variation, they do not meet the preset characteristic condition. Judging whether the third feature meets the preset characteristic condition therefore accurately determines whether the behavior of the object to be identified is an attack behavior when an attack user blocks their own face with a printed picture of another person's portrait, or shows another person's portrait on a display screen.
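With a 3D camera, the stereoscopic-information characteristic condition can be approximated by the spread of depth values inside the detected face region: a live face has relief (the nose is closer than the ears), while a printed picture or a display screen is nearly flat. The sketch below is illustrative; the (x, y, w, h) convention and the 5 mm threshold are assumptions, not values from the patent.

```python
import numpy as np

def meets_depth_condition(depth_map, face_box, min_std=5.0):
    """Check the preset characteristic condition on the third feature.

    depth_map: per-pixel depth from a 3D camera, in millimetres
    face_box:  (x, y, w, h) of the detected face region
    """
    x, y, w, h = face_box
    region = depth_map[y:y + h, x:x + w]
    return bool(np.std(region) >= min_std)  # flat surfaces fail the test
```

Under step E13, a face region that fails this test is judged an attack behavior.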
Corresponding to the above embodiments of the liveness detection method provided by the present application, the present application also provides embodiments of an electronic device that applies the liveness detection method.
Fig. 5 shows a structural schematic diagram of Embodiment 1 of an electronic device provided by the present application. The electronic device has a screenshot function and includes the following structure:
a camera 501 and a processor 502.
The camera 501 is configured to acquire video of an image acquisition region.
Preferably, the camera 501 can be a 2D (two-dimensional) camera or a 3D camera, to meet different video acquisition demands.
The processor 502 is configured to: detect the received stream of video frames to obtain a detection frame, where the detection frame contains an object to be identified that meets a preset condition and the stream of video frames includes at least two frames of images; obtain at least one frame of image preceding the detection frame; analyze the behavior of the object to be identified in the detection frame and the at least one frame of image, to obtain a behavior recognition result of the object to be identified; and, if the behavior recognition result characterizes the object to be identified as an attack user, execute a corresponding operation, where the attack user is a user impersonating an identity.
The processor 502 can specifically adopt a structure or chip with strong information processing capability, such as a CPU (central processing unit).
Preferably, the processor 502 can be further configured to: if the behavior recognition result characterizes the object to be identified as a real user, identify the object to be identified in the detection frame according to a preset first identification method, where the real user is a user not impersonating an identity.
Preferably, the above executing a corresponding operation may include:
identifying the object to be identified in the detection frame according to a preset second identification method, to obtain the true identity of the attack user.
Preferably, the above identifying the object to be identified in the detection frame according to the preset second identification method may specifically include:
obtaining an analysis frame, where the moment of the analysis frame in the stream of video frames precedes the moment of the detection frame;
obtaining a first feature of the attack user in the analysis frame; and
identifying the first feature according to a preset first identification method, to obtain the true identity of the attack user.
Preferably, the above identifying the object to be identified in the detection frame according to the preset second identification method may alternatively include:
obtaining a second feature of the object to be identified in the detection frame; and
identifying the second feature according to the preset second identification method, to obtain the true identity of the attack user.
Preferably, the processor 502 can be further configured to:
train a preset deep learning model based on streams of video frames whose corresponding objects to be identified are attack users, so that the trained deep learning model can process a newly acquired stream of video frames to obtain the identity of the object to be identified.
Preferably, the above analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image, to obtain the behavior recognition result of the object to be identified, may include:
identifying the behavior of the object to be identified based on the detection frame and the at least one frame of image; and
if the behavior of the object to be identified does not meet a preset behavior condition, determining that the behavior of the object to be identified is an attack behavior,
where the behavior recognition result characterizes the object to be identified as an attack user.
Preferably, the above analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image, to obtain the behavior recognition result of the object to be identified, may alternatively include:
identifying the behavior of the object to be identified based on the detection frame and the at least one frame of image;
analyzing the detection frame and the at least one frame of image to obtain a first object; and
if the relative positional relationship between the object to be identified and the first object meets a preset attack condition and/or the behavior of the object to be identified does not meet a preset action condition, determining that the behavior of the object to be identified is an attack behavior,
where the behavior recognition result characterizes the object to be identified as an attack user.
Preferably, the above analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image, to obtain the behavior recognition result of the object to be identified, may alternatively include:
identifying the behavior of the object to be identified based on the detection frame and the at least one frame of image;
analyzing the detection frame to obtain a third feature of the object to be identified; and
if the third feature does not meet a preset characteristic condition and/or the behavior of the object to be identified does not meet a preset action condition, determining that the behavior of the object to be identified is an attack behavior,
where the behavior recognition result characterizes the object to be identified as an attack user.
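The processor's control flow described above (obtain the detection frame and the preceding frames, analyze behavior, then branch to the appropriate identification path) can be summarized in a small class. All names are hypothetical, and the behavior check and identification method are injected as plain callables, so the sketch shows only the flow, not real vision code.

```python
class LivenessProcessor:
    """Minimal sketch of the processor 502 pipeline (illustrative names)."""

    def __init__(self, behavior_check, identify):
        self.behavior_check = behavior_check  # True when behavior is normal
        self.identify = identify              # identification method

    def process(self, frames):
        """frames: stream of video frames; the last one is the detection frame."""
        if len(frames) < 2:
            raise ValueError("stream must contain at least two frames")
        detection_frame, prior = frames[-1], frames[:-1]
        if self.behavior_check(detection_frame, prior):
            # real user: identify normally
            return ("real_user", self.identify(detection_frame))
        # attack user: still identify, to recover the true identity
        return ("attack_user", self.identify(detection_frame))
```

In a real device the two callables would wrap the behavior recognition model and the face identification method.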
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to one another. Since the device provided by an embodiment corresponds to the method provided by that embodiment, its description is relatively brief; for related details, refer to the description of the method.
The above description of the provided embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein can be realized in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features provided herein.
Claims (10)
1. A liveness detection method, comprising:
detecting a received stream of video frames to obtain a detection frame, wherein the detection frame contains an object to be identified that meets a preset condition, and the stream of video frames includes at least two frames of images;
obtaining at least one frame of image preceding the detection frame;
analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image, to obtain a behavior recognition result of the object to be identified; and
if the behavior recognition result characterizes the object to be identified as an attack user, executing a corresponding operation, wherein the attack user is a user impersonating an identity.
2. The method according to claim 1, further comprising:
if the behavior recognition result characterizes the object to be identified as a real user, identifying the object to be identified in the detection frame according to a preset first identification method, wherein the real user is a user not impersonating an identity.
3. The method according to claim 1, wherein the executing a corresponding operation comprises:
identifying the object to be identified in the detection frame according to a preset second identification method, to obtain the true identity of the attack user.
4. The method according to claim 3, wherein the identifying the object to be identified in the detection frame according to the preset second identification method specifically comprises:
obtaining an analysis frame, wherein the moment of the analysis frame in the stream of video frames precedes the moment of the detection frame;
obtaining a first feature of the attack user in the analysis frame; and
identifying the first feature according to a preset first identification method, to obtain the true identity of the attack user.
5. The method according to claim 3, wherein the identifying the object to be identified in the detection frame according to the preset second identification method specifically comprises:
obtaining a second feature of the object to be identified in the detection frame; and
identifying the second feature according to the preset second identification method, to obtain the true identity of the attack user.
6. The method according to claim 1, further comprising:
training a preset deep learning model based on streams of video frames whose corresponding objects to be identified are attack users, so that the trained deep learning model processes a newly acquired stream of video frames to obtain the identity of the object to be identified.
7. The method according to claim 1, wherein the analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image, to obtain the behavior recognition result of the object to be identified, comprises:
identifying the behavior of the object to be identified based on the detection frame and the at least one frame of image; and
if the behavior of the object to be identified does not meet a preset behavior condition, determining that the behavior of the object to be identified is an attack behavior,
wherein the behavior recognition result characterizes the object to be identified as an attack user.
8. The method according to claim 1, wherein the analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image, to obtain the behavior recognition result of the object to be identified, comprises:
identifying the behavior of the object to be identified based on the detection frame and the at least one frame of image;
analyzing the detection frame and the at least one frame of image to obtain a first object; and
if the relative positional relationship between the object to be identified and the first object meets a preset attack condition and/or the behavior of the object to be identified does not meet a preset action condition, determining that the behavior of the object to be identified is an attack behavior,
wherein the behavior recognition result characterizes the object to be identified as an attack user.
9. The method according to claim 1, wherein the analyzing the behavior of the object to be identified in the detection frame and the at least one frame of image, to obtain the behavior recognition result of the object to be identified, comprises:
identifying the behavior of the object to be identified based on the detection frame and the at least one frame of image;
analyzing the detection frame to obtain a third feature of the object to be identified; and
if the third feature does not meet a preset characteristic condition and/or the behavior of the object to be identified does not meet a preset action condition, determining that the behavior of the object to be identified is an attack behavior,
wherein the behavior recognition result characterizes the object to be identified as an attack user.
10. An electronic device, comprising:
a camera, configured to acquire video of an image acquisition region; and
a processor, configured to: detect a received stream of video frames to obtain a detection frame, wherein the detection frame contains an object to be identified that meets a preset condition and the stream of video frames includes at least two frames of images; obtain at least one frame of image preceding the detection frame; analyze the behavior of the object to be identified in the detection frame and the at least one frame of image, to obtain a behavior recognition result of the object to be identified; and, if the behavior recognition result characterizes the object to be identified as an attack user, execute a corresponding operation, wherein the attack user is a user impersonating an identity.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811331045.1A CN109492585B (en) | 2018-11-09 | 2018-11-09 | Living body detection method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109492585A true CN109492585A (en) | 2019-03-19 |
CN109492585B CN109492585B (en) | 2023-07-25 |
Family
ID=65694207
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811331045.1A Active CN109492585B (en) | 2018-11-09 | 2018-11-09 | Living body detection method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109492585B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105426815A (en) * | 2015-10-29 | 2016-03-23 | 北京汉王智远科技有限公司 | Living body detection method and device |
CN106203235A (en) * | 2015-04-30 | 2016-12-07 | 腾讯科技(深圳)有限公司 | Live body discrimination method and device |
CN106778525A (en) * | 2016-11-25 | 2017-05-31 | 北京旷视科技有限公司 | Identity identifying method and device |
CN106897658A (en) * | 2015-12-18 | 2017-06-27 | 腾讯科技(深圳)有限公司 | The discrimination method and device of face live body |
WO2017139325A1 (en) * | 2016-02-09 | 2017-08-17 | Aware, Inc. | Face liveness detection using background/foreground motion analysis |
CN108182409A (en) * | 2017-12-29 | 2018-06-19 | 北京智慧眼科技股份有限公司 | Biopsy method, device, equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
Zhang Gaoming et al.: "Face-Based Liveness Detection System", Computer Systems & Applications * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021042375A1 (en) * | 2019-09-06 | 2021-03-11 | 深圳市汇顶科技股份有限公司 | Face spoofing detection method, chip, and electronic device |
CN112997185A (en) * | 2019-09-06 | 2021-06-18 | 深圳市汇顶科技股份有限公司 | Face living body detection method, chip and electronic equipment |
CN111062019A (en) * | 2019-12-13 | 2020-04-24 | 支付宝(杭州)信息技术有限公司 | User attack detection method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN109492585B (en) | 2023-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105243386B (en) | Face living body judgment method and system | |
CN105612533B (en) | Living body detection method, living body detection system, and computer program product | |
CN106022209B (en) | A kind of method and device of range estimation and processing based on Face datection | |
CN104143086B (en) | Portrait compares the application process on mobile terminal operating system | |
RU2431190C2 (en) | Facial prominence recognition method and device | |
KR101118654B1 (en) | rehabilitation device using motion analysis based on motion capture and method thereof | |
CN110059644A (en) | A kind of biopsy method based on facial image, system and associated component | |
CN108021892B (en) | Human face living body detection method based on extremely short video | |
US11006864B2 (en) | Face detection device, face detection system, and face detection method | |
CN105718863A (en) | Living-person face detection method, device and system | |
CN110163126A (en) | A kind of biopsy method based on face, device and equipment | |
CN108875469A (en) | In vivo detection and identity authentication method, device and computer storage medium | |
CN105993022B (en) | Method and system for recognition and authentication using facial expressions | |
CN109492585A (en) | A kind of biopsy method and electronic equipment | |
JP7268725B2 (en) | Image processing device, image processing method, and image processing program | |
CN104063041B (en) | A kind of information processing method and electronic equipment | |
CN111639582A (en) | Living body detection method and apparatus | |
CN109729268B (en) | Face shooting method, device, equipment and medium | |
CN111178233A (en) | Identity authentication method and device based on living body authentication | |
CN109788193B (en) | Camera unit control method | |
CN106611417A (en) | A method and device for classifying visual elements as a foreground or a background | |
CN110263753A (en) | A kind of object statistical method and device | |
JP2020095651A (en) | Productivity evaluation system, productivity evaluation device, productivity evaluation method, and program | |
JP7463792B2 (en) | Information processing system, information processing device, and information processing method | |
CN111507124A (en) | Non-contact video lie detection method and system based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||