CN107992794B - Liveness detection method, apparatus and storage medium - Google Patents
Liveness detection method, apparatus and storage medium
- Publication number
- CN107992794B CN107992794B CN201711012244.1A CN201711012244A CN107992794B CN 107992794 B CN107992794 B CN 107992794B CN 201711012244 A CN201711012244 A CN 201711012244A CN 107992794 B CN107992794 B CN 107992794B
- Authority
- CN
- China
- Prior art keywords
- test object
- light
- sequence
- reflected light
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Geometry (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Image Analysis (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
An embodiment of the invention discloses a liveness detection method, apparatus and storage medium. When liveness detection is required, a light source can be started to project light onto a test object, and images of the test object are captured. When a reflected light signal produced by the projected light is identified on the surface of the test object in the captured image sequence, a preset recognition model is used to identify the type of object to which the image feature formed by the reflected light signal on the surface of the test object belongs; if the recognition result indicates that that type is a living body, the test object is determined to be a living body. This scheme improves the effectiveness of liveness detection and thereby the accuracy and security of identity verification.
Description
This application claims priority to Chinese patent application No. 2016112570522, entitled "Liveness detection method and apparatus", filed with the Chinese Patent Office on December 30, 2016, the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to the field of communication technology, and in particular to a liveness detection method, apparatus and storage medium.
Background art
In recent years, identity verification technologies such as fingerprint recognition, eyeprint recognition, iris recognition and face recognition have all developed greatly. Among them, face recognition is the most prominent and has been applied ever more widely in identity verification systems of all kinds.
An identity verification system based on face recognition mainly needs to solve two problems: face verification and liveness detection. Liveness detection is primarily used to confirm that captured data, such as a facial image, comes from the user rather than from replayed or forged material. Against current attacks on liveness detection, such as photo attacks, video replay attacks and synthesized-face attacks, a "randomized interaction" technique has been proposed: motion changes of different facial parts are detected in the video and combined with randomized interactive actions that the user must actively perform, for example blinking, shaking the head or lip-reading recognition, and on this basis it is judged whether the test object is a living body.
During research into and practice with the prior art, the inventors of the present invention found that the algorithms used by existing liveness detection schemes do not discriminate accurately and cannot effectively resist synthesized-face attacks; in addition, the cumbersome active interaction greatly reduces the pass rate of genuine samples. Overall, the liveness detection of existing schemes is poor, which largely affects the accuracy and security of identity verification.
Summary of the invention
Embodiments of the present invention provide a liveness detection method, apparatus and storage medium that can improve the effectiveness of liveness detection and thereby the accuracy and security of identity verification.
An embodiment of the present invention provides a liveness detection method, comprising:
receiving a liveness detection request;
starting a light source according to the liveness detection request, and projecting light onto a test object;
capturing images of the test object to obtain an image sequence;
identifying, on the surface of the test object in the image sequence, a reflected light signal produced by the projected light, the reflected light signal forming an image feature on the surface of the test object;
identifying, using a preset recognition model, the type of object to which the image feature belongs, the preset recognition model being trained from multiple feature samples, each feature sample being an image feature formed by the reflected light signal on the surface of an object of labelled type; and
if the recognition result indicates that the type of object to which the image feature belongs is a living body, determining that the test object is a living body.
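The steps above reduce to a simple control flow. The sketch below is illustrative only; `detect_reflection`, `classifier` and the "living" label are placeholder names, not identifiers from the patent:

```python
# A minimal sketch of the claimed flow, under the assumption that reflection
# detection and type recognition are supplied as callables.

def liveness_check(frames, classifier, detect_reflection):
    """Return True only if the projected light's reflection is found on the
    test object's surface and the resulting image feature is classified as
    belonging to a living body."""
    feature = detect_reflection(frames)  # reflected-light image feature, or None
    if feature is None:                  # no reflection of the projected light
        return False
    return classifier(feature) == "living"
```

In this reading, the model never runs unless the reflection of the projected light is actually present, which is what distinguishes the scheme from a plain face classifier.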
Correspondingly, an embodiment of the present invention also provides a liveness detection apparatus, comprising:
a receiving unit for receiving a liveness detection request;
a starting unit for starting a light source according to the liveness detection request and projecting light onto a test object;
a capture unit for capturing images of the test object to obtain an image sequence; and
a detection unit for identifying, on the surface of the test object in the image sequence, a reflected light signal produced by the projected light, the reflected light signal forming an image feature on the surface of the test object; for identifying, using a preset recognition model, the type of object to which the image feature belongs, the preset recognition model being trained from multiple feature samples, each feature sample being an image feature formed by the reflected light signal on the surface of an object of labelled type; and for determining, if the recognition result indicates that the type of object to which the image feature belongs is a living body, that the test object is a living body.
Optionally, in some embodiments, the starting unit is specifically configured to start a preset color mask according to the liveness detection request, the color mask serving as the light source that projects light onto the test object.
Optionally, in some embodiments, the starting unit is specifically configured to start a detection interface according to the liveness detection request, the detection interface including a non-detection region in which a color mask is flashed, the color mask serving as the light source that projects light onto the test object.
Optionally, in some embodiments, the liveness detection apparatus may further include a generation unit, as follows:
the generation unit is configured to generate a color mask, the light projected by the color mask varying according to a preset rule.
Optionally, in some embodiments, the generation unit is further configured to:
for light of the same color, obtain a preset screen-brightness adjustment parameter, and adjust, according to it, the screen brightness of the same-color light before and after the change, so as to adjust the intensity of the light change; and
for light of different colors, obtain a preset color-difference adjustment parameter, and adjust, according to it, the color difference of the different-color light before and after the change, so as to adjust the intensity of the light change.
Optionally, in some embodiments, the generation unit is specifically configured to:
obtain a preset coded sequence containing multiple codes;
determine, using a preset coding algorithm, the color corresponding to each code in the order of the codes in the coded sequence, obtaining a color sequence; and
generate a color mask based on the color sequence, so that the light projected by the color mask changes through the colors indicated by the color sequence.
Optionally, in some embodiments, the detection unit may include a computation subunit, a judgment subunit and a recognition subunit, as follows:
the computation subunit is configured to perform regression analysis on the frame-to-frame changes in the image sequence to obtain a regression result;
the judgment subunit is configured to determine from the regression result whether a reflected light signal produced by the projected light is present on the surface of the test object in the image sequence; and
the recognition subunit is configured to, when the judgment subunit determines that such a reflected light signal is present, identify the type of object to which the image feature belongs using the preset recognition model, and, if the recognition result indicates that that type is a living body, determine that the test object is a living body.
Optionally, in some embodiments, the judgment subunit is specifically configured to determine whether the regression result exceeds a preset threshold; if so, it determines that a reflected light signal produced by the projected light is present on the surface of the test object in the image sequence; if not, it determines that no such reflected light signal is present.
Optionally, in some embodiments, the judgment subunit is specifically configured to classify the regression result using a preset global-feature algorithm or a preset recognition model; if the analysis result indicates that the frame-to-frame change of the surface of the test object exceeds a set value, it determines that a reflected light signal produced by the projected light is present on the surface of the test object in the image sequence; if the frame-to-frame change does not exceed the set value, it determines that no such reflected light signal is present.
Optionally, in some embodiments, where the color mask is generated from a preset coded sequence, the judgment subunit is specifically configured to decode the image sequence from the regression result using a preset decoding algorithm to obtain a decoded sequence, and to determine whether the decoded sequence matches the coded sequence; if they match, it determines that a reflected light signal produced by the projected light is present on the surface of the test object in the image sequence; if they do not match, it determines that no such reflected light signal is present.
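When the mask is driven by a coded sequence, the match test above amounts to decoding one code per light transition and comparing the result with the transmitted sequence. A minimal sketch follows, assuming a three-way thresholding rule as the decoding algorithm, which the patent does not actually specify:

```python
def decode_and_match(regression_values, expected_codes, thresholds=(-0.5, 0.5)):
    """Map per-transition regression values back to codes {-1, 0, 1} and
    compare them with the expected coded sequence.

    The thresholds are illustrative; the patent only requires that some
    preset decoding algorithm recover the sequence."""
    lo, hi = thresholds
    decoded = []
    for v in regression_values:
        if v < lo:
            decoded.append(-1)
        elif v > hi:
            decoded.append(1)
        else:
            decoded.append(0)
    return decoded == list(expected_codes)
```

A mismatch means the camera did not observe the randomly chosen light pattern being reflected, which is exactly the replay-attack case the scheme is designed to catch.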
Optionally, in some embodiments, the computation subunit is specifically configured to: when the positional change of the test object is determined to be less than a preset change value, obtain the pixel coordinates of adjacent frames in the image sequence and compute the frame difference from them; or,
when the positional change of the test object is determined to be less than the preset change value, obtain from the image sequence the pixel coordinates of the frames captured before and after a change of the projected light and compute the frame difference from them; or,
when the positional change of the test object is determined to be less than the preset change value, obtain from the image sequence the chroma/luminance of the frames captured before and after a change of the projected light, compute it with a preset regression function to obtain the relative chroma/luminance change between those frames, and use that relative change as the frame difference between the frames before and after the light change.
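The third variant, a relative chroma/luminance change between the frames straddling a light switch, can be approximated as below; the ratio-based formula is a stand-in for the patent's unspecified regression function:

```python
import numpy as np

def chroma_frame_diff(frame_before, frame_after, eps=1e-6):
    """Relative per-channel intensity change between the frames captured
    just before and just after a change of the projected light.

    Returns one relative-change value per color channel; eps guards
    against division by zero in dark regions."""
    before = np.asarray(frame_before, dtype=np.float64)
    after = np.asarray(frame_after, dtype=np.float64)
    # mean per-channel intensity over each frame (e.g. the face region)
    m_before = before.mean(axis=(0, 1))
    m_after = after.mean(axis=(0, 1))
    return (m_after - m_before) / (m_before + eps)
```

Using a relative rather than absolute difference makes the measure less sensitive to the overall exposure of the scene.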
In addition, an embodiment of the present invention also provides a storage medium storing multiple instructions adapted to be loaded by a processor to perform the steps of any liveness detection method provided by the embodiments of the present invention.
When liveness detection is required, an embodiment of the present invention can start a light source to project light onto a test object and capture images of the test object; when a reflected light signal produced by the projected light is identified on the surface of the test object in the captured image sequence, a preset recognition model is used to identify the type of object to which the image feature formed by the reflected light signal on the surface of the test object belongs, and, if the recognition result indicates that that type is a living body, the test object is determined to be a living body. Because this scheme requires no cumbersome interaction with the user (existing liveness detection schemes generally rely on instructed actions, such as turning the face left or right, opening the mouth or blinking, which require the user's cooperation; that mode is commonly called active-interaction liveness detection, whereas the embodiment of the present invention is interaction-free liveness detection, needing no user cooperation and imperceptible to the user), the demands on the hardware configuration can be reduced. Moreover, because the scheme bases its liveness judgment on the reflected light signal from the surface of the test object, and a real living body and a forged one (a carrier of a synthesized picture or video, such as a photograph, a mobile phone or a tablet computer) reflect light differently, the scheme can also effectively resist synthesized-face attacks and improve the accuracy of discrimination. In summary, the scheme can improve the effectiveness of liveness detection and thereby the accuracy and security of identity verification.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 a is the schematic diagram of a scenario of biopsy method provided in an embodiment of the present invention;
Fig. 1 b is another schematic diagram of a scenario of biopsy method provided in an embodiment of the present invention;
Fig. 1 c is the flow chart of biopsy method provided in an embodiment of the present invention;
Fig. 2 is another flow chart of biopsy method provided in an embodiment of the present invention;
Fig. 3 a is another flow chart of biopsy method provided in an embodiment of the present invention;
Fig. 3 b is the exemplary diagram of color change in biopsy method provided in an embodiment of the present invention;
Fig. 3 c is another exemplary diagram of color change in biopsy method provided in an embodiment of the present invention;
Fig. 4 a is the structural schematic diagram of living body detection device provided in an embodiment of the present invention;
Fig. 4 b is another structural schematic diagram of living body detection device provided in an embodiment of the present invention;
Fig. 5 is the structural schematic diagram of terminal provided in an embodiment of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiments of the present invention provide a liveness detection method, apparatus and storage medium.
The liveness detection apparatus may be integrated in a device such as a terminal. It can use changes in the brightness and color of the terminal screen, or other components or devices such as a flash lamp or an infrared emitter, as the light source projected onto the test object, and then perform liveness detection by analysing the reflected light signal from the surface of the test object, such as the face, in the captured image sequence.
For example, take the apparatus as integrated in a terminal and the light source as a color mask. When the terminal receives a liveness detection request, it may start a detection interface according to the request. As shown in Fig. 1a, in addition to a detection region, the detection interface also provides a non-detection region (the grey part marked in Fig. 1a) mainly used to flash the color mask, which can serve as the light source projecting light onto the test object; see for example Fig. 1b. Because a real living body and a forged one (a carrier of a synthesized picture or video, such as a photograph, a mobile phone or a tablet computer) reflect light differently, the living body can be discriminated by judging whether a reflected light signal produced by the projected light is present on the surface of the test object and whether that signal satisfies a preset condition. For example, images of the test object may be captured (the monitoring situation may be shown in the detection region of the detection interface), and it is then determined whether a reflected light signal produced by the projected light is present on the surface of the test object in the captured image sequence; if so, a preset recognition model is used to identify the type of object to which the image feature formed by the reflected light signal on the surface of the test object belongs, and, if the recognition result indicates that that type is a living body, the test object is determined to be a living body, and so on.
Each is described in detail below. Note that the numbering of the following embodiments does not limit their order of preference.
This embodiment is described from the perspective of the liveness detection apparatus of a terminal (liveness detection apparatus for short). The apparatus may be integrated in a device such as a terminal, which may specifically be a mobile phone, a tablet computer, a laptop, a personal computer (PC) or a similar device.
A liveness detection method comprises: receiving a liveness detection request; starting a light source according to the request and projecting light onto a test object; then capturing images of the test object to obtain an image sequence; identifying, on the surface of the test object in the image sequence, a reflected light signal produced by the projected light, the reflected light signal forming an image feature on the surface of the test object; identifying, using a preset recognition model, the type of object to which the image feature belongs; and, if the recognition result indicates that that type is a living body, determining that the test object is a living body.
As shown in Fig. 1c, the detailed flow of the liveness detection method may be as follows:
101. Receive a liveness detection request.
For example, a liveness detection request triggered by the user may be received, or a liveness detection request sent by another device, and so on.
102. Start a light source according to the liveness detection request, and project light onto the test object.
For example, the liveness detection procedure corresponding to the request may be called, and the light source started according to that procedure, and so on.
The light source can be configured according to the needs of the actual application: it can be implemented by adjusting the brightness of the terminal screen, by using another light-emitting component such as a flash lamp or an infrared emitter or an external device, or by setting a color mask on the display interface of the terminal, and so on. That is, the step "starting a light source according to the liveness detection request" may be implemented in any of the following ways:
(1) Adjust the screen brightness according to the liveness detection request, so that the screen serves as the light source projecting light onto the test object.
(2) Turn on a preset light-emitting component, such as a flash lamp or an infrared emitter, according to the liveness detection request, so that it serves as the light source projecting light onto the test object.
(3) Start a preset color mask according to the liveness detection request, the color mask serving as the light source projecting light onto the test object.
For example, the color mask in the terminal may be started according to the liveness detection request. A so-called color mask is a region or component that can flash colored light; for instance, a component that can flash a color mask may be arranged along the edge of the terminal housing and started after the liveness detection request is received, so as to flash the color mask. Alternatively, the color mask may be flashed by displaying a detection interface, as follows:
start a detection interface according to the liveness detection request, and flash the color mask in that interface, the color mask serving as the light source projecting light onto the test object.
The region of the detection interface in which the color mask flashes can be chosen according to the needs of the actual application. For example, the detection interface may include a detection region, mainly used to show the monitoring situation, and a non-detection region in which the color mask flashes, the color mask serving as the light source projecting light onto the test object, and so on.
Likewise, the part of the non-detection region in which the color mask flashes can depend on the needs of the actual application: the whole non-detection region may carry the color mask, or only one or several parts of it. Parameters such as the color and transparency of the color mask can also be configured as required. The mask may be preset by the system and fetched directly when the detection interface is started, or it may be generated automatically after the liveness detection request is received; that is, after the step "receiving a liveness detection request", the liveness detection method may further include:
generating a color mask, the light projected by the color mask varying according to a preset rule.
Optionally, to make the change of the light easier to identify later, the intensity of the light change can also be adjusted.
How the intensity of the light change is adjusted can depend on the needs of the actual application, and there are many possibilities. For light of the same color (i.e. light of the same wavelength), the intensity of the change can be adjusted by adjusting the screen brightness before and after the change, for example by setting the brightness before and after the change to the minimum and maximum; for light of different colors (i.e. light of different wavelengths), the intensity of the change can be adjusted by adjusting the color difference before and after the change, and so on. That is, after the color mask is generated, the liveness detection method may further include:
for light of the same color, obtaining a preset screen-brightness adjustment parameter, and adjusting, according to it, the screen brightness of the same-color light before and after the change, so as to adjust the intensity of the light change; and
for light of different colors, obtaining a preset color-difference adjustment parameter, and adjusting, according to it, the color difference of the different-color light before and after the change, so as to adjust the intensity of the light change.
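For the same-color case, a toy version of the brightness adjustment might look like this; `gain` stands in for the preset screen-brightness adjustment parameter, whose form the patent leaves open:

```python
def widen_brightness(before, after, gain):
    """Widen the brightness gap of a same-color flash by `gain`, clamped
    to the 0-255 screen range, so the reflected change is easier to see."""
    mid = (before + after) / 2
    half = abs(after - before) / 2 * gain
    low = max(0, round(mid - half))
    high = min(255, round(mid + half))
    return low, high
```

A large enough gain drives the pair to the minimum and maximum brightness, which matches the "maximize the change intensity" example in the text; an analogous scaling of the per-channel color difference would cover the different-color case.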
The amplitude by which the intensity of the light change is adjusted can be configured according to the needs of the actual application; it may be a large adjustment, for example maximizing the intensity of the change, or a fine one. For convenience, the following description takes maximizing the intensity of the light change as an example.
Optionally, so that the reflected light signal can be better detected later from the inter-frame differences of the images, besides adjusting the intensity of the light change, the colors can also be chosen in the color space most robust for signal analysis; for example, switching the screen from brightest red to brightest green maximizes the chroma change of the reflected light under a preset color space, and so on.
Optionally, to improve the accuracy and security of identity verification, a combination of light colors derived from a preset code can also be used as the color mask; that is, the step "generating a color mask, the light projected by the color mask varying according to a preset rule" may include:
obtaining a preset coded sequence containing multiple codes; determining, using a preset coding algorithm, the color corresponding to each code in the order of the codes in the coded sequence, obtaining a color sequence; and generating a color mask based on the color sequence, so that the light projected by the color mask changes through the colors indicated by the color sequence.
The coded sequence can be generated randomly or configured according to the needs of the actual application, and the preset coding algorithm can likewise depend on those needs; the coding algorithm can reflect the correspondence between each code in the coded sequence and the various colors. For example, red may represent the number -1, green 0, and blue 1; then, if the coded sequence obtained is "0, -1, 1, 0", the color sequence "green, red, blue, green" is obtained, and a color mask is generated whose projected light changes in the order "green, red, blue, green".
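The worked example above (red for -1, green for 0, blue for 1) can be written directly as a lookup; the function name is illustrative:

```python
# Correspondence taken from the patent's own example: red -1, green 0, blue 1.
CODE_TO_COLOR = {-1: "red", 0: "green", 1: "blue"}

def codes_to_colors(codes):
    """Turn a coded sequence into the color sequence the mask will flash."""
    return [CODE_TO_COLOR[c] for c in codes]
```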
Note that, during projection, the display duration of each color and the waiting interval when switching between colors can be configured according to the needs of the actual application; for example, each color may be displayed for 2 seconds, with the waiting interval set to 0 or 1 second, and so on.
During the waiting interval, no light may be projected, or a predetermined light may be projected. For example, if the waiting interval is not 0 and no light is projected during it, and the color order of the light projected by the color mask is "green, red, blue, green", then the projected light behaves as "green -> no light -> red -> no light -> blue -> no light -> green". If instead the waiting interval is 0 seconds, the color of the light projected by the color mask switches directly, i.e. the projected light behaves as "green -> red -> blue -> green", and so on; details are not repeated here.
Optionally, for further security, the rule by which the light changes can be made more complex; for example, the display durations of the colors and the waiting intervals between different colors can be set to unequal values, say a display duration of 3 seconds for green, 2 seconds for red and 4 seconds for blue, with a waiting interval of 1 second when switching between green and red and 1.5 seconds when switching between red and blue, and so on.
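A schedule with per-color durations and non-uniform dark gaps, as described above, can be represented as a plain list of (color, seconds) steps; the function and tuple layout are illustrative only:

```python
def build_schedule(colors, durations, waits):
    """Interleave color flashes with (possibly non-uniform) dark gaps.

    durations -- display time per color, in seconds
    waits     -- gap after each color except the last; 0 means a direct switch
    """
    steps = []
    for i, (color, dur) in enumerate(zip(colors, durations)):
        steps.append((color, dur))
        if i < len(waits) and waits[i] > 0:
            steps.append(("off", waits[i]))
    return steps
```

With the example values above, `build_schedule(["green", "red", "blue"], [3, 2, 4], [1, 1.5])` would yield the green/off/red/off/blue pattern with its unequal gaps.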
103. Capture images of the test object to obtain an image sequence.
For example, a camera device may be called to film the test object in real time to obtain the image sequence, which may then be shown in the detection region, and so on.
The camera device includes, but is not limited to, the terminal's built-in camera, a webcam, a surveillance camera and other devices able to capture images. Note that the light projected onto the test object may be visible or invisible; therefore, in the camera device provided in the embodiment of the present invention, different light receivers, such as an infrared receiver, can also be configured according to the needs of the actual application, so as to sense different light and capture the required image sequence; details are not repeated here.
Optionally, in order to reduce the influence of numerical fluctuations caused by noise on the signal, denoising may also be performed on the image sequence after it is obtained. For example, taking a Gaussian noise model as an example, multi-frame averaging in the temporal domain and/or multi-scale averaging within the same frame may specifically be used to reduce noise as much as possible; details are not repeated here.
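A minimal sketch of the temporal multi-frame averaging just mentioned (pure Python, frames represented as nested lists; all names are illustrative, not from the patent):

```python
# Hypothetical sketch: temporal multi-frame averaging to suppress zero-mean
# (e.g. Gaussian) noise. Each frame is a 2-D grid of pixel values; each output
# pixel is the mean of the same pixel over a sliding temporal window.

def multi_frame_mean(frames, window=3):
    out = []
    rows, cols = len(frames[0]), len(frames[0][0])
    for t in range(len(frames)):
        lo = max(0, t - window // 2)
        hi = min(len(frames), t + window // 2 + 1)
        out.append([[sum(frames[s][r][c] for s in range(lo, hi)) / (hi - lo)
                     for c in range(cols)] for r in range(rows)])
    return out

# Three 1x1 frames perturbed by noise; averaging pulls values toward the mean:
print(multi_frame_mean([[[0]], [[3]], [[6]]]))
```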
Optionally, other preprocessing operations, such as scaling, cropping, sharpening, and background blurring, may also be performed on the image sequence to improve the efficiency and accuracy of subsequent identification.
104. Identify whether there is a reflected light signal produced by the projected light on the surface of the test object in the image sequence.
Wherein, the reflected light signal forms image features on the surface of the test object. So-called image features may include the color features, texture features, shape features, and spatial relationship features of an image, etc. A color feature is a global feature that describes the surface properties of the scene corresponding to an image or image region; a texture feature is also a global feature, which likewise describes the surface properties of the scene corresponding to an image or image region; shape features have two classes of representation, one being contour features and the other region features, where the contour features of an image mainly target the outer boundary of an object while the region features relate to the entire shape region; spatial relationship features refer to the mutual spatial positions or relative direction relationships between the multiple targets segmented from an image, and these relationships can further be divided into connection/adjacency relationships, overlap relationships, inclusion/containment relationships, etc. In specific implementations, the image features may include information such as local binary patterns (LBP, Local Binary Patterns) feature descriptors, scale-invariant feature transform (SIFT, Scale-invariant feature transform) feature descriptors, and/or feature descriptors extracted by a convolutional neural network; details are not repeated here.
Wherein, there may be many ways to identify whether there is a reflected light signal produced by the projected light on the surface of the test object in the image sequence. For example, the variation of frames in the image sequence may specifically be used to detect the reflected light information, which may specifically be as follows:
(1) Perform regression analysis (Regression Analysis) on the variation of frames in the image sequence to obtain a regression result.
For example, the numerical expression of the chromaticity/luminance of each frame in the image sequence may specifically be regression-analyzed; the numerical expression may be a numerical sequence. Then, the chromaticity/luminance variation of the frames in the image sequence is judged according to the numerical expression, such as the numerical sequence, to obtain the regression result. That is, the numerical expression, for example the variation of the numerical sequence, may be used to represent the chromaticity variation or luminance variation of each frame, and the chromaticity variation or luminance variation of each frame may serve as the regression result.
Wherein, there may be many ways to regression-analyze the numerical expression of the chromaticity/luminance of each frame in the image sequence. For example, a preset image regression analysis model may be used to perform regression analysis on the chromaticity/luminance of each frame in the image sequence to obtain the numerical expression of the chromaticity/luminance of each frame, etc.
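The "numerical expression" above can be as simple as one number per frame. A minimal illustrative sketch (not the patent's regression model — the function name and reduction are assumptions):

```python
# Hypothetical sketch: reduce each frame to a single luminance value (its mean
# pixel intensity), producing a numerical sequence whose variation over time
# reflects the switching of the projected light.

def luminance_sequence(frames):
    return [sum(v for row in frame for v in row) / (len(frame) * len(frame[0]))
            for frame in frames]

frames = [[[10, 10], [10, 10]],   # dim frame
          [[90, 90], [90, 90]]]   # bright frame (light switched on)
print(luminance_sequence(frames))
```

A regression tree or regression convolutional neural network, as the text suggests, would replace this hand-written reduction with a learned one.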
Wherein, the image regression analysis model may be a regression tree or a regression convolutional neural network, etc., and may specifically be configured according to the needs of practical applications. The image regression analysis model may be trained in advance by another device, or may be trained by the liveness detection apparatus itself. For example, the training process may be as follows:
Acquire a preset number of images with different surface reflection information (such as facial reflection information) as an acquisition sample set, label the acquisition samples in the acquisition sample set according to a preset strategy, and use the labeled acquisition samples as training samples to obtain a training sample set; then, use a preset initial image regression analysis model (such as a regression tree or a regression convolutional neural network) to learn from the training sample set to obtain the image regression analysis model.
Wherein, the preset labeling strategy may be determined according to the needs of practical applications. For example, take regression-analyzing the luminance variation of frames, with the reflection information divided into three classes — strong reflection, weak reflection, and no reflection — as an example. In this case, acquisition samples with strong reflection may be labeled 1, acquisition samples with weak reflection may be labeled 0.5, and acquisition samples without any reflection may be labeled 0, etc. Then, using the preset initial image regression analysis model, the training sample set is learned in order to find, within the training sample set, the regression function that best fits the mapping relationship between the original images and the value labels, thereby obtaining the image regression analysis model.
Similarly, regression analysis of the chromaticity variation of frames is analogous. For example, image frames in which light of different colors is present on the surface of the test object (i.e., acquisition samples) may specifically be collected and then labeled, where the label is no longer a one-dimensional scalar but a triple of the corresponding RGB (Red, Green, Blue, i.e., the three primary colors) color — for example, (255, 0, 0) represents red, etc. Then, using the preset initial image regression analysis model, the image frames obtained after labeling (i.e., the training sample set) are learned to obtain the image regression analysis model, etc.
After the image regression analysis model is obtained, it can be used to perform regression analysis on each frame of the image sequence. For example, if the image regression analysis model is trained from images that vary with different light intensities, then for image frames in which different light intensity variations (i.e., luminance variations) are present on the surface of the test object, it can directly regress a value between 0 and 1 to express the strength of facial reflection in the frame, etc.; details are not repeated here.
Optionally, in addition to obtaining the regression result by regression-analyzing the numerical expression of the chromaticity/luminance of each frame in the image sequence, the regression result may also be obtained by directly regression-analyzing the inter-frame variation in the image sequence.
Wherein, the inter-frame variation in the image sequence may be obtained by calculating the differences between frames in the image sequence. The difference between frames may be a frame-to-frame difference or a frame difference: the frame-to-frame difference refers to the difference between two adjacent frames, while the frame difference is the difference between the frames corresponding to before and after a change of the projected light.
For example, to calculate a frame-to-frame difference, specifically, when it is determined that the degree of position change of the test object is less than a preset change value, the pixel coordinates of adjacent frames in the image sequence may be obtained respectively; then, the frame-to-frame difference is calculated based on the pixel coordinates.
In another example can specifically determine that the change in location degree of test object is less than default variation for calculating frame difference
When value, the pixel coordinate of frame corresponding to throw light variation front and back is obtained from the image sequence respectively, is based on pixel coordinate
It is poor to calculate frame.
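A minimal sketch of such a difference computation, assuming the two frames are already aligned (the function name and summary statistic are assumptions for illustration):

```python
# Hypothetical sketch: pixel-wise absolute difference between two aligned
# frames, summarized as a mean. The same computation serves for adjacent
# frames (frame-to-frame difference) or for the frames before/after a change
# of the projected light (frame difference).

def mean_abs_difference(frame_a, frame_b):
    diffs = [abs(a - b)
             for row_a, row_b in zip(frame_a, frame_b)
             for a, b in zip(row_a, row_b)]
    return sum(diffs) / len(diffs)

before = [[10, 20], [30, 40]]
after = [[50, 60], [70, 80]]   # uniformly brighter after the light switches
print(mean_abs_difference(before, after))
```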
Wherein, there may be many ways to calculate the frame-to-frame difference or frame difference based on the pixel coordinates. For example, the calculation may be as follows: transform the pixel coordinates of the adjacent frames to minimize the registration error of the pixel coordinates, select the pixels whose correlation meets a preset condition according to the transformation results, and calculate the frame-to-frame difference according to the selected pixels.
Alternatively, transform the pixel coordinates of the frames before and after the change of the projected light to minimize the registration error of the pixel coordinates, select the pixels whose correlation meets the preset condition according to the transformation results, and calculate the frame difference according to the selected pixels.
Wherein, the preset change value and the preset condition may be configured according to the needs of practical applications; details are not repeated here.
Optionally, in addition to the above way of calculating the frame difference, other ways may also be used. For example, the relative value of the chromaticity change or the relative value of the luminance change between the two frames to be analyzed may be described on a channel of a certain color space, or in any dimension capable of describing chromaticity or luminance change. That is, the step "performing regression analysis on the variation of frames in the image sequence to obtain a regression result" may include:
When it is determined that the degree of position change of the test object is less than the preset change value, obtaining respectively from the image sequence the chromaticity/luminance of the frames before and after the change of the projected light, calculating the relative value of the chromaticity change or the relative value of the luminance change between the frames before and after the change of the projected light according to the obtained chromaticity/luminance, and using the relative value of the chromaticity/luminance change as the frame difference between the frames before and after the change of the projected light, this frame difference being the regression result.
For example, the chromaticity/luminance may specifically be calculated by a preset regression function to obtain the relative value of the chromaticity/luminance change (i.e., the relative value of the chromaticity change or the relative value of the luminance change) between the frames before and after the change of the projected light, etc.
Wherein, the regression function may be configured according to the needs of practical applications; for example, it may specifically be a recurrent neural network, etc.
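The patent leaves the regression function open; as a stand-in for illustration only, the relative value of a luminance change might be as simple as:

```python
# Hypothetical sketch: relative luminance change between the frames before and
# after a light switch, i.e. (after - before) / before. A small epsilon guards
# against division by zero for fully dark frames. Names are illustrative.

def relative_luminance_change(before, after, eps=1e-9):
    return (after - before) / (before + eps)

# Mean luminance 50 before the switch, 100 after -> roughly a +100% change.
print(relative_luminance_change(50.0, 100.0))
```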
It should be noted that if it is determined that the degree of position change of the test object is greater than or equal to the preset change value, then other adjacent frames, or the frames before and after another change of the projected light, may be obtained from the image sequence for calculation, or the image sequence may be reacquired.
(2) Determine, according to the regression result, whether there is a reflected light signal produced by the projected light on the surface of the test object in the image sequence. For example, any one of the following ways may specifically be used:
First way:
Determine whether the regression result is greater than a preset threshold; if so, determine that there is a reflected light signal produced by the projected light on the surface of the test object in the image sequence; if not, determine that there is no reflected light signal produced by the projected light on the surface of the test object in the image sequence.
Wherein, the preset threshold may be determined according to the needs of practical applications; details are not repeated here.
The second way:
Perform classification analysis on the regression result by using a preset global feature algorithm or a preset identification model. If the analysis result indicates that the inter-frame variation of the surface of the test object is greater than a set value, determine that there is a reflected light signal produced by the projected light on the surface of the test object in the image sequence; if the analysis result indicates that the inter-frame variation of the surface of the test object is not greater than the set value, determine that there is no reflected light signal produced by the projected light on the surface of the test object in the image sequence.
Wherein, the set value may be determined according to the needs of practical applications, and there may also be many ways of "performing classification analysis on the frame difference by using a preset global feature algorithm or a preset identification model". For example, the analysis may be as follows:
Analyze the regression result to judge whether there is a reflected light signal produced by the projected light in the image sequence. If there is no reflected light signal produced by the projected light, generate an analysis result indicating that the inter-frame variation of the surface of the test object is not greater than the set value; if there is a reflected light signal produced by the projected light, judge, by using the preset global feature algorithm or the preset identification model, whether the reflector of the existing reflected light information is the test object: if it is the test object, generate an analysis result indicating that the inter-frame variation of the surface of the test object is greater than the set value; if it is not the test object, generate an analysis result indicating that the inter-frame variation of the surface of the test object is not greater than the set value.
Alternatively, the images in the image sequence may also be classified by the preset global feature algorithm or the preset identification model to select the frames in which the test object is present, obtaining candidate frames; the frame-to-frame differences of the candidate frames are then analyzed to judge whether there is a reflected light signal produced by the projected light on the test object. If there is no reflected light signal produced by the projected light, generate an analysis result indicating that the inter-frame variation of the surface of the test object is not greater than the set value; if there is a reflected light signal produced by the projected light, generate an analysis result indicating that the inter-frame variation of the surface of the test object is greater than the set value, and so on.
Wherein, a global feature algorithm refers to an algorithm based on global features, where the global features may include the gray-level mean and variance, the gray-level co-occurrence matrix, and the spectra obtained after transforms such as the fast Fourier transform (FFT, Fast Fourier Transformation) and the discrete cosine transform (DCT, Discrete Cosine Transform). The preset identification model may include a classifier or another identification model (such as a face recognition model), and the classifier may include a support vector machine (SVM, Support Vector Machine), a neural network, a decision tree, etc.
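The first of those global features — the gray-level mean and variance — can be sketched as follows (illustrative only; co-occurrence matrices and FFT/DCT spectra would follow the same pattern at greater cost):

```python
# Hypothetical sketch: the gray-level mean and variance of a frame, the
# simplest of the global features listed above.

def gray_mean_variance(frame):
    values = [v for row in frame for v in row]
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, variance

print(gray_mean_variance([[0, 2], [2, 0]]))
```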
The third way:
Optionally, if the color mask is generated according to a predetermined coding sequence, the optical signal may also be decoded to identify whether there is a reflected light signal produced by the projected light on the surface of the test object in the image sequence. For example, the identification may specifically be as follows:
According to the regression result, decode the image sequence according to a preset decoding algorithm to obtain a decoded sequence, and determine whether the decoded sequence matches the coding sequence. If they match, determine that there is a reflected light signal produced by the projected light on the surface of the test object in the image sequence; if the decoded sequence does not match the coding sequence, determine that there is no reflected light signal produced by the projected light on the surface of the test object in the image sequence.
For example, if the regression result is the relative value of the chromaticity/luminance change between the frames before and after the change of the projected light (see the description of calculating the regression result in (1) above), then the preset decoding algorithm may be used to calculate, in turn, the relative values of the chromaticity/luminance change in the image sequence (i.e., the relative values of the chromaticity/luminance change between the frames before and after the change of the projected light), obtaining the absolute chromaticity/luminance values of the frames before and after each change of the projected light. The obtained absolute chromaticity/luminance values are then used as the decoded sequence, or are transformed according to a preset strategy to obtain the decoded sequence.
Wherein, the preset decoding algorithm matches the coding algorithm and may specifically be determined according to the coding algorithm; the preset strategy may also be configured according to the needs of practical applications, which is not repeated here.
Optionally, there may also be many ways of determining whether the decoded sequence matches the coding sequence. For example, it may be determined whether the decoded sequence and the coding sequence are consistent: if they are consistent, it is determined that the decoded sequence matches the coding sequence; if they are inconsistent, it is determined that the decoded sequence does not match the coding sequence. Alternatively, it may be determined whether the relationship between the decoded sequence and the coding sequence meets a preset correspondence: if so, it is determined that the decoded sequence matches the coding sequence; otherwise, if not, it is determined that the decoded sequence does not match the coding sequence, etc. Wherein, the preset correspondence may be configured according to the needs of practical applications.
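Under the simple assumption that the regression result is a list of per-switch relative luminance changes and the coding sequence is a list of luminance levels, the decode-and-match step might be sketched as (all names and tolerances are assumptions, not the patent's algorithm):

```python
# Hypothetical sketch: integrate the per-switch relative luminance changes into
# absolute levels (the "decoded sequence"), then compare against the coding
# sequence within a tolerance.

def decode(start_level, relative_changes):
    levels, current = [], start_level
    for change in relative_changes:
        current = current * (1 + change)
        levels.append(current)
    return levels

def matches(decoded, coded, tol=0.05):
    return (len(decoded) == len(coded) and
            all(abs(d - c) <= tol * max(c, 1e-9) for d, c in zip(decoded, coded)))

decoded = decode(50.0, [1.0, -0.5])   # 50 -> 100 -> 50
print(decoded)
print(matches(decoded, [100.0, 50.0]))
```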
It should be noted that if it is identified that there is no reflected light signal produced by the projected light on the surface of the test object in the image sequence, the process may end, or it may be determined that the test object is a non-living body, or the process may return to step 103, i.e., image acquisition is performed on the test object again; alternatively, the started light source may also be checked: if it is determined that there is no problem with the light projected by the light source onto the test object, then the process ends, the test object is determined to be a non-living body, or image acquisition is performed on the test object again; and if it is determined that there is a problem with the light projected by the light source onto the test object, the process returns to step 102, i.e., the light source is restarted and light is projected onto the test object. The specific strategy to be executed may be configured according to the needs of practical applications; details are not repeated here.
105. Use a preset identification model to identify the type of the object to which the image features (i.e., the image features formed by the reflected light signal on the surface of the test object) belong; if the identification result indicates that the type of the object to which the image features belong is a living body, determine that the test object is a living body.
Conversely, if the identification result indicates that the type of the object to which the image features belong is a non-living body, for example a "mobile phone screen", it may be determined that the test object is a non-living body.
Since the reflected light signal forms image features on the surface of the test object, when it is identified that there is a reflected light signal produced by the projected light on the surface of the test object (i.e., step 104), the "image features formed by the reflected light signal on the surface of the test object" can be obtained. Then, the image features are identified by using the preset identification model, and whether the test object is a living body is judged based on the identification result. For example, if the identification result indicates that the type of the object to which the image features belong is a living body, for instance indicating that the type is a "human face", it may then be determined that the test object is a living body; otherwise, if the identification result indicates that the type of the object to which the image features belong is not a living body, for instance indicating that the type is a "mobile phone screen", it may be determined that the test object is a non-living body, and so on.
Wherein, the preset identification model may include a classifier or another identification model (such as a face recognition model), and the classifier may include an SVM, a neural network, a decision tree, etc.
The preset identification model may be formed by training on multiple feature samples, where a feature sample is the image features formed by the reflected light signal on the surface of an object of a labeled type. For example, the image features formed after the projected light irradiates a person's face may be taken as a feature sample and labeled "human face", and the image features formed after the projected light irradiates a mobile phone screen may be taken as a feature sample and labeled "mobile phone screen", and so on. After a large number of feature samples are collected, the identification model can be established from these feature samples (i.e., the image features of labeled types).
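As an illustration of training on labeled feature samples, here is a nearest-centroid stand-in for the SVM/neural-network classifiers the text names; every name and value is an assumption made for this sketch:

```python
# Hypothetical sketch: a nearest-centroid classifier over labeled feature
# samples ("human face" vs "mobile phone screen"). A real implementation
# would use an SVM, neural network, or decision tree as the text suggests.

def train(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for vector, label in samples:
        if label not in sums:
            sums[label] = [0.0] * len(vector)
            counts[label] = 0
        sums[label] = [s + v for s, v in zip(sums[label], vector)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def classify(centroids, vector):
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], vector))
    return min(centroids, key=dist2)

model = train([([1.0, 0.0], "human face"), ([0.9, 0.1], "human face"),
               ([0.0, 1.0], "mobile phone screen"), ([0.1, 0.9], "mobile phone screen")])
print(classify(model, [0.8, 0.2]))
```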
It should be noted that the identification model may be established by another device and stored in a preset storage space; when the liveness detection apparatus needs to identify the type of the object to which the image features belong, the model is obtained directly from the storage space. Alternatively, the identification model may also be established by the liveness detection apparatus itself; that is, before the step "using the preset identification model to identify the type of the object to which the image features belong", the liveness detection method may further include: obtaining multiple feature samples, and training a preset initial identification model according to the feature samples to obtain the preset identification model.
In addition, it should be noted that if, in step 104, it is mainly the preset identification model that identifies whether there is a reflected light signal produced by the projected light on the surface of the test object in the image sequence, then whether the test object is a living body may also be directly judged according to the identification result. That is, while the identification model judges whether there is a reflected light signal produced by the projected light on the surface of the test object in the image sequence, it may also identify the type of the object to which the image features belong. In other words, the preset identification model may be used both to identify whether there is a reflected light signal produced by the projected light on the surface of the test object in the image sequence, and, when it is identified that there is such a reflected light signal, to identify the type of the object to which the corresponding image features (i.e., the image features formed by the reflected light on the surface of the test object) belong; details are not repeated here.
As can be seen from the above, in this embodiment, when liveness detection is required, a light source can be started to project light onto the test object, and image acquisition is performed on the test object; when it is identified that there is a reflected light signal produced by the projected light on the surface of the test object in the acquired image sequence, a preset identification model is used to identify the type of the object to which the image features formed by the reflected light signal on the surface of the test object belong, and if the identification result indicates that the type of the object to which the image features belong is a living body, the test object is determined to be a living body. Since this scheme does not require the user to perform cumbersome interactive operations, it can reduce the requirements on hardware configuration. Moreover, since the basis for the liveness judgment in this scheme is the reflected light signal on the surface of the test object, and the reflected light signals of a real living body and of a forged living body (a carrier of a synthesized picture or video, such as a photograph, a mobile phone, or a tablet computer) are different, this scheme can also effectively resist synthesized-face attacks and improve the accuracy of discrimination. In summary, this scheme can improve the liveness detection effect, thereby improving the accuracy and security of identity verification.
The method described in the above embodiment will be described in further detail below by way of example.
In the present embodiment, the description takes as an example the case where the liveness detection apparatus is specifically integrated in a terminal, the light source is specifically a color mask, the test object is specifically a person's face, and the regression result is obtained specifically by regression-analyzing the inter-frame variation in the image sequence.
As shown in Fig. 2, a specific process of a liveness detection method may be as follows:
201. The terminal receives a liveness detection request.
For example, the terminal may specifically receive a liveness detection request triggered by a user, or may receive a liveness detection request sent by another device, etc.
For example, taking user triggering as an example, when the user starts the liveness detection function, for instance by clicking the start button for liveness detection, a liveness detection request can be triggered and generated, so that the terminal receives the liveness detection request.
202. The terminal generates a color mask, and the light projected by the color mask varies according to a preset rule.
Optionally, to facilitate better subsequent identification of the variation of the light, the variation intensity of the light may also be maximized.
Wherein, the preset rule may be determined according to the needs of practical applications, and there may also be many ways to maximize the variation intensity of the light. For example, for light of the same color, the variation intensity of the light may be maximized by adjusting the screen brightness before and after the change, for instance by setting the screen brightness before and after the change to the maximum and the minimum; and for light of different colors, the variation intensity of the light may be maximized by adjusting the color difference before and after the change, for instance by changing the screen from the darkest black to the brightest white, etc.
Optionally, so that the reflected light signal can subsequently be better detected from the image frame-to-frame differences, in addition to maximizing the variation intensity of the light, the colors may also be selected, as far as possible, in the color space where the analyzed signal is most robust. For example, changing the screen from the brightest red to the brightest green maximizes the chromaticity variation of the reflected light under a preset color space, and so on.
203. The terminal starts a detection interface according to the liveness detection request, and flashes the color mask in the non-detection region of the detection interface, so that the color mask serves as a light source to project light onto the test object, such as a person's face.
For example, the terminal may specifically call the corresponding liveness detection process according to the liveness detection request, and start the corresponding detection interface according to the liveness detection process, etc.
Wherein, the detection interface may include a detection region and a non-detection region. The detection region is mainly used to display the acquired image sequence, while the non-detection region can flash the color mask, with the color mask serving as a light source to project light onto the test object; reference may specifically be made to Fig. 1b. In this way, reflected light is produced on the test object because of the light, and moreover, the reflected light produced also differs according to parameters such as the color and intensity of the light.
It should be noted that, to ensure that the light emitted by the color mask can be projected onto the test object, the test object needs to be kept within a certain distance of the screen of the mobile device. For example, when the user needs to detect whether a certain face is a living body, the mobile device may be brought to an appropriate distance in front of the face, so that the face can be detected, and so on.
204. The terminal performs image acquisition on the test object to obtain an image sequence.
For example, the camera of the terminal may specifically be called to shoot the test object in real time to obtain an image sequence, and the captured image sequence is displayed in the detection region.
Optionally, in order to reduce the influence of numerical fluctuations caused by noise on the signal, denoising may also be performed on the image sequence after it is obtained. For example, taking a Gaussian noise model as an example, multi-frame averaging in the temporal domain and/or multi-scale averaging within the same frame may specifically be used to reduce noise as much as possible; details are not repeated here.
Optionally, to improve the efficiency and accuracy of subsequent identification, other preprocessing, such as scaling, cropping, sharpening, and background blurring, may also be performed on the image sequence.
205. The terminal calculates the frame-to-frame differences in the image sequence.
Wherein, to detect the reflected light signal with frame-to-frame differences, the two-dimensional pixels between images in the image sequence must correspond one-to-one as far as possible. Therefore, when it is detected that the user's face has no drastic position change, an inter-frame alignment method can be used to more finely correct the pixel pairs used for the frame-to-frame differences. Specifically, when it is determined that the degree of position change of the test object is less than a preset change value, the pixel coordinates of adjacent frames in the image sequence are obtained respectively; then, the pixel coordinates are transformed to minimize the registration error of the pixel coordinates, and the frame-to-frame difference is calculated based on the transformation results. For example, the calculation may be as follows:
Let the pixel coordinates of the same point on the object in two adjacent frames I and I′ be p = [x, y, w]^T and p0 = [x0, y0, w0]^T respectively, where w is the homogeneous coordinate term; solve for the 3×3 transformation matrix M such that:

[x′, y′, w′]^T = M · p0
Here, the transformation type used for the transformation matrix M is the homography transformation, which has the highest degree of freedom, so as to minimize the registration error.
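Applying the transformation above to a homogeneous pixel coordinate can be sketched as follows (the matrix values are purely illustrative; solving for M itself is the estimation problem discussed next):

```python
# Hypothetical sketch: apply a 3x3 homography M to a homogeneous point
# p0 = [x0, y0, w0]^T, giving [x', y', w'] = M * p0, then dehomogenize by w'.

def apply_homography(M, p0):
    x, y, w = (sum(M[i][j] * p0[j] for j in range(3)) for i in range(3))
    return [x / w, y / w]

# A pure translation by (+2, +3) as the illustrative homography:
M = [[1, 0, 2],
     [0, 1, 3],
     [0, 0, 1]]
print(apply_homography(M, [10, 20, 1]))
```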
Among the methods for optimizing M, the more common ones are mean square error (MSE, Mean Square Error) estimation and the random sample consensus algorithm (RANSAC, Random Sample Consensus). Optionally, to obtain more robust results, homography flow may also be used.
Even if an optimal transformation matrix M is solved, there may still be pixels between frames that cannot be matched. Therefore, the pixels with stronger correlation can be selected, the pixels with weaker correlation ignored, and the frame differences computed based only on the selected pixels. This both reduces the amount of computation and enhances the result. That is, optionally, the step of "calculating the frame differences based on the transformation results" may include:
filtering out the pixels whose correlation meets a preset condition according to the transformation results, and calculating the frame differences from the filtered pixels.
The preset change value and the preset condition can be configured according to the needs of the actual application; details are not repeated here.
It should be noted that if the degree of position change of the test object is determined to be greater than or equal to the preset change value, other adjacent frames can be obtained from the image sequence for the calculation, or the image sequence can be reacquired and the calculation performed again.
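The correlation-filtered frame difference just described can be sketched as follows. Here the per-pixel correlation map and its threshold are assumptions (in practice the correlation could come from, e.g., local normalized cross-correlation between the aligned frames); the patent only requires that weakly correlated pixels be excluded.

```python
import numpy as np

def masked_frame_diff(frame_a, frame_b_warped, corr, corr_threshold=0.8):
    """Frame difference restricted to pixels whose correlation with the
    aligned neighbouring frame meets the preset condition."""
    mask = corr >= corr_threshold  # keep only strongly correlated pixels
    diff = np.abs(frame_a.astype(float) - frame_b_warped.astype(float))
    return diff[mask].mean() if mask.any() else 0.0

# Toy 4x4 grayscale frames: identical except one "reflective" region.
a = np.zeros((4, 4))
b = a.copy()
b[:2, :2] += 10.0        # reflected-light change in the top-left corner
corr = np.ones((4, 4))
corr[3, 3] = 0.1         # one unreliable pixel, excluded from the mean
print(masked_frame_diff(a, b, corr))  # 40/15 ≈ 2.667
```

The returned statistic is then compared against the preset threshold of step 206 to decide whether a reflected light signal is present.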
206. The terminal determines, according to the frame differences, whether the reflected light signal produced by the projected light is present on the face of the person in the image sequence; if it is, step 207 is executed, and if it is not, an operation can be performed according to a preset strategy.
For example, the terminal can determine whether the frame difference is greater than a preset threshold. If it is, the terminal determines that the reflected light signal produced by the projected light is present on the face of the person in the image sequence, and step 207 can be executed; if it is not, the terminal determines that the reflected light signal produced by the projected light is not present on the face of the person in the image sequence, and can operate according to the preset strategy.
The preset threshold and the preset strategy can both be determined according to the needs of the actual application. For example, the strategy may be set to "end the process", to "generate prompt information indicating that no reflected light signal is present", or to "determine that the test object is a non-living body"; alternatively, the flow may return to step 204, i.e., image acquisition is performed on the test object again, or the started light source may be checked to determine whether it actually projects onto the surface of the test object, such as the person's face. If it is determined that there is no problem with the light projected by the light source onto the test object, the process ends, the test object is determined to be a non-living body, or image acquisition is performed on the test object again; if it is determined that there is a problem with the light projected onto the face of the test object, for example the light source projects onto an object beside the face rather than onto the face itself, or the light source projects no light at all, the flow returns to step 203, i.e., the light source is restarted to project light onto the test object, and so on; details are not repeated here.
Optionally, to improve the accuracy of the detection while reducing its computational cost, a cascaded discrimination model can also be used. For example, a global-feature algorithm or a preset identification model (such as a classifier) can be used to pre-screen the frame differences and roughly determine whether a reflected light signal was produced, so that the subsequent processing can skip most ordinary frames that contain no reflected light signal and handle only the frames in which a reflected light signal is present. That is, the step "the terminal determines, according to the frame differences, whether the reflected light signal produced by the projected light is present on the face of the person in the image sequence" may include:
performing classification analysis on the frame differences by means of a preset global-feature algorithm or a preset identification model; if the analysis result indicates that the inter-frame change of the person's face is greater than a set value, determining that the reflected light signal produced by the projected light is present on the face of the person in the image sequence; and if the analysis result indicates that the inter-frame change of the person's face is not greater than the set value, determining that the reflected light signal produced by the projected light is not present on the face of the person in the image sequence.
The set value can be determined according to the needs of the actual application, and the classification analysis of the frame differences "by means of a preset global-feature algorithm or a preset identification model" can be carried out in various ways, for example as follows:
The frame differences are analyzed to judge whether the reflected light signal produced by the projected light is present in the image sequence. If it is not present, an analysis result indicating that the inter-frame change of the person's face is not greater than the set value is generated. If it is present, the preset global-feature algorithm or preset identification model is used to judge whether the reflector producing the reflected light information is a person's face; if it is, an analysis result indicating that the inter-frame change of the person's face is greater than the set value is generated, and if it is not, an analysis result indicating that the inter-frame change of the person's face is not greater than the set value is generated.
Alternatively, the images in the image sequence can first be classified by the preset global-feature algorithm or preset identification model to filter out the frames that contain a person's face, yielding candidate frames; the frame differences of the candidate frames are then analyzed to judge whether the reflected light signal produced by the projected light is present on the person's face. If it is not present, an analysis result indicating that the inter-frame change of the person's face is not greater than the set value is generated; if it is present, an analysis result indicating that the inter-frame change of the person's face is greater than the set value is generated, and so on.
Here, the global-feature algorithm refers to an algorithm based on global features, which may include the gray-level mean and variance, the gray-level co-occurrence matrix, and transformed spectra such as the FFT and the DCT.
The preset identification model may specifically be a classifier or another identification model (such as a face recognition model). Taking a classifier as an example, the classifier can be configured according to the needs of the actual application: if it is only used to discriminate whether a reflected light signal is present, a relatively simple classifier can be used, whereas if it is used to discriminate whether the reflector is a person's face, a more complex classifier, such as a neural-network classifier, can be used; details are not repeated here.
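The global features named above can be sketched as a small feature vector extracted from a frame-difference image; the particular features chosen here (mean, variance, low-frequency FFT energy) are an illustrative subset of those the text lists, and the function name is an assumption.

```python
import numpy as np

def global_features(diff_frame):
    """Global features of a frame-difference image that could feed a
    simple pre-screening classifier: gray-level mean, variance, and the
    low-frequency energy of the FFT magnitude spectrum."""
    g = diff_frame.astype(float)
    spectrum = np.abs(np.fft.fft2(g))
    low_freq = spectrum[:2, :2].sum()  # DC term plus lowest frequencies
    return np.array([g.mean(), g.var(), low_freq])

# A frame difference with a uniform reflected-light change of +5.
diff = np.full((8, 8), 5.0)
feats = global_features(diff)
print(feats[0])  # 5.0 (mean)
print(feats[1])  # 0.0 (variance: the change is uniform)
```

A cheap threshold or linear classifier on such a vector can skip frames with no appreciable change, reserving the heavier model for candidate frames, which is the cascade idea described above.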
It should be noted that, in addition to analyzing whether the reflected light signal produced by the projected light is present on the face of the person in the image sequence by calculating frame-to-frame differences, the same analysis can also be carried out by calculating the difference between the frames before and after a change in the projected light (which are not necessarily two adjacent frames); for details, refer to the preceding embodiments, which are not repeated here.
207. The terminal uses a preset identification model to identify the type of object to which the image feature belongs (i.e., the image feature formed by the reflected light signal on the surface of the test object, such as a person's face); if the identification result indicates that the type of object to which the image feature belongs is a living body, the terminal determines that the test object is a living body.
Conversely, if the identification result indicates that the type of object to which the image feature belongs is a non-living body, for example a "mobile phone screen", the terminal can determine that the test object is a non-living body.
The preset identification model may include a classifier or another identification model, and the classifier may be an SVM, a neural network, a decision tree, or the like.
The preset identification model can be formed by training on a plurality of feature samples, each feature sample being an image feature formed by the reflected light signal on the surface of an object of a labeled type.
Optionally, the identification model may be established by another device and stored in a preset storage space, from which the terminal retrieves it directly when it needs to identify the type of object to which an image feature belongs; alternatively, the identification model may be established by the terminal itself, for example by acquiring a plurality of feature samples and training a preset initial identification model on them to obtain the preset identification model. For details, refer to the preceding embodiments, which are not repeated here.
Optionally, to further improve the accuracy of the detection, some interactive operations can also be added as appropriate, for example asking the user to perform an action such as blinking or opening the mouth. That is, after the step "determining that the reflected light signal produced by the projected light is present on the face of the person in the image sequence", the liveness detection method may further include:
generating prompt information instructing the test object (such as a person's face) to perform a preset action, displaying the prompt information, and monitoring the test object; if the test object is observed to perform the preset action, the test object is determined to be a living body, and otherwise, if the test object does not perform the preset action, the test object is determined to be a non-living body.
The preset action can be configured according to the needs of the actual application. It should be noted that, to avoid cumbersome interaction, the number and complexity of the preset actions can be restricted, for example to a single simple interaction such as blinking or opening the mouth; details are not repeated here.
It can be seen from the above that, in this embodiment, a non-detection region can be set in the detection interface as a mask whose color can flash, and this mask can serve as the light source that projects light onto the test object, such as a person's face. When liveness detection is required, image acquisition can be performed on the person's face, and it is then determined whether the reflected light signal produced by the projected light is present on the face of the person in the obtained image sequence and whether the type of object to which the image feature formed by that reflected light signal belongs is a living body; if the signal is present and the type is a living body, the person's face is determined to be a living body. Since this scheme requires no cumbersome interactive operations or computations with the user, the demands on the hardware configuration can be reduced. Moreover, because the basis for the liveness determination is the reflected light signal on the surface of the test object, and the reflected light signal of a real living body differs from that of a forgery (a carrier of a synthesized picture or video, such as a photograph, a mobile phone, or a tablet computer), this scheme can also effectively resist synthesized-face attacks and improve the accuracy of the discrimination. In summary, this scheme can improve the liveness detection effect under limited terminal hardware configurations, thereby improving the accuracy and security of identity authentication.
As in the previous embodiment, this embodiment is likewise described with the liveness detection apparatus integrated in the terminal, the light source being a color mask, and the surface of the test object being a person's face. Unlike the previous embodiment, in this embodiment the color mask uses light combinations produced by preset coding, as described in detail below.
As shown in Figure 3a, a liveness detection method may proceed as follows:
301. The terminal receives a liveness detection request.
For example, the terminal may receive a liveness detection request triggered by the user, or may receive a liveness detection request sent by another device, and so on.
Taking user triggering as an example, when the user starts the liveness detection function, for example by clicking the start key of the liveness detection, a liveness detection request can be triggered and generated, so that the terminal receives it.
302. The terminal obtains a preset coded sequence, which includes a plurality of codes.
The coded sequence can be randomly generated, or it can be configured according to the needs of the actual application.
For example, the coded sequence can be a number sequence, such as: 0, -1, 1, 0, ..., etc.
303. According to a preset coding algorithm, the terminal successively determines the color corresponding to each code in the order of the codes in the coded sequence, obtaining a color sequence.
The preset coding algorithm reflects the correspondence between each code in the coded sequence and the various colors, and this correspondence can be determined according to the needs of the actual application; for example, red may represent the number -1, green may represent 0, and blue may represent 1, etc.
For instance, taking red representing -1, green representing 0, and blue representing 1 as an example, if the coded sequence obtained in step 302 is "0, -1, 1, 0", the terminal can, according to the correspondence between each code and the colors, successively determine the color corresponding to each code in the order of the coded sequence, obtaining the color sequence "green, red, blue, green".
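The encoding step above amounts to a lookup through the code-to-color correspondence. The table below uses the example mapping from the text (-1 → red, 0 → green, 1 → blue); any other configured correspondence would work the same way.

```python
# Code-to-color table from the worked example; the correspondence is
# configurable per application.
CODE_TO_COLOR = {-1: "red", 0: "green", 1: "blue"}

def encode(coded_sequence):
    """Turn a coded sequence into the color sequence shown by the mask."""
    return [CODE_TO_COLOR[c] for c in coded_sequence]

print(encode([0, -1, 1, 0]))  # ['green', 'red', 'blue', 'green']
```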
304. The terminal generates a color mask based on the color sequence, so that the light projected by the color mask changes according to the colors indicated by the color sequence.
For example, if the color sequence "green, red, blue, green" is obtained in step 303, the terminal can generate a color mask whose projected light changes in the order "green, red, blue, green", as shown in Figs. 3b and 3c.
It should be noted that, during projection, the display duration of each color and the waiting interval when switching between colors can be configured according to the needs of the actual application. For example, as shown in Fig. 3b, the display duration of each color can be 1 second and the waiting interval 0 seconds; in that case, along the direction of the time axis in Fig. 3b, the projected light appears as "green -> red -> blue -> green", and the moment at which one color switches to another is called a color change point.
Optionally, the waiting interval need not be 0. For example, as shown in Fig. 3c, the display duration of each color can be 1 second and the waiting interval 0.5 seconds, during which no light is projected; in that case, along the direction of the time axis in Fig. 3c, the projected light appears as "green -> no light -> red -> no light -> blue -> no light -> green".
Optionally, to further improve security, the variation rule of the light can be made more complex, for example by setting inconsistent values for the display durations of the colors and for the waiting intervals between them: the display duration of green could be 3 seconds, that of red 2 seconds, and that of blue 4 seconds, with a waiting interval of 1 second when switching between green and red and of 1.5 seconds when switching between red and blue, and so on.
305. The terminal performs image acquisition on the test object to obtain an image sequence.
For example, the terminal can call its camera to shoot the test object in real time, obtain the image sequence, and display the captured image sequence, for example in the detection region.
For instance, suppose the light projected by the color mask follows the color sequence of Fig. 3b (green -> red -> blue -> green) and the test object is the user's face. Then, among the image frames in this four-second video, the face in image frame 1 (the first second) carries a green screen-reflection light, the face in image frame 2 (the second second) a red one, the face in image frame 3 (the third second) a blue one, and the face in image frame 4 (the fourth second) a green one. All of these image frames are raw data carrying the reflected light signal and constitute the image sequence of this embodiment of the present invention.
Optionally, to reduce the influence on the signal of numerical fluctuations caused by noise, denoising can also be applied to the image sequence after it is obtained. For example, taking a Gaussian noise model as an example, multi-frame averaging in the temporal domain and/or multi-scale averaging within the same frame can be used to reduce noise as much as possible; details are not repeated here.
306. When the terminal determines that the degree of position change of the test object is less than a preset change value, it obtains from the image sequence the chrominance/luminance of the frames before and after each change in the projected light.
For example, still taking the color change "green -> red -> blue -> green" as an example, when the degree of position change of the test object is determined to be less than the preset change value, the terminal can obtain from the image sequence the chrominance/luminance of the frames corresponding to the change of the projected light from green to red, from red to blue, and from blue to green.
For instance, suppose the image sequence consists of image frames 1, 2, 3 and 4, where the face in frame 1 carries a green screen-reflection light, the face in frame 2 a red one, the face in frame 3 a blue one, and the face in frame 4 a green one; then the chrominance/luminance of image frames 1, 2, 3 and 4 can be obtained.
For another example, suppose the image sequence consists of image frames 1 through 12, where the faces in frames 1, 2 and 3 carry a green screen-reflection light, those in frames 4, 5 and 6 a red one, those in frames 7, 8 and 9 a blue one, and those in frames 10, 11 and 12 a green one; then the chrominance/luminance of image frames 3, 4, 6, 7, 9 and 10 can be obtained, where frames 3 and 4 are the two frames before and after the color changes from green to red, frames 6 and 7 are the two frames before and after it changes from red to blue, and frames 9 and 10 are the two frames before and after it changes from blue to green.
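Selecting the frame pairs around the color change points, as in the 12-frame example above, can be sketched as follows (the helper name is illustrative; per-frame color labels are assumed known from the mask's schedule):

```python
def change_point_pairs(colors):
    """Return the (before, after) frame indices (1-based) around each
    color change point in a per-frame color label sequence."""
    return [(i, i + 1)                      # frame i: last of the old color,
            for i in range(1, len(colors))  # frame i+1: first of the new one
            if colors[i] != colors[i - 1]]

# The 12-frame example: three frames per color, green-red-blue-green.
labels = ["green"] * 3 + ["red"] * 3 + ["blue"] * 3 + ["green"] * 3
print(change_point_pairs(labels))  # [(3, 4), (6, 7), (9, 10)]
```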
It should be noted that if the degree of position change of the test object is determined to be greater than or equal to the preset change value, other adjacent frames, or other frames before and after the changes in the projected light, can be obtained from the image sequence for the calculation, or the image sequence can be reacquired.
307. According to the obtained chrominance/luminance, the terminal calculates the chrominance change relative value or luminance change relative value between the frames before and after each change in the projected light.
For example, the terminal can calculate from the chrominance/luminance, by means of a preset regression function, the chrominance/luminance change relative value (i.e., the chrominance change relative value or luminance change relative value) between the frames before and after each change in the projected light, and so on.
The preset regression function can be configured according to the needs of the actual application; for example, it can be a recurrent neural network, etc.
For example, taking frames 3 and 4 as the two frames before and after the color changes from green to red, frames 6 and 7 as the two frames before and after it changes from red to blue, and frames 9 and 10 as the two frames before and after it changes from blue to green, and taking the calculation of chrominance change relative values as an example, the following chrominance change relative values can be calculated:
the difference between the chrominance of frame 3 and that of frame 4 is calculated by the preset regression function, such as a recurrent neural network, giving the chrominance change relative value of frames 3 and 4;
the difference between the chrominance of frame 6 and that of frame 7 is calculated by the preset regression function, such as a recurrent neural network, giving the chrominance change relative value of frames 6 and 7;
the difference between the chrominance of frame 9 and that of frame 10 is calculated by the preset regression function, such as a recurrent neural network, giving the chrominance change relative value of frames 9 and 10.
It should be noted that the luminance change relative values are calculated similarly; details are not repeated here.
These chrominance or luminance change relative values amount to a measure ΔI of the signal strength. In the example above, the change relative value (chrominance/luminance change relative value) of frames 3 and 4 is ΔI34, that of frames 6 and 7 is ΔI67, and that of frames 9 and 10 is ΔI910.
308. According to the chrominance/luminance change relative values (i.e., the chrominance/luminance change relative values between the frames before and after each change in the projected light), the terminal decodes the image sequence according to a preset decoding algorithm to obtain a decoded sequence.
For example, using the preset decoding algorithm, the terminal can successively process the chrominance/luminance change relative values in the image sequence (i.e., the chrominance/luminance change relative values between the frames before and after each change in the projected light) to obtain the chrominance/luminance absolute values of the frames before and after each change, and take the obtained chrominance/luminance absolute values as the decoded sequence, or convert the obtained absolute values according to a preset strategy to obtain the decoded sequence.
The preset decoding algorithm matches the coding algorithm and can be determined by it; the preset strategy can likewise be configured according to the needs of the actual application, and details are not repeated here.
For example, if it is determined in step 307 that the change relative value (chrominance/luminance change relative value) of frames 3 and 4 is ΔI34, that of frames 6 and 7 is ΔI67, and that of frames 9 and 10 is ΔI910, the absolute strength values of the light signal (such as chrominance absolute values or luminance absolute values) of all the frames can be obtained from these change relative values, as follows:
Assume that the origin and the minimum unit length of the space are given, for example let the absolute strength value of frame 1 be I1 = 0, and let the change relative values be ΔI34 = -1, ΔI67 = 2 and ΔI910 = -1. Since frames 1, 2 and 3 are all green, the absolute strength values of their light signals are identical, i.e., I3 = I2 = I1 = 0. Since frames 4, 5 and 6 are all red, their absolute strength values are likewise identical; similarly, since frames 7, 8 and 9 are blue and frames 10, 11 and 12 are green, the absolute strength values of frames 7, 8 and 9 are identical, as are those of frames 10, 11 and 12. Accordingly, the absolute strength value of each frame can be calculated by the following formulas:
I4 = I5 = I6 = I3 + ΔI34 = 0 - 1 = -1;
I7 = I8 = I9 = I6 + ΔI67 = -1 + 2 = 1;
I10 = I11 = I12 = I9 + ΔI910 = 1 - 1 = 0.
At this point the number sequence has been decoded, giving the decoded sequence: 0, -1, 1, 0 (representing green, red, blue and green respectively).
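The arithmetic above is a running sum: each segment's absolute level is the previous segment's level plus the measured relative change. A minimal sketch, using the worked example's values:

```python
import itertools

def decode(deltas, i1=0):
    """Recover the absolute signal level of each color segment by
    accumulating the relative changes measured at the change points
    (i1 is the assumed origin, I1 = 0 in the worked example)."""
    return list(itertools.accumulate(deltas, initial=i1))

decoded = decode([-1, 2, -1])  # dI34, dI67, dI910 from step 307
print(decoded)                 # [0, -1, 1, 0] -> green, red, blue, green
```

The resulting sequence is then compared against the original coded sequence in step 309.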
309. The terminal determines whether the decoded sequence matches the coded sequence. If it does, the terminal determines that the reflected light signal produced by the projected light is present on the surface of the test object in the image sequence, and step 310 can be executed; otherwise, if the decoded sequence and the coded sequence do not match, the terminal determines that the reflected light signal produced by the projected light is not present on the surface of the test object in the image sequence, and can at this point operate according to a preset strategy.
For example, the terminal can determine whether the decoded sequence and the coded sequence are consistent. If they are, the terminal determines that the reflected light signal produced by the projected light is present on the surface of the test object in the image sequence, and step 310 can be executed; otherwise, if they are inconsistent, the terminal determines that the signal is not present and operates according to the preset strategy.
For instance, if the decoded sequence "0, -1, 1, 0" obtained in step 308 is consistent with the coded sequence "0, -1, 1, 0", it can be determined that the reflected light signal produced by the projected light is present on the surface of the test object in the image sequence, and so on.
The preset strategy can be determined according to the needs of the actual application; for details, refer to step 206 of the previous embodiment, which is not repeated here.
310. The terminal uses a preset identification model to identify the type of object to which the image feature belongs (i.e., the image feature formed by the reflected light signal on the surface of the test object, such as a person's face); if the identification result indicates that the type of object to which the image feature belongs is a living body, the terminal determines that the test object is a living body.
Conversely, if the identification result indicates that the type of object to which the image feature belongs is a non-living body, for example a "mobile phone screen", the terminal can determine that the test object is a non-living body.
The preset identification model may include a classifier or another identification model, and the classifier may be an SVM, a neural network, a decision tree, or the like. The preset identification model can be formed by training on a plurality of feature samples, each feature sample being an image feature formed by the reflected light signal on the surface of an object of a labeled type.
Optionally, the identification model may be established by another device and supplied to the terminal for use, or it may be established by the terminal itself; for details, refer to the preceding embodiments, which are not repeated here.
Optionally, to further improve the accuracy of the detection, some interactive operations can also be added as appropriate, for example asking the user to perform an action such as blinking or opening the mouth. That is, after the step "determining that the decoded sequence matches the coded sequence", the liveness detection method may further include:
the terminal generating prompt information instructing the test object (such as a person's face) to perform a preset action, displaying the prompt information, and monitoring the test object; if the test object is observed to perform the preset action, the test object is determined to be a living body, and otherwise, if the test object does not perform the preset action, the test object is determined to be a non-living body.
The preset action can be configured according to the needs of the actual application. It should be noted that, to avoid cumbersome interaction, the number and complexity of the preset actions can be restricted, for example to a single simple interaction such as blinking or opening the mouth; details are not repeated here.
Optionally, since each collected image frame carrying a reflected light signal records a corresponding timestamp, after determining that the decoded sequence matches the coded sequence, it may further be determined whether these timestamps correspond one-to-one with the times at which the color mask switches the light. If they correspond, it is determined that the reflected light signal produced by the projected light is present on the surface of the test object in the image sequence; otherwise, it is determined that the reflected light signal does not match the preset light signal sample. In other words, if an attacker wishes to mount an attack with a rendered synthetic face, the attack must not only match the order of the coded color sequence, but also must not drift on absolute time points (because, when synthesizing in real time, the rendering operation itself takes at least milliseconds). The attack difficulty is thereby greatly increased, further improving security.
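The timestamp check described above can be sketched as follows. This is a minimal illustration only: the data layout (timestamp, symbol pairs) and the tolerance value are assumptions, not part of the embodiment.

```python
def timestamps_match(decoded, schedule, tol_ms=50):
    """Check that each decoded color-switch event lines up in time
    with the moment the color mask actually changed the light.

    decoded  : list of (timestamp_ms, symbol) recovered from the frames
    schedule : list of (switch_time_ms, symbol) emitted by the mask
    tol_ms   : allowed clock drift (an assumed value for illustration)
    """
    if len(decoded) != len(schedule):
        return False
    return all(
        sym == exp_sym and abs(ts - exp_ts) <= tol_ms
        for (ts, sym), (exp_ts, exp_sym) in zip(decoded, schedule)
    )

schedule = [(0, 0), (200, -1), (400, 1), (600, 0)]
live     = [(5, 0), (198, -1), (405, 1), (603, 0)]
# A replayed video that matches the symbol order but is offset in time:
replay   = [(120, 0), (320, -1), (520, 1), (720, 0)]
print(timestamps_match(live, schedule))    # True
print(timestamps_match(replay, schedule))  # False
```

As the example shows, a replay that reproduces the correct color order still fails once its absolute timing drifts beyond the tolerance.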
As can be seen from the above, this embodiment can generate a color mask from a coded sequence, where the color mask can serve as a light source that projects light onto the test object, such as a person's face. In this way, when liveness detection is required, the person's face can be monitored, and the reflected light signal on the face in the monitored image sequence can be decoded to determine whether it matches the coded sequence; if it matches, the face is determined to be a living body. Since this scheme requires no cumbersome interactive operations or computations with the user, it can reduce the demands on the hardware configuration. Moreover, because the basis for the liveness determination is the reflected light signal on the surface of the test object, and the reflected light signal of a real living body differs from that of a forged one (a carrier of a synthesized picture or video, such as a photograph, a mobile phone, or a tablet computer), this scheme can also effectively resist synthesized-face attacks and improve the accuracy of the determination.
Further, since the projected light is generated according to a random coded sequence and the reflected light signal must be decoded during the subsequent determination, even if an attacker synthesized a corresponding reflective video for the current color sequence, it could not be reused the next time. Compared with the previous embodiment, this further improves the liveness detection effect and thereby improves the accuracy and security of identity authentication.
To better implement the above method, an embodiment of the present invention further provides a liveness detection device (liveness detection apparatus for short). As shown in Fig. 4a, the liveness detection device includes a receiving unit 401, a start unit 402, an acquisition unit 403, and a detection unit 404, as follows:
(1) receiving unit 401;
The receiving unit 401 is configured to receive a liveness detection request.
For example, the receiving unit 401 may specifically be configured to receive a liveness detection request triggered by a user, or to receive a liveness detection request sent by another device, and so on.
(2) start unit 402;
The start unit 402 is configured to start a light source according to the liveness detection request and project light onto the test object.
For example, the start unit 402 may specifically be configured to call a corresponding liveness detection process according to the liveness detection request, and start the light source according to the liveness detection process, and so on.
The light source may be configured according to the needs of the practical application. For example, it may be implemented by adjusting the brightness of the terminal screen; by using a light-emitting component such as a flash lamp or an infrared transmitter, or an external device; or by setting a color mask in the display interface, and so on. That is, the start unit 402 may specifically perform any one of the following operations:
(1) The start unit 402 may specifically be configured to adjust the screen brightness according to the liveness detection request, so that the screen serves as the light source projecting light onto the test object.
(2) The start unit 402 may specifically be configured to turn on a preset light-emitting component according to the liveness detection request, so that the light-emitting component serves as the light source projecting light onto the test object.
The light-emitting component may include components such as a flash lamp or an infrared transmitter.
(3) The start unit 402 may specifically be configured to start a preset color mask according to the liveness detection request, the color mask serving as the light source projecting light onto the test object.
For example, the start unit 402 may specifically be configured to start a color mask in the terminal according to the liveness detection request. For instance, a component capable of flashing a color mask may be arranged at the edge of the terminal housing; after the liveness detection request is received, that component can be started to flash the color mask. Alternatively, the color mask may be flashed by displaying a detection interface, as follows:
A detection interface is started according to the liveness detection request; the detection interface can flash a color mask, and the color mask serves as the light source projecting light onto the test object.
The region in which the color mask is flashed may depend on the needs of the practical application. For example, the detection interface may include a detection region and a non-detection region: the detection region is mainly used for displaying the monitoring situation, while the non-detection region can flash the color mask, which serves as the light source projecting light onto the test object, and so on.
The region of the non-detection region in which the color mask is flashed may likewise depend on the needs of the practical application: the entire non-detection region may be provided with the color mask, or one or several partial regions of the non-detection region may be provided with it, and so on. Parameters such as the color and transparency of the color mask may be configured according to the needs of the practical application. The color mask may be preset by the system and directly retrieved when the detection interface is started, or it may be automatically generated after the liveness detection request is received. That is, as shown in Fig. 4b, the liveness detection device may further include a generation unit 405, as follows:
The generation unit 405 may be configured to generate a color mask, where the light projected by the color mask changes according to a preset rule.
Optionally, to facilitate better identification of the light changes in subsequent steps, the generation unit 405 may also be configured to maximize the change intensity of the light.
The manner in which the generation unit 405 maximizes the change intensity of the light may depend on the needs of the practical application, and many such manners are possible. For example, for light of the same color, the change intensity can be maximized by adjusting the screen brightness before and after the change — for instance, setting the screen brightnesses before and after the change to the maximum and minimum; for light of different colors, the change intensity can be maximized by adjusting the color difference before and after the change, and so on. That is:
The generation unit 405 may specifically be configured to: for light of the same color, obtain a preset screen-brightness adjustment parameter and adjust the screen brightness of the same-color light before and after the change according to that parameter, so as to adjust the change intensity of the light; and for light of different colors, obtain a preset color-difference adjustment parameter and adjust the color difference of the different-color light before and after the change according to that parameter, so as to adjust the change intensity of the light.
The adjustment amplitude of the change intensity of the light may be configured according to the needs of the practical application; it may include large adjustments, for example maximizing the change intensity of the light, and may also include fine tuning, and so on. Details are not repeated here.
Optionally, in order to better detect the reflected light signal from the inter-frame differences of the images in subsequent steps, in addition to adjusting the change intensity of the light, the color space that is most robust for signal analysis may also be selected when choosing colors; for details, reference may be made to the foregoing embodiments, which are not repeated here.
Optionally, in order to improve the accuracy and security of identity authentication, a pre-encoded light combination may also be used as the color mask, that is:
The generation unit 405 may specifically be configured to: obtain a predetermined coded sequence, the coded sequence including multiple codes; successively determine, according to a preset encoding algorithm and in the order of the codes in the coded sequence, the color corresponding to each code, to obtain a color sequence; and generate a color mask based on the color sequence, so that the light projected by the color mask changes according to the colors indicated by the color sequence.
The coded sequence may be randomly generated or configured according to the needs of the practical application, and the preset encoding algorithm may also depend on the needs of the practical application. The encoding algorithm can reflect the correspondence between each code in the coded sequence and the various colors. For example, red may represent the number -1, green may represent 0, and blue may represent 1; then, if the obtained coded sequence is "0, -1, 1, 0", the color sequence "green, red, blue, green" can be obtained, and a color mask is generated whose projected light changes in the order "green, red, blue, green".
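Under the example mapping just given, the code-to-color step can be sketched as follows; the helper name and table layout are illustrative, not part of the embodiment.

```python
# Code-to-color table matching the example in the text
# (red = -1, green = 0, blue = 1).
CODE_TO_COLOR = {-1: "red", 0: "green", 1: "blue"}

def encode_to_colors(coded_sequence):
    """Turn a coded sequence into the color sequence the mask will flash."""
    return [CODE_TO_COLOR[c] for c in coded_sequence]

print(encode_to_colors([0, -1, 1, 0]))
# ['green', 'red', 'blue', 'green']
```

The same table, inverted, is what a matching decoding algorithm would use on the receiver side.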
It should be noted that in projection, between waiting time when switching between the display duration and each color of each color
Every can be configured according to the demand of practical application;In addition, during latency period, can not throw light, alternatively,
Scheduled light can also be projected, for details, reference can be made to the embodiments of the method for front, and details are not described herein.
(3) acquisition unit 403;
The acquisition unit 403 is configured to perform image acquisition on the test object to obtain an image sequence.
For example, the acquisition unit 403 may specifically be configured to call a photographic device to photograph the test object in real time, obtain an image sequence, and display the captured image sequence in the detection region.
The photographic device includes, but is not limited to, the terminal's built-in camera, a web camera, a surveillance camera, and other devices capable of acquiring images.
Optionally, in order to reduce the influence on the signal of numerical fluctuations caused by noise, after obtaining the image sequence, the acquisition unit 403 may also perform denoising on the image sequence; see the foregoing embodiments for details, which are not repeated here.
Optionally, the acquisition unit 403 may also perform other preprocessing on the image sequence, such as scaling, cropping, sharpening, and background blurring, to improve the efficiency and accuracy of subsequent identification.
(4) detection unit 404;
The detection unit 404 is configured to identify the reflected light signal produced by the projected light on the surface of the test object in the image sequence, the reflected light signal forming an image feature on the surface of the test object; to identify the type of the object to which the image feature belongs using a default identification model; and, if the identification result indicates that the type of the object to which the image feature belongs is a living body, to determine that the test object is a living body.
The detection unit 404 may also be configured to operate according to a preset strategy when it is determined that the reflected light signal produced by the projected light is not present on the surface of the test object in the image sequence.
The preset strategy may be specifically configured according to the needs of the practical application. For example, the test object may be determined to be a non-living body; or the acquisition unit 403 may be triggered to perform image acquisition on the test object again; or the start unit 402 may be triggered to restart the light source and project light onto the test object again, and so on. For details, reference may be made to the foregoing method embodiments, which are not repeated here.
The detection unit 404 may also be configured to determine that the test object is a non-living body when the identification result indicates that the type of the object to which the image feature belongs is a non-living body.
For example, the detection unit 404 may include a computation subunit, a judgment subunit, and an identification subunit, as follows:
The computation subunit may be configured to regression-analyze the frame changes in the image sequence to obtain a regression result.
For example, the computation subunit may specifically be configured to regression-analyze the numerical expression of the chrominance/luminance of each frame in the image sequence — the numerical expression may be a numerical sequence — and judge the frame-by-frame chrominance/luminance changes in the image sequence according to the numerical expression, such as the numerical sequence, to obtain a regression result.
Alternatively, the computation subunit may specifically be configured to regression-analyze the changes between frames in the image sequence to obtain a regression result, and so on.
For the specific manner of the numerical expression of the chrominance/luminance of each frame, reference may be made to the foregoing method embodiments. The changes between frames in the image sequence can be obtained by calculating the differences between frames in the image sequence; the difference between frames may be an inter-frame difference or a frame difference, where the inter-frame difference refers to the difference between two adjacent frames, and the frame difference is the difference between the frames corresponding to the moments before and after the projected light changes.
For example, the computation subunit may specifically be configured to, when it is determined that the degree of position change of the test object is less than a preset change value, obtain the pixel coordinates of contiguous frames in the image sequence respectively, and calculate the inter-frame difference based on the pixel coordinates. For instance, the pixel coordinates may be transformed to minimize their registration error; pixels whose correlation meets a preset condition are then filtered out according to the transformation result, and the inter-frame difference is calculated from the filtered pixels, and so on.
As another example, the computation subunit may specifically be configured to, when it is determined that the degree of position change of the test object is less than the preset change value, obtain from the image sequence the pixel coordinates of the frames before and after the projected light changes, and calculate the frame difference based on those pixel coordinates. For instance, the pixel coordinates may be transformed to minimize their registration error; pixels whose correlation meets a preset condition are then filtered out according to the transformation result, and the frame difference is calculated from the filtered pixels, and so on.
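The frame-difference step described above can be reduced to a minimal sketch. This illustration deliberately omits the registration and correlation-filtering steps of the embodiment and simply takes a per-pixel difference of already-aligned frames; the array shapes and values are assumptions.

```python
import numpy as np

def frame_difference(frame_before, frame_after):
    """Per-pixel difference between the frames captured before and
    after the projected light changes (registration omitted for brevity)."""
    return frame_after.astype(np.int16) - frame_before.astype(np.int16)

def mean_channel_change(frame_before, frame_after):
    """Average change per color channel, a crude input for the regression."""
    return frame_difference(frame_before, frame_after).mean(axis=(0, 1))

# Two 4x4 RGB frames: the projected light switches from green to blue tint.
before = np.zeros((4, 4, 3), dtype=np.uint8); before[..., 1] = 100
after  = np.zeros((4, 4, 3), dtype=np.uint8); after[..., 2] = 100
print(mean_channel_change(before, after).tolist())  # [0.0, -100.0, 100.0]
```

A real implementation would first align the two frames (the "minimize the registration error" step) so that head motion is not mistaken for a lighting change.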
Optionally, besides the above manner of calculating the frame difference, other manners may also be used. For example, on a channel of a certain color space, or in any dimension that can describe chrominance or luminance change, the relative value of the chrominance change or of the luminance change between the two frames under analysis may be obtained, that is:
The computation subunit may specifically be configured to, when it is determined that the degree of position change of the test object is less than the preset change value, obtain from the image sequence the chrominance/luminance of the frames before and after the projected light changes, calculate from the obtained chrominance/luminance the chrominance-change relative value or luminance-change relative value between the frames before and after the projected light changes, and use that chrominance/luminance-change relative value as the frame difference between the frames before and after the projected light changes.
For example, the computation subunit may specifically be configured to calculate the chrominance/luminance through a default regression function, to obtain the chrominance/luminance-change relative value between the frames before and after the projected light changes (i.e., the chrominance-change relative value or the luminance-change relative value), and so on.
The default regression function may be configured according to the needs of the practical application; for example, it may specifically be a recurrent neural network, etc.
The preset change value and the preset condition may be configured according to the needs of the practical application.
The judgment subunit may be configured to determine, according to the regression result, whether the reflected light signal produced by the projected light is present on the surface of the test object in the image sequence, the reflected light signal forming an image feature on the surface of the test object.
The identification subunit may be configured to, when the judgment subunit determines that the reflected light signal produced by the projected light is present on the surface of the test object in the image sequence, identify the type of the object to which the image feature belongs using the default identification model, and, if the identification result indicates that the type of the object to which the image feature belongs is a living body, determine that the test object is a living body.
The identification subunit may also be configured to operate according to the preset strategy when the judgment subunit determines that the reflected light signal produced by the projected light is not present; for details, reference may be made to the description of the preset strategy in the detection unit 404, which is not repeated here.
The identification subunit may also be configured to determine that the test object is a non-living body when the identification result indicates that the type of the object to which the image feature belongs is a non-living body.
The default identification model may be formed by training on multiple feature samples, each feature sample being an image feature formed by the reflected light signal on the surface of an object of a labeled type. The identification model may be established by another device and supplied to the liveness detection device, or may be established by the liveness detection device itself; that is, the liveness detection device may further include a model establishment unit, as follows:
The model establishment unit is configured to obtain multiple feature samples and to train a default initial identification model on the feature samples, obtaining the default identification model.
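The training step can be sketched as below. A toy nearest-centroid classifier stands in for the embodiment's unspecified identification model; the feature vectors, labels, and numbers are all illustrative assumptions.

```python
# Toy stand-in for the "default identification model": a nearest-centroid
# classifier trained on labeled feature samples (living / non-living).

def train(samples):
    """samples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(model, vec):
    """Return the label whose centroid is closest to the feature vector."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(vec, centroid))
    return min(model, key=lambda lab: dist(model[lab]))

# Made-up reflected-light features of skin vs. a screen replay.
samples = [([0.9, 0.1], "living"), ([0.8, 0.2], "living"),
           ([0.1, 0.9], "non-living"), ([0.2, 0.8], "non-living")]
model = train(samples)
print(classify(model, [0.85, 0.15]))  # living
```

In practice the model would be a trained classifier (the text elsewhere mentions neural networks and decision trees) over far richer reflectance features; the sketch only shows the train-then-classify flow of the model establishment unit.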
There can be many manners of determining, according to the regression result, whether the reflected light signal produced by the projected light is present on the surface of the test object in the image sequence; for example, any one of the following may be used:
First manner:
The judgment subunit may specifically be configured to determine whether the regression result is greater than a preset threshold; if so, it determines that the reflected light signal produced by the projected light is present on the surface of the test object in the image sequence; if not, it determines that the reflected light signal produced by the projected light is not present on the surface of the test object in the image sequence.
Second manner:
The judgment subunit may specifically be configured to perform classification analysis on the regression result through a default global-feature algorithm or the default identification model. If the analysis result indicates that the inter-frame change on the surface of the test object is greater than a set value, it determines that the reflected light signal produced by the projected light is present on the surface of the test object in the image sequence; if the analysis result indicates that the inter-frame change on the surface of the test object is not greater than the set value, it determines that the reflected light signal produced by the projected light is not present on the surface of the test object in the image sequence.
The set value may depend on the needs of the practical application, and there can also be many manners of "performing classification analysis on the inter-frame difference through a default global-feature algorithm or a default identification model"; for example:
The judgment subunit may specifically be configured to analyze the regression result to judge whether the reflected light signal produced by the projected light is present in the image sequence. If the reflected light signal produced by the projected light is not present, an analysis result is generated indicating that the inter-frame change on the surface of the test object is not greater than the set value. If the reflected light signal produced by the projected light is present, the preset global-feature algorithm or default identification model is used to judge whether the reflector of the detected reflected light information is the test object: if it is the test object, an analysis result is generated indicating that the inter-frame change on the surface of the test object is greater than the set value; if it is not the test object, an analysis result is generated indicating that the inter-frame change on the surface of the test object is not greater than the set value.
Alternatively, the judgment subunit may specifically be configured to classify the images in the image sequence through the default global-feature algorithm or the default identification model, so as to filter out the frames in which the test object is present, obtaining candidate frames, and to analyze the inter-frame differences of the candidate frames to judge whether the reflected light signal produced by the projected light is present on the test object. If the reflected light signal produced by the projected light is not present, an analysis result is generated indicating that the inter-frame change on the surface of the test object is not greater than the set value; if the reflected light signal produced by the projected light is present, an analysis result is generated indicating that the inter-frame change on the surface of the test object is greater than the set value.
The global-feature algorithm refers to an algorithm based on global features, where the global features may include the gray-level mean and variance, the gray-level co-occurrence matrix, and transformed spectra such as the FFT and DCT.
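As one illustration of the global features just listed, the gray-level mean, gray-level variance, and a few low-frequency FFT terms could be computed as follows; this particular feature vector and the use of NumPy's FFT are assumptions, not the embodiment's prescribed algorithm.

```python
import numpy as np

def global_features(gray_frame):
    """Simple global descriptors of one grayscale frame: gray-level mean,
    gray-level variance, and the lowest-frequency FFT magnitudes."""
    mean = float(gray_frame.mean())
    var = float(gray_frame.var())
    spectrum = np.abs(np.fft.fft2(gray_frame))
    low_freq = spectrum[:2, :2].ravel().tolist()  # a few low-frequency terms
    return [mean, var] + low_freq

frame = np.full((8, 8), 100, dtype=np.uint8)  # a flat frame: variance is 0
feats = global_features(frame)
print(feats[0], feats[1])  # 100.0 0.0
```

A gray-level co-occurrence matrix or a DCT, also named in the text, would slot into the same feature vector in the same way.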
Optionally, if the color mask is generated according to a predetermined coded sequence, a third manner may be used, namely determining whether the reflected light signal produced by the projected light is present on the surface of the test object in the image sequence by decoding the light signal, as follows:
Third manner:
The judgment subunit may specifically be configured to decode the image sequence according to the regression result using a default decoding algorithm to obtain a decoded sequence, and to determine whether the decoded sequence matches the coded sequence. If they match, it determines that the reflected light signal produced by the projected light is present on the surface of the test object in the image sequence; if the decoded sequence and the coded sequence do not match, it determines that the reflected light signal produced by the projected light is not present on the surface of the test object in the image sequence.
For example, if the regression result is the chrominance/luminance-change relative value between the frames before and after the projected light changes, the judgment subunit may successively calculate, using the default decoding algorithm, the chrominance/luminance-change relative values in the image sequence (i.e., the chrominance/luminance-change relative values between the frames before and after each change of the projected light) to obtain the chrominance/luminance absolute values of the frames before and after each change of the projected light. The obtained chrominance/luminance absolute values may then be used directly as the decoded sequence, or converted according to a preset strategy to obtain the decoded sequence.
The default decoding algorithm matches the encoding algorithm and may specifically depend on the encoding algorithm; the preset strategy may likewise be configured according to the needs of the practical application, and details are not repeated here.
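A minimal decoding sketch under the running example (red = -1, green = 0, blue = 1): the dominant channel of each measured reflected-light response is mapped back to its code and the result is compared with the coded sequence. This dominant-channel decoder is an illustrative assumption, not the embodiment's specific decoding algorithm.

```python
# Inverse of the running example's encoding (red=-1, green=0, blue=1).
COLOR_TO_CODE = {"red": -1, "green": 0, "blue": 1}

def decode_frame(rgb):
    """Pick the dominant channel of one measured reflected-light response."""
    channel = max(range(3), key=lambda i: rgb[i])
    return COLOR_TO_CODE[("red", "green", "blue")[channel]]

def decode_sequence(responses):
    """Decode one response per projected-light change into a code sequence."""
    return [decode_frame(rgb) for rgb in responses]

# Measured responses dominated by green, red, blue, green in turn:
responses = [(30, 90, 20), (95, 10, 15), (20, 25, 80), (35, 88, 30)]
decoded = decode_sequence(responses)
print(decoded, decoded == [0, -1, 1, 0])  # [0, -1, 1, 0] True
```

If the decoded sequence matches the coded sequence "0, -1, 1, 0", the reflected light signal is taken as present, per the third manner above.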
Optionally, there can also be many manners of determining whether the decoded sequence matches the coded sequence. For example, the judgment subunit may determine whether the decoded sequence and the coded sequence are consistent: if consistent, the decoded sequence matches the coded sequence; if inconsistent, they do not match. Alternatively, the judgment subunit may determine whether the relationship between the decoded sequence and the coded sequence satisfies a default correspondence: if so, the decoded sequence matches the coded sequence; otherwise, if it is not satisfied, the decoded sequence and the coded sequence do not match, and so on. The default correspondence may be configured according to the needs of the practical application.
In specific implementation, each of the above units may be implemented as an independent entity, or they may be combined arbitrarily and implemented as one or several entities; for the specific implementation of each of the above units, reference may be made to the foregoing method embodiments, which are not repeated here.
The liveness detection device may specifically be integrated in a device such as a terminal, and the terminal may specifically be a device such as a mobile phone, a tablet computer, a notebook computer, or a PC.
As can be seen from the above, when liveness detection is required, the liveness detection device of this embodiment can start a light source through the start unit 402 to project light onto the test object, perform image acquisition on the test object through the acquisition unit 403 in the detection region of the detection interface, and then, when the detection unit 404 identifies that the reflected light signal produced by the projected light is present on the surface of the test object in the collected image sequence, identify, using the default identification model, the type of the object to which the image feature formed by the reflected light signal on the surface of the test object belongs; if the identification result indicates that the type of the object to which the image feature belongs is a living body, the test object is determined to be a living body. Since this scheme requires no cumbersome interactive operations or computations with the user, it can reduce the demands on the hardware configuration. Moreover, because the basis for the liveness determination is the reflected light signal on the surface of the test object, and the reflected light signal of a real living body differs from that of a forged one (a carrier of a synthesized picture or video, such as a photograph, a mobile phone, or a tablet computer), this scheme can also effectively resist synthesized-face attacks and improve the accuracy of the determination.
Further, the generation unit 405 of the liveness detection device can also generate the projected light according to a random coded sequence, and the detection unit 404 makes the determination by decoding the reflected light signal. Therefore, even if an attacker synthesized a corresponding reflective video for the current color sequence, it could not be reused the next time, so the security can be greatly enhanced. In summary, this scheme can greatly improve the liveness detection effect, which is beneficial to improving the accuracy and security of identity authentication.
Correspondingly, an embodiment of the present invention further provides a terminal. As shown in Fig. 5, the terminal may include components such as a radio frequency (RF) circuit 501, a memory 502 including one or more computer-readable storage media, an input unit 503, a display unit 504, a sensor 505, an audio circuit 506, a Wireless Fidelity (WiFi) module 507, a processor 508 including one or more processing cores, and a power supply 509. Those skilled in the art will appreciate that the terminal structure shown in Fig. 5 does not constitute a limitation on the terminal; the terminal may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement. Wherein:
The RF circuit 501 may be used for receiving and sending signals during information transmission and reception or during a call. In particular, after receiving downlink information from a base station, it hands the information over to one or more processors 508 for processing; in addition, it sends uplink data to the base station. Generally, the RF circuit 501 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low-noise amplifier (LNA), a duplexer, and so on. The RF circuit 501 can also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and so on.
The memory 502 may be used to store software programs and modules, and the processor 508 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area. The program storage area can store the operating system, application programs required for at least one function (such as a sound playing function and an image playing function), and so on; the data storage area can store data created according to the use of the terminal (such as audio data and a phone book), and so on. In addition, the memory 502 may include high-speed random access memory and may also include non-volatile memory, such as at least one disk memory, flash memory device, or other solid-state memory component. Correspondingly, the memory 502 may also include a memory controller to provide the processor 508 and the input unit 503 with access to the memory 502.
The input unit 503 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Specifically, in one embodiment, the input unit 503 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also referred to as a touch display screen or touchpad, collects the user's touch operations on or near it (such as operations performed on or near the touch-sensitive surface with a finger, a stylus, or any other suitable object or attachment) and drives the corresponding connection device according to a preset formula. Optionally, the touch-sensitive surface may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 508, and can receive and execute commands sent by the processor 508. The touch-sensitive surface may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface, the input unit 503 may also include other input devices, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, a joystick, and so on.
The display unit 504 may be used to display information input by the user or information provided to the user, as well as the various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 504 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED, Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface may cover the display panel; after detecting a touch operation on or near it, the touch-sensitive surface transmits the operation to the processor 508 to determine the type of the touch event, and the processor 508 then provides a corresponding visual output on the display panel according to the type of the touch event. Although in Fig. 5 the touch-sensitive surface and the display panel implement the input and output functions as two separate components, in some embodiments the touch-sensitive surface and the display panel may be integrated to implement the input and output functions.
The terminal may further include at least one sensor 505, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor may turn off the display panel and/or the backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor may detect the magnitude of acceleration in all directions (generally three axes), may detect the magnitude and direction of gravity when stationary, and may be used in applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition related functions (such as a pedometer and tapping). As for other sensors that may also be configured in the terminal, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, details are not described herein.
The audio circuit 506, a loudspeaker, and a microphone may provide an audio interface between the user and the terminal. The audio circuit 506 may transmit an electrical signal, converted from received audio data, to the loudspeaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 506 and converted into audio data. After the audio data is output to the processor 508 for processing, it is sent via the RF circuit 501 to, for example, another terminal, or it is output to the memory 502 for further processing. The audio circuit 506 may also include an earphone jack to provide communication between a peripheral earphone and the terminal.
WiFi belongs to short-range wireless transmission technology. Through the WiFi module 507, the terminal may help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although Fig. 5 shows the WiFi module 507, it may be understood that it is not a necessary part of the terminal and may be omitted as needed within the scope that does not change the essence of the invention.
The processor 508 is the control center of the terminal; it connects the various parts of the entire mobile phone using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 502 and invoking the data stored in the memory 502, thereby monitoring the mobile phone as a whole. Optionally, the processor 508 may include one or more processing cores; preferably, the processor 508 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It may be understood that the foregoing modem processor may alternatively not be integrated into the processor 508.
The terminal further includes a power supply 509 (such as a battery) that powers the various components. Preferably, the power supply may be logically connected to the processor 508 through a power management system, thereby implementing functions such as managing charging, discharging, and power consumption through the power management system. The power supply 509 may further include any components such as one or more direct-current or alternating-current power sources, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
Although not shown, the terminal may further include a camera, a Bluetooth module, and the like, and details are not described herein. Specifically, in this embodiment, the processor 508 in the terminal loads the executable files corresponding to the processes of one or more application programs into the memory 502 according to the following instructions, and the processor 508 runs the application programs stored in the memory 502, thereby implementing various functions:

receiving a living body detection request; starting a light source according to the living body detection request and projecting light onto a test object; then performing image acquisition on the test object to obtain an image sequence; identifying that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence, the reflected light signal forming an image feature on the surface of the test object; identifying the type of the object to which the image feature belongs by using a preset identification model; and, if the identification result indicates that the type of the object to which the image feature belongs is a living body, determining that the test object is a living body.
There may be many ways to determine whether a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence, and likewise many ways to identify the type of the object to which the image feature belongs; for details, reference may be made to the foregoing embodiments, and details are not described herein.
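As an illustrative sketch only (the patent leaves the concrete regression and thresholding to the foregoing embodiments), one simple way to test for the reflected light signal is to compare the mean luminance of consecutive frames against a threshold: a jump aligned with the projected-light change suggests the light was reflected by the test object. The function names, the frame representation as 2-D lists of grayscale values, and the threshold value below are all assumptions for illustration, not the patent's algorithm:

```python
def mean_luma(frame):
    # frame: a 2-D list of grayscale pixel values for one acquired image
    total = sum(sum(row) for row in frame)
    count = sum(len(row) for row in frame)
    return total / count

def reflected_light_present(frames, threshold=8.0):
    """Return True if some inter-frame jump in mean luminance exceeds the
    threshold, i.e. the projected-light change appears to be reflected by
    the surface of the test object. `threshold` is an assumed tuning
    parameter, not a value taken from the patent."""
    luma = [mean_luma(f) for f in frames]
    diffs = [abs(b - a) for a, b in zip(luma, luma[1:])]
    return max(diffs) >= threshold
```

A flat printed photograph or a screen replay would typically produce a different reflectance response than live skin, which is what the later identification-model step discriminates.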
There may also be many implementations of the light source. For example, it may be implemented by adjusting the brightness of the terminal screen; or it may be implemented by using another light-emitting component such as a flash lamp or an infrared emitter, or an external apparatus; or it may be implemented by setting a color mask in the display interface, and so on. That is, the application program in the memory 502 may also implement the following functions:

adjusting the screen brightness according to the living body detection request, so that the screen serves as the light source to project light onto the test object;

or, turning on a preset light-emitting component according to the living body detection request, so that the light-emitting component serves as the light source to project light onto the test object, where the light-emitting component may include components such as a flash lamp or an infrared emitter;

or, starting a color mask according to the living body detection request, for example, starting a detection interface that can flash a color mask, the color mask serving as the light source to project light onto the test object, and so on.

The region in which the color mask flashes may depend on the requirements of the actual application. For example, the detection interface may include a detection region and a non-detection region; the detection region is mainly used to display the monitoring situation, while the non-detection region may flash the color mask, the color mask serving as the light source to project light onto the test object, and so on.
In addition, it should be noted that parameters such as the color and transparency of the color mask may be set according to the requirements of the actual application. The color mask may be preset by the system and retrieved directly when the detection interface is started, or it may be generated automatically after the living body detection request is received. That is, the application program stored in the memory 502 may also implement the following function:

generating a color mask such that the light projected by the color mask changes according to a preset rule.

Optionally, to facilitate better subsequent identification of the change in light, the change intensity of the light may also be maximized. There may likewise be many ways to maximize the change intensity of the light; for details, reference may be made to the foregoing embodiments, and details are not described herein.
Optionally, so that the reflected light signal can subsequently be better detected from the inter-frame difference of the images, in addition to maximizing the change intensity of the light, the color space in which the analyzed signal is most robust may also be selected as far as possible when choosing the colors.
Optionally, to improve the accuracy and security of identity verification, a light combination of a preset code may also be used as the color mask; that is, the application program stored in the memory 502 may also implement the following functions:

obtaining a preset coded sequence, the coded sequence including multiple codes; according to a preset coding algorithm, successively determining the color corresponding to each code in the order of the codes in the coded sequence, to obtain a color sequence; and generating a color mask based on the color sequence, so that the light projected by the color mask changes according to the colors indicated by the color sequence.

The coded sequence may be generated randomly or set according to the requirements of the actual application, and the preset coding algorithm may likewise depend on the requirements of the actual application. The coding algorithm reflects the correspondence between each code in the coded sequence and the various colors; for example, red may represent the number -1, green may represent 0, and blue may represent 1, and so on.
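The example correspondence above (red for -1, green for 0, blue for 1) can be sketched as a simple lookup; the table and function names below are hypothetical illustrations, not the patent's implementation:

```python
# Hypothetical coding table following the example in the text:
# red represents -1, green represents 0, blue represents 1.
CODE_TO_COLOR = {-1: "red", 0: "green", 1: "blue"}

def encode_to_color_sequence(coded_sequence):
    """Map each code of the preset coded sequence to its color, in order,
    producing the color sequence that drives the flashing color mask."""
    return [CODE_TO_COLOR[code] for code in coded_sequence]
```

For a randomly generated coded sequence such as [1, -1, 0], this yields the color sequence ["blue", "red", "green"], which the mask then flashes in order.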
If the color mask is generated according to a preset coded sequence, whether a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence may also be determined by decoding the light signal; for details, reference may be made to the foregoing embodiments, and details are not described herein.
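Under the same hypothetical color table, the decoding check described above could look like the following sketch, where the dominant reflected color per mask flash is assumed to have been extracted from the frames already; names and structure are illustrative assumptions:

```python
# Inverse of the hypothetical example table: red -> -1, green -> 0, blue -> 1.
COLOR_TO_CODE = {"red": -1, "green": 0, "blue": 1}

def decoded_sequence_matches(observed_colors, coded_sequence):
    """Decode the dominant reflected color observed for each mask flash back
    into a code, then compare with the preset coded sequence; a match
    suggests the reflection was produced by this session's projected light
    rather than by a replayed recording."""
    decoded = [COLOR_TO_CODE.get(color) for color in observed_colors]
    return decoded == list(coded_sequence)
```

Because the coded sequence can be generated randomly per session, a replayed video recorded under a different sequence would fail this match.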
Optionally, to reduce the influence on the signal of numerical fluctuation caused by noise, denoising may also be performed on the image sequence after it is obtained; that is, the application program stored in the memory 502 may also implement the following function:

performing denoising on the image sequence.

For example, taking a Gaussian noise model as an example, multi-frame averaging at the same temporal position and/or multi-scale averaging may specifically be used to reduce the noise as much as possible, and so on.
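A minimal sketch of the multi-frame averaging mentioned above, under the assumption of zero-mean (Gaussian) noise; frames are represented as 2-D lists of pixel values and the function name is hypothetical:

```python
def temporal_average(frame_groups):
    """Average the frames captured at the same temporal position across
    repeats; for zero-mean noise, averaging n frames shrinks the noise
    standard deviation by a factor of sqrt(n)."""
    averaged = []
    for group in frame_groups:  # group: list of 2-D pixel lists, one position
        n = len(group)
        rows, cols = len(group[0]), len(group[0][0])
        avg = [[sum(f[r][c] for f in group) / n for c in range(cols)]
               for r in range(rows)]
        averaged.append(avg)
    return averaged
```

Multi-scale averaging (smoothing each frame at several spatial resolutions) could be combined with this temporal averaging in the same spirit.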
The specific implementation of each of the foregoing operations may be found in the foregoing embodiments, and details are not described herein.

It can be seen from the above that, when living body detection is needed, the terminal of this embodiment can start a light source to project light onto a test object and perform image acquisition on the test object, and then determine whether a reflected light signal produced by the projected light exists on the surface of the test object in the acquired image sequence. If it exists, the type of the object to which the image feature belongs is identified using a preset identification model, and if the identification result indicates that the type of the object to which the image feature belongs is a living body, the test object is determined to be a living body. Because this scheme requires no cumbersome interactive operations and computations with the user, it can reduce the demands on hardware configuration. Moreover, because the basis on which this scheme makes the living-body determination is the reflected light signal on the surface of the test object, and the reflected light signal of a real living body differs from that of a forgery (a carrier of a synthesized picture or video, such as a photograph, a mobile phone, or a tablet computer), this scheme can also effectively resist synthesized-face attacks and improve the accuracy of discrimination. In short, this scheme can improve the living body detection effect under the limited hardware configuration of a terminal, especially a mobile terminal, thereby improving the accuracy and security of identity verification.
A person of ordinary skill in the art may understand that all or part of the steps in the various methods of the foregoing embodiments may be completed by instructions, or by instructions controlling related hardware, and the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.

To this end, an embodiment of the present invention further provides a storage medium storing a plurality of instructions that can be loaded by a processor to perform the steps in any living body detection method provided by the embodiments of the present invention. For example, the instructions may perform the following steps:
receiving a living body detection request; starting a light source according to the living body detection request and projecting light onto a test object; then performing image acquisition on the test object to obtain an image sequence; identifying that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence, the reflected light signal forming an image feature on the surface of the test object; identifying the type of the object to which the image feature belongs by using a preset identification model; and, if the identification result indicates that the type of the object to which the image feature belongs is a living body, determining that the test object is a living body.

There may be many ways to determine whether a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence, and likewise many ways to identify the type of the object to which the image feature belongs; for details, reference may be made to the foregoing embodiments, and details are not described herein.

There may also be many implementations of the light source. For example, it may be implemented by adjusting the brightness of the terminal screen, by using another light-emitting component such as a flash lamp or an infrared emitter, or an external apparatus, or by setting a color mask in the display interface, and so on. That is, the instructions may perform the following steps:

adjusting the screen brightness according to the living body detection request, so that the screen serves as the light source to project light onto the test object;

or, turning on a preset light-emitting component according to the living body detection request, so that the light-emitting component serves as the light source to project light onto the test object, where the light-emitting component may include components such as a flash lamp or an infrared emitter;

or, starting a color mask according to the living body detection request, for example, starting a detection interface that can flash a color mask, the color mask serving as the light source to project light onto the test object.

The region in which the color mask flashes may depend on the requirements of the actual application. For example, the detection interface may include a detection region and a non-detection region; the detection region is mainly used to display the monitoring situation, while the non-detection region may flash the color mask, the color mask serving as the light source to project light onto the test object, and so on.

In addition, it should be noted that parameters such as the color and transparency of the color mask may be set according to the requirements of the actual application. The color mask may be preset by the system and retrieved directly when the detection interface is started, or it may be generated automatically after the living body detection request is received. That is, the instructions may perform the following step:

generating a color mask such that the light projected by the color mask changes according to a preset rule; for example, a light combination of a preset code may specifically be used as the color mask, and so on.

The specific implementation of each of the foregoing operations may be found in the foregoing embodiments, and details are not described herein.
The storage medium may include a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or the like.

Because the instructions stored in the storage medium can perform the steps in any living body detection method provided by the embodiments of the present invention, they can achieve the beneficial effects achievable by any living body detection method provided by the embodiments of the present invention; for details, see the foregoing embodiments, and details are not described herein.
The living body detection method, apparatus, and storage medium provided by the embodiments of the present invention are described in detail above. The principles and implementations of the present invention are illustrated herein with specific examples, and the description of the foregoing embodiments is merely intended to help understand the method and core idea of the present invention. At the same time, a person skilled in the art will make changes to the specific implementation and application scope according to the idea of the present invention. In conclusion, the content of this specification shall not be construed as a limitation on the present invention.
Claims (14)
1. A living body detection method, characterized by comprising:
receiving a living body detection request;
generating a color mask according to the living body detection request, the light projected by the color mask changing according to a preset rule;
projecting light onto a test object using the color mask as a light source;
performing image acquisition on the test object to obtain an image sequence;
identifying that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence, the reflected light signal forming an image feature on the surface of the test object;
identifying the type of the object to which the image feature belongs by using a preset identification model, the preset identification model being formed by training with multiple feature samples, the feature samples being image features formed by the reflected light signal on the surfaces of objects of marked types; and
if the identification result indicates that the type of the object to which the image feature belongs is a living body, determining that the test object is a living body.
2. The method according to claim 1, wherein the generating a color mask according to the living body detection request comprises:
starting a detection interface according to the living body detection request, the detection interface including a non-detection region, the non-detection region flashing the color mask.
3. The method according to claim 1, further comprising, after the generating a color mask:
for light of the same color, obtaining a preset screen brightness adjustment parameter, and adjusting, according to the screen brightness adjustment parameter, the screen brightness of the light of the same color before and after the change, so as to adjust the change intensity of the light;
for light of different colors, obtaining a preset color difference adjustment parameter, and adjusting, according to the color difference adjustment parameter, the color difference of the light of different colors before and after the change, so as to adjust the change intensity of the light.
4. The method according to claim 1, wherein the generating a color mask such that the light projected by the color mask can change according to a preset rule comprises:
obtaining a preset coded sequence, the coded sequence including multiple codes;
according to a preset coding algorithm, successively determining the color corresponding to each code in the order of the codes in the coded sequence, to obtain a color sequence; and
generating a color mask based on the color sequence, so that the light projected by the color mask changes according to the colors indicated by the color sequence.
5. The method according to any one of claims 1 to 4, wherein the identifying that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence comprises:
performing regression analysis on frame changes in the image sequence to obtain a regression result; and
identifying, according to the regression result, that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence.
6. The method according to claim 5, wherein the identifying, according to the regression result, that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence comprises:
determining whether the regression result is greater than a preset threshold;
if so, determining that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence;
if not, determining that no reflected light signal produced by the projected light exists on the surface of the test object in the image sequence.
7. The method according to claim 5, wherein the identifying, according to the regression result, that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence comprises:
analyzing the regression result to determine whether a reflected light signal produced by the projected light exists in the image sequence;
if no reflected light signal produced by the projected light exists, determining that no reflected light signal produced by the projected light exists on the surface of the test object in the image sequence;
if a reflected light signal produced by the projected light exists, determining, by a preset global feature algorithm or a preset identification model, whether the reflector of the existing reflected light information is the test object; if it is the test object, determining that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence; if it is not the test object, determining that no reflected light signal produced by the projected light exists on the surface of the test object in the image sequence.
8. The method according to claim 5, wherein the identifying, according to the regression result, that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence comprises:
classifying the images in the image sequence by a preset global feature algorithm or a preset identification model, so as to filter out the frames in which the test object exists, obtaining candidate frames;
analyzing the inter-frame difference of the candidate frames; and
identifying, according to the inter-frame difference, that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence.
9. The method according to claim 5, wherein the color mask is generated according to a preset coded sequence, and the identifying, according to the regression result, that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence comprises:
decoding the image sequence according to a preset decoding algorithm and the regression result, to obtain a decoded sequence;
determining whether the decoded sequence matches the coded sequence;
if they match, determining that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence;
if they do not match, determining that no reflected light signal produced by the projected light exists on the surface of the test object in the image sequence.
10. The method according to claim 9, wherein the regression result is the relative chrominance/luminance change value between the frames corresponding to before and after a change of the projected light, and the decoding the image sequence according to a preset decoding algorithm and the regression result, to obtain a decoded sequence, comprises:
using a preset decoding algorithm, successively calculating the relative chrominance/luminance change values in the image sequence, to obtain the chrominance/luminance absolute value of the frames corresponding to before and after each change of the projected light; and
using the obtained chrominance/luminance absolute values as the decoded sequence, or converting the obtained chrominance/luminance absolute values according to a preset strategy to obtain the decoded sequence.
11. The method according to claim 5, wherein the performing regression analysis on frame changes in the image sequence to obtain a regression result comprises:
when it is determined that the degree of positional change of the test object is less than a preset change value, respectively obtaining, from the image sequence, the chrominance/luminance of the frames corresponding to before and after a change of the projected light; and
calculating the chrominance/luminance by a preset regression function to obtain the relative chrominance/luminance change value between the frames corresponding to before and after the change of the projected light, the relative chrominance/luminance change value serving as the regression result.
12. A living body detection apparatus, characterized by comprising:
a receiving unit, configured to receive a living body detection request;
a generation unit, configured to generate a color mask according to the living body detection request, so that the light projected by the color mask can change according to a preset rule;
a starting unit, configured to project light onto a test object using the color mask as a light source;
an acquisition unit, configured to perform image acquisition on the test object to obtain an image sequence; and
a detection unit, configured to identify that a reflected light signal produced by the projected light exists on the surface of the test object in the image sequence, the reflected light signal forming an image feature on the surface of the test object; identify the type of the object to which the image feature belongs by using a preset identification model, the preset identification model being formed by training with multiple feature samples, the feature samples being image features formed by the reflected light signal on the surfaces of objects of marked types; and, if the identification result indicates that the type of the object to which the image feature belongs is a living body, determine that the test object is a living body.
13. The apparatus according to claim 12, wherein the starting unit is specifically configured to start a detection interface according to the living body detection request, the detection interface including a non-detection region, the non-detection region flashing the color mask, and the color mask serving as the light source to project light onto the test object.
14. A storage medium, wherein the storage medium stores a plurality of instructions, and the instructions are adapted to be loaded by a processor to perform the steps in the living body detection method according to any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/111218 WO2019080797A1 (en) | 2016-12-30 | 2018-10-22 | Living body detection method, terminal, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2016112570522 | 2016-12-30 | ||
CN201611257052 | 2016-12-30 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107992794A CN107992794A (en) | 2018-05-04 |
CN107992794B true CN107992794B (en) | 2019-05-28 |
Family
ID=62031297
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711012244.1A Active CN107992794B (en) | 2016-12-30 | 2017-10-26 | A kind of biopsy method, device and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107992794B (en) |
WO (2) | WO2018121428A1 (en) |
Families Citing this family (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107992794B (en) * | 2016-12-30 | 2019-05-28 | 腾讯科技(深圳)有限公司 | A kind of biopsy method, device and storage medium |
CN107832712A (en) * | 2017-11-13 | 2018-03-23 | 深圳前海微众银行股份有限公司 | Biopsy method, device and computer-readable recording medium |
CN109101881B (en) * | 2018-07-06 | 2021-08-20 | 华中科技大学 | Real-time blink detection method based on multi-scale time sequence image |
CN113408403A (en) * | 2018-09-10 | 2021-09-17 | 创新先进技术有限公司 | Living body detection method, living body detection device, and computer-readable storage medium |
CN111310515A (en) * | 2018-12-11 | 2020-06-19 | 上海耕岩智能科技有限公司 | Code mask biological characteristic analysis method, storage medium and neural network |
CN111310514A (en) * | 2018-12-11 | 2020-06-19 | 上海耕岩智能科技有限公司 | Method for reconstructing biological characteristics of coded mask and storage medium |
CN109660745A (en) * | 2018-12-21 | 2019-04-19 | 深圳前海微众银行股份有限公司 | Video recording method, device, terminal and computer readable storage medium |
CN111488756B (en) * | 2019-01-25 | 2023-10-03 | 杭州海康威视数字技术股份有限公司 | Face recognition-based living body detection method, electronic device, and storage medium |
CN109961025B (en) * | 2019-03-11 | 2020-01-24 | 烟台市广智微芯智能科技有限责任公司 | True and false face identification and detection method and detection system based on image skewness |
CN110414346A (en) * | 2019-06-25 | 2019-11-05 | 北京迈格威科技有限公司 | Biopsy method, device, electronic equipment and storage medium |
CN110298312B (en) * | 2019-06-28 | 2022-03-18 | 北京旷视科技有限公司 | Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium |
CN112183156B (en) * | 2019-07-02 | 2023-08-11 | 杭州海康威视数字技术股份有限公司 | Living body detection method and equipment |
CN110516644A (en) * | 2019-08-30 | 2019-11-29 | 深圳前海微众银行股份有限公司 | A liveness detection method and device |
CN110969077A (en) * | 2019-09-16 | 2020-04-07 | 成都恒道智融信息技术有限公司 | Living body detection method based on color change |
CN110688946A (en) * | 2019-09-26 | 2020-01-14 | 上海依图信息技术有限公司 | Public cloud silence in-vivo detection device and method based on picture identification |
CN111126229A (en) * | 2019-12-17 | 2020-05-08 | 中国建设银行股份有限公司 | Data processing method and device |
CN111274928B (en) * | 2020-01-17 | 2023-04-07 | 腾讯科技(深圳)有限公司 | Living body detection method and device, electronic equipment and storage medium |
CN111310575B (en) * | 2020-01-17 | 2022-07-08 | 腾讯科技(深圳)有限公司 | Face living body detection method, related device, equipment and storage medium |
CN113298747A (en) * | 2020-02-19 | 2021-08-24 | 北京沃东天骏信息技术有限公司 | Picture and video detection method and device |
CN111444831B (en) * | 2020-03-25 | 2023-03-21 | 深圳中科信迅信息技术有限公司 | Method for recognizing human face through living body detection |
SG10202005395VA (en) * | 2020-06-08 | 2021-01-28 | Alipay Labs Singapore Pte Ltd | Face liveness detection system, device and method |
CN111797735A (en) * | 2020-06-22 | 2020-10-20 | 深圳壹账通智能科技有限公司 | Face video recognition method, device, equipment and storage medium |
CN111783640A (en) * | 2020-06-30 | 2020-10-16 | 北京百度网讯科技有限公司 | Detection method, device, equipment and storage medium |
CN111899232B (en) * | 2020-07-20 | 2023-07-04 | 广西大学 | Method for nondestructive detection of bamboo-wood composite container bottom plate by image processing |
CN111914763B (en) * | 2020-08-04 | 2023-11-28 | 网易(杭州)网络有限公司 | Living body detection method, living body detection device and terminal equipment |
CN112528909B (en) * | 2020-12-18 | 2024-05-21 | 平安银行股份有限公司 | Living body detection method, living body detection device, electronic equipment and computer readable storage medium |
CN113807159A (en) * | 2020-12-31 | 2021-12-17 | 京东科技信息技术有限公司 | Face recognition processing method, device, equipment and storage medium thereof |
CN112818782B (en) * | 2021-01-22 | 2021-09-21 | 电子科技大学 | Generalized silence living body detection method based on medium sensing |
CN113837930B (en) * | 2021-09-24 | 2024-02-02 | 重庆中科云从科技有限公司 | Face image synthesis method, device and computer readable storage medium |
CN113869219B (en) * | 2021-09-29 | 2024-05-21 | 平安银行股份有限公司 | Face living body detection method, device, equipment and storage medium |
CN115995102A (en) * | 2021-10-15 | 2023-04-21 | 北京眼神科技有限公司 | Face silence living body detection method, device, storage medium and equipment |
CN116978078A (en) * | 2022-04-14 | 2023-10-31 | 京东科技信息技术有限公司 | Living body detection method, living body detection device, living body detection system, electronic equipment and computer readable medium |
WO2023221996A1 (en) * | 2022-05-16 | 2023-11-23 | 北京旷视科技有限公司 | Living body detection method, electronic device, storage medium, and program product |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105260731A (en) * | 2015-11-25 | 2016-01-20 | 商汤集团有限公司 | Human face living body detection system and method based on optical pulses |
CN105612533A (en) * | 2015-06-08 | 2016-05-25 | 北京旷视科技有限公司 | In-vivo detection method, in-vivo detection system and computer program products |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016197297A1 (en) * | 2015-06-08 | 2016-12-15 | 北京旷视科技有限公司 | Living body detection method, living body detection system and computer program product |
CN104951769B (en) * | 2015-07-02 | 2018-11-30 | 京东方科技集团股份有限公司 | Vivo identification device, vivo identification method and living body authentication system |
CN105912986B (en) * | 2016-04-01 | 2019-06-07 | 北京旷视科技有限公司 | A liveness detection method and system |
CN106529512B (en) * | 2016-12-15 | 2019-09-10 | 北京旷视科技有限公司 | Living body face verification method and device |
CN107992794B (en) * | 2016-12-30 | 2019-05-28 | 腾讯科技(深圳)有限公司 | A liveness detection method, device and storage medium |
CN107273794A (en) * | 2017-04-28 | 2017-10-20 | 北京建筑大学 | Liveness discrimination method and device in a face recognition process |
CN107220635A (en) * | 2017-06-21 | 2017-09-29 | 北京市威富安防科技有限公司 | Human face in-vivo detection method based on multiple fraud modes |
2017
- 2017-10-26 CN CN201711012244.1A patent/CN107992794B/en active Active
- 2017-12-22 WO PCT/CN2017/117958 patent/WO2018121428A1/en active Application Filing
2018
- 2018-10-22 WO PCT/CN2018/111218 patent/WO2019080797A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105612533A (en) * | 2015-06-08 | 2016-05-25 | 北京旷视科技有限公司 | In-vivo detection method, in-vivo detection system and computer program products |
CN105260731A (en) * | 2015-11-25 | 2016-01-20 | 商汤集团有限公司 | Human face living body detection system and method based on optical pulses |
Also Published As
Publication number | Publication date |
---|---|
WO2018121428A1 (en) | 2018-07-05 |
WO2019080797A1 (en) | 2019-05-02 |
CN107992794A (en) | 2018-05-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107992794B (en) | A liveness detection method, device and storage medium | |
US10579853B2 (en) | Method and apparatus for acquiring fingerprint image and terminal device | |
CN106537816B (en) | Method, device and computer-readable medium for deriving an identifier from a visible light signal |
CN108573203B (en) | Identity authentication method and device and storage medium | |
CN108566516B (en) | Image processing method, device, storage medium and mobile terminal | |
CN111274928B (en) | Living body detection method and device, electronic equipment and storage medium | |
US20160140390A1 (en) | Liveness detection using progressive eyelid tracking | |
EP2843510A2 (en) | Method and computer-readable recording medium for recognizing an object using captured images | |
JP3802892B2 (en) | Iris authentication device | |
WO2019052329A1 (en) | Facial recognition method and related product | |
CN108399349A (en) | Image-recognizing method and device | |
CN108494996B (en) | Image processing method, device, storage medium and mobile terminal | |
CN110516644A (en) | A liveness detection method and device |
CN109327691B (en) | Image shooting method and device, storage medium and mobile terminal | |
JP2015090569A (en) | Information processing device and information processing method | |
CN108476264A (en) | Method and electronic device for controlling operation of an iris sensor |
CN112446252A (en) | Image recognition method and electronic equipment | |
CN208013970U (en) | A biometric feature recognition system |
CN108683845B (en) | Image processing method, device, storage medium and mobile terminal | |
CN108038444A (en) | Image-recognizing method and device | |
CN104077051A (en) | Wearable device standby and standby image providing method and apparatus | |
KR101657377B1 (en) | Portable terminal, case for portable terminal and iris recognition method thereof | |
US9684828B2 (en) | Electronic device and eye region detection method in electronic device | |
CN104463083B (en) | Contactless palmprint authentication method, device and portable terminal device | |
CN112906610A (en) | Method for living body detection, electronic circuit, electronic apparatus, and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||