CN107992794A - Liveness detection method, device and storage medium - Google Patents

Liveness detection method, device and storage medium

Info

Publication number
CN107992794A
CN107992794A (application number CN201711012244.1A)
Authority
CN
China
Prior art keywords
detection
detection object
light
sequence
change
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711012244.1A
Other languages
Chinese (zh)
Other versions
CN107992794B (en)
Inventor
刘尧
李季檩
汪铖杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Publication of CN107992794A
Priority to PCT/CN2018/111218 (published as WO2019080797A1)
Application granted
Publication of CN107992794B
Legal status: Active (current)
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Geometry (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The embodiments of the invention disclose a liveness detection method, device, and storage medium. When liveness detection is required, an embodiment of the invention may start a light source to project light onto a detection object and perform image acquisition on the detection object. When a reflected light signal produced by the projected light is identified on the surface of the detection object in the acquired image sequence, a preset recognition model is used to identify the type of the object to which the image feature formed by the reflected light signal on the surface of the detection object belongs; if the recognition result indicates that the type of the object to which the image feature belongs is a living body, the detection object is determined to be a living body. This solution can improve the liveness detection effect and thereby improve the accuracy and security of identity verification.

Description

Liveness detection method, device and storage medium
This application claims priority to Chinese patent application No. 2016112570522, entitled "Liveness detection method and device", filed with the Patent Office of the People's Republic of China on December 30, 2016, the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to the field of communications technologies, and in particular, to a liveness detection method, device, and storage medium.
Background
In recent years, identity verification technologies such as fingerprint recognition, eyeprint recognition, iris recognition, and face recognition have all developed greatly. Among them, face recognition is the most prominent and has been applied more and more widely in all kinds of identity authentication systems.
An identity authentication system based on face recognition mainly needs to solve two problems: one is face verification, and the other is liveness detection. Liveness detection is mainly used to confirm that the acquired data, such as a facial image, comes from the user in person rather than from playback or forged material. Against current attack means on liveness detection, such as photo attacks, video replay attacks, and synthesized-face attacks, a "randomized interaction" technique has been proposed. So-called "randomized interaction" starts from the motion changes of different parts of the face in the video, involves randomized interactive actions that the user must actively cooperate with, such as blinking, head shaking, or lip-reading recognition, and determines accordingly whether the detection object is a living body.
In the course of research and practice on the prior art, the inventors of the present invention found that the algorithms used by existing liveness detection schemes have low discrimination accuracy and cannot effectively defend against synthesized-face attacks. In addition, cumbersome active interaction significantly reduces the pass rate of genuine samples. In general, the liveness detection effect of existing schemes is poor, which greatly affects the accuracy and security of identity verification.
Summary of the invention
The embodiments of the present invention provide a liveness detection method, device, and storage medium, which can improve the liveness detection effect and thereby improve the accuracy and security of identity verification.
An embodiment of the present invention provides a liveness detection method, including:
receiving a liveness detection request;
starting a light source according to the liveness detection request, and projecting light onto a detection object;
performing image acquisition on the detection object to obtain an image sequence;
identifying that a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence, the reflected light signal forming an image feature on the surface of the detection object;
identifying, by using a preset recognition model, the type of the object to which the image feature belongs, where the preset recognition model is formed by training with multiple feature samples, and the feature samples are image features formed by the reflected light signal on the surface of an object of a labeled type; and
if the recognition result indicates that the type of the object to which the image feature belongs is a living body, determining that the detection object is a living body.
Correspondingly, an embodiment of the present invention further provides a liveness detection device, including:
a receiving unit, configured to receive a liveness detection request;
a starting unit, configured to start a light source according to the liveness detection request and project light onto a detection object;
an acquisition unit, configured to perform image acquisition on the detection object to obtain an image sequence; and
a detection unit, configured to identify that a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence, the reflected light signal forming an image feature on the surface of the detection object; identify, by using a preset recognition model, the type of the object to which the image feature belongs, where the preset recognition model is formed by training with multiple feature samples, and the feature samples are image features formed by the reflected light signal on the surface of an object of a labeled type; and if the recognition result indicates that the type of the object to which the image feature belongs is a living body, determine that the detection object is a living body.
Optionally, in some embodiments, the starting unit is specifically configured to start a preset color mask according to the liveness detection request, the color mask serving as the light source that projects light onto the detection object.
Optionally, in some embodiments, the starting unit is specifically configured to start a detection interface according to the liveness detection request, where the detection interface includes a non-detection region, the non-detection region displays a color mask, and the color mask serves as the light source that projects light onto the detection object.
Optionally, in some embodiments, the liveness detection device may further include a generation unit, as follows:
the generation unit is configured to generate a color mask, so that the light projected by the color mask can change according to a preset rule.
Optionally, in some embodiments, the generation unit is further configured to:
for light of the same color, obtain a preset screen brightness adjustment parameter, and adjust the screen brightness of the light of the same color before and after the change according to the screen brightness adjustment parameter, to adjust the change intensity of the light; and
for light of different colors, obtain a preset color difference adjustment parameter, and adjust the color difference of the light of the different colors before and after the change according to the color difference adjustment parameter, to adjust the change intensity of the light.
Optionally, in some embodiments, the generation unit is specifically configured to:
obtain a preset coded sequence, the coded sequence including multiple codes;
determine, according to a preset coding algorithm, the color corresponding to each code in turn in the order of the codes in the coded sequence, to obtain a color sequence; and
generate the color mask based on the color sequence, so that the light projected by the color mask changes according to the colors indicated by the color sequence.
Optionally, in some embodiments, the detection unit may include a computation subunit, a judgment subunit, and an identification subunit, as follows:
the computation subunit is configured to perform regression analysis on the change of frames in the image sequence to obtain a regression result;
the judgment subunit is configured to determine, according to the regression result, whether a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence; and
the identification subunit is configured to, when the judgment subunit determines that a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence, identify the type of the object to which the image feature belongs by using the preset recognition model, and if the recognition result indicates that the type of the object to which the image feature belongs is a living body, determine that the detection object is a living body.
Optionally, in some embodiments, the judgment subunit is specifically configured to determine whether the regression result is greater than a preset threshold; if yes, determine that a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence; if not, determine that no reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence.
Optionally, in some embodiments, the judgment subunit is specifically configured to perform classification analysis on the regression result by using a preset global feature algorithm or a preset recognition model; if the analysis result indicates that the inter-frame change of the surface of the detection object is greater than a set value, determine that a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence; if the analysis result indicates that the inter-frame change of the surface of the detection object is not greater than the set value, determine that no reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence.
Optionally, in some embodiments, the color mask is generated according to a preset coded sequence, and the judgment subunit is specifically configured to decode the image sequence according to a preset decoding algorithm and the regression result to obtain a decoded sequence; determine whether the decoded sequence matches the coded sequence; if they match, determine that a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence; if they do not match, determine that no reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence.
Optionally, in some embodiments, the computation subunit is specifically configured to, when it is determined that the degree of position change of the detection object is less than a preset change value, obtain the pixel coordinates of adjacent frames in the image sequence respectively, and calculate the frame-to-frame difference based on the pixel coordinates; or
the computation subunit is specifically configured to, when it is determined that the degree of position change of the detection object is less than the preset change value, obtain from the image sequence the pixel coordinates of the frames corresponding to before and after the projected light changes, and calculate the frame difference based on the pixel coordinates; or
the computation subunit is specifically configured to, when it is determined that the degree of position change of the detection object is less than the preset change value, obtain from the image sequence the chrominance/luminance of the frames corresponding to before and after the projected light changes, calculate the chrominance/luminance by using a preset regression function to obtain the relative value of the chrominance/luminance change between the frames corresponding to before and after the projected light changes, and use the relative value of the chrominance/luminance change as the frame difference between the frames corresponding to before and after the projected light changes.
In addition, an embodiment of the present invention further provides a storage medium. The storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to perform the steps in any liveness detection method provided by the embodiments of the present invention.
When liveness detection is required, an embodiment of the present invention can start a light source to project light onto a detection object and perform image acquisition on the detection object. When a reflected light signal produced by the projected light is identified on the surface of the detection object in the acquired image sequence, a preset recognition model is used to identify the type of the object to which the image feature formed by the reflected light signal on the surface of the detection object belongs; if the recognition result indicates that the type of the object to which the image feature belongs is a living body, the detection object is determined to be a living body. Because this solution does not require the user to perform cumbersome interactive operations and computations (existing liveness detection is generally performed by instructing the user to cooperate with actions such as turning the face left or right, opening the mouth, or blinking, which is commonly referred to as active-interaction liveness detection; the embodiments of the present invention belong to interaction-free liveness detection, that is, no user cooperation is needed and the process is imperceptible to the user), the requirements on the hardware configuration can be reduced. Moreover, because the basis for the liveness determination in this solution is the reflected light signal on the surface of the detection object, and the reflected light signals of a genuine living body and of a forged one (a carrier of a synthesized picture or video, such as a photograph, a mobile phone, or a tablet computer) are different, this solution can also effectively defend against synthesized-face attacks and improve the discrimination accuracy. In summary, this solution can improve the liveness detection effect and thereby improve the accuracy and security of identity verification.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present invention, and persons skilled in the art may derive other drawings from these drawings without creative effort.
Fig. 1a is a schematic diagram of a scenario of a liveness detection method according to an embodiment of the present invention;
Fig. 1b is another schematic diagram of a scenario of a liveness detection method according to an embodiment of the present invention;
Fig. 1c is a flowchart of a liveness detection method according to an embodiment of the present invention;
Fig. 2 is another flowchart of a liveness detection method according to an embodiment of the present invention;
Fig. 3a is another flowchart of a liveness detection method according to an embodiment of the present invention;
Fig. 3b is an example diagram of color changes in a liveness detection method according to an embodiment of the present invention;
Fig. 3c is another example diagram of color changes in a liveness detection method according to an embodiment of the present invention;
Fig. 4a is a schematic structural diagram of a liveness detection device according to an embodiment of the present invention;
Fig. 4b is another schematic structural diagram of a liveness detection device according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by persons skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The embodiments of the present invention provide a liveness detection method, device, and storage medium.
The liveness detection device may be specifically integrated in a device such as a terminal. It can use the brightness and color changes of the terminal screen, or use other components or devices such as a flash lamp or an infrared emitter, as a light source that projects onto the detection object, and then perform liveness detection by analyzing the reflected light signal on the surface of the detection object, such as the face, in the acquired image sequence.
For example, take the case where the device is integrated in a terminal and the light source is a color mask. When the terminal receives a liveness detection request, it may start a detection interface according to the liveness detection request. As shown in Fig. 1a, in addition to a detection region, the detection interface is further provided with a non-detection region (the grey part marked in Fig. 1a), which is mainly used to display the color mask; the color mask can serve as a light source that projects light onto the detection object, as shown for example in Fig. 1b. Because the reflected light signals of a genuine living body and of a forged one (a carrier of a synthesized picture or video, such as a photograph, a mobile phone, or a tablet computer) are different, the living body can be discriminated by judging whether a reflected light signal produced by the projected light exists on the surface of the detection object and whether the reflected light signal meets a preset condition. For example, image acquisition may be performed on the detection object (the monitored situation may be shown through the detection region of the detection interface); then, it is determined whether a reflected light signal produced by the projected light exists on the surface of the detection object in the acquired image sequence; if so, a preset recognition model is used to identify the type of the object to which the image feature formed by the reflected light signal on the surface of the detection object belongs; if the recognition result indicates that the type of the object to which the image feature belongs is a living body, the detection object is determined to be a living body, and so on.
Detailed descriptions are provided below. It should be noted that the sequence numbers of the following embodiments are not intended to limit the preferred order of the embodiments.
This embodiment is described from the perspective of the liveness detection device of a terminal (liveness detection device for short). The liveness detection device may be specifically integrated in a device such as a terminal, and the terminal may specifically be a mobile phone, a tablet computer, a notebook computer, a personal computer (PC), or the like.
A liveness detection method includes: receiving a liveness detection request; starting a light source according to the liveness detection request and projecting light onto a detection object; then performing image acquisition on the detection object to obtain an image sequence; identifying that a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence, the reflected light signal forming an image feature on the surface of the detection object; identifying the type of the object to which the image feature belongs by using a preset recognition model; and if the recognition result indicates that the type of the object to which the image feature belongs is a living body, determining that the detection object is a living body.
As shown in Fig. 1c, the specific flow of the liveness detection method may be as follows:
101. Receive a liveness detection request.
For example, a liveness detection request triggered by a user may be received, or a liveness detection request sent by another device may be received, and so on.
102. Start a light source according to the liveness detection request, and project light onto a detection object.
For example, a corresponding liveness detection process may be invoked according to the liveness detection request, and the light source may be started according to the liveness detection process, and so on.
The light source may be configured according to the needs of the actual application. For example, it may be implemented by adjusting the brightness of the terminal screen, by using another light-emitting component such as a flash lamp or an infrared emitter or an external device, or by setting a color mask on the terminal or on a display interface of the terminal. That is, the step of "starting a light source according to the liveness detection request" may specifically be implemented in any one of the following ways:
(1) Adjust the screen brightness according to the liveness detection request, so that the screen serves as the light source that projects light onto the detection object.
(2) Turn on a preset light-emitting component according to the liveness detection request, so that the light-emitting component serves as the light source that projects light onto the detection object.
The light-emitting component may include components such as a flash lamp or an infrared emitter.
(3) Start a preset color mask according to the liveness detection request, the color mask serving as the light source that projects light onto the detection object.
For example, the color mask on the terminal may be started according to the liveness detection request. A color mask refers to a light-emitting region or component that can display colored light. For instance, a component that can display the color mask may be arranged at the edge of the terminal housing, and after the liveness detection request is received, the component can be started to display the color mask. Alternatively, the color mask may also be displayed through a detection interface, as follows:
start a detection interface according to the liveness detection request, and display a color mask on the detection interface, the color mask serving as the light source that projects light onto the detection object.
The region of the detection interface that displays the color mask may be determined according to the needs of the actual application. For example, the detection interface may include a detection region and a non-detection region, where the detection region is mainly used to show the monitored situation, and the non-detection region may display the color mask, which serves as the light source that projects light onto the detection object, and so on.
The region of the non-detection region that displays the color mask may also be determined according to the needs of the actual application; the color mask may be set over the whole non-detection region, or over one or several sub-regions of the non-detection region, and so on. Parameters such as the color and transparency of the color mask may be configured according to the needs of the actual application. The color mask may be set in advance by the system and directly retrieved when the detection interface is started, or it may be automatically generated after the liveness detection request is received. That is, after the step of "receiving a liveness detection request", the liveness detection method may further include:
generating a color mask, so that the light projected by the color mask can change according to a preset rule.
Optionally, to make it easier to subsequently identify the change of the light, the change intensity of the light may also be adjusted.
The preset rule may be determined according to the needs of the actual application, and there may be multiple ways of adjusting the change intensity of the light. For example, for light of the same color (that is, light of the same wavelength), the change intensity may be adjusted by adjusting the screen brightness before and after the change, for instance by setting the screen brightness before and after the change to the minimum and the maximum; for light of different colors (that is, light of different wavelengths), the change intensity may be adjusted by adjusting the color difference before and after the change, and so on. That is, after the color mask is generated, the liveness detection method may further include:
for light of the same color, obtaining a preset screen brightness adjustment parameter, and adjusting the screen brightness of the light of the same color before and after the change according to the screen brightness adjustment parameter, to adjust the change intensity of the light; and
for light of different colors, obtaining a preset color difference adjustment parameter, and adjusting the color difference of the light of the different colors before and after the change according to the color difference adjustment parameter, to adjust the change intensity of the light.
The adjustment amplitude of the change intensity of the light may be configured according to the needs of the actual application; it may be a large adjustment, for example maximizing the change intensity of the light, or a fine adjustment. For convenience, the following description takes maximizing the change intensity of the light as an example.
Optionally, in order to better detect the reflected light signal from the image frame differences later, besides adjusting the change intensity of the light, the color space most robust for signal analysis may also be selected when choosing colors. For example, in a preset color space, switching the screen from the brightest red to the brightest green yields the largest chrominance change of the reflected light, and so on.
Optionally, in order to further improve the accuracy and security of identity verification, a combination of light encoded with a preset code may also be used as the color mask. That is, the step of "generating a color mask so that the light projected by the color mask can change according to a preset rule" may include:
obtaining a preset coded sequence, the coded sequence including multiple codes; determining, according to a preset coding algorithm, the color corresponding to each code in turn in the order of the codes in the coded sequence, to obtain a color sequence; and generating the color mask based on the color sequence, so that the light projected by the color mask changes according to the colors indicated by the color sequence.
The coded sequence may be generated randomly or configured according to the needs of the actual application, and the preset coding algorithm may also be determined according to the needs of the actual application. The coding algorithm reflects the correspondence between each code in the coded sequence and the various colors. For example, red may represent the number -1, green may represent 0, and blue may represent 1. If the obtained coded sequence is "0, -1, 1, 0", the color sequence "green, red, blue, green" is obtained, and a color mask can then be generated so that the light projected by the color mask changes in the order of "green, red, blue, green".
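As an illustration of this code-to-color mapping, the following is a minimal sketch assuming the example correspondence given above (red = -1, green = 0, blue = 1); the code values, color names, and RGB triples are only illustrative and not mandated by this embodiment:

```python
# Map a preset coded sequence to a color sequence for the color mask.
# The code-to-color table below is the example from the text; any
# one-to-one mapping chosen by the system would work the same way.
CODE_TO_COLOR = {
    -1: ("red",   (255, 0, 0)),
     0: ("green", (0, 255, 0)),
     1: ("blue",  (0, 0, 255)),
}

def build_color_sequence(coded_sequence):
    """Return the list of (name, rgb) pairs indicated by the coded sequence."""
    return [CODE_TO_COLOR[code] for code in coded_sequence]

if __name__ == "__main__":
    coded_sequence = [0, -1, 1, 0]                    # e.g. randomly generated
    colour_sequence = build_color_sequence(coded_sequence)
    print([name for name, _ in colour_sequence])      # ['green', 'red', 'blue', 'green']
```

The color mask is then driven by this sequence, so that the projected light changes color in the order "green, red, blue, green".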
It should be noted that, during projection, the display duration of each color and the waiting time interval when switching between colors may be configured according to the needs of the actual application; for example, the display duration of each color may be 2 seconds, and the waiting time interval may be set to 0 seconds or 1 second, and so on.
During the waiting time interval, no light may be projected, or predetermined light may be projected. For example, take the case where the waiting time interval is not 0 and no light is projected during the waiting period: if the color order of the light projected by the color mask is "green, red, blue, green", the projected light is embodied as "green -> no light -> red -> no light -> blue -> no light -> green". If the waiting time interval is 0 seconds, the light projected by the color mask switches colors directly, that is, the projected light is embodied as "green -> red -> blue -> green", and so on; details are not described herein again.
Optionally, in order to further improve security, the change rule of the light may be made more complex. For example, the display durations of the colors and the waiting time intervals when switching between different colors may also be set to inconsistent values; for instance, the display duration of green may be 3 seconds, that of red 2 seconds, and that of blue 4 seconds, while the waiting time interval when switching between green and red is 1 second and that between red and blue is 1.5 seconds, and so on.
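To make the timing concrete, the sketch below expands a color sequence into a playback schedule of (color, duration) steps, inserting an optional "no light" gap between colors; the per-color durations and the single gap length are hypothetical values chosen only to mirror the example above:

```python
# Build a playback schedule for the color mask from a color sequence.
# durations and gap are illustrative; the actual values are chosen
# according to the needs of the application (and may differ per switch).
def build_schedule(colors, durations, gap=1.0):
    schedule = []
    for i, (color, duration) in enumerate(zip(colors, durations)):
        schedule.append((color, duration))
        if gap > 0 and i < len(colors) - 1:
            schedule.append(("no light", gap))   # waiting interval between colors
    return schedule

print(build_schedule(["green", "red", "blue", "green"], [3, 2, 4, 3], gap=1.0))
# [('green', 3), ('no light', 1.0), ('red', 2), ('no light', 1.0),
#  ('blue', 4), ('no light', 1.0), ('green', 3)]
```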
103. Perform image acquisition on the detection object to obtain an image sequence.
For example, a camera may be called to shoot the detection object in real time to obtain an image sequence, and the image sequence obtained by shooting may be displayed in the detection region, and so on.
The camera includes, but is not limited to, a camera built into the terminal, a web camera, a surveillance camera, and other devices capable of acquiring images. It should be noted that, because the light projected onto the detection object may be visible light or invisible light, the camera provided in the embodiments of the present invention may also be configured with different light receivers, such as an infrared light receiver, according to the needs of the actual application, so as to sense different light and acquire the required image sequence; details are not described herein again.
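As a simple illustration of this acquisition step, the following sketch uses OpenCV to grab a fixed number of frames from the default camera; the frame count and device index are assumptions, not values specified by this embodiment:

```python
import cv2

def acquire_image_sequence(num_frames=60, device_index=0):
    """Capture num_frames frames from the camera as the image sequence."""
    capture = cv2.VideoCapture(device_index)
    frames = []
    while len(frames) < num_frames:
        ok, frame = capture.read()     # one BGR frame per call
        if not ok:
            break
        frames.append(frame)
    capture.release()
    return frames
```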
Optionally, in order to reduce the numerical fluctuation of the signal caused by noise, denoising may also be performed on the image sequence after it is obtained. For example, taking the case where the noise model is Gaussian noise, multi-frame averaging in the temporal dimension and/or multi-scale averaging within the same frame may be used to reduce the noise as much as possible; details are not described herein again.
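A minimal sketch of the temporal multi-frame averaging mentioned here is shown below; the window size is an assumption, and the same idea could be combined with per-frame multi-scale (e.g. pyramid) averaging:

```python
import numpy as np

def temporal_average(frames, window=3):
    """Replace each frame by the mean of a sliding window of frames to suppress Gaussian noise."""
    stack = np.stack(frames).astype(np.float32)          # shape (N, H, W, C)
    smoothed = []
    for i in range(len(frames)):
        lo, hi = max(0, i - window // 2), min(len(frames), i + window // 2 + 1)
        smoothed.append(stack[lo:hi].mean(axis=0).astype(np.uint8))
    return smoothed
```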
Optionally, other preprocessing, such as scaling, cropping, sharpening, or background blurring, may also be performed on the image sequence to improve the efficiency and accuracy of subsequent recognition.
104. Identify that a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence.
The reflected light signal forms an image feature on the surface of the detection object. The image feature may include the color feature, texture feature, shape feature, and spatial relationship feature of the image. The color feature is a global feature that describes the surface properties of the scene corresponding to the image or image region; the texture feature is also a global feature and likewise describes the surface properties of the scene corresponding to the image or image region; the shape feature has two kinds of representation, one being contour features and the other being region features, where the contour features of an image are mainly directed to the outer boundary of an object, and the region features relate to the whole shape region; the spatial relationship feature refers to the mutual spatial position or relative direction relationships between the multiple targets segmented from the image, and these relationships may be divided into connection/adjacency relationships, overlapping relationships, inclusion/containment relationships, and so on. In specific implementation, the image feature may specifically include information such as a local binary patterns (LBP) feature descriptor, a scale-invariant feature transform (SIFT) feature descriptor, and/or a feature descriptor extracted by a convolutional neural network, which is not repeated here.
There may be multiple ways of identifying whether a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence. For example, the reflected light information may be detected by using the change of frames in the image sequence, specifically as follows:
(1) Perform regression analysis on the change of frames in the image sequence to obtain a regression result.
For example, regression analysis may be performed on the numerical expression of the chrominance/luminance of each frame in the image sequence, where the numerical expression may be a numerical sequence, and then the change of the chrominance/luminance of the frames in the image sequence is determined according to the numerical expression, such as the numerical sequence, to obtain the regression result. That is, a numerical expression, such as the change of a numerical sequence, may be used to represent the chrominance change or luminance change of each frame, and the chrominance change or luminance change of each frame may serve as the regression result.
There may be multiple ways of performing regression analysis on the numerical expression of the chrominance/luminance of each frame in the image sequence. For example, a preset image regression analysis model may be used to perform regression analysis on the chrominance/luminance of each frame in the image sequence, to obtain the numerical expression of the chrominance/luminance of each frame, and so on.
The image regression analysis model may be a regression tree or a regression convolutional neural network, etc., and may be configured according to the needs of the actual application. The image regression analysis model may be trained in advance by another device, or may be trained by the liveness detection device. For example, the training may be as follows:
collect a preset number of images with different surface reflection information (such as facial reflection information) as an acquisition sample set, label the acquisition samples in the acquisition sample set according to a preset strategy, use the labeled acquisition samples as training samples to obtain a training sample set, and then learn from the training sample set by using a preset image regression analysis initial model (such as a regression tree or a regression convolutional neural network) to obtain the image regression analysis model.
The labeling strategy may be determined according to the needs of the actual application. For example, taking the regression analysis of the luminance change of frames, with the reflection information divided into three classes (relatively strong reflection, weak reflection, and no reflection), the acquisition samples with relatively strong reflection may be labeled 1, the acquisition samples with weak reflection may be labeled 0.5, and the acquisition samples without any reflection may be labeled 0. Then, the preset image regression analysis initial model is used to learn from the training sample set, so as to find the regression function that best fits the mapping relationship between the original images and the regression labels in the training sample set, and the image regression analysis model can thus be obtained.
Similarly, the regression analysis of the chrominance change of frames is analogous. For example, image frames in which light of different colors exists on the surface of the detection object may be acquired as acquisition samples and then labeled, where the label is no longer a one-dimensional scalar but a triple of the corresponding RGB (red, green, blue) color, such as (255, 0, 0) for red; then, the preset image regression analysis initial model is used to learn from the labeled image frames (the training sample set) to obtain the image regression analysis model, and so on.
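The following sketch illustrates the luminance branch of this training procedure, using a regression tree on a simple per-frame brightness statistic; the feature (mean grayscale), the labels 1 / 0.5 / 0, and the use of scikit-learn are assumptions made only to show the shape of the training step:

```python
import numpy as np
import cv2
from sklearn.tree import DecisionTreeRegressor

def frame_feature(frame_bgr):
    """Toy per-frame feature: mean luminance (a real model would use richer features or a CNN)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return [float(gray.mean())]

def train_reflection_regressor(frames, labels):
    """frames: list of BGR images; labels: 1.0 strong, 0.5 weak, 0.0 no reflection."""
    X = np.array([frame_feature(f) for f in frames])
    y = np.array(labels, dtype=np.float32)
    model = DecisionTreeRegressor(max_depth=4)
    model.fit(X, y)
    return model

# usage: regressor.predict([frame_feature(new_frame)]) -> reflection strength in [0, 1]
```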
After the image regression analysis model is obtained, it can be used to perform regression analysis on each frame of the image sequence. For example, if the image regression analysis model is trained on images with different light-intensity changes, then for image frames in which different light-intensity changes (i.e. luminance changes) exist on the surface of the detection object, a number between 0 and 1 can be directly regressed to express the strength of the facial reflection in the image frame, and so on; details are not described herein again.
Optionally, besides obtaining the regression result through regression analysis of the numerical expression of the chrominance/luminance of each frame in the image sequence, the regression result may also be obtained by directly performing regression analysis on the inter-frame change in the image sequence.
The change between frames in the image sequence may be obtained by calculating the difference between frames in the image sequence. The difference between frames may be a frame-to-frame difference or a frame difference, where a frame-to-frame difference is the difference between two adjacent frames, and a frame difference is the difference between the frames corresponding to before and after the projected light changes.
For example, taking the calculation of the frame-to-frame difference as an example, when it is determined that the degree of position change of the detection object is less than a preset change value, the pixel coordinates of adjacent frames in the image sequence may be obtained respectively, and the frame-to-frame difference is then calculated based on the pixel coordinates.
As another example, taking the calculation of the frame difference as an example, when it is determined that the degree of position change of the detection object is less than the preset change value, the pixel coordinates of the frames corresponding to before and after the projected light changes may be obtained from the image sequence respectively, and the frame difference is calculated based on the pixel coordinates.
There may be multiple ways of calculating the frame-to-frame difference or the frame difference based on the pixel coordinates, for example:
transform the pixel coordinates of the adjacent frames to minimize the registration error of the pixel coordinates, filter out the pixels whose correlation meets a preset condition according to the transformation result, and calculate the frame-to-frame difference according to the filtered pixels; or
transform the pixel coordinates of the frames corresponding to before and after the projected light changes to minimize the registration error of the pixel coordinates, filter out the pixels whose correlation meets the preset condition according to the transformation result, and calculate the frame difference according to the filtered pixels.
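One way to realize the "transform to minimize registration error, then difference" idea is sketched below with OpenCV's ECC alignment; aligning with findTransformECC and differencing with absdiff is an assumed concrete choice, and the correlation-based pixel filtering is reduced here to a simple mask threshold:

```python
import cv2
import numpy as np

def frame_difference(frame_a, frame_b, min_change=10):
    """Align frame_b to frame_a, then compute the mean absolute difference over selected pixels."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-5)
    _, warp = cv2.findTransformECC(gray_a, gray_b, warp, cv2.MOTION_EUCLIDEAN, criteria)
    aligned_b = cv2.warpAffine(frame_b, warp, (frame_a.shape[1], frame_a.shape[0]))
    diff = cv2.absdiff(frame_a, aligned_b)
    mask = diff.mean(axis=2) > min_change          # keep pixels with a consistent, noticeable change
    return float(diff[mask].mean()) if mask.any() else 0.0
```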
The preset change value and the preset condition may be configured according to the needs of the actual application; details are not described herein again.
Optionally, besides the above way of calculating the frame difference, other ways may also be used. For example, on a channel of a certain color space, or in any dimension that can describe chrominance or luminance changes, the relative value of the chrominance change or the luminance change between two frames may be analyzed. That is, the step of "performing regression analysis on the change of frames in the image sequence to obtain a regression result" may include:
when it is determined that the degree of position change of the detection object is less than the preset change value, obtaining from the image sequence the chrominance/luminance of the frames corresponding to before and after the projected light changes, calculating, according to the obtained chrominance/luminance, the relative value of the chrominance change or the luminance change between the frames corresponding to before and after the projected light changes, and using the relative value of the chrominance/luminance change as the frame difference between the frames corresponding to before and after the projected light changes, the frame difference being the regression result.
For example, the chrominance/luminance may be calculated by a preset regression function to obtain the relative value of the chrominance/luminance change (i.e. the relative value of the chrominance change or the luminance change) between the frames corresponding to before and after the projected light changes, and so on.
The regression function may be configured according to the needs of the actual application; for example, it may specifically be a recurrent neural network, etc.
It should be noted that, if it is determined that the degree of position change of the detection object is greater than or equal to the preset change value, other adjacent frames or frames corresponding to before and after another projected-light change may be obtained from the image sequence for calculation, or the image sequence may be reacquired.
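A minimal sketch of such a relative chrominance/luminance change, assuming the "regression function" is simply a normalized per-channel mean difference (rather than a trained recurrent network), is as follows:

```python
import cv2
import numpy as np

def relative_color_change(frame_before, frame_after):
    """Relative change of mean chrominance/luminance between the frames before and after a light change."""
    ycrcb_before = cv2.cvtColor(frame_before, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    ycrcb_after = cv2.cvtColor(frame_after, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    mean_before = ycrcb_before.reshape(-1, 3).mean(axis=0)
    mean_after = ycrcb_after.reshape(-1, 3).mean(axis=0)
    # relative value per channel (Y = luminance, Cr/Cb = chrominance)
    return (mean_after - mean_before) / (mean_before + 1e-6)
```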
(2) Determine, according to the regression result, whether a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence. For example, any one of the following ways may specifically be used:
First way:
Determine whether the regression result is greater than a preset threshold; if yes, determine that a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence; if not, determine that no reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence.
The preset threshold may be determined according to the needs of the actual application; details are not described herein again.
Second way:
Perform classification analysis on the regression result by using a preset global feature algorithm or a preset recognition model. If the analysis result indicates that the inter-frame change of the surface of the detection object is greater than a set value, determine that a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence; if the analysis result indicates that the inter-frame change of the surface of the detection object is not greater than the set value, determine that no reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence.
The set value may be determined according to the needs of the actual application, and there may also be multiple ways of "performing classification analysis on the frame difference by using a preset global feature algorithm or a preset recognition model", for example:
analyze the regression result to determine whether a reflected light signal produced by the projected light exists in the image sequence; if no such reflected light signal exists, generate an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value; if such a reflected light signal exists, determine, by using the preset global feature algorithm or the preset recognition model, whether the reflector of the existing reflected light information is the detection object; if it is the detection object, generate an analysis result indicating that the inter-frame change of the surface of the detection object is greater than the set value; if it is not the detection object, generate an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value.
Alternatively, the images in the image sequence may be classified by using the preset global feature algorithm or the preset recognition model to filter out the frames in which the detection object exists, to obtain candidate frames; the frame-to-frame differences of the candidate frames are then analyzed to determine whether a reflected light signal produced by the projected light exists on the detection object; if no such reflected light signal exists, an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value is generated; if such a reflected light signal exists, an analysis result indicating that the inter-frame change of the surface of the detection object is greater than the set value is generated, and so on.
The global feature algorithm refers to an algorithm based on global features, where the global features may include the grayscale mean and variance, the gray-level co-occurrence matrix, and the spectra after transforms such as the fast Fourier transform (FFT) and the discrete cosine transform (DCT). The preset recognition model may include a classifier or another recognition model (such as a face recognition model), and the classifier may include a support vector machine (SVM), a neural network, a decision tree, etc.
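As a small illustration of the global features named here, the sketch below computes the grayscale mean/variance and a low-frequency block of the FFT magnitude spectrum for one frame; pairing these features with an SVM classifier is an assumed choice, and the size of the retained block is arbitrary:

```python
import cv2
import numpy as np

def global_features(frame_bgr, low_freq_size=8):
    """Grayscale mean/variance plus the low-frequency block of the FFT magnitude spectrum."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    spectrum = np.abs(np.fft.fft2(gray))
    low_freq = spectrum[:low_freq_size, :low_freq_size].flatten()
    return np.concatenate(([gray.mean(), gray.var()], low_freq))

# These per-frame feature vectors could then be classified, e.g. with an SVM,
# into "inter-frame change greater than the set value" vs. "not greater".
```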
Third way:
Optionally, if the color mask is generated according to a preset coded sequence, whether a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence may also be identified by decoding the light signal, for example as follows:
decode the image sequence according to a preset decoding algorithm and the regression result to obtain a decoded sequence; determine whether the decoded sequence matches the coded sequence; if they match, determine that a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence; if they do not match, determine that no reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence.
For example, if the regression result is the relative value of the chrominance/luminance change between the frames corresponding to before and after the projected light changes (see the description of calculating the regression result in (1)), then the preset decoding algorithm may be used to calculate, in turn, the relative values of the chrominance/luminance change in the image sequence (i.e. the relative values between the frames corresponding to before and after each projected-light change), to obtain the absolute chrominance/luminance values of the frames corresponding to before and after each projected-light change; the obtained absolute chrominance/luminance values are then used as the decoded sequence, or the obtained absolute chrominance/luminance values are converted according to a preset strategy to obtain the decoded sequence.
The preset decoding algorithm matches the coding algorithm and may be determined according to the coding algorithm; the preset strategy may also be configured according to the needs of the actual application; details are not described herein again.
Optionally, there may be multiple ways of determining whether the decoded sequence matches the coded sequence. For example, it may be determined whether the decoded sequence is consistent with the coded sequence; if consistent, the decoded sequence matches the coded sequence; if inconsistent, the decoded sequence does not match the coded sequence. Alternatively, it may be determined whether the relationship between the decoded sequence and the coded sequence meets a preset correspondence; if yes, the decoded sequence matches the coded sequence; if not, the decoded sequence does not match the coded sequence, and so on. The preset correspondence may be configured according to the needs of the actual application.
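The decode-and-match step can be pictured with the following sketch, which assumes the illustrative decoding rule that each frame's dominant reflected color is mapped back to the code table used when the mask was generated (red = -1, green = 0, blue = 1); the dominant-channel rule is a simplification standing in for the preset decoding algorithm:

```python
import numpy as np

CHANNEL_TO_CODE = {2: -1, 1: 0, 0: 1}   # BGR index of dominant channel -> code (red=-1, green=0, blue=1)

def decode_sequence(frames_per_color_change):
    """frames_per_color_change: one representative BGR frame per projected-light change."""
    decoded = []
    for frame in frames_per_color_change:
        mean_bgr = frame.reshape(-1, 3).mean(axis=0)
        decoded.append(CHANNEL_TO_CODE[int(np.argmax(mean_bgr))])
    return decoded

def reflected_signal_present(frames_per_color_change, coded_sequence):
    """The reflected light signal is considered present only if the decoded sequence matches the codes."""
    return decode_sequence(frames_per_color_change) == list(coded_sequence)
```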
It should be noted that, if it is identified that no reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence, the flow may end, or it may be determined that the detection object is a non-living body, or step 103 may be performed again, that is, image acquisition is performed on the detection object again; alternatively, the started light source may be checked, and if it is determined that there is no problem with the light projected by the light source onto the detection object, the flow ends, the detection object is determined to be a non-living body, or image acquisition is performed on the detection object again, whereas if it is determined that there is a problem with the light projected by the light source onto the detection object, step 102 is performed again, that is, the light source is restarted and light is projected onto the detection object. The strategy actually adopted may be configured according to the needs of the actual application; details are not described herein again.
105. Identify, by using a preset recognition model, the type of the object to which the image feature (i.e. the image feature formed by the reflected light signal on the surface of the detection object) belongs; if the recognition result indicates that the type of the object to which the image feature belongs is a living body, determine that the detection object is a living body.
Conversely, if the recognition result indicates that the type of the object to which the image feature belongs is a non-living body, for example a "mobile phone screen", it can be determined that the detection object is a non-living body.
Because the reflected light signal forms an image feature on the surface of the detection object, when it is identified that a reflected light signal produced by the projected light exists on the surface of the detection object (i.e. step 104), the "image feature formed by the reflected light signal on the surface of the detection object" can be obtained; the image feature is then identified by using the preset recognition model, and whether the detection object is a living body is determined based on the recognition result. For example, if the recognition result indicates that the type of the object to which the image feature belongs is a living body, for example "face", the detection object can be determined to be a living body; otherwise, if the recognition result indicates that the type of the object to which the image feature belongs is not a living body, for example "mobile phone screen", the detection object can be determined to be a non-living body, and so on.
The preset recognition model may include a classifier or another recognition model (such as a face recognition model), and the classifier may include an SVM, a neural network, a decision tree, etc.
The preset recognition model may be formed by training with multiple feature samples, where the feature samples are image features formed by the reflected light signal on the surface of an object of a labeled type. For example, the image feature formed when the projected light is irradiated onto a person's face may be used as a feature sample and labeled "face", and the image feature formed when the projected light is irradiated onto a mobile phone screen may be used as a feature sample and labeled "mobile phone screen", and so on. After a large number of feature samples are collected, the recognition model can be established from these feature samples (i.e. the image features of labeled types).
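A compact sketch of training and using such a classifier is given below; representing each labeled reflection feature as a simple gradient-histogram descriptor and classifying with a linear SVM are assumed choices, not requirements of this embodiment:

```python
import numpy as np
import cv2
from sklearn.svm import SVC

def reflection_feature(patch_bgr, bins=32):
    """Toy feature for the reflected-light region: gradient-magnitude histogram of the grayscale patch."""
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    grad = cv2.magnitude(cv2.Sobel(gray, cv2.CV_32F, 1, 0), cv2.Sobel(gray, cv2.CV_32F, 0, 1))
    hist, _ = np.histogram(grad, bins=bins, range=(0, grad.max() + 1e-6), density=True)
    return hist

def train_recognition_model(patches, labels):
    """patches: reflected-light image regions; labels: e.g. 'face' or 'mobile phone screen'."""
    X = np.array([reflection_feature(p) for p in patches])
    model = SVC(kernel="linear")
    model.fit(X, labels)
    return model

# usage: model.predict([reflection_feature(patch)])[0] == 'face'  -> the detection object is a living body
```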
It should be noted that the recognition model may be established by another device and then stored in a preset storage space; when the liveness detection device needs to identify the type of the object to which the image feature belongs, the model is obtained directly from the storage space. Alternatively, the recognition model may also be established by the liveness detection device itself. That is, before the step of "identifying the type of the object to which the image feature belongs by using a preset recognition model", the liveness detection method may further include:
obtaining multiple feature samples, and training a preset initial recognition model according to the feature samples to obtain the preset recognition model.
In addition, it should be noted that, if in step 104 the existence of the reflected light signal produced by the projected light on the surface of the detection object in the image sequence is mainly identified by the preset recognition model, then whether the detection object is a living body may also be determined directly according to the recognition result. That is, while the recognition model determines that a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence, it can also identify the type of the object to which the image feature belongs. In other words, the preset recognition model may be used both to identify whether a reflected light signal produced by the projected light exists on the surface of the detection object in the image sequence and, when such a reflected light signal is identified, to identify the type of the object to which the corresponding image feature (i.e. the image feature formed by the reflected light on the surface of the detection object) belongs; details are not described herein again.
As can be seen from the above, in this embodiment, when liveness detection is needed, a light source can be started to project light onto the detection object, and image acquisition of the detection object can be performed. When it is identified that the reflected light signal produced by the projected light is present on the surface of the detection object in the acquired image sequence, the type of the object to which the image feature formed by the reflected light signal on the surface of the detection object belongs is recognized using a preset recognition model, and if the recognition result indicates that the type is a living body, the detection object is determined to be a living body. Since this solution requires no cumbersome interactive operations or computations from the user, the demands on the hardware configuration can be reduced. Moreover, since the basis of the liveness determination is the reflected light signal on the surface of the detection object, and the reflected light signal of a real living body differs from that of a forged one (a carrier of a synthesized picture or video, such as a photograph, a mobile phone or a tablet computer), this solution can also effectively resist synthesized-face attacks and improve the accuracy of discrimination. In summary, this solution can improve the liveness detection effect and thereby improve the accuracy and security of identity authentication.
The method described in the previous embodiment is described below in further detail by way of example.

In this embodiment, the description takes as an example that the liveness detection apparatus is integrated in a terminal, the light source is specifically a color mask, the detection object is specifically a person's face, and the regression analysis of the image sequence is performed on inter-frame changes to obtain the regression result.

As shown in Fig. 2, a specific flow of the liveness detection method may be as follows:
201. The terminal receives a liveness detection request.

For example, the terminal may receive a liveness detection request triggered by a user, or may receive a liveness detection request sent by another device, and so on.

Taking user triggering as an example, when the user starts the liveness detection function, for instance by clicking the start button of liveness detection, a liveness detection request can be generated, so that the terminal receives the liveness detection request.
202. The terminal generates a color mask, so that the light projected by the color mask changes according to a preset rule.

Optionally, so that the change of the light can be better identified later, the change intensity of the light can also be maximized.

The preset rule can be set according to the needs of the actual application, and there are multiple ways to maximize the change intensity of the light. For example, for light of the same color, the change intensity can be maximized by adjusting the screen brightness before and after the change, e.g. setting the screen brightness before and after the change to the minimum and the maximum; for light of different colors, the change intensity can be maximized by adjusting the color difference before and after the change, e.g. switching the screen from darkest black to brightest white, and so on.

Optionally, so that the reflected light signal can be better detected later from the inter-frame difference of the images, in addition to maximizing the change intensity of the light, a color space that is most robust for signal analysis can be chosen as far as possible when selecting colors; for example, under a preset color space, switching the screen from brightest red to brightest green maximizes the chroma change of its reflected light, and so on.
203. The terminal starts a detection interface according to the liveness detection request, and flashes the color mask in a non-detection region of the detection interface, so that the color mask serves as a light source projecting light onto the detection object, such as a person's face.

For example, the terminal may invoke a liveness detection process corresponding to the liveness detection request and start the corresponding detection interface according to the liveness detection process, and so on.

The detection interface may include a detection region and a non-detection region. The detection region is mainly used to display the acquired image sequence, while the non-detection region can flash the color mask, which serves as a light source projecting light onto the detection object; see Fig. 1b. In this way, reflected light is produced on the detection object by the light, and the reflected light differs depending on parameters of the light such as its color and intensity.

It should be noted that, to ensure that the light emitted by the color mask can be projected onto the detection object, the detection object needs to be kept within a certain distance of the screen of the mobile device. For example, when the user needs to detect whether a certain face is a living body, the mobile device can be brought to an appropriate distance in front of that face so as to capture the face, and so on.
204. The terminal performs image acquisition on the detection object to obtain an image sequence.

For example, the camera of the terminal may be invoked to capture the detection object in real time to obtain an image sequence, and the captured image sequence may be displayed in the detection region.

Optionally, in order to reduce the influence of numerical fluctuation caused by noise on the signal, de-noising processing may also be performed on the image sequence after it is obtained. For example, taking the noise model being Gaussian noise as an example, multi-frame averaging in the temporal domain and/or multi-scale averaging within the same frame may be used to reduce the noise as much as possible; details are not repeated here.

Optionally, in order to improve the efficiency and accuracy of subsequent identification, other pre-processing, such as scaling, cropping, sharpening or background blurring, may also be performed on the image sequence.
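A minimal sketch of the temporal multi-frame averaging mentioned above is given below, assuming a Python/NumPy implementation with a Gaussian noise model; the sliding-window size is an illustrative assumption rather than a value prescribed by this embodiment.

import numpy as np

def temporal_denoise(frames, window=3):
    # Reduce per-pixel Gaussian noise by averaging each frame with its
    # temporal neighbours (a simple sliding-window mean over the sequence).
    frames = [f.astype(np.float32) for f in frames]
    half = window // 2
    denoised = []
    for i in range(len(frames)):
        lo, hi = max(0, i - half), min(len(frames), i + half + 1)
        denoised.append(np.mean(frames[lo:hi], axis=0))
    return denoised

# Illustrative usage with synthetic noisy frames
noisy = [np.random.normal(128, 10, (4, 4)).astype(np.float32) for _ in range(5)]
smoothed = temporal_denoise(noisy, window=3)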
205. The terminal calculates the inter-frame difference in the image sequence.

To detect the reflected light signal with the inter-frame difference, the two-dimensional pixels of the images in the sequence need to correspond one-to-one as far as possible. Therefore, provided that the detected user's face undergoes no drastic change in position, an inter-frame alignment method can be used to correct the pixel pairs used for the inter-frame difference more finely. When it is determined that the degree of position change of the detection object is smaller than a preset change value, the pixel coordinates of adjacent frames in the image sequence are obtained respectively, the pixel coordinates are then transformed so as to minimize their registration error, and the inter-frame difference is calculated based on the transformation result. For example, the calculation may be as follows:
Let the homogeneous pixel coordinates of the same object point in two adjacent frames I and I′ be p = [x, y, w]^T and p0 = [x0, y0, w0]^T respectively, where w is the homogeneous coordinate term. A 3×3 transformation matrix M is solved such that:

[x′, y′, w′]^T = M · p0
Here, the transformation type used for the transformation matrix M is the homography, which has the highest number of degrees of freedom, so as to minimize the registration error.

Common methods for optimizing M are mean square error (MSE) estimation and the random sample consensus algorithm (RANSAC). Optionally, in order to obtain a more robust result, homography flow may also be used.
Even though an optimal transformation matrix M can be solved, there may still be pixels between frames that cannot be matched. Therefore, the strongly correlated pixels can be kept and the weakly correlated pixels ignored, and the inter-frame difference then calculated based on the selected pixels; this reduces the amount of computation on the one hand and strengthens the result on the other. That is, optionally, the step of "calculating the inter-frame difference based on the transformation result" may include:

selecting, according to the transformation result, the pixels whose correlation satisfies a preset condition, and calculating the inter-frame difference according to the selected pixels.

The preset change value and the preset condition can be set according to the needs of the actual application; details are not repeated here.

It should be noted that if it is determined that the degree of position change of the detection object is greater than or equal to the preset change value, other adjacent frames can be obtained from the image sequence for the calculation, or the image sequence can be re-acquired and the calculation performed again.
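An illustrative sketch of this alignment-and-difference step follows, assuming Python with OpenCV. The use of ORB keypoints and a brute-force matcher is an implementation assumption; the embodiment only requires that a homography be estimated (for example by MSE or RANSAC) and that the difference be computed over reliably matched pixels.

import cv2
import numpy as np

def aligned_frame_difference(frame_a, frame_b, min_matches=10):
    # Estimate a homography from frame_b to frame_a over ORB matches, warp
    # frame_b accordingly, and return the per-pixel absolute difference.
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(500)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_b, des_a)
    if len(matches) < min_matches:
        return None
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC keeps only the strongly correlated (inlier) pixel pairs and
    # ignores the weakly correlated ones, as described above.
    M, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if M is None:
        return None
    warped_b = cv2.warpPerspective(frame_b, M, (frame_a.shape[1], frame_a.shape[0]))
    return cv2.absdiff(frame_a, warped_b)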
206. The terminal determines, according to the inter-frame difference, whether the reflected light signal produced by the projected light is present on the person's face in the image sequence; if so, step 207 is performed; if not, the operation proceeds according to a preset strategy.

For example, the terminal may determine whether the inter-frame difference is greater than a preset threshold. If so, it is determined that the reflected light signal produced by the projected light is present on the person's face in the image sequence, and step 207 can be performed; if not, it is determined that the reflected light signal produced by the projected light is not present on the person's face in the image sequence, and the operation proceeds according to the preset strategy.

The preset threshold can be determined according to the needs of the actual application, and so can the preset strategy. For example, the strategy may be set to "end the flow", or to "generate a prompt message indicating that no reflected light signal is present", or to "determine that the detection object is a non-living body", or to return to step 204 to perform image acquisition on the detection object again. Alternatively, the started light source may be checked to determine whether it is projecting onto the detection object, such as the person's face: if it is determined that there is no problem with the light projected by the light source onto the detection object, the flow ends, the detection object is determined to be a non-living body, or image acquisition of the detection object is performed again; if it is determined that there is a problem with the light projected onto the person's face, for example the light source does not project onto the face but onto an object beside it, or the light source projects no light at all, the flow returns to step 203, i.e. the light source is restarted and light is projected onto the detection object, and so on; details are not repeated here.
Optionally, in order to improve the detection accuracy and reduce the amount of computation, a cascaded discrimination model can also be used. For example, a global feature algorithm or a preset recognition model (such as a classifier) can be used to pre-screen the inter-frame differences so as to roughly judge whether a reflected light signal has been produced, so that subsequent processing can be skipped for most ordinary frames without a reflected light signal and only the frames with a reflected light signal need to be processed later. That is, the step "the terminal determines, according to the inter-frame difference, whether the reflected light signal produced by the projected light is present on the person's face in the image sequence" may include:

performing classification analysis on the inter-frame difference by a preset global feature algorithm or a preset recognition model; if the analysis result indicates that the inter-frame change of the person's face is greater than a set value, determining that the reflected light signal produced by the projected light is present on the person's face in the image sequence; if the analysis result indicates that the inter-frame change of the person's face is not greater than the set value, determining that the reflected light signal produced by the projected light is not present on the person's face in the image sequence.

The set value can be determined according to the needs of the actual application, and there are also multiple ways of "performing classification analysis on the inter-frame difference by a preset global feature algorithm or a preset recognition model", for example as follows:

analyzing the inter-frame difference to judge whether the reflected light signal produced by the projected light is present in the image sequence; if the reflected light signal produced by the projected light is not present, generating an analysis result indicating that the inter-frame change of the person's face is not greater than the set value; if the reflected light signal produced by the projected light is present, judging by the preset global feature algorithm or the preset recognition model whether the reflector of the reflected light information is a person's face; if it is a person's face, generating an analysis result indicating that the inter-frame change of the person's face is greater than the set value; if it is not a person's face, generating an analysis result indicating that the inter-frame change of the person's face is not greater than the set value.

Alternatively, the images in the image sequence may first be classified by the preset global feature algorithm or the preset recognition model to select the frames in which a person's face is present, yielding candidate frames; the inter-frame differences of the candidate frames are then analyzed to judge whether the reflected light signal produced by the projected light is present on the person's face. If the reflected light signal produced by the projected light is not present, an analysis result indicating that the inter-frame change of the person's face is not greater than the set value is generated; if the reflected light signal produced by the projected light is present, an analysis result indicating that the inter-frame change of the person's face is greater than the set value is generated, and so on.
The global feature algorithm refers to an algorithm based on global features, which may include the gray-level mean and variance, the gray-level co-occurrence matrix, and the spectra obtained after transforms such as the FFT and DCT.

The preset recognition model may specifically be a classifier or another recognition model (such as a face recognition model). Taking a classifier as an example, the classifier can be configured according to the needs of the actual application: if it is only used to discriminate whether a reflected light signal is present, a relatively simple classifier can be used, whereas if it is also used to discriminate whether the reflector is a person's face or the like, a more complex classifier such as a neural network classifier is required; details are not repeated here.

It should be noted that, in addition to calculating the inter-frame difference to analyze whether the reflected light signal produced by the projected light is present on the person's face in the image sequence, the difference between the frames before and after a change of the projected light (which are not necessarily two adjacent frames) can also be calculated for this purpose; for details, reference can be made to the foregoing embodiment, and details are not repeated here.
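A rough sketch of the first stage of such a cascade is shown below, assuming Python/NumPy. The particular global features used (mean, variance and FFT energy of the frame difference) and the weighting of the spectral term are illustrative assumptions; the embodiment allows any cheap global-feature test that lets ordinary frames be skipped.

import numpy as np

def global_change_score(frame_diff):
    # Cheap global features of an inter-frame difference: mean and variance of
    # the absolute change plus the mean energy of its FFT spectrum.
    d = np.abs(frame_diff).astype(np.float32)
    if d.ndim == 3:
        d = d.mean(axis=2)          # collapse colour channels to one plane
    spectrum = np.abs(np.fft.fft2(d))
    return float(d.mean() + d.var() + 1e-3 * spectrum.mean())

def prescreen_frames(frame_diffs, setting_value=5.0):
    # First cascade stage: keep only the frame differences whose global change
    # score exceeds the set value; later stages process only these frames.
    return [i for i, fd in enumerate(frame_diffs)
            if global_change_score(fd) > setting_value]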
207. The terminal recognizes, using a preset recognition model, the type of the object to which the image feature (i.e., the image feature formed by the reflected light signal on the surface of the detection object, such as the person's face) belongs; if the recognition result indicates that the type of the object to which the image feature belongs is a living body, the terminal determines that the detection object is a living body.

Conversely, if the recognition result indicates that the type of the object to which the image feature belongs is a non-living body, for example "mobile phone screen", the terminal can determine that the detection object is a non-living body.

The preset recognition model may include a classifier or another recognition model; the classifier may include an SVM, a neural network, a decision tree, and the like.

The preset recognition model can be obtained by training with multiple feature samples, where a feature sample is an image feature formed by the reflected light signal on an object surface of a labeled type.

Optionally, the recognition model may be built by another device and stored in a preset storage space, from which the terminal retrieves it directly when the type of the object to which the image feature belongs needs to be recognized; alternatively, the recognition model may be built by the terminal itself, for example by obtaining multiple feature samples and training a preset initial recognition model with them to obtain the preset recognition model. For details, reference can be made to the foregoing embodiment, and details are not repeated here.
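The sketch below illustrates step 207 under the assumption that the preset recognition model is an SVM classifier trained on labeled feature samples; the feature dimensionality, the label names "face" and "phone_screen", and the placeholder training data are all hypothetical.

import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: each row stands for an image-feature vector
# extracted from the reflection formed on a labelled object surface.
X_train = np.random.rand(40, 16)            # placeholder feature samples
y_train = ["face"] * 20 + ["phone_screen"] * 20

recognition_model = SVC(kernel="rbf")
recognition_model.fit(X_train, y_train)     # training the preset model

def is_live(reflection_feature_vector):
    # Step 207: the detection object is judged live only when the recognition
    # result says the reflection feature belongs to a living object (a face).
    label = recognition_model.predict([reflection_feature_vector])[0]
    return label == "face"

print(is_live(np.random.rand(16)))          # illustrative call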
Optionally, in order to further improve the detection accuracy, some interactive operations can also be added as appropriate, for example asking the user to blink or open the mouth. That is, after the step "determining that the reflected light signal produced by the projected light is present on the person's face in the image sequence", the liveness detection method may further include:

generating a prompt message instructing the detection object (such as the person's face) to perform a preset action, displaying the prompt message, and monitoring the detection object; if it is detected that the detection object performs the preset action, the detection object is determined to be a living body; otherwise, if it is detected that the detection object does not perform the preset action, the detection object is determined to be a non-living body.

The preset action can be set according to the needs of the actual application. It should be noted that, to avoid cumbersome interactive operations, the number and complexity of the preset actions can be limited; for example, only one simple interaction such as blinking or opening the mouth needs to be performed; details are not repeated here.
As can be seen from the above, in this embodiment a non-detection region that can flash a color mask is set in the detection interface, and the color mask serves as a light source projecting light onto the detection object, such as a person's face. In this way, when liveness detection is needed, image acquisition of the person's face can be performed, and it is then determined whether the reflected light signal produced by the projected light is present on the person's face in the obtained image sequence and whether the type of the object to which the image feature formed by the reflected light signal on the face belongs is a living body; if the signal is present and the type is a living body, the person's face is determined to be a living body. Since this solution requires no cumbersome interactive operations or computations from the user, the demands on the hardware configuration can be reduced. Moreover, since the basis of the liveness determination is the reflected light signal on the surface of the detection object, and the reflected light signal of a real living body differs from that of a forged one (a carrier of a synthesized picture or video, such as a photograph, a mobile phone or a tablet computer), this solution can also effectively resist synthesized-face attacks and improve the accuracy of discrimination. In summary, this solution can improve the liveness detection effect under the limited hardware configuration of a terminal, and thereby improve the accuracy and security of identity authentication.
As in the previous embodiment, this embodiment also takes as an example that the liveness detection apparatus is integrated in a terminal, the light source is specifically a color mask, and the surface of the detection object is a person's face. Unlike the previous embodiment, in this embodiment a combination of lights generated from a preset coding is used as the color mask, as described in detail below.

As shown in Fig. 3a, a specific flow of the liveness detection method may be as follows:

301. The terminal receives a liveness detection request.

For example, the terminal may receive a liveness detection request triggered by a user, or may receive a liveness detection request sent by another device, and so on.

Taking user triggering as an example, when the user starts the liveness detection function, for instance by clicking the start button of liveness detection, a liveness detection request can be generated, so that the terminal receives the liveness detection request.
302. The terminal obtains a preset coded sequence, the coded sequence including multiple codes.

The coded sequence may be generated randomly, or may be set according to the needs of the actual application.

For example, the coded sequence may be a numerical sequence, such as: 0, -1, 1, 0, ..., and so on.
303. The terminal determines, according to a preset coding algorithm, the color corresponding to each code in turn in the order of the codes in the coded sequence, obtaining a color sequence.

The preset coding algorithm can reflect the correspondence between each code in the coded sequence and the various colors; this correspondence can be determined according to the needs of the actual application, for example red may represent the number -1, green may represent 0, and blue may represent 1, and so on.

For example, taking red representing -1, green representing 0 and blue representing 1 as an example, if the coded sequence obtained in step 302 is "0, -1, 1, 0", the terminal can determine, according to the correspondence between each code and the colors, the color corresponding to each code in the order of the codes in the coded sequence, obtaining the color sequence "green, red, blue, green".
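A minimal sketch of this mapping is given below in Python; the particular code-to-color table simply mirrors the example above, and any other correspondence agreed between the encoder and the decoder would work equally well.

# Assumed correspondence between codes and colours (red = -1, green = 0, blue = 1).
CODE_TO_COLOUR = {-1: "red", 0: "green", 1: "blue"}

def encode_to_colours(coded_sequence):
    # Step 303: translate each code, in order, into its colour.
    return [CODE_TO_COLOUR[c] for c in coded_sequence]

print(encode_to_colours([0, -1, 1, 0]))   # ['green', 'red', 'blue', 'green']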
304. The terminal generates a color mask based on the color sequence, so that the light projected by the color mask changes according to the colors indicated by the color sequence.

For example, if the color sequence "green, red, blue, green" is obtained in step 303, the terminal can generate a color mask such that the light it projects changes in the order "green, red, blue, green"; see Fig. 3b and Fig. 3c.

It should be noted that, during projection, the display duration of each color and the waiting interval when switching between colors can be set according to the needs of the actual application. For example, as shown in Fig. 3b, the display duration of each color may be 1 second and the waiting interval may be set to 0 seconds; that is, following the direction of the time axis in Fig. 3b, the projected light appears as "green -> red -> blue -> green", and the moment at which one color switches to another is called a color change point.

Optionally, the waiting interval may also be non-zero. For example, as shown in Fig. 3c, the display duration of each color may be 1 second and the waiting interval may be set to 0.5 seconds, and during the waiting interval no light may be projected; that is, following the direction of the time axis in Fig. 3c, the projected light appears as "green -> no light -> red -> no light -> blue -> no light -> green".

Optionally, in order to further improve security, the change rule of the light can be made more complex; for example, the display durations of the colors and the waiting intervals between different colors can be set to inconsistent values, e.g. the display duration of green may be 3 seconds, that of red 2 seconds and that of blue 4 seconds, the waiting interval when switching between green and red may be 1 second, and the waiting interval when switching between red and blue may be 1.5 seconds, and so on.
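The sketch below builds the projection timeline for the color mask under the assumption that the schedule is a simple list of (color, duration) pairs; the 1-second display duration and 0.5-second waiting interval are only the example values from Fig. 3b and Fig. 3c.

def projection_schedule(colours, display_s=1.0, wait_s=0.0):
    # Step 304: build a (colour, duration) timeline for the colour mask; when
    # wait_s > 0, a "no light" gap is inserted between colours (as in Fig. 3c).
    timeline = []
    for i, colour in enumerate(colours):
        timeline.append((colour, display_s))
        if wait_s > 0 and i < len(colours) - 1:
            timeline.append(("no_light", wait_s))
    return timeline

print(projection_schedule(["green", "red", "blue", "green"], 1.0, 0.5))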
305. The terminal performs image acquisition on the detection object to obtain an image sequence.

For example, the terminal may invoke its camera to capture the detection object in real time to obtain an image sequence, and may display the captured image sequence, for instance in the detection region, and so on.

For example, take the case in which the light projected by the color mask follows the color sequence shown in Fig. 3b (i.e. green -> red -> blue -> green) and the detection object is the user's face. In this four-second video, the face in image frame 1, captured in the first second, carries a green screen reflection; the face in image frame 2, captured in the second second, carries a red screen reflection; the face in image frame 3, captured in the third second, carries a blue screen reflection; and the face in image frame 4, captured in the fourth second, carries a green screen reflection. All these image frames are the raw data carrying the reflected light signal, i.e. the image sequence of the embodiment of the present invention.

Optionally, in order to reduce the influence of numerical fluctuation caused by noise on the signal, de-noising processing may also be performed on the image sequence after it is obtained. For example, taking the noise model being Gaussian noise as an example, multi-frame averaging in the temporal domain and/or multi-scale averaging within the same frame may be used to reduce the noise as much as possible; details are not repeated here.
306. When it is determined that the degree of position change of the detection object is smaller than a preset change value, the terminal obtains from the image sequence the chroma/luminance of the frames before and after each change of the projected light.

For example, still taking the color change "green -> red -> blue -> green" as an example, when the terminal determines that the degree of position change of the detection object is smaller than the preset change value, it can obtain from the image sequence the chroma/luminance of the frames corresponding to the change of the projected light from green to red, of the frames corresponding to the change from red to blue, and of the frames corresponding to the change from blue to green.

For example, if the image sequence includes image frame 1, image frame 2, image frame 3 and image frame 4 in order, where the face in image frame 1 carries a green screen reflection, that in image frame 2 a red screen reflection, that in image frame 3 a blue screen reflection and that in image frame 4 a green screen reflection, then the chroma/luminance of image frame 1, image frame 2, image frame 3 and image frame 4 can be obtained respectively.

As another example, if the image sequence includes image frames 1 to 12 in order, where the faces in image frames 1, 2 and 3 carry a green screen reflection, those in image frames 4, 5 and 6 a red screen reflection, those in image frames 7, 8 and 9 a blue screen reflection, and those in image frames 10, 11 and 12 a green screen reflection, then the chroma/luminance of image frames 3, 4, 6, 7, 9 and 10 can be obtained respectively, where image frames 3 and 4 are the two frames before and after the color changes from green to red, image frames 6 and 7 are the two frames before and after the color changes from red to blue, and image frames 9 and 10 are the two frames before and after the color changes from blue to green.

It should be noted that if it is determined that the degree of position change of the detection object is greater than or equal to the preset change value, other adjacent frames, or the frames before and after another change of the projected light, can be obtained from the image sequence for the calculation, or the image sequence can be re-acquired.
307. The terminal calculates, according to the obtained chroma/luminance, the relative chroma change value or relative luminance change value between the frames before and after each change of the projected light.

For example, the terminal may calculate the chroma/luminance with a preset regression function to obtain the relative chroma/luminance change value (i.e. the relative chroma change value or relative luminance change value) between the frames before and after the change of the projected light, and so on.

The preset regression function can be set according to the needs of the actual application; for example, it may be a recurrent neural network, and so on.
For example, take image frames 3 and 4 as the two frames before and after the color changes from green to red, image frames 6 and 7 as the two frames before and after the color changes from red to blue, image frames 9 and 10 as the two frames before and after the color changes from blue to green, and take the calculation of relative chroma change values as an example. The following relative chroma change values can then be calculated:

the difference between the chroma of image frame 3 and the chroma of image frame 4 is calculated by the preset regression function, for example a recurrent neural network, giving the relative chroma change value of image frames 3 and 4;

the difference between the chroma of image frame 6 and the chroma of image frame 7 is calculated by the preset regression function, for example a recurrent neural network, giving the relative chroma change value of image frames 6 and 7;

the difference between the chroma of image frame 9 and the chroma of image frame 10 is calculated by the preset regression function, for example a recurrent neural network, giving the relative chroma change value of image frames 9 and 10.

It should be noted that the relative luminance change value is calculated in a similar way; details are not repeated here.
These relative chroma change values or relative luminance change values amount to a measure ΔI of the signal strength. In the example above, the relative change value (relative chroma/luminance change value) of image frames 3 and 4 is ΔI34, that of image frames 6 and 7 is ΔI67, and that of image frames 9 and 10 is ΔI910.
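As a purely illustrative stand-in for the preset regression function, the sketch below computes a signed mean chroma change between the two frames bracketing a color change point, assuming Python with OpenCV and the YCrCb color space; the way the two chroma channels are collapsed into one scalar is an assumption, since the patent leaves the exact regression function open.

import cv2
import numpy as np

def chroma_change_relative(frame_before, frame_after):
    # One possible realisation of the delta-I measure: the signed change of the
    # mean chroma (Cr and Cb of YCrCb) between the two frames.
    def mean_chroma(frame_bgr):
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
        return ycrcb[..., 1].mean(), ycrcb[..., 2].mean()
    cr0, cb0 = mean_chroma(frame_before)
    cr1, cb1 = mean_chroma(frame_after)
    return (cr1 - cr0) + (cb1 - cb0)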
308. The terminal decodes the image sequence according to a preset decoding algorithm based on the relative chroma/luminance change values (i.e. the relative chroma/luminance change values between the frames before and after each change of the projected light), obtaining a decoded sequence.

For example, the terminal may use the preset decoding algorithm to process, in turn, the relative chroma/luminance change values in the image sequence (i.e. the relative chroma/luminance change values between the frames before and after each change of the projected light) to obtain the absolute chroma/luminance values of the frames before and after each change of the projected light, and may use the obtained absolute chroma/luminance values as the decoded sequence, or convert the obtained absolute chroma/luminance values according to a preset strategy to obtain the decoded sequence.

The preset decoding algorithm matches the coding algorithm and can be determined according to the coding algorithm; the preset strategy can also be set according to the needs of the actual application; details are not repeated here.

For example, if it is determined in step 307 that the relative change value (relative chroma/luminance change value) of image frames 3 and 4 is ΔI34, that of image frames 6 and 7 is ΔI67, and that of image frames 9 and 10 is ΔI910, then the absolute intensity values (such as absolute chroma values or absolute luminance values) of the light signal of all the frames can be derived from these relative change values, specifically as follows:
Assume that the origin and the minimum unit length of the space are given, for example let the absolute intensity value of image frame 1 be I1 = 0, and let the relative change values be ΔI34 = -1, ΔI67 = 2 and ΔI910 = -1. Since image frames 1, 2 and 3 are all green, the absolute intensity values of their light signals are known to be the same, i.e. I3 = I2 = I1 = 0; since image frames 4, 5 and 6 are all red, the absolute intensity values of their light signals are the same; similarly, since image frames 7, 8 and 9 are all blue, and image frames 10, 11 and 12 are all green, the absolute intensity values of the light signals of image frames 7, 8 and 9 are the same, and those of image frames 10, 11 and 12 are the same. Accordingly, the absolute intensity value of each image frame can be calculated by the following formulas:
I4 = I5 = I6 = I3 + ΔI34 = 0 - 1 = -1;

I7 = I8 = I9 = I6 + ΔI67 = -1 + 2 = 1;

I10 = I11 = I12 = I9 + ΔI910 = 1 - 1 = 0;
The numerical sequence can thus be decoded, that is, the decoded sequence is obtained: 0, -1, 1, 0 (representing green, red, blue and green respectively).
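The decoding step can be sketched as a simple accumulation of the relative changes, starting from the assumed origin, as shown below in Python; the values used are those of the worked example above.

def decode_sequence(delta_values, origin=0):
    # Step 308: integrate the relative changes measured at each colour change
    # point into absolute signal values, starting from an assumed origin.
    decoded = [origin]
    for delta in delta_values:
        decoded.append(decoded[-1] + delta)
    return decoded

# Worked example above: I1 = 0 and deltas -1, +2, -1.
print(decode_sequence([-1, 2, -1]))   # [0, -1, 1, 0], matching the coded sequence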
309. The terminal determines whether the decoded sequence matches the coded sequence; if they match, it determines that the reflected light signal produced by the projected light is present on the surface of the detection object in the image sequence, and step 310 can be performed; otherwise, if the decoded sequence and the coded sequence do not match, it determines that the reflected light signal produced by the projected light is not present on the surface of the detection object in the image sequence, and the operation then proceeds according to a preset strategy.

For example, the terminal may determine whether the decoded sequence and the coded sequence are consistent; if they are consistent, it determines that the reflected light signal produced by the projected light is present on the surface of the detection object in the image sequence, and step 310 can be performed; otherwise, if they are inconsistent, it determines that the reflected light signal produced by the projected light is not present on the surface of the detection object in the image sequence, and then operates according to the preset strategy.

For example, if the decoded sequence "0, -1, 1, 0" obtained in step 308 is consistent with the coded sequence "0, -1, 1, 0", it can be determined that the reflected light signal produced by the projected light is present on the surface of the detection object in the image sequence, and so on.

The preset strategy can be determined according to the needs of the actual application; for details, reference can be made to step 206 in the previous embodiment, and details are not repeated here.
310. The terminal recognizes, using a preset recognition model, the type of the object to which the image feature (i.e., the image feature formed by the reflected light signal on the surface of the detection object, such as the person's face) belongs; if the recognition result indicates that the type of the object to which the image feature belongs is a living body, the terminal determines that the detection object is a living body.

Conversely, if the recognition result indicates that the type of the object to which the image feature belongs is a non-living body, for example "mobile phone screen", the terminal can determine that the detection object is a non-living body.

The preset recognition model may include a classifier or another recognition model; the classifier may include an SVM, a neural network, a decision tree, and the like. The preset recognition model can be obtained by training with multiple feature samples, where a feature sample is an image feature formed by the reflected light signal on an object surface of a labeled type.

Optionally, the recognition model may be built by another device and provided to the terminal for use, or may be built by the terminal itself; for details, reference can be made to the foregoing embodiment, and details are not repeated here.
Optionally, in order to further improve the detection accuracy, some interactive operations can also be added as appropriate, for example asking the user to blink or open the mouth. That is, after the step "determining that the decoded sequence matches the coded sequence", the liveness detection method may further include:

the terminal generates a prompt message instructing the detection object (such as the person's face) to perform a preset action, displays the prompt message, and monitors the detection object; if it is detected that the detection object performs the preset action, the detection object is determined to be a living body; otherwise, if it is detected that the detection object does not perform the preset action, the detection object is determined to be a non-living body.

The preset action can be set according to the needs of the actual application. It should be noted that, to avoid cumbersome interactive operations, the number and complexity of the preset actions can be limited; for example, only one simple interaction such as blinking or opening the mouth needs to be performed; details are not repeated here.
Optionally, since each acquired image frame carrying a reflected light signal also records a corresponding timestamp, after it is determined that the decoded sequence matches the coded sequence it can further be determined whether these timestamps correspond one-to-one to the times at which the color mask switches its light. If they correspond, it is determined that the reflected light signal produced by the projected light is present on the surface of the detection object in the image sequence; otherwise, if they cannot be made to correspond, it is determined that the reflected light signal does not match the preset light signal sample. In other words, if an attacker wants to mount an attack by rendering a synthesized face, not only must the color sequence of the coding be matched in order, but the absolute time points must not drift either (because even real-time synthesized rendering requires at least milliseconds of computation), which greatly increases the difficulty of the attack; security can therefore be further improved.
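A minimal sketch of this timestamp check follows, assuming that the change times observed in the reflected frames and the switching times of the color mask are available as lists of seconds; the tolerance value is an illustrative assumption rather than a value fixed by this embodiment.

def timestamps_match(frame_change_times, mask_change_times, tolerance_s=0.1):
    # Verify that each colour change observed in the reflected light occurs at
    # (almost) the same absolute time as the corresponding switch of the mask.
    if len(frame_change_times) != len(mask_change_times):
        return False
    return all(abs(f - m) <= tolerance_s
               for f, m in zip(frame_change_times, mask_change_times))

print(timestamps_match([1.02, 2.01, 3.05], [1.0, 2.0, 3.0]))   # True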
As can be seen from the above, in this embodiment a color mask can be generated from a coded sequence, and the color mask serves as a light source projecting light onto the detection object, such as a person's face. In this way, when liveness detection is needed, the person's face can be captured, and the reflected light signal on the face in the obtained image sequence is then decoded to determine whether it matches the coded sequence; if it matches, the person's face is determined to be a living body. Since this solution requires no cumbersome interactive operations or computations from the user, the demands on the hardware configuration can be reduced. Moreover, since the basis of the liveness determination is the reflected light signal on the surface of the detection object, and the reflected light signal of a real living body differs from that of a forged one (a carrier of a synthesized picture or video, such as a photograph, a mobile phone or a tablet computer), this solution can also effectively resist synthesized-face attacks and improve the accuracy of discrimination.

Furthermore, since the projected light is generated from a random coded sequence and the reflected light signal must be decoded in the subsequent discrimination, even if an attacker synthesizes a corresponding reflective video according to the current color sequence, it cannot be reused next time. Therefore, compared with the previous embodiment, this solution can further improve the liveness detection effect and thereby improve the accuracy and security of identity authentication.
In order to better implement the above method, an embodiment of the present invention further provides a liveness detection apparatus, referred to as the detection apparatus for short. As shown in Fig. 4a, the liveness detection apparatus includes a receiving unit 401, a starting unit 402, an acquisition unit 403 and a detection unit 404, as follows:

(1) Receiving unit 401;

The receiving unit 401 is configured to receive a liveness detection request.

For example, the receiving unit 401 may be specifically configured to receive a liveness detection request triggered by a user, or to receive a liveness detection request sent by another device, and so on.
(2) Starting unit 402;

The starting unit 402 is configured to start a light source according to the liveness detection request, and to project light onto the detection object.

For example, the starting unit 402 may be specifically configured to invoke a liveness detection process corresponding to the liveness detection request, and to start the light source according to the liveness detection process, and so on.

The light source can be set according to the needs of the actual application; for example, it can be implemented by adjusting the brightness of the terminal screen, by using another light-emitting component such as a flash lamp or an infrared emitter or an external device, or by setting a color mask on the display interface, and so on. That is, the starting unit 402 may specifically perform any one of the following operations:

(1) The starting unit 402 may be specifically configured to adjust the screen brightness according to the liveness detection request, so that the screen serves as a light source projecting light onto the detection object.

(2) The starting unit 402 may be specifically configured to turn on a preset light-emitting component according to the liveness detection request, so that the light-emitting component serves as a light source projecting light onto the detection object.

The light-emitting component may include components such as a flash lamp or an infrared emitter.
(3) The starting unit 402 may be specifically configured to start a preset color mask according to the liveness detection request, the color mask serving as a light source projecting light onto the detection object.

For example, the starting unit 402 may be specifically configured to start the color mask on the terminal according to the liveness detection request; for instance, a component capable of flashing a color mask can be provided at the edge of the terminal housing, and after the liveness detection request is received this component can be started to flash the color mask. Alternatively, the color mask can be flashed by displaying a detection interface, as follows:

starting a detection interface according to the liveness detection request, the detection interface being able to flash a color mask, the color mask serving as a light source projecting light onto the detection object.

The region in which the color mask is flashed can be determined according to the needs of the actual application; for example, the detection interface may include a detection region and a non-detection region, the detection region being mainly used for displaying the capture, and the non-detection region being able to flash a color mask, the color mask serving as a light source projecting light onto the detection object, and so on.
The part of the non-detection region in which the color mask is flashed can be determined according to the needs of the actual application: the entire non-detection region may be provided with the color mask, or only one or several sub-regions of the non-detection region may be provided with it, and so on. Parameters of the color mask such as its color and transparency can be set according to the needs of the actual application. The color mask can be preset by the system and retrieved directly when the detection interface is started, or it can be generated automatically after the liveness detection request is received. That is, as shown in Fig. 4b, the liveness detection apparatus may further include a generation unit 405, as follows:

The generation unit 405 may be configured to generate a color mask, so that the light projected by the color mask can change according to a preset rule.

Optionally, so that the change of the light can be better identified later, the generation unit 405 may also be configured to maximize the change intensity of the light.

The preset rule can be determined according to the needs of the actual application, and there are multiple ways to maximize the change intensity of the light. For example, for light of the same color, the change intensity can be maximized by adjusting the screen brightness before and after the change, e.g. setting the screen brightness before and after the change to the minimum and the maximum; for light of different colors, the change intensity can be maximized by adjusting the color difference before and after the change, and so on. That is:

The generation unit 405 may be specifically configured to: for light of the same color, obtain a preset screen brightness adjustment parameter and adjust, according to this parameter, the screen brightness of the light of that color before and after the change, so as to adjust the change intensity of the light; and, for light of different colors, obtain a preset color difference adjustment parameter and adjust, according to this parameter, the color difference of the light before and after the change, so as to adjust the change intensity of the light.

The adjustment amplitude of the change intensity of the light can be set according to the needs of the actual application, and may include a large adjustment, for example maximizing the change intensity of the light, or a fine adjustment, and so on; details are not repeated here.
Optionally, so that the reflected light signal can be better detected later from the inter-frame difference of the images, in addition to adjusting the change intensity of the light, a color space that is most robust for signal analysis can be chosen as far as possible when selecting colors; for details, reference can be made to the foregoing embodiment, and details are not repeated here.

Optionally, in order to improve the accuracy and security of identity authentication, a combination of lights generated from a preset coding can also be used as the color mask, that is:

The generation unit 405 may be specifically configured to: obtain a preset coded sequence, the coded sequence including multiple codes; determine, according to a preset coding algorithm, the color corresponding to each code in turn in the order of the codes in the coded sequence, obtaining a color sequence; and generate a color mask based on the color sequence, so that the light projected by the color mask changes according to the colors indicated by the color sequence.

The coded sequence may be generated randomly or set according to the needs of the actual application, and the preset coding algorithm can also be determined according to the needs of the actual application. The coding algorithm can reflect the correspondence between each code in the coded sequence and the various colors; for example, red may represent the number -1, green may represent 0 and blue may represent 1, and so on. If the obtained coded sequence is "0, -1, 1, 0", the color sequence "green, red, blue, green" can be obtained, so that a color mask can be generated whose projected light changes in the order "green, red, blue, green".

It should be noted that, during projection, the display duration of each color and the waiting interval when switching between colors can be set according to the needs of the actual application; in addition, during the waiting interval, no light may be projected, or a predetermined light may be projected; for details, reference can be made to the foregoing method embodiment, and details are not repeated here.
(3) Acquisition unit 403;

The acquisition unit 403 is configured to perform image acquisition on the detection object to obtain an image sequence.

For example, the acquisition unit 403 may be specifically configured to invoke a camera device to capture the detection object in real time to obtain an image sequence, and to display the captured image sequence in the detection region.

The camera device includes, but is not limited to, the camera built into the terminal, a web camera, a surveillance camera, and other devices capable of capturing images.

Optionally, in order to reduce the influence of numerical fluctuation caused by noise on the signal, the acquisition unit 403 may also perform de-noising processing on the image sequence after it is obtained; reference can be made to the foregoing embodiment, and details are not repeated here.

Optionally, the acquisition unit 403 may also perform other pre-processing on the image sequence, such as scaling, cropping, sharpening or background blurring, so as to improve the efficiency and accuracy of subsequent identification.
(4) Detection unit 404;

The detection unit 404 is configured to: when it is identified that the reflected light signal produced by the projected light is present on the surface of the detection object in the image sequence, the reflected light signal forming an image feature on the surface of the detection object, recognize the type of the object to which the image feature belongs using a preset recognition model, and determine that the detection object is a living body if the recognition result indicates that the type of the object to which the image feature belongs is a living body.

The detection unit 404 may also be configured to, when it is determined that the reflected light signal produced by the projected light is not present on the surface of the detection object in the image sequence, operate according to a preset strategy.

The preset strategy can be set according to the needs of the actual application; for example, the detection object may be determined to be a non-living body, or the acquisition unit 403 may be triggered to perform image acquisition on the detection object again, or the starting unit 402 may be triggered to restart the light source and project light onto the detection object, and so on; for details, reference can be made to the foregoing method embodiment, and details are not repeated here.

The detection unit 404 may also be configured to determine that the detection object is a non-living body when the recognition result indicates that the type of the object to which the image feature belongs is a non-living body.
For example, the detection unit 404 may include a calculation sub-unit, a judgment sub-unit and a recognition sub-unit, as follows:

The calculation sub-unit may be configured to perform regression analysis on the frame changes in the image sequence to obtain a regression result.

For example, the calculation sub-unit may be specifically configured to perform regression analysis on the numerical expression of the chroma/luminance of each frame in the image sequence, where the numerical expression may be a numerical sequence, and to judge the chroma/luminance change of the frames in the image sequence from the numerical expression such as the numerical sequence, obtaining the regression result.

Alternatively, the calculation sub-unit may be specifically configured to perform regression analysis on the changes between frames in the image sequence to obtain the regression result, and so on.

For the specific way of performing regression analysis on the numerical expression of the chroma/luminance of each frame in the image sequence, reference can be made to the foregoing method embodiment. The changes between frames in the image sequence can be obtained by calculating the differences between frames in the image sequence; these differences may be inter-frame differences or frame differences, where an inter-frame difference refers to the difference between two adjacent frames, and a frame difference refers to the difference between the frames before and after a change of the projected light.

For example, the calculation sub-unit may be specifically configured to, when it is determined that the degree of position change of the detection object is smaller than a preset change value, obtain the pixel coordinates of adjacent frames in the image sequence respectively and calculate the inter-frame difference based on the pixel coordinates; for instance, the pixel coordinates may be transformed to minimize their registration error, the pixels whose correlation satisfies a preset condition may then be selected according to the transformation result, and the inter-frame difference may be calculated according to the selected pixels, and so on.

As another example, the calculation sub-unit may be specifically configured to, when it is determined that the degree of position change of the detection object is smaller than a preset change value, obtain from the image sequence the pixel coordinates of the frames before and after a change of the projected light respectively and calculate the frame difference based on the pixel coordinates; for instance, the pixel coordinates may be transformed to minimize their registration error, the pixels whose correlation satisfies a preset condition may then be selected according to the transformation result, and the frame difference may be calculated according to the selected pixels, and so on.
Optionally, in addition to the above manner of calculating the frame difference, other manners may also be used. For example, on a channel of a certain color space, or in any dimension that can describe a change of chrominance or brightness, the relative value of the chrominance change or the relative value of the brightness change between two frames may be analyzed instead, that is:
The computation subunit may be specifically configured to, when it is determined that the degree of positional change of the detection object is less than the preset change value, obtain from the image sequence the chrominance/luminance of the frames corresponding to before and after a change of the projected light, calculate, according to the obtained chrominance/luminance, the chrominance change relative value or brightness change relative value between the frames corresponding to before and after the change of the projected light, and use the chrominance/luminance change relative value as the frame difference between the frames corresponding to before and after the change of the projected light.
For example, the computation subunit may be specifically configured to calculate the chrominance/luminance by a preset regression function, to obtain the chrominance/luminance change relative value (i.e., the chrominance change relative value or the brightness change relative value) between the frames corresponding to before and after the change of the projected light, and so on.
The regression function may be configured according to the requirements of the actual application; for example, it may specifically be a recurrent neural network or the like.
The preset change value and the preset condition may also be configured according to the requirements of the actual application. A simple illustration of computing such a relative change value is given below.
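The following is a minimal illustration, assuming OpenCV and NumPy, of one straightforward way to compute a relative chrominance/brightness change between the frames before and after a light change from per-channel means in the YCrCb space; it stands in for, and is much simpler than, the preset regression function (e.g., a recurrent neural network) mentioned above.

import cv2
import numpy as np

def relative_change(frame_before, frame_after, eps=1e-6):
    # Per-channel relative change of the mean luminance (Y) and chrominance (Cr, Cb).
    ycc_before = cv2.cvtColor(frame_before, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    ycc_after = cv2.cvtColor(frame_after, cv2.COLOR_BGR2YCrCb).astype(np.float32)

    mean_before = ycc_before.reshape(-1, 3).mean(axis=0)
    mean_after = ycc_after.reshape(-1, 3).mean(axis=0)

    # Relative value of the change on each channel; this can serve as the
    # "frame difference" between the frames before and after the light change.
    return (mean_after - mean_before) / (mean_before + eps)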
The judgment subunit may be configured to determine, according to the regression result, whether a reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence, the reflected light signal forming an image feature on the surface of the detection object.
The identification subunit may be configured to, when the judgment subunit determines that the reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence, identify the type of the object to which the image feature belongs by using a preset identification model; if the identification result indicates that the type of the object to which the image feature belongs is a living body, it is determined that the detection object is a living body.
The identification subunit may further be configured to, when the judgment subunit determines that no reflected light signal generated by the projected light exists, operate according to a preset strategy; for details, refer to the description of the preset strategy in the detection unit 404, which is not repeated here.
The identification subunit may further be configured to determine that the detection object is a non-living body when the identification result indicates that the type of the object to which the image feature belongs is a non-living body.
The identification model may be formed by training with multiple feature samples, where a feature sample is an image feature formed by the reflected light signal on the surface of an object of a labeled type. The identification model may be established by another device and provided to the living body detection apparatus, or it may be established by the living body detection apparatus itself; that is, the living body detection apparatus may further include a model establishment unit, as follows:
The model establishment unit is configured to obtain multiple feature samples, and train a preset initial identification model according to the feature samples to obtain the preset identification model.
There may be multiple manners of determining, according to the regression result, whether the reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence; for example, any one of the following manners may be used:
First manner:
The judgment subunit may be specifically configured to determine whether the regression result is greater than a preset threshold; if so, determine that the reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence; if not, determine that no reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence.
Second manner:
The judgment subunit may be specifically configured to perform classification analysis on the regression result by a preset global feature algorithm or a preset identification model; if the analysis result indicates that the inter-frame change of the surface of the detection object is greater than a set value, determine that the reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence; if the analysis result indicates that the inter-frame change of the surface of the detection object is not greater than the set value, determine that no reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence.
The set value may depend on the requirements of the actual application, and there may also be multiple manners of "performing classification analysis on the regression result by a preset global feature algorithm or a preset identification model"; for example, the manner may be as follows:
The judgment subunit may be specifically configured to analyze the regression result to judge whether a reflected light signal generated by the projected light exists in the image sequence; if no reflected light signal generated by the projected light exists, generate an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value; if a reflected light signal generated by the projected light exists, judge, by the preset global feature algorithm or the preset identification model, whether the reflector of the existing reflected light information is the detection object; if it is the detection object, generate an analysis result indicating that the inter-frame change of the surface of the detection object is greater than the set value; if it is not the detection object, generate an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value.
Alternatively, the judgment subunit may be specifically configured to classify the images in the image sequence by the preset global feature algorithm or the preset identification model, so as to filter out the frames in which the detection object exists and obtain candidate frames, and analyze the inter-frame difference of the candidate frames to judge whether the reflected light signal generated by the projected light exists on the detection object; if no reflected light signal generated by the projected light exists, generate an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value; if a reflected light signal generated by the projected light exists, generate an analysis result indicating that the inter-frame change of the surface of the detection object is greater than the set value.
The global feature algorithm refers to an algorithm based on global features, where the global features may include the gray-level mean and variance, the gray-level co-occurrence matrix, and the spectrum after transforms such as FFT and DCT; a sketch of extracting several such features is given below.
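The following is a minimal sketch, assuming NumPy and SciPy, of extracting a few of the global features named above (gray-level mean/variance and statistics of the FFT and DCT spectra); the gray-level co-occurrence matrix is omitted here, and the exact feature set and the classifier fed with it are left as implementation choices.

import numpy as np
from scipy.fftpack import dct

def global_features(gray):
    # gray: 2-D uint8 image (e.g., the face region of one frame).
    g = gray.astype(np.float32)
    features = [float(g.mean()), float(g.var())]       # gray-level mean and variance

    fft_mag = np.abs(np.fft.fft2(g))                    # FFT magnitude spectrum
    features += [float(fft_mag.mean()), float(fft_mag.std())]

    # 2-D DCT spectrum (applied along rows, then along columns).
    dct_coef = dct(dct(g, axis=0, norm='ortho'), axis=1, norm='ortho')
    features += [float(np.abs(dct_coef).mean()), float(np.abs(dct_coef).std())]

    return np.array(features, dtype=np.float32)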
Optionally, if the color mask is generated according to a preset coded sequence, a third manner may also be used, that is, the optical signal is decoded to determine whether the reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence, specifically as follows:
Third manner:
The judgment subunit may be specifically configured to decode the image sequence according to a preset decoding algorithm based on the regression result, to obtain a decoded sequence, and determine whether the decoded sequence matches the coded sequence; if they match, determine that the reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence; if they do not match, determine that no reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence.
For example, if the regression result is the chrominance/luminance change relative value between the frames corresponding to before and after a change of the projected light, then the judgment subunit may use the preset decoding algorithm to calculate, in turn, the chrominance/luminance change relative values in the image sequence (i.e., the chrominance/luminance change relative values between the frames corresponding to before and after each change of the projected light), to obtain the chrominance/luminance absolute value of the frames corresponding to before and after each change of the projected light, and then use the obtained chrominance/luminance absolute values as the decoded sequence, or change the obtained chrominance/luminance absolute values according to a preset strategy to obtain the decoded sequence.
The preset decoding algorithm matches the coding algorithm and may specifically depend on the coding algorithm; the preset strategy may also be configured according to the requirements of the actual application, which is not described in detail here.
Optionally, there may also be multiple manners of determining whether the decoded sequence matches the coded sequence. For example, the judgment subunit may determine whether the decoded sequence and the coded sequence are consistent; if they are consistent, it is determined that the decoded sequence matches the coded sequence, and if they are not consistent, it is determined that the decoded sequence does not match the coded sequence. Alternatively, the judgment subunit may determine whether the relation between the decoded sequence and the coded sequence satisfies a preset correspondence; if so, it is determined that the decoded sequence matches the coded sequence; otherwise, it is determined that the decoded sequence does not match the coded sequence, and so on. The preset correspondence may be configured according to the requirements of the actual application. A minimal sketch of such a decode-and-match check is given below.
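The following is a minimal sketch of one possible decode-and-match check: each per-change relative chrominance/brightness value is mapped to a symbol, and the resulting decoded sequence is compared with the coded sequence. The threshold-based symbol mapping and the match-ratio criterion are illustrative assumptions, not the specific decoding algorithm or preset correspondence of this embodiment.

def decode_sequence(relative_changes, threshold=0.05):
    # Map each relative change to a symbol: 1 (brighter), -1 (darker), 0 (no change).
    decoded = []
    for value in relative_changes:
        if value > threshold:
            decoded.append(1)
        elif value < -threshold:
            decoded.append(-1)
        else:
            decoded.append(0)
    return decoded

def reflection_signal_present(relative_changes, coded_sequence, min_match=0.8):
    # The reflected light signal is deemed present when the decoded sequence
    # matches the coded sequence closely enough (a preset correspondence).
    decoded = decode_sequence(relative_changes)
    matches = sum(d == c for d, c in zip(decoded, coded_sequence))
    return matches / max(len(coded_sequence), 1) >= min_match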
In specific implementation, each of the above units may be implemented as an independent entity, or may be combined arbitrarily and implemented as one or several entities. For the specific implementation of the above units, refer to the foregoing method embodiments, which are not repeated here.
The living body detection apparatus may be specifically integrated in a device such as a terminal, and the terminal may specifically be a mobile phone, a tablet computer, a notebook computer, a PC, or the like.
As can be seen from the above, when living body detection is required, the living body detection apparatus of this embodiment can start a light source by the start unit 402 to project light to the detection object, perform image acquisition on the detection object through the detection region in the detection interface by the acquisition unit 403, and then, when the detection unit 404 identifies that a reflected light signal generated by the projected light exists on the surface of the detection object in the acquired image sequence, identify, by using a preset identification model, the type of the object to which the image feature formed by the reflected light signal on the surface of the detection object belongs; if the identification result indicates that the type of the object to which the image feature belongs is a living body, it is determined that the detection object is a living body. Since this solution requires no cumbersome interactive operation or computation with the user, the requirement on hardware configuration can be reduced. Moreover, the basis on which this solution makes the living body determination is the reflected light signal on the surface of the detection object, and the reflected light signal of a genuine living body differs from that of a forged one (a carrier of a synthesized picture or video, such as a photograph, a mobile phone, or a tablet computer); therefore, this solution can also effectively resist synthesized-face attacks and improve the accuracy of discrimination.
Further, the generation unit 405 of the living body detection apparatus may also generate the projected light according to a random coded sequence, and the detection unit 404 performs discrimination by decoding the reflected light signal; therefore, even if an attacker synthesizes a corresponding reflective video according to the current colour sequence, it cannot be reused in the next detection, so the security is greatly enhanced. In summary, this solution can greatly improve the living body detection effect, and helps improve the accuracy and security of identity verification.
Correspondingly, an embodiment of the present invention further provides a terminal. As shown in FIG. 5, the terminal may include components such as a radio frequency (RF, Radio Frequency) circuit 501, a memory 502 including one or more computer-readable storage media, an input unit 503, a display unit 504, a sensor 505, an audio circuit 506, a Wireless Fidelity (WiFi, Wireless Fidelity) module 507, a processor 508 including one or more processing cores, and a power supply 509. A person skilled in the art can understand that the terminal structure shown in FIG. 5 does not constitute a limitation on the terminal; the terminal may include more or fewer components than shown, or combine some components, or have a different component arrangement. Wherein:
The RF circuit 501 may be configured to receive and send signals in the course of receiving and sending information or during a call; in particular, after receiving downlink information from a base station, the RF circuit 501 hands it over to one or more processors 508 for processing, and, in addition, sends uplink data to the base station. Generally, the RF circuit 501 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM, Subscriber Identity Module) card, a transceiver, a coupler, a low noise amplifier (LNA, Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 501 may also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM, Global System of Mobile communication), General Packet Radio Service (GPRS, General Packet Radio Service), Code Division Multiple Access (CDMA, Code Division Multiple Access), Wideband Code Division Multiple Access (WCDMA, Wideband Code Division Multiple Access), Long Term Evolution (LTE, Long Term Evolution), e-mail, Short Messaging Service (SMS, Short Messaging Service), and the like.
The memory 502 may be configured to store software programs and modules; the processor 508 runs the software programs and modules stored in the memory 502 to perform various functional applications and data processing. The memory 502 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like, and the data storage area may store data (such as audio data or a phonebook) created according to the use of the terminal, and the like. In addition, the memory 502 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. Correspondingly, the memory 502 may further include a memory controller to provide the processor 508 and the input unit 503 with access to the memory 502.
The input unit 503 may be configured to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Specifically, in a specific embodiment, the input unit 503 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also referred to as a touch display screen or touch pad, collects touch operations of the user on or near it (such as operations performed by the user on or near the touch-sensitive surface with a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connection apparatus according to a preset formula. Optionally, the touch-sensitive surface may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 508, and can receive and execute commands sent by the processor 508. In addition, the touch-sensitive surface may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface, the input unit 503 may also include other input devices. Specifically, the other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control key or a power key), a trackball, a mouse, a joystick, and the like.
The display unit 504 may be configured to display information input by the user or information provided to the user, as well as various graphical user interfaces of the terminal; these graphical user interfaces may be composed of graphics, text, icons, videos, and any combination thereof. The display unit 504 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED, Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface may cover the display panel; after detecting a touch operation on or near it, the touch-sensitive surface transmits the operation to the processor 508 to determine the type of the touch event, and the processor 508 then provides a corresponding visual output on the display panel according to the type of the touch event. Although in FIG. 5 the touch-sensitive surface and the display panel implement input and output functions as two independent components, in some embodiments the touch-sensitive surface and the display panel may be integrated to implement the input and output functions.
The terminal may further include at least one sensor 505, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor may turn off the display panel and/or the backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and may be used in applications for recognizing the posture of the mobile phone (such as portrait/landscape switching, related games, and magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; the terminal may also be configured with a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and other sensors, which are not described in detail here.
The audio circuit 506, a loudspeaker, and a microphone may provide an audio interface between the user and the terminal. The audio circuit 506 may convert the received audio data into an electrical signal and transmit it to the loudspeaker, which converts it into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electrical signal, which is received by the audio circuit 506 and converted into audio data; after the audio data is processed by the processor 508, it is sent through the RF circuit 501 to, for example, another terminal, or output to the memory 502 for further processing. The audio circuit 506 may also include an earphone jack to provide communication between a peripheral earphone and the terminal.
WiFi belongs to short-range wireless transmission technology. Through the WiFi module 507, the terminal can help the user send and receive e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although FIG. 5 shows the WiFi module 507, it can be understood that it is not a necessary component of the terminal and may be omitted as needed within the scope that does not change the essence of the invention.
The processor 508 is the control center of the terminal; it connects all parts of the entire mobile phone using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 502 and calling the data stored in the memory 502, thereby monitoring the mobile phone as a whole. Optionally, the processor 508 may include one or more processing cores; preferably, the processor 508 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 508.
The terminal further includes a power supply 509 (such as a battery) that supplies power to the components. Preferably, the power supply may be logically connected to the processor 508 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power supply 509 may also include one or more direct-current or alternating-current power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other component.
Although not shown, the terminal may further include a camera, a Bluetooth module, and the like, which are not described in detail here. Specifically, in this embodiment, the processor 508 in the terminal loads, according to the following instructions, the executable files corresponding to the processes of one or more application programs into the memory 502, and the processor 508 runs the application programs stored in the memory 502, thereby implementing various functions:
Receive a living body detection request; start a light source according to the living body detection request and project light to the detection object; then perform image acquisition on the detection object to obtain an image sequence; identify that a reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence, the reflected light signal forming an image feature on the surface of the detection object; identify the type of the object to which the image feature belongs by using a preset identification model; and, if the identification result indicates that the type of the object to which the image feature belongs is a living body, determine that the detection object is a living body.
There may be multiple manners of determining whether the reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence and of identifying the type of the object to which the image feature belongs; for details, refer to the foregoing embodiments, which are not repeated here.
There may also be multiple implementations of the light source; for example, it may be implemented by adjusting the brightness of the terminal screen, or by using other light-emitting components such as a flash lamp or an infrared emitter, or an external apparatus, or by setting a color mask on the display interface, and so on; that is, the application program in the memory 502 may further implement the following functions:
Adjust the screen brightness according to the living body detection request so that the screen serves as the light source to project light to the detection object.
Alternatively, turn on a preset light-emitting component according to the living body detection request so that the light-emitting component serves as the light source to project light to the detection object, where the light-emitting component may include components such as a flash lamp or an infrared emitter.
Alternatively, start a color mask according to the living body detection request; for example, start a detection interface in which a color mask can flash, the color mask serving as the light source to project light to the detection object, and so on.
The region in which the color mask flashes may depend on the requirements of the actual application; for example, the detection interface may include a detection region and a non-detection region, where the detection region is mainly used to show the monitoring situation, and the color mask may flash in the non-detection region, the color mask serving as the light source to project light to the detection object, and so on.
In addition, it should be noted that parameters of the color mask, such as color and transparency, may be configured according to the requirements of the actual application; the color mask may be preset by the system and directly retrieved when the detection interface is started, or it may be automatically generated after the living body detection request is received, that is, the application program stored in the memory 502 may further implement the following function:
Generate a color mask so that the light projected by the color mask can be changed according to a preset rule.
Optionally, to facilitate better subsequent identification of the change of the light, the change intensity of the light may also be maximized.
There may be multiple manners of maximizing the change intensity of the light; for details, refer to the foregoing embodiments, which are not repeated here.
Optionally, in order to better detect the reflected light signal from the inter-frame differences of the images subsequently, in addition to maximizing the change intensity of the light, the color space that is most robust for signal analysis may also be selected as far as possible when choosing the colors.
Optionally, in order to improve the accuracy and security of identity verification, a light combination with a preset code may also be used as the color mask; that is, the application program stored in the memory 502 may further implement the following functions:
Obtain a preset coded sequence, the coded sequence including multiple codes; according to a preset coding algorithm, determine the color corresponding to each code in turn according to the order of the codes in the coded sequence, to obtain a colour sequence; and generate a color mask based on the colour sequence so that the light projected by the color mask changes according to the colors indicated by the colour sequence.
The coded sequence may be randomly generated or configured according to the requirements of the actual application, and the preset coding algorithm may also depend on the requirements of the actual application; the coding algorithm can reflect the correspondence between each code in the coded sequence and the various colors, for example, red may represent the number -1, green may represent 0, and blue may represent 1, and so on. A sketch of this mapping is shown below.
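The following is a minimal sketch of generating a random coded sequence and mapping it to a colour sequence using the example correspondence above (red for -1, green for 0, blue for 1); the concrete RGB triples and the sequence length are illustrative assumptions.

import random

CODE_TO_COLOR = {
    -1: (255, 0, 0),   # red represents the number -1
     0: (0, 255, 0),   # green represents 0
     1: (0, 0, 255),   # blue represents 1
}

def generate_colour_sequence(length=8, seed=None):
    rng = random.Random(seed)
    coded_sequence = [rng.choice([-1, 0, 1]) for _ in range(length)]    # random coded sequence
    colour_sequence = [CODE_TO_COLOR[code] for code in coded_sequence]  # colour per code, in order
    return coded_sequence, colour_sequence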
If the color mask is generated according to a preset coded sequence, whether the reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence may also be determined by decoding the optical signal; for details, refer to the foregoing embodiments, which are not repeated here.
Optionally, in order to reduce the influence of numerical fluctuation caused by noise on the signal, after the image sequence is obtained, denoising processing may also be performed on the image sequence; that is, the application program stored in the memory 502 may further implement the following function:
Perform denoising processing on the image sequence.
For example, taking a Gaussian noise model as an example, multi-frame averaging at the same time instant in the sequence and/or multi-scale averaging may specifically be used to reduce the noise as much as possible, as in the sketch below.
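The following is a minimal sketch, assuming OpenCV and NumPy, of this kind of denoising for a Gaussian noise model: several frames captured for the same time instant are averaged, and a mild Gaussian blur stands in for a simple multi-scale average; the kernel size is an illustrative choice.

import cv2
import numpy as np

def denoise_same_instant(frames_at_same_instant, blur_kernel=5):
    # Multi-frame average: zero-mean Gaussian noise tends to cancel across frames.
    stacked = np.stack([f.astype(np.float32) for f in frames_at_same_instant])
    averaged = stacked.mean(axis=0)

    # A mild Gaussian blur as a simple stand-in for multi-scale averaging.
    smoothed = cv2.GaussianBlur(averaged, (blur_kernel, blur_kernel), 0)
    return smoothed.astype(np.uint8)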
For the specific implementation of each of the above operations, refer to the foregoing embodiments, which are not repeated here.
As can be seen from the above, when living body detection is required, the terminal of this embodiment can start a light source to project light to the detection object and perform image acquisition on the detection object, then determine whether a reflected light signal generated by the projected light exists on the surface of the detection object in the acquired image sequence, and, if it exists, identify the type of the object to which the image feature belongs by using a preset identification model; if the identification result indicates that the type of the object to which the image feature belongs is a living body, it is determined that the detection object is a living body. Since this solution requires no cumbersome interactive operation or computation with the user, the requirement on hardware configuration can be reduced. Moreover, the basis on which this solution makes the living body determination is the reflected light signal on the surface of the detection object, and the reflected light signals of a genuine living body and of a forged one (a carrier of a synthesized picture or video, such as a photograph, a mobile phone, or a tablet computer) are different; therefore, this solution can also effectively resist synthesized-face attacks and improve the accuracy of discrimination. In short, this solution can improve the living body detection effect, particularly under the limited hardware configuration of a mobile terminal, thereby improving the accuracy and security of identity verification at the terminal.
A person of ordinary skill in the art can understand that all or some of the steps in the various methods of the foregoing embodiments may be completed by instructions, or by instructions controlling relevant hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention further provides a storage medium storing a plurality of instructions that can be loaded by a processor to perform the steps in any living body detection method provided by the embodiments of the present invention. For example, the instructions may perform the following steps:
Receive a living body detection request; start a light source according to the living body detection request and project light to the detection object; then perform image acquisition on the detection object to obtain an image sequence; identify that a reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence, the reflected light signal forming an image feature on the surface of the detection object; identify the type of the object to which the image feature belongs by using a preset identification model; and, if the identification result indicates that the type of the object to which the image feature belongs is a living body, determine that the detection object is a living body.
There may be multiple manners of determining whether the reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence and of identifying the type of the object to which the image feature belongs; for details, refer to the foregoing embodiments, which are not repeated here.
There may also be multiple implementations of the light source; for example, it may be implemented by adjusting the brightness of the terminal screen, or by using other light-emitting components such as a flash lamp or an infrared emitter, or an external apparatus, or by setting a color mask on the display interface, and so on; that is, the instructions may perform the following steps:
Adjust the screen brightness according to the living body detection request so that the screen serves as the light source to project light to the detection object.
Alternatively, turn on a preset light-emitting component according to the living body detection request so that the light-emitting component serves as the light source to project light to the detection object, where the light-emitting component may include components such as a flash lamp or an infrared emitter.
Alternatively, start a color mask according to the living body detection request; for example, start a detection interface in which a color mask can flash, the color mask serving as the light source to project light to the detection object.
The region in which the color mask flashes may depend on the requirements of the actual application; for example, the detection interface may include a detection region and a non-detection region, where the detection region is mainly used to show the monitoring situation, and the color mask may flash in the non-detection region, the color mask serving as the light source to project light to the detection object, and so on.
In addition, it should be noted that parameters of the color mask, such as color and transparency, may be configured according to the requirements of the actual application; the color mask may be preset by the system and directly retrieved when the detection interface is started, or it may be automatically generated after the living body detection request is received, that is, the instructions may further perform the following step:
Generate a color mask so that the light projected by the color mask can be changed according to a preset rule; for example, a light combination with a preset code may specifically be used as the color mask, and so on.
For the specific implementation of each of the above operations, refer to the foregoing embodiments, which are not repeated here.
The storage medium may include a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or the like.
Since the instructions stored in the storage medium can perform the steps in any living body detection method provided by the embodiments of the present invention, the beneficial effects achievable by any living body detection method provided by the embodiments of the present invention can be achieved; for details, refer to the foregoing embodiments, which are not repeated here.
The living body detection method, apparatus, and storage medium provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for a person skilled in the art, there will be changes in the specific implementations and application scope according to the idea of the present invention. In conclusion, the content of this specification should not be construed as a limitation on the present invention.

Claims (19)

  1. A living body detection method, characterized by comprising:
    receiving a living body detection request;
    starting a light source according to the living body detection request, and projecting light to a detection object;
    performing image acquisition on the detection object to obtain an image sequence;
    identifying that a reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence, the reflected light signal forming an image feature on the surface of the detection object;
    identifying the type of the object to which the image feature belongs by using a preset identification model, the preset identification model being formed by training with multiple feature samples, and the feature samples being image features formed by the reflected light signal on the surfaces of objects of labeled types; and
    if the identification result indicates that the type of the object to which the image feature belongs is a living body, determining that the detection object is a living body.
  2. The method according to claim 1, characterized in that starting a light source according to the living body detection request comprises:
    starting a preset color mask according to the living body detection request, the color mask serving as the light source to project light to the detection object.
  3. The method according to claim 2, characterized in that starting a preset color mask according to the living body detection request comprises:
    starting a detection interface according to the living body detection request, the detection interface comprising a non-detection region in which a color mask flashes, the color mask serving as the light source to project light to the detection object.
  4. The method according to claim 2, characterized in that, after receiving the living body detection request, the method further comprises:
    generating a color mask so that the light projected by the color mask can be changed according to a preset rule.
  5. The method according to claim 4, characterized in that, after generating the color mask, the method further comprises:
    for light of the same color, obtaining a preset screen brightness adjustment parameter, and adjusting the screen brightness of the light of the same color before and after the change according to the screen brightness adjustment parameter, so as to adjust the change intensity of the light;
    for light of different colors, obtaining a preset color difference adjustment parameter, and adjusting the color difference of the light of different colors before and after the change according to the color difference adjustment parameter, so as to adjust the change intensity of the light.
  6. The method according to claim 4, characterized in that generating a color mask so that the light projected by the color mask can be changed according to a preset rule comprises:
    obtaining a preset coded sequence, the coded sequence comprising multiple codes;
    determining, according to a preset coding algorithm, the color corresponding to each code in turn according to the order of the codes in the coded sequence, to obtain a colour sequence; and
    generating a color mask based on the colour sequence so that the light projected by the color mask changes according to the colors indicated by the colour sequence.
  7. The method according to any one of claims 1 to 6, characterized in that identifying that a reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence comprises:
    performing regression analysis on the change of frames in the image sequence to obtain a regression result; and
    identifying, according to the regression result, that a reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence.
  8. The method according to claim 7, characterized in that identifying, according to the regression result, that a reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence comprises:
    determining whether the regression result is greater than a preset threshold;
    if so, determining that a reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence;
    if not, determining that no reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence.
  9. The method according to claim 7, characterized in that identifying, according to the regression result, that a reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence comprises:
    performing classification analysis on the regression result by a preset global feature algorithm or a preset identification model;
    if the analysis result indicates that the inter-frame change of the surface of the detection object is greater than a set value, determining that a reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence;
    if the analysis result indicates that the inter-frame change of the surface of the detection object is not greater than the set value, determining that no reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence.
  10. The method according to claim 9, characterized in that performing classification analysis on the regression result by a preset global feature algorithm or a preset identification model comprises:
    analyzing the regression result to judge whether a reflected light signal generated by the projected light exists in the image sequence;
    if no reflected light signal generated by the projected light exists, generating an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value;
    if a reflected light signal generated by the projected light exists, judging, by the preset global feature algorithm or the preset identification model, whether the reflector of the existing reflected light information is the detection object; if it is the detection object, generating an analysis result indicating that the inter-frame change of the surface of the detection object is greater than the set value; and if it is not the detection object, generating an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value.
  11. The method according to claim 9, characterized in that performing classification analysis on the regression result by a preset global feature algorithm or a preset identification model comprises:
    classifying the images in the image sequence by the preset global feature algorithm or the preset identification model, so as to filter out the frames in which the detection object exists and obtain candidate frames;
    analyzing the inter-frame difference of the candidate frames to judge whether a reflected light signal generated by the projected light exists on the surface of the detection object;
    if no reflected light signal generated by the projected light exists, generating an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value;
    if a reflected light signal generated by the projected light exists, generating an analysis result indicating that the inter-frame change of the surface of the detection object is greater than the set value.
  12. The method according to claim 7, characterized in that the color mask is generated according to a preset coded sequence, and identifying, according to the regression result, that a reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence comprises:
    decoding the image sequence according to a preset decoding algorithm based on the regression result, to obtain a decoded sequence;
    determining whether the decoded sequence matches the coded sequence;
    if they match, determining that a reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence;
    if they do not match, determining that no reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence.
  13. The method according to claim 12, characterized in that the regression result is the chrominance/luminance change relative value between the frames corresponding to before and after a change of the projected light, and decoding the image sequence according to a preset decoding algorithm based on the regression result to obtain a decoded sequence comprises:
    calculating, by using the preset decoding algorithm, the chrominance/luminance change relative values in the image sequence in turn, to obtain the chrominance/luminance absolute value of the frames corresponding to before and after each change of the projected light; and
    using the obtained chrominance/luminance absolute values as the decoded sequence, or changing the obtained chrominance/luminance absolute values according to a preset strategy to obtain the decoded sequence.
  14. The method according to claim 7, characterized in that performing regression analysis on the change of frames in the image sequence to obtain a regression result comprises:
    when it is determined that the degree of positional change of the detection object is less than a preset change value, obtaining from the image sequence the chrominance/luminance of the frames corresponding to before and after a change of the projected light, calculating the chrominance/luminance by a preset regression function to obtain the chrominance/luminance change relative value between the frames corresponding to before and after the change of the projected light, and using the chrominance/luminance change relative value as the regression result.
  15. A living body detection apparatus, characterized by comprising:
    a receiving unit, configured to receive a living body detection request;
    a start unit, configured to start a light source according to the living body detection request, and project light to a detection object;
    an acquisition unit, configured to perform image acquisition on the detection object to obtain an image sequence; and
    a detection unit, configured to identify that a reflected light signal generated by the projected light exists on the surface of the detection object in the image sequence, the reflected light signal forming an image feature on the surface of the detection object, and identify the type of the object to which the image feature belongs by using a preset identification model, the preset identification model being formed by training with multiple feature samples, and the feature samples being image features formed by the reflected light signal on the surfaces of objects of labeled types; and, if the identification result indicates that the type of the object to which the image feature belongs is a living body, determine that the detection object is a living body.
  16. The apparatus according to claim 15, characterized in that
    the start unit is specifically configured to start a preset color mask according to the living body detection request, the color mask serving as the light source to project light to the detection object.
  17. The apparatus according to claim 16, characterized in that
    the start unit is specifically configured to start a detection interface according to the living body detection request, the detection interface comprising a non-detection region in which a color mask flashes, the color mask serving as the light source to project light to the detection object.
  18. The apparatus according to claim 16, characterized in that the apparatus further comprises a generation unit;
    the generation unit is configured to generate a color mask so that the light projected by the color mask can be changed according to a preset rule.
  19. A storage medium, characterized in that the storage medium stores a plurality of instructions suitable for being loaded by a processor to perform the steps in the living body detection method according to any one of claims 1 to 14.
CN201711012244.1A 2016-12-30 2017-10-26 A kind of biopsy method, device and storage medium Active CN107992794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/111218 WO2019080797A1 (en) 2016-12-30 2018-10-22 Living body detection method, terminal, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611257052 2016-12-30
CN2016112570522 2016-12-30

Publications (2)

Publication Number Publication Date
CN107992794A true CN107992794A (en) 2018-05-04
CN107992794B CN107992794B (en) 2019-05-28

Family

ID=62031297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711012244.1A Active CN107992794B (en) 2016-12-30 2017-10-26 A kind of biopsy method, device and storage medium

Country Status (2)

Country Link
CN (1) CN107992794B (en)
WO (2) WO2018121428A1 (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832712A (en) * 2017-11-13 2018-03-23 深圳前海微众银行股份有限公司 Biopsy method, device and computer-readable recording medium
CN109101881A (en) * 2018-07-06 2018-12-28 华中科技大学 A kind of real-time blink detection method based on multiple dimensioned timing image
CN109376592A (en) * 2018-09-10 2019-02-22 阿里巴巴集团控股有限公司 Biopsy method, device and computer readable storage medium
CN109660745A (en) * 2018-12-21 2019-04-19 深圳前海微众银行股份有限公司 Video recording method, device, terminal and computer readable storage medium
WO2019080797A1 (en) * 2016-12-30 2019-05-02 腾讯科技(深圳)有限公司 Living body detection method, terminal, and storage medium
CN109961025A (en) * 2019-03-11 2019-07-02 烟台市广智微芯智能科技有限责任公司 A kind of true and false face recognition detection method and detection system based on image degree of skewness
CN110414346A (en) * 2019-06-25 2019-11-05 北京迈格威科技有限公司 Biopsy method, device, electronic equipment and storage medium
CN110516644A (en) * 2019-08-30 2019-11-29 深圳前海微众银行股份有限公司 A kind of biopsy method and device
CN110688946A (en) * 2019-09-26 2020-01-14 上海依图信息技术有限公司 Public cloud silence in-vivo detection device and method based on picture identification
CN111126229A (en) * 2019-12-17 2020-05-08 中国建设银行股份有限公司 Data processing method and device
CN111274928A (en) * 2020-01-17 2020-06-12 腾讯科技(深圳)有限公司 Living body detection method and device, electronic equipment and storage medium
CN111310515A (en) * 2018-12-11 2020-06-19 上海耕岩智能科技有限公司 Code mask biological characteristic analysis method, storage medium and neural network
CN111310575A (en) * 2020-01-17 2020-06-19 腾讯科技(深圳)有限公司 Face living body detection method, related device, equipment and storage medium
CN111310514A (en) * 2018-12-11 2020-06-19 上海耕岩智能科技有限公司 Method for reconstructing biological characteristics of coded mask and storage medium
WO2020151489A1 (en) * 2019-01-25 2020-07-30 杭州海康威视数字技术股份有限公司 Living body detection method based on facial recognition, and electronic device and storage medium
CN111783640A (en) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Detection method, device, equipment and storage medium
CN111899232A (en) * 2020-07-20 2020-11-06 广西大学 Method for nondestructive testing of bamboo-wood composite container bottom plate by utilizing image processing
CN111914763A (en) * 2020-08-04 2020-11-10 网易(杭州)网络有限公司 Living body detection method and device and terminal equipment
WO2020259128A1 (en) * 2019-06-28 2020-12-30 北京旷视科技有限公司 Liveness detection method and apparatus, electronic device, and computer readable storage medium
CN112183156A (en) * 2019-07-02 2021-01-05 杭州海康威视数字技术股份有限公司 Living body detection method and equipment
CN112528909A (en) * 2020-12-18 2021-03-19 平安银行股份有限公司 Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
CN112818900A (en) * 2020-06-08 2021-05-18 支付宝实验室(新加坡)有限公司 Face activity detection system, device and method
CN112818782A (en) * 2021-01-22 2021-05-18 电子科技大学 Generalized silence living body detection method based on medium sensing
CN113807159A (en) * 2020-12-31 2021-12-17 京东科技信息技术有限公司 Face recognition processing method, device, equipment and storage medium thereof
CN113837930A (en) * 2021-09-24 2021-12-24 重庆中科云从科技有限公司 Face image synthesis method and device and computer readable storage medium
CN113869219A (en) * 2021-09-29 2021-12-31 平安银行股份有限公司 Face living body detection method, device, equipment and storage medium
WO2023061123A1 (en) * 2021-10-15 2023-04-20 北京眼神科技有限公司 Facial silent living body detection method and apparatus, and storage medium and device
WO2023197739A1 (en) * 2022-04-14 2023-10-19 京东科技信息技术有限公司 Living body detection method and apparatus, system, electronic device and computer-readable medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969077A (en) * 2019-09-16 2020-04-07 成都恒道智融信息技术有限公司 Living body detection method based on color change
CN113298747A (en) * 2020-02-19 2021-08-24 北京沃东天骏信息技术有限公司 Picture and video detection method and device
CN111444831B (en) * 2020-03-25 2023-03-21 深圳中科信迅信息技术有限公司 Method for recognizing human face through living body detection
CN111797735A (en) * 2020-06-22 2020-10-20 深圳壹账通智能科技有限公司 Face video recognition method, device, equipment and storage medium
WO2023221996A1 (en) * 2022-05-16 2023-11-23 北京旷视科技有限公司 Living body detection method, electronic device, storage medium, and program product

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105260731A (en) * 2015-11-25 2016-01-20 商汤集团有限公司 Human face living body detection system and method based on optical pulses
CN105612533A (en) * 2015-06-08 2016-05-25 北京旷视科技有限公司 In-vivo detection method, in-vivo detection system and computer programe products

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016197297A1 (en) * 2015-06-08 2016-12-15 北京旷视科技有限公司 Living body detection method, living body detection system and computer program product
CN104951769B (en) * 2015-07-02 2018-11-30 京东方科技集团股份有限公司 Vivo identification device, vivo identification method and living body authentication system
CN105912986B (en) * 2016-04-01 2019-06-07 北京旷视科技有限公司 A kind of biopsy method and system
CN106529512B (en) * 2016-12-15 2019-09-10 北京旷视科技有限公司 Living body faces verification method and device
CN107992794B (en) * 2016-12-30 2019-05-28 腾讯科技(深圳)有限公司 A kind of biopsy method, device and storage medium
CN107273794A (en) * 2017-04-28 2017-10-20 北京建筑大学 Live body discrimination method and device in a kind of face recognition process
CN107220635A (en) * 2017-06-21 2017-09-29 北京市威富安防科技有限公司 Human face in-vivo detection method based on many fraud modes

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105612533A (en) * 2015-06-08 2016-05-25 北京旷视科技有限公司 In-vivo detection method, in-vivo detection system and computer programe products
CN105260731A (en) * 2015-11-25 2016-01-20 商汤集团有限公司 Human face living body detection system and method based on optical pulses

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019080797A1 (en) * 2016-12-30 2019-05-02 腾讯科技(深圳)有限公司 Living body detection method, terminal, and storage medium
CN107832712A (en) * 2017-11-13 2018-03-23 深圳前海微众银行股份有限公司 Biopsy method, device and computer-readable recording medium
CN109101881A (en) * 2018-07-06 2018-12-28 华中科技大学 A kind of real-time blink detection method based on multiple dimensioned timing image
CN109101881B (en) * 2018-07-06 2021-08-20 华中科技大学 Real-time blink detection method based on multi-scale time sequence image
US11210541B2 (en) 2018-09-10 2021-12-28 Advanced New Technologies Co., Ltd. Liveness detection method, apparatus and computer-readable storage medium
US11093773B2 (en) 2018-09-10 2021-08-17 Advanced New Technologies Co., Ltd. Liveness detection method, apparatus and computer-readable storage medium
CN109376592B (en) * 2018-09-10 2021-04-27 创新先进技术有限公司 Living body detection method, living body detection device, and computer-readable storage medium
CN113408403A (en) * 2018-09-10 2021-09-17 创新先进技术有限公司 Living body detection method, living body detection device, and computer-readable storage medium
CN109376592A (en) * 2018-09-10 2019-02-22 阿里巴巴集团控股有限公司 Biopsy method, device and computer readable storage medium
CN111310514A (en) * 2018-12-11 2020-06-19 上海耕岩智能科技有限公司 Method for reconstructing biological characteristics of coded mask and storage medium
CN111310515A (en) * 2018-12-11 2020-06-19 上海耕岩智能科技有限公司 Code mask biological characteristic analysis method, storage medium and neural network
CN109660745A (en) * 2018-12-21 2019-04-19 深圳前海微众银行股份有限公司 Video recording method, device, terminal and computer readable storage medium
WO2020151489A1 (en) * 2019-01-25 2020-07-30 杭州海康威视数字技术股份有限公司 Living body detection method based on facial recognition, and electronic device and storage medium
US11830230B2 (en) 2019-01-25 2023-11-28 Hangzhou Hikvision Digital Technology Co., Ltd. Living body detection method based on facial recognition, and electronic device and storage medium
CN109961025A (en) * 2019-03-11 2019-07-02 Yantai Guangzhi Weixin Intelligent Technology Co., Ltd. A kind of true and false face recognition detection method and detection system based on image degree of skewness
CN109961025B (en) * 2019-03-11 2020-01-24 Yantai Guangzhi Weixin Intelligent Technology Co., Ltd. True and false face identification and detection method and detection system based on image skewness
CN110414346A (en) * 2019-06-25 2019-11-05 Beijing Megvii Technology Co., Ltd. Biopsy method, device, electronic equipment and storage medium
WO2020259128A1 (en) * 2019-06-28 2020-12-30 Beijing Kuangshi Technology Co., Ltd. Liveness detection method and apparatus, electronic device, and computer readable storage medium
CN112183156B (en) * 2019-07-02 2023-08-11 Hangzhou Hikvision Digital Technology Co., Ltd. Living body detection method and equipment
CN112183156A (en) * 2019-07-02 2021-01-05 Hangzhou Hikvision Digital Technology Co., Ltd. Living body detection method and equipment
CN110516644A (en) * 2019-08-30 2019-11-29 Shenzhen Qianhai WeBank Co., Ltd. A kind of biopsy method and device
CN110688946A (en) * 2019-09-26 2020-01-14 Shanghai Yitu Information Technology Co., Ltd. Public cloud silence in-vivo detection device and method based on picture identification
CN111126229A (en) * 2019-12-17 2020-05-08 China Construction Bank Corporation Data processing method and device
CN111274928A (en) * 2020-01-17 2020-06-12 Tencent Technology (Shenzhen) Co., Ltd. Living body detection method and device, electronic equipment and storage medium
CN111310575A (en) * 2020-01-17 2020-06-19 Tencent Technology (Shenzhen) Co., Ltd. Face living body detection method, related device, equipment and storage medium
WO2021143266A1 (en) * 2020-01-17 2021-07-22 Tencent Technology (Shenzhen) Co., Ltd. Method and apparatus for detecting living body, electronic device and storage medium
CN111274928B (en) * 2020-01-17 2023-04-07 Tencent Technology (Shenzhen) Co., Ltd. Living body detection method and device, electronic equipment and storage medium
CN111310575B (en) * 2020-01-17 2022-07-08 Tencent Technology (Shenzhen) Co., Ltd. Face living body detection method, related device, equipment and storage medium
CN112818900A (en) * 2020-06-08 2021-05-18 Alipay Labs (Singapore) Pte. Ltd. Face activity detection system, device and method
CN111783640A (en) * 2020-06-30 2020-10-16 Beijing Baidu Netcom Science and Technology Co., Ltd. Detection method, device, equipment and storage medium
CN111899232B (en) * 2020-07-20 2023-07-04 Guangxi University Method for nondestructive detection of bamboo-wood composite container bottom plate by image processing
CN111899232A (en) * 2020-07-20 2020-11-06 Guangxi University Method for nondestructive testing of bamboo-wood composite container bottom plate by utilizing image processing
CN111914763B (en) * 2020-08-04 2023-11-28 NetEase (Hangzhou) Network Co., Ltd. Living body detection method, living body detection device and terminal equipment
CN111914763A (en) * 2020-08-04 2020-11-10 NetEase (Hangzhou) Network Co., Ltd. Living body detection method and device and terminal equipment
CN112528909A (en) * 2020-12-18 2021-03-19 Ping An Bank Co., Ltd. Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
CN112528909B (en) * 2020-12-18 2024-05-21 Ping An Bank Co., Ltd. Living body detection method, living body detection device, electronic equipment and computer readable storage medium
CN113807159A (en) * 2020-12-31 2021-12-17 Jingdong Technology Information Technology Co., Ltd. Face recognition processing method, device, equipment and storage medium thereof
CN112818782A (en) * 2021-01-22 2021-05-18 University of Electronic Science and Technology of China Generalized silence living body detection method based on medium sensing
CN113837930A (en) * 2021-09-24 2021-12-24 Chongqing Zhongke Yuncong Technology Co., Ltd. Face image synthesis method and device and computer readable storage medium
CN113837930B (en) * 2021-09-24 2024-02-02 Chongqing Zhongke Yuncong Technology Co., Ltd. Face image synthesis method, device and computer readable storage medium
CN113869219A (en) * 2021-09-29 2021-12-31 Ping An Bank Co., Ltd. Face living body detection method, device, equipment and storage medium
CN113869219B (en) * 2021-09-29 2024-05-21 Ping An Bank Co., Ltd. Face living body detection method, device, equipment and storage medium
WO2023061123A1 (en) * 2021-10-15 2023-04-20 Beijing Eyecool Technology Co., Ltd. Facial silent living body detection method and apparatus, and storage medium and device
WO2023197739A1 (en) * 2022-04-14 2023-10-19 Jingdong Technology Information Technology Co., Ltd. Living body detection method and apparatus, system, electronic device and computer-readable medium

Also Published As

Publication number Publication date
CN107992794B (en) 2019-05-28
WO2019080797A1 (en) 2019-05-02
WO2018121428A1 (en) 2018-07-05

Similar Documents

Publication Publication Date Title
CN107992794B (en) A kind of biopsy method, device and storage medium
US10579853B2 (en) Method and apparatus for acquiring fingerprint image and terminal device
CN110321790B (en) Method for detecting countermeasure sample and electronic equipment
US11074466B2 (en) Anti-counterfeiting processing method and related products
CN111274928B (en) Living body detection method and device, electronic equipment and storage medium
CN106557678B (en) A kind of intelligent terminal mode switching method and its device
CN110689500B (en) Face image processing method and device, electronic equipment and storage medium
CN107437009B (en) Authority control method and related product
WO2019052329A1 (en) Facial recognition method and related product
CN102193962B (en) Matching device, digital image processing system, and matching device control method
CN107463818B (en) Unlocking control method and related product
CN107292290B (en) Face living body identification method and related product
CN104143078A (en) Living body face recognition method and device and equipment
WO2015127313A1 (en) Multi-band biometric camera system having iris color recognition
EP3623973B1 (en) Unlocking control method and related product
CN104598870A (en) Living fingerprint detection method based on intelligent mobile information equipment
CN108494996B (en) Image processing method, device, storage medium and mobile terminal
CN107995422A (en) Image capturing method and device, computer equipment, computer-readable recording medium
CN107506697B (en) Anti-counterfeiting processing method and related product
CN112446252A (en) Image recognition method and electronic equipment
CN113591517A (en) Living body detection method and related equipment
WO2017185728A1 (en) Method and device for identifying key operation
WO2019011106A1 (en) State control method and related product
CN111310575A (en) Face living body detection method, related device, equipment and storage medium
CN107729833B (en) Face detection method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant