CN106203305A - Face liveness detection method and device - Google Patents

Face liveness detection method and device

Info

Publication number
CN106203305A
CN106203305A (application CN201610511947.8A)
Authority
CN
China
Prior art keywords
image
fill light
face
light
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610511947.8A
Other languages
Chinese (zh)
Other versions
CN106203305B (en)
Inventor
Fan Haoqiang
Yin Qi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Beijing Aperture Science and Technology Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Beijing Aperture Science and Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd and Beijing Aperture Science and Technology Ltd
Priority to CN201610511947.8A
Publication of CN106203305A
Application granted
Publication of CN106203305B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a face liveness detection method and device. The face liveness detection method includes: acquiring a no-fill-light image of a face object to be detected, captured without fill light, and a plurality of fill-light images captured while multiple groups of light sources of similar brightness each provide fill light in turn; computing a highlight-free image and a highlight image of the face object from the no-fill-light image and the fill-light images; merging the highlight-free image and the highlight image into a multi-channel image and feeding the multi-channel image into a trained convolutional neural network; and determining, from the output of the convolutional neural network, whether the face object is a live subject. The face liveness detection method and device according to embodiments of the present invention exploit the fact that the specular highlight characteristics of a real live face differ from those of masks, screens and the like, so that a real live face can be distinguished from masks, screens and the like, thereby achieving face liveness detection.

Description

Face liveness detection method and device
Technical Field
The present invention relates to the technical field of face recognition, and more particularly to a face liveness detection method and device.
Background
Face recognition is a biometric identification technology that identifies a person based on facial feature information. Face recognition typically uses a video camera or camera to capture images or a video stream containing a face, detects and tracks the face in the images, and then performs face recognition on the detected face.
Face recognition is widely used in many fields. However, because face information can be copied and misappropriated, an impostor may impersonate a genuine face with a face mask, a face displayed on a screen, and so on, which leaves face recognition with a hidden security risk. In particular, face recognition is increasingly applied to security and financial scenarios that require identity verification, such as remote bank account opening, access control systems and remote transaction authorization; in these high-security applications, security attacks such as face masks must be guarded against.
Summary
The present invention is proposed in view of the above problems. The invention provides a face liveness detection method and device that synthesize a highlight-free image and a highlight image from a no-fill-light image and a plurality of images (fill-light images) captured under fill light from multiple groups of light sources, form a multi-channel image from the highlight-free image and the highlight image (the multi-channel image contains the specular highlight components), and exploit the difference between the highlight characteristics of a real live face and those of masks, screens and the like to distinguish a real live face from masks, screens and the like.
According to one aspect of the present invention, a face liveness detection method is provided. The face liveness detection method includes: acquiring a no-fill-light image of a face object to be detected, captured without fill light, and a plurality of fill-light images captured while multiple groups of light sources of similar brightness each provide fill light in turn; computing a highlight-free image and a highlight image of the face object from the no-fill-light image and the fill-light images; merging the highlight-free image and the highlight image into a multi-channel image and feeding the multi-channel image into a trained convolutional neural network; and determining, from the output of the convolutional neural network, whether the face object is a live subject.
In one embodiment of the present invention, some of the light source groups lie in different angular ranges relative to the face object than others, and computing the highlight-free image and the highlight image of the face object from the no-fill-light image and the fill-light images includes: computing the highlight-free image of the face object from the difference between the minimum pixel value at each position across all fill-light images and the pixel value at the corresponding position of the no-fill-light image; and computing a highlight image of the face object from the difference between the maximum and minimum pixel values at each position across the fill-light images captured under fill light from light source groups within the same angular range.
For example, determining from the output of the convolutional neural network whether the face object is a live subject includes: determining that the face object is a live subject when the output of the convolutional neural network is greater than a predetermined threshold; and determining that the face object is a non-live object when the output of the convolutional neural network is less than or equal to the predetermined threshold.
In one embodiment of the present invention, the face liveness detection method further includes: after the no-fill-light image and the fill-light images are acquired, aligning the fill-light images with the no-fill-light image, in which case the highlight-free image and the highlight image of the face object are computed from the no-fill-light image and the aligned fill-light images.
For example, aligning the fill-light images with the no-fill-light image includes: performing face detection and facial keypoint detection on the no-fill-light image and the fill-light images; and aligning the fill-light images with the no-fill-light image based on the positions of the facial keypoints.
According to another aspect, the present invention also provides a face liveness detection device. The face liveness detection device includes: an acquisition module for acquiring a no-fill-light image of a face object to be detected, captured without fill light, and a plurality of fill-light images captured while multiple groups of light sources of similar brightness each provide fill light in turn; a computation module for computing a highlight-free image and a highlight image of the face object from the no-fill-light image and the fill-light images; a synthesis module for merging the highlight-free image and the highlight image into a multi-channel image and feeding the multi-channel image into a trained convolutional neural network; and an analysis module for determining, from the output of the convolutional neural network, whether the face object is a live subject.
In one embodiment of the present invention, some of the light source groups lie in different angular ranges relative to the face object than others, and the computation module is further configured to: compute the highlight-free image of the face object from the difference between the minimum pixel value at each position across all fill-light images and the pixel value at the corresponding position of the no-fill-light image; and compute a highlight image of the face object from the difference between the maximum and minimum pixel values at each position across the fill-light images captured under fill light from light source groups within the same angular range.
For example, the analysis module is further configured to: determine that the face object is a live subject when the output of the convolutional neural network is greater than a predetermined threshold; and determine that the face object is a non-live object when the output of the convolutional neural network is less than or equal to the predetermined threshold.
In one embodiment of the present invention, the face liveness detection device further includes an alignment module for aligning the fill-light images acquired by the acquisition module with the no-fill-light image, in which case the computation module computes the highlight-free image and the highlight image of the face object from the no-fill-light image and the aligned fill-light images.
For example, the alignment module is further configured to: perform face detection and facial keypoint detection on the no-fill-light image and the fill-light images; and align the fill-light images with the no-fill-light image based on the positions of the facial keypoints.
The face liveness detection method and device according to embodiments of the present invention synthesize a highlight-free image and a highlight image from a no-fill-light image and a plurality of images captured under fill light from multiple groups of light sources, form a multi-channel image from the highlight-free image and the highlight image (the multi-channel image contains the specular highlight components), and exploit the difference between the highlight characteristics of a real live face and those of masks, screens and the like to distinguish a real live face from masks, screens and the like, thereby achieving face liveness detection.
Brief Description of the Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description of embodiments of the present invention taken in conjunction with the accompanying drawings. The drawings are provided for a further understanding of the embodiments of the present invention, form a part of the specification, and serve to explain the present invention together with the embodiments; they do not limit the present invention. In the drawings, the same reference numerals generally denote the same components or steps.
Fig. 1 is a schematic block diagram of an exemplary electronic device for implementing the face liveness detection method and device according to embodiments of the present invention;
Fig. 2 is a schematic flowchart of a face liveness detection method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of an image capture device used in the face liveness detection method according to an embodiment of the present invention and of the light sources distributed around it;
Fig. 4 is a schematic flowchart of a face liveness detection method according to another embodiment of the present invention;
Fig. 5 is a schematic block diagram of a face liveness detection device according to an embodiment of the present invention;
Fig. 6 is a schematic block diagram of a face liveness detection device according to another embodiment of the present invention; and
Fig. 7 is a schematic block diagram of a face liveness detection system according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, example embodiments according to the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. Based on the embodiments of the present invention described herein, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present invention.
Under a given lighting environment, an optical image formed by reflected light often contains local specular highlight regions; for example, the surface of a face produces specular highlight components due to the presence of the stratum corneum. Differences in surface material lead to differences in the intensity and angular response range of the highlight components. Therefore, a real live face can be distinguished from masks, screens and the like based on the difference between its specular highlight characteristics and those of imitation masks, on-screen faces and the like. Based on this principle, the present invention provides a face liveness detection method, device and so on, to guard against attacks such as face masks in face recognition systems used in high-security applications. Embodiments of the present invention are described below with reference to the accompanying drawings.
First, an exemplary electronic device 100 for implementing the face liveness detection method and device according to embodiments of the present invention is described with reference to Fig. 1.
As shown in Fig. 1, the electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108 and an image sensor 110, which are interconnected by a bus system 112 and/or a connection mechanism of another form (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are exemplary rather than limiting; the electronic device may have other components and structures as required.
The processor 102 may be a central processing unit (CPU) or a processing unit of another form having data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may run the program instructions to implement the client functionality (implemented by the processor) in the embodiments of the present invention described below and/or other desired functionality. Various application programs and various data, such as data used and/or produced by the application programs, may also be stored on the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen and the like.
The output device 108 may output various information (such as images or sound) to the outside (for example, to the user), and may include one or more of a display, a speaker and the like.
The image sensor 110 may capture images desired by the user (such as photos and videos) and store the captured images in the storage device 104 for use by other components.
For example, the exemplary electronic device for implementing the face liveness detection method and device according to embodiments of the present invention may be implemented as a smartphone, a tablet computer, the image capture end of an access control system, and the like.
A face liveness detection method 200 according to an embodiment of the present invention is described below with reference to Fig. 2.
In step S210, a no-fill-light image of a face object to be detected, captured without fill light, and a plurality of fill-light images captured while multiple groups of light sources of similar brightness each provide fill light in turn, are acquired.
In one example, the images may be obtained from an image capture device, which may be, for example, the image capture device on a smart device such as a mobile phone or a tablet, e.g. a camera.
The image capture device may capture an image of the face object to be detected without fill light, i.e. the no-fill-light image, denoted for example as I0. In addition, the image capture device may capture a plurality of fill-light images while multiple groups of light sources of similar brightness (for example, the brightness difference between groups is below a preset threshold, or the brightness is judged by eye to be similar) each provide fill light in turn.
In one example, the image capture device captures four images, one under the fill light of each of four groups of light sources (for example, as shown in Fig. 3). In Fig. 3, although the image capture device is shown as a camera, those skilled in the art will appreciate that other image capture devices may also be used. In Fig. 3, the first group of light sources includes light-emitting devices (for example, light-emitting diodes) LED_1_1 and LED_1_2, the second group includes light-emitting devices LED_2_1 and LED_2_2, the third group includes light-emitting devices LED_3_1 and LED_3_2, and the fourth group includes light-emitting devices LED_4_1 and LED_4_2. In a simple example, the four groups contain the same number of light-emitting devices (two each), and every light-emitting device has the same brightness. In addition, the mean positions, relative to the camera, of the light-emitting devices in the different groups are close to one another (for example, the midpoints of the line segments joining the two light-emitting devices of each group are close to, or even coincide with, one another), while the light-emitting devices as a whole are spread out around the camera. In one example, the light-emitting devices of the first and second groups are arranged alternately, and likewise for the third and fourth groups.
Those skilled in the art will appreciate that although four groups of light sources are shown in Fig. 3, an appropriate number of light sources may be provided as required. Moreover, the arrangement of the light-emitting devices relative to the camera shown in Fig. 3 is only exemplary; images convenient for subsequent computation and processing can be collected under this arrangement, but other arrangements may also be adopted as required or for special cases.
Based on the above example, the image capture device captures a fill-light image I1 of the face object to be detected under the fill light of the first group of light sources, a fill-light image I2 under the fill light of the second group, a fill-light image I3 under the fill light of the third group, and a fill-light image I4 under the fill light of the fourth group; a capture sequence along these lines is sketched below.
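The following sketch is only illustrative of this capture sequence; the LED-control and camera functions (set_fill_light, capture_frame) are hypothetical placeholders for whatever hardware interface is actually available, and are not part of the patent.

```python
# Illustrative capture sequence for I0 and I1..I4, under the Fig. 3 arrangement.
# set_fill_light() and capture_frame() are assumed, hypothetical hardware helpers.

def capture_image_set():
    """Capture the no-fill-light image I0 and one fill-light image per group."""
    set_fill_light(None)          # all LED groups off
    i0 = capture_frame()          # no-fill-light image I0

    fill_images = []
    for group in (1, 2, 3, 4):    # four groups of similar brightness
        set_fill_light(group)     # turn on only this group's LEDs
        fill_images.append(capture_frame())
    set_fill_light(None)
    return i0, fill_images        # I0 and [I1, I2, I3, I4]
```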
In this example, the no-fill-light image I0 of the object to be detected and the fill-light images I1, I2, I3 and I4 can thus be obtained in step S210.
In step S220, a highlight-free image and a highlight image of the face object are computed from the acquired no-fill-light image and fill-light images.
In one example, the highlight-free image of the face object may be computed from the difference between the minimum pixel value at each position across all fill-light images and the pixel value at the corresponding position of the no-fill-light image. Continuing the above example, if the highlight-free image to be computed is denoted S0, then S0 = min(I1, I2, I3, I4) - I0. A highlight image of the face object may be computed from the difference between the maximum and minimum pixel values at each position across the fill-light images captured under fill light from light source groups within the same angular range. The purpose of doing so is to obtain the specular highlight components of the face object over different angular response ranges. For example, in the above example, the light-emitting devices of the first and second groups lie within the same angular range relative to the face object, and likewise the light-emitting devices of the third and fourth groups lie within the same angular range relative to the face object. Two highlight images S1 and S2 can therefore be obtained, where S1 = max(I1, I2) - min(I1, I2) and S2 = max(I3, I4) - min(I3, I4). The highlight images S1 and S2 capture the highlight components of different angular response ranges (for example, the ratio of the corneal highlight to the nose highlight will differ between S1 and S2).
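As a concrete illustration of these formulas, the NumPy sketch below computes S0, S1 and S2 per pixel; it assumes the inputs are already aligned, same-sized, single-channel float arrays, and follows the Fig. 3 example in pairing (I1, I2) and (I3, I4) by angular range.

```python
import numpy as np

def highlight_decomposition(i0, i1, i2, i3, i4):
    """Compute the highlight-free image S0 and the highlight images S1, S2.

    Inputs are assumed to be aligned grayscale images of equal size,
    e.g. float arrays scaled to [0, 1].
    """
    stack = np.stack([i1, i2, i3, i4])            # all fill-light images
    s0 = np.min(stack, axis=0) - i0               # S0 = min(I1..I4) - I0

    # Groups 1+2 and 3+4 each lie in one angular range (cf. Fig. 3),
    # so each pair yields one highlight image.
    s1 = np.maximum(i1, i2) - np.minimum(i1, i2)  # S1 = max(I1,I2) - min(I1,I2)
    s2 = np.maximum(i3, i4) - np.minimum(i3, i4)  # S2 = max(I3,I4) - min(I3,I4)
    return s0, s1, s2
```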
In step S230, the computed highlight-free image and highlight images are merged into a multi-channel image, and the multi-channel image is fed into a trained convolutional neural network.
In this step, the highlight-free image and the highlight images of the face object to be detected are merged into a multi-channel image in which each pixel is represented by multiple values (for example, S0, S1 and S2 in the above example are merged into a three-channel image, in which each pixel is represented by three values, analogous to the values of the R, G and B color channels). As an example, this multi-channel image contains the specular highlight components. Feeding this multi-channel image into a trained convolutional neural network allows a complete analysis, based on the highlight components in the multi-channel image, of whether the face object to be detected is a live face. The convolutional neural network is trained on a large number of labeled image samples, which are labeled, for example, as face, mask and so on according to the distribution of the highlight components in the image.
In step S240, whether the face object is a live subject is determined from the output of the convolutional neural network.
In one example, the output of the convolutional neural network may be a value between 0 and 1. When the output of the convolutional neural network is greater than a predetermined threshold (for example 0.5), the face object is determined to be a live subject; otherwise, when the output is less than or equal to the predetermined threshold, the face object is determined to be a non-live object.
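The patent does not specify a network architecture, so the following PyTorch sketch only illustrates the data flow of steps S230 and S240: S0, S1 and S2 (assumed here to be 2-D torch tensors, e.g. converted from the NumPy arrays above) are stacked into a three-channel input, passed through a small, assumed CNN with a sigmoid output, and thresholded at 0.5.

```python
import torch
import torch.nn as nn

class LivenessCNN(nn.Module):
    """Minimal, assumed architecture: the patent only requires a trained CNN
    that maps the 3-channel (S0, S1, S2) image to a score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.classifier(self.features(x))

def is_live(model, s0, s1, s2, threshold=0.5):
    """Merge S0, S1, S2 into a multi-channel image and apply the threshold rule."""
    x = torch.stack([s0, s1, s2]).unsqueeze(0).float()  # shape (1, 3, H, W)
    with torch.no_grad():
        score = model(x).item()
    return score > threshold  # > threshold: live subject; otherwise: non-live
```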
The face liveness detection method 200 according to the embodiment of the present invention described above synthesizes a highlight-free image and highlight images from the no-fill-light image and the images captured under fill light from multiple groups of light sources, forms a multi-channel image from them (the multi-channel image contains the specular highlight components), and exploits the difference between the highlight characteristics of a real live face and those of masks, screens and the like to distinguish a real live face from masks, screens and the like, thereby achieving face liveness detection.
For example, the face liveness detection method according to embodiments of the present invention may be implemented in a unit or system having a memory and a processor.
The face liveness detection method according to embodiments of the present invention may be deployed at a face image acquisition end; for example, in security applications it may be deployed at the image acquisition end of an access control system, and in financial applications it may be deployed at a personal terminal such as a smartphone, a tablet computer or a personal computer.
Alternatively, the face liveness detection method according to embodiments of the present invention may also be deployed in a distributed manner across a server end (or the cloud) and a personal terminal.
For example, the no-fill-light image and the fill-light images may be captured at the personal terminal, and the personal terminal transmits the captured images to the server end (or the cloud); the server end (or the cloud) then computes the highlight-free image and the highlight images of the face object from the no-fill-light image and the fill-light images, merges them into a multi-channel image, feeds the multi-channel image into the trained convolutional neural network, and determines from the output of the convolutional neural network whether the face object is a live subject.
As another example, the no-fill-light image and the fill-light images may be captured at the personal terminal, which also computes the highlight-free image and the highlight images from them and merges them into a multi-channel image; the personal terminal then transmits the multi-channel image to the server end (or the cloud), which feeds the multi-channel image into the trained convolutional neural network and determines from its output whether the face object is a live subject.
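As a rough illustration of the second deployment option, the client could compute the multi-channel image locally and send it to a server for classification. The sketch below is only a sketch under stated assumptions: the endpoint URL, payload format and response field are all hypothetical, not prescribed by the patent.

```python
import io
import numpy as np
import requests

def send_for_liveness_check(s0, s1, s2, url="https://example.com/liveness"):
    """Serialize the (S0, S1, S2) multi-channel image and post it to a server.

    The server (or cloud) side would feed the received array into its trained
    CNN and return the liveness decision; `url` and the response format
    are placeholders.
    """
    multichannel = np.stack([s0, s1, s2]).astype(np.float32)  # shape (3, H, W)
    buf = io.BytesIO()
    np.save(buf, multichannel)
    response = requests.post(url, data=buf.getvalue(),
                             headers={"Content-Type": "application/octet-stream"})
    return response.json().get("is_live")
```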
A face liveness detection method 400 according to another embodiment of the present invention is described below with reference to Fig. 4.
In step S410, a no-fill-light image of a face object to be detected, captured without fill light, and a plurality of fill-light images captured while multiple groups of light sources of similar brightness each provide fill light in turn, are acquired.
Step S410 in this embodiment is similar to step S210 in the face liveness detection method 200 described in conjunction with Fig. 2 and, for brevity, is not described again here.
In step S420, the fill-light images are aligned with the no-fill-light image.
Aligning the fill-light images with the no-fill-light image can further improve the efficiency and accuracy of subsequent processing. In one example, this alignment step may further include: performing face detection and facial keypoint detection on the no-fill-light image and the fill-light images; and aligning the fill-light images with the no-fill-light image based on the positions of the facial keypoints. Registering the images makes the coordinates of each pixel on the face object roughly the same across all images, which reduces the complexity of subsequent computation and increases processing speed.
For example, a face may be detected in the no-fill-light image and in each fill-light image, a trained neural network may then be used to extract features and derive the positions of the keypoints from the extracted features, and the fill-light images may be aligned with the no-fill-light image based on the positions of the facial keypoints in the face images. For example, a KLT (Kanade-Lucas-Tomasi) tracker may be used to obtain the displacement of each fill-light image I1, I2, I3, I4 relative to I0, and I1, I2, I3 and I4 may then be translated accordingly so that the face in each image is aligned with that in I0. The aligned fill-light images are used for subsequent processing; a sketch of such an alignment step is given below.
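The following sketch is a minimal illustration of the KLT-based alignment, assuming 8-bit grayscale inputs; tracking generic corner points (instead of facial keypoints) and reducing the correction to a pure translation via the median displacement are simplifying assumptions, not the patent's prescribed procedure.

```python
import cv2
import numpy as np

def align_to_reference(i0_gray, fill_images_gray):
    """Translate each fill-light image so its face aligns with the one in I0.

    Points are tracked from I0 into each fill-light image with the KLT
    (pyramidal Lucas-Kanade) tracker, and the median point displacement is
    applied as a pure-translation correction.
    """
    pts0 = cv2.goodFeaturesToTrack(i0_gray, maxCorners=100,
                                   qualityLevel=0.01, minDistance=8)
    h, w = i0_gray.shape
    aligned = []
    for img in fill_images_gray:
        pts1, status, _ = cv2.calcOpticalFlowPyrLK(i0_gray, img, pts0, None)
        good = status.ravel() == 1
        shift = np.median(pts1[good] - pts0[good], axis=0).ravel()  # (dx, dy)
        m = np.float32([[1, 0, -shift[0]], [0, 1, -shift[1]]])      # undo the shift
        aligned.append(cv2.warpAffine(img, m, (w, h)))
    return aligned
```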
It should be understood that the present invention is not limited by the particular face detection and facial keypoint detection methods used; both existing face detection and facial keypoint detection methods and those developed in the future can be applied in the face liveness detection method according to embodiments of the present invention, and they should also be included in the protection scope of the present invention.
In step S430, a highlight-free image and a highlight image of the face object are computed from the no-fill-light image and the aligned fill-light images.
In step S440, the computed highlight-free image and highlight images are merged into a multi-channel image, and the multi-channel image is fed into a trained convolutional neural network.
In step S450, whether the face object is a live subject is determined from the output of the convolutional neural network.
Steps S430, S440 and S450 above are similar to steps S220, S230 and S240 in the face liveness detection method 200 described in conjunction with Fig. 2 and, for brevity, are not described again here.
The face liveness detection method 400 according to the embodiment of the present invention described above synthesizes a highlight-free image and highlight images from the no-fill-light image and the images captured under fill light from multiple groups of light sources, forms a multi-channel image from them (the multi-channel image contains the specular highlight components), and exploits the difference between the highlight characteristics of a real live face and those of masks, screens and the like to distinguish a real live face from masks, screens and the like, thereby achieving face liveness detection. In addition, aligning the fill-light images with the no-fill-light image after they are acquired can further improve the efficiency and accuracy of subsequent processing.
Fig. 5 shows a schematic block diagram of a face liveness detection device 500 according to an embodiment of the present invention.
As shown in Fig. 5, the face liveness detection device 500 according to the embodiment of the present invention includes an acquisition module 510, a computation module 520, a synthesis module 530 and an analysis module 540.
The acquisition module 510 is configured to acquire a no-fill-light image of a face object to be detected, captured without fill light, and a plurality of fill-light images captured while multiple groups of light sources of similar brightness each provide fill light in turn.
In one example, the acquisition module 510 may obtain the images from an image capture device, which may be, for example, the image capture device on a smart device such as a mobile phone or a tablet, e.g. a camera. The image capture device may capture an image of the face object to be detected without fill light, i.e. the no-fill-light image, denoted for example as I0. In addition, the image capture device may capture a plurality of fill-light images while multiple groups of light sources of similar brightness each provide fill light in turn. In one example, the image capture device captures four images, one under the fill light of each of four groups of light sources (for example, as shown in Fig. 3). Specifically, the image capture device captures a fill-light image I1 of the face object to be detected under the fill light of the first group of light sources, a fill-light image I2 under the fill light of the second group, a fill-light image I3 under the fill light of the third group, and a fill-light image I4 under the fill light of the fourth group.
The computation module 520 is configured to compute a highlight-free image and a highlight image of the face object from the no-fill-light image and the fill-light images.
In one example, the computation module 520 may compute the highlight-free image of the face object from the difference between the minimum pixel value at each position across all fill-light images and the pixel value at the corresponding position of the no-fill-light image. Continuing the above example, if the highlight-free image to be computed is denoted S0, then S0 = min(I1, I2, I3, I4) - I0. For a highlight image of the face object, the computation module 520 may compute the difference between the maximum and minimum pixel values at each position across the fill-light images captured under fill light from light source groups within the same angular range. The purpose of doing so is to obtain the specular highlight components of the face object over different angular response ranges. For example, in the example shown in Fig. 3 above, the light-emitting devices of the first and second groups lie within the same angular range relative to the face object, and likewise the light-emitting devices of the third and fourth groups lie within the same angular range relative to the face object. The computation module 520 can therefore compute highlight images S1 and S2, where S1 = max(I1, I2) - min(I1, I2) and S2 = max(I3, I4) - min(I3, I4). The highlight images S1 and S2 capture the highlight components of different angular response ranges (for example, the ratio of the corneal highlight to the nose highlight will differ between S1 and S2).
The synthesis module 530 is configured to merge the highlight-free image and the highlight images into a multi-channel image and to feed the multi-channel image into a trained convolutional neural network.
The synthesis module 530 merges the highlight-free image and the highlight images of the face object to be detected into a multi-channel image in which each pixel is represented by multiple values (for example, S0, S1 and S2 in the above example are merged into a three-channel image, in which each pixel is represented by three values, analogous to the values of the R, G and B color channels). As an example, this multi-channel image contains the specular highlight components. When the multi-channel image is fed into the trained convolutional neural network, the analysis module 540 can perform a complete analysis, based on the highlight components in the multi-channel image, of whether the face object to be detected is a live face.
The analysis module 540 is configured to determine, from the output of the convolutional neural network, whether the face object is a live subject.
In one example, the output of the convolutional neural network may be a value between 0 and 1. When the output of the convolutional neural network is greater than a predetermined threshold (for example 0.5), the analysis module 540 determines that the face object is a live subject; otherwise, when the output is less than or equal to the predetermined threshold, it determines that the face object is a non-live object.
The face liveness detection device 500 according to the embodiment of the present invention described above synthesizes a highlight-free image and highlight images from the no-fill-light image and the images captured under fill light from multiple groups of light sources, forms a multi-channel image from them (the multi-channel image contains the specular highlight components), and exploits the difference between the highlight characteristics of a real live face and those of masks, screens and the like to distinguish a real live face from masks, screens and the like, thereby achieving face liveness detection.
The acquisition module 510, the computation module 520, the synthesis module 530 and the analysis module 540 may all be implemented by the processor 102 in the electronic device shown in Fig. 1 running program instructions stored in the storage device 104.
A face liveness detection device 600 according to another embodiment of the present invention is described below with reference to Fig. 6.
As shown in Fig. 6, the face liveness detection device 600 according to the embodiment of the present invention includes an acquisition module 610, an alignment module 620, a computation module 630, a synthesis module 640 and an analysis module 650.
The acquisition module 610 is configured to acquire a no-fill-light image of a face object to be detected, captured without fill light, and a plurality of fill-light images captured while multiple groups of light sources of similar brightness each provide fill light in turn. The acquisition module 610 is similar to the acquisition module 510 in the face liveness detection device 500 described in conjunction with Fig. 5 and, for brevity, is not described again here.
The alignment module 620 is configured to align the fill-light images acquired by the acquisition module 610 with the no-fill-light image.
By aligning the fill-light images with the no-fill-light image, the alignment module 620 can further improve the efficiency and accuracy of subsequent processing. In one example, the alignment module 620 is further configured to perform face detection and facial keypoint detection on the no-fill-light image and the fill-light images, and to align the fill-light images with the no-fill-light image based on the positions of the facial keypoints.
In one example, the alignment module 620 may detect a face in the no-fill-light image and in each fill-light image, then use a trained neural network to extract features from the face images and derive the positions of the keypoints from the extracted features, and align the fill-light images with the no-fill-light image based on the positions of the facial keypoints. For example, the alignment module 620 may use a KLT (Kanade-Lucas-Tomasi) tracker to obtain the displacement of each fill-light image I1, I2, I3, I4 relative to I0, and then translate I1, I2, I3 and I4 accordingly so that the face in each image is aligned with that in I0. The aligned fill-light images are used for subsequent processing.
The computation module 630 is configured to compute a highlight-free image and a highlight image of the face object from the no-fill-light image and the aligned fill-light images. The computation module 630 is similar to the computation module 520 in the face liveness detection device 500 described in conjunction with Fig. 5, except that its computation is based on the aligned fill-light images instead of the original fill-light images; for brevity, it is not described again here.
The synthesis module 640 is configured to merge the highlight-free image and the highlight image into a multi-channel image and to feed the multi-channel image into a trained convolutional neural network.
The analysis module 650 is configured to determine, from the output of the convolutional neural network, whether the face object is a live subject.
The synthesis module 640 and the analysis module 650 are similar to the synthesis module 530 and the analysis module 540, respectively, in the face liveness detection device 500 described in conjunction with Fig. 5 and, for brevity, are not described again here.
The acquisition module 610, the alignment module 620, the computation module 630, the synthesis module 640 and the analysis module 650 may all be implemented by the processor 102 in the electronic device shown in Fig. 1 running program instructions stored in the storage device 104.
The face liveness detection device 600 according to the embodiment of the present invention described above synthesizes a highlight-free image and highlight images from the no-fill-light image and the images captured under fill light from multiple groups of light sources, forms a multi-channel image from them (the multi-channel image contains the specular highlight components), and exploits the difference between the highlight characteristics of a real live face and those of masks, screens and the like to distinguish a real live face from masks, screens and the like, thereby achieving face liveness detection. In addition, having the alignment module 620 align the fill-light images with the no-fill-light image after the acquisition module 610 has acquired the no-fill-light image and the fill-light images can further improve the efficiency and accuracy of subsequent processing.
Those of ordinary skill in the art will appreciate that the modules and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
Fig. 7 shows a schematic block diagram of a face liveness detection system 700 according to an embodiment of the present invention. The face liveness detection system 700 includes an image capture device 710, a storage device 720 and a processor 730.
The image capture device 710 is used to capture images. The storage device 720 stores program code for implementing the corresponding steps of the face liveness detection method according to embodiments of the present invention. The processor 730 runs the program code stored in the storage device 720 to perform the corresponding steps of the face liveness detection method according to embodiments of the present invention, and to implement the corresponding modules in the face liveness detection device according to embodiments of the present invention.
In one embodiment, when the program code is run by the processor 730, the following steps are performed: acquiring a no-fill-light image of a face object to be detected, captured without fill light, and a plurality of fill-light images captured while multiple groups of light sources of similar brightness each provide fill light in turn; computing a highlight-free image and a highlight image of the face object from the no-fill-light image and the fill-light images; merging the highlight-free image and the highlight image into a multi-channel image and feeding the multi-channel image into a trained convolutional neural network; and determining, from the output of the convolutional neural network, whether the face object is a live subject.
For example, some of the light source groups lie in different angular ranges relative to the face object than others, and the computing includes: computing the highlight-free image of the face object from the difference between the minimum pixel value at each position across all fill-light images and the pixel value at the corresponding position of the no-fill-light image; and computing a highlight image of the face object from the difference between the maximum and minimum pixel values at each position across the fill-light images captured under fill light from light source groups within the same angular range.
For example, determining from the output of the convolutional neural network whether the face object is a live subject includes: determining that the face object is a live subject when the output of the convolutional neural network is greater than a predetermined threshold; and determining that the face object is a non-live object when the output of the convolutional neural network is less than or equal to the predetermined threshold.
In addition, in one embodiment, when the program code is run by the processor 730, the following step is also performed: after the no-fill-light image and the fill-light images are acquired, aligning the fill-light images with the no-fill-light image, in which case the highlight-free image and the highlight image of the face object are computed from the no-fill-light image and the aligned fill-light images.
For example, aligning the fill-light images with the no-fill-light image includes: performing face detection and facial keypoint detection on the no-fill-light image and the fill-light images; and aligning the fill-light images with the no-fill-light image based on the positions of the facial keypoints.
In addition, according to an embodiment of the present invention, a storage medium is also provided, on which program instructions are stored; when the program instructions are run by a computer or a processor, they are used to perform the corresponding steps of the face liveness detection method of the embodiments of the present invention and to implement the corresponding modules in the face liveness detection device according to embodiments of the present invention. The storage medium may include, for example, a memory card of a smartphone, a storage component of a tablet computer, a hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
In one embodiment, the computer program instructions, when run by a computer or processor, may implement the functional modules of the face liveness detection device according to the embodiments of the present invention and/or may perform the face liveness detection method according to the embodiments of the present invention.
In one embodiment, the computer program instructions, when run by a computer or processor, cause the computer or processor to perform the following steps: acquiring a no-fill-light image of a face object to be detected, captured without fill light, and a plurality of fill-light images captured while multiple groups of light sources of similar brightness each provide fill light in turn; computing a highlight-free image and a highlight image of the face object from the no-fill-light image and the fill-light images; merging the highlight-free image and the highlight image into a multi-channel image and feeding the multi-channel image into a trained convolutional neural network; and determining, from the output of the convolutional neural network, whether the face object is a live subject.
For example, some of the light source groups lie in different angular ranges relative to the face object than others, and the computing includes: computing the highlight-free image of the face object from the difference between the minimum pixel value at each position across all fill-light images and the pixel value at the corresponding position of the no-fill-light image; and computing a highlight image of the face object from the difference between the maximum and minimum pixel values at each position across the fill-light images captured under fill light from light source groups within the same angular range.
For example, determining from the output of the convolutional neural network whether the face object is a live subject includes: determining that the face object is a live subject when the output of the convolutional neural network is greater than a predetermined threshold; and determining that the face object is a non-live object when the output of the convolutional neural network is less than or equal to the predetermined threshold.
In addition, in one embodiment, the computer program instructions, when run by a computer or processor, also cause the computer or processor to perform the following step: after the no-fill-light image and the fill-light images are acquired, aligning the fill-light images with the no-fill-light image, in which case the highlight-free image and the highlight image of the face object are computed from the no-fill-light image and the aligned fill-light images.
For example, aligning the fill-light images with the no-fill-light image includes: performing face detection and facial keypoint detection on the no-fill-light image and the fill-light images; and aligning the fill-light images with the no-fill-light image based on the positions of the facial keypoints.
The modules in the face liveness detection device according to embodiments of the present invention may be implemented by the processor of the electronic device for face liveness detection according to embodiments of the present invention running computer program instructions stored in a memory, or may be implemented when computer instructions stored in the computer-readable storage medium of the computer program product according to embodiments of the present invention are run by a computer.
The face liveness detection method, device, system and storage medium according to embodiments of the present invention synthesize a highlight-free image and a highlight image from a no-fill-light image and a plurality of images captured under fill light from multiple groups of light sources, form a multi-channel image from them (the multi-channel image contains the specular highlight components), and exploit the difference between the highlight characteristics of a real live face and those of masks, screens and the like to distinguish a real live face from masks, screens and the like, thereby achieving face liveness detection. In addition, aligning the fill-light images with the no-fill-light image after they are acquired can further improve the efficiency and accuracy of subsequent processing.
Although describing example embodiment by reference to accompanying drawing here, it should be understood that above-mentioned example embodiment is merely exemplary , and be not intended to limit the scope of the invention to this.Those of ordinary skill in the art can carry out various change wherein And amendment, it is made without departing from the scope of the present invention and spirit.All such changes and modifications are intended to be included in claims Within required the scope of the present invention.
Those of ordinary skill in the art are it is to be appreciated that combine the list of each example that the embodiments described herein describes Unit and algorithm steps, it is possible to being implemented in combination in of electronic hardware or computer software and electronic hardware.These functions are actually Perform with hardware or software mode, depend on application-specific and the design constraint of technical scheme.Professional and technical personnel Each specifically should being used for can be used different methods to realize described function, but this realization is it is not considered that exceed The scope of the present invention.
In several embodiments provided herein, it should be understood that disclosed equipment and method, can be passed through it Its mode realizes.Such as, apparatus embodiments described above is only schematically, such as, and the division of described unit, only Being only a kind of logic function to divide, actual can have other dividing mode, the most multiple unit or assembly to tie when realizing Close or be desirably integrated into another equipment, or some features can be ignored, or not performing.
In description mentioned herein, illustrate a large amount of detail.It is to be appreciated, however, that the enforcement of the present invention Example can be put into practice in the case of not having these details.In some instances, it is not shown specifically known method, structure And technology, in order to do not obscure the understanding of this description.
Similarly, it will be appreciated that one or more in order to simplify that the present invention helping understands in each inventive aspect, exist To in the description of the exemplary embodiment of the present invention, each feature of the present invention be sometimes grouped together into single embodiment, figure, Or in descriptions thereof.But, should not be construed to reflect an intention that the most required for protection by the method for this present invention Application claims feature more more than the feature being expressly recited in each claim.More precisely, such as corresponding power As profit claim is reflected, its inventive point is can be with the spy of all features of embodiment single disclosed in certain Levy and solve corresponding technical problem.Therefore, it then follows it is concrete that claims of detailed description of the invention are thus expressly incorporated in this Embodiment, the most each claim itself is as the independent embodiment of the present invention.
Those skilled in the art will understand that, except where such features are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features that are included in other embodiments and not others, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules of a device according to embodiments of the present invention. The present invention may also be implemented as an apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-described embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprises" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any order; these words may be interpreted as names.
The above is merely the specific embodiments of the present invention or descriptions thereof, and the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall fall within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A face liveness detection method, characterized in that the face liveness detection method comprises:
acquiring a non-fill-light image of a face object to be detected, captured without fill light, and a plurality of fill-light images captured under fill light provided respectively by multiple groups of light sources of similar brightness;
calculating a highlight-free image and a highlight image of the face object based on the non-fill-light image and the plurality of fill-light images;
merging the highlight-free image and the highlight image into a multi-channel image, and inputting the multi-channel image into a trained convolutional neural network; and
determining, based on an output of the convolutional neural network, whether the face object is a living body.
2. The face liveness detection method according to claim 1, characterized in that some of the multiple groups of light sources and others are respectively located in different angle ranges relative to the face object, and the calculating of the highlight-free image and the highlight image of the face object based on the non-fill-light image and the plurality of fill-light images comprises:
calculating the highlight-free image of the face object based on the difference between the minimum pixel value at any given position across all of the fill-light images and the pixel value at the corresponding position of the non-fill-light image; and
calculating the highlight image of the face object based on the difference between the maximum pixel value and the minimum pixel value at any given position across the fill-light images captured under fill light from the groups of light sources within the same angle range.
3. The face liveness detection method according to claim 1, characterized in that the determining, based on the output of the convolutional neural network, whether the face object is a living body comprises:
determining that the face object is a living body when the output of the convolutional neural network is greater than a predetermined threshold; and
determining that the face object is a non-living object when the output of the convolutional neural network is less than or equal to the predetermined threshold.
4. The face liveness detection method according to claim 1, characterized in that the face liveness detection method further comprises:
aligning the plurality of fill-light images with the non-fill-light image after the non-fill-light image and the plurality of fill-light images are acquired, and
the calculating of the highlight-free image and the highlight image of the face object is based on the non-fill-light image and the plurality of fill-light images after alignment.
5. The face liveness detection method according to claim 4, characterized in that the aligning of the plurality of fill-light images with the non-fill-light image comprises:
performing face detection and facial keypoint detection on the non-fill-light image and the plurality of fill-light images; and
aligning the plurality of fill-light images with the non-fill-light image based on positions of the facial keypoints.
6. A face liveness detection apparatus, characterized in that the face liveness detection apparatus comprises:
an acquisition module, configured to acquire a non-fill-light image of a face object to be detected, captured without fill light, and a plurality of fill-light images captured under fill light provided respectively by multiple groups of light sources of similar brightness;
a calculation module, configured to calculate a highlight-free image and a highlight image of the face object based on the non-fill-light image and the plurality of fill-light images;
a synthesis module, configured to merge the highlight-free image and the highlight image into a multi-channel image and to input the multi-channel image into a trained convolutional neural network; and
an analysis module, configured to determine, based on an output of the convolutional neural network, whether the face object is a living body.
7. The face liveness detection apparatus according to claim 6, characterized in that some of the multiple groups of light sources and others are respectively located in different angle ranges relative to the face object, and the calculation module is further configured to:
calculate the highlight-free image of the face object based on the difference between the minimum pixel value at any given position across all of the fill-light images and the pixel value at the corresponding position of the non-fill-light image; and
calculate the highlight image of the face object based on the difference between the maximum pixel value and the minimum pixel value at any given position across the fill-light images captured under fill light from the groups of light sources within the same angle range.
8. The face liveness detection apparatus according to claim 6, characterized in that the analysis module is further configured to:
determine that the face object is a living body when the output of the convolutional neural network is greater than a predetermined threshold; and
determine that the face object is a non-living object when the output of the convolutional neural network is less than or equal to the predetermined threshold.
9. The face liveness detection apparatus according to claim 6, characterized in that the face liveness detection apparatus further comprises:
an alignment module, configured to align the plurality of fill-light images acquired by the acquisition module with the non-fill-light image, and
the calculation module is configured to calculate the highlight-free image and the highlight image of the face object based on the non-fill-light image and the plurality of fill-light images after alignment.
10. The face liveness detection apparatus according to claim 9, characterized in that the alignment module is further configured to:
perform face detection and facial keypoint detection on the non-fill-light image and the plurality of fill-light images; and
align the plurality of fill-light images with the non-fill-light image based on positions of the facial keypoints.
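As one possible realization of the keypoint-based alignment recited in claims 5 and 10, the sketch below estimates a similarity transform between matched facial keypoints with OpenCV and warps each fill-light image onto the non-fill-light reference image; the choice of OpenCV, the similarity-transform model, and the assumption that keypoints have already been produced by a separate face detector and keypoint detector are illustrative and not mandated by the claims.

```python
import cv2
import numpy as np

def align_fill_image(fill_image, fill_keypoints, reference_keypoints, reference_shape):
    # Estimate a similarity transform (rotation, scale, translation) mapping the
    # fill-light image's facial keypoints onto those of the non-fill-light image,
    # then warp the fill-light image into the reference image's coordinate frame.
    # Keypoints are assumed to be N x 2 arrays of pixel coordinates, N >= 2.
    matrix, _ = cv2.estimateAffinePartial2D(
        np.asarray(fill_keypoints, dtype=np.float32),
        np.asarray(reference_keypoints, dtype=np.float32))
    height, width = reference_shape[:2]
    return cv2.warpAffine(fill_image, matrix, (width, height))
```

In use, each fill-light image would be passed through this function against the keypoints of the non-fill-light image before the highlight-free and highlight images are computed.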
CN201610511947.8A 2016-06-30 2016-06-30 Face living body detection method and device Active CN106203305B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610511947.8A CN106203305B (en) 2016-06-30 2016-06-30 Face living body detection method and device

Publications (2)

Publication Number Publication Date
CN106203305A true CN106203305A (en) 2016-12-07
CN106203305B CN106203305B (en) 2020-02-04

Family

ID=57464149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610511947.8A Active CN106203305B (en) 2016-06-30 2016-06-30 Face living body detection method and device

Country Status (1)

Country Link
CN (1) CN106203305B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106397A (en) * 2013-01-19 2013-05-15 华南理工大学 Human face living body detection method based on bright pupil effect
CN104506778A (en) * 2014-12-22 2015-04-08 厦门美图之家科技有限公司 Flashlight control method and device based on age estimation
CN105518711A (en) * 2015-06-29 2016-04-20 北京旷视科技有限公司 In-vivo detection method, in-vivo detection system, and computer program product
CN105654028A (en) * 2015-09-29 2016-06-08 厦门中控生物识别信息技术有限公司 True and false face identification method and apparatus thereof
CN105260731A (en) * 2015-11-25 2016-01-20 商汤集团有限公司 Human face living body detection system and method based on optical pulses

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529512B (en) * 2016-12-15 2019-09-10 北京旷视科技有限公司 Living body faces verification method and device
US10796178B2 (en) 2016-12-15 2020-10-06 Beijing Kuangshi Technology Co., Ltd. Method and device for face liveness detection
CN106529512A (en) * 2016-12-15 2017-03-22 北京旷视科技有限公司 Living body face verification method and device
CN108345818A (en) * 2017-01-23 2018-07-31 北京中科奥森数据科技有限公司 A kind of human face in-vivo detection method and device
CN107066942A (en) * 2017-03-03 2017-08-18 上海斐讯数据通信技术有限公司 A kind of living body faces recognition methods and system
CN108229329A (en) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 Face false-proof detection method and system, electronic equipment, program and medium
US11482040B2 (en) 2017-03-16 2022-10-25 Beijing Sensetime Technology Development Co., Ltd. Face anti-counterfeiting detection methods and systems, electronic devices, programs and media
WO2018166524A1 (en) * 2017-03-16 2018-09-20 北京市商汤科技开发有限公司 Face detection method and system, electronic device, program, and medium
CN108229331A (en) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 Face false-proof detection method and system, electronic equipment, program and medium
CN108229325A (en) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 Method for detecting human face and system, electronic equipment, program and medium
CN108875467B (en) * 2017-06-05 2020-12-25 北京旷视科技有限公司 Living body detection method, living body detection device and computer storage medium
CN108875467A (en) * 2017-06-05 2018-11-23 北京旷视科技有限公司 The method, apparatus and computer storage medium of In vivo detection
CN107194376A (en) * 2017-06-21 2017-09-22 北京市威富安防科技有限公司 Mask fraud convolutional neural networks training method and human face in-vivo detection method
CN107220990A (en) * 2017-06-22 2017-09-29 成都品果科技有限公司 A kind of hair dividing method based on deep learning
CN107392135A (en) * 2017-07-14 2017-11-24 广东欧珀移动通信有限公司 Biopsy method and Related product
CN107808115A (en) * 2017-09-27 2018-03-16 联想(北京)有限公司 A kind of biopsy method, device and storage medium
CN107832735A (en) * 2017-11-24 2018-03-23 百度在线网络技术(北京)有限公司 Method and apparatus for identifying face
CN108108676A (en) * 2017-12-12 2018-06-01 北京小米移动软件有限公司 Face identification method, convolutional neural networks generation method and device
US11321963B2 (en) 2018-01-04 2022-05-03 Hangzhou Hikvision Digital Technology Co., Ltd. Face liveness detection based on neural network model
WO2019134536A1 (en) * 2018-01-04 2019-07-11 杭州海康威视数字技术股份有限公司 Neural network model-based human face living body detection
CN108509888B (en) * 2018-03-27 2022-01-28 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN108416324B (en) * 2018-03-27 2022-02-25 百度在线网络技术(北京)有限公司 Method and apparatus for detecting living body
CN108537152A (en) * 2018-03-27 2018-09-14 百度在线网络技术(北京)有限公司 Method and apparatus for detecting live body
CN108537152B (en) * 2018-03-27 2022-01-25 百度在线网络技术(北京)有限公司 Method and apparatus for detecting living body
CN108509888A (en) * 2018-03-27 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN108416324A (en) * 2018-03-27 2018-08-17 百度在线网络技术(北京)有限公司 Method and apparatus for detecting live body
CN108875574A (en) * 2018-05-11 2018-11-23 北京旷视科技有限公司 Detection method, device, system and the storage medium of pedestrian detection false positive result
CN109086691A (en) * 2018-07-16 2018-12-25 阿里巴巴集团控股有限公司 A kind of three-dimensional face biopsy method, face's certification recognition methods and device
CN109101871A (en) * 2018-08-07 2018-12-28 北京华捷艾米科技有限公司 A kind of living body detection device based on depth and Near Infrared Information, detection method and its application
CN109255322A (en) * 2018-09-03 2019-01-22 北京诚志重科海图科技有限公司 A kind of human face in-vivo detection method and device
CN109255322B (en) * 2018-09-03 2019-11-19 北京诚志重科海图科技有限公司 A kind of human face in-vivo detection method and device
US11093773B2 (en) 2018-09-10 2021-08-17 Advanced New Technologies Co., Ltd. Liveness detection method, apparatus and computer-readable storage medium
US11210541B2 (en) 2018-09-10 2021-12-28 Advanced New Technologies Co., Ltd. Liveness detection method, apparatus and computer-readable storage medium
CN109635795A (en) * 2018-10-22 2019-04-16 小黄狗环保科技有限公司 A kind of Intelligent supplemental lighting method improving Bottle & Can discrimination based on VGG16 network model
CN109558840A (en) * 2018-11-29 2019-04-02 中国科学院重庆绿色智能技术研究院 A kind of biopsy method of Fusion Features
CN109522877A (en) * 2018-12-14 2019-03-26 睿云联(厦门)网络通讯技术有限公司 A kind of offline plurality of human faces recognition methods and computer equipment based on Android device
US11854310B2 (en) 2020-01-17 2023-12-26 Tencent Technology (Shenzhen) Company Limited Face liveness detection method and related apparatus
WO2021143216A1 (en) * 2020-01-17 2021-07-22 腾讯科技(深圳)有限公司 Face liveness detection method and related apparatus
CN111460970A (en) * 2020-03-27 2020-07-28 深圳市商汤科技有限公司 Living body detection method and device and face recognition equipment
CN111523438A (en) * 2020-04-20 2020-08-11 支付宝实验室(新加坡)有限公司 Living body identification method, terminal device and electronic device
CN111523438B (en) * 2020-04-20 2024-02-23 支付宝实验室(新加坡)有限公司 Living body identification method, terminal equipment and electronic equipment
CN112906676A (en) * 2021-05-06 2021-06-04 北京远鉴信息技术有限公司 Face image source identification method and device, storage medium and electronic equipment
CN115601245A (en) * 2021-07-07 2023-01-13 同方威视技术股份有限公司(Cn) Shadow elimination device and method, and empty disk identification device and method
CN115601245B (en) * 2021-07-07 2023-12-12 同方威视技术股份有限公司 Shadow eliminating device and method, empty disc identifying device and method
WO2023061122A1 (en) * 2021-10-15 2023-04-20 北京眼神科技有限公司 Live face detection method and apparatus, computer-readable storage medium, and device

Also Published As

Publication number Publication date
CN106203305B (en) 2020-02-04

Similar Documents

Publication Publication Date Title
CN106203305A (en) Human face in-vivo detection method and device
CN108876833A (en) Image processing method, image processing apparatus and computer readable storage medium
CN109146830A (en) For generating the method, apparatus, system and storage medium of training data
CN106251338B (en) Target integrity detection method and device
CN106650662B (en) Target object shielding detection method and device
CN103577801B (en) Quality metrics method and system for biometric authentication
CN108875546A (en) Face auth method, system and storage medium
CN108875932A (en) Image-recognizing method, device and system and storage medium
CN105938552A (en) Face recognition method capable of realizing base image automatic update and face recognition device
CN108875522A (en) Face cluster methods, devices and systems and storage medium
CN103597820B (en) Optics terrain imaging
CN105631439A (en) Human face image collection method and device
CN108875452A (en) Face identification method, device, system and computer-readable medium
CN109522790A (en) Human body attribute recognition approach, device, storage medium and electronic equipment
CN106446851A (en) Visible light based human face optimal selection method and system
CN108573268A (en) Image-recognizing method and device, image processing method and device and storage medium
CN109740491A (en) A kind of human eye sight recognition methods, device, system and storage medium
CN108875515A (en) Face identification method, device, system, storage medium and capture machine
CN106131405A (en) Image processing method and device and for its image capturing system
CN108932456A (en) Face identification method, device and system and storage medium
CN108875535A (en) image detecting method, device and system and storage medium
CN108875487A (en) Pedestrian is identified the training of network again and is identified again based on its pedestrian
CN106254782A (en) Image processing method and device and camera
CN108961149A (en) Image processing method, device and system and storage medium
CN108875476A (en) Automatic near-infrared face registration and recognition methods, device and system and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant after: MEGVII INC.

Applicant after: Beijing maigewei Technology Co., Ltd.

Address before: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant before: MEGVII INC.

Applicant before: Beijing aperture Science and Technology Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant