CN110059590A - Face liveness verification method, apparatus, mobile terminal and computer-readable storage medium - Google Patents
Face liveness verification method, apparatus, mobile terminal and computer-readable storage medium Download PDF Info
- Publication number
- CN110059590A (application number CN201910252907.XA / CN201910252907A)
- Authority
- CN
- China
- Prior art keywords
- face
- facial image
- living body
- key point
- mobile terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Telephone Function (AREA)
Abstract
The invention discloses a face liveness verification method, apparatus, mobile terminal and computer-readable storage medium, applied to the field of mobile terminals, comprising: capturing face images with the left and right cameras of a mobile terminal; performing distortion correction and row alignment on the captured left and right face images; performing face detection on the corrected left and right face images; matching face key points between the left and right images that passed face detection; and making a liveness judgment according to the distances between the face key points. With the embodiments of the present invention, attacks such as photographs and videos can be effectively resisted during face-recognition authentication; the method is simple, reliable, mature and fast, reducing security risks and improving the user experience.
Description
Technical field
The present invention relates to the field of mobile terminals, and in particular to a face liveness verification method, apparatus, mobile terminal and computer-readable storage medium based on a binocular camera.
Background technique
With the development of society, face recognition is applied in more and more products. Although its adoption keeps growing, the user experience is often poor. For example, during face-recognition authentication the user must blink continually to confirm that a living person is in front of the lens, which has two drawbacks:
(1) a blink that is mistimed or too subtle may fail liveness detection altogether;
(2) the scheme is easily attacked with photographs and videos.
Because of the above defects of existing face recognition, security risks remain when it is used for authentication, and the user experience suffers.
Summary of the invention
In view of this, the purpose of the present invention is to provide a face liveness verification method, apparatus, mobile terminal and computer-readable storage medium based on a binocular camera, which can effectively resist attacks such as photographs and videos during face-recognition authentication; the method is simple, reliable, mature and fast, reducing security risks and improving the user experience.
The technical solutions adopted by the present invention to solve the above technical problems are as follows:
According to one aspect of the present invention, a face liveness verification method is provided, applied to a mobile terminal, the method comprising:
capturing face images with the left and right cameras of the mobile terminal;
performing distortion correction and row alignment on the captured left and right face images;
performing face detection on the corrected left and right face images;
matching face key points between the left and right images that passed face detection;
making a liveness judgment according to the distances between the face key points.
In a possible design, performing distortion correction and row alignment on the captured left and right face images comprises:
performing distortion correction on the face images captured by the left and right cameras;
performing binocular (stereo) rectification on the distortion-corrected left and right face images.
In a possible design, performing binocular rectification on the distortion-corrected left and right face images comprises: rotating the distortion-corrected images captured by the left and right cameras so that the images acquired by the binocular camera remain mathematically row-aligned.
In a possible design, performing face detection on the corrected left and right face images comprises: detecting, with the deep-learning-based MTCNN algorithm, the position of the face frame and the key-point information of the face in the corrected left and right face images.
In a possible design, matching face key points between the left and right face images that passed face detection comprises: matching the positions of the key points of the faces in the left and right face images according to the obtained face frames and face key-point information.
In a possible design, making a liveness judgment according to the distances between the face key points comprises:
determining the distances from the left and right ears to the camera lens;
determining the distances from the left and right eyes to the camera lens;
determining the depth distance Dist_eyeToRose from the left eye to the ear;
determining the depth distance Dist_eyeToeye from the left eye to the right eye;
determining the face rotation angle angle;
making the liveness judgment according to the depth distance Dist_eyeToRose from the left eye to the ear and the face rotation angle angle, or according to the depth distance Dist_eyeToeye from the left eye to the right eye and the face rotation angle angle.
In a possible design, making the liveness judgment according to the depth distance Dist_eyeToRose from the left eye to the ear and the face rotation angle angle comprises: when the face rotation angle is determined to be in the range [0, angle], setting a threshold T1; if Dist_eyeToRose < T1, the input is a non-living attack. Alternatively,
making the liveness judgment according to the depth distance Dist_eyeToeye from the left eye to the right eye and the face rotation angle angle comprises: when the face rotation angle is determined to be in the range [angle, 90], judging the depth difference between the left and right eyes: setting a threshold T1; if Dist_eyeToeye < T1, the input is a non-living attack.
According to another aspect of the present invention, a face liveness verification apparatus is provided, applied to a mobile terminal, the apparatus comprising: a capture module, a correction and row-alignment module, a detection module, a matching module and a judgment module, wherein:
the capture module is configured to capture face images with the left and right cameras of the mobile terminal;
the correction and row-alignment module is configured to perform distortion correction and row alignment on the captured left and right face images;
the detection module is configured to perform face detection on the corrected left and right face images;
the matching module is configured to match face key points between the left and right images that passed face detection;
the judgment module is configured to make a liveness judgment according to the distances between the face key points.
According to another aspect of the present invention, a terminal is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the steps of the face liveness verification method provided in the embodiments of the present invention are implemented.
According to another aspect of the present invention, a computer-readable storage medium is provided, on which a face liveness verification program is stored; when the program is executed by a processor, the steps of the face liveness verification method provided in the embodiments of the present invention are implemented.
Compared with the prior art, the present invention proposes a face liveness verification method, apparatus, mobile terminal and computer-readable storage medium based on a binocular camera, applied to the field of mobile terminals, comprising: capturing face images with the left and right cameras of a mobile terminal; performing distortion correction and row alignment on the captured left and right face images; performing face detection on the corrected left and right face images; matching face key points between the left and right images that passed face detection; and making a liveness judgment according to the distances between the face key points. With the embodiments of the present invention, attacks such as photographs and videos can be effectively resisted during face-recognition authentication; the method is simple, reliable, mature and fast, reducing security risks and improving the user experience.
Detailed description of the invention
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing each embodiment of the present invention;
Fig. 2 is an architecture diagram of a communications network system provided in an embodiment of the present invention;
Fig. 3 is a flow diagram of a face liveness verification method provided in an embodiment of the present invention;
Fig. 4 is a structural schematic diagram of a face liveness verification apparatus provided in an embodiment of the present invention;
Fig. 5 is a flow diagram of a face liveness verification method provided in an embodiment of the present invention;
Fig. 6 is a flow diagram of a face liveness verification method provided in an embodiment of the present invention;
Fig. 7 is a flow diagram of a face liveness verification method based on a binocular camera provided in an embodiment of the present invention;
Fig. 8 is a flow diagram of a face liveness verification method based on a binocular camera provided in an embodiment of the present invention;
Fig. 9 is a flow diagram of a face liveness verification method based on a binocular camera provided in an embodiment of the present invention;
Fig. 10 is a flow diagram of a face liveness verification method based on a binocular camera provided in an embodiment of the present invention;
Fig. 11 is a structural schematic diagram of a mobile terminal applying the method of the present invention, provided in an embodiment of the present invention.
The realization, functional features and advantages of the object of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
In order to make the technical problems to be solved, the technical solutions and the advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described herein only serve to explain the present invention and are not intended to limit it.
In the subsequent description, suffixes such as "module", "component" or "unit" used to denote elements are adopted only to facilitate the explanation of the invention and have no specific meaning in themselves. Therefore, "module", "component" and "unit" may be used interchangeably.
A terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, laptops, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The subsequent description takes a mobile terminal as an example; those skilled in the art will appreciate that, except for elements specifically intended for mobile purposes, the constructions according to the embodiments of the present invention can also be applied to fixed-type terminals.
Referring to Fig. 1, a schematic diagram of the hardware structure of a mobile terminal implementing each embodiment of the present invention, the mobile terminal 100 may include components such as an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110 and a power supply 111. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not constitute a limitation on the mobile terminal: the mobile terminal may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The components of the mobile terminal are introduced in detail below with reference to Fig. 1:
The radio frequency unit 101 may be used to receive and send signals during messaging or a call; specifically, it delivers downlink information received from the base station to the processor 110 for handling, and sends uplink data to the base station. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and so on. In addition, the radio frequency unit 101 can also communicate with the network and other devices via wireless communication. The above wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution) and so on.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media and so on; it provides the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and can be omitted as needed without changing the essence of the invention.
When the mobile terminal 100 is in a mode such as a call-signal reception mode, a call mode, a recording mode, a speech recognition mode or a broadcast reception mode, the audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call-signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or sent via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operational modes such as a call mode, a recording mode or a speech recognition mode, and can process such sound into audio data. In the case of a call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated while sending and receiving audio signals.
The mobile terminal 100 further includes at least one sensor 105, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As a kind of motion sensor, the accelerometer can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that identify the phone's posture (such as landscape/portrait switching, related games and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Other sensors such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers and infrared sensors may also be configured on the phone, and are not described in detail here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key-signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus or any other suitable object or accessory) and drives the corresponding connecting apparatus according to a preset program. The touch panel 1071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in multiple types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power switch key), a trackball, a mouse and a joystick, which are not limited here.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 realize the input and output functions of the mobile terminal as two independent components, in certain embodiments the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 108 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external device may include a wired or wireless headphone port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and so on. The interface unit 108 may be used to receive input (for example, data information or electric power) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal 100 and an external device.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the phone (such as audio data and a phone book). In addition, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other solid-state storage components.
The processor 110 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal through various interfaces and lines, and executes the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 109 and invoking the data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and so on, and the modem processor mainly handles wireless communication. It is understood that the above modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) that supplies power to the various components. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to realize functions such as charging management, discharging management and power consumption management through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which are not described in detail here.
To facilitate the understanding of the embodiments of the present invention, the communications network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communications network system provided in an embodiment of the present invention. The communications network system is an LTE system of the universal mobile communications technology, and includes, communicatively connected in sequence, a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and the operator's IP services 204.
Specifically, the UE 201 may be the above-described terminal 100, which is not described again here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022. The eNodeB 2021 can connect with the other eNodeBs 2022 through a backhaul (for example, an X2 interface); the eNodeB 2021 is connected to the EPC 203 and can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036 and so on. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, and provides bearer and connection management. The HSS 2032 provides registers to manage functions such as the home location register (not shown) and preserves user-specific information about service features, data rates and the like. All user data can be sent through the SGW 2034; the PGW 2035 can provide IP address allocation and other functions for the UE 201; the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, and selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP services 204 may include the Internet, intranets, the IMS (IP Multimedia Subsystem) or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should know that the present invention is not only applicable to the LTE system but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new network systems, which are not limited here.
Based on the above mobile terminal hardware structure and communications network system, the method embodiments of the present invention are proposed.
Please refer to Fig. 3. An embodiment of the present invention provides a face liveness verification method based on a binocular camera, applied to a mobile terminal, the method comprising:
S1, capturing face images with the left and right cameras of the mobile terminal;
S2, performing distortion correction and row alignment on the captured left and right face images;
S3, performing face detection on the corrected left and right face images;
S4, matching face key points between the left and right images that passed face detection;
S5, making a liveness judgment according to the distances between the face key points.
Further, before the step S1 of capturing face images with the left and right cameras of the mobile terminal, the method further includes: building a binocular camera environment with the mobile terminal. Two cameras are installed on the mobile terminal to build the binocular camera environment.
Further, in the step S1, capturing face images with the left and right cameras of the mobile terminal comprises: capturing face images of the user simultaneously with the left and right cameras of the mobile terminal.
Further, in the step S2, performing distortion correction and row alignment on the captured left and right face images comprises:
S21, performing distortion correction on the face images captured by the left and right cameras;
S22, performing binocular rectification on the distortion-corrected left and right face images, comprising: rotating the distortion-corrected face images captured by the left and right cameras so that the images acquired by the binocular camera remain mathematically row-aligned.
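The row alignment that S22 aims for can be checked numerically: after successful rectification, a key point and its match in the other view lie on the same image row and differ only in their x coordinate (the disparity). A minimal sketch of that check in plain Python, with hypothetical pixel coordinates (the function name and tolerance are illustrative, not from the patent):

```python
def rows_aligned(left_pts, right_pts, tol_px=1.0):
    """Check that matched key points from the rectified left/right images
    lie on (nearly) the same image rows: after rectification the epipolar
    lines are horizontal, so matched points should differ only in x.

    left_pts / right_pts are equal-length lists of (x, y) pixel tuples,
    paired by index; tol_px is the allowed row mismatch in pixels.
    """
    return all(abs(yl - yr) <= tol_px
               for (_xl, yl), (_xr, yr) in zip(left_pts, right_pts))


# Matched key points on a well-rectified pair: rows agree within a pixel.
left = [(120.0, 80.0), (200.0, 150.0)]
right = [(100.0, 80.4), (181.0, 149.7)]
print(rows_aligned(left, right))  # True
```

A check like this is only a sanity test; the rectification itself would be computed from the stereo calibration of the two cameras.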
Further, in the step S3, performing face detection on the corrected left and right face images comprises: detecting, with the deep-learning-based MTCNN algorithm, the position of the face frame and the key-point information of the face in the corrected left and right face images, wherein the key points of the face include the ears, nose, mouth, eyes and so on.
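An MTCNN-style detector returns, per face, a bounding box and a small set of named landmarks. As a sketch only: the dict layout below mirrors common open-source MTCNN implementations (box as x, y, width, height plus named key points) and is an assumption for illustration, not the patent's own data format:

```python
def parse_detection(det):
    """Split an MTCNN-style detection dict into its face frame and its
    named key points.  Returns (box, keypoints) where box is (x, y, w, h)
    and keypoints maps a landmark name to its (x, y) pixel position."""
    x, y, w, h = det["box"]
    keypoints = dict(det["keypoints"])
    return (x, y, w, h), keypoints


# Hypothetical detection result for one face in one of the two views.
sample = {
    "box": [110, 60, 180, 220],
    "keypoints": {
        "left_eye": (155, 120), "right_eye": (245, 118),
        "nose": (200, 170),
        "mouth_left": (165, 215), "mouth_right": (235, 213),
    },
}
box, kps = parse_detection(sample)
```

Running the detector on the left and right images independently yields one such structure per view, which is what the matching in step S4 consumes.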
Further, in the step S4, matching face key points between the left and right face images that passed face detection comprises: matching the positions of the key points of the faces in the left and right face images according to the obtained face frames and face key-point information; for example, the left ear in the left face image is matched with the left ear in the right face image, the nose in the left face image is matched with the nose in the right face image, and so on.
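The matching in S4 can be as simple as pairing landmarks by name across the two views, since both detections label the same facial parts (left ear with left ear, nose with nose). A minimal sketch, assuming the per-view detections are already available as name-to-coordinate dicts:

```python
def match_keypoints(left_kps, right_kps):
    """Pair each landmark detected in the left image with the landmark of
    the same name in the right image; landmarks missing from either view
    are skipped.  Inputs map landmark name -> (x, y); the output maps the
    name to a ((x_left, y_left), (x_right, y_right)) pair."""
    return {name: (left_kps[name], right_kps[name])
            for name in left_kps if name in right_kps}


# Hypothetical detections: the left eye was not found in the right view.
matches = match_keypoints(
    {"nose": (200, 170), "left_eye": (155, 120)},
    {"nose": (180, 170)},
)
print(matches)  # {'nose': ((200, 170), (180, 170))}
```

Each matched pair then supplies the pixel positions from which the three-dimensional coordinates used in step S5 are computed.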
Further, in the step S5, making a liveness judgment according to the distances between the face key points comprises:
S51, determining the distances from the left and right ears to the camera lens, where D_left_rose and D_right_rose respectively denote the distances from the left and right ears to the camera lens, and (x1, y1, z1) denotes the three-dimensional coordinate corresponding to D_left_rose;
S52, determining the distances from the left and right eyes to the camera lens, where D_eye_left and D_mouth_right respectively denote the distances from the left and right eyes to the camera lens, (x2, y2, z2) denotes the three-dimensional coordinate of the left eye, and (x4, y4, z4) denotes the three-dimensional coordinate of the right eye;
S53, determining the depth distance from the left eye to the ear: Dist_eyeToRose = |z2 - z1|;
S54, determining the depth distance from the left eye to the right eye: Dist_eyeToeye = |z4 - z2|;
S55, determine that face rotates angle angle;
S56, angle is rotated according to the depth distance Dist_eyeToRose of the left eye eyeball to ear and the face
Angle or the left eye eyeball are lived to the depth distance Dist_eyeToeye of right eye eyeball and face rotation angle angle
Body judgement, comprising:
When determining face rotation angle is in [0, angle] range, when threshold value T1:Dist_eyeToRose < T is set,
For non-living body attack;Alternatively,
When determining face rotation angle is [angle, 90] range, the depth difference between left and right eye is judged: setting threshold value
When T1:Dist_eyeToeye < T1, attacked for non-living body.
The embodiment of the present invention provides a face liveness verification method based on binocular cameras, applied to a mobile terminal, comprising: shooting facial images with the left and right cameras of the mobile terminal; performing distortion correction and row alignment on the captured left and right facial images respectively; performing face detection on the corrected left and right facial images respectively; matching facial key points between the left and right images after face detection; and performing liveness judgment according to the distances between the facial key points. The embodiment of the present invention can effectively resist attacks such as photos and videos during face recognition authentication; the method is simple, reliable, mature, and fast, reducing security risks and improving the user experience.
Please refer to Fig. 4. The embodiment of the present invention provides a face liveness verification device based on binocular cameras, applied to a mobile terminal. The device comprises: a shooting module 10, a correction and row alignment module 20, a detection module 30, a matching module 40, and a judgment module 50, wherein:
the shooting module 10 is configured to shoot facial images with the left and right cameras of the mobile terminal;
the correction and row alignment module 20 is configured to perform distortion correction and row alignment on the captured left and right facial images respectively;
the detection module 30 is configured to perform face detection on the corrected left and right facial images respectively;
the matching module 40 is configured to match facial key points between the left and right images after face detection;
the judgment module 50 is configured to perform liveness judgment according to the distances between the facial key points.
Further, the shooting module 10 is also configured to set up the binocular camera environment on the mobile terminal. The mobile terminal is fitted with two cameras, which together form the binocular camera environment.
Further, the shooting module 10 is configured to shoot facial images of the user with the left and right cameras of the mobile terminal simultaneously.
Further, the correction and row alignment module 20 is specifically configured to:
perform distortion correction on the facial images captured by the left and right cameras; and
perform binocular rectification on the distortion-corrected left and right facial images, comprising: rotating the distortion-corrected facial images captured by the left and right cameras and performing binocular rectification, so that the images obtained by the binocular cameras are mathematically aligned.
The detection module 30 is specifically configured to:
detect, with the deep-learning-based MTCNN algorithm, the position of the face frame and the facial key point information in the corrected left and right facial images, wherein the facial key points include the ears, nose, mouth, eyes, and the like.
The matching module 40 is specifically configured to:
match, according to the obtained face frame and facial key point information, the positions of corresponding key points in the left and right facial images; for example, the ear in the left facial image is matched with the corresponding ear in the right facial image, the nose in the left facial image is matched with the nose in the right facial image, and so on.
The judgment module 50 is specifically configured to:
determine the distances from the left and right ears to the camera lens, wherein D_left_rose and D_right_rose denote the distances from the left and right ears to the camera lens respectively, and (x1, y1, z1) denotes the three-dimensional coordinate corresponding to D_left_rose;
determine the distances from the left and right eyes to the camera lens, wherein D_eye_left and D_eye_right denote the distances from the left and right eyes to the camera lens respectively, (x2, y2, z2) denotes the three-dimensional coordinate of the left eye, and (x4, y4, z4) denotes the three-dimensional coordinate of the right eye;
determine the depth distance from the left eye to the ear: Dist_eyeToRose = |z2 - z1|;
determine the depth distance from the left eye to the right eye: Dist_eyeToeye = |z4 - z2|;
determine the face rotation angle angle; and
perform liveness judgment according to the depth distance Dist_eyeToRose from the left eye to the ear and the face rotation angle angle, or according to the depth distance Dist_eyeToeye from the left eye to the right eye and the face rotation angle angle, comprising:
when the determined face rotation angle lies in the range [0, angle], setting a threshold T1: if Dist_eyeToRose < T1, a non-living-body attack is determined;
when the determined face rotation angle lies in the range [angle, 90], judging the depth difference between the left and right eyes: setting a threshold T1, if Dist_eyeToeye < T1, a non-living-body attack is determined.
The embodiment of the present invention provides a face liveness verification device based on binocular cameras, applied to a mobile terminal, comprising: a shooting module, a correction and row alignment module, a detection module, a matching module, and a judgment module, wherein: the shooting module shoots facial images with the left and right cameras of the mobile terminal; the correction and row alignment module performs distortion correction and row alignment on the captured left and right facial images respectively; the detection module performs face detection on the corrected left and right facial images respectively; the matching module matches facial key points between the left and right images after face detection; and the judgment module performs liveness judgment according to the distances between the facial key points. The embodiment of the present invention can effectively resist attacks such as photos and videos during face recognition authentication; the method is simple, reliable, mature, and fast, reducing security risks and improving the user experience.
It should be noted that the above device embodiment and the method embodiment share the same concept; for the specific implementation process, refer to the method embodiment. The technical features of the method embodiment apply correspondingly to the device embodiment and are not repeated here.
The technical solution of the present invention is described in further detail below with reference to specific embodiments.
Please refer to Fig. 5.
The embodiment of the present invention provides a face liveness verification method based on binocular cameras, applied to a mobile terminal, the method comprising:
Step S501: set up the binocular camera environment on the mobile terminal.
The mobile terminal is fitted with two cameras, which together form the binocular camera environment.
Step S502: shoot facial images with the left and right cameras of the mobile terminal, comprising: shooting facial images of the user with the left and right cameras of the mobile terminal simultaneously.
Step S503: perform distortion correction on the facial images captured by the left and right cameras.
Distortion correction is performed on the facial images captured by the left and right cameras in order to eliminate the facial distortion introduced by the camera lens, which is especially severe around the periphery of the lens field of view.
Distortion includes radial distortion and tangential distortion. Radial distortion arises because light rays far from the optical center of the camera are bent more strongly than rays near the center. Tangential distortion arises from manufacturing defects that leave the lens not parallel to the image plane.
After distortion correction, distortion across the entire field of view of the facial image is substantially eliminated, which improves the accuracy of face recognition.
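The radial and tangential distortion described above is commonly modelled with the standard polynomial lens-distortion model. The sketch below applies that model to normalised image coordinates; the coefficient values are hypothetical, chosen only to show that radial displacement grows with distance from the optical centre. In practice a calibration library (for example OpenCV) computes the coefficients and performs the inverse mapping.

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply the standard radial (k1, k2) and tangential (p1, p2)
    distortion model to normalised image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# Radial distortion grows with distance from the optical centre:
# a point near the centre barely moves, a point near the edge of the
# field of view is displaced noticeably (the effect the text describes).
near = distort(0.05, 0.05, k1=-0.2, k2=0.05, p1=0.0, p2=0.0)
far = distort(0.8, 0.8, k1=-0.2, k2=0.05, p1=0.0, p2=0.0)
print(near)  # stays very close to (0.05, 0.05)
print(far)   # displaced well away from (0.8, 0.8)
```

Correction inverts this mapping so that straight lines in the scene project to straight lines in the image.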
Step S504: perform binocular rectification on the distortion-corrected left and right facial images, comprising: rotating the distortion-corrected facial images captured by the left and right cameras and performing binocular rectification, so that the images obtained by the binocular cameras are mathematically aligned.
Please refer to Fig. 6. When the binocular cameras are installed on the mobile terminal, the left and right cameras cannot be mounted exactly level; there is always some rotation between them. Binocular rectification therefore brings the images obtained by the binocular cameras into mathematical alignment.
Please refer to Fig. 7. After rotation, identical content in the left and right facial images captured by the two cameras lies on the same horizontal plane.
The final effect of rotating the left and right facial images is shown in Fig. 8. At this point every facial key point lies at the same level in both cameras, so the distance from the camera to a particular part of the actual face can be calculated accurately.
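After rectification, a matched key point should lie on (almost) the same image row in both views, which is exactly the alignment property the figures illustrate. A small helper like the following, with a hypothetical pixel tolerance and hypothetical coordinates, can verify that property before any disparity is computed:

```python
def rows_aligned(left_points, right_points, tol_px=2.0):
    """Check that each matched key point pair lies on the same image
    row (y coordinate) in the rectified left and right images.

    left_points / right_points: lists of (x, y) pixel coordinates in
    the same key-point order (ear, eye, nose, mouth, ...).
    """
    return all(abs(yl - yr) <= tol_px
               for (_, yl), (_, yr) in zip(left_points, right_points))

# Rectified pair: rows agree to within a couple of pixels.
left = [(120, 200), (180, 205), (150, 240)]
right = [(95, 201), (154, 204), (126, 239)]
print(rows_aligned(left, right))  # True

# Unrectified pair: the two views are rotated relative to each other,
# so the rows disagree and disparity-based depth would be meaningless.
print(rows_aligned(left, [(95, 230), (154, 178), (126, 260)]))  # False
```

Only the x coordinates then differ between the views, and that horizontal difference is the disparity used for depth.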
Step S505: perform face detection on the corrected left and right facial images respectively, comprising:
detecting, with the deep-learning-based MTCNN algorithm, the position of the face frame and the facial key point information in the corrected left and right facial images, wherein the facial key points include the ears, nose, mouth, eyes, and the like. As shown in Fig. 8, MTCNN face detection locates the face frame and, at the same time, each key point of the left and right faces.
Step S506: match facial key points between the left and right facial images after face detection, comprising:
matching, according to the obtained face frame and facial key point information, the positions of corresponding key points in the left and right facial images; for example, the ear in the left facial image is matched with the corresponding ear in the right facial image, the nose in the left facial image is matched with the nose in the right facial image, and so on.
In step S505, the MTCNN algorithm yields the face frame and the facial key point information, and the key points of the left and right faces correspond one to one; for example, as shown in Fig. 8, the ear coordinates in the left image correspond to the ear coordinates in the right image, and the nose coordinates in the left image correspond to the nose coordinates in the right image.
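Because a detector such as MTCNN reports the same fixed set of key points for every detected face, pairing the left-image and right-image detections reduces to joining the two landmark sets by name. A minimal sketch, with hypothetical landmark names and coordinates:

```python
def match_keypoints(left_landmarks, right_landmarks):
    """Pair corresponding key points of the left and right facial
    images. Both inputs map a landmark name (e.g. 'left_eye', 'nose')
    to its (x, y) pixel position; since the detector reports the same
    landmark set for each face, the names line up one to one."""
    return {name: (left_landmarks[name], right_landmarks[name])
            for name in left_landmarks if name in right_landmarks}

left = {"left_eye": (180, 205), "right_eye": (240, 204), "nose": (210, 240)}
right = {"left_eye": (154, 204), "right_eye": (214, 205), "nose": (184, 241)}
pairs = match_keypoints(left, right)
print(pairs["nose"])  # ((210, 240), (184, 241))
```

Each matched pair then provides one disparity measurement for the depth calculation in step S507.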
Step S507: perform liveness judgment according to the distances between the facial key points, comprising:
As shown in Fig. 9, there are two coordinate systems OR and OT. Point P1 in the OR coordinate system and point P2 in the OT coordinate system are the corresponding left-eye coordinates, so the distance from the person's left eye to the horizontal axis b can be calculated from the disparity between P1 and P2.
From the key points of the left and right faces shown in Fig. 9, combined with Fig. 8 above, the distance from each relevant facial key point to the camera can be calculated.
Please refer to Fig. 8 and Fig. 9. Fig. 9 shows the ideal model of binocular shooting, where P1 and P2 are the projections of point P onto OR and OT respectively. When P1 and P2 lie on the same row of the two images, the depth distance Z from P to the camera can be obtained from the disparity between P1 and P2.
P1 and P2 denote the horizontal positions of the point in the left and right images respectively, and the disparity is d = P1 - P2. The depth distance Z is inversely proportional to the disparity d, and Z can be derived by similar triangles:
Z = f * b / d
In the above formula, f denotes the focal length and b denotes the baseline distance between the two lenses. This result holds for the ideal model. In practical applications, however, P1 and P2 rarely lie on exactly the same horizontal line, and the row error between the left and right images can be very large; if the distance from P to the lens were computed from the disparity regardless, the result would be badly in error. The adjustment described in the embodiment of the present invention is therefore applied.
In the embodiment of the present invention, since the key points of the left and right faces have already been obtained, the key point above the left ear in the left image is connected by a straight line, as shown in Fig. 8 (1c), to its corresponding key point in the right image. Because these two key points lie on the same epipolar plane (refer to Fig. 6 (1a)), the corresponding key point of the left ear in the right image is constrained to move along the epipolar line erpr; different positions on the epipolar line erpr yield different disparities and therefore different results Z according to the above formula.
Please refer to Figure 10.
Determine the distances from the left and right ears to the camera lens, wherein D_left_rose and D_right_rose denote the distances from the left and right ears to the camera lens respectively, and (x1, y1, z1) denotes the three-dimensional coordinate corresponding to D_left_rose.
Determine the distance from the nose to the camera lens, wherein D_nose denotes the distance from the nose to the camera lens and (x3, y3, z3) denotes the three-dimensional coordinate of the nose.
Determine the distances from the left and right mouth corners to the camera lens, wherein D_mouth_left and D_mouth_right denote the distances from the left and right mouth corners to the camera lens respectively.
Determine the distances from the left and right eyes to the camera lens, wherein D_eye_left and D_eye_right denote the distances from the left and right eyes to the camera lens respectively, (x2, y2, z2) denotes the three-dimensional coordinate of the left eye, and (x4, y4, z4) denotes the three-dimensional coordinate of the right eye.
Determine the depth distance from the left eye to the ear: Dist_eyeToRose = |z2 - z1|.
Determine the depth distance from the nose to the ear: Dist_noseToRose = |z3 - z1|.
Determine the depth distance from the nose to the left eye: Dist_noseToEye = |z3 - z2|.
Determine the depth distance from the left eye to the right eye: Dist_eyeToeye = |z4 - z2|.
Determine the face rotation angle angle.
When the determined face rotation angle lies in the range [0, angle], set a threshold T1: if Dist_eyeToRose < T1, a non-living-body attack is determined.
When the determined face rotation angle lies in the range [angle, 90], judge the depth difference between the left and right eyes: set a threshold T1, if Dist_eyeToeye < T1, a non-living-body attack is determined.
In addition, the embodiment of the present invention also provides a mobile terminal. As shown in Fig. 11, the mobile terminal 900 comprises: a memory 902, a processor 901, and one or more computer programs stored in the memory 902 and executable on the processor 901, the memory 902 and the processor 901 being coupled through a bus system 903. When the one or more computer programs are executed by the processor 901, the following steps of the face liveness verification method based on binocular cameras provided by the embodiment of the present invention are realized:
S1, shooting facial images with the left and right cameras of the mobile terminal;
S2, performing distortion correction and row alignment on the captured left and right facial images respectively;
S3, performing face detection on the corrected left and right facial images respectively;
S4, matching facial key points between the left and right images after face detection;
S5, performing liveness judgment according to the distances between the facial key points.
Further, before step S1 of shooting facial images with the left and right cameras of the mobile terminal, the method further comprises: setting up the binocular camera environment on the mobile terminal. The mobile terminal is fitted with two cameras, which together form the binocular camera environment.
Further, in step S1, shooting facial images with the left and right cameras of the mobile terminal comprises:
shooting facial images of the user with the left and right cameras of the mobile terminal simultaneously.
Further, in step S2, performing distortion correction and row alignment on the captured left and right facial images comprises:
S21, performing distortion correction on the facial images captured by the left and right cameras;
S22, performing binocular rectification on the distortion-corrected left and right facial images, comprising: rotating the distortion-corrected facial images captured by the left and right cameras and performing binocular rectification, so that the images obtained by the binocular cameras are mathematically aligned.
Further, in step S3, performing face detection on the corrected left and right facial images respectively comprises:
detecting, with the deep-learning-based MTCNN algorithm, the position of the face frame and the facial key point information in the corrected left and right facial images, wherein the facial key points include the ears, nose, mouth, eyes, and the like.
Further, in step S4, matching facial key points between the left and right facial images after face detection comprises:
matching, according to the obtained face frame and facial key point information, the positions of corresponding key points in the left and right facial images; for example, the ear in the left facial image is matched with the corresponding ear in the right facial image, the nose in the left facial image is matched with the nose in the right facial image, and so on.
Further, in step S5, performing liveness judgment according to the distances between the facial key points comprises:
S51, determining the distances from the left and right ears to the camera lens, wherein D_left_rose and D_right_rose denote the distances from the left and right ears to the camera lens respectively, and (x1, y1, z1) denotes the three-dimensional coordinate corresponding to D_left_rose;
S52, determining the distances from the left and right eyes to the camera lens, wherein D_eye_left and D_eye_right denote the distances from the left and right eyes to the camera lens respectively, (x2, y2, z2) denotes the three-dimensional coordinate of the left eye, and (x4, y4, z4) denotes the three-dimensional coordinate of the right eye;
S53, determining the depth distance from the left eye to the ear: Dist_eyeToRose = |z2 - z1|;
S54, determining the depth distance from the left eye to the right eye: Dist_eyeToeye = |z4 - z2|;
S55, determining the face rotation angle angle;
S56, performing liveness judgment according to the depth distance Dist_eyeToRose from the left eye to the ear and the face rotation angle angle, or according to the depth distance Dist_eyeToeye from the left eye to the right eye and the face rotation angle angle, comprising:
when the determined face rotation angle lies in the range [0, angle], setting a threshold T1: if Dist_eyeToRose < T1, a non-living-body attack is determined; alternatively,
when the determined face rotation angle lies in the range [angle, 90], judging the depth difference between the left and right eyes: setting a threshold T1, if Dist_eyeToeye < T1, a non-living-body attack is determined.
The methods disclosed in the embodiments of the present invention may be applied in, or realized by, the processor 901. The processor 901 may be an integrated circuit chip with signal processing capability. In implementation, each step of the above methods may be completed by an integrated logic circuit of hardware in the processor 901 or by instructions in the form of software. The processor 901 may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 901 may implement or execute the methods, steps, and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. A software module may be located in a storage medium located in the memory 902; the processor 901 reads the information in the memory 902 and completes the steps of the foregoing methods in combination with its hardware.
It is appreciated that the memory 902 of the embodiment of the present invention may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM, Read-Only Memory), a programmable read-only memory (PROM, Programmable Read-Only Memory), an erasable programmable read-only memory (EPROM, Erasable Programmable Read-Only Memory), an electrically erasable programmable read-only memory (EEPROM, Electrically Erasable Programmable Read-Only Memory), a ferroelectric random access memory (FRAM, Ferroelectric Random Access Memory), a flash memory (Flash Memory) or another memory technology, a compact disc read-only memory (CD-ROM, Compact Disc Read-Only Memory), a digital versatile disc (DVD, Digital Versatile Disc) or other optical disc storage, a magnetic cassette, magnetic tape, magnetic disk storage, or another magnetic storage device. The volatile memory may be a random access memory (RAM, Random Access Memory); by way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM, Static Random Access Memory), synchronous static random access memory (SSRAM, Synchronous Static Random Access Memory), dynamic random access memory (DRAM, Dynamic Random Access Memory), synchronous dynamic random access memory (SDRAM, Synchronous Dynamic Random Access Memory), double data rate synchronous dynamic random access memory (DDRSDRAM, Double Data Rate Synchronous Dynamic Random Access Memory), enhanced synchronous dynamic random access memory (ESDRAM, Enhanced Synchronous Dynamic Random Access Memory), synclink dynamic random access memory (SLDRAM, SyncLink Dynamic Random Access Memory), and direct rambus random access memory (DRRAM, Direct Rambus Random Access Memory). The memories described in the embodiments of the present invention are intended to include, but are not limited to, these and any other suitable types of memory.
It should be noted that the above mobile terminal embodiment and the method embodiment share the same concept; for the specific implementation process, refer to the method embodiment. The technical features of the method embodiment apply correspondingly to the mobile terminal embodiment and are not repeated here.
In addition, in an exemplary embodiment, the embodiment of the present invention also provides a computer storage medium, specifically a computer-readable storage medium, for example the memory 902 storing a computer program. The computer storage medium stores one or more programs of the face liveness verification method based on binocular cameras. When the one or more programs of the face liveness verification method based on binocular cameras are executed by the processor 901, the following steps of the face liveness verification method based on binocular cameras provided by the embodiment of the present invention are realized:
S1, shooting facial images with the left and right cameras of the mobile terminal;
S2, performing distortion correction and row alignment on the captured left and right facial images respectively;
S3, performing face detection on the corrected left and right facial images respectively;
S4, matching facial key points between the left and right images after face detection;
S5, performing liveness judgment according to the distances between the facial key points.
Further, before step S1 of shooting facial images with the left and right cameras of the mobile terminal, the method further comprises: setting up the binocular camera environment on the mobile terminal. The mobile terminal is fitted with two cameras, which together form the binocular camera environment.
Further, in step S1, shooting facial images with the left and right cameras of the mobile terminal comprises:
shooting facial images of the user with the left and right cameras of the mobile terminal simultaneously.
Further, in step S2, performing distortion correction and row alignment on the captured left and right facial images comprises:
S21, performing distortion correction on the facial images captured by the left and right cameras;
S22, performing binocular rectification on the distortion-corrected left and right facial images, comprising: rotating the distortion-corrected facial images captured by the left and right cameras and performing binocular rectification, so that the images obtained by the binocular cameras are mathematically aligned.
Further, in step S3, performing face detection on the corrected left and right facial images respectively comprises:
detecting, with the deep-learning-based MTCNN algorithm, the position of the face frame and the facial key point information in the corrected left and right facial images, wherein the facial key points include the ears, nose, mouth, eyes, and the like.
Further, in step S4, matching facial key points between the left and right facial images after face detection comprises:
matching, according to the obtained face frame and facial key point information, the positions of corresponding key points in the left and right facial images; for example, the ear in the left facial image is matched with the corresponding ear in the right facial image, the nose in the left facial image is matched with the nose in the right facial image, and so on.
Further, in step S5, performing liveness judgment according to the distances between the facial key points comprises:
S51, determining the distances from the left and right ears to the camera lens, wherein D_left_rose and D_right_rose denote the distances from the left and right ears to the camera lens respectively, and (x1, y1, z1) denotes the three-dimensional coordinate corresponding to D_left_rose;
S52, determining the distances from the left and right eyes to the camera lens, wherein D_eye_left and D_eye_right denote the distances from the left and right eyes to the camera lens respectively, (x2, y2, z2) denotes the three-dimensional coordinate of the left eye, and (x4, y4, z4) denotes the three-dimensional coordinate of the right eye;
S53, determining the depth distance from the left eye to the ear: Dist_eyeToRose = |z2 - z1|;
S54, determining the depth distance from the left eye to the right eye: Dist_eyeToeye = |z4 - z2|;
S55, determining the face rotation angle angle;
S56, performing liveness judgment according to the depth distance Dist_eyeToRose from the left eye to the ear and the face rotation angle angle, or according to the depth distance Dist_eyeToeye from the left eye to the right eye and the face rotation angle angle, comprising:
when the determined face rotation angle lies in the range [0, angle], setting a threshold T1: if Dist_eyeToRose < T1, a non-living-body attack is determined; alternatively,
when the determined face rotation angle lies in the range [angle, 90], judging the depth difference between the left and right eyes: setting a threshold T1, if Dist_eyeToeye < T1, a non-living-body attack is determined.
It should be noted that the program embodiment of the face liveness verification method based on binocular cameras on the above computer-readable storage medium and the method embodiment share the same concept; for the specific implementation process, refer to the method embodiment. The technical features of the method embodiment apply correspondingly to the embodiment of the above computer-readable storage medium and are not repeated here.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be realized by software together with a necessary general-purpose hardware platform, and may of course also be realized by hardware, but in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) that includes instructions causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Under the inspiration of the present invention, those skilled in the art may devise many further forms without departing from the purpose of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.
Claims (10)
1. A face liveness verification method applied to a mobile terminal, characterized in that the method comprises:
shooting facial images with the left and right cameras of the mobile terminal;
performing distortion correction and row alignment on the captured left and right facial images respectively;
performing face detection on the corrected left and right facial images respectively;
matching facial key points between the left and right images after face detection; and
performing liveness judgment according to the distances between the facial key points.
2. The method according to claim 1, characterized in that performing distortion correction and row alignment on the captured left and right face images respectively comprises:
performing distortion correction on the face images captured by the left and right cameras;
performing binocular rectification on the distortion-corrected left and right face images.
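Distortion correction of this kind is commonly implemented with a radial lens-distortion model. The following pure-Python sketch uses the Brown-Conrady radial model and is illustrative only: the coefficients `k1` and `k2` are assumed calibration values, and a production system would more likely call OpenCV's undistortion routines on whole images:

```python
def distort_normalized(x, y, k1, k2):
    """Apply the radial (Brown-Conrady) distortion model to a normalized
    image point: scale by (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

def undistort_normalized(xd, yd, k1, k2, iters=20):
    """Invert the distortion by fixed-point iteration: repeatedly divide
    the distorted point by the factor computed at the current estimate."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    return x, y
```

For typical small distortion coefficients the iteration converges quickly, so correcting a detected key point position is a matter of a round trip through `undistort_normalized`.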
3. The method according to claim 2, characterized in that performing binocular rectification on the distortion-corrected left and right face images comprises: rotating the distortion-corrected face images captured by the left and right cameras, so that the images obtained by the binocular cameras are mathematically row-aligned.
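After binocular rectification, corresponding points in the two views lie on the same image row, so a quick sanity check is that matched key points share (almost) the same y coordinate. A minimal sketch of such a check, where the pixel tolerance is an assumed value:

```python
def is_row_aligned(kps_left, kps_right, tol=1.5):
    """Check that each matched key point pair lies on (almost) the same
    image row, as points in properly rectified stereo images should.
    kps_left / kps_right map key point names to (x, y) pixel positions."""
    for name, (xl, yl) in kps_left.items():
        xr, yr = kps_right[name]
        if abs(yl - yr) > tol:
            return False
    return True
```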
4. The method according to claim 2, characterized in that performing face detection on the corrected left and right face images respectively comprises: detecting the positions of the face bounding boxes and the face key point information in the corrected left and right face images using the deep-learning-based MTCNN algorithm.
5. The method according to claim 4, characterized in that performing face key point matching on the left and right images that have passed face detection comprises: matching the key point positions of the faces in the left and right face images according to the obtained face bounding boxes and face key point information.
6. The method according to claim 5, characterized in that performing living body judgment according to the distances between the face key points comprises:
determining the distances from the left and right ears to the camera lens;
determining the distances from the left and right eyes to the camera lens;
determining the depth distance Dist_eyeToRose from the left eyeball to the ear;
determining the depth distance Dist_eyeToeye from the left eyeball to the right eyeball;
determining the face rotation angle angle;
performing living body judgment according to the depth distance Dist_eyeToRose from the left eyeball to the ear and the face rotation angle angle; or, performing living body judgment according to the depth distance Dist_eyeToeye from the left eyeball to the right eyeball and the face rotation angle angle.
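With row-aligned cameras, each key point's depth follows from its horizontal disparity between the two views via the triangulation relation Z = f·B/d (focal length f in pixels, baseline B), and the depth distances of this claim are differences of such per-point depths. A minimal sketch; any concrete focal length and baseline values would come from camera calibration:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline):
    """Triangulate one key point's depth from rectified stereo:
    Z = f * B / disparity. Returns None for a zero or negative
    disparity (point at infinity, or a matching error)."""
    d = x_left - x_right
    if d <= 0:
        return None
    return focal_px * baseline / d

def depth_difference(pl_a, pr_a, pl_b, pr_b, focal_px, baseline):
    """Depth distance between two key points (e.g. Dist_eyeToeye):
    |Z_a - Z_b| from their per-point triangulated depths. Each
    argument is an (x, y) pixel position in the left or right view."""
    za = depth_from_disparity(pl_a[0], pr_a[0], focal_px, baseline)
    zb = depth_from_disparity(pl_b[0], pr_b[0], focal_px, baseline)
    if za is None or zb is None:
        return None
    return abs(za - zb)
```

On a real 3D face the two eyes (or an eye and an ear) sit at noticeably different depths; on a flat photograph or screen all key points triangulate to nearly the same plane, so these differences collapse toward zero.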
7. The method according to claim 6, characterized in that performing living body judgment according to the depth distance Dist_eyeToRose from the left eyeball to the ear and the face rotation angle angle comprises: when the determined face rotation angle is within the range [0, angle], setting a threshold T1; if Dist_eyeToRose < T1, a non-living-body attack is determined; or,
performing living body judgment according to the depth distance Dist_eyeToeye from the left eyeball to the right eyeball and the face rotation angle angle comprises: when the determined face rotation angle is within the range [angle, 90], judging the depth difference between the left and right eyes; setting a threshold T1, if Dist_eyeToeye < T1, a non-living-body attack is determined.
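The decision rule of this claim selects the depth cue by pose: for near-frontal faces the eye-to-ear depth is compared with a threshold, and for more rotated faces the eye-to-eye depth is, since a flat photograph or screen replay yields near-zero depth differences either way. A minimal sketch; the boundary angle and threshold values below are illustrative assumptions, not values from the patent:

```python
def decide_liveness(rotation_deg, dist_eye_to_ear, dist_eye_to_eye,
                    boundary_deg=30.0, threshold=5.0):
    """Claim-7-style decision: pick the depth cue by the face rotation
    angle, then compare it with a threshold; a cue below the threshold
    means the face is 'flat', i.e. a non-living-body attack.
    Returns True for a living body, False for an attack."""
    if 0 <= rotation_deg <= boundary_deg:
        cue = dist_eye_to_ear    # near-frontal pose: use eye-to-ear depth
    else:
        cue = dist_eye_to_eye    # rotated pose: use eye-to-eye depth
    return cue >= threshold
```

The switch on pose matters because the eye-to-eye disparity shrinks as the face turns toward profile, so each cue is only discriminative in its own angular range.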
8. A face living body verification device applying the face living body verification method according to any one of claims 1 to 7, characterized in that the device comprises a capture module, a correction and row-alignment module, a detection module, a matching module, and a judgment module, wherein:
the capture module is configured to capture face images with the left and right cameras of the mobile terminal;
the correction and row-alignment module is configured to perform distortion correction and row alignment on the captured left and right face images respectively;
the detection module is configured to perform face detection on the corrected left and right face images respectively;
the matching module is configured to perform face key point matching on the left and right images that have passed face detection;
the judgment module is configured to perform living body judgment according to the distances between the matched face key points.
9. A terminal, characterized by comprising: a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein when the computer program is executed by the processor, the steps of the face living body verification method according to any one of claims 1 to 7 are realized.
10. A computer-readable storage medium, characterized in that a face living body verification program is stored on the computer-readable storage medium, and when the face living body verification program is executed by a processor, the steps of the face living body verification method according to any one of claims 1 to 7 are realized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910252907.XA CN110059590B (en) | 2019-03-29 | 2019-03-29 | Face living body verification method and device, mobile terminal and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110059590A true CN110059590A (en) | 2019-07-26 |
CN110059590B CN110059590B (en) | 2023-06-30 |
Family
ID=67318027
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910252907.XA Active CN110059590B (en) | 2019-03-29 | 2019-03-29 | Face living body verification method and device, mobile terminal and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110059590B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105224924A (en) * | 2015-09-29 | 2016-01-06 | 小米科技有限责任公司 | Living body faces recognition methods and device |
US20180307928A1 (en) * | 2016-04-21 | 2018-10-25 | Tencent Technology (Shenzhen) Company Limited | Living face verification method and device |
CN106355139A (en) * | 2016-08-22 | 2017-01-25 | 厦门中控生物识别信息技术有限公司 | Facial anti-fake method and device |
CN107820071A (en) * | 2017-11-24 | 2018-03-20 | 深圳超多维科技有限公司 | Mobile terminal and its stereoscopic imaging method, device and computer-readable recording medium |
CN108764091A (en) * | 2018-05-18 | 2018-11-06 | 北京市商汤科技开发有限公司 | Biopsy method and device, electronic equipment and storage medium |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110472582A (en) * | 2019-08-16 | 2019-11-19 | 腾讯科技(深圳)有限公司 | 3D face identification method, device and terminal based on eye recognition |
CN110472582B (en) * | 2019-08-16 | 2023-07-21 | 腾讯科技(深圳)有限公司 | 3D face recognition method and device based on eye recognition and terminal |
CN110688946A (en) * | 2019-09-26 | 2020-01-14 | 上海依图信息技术有限公司 | Public cloud silence in-vivo detection device and method based on picture identification |
CN111008605A (en) * | 2019-12-09 | 2020-04-14 | Oppo广东移动通信有限公司 | Method and device for processing straight line in face image, terminal equipment and storage medium |
CN111008605B (en) * | 2019-12-09 | 2023-08-11 | Oppo广东移动通信有限公司 | Linear processing method and device in face image, terminal equipment and storage medium |
CN112926464A (en) * | 2021-03-01 | 2021-06-08 | 创新奇智(重庆)科技有限公司 | Face living body detection method and device |
CN112926464B (en) * | 2021-03-01 | 2023-08-29 | 创新奇智(重庆)科技有限公司 | Face living body detection method and device |
CN112801038A (en) * | 2021-03-02 | 2021-05-14 | 重庆邮电大学 | Multi-view face living body detection method and system |
CN112801038B (en) * | 2021-03-02 | 2022-07-22 | 重庆邮电大学 | Multi-view face in-vivo detection method and system |
Also Published As
Publication number | Publication date |
---|---|
CN110059590B (en) | 2023-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110059590A (en) | Face living body verification method and device, mobile terminal and readable storage medium | |
CN109685740B (en) | Face correction method and device, mobile terminal and computer-readable storage medium | |
WO2016169432A1 (en) | Identity authentication method and device, and terminal | |
CN108227833A (en) | Control method for a flexible-screen terminal, terminal and computer-readable storage medium | |
CN108108704A (en) | Face recognition method and mobile terminal | |
CN109739602A (en) | Mobile terminal wallpaper setting method and device, mobile terminal and storage medium | |
CN109618052A (en) | Call audio switching method and device, mobile terminal and readable storage medium | |
CN109618058A (en) | Method and device for protecting a screen from breakage, touch device and storage medium | |
CN108171743A (en) | Image capturing method, device and computer storage medium | |
CN107358432A (en) | Mobile terminal card-swiping method and device, and computer-readable storage medium | |
CN108961489A (en) | Wearable device control method, terminal and computer-readable storage medium | |
CN109255620A (en) | Encrypted payment method, mobile terminal and computer-readable storage medium | |
CN108196762A (en) | Terminal control method, terminal and computer-readable storage medium | |
CN108540458A (en) | Client verification method, device, server and storage medium | |
CN107483804A (en) | Image capturing method, mobile terminal and computer-readable storage medium | |
CN110035270A (en) | 3D image display method, terminal and computer-readable storage medium | |
CN109683742A (en) | Method and device for preventing accidental touch of a touch device, touch device and storage medium | |
CN108376239A (en) | Face recognition method, mobile terminal and storage medium | |
CN107426441A (en) | Terminal display method, terminal and computer-readable storage medium | |
CN108108600A (en) | Dual-screen security verification method, mobile terminal and computer-readable storage medium | |
CN107527036A (en) | Environment security detection method, terminal and computer-readable storage medium | |
CN108197560 (en) | Face image recognition method, mobile terminal and computer-readable storage medium | |
CN108196773 (en) | Control method for a flexible-screen terminal, terminal and computer-readable storage medium | |
CN110060617A (en) | Screen brightness adjustment method and device, terminal and readable storage medium | |
CN107451547A (en) | Living body identification method and related product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |