CN110287900A - Verification method and verifying device - Google Patents
- Publication number
- CN110287900A (application CN201910568579.4A)
- Authority
- CN
- China
- Prior art keywords
- face
- module
- image
- infrared image
- facial contour
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Computer Security & Cryptography (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Collating Specific Patterns (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Image Analysis (AREA)
Abstract
An embodiment of the invention provides a verification method and device, comprising: obtaining a near-infrared image and a depth image of a target object; performing face detection on the near-infrared image to obtain facial contour information; performing liveness detection according to the facial contour information and the depth image; in response to the liveness detection passing, performing face verification on the target object according to the near-infrared image; and in response to the face verification passing, executing an unlock operation. The verification method effectively solves the problems of face recognition in dark or backlit conditions and of attacks, such as dummy attacks, that are otherwise difficult to defend against.
Description
Technical field
The present invention relates to the technical field of face recognition, and in particular to a verification method and a verifying device.
Background technique
Face recognition schemes involve face detection, face recognition, liveness detection, open/closed-eye detection, and image-quality detection. By enrolling face image information in advance, a mobile phone can, when the screen is turned on, quickly perform liveness detection and open/closed-eye detection on the face in front of the screen, compare the face information, and unlock. Current face-unlock schemes include ordinary RGB face unlock, near-infrared face unlock, and 3D structured-light face unlock.
The cost of existing 2D mobile-phone face-unlock schemes is relatively low, but they are easily affected by the environment and have difficulty defending against paper-cut, photo, or high-cost dummy attacks. Under low light, the face in an RGB photo is too dark to be recognized. Under strong light, the face is easily overexposed, and occlusion by tree shade and the like can form a "yin-yang face" that increases recognition difficulty. As for attack resistance, because a 2D scheme captures only a 2D RGB or 2D near-infrared image, it is easily defeated by an attack, threatening the security of the user's device.
Summary of the invention
The embodiments of the invention provide a verification method and a verifying device that can effectively solve the problems of face recognition in dark or backlit conditions and of attacks, such as dummy attacks, that are difficult to defend against.
A first aspect of the embodiments of the invention discloses a verification method, the method comprising: obtaining a near-infrared image and a depth image of a target object; performing face detection on the near-infrared image to obtain facial contour information; performing liveness detection according to the facial contour information and the depth image; in response to the liveness detection passing, performing face verification on the target object according to the near-infrared image; and in response to the face verification passing, executing an unlock operation.
Optionally, performing face detection on the near-infrared image to obtain the facial contour information comprises: adjusting the size of the near-infrared image to obtain an image pyramid; performing face feature extraction on the image pyramid and marking candidate face boxes; and optimizing the boxes to obtain the facial contour information.
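The image-pyramid step above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the 12-pixel minimum face size and the 0.709 scale factor are conventional MTCNN-style values assumed here, and nearest-neighbour subsampling stands in for proper interpolation.

```python
import numpy as np

def build_image_pyramid(image, min_size=12, factor=0.709):
    """Return progressively smaller copies of `image` until the shorter
    side would drop below `min_size`. factor=0.709 roughly halves the
    image area per level."""
    pyramid = []
    scale = 1.0
    h, w = image.shape[:2]
    while min(h * scale, w * scale) >= min_size:
        nh, nw = int(h * scale), int(w * scale)
        # nearest-neighbour subsampling via integer index maps
        rows = (np.arange(nh) / scale).astype(int)
        cols = (np.arange(nw) / scale).astype(int)
        pyramid.append(image[rows][:, cols])
        scale *= factor
    return pyramid
```

Face feature extraction then runs once per pyramid level, so faces of different apparent sizes all pass through the detector at roughly the same scale.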
Optionally, after performing face detection on the near-infrared image to obtain the facial contour information, the method further comprises: performing open/closed-eye detection according to the facial contour information.
Optionally, performing open/closed-eye detection according to the facial contour information comprises: inputting the near-infrared image into a first module of a deep learning network, the first module being used for open/closed-eye detection; obtaining eye key-point coordinates from the facial contour information; extracting eye features in the first module according to the eye key-point coordinates; and comparing the extracted eye features with first pre-stored features to perform the open/closed-eye detection, wherein the first pre-stored features are the open-eye features pre-stored in a face feature library.
Optionally, the deep learning network further includes a second module used for the liveness detection, and performing liveness detection according to the facial contour information and the depth image comprises: segmenting a first face region image from the depth image according to the facial contour information of the near-infrared image; inputting the first face region image into the second module and extracting a first face feature of the first face region image with the second module; and comparing the first face feature with second pre-stored features to perform the liveness detection, wherein the second pre-stored features are the live-face feature information pre-stored in the face feature library.
Optionally, the deep learning network further includes a third module used for the face verification, and performing face verification on the target object according to the near-infrared image comprises: segmenting a second face region image from the near-infrared image according to the facial contour information; inputting the second face region image into the third module; extracting a second face feature of the second face region image with the third module; and comparing the second face feature with a third pre-stored feature to judge whether they belong to the same person, wherein the third pre-stored feature is the face feature of the target object pre-stored in the face feature library.
Optionally, the deep learning network is obtained through training, and the training comprises: establishing a deep learning network, wherein the deep learning network includes the first module, the second module, and the third module; selecting training samples, the training samples including open/closed-eye detection samples, liveness detection samples, and face verification samples; extracting the first pre-stored features from the open/closed-eye detection samples with the first module, extracting the second pre-stored features from the liveness detection samples with the second module, and extracting the third pre-stored features from the face verification samples with the third module; storing the first pre-stored features, the second pre-stored features, and the third pre-stored features in the face feature library; training the first module according to the first pre-stored features, the second module according to the second pre-stored features, and the third module according to the third pre-stored features; and adjusting the hyperparameters of the deep learning network according to the training results to obtain the trained deep learning network.
A second aspect of the invention discloses a verifying device, the verifying device comprising: a first obtaining unit for obtaining a near-infrared image and a depth image of a target object; a face detection unit for performing face detection on the near-infrared image to obtain facial contour information; a liveness detection unit for performing liveness detection according to the facial contour information and the depth image; a face verification unit for performing, in response to the liveness detection passing, face verification on the target object according to the near-infrared image; and an unlocking unit for executing, in response to the face verification passing, an unlock operation.
Optionally, in performing face detection on the near-infrared image to obtain the facial contour information, the face detection unit is specifically configured to: adjust the size of the near-infrared image to obtain an image pyramid; perform face feature extraction on the image pyramid and mark candidate face boxes; and optimize the boxes to obtain the facial contour information.
Optionally, in performing open/closed-eye detection according to the facial contour information, an open/closed-eye detection unit is specifically configured to: input the near-infrared image into the first module of the deep learning network, the first module being used for open/closed-eye detection; obtain eye key-point coordinates from the facial contour information; extract eye features in the first module according to the eye key-point coordinates; and compare the extracted eye features with the first pre-stored features to perform the open/closed-eye detection, wherein the first pre-stored features are the open-eye features pre-stored in the face feature library.
Optionally, the deep learning network further includes the second module used for the liveness detection, and in performing liveness detection according to the facial contour information and the depth image, the liveness detection unit is specifically configured to: segment a first face region image from the depth image according to the facial contour information of the near-infrared image; input the first face region image into the second module and extract a first face feature of the first face region image with the second module; and compare the first face feature with the second pre-stored features to perform the liveness detection, wherein the second pre-stored features are the live-face feature information pre-stored in the face feature library.
Optionally, the deep learning network further includes the third module used for the face verification, and in performing face verification on the target object according to the near-infrared image, the face verification unit is specifically configured to: segment a second face region image from the near-infrared image according to the facial contour information; input the second face region image into the third module and extract a second face feature of the second face region image with the third module; and compare the second face feature with the third pre-stored feature to judge whether they belong to the same person, wherein the third pre-stored feature is the face feature of the target object pre-stored in the face feature library.
Optionally, the deep learning network is obtained through training, and the verifying device further includes a training unit configured to: establish a deep learning network, wherein the deep learning network includes the first module, the second module, and the third module; select training samples, the training samples including open/closed-eye detection samples, liveness detection samples, and face verification samples; extract the first pre-stored features from the open/closed-eye detection samples with the first module, extract the second pre-stored features from the liveness detection samples with the second module, and extract the third pre-stored features from the face verification samples with the third module; store the first pre-stored features, the second pre-stored features, and the third pre-stored features in the face feature library; train the first module according to the first pre-stored features, the second module according to the second pre-stored features, and the third module according to the third pre-stored features; and adjust the hyperparameters of the deep learning network according to the training results to obtain the trained deep learning network.
A third aspect of the invention discloses an electronic device including a processor, a memory, a communication interface, and one or more programs, the one or more programs being stored in the memory and configured to be executed by the processor, the programs including instructions for executing the method of any item of the first aspect.
A fourth aspect of the invention discloses a computer-readable storage medium, the computer storage medium storing a computer program, the computer program being executed by a processor to realize the method of any item of the first aspect.
In the scheme of the embodiments of the invention, a near-infrared image and a depth image of a target object are obtained; face detection is performed on the near-infrared image to obtain facial contour information; liveness detection is performed according to the facial contour information and the depth image; in response to the liveness detection passing, face verification is performed on the target object according to the near-infrared image; and in response to the face verification passing, an unlock operation is executed. With the scheme provided by the invention, face detection, liveness detection, face verification, and related detections can be performed on the near-infrared image and the depth image, effectively solving face recognition in dark or backlit conditions and defending against attacks, such as dummy attacks, that are otherwise difficult to counter.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. Apparently, the drawings in the following description show merely some embodiments of the invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of the registration flow of a verification method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the unlock flow of a verification method provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a verification method provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the application;
Fig. 5 is a schematic structural diagram of a verifying device provided by an embodiment of the present invention.
Specific embodiment
To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the invention are described below with reference to the accompanying drawings. Apparently, the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the invention without creative effort shall fall within the protection scope of the invention.
The terms "first", "second", "third", and the like appearing in the description, claims, and drawings of the invention are used to distinguish different objects and are not intended to describe a specific order. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device containing a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units intrinsic to the process, method, product, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the description do not necessarily all refer to the same embodiment, nor are they independent or alternative embodiments mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
The electronic device involved in the embodiments of the application may include various handheld devices with wireless communication functions, in-vehicle devices, wireless headsets, computing devices, or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), and terminal devices. The electronic device may be, for example, a smartphone, a tablet computer, or an earphone box. For convenience of description, the devices mentioned above are collectively referred to as electronic devices.
In the embodiments of the application, the near-infrared image and the depth image are obtained by a near-infrared camera and a TOF (time-of-flight) camera, respectively. The principle of obtaining the depth image with the TOF camera is as follows: the TOF camera emits continuous near-infrared pulses toward the target scene and receives the light pulses reflected back by the object with a sensor; by comparing the phase difference between the emitted light pulses and the light pulses reflected by the object, the transmission delay between the pulses is calculated, from which the distance of the object relative to the emitter is obtained, finally yielding the depth image. A depth image uses the distance (depth) from the camera to each point in the scene as the pixel value, so the 3D information of the object can be obtained from the depth image.
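The phase-to-distance relation described above can be written as d = c·Δφ / (4π·f), where f is the modulation frequency and the factor 4π (rather than 2π) accounts for the round trip of the light. A small sketch; this is the textbook continuous-wave TOF formula rather than code from the patent, and the 100 MHz frequency is taken from the control-unit description later in this document:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_shift_rad, mod_freq_hz):
    """Distance from the phase difference between the emitted and the
    received modulated light: d = c * delta_phi / (4 * pi * f)."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz):
    """For a single modulation frequency, the phase wraps beyond c/(2f),
    so farther distances alias back into this range."""
    return C / (2 * mod_freq_hz)
```

At 100 MHz modulation, for instance, the single-frequency unambiguous range is about 1.5 m; practical sensors extend this with multiple modulation frequencies.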
In general, a TOF camera includes an infrared emission unit, an optical lens, an imaging sensor, a control unit, and a core algorithm computing unit.
The infrared emission unit includes a VCSEL emitter and a diffuser. The VCSEL emits a pulsed square wave at a wavelength of 940 nm. Infrared light at this wavelength is invisible, and its share in the ambient spectrum is minimal, so interference from ambient light can be avoided.
The optical lens converges the reflected light so that it is imaged on the photosensitive sensor.
The imaging sensor is similar to the photosensitive element of an ordinary camera; it receives the reflected light and performs photoelectric conversion on the sensor.
The control unit is the driver IC of the laser emitter. It can drive the laser with high-frequency pulses of up to 100 MHz while suppressing various kinds of interference, ensuring that the drive waveform is a square wave with rise and fall times of about 0.2 ns, which effectively guarantees high depth-measurement precision.
The TOF chip is the core of the TOF camera; it converts the acquired image information into a depth map.
A TOF camera can rapidly identify and track a target. Through the range information, richer positional relationships between objects can be obtained, i.e., foreground and background can be distinguished; with further processing, applications such as three-dimensional modeling can also be completed.
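The foreground/background distinction mentioned above reduces, in its simplest form, to thresholding the depth map. A sketch with an illustrative 800 mm cut-off (not a value from the patent); a depth of zero is treated as "no valid return" and excluded:

```python
import numpy as np

def foreground_mask(depth_mm, max_fg_mm=800):
    """True where a pixel is closer than the cut-off; depth 0 means the
    sensor received no valid return, so it is excluded from both sets."""
    d = np.asarray(depth_mm)
    return (d > 0) & (d < max_fg_mm)
```

Real systems refine such a mask with connected-component analysis, but the depth cut-off alone already separates a near face from a far background.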
To better understand the verification method provided by the embodiments of the application, it is briefly introduced first. The method mainly includes two parts: registration and unlocking.
Referring to Fig. 1, Fig. 1 is a schematic diagram of the registration flow of a verification method provided by the embodiments of the application. First, a near-infrared image is obtained with the near-infrared camera and a depth image with the TOF camera. The near-infrared image is then pre-processed; pre-processing includes face detection, image-quality detection, and open/closed-eye detection. Afterwards, post-processing is performed on the near-infrared image and the depth image; post-processing includes liveness detection, saving the image as a face template, extracting the face features, and storing the face features in the face feature library.
The above process can be understood as follows: after a face is detected in the near-infrared image, problematic near-infrared images are filtered out (face occluded, position out of frame, face turned too far, etc.), depth images taken too close or too far are filtered, and near-infrared images with closed eyes or without the gaze on the screen are further filtered. The near-infrared image then cooperates with the depth image to filter out non-live images. The finally retained near-infrared images are qualified images; the qualified images are saved as face templates, the face features in the templates are extracted, and the face features are stored in the face feature library.
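The filtering chain described above runs its checks in order and rejects a frame at the first failure. A structural sketch; the check names, the dict-shaped frame, and the depth range are illustrative assumptions, not details from the patent:

```python
def run_enrollment_checks(frame, checks):
    """`checks` is an ordered list of (name, predicate) pairs mirroring
    the pipeline above. Returns (passed, name_of_failed_check)."""
    for name, check in checks:
        if not check(frame):
            return False, name
    return True, None

# Illustrative predicates over a dict-shaped frame.
CHECKS = [
    ("face_detected", lambda f: f["face_in_frame"]),
    ("depth_in_range", lambda f: 200 < f["depth_mm"] < 1000),
    ("eyes_open", lambda f: f["eyes_open"]),
    ("liveness", lambda f: f["is_live"]),
]
```

Reporting which check failed is useful in practice, since the registration flow re-prompts the user differently for "too far" than for "eyes closed".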
Those skilled in the art will understand that the order in which the steps of the above method are written in the specific embodiments does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
The embodiments of the application show the registration process of the verification method, which is as follows:
Obtain the near-infrared image and depth image of the target object.
Perform face detection on the near-infrared image to obtain facial contour information.
Perform quality detection on the near-infrared image.
The quality detection covers factors such as exposure, sharpness, color, noise, hand-shake resistance, flash, focus, and artifacts; the image passes the quality detection only if the detection results of all the above factors meet the preset thresholds.
Perform open/closed-eye detection according to the facial contour information.
Perform gaze detection on the near-infrared image.
Gaze detection is optional. It needs to detect the gaze of the human eye in five directions: up, down, left, right, and front. If the gaze detection fails, the near-infrared image and depth image are re-captured.
Perform liveness detection according to the facial contour information and the depth image.
In response to the liveness detection passing, detect the face angle in the near-infrared image.
Detecting the face angle in the near-infrared image is done to ensure, when enrolling the face template for each angle, that the face angle is within a preset range; for example, when performing the frontal detection, the face must face the screen within the allowed deviation range.
After confirming that the face angle is within the preset range, extract the face features in the near-infrared image and compare them with the enrolled face features in the face feature library.
If the face angle exceeds the allowed deviation range, return and re-capture the depth image and near-infrared image.
If the face-feature comparison confirms the same person, save the near-infrared image as a face template.
If the comparison shows it is not the same person, return and re-capture the depth image and near-infrared image.
Repeat the above steps five times to enroll face templates for the front, up, down, left, and right directions respectively.
Each time enrollment starts, the user can be reminded, in the form of text, voice, or text plus voice, to prepare for template enrollment, together with the corresponding action instruction. The action instructions include: face the screen, turn left, turn right, turn up, and turn down.
During template enrollment, if no valid face is detected within the preset time, or if any single detection fails, return and re-capture the depth image and near-infrared image.
It can be seen that the above steps guarantee that the finally enrolled face templates are face images of the target object, and subsequent unlock operations can then be carried out by face comparison.
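The five-direction enrollment loop above, including re-prompting on failure, can be sketched as follows. `capture(pose)` is an assumption of this sketch, standing in for the whole acquisition-and-validation chain (NIR plus depth capture, quality checks, angle check, feature comparison); the retry limit is also illustrative.

```python
POSES = ["front", "up", "down", "left", "right"]

def enroll(capture, max_attempts=3):
    """Collect one face template per pose; re-try a failed pose up to
    `max_attempts` times, and abort enrollment if a pose never passes.
    `capture(pose)` returns a template, or None when any check fails."""
    templates = {}
    for pose in POSES:
        for _ in range(max_attempts):
            template = capture(pose)
            if template is not None:
                templates[pose] = template
                break
        else:
            raise RuntimeError(f"enrollment failed for pose {pose}")
    return templates
```

Keeping the per-pose retry local means one bad frame (a blink, say) only repeats that pose rather than restarting the whole enrollment.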
Referring to Fig. 2, Fig. 2 is a schematic diagram of the unlock flow of a verification method provided by the embodiments of the application. First, a near-infrared image is obtained with the near-infrared camera and a depth image with the TOF camera. The near-infrared image is then pre-processed; pre-processing includes face detection, image-quality detection, and open/closed-eye detection. Afterwards, post-processing is performed on the near-infrared image and the depth image; post-processing includes liveness detection, extracting the face features, and comparing them with the face features in the face feature library.
The above process can be understood as follows: after a face is detected in the near-infrared image, near-infrared images with closed eyes are filtered out; the near-infrared image then cooperates with the depth image to filter out non-live images; face features are extracted from the near-infrared image and compared with the face features in the face feature library. If all steps pass, unlock success is returned; otherwise, failure.
Those skilled in the art will understand that the order in which the steps of the above method are written in the specific embodiments does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of a verification method provided by the embodiments of the application, comprising:
S301: obtain the near-infrared image and depth image of the target object.
The near-infrared image and the depth image are obtained by the near-infrared camera and the TOF camera, respectively. The near-infrared camera emits near-infrared light from an active light source and then images the reflected light received by the photosensitive device; it is not affected by visible-light intensity and can therefore fully solve face recognition in dark or backlit conditions. The TOF camera detects depth information by judging distance from the time of flight; within a range of 100 m its detection accuracy is about 1 cm, it is not affected by object texture, and it has the advantages of a high frame rate, small size, rich effective depth information, a simple manufacturing process, and clear object edges.
S302: perform face detection on the near-infrared image to obtain facial contour information.
The facial contour information includes face-box coordinates and face key-point coordinates. The face key-point coordinates include eye key-point coordinates, eyebrow key-point coordinates, nose key-point coordinates, and lip key-point coordinates, and may also include ear key-point coordinates, cheekbone key-point coordinates, jaw key-point coordinates, and the like.
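Given the eye key-point coordinates included in the facial contour information above, the later open/closed-eye step can crop a patch around each eye before extracting eye features. A minimal sketch; the 32x32 window (half-width 16) is an assumed size, not one the patent specifies, and the crop is clamped at the image border:

```python
import numpy as np

def crop_eye_patch(image, eye_center, half=16):
    """Cut a (2*half)x(2*half) patch centred on the eye key point,
    clamped to the image bounds near the border."""
    cx, cy = eye_center
    h, w = image.shape[:2]
    y1, y2 = max(cy - half, 0), min(cy + half, h)
    x1, x2 = max(cx - half, 0), min(cx + half, w)
    return image[y1:y2, x1:x2]
```

The same clamped-crop helper applies to any of the other key points (nose, lips) when a module needs a local region rather than the whole face.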
The face in the near-infrared image can be detected with feature-invariant methods, template-matching methods, appearance-based methods, knowledge-based methods, and the like; the application does not limit this. Optionally, the deep learning model MTCNN (multi-task convolutional neural network) can be selected for face detection. MTCNN is a cascade of PNet, RNet, and ONet; after the image data is processed by these three convolutional neural networks in turn, the facial contour information is obtained. After face detection, non-conforming images in which the face is occluded, out of frame, or turned too far can be effectively filtered out.
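The PNet → RNet → ONet cascade named above can be shown structurally. The sub-networks here are stand-in callables rather than trained models; only the data flow between stages is taken from the cascade design:

```python
def mtcnn_cascade(image, pnet, rnet, onet):
    """Each stage prunes and refines the candidate boxes produced by the
    previous one; the final stage also regresses facial key points."""
    candidates = pnet(image)                 # dense proposals over the image pyramid
    refined = rnet(image, candidates)        # reject easy false positives
    boxes, landmarks = onet(image, refined)  # final boxes plus key points
    return boxes, landmarks
```

The cascade is cheap overall because the expensive final network only ever sees the few candidates that survived the first two stages.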
S303: perform liveness detection according to the facial contour information and the depth image.
Data samples are selected, a deep learning network is established, and the features that most discriminate live faces from non-live faces are extracted as distinguishing features; the deep learning network is trained on the data samples to obtain a trained deep learning network. The face-box coordinates in the near-infrared image are aligned to the depth image, the first face region image is segmented from the depth image according to the face-box coordinates, and the first face region image is input into the trained deep learning network, which distinguishes live from non-live.
Liveness detection determines whether the face in the image belongs to a real person. It gives the scheme strong resistance to attacks, defeating all 2D attacks such as pictures, videos, paper cut-outs and photographs, as well as most 3D attacks.
S304: In response to the liveness detection passing, face verification is performed on the target object according to the near-infrared image.
Wherein, the trained deep-learning network extracts features from the face region of the near-infrared image and compares them with stored features to perform face verification. Recognition works well in complex environments, maintaining the pass rate across various face angles, complicated lighting, and varying distances.
S305: In response to the face verification passing, an unlock operation is executed.
Wherein, when the unlock operation is executed, the ambient brightness is obtained by a light sensor and the screen brightness of the electronic device is adjusted to match it.
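The brightness adjustment can be sketched as a clamped linear mapping from the illuminance reported by the light sensor to a backlight level. The lux range, level range and linear form are assumptions for illustration; the embodiment does not specify the mapping.

```python
def screen_brightness(ambient_lux: float,
                      lux_range=(0.0, 1000.0),    # assumed sensor range
                      level_range=(10, 255)) -> int:
    """Map ambient illuminance to a screen backlight level, clamped."""
    lo, hi = lux_range
    frac = min(max((ambient_lux - lo) / (hi - lo), 0.0), 1.0)
    lmin, lmax = level_range
    return round(lmin + frac * (lmax - lmin))

dim = screen_brightness(0.0)       # dark room  -> minimum level
bright = screen_brightness(2000)   # daylight   -> maximum level
```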
Wherein, step S303 may also be executed after step S304, in which case the above steps are:
Obtaining the near-infrared image and depth image of the target object;
Performing face detection on the near-infrared image to obtain facial contour information;
Performing face verification on the target object according to the near-infrared image;
In response to the face verification passing, performing liveness detection according to the facial contour information and the depth image;
In response to the liveness detection passing, executing the unlock operation.
In addition, the embodiment of the present application supports unlocking while the phone is not upright: the unlock range covers the full 360-degree rotation of the phone in its plane.
It can be seen that, in the embodiment of the present application, the near-infrared image and depth image of a target object are obtained; face detection is performed on the near-infrared image to obtain facial contour information; liveness detection is performed according to the facial contour information and the depth image; in response to the liveness detection passing, face verification is performed on the target object according to the near-infrared image; and in response to the face verification passing, an unlock operation is executed. With the provided scheme, face detection, liveness detection and face verification can be carried out on near-infrared and depth images, effectively solving the problems of face recognition in darkness or backlight and of hard-to-defend attacks such as dummy faces. In addition, the embodiment of the present application is developed on a hardware-level security platform and can be used where security requirements are relatively high, such as payment.
Optionally, performing face detection on the near-infrared image to obtain the facial contour information includes:
Adjusting the size of the near-infrared image to obtain an image pyramid.
Wherein, the near-infrared image is scaled to produce copies at different scales, which form an image pyramid, so that detection becomes scale-invariant.
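A minimal sketch of the pyramid construction, using a nearest-neighbour resize so the example has no external dependencies beyond NumPy; the 0.7 scale factor and the MTCNN-style 12-pixel minimum size are illustrative choices, not values fixed by the embodiment.

```python
import numpy as np

def image_pyramid(img: np.ndarray, scale: float = 0.7, min_size: int = 12):
    """Repeatedly downscale until the shorter side would drop below min_size."""
    levels = [img]
    while min(levels[-1].shape[:2]) * scale >= min_size:
        h, w = levels[-1].shape[:2]
        nh, nw = round(h * scale), round(w * scale)
        # Nearest-neighbour resize via index sampling.
        ys = (np.arange(nh) / scale).astype(int)
        xs = (np.arange(nw) / scale).astype(int)
        levels.append(levels[-1][ys][:, xs])
    return levels

pyr = image_pyramid(np.zeros((100, 100), dtype=np.uint8))
```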
Performing face feature extraction on the image pyramid and calibrating candidate boxes.
Wherein, candidate boxes and bounding-box regression vectors can be generated at the same time.
Optimizing the candidate boxes to obtain the facial contour information.
Wherein, the optimization includes calibration, merging, fine-tuning and overlap removal. Specifically: the candidate boxes are calibrated with the bounding-box regression vectors; highly overlapping candidates are then merged by non-maximum suppression (NMS); the candidate boxes are fine-tuned with the regression offsets; and NMS is applied again to remove remaining overlapping boxes, yielding the face box coordinates and face keypoint coordinates. The face box coordinates and face keypoint coordinates are attached as labels to the corresponding pixels; for example, a pixel belonging to an eye keypoint carries a label such as "eye keypoint, coordinates: ***". If the face detection fails, the near-infrared image is reacquired.
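The merging of highly overlapping candidates can be sketched as greedy non-maximum suppression: keep the highest-scoring box, discard every candidate whose intersection-over-union (IoU) with it exceeds a threshold, and repeat. The 0.5 threshold and the sample boxes are illustrative values.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """Greedy NMS; boxes are rows of [x1, y1, x2, y2]."""
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the kept box with every remaining candidate.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavy overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)   # the second box overlaps the first and is merged
```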
It can be seen that face detection filters out near-infrared images that contain no face and localizes the face, which benefits subsequent operations such as eye open/close detection and face recognition.
Optionally, after performing face detection on the near-infrared image to obtain the facial contour information, the method further includes:
Performing eye open/close detection according to the facial contour information.
Wherein, if the eye open/close detection fails, the near-infrared image is reacquired. Eye open/close detection makes it possible to judge whether the user is actively authenticating.
Optionally, performing eye open/close detection according to the facial contour information includes:
Inputting the near-infrared image into a first module of a deep-learning network, the first module being used for eye open/close detection.
Wherein, the deep-learning network includes a first module, a second module and a third module, and the first module is used for eye open/close detection.
Obtaining the eye keypoint coordinates from the facial contour information.
Wherein, because the names and coordinates of the face keypoints are attached as labels to the pixels of the image, the eye keypoint coordinates can be obtained quickly from the keypoint names in the labels.
Extracting eye features in the first module according to the eye keypoint coordinates.
Wherein, after the eye keypoints are located from their coordinates, features can be extracted at the eye keypoints with methods such as SIFT (scale-invariant feature transform), HOG (histogram of oriented gradients) or LBP (local binary patterns).
Comparing the extracted eye features with first prestored features to perform the eye open/close detection, wherein the first prestored features are the open-eye features prestored in the face feature database.
Wherein, the trained deep-learning network thus judges whether the user's eyes are open, i.e. whether the user intends to authenticate. If the result is a closed-eye state, or one eye open and one closed, the eye open/close detection fails and the method returns to reacquire the near-infrared image and depth image.
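As one concrete choice among the SIFT/HOG/LBP options above, the following sketch computes an LBP histogram over an eye patch and compares it with a prestored open-eye histogram. The histogram-intersection similarity and the 0.5 threshold are illustrative assumptions, not values taken from the embodiment.

```python
import numpy as np

def lbp_histogram(patch: np.ndarray) -> np.ndarray:
    """8-neighbour local binary pattern codes, pooled into a 256-bin histogram."""
    h, w = patch.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = patch[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = patch[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def eyes_open(eye_feat: np.ndarray, prestored_open_feat: np.ndarray,
              thresh: float = 0.5) -> bool:
    """Histogram-intersection similarity against the prestored open-eye feature."""
    return float(np.minimum(eye_feat, prestored_open_feat).sum()) >= thresh

patch = np.full((5, 5), 7, dtype=np.uint8)   # toy eye patch
feat = lbp_histogram(patch)
open_now = eyes_open(feat, feat)             # identical features -> open
```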
Optionally, the deep-learning network further includes a second module used for the liveness detection, and performing liveness detection according to the facial contour information and the depth image includes:
Segmenting a first face region image from the depth image according to the facial contour information of the near-infrared image;
Inputting the first face region image into the second module, and extracting first face features of the first face region image with the second module;
Comparing the first face features with second prestored features to perform the liveness detection, wherein the second prestored features are the live-face feature information prestored in the face feature database.
Wherein, segmenting the first face region image from the depth image according to the facial contour information of the near-infrared image includes: aligning the face box coordinates into the depth image and segmenting the first face region image from the depth image according to those coordinates. A TOF camera can capture a depth image and an RGB color image simultaneously; the two are registered, with a correspondence between each pair of pixels. However, object edges in the depth image are jagged and do not align with the object boundaries in the corresponding color image, so copying a face location directly from the RGB image into the depth image would introduce a large deviation. The face location coordinates from the near-infrared image are therefore aligned into the depth image instead.
Wherein, the extracted first face features are depth information taken from the depth image and can distinguish a live face image from a non-live one: if the comparison result of the first face features exceeds a preset threshold, the face is considered live. If the liveness detection fails, the near-infrared image is reacquired.
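A minimal sketch of the segmentation step, assuming the near-infrared and depth images are registered at the same resolution so the face box coordinates carry over directly; the (x1, y1, x2, y2) box format is an assumption for illustration.

```python
import numpy as np

def crop_face_depth(depth: np.ndarray, face_box) -> np.ndarray:
    """Cut the first face region out of the depth map using the box
    detected in the registered near-infrared image."""
    x1, y1, x2, y2 = face_box
    h, w = depth.shape
    x1, y1 = max(0, x1), max(0, y1)      # clamp to the image bounds
    x2, y2 = min(w, x2), min(h, y2)
    return depth[y1:y2, x1:x2]

depth = np.arange(100, dtype=np.uint16).reshape(10, 10)  # toy depth map
face = crop_face_depth(depth, (2, 3, 7, 8))
```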
Optionally, the deep-learning network further includes a third module used for the face verification, and performing face verification on the target object according to the near-infrared image includes:
Segmenting a second face region image from the near-infrared image according to the facial contour information;
Inputting the second face region image into the third module, extracting second face features of the second face region image with the third module, and comparing the second face features with third prestored features to judge whether they belong to the same person, wherein the third prestored features are the face features of the target object prestored in the face feature database.
Wherein, the second face region image can be segmented out using the facial contour information, so that the face region is extracted from the background image, eliminating background interference and benefiting the subsequent extraction of the second face features. Through face verification it can be determined whether the authenticating person is the target user.
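The comparison in the third module can be sketched as a similarity test between a probe feature vector and the enrolled template. The cosine metric and the 0.6 threshold are illustrative assumptions; the actual features and decision threshold are determined by the trained network.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(embedding: np.ndarray, template: np.ndarray,
           thresh: float = 0.6) -> bool:
    """Same person if the feature vectors are close enough."""
    return cosine_similarity(embedding, template) >= thresh

probe = np.array([0.6, 0.8, 0.0])      # toy second face features
template = np.array([0.6, 0.8, 0.0])   # toy prestored target features
same_person = verify(probe, template)
```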
Optionally, the deep-learning network is obtained by training, and the training includes:
Establishing the deep-learning network, wherein the deep-learning network includes the first module, the second module and the third module.
Wherein, the first module is used for eye open/close detection, the second module for liveness detection, and the third module for face verification.
Choosing training samples, the training samples including eye open/close detection samples, liveness detection samples and face verification samples.
Wherein, the face verification samples are the face templates of the target object saved at registration; the eye open/close detection samples and the liveness detection samples are image samples chosen from a database such as COCO or ImageNet.
Extracting the first prestored features from the eye open/close detection samples with the first module, extracting the second prestored features from the liveness detection samples with the second module, extracting the third prestored features from the face verification samples with the third module, and storing the first prestored features, the second prestored features and the third prestored features in the face feature database.
Training the first module according to the first prestored features, the second module according to the second prestored features, and the third module according to the third prestored features.
Wherein, the first module, the second module and the third module can be trained with methods such as the BP algorithm (error back-propagation), the OLS algorithm (orthogonal least squares) or an RBF-network learning algorithm, adjusting the weights of each layer of the deep-learning network.
Adjusting the hyperparameters of the deep-learning network according to the training results to obtain the trained deep-learning network.
Wherein, the training results include the loss function, the accuracy and the absolute-error adjustment parameters; the hyperparameters include the learning rate, the number of iterations and the batch size.
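The BP training loop above can be illustrated on a toy module: one linear layer fitted by gradient descent on a mean-squared-error loss, with the learning rate and number of iterations as the hyperparameters. The data, layer shape and hyperparameter values are placeholders, not the patent's actual samples or network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "module": one linear layer trained by back-propagation on an MSE loss.
X = rng.normal(size=(32, 8))           # placeholder training samples
w_true = rng.normal(size=(8,))
y = X @ w_true                          # placeholder targets

w = np.zeros(8)
learning_rate, num_iterations = 0.05, 2000   # two of the hyperparameters
for _ in range(num_iterations):
    err = X @ w - y                     # forward pass and residual
    grad = X.T @ err / len(X)           # back-propagated gradient
    w -= learning_rate * grad           # weight update
loss = float(np.mean((X @ w - y) ** 2))
```

In the full scheme these hyperparameters would themselves be adjusted from the observed loss and accuracy, as described above.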
It can be seen that the deep-learning network obtained after training on these samples can be used for eye open/close detection, liveness detection and face verification, effectively defending against attacks with photos of various materials, videos, masks and 3D dummy faces.
The embodiment of the present application gives a concrete scenario of the verification method. In this scenario, a user performs face enrollment and unlocking on an Android phone equipped with a 3D TOF device. Face enrollment is carried out first: following the phone's prompts, the user rotates the head up, down, left and right to complete acquisition of the face images, and the user's face template is obtained and stored in the face feature database. Afterwards, every time the user picks up the phone, unlocking happens imperceptibly the moment the screen lights up: the whole face-recognition unlocking process completes within tens of milliseconds.
It can be seen that the embodiment of the present application can unlock at any angle: even if the electronic device is not facing forward, it can unlock by comparing the face images at different angles in the acquired images against the template.
Referring to Fig. 4, Fig. 4 is a structural schematic diagram of an electronic device provided by an embodiment of the present application. As shown, it includes a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor.
Optionally, the electronic device is a verifying device, which may be an electronic device such as a smartphone, a tablet computer or a smart wearable device, and the programs include instructions for executing the following steps:
Obtaining the near-infrared image and depth image of a target object;
Performing face detection on the near-infrared image to obtain facial contour information;
Performing liveness detection according to the facial contour information and the depth image;
In response to the liveness detection passing, performing face verification on the target object according to the near-infrared image;
In response to the face verification passing, executing an unlock operation.
Optionally, in performing face detection on the near-infrared image to obtain the facial contour information, the program includes instructions for executing the following steps:
Adjusting the size of the near-infrared image to obtain an image pyramid;
Performing face feature extraction on the image pyramid and calibrating candidate boxes;
Optimizing the candidate boxes to obtain the facial contour information.
Optionally, after performing face detection on the near-infrared image to obtain the facial contour information, the program includes instructions for executing the following step:
Performing eye open/close detection according to the facial contour information.
Optionally, in performing eye open/close detection according to the facial contour information, the program includes instructions for executing the following steps:
Inputting the near-infrared image into a first module of a deep-learning network, the first module being used for eye open/close detection;
Obtaining the eye keypoint coordinates from the facial contour information;
Extracting eye features in the first module according to the eye keypoint coordinates;
Comparing the extracted eye features with first prestored features to perform the eye open/close detection, wherein the first prestored features are the open-eye features prestored in the face feature database.
Optionally, the deep-learning network further includes a second module used for the liveness detection, and in performing liveness detection according to the facial contour information and the depth image, the program includes instructions for executing the following steps:
Segmenting a first face region image from the depth image according to the facial contour information of the near-infrared image;
Inputting the first face region image into the second module, and extracting first face features of the first face region image with the second module;
Comparing the first face features with second prestored features to perform the liveness detection, wherein the second prestored features are the live-face feature information prestored in the face feature database.
Optionally, the deep-learning network further includes a third module used for the face verification, and in performing face verification on the target object according to the near-infrared image, the program includes instructions for executing the following steps:
Segmenting a second face region image from the near-infrared image according to the facial contour information;
Inputting the second face region image into the third module, extracting second face features of the second face region image with the third module, and comparing the second face features with third prestored features to judge whether they belong to the same person, wherein the third prestored features are the face features of the target object prestored in the face feature database.
Optionally, the deep-learning network is obtained by training, and regarding the training, the program includes instructions for executing the following steps:
Establishing the deep-learning network, wherein the deep-learning network includes the first module, the second module and the third module;
Choosing training samples, the training samples including eye open/close detection samples, liveness detection samples and face verification samples;
Extracting the first prestored features from the eye open/close detection samples with the first module, extracting the second prestored features from the liveness detection samples with the second module, extracting the third prestored features from the face verification samples with the third module, and storing the first prestored features, the second prestored features and the third prestored features in the face feature database;
Training the first module according to the first prestored features, the second module according to the second prestored features, and the third module according to the third prestored features;
Adjusting the hyperparameters of the deep-learning network according to the training results to obtain the trained deep-learning network.
The above describes the scheme of the embodiment of the present application mainly from the perspective of the method execution flow. It will be understood that, to realize the above functions, the terminal comprises corresponding hardware structures and/or software modules for executing each function. Those skilled in the art will readily appreciate that, for the exemplary units and algorithm steps described in conjunction with the embodiments presented herein, the application can be realized in hardware or in a combination of hardware and computer software. Whether a function is executed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Skilled professionals may use different methods to realize the described functions for each particular application, but such realization should not be considered beyond the scope of the present application.
The embodiment of the present application may divide the terminal into functional units according to the above method examples; for example, each function may be divided into its own functional unit, or two or more functions may be integrated into one processing unit. The integrated unit may be realized in the form of hardware or in the form of a software functional unit. It should be noted that the division into units in the embodiment of the present application is schematic and merely a logical functional division; other division manners are possible in actual realization.
Consistent with the above, referring to Fig. 5, Fig. 5 is a structural schematic diagram of a verifying device 500 provided by an embodiment of the present application. The verifying device includes a first acquisition unit 501, a face detection unit 502, a liveness detection unit 503, a face verification unit 504 and an unlocking unit 505, in which:
the first acquisition unit 501 is configured to obtain the near-infrared image and depth image of a target object;
the face detection unit 502 is configured to perform face detection on the near-infrared image to obtain facial contour information;
the liveness detection unit 503 is configured to perform liveness detection according to the facial contour information and the depth image;
the face verification unit 504 is configured to, in response to the liveness detection passing, perform face verification on the target object according to the near-infrared image;
the unlocking unit 505 is configured to, in response to the face verification passing, execute an unlock operation.
Optionally, in performing face detection on the near-infrared image to obtain the facial contour information, the face detection unit 502 is specifically configured to:
adjust the size of the near-infrared image to obtain an image pyramid;
perform face feature extraction on the image pyramid and calibrate candidate boxes;
optimize the candidate boxes to obtain the facial contour information.
Optionally, the verifying device further includes an eye open/close detection unit 506, and after face detection is performed on the near-infrared image to obtain the facial contour information, the eye open/close detection unit 506 is configured to:
perform eye open/close detection according to the facial contour information.
Optionally, in performing eye open/close detection according to the facial contour information, the eye open/close detection unit 506 is specifically configured to:
input the near-infrared image into a first module of a deep-learning network, the first module being used for eye open/close detection;
obtain the eye keypoint coordinates from the facial contour information;
extract eye features in the first module according to the eye keypoint coordinates;
compare the extracted eye features with first prestored features to perform the eye open/close detection, wherein the first prestored features are the open-eye features prestored in the face feature database.
Optionally, the deep-learning network further includes a second module used for the liveness detection, and in performing liveness detection according to the facial contour information and the depth image, the liveness detection unit 503 is specifically configured to:
segment a first face region image from the depth image according to the facial contour information of the near-infrared image;
input the first face region image into the second module, and extract first face features of the first face region image with the second module;
compare the first face features with second prestored features to perform the liveness detection, wherein the second prestored features are the live-face feature information prestored in the face feature database.
Optionally, the deep-learning network further includes a third module used for the face verification, and in performing face verification on the target object according to the near-infrared image, the face verification unit 504 is specifically configured to:
segment a second face region image from the near-infrared image according to the facial contour information;
input the second face region image into the third module, extract second face features of the second face region image with the third module, and compare the second face features with third prestored features to judge whether they belong to the same person, wherein the third prestored features are the face features of the target object prestored in the face feature database.
Optionally, the deep-learning network is obtained by training, the verifying device further includes a training unit 507, and the training unit 507 is configured to:
establish the deep-learning network, wherein the deep-learning network includes the first module, the second module and the third module;
choose training samples, the training samples including eye open/close detection samples, liveness detection samples and face verification samples;
extract the first prestored features from the eye open/close detection samples with the first module, extract the second prestored features from the liveness detection samples with the second module, extract the third prestored features from the face verification samples with the third module, and store the first prestored features, the second prestored features and the third prestored features in the face feature database;
train the first module according to the first prestored features, the second module according to the second prestored features, and the third module according to the third prestored features;
adjust the hyperparameters of the deep-learning network according to the training results to obtain the trained deep-learning network.
The above units may be used to execute the methods described in the above embodiments; for specifics, see the detailed descriptions in the embodiments, which are not repeated here.
In the embodiment of the present application, the near-infrared image and depth image of a target object are obtained; face detection is performed on the near-infrared image to obtain facial contour information; liveness detection is performed according to the facial contour information and the depth image; in response to the liveness detection passing, face verification is performed on the target object according to the near-infrared image; and in response to the face verification passing, an unlock operation is executed. With the provided scheme, face detection, liveness detection and face verification can be carried out on near-infrared and depth images, effectively solving the problems of face recognition in darkness or backlight and of hard-to-defend attacks such as dummy faces.
An embodiment of the present application also provides a computer-readable storage medium storing a computer program for electronic data interchange, the computer program causing a computer to execute some or all of the steps of any verification method recorded in the above method embodiments.
An embodiment of the present application also provides a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program causing a computer to execute some or all of the steps of any verification method recorded in the above method embodiments.
It should be noted that, for brevity, the foregoing method embodiments are described as a series of action combinations; however, those skilled in the art should understand that the application is not limited by the described order of actions, because according to the application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments and that the actions and modules involved are not necessarily required by the application. The description of each embodiment above has its own emphasis; for a part not detailed in one embodiment, reference can be made to the related descriptions of the other embodiments.
The above embodiments merely illustrate the technical solution of the application and do not limit it. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features equivalently replaced, and such modifications or replacements do not remove the essence of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the application.
Claims (10)
1. A verification method, characterized in that the method includes:
obtaining a near-infrared image and a depth image of a target object;
performing face detection on the near-infrared image to obtain facial contour information;
performing liveness detection according to the facial contour information and the depth image;
in response to the liveness detection passing, performing face verification on the target object according to the near-infrared image;
in response to the face verification passing, executing an unlock operation.
2. The method according to claim 1, characterized in that performing face detection on the near-infrared image to obtain facial contour information includes:
adjusting the size of the near-infrared image to obtain an image pyramid;
performing face feature extraction on the image pyramid and calibrating candidate boxes;
optimizing the candidate boxes to obtain the facial contour information.
3. The method according to claim 1, characterized in that, after performing face detection on the near-infrared image to obtain facial contour information, the method further includes:
performing eye open/close detection according to the facial contour information.
4. The method according to claim 3, characterized in that performing eye open/close detection according to the facial contour information includes:
inputting the near-infrared image into a first module of a deep-learning network, the first module being used for eye open/close detection;
obtaining eye keypoint coordinates from the facial contour information;
extracting eye features in the first module according to the eye keypoint coordinates;
comparing the extracted eye features with first prestored features to perform the eye open/close detection, wherein the first prestored features are open-eye features prestored in a face feature database.
5. The method according to claim 1, characterized in that the deep learning network further comprises a second module, the second module being used for the liveness detection, and the performing liveness detection according to the facial contour information and the depth image comprises:
segmenting a first face region image from the depth image according to the facial contour information of the near-infrared image;
inputting the first face region image into the second module, and extracting a first face feature of the first face region image by the second module; and
comparing the first face feature with a second pre-stored feature to perform liveness detection, wherein the second pre-stored feature is live-face feature information pre-stored in the face feature database.
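The key idea in claim 5 is that the face region is cut out of the *depth* image using the contour found on the NIR image, and the depth data then distinguishes a real face from a flat spoof. The sketch below assumes the two cameras are registered (so box coordinates transfer directly), and replaces the patent's learned second module with a simple depth-relief heuristic; the 5 mm threshold is an illustrative assumption.

```python
def crop_face_region(depth_map, box):
    """Claim 5's segmentation step: cut the face region out of the depth
    image using the (x, y, w, h) contour box found on the NIR image."""
    x, y, w, h = box
    return [row[x:x + w] for row in depth_map[y:y + h]]

def is_live(face_depth, min_relief_mm=5.0):
    """Heuristic stand-in for the second module: a printed photo or screen
    has near-constant depth, while a real face shows relief (nose, brow)."""
    values = [v for row in face_depth for v in row]
    relief = max(values) - min(values)
    return relief >= min_relief_mm
```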
6. The method according to claim 1, characterized in that the deep learning network further comprises a third module, the third module being used for the face verification, and the performing face verification on the target object according to the near-infrared image comprises:
segmenting a second face region image from the near-infrared image according to the facial contour information;
inputting the second face region image into the third module, and extracting a second face feature of the second face region image by the third module; and
comparing the second face feature with a third pre-stored feature to judge whether they belong to the same person, wherein the third pre-stored feature is a face feature of the target object pre-stored in the face feature database.
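Claim 6's final comparison can be sketched as a nearest-neighbour lookup against the pre-stored features in the face feature database. L2 distance and the `threshold=0.6` value (a common FaceNet-style cutoff) are assumptions for illustration; the patent only requires that the features be compared to decide whether they belong to the same person.

```python
def verify_identity(face_feature, feature_db, threshold=0.6):
    """Return the matching person id, or None if no pre-stored feature
    is within the distance threshold (i.e. verification fails)."""
    def l2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best_id, best_dist = None, float("inf")
    for person_id, stored in feature_db.items():
        dist = l2(face_feature, stored)
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return best_id if best_dist <= threshold else None
```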
7. The method according to claim 4, characterized in that the deep learning network is obtained by training, the training comprising:
establishing a deep learning network, wherein the deep learning network comprises the first module, the second module, and the third module;
selecting training samples, the training samples comprising eye open/closed detection samples, liveness detection samples, and face verification samples;
extracting the first pre-stored feature from the eye open/closed detection samples by the first module, extracting the second pre-stored feature from the liveness detection samples by the second module, extracting the third pre-stored feature from the face verification samples by the third module, and storing the first pre-stored feature, the second pre-stored feature, and the third pre-stored feature in the face feature database;
training the first module according to the first pre-stored feature, training the second module according to the second pre-stored feature, and training the third module according to the third pre-stored feature; and
adjusting hyperparameters of the deep learning network according to the training results to obtain the trained deep learning network.
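The training procedure of claim 7 pairs each module with its own sample set: features are extracted first, stored in the database, and each module is then trained against its own stored features. A skeletal sketch, in which the `extract`/`fit` callables are hypothetical stand-ins for the deep-learning internals the patent leaves unspecified:

```python
def train_network(modules, samples, feature_db, epochs=3):
    """Claim 7 sketch: per-module feature extraction into the face
    feature database, then per-module training over several epochs.

    modules: {name: {"extract": fn(sample)->feature, "fit": fn(features)->loss}}
    samples: {name: [sample, ...]} with the same keys as modules.
    Returns the per-epoch loss history (hyperparameter adjustment from
    this history is left out of the sketch).
    """
    for name, module in modules.items():
        feature_db[name] = [module["extract"](s) for s in samples[name]]
    history = []
    for _ in range(epochs):
        losses = {name: module["fit"](feature_db[name])
                  for name, module in modules.items()}
        history.append(losses)
    return history
```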
8. A verification device, characterized in that the verification device comprises:
a first obtaining unit, configured to obtain a near-infrared image and a depth image of a target object;
a face detection unit, configured to perform face detection on the near-infrared image to obtain facial contour information;
a liveness detection unit, configured to perform liveness detection according to the facial contour information and the depth image;
a face verification unit, configured to perform face verification on the target object according to the near-infrared image in response to the liveness detection passing; and
an unlocking unit, configured to execute an unlock operation in response to the face verification passing.
9. The verification device according to claim 8, characterized in that, in performing face detection on the near-infrared image to obtain facial contour information, the face detection unit is specifically configured to:
adjust the size of the near-infrared image to obtain an image pyramid;
perform face feature extraction on the image pyramid and calibrate a bounding box; and
optimize the bounding box to obtain the facial contour information.
10. The verification device according to claim 8, characterized in that the verification device further comprises an eye open/closed detection unit, and after face detection is performed on the near-infrared image to obtain facial contour information, the eye open/closed detection unit is configured to: perform eye open/closed detection according to the facial contour information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910568579.4A CN110287900B (en) | 2019-06-27 | 2019-06-27 | Verification method and verification device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110287900A (en) | 2019-09-27 |
CN110287900B (en) | 2023-08-01 |
Family
ID=68019333
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910568579.4A Active CN110287900B (en) | 2019-06-27 | 2019-06-27 | Verification method and verification device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110287900B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110929286A (en) * | 2019-11-20 | 2020-03-27 | 四川虹美智能科技有限公司 | Method for dynamically detecting operation authorization and intelligent equipment |
CN111160309A (en) * | 2019-12-31 | 2020-05-15 | 深圳云天励飞技术有限公司 | Image processing method and related equipment |
CN111582197A (en) * | 2020-05-07 | 2020-08-25 | 贵州省邮电规划设计院有限公司 | Living body based on near infrared and 3D camera shooting technology and face recognition system |
CN112861568A (en) * | 2019-11-12 | 2021-05-28 | Oppo广东移动通信有限公司 | Authentication method and device, electronic equipment and computer readable storage medium |
CN113128320A (en) * | 2020-01-16 | 2021-07-16 | 浙江舜宇智能光学技术有限公司 | Face living body detection method and device based on TOF camera and electronic equipment |
CN113313856A (en) * | 2020-02-10 | 2021-08-27 | 深圳市光鉴科技有限公司 | Door lock system with 3D face recognition function and using method |
CN113673286A (en) * | 2020-05-15 | 2021-11-19 | 深圳市光鉴科技有限公司 | Depth reconstruction method, system, device and medium based on target area |
CN113674230A (en) * | 2021-08-10 | 2021-11-19 | 深圳市捷顺科技实业股份有限公司 | Method and device for detecting key points of indoor backlight face |
CN115100714A (en) * | 2022-06-27 | 2022-09-23 | 平安银行股份有限公司 | Living body detection method and device based on face image and server |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006260397A (en) * | 2005-03-18 | 2006-09-28 | Konica Minolta Holdings Inc | Eye opening degree estimating device |
US20070185946A1 (en) * | 2004-02-17 | 2007-08-09 | Ronen Basri | Method and apparatus for matching portions of input images |
CN101159016A (en) * | 2007-11-26 | 2008-04-09 | 清华大学 | Living body detecting method and system based on human face physiologic moving |
US20090060383A1 (en) * | 2007-08-27 | 2009-03-05 | Arcsoft, Inc. | Method of restoring closed-eye portrait photo |
CN103440479A (en) * | 2013-08-29 | 2013-12-11 | 湖北微模式科技发展有限公司 | Method and system for detecting living body human face |
CN106557723A (en) * | 2015-09-25 | 2017-04-05 | 北京市商汤科技开发有限公司 | A kind of system for face identity authentication with interactive In vivo detection and its method |
CN106997452A (en) * | 2016-01-26 | 2017-08-01 | 北京市商汤科技开发有限公司 | Live body verification method and device |
CN107609383A (en) * | 2017-10-26 | 2018-01-19 | 深圳奥比中光科技有限公司 | 3D face identity authentications and device |
CN107766840A (en) * | 2017-11-09 | 2018-03-06 | 杭州有盾网络科技有限公司 | A kind of method, apparatus of blink detection, equipment and computer-readable recording medium |
US20180211096A1 (en) * | 2015-06-30 | 2018-07-26 | Beijing Kuangshi Technology Co., Ltd. | Living-body detection method and device and computer program product |
CN108491772A (en) * | 2018-03-09 | 2018-09-04 | 天津港(集团)有限公司 | A kind of face recognition algorithms and face identification device |
CN108764069A (en) * | 2018-05-10 | 2018-11-06 | 北京市商汤科技开发有限公司 | Biopsy method and device |
CN108805024A (en) * | 2018-04-28 | 2018-11-13 | Oppo广东移动通信有限公司 | Image processing method, device, computer readable storage medium and electronic equipment |
CN109034102A (en) * | 2018-08-14 | 2018-12-18 | 腾讯科技(深圳)有限公司 | Human face in-vivo detection method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110287900B (en) | 2023-08-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110287900A (en) | Verification method and verifying device | |
US11238270B2 (en) | 3D face identity authentication method and apparatus | |
CN107609383B (en) | 3D face identity authentication method and device | |
CN107633165B (en) | 3D face identity authentication method and device | |
CN108319953B (en) | Occlusion detection method and device, electronic equipment and the storage medium of target object | |
KR102483642B1 (en) | Method and apparatus for liveness test | |
CN106778664B (en) | Iris image iris area segmentation method and device | |
US20160019421A1 (en) | Multispectral eye analysis for identity authentication | |
US20160019420A1 (en) | Multispectral eye analysis for identity authentication | |
US20170091550A1 (en) | Multispectral eye analysis for identity authentication | |
CN112651348B (en) | Identity authentication method and device and storage medium | |
CN105874472A (en) | Multi-band biometric camera system having iris color recognition | |
CN110210276A (en) | A kind of motion track acquisition methods and its equipment, storage medium, terminal | |
JP2018508888A (en) | System and method for performing fingerprint-based user authentication using an image captured using a mobile device | |
CN109858439A (en) | A kind of biopsy method and device based on face | |
CN110326001A (en) | The system and method for executing the user authentication based on fingerprint using the image captured using mobile device | |
CN106529494A (en) | Human face recognition method based on multi-camera model | |
CN109086724A (en) | A kind of method for detecting human face and storage medium of acceleration | |
CN107844742A (en) | Facial image glasses minimizing technology, device and storage medium | |
CN105915804A (en) | Video stitching method and system | |
CN111353404A (en) | Face recognition method, device and equipment | |
Chen et al. | Real-time eye localization, blink detection, and gaze estimation system without infrared illumination | |
EP3872753B1 (en) | Wrinkle detection method and terminal device | |
CN106156739B (en) | A kind of certificate photo ear detection and extracting method based on face mask analysis | |
US11354940B2 (en) | Method and apparatus for foreground geometry and topology based face anti-spoofing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||