CN109284596A - Face unlocking method and device - Google Patents
Face unlocking method and device
- Publication number
- CN109284596A (application CN201811318452.9A)
- Authority
- CN
- China
- Prior art keywords
- opening degree
- feature point
- eyes
- facial image
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Security & Cryptography (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Ophthalmology & Optometry (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The present invention discloses a face unlocking method and device. The face unlocking method comprises the following steps: obtaining a first facial image; locating a plurality of feature points around the eyes in the first facial image; calculating a first opening degree of the eyes from the plurality of feature points, and judging whether the first opening degree of the eyes is greater than a preset second opening degree; upon obtaining the result that the first opening degree is greater than the second opening degree, detecting whether the human body corresponding to the first facial image is a live body; upon obtaining the result that the human body is a live body, comparing whether the features of the first facial image are identical to the features of a pre-stored second facial image; and upon obtaining the result that they are identical, activating an unlock control instruction to unlock the electronic device. The present invention avoids misjudging the state of the eyes when the light is poor, the eyes are small, or the user is squinting.
Description
Technical field
The present invention relates to the field of unlocking electronic devices, and in particular to a face unlocking method and device.
Background technique
Face information is a biological characteristic unique to each person. With the rapid development and wide application of face recognition technology, more and more manufacturers of electronic devices (mobile phones, tablet computers, etc.) apply face recognition to unlocking their devices. However, existing face unlocking methods have difficulty accurately detecting the open or closed state of the eyes when the light is poor, the eyes are small, or the user is squinting.
Summary of the invention
The present invention provides a face unlocking method and device, intended to solve the prior-art problem that the open or closed state of the eyes is difficult to detect accurately when the light is poor, the eyes are small, or the user is squinting.
The present invention proposes a face unlocking method, comprising the following steps:
Obtaining a first facial image;
Locating a plurality of feature points around the eyes in the first facial image;
Calculating a first opening degree of the eyes from the plurality of feature points, and judging whether the first opening degree of the eyes is greater than a preset second opening degree;
Upon obtaining the result that the first opening degree is greater than the second opening degree, detecting whether the human body corresponding to the first facial image is a live body;
Upon obtaining the result that the human body is a live body, comparing whether the features of the first facial image are identical to the features of a pre-stored second facial image;
Upon obtaining the result that the features of the first facial image are identical to the features of the second facial image, activating an unlock control instruction to unlock the electronic device.
Further, the step of locating a plurality of feature points around the eyes in the first facial image comprises:
Locating a first feature point and a second feature point on the upper eyelid of the first eye of the first facial image, locating a third feature point and a fourth feature point on the lower eyelid of the first eye, locating a fifth feature point at the left corner of the first eye, and locating a sixth feature point at the right corner of the first eye.
Further, the step of calculating the first opening degree of the eyes from the plurality of feature points and judging whether the first opening degree of the eyes is greater than the preset second opening degree comprises:
Calculating the distance between the first feature point and the third feature point, denoted L1;
Calculating the distance between the second feature point and the fourth feature point, denoted L2;
Calculating the distance between the fifth feature point and the sixth feature point, denoted L3;
Calculating the first opening degree of the first eye, denoted Q1, where Q1 = (L1 + L2)/(2 × L3), the second opening degree being denoted Q2;
Judging whether Q1 is greater than or equal to Q2;
When Q1 is greater than or equal to Q2, generating the result "the value of the first opening degree is greater than the second opening degree".
Further, the step of locating a plurality of feature points around the eyes in the first facial image comprises:
Locating a seventh feature point and an eighth feature point on the upper eyelid of the second eye of the first facial image, locating a ninth feature point and a tenth feature point on the lower eyelid of the second eye, locating an eleventh feature point at the left corner of the second eye, and locating a twelfth feature point at the right corner of the second eye.
Further, the step of calculating the first opening degree of the eyes from the plurality of feature points and judging whether the first opening degree of the eyes is greater than the preset second opening degree comprises:
Calculating the distance between the seventh feature point and the ninth feature point, denoted R1;
Calculating the distance between the eighth feature point and the tenth feature point, denoted R2;
Calculating the distance between the eleventh feature point and the twelfth feature point, denoted R3;
Calculating the first opening degree of the second eye, denoted Q3, where Q3 = (R1 + R2)/(2 × R3), the second opening degree being denoted Q2;
Judging whether Q3 is greater than or equal to Q2;
When Q3 is greater than or equal to Q2, generating the result "the value of the first opening degree is greater than the second opening degree".
Further, the "locating" is performed by a trained convolutional neural network, which performs feature extraction and processing on the first facial image and outputs the plurality of feature points.
The present invention also proposes a face unlocking device, comprising:
An acquiring unit, for obtaining a first facial image;
A positioning unit, for locating a plurality of feature points around the eyes in the first facial image;
A computing unit, for calculating a first opening degree of the eyes from the plurality of feature points and judging whether the first opening degree of the eyes is greater than a preset second opening degree;
A detection unit, for detecting, when the first opening degree is greater than the second opening degree, whether the human body corresponding to the first facial image is a live body;
A comparing unit, for comparing, when the human body is a live body, whether the features of the first facial image are identical to the features of a pre-stored second facial image;
An activating unit, for activating, when the features of the first facial image are identical to the features of the second facial image, an unlock control instruction to unlock the electronic device.
Further, the positioning unit comprises:
A first locating module, for locating a first feature point and a second feature point on the upper eyelid of the first eye of the first facial image, locating a third feature point and a fourth feature point on the lower eyelid of the first eye, locating a fifth feature point at the left corner of the first eye, and locating a sixth feature point at the right corner of the first eye.
Further, the computing unit comprises:
A first computing module, for calculating the distance between the first feature point and the third feature point, denoted L1;
A second computing module, for calculating the distance between the second feature point and the fourth feature point, denoted L2;
A third computing module, for calculating the distance between the fifth feature point and the sixth feature point, denoted L3;
A fourth computing module, for calculating the first opening degree of the first eye, denoted Q1, where Q1 = (L1 + L2)/(2 × L3), the second opening degree being denoted Q2;
A first judgment module, for judging whether Q1 is greater than or equal to Q2;
A first generating module, for generating, when Q1 is greater than or equal to Q2, the result "the value of the first opening degree is greater than the second opening degree".
Further, the positioning unit comprises:
A second locating module, for locating a seventh feature point and an eighth feature point on the upper eyelid of the second eye of the first facial image, locating a ninth feature point and a tenth feature point on the lower eyelid of the second eye, locating an eleventh feature point at the left corner of the second eye, and locating a twelfth feature point at the right corner of the second eye.
Beneficial effects of the present invention: the second opening degree is set as a threshold; when the first opening degree of the eyes in the first facial image is greater than the second opening degree, it can be concluded that the eyes of the human body corresponding to the first facial image are open. By comparing opening degrees, the state of the human eyes can be detected more accurately, avoiding misjudging the state of the eyes when the light is poor, the eyes are small, or the user is squinting.
Regression locating of the features around a person's eyes with a trained convolutional neural network yields accurate locations for the plurality of feature points, makes the calculated first opening degree more accurate, and enhances the stability and accuracy of detecting the open or closed state of the eyes. Moreover, a convolutional neural network based on deep learning gives the detection better robustness, further avoiding misjudging the state of the eyes when the light is poor, the eyes are small, or the user is squinting.
Detailed description of the invention
Fig. 1 is a basic flow chart of a face unlocking method provided by an embodiment of the present invention;
Fig. 2 is a detailed flow chart of another embodiment of the face unlocking method provided by an embodiment of the present invention;
Fig. 3 is a detailed flow chart of a further embodiment of the face unlocking method provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of an eye region image provided by an embodiment of the present invention;
Fig. 5 is a basic structural block diagram of a face unlocking device provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Figs. 1-4, an embodiment of a face unlocking method of the present invention is shown, comprising:
S10, obtaining a first facial image.
S20, locating a plurality of feature points around the eyes in the first facial image.
S30, calculating a first opening degree of the eyes from the plurality of feature points, and judging whether the first opening degree of the eyes is greater than a preset second opening degree.
S40, upon obtaining the result that the first opening degree is greater than the second opening degree, detecting whether the human body corresponding to the first facial image is a live body.
S50, upon obtaining the result that the human body is a live body, comparing whether the features of the first facial image are identical to the features of a pre-stored second facial image.
S60, upon obtaining the result that the features of the first facial image are identical to the features of the second facial image, activating an unlock control instruction to unlock the electronic device.
In this embodiment, the subject executing the face unlocking method is an electronic device such as a mobile phone, tablet computer, or computer.
In this embodiment, the second opening degree is set as a threshold. When the first opening degree of the eyes in the first facial image is greater than the second opening degree, it can be concluded that the eyes of the human body corresponding to the first facial image are open. Comparing opening degrees detects the state of the human eyes more accurately and avoids misjudging the state of the eyes when the light is poor, the eyes are small, or the user is squinting.
In step S10 above, the first facial image is the facial image obtained when an electronic device such as a mobile phone or tablet computer is being unlocked.
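The overall control flow of steps S10-S60 can be sketched as follows. Every helper passed in is a hypothetical placeholder, not an API defined by this patent; a real device would back them with a camera, a keypoint network, a liveness detector, and a face-feature comparator.

```python
# Minimal sketch of the S10-S60 unlock flow. All helper callables are
# hypothetical placeholders for the stages described in the embodiment.

def unlock(capture_image, opening_degree, is_live_body, features_match, q2):
    """Return True only when every stage of the flow succeeds."""
    image = capture_image()          # S10: obtain the first facial image
    q1 = opening_degree(image)       # S20/S30: locate points, compute Q1
    if q1 < q2:                      # eyes judged closed: unlock fails
        return False
    if not is_live_body(image):      # S40: liveness check
        return False
    if not features_match(image):    # S50: compare with stored second image
        return False
    return True                      # S60: activate the unlock instruction

# Stubbed demonstration: an open eye on a live, matching face unlocks.
result = unlock(
    capture_image=lambda: "frame",
    opening_degree=lambda img: 0.35,
    is_live_body=lambda img: True,
    features_match=lambda img: True,
    q2=0.2,
)
```

Note the early returns: each failed stage short-circuits the flow, matching the "unlock failure" outcomes described in the embodiments below.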
In an alternative embodiment, step S20, locating a plurality of feature points around the eyes in the first facial image, comprises:
Locating a first feature point 11 and a second feature point 12 on the upper eyelid of the first eye of the first facial image, locating a third feature point 13 and a fourth feature point 14 on the lower eyelid of the first eye, locating a fifth feature point 15 at the left corner of the first eye, and locating a sixth feature point 16 at the right corner of the first eye.
Referring again to Fig. 4, in this embodiment the first eye may be the left eye. The first feature point 11 and second feature point 12 on the upper eyelid of the first eye may be located on the rim of the upper eyelid, close to the lower eyelid; in other embodiments, the first feature point 11 and second feature point 12 may also be located elsewhere on the upper eyelid of the first eye, for example in the middle of the outer surface of the upper eyelid. The third feature point 13 and fourth feature point 14 on the lower eyelid of the first eye may be located on the rim of the lower eyelid, close to the upper eyelid; in other embodiments, the third feature point 13 and fourth feature point 14 may also be located elsewhere on the lower eyelid of the first eye, for example in the middle of the outer surface of the lower eyelid.
In this embodiment, step S30, calculating the first opening degree of the eyes from the plurality of feature points and judging whether the first opening degree of the eyes is greater than the preset second opening degree, comprises:
S301, calculating the distance between the first feature point 11 and the third feature point 13, denoted L1.
S302, calculating the distance between the second feature point 12 and the fourth feature point 14, denoted L2.
S303, calculating the distance between the fifth feature point 15 and the sixth feature point 16, denoted L3.
S304, calculating the first opening degree of the first eye, denoted Q1, where Q1 = (L1 + L2)/(2 × L3); the second opening degree is denoted Q2.
S305, judging whether Q1 is greater than or equal to Q2.
S306, when Q1 is greater than or equal to Q2, generating the result "the value of the first opening degree is greater than the second opening degree".
In step S305 above, if Q1 is less than Q2, the first eye of the first facial image is judged to be closed, and the unlock fails.
In this embodiment, as long as the first opening degree of one eye is greater than or equal to the second opening degree, the human body corresponding to the first facial image can be judged to have its eyes open.
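The distance and opening-degree computation of S301-S306 can be sketched as below. The opening degree is written in the eye-aspect-ratio form Q1 = (L1 + L2)/(2 × L3), the form consistent with the open-when-Q1 ≥ Q2 judgment of S305 (the two eyelid gaps L1, L2 shrink toward zero as the eye closes while the corner-to-corner width L3 stays roughly constant). The threshold value Q2 = 0.2 is purely illustrative, not a value given in the patent.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) feature points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def first_eye_opening_degree(p11, p12, p13, p14, p15, p16):
    """Opening degree Q1 of the first eye from the six feature points."""
    l1 = dist(p11, p13)  # S301: upper-eyelid point 11 to lower-eyelid point 13
    l2 = dist(p12, p14)  # S302: upper-eyelid point 12 to lower-eyelid point 14
    l3 = dist(p15, p16)  # S303: left eye corner 15 to right eye corner 16
    return (l1 + l2) / (2 * l3)   # S304

Q2 = 0.2  # illustrative preset second opening degree (threshold)

# Wide-open eye: tall eyelid gaps relative to the corner-to-corner width.
q_open = first_eye_opening_degree((3, 0), (5, 0), (3, 4), (5, 4), (0, 2), (8, 2))
# Nearly closed eye: same corners, eyelid points almost touching.
q_closed = first_eye_opening_degree((3, 1.9), (5, 1.9), (3, 2.1), (5, 2.1), (0, 2), (8, 2))
```

In the open case Q1 = (4 + 4)/(2 × 8) = 0.5 ≥ Q2, so S306 generates the "opening degree greater than threshold" result; in the nearly closed case Q1 ≈ 0.025 < Q2 and the unlock fails.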
In an alternative embodiment, step S20, locating a plurality of feature points around the eyes in the first facial image, comprises:
Locating a seventh feature point 21 and an eighth feature point 22 on the upper eyelid of the second eye of the first facial image, locating a ninth feature point 23 and a tenth feature point 24 on the lower eyelid of the second eye, locating an eleventh feature point 25 at the left corner of the second eye, and locating a twelfth feature point 26 at the right corner of the second eye.
Referring again to Fig. 4, in this embodiment the second eye may be the right eye. The seventh feature point 21 and eighth feature point 22 on the upper eyelid of the second eye may be located on the rim of the upper eyelid, close to the lower eyelid; in other embodiments, the seventh feature point 21 and eighth feature point 22 may also be located elsewhere on the upper eyelid of the second eye, for example in the middle of the outer surface of the upper eyelid. The ninth feature point 23 and tenth feature point 24 on the lower eyelid of the second eye may be located on the rim of the lower eyelid, close to the upper eyelid; in other embodiments, the ninth feature point 23 and tenth feature point 24 may also be located elsewhere on the lower eyelid of the second eye, for example in the middle of the outer surface of the lower eyelid.
In this embodiment, step S30, calculating the first opening degree of the eyes from the plurality of feature points and judging whether the first opening degree of the eyes is greater than the preset second opening degree, comprises:
S311, calculating the distance between the seventh feature point 21 and the ninth feature point 23, denoted R1.
S312, calculating the distance between the eighth feature point 22 and the tenth feature point 24, denoted R2.
S313, calculating the distance between the eleventh feature point 25 and the twelfth feature point 26, denoted R3.
S314, calculating the first opening degree of the second eye, denoted Q3, where Q3 = (R1 + R2)/(2 × R3); the second opening degree is denoted Q2.
S315, judging whether Q3 is greater than or equal to Q2.
S316, when Q3 is greater than or equal to Q2, generating the result "the value of the first opening degree is greater than the second opening degree".
In step S315 above, if Q3 is less than Q2, the second eye of the first facial image is judged to be closed, and the unlock fails.
In this embodiment, as long as the first opening degree of one eye is greater than or equal to the second opening degree, the human body corresponding to the first facial image can be judged to have its eyes open.
In an alternative embodiment, the "locating" in step S20, locating a plurality of feature points around the eyes in the first facial image, is performed by a trained convolutional neural network, which performs feature extraction and processing on the first facial image and outputs the plurality of feature points. For example, in the above embodiments, after the first facial image undergoes feature extraction and processing by the convolutional neural network, the accurate coordinates of the first feature point 11, second feature point 12, third feature point 13, fourth feature point 14, fifth feature point 15 and sixth feature point 16 are obtained, or the accurate coordinates of the seventh feature point 21, eighth feature point 22, ninth feature point 23, tenth feature point 24, eleventh feature point 25 and twelfth feature point 26 are obtained.
In this embodiment, the convolutional neural network may include interconnected convolutional layers and fully connected layers, wherein the convolutional layers comprise a first, a second, a third and a fourth convolutional layer connected in sequence, and the fully connected layers comprise a first, a second and a third fully connected layer connected in sequence, so that the obtained feature point coordinates are more accurate.
In this embodiment, according to a face image data set obtained in advance, face detection is performed on the face image set; the detected faces are then aligned with a facial keypoint detector, which locates 68 facial feature points, and the eye region image (as shown in Fig. 4) is cropped out. The picture size is set so that all the cropped eye region images are kept consistent in size. Finally, for each eye region image, the coordinates of the plurality of feature points around the eyes are recorded, and the data set thus generated is used to train the convolutional neural network. The plurality of feature point coordinates may be the coordinates of the first feature point 11, second feature point 12, third feature point 13, fourth feature point 14, fifth feature point 15, sixth feature point 16, seventh feature point 21, eighth feature point 22, ninth feature point 23, tenth feature point 24, eleventh feature point 25 and twelfth feature point 26 in the above embodiments.
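The eye-region cropping step above can be sketched as follows. The patent does not name its 68-point detector; the indices below assume the common iBUG/dlib 68-point convention, where points 36-41 outline the left eye and 42-47 the right, which is an assumption for illustration only.

```python
def eye_crop_box(landmarks, eye_indices, margin=0.4):
    """Padded bounding box (x1, y1, x2, y2) around one eye, computed from a
    68-point landmark list [(x, y), ...]. `margin` pads the box by a
    fraction of the eye's width/height so the whole eye region is kept."""
    xs = [landmarks[i][0] for i in eye_indices]
    ys = [landmarks[i][1] for i in eye_indices]
    pad_x = margin * (max(xs) - min(xs))
    pad_y = margin * (max(ys) - min(ys))
    return (min(xs) - pad_x, min(ys) - pad_y,
            max(xs) + pad_x, max(ys) + pad_y)

# Assumed iBUG/dlib 68-point eye indices (0-based):
LEFT_EYE, RIGHT_EYE = range(36, 42), range(42, 48)
```

After cropping, every box would be resized to one fixed picture size so that the training images stay consistent, as the embodiment requires.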
In this embodiment, regression locating is performed on the features around a person's eyes using the trained convolutional neural network, which yields the accurate locations of the plurality of feature points, makes the calculated first opening degree more accurate, and enhances the stability and accuracy of detecting the open or closed state of the eyes. Moreover, a convolutional neural network based on deep learning gives the detection better robustness, further avoiding misjudging the state of the eyes when the light is poor, the eyes are small, or the user is squinting.
In step S40 above, the method for detecting whether the human body corresponding to the first facial image is a live body may be to obtain multiple first facial images over a period of time and compare whether the opening degree of the same eye is identical across them. If the opening degree of the same eye differs, the human body corresponding to the first facial image is judged to be a live body; otherwise it is a fake, and the unlock fails. It should be appreciated that in other embodiments, whether the human body corresponding to the first facial image is a live body may also be judged by other conventional methods.
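The blink-based liveness check described above can be sketched as follows. The `min_variation` tolerance for "the opening degrees differ" is an illustrative value, not one given in the patent.

```python
def is_live_body(opening_degrees, min_variation=0.05):
    """Live-body test from a sequence of opening degrees of the same eye
    measured over a period of time: a live person blinks, so the values
    vary; a photograph held up to the camera yields near-constant values."""
    return max(opening_degrees) - min(opening_degrees) > min_variation

# A blink appears as a dip in the opening degree across frames:
live = is_live_body([0.31, 0.30, 0.05, 0.29])
# A printed photo produces essentially the same value in every frame:
fake = is_live_body([0.30, 0.30, 0.31, 0.30])
```

In practice the tolerance guards against small measurement noise being mistaken for a blink.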
In step S50 above, the method for comparing whether the features of the first facial image are identical to the features of the pre-stored second facial image may be to compare whether the similarity between the first facial image and the second facial image reaches 90 percent or more. If so, the features of the first facial image are judged identical to the features of the second facial image, and the first facial image and the second facial image correspond to the same person. Otherwise, the features of the first facial image and the second facial image are judged not identical, the two images do not correspond to the same person, and the unlock fails. It should be appreciated that in other embodiments, whether the features of the first facial image and the second facial image are identical may also be judged by other conventional methods.
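The 90 percent similarity test can be sketched as below. Cosine similarity over face-feature vectors is one common measure; the patent does not fix how the similarity is computed, so this choice is an assumption for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def faces_match(live_feat, stored_feat, threshold=0.90):
    """S50 comparison: features count as identical when the similarity
    reaches 90 percent or more."""
    return cosine_similarity(live_feat, stored_feat) >= threshold

same_person = faces_match([0.2, 0.9, 0.4], [0.2, 0.9, 0.4])   # identical vectors
different = faces_match([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])     # orthogonal vectors
```

Identical vectors score 1.0 and pass the 0.90 threshold; orthogonal vectors score 0.0 and fail, so the unlock would be refused.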
Referring to Fig. 5, an embodiment of a face unlocking device of the present invention is shown, comprising:
An acquiring unit 100, for obtaining a first facial image.
A positioning unit 200, for locating a plurality of feature points around the eyes in the first facial image.
A computing unit 300, for calculating a first opening degree of the eyes from the plurality of feature points and judging whether the first opening degree of the eyes is greater than a preset second opening degree.
A detection unit 400, for detecting, when the first opening degree is greater than the second opening degree, whether the human body corresponding to the first facial image is a live body.
A comparing unit 500, for comparing, when the human body is a live body, whether the features of the first facial image are identical to the features of a pre-stored second facial image.
An activating unit 600, for activating, when the features of the first facial image are identical to the features of the second facial image, an unlock control instruction to unlock the electronic device.
In this embodiment, the second opening degree is set as a threshold. When the first opening degree of the eyes in the first facial image is greater than the second opening degree, it can be concluded that the eyes of the human body corresponding to the first facial image are open. Comparing opening degrees detects the state of the human eyes more accurately and avoids misjudging the state of the eyes when the light is poor, the eyes are small, or the user is squinting.
In this embodiment, the first facial image is the facial image obtained when an electronic device such as a mobile phone or tablet computer is being unlocked.
In an alternative embodiment, the positioning unit 200 includes a first locating module for locating a first feature point 11 and a second feature point 12 on the upper eyelid of the first eye of the first facial image, locating a third feature point 13 and a fourth feature point 14 on the lower eyelid of the first eye, locating a fifth feature point 15 at the left corner of the first eye, and locating a sixth feature point 16 at the right corner of the first eye.
Referring again to Fig. 4, in this embodiment the first eye may be the left eye. The first feature point 11 and second feature point 12 on the upper eyelid of the first eye may be located on the rim of the upper eyelid, close to the lower eyelid; in other embodiments they may also be located elsewhere on the upper eyelid, for example in the middle of its outer surface. The third feature point 13 and fourth feature point 14 on the lower eyelid of the first eye may be located on the rim of the lower eyelid, close to the upper eyelid; in other embodiments they may also be located elsewhere on the lower eyelid, for example in the middle of its outer surface.
In this embodiment, the computing unit 300 includes a first computing module, a second computing module, a third computing module, a fourth computing module, a first judgment module and a first generating module. The first computing module calculates the distance between the first feature point 11 and the third feature point 13, denoted L1. The second computing module calculates the distance between the second feature point 12 and the fourth feature point 14, denoted L2. The third computing module calculates the distance between the fifth feature point 15 and the sixth feature point 16, denoted L3. The fourth computing module calculates the first opening degree of the first eye, denoted Q1, where Q1 = (L1 + L2)/(2 × L3); the second opening degree is denoted Q2. The first judgment module judges whether Q1 is greater than or equal to Q2. The first generating module generates, when Q1 is greater than or equal to Q2, the result "the value of the first opening degree is greater than the second opening degree".
In this embodiment, if Q1 is less than Q2, the first judgment module judges the first eye of the first facial image to be closed, and the unlock fails.
In this embodiment, as long as the first opening degree of one eye is greater than or equal to the second opening degree, the human body corresponding to the first facial image can be judged to have its eyes open.
In an alternative embodiment, the positioning unit 200 includes a second locating module for locating a seventh feature point 21 and an eighth feature point 22 on the upper eyelid of the second eye of the first facial image, locating a ninth feature point 23 and a tenth feature point 24 on the lower eyelid of the second eye, locating an eleventh feature point 25 at the left corner of the second eye, and locating a twelfth feature point 26 at the right corner of the second eye.
Referring again to Fig. 4, in this embodiment the second eye may be the right eye. The seventh feature point 21 and eighth feature point 22 on the upper eyelid of the second eye may be located on the rim of the upper eyelid, close to the lower eyelid; in other embodiments they may also be located elsewhere on the upper eyelid, for example in the middle of its outer surface. The ninth feature point 23 and tenth feature point 24 on the lower eyelid of the second eye may be located on the rim of the lower eyelid, close to the upper eyelid; in other embodiments they may also be located elsewhere on the lower eyelid, for example in the middle of its outer surface.
In the present embodiment, the computing unit 300 includes a fifth computing module, a sixth computing module, a seventh computing module, an eighth computing module, a second judgment module and a second generating module. The fifth computing module is used to calculate the distance between the seventh feature point 21 and the ninth feature point 23, denoted R1. The sixth computing module is used to calculate the distance between the eighth feature point 22 and the tenth feature point 24, denoted R2. The seventh computing module is used to calculate the distance between the eleventh feature point 25 and the twelfth feature point 26, denoted R3. The eighth computing module is used to calculate the first opening degree of the second eye, where the first opening degree of the second eye is denoted Q3, Q3 = (R2 + R3)/2 × R1, and the second opening degree is denoted Q2. The second judgment module is used to judge whether Q3 is greater than or equal to Q2.
The second generating module is used to generate the result that "the value of the first opening degree is greater than the second opening degree" when Q3 is greater than or equal to Q2.
In the present embodiment, if Q3 is less than Q2, the second judgment module determines that the second eye of the first facial image is in a closed state, and the unlock fails.
In the present embodiment, as long as the first opening degree of either eye is greater than or equal to the second opening degree, the human body corresponding to the first facial image can be determined to be in an eyes-open state.
In an alternative embodiment, the "positioning" performed by the positioning unit 200 is carried out by a trained convolutional neural network, where the convolutional neural network is used to perform feature extraction and processing on the first facial image and to output the multiple feature points. For example, in the above embodiments, after feature extraction and processing by the convolutional neural network, the first facial image yields the accurate coordinates of the first feature point 11, second feature point 12, third feature point 13, fourth feature point 14, fifth feature point 15 and sixth feature point 16, or the accurate coordinates of the seventh feature point 21, eighth feature point 22, ninth feature point 23, tenth feature point 24, eleventh feature point 25 and twelfth feature point 26.
In the present embodiment, the convolutional neural network may include interconnected convolutional layers and fully connected layers, where the convolutional layers include a first convolutional layer, a second convolutional layer, a third convolutional layer and a fourth convolutional layer connected in sequence, and the fully connected layers include a first fully connected layer, a second fully connected layer and a third fully connected layer connected in sequence, so that the coordinates of the obtained feature points are more accurate.
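As a rough illustration of how such a four-convolutional-layer, three-fully-connected-layer landmark regressor could be dimensioned, the following sketch tracks layer output sizes with the standard convolution output-size formula. All concrete numbers here (48×48 input, 3×3 kernels with stride 2, 64 channels after the last convolution, hidden widths 128 and 64) are hypothetical; the document does not specify them.

```python
def conv_out(size, kernel=3, stride=2, pad=1):
    # standard output-size formula for a convolution along one dimension
    return (size + 2 * pad - kernel) // stride + 1

def regressor_plan(input_hw=(48, 48), n_points=6, channels=64):
    """Shape plan for 4 convolutional layers followed by 3 fully connected
    layers that regress 2 coordinates (x, y) per eye feature point."""
    h, w = input_hw
    for _ in range(4):                          # conv1 .. conv4
        h, w = conv_out(h), conv_out(w)
    flat = h * w * channels                     # flattened conv4 output
    fc_widths = [flat, 128, 64, 2 * n_points]   # fc1, fc2, fc3 -> coordinates
    return (h, w), fc_widths
```

For a 48×48 eye patch this plan halves the spatial size four times (48 → 24 → 12 → 6 → 3) before the fully connected layers reduce to the 12 coordinate values of six feature points.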
In the present embodiment, face detection is performed on a face image set obtained in advance; a face keypoint detector then performs face alignment on each detected face and locates 68 facial feature points, after which the eye region image is intercepted (as shown in Fig. 4) and the image size is set so that the intercepted eye region images are all kept consistent in size. Finally, for each open-eye eye region image, the coordinates of the multiple feature points around the eye are recorded, so as to generate a data set with which the convolutional neural network is trained. The multiple feature point coordinates may be the coordinates of the first feature point 11, second feature point 12, third feature point 13, fourth feature point 14, fifth feature point 15, sixth feature point 16, seventh feature point 21, eighth feature point 22, ninth feature point 23, tenth feature point 24, eleventh feature point 25 and twelfth feature point 26 in the above embodiments.
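The eye-region interception and size-normalisation step might look like the following sketch, assuming the face image is a NumPy array and the six eye landmarks come from a 68-point keypoint detector (in the widely used 68-point convention, indices 36–41 and 42–47 cover the two eyes). The margin, output size and nearest-neighbour resize are illustrative choices, not prescribed by the document.

```python
import numpy as np

def crop_eye_region(image, eye_landmarks, out_size=(24, 48), margin=4):
    """Cut a padded bounding box around the eye landmarks and resize it so
    every training patch shares one fixed size."""
    xs = [int(x) for x, y in eye_landmarks]
    ys = [int(y) for x, y in eye_landmarks]
    x0, x1 = max(min(xs) - margin, 0), min(max(xs) + margin, image.shape[1])
    y0, y1 = max(min(ys) - margin, 0), min(max(ys) + margin, image.shape[0])
    patch = image[y0:y1, x0:x1]
    # nearest-neighbour resize via integer index maps
    h, w = out_size
    rows = (np.arange(h) * patch.shape[0] / h).astype(int)
    cols = (np.arange(w) * patch.shape[1] / w).astype(int)
    return patch[rows][:, cols]
```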
In the present embodiment, the trained convolutional neural network performs regression positioning on the features around the human eye, so the accurate locations of the multiple feature points can be obtained, making the calculated first opening degree more accurate and enhancing the accuracy of detecting the open or closed state of the eyes. Moreover, a convolutional neural network based on deep learning gives the detection better robustness, which can further avoid misjudging the state of the eyes when the light is poor, the eyes are small, or the user is squinting.
In the present embodiment, the method by which the detection unit 400 detects whether the human body corresponding to the first facial image is a living body may be to obtain multiple first facial images within a period of time and compare whether the opening degree of the same eye is identical across the multiple first facial images. If the opening degree of the same eye differs, the human body corresponding to the first facial image is determined to be a living body; otherwise it is a prosthesis and the unlock fails. It should be appreciated that, in other embodiments, whether the human body corresponding to the first facial image is a living body can also be judged by other conventional methods.
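A minimal sketch of this opening-degree-variation liveness check, assuming a short window of per-frame first opening degrees for the same eye (the tolerance `eps` is a hypothetical parameter the document does not specify):

```python
def is_living_body(opening_degrees, eps=1e-3):
    """True if the same eye's opening degree varies across the captured
    frames (e.g. a blink occurred); a flat series suggests a photo/prosthesis."""
    if len(opening_degrees) < 2:
        return False  # not enough frames to observe any change
    return max(opening_degrees) - min(opening_degrees) > eps
```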
In the present embodiment, the method by which the comparing unit 500 compares whether the features of the first facial image are identical to the features of the pre-stored second facial image may be to compare whether the similarity between the first facial image and the second facial image reaches ninety percent or more. If so, the features of the first facial image are determined to be identical to the features of the second facial image, and the first facial image and the second facial image correspond to the same person. Otherwise, the features of the first facial image are determined to differ from the features of the second facial image, the first facial image and the second facial image do not correspond to the same person, and the unlock fails. It should be appreciated that, in other embodiments, whether the features of the first facial image are identical to the features of the second facial image can also be judged by other conventional methods.
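The ninety-percent comparison could be realised, for example, with cosine similarity between the two images' feature vectors. The document does not fix a similarity measure, so the choice below is an assumption:

```python
def same_person(feat_a, feat_b, threshold=0.90):
    """Compare two face feature vectors; cosine similarity >= threshold
    (ninety percent here) is taken to mean the same person."""
    num = sum(a * b for a, b in zip(feat_a, feat_b))
    den = (sum(a * a for a in feat_a) ** 0.5) * (sum(b * b for b in feat_b) ** 0.5)
    return num / den >= threshold
```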
The above is only a preferred embodiment of the present invention and is not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the description and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A face unlocking method, characterized by comprising the following steps:
obtaining a first facial image;
locating multiple feature points around the eyes of the first facial image;
calculating a first opening degree of the eyes according to the multiple feature points, and judging whether the first opening degree of the eyes is greater than a preset second opening degree;
after obtaining the result that the first opening degree is greater than the second opening degree, detecting whether the human body corresponding to the first facial image is a living body;
after obtaining the result that the human body is a living body, comparing whether features of the first facial image are identical to features of a pre-stored second facial image;
after obtaining the result that the features of the first facial image are identical to the features of the second facial image, activating an unlock control instruction to unlock an electronic device.
2. The face unlocking method according to claim 1, characterized in that the step of locating multiple feature points around the eyes of the first facial image comprises:
locating a first feature point and a second feature point of the upper eyelid of a first eye of the first facial image, locating a third feature point and a fourth feature point of the lower eyelid of the first eye, locating a fifth feature point at the left corner of the first eye, and locating a sixth feature point at the right corner of the first eye.
3. The face unlocking method according to claim 2, characterized in that the step of calculating the first opening degree of the eyes according to the multiple feature points and judging whether the first opening degree of the eyes is greater than the preset second opening degree comprises:
calculating the distance between the first feature point and the third feature point, denoted L1;
calculating the distance between the second feature point and the fourth feature point, denoted L2;
calculating the distance between the fifth feature point and the sixth feature point, denoted L3;
calculating the first opening degree of the first eye, wherein the first opening degree of the first eye is denoted Q1, Q1 = (L2 + L3)/2 × L1, and the second opening degree is denoted Q2;
judging whether Q1 is greater than or equal to Q2;
when Q1 is greater than or equal to Q2, generating the result that "the value of the first opening degree is greater than the second opening degree".
4. The face unlocking method according to claim 1, characterized in that the step of locating multiple feature points around the eyes of the first facial image comprises:
locating a seventh feature point and an eighth feature point of the upper eyelid of a second eye of the first facial image, locating a ninth feature point and a tenth feature point of the lower eyelid of the second eye, locating an eleventh feature point at the left corner of the second eye, and locating a twelfth feature point at the right corner of the second eye.
5. The face unlocking method according to claim 4, characterized in that the step of calculating the first opening degree of the eyes according to the multiple feature points and judging whether the first opening degree of the eyes is greater than the preset second opening degree comprises:
calculating the distance between the seventh feature point and the ninth feature point, denoted R1;
calculating the distance between the eighth feature point and the tenth feature point, denoted R2;
calculating the distance between the eleventh feature point and the twelfth feature point, denoted R3;
calculating the first opening degree of the second eye, wherein the first opening degree of the second eye is denoted Q3, Q3 = (R2 + R3)/2 × R1, and the second opening degree is denoted Q2;
judging whether Q3 is greater than or equal to Q2;
when Q3 is greater than or equal to Q2, generating the result that "the value of the first opening degree is greater than the second opening degree".
6. The face unlocking method according to claim 1, characterized in that the "locating" is carried out by a trained convolutional neural network, wherein the convolutional neural network is used to perform feature extraction and processing on the first facial image and to output the multiple feature points.
7. A face unlocking device, characterized by comprising:
an acquiring unit, for obtaining a first facial image;
a positioning unit, for locating multiple feature points around the eyes of the first facial image;
a computing unit, for calculating a first opening degree of the eyes according to the multiple feature points and judging whether the first opening degree of the eyes is greater than a preset second opening degree;
a detection unit, for detecting, when the first opening degree is greater than the second opening degree, whether the human body corresponding to the first facial image is a living body;
a comparing unit, for comparing, when the human body is a living body, whether features of the first facial image are identical to features of a pre-stored second facial image;
an activating unit, for activating an unlock control instruction to unlock an electronic device when the features of the first facial image are identical to the features of the second facial image.
8. The face unlocking device according to claim 7, characterized in that the positioning unit comprises:
a first locating module, for locating a first feature point and a second feature point of the upper eyelid of a first eye of the first facial image, locating a third feature point and a fourth feature point of the lower eyelid of the first eye, locating a fifth feature point at the left corner of the first eye, and locating a sixth feature point at the right corner of the first eye.
9. The face unlocking device according to claim 8, characterized in that the computing unit comprises:
a first computing module, for calculating the distance between the first feature point and the third feature point, denoted L1;
a second computing module, for calculating the distance between the second feature point and the fourth feature point, denoted L2;
a third computing module, for calculating the distance between the fifth feature point and the sixth feature point, denoted L3;
a fourth computing module, for calculating the first opening degree of the first eye, wherein the first opening degree of the first eye is denoted Q1, Q1 = (L2 + L3)/2 × L1, and the second opening degree is denoted Q2;
a first judgment module, for judging whether Q1 is greater than or equal to Q2;
a first generating module, for generating, when Q1 is greater than or equal to Q2, the result that "the value of the first opening degree is greater than the second opening degree".
10. The face unlocking device according to claim 7, characterized in that the positioning unit comprises:
a second locating module, for locating a seventh feature point and an eighth feature point of the upper eyelid of a second eye of the first facial image, locating a ninth feature point and a tenth feature point of the lower eyelid of the second eye, locating an eleventh feature point at the left corner of the second eye, and locating a twelfth feature point at the right corner of the second eye.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811318452.9A CN109284596A (en) | 2018-11-07 | 2018-11-07 | Face unlocking method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109284596A true CN109284596A (en) | 2019-01-29 |
Family
ID=65175038
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811318452.9A Pending CN109284596A (en) | 2018-11-07 | 2018-11-07 | Face unlocking method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109284596A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101216887A (en) * | 2008-01-04 | 2008-07-09 | 浙江大学 | An automatic computer authentication method for photographic faces and living faces |
US20100158319A1 (en) * | 2008-12-22 | 2010-06-24 | Electronics And Telecommunications Research Institute | Method and apparatus for fake-face detection using range information |
CN105989263A (en) * | 2015-01-30 | 2016-10-05 | 阿里巴巴集团控股有限公司 | Method for authenticating identities, method for opening accounts, devices and systems |
CN107169483A (en) * | 2017-07-12 | 2017-09-15 | 深圳奥比中光科技有限公司 | Tasks carrying based on recognition of face |
CN107358151A (en) * | 2017-06-02 | 2017-11-17 | 广州视源电子科技股份有限公司 | A kind of eye motion detection method and device and vivo identification method and system |
CN107609383A (en) * | 2017-10-26 | 2018-01-19 | 深圳奥比中光科技有限公司 | 3D face identity authentications and device |
CN107766840A (en) * | 2017-11-09 | 2018-03-06 | 杭州有盾网络科技有限公司 | A kind of method, apparatus of blink detection, equipment and computer-readable recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | Application publication date: 20190129 |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | |