CN105184277B - Living body face recognition method and device - Google Patents
- Publication number
- CN105184277B (application CN201510629903.0A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- eye
- iris
- eyes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/167—Detection; Localisation; Normalisation using comparisons between temporally consecutive images
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Collating Specific Patterns (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The invention relates to the technical field of identity authentication, in particular to a living body face recognition method, device and system. The method comprises the following steps: acquiring a plurality of face images in real time; acquiring, according to the face images, the fixation point of at least one eye in each face image, or the movement track of the fixation point; judging, according to the fixation points of the eyes or the movement tracks of the fixation points, whether the eyes moved as prompted; and if the judgment result is yes, passing the living body verification. When the user undergoes facial feature verification, the eyes must move as prompted. Because the facial features and the eye movement data are extracted from the same photo, group of photos or video, an illegal login attempter cannot "cheat" the verification device with a prepared face image or video; the method can therefore confirm both that the user matches the stored facial features and that the face information comes from a live person.
Description
Technical Field
The invention relates to the technical field of identity authentication, in particular to a living body face recognition method and a living body face recognition device.
Background
Currently, face recognition technology is widely applied to identity verification in secure login systems. A typical system acquires a photo of the current user's face in real time through an image acquisition device and compares it, based on the features and distribution of the facial organs, with a pre-stored face photo; if the two match, identity authentication passes.
However, during face recognition an illegal login attempter may "fool" the camera or other image capture device by forging a face, so that the captured picture is not of a live face. For example, a photo or video clip of a registrant's face may be placed in front of the image acquisition device, so that the "current user" photo actually comes from that photo or clip; or the attacker may forge a three-dimensional face model and place it in front of the device, so that the captured photo is a photo of the model. A comparison based only on the features and distribution of the facial organs cannot detect this, which leaves the identity recognition system with weak anti-counterfeiting capability and low security.
Disclosure of Invention
In view of this, an object of the embodiments of the present invention is to provide a living body face recognition method and apparatus that can recognize whether the face picture obtained from the image capturing device comes from a real living person, thereby increasing the anti-counterfeiting capability of the identity recognition system and improving the security of the system.
In a first aspect, an embodiment of the present invention provides a living body face recognition method, including:
acquiring a plurality of face images in real time;
acquiring, according to the face images, the fixation point of at least one eye in each face image, or the movement track of the fixation point;
judging, according to the fixation points of the eyes or the movement tracks of the fixation points, whether the eyes move as prompted;
and if the judgment result is yes, passing the living body verification.
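The four steps above can be sketched as a minimal decision function. This is an illustrative assumption, not the patent's implementation: the helper names (`capture_frames`, `estimate_gaze`, `matches_prompt`) are hypothetical placeholders for the units described in the claims.

```python
# Hypothetical sketch of the four-step liveness flow; the three callables
# stand in for the acquisition, gaze-estimation and judging units.
def liveness_check(capture_frames, estimate_gaze, matches_prompt):
    """Return True when the eyes moved as prompted (liveness passes)."""
    frames = capture_frames()                      # step 1: acquire face images
    gaze = [estimate_gaze(f) for f in frames]      # step 2: gaze point per image
    return matches_prompt(gaze)                    # steps 3-4: compare with prompt
```

Any concrete prompt scheme (external target, moving target, password panel) plugs in as a different `matches_prompt`.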
In combination with the first aspect, the embodiments of the present invention provide a first possible implementation manner of the first aspect,
determining whether the eye is moving as prompted comprises at least one of:
prompting the user in advance with an external target reference object that the eyes need to watch, and judging whether, in the plurality of face images captured during shooting, the fixation points of the eyes are aligned with the target reference object;
during shooting, displaying to the user through a display screen a target reference object whose motion track changes randomly, and judging whether the motion track of the fixation point formed by the eyes in the plurality of face images is consistent with the motion track of the target reference object;
during shooting, displaying to the user through a display screen a target reference object that appears sequentially at a plurality of random positions and disappears a preset time after each appearance, and judging whether the fixation points of the eyes in the plurality of face images are aligned with the target reference object at those random positions;
during shooting, displaying a plurality of reference objects to the user and prompting the user with some of them as target reference objects, and judging whether the fixation points of the eyes in the plurality of face images are aligned with those target reference objects;
and during shooting, displaying a password input panel to the user, and judging whether the fixation points of the eyes in the plurality of face images are aligned, in sequence, with the positions on the password input panel corresponding to a preset password.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the acquiring a plurality of face images in real time specifically includes:
acquiring one or more original images including a human face;
performing face detection according to the original image to obtain a face position;
and cropping the region including the face from the original image according to the face position, thereby obtaining a face image.
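The detect-then-crop step can be illustrated with a minimal sketch. This is an assumption for illustration: the image is modeled as a list of pixel rows, and the bounding box `(x, y, w, h)` is assumed to come from a separate face detector, which is outside this sketch.

```python
# Crop the face region from an original image given a detected bounding box.
# `image` is a 2-D list of pixel rows; `bbox` = (x, y, w, h) from a detector.
def crop_face(image, bbox):
    x, y, w, h = bbox
    return [row[x:x + w] for row in image[y:y + h]]
```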
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the obtaining, according to the face image, a gaze point of at least one eye in each face image specifically includes:
performing eye positioning according to the face image, cropping the face image, and acquiring an image of at least one eye;
extracting an iris image and/or a pupil image of the eye from the image of the eye;
and calculating the relative displacement of eyeball rotation according to the iris image and/or the pupil image, and acquiring the motion track of the fixation point of the eyes according to the relative displacement.
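A crude illustration of the relative-displacement calculation, under the assumption that the eye image is a 2-D list of grayscale values and the pupil is the darkest region; real systems use far more robust pupil/iris localization, so both the threshold and the centroid approach are assumptions.

```python
def pupil_center(eye_image, threshold=50):
    """Centroid of pixels darker than `threshold` (a crude pupil estimate)."""
    points = [(x, y)
              for y, row in enumerate(eye_image)
              for x, v in enumerate(row) if v < threshold]
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def gaze_trajectory(centers):
    """Displacement of each pupil center relative to the first frame."""
    x0, y0 = centers[0]
    return [(x - x0, y - y0) for x, y in centers]
```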
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where before acquiring the plurality of face images in real time, the method further includes: emitting at least one beam of infrared light, so that each beam of infrared light forms an iris reflection point on the eyeball of the human body;
displaying a plurality of calibration points to a user in advance, and acquiring a calibration face image when the fixation point of the eye is aligned with the calibration points;
performing eye positioning according to the calibration face image, cropping the calibration face image, and acquiring an image of at least one eye;
extracting the pupil position and/or the iris position of the eye from the image of the eye, and acquiring the position of the iris reflection point;
acquiring the relative positions of the pupil and/or the iris of the eye and the iris reflection point according to the pupil position and/or the iris position and the position of the iris reflection point;
calibrating the position correspondence between the fixation point of the eye in actual space and the calibration point, according to the relative position of the pupil and/or the iris of the eye and the iris reflection point, and the coordinates of the reference point on the display screen;
the acquiring of the fixation point of at least one eye in each face image according to the face image specifically includes:
acquiring the specific coordinates on the display screen of the position watched by the eyes, according to the face image and the position correspondence between the fixation points of the eyes in actual space and the reference points on the display screen; and taking those specific coordinates as the fixation point;
and/or,
after acquiring the plurality of face images in real time, the method further includes:
extracting facial features of the face image according to the face image, and comparing the facial features of the face image with a prestored registered face image;
if the comparison result is consistent, the face verification is passed;
further comprising: and if the human face verification and the living body verification pass, the user verification is passed.
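The calibration described above, which maps the pupil-to-reflection-point offset to screen coordinates, can be sketched as a least-squares affine fit over the calibration samples. This is an illustrative assumption: real gaze trackers typically use higher-order polynomial mappings, and the affine form here is chosen only to keep the example minimal.

```python
import numpy as np

def fit_calibration(offsets, screen_points):
    """Fit an affine map from pupil-minus-reflection-point offsets (N x 2)
    to screen coordinates (N x 2), using the calibration-point samples."""
    A = np.hstack([np.asarray(offsets, float), np.ones((len(offsets), 1))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_points, float), rcond=None)
    return coeffs  # 3 x 2 matrix mapping (dx, dy, 1) -> (sx, sy)

def gaze_to_screen(coeffs, offset):
    """Apply the fitted map to one offset, giving a screen fixation point."""
    dx, dy = offset
    return tuple(np.array([dx, dy, 1.0]) @ coeffs)
```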
In a second aspect, an embodiment of the present invention further provides a living body face recognition apparatus, including:
the face image acquisition unit is used for acquiring a plurality of face images in real time;
the eye fixation point acquisition unit is used for acquiring the fixation point of at least one eye or the movement track of the fixation point in each human face image according to the human face images;
the eye fixation point position judging unit is used for judging whether the eyes move according to prompts or not according to the fixation points of the eyes or the movement tracks of the fixation points;
and a living body verification passing unit, used for passing the living body verification when the judgment result is yes.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation manner of the second aspect, and determining whether the eye moves according to the prompt includes at least one of:
prompting the user in advance with an external target reference object that the eyes need to watch, and judging whether, in the plurality of face images captured during shooting, the fixation points of the eyes are aligned with the target reference object;
during shooting, displaying to the user through a display screen a target reference object whose motion track changes randomly, and judging whether the motion track of the fixation point formed by the eyes in the plurality of face images is consistent with the motion track of the target reference object;
during shooting, displaying to the user through a display screen a target reference object that appears sequentially at a plurality of random positions and disappears a preset time after each appearance, and judging whether the fixation points of the eyes in the plurality of face images are aligned with the target reference object at those random positions;
during shooting, displaying a plurality of reference objects to the user and prompting the user with some of them as target reference objects, and judging whether the fixation points of the eyes in the plurality of face images are aligned with those target reference objects;
and during shooting, displaying a password input panel to the user, and judging whether the fixation points of the eyes in the plurality of face images are aligned, in sequence, with the positions on the password input panel corresponding to a preset password.
With reference to the second aspect, an embodiment of the present invention provides a second possible implementation manner of the second aspect, where the face image obtaining unit specifically includes:
the original image acquisition module is used for acquiring a plurality of original images comprising human faces;
the face position acquisition module is used for carrying out face detection according to the original image to acquire a face position;
and the face image acquisition module is used for cropping the region including the face from the original image according to the face position, so as to obtain a face image.
With reference to the second aspect, an embodiment of the present invention provides a third possible implementation manner of the second aspect, where the eye gaze point obtaining unit specifically includes:
the first eye image acquisition module is used for positioning the eyes according to the face image, cropping the face image, and acquiring an image of at least one eye;
the first eye image extraction module is used for extracting an iris image and/or a pupil image of an eye from the image of the eye;
and the fixation point acquisition module is used for calculating the relative displacement of eyeball rotation according to the iris image and/or the pupil image and acquiring the motion trail of the fixation point of the eyes according to the relative displacement.
With reference to the second aspect, an embodiment of the present invention provides a fourth possible implementation manner of the second aspect, further including:
the infrared light emitting unit is used for emitting at least one beam of infrared light, so that each beam of infrared light can form an iris reflection point in an eyeball of a human body;
the calibration face image acquisition module is used for displaying a plurality of calibration points to a user in advance and acquiring a calibration face image when the fixation point of the eyes is aligned with the calibration points;
the second eye image acquisition module is used for positioning the eyes according to the calibration face image, cropping the calibration face image, and acquiring an image of at least one eye;
the second eye position extraction module is used for extracting the pupil position and/or the iris position of the eye from the image of the eye, acquiring the position of the iris reflection point, and acquiring the relative position of the pupil and/or the iris of the eye and the iris reflection point according to the pupil position and/or the iris position and the position of the iris reflection point;
the calibration module is used for calibrating the position correspondence between the fixation point of the eye in actual space and the calibration point, according to the relative position of the pupil and/or the iris of the eye and the iris reflection point, and the coordinates of the reference point on the display screen;
the eye fixation point acquisition unit specifically includes:
the fixation point coordinate acquisition submodule is used for acquiring the specific coordinate of the fixation position of the eyes in the display screen according to the position corresponding relation between the fixation point of the eyes and the reference point in the display screen in the actual space and the face image; taking the specific coordinates of the gazing position of the eyes in the display screen as a gazing point;
and/or,
further comprising: the face feature extraction unit is used for extracting the face features of the face image and comparing the face features of the face image with a prestored registered face image;
the face verification passing unit is used for passing the face verification when the comparison results are consistent;
and the user verification passing unit is used for passing user verification when the face verification and the living body verification pass.
In the living body face recognition method and device provided by the embodiments of the present invention, after the face images are acquired, the fixation point of at least one eye in each face image, or the movement track of the fixation point, is acquired according to the face images, and it is then judged from the fixation points or their movement track whether the person's eyes moved as prompted, the prompt being one the device issues to the user through the display screen during photographing. Thus, when the user takes a photo or records a video, the eyes must move as prompted. Because the facial features and the eye movement data are extracted from the same photo, group of photos or video, an illegal login attempter cannot use a prepared face image or video to "cheat" the device. The method can therefore confirm both that the user matches the stored facial features and that the face information comes from a live person, which improves the anti-counterfeiting capability of the identity recognition system and the security of the system.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a flowchart illustrating a method for recognizing a human face of a living body according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a specific method for acquiring multiple face images in the living body face recognition method according to the embodiment of the present invention;
fig. 3 is a flowchart illustrating a specific method for acquiring a motion trajectory of a gaze point in the living body face recognition method according to the embodiment of the present invention;
fig. 4 is a flowchart illustrating a specific method for acquiring a gaze point in the living body face recognition method according to the embodiment of the present invention;
FIG. 5 is a flow chart of another living human face recognition method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a living body face recognition apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram illustrating a specific structure of a face image obtaining unit according to an embodiment of the present invention;
fig. 8 is a schematic diagram illustrating a specific structure of an eye fixation point obtaining unit according to an embodiment of the present invention;
fig. 9 is a schematic diagram illustrating a specific structure of another eye fixation point obtaining unit according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of another living body face recognition apparatus provided by an embodiment of the invention;
fig. 11 is a schematic structural diagram illustrating a living human face recognition system according to an embodiment of the present invention.
Illustration of the drawings:
10-face image acquisition unit; 20-eye fixation point acquisition unit; 30-eye fixation point position judging unit; 40-living body verification passing unit; 50-infrared light emitting unit; 60-calibration face image acquisition module; 70-second eye image acquisition module; 80-second eye position extraction module; 90-calibration module; 100-face feature extraction unit; 110-face verification passing unit; 120-user verification passing unit; 130-display screen; 140-image acquisition device; 150-lighting device; 160-living body face recognition device;
101-an original image acquisition module; 102-a face position acquisition module; 103-a face image acquisition module;
201-a first eye image acquisition module; 202-a first eye image extraction module; 203-a fixation point acquisition module; 204-gazing point coordinate obtaining module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
An existing face recognition method generally collects a face photo directly through an image acquisition device and then compares the facial features in the photo with those in a pre-stored registration photo; if they match, identity verification passes. However, once an illegal login attempter forges a face to deceive the image acquisition device, the collected photo is not a photo of a living body, and the subsequent comparison cannot detect this, which leaves the identity recognition system with weak anti-counterfeiting capability and low security. In view of this, the living body face recognition method, device and system described herein can recognize whether the face picture acquired from the image acquisition device comes from a real living body, thereby improving the anti-counterfeiting capability of the identity recognition system and the security of the system. It should be noted that the method can be applied to a tablet computer, a mobile phone or another mobile terminal device; for example, the camera of a mobile phone, tablet computer or other mobile device may be used to take the pictures. It can also be applied to logins that require identity authentication, whether to the mobile terminal hardware itself or within a mobile APP, such as online games, online banking and e-commerce.
For the convenience of understanding the present embodiment, the living body face recognition method disclosed in the present embodiment will first be described in detail.
referring to fig. 1, the living human face recognition method provided by the embodiment of the present invention includes the following steps:
s101: acquiring a plurality of face images in real time;
in a specific implementation, the face images need to be acquired through a suitable image acquisition device, and acquiring them in real time ensures that they are images of the person currently in the acquisition area of the device, rather than a previously shot or forged photo or video. Generally, the image acquisition device captures a plurality of face images continuously at a certain frequency. In this way, when the user needs to look at a plurality of target reference objects, several images are obtained while the eyes gaze at each target, and one of them can be selected as the image in which the eye's fixation point is on that target reference object.
After acquiring the face image, the method further comprises the following steps:
s102: acquiring a fixation point or a movement track of the fixation point of at least one eye in each human face image according to the human face images;
in a specific implementation, because the face image has been obtained and the features of each part of the face can be extracted from it, the eye features can be extracted from the face image, and the fixation point of at least one eye is then obtained from those eye features. It should be noted that, since there is a certain distance between a person's two eyes, the rotation angles of the two eyeballs differ when both eyes gaze at the same target; therefore, within a given login of a user, the fixation point of the same eye should be used throughout.
S103: judging whether the eyes move according to prompts or not according to the fixation points of the eyes or the movement tracks of the fixation points;
in the concrete implementation:
1. The system can prompt the user to watch a target reference object located outside the display screen: an external object, whose position is already known once the image acquisition device is installed, serves as the target reference object. The system directly prompts the user through the display screen to watch a specific external target reference object, acquires a face image while the eyes look at it, and judges whether the fixation point of the eyes in the image is aligned with the target reference object. The user may be prompted several times to look at target reference objects at different positions, which improves security.
2. The system displays to the user a target reference object whose motion track changes randomly and prompts the user to watch it. As the position of the target reference object changes, the eyeballs rotate to follow it, so the motion track of the eye fixation point ends up matching the motion track of the target reference object. For example, if a target reference object moving in a circle is displayed, the track of the eye fixation point should also be a circle. When judging, as long as the degree of match between the motion track of the eye fixation point and that of the target reference object is above a preset threshold, the eyes are considered to have followed the target reference object.
It should be noted that the image capturing device photographs the face throughout the movement of the target reference object. Because photographs are taken at a certain preset frequency, multiple face photos are obtained, and across those photos the continuous motion track of the eye fixation point should be consistent with the motion track of the target reference object.
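A minimal sketch of the matching-degree test described above: the score here is one minus the mean point-to-point distance scaled by an assumed `max_dist`, compared against an illustrative threshold of 0.8. The patent does not specify the metric or threshold, so both are assumptions.

```python
def trajectory_match(gaze, reference, max_dist=50.0):
    """Score in [0, 1]; 1 means the two trajectories coincide."""
    dists = [((gx - rx) ** 2 + (gy - ry) ** 2) ** 0.5
             for (gx, gy), (rx, ry) in zip(gaze, reference)]
    mean = sum(dists) / len(dists)
    return max(0.0, 1.0 - mean / max_dist)

def eyes_followed_target(gaze, reference, threshold=0.8):
    """True when the matching degree is above the preset threshold."""
    return trajectory_match(gaze, reference) >= threshold
```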
3. The system may display to the user, in advance, a reference object that needs to be watched while the pictures are taken. The reference object may be displayed at a random position (i.e., with random coordinates); it disappears after a preset time and reappears, in its original form or another form, at another random position, repeating a preset number of times, while the user's eyes follow each position at which it appears. To make the judgment more reliable, this display process may be repeated several times; what appears on the screen is then a pattern of positions, and the user watches the positions of the reference object in the order in which they appear. As the user gazes at this sequence of positions, the fixation point of the eyes changes accordingly, so a series of position points formed by the eye fixation points can be obtained from the eye pictures. The position pattern of the points formed by the eye fixation points is compared with the position pattern of the points that actually appeared, and if the similarity is above a preset threshold, the eye fixation points are considered consistent with the positions shown to the user. That is, in each face image, the fixation points of the eyes should be aligned with the target reference object as it appears at the different positions. In the specific determination, whether the person's eyes followed the randomly appearing object can be judged from the fixation points and the random positions at which the reference object appeared.
4. The system may display a combination of multiple reference objects, such as an alphabetic keyboard, a numeric keypad, a combination of shapes or other content, to the user on the display in advance, then ask the user to watch some of the reference objects, i.e., take that subset as the target reference objects and prompt the user to watch them. For example, the user may be asked to look at the letters Q, P, Z, M on the keyboard, or the numbers 3, 7, 4, 2, or certain shapes (circle, then star). The gazing process may require the user to follow a strictly specified order, or may check only whether the watched content is consistent, regardless of the order in which the user gazes.
5. The system may display a password input panel to the user in advance on the display; the panel may be an alphabetic keyboard, a numeric keyboard, a combination of graphics, or other content. The user then enters a preset password or shape combination by gazing, i.e., the system checks whether, across the multiple face images, the gaze point of the eyes is aligned in sequence with the positions on the panel corresponding to the preset password. The process resembles displaying a virtual keyboard or input points on the screen and asking the user to select the preset password by gaze, or to move the eyes so that the eye movement traces a previously defined shape. It should be noted that such a preset shape is itself one form of password input.
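The gaze checks in items 3 to 5 can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the 100-pixel tolerance, the 0.8 similarity threshold, and all function names are assumptions introduced here.

```python
import math

def pattern_similarity(gaze_points, shown_points):
    """Compare the positions fixated by the eyes with the positions at
    which the reference object actually appeared, pairing them in order.
    Returns a similarity score in [0, 1]; 1 means a perfect match.
    The 100-pixel distance tolerance is a hypothetical choice."""
    if len(gaze_points) != len(shown_points):
        return 0.0
    tolerance = 100.0  # assumed pixel tolerance
    score = 0.0
    for (gx, gy), (sx, sy) in zip(gaze_points, shown_points):
        d = math.hypot(gx - sx, gy - sy)
        score += max(0.0, 1.0 - d / tolerance)
    return score / len(shown_points)

def is_live(gaze_points, shown_points, threshold=0.8):
    """Liveness passes when similarity reaches the preset threshold."""
    return pattern_similarity(gaze_points, shown_points) >= threshold

def gaze_password_ok(gazed_keys, preset_password, ordered=True):
    """Check whether the keys fixated on the on-screen panel match the
    preset password (item 5). `gazed_keys` would come from mapping each
    frame's gaze point to the nearest key on the panel (not shown).
    ordered=False mirrors the order-insensitive variant of item 4."""
    if ordered:
        return list(gazed_keys) == list(preset_password)
    return set(gazed_keys) == set(preset_password)
```

For example, `is_live([(100, 100), (300, 200)], [(105, 98), (310, 195)])` accepts gaze points that land close to where the reference object appeared, while widely scattered gaze points are rejected.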
S104: if the judgment result is yes, the living body verification is passed.
In a specific implementation, if the result of the above determination is yes, the image acquired by the system is considered to be an image of a living person, and the living body verification is therefore passed.
After obtaining the face images, the living body face recognition method provided by the invention obtains the gaze point of at least one eye in each face image and then judges, from those gaze points, whether the eyes are gazing at a target reference object. The eyes must therefore move as instructed while the user takes a picture or records a video, and an illegitimate user can hardly forge a face that 'deceives' the image acquisition device within such a short time. The method can thus recognize whether a face picture obtained from the image acquisition device comes from a real living person, strengthening the anti-counterfeiting capability of the identity recognition system and improving the security of the system.
For the above face image, referring to fig. 2, an embodiment of the present invention further provides a specific method for obtaining a face image, where the method includes:
S201: acquiring a single or a plurality of original images comprising human faces;
In a specific implementation, the single or multiple original images containing a face are images captured directly by the image acquisition device, without any processing.
When the user gazes at the reference object, or rotates the gaze point of the eyes by a required angle so that the gaze point reaches the indicated position, the user may press the shooting key; the image acquired by the image acquisition device at that moment is used as the original image. Alternatively, the display screen may be a touch screen: the user sends control instructions to the system by touching, flicking, sliding, and so on, the system controls the image acquisition device according to those instructions, and the user can face the lens of the image acquisition device while interacting with it through the touch screen. In addition, the image acquisition device may acquire an original image automatically under the control of the system.
S202: according to the original image, carrying out face detection to obtain a face position;
In a specific implementation, face detection is performed on the original image, that is, the system identifies whether a face exists in it. The original image generally includes a face, but when the image acquisition device captures the original image automatically under the control of the system, the user may not yet be ready and the image may contain no face; in that case, images found to contain no face are discarded. After face detection, the position of the face in the original image is obtained.
S203: and intercepting the original image comprising the face according to the face position, thereby obtaining a face image.
In a specific implementation, the face position acquired in S202 consists of the coordinate points at the edge of the face in the image, or coordinate points outside the face at a preset distance from the face edge. The face image can therefore be cut out of the original image according to those coordinate points, thereby obtaining the face image.
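The cropping step S203 can be sketched as follows, assuming the face detector of S202 has already supplied a bounding rectangle. The `(x, y, w, h)` box format, the margin value, and the function name are assumptions for illustration; any real detector (e.g. a Haar cascade or a CNN) could provide the box.

```python
import numpy as np

def crop_face(original, box, margin=10):
    """Cut the face region out of the original image (S203).
    `box` is a hypothetical (x, y, w, h) rectangle from face
    detection (S202); `margin` is the preset distance beyond the
    face edge mentioned in the text."""
    x, y, w, h = box
    h_img, w_img = original.shape[:2]
    # Clamp the margin-expanded rectangle to the image bounds.
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1, y1 = min(w_img, x + w + margin), min(h_img, y + h + margin)
    return original[y0:y1, x0:x1]
```

Clamping matters because a face near the image border would otherwise produce negative indices when the margin is added.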
In addition, referring to fig. 3, an embodiment of the present invention further provides a specific method for obtaining a motion trajectory of a fixation point of at least one eye in each face image according to the face image, where the method includes:
S301: according to the face image, carrying out eye positioning, intercepting the face image, and acquiring an image of at least one eye;
In a specific implementation, the features of each part of the human face can be acquired from the face image, so when positioning the eyes, only the features of the eyes are used. Eye positioning yields the real coordinates of the eye contour in the face image, so the image of at least one eye is obtained by intercepting the face image according to those coordinates.
S302: extracting an iris image and/or a pupil image of an eye from the image of the eye;
In a specific implementation, under a variety of light source conditions, such as visible or infrared illumination, both the iris image and the pupil image of a human eye can be observed directly in the eye image. The eye image is then processed according to the characteristics of the iris or pupil, for example to remove noise interference, so that only the final iris image or pupil image is retained.
S303: and calculating the relative displacement of the rotation of the eyeballs according to the iris image and/or the pupil image, and acquiring the motion trail of the fixation point of the eyes according to the relative displacement.
In a specific implementation, the eyeball rotates when a person gazes at a target reference object, producing a corresponding displacement. Generally, the axis of the pupil or iris when the eye looks straight ahead is taken as the reference axis. When the eye gazes at a target reference object, the axis of the iris or pupil at that moment can be obtained from the acquired iris image or pupil image, and the relative displacement of the eyeball rotation is calculated by comparing that axis with the reference axis. Combining the relative displacements of eyeball rotation across the multiple face pictures forms the motion trail of the gaze point of the eyes. This way of acquiring the gaze point does not require eye positioning; it relies only on the change in position of the pupil or iris as the eyeball fixates on different positions.
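The displacement computation of S303 reduces to subtracting the reference axis from each frame's detected centre. A minimal sketch, with the 2-D pixel-coordinate representation and the function name as assumptions:

```python
def gaze_trajectory(iris_centers, reference_axis):
    """For each face picture, compute the relative displacement of the
    eyeball: the offset of the detected iris (or pupil) centre from the
    reference axis, i.e. the centre position when the eye looks straight
    ahead. Joining the per-frame displacements gives the motion trail
    of the gaze point described in S303. Coordinates are image pixels."""
    rx, ry = reference_axis
    return [(cx - rx, cy - ry) for cx, cy in iris_centers]
```

For example, iris centres at (12, 10) and (15, 10) against a reference axis of (10, 10) yield the rightward trail [(2, 0), (5, 0)].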
In addition, referring to fig. 4, another living human face recognition method is further provided in the embodiments of the present invention, and on the basis of the several embodiments, the method includes:
Before acquiring at least one face image, i.e., before S101, at least one beam of infrared light may be emitted by the lighting device, so that each beam forms an iris reflection point on the human eyeball. When the infrared light emitted by the single or multiple infrared light sources forms one or more iris reflection points on the iris, the change in gaze position in real space corresponding to the relative movement of the eye in the captured picture can be calculated from the relative position and angle of the iris reflection points and the eyeball rotation. In other words, the change of the eye gaze point can be obtained, and the gaze point of the eye in real space can be calculated from the iris reflection points on the iris and the pupil image.
Specifically, the tracking of the eye position may be the position of one of the left and right eyes, or may be the tracking of both the left and right eyes.
Under more stable visible-light illumination conditions, the face recognition method can determine the positions of the user's eyes. Both the pupil and the iris are intrinsic physiological structures of the eye; because they differ greatly in color from the surrounding structures and have a fixed shape, they are widely used to track eye movements.
The pupil, located at the center of the iris, is usually dark in color; it is circular when not occluded by the upper and lower eyelids, or partially circular when occluded. The center position of the pupil can be found in the image by pattern recognition of its shape and threshold control of its color. Likewise, the iris is circular without eyelid occlusion or partially circular with it, and can be distinguished from the white of the eyeball by color contrast.
The movement of the pupil, or of the iris, can be approximated as a change in the relative position of the eye in the image. The position data of the eye in the captured picture can be acquired using visible light alone; to this end, the method further comprises:
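The threshold-based pupil localization described above can be sketched as follows. This is a simplified illustration, not the patent's method: the threshold value of 40 and the centroid-of-dark-pixels approach are assumptions (a real system would also apply the shape-based pattern recognition mentioned in the text).

```python
import numpy as np

def pupil_center(eye_gray, dark_threshold=40):
    """Locate the pupil centre in a grayscale eye image by colour
    thresholding: the pupil is the darkest region, so pixels below
    `dark_threshold` are treated as pupil and their centroid is
    returned as (row, col). The threshold would be tuned per camera
    and illumination; 40 is an assumed value."""
    mask = eye_gray < dark_threshold
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None  # no dark region found, e.g. the eye is closed
    return (float(ys.mean()), float(xs.mean()))
```

On a synthetic 20x20 bright image with a dark 4x4 patch, the centroid lands at the patch centre, which is why threshold control of color suffices for a first estimate before shape checks.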
S401: displaying a plurality of calibration points to a user in advance, and acquiring a calibration face image when the gaze point of the eye is aligned with the calibration points;
the process is similar to S301, and is not described herein again.
S402: carrying out eye positioning according to the calibration face image, intercepting the calibration face image, and obtaining an image of at least one eye;
the process is similar to S302 and is not described herein.
S403: extracting the pupil position and/or the iris position of the eye from the image of the eye, and acquiring the position of the iris reflection point;
In a specific implementation, when infrared rays irradiate the human eye and form iris reflection points, those points can be observed directly on the eyeball and can likewise be obtained from the image of the eye. When the eye rotates, the pupil position and iris position change with the rotation of the eyeball; but because the incident angle of the infrared ray does not change, the light spot formed by the infrared ray in the eye, taken relative to the pupil or iris, moves in exactly the opposite direction to the pupil or iris. The pupil position and/or iris position and the position of the iris reflection point can therefore be extracted directly from the image of the eye.
S404: acquiring the relative positions of the pupil and/or the iris of the eye and the iris reflection point according to the pupil position and/or the iris position and the position of the iris reflection point;
S405: calibrating the position correspondence between the gaze point of the eye in actual space and the calibration point, according to the relative position of the pupil and/or iris of the eye to the iris reflection point and the coordinates of the reference point on the display screen.
After the pupil position and/or iris position and the position of the iris reflection point are extracted from the images of the eyes, the relative position of the pupil and/or iris to the iris reflection point can be obtained directly by measuring or computing their positional relationship. The position correspondence between the gaze point of the eye in actual space and the calibration point is then calibrated according to that relative position and the coordinates of the reference point on the display screen.
When the display screen later shows a reference object to the user, the specific coordinates of the position the eyes are gazing at on the display screen can be obtained from the coordinates of the reference object, the calibrated correspondence between the eye's gaze point in actual space and the calibration points, and the face image captured while the user gazes at the reference object. These coordinates are the gaze point of the eyes.
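The calibration of S401 to S405 amounts to fitting a mapping from the pupil-to-glint vector to screen coordinates. The patent does not prescribe the form of that mapping; the affine model and least-squares fit below are one simple assumed choice for illustration.

```python
import numpy as np

def fit_gaze_mapping(pupil_glint_vectors, screen_points):
    """Fit an affine mapping from the pupil-to-glint vector (pupil
    position minus iris-reflection-point position, in image pixels)
    to screen coordinates, using the calibration pairs gathered while
    the user gazes at the calibration points. The affine form
    screen = A @ [vx, vy, 1] is an assumption, solved by least squares."""
    v = np.asarray(pupil_glint_vectors, dtype=float)
    s = np.asarray(screen_points, dtype=float)
    A = np.hstack([v, np.ones((len(v), 1))])  # add constant term
    coeffs, *_ = np.linalg.lstsq(A, s, rcond=None)
    return coeffs

def gaze_on_screen(coeffs, pupil_glint_vector):
    """Map a new pupil-to-glint vector to its screen gaze point."""
    vx, vy = pupil_glint_vector
    return np.array([vx, vy, 1.0]) @ coeffs
```

With four calibration pairs generated by screen = 10 * vector + (50, 30), a new vector (2, 3) maps back to roughly (70, 60), illustrating how the calibrated correspondence converts eye measurements into screen coordinates.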
Therefore, the obtaining the fixation point of at least one eye in each face image according to the face image specifically includes:
acquiring the specific coordinates of the position gazed at by the eyes on the display screen, according to the position correspondence between the gaze point of the eyes in actual space and the reference point on the display screen, and the face image; and taking those specific coordinates as the gaze point;
In a specific implementation, the system may display several calibration points to the user in sequence at different locations on the display, requiring the user's eyes to gaze at each point. Since the positions of the calibration points are displayed by the system through the display, the system knows them directly, and it also acquires, by the method above, the photo coordinates of the eyes at the corresponding moments. From the relative positions of the several calibration points and the relative movement of the eyes in photo coordinates, the movement of the eyes in photo coordinates can be converted into coordinates calibrated to the screen, or into actual physical coordinates.
Under the irradiation of an infrared light source, eye tracking can be performed under conditions similar to visible light. In addition, an infrared light source forms strong, clear specular reflection points on the iris, called iris reflection points; these form obvious light spots on the eyeball and are clearly visible on it.
When the eye moves, the position of the pupil or iris follows the movement, but the position of the iris reflection point does not change as long as the position of the infrared light source is fixed. From the difference between the positions of the pupil (or iris) and the iris reflection point, the change in the coordinates of the position gazed at in real space, or on the actual screen, can be calculated.
One or more infrared light sources may be used to form one or more iris reflection points. With the position of each light source known, the movement of the eye relative to the light source positions can be used to calibrate the position at which the eye is gazing in real space.
In addition, another embodiment of the present invention provides another living body face recognition method. On the basis of the above embodiments, as shown in fig. 5, before acquiring at least one face image in real time, the method further includes:
S501: extracting facial features of the face image according to the face image, and comparing the facial features of the face image with a pre-stored registered face image;
S502: if the comparison result is consistent, the face verification is passed;
In a specific implementation, since the embodiment corresponding to fig. 1 only verifies whether the acquired image comes from a live face, the face image must also be compared, on the basis of facial features, with a pre-stored registered face image; only when this comparison passes is the person attempting to log in considered to be the user, rather than some other illegitimate party. Extracting the facial features from the face image and comparing them with the pre-stored registered face image can use existing face recognition methods, so the detailed process is not repeated here.
It should be noted that the embodiment corresponding to fig. 1 and the embodiment corresponding to fig. 5 should be performed synchronously: after the face image is acquired, the living body verification and the face verification are performed simultaneously, and the user logs in successfully only when both pass. If either verification fails, login is not possible. Accordingly, the embodiment provided by the present invention further includes: if both the face verification and the living body verification pass, the user verification is passed.
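The combined decision above can be sketched as a single gate over the two checks. The function name and parameters are illustrative; `liveness_check` and `face_match` stand in for the procedures of figs. 1 and 5 and are not defined APIs.

```python
def verify_user(face_images, preset_positions, registered_template,
                liveness_check, face_match):
    """Overall login decision: the living body check (fig. 1) and the
    face-feature comparison (fig. 5) run on the same captured images,
    and both must pass; failing either blocks the login."""
    live_ok = liveness_check(face_images, preset_positions)
    face_ok = face_match(face_images, registered_template)
    return live_ok and face_ok
```

Keeping the two verifications as independent predicates makes the synchronous-but-conjunctive rule explicit: neither result alone is sufficient.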
Still another embodiment of the present invention further provides a living body face recognition apparatus, as shown in fig. 6, the living body face recognition apparatus provided in the embodiment of the present invention includes:
a face image obtaining unit 10, configured to obtain at least one face image in real time;
the eye fixation point acquisition unit 20 is configured to acquire a fixation point of at least one eye or a movement track of the fixation point in each face image according to the face images;
an eye fixation point position determination unit 30, configured to determine whether the eye moves according to a prompt, based on the gaze point of the eye or the movement trajectory of the gaze point;
and a living body verification passing unit 40, configured to pass the living body verification when the judgment result is yes.
Wherein judging whether the eye moves according to the prompt comprises at least one of:
prompting the user in advance with an external target reference object that the eyes need to gaze at, and judging whether, during shooting, the gaze points of the eyes are respectively aligned with the target reference object in the plurality of face images;
during photographing, displaying to the user through the display screen a reference object whose motion track changes randomly, and judging whether the motion track of the gaze point formed by the eyes in the plurality of face images is consistent with the motion track of the target reference object;
during photographing, displaying to the user through the display screen a target reference object that appears sequentially at a plurality of random positions and disappears a preset time after each appearance, and judging whether the gaze point of the eyes is respectively aligned with the target reference object at the random positions in the plurality of face images;
during photographing, displaying a plurality of reference objects to the user, prompting the user with some of them as target reference objects, and judging whether the gaze points of the eyes are respectively aligned with the target reference objects in the plurality of face images;
and during photographing, displaying a password input panel to the user, and judging whether the gaze point of the eyes is aligned, in sequence, with the positions on the password input panel corresponding to a preset password in the plurality of face images.
In this embodiment, specific functions and interaction modes of the face image obtaining unit 10, the eye fixation point obtaining unit 20, the eye fixation point position determining unit 30, and the living body verification passing unit 40 can be referred to the description of the embodiment corresponding to fig. 1, and are not described herein again.
In the living body face recognition apparatus provided by this embodiment of the present invention, after the face image acquisition unit 10 acquires at least one face image, the eye gaze point acquisition unit 20 obtains the gaze point of at least one eye in each face image, and the eye gaze point position determination unit 30 determines from those gaze points whether the image acquired by the image acquisition device comes from a living body. The reference object used in the process follows a random motion track, or appears at random positions displayed to the user before the face image is acquired, so the user's eyes must move according to the preset instructions when taking a picture or recording a video. An illegitimate user can hardly forge a face that 'deceives' the image acquisition device within such a short time, so the apparatus can identify whether the face image obtained from the image acquisition device comes from a real living person, strengthening the anti-counterfeiting capability of the identity recognition system and improving the security of the system.
Still another embodiment of the present invention further provides a specific apparatus of a face image obtaining unit, and as shown in fig. 7, the face image obtaining unit 10 specifically includes:
an original image acquisition module 101, configured to acquire a single or multiple original images including a human face;
a face position obtaining module 102, configured to perform face detection according to the original image to obtain a face position;
a face image obtaining module 103, configured to intercept the original image including the face according to the face position, so as to obtain a face image.
In this embodiment, the specific functions and interaction modes of the original image obtaining module 101, the face position obtaining module 102, and the face image obtaining module 103 can refer to the records of the embodiment corresponding to fig. 2, and are not described herein again.
Still another embodiment of the present invention provides another specific structure of an eye fixation point obtaining unit, as shown in fig. 8, the eye fixation point obtaining unit 20 specifically includes:
a first eye image obtaining module 201, configured to perform eye positioning according to the face image, intercept the face image, and obtain an image of at least one eye;
a first eye image extraction module 202, configured to extract an iris image and/or a pupil image of an eye from an image of the eye;
and a gaze point obtaining module 203, configured to calculate a relative displacement of eyeball rotation according to the iris image and/or the pupil image, and obtain a motion trajectory of a gaze point of the eye according to the relative displacement.
In this embodiment, specific functions and interaction modes of the first eye image obtaining module 201, the first eye image extracting module 202, and the gaze point obtaining module 203 may refer to the description of the embodiment corresponding to fig. 3, and are not described herein again.
It should be further noted that, in each embodiment of the present invention, as shown in fig. 9, the method may further include: an infrared light emitting unit 50 for emitting at least one beam of infrared light such that each beam of infrared light can form an iris reflection point in an eyeball of a human body;
a calibration face image obtaining module 60, configured to display a plurality of calibration points to a user in advance, and obtain a calibration face image when the fixation point of the eye is aligned with the calibration point;
a second eye image obtaining module 70, configured to perform eye positioning according to the calibration face image, intercept the calibration face image, and obtain an image of at least one eye;
a second eye position extracting module 80, configured to extract a pupil position and/or an iris position of an eye from the image of the eye, acquire a position of the iris reflection point, and acquire a relative position of the pupil and/or the iris of the eye and the iris reflection point according to the pupil position and/or the iris position and the position of the iris reflection point;
the calibration module 90 is configured to calibrate the position correspondence between the gaze point of the eye in actual space and the calibration point, according to the relative position of the pupil and/or iris of the eye to the iris reflection point and the coordinates of the reference point on the display screen;
it should be noted that, at this time, the eye fixation point obtaining unit 20 specifically includes:
a gazing point coordinate obtaining module 204, configured to obtain the specific coordinates of the position gazed at by the eye on the display screen, according to the position correspondence between the gaze point of the eye in actual space and the reference point on the display screen, and the face image; and to take those specific coordinates as the gaze point;
in this embodiment, specific functions and interaction modes of the infrared light emitting unit 50, the calibration face image obtaining module 60, the second eye image obtaining module 70, the second eye position extracting module 80, the calibration module 90, and the gazing point coordinate obtaining module 204 can be referred to the record of the embodiment corresponding to fig. 4, and are not described herein again.
As shown in fig. 10, the apparatus may further include: a face feature extraction unit 100, configured to extract the facial features of a face image, and compare the facial features of the face image with a pre-stored registered face image;
a face verification passing unit 110, configured to pass face verification when the comparison results are consistent;
and a user authentication passing unit 120 for passing user authentication when both the face authentication and the living body authentication pass.
In this embodiment, the specific functions and interaction modes of the face feature extraction unit 100, the face verification passing unit 110, and the user verification passing unit 120 may refer to the description of the embodiment corresponding to fig. 4, and are not described herein again.
Still another embodiment of the present invention provides a living body face recognition system, as shown in fig. 11,
which includes: the display screen 130, the image acquisition device 140, the lighting device 150, and the living body face recognition apparatus 160 of the embodiments corresponding to figs. 6 to 10,
wherein the display screen 130, the image acquisition device 140, and the lighting device 150 are all connected with the living body face recognition apparatus 160.
The computer program product of the living body face recognition method, apparatus, and system provided by the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions contained in the program code may be used to execute the method described in the foregoing method embodiments. For specific implementation, refer to the method embodiments, which are not repeated herein.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (6)
1. A living body face recognition method is characterized by comprising the following steps:
acquiring a plurality of face images in real time;
acquiring the movement track of the fixation point of at least one eye in each human face image according to the human face images;
judging whether the eyes move according to prompts or not according to the movement tracks of the fixation points of the eyes;
if the judgment result is yes, the living body verification is passed;
the obtaining of the movement track of the fixation point of at least one eye in each face image according to the face image specifically includes:
according to the face image, carrying out eye positioning, intercepting the face image, and acquiring an image of at least one eye;
extracting an iris image and/or a pupil image of an eye from the image of the eye;
calculating the relative displacement of eyeball rotation according to the iris image and/or the pupil image, and acquiring the motion trail of the fixation point of the eye according to the relative displacement;
calculating the relative displacement of the eyeball rotation according to the iris image and/or the pupil image, and specifically comprises the following steps:
taking the axis of the iris or the pupil of the eye directly in front of the eye as a reference axis, and acquiring the axis of the iris or the pupil according to the iris image and/or the pupil image; comparing the axis of the iris or the pupil with a reference axis, and calculating the relative displacement of the eyeball rotation;
determining whether the eye is moving as prompted comprises at least one of:
prompting a target reference object which needs to be watched by eyes and is located outside to a user in advance, and judging whether the gaze points of the eyes are respectively aligned with the target reference object in a plurality of face images or not in the photographing process;
in the photographing process, displaying a target reference object with a randomly changed motion track to a user through a display screen, and judging whether the motion track of a fixation point formed by eyes in a plurality of face images is consistent with the motion track of the target reference object or not;
in the photographing process, displaying a target reference object which sequentially appears at a plurality of random positions and disappears after preset time after each appearance to a user through a display screen, and judging whether the fixation point of the eyes is respectively aligned with the target reference object at random positions in a plurality of face images;
displaying a plurality of reference objects to a user in a photographing process, prompting the user by taking parts of the reference objects as target reference objects, and judging whether the fixation points of the eyes are respectively aligned with the target reference objects in a plurality of face images;
displaying a password input panel to a user in the photographing process, and judging whether the fixation point of eyes is aligned with the corresponding position of a preset password on the fixation password input panel in sequence in a plurality of face images;
before acquiring the at least one face image in real time, the method further comprises:
emitting at least one beam of infrared light, so that each beam of infrared light forms an iris reflection point on an eyeball of the human body;
displaying a plurality of calibration points to the user in advance, and acquiring a calibration face image when the gaze point of the eye is aligned with a calibration point;
performing eye positioning according to the calibration face image, cropping the calibration face image, and acquiring an image of at least one eye;
extracting the pupil position and/or the iris position of the eye from the image of the eye, acquiring the position of the iris reflection point, and obtaining the relative position of the pupil and/or the iris of the eye with respect to the iris reflection point according to the pupil position and/or the iris position and the position of the iris reflection point;
and calibrating the positional correspondence between the gaze point of the eye in real space and the calibration point according to the relative position of the pupil and/or the iris of the eye with respect to the iris reflection point and the coordinates of the reference point on the display screen.
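The calibration step above maps the pupil-to-reflection-point offset measured in the eye image to a gaze position on the display screen. A minimal sketch of one way to realize it is a least-squares affine fit over the calibration points; the affine model, the function names, and the 1920×1080 screen size are illustrative assumptions, not taken from the claims.

```python
import numpy as np

def fit_gaze_mapping(pupil_glint_vectors, screen_points):
    """Least-squares affine map from pupil-to-glint vectors to screen
    coordinates: screen = [vx, vy, 1] @ coeffs, coeffs shape (3, 2)."""
    v = np.asarray(pupil_glint_vectors, dtype=float)   # (n, 2)
    s = np.asarray(screen_points, dtype=float)         # (n, 2)
    X = np.hstack([v, np.ones((len(v), 1))])           # append bias column
    coeffs, *_ = np.linalg.lstsq(X, s, rcond=None)
    return coeffs

def gaze_point(coeffs, pupil_glint_vector):
    """Map one measured pupil-to-glint vector to a screen coordinate."""
    v = np.append(np.asarray(pupil_glint_vector, dtype=float), 1.0)
    return v @ coeffs

# Calibration with four points at the corners of a 1920x1080 screen
# (the measured vectors here are made-up placeholder values).
vectors = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
screen = [(0, 0), (1920, 0), (0, 1080), (1920, 1080)]
M = fit_gaze_mapping(vectors, screen)
```

After calibration, `gaze_point(M, v)` turns each per-frame measurement into the screen coordinate used as the gaze point in the later claims.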
2. The living body face recognition method according to claim 1, wherein the acquiring a plurality of face images in real time specifically comprises:
acquiring a plurality of original images comprising a human face;
performing face detection according to the original images to obtain a face position;
and cropping the region comprising the face from the original image according to the face position, thereby obtaining a face image.
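The detect-then-crop pipeline of claim 2 can be sketched with a pluggable detector; `detect_faces` below is a stand-in for any face detector returning `(x, y, w, h)` boxes (the claim does not name a particular detector, so the dummy detector in the usage line is purely illustrative).

```python
import numpy as np

def crop_faces(original, detect_faces):
    """Detect faces in the original image, then crop each detected region."""
    boxes = detect_faces(original)
    # Crop the region comprising the face from the original image,
    # thereby obtaining one face image per detected box.
    return [original[y:y + h, x:x + w] for (x, y, w, h) in boxes]

# Usage with a dummy detector that reports a single fixed face box:
img = np.zeros((100, 100, 3), dtype=np.uint8)
faces = crop_faces(img, lambda im: [(10, 20, 30, 40)])
```

In practice `detect_faces` would be backed by a real detector (e.g. a Haar cascade or a CNN); only the crop step is fixed by the claim.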
3. The living body face recognition method according to claim 1, wherein
the obtaining the gaze point of at least one eye in each face image according to the face image specifically comprises:
acquiring the specific coordinates of the position on the display screen at which the eyes gaze, according to the positional correspondence between the gaze point of the eyes in real space and the reference point on the display screen, and according to the face image; and taking the specific coordinates of the gazed position on the display screen as the gaze point;
and/or,
after acquiring the plurality of face images in real time, the method further comprises:
extracting facial features from the face images, and comparing the facial features of the face images with a pre-stored registered face image;
if the comparison result is consistent, the face verification is passed;
and further comprising: if both the face verification and the living body verification pass, the user verification is passed.
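The verification logic of claim 3 can be illustrated with a simple sketch: compare an extracted feature vector against the pre-stored registered one, then require both face verification and liveness to pass. The cosine-similarity measure and the 0.8 threshold are assumptions for illustration; the claims only require the comparison result to be "consistent".

```python
import numpy as np

def face_verified(features, registered, threshold=0.8):
    """Pass face verification when cosine similarity reaches the threshold."""
    cos = float(np.dot(features, registered) /
                (np.linalg.norm(features) * np.linalg.norm(registered)))
    return cos >= threshold

def user_verified(features, registered, liveness_passed):
    """User verification passes only if both face and liveness checks pass."""
    return face_verified(features, registered) and liveness_passed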
4. A living body face recognition apparatus, comprising:
the face image acquisition unit is used for acquiring a plurality of face images in real time;
the eye gaze point acquisition unit is used for acquiring the motion track of the gaze point of at least one eye in each face image according to the face images;
the eye gaze point position judging unit is used for judging whether the eyes move as prompted according to the motion track of the eye gaze point;
a living body verification passing unit, used for passing the living body verification when the judgment result is yes;
the eye gaze point acquisition unit specifically includes:
the first eye image acquisition module is used for positioning the eyes according to the face image, cropping the face image, and acquiring an image of at least one eye;
the first eye image extraction module is used for extracting an iris image and/or a pupil image of the eye from the image of the eye;
the gaze point acquisition module is used for calculating the relative displacement of the eyeball rotation according to the iris image and/or the pupil image, and acquiring the motion track of the gaze point of the eyes according to the relative displacement;
the gaze point acquisition module is specifically configured to calculate the relative displacement of the eyeball rotation according to the iris image and/or the pupil image by the following steps:
taking the axis of the iris or the pupil when the eye looks straight ahead as a reference axis; acquiring the current axis of the iris or the pupil according to the iris image and/or the pupil image; and comparing the current axis of the iris or the pupil with the reference axis to calculate the relative displacement of the eyeball rotation;
determining whether the eyes move as prompted comprises at least one of:
prompting the user in advance with an external target reference object at which the eyes need to gaze, and judging whether the gaze points of the eyes in the plurality of face images are each aligned with the target reference object during photographing;
during photographing, displaying to the user, through a display screen, a target reference object whose motion track changes randomly, and judging whether the motion track formed by the gaze points of the eyes in the plurality of face images is consistent with the motion track of the target reference object;
during photographing, displaying to the user, through the display screen, a target reference object that appears sequentially at a plurality of random positions and disappears a preset time after each appearance, and judging whether the gaze points of the eyes in the plurality of face images are each aligned with the target reference object at the random positions;
displaying a plurality of reference objects to the user during photographing, prompting the user with some of the reference objects as target reference objects, and judging whether the gaze points of the eyes in the plurality of face images are each aligned with the target reference objects;
displaying a password input panel to the user during photographing, and judging whether the gaze points of the eyes in the plurality of face images are aligned in sequence with the positions on the password input panel corresponding to a preset password;
the infrared light emitting unit is used for emitting at least one beam of infrared light, so that each beam of infrared light forms an iris reflection point on an eyeball of the human body;
the calibration face image acquisition module is used for displaying a plurality of calibration points to the user in advance, and acquiring a calibration face image when the gaze point of the eyes is aligned with a calibration point;
the second eye image acquisition module is used for positioning the eyes according to the calibration face image, cropping the calibration face image, and acquiring an image of at least one eye;
the second eye position extraction module is used for extracting the pupil position and/or the iris position of the eye from the image of the eye, acquiring the position of the iris reflection point, and obtaining the relative position of the pupil and/or the iris of the eye with respect to the iris reflection point according to the pupil position and/or the iris position and the position of the iris reflection point;
and the calibration module is used for calibrating the positional correspondence between the gaze point of the eye in real space and the calibration point according to the relative position of the pupil and/or the iris of the eye with respect to the iris reflection point and the coordinates of the reference point on the display screen.
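The judging unit of claim 4 (matching the gaze-point track against the target object's track) can be illustrated with a simple mean-distance test. The 50-pixel threshold and the function name are assumptions; the claims only require the two tracks to be "consistent", without prescribing a metric.

```python
import numpy as np

def tracks_consistent(gaze_track, target_track, max_mean_dist=50.0):
    """True when the gaze track stays, on average, near the target track.

    Both tracks are sequences of (x, y) screen coordinates sampled at the
    same instants (one per face image)."""
    g = np.asarray(gaze_track, dtype=float)
    t = np.asarray(target_track, dtype=float)
    if g.shape != t.shape:
        return False  # different sampling: cannot be consistent
    return float(np.mean(np.linalg.norm(g - t, axis=1))) <= max_mean_dist
```

A replayed photo or video would produce gaze points that fail to follow the randomly generated target track, which is what makes this check a liveness signal.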
5. The living body face recognition device according to claim 4, wherein the face image acquisition unit specifically comprises:
the original image acquisition module is used for acquiring a plurality of original images comprising a human face;
the face position acquisition module is used for performing face detection according to the original images to obtain a face position;
and the face image acquisition module is used for cropping the region comprising the face from the original image according to the face position, thereby obtaining a face image.
6. The living body face recognition device according to claim 4, further comprising:
the eye gaze point acquisition unit specifically includes:
the gaze point coordinate acquisition module is used for acquiring the specific coordinates of the position on the display screen at which the eyes gaze, according to the positional correspondence between the gaze point of the eyes in real space and the reference point on the display screen, and according to the face image; and taking the specific coordinates of the gazed position on the display screen as the gaze point;
and/or,
further comprising: the face feature extraction unit is used for extracting the facial features of the face images and comparing the facial features of the face images with a pre-stored registered face image;
the face verification passing unit is used for passing the face verification when the comparison result is consistent;
and the user verification passing unit is used for passing the user verification when both the face verification and the living body verification pass.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510629903.0A CN105184277B (en) | 2015-09-29 | 2015-09-29 | Living body face recognition method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510629903.0A CN105184277B (en) | 2015-09-29 | 2015-09-29 | Living body face recognition method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105184277A CN105184277A (en) | 2015-12-23 |
CN105184277B true CN105184277B (en) | 2020-02-21 |
Family
ID=54906344
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510629903.0A Active CN105184277B (en) | 2015-09-29 | 2015-09-29 | Living body face recognition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105184277B (en) |
Families Citing this family (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107092818A (en) * | 2016-02-17 | 2017-08-25 | 阿里巴巴集团控股有限公司 | The implementation method and device of vivo identification |
CN107305629A (en) * | 2016-04-21 | 2017-10-31 | 王溯 | Sight identifying device and method |
US10063560B2 (en) * | 2016-04-29 | 2018-08-28 | Microsoft Technology Licensing, Llc | Gaze-based authentication |
CN106203297B (en) * | 2016-06-30 | 2019-11-08 | 北京七鑫易维信息技术有限公司 | A kind of personal identification method and device |
CN106169075A (en) * | 2016-07-11 | 2016-11-30 | 北京小米移动软件有限公司 | Auth method and device |
CN107622188A (en) * | 2016-07-15 | 2018-01-23 | 阿里巴巴集团控股有限公司 | Verification method, device, system and equipment based on biological characteristic |
CN110114777B (en) * | 2016-12-30 | 2023-10-20 | 托比股份公司 | Identification, authentication and/or guidance of a user using gaze information |
CN108256406B (en) * | 2017-01-05 | 2023-11-03 | 广州市晶密电子有限公司 | Data processing method and device for realizing face recognition through eye positioning guidance |
CN108319830A (en) * | 2017-01-17 | 2018-07-24 | 中兴通讯股份有限公司 | A kind of auth method and device |
CN108345818B (en) * | 2017-01-23 | 2021-08-31 | 北京中科奥森数据科技有限公司 | Face living body detection method and device |
CN106713368B (en) * | 2017-03-08 | 2019-09-27 | 四川大学 | A kind of auth method and device |
CN106850685A (en) * | 2017-03-25 | 2017-06-13 | 安徽味唯网络科技有限公司 | A kind of auth method for having a sense of independence |
CN108693543B (en) * | 2017-03-31 | 2022-11-22 | 法拉第未来公司 | Method and system for detecting signal spoofing |
CN107273794A (en) * | 2017-04-28 | 2017-10-20 | 北京建筑大学 | Live body discrimination method and device in a kind of face recognition process |
CN107154095A (en) * | 2017-06-06 | 2017-09-12 | 信利光电股份有限公司 | A kind of car entry system and method |
CN107451547A (en) * | 2017-07-17 | 2017-12-08 | 广东欧珀移动通信有限公司 | Identify the method and Related product of live body |
CN107590429A (en) * | 2017-07-20 | 2018-01-16 | 阿里巴巴集团控股有限公司 | The method and device verified based on eyeprint feature |
CN107506696A (en) * | 2017-07-29 | 2017-12-22 | 广东欧珀移动通信有限公司 | Anti-fake processing method and related product |
CN108304784A (en) * | 2018-01-15 | 2018-07-20 | 武汉神目信息技术有限公司 | A kind of blink detection method and device |
CN108399365B (en) * | 2018-01-19 | 2022-03-25 | 东北电力大学 | Method and device for detecting living human face by using pupil diameter |
CN108537103B (en) * | 2018-01-19 | 2022-06-10 | 东北电力大学 | Living body face detection method and device based on pupil axis measurement |
WO2019200576A1 (en) * | 2018-04-18 | 2019-10-24 | 深圳阜时科技有限公司 | Identity authentication method, identity authentication device, and electronic apparatus |
CN108615159A (en) * | 2018-05-03 | 2018-10-02 | 百度在线网络技术(北京)有限公司 | Access control method and device based on blinkpunkt detection |
CN108900700A (en) * | 2018-06-04 | 2018-11-27 | 北京大学 | Authentication method and system based on the double verification that recognition of face and sight position |
CN110738072A (en) * | 2018-07-18 | 2020-01-31 | 浙江宇视科技有限公司 | Living body judgment method and device |
CN109190509B (en) * | 2018-08-13 | 2023-04-25 | 创新先进技术有限公司 | Identity recognition method, device and computer readable storage medium |
CN109325462B (en) * | 2018-10-11 | 2021-03-12 | 深圳斐视沃德科技有限公司 | Face recognition living body detection method and device based on iris |
CN111046696A (en) * | 2018-10-12 | 2020-04-21 | 宏碁股份有限公司 | Living body identification method and electronic device |
CN109753871A (en) * | 2018-11-21 | 2019-05-14 | 惠州Tcl移动通信有限公司 | A kind of method and mobile terminal of recognition of face |
CN109271978A (en) * | 2018-11-23 | 2019-01-25 | 四川长虹电器股份有限公司 | Recognition of face anti-fraud method |
CN109871796A (en) * | 2019-01-31 | 2019-06-11 | 杭州创匠信息科技有限公司 | Method for anti-counterfeit and device based on living body iris |
CN110135370B (en) * | 2019-05-20 | 2022-09-09 | 北京百度网讯科技有限公司 | Method and device for detecting living human face, electronic equipment and computer readable medium |
CN110720105A (en) * | 2019-09-11 | 2020-01-21 | 深圳市汇顶科技股份有限公司 | Face anti-counterfeiting detection method, device, chip, electronic equipment and computer readable medium |
CN112784655A (en) * | 2019-11-11 | 2021-05-11 | 七鑫易维(深圳)科技有限公司 | Living body detection method and device based on gazing information and detection equipment |
CN111159678B (en) * | 2019-12-26 | 2023-08-18 | 联想(北京)有限公司 | Identity recognition method, device and storage medium |
CN111324878A (en) * | 2020-02-05 | 2020-06-23 | 重庆特斯联智慧科技股份有限公司 | Identity verification method and device based on face recognition, storage medium and terminal |
CN113723147A (en) * | 2020-05-26 | 2021-11-30 | 深圳爱酷智能科技有限公司 | Iris face multi-mode in-vivo detection and identification method, device, medium and equipment |
CN111967293A (en) * | 2020-06-22 | 2020-11-20 | 云知声智能科技股份有限公司 | Face authentication method and system combining voiceprint recognition and attention detection |
CN112016482B (en) * | 2020-08-31 | 2022-10-25 | 成都新潮传媒集团有限公司 | Method and device for distinguishing false face and computer equipment |
CN112257050B (en) * | 2020-10-26 | 2022-10-28 | 北京鹰瞳科技发展股份有限公司 | Identity authentication method and equipment based on gazing action |
CN113255529A (en) * | 2021-05-28 | 2021-08-13 | 支付宝(杭州)信息技术有限公司 | Biological feature identification method, device and equipment |
CN113420613B (en) * | 2021-06-03 | 2023-09-05 | 中国联合网络通信集团有限公司 | Identity verification method based on face recognition, face recognition system and associated equipment |
CN113886791A (en) * | 2021-10-19 | 2022-01-04 | 中国科学院上海微系统与信息技术研究所 | Identity information determining method and device, electronic equipment and storage medium |
CN114677746B (en) * | 2022-04-15 | 2024-08-09 | Oppo广东移动通信有限公司 | Living body face detection method and device, storage medium and electronic equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101710383A (en) * | 2009-10-26 | 2010-05-19 | 北京中星微电子有限公司 | Method and device for identity authentication |
CN102309366A (en) * | 2011-07-21 | 2012-01-11 | 山东科技大学 | Control system and control method for controlling upper prosthesis to move by using eye movement signals |
CN102749991A (en) * | 2012-04-12 | 2012-10-24 | 广东百泰科技有限公司 | Non-contact free space eye-gaze tracking method suitable for man-machine interaction |
CN102842040A (en) * | 2012-08-13 | 2012-12-26 | 高艳玲 | Living body detection method utilizing eye tracing |
CN103186240A (en) * | 2013-03-25 | 2013-07-03 | 成都西可科技有限公司 | High-pixel camera-based method for detecting eye movement |
CN104123543A (en) * | 2014-07-23 | 2014-10-29 | 泰亿格电子(上海)有限公司 | Eyeball movement identification method based on face identification |
CN104158980A (en) * | 2014-08-28 | 2014-11-19 | 西安交通大学 | Intelligent equipment unlocking method based on human eye motion feature |
CN105184246A (en) * | 2015-08-28 | 2015-12-23 | 北京旷视科技有限公司 | Living body detection method and living body detection system |
Also Published As
Publication number | Publication date |
---|---|
CN105184277A (en) | 2015-12-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105184277B (en) | Living body face recognition method and device | |
JP6722272B2 (en) | User identification and/or authentication using gaze information | |
CN106203297B (en) | A kind of personal identification method and device | |
KR102573482B1 (en) | Biometric security system and method | |
CN106407914B (en) | Method and device for detecting human face and remote teller machine system | |
US10678897B2 (en) | Identification, authentication, and/or guiding of a user using gaze information | |
CN107609383B (en) | 3D face identity authentication method and device | |
US10635795B2 (en) | Dynamic graphic eye-movement authentication system and method using face authentication or hand authentication | |
CN106295287B (en) | Biopsy method and device and identity identifying method and device | |
US20150186708A1 (en) | Biometric identification system | |
CN111344703B (en) | User authentication device and method based on iris recognition | |
WO2016090379A2 (en) | Detection of print-based spoofing attacks | |
EP3905943B1 (en) | System and method for eye tracking | |
CN106709303B (en) | Display method and device and intelligent terminal | |
CN110612530A (en) | Method for selecting a frame for use in face processing | |
TW201725528A (en) | Eye movement traces authentication and facial recognition system, methods, computer readable system, and computer program product | |
US20210406353A1 (en) | System and method for biometric authentication | |
KR20190095141A (en) | Face authentication method | |
CN111723636B (en) | Fraud detection using optokinetic responses | |
CN110909704A (en) | Living body detection method and device, electronic equipment and storage medium | |
CN112257050B (en) | Identity authentication method and equipment based on gazing action | |
CN112232255B (en) | Identity authentication method and equipment based on multi-mode data | |
US11948402B2 (en) | Spoof detection using intraocular reflection correspondences | |
CN112069917B (en) | Face recognition system for fixed scene | |
CN115270093A (en) | Identity verification method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||