CN112784655A - Living body detection method and device based on gazing information and detection equipment

Info

Publication number: CN112784655A
Application number: CN201911094537.8A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 孔祥晖, 姚涛
Applicants: Qixin Yiwei Shenzhen Technology Co., Ltd.; Beijing 7Invensun Technology Co., Ltd.
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)


Classifications

    • G06V 40/45 - Detection of the body part being alive (under G06V 40/40 - Spoof detection, e.g. liveness detection; G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data)
    • G06F 3/013 - Eye tracking input arrangements (under G06F 3/011 - Arrangements for interaction with the human body; G06F 3/01 - Input arrangements for interaction between user and computer)
    • G06V 40/161 - Detection; Localisation; Normalisation (under G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions)
    • G06V 40/193 - Preprocessing; Feature extraction (under G06V 40/18 - Eye characteristics, e.g. of the iris)

Abstract

The application provides a living body detection method based on gaze information, which comprises the following steps: determining a detection point on a detection device using the position information of a base point on the device, where the position information of the base point is obtained from the gaze point information of the user to be detected on the device; acquiring the gaze point information of the user to be detected in real time while the detection point presents a dynamic display effect; judging, using that gaze point information, whether the gaze point of the user to be detected is located in a target area within a preset time, or whether the gaze point trajectory of the user to be detected meets the trajectory requirement from the base point to the detection point within the preset time, where the target area is an area within a preset distance of the detection point; and if either judgment holds, determining that the user to be detected passes the living body detection.

Description

Living body detection method and device based on gazing information and detection equipment
Technical Field
The invention relates to the technical field of living body detection, and in particular to a living body detection method and apparatus based on gaze information, and a detection device.
Background
With the development of biometric technology, functions based on face recognition, such as face unlocking and face payment, have gradually been applied to a wide range of devices and are widely accepted by users.
Security is critical for face-recognition-based functions such as face unlocking and face payment. To prevent others from using a photo, mask, video, or three-dimensional model of a user to unlock the user's device or make payments, and thus from violating the user's privacy or interests, a living body detection step is usually added to the face recognition function, i.e., detecting whether the person performing face recognition is a live body.
In the prior art, living body detection can be performed by means of action commands, for example by prompting the user to shake the head, nod, or open the mouth. However, in some scenes or for some users, performing these cooperative actions is uncomfortable, so users easily develop resistance to the process and the user experience suffers. Living body detection can also be performed through iris recognition or an infrared sensor. However, iris recognition is easily affected by external factors such as light, so its detection result is not accurate enough, and the accuracy of infrared-sensor-based living body detection drops markedly when the user wears glasses.
Disclosure of Invention
To address these defects of the prior art, the invention provides a living body detection method and apparatus based on gaze information, and a detection device, aiming to solve the problems that prior-art living body detection easily provokes user resistance and yields inaccurate detection results.
To achieve this purpose, the invention provides the following technical solutions:
The invention provides a living body detection method based on gaze information, comprising the following steps:
determining a detection point on a detection device using the position information of a base point on the detection device, where the position information of the base point is obtained from the gaze point information of the user to be detected on the detection device, and the position of the detection point is different from the position of the base point;
acquiring the gaze point information of the user to be detected in real time while the detection point presents a dynamic display effect;
performing an area judgment and/or a trajectory judgment using the gaze point information of the user to be detected, where the area judgment means judging whether the gaze point of the user to be detected is located in a target area within a preset time, the trajectory judgment means judging whether the gaze point trajectory of the user to be detected meets the trajectory requirement from the base point to the detection point within the preset time, and the target area is an area within a preset distance of the detection point;
and if it is judged that the gaze point of the user to be detected is located in the target area within the preset time, or that the gaze point trajectory of the user to be detected meets the trajectory requirement from the base point to the detection point within the preset time, determining that the user to be detected passes the living body detection.
Optionally, in the method described above, the method for determining the position information of the base point includes:
acquiring a face image of the user to be detected;
calculating, based on the face image, the eye feature information corresponding to the face image;
calculating the corresponding gaze point information from the eye feature information;
judging, using the gaze point information, whether the gaze point of the user to be detected falls on the detection device;
if it is judged that the gaze point of the user to be detected falls on the detection device, setting the position point of that gaze point on the detection device as the base point, the position information of that position point being the position information of the base point;
and if it is judged that the gaze point of the user to be detected does not fall on the detection device, returning to acquire a face image of the user to be detected again.
Optionally, in the method described above, the method for determining the position information of the base point includes:
determining a position point on the detection device;
acquiring the gaze point information of the user to be detected while the position point presents the dynamic display effect;
judging, according to the gaze point information of the user to be detected, whether the gaze point of the user to be detected falls on the position point;
and if it is judged that the gaze point of the user to be detected falls on the position point, setting the position point as the base point and determining its position information, the position information of the position point being the position information of the base point.
Optionally, in the above method, the determining, by using position information of a base point on a detection device, a detection point on the detection device includes:
setting, based on the position information of the base point on the detection device, the position point on the detection device farthest from the base point as the detection point.
Optionally, in the above method, the determining, by using position information of a base point on a detection device, a detection point on the detection device includes:
randomly determining, from the detection device, a position point satisfying a first preset condition as the detection point, based on the position information of the base point on the detection device; the first preset condition is that the position point's distance from the base point is greater than a preset distance, and that it is being determined as a detection point for the first time within a preset time period.
Optionally, in the foregoing method, the obtaining, in real time, the gaze point information of the user to be detected includes:
acquiring the face image of the user to be detected in real time, and calculating, based on each frame of the face image of the user to be detected, the gaze point information corresponding to that frame.
Optionally, in the foregoing method, performing area judgment by using the gaze point information of the user to be detected includes:
judging, using the gaze point information of the user to be detected, whether the degree of overlap between the gaze point of the user to be detected and the target area meets a preset requirement within a preset time;
and if the degree of overlap between the gaze point of the user to be detected and the target area meets the preset requirement within the preset time, judging that the gaze point of the user to be detected is located in the target area within the preset time.
In another aspect, the present invention provides a living body detecting apparatus based on gaze information, including:
a first determination unit configured to determine a detection point on a detection device using position information of a base point on the detection device; the position information of the base point is obtained by utilizing the information of the fixation point of the user to be detected on the detection equipment; the position of the detection point is different from the position of the base point;
the first acquisition unit is used for acquiring the fixation point information of the user to be detected in real time under the condition that the detection point presents a dynamic display effect;
a first judging unit, configured to perform an area judgment and/or a trajectory judgment using the gaze point information of the user to be detected, where the area judgment means judging whether the gaze point of the user to be detected is located in a target area within a preset time, the trajectory judgment means judging whether the gaze point trajectory of the user to be detected meets the trajectory requirement from the base point to the detection point within the preset time, and the target area is an area within a preset distance of the detection point;
and the second determining unit is used for determining that the user to be detected passes the living body detection when the first judging unit judges that the gaze point of the user to be detected is located in the target area within the preset time or judges that the gaze point track of the user to be detected meets the track requirement from the base point to the detection point.
Optionally, in the above apparatus, a first base point determining unit is further included; wherein the first base point determining unit includes:
the second acquisition unit is used for acquiring the face image of the user to be detected;
the first calculation unit is used for calculating eye feature information corresponding to the face image based on the face image;
the second calculation unit is used for calculating corresponding fixation point information according to the eye feature information;
the second judging unit is used for judging whether the fixation point of the user to be detected falls on the detection equipment or not by utilizing the fixation point information;
the first setting unit is used for setting the position point of the gaze point of the user to be detected on the detection equipment as a base point when the second judging unit judges that the gaze point of the user to be detected falls on the detection equipment; the position information of the position point of the fixation point of the user to be detected on the detection equipment is the position information of the base point;
and the returning unit is used for returning to the second acquiring unit to acquire the facial image of the user to be detected again when the second judging unit judges that the point of regard of the user to be detected does not fall on the detecting equipment.
Optionally, in the above apparatus, a second base point determining unit is further included; wherein the second base point determining unit includes:
a third determination unit for determining a location point on the detection device;
a third obtaining unit, configured to obtain the gaze point information of the user to be detected in a state where the position point exhibits the dynamic display effect;
a third judging unit, configured to judge whether the gaze point of the user to be detected falls on the location point according to the gaze point information of the user to be detected;
the second setting unit is used for setting the position point as a base point and determining the position information of the position point when the third judging unit judges that the fixation point of the user to be detected falls on the position point; wherein the position information of the position point is the position information of the base point.
Optionally, in the above apparatus, the first determining unit includes:
a first detected point determining unit configured to set, as a detected point, a position point on the detection apparatus that is farthest from a base point on the detection apparatus, based on position information of the base point on the detection apparatus.
Optionally, in the above apparatus, the first determining unit includes:
a second detection point determining unit, configured to randomly determine, from the detection device, a position point satisfying a first preset condition as the detection point, based on the position information of the base point on the detection device; the first preset condition is that the position point's distance from the base point is greater than a preset distance, and that it is being determined as a detection point for the first time within a preset time period.
Optionally, in the above apparatus, the first obtaining unit includes:
the first acquisition subunit is configured to acquire the face image of the user to be detected in real time, and calculate, based on each frame of the face image of the user to be detected, to obtain gaze point information corresponding to each frame of the face image of the user to be detected.
Optionally, in the above apparatus, the first determining unit, when performing the area determination by using the gaze point information of the user to be detected, is configured to:
judge, using the gaze point information of the user to be detected, whether the degree of overlap between the gaze point of the user to be detected and the target area meets a preset requirement within a preset time;
and if the degree of overlap between the gaze point of the user to be detected and the target area meets the preset requirement within the preset time, judge that the gaze point of the user to be detected is located in the target area within the preset time.
Another aspect of the present invention provides a detection apparatus, including:
one or more processors;
a memory having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the living body detection method based on gaze information according to any one of the above.
The living body detection method based on gaze information determines the position information of the base point using the gaze point information of the user to be detected on the detection device, and determines the detection point on the detection device using the position information of the base point. A dynamic display effect is then added to the detection point to attract the user's line of sight: if the user to be detected is a live user, the line of sight will naturally shift from the base point to the detection point presenting the dynamic display effect. Therefore, while the detection point presents the dynamic display effect, the gaze point information of the user to be detected is acquired in real time, and it is judged from that information whether the gaze point is located, within a preset time, in a target area within a preset distance of the detection point, and/or whether the gaze point trajectory meets the trajectory requirement from the base point to the detection point; if either holds, the user to be detected is determined to pass the living body detection. The living body detection is thus completed without the user perceiving it at all, avoiding any user resistance. In addition, because the living body detection is performed based on eye tracking technology, it is not easily affected by factors such as lighting or glasses, so the accuracy of the detection result can be ensured.
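As a concrete illustration of this flow, the following is a minimal Python sketch. It is not the patent's implementation: the callables get_gaze_point, show_dynamic_effect, and pick_detection_point, as well as the radius and timeout values, are hypothetical placeholders for the tracking and display machinery described above.

```python
import time

def liveness_check(get_gaze_point, show_dynamic_effect, pick_detection_point,
                   radius=40.0, timeout=1.0):
    """Pass if the gaze shifts from the base point to the cued detection point
    within the preset time. All three callables are caller-supplied stand-ins."""
    # 1. Base point: wait (briefly) until the gaze lands somewhere on the device.
    base = None
    deadline = time.time() + 5.0                 # give up if the user never looks
    while base is None and time.time() < deadline:
        base = get_gaze_point()                  # None while the gaze is off-device
    if base is None:
        return False

    # 2. Detection point, distinct from the base point, cued with a dynamic effect.
    target = pick_detection_point(base)
    show_dynamic_effect(target)

    # 3. Within the preset time, the gaze must enter the circular target area.
    deadline = time.time() + timeout
    while time.time() < deadline:
        g = get_gaze_point()
        if g is not None:
            if ((g[0] - target[0]) ** 2 + (g[1] - target[1]) ** 2) ** 0.5 <= radius:
                return True                      # live: the gaze followed the cue
    return False                                 # fail: treat as non-live
```

The short timeout mirrors the method's rationale: a live user's gaze reaches the cue almost immediately, while swapping photos or models in front of the camera cannot.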
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a living body detection method based on gaze information according to an embodiment of the present invention;
fig. 2 is a schematic flowchart illustrating a method for determining position information of a base point according to another embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for determining position information of a base point according to another embodiment of the present invention;
fig. 4 is a schematic structural diagram of a living body detecting apparatus based on gaze information according to another embodiment of the present invention;
fig. 5 is a schematic structural diagram of a first base point determining unit according to another embodiment of the present invention;
fig. 6 is a schematic structural diagram of a second base point determining unit according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiment of the invention provides a living body detection method based on gazing information, as shown in figure 1, comprising the following steps:
s101, determining a detection point on the detection equipment by using the position information of the base point on the detection equipment.
The position information of the base point is obtained using the gaze point information of the user to be detected on the detection device, and the position of the detection point is different from that of the base point; that is, the base point and the detection point are two different position points on the detection device.
It should be noted that living body detection is usually performed together with face recognition, so the detection device may be a device implementing only the living body detection method provided by the present invention, or a device, such as a mobile phone or tablet, that implements both the living body detection method and face recognition. Before a detection result is obtained, the user to be detected cannot be assumed to be a live body: a live user is a person actually undergoing the detection, whereas a photo or video of that user is not a live user.
The purpose of the living body detection method provided by the embodiment of the invention is to judge whether the user to be detected is a live body by judging whether the user's gaze point can shift from one position point to another. The position information of the base point is therefore simply the position of the gaze point of the user to be detected on the detection device when the formal detection starts, which is why it can be obtained from the user's gaze point information on the device. Specifically, the position information of the base point can be determined by capturing an eye image of the user to be detected and computing the gaze point information from it, or by obtaining the gaze point information through one or more eye tracking techniques such as scleral contact lenses, micro-electro-mechanical systems, myoelectric current sensing, or capacitive sensing.
Alternatively, another embodiment of the present invention provides a method for determining position information of a base point, as shown in fig. 2, including:
s201, obtaining a face image of a user to be detected.
Specifically, a front camera can be installed on the detection device, and the face image of the user to be detected is captured through it. Optionally, step S202 may be executed after the first picture of the user to be detected is taken; or, after several pictures are taken, the first face picture of the user to be detected is selected from them for step S202; or a video of the user to be detected is recorded and its first frame is used for step S202.
S202, calculating eye feature information corresponding to the face image based on the face image of the user to be detected.
The eye feature information of the user may include any one or a combination of several kinds of information characterizing eye features, such as pupil information, cornea information, light spot (glint) information, iris information, and/or eyelid information.
S203, calculating the corresponding gaze point information from the eye feature information.
Specifically, the gaze point information corresponding to the face image is calculated by analyzing the relative positions of features such as the pupil contour, the iris contour, the pupil center, the iris center, and the reflection point of an external light source on the cornea. The gaze point information may include a gaze direction and a gaze landing point.
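The patent does not fix a particular gaze estimation algorithm. One common approach consistent with the features listed above is a polynomial regression from the pupil-center-to-corneal-glint vector to screen coordinates, calibrated on a few known screen points. The sketch below is illustrative only; the calibration procedure and all names are assumptions, not the patent's method.

```python
import numpy as np

def fit_gaze_mapping(pupil_glint_vecs, screen_points):
    """Fit a second-order polynomial mapping from pupil-glint vectors (vx, vy)
    to on-screen gaze coordinates via least squares on calibration samples."""
    V = np.asarray(pupil_glint_vecs, dtype=float)   # shape (N, 2)
    S = np.asarray(screen_points, dtype=float)      # shape (N, 2)
    vx, vy = V[:, 0], V[:, 1]
    # Design matrix of polynomial terms: 1, vx, vy, vx*vy, vx^2, vy^2
    A = np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])
    coef, *_ = np.linalg.lstsq(A, S, rcond=None)    # shape (6, 2)
    return coef

def estimate_gaze(coef, pupil_center, glint_center):
    """Map one pupil-center / glint-center pair to an (x, y) screen point."""
    vx = pupil_center[0] - glint_center[0]
    vy = pupil_center[1] - glint_center[1]
    terms = np.array([1.0, vx, vy, vx * vy, vx**2, vy**2])
    return terms @ coef
```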
S204, judging, using the gaze point information, whether the gaze point of the user to be detected falls on the detection device.
Specifically, the gaze direction and gaze landing point in the gaze point information are used to judge whether the gaze point of the user to be detected falls within the range of the detection device. That range may be the full extent of the detection device, or a specific sub-range of it; for example, when the detection device has a screen, whether the gaze point falls on the detection device can be judged by whether it falls within the screen range.
If step S204 determines that the gaze point of the user to be detected falls on the detection device, step S205 is executed; otherwise, step S206 is executed.
S205, setting the position point of the gaze point of the user to be detected on the detection device as the base point; the position information of that position point is the position information of the base point.
S206, returning to acquire the face image of the user to be detected again.
Specifically, returning to step S201, the face image of the user to be detected is taken again, or the next face image of the user to be detected is obtained from the previously taken picture or video, that is, the determination method of the position information of the base point provided in this embodiment is executed again until it is determined that the gazing point of the user to be detected falls on the detection device.
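For illustration, the S201-S206 loop could look like the following sketch, where capture_face_image and gaze_from_image stand in for the camera and the gaze computation of steps S202-S203; all names, the frame cap, and the screen-bounds test are assumptions rather than the patent's implementation.

```python
def determine_base_point(capture_face_image, gaze_from_image,
                         screen_w, screen_h, max_frames=100):
    """Repeat S201-S206: keep grabbing face images until the computed gaze
    point falls inside the screen bounds, then use it as the base point."""
    for _ in range(max_frames):
        frame = capture_face_image()                  # S201
        gaze = gaze_from_image(frame)                 # S202 + S203, may be None
        if gaze is None:
            continue                                  # S206: try the next image
        x, y = gaze
        if 0 <= x < screen_w and 0 <= y < screen_h:   # S204: on the device?
            return (x, y)                             # S205: this is the base point
    return None                                       # gaze never landed on the device
```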
Because the living body detection method provided by the embodiment of the invention is performed without the user's awareness, the user may simply not look at the detection device for a long time, i.e., the gaze point of the user to be detected may not fall on the detection device for a long time, in which case the base point cannot be determined or takes too long to determine. To avoid this, another embodiment of the present invention provides another method for determining the position information of the base point, as shown in fig. 3, including:
s301, determining a position point on the detection equipment.
The position point may be any position point on the detection device; for example, when the detection device has a screen, it may be any point on the screen. It is also possible to select one position point from several preset ones, for example to pick one of several preset points on the screen of the detection device. If the detection device has no screen but several position points are preset on it, each equipped with a device capable of presenting a dynamic effect, such as a light-emitting indicator or a flashing lamp, one of those points is determined at random.
S302, under the condition that the position points show the dynamic display effect, the fixation point information of the user to be detected is obtained.
As noted above, the determined position point can present a dynamic effect: if it is a point on a screen, the screen displays the dynamic effect at that point; if the detection device has no screen, the device installed at the position point presents the dynamic effect.
When a position presenting a dynamic display effect suddenly appears on the detection device, it attracts the attention of a live user as an instinctive human reaction: the line of sight naturally moves to the position point presenting the dynamic display effect, so that the gaze point falls on that point. The gaze point of the user to be detected can therefore be captured on the detection device more quickly, and the base point and its position information determined sooner.
S303, judging whether the gaze point of the user to be detected falls on the determined position point or not according to the gaze point information of the user to be detected.
To ensure that the position information of the finally determined base point is consistent with the position of the gaze point of the user to be detected, it is necessary to judge whether the gaze point of the user to be detected falls on the determined position point.
If it is judged that the gaze point of the user to be detected falls on the determined position point, step S304 is executed. If after prolonged judging the gaze point still has not fallen on the determined position point, it can be determined that the user to be detected fails the living body detection, or the user can be prompted to perform the living body detection again.
S304, setting the determined position point as a base point, and determining the position information of the position point; the position information of the position point is the position information of the base point.
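A minimal sketch of this fig. 3 flow follows; candidate_points, show_dynamic_effect, get_gaze_point, and the radius and timeout values are hypothetical stand-ins for the cueing and tracking machinery described above.

```python
import random
import time

def determine_base_point_with_cue(candidate_points, show_dynamic_effect,
                                  get_gaze_point, radius=40.0, timeout=2.0):
    """S301-S304: cue one position point with a dynamic effect and wait until
    the user's gaze lands on it; that point then becomes the base point."""
    point = random.choice(candidate_points)      # S301: pick a position point
    show_dynamic_effect(point)                   # the point now shows a dynamic effect
    deadline = time.time() + timeout
    while time.time() < deadline:                # S302/S303: watch the gaze
        g = get_gaze_point()
        if g is not None:
            dist = ((g[0] - point[0]) ** 2 + (g[1] - point[1]) ** 2) ** 0.5
            if dist <= radius:
                return point                     # S304: the cued point is the base point
    return None                                  # gaze never arrived: fail or re-prompt
```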
Once the base point is set and its position information determined, the detection point is determined on the detection device based on that position information. Optionally, any position point on the detection device whose position information differs from that of the base point may be set as the detection point; for example, a position point different from the base point is randomly determined on the screen of the detection device as the detection point. The detection point may also be chosen from a specific set of position points; for example, from the four corner points of the screen of the detection device, a corner point whose position differs from the base point is determined as the detection point.
It should be noted that after the detection point is determined, a dynamic display effect needs to be presented at it. For a position point on the screen, the dynamic effect can be presented directly through the display function of the screen, for example a rotating cursor or a flashing icon at the detection point. If the detection point is not on the screen, or the detection device has no screen, devices capable of presenting a dynamic display effect, such as flashing indicator lights, need to be provided at the specific position points.
Optionally, in another embodiment of the present invention, a specific implementation manner of step S101 is provided, which includes: based on the position information of the base point on the detection device, a position point on the detection device farthest from the base point is set as the detection point.
If the distance between the base point and the detection point is too short, the two position points are not easily distinguished and the gaze point of the user moves only a short distance, which makes it difficult to judge whether the gaze point of the user to be detected has moved and whether the gaze point trajectory meets the requirement from the base point to the detection point, so the detection result is easily inaccurate. The invention therefore sets the position point on the detection device farthest from the base point as the detection point, ensuring that the distance between the base point and the detection point is long enough, avoiding the above situation and guaranteeing the accuracy of the detection result.
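For a screen-based device, the farthest-point rule can be as simple as comparing the base point against the screen corners, as in this illustrative sketch (the corner-based candidate set is an assumption; the patent only requires the farthest position point):

```python
def farthest_corner(base, screen_w, screen_h):
    """Pick the screen corner farthest from the base point as the detection point."""
    corners = [(0, 0), (screen_w, 0), (0, screen_h), (screen_w, screen_h)]
    return max(corners,
               key=lambda c: (c[0] - base[0]) ** 2 + (c[1] - base[1]) ** 2)
```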
Optionally, another embodiment of the present invention provides another specific implementation of step S101, which includes: randomly determining, from the detection device, a position point satisfying a first preset condition as the detection point, based on the position information of the base point on the detection device; the first preset condition is that the position point's distance from the base point is greater than a preset distance, and that it is being determined as a detection point for the first time within a preset time period.
Similarly, to avoid the base point and the detection point being too close, where the two position points are not easily distinguished and the gaze point moves too little to judge whether the gaze point of the user to be detected has moved or whether its trajectory meets the requirement from the base point to the detection point, the detection point determined in this embodiment must be farther from the base point than the preset distance. Compared with the embodiment that sets the position point farthest from the base point as the detection point, this embodiment allows many more position points to serve as the detection point and adds randomness, which better guarantees the accuracy of the detection result.
In addition, if the same position point were determined as the detection point repeatedly across multiple detections, others could pass the living body detection by switching in corresponding non-live material, making the apparent gaze point land in the target area of the detection point within the preset time, or the apparent gaze point trajectory meet the requirement from the base point to the detection point within the preset time.
Therefore, in the embodiment of the present invention, the position point determined as the detection point must be one being determined as a detection point for the first time within a preset time period; that is, a position point can be determined as a detection point at most once within the preset time. Specifically, when a position point is determined as a detection point, a corresponding tag may be generated for it, containing the position information of the point and a timestamp of when it was determined as a detection point; the tag is deleted once the preset time has elapsed.
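The tag-and-timestamp bookkeeping described above might be sketched as follows; the candidate list, distance threshold, and reuse window are illustrative assumptions.

```python
import random
import time

RECENT_TAGS = {}   # position point -> timestamp of when it last served as detection point

def pick_detection_point(base, candidates, min_dist=200.0, reuse_window=60.0):
    """Randomly pick a candidate farther than `min_dist` from the base point
    that has not served as a detection point within `reuse_window` seconds."""
    now = time.time()
    # Drop expired tags (the patent deletes a tag once the preset time elapses).
    for p, t in list(RECENT_TAGS.items()):
        if now - t > reuse_window:
            del RECENT_TAGS[p]
    eligible = [p for p in candidates
                if ((p[0] - base[0]) ** 2 + (p[1] - base[1]) ** 2) ** 0.5 > min_dist
                and p not in RECENT_TAGS]
    if not eligible:
        return None
    point = random.choice(eligible)
    RECENT_TAGS[point] = now          # tag = position + timestamp
    return point
```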
The advantage of randomly determining the detection point is that the user cannot know its location in advance, and therefore cannot crack the liveness detection procedure by preparing several non-live bodies beforehand, which improves the accuracy of the living body detection.
It should be noted that once step S101 has determined the detection point on the detection device, a dynamic display effect is added to the detection point and step S102 is executed.
S102, acquiring the gaze point information of the user to be detected in real time while the detection point presents the dynamic display effect.
Optionally, a gaze tracking device may track the line of sight of the user to be detected in real time, with the gaze point information it produces acquired from it in real time; the detection device may, of course, also have its own gaze tracking capability. The gaze point information of the user to be detected can be obtained by capturing eye images of the user in real time and analyzing the relative positions of eye features in them. Alternatively, eyeball movement can be detected through the capacitance between the eyeball and a capacitor plate, or through myoelectric current signals measured by electrodes placed at the bridge of the nose, the forehead, the ears, or the earlobes, to obtain the gaze point information of the user to be detected. Other methods for acquiring the gaze point information of the user to be detected in real time may also be adopted, and all fall within the scope of the present invention.
Optionally, another embodiment of the present invention provides a specific implementation of step S102, which includes: acquiring the face image of the user to be detected in real time, and calculating, based on each frame of the face image, the gaze point information corresponding to that frame.
Specifically, face images of the user to be detected can be captured at millisecond intervals by a camera, so that the face image is obtained in real time. Eye features such as the pupil contour, iris contour, pupil center, iris center, and the reflection point of an external light source on the cornea are then extracted from each face image, and the extracted features are analyzed to obtain the gaze point information of the user to be detected.
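As an illustration of per-frame, real-time acquisition, the sketch below pairs an OpenCV camera loop with a caller-supplied gaze_from_image function standing in for the feature extraction and analysis above; the use of OpenCV, the camera index, and all names are assumptions, and the achievable frame rate depends on the camera.

```python
import time
import cv2  # assuming an OpenCV-accessible camera; any fast frame source works

def stream_gaze_samples(gaze_from_image, duration=1.0):
    """Capture face images in real time and emit timestamped gaze samples,
    one per frame, as (t, x, y) tuples relative to the start of capture."""
    cap = cv2.VideoCapture(0)               # front camera of the detection device
    t0 = time.time()
    samples = []
    while time.time() - t0 < duration:
        ok, frame = cap.read()
        if not ok:
            continue
        gaze = gaze_from_image(frame)       # per-frame gaze point, may be None
        if gaze is not None:
            samples.append((time.time() - t0, gaze[0], gaze[1]))
    cap.release()
    return samples
```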
S103, performing the area judgment using the gaze point information of the user to be detected, i.e., judging whether the gaze point of the user to be detected is located in the target area within a preset time.
The target area is the area within a preset distance of the detection point; it can simply be understood as the circle centered on the detection point with the preset distance as its radius. Because the gaze point of the user to be detected cannot be guaranteed to land exactly on the detection point every time, the user can pass the living body detection as long as the gaze point is located in the target area within the preset time.
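The area judgment then reduces to a distance test against timestamped gaze samples, as in this illustrative sketch (the sample format and parameter names are assumptions; the overlap-degree variant of this judgment is described further below):

```python
def gaze_in_target_area(samples, target, radius, preset_time):
    """Region judgment: did any gaze sample land within `radius` of the
    detection point before `preset_time` seconds elapsed? `samples` are
    (t, x, y) tuples, t measured from when the dynamic effect appeared."""
    for t, x, y in samples:
        if t > preset_time:
            break                             # only samples inside the window count
        if ((x - target[0]) ** 2 + (y - target[1]) ** 2) ** 0.5 <= radius:
            return True
    return False
```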
Optionally, considering that a user may move the gaze point from the base point toward the detection point without it landing accurately in the target area, if step S103 determines that the gaze point of the user to be detected is not located in the target area within the preset time, step S104 is executed to make a further judgment, which effectively avoids classifying a live user as non-live.
It should be noted that when step S103 determines that the gaze point of the user to be detected is located in the target area within the preset time, the gaze point has moved from the base point into the target area, so the gaze point trajectory necessarily meets the trajectory requirement from the base point to the detection point within the preset time. Moreover, the user to be detected is already determined to be a live user at this point, so there is no need to execute step S104 for a redundant judgment; the flow proceeds directly to step S105.
Of course, this is only one optional approach: it is also possible, when step S103 determines that the gaze point of the user to be detected is not located in the target area within the preset time, to directly determine that the user fails the living body detection and end the detection. That is, the living body detection method based on gaze point information according to the embodiment of the present invention may execute only step S103 without executing step S104.
S104, performing the trajectory judgment using the gaze point information of the user to be detected, i.e., judging whether the gaze point trajectory of the user to be detected meets the trajectory requirement from the base point to the detection point within the preset time.
Meeting the trajectory requirement from the base point to the detection point means that the gaze point trajectory is a trajectory running from the base point to the detection point. In the embodiment of the present invention, this requirement is not limited to any specific path.
That is, if the gaze point trajectory of the user to be detected meets the trajectory requirement from the base point to the detection point, i.e., it is a trajectory from the base point to the detection point, the user can pass the living body detection; so when step S104 determines that the gaze point trajectory of the user to be detected meets the trajectory requirement within the preset time, step S105 is executed. If the gaze point trajectory of the user to be detected does not meet the trajectory requirement from the base point to the detection point within the preset time, it is determined that the user to be detected fails the living body detection.
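Since the patent leaves the path unconstrained, one permissive reading of the trajectory judgment checks only the endpoints of the track within the preset time, as in this hedged sketch (the `near` tolerance and sample format are assumptions):

```python
def trajectory_meets_requirement(samples, base, target, radius, preset_time):
    """Trajectory judgment: within the preset time, the gaze track should start
    near the base point and end near the detection point. No specific path is
    required by the method, only the base-to-detection-point movement."""
    track = [(x, y) for t, x, y in samples if t <= preset_time]
    if len(track) < 2:
        return False

    def near(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 <= radius

    return near(track[0], base) and near(track[-1], target)
```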
It should be noted that executing step S103 first after step S102, and executing step S104 only when the result of step S103 is negative, is just one optional order. Step S104 may instead be executed first, with step S103 executed when the result of step S104 is negative; that is, the execution order of steps S103 and S104 can be interchanged. Steps S103 and S104 may, of course, also be executed simultaneously, with step S105 executed as soon as either of them returns a positive result.
Similarly, the living body detection method based on gaze point information according to the embodiment of the present invention may execute only step S104 without executing step S103. That is, step S104 is executed after step S102, and if step S104 determines that the gaze point trajectory of the user to be detected does not meet the trajectory requirement from the base point to the detection point within the preset time, it is directly determined that the user fails the living body detection, and the detection ends.
It should be noted that the preset time exists to prevent others from continuously switching pictures or three-dimensional models of the user in order to move the apparent gaze point from the base point to the detection point, i.e., from making the gaze point appear to fall in the target area or the gaze point trajectory appear to meet the base-point-to-detection-point requirement, and thereby passing the living body detection and violating the user's privacy or property. Since a live user's gaze point naturally moves from one point to another far faster than a gaze point can be moved by switching pictures or stereo models of the user, this embodiment sets a very short preset time, such as 0.5 second or 1 second, to prevent others from passing the living body detection in the above manner.
Optionally, another embodiment of the present invention provides a method for judging whether the gaze point of the user to be detected is located in the target area within the preset time, which specifically includes: computing, from the gaze point information of the user to be detected, the degree of overlap between the gaze point and the target area, and judging whether that degree of overlap meets a preset requirement within the preset time.
Specifically, the obtained gaze point may be a circle covering a certain range, and the degree of overlap between the gaze point of the user to be detected and the target area is obtained by judging how much this circle and the target area coincide. If the degree of overlap between the gaze point of the user to be detected and the target area meets the preset requirement within the preset time, it is judged that the gaze point of the user to be detected is located in the target area within the preset time.
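One way to quantify the degree of overlap between a circular gaze region and the circular target area is the exact circle-circle intersection area, normalized by the gaze-circle area. The geometry below is standard; treating this particular ratio as the patent's "preset requirement" metric is an assumption.

```python
import math

def overlap_degree(gaze_center, gaze_r, target_center, target_r):
    """Degree of overlap between the gaze circle and the circular target area:
    intersection area divided by the gaze-circle area (1.0 = fully inside)."""
    d = math.dist(gaze_center, target_center)
    if d >= gaze_r + target_r:
        return 0.0                                    # disjoint circles
    if d <= abs(target_r - gaze_r):                   # one circle inside the other
        inter = math.pi * min(gaze_r, target_r) ** 2
    else:
        # Standard circle-circle intersection area (sum of two circular segments).
        a1 = math.acos((d * d + gaze_r**2 - target_r**2) / (2 * d * gaze_r))
        a2 = math.acos((d * d + target_r**2 - gaze_r**2) / (2 * d * target_r))
        inter = (gaze_r**2 * (a1 - math.sin(2 * a1) / 2)
                 + target_r**2 * (a2 - math.sin(2 * a2) / 2))
    return inter / (math.pi * gaze_r**2)
```

A sample usage: overlap_degree((100, 100), 20, (110, 100), 50) returns 1.0, since the gaze circle lies entirely inside the target area.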
Since only a live user can move the gaze point from the base point to the detection point within the preset time, step S105 is executed when step S103 determines that the gaze point of the user to be detected is located in the target area within the preset time, or step S104 determines that the gaze point trajectory of the user to be detected meets the trajectory requirement from the base point to the detection point within the preset time. If step S103 determines that the gaze point is not located in the target area within the preset time, and step S104 determines that the gaze point trajectory does not meet the trajectory requirement within the preset time, it is determined that the user to be detected fails the living body detection, i.e., that the user to be detected is not a live body. When the user to be detected is determined not to be a live body, the living body detection flow can be exited and a sleep state entered; after waiting for a certain time, the flow returns to the initial step of the next living body detection, i.e., to step S101.
Optionally, after it is determined that the user to be detected fails the living body detection, an alarm may also be raised.
The alarm may take one mode or a combination of several, such as sounding an alarm or sending an alarm text message to an emergency contact pre-registered by the user.
And S105, determining that the user to be detected passes the living body detection.
That is, it is finally determined that the user to be detected is a living user, not a non-living body such as a photograph, a video, a mask, or the like.
According to the living body detection method based on gaze information provided by the embodiment of the invention, the position information of the base point is determined using the gaze point information of the user to be detected on the detection device, and the detection point is determined on the detection device using the position information of the base point. A dynamic display effect is then added to the detection point to attract the user's line of sight: if the user to be detected is a live user, the line of sight naturally shifts from the base point to the detection point presenting the dynamic display effect. Therefore, while the detection point presents the dynamic display effect, the gaze point information of the user to be detected is acquired in real time and used to judge whether the gaze point is located, within the preset time, in the target area within a preset distance of the detection point, or whether the gaze point trajectory meets the trajectory requirement from the base point to the detection point; if either holds, the user to be detected is determined to pass the living body detection. The living body detection is thus completed without the user perceiving it at all, avoiding any user resistance.
In addition, the detection point of the invention can be a random one of several preset detection points, or any position point whose randomly computed distance from the base point is greater than the preset distance, so the detection point of each living body detection is unpredictable, which removes the possibility of the living body detection being cracked manually. Moreover, because the living body detection is performed based on eye tracking technology, it is not easily affected by factors such as lighting or glasses, so the accuracy of the detection result can be ensured.
Another embodiment of the present invention provides a living body detecting apparatus based on gaze information, as shown in fig. 4, including:
a first determination unit 401 for determining a detection point on a detection device using position information of a base point on the detection device.
And the position information of the base point is obtained by utilizing the information of the fixation point of the user to be detected on the detection equipment. The position of the detection point is different from the position of the base point.
It should be noted that, the specific working process of the first determining unit 401 may refer to step S101 in the foregoing method embodiment accordingly, and is not described herein again.
A first obtaining unit 402, configured to obtain, in real time, the gaze point information of the user to be detected in a state where the detection point shows the dynamic display effect.
It should be noted that, the specific working process of the first obtaining unit 402 may refer to step S102 in the foregoing method embodiment accordingly, and details are not described here again.
A first judging unit 403, configured to perform an area judgment and/or a trajectory judgment using the gaze point information of the user to be detected, where the area judgment means judging whether the gaze point of the user to be detected is located in a target area within a preset time, and the trajectory judgment means judging whether the gaze point trajectory of the user to be detected meets the trajectory requirement from the base point to the detection point within the preset time.
And the target area is an area within a preset distance from the detection point.
It should be noted that, the specific working process of the first determining unit 403 may refer to step S103 in the foregoing method embodiment accordingly, which is not described herein again.
A second determining unit 404, configured to determine that the user to be detected passes the living body detection when the first judging unit 403 determines that the gaze point of the user to be detected is located in the target area within the preset time, or that the gaze point trajectory of the user to be detected meets the trajectory requirement from the base point to the detection point.
It should be noted that, the specific working process of the second determining unit 404 may refer to step S104 in the foregoing method embodiment accordingly, and is not described herein again.
Optionally, in another embodiment of the present invention, the living body detecting device further includes a first base point determining unit. As shown in fig. 5, the first base point determining unit specifically includes:
a second obtaining unit 501, configured to obtain a face image of the user to be detected.
It should be noted that, the specific working process of the second obtaining unit 501 may refer to step S201 in the foregoing method embodiment accordingly, and details are not repeated here.
A first calculating unit 502, configured to calculate, based on the face image, eye feature information corresponding to the face image.
It should be noted that, the specific working process of the first calculating unit 502 may refer to step S202 in the foregoing method embodiment accordingly, and is not described herein again.
A second calculating unit 503, configured to calculate, according to the eye feature information, corresponding gaze point information.
It should be noted that, the specific working process of the second calculating unit 503 may refer to step S203 in the foregoing method embodiment accordingly, and is not described herein again.
A second determining unit 504, configured to determine, by using the gaze point information, whether the gaze point of the user to be detected falls on the detection device.
It should be noted that, the specific working process of the second determining unit 504 may refer to step S204 in the foregoing method embodiment accordingly, and is not described herein again.
A first setting unit 505, configured to set, as the base point, the position point of the gaze point of the user to be detected on the detection device when the second judging unit 504 judges that the gaze point of the user to be detected falls on the detection device.
The position information of the position point of the gaze point of the user to be detected on the detection device is the position information of the base point.
It should be noted that for the specific working process of the first setting unit 505, reference may be made to step S205 in the foregoing method embodiment, which is not described herein again.
A returning unit 506, configured to return to the second obtaining unit 501 to obtain the face image of the user to be detected again when the second judging unit 504 judges that the gaze point of the user to be detected does not fall on the detection device.
It should be noted that for the specific working process of the returning unit 506, reference may be made to step S206 in the foregoing method embodiment, which is not described herein again.
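The loop of steps S201 to S206 can be pictured with a short Python sketch; capture_face_image, extract_eye_features, and estimate_gaze_point are hypothetical placeholders for whatever face capture and eye-tracking pipeline the detection device actually uses.

def determine_base_point(screen_size, capture_face_image,
                         extract_eye_features, estimate_gaze_point):
    # Sample the gaze repeatedly until it lands on the device, then adopt
    # that on-screen location as the base point.
    width, height = screen_size
    while True:
        image = capture_face_image()             # S201: face image
        features = extract_eye_features(image)   # S202: eye feature information
        x, y = estimate_gaze_point(features)     # S203: gaze point
        if 0 <= x < width and 0 <= y < height:   # S204: on the device?
            return (x, y)                        # S205: base point found
        # S206: gaze is off the device, so return to S201 and sample again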
Optionally, in another embodiment of the present invention, the living body detection device further includes a second base point determining unit. As shown in fig. 6, the second base point determining unit specifically includes:
A third determining unit 601, configured to determine a position point on the detection device.
It should be noted that for the specific working process of the third determining unit 601, reference may be made to step S301 in the foregoing method embodiment, which is not described herein again.
A third obtaining unit 602, configured to obtain the gaze point information of the user to be detected in a state where the position point presents a dynamic display effect.
It should be noted that for the specific working process of the third obtaining unit 602, reference may be made to step S302 in the foregoing method embodiment, which is not described herein again.
A third judging unit 603, configured to judge whether the gaze point of the user to be detected falls on the position point according to the gaze point information of the user to be detected.
It should be noted that for the specific working process of the third judging unit 603, reference may be made to step S303 in the foregoing method embodiment, which is not described herein again.
A second setting unit 604, configured to set the position point as the base point and determine the position information of the position point when the third judging unit 603 judges that the gaze point of the user to be detected falls on the position point.
The position information of the position point is the position information of the base point.
It should be noted that for the specific working process of the second setting unit 604, reference may be made to step S304 in the foregoing method embodiment, which is not described herein again.
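A minimal sketch of this second way of determining the base point (steps S301 to S304) might look as follows; show_dynamic_effect and get_gaze_point are hypothetical helpers, and the tolerance and sample limit are illustrative assumptions.

import math

def determine_base_point_by_prompt(position_point, show_dynamic_effect,
                                   get_gaze_point, tol=30.0, max_samples=300):
    show_dynamic_effect(position_point)               # S301/S302: attract the gaze
    for _ in range(max_samples):
        gx, gy = get_gaze_point()                     # S302: current gaze point
        if math.hypot(gx - position_point[0],
                      gy - position_point[1]) <= tol: # S303: gaze on the point?
            return position_point                     # S304: base point confirmed
    return None                                       # the gaze never arrived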
Optionally, in another embodiment of the present invention, the first determining unit includes:
a first detected point determining unit configured to set, as a detected point, a position point on the detection apparatus that is farthest from a base point on the detection apparatus, based on position information of the base point on the detection apparatus.
It should be noted that, a specific implementation manner of step S101 in the above method embodiment may be referred to in a specific working process of the first detection point determining unit, and details are not described here again.
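Assuming a rectangular screen, the on-screen point farthest from the base point is always one of the four corners, so this variant reduces to a corner comparison, as in the sketch below.

import math

def farthest_detection_point(base_point, screen_size):
    w, h = screen_size
    corners = [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]
    # Distance to a fixed point is maximized at an extreme point of the
    # rectangle, i.e. at one of its corners.
    return max(corners,
               key=lambda c: math.hypot(c[0] - base_point[0],
                                        c[1] - base_point[1]))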
Optionally, in another embodiment of the present invention, the first determining unit includes:
A second detection point determining unit, configured to randomly determine, as the detection point, a position point on the detection device that satisfies a first preset condition, based on the position information of the base point on the detection device.
The first preset condition is as follows: the distance of the position point from the base point is greater than a preset distance, and the position point is determined as a detection point for the first time within a preset time period.
It should be noted that for the specific working process of the second detection point determining unit, reference may be made to another specific implementation manner of step S101 in the foregoing method embodiment, which is not described herein again.
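One plausible reading of the first preset condition is sketched below: candidate points are drawn at random and accepted only if they are far enough from the base point and have not served as a detection point within the preset time period. The history dictionary used to track when each point was last chosen is an assumption of this sketch.

import math
import random
import time

def random_detection_point(base_point, screen_size, preset_distance,
                           preset_period, history):
    w, h = screen_size
    now = time.monotonic()
    while True:
        candidate = (random.randrange(w), random.randrange(h))
        far_enough = math.hypot(candidate[0] - base_point[0],
                                candidate[1] - base_point[1]) > preset_distance
        # "Determined as a detection point for the first time within a preset
        # time period": the candidate must not appear in the recent history.
        fresh = now - history.get(candidate, float("-inf")) > preset_period
        if far_enough and fresh:
            history[candidate] = now
            return candidate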
Optionally, in another embodiment of the present invention, the first obtaining unit includes:
A first acquisition subunit, configured to acquire the face image of the user to be detected in real time, and to calculate, based on each frame of the face image of the user to be detected, the gaze point information corresponding to that frame.
It should be noted that for the specific working process of the first acquisition subunit, reference may be made to a specific implementation manner of step S102 in the foregoing method embodiment, which is not described herein again.
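The real-time, per-frame acquisition can be expressed as a generator, again using the hypothetical capture and estimation helpers introduced above.

import time

def gaze_stream(capture_face_image, extract_eye_features, estimate_gaze_point):
    # Yield one timestamped gaze point per captured frame, so downstream
    # judgments can consume the samples as they arrive.
    start = time.monotonic()
    while True:
        frame = capture_face_image()
        features = extract_eye_features(frame)
        yield time.monotonic() - start, estimate_gaze_point(features)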
Optionally, in another embodiment of the present invention, when judging whether the gaze point of the user to be detected is located in the target area within the preset time by using the gaze point information of the user to be detected, the first judging unit is configured to:
judge, by using the gaze point information of the user to be detected, whether the coincidence degree of the gaze point of the user to be detected and the target area meets a preset requirement within the preset time;
and if it is judged that the coincidence degree of the gaze point of the user to be detected and the target area meets the preset requirement within the preset time, judge that the gaze point of the user to be detected is located in the target area within the preset time.
It should be noted that for the specific working process of the first judging unit when executing the above functions, reference may be made to the specific implementation of step S103 in the foregoing method embodiment, namely judging whether the coincidence degree of the gaze point of the user to be detected and the target area meets the preset requirement within the preset time, which is not described herein again.
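The embodiment does not pin down how the coincidence degree or the preset requirement is computed. One plausible interpretation, modeling the coincidence degree as the fraction of gaze samples falling inside the target area during the preset time, is sketched below; the 0.8 threshold is an assumption.

import math

def coincidence_degree(gaze_samples, detection_point, preset_distance, preset_time):
    # Fraction of gaze samples inside the target area within the time window.
    window = [p for t, p in gaze_samples if t <= preset_time]
    if not window:
        return 0.0
    inside = sum(math.hypot(x - detection_point[0],
                            y - detection_point[1]) <= preset_distance
                 for x, y in window)
    return inside / len(window)

def area_judgment_by_coincidence(gaze_samples, detection_point, preset_distance,
                                 preset_time, required_ratio=0.8):
    # The preset requirement is modeled as a minimum ratio of samples inside
    # the target area; the embodiment leaves the exact criterion open.
    return coincidence_degree(gaze_samples, detection_point,
                              preset_distance, preset_time) >= required_ratio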
According to the living body detection device based on gazing information provided by the embodiment of the invention, the position information of the base point is determined by using the gaze point information of the user to be detected on the detection device, and the first determining unit determines the detection point on the detection device by using the position information of the base point. A dynamic display effect is then added to the detection point to attract the user's line of sight: if the user to be detected is a living user, his or her sight will naturally shift from the base point to the detection point presenting the dynamic display effect. The first obtaining unit therefore obtains the gaze point information of the user to be detected in real time while the detection point presents the dynamic display effect, and the first judging unit uses this information to judge whether the gaze point of the user to be detected is located, within a preset time, in a target area within a preset distance from the detection point, or whether the gaze point track of the user to be detected meets the track requirement from the base point to the detection point. If either judgment succeeds, the second determining unit determines that the user to be detected passes the living body detection. The living body detection is thus completed without the user perceiving it at all, which avoids provoking resistance in the user. In addition, because the living body detection is based on eye-tracking technology, it is not easily affected by factors such as lighting or glasses, so the accuracy of the detection result can be ensured.
Another embodiment of the present invention further provides a detection apparatus, including:
one or more processors.
A memory having one or more programs stored thereon.
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the living body detection method based on gazing information provided by any one of the embodiments described above.
In order to perform the living body detection method in cooperation with the processor, the detection device may be configured with a camera, through which the face image of the user to be detected is captured and provided to the processor. The detection device may also be configured with a screen, in which case the base point and the detection point on the detection device are both position points in the screen. If the detection device is not equipped with a screen, then in order to present a dynamic display effect at the detection point, the detection device needs to be provided with components capable of presenting a dynamic display effect, such as blinking indicator lights, and the base point and the detection point are selected from among those components.
It should be noted that the camera, the screen, and the components capable of presenting a dynamic display effect configured for the detection device may be disposed in the detection device itself, or may be disposed separately from the detection device and cooperate with it to execute the living body detection method.
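One way to let the same detection logic drive either a screen point or a separately mounted indicator light is to hide the hardware behind a common interface. The sketch below is a hypothetical design; animate_marker and blink stand in for whatever display and LED drivers the device provides.

from abc import ABC, abstractmethod

class DynamicDisplayTarget(ABC):
    # A position on the detection device that can present a dynamic display
    # effect, and so can serve as a base point or a detection point.

    @abstractmethod
    def position(self):
        ...

    @abstractmethod
    def show_dynamic_effect(self):
        ...

class ScreenPoint(DynamicDisplayTarget):
    # A point on the device's screen, animated via a hypothetical screen API.
    def __init__(self, screen, x, y):
        self.screen, self.x, self.y = screen, x, y

    def position(self):
        return (self.x, self.y)

    def show_dynamic_effect(self):
        self.screen.animate_marker(self.x, self.y)  # hypothetical call

class IndicatorLight(DynamicDisplayTarget):
    # A physically mounted blinking indicator light with a fixed position.
    def __init__(self, led, x, y):
        self.led, self.x, self.y = led, x, y

    def position(self):
        return (self.x, self.y)

    def show_dynamic_effect(self):
        self.led.blink()  # hypothetical driver call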
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (15)

1. A living body detection method based on gazing information, comprising:
determining a detection point on a detection device by using position information of a base point on the detection device; the position information of the base point is obtained by using gaze point information of a user to be detected on the detection device; the position of the detection point is different from the position of the base point;
acquiring the gaze point information of the user to be detected in real time in a state where the detection point presents a dynamic display effect;
performing area judgment and/or track judgment by using the gaze point information of the user to be detected, wherein the area judgment refers to judging whether the gaze point of the user to be detected is located in a target area within a preset time, and the track judgment refers to judging whether the gaze point track of the user to be detected meets the track requirement from the base point to the detection point within the preset time; the target area is an area within a preset distance from the detection point;
and if it is judged that the gaze point of the user to be detected is located in the target area within the preset time, or that the gaze point track of the user to be detected meets the track requirement from the base point to the detection point within the preset time, determining that the user to be detected passes the living body detection.
2. The method according to claim 1, wherein the method for determining the position information of the base point comprises:
acquiring a face image of the user to be detected;
calculating eye feature information corresponding to the face image based on the face image;
calculating corresponding gaze point information according to the eye feature information;
judging, by using the gaze point information, whether the gaze point of the user to be detected falls on the detection device;
if it is judged that the gaze point of the user to be detected falls on the detection device, setting the position point of the gaze point of the user to be detected on the detection device as the base point; the position information of the position point of the gaze point of the user to be detected on the detection device is the position information of the base point;
and if it is judged that the gaze point of the user to be detected does not fall on the detection device, returning to acquire the face image of the user to be detected again.
3. The method according to claim 1, wherein the method for determining the position information of the base point comprises:
determining a position point on the detection device;
acquiring the gaze point information of the user to be detected in a state where the position point presents a dynamic display effect;
judging whether the gaze point of the user to be detected falls on the position point according to the gaze point information of the user to be detected;
and if it is judged that the gaze point of the user to be detected falls on the position point, setting the position point as the base point and determining position information of the position point; wherein the position information of the position point is the position information of the base point.
4. The method of claim 1, wherein the determining a detection point on a detection device by using position information of a base point on the detection device comprises:
setting, as the detection point, the position point on the detection device that is farthest from the base point, based on the position information of the base point on the detection device.
5. The method of claim 1, wherein the determining a detection point on a detection device by using position information of a base point on the detection device comprises:
randomly determining, as the detection point, a position point on the detection device that satisfies a first preset condition, based on the position information of the base point on the detection device; wherein the first preset condition is as follows: the distance of the position point from the base point is greater than a preset distance, and the position point is determined as a detection point for the first time within a preset time period.
6. The method according to claim 1, wherein the obtaining the gaze point information of the user to be detected in real time comprises:
acquiring the face image of the user to be detected in real time, and calculating, based on each frame of the face image of the user to be detected, the gaze point information corresponding to that frame.
7. The method according to claim 1, wherein the performing the area judgment by using the gaze point information of the user to be detected comprises:
judging, by using the gaze point information of the user to be detected, whether the coincidence degree of the gaze point of the user to be detected and the target area meets a preset requirement within the preset time;
and if it is judged that the coincidence degree of the gaze point of the user to be detected and the target area meets the preset requirement within the preset time, judging that the gaze point of the user to be detected is located in the target area within the preset time.
8. A living body detection apparatus based on gazing information, characterized by comprising:
a first determining unit, configured to determine a detection point on a detection device by using position information of a base point on the detection device; the position information of the base point is obtained by using gaze point information of a user to be detected on the detection device; the position of the detection point is different from the position of the base point;
a first obtaining unit, configured to obtain the gaze point information of the user to be detected in real time in a state where the detection point presents a dynamic display effect;
a first judging unit, configured to perform area judgment and/or track judgment by using the gaze point information of the user to be detected, wherein the area judgment refers to judging whether the gaze point of the user to be detected is located in a target area within a preset time, and the track judgment refers to judging whether the gaze point track of the user to be detected meets the track requirement from the base point to the detection point within the preset time; the target area is an area within a preset distance from the detection point;
and a second determining unit, configured to determine that the user to be detected passes the living body detection when the first judging unit judges that the gaze point of the user to be detected is located in the target area within the preset time, or judges that the gaze point track of the user to be detected meets the track requirement from the base point to the detection point.
9. The apparatus according to claim 8, further comprising a first base point determining unit; wherein the first base point determining unit includes:
a second obtaining unit, configured to obtain a face image of the user to be detected;
a first calculating unit, configured to calculate, based on the face image, eye feature information corresponding to the face image;
a second calculating unit, configured to calculate corresponding gaze point information according to the eye feature information;
a second judging unit, configured to judge, by using the gaze point information, whether the gaze point of the user to be detected falls on the detection device;
a first setting unit, configured to set the position point of the gaze point of the user to be detected on the detection device as the base point when the second judging unit judges that the gaze point of the user to be detected falls on the detection device; the position information of the position point of the gaze point of the user to be detected on the detection device is the position information of the base point;
and a returning unit, configured to return to the second obtaining unit to obtain the face image of the user to be detected again when the second judging unit judges that the gaze point of the user to be detected does not fall on the detection device.
10. The apparatus according to claim 8, further comprising a second base point determining unit; wherein the second base point determining unit includes:
a third determining unit, configured to determine a position point on the detection device;
a third obtaining unit, configured to obtain the gaze point information of the user to be detected in a state where the position point presents the dynamic display effect;
a third judging unit, configured to judge whether the gaze point of the user to be detected falls on the position point according to the gaze point information of the user to be detected;
and a second setting unit, configured to set the position point as the base point and determine the position information of the position point when the third judging unit judges that the gaze point of the user to be detected falls on the position point; wherein the position information of the position point is the position information of the base point.
11. The apparatus of claim 8, wherein the first determining unit comprises:
a first detection point determining unit, configured to set, as the detection point, the position point on the detection device that is farthest from the base point, based on the position information of the base point on the detection device.
12. The apparatus of claim 8, wherein the first determining unit comprises:
a second detection point determining unit, configured to randomly determine, as the detection point, a position point on the detection device that satisfies a first preset condition, based on the position information of the base point on the detection device; wherein the first preset condition is as follows: the distance of the position point from the base point is greater than a preset distance, and the position point is determined as a detection point for the first time within a preset time period.
13. The apparatus of claim 8, wherein the first obtaining unit comprises:
a first acquisition subunit, configured to acquire the face image of the user to be detected in real time, and to calculate, based on each frame of the face image of the user to be detected, the gaze point information corresponding to that frame.
14. The apparatus according to claim 8, wherein the first judging unit, when performing the area judgment by using the gaze point information of the user to be detected, is configured to:
judge, by using the gaze point information of the user to be detected, whether the coincidence degree of the gaze point of the user to be detected and the target area meets a preset requirement within the preset time;
and if it is judged that the coincidence degree of the gaze point of the user to be detected and the target area meets the preset requirement within the preset time, judge that the gaze point of the user to be detected is located in the target area within the preset time.
15. A detection apparatus, comprising:
one or more processors;
a memory having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the living body detection method based on gazing information according to any one of claims 1 to 7.
CN201911094537.8A 2019-11-11 2019-11-11 Living body detection method and device based on gazing information and detection equipment Pending CN112784655A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911094537.8A CN112784655A (en) 2019-11-11 2019-11-11 Living body detection method and device based on gazing information and detection equipment

Publications (1)

Publication Number Publication Date
CN112784655A (en)

Family

ID=75749659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911094537.8A Pending CN112784655A (en) 2019-11-11 2019-11-11 Living body detection method and device based on gazing information and detection equipment

Country Status (1)

Country Link
CN (1) CN112784655A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657293A * 2021-08-19 2021-11-16 北京神州新桥科技有限公司 Living body detection method, living body detection device, electronic apparatus, medium, and program product
CN113657293B * 2021-08-19 2023-11-24 北京神州新桥科技有限公司 Living body detection method, living body detection device, electronic equipment, medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination