CN112347860A - Gradient-based eye state detection method and computer-readable storage medium - Google Patents

Gradient-based eye state detection method and computer-readable storage medium

Info

Publication number
CN112347860A
CN112347860A (application CN202011110030.XA)
Authority
CN
China
Prior art keywords
eye
frame image
current frame
gradient
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011110030.XA
Other languages
Chinese (zh)
Other versions
CN112347860B (en)
Inventor
刘德建
陈春雷
郭玉湖
陈宏�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Tianquan Educational Technology Ltd
Original Assignee
Fujian Tianquan Educational Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Tianquan Educational Technology Ltd filed Critical Fujian Tianquan Educational Technology Ltd
Priority to CN202011110030.XA priority Critical patent/CN112347860B/en
Publication of CN112347860A publication Critical patent/CN112347860A/en
Application granted granted Critical
Publication of CN112347860B publication Critical patent/CN112347860B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ophthalmology & Optometry (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a gradient-based eye state detection method and a computer-readable storage medium, wherein the method comprises the following steps: acquiring an image to be detected in which both eyes are in a preset initial state; sequentially acquiring the next frame of image to be detected as the current frame image; calibrating eye calibration points in the current frame image through a face detection algorithm and a face feature point calibration algorithm; calculating gradient information of the current frame image to obtain the gradient map corresponding to the current frame image; calibrating each eye calibration point in the current frame image on the gradient map corresponding to the current frame image according to the calibrated position of each eye calibration point in the previous frame image, to obtain the calibrated position of each eye calibration point in the current frame image; and calculating the eye opening and closing angle corresponding to the same eye according to the calibrated positions of that eye's calibration points in the current frame image. The invention can accurately identify the eye opening and closing angle.

Description

Gradient-based eye state detection method and computer-readable storage medium
Technical Field
The present invention relates to the field of eye state recognition technologies, and in particular, to a gradient-based eye state detection method and a computer-readable storage medium.
Background
With the rapid development of image processing technology, face feature point calibration algorithms such as dlib, openpose, and openface have achieved good results and can accurately calibrate the contours of key facial organs such as the eyes, nose, and mouth. Blink detection can be applied well in human-computer interaction, for example unlocking a mobile phone by blinking after face recognition, or controlling a robot by blinking. Ideally, identifying the eye state (blink detection) on the basis of such algorithms becomes very simple: the eye state can be determined from the eye contour alone.
However, the openpose algorithm has high computational complexity: even on a high-performance discrete graphics card such as a 1080 Ti it currently processes only dozens of frames per second, and small smart terminals with low computing power in particular struggle to run it smoothly. Although the dlib algorithm calibrates face feature points quickly, the eye contour it calibrates is not accurate enough and is often mistaken, so blink recognition goes wrong. Some researchers adopt simple neural network structures to achieve an effect similar to openpose, but the recognition effect is poor when the user wears glasses.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a gradient-based eye state detection method and a computer-readable storage medium that can accurately identify the eye opening and closing angle.
In order to solve the technical problems, the invention adopts the technical scheme that: a gradient-based eye state detection method, comprising:
sequentially acquiring a frame of image to be detected as a current frame image;
calibrating the current frame image to obtain eye calibration points through a face detection algorithm and a face characteristic point calibration algorithm, and recording the positions of the eye calibration points;
calculating gradient information of the current frame image to obtain a gradient image corresponding to the current frame image;
judging whether the two eyes in the current frame image are both in a preset initial state or not according to the corresponding gradient map;
if not, continuing to execute the step of sequentially acquiring a frame of image to be detected as the current frame image;
if so, acquiring a next frame of image to be detected as a current frame image;
calibrating the current frame image to obtain eye calibration points through a face detection algorithm and a face characteristic point calibration algorithm;
calculating gradient information in the current frame image to obtain a gradient map corresponding to the current frame image;
calibrating each eye calibration point in the current frame image in the gradient image corresponding to the current frame image according to the calibrated position of each eye calibration point in the previous frame image to obtain the calibrated position of each eye calibration point in the current frame image;
and calculating the eye opening and closing angle corresponding to the same eye according to the calibrated positions of the eye calibration points of the same eye in the current frame image.
The invention also proposes a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as described above.
The invention has the following beneficial effects: by acquiring an image in which both eyes are in the initial state, that is, an image with a good calibration result, the eye calibration points in subsequent images can be calibrated effectively, ensuring that the calibrated eye calibration points lie on the eyelid contour line as far as possible and avoiding obvious deviation of the eye calibration points from the eyelid contour line; calculating the eye opening and closing angle from the calibrated eye calibration points then ensures the accuracy of the angle. On the basis of a face feature point calibration algorithm, the invention matches the eye calibration points against the gradient map, tracks their movement on the gradient map, and calibrates them, which effectively improves the calibration of the eye contour, is unaffected by the wearing of glasses, and improves the eye state detection effect; moreover, the algorithm complexity is low, so the method can run in real time on terminals with lower computing power.
Drawings
FIG. 1 is a flowchart of the gradient-based eye state detection method of the present invention;
FIG. 2 is a flowchart of a method according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating detection of the 68 facial feature points according to the first embodiment of the present invention;
FIG. 4 is a schematic diagram of the serial numbers of the 68 facial feature points according to the first embodiment of the present invention;
fig. 5 is a schematic diagram of an ocular gradient according to a first embodiment of the present invention.
Detailed Description
In order to explain technical contents, objects and effects of the present invention in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
Referring to fig. 1, a gradient-based eye state detection method includes:
sequentially acquiring a frame of image to be detected as a current frame image;
calibrating the current frame image to obtain eye calibration points through a face detection algorithm and a face characteristic point calibration algorithm, and recording the positions of the eye calibration points;
calculating gradient information of the current frame image to obtain a gradient image corresponding to the current frame image;
judging whether the two eyes in the current frame image are both in a preset initial state or not according to the corresponding gradient map;
if not, continuing to execute the step of sequentially acquiring a frame of image to be detected as the current frame image;
if so, acquiring a next frame of image to be detected as a current frame image;
calibrating the current frame image to obtain eye calibration points through a face detection algorithm and a face characteristic point calibration algorithm;
calculating gradient information in the current frame image to obtain a gradient map corresponding to the current frame image;
calibrating each eye calibration point in the current frame image in the gradient image corresponding to the current frame image according to the calibrated position of each eye calibration point in the previous frame image to obtain the calibrated position of each eye calibration point in the current frame image;
and calculating the eye opening and closing angle corresponding to the same eye according to the calibrated positions of the eye calibration points of the same eye in the current frame image.
From the above description, the beneficial effects of the present invention are: the calibration of the eye contour is effectively improved, the calibration is not affected by the wearing of glasses, and the eye state detection effect is improved.
Further, calibrating eye calibration points in the current frame image through a face detection algorithm and a face feature point calibration algorithm and recording the positions of the eye calibration points specifically comprises:
identifying a face region in the current frame image through a face detection algorithm;
calibrating the human face region through a human face characteristic point calibration algorithm to obtain an eye calibration point;
establishing a two-dimensional rectangular coordinate system by taking the upper left corner of the image to be detected as the origin, the horizontal right direction as the positive direction of the X axis, and the vertical downward direction as the positive direction of the Y axis;
and recording the coordinate value of the position of the eye calibration point.
As can be seen from the above description, a rectangular coordinate system is established based on the image to be measured, and the uniformity of the coordinate system is ensured.
Further, the calculating of the gradient information of the current frame image to obtain the gradient map corresponding to the current frame image specifically includes:
respectively calculating a horizontal gradient map and a vertical gradient map corresponding to the current frame image based on a Sobel operator;
and carrying out weighted summation on the horizontal gradient image and the vertical gradient image to obtain a gradient image corresponding to the current frame image.
Further, the step of judging whether both eyes in the current frame image are in the preset initial state according to the corresponding gradient map specifically includes:
and if the eye calibration point of the same eye positioned at the canthus is a gradient maximum value point in a preset range in the horizontal direction, and the other eye calibration points of the same eye are gradient maximum value points in a preset range in the vertical direction, judging that the same eye is in an initial state.
As can be seen from the above description, when the eye calibration point is at the local gradient extreme point, the eye calibration point is considered to be on the eyelid contour curve, that is, the eye is considered to be in the preset initial state.
Further, the step of judging whether both eyes in the current frame image are in the preset initial state according to the corresponding gradient map specifically includes:
calibrating each eye calibration point in the current frame image in a gradient image corresponding to the current frame image to obtain the calibrated position of each eye calibration point;
calculating the sum of distance errors of the positions of the same eye before and after calibration of each eye calibration point;
acquiring a maximum horizontal coordinate value and a minimum horizontal coordinate value according to the calibrated positions of the eye calibration points of the same eye, and calculating the difference value of the maximum horizontal coordinate value and the minimum horizontal coordinate value to obtain the horizontal coordinate span of the same eye;
and if the value obtained by dividing the sum of the distance errors of the same eye by the horizontal coordinate span is smaller than a preset threshold value, judging that the same eye is in a preset initial state.
As can be seen from the above description, when the positions before and after calibration are not changed too much, the eye calibration point is considered to be on the eyelid contour curve, i.e. the eye is considered to be in the preset initial state.
Further, the step of calibrating each eye calibration point in the current frame image in the gradient map corresponding to the current frame image according to the calibrated position of each eye calibration point in the previous frame image to obtain the calibrated position of each eye calibration point in the current frame image specifically includes:
according to the position of each eye calibration point in the previous frame image after calibration, respectively connecting each eye calibration point in the previous frame image with each eye calibration point in the current frame image in a one-to-one correspondence manner in a gradient image corresponding to the current frame image to obtain a connecting line corresponding to each eye calibration point;
and respectively acquiring a maximum gradient value point on a connecting line corresponding to each eye calibration point, and taking the maximum gradient value point as the position of each eye calibration point in the current frame image after calibration.
From the above description, the eye calibration point is aligned to the gradient of the eye, so that the eye calibration point is prevented from deviating from the eyelid, that is, the eye calibration point falls on the eyelid contour curve as much as possible.
Further, the calculating the eye opening and closing angle corresponding to the same eye according to the calibrated position of each eye calibration point of the same eye in the current frame image specifically comprises:
acquiring a maximum horizontal coordinate value, a maximum vertical coordinate value, a minimum horizontal coordinate value and a minimum vertical coordinate value according to the calibrated positions of the eye calibration points of the same eye in the current frame image;
calculating the eye opening and closing angle corresponding to the same eye according to a first formula, wherein the first formula is θ = arctan((y_max - y_min)/(x_max - x_min)), θ is the eye opening and closing angle, y_max is the maximum vertical coordinate value, y_min is the minimum vertical coordinate value, x_max is the maximum horizontal coordinate value, and x_min is the minimum horizontal coordinate value.
From the above description, the first formula estimates the opening angle of the eye region, where arctan is the inverse tangent function. When the eye is closed, y_max = y_min and θ = 0; otherwise, θ grows larger as the eye opens wider.
Further, after acquiring the next frame of image to be detected as the current frame image, the method further comprises:
judging whether a face exists in the previous frame of image;
if not, judging whether the two eyes in the current frame image are both in a preset initial state;
and if so, executing the step of obtaining the eye calibration point in the current frame image by the face detection algorithm and the face characteristic point calibration algorithm.
As can be seen from the above description, if there is no face in the previous frame of image, it means that after the image in the initial state is obtained, the face of the person being photographed leaves the lens and enters the lens again, and at this time, the image in the initial state needs to be searched again.
The invention also proposes a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as described above.
Example one
Referring to fig. 2-5, a first embodiment of the present invention is a gradient-based eye state detection method that can be applied to blink-based human-computer interaction even when the user wears glasses. As shown in FIG. 2, the method comprises the following steps:
s1: and acquiring a frame of image to be detected as a current frame image in sequence, namely acquiring continuous images to be detected through a camera, and acquiring a frame of image to be detected as a current frame image in sequence.
S2: and identifying a human face region in the current frame image through a human face detection algorithm. Further, if no human face is detected, the next frame of image to be measured is acquired, that is, the step S1 is executed again.
The face detection algorithm in this embodiment may use dlib, mtcnn, and other algorithms.
S3: calibrate the face region through a face feature point calibration algorithm to obtain the eye calibration points, and record the positions of the eye calibration points.
In this embodiment, dlib's 68-point calibration is adopted to calibrate the facial feature points; a schematic diagram of the 68 facial feature points is shown in fig. 3, and the serial numbers of the 68 facial feature points are shown in fig. 4. It can be seen that each eye corresponds to 6 eye calibration points: the calibration points of the left eye have serial numbers 37-42, and those of the right eye have serial numbers 43-48.
In this embodiment, a two-dimensional rectangular coordinate system is established with the upper left corner of the image to be detected as the origin, the horizontal right direction as the positive direction of the X axis, and the vertical downward direction as the positive direction of the Y axis. The coordinate values of the eye calibration points in the current frame image are recorded.
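As a minimal illustrative sketch of steps S2-S3 (not taken from the patent), the face detection and eye calibration points can be obtained with dlib as follows; the model path is dlib's standard 68-point predictor file, and the helper name is an assumption. Note that dlib numbers the 68 points from 0, so serial numbers 37-48 in fig. 4 correspond to dlib parts 36-47:

    # Steps S2-S3 sketch: detect a face, then keep the 12 eye calibration
    # points from dlib's 68-point model.
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def eye_calibration_points(gray):
        """Return {serial number: (x, y)} for points 37-48, or None if no face."""
        faces = detector(gray)
        if len(faces) == 0:
            return None  # no face: fetch the next frame (back to step S1)
        shape = predictor(gray, faces[0])
        # serial numbers 37-48 in fig. 4 are dlib parts 36-47 (0-indexed)
        return {n + 1: (shape.part(n).x, shape.part(n).y) for n in range(36, 48)}

Since dlib (like OpenCV) already places the origin at the top-left corner with X increasing rightward and Y increasing downward, the recorded coordinates match the coordinate system described above without further transformation.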
S4: calculate the gradient information of the current frame image to obtain the gradient map corresponding to the current frame image.
Specifically, a horizontal gradient map and a vertical gradient map of the current frame image are respectively obtained based on the Sobel operator, and the horizontal gradient map and the vertical gradient map are then weighted and summed to obtain the gradient map of the current frame image.
Preferably, an eye region with a preset size may be determined according to the eye calibration point, and then gradient calculation is performed on only the eye region image to obtain an eye gradient map, as shown in fig. 5.
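A possible implementation of this gradient computation with OpenCV is sketched below; the patent does not give the weighting coefficients, so the equal weights wx = wy = 0.5 and the use of absolute Sobel responses as the gradient magnitude are assumptions. The input may be the full grayscale frame or, per the preferred option above, only the eye region:

    # Step S4 sketch: weighted sum of horizontal and vertical Sobel gradients.
    import cv2
    import numpy as np

    def gradient_map(gray, wx=0.5, wy=0.5):
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient
        return cv2.addWeighted(np.abs(gx), wx, np.abs(gy), wy, 0.0)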
S5: judge, according to the gradient map corresponding to the current frame image, whether both eyes in the current frame image are in the preset initial state; if so, execute step S6; otherwise, continue to acquire the next frame of image to be detected and judge whether both eyes are in the initial state, i.e., execute steps S1-S5.
In this embodiment, the preset initial state is "both eyes open", because mainstream algorithms calibrate best when the user's eyes are open, so the open-eyed state is taken as the standard state; in this state the eye calibration points essentially all fall on the eye's gradient map (i.e., on the eyelid contour). However, obtaining the desired initial state in the first frame often requires the user's cooperation, which degrades the user experience. Therefore, in this embodiment, this step may use either of the following two initial state detection methods, neither of which is perceptible to the user.
In the first method, if an eye calibration point of the same eye in the current frame image, which is located at the canthus, is a gradient maximum value point within a preset range in the horizontal direction, and other eye calibration points of the same eye are gradient maximum value points within a preset range in the vertical direction, it is determined that the same eye is in an initial state.
For example, for the left eye, the eye calibration points at the eye corners are those numbered 37 and 40 in fig. 4, and the eye calibration points not at the corners are those numbered 38, 39, 41, and 42. Therefore, if the calibration points numbered 37 and 40 are both local gradient maximum points in the horizontal direction, i.e., their gradient values are the largest compared with the m points to the left and the m points to the right, and the calibration points numbered 38, 39, 41, and 42 are all local gradient maximum points in the vertical direction, i.e., their gradient values are the largest compared with the m points above and the m points below, the left eye is considered to be in the initial state.
Preferably, m is 3. Whether the right eye is in the initial state is judged in the same way.
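The first initial-state test could be sketched as follows; grad is the gradient map, pts maps serial numbers to (x, y) coordinates for one eye (37-42 or 43-48), and both function names are illustrative assumptions:

    # First method sketch: corner points must be horizontal local gradient
    # maxima, the remaining points vertical local gradient maxima.
    def is_local_max(grad, x, y, m=3, horizontal=True):
        # True if grad[y, x] is the largest value among the m neighbours on
        # each side along the chosen axis.
        if horizontal:
            window = grad[y, max(x - m, 0):x + m + 1]
        else:
            window = grad[max(y - m, 0):y + m + 1, x]
        return grad[y, x] >= window.max()

    def eye_in_initial_state(grad, pts):
        first = min(pts)              # 37 for the left eye, 43 for the right
        corners = {first, first + 3}  # canthus points: 37/40 or 43/46
        return all(
            is_local_max(grad, int(x), int(y), horizontal=(n in corners))
            for n, (x, y) in pts.items()
        )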
A second method, comprising the steps of:
s501: calibrating each eye calibration point in the current frame image in a gradient image corresponding to the current frame image to obtain the calibrated position of each eye calibration point; this step may refer to step S11 described below.
S502: calculate the sum of the distance errors between the positions of each of the same eye's calibration points before and after calibration.
Specifically, the distance error of each eye calibration point is calculated according to the coordinate values before and after calibration of each eye calibration point of the same eye, and then the distance errors of each eye calibration point of the same eye are added, so that the sum of the distance errors of the same eye can be obtained.
S503: calculate the horizontal coordinate span of the same eye according to the calibrated positions of its eye calibration points.
Specifically, a maximum horizontal coordinate value and a minimum horizontal coordinate value are obtained from the coordinate values after calibration of the eye calibration points of the same eye, and then a difference between the maximum horizontal coordinate value and the minimum horizontal coordinate value is calculated, so that the horizontal coordinate span of the same eye can be obtained.
S504: dividing the sum of the distance errors of the same eye by the horizontal coordinate span of the same eye, and judging whether the obtained quotient is smaller than a preset threshold value, if so, judging that the same eye is in an initial state, and if not, judging that the same eye is not in the initial state. Where the normalization is achieved by dividing by the horizontal coordinate span.
Preferably, the threshold is 0.01. Similarly, it is determined whether the other eye is in the initial state.
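A sketch of the second method (steps S501-S504), assuming the six calibration points of one eye before and after the step-S501 calibration are given as NumPy arrays of shape (6, 2); the function name is illustrative:

    # Second method sketch: normalised sum of calibration distance errors.
    import numpy as np

    def eye_in_initial_state_v2(before, after, threshold=0.01):
        error_sum = np.linalg.norm(after - before, axis=1).sum()  # step S502
        span = after[:, 0].max() - after[:, 0].min()              # step S503
        return error_sum / span < threshold                       # step S504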
S6: acquire the next frame of image to be detected as the current frame image.
S7: judge whether a face exists in the previous frame of image; if so, execute step S8; otherwise, this indicates that, after the image in the initial state was obtained, the subject's face left the camera's view and entered it again, so an image in the initial state must be found anew, i.e., return to step S2.
S8: identify the face region in the current frame image through a face detection algorithm. If no face is detected, the next frame of image to be detected is acquired, i.e., step S6 is executed again.
S9: calibrate the face region through a face feature point calibration algorithm to obtain the eye calibration points. This step may refer to step S3.
S10: calculate the gradient information of the current frame image to obtain the gradient map corresponding to the current frame image. This step may refer to step S4.
S11: calibrate each eye calibration point in the current frame image on the gradient map corresponding to the current frame image according to the calibrated position of each eye calibration point in the previous frame image, to obtain the calibrated position of each eye calibration point in the current frame image.
Specifically, according to the calibrated position of each eye calibration point in the previous frame image (if the previous frame image is an image in an initial state and the first initial state detection method is adopted in step S5, the position of each eye calibration point in the previous frame image is directly obtained), each eye calibration point in the previous frame image and each eye calibration point in the current frame image are respectively connected in a one-to-one correspondence manner in the gradient map corresponding to the current frame image, so as to obtain a connection line corresponding to each eye calibration point; and then, acquiring a maximum gradient value point on a connecting line corresponding to each eye calibration point respectively to serve as the calibrated position of each eye calibration point.
In other words, the positions of the eye calibration points of the previous frame image are located in the gradient map corresponding to the current frame image according to their calibrated coordinate values; the eye calibration points with the same serial number in the previous frame image and the current frame image are then connected one to one in the gradient map to obtain the connecting line corresponding to each eye calibration point; finally, the point with the maximum gradient value on each connecting line is taken as the calibrated position of that eye calibration point.
In this step, the eye calibration points are aligned to the eye's gradient, which prevents them from deviating from the eyelid (as the right-eye calibration points do in fig. 3); that is, the eye calibration points are made to fall on the eyelid contour curve as far as possible.
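Step S11 could be sketched as follows for a single calibration point; the connecting line is rasterised by sampling a fixed number of points along it, and that sampling density is an implementation choice the patent does not specify:

    # Step S11 sketch: take the maximum-gradient point on the line joining
    # the previous frame's calibrated point and the current frame's point.
    import numpy as np

    def calibrate_point(grad, prev_pt, cur_pt, samples=20):
        xs = np.linspace(prev_pt[0], cur_pt[0], samples).round().astype(int)
        ys = np.linspace(prev_pt[1], cur_pt[1], samples).round().astype(int)
        h, w = grad.shape
        best, best_val = (int(cur_pt[0]), int(cur_pt[1])), -np.inf
        for x, y in zip(xs, ys):
            if 0 <= x < w and 0 <= y < h and grad[y, x] > best_val:
                best_val, best = grad[y, x], (x, y)
        return best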
S12: calculate the eye opening and closing angle corresponding to the same eye according to the calibrated positions of that eye's calibration points in the current frame image.
Specifically, the maximum horizontal coordinate value x_max, the maximum vertical coordinate value y_max, the minimum horizontal coordinate value x_min, and the minimum vertical coordinate value y_min are obtained from the coordinate values of the calibrated positions of the eye calibration points of the same eye in the current frame image; the eye opening and closing angle θ of the same eye is then calculated according to the following first formula.
The first formula: θ = arctan((y_max - y_min)/(x_max - x_min))
The first formula estimates the opening angle of the eye region, where arctan is the inverse tangent function. When the eye is closed, y_max = y_min and θ = 0; otherwise, θ grows larger as the eye opens wider.
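A sketch of step S12; np.arctan2 is used instead of a bare arctan so the expression is also well defined in the degenerate case x_max = x_min, which slightly generalizes the literal first formula:

    # Step S12 sketch: opening angle from the calibrated points of one eye,
    # given as a (6, 2) array of (x, y) coordinates.
    import numpy as np

    def opening_angle(pts):
        y_span = pts[:, 1].max() - pts[:, 1].min()
        x_span = pts[:, 0].max() - pts[:, 0].min()
        return float(np.arctan2(y_span, x_span))  # theta = 0 for a closed eye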
When this embodiment is applied to a blink-controlled robot, the eye state is not a simple binary judgment of whether the eyes are closed; the eye opening angle is obtained directly instead, as a continuously valued quantity. θ is therefore output to the robot device as the eye state, and the robot device can use θ directly to control the opening angle of the robot's eyes.
After the eye opening and closing angles of both eyes in the current frame image have been calculated, the next frame of image is acquired for eye state detection, i.e., execution continues from step S6.
On the basis of a face feature point calibration algorithm, this embodiment tracks the movement of the eye calibration points on the gradient map and calibrates them, which effectively improves the calibration of the eye contour, is unaffected by the wearing of glasses, and improves the eye state detection effect; meanwhile, the algorithm complexity is low, so the method can run in real time on terminals with lower computing power.
Example two
The present embodiment is a computer-readable storage medium corresponding to the above-mentioned embodiments, on which a computer program is stored, which when executed by a processor implements the steps of:
sequentially acquiring a frame of image to be detected as a current frame image;
calibrating the current frame image to obtain eye calibration points through a face detection algorithm and a face characteristic point calibration algorithm, and recording the positions of the eye calibration points;
calculating gradient information of the current frame image to obtain a gradient image corresponding to the current frame image;
judging whether the two eyes in the current frame image are both in a preset initial state or not according to the corresponding gradient map;
if not, continuing to execute the step of sequentially acquiring a frame of image to be detected as the current frame image;
if so, acquiring a next frame of image to be detected as a current frame image;
calibrating the current frame image to obtain eye calibration points through a face detection algorithm and a face characteristic point calibration algorithm;
calculating gradient information in the current frame image to obtain a gradient map corresponding to the current frame image;
calibrating each eye calibration point in the current frame image in the gradient image corresponding to the current frame image according to the calibrated position of each eye calibration point in the previous frame image to obtain the calibrated position of each eye calibration point in the current frame image;
and calculating the eye opening and closing angle corresponding to the same eye according to the calibrated positions of the eye calibration points of the same eye in the current frame image.
Further, calibrating eye calibration points in the current frame image through a face detection algorithm and a face feature point calibration algorithm and recording the positions of the eye calibration points specifically comprises:
identifying a face region in the current frame image through a face detection algorithm;
calibrating the human face region through a human face characteristic point calibration algorithm to obtain an eye calibration point;
establishing a two-dimensional rectangular coordinate system by taking the upper left corner of the image to be detected as the origin, the horizontal right direction as the positive direction of the X axis, and the vertical downward direction as the positive direction of the Y axis;
and recording the coordinate value of the position of the eye calibration point.
Further, the calculating of the gradient information of the current frame image to obtain the gradient map corresponding to the current frame image specifically includes:
respectively calculating a horizontal gradient map and a vertical gradient map corresponding to the current frame image based on a Sobel operator;
and carrying out weighted summation on the horizontal gradient image and the vertical gradient image to obtain a gradient image corresponding to the current frame image.
Further, the step of judging whether both eyes in the current frame image are in the preset initial state according to the corresponding gradient map specifically includes:
and if the eye calibration point of the same eye positioned at the canthus is a gradient maximum value point in a preset range in the horizontal direction, and the other eye calibration points of the same eye are gradient maximum value points in a preset range in the vertical direction, judging that the same eye is in an initial state.
Further, the step of judging whether both eyes in the current frame image are in the preset initial state according to the corresponding gradient map specifically includes:
calibrating each eye calibration point in the current frame image in a gradient image corresponding to the current frame image to obtain the calibrated position of each eye calibration point;
calculating the sum of distance errors of the positions of the same eye before and after calibration of each eye calibration point;
acquiring a maximum horizontal coordinate value and a minimum horizontal coordinate value according to the calibrated positions of the eye calibration points of the same eye, and calculating the difference value of the maximum horizontal coordinate value and the minimum horizontal coordinate value to obtain the horizontal coordinate span of the same eye;
and if the value obtained by dividing the sum of the distance errors of the same eye by the horizontal coordinate span is smaller than a preset threshold value, judging that the same eye is in a preset initial state.
Further, the step of calibrating each eye calibration point in the current frame image in the gradient map corresponding to the current frame image according to the calibrated position of each eye calibration point in the previous frame image to obtain the calibrated position of each eye calibration point in the current frame image specifically includes:
according to the position of each eye calibration point in the previous frame image after calibration, respectively connecting each eye calibration point in the previous frame image with each eye calibration point in the current frame image in a one-to-one correspondence manner in a gradient image corresponding to the current frame image to obtain a connecting line corresponding to each eye calibration point;
and respectively acquiring a maximum gradient value point on a connecting line corresponding to each eye calibration point, and taking the maximum gradient value point as the position of each eye calibration point in the current frame image after calibration.
Further, the calculating the eye opening and closing angle corresponding to the same eye according to the calibrated position of each eye calibration point of the same eye in the current frame image specifically comprises:
acquiring a maximum horizontal coordinate value, a maximum vertical coordinate value, a minimum horizontal coordinate value and a minimum vertical coordinate value according to the calibrated positions of the eye calibration points of the same eye in the current frame image;
calculating the eye opening and closing angle corresponding to the same eye according to a first formula, wherein the first formula is θ = arctan((y_max - y_min)/(x_max - x_min)), θ is the eye opening and closing angle, y_max is the maximum vertical coordinate value, y_min is the minimum vertical coordinate value, x_max is the maximum horizontal coordinate value, and x_min is the minimum horizontal coordinate value.
Further, after acquiring the next frame of image to be detected as the current frame image, the method further comprises:
judging whether a face exists in the previous frame of image;
if not, judging whether the two eyes in the current frame image are both in a preset initial state;
and if so, executing the step of obtaining the eye calibration point in the current frame image by the face detection algorithm and the face characteristic point calibration algorithm.
In summary, the gradient-based eye state detection method and computer-readable storage medium provided by the invention track the movement of the eye calibration points on the gradient map and calibrate them on the basis of a face feature point calibration algorithm, which effectively improves the calibration of the eye contour, is unaffected by the wearing of glasses, and improves the eye state detection effect; meanwhile, the algorithm complexity is low, so the method can run in real time on terminals with lower computing power.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.

Claims (9)

1. A gradient-based eye state detection method, comprising:
sequentially acquiring a frame of image to be detected as a current frame image;
calibrating the current frame image to obtain eye calibration points through a face detection algorithm and a face characteristic point calibration algorithm, and recording the positions of the eye calibration points;
calculating gradient information of the current frame image to obtain a gradient image corresponding to the current frame image;
judging whether the two eyes in the current frame image are both in a preset initial state or not according to the corresponding gradient map;
if not, continuing to execute the step of sequentially acquiring a frame of image to be detected as the current frame image;
if so, acquiring a next frame of image to be detected as a current frame image;
calibrating the current frame image to obtain eye calibration points through a face detection algorithm and a face characteristic point calibration algorithm;
calculating gradient information in the current frame image to obtain a gradient map corresponding to the current frame image;
calibrating each eye calibration point in the current frame image in the gradient image corresponding to the current frame image according to the calibrated position of each eye calibration point in the previous frame image to obtain the calibrated position of each eye calibration point in the current frame image;
and calculating the eye opening and closing angle corresponding to the same eye according to the calibrated positions of the eye calibration points of the same eye in the current frame image.
2. The gradient-based eye state detection method according to claim 1, wherein calibrating the eye calibration points in the current frame image through a face detection algorithm and a face feature point calibration algorithm and recording the positions of the eye calibration points specifically comprises:
identifying a face region in the current frame image through a face detection algorithm;
calibrating the human face region through a human face characteristic point calibration algorithm to obtain an eye calibration point;
establishing a two-dimensional rectangular coordinate system by taking the upper left corner of the image to be detected as the origin, the horizontal right direction as the positive direction of the X axis, and the vertical downward direction as the positive direction of the Y axis;
and recording the coordinate value of the position of the eye calibration point.
3. The method for detecting an eye state based on a gradient according to claim 1, wherein the step of calculating the gradient information of the current frame image to obtain the gradient map corresponding to the current frame image specifically comprises:
respectively calculating a horizontal gradient map and a vertical gradient map corresponding to the current frame image based on a Sobel operator;
and carrying out weighted summation on the horizontal gradient image and the vertical gradient image to obtain a gradient image corresponding to the current frame image.
4. The method for detecting an eye state based on a gradient according to claim 1, wherein the determining whether both eyes in the current frame image are in a preset initial state according to the corresponding gradient map specifically comprises:
and if the eye calibration point of the same eye positioned at the canthus is a gradient maximum value point in a preset range in the horizontal direction, and the other eye calibration points of the same eye are gradient maximum value points in a preset range in the vertical direction, judging that the same eye is in an initial state.
5. The method for detecting an eye state based on a gradient according to claim 1, wherein the determining whether both eyes in the current frame image are in a preset initial state according to the corresponding gradient map specifically comprises:
calibrating each eye calibration point in the current frame image in a gradient image corresponding to the current frame image to obtain the calibrated position of each eye calibration point;
calculating the sum of distance errors of the positions of the same eye before and after calibration of each eye calibration point;
acquiring a maximum horizontal coordinate value and a minimum horizontal coordinate value according to the calibrated positions of the eye calibration points of the same eye, and calculating the difference value of the maximum horizontal coordinate value and the minimum horizontal coordinate value to obtain the horizontal coordinate span of the same eye;
and if the value obtained by dividing the sum of the distance errors of the same eye by the horizontal coordinate span is smaller than a preset threshold value, judging that the same eye is in a preset initial state.
6. The method according to claim 1 or 5, wherein the step of calibrating the eye calibration points in the current frame image in the gradient map corresponding to the current frame image according to the calibrated positions of the eye calibration points in the previous frame image to obtain the calibrated positions of the eye calibration points in the current frame image specifically comprises:
according to the position of each eye calibration point in the previous frame image after calibration, respectively connecting each eye calibration point in the previous frame image with each eye calibration point in the current frame image in a one-to-one correspondence manner in a gradient image corresponding to the current frame image to obtain a connecting line corresponding to each eye calibration point;
and respectively acquiring a maximum gradient value point on a connecting line corresponding to each eye calibration point, and taking the maximum gradient value point as the position of each eye calibration point in the current frame image after calibration.
7. The method for detecting an eye state based on a gradient according to claim 1, wherein the calculating the eye opening and closing angle corresponding to the same eye according to the calibrated positions of the eye calibration points of the same eye in the current frame image specifically comprises:
acquiring a maximum horizontal coordinate value, a maximum vertical coordinate value, a minimum horizontal coordinate value and a minimum vertical coordinate value according to the calibrated positions of the eye calibration points of the same eye in the current frame image;
calculating the eye opening and closing angle corresponding to the same eye according to a first formula, wherein the first formula is θ = arctan((y_max - y_min)/(x_max - x_min)), θ is the eye opening and closing angle, y_max is the maximum vertical coordinate value, y_min is the minimum vertical coordinate value, x_max is the maximum horizontal coordinate value, and x_min is the minimum horizontal coordinate value.
8. The gradient-based eye state detection method according to claim 1, wherein after obtaining the next frame of image to be detected as the current frame image, the method further comprises:
judging whether a face exists in the previous frame of image;
if not, judging whether the two eyes in the current frame image are both in a preset initial state;
and if so, executing the step of obtaining the eye calibration point in the current frame image by the face detection algorithm and the face characteristic point calibration algorithm.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202011110030.XA 2020-10-16 2020-10-16 Gradient-based eye state detection method and computer-readable storage medium Active CN112347860B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011110030.XA CN112347860B (en) 2020-10-16 2020-10-16 Gradient-based eye state detection method and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011110030.XA CN112347860B (en) 2020-10-16 2020-10-16 Gradient-based eye state detection method and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN112347860A (en) 2021-02-09
CN112347860B CN112347860B (en) 2023-04-28

Family

ID=74360979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011110030.XA Active CN112347860B (en) 2020-10-16 2020-10-16 Gradient-based eye state detection method and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN112347860B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093215A (en) * 2013-02-01 2013-05-08 北京天诚盛业科技有限公司 Eye location method and device
CN103632136A (en) * 2013-11-11 2014-03-12 北京天诚盛业科技有限公司 Method and device for locating human eyes
CN104091155A (en) * 2014-07-04 2014-10-08 武汉工程大学 Rapid iris positioning method with illumination robustness
US20170119298A1 (en) * 2014-09-02 2017-05-04 Hong Kong Baptist University Method and Apparatus for Eye Gaze Tracking and Detection of Fatigue
CN108460345A (en) * 2018-02-08 2018-08-28 电子科技大学 A kind of facial fatigue detection method based on face key point location
CN110705468A (en) * 2019-09-30 2020-01-17 四川大学 Eye movement range identification method and system based on image analysis

Also Published As

Publication number Publication date
CN112347860B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN108876879B (en) Method and device for realizing human face animation, computer equipment and storage medium
US6611613B1 (en) Apparatus and method for detecting speaking person's eyes and face
KR101169533B1 (en) Face posture estimating device, face posture estimating method, and computer readable recording medium recording face posture estimating program
KR101612605B1 (en) Method for extracting face feature and apparatus for perforimg the method
CN107316029B (en) A kind of living body verification method and equipment
WO2019137215A1 (en) Head pose and distraction estimation
EP2704056A2 (en) Image processing apparatus, image processing method
KR20190098858A (en) Method and apparatus for pose-invariant face recognition based on deep learning
CN107194361A (en) Two-dimentional pose detection method and device
CN108381549A (en) A kind of quick grasping means of binocular vision guided robot, device and storage medium
CN105912126B (en) A kind of gesture motion is mapped to the adaptive adjusting gain method at interface
WO2022116829A1 (en) Human behavior recognition method and apparatus, computer device and readable storage medium
JPWO2018078857A1 (en) Gaze estimation apparatus, gaze estimation method, and program recording medium
CN110705454A (en) Face recognition method with living body detection function
WO2021084972A1 (en) Object tracking device and object tracking method
CN109711239B (en) Visual attention detection method based on improved mixed increment dynamic Bayesian network
CN110503068A (en) Gaze estimation method, terminal and storage medium
JP4952267B2 (en) Three-dimensional shape processing apparatus, three-dimensional shape processing apparatus control method, and three-dimensional shape processing apparatus control program
Bei et al. Sitting posture detection using adaptively fused 3D features
CN105824398A (en) Incoming call processing method and mobile terminal
Hata et al. Detection of distant eye-contact using spatio-temporal pedestrian skeletons
WO2021026281A1 (en) Adaptive hand tracking and gesture recognition using face-shoulder feature coordinate transforms
Xia et al. SDM-based means of gradient for eye center localization
CN112347860A (en) Gradient-based eye state detection method and computer-readable storage medium
CN112257512B (en) Indirect eye state detection method and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant