CN115471824A - Eye state detection method and device, electronic equipment and storage medium - Google Patents

Eye state detection method and device, electronic equipment and storage medium

Info

Publication number
CN115471824A
Authority
CN
China
Prior art keywords
eye
image
processed
determining
state
Prior art date
Legal status
Pending
Application number
CN202210326117.3A
Other languages
Chinese (zh)
Inventor
韦涛
Current Assignee
Beijing Co Wheels Technology Co Ltd
Original Assignee
Beijing Co Wheels Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Co Wheels Technology Co Ltd
Priority to CN202210326117.3A
Publication of CN115471824A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/809 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • G06V 10/811 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data the classifiers operating on different input data, e.g. multi-modal recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/197 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an eye state detection method, an eye state detection device, an electronic device and a storage medium, wherein the method comprises the following steps: determining an image to be processed and an eye region image in the image to be processed; identifying the eye region image, and acquiring an identification result of the eye region image, wherein the identification result comprises: each eye reference state and at least one eye key point of each eye in the eye region image; determining the corresponding opening degree of each eye according to the at least one eye key point of each eye; and determining the detection result of the eyes in the image to be processed according to each eye reference state and each eye opening degree. Therefore, the method obtains the final recognition result of the eyes by fusing each eye reference state and each eye opening degree, so that the accuracy of eye detection can be improved.

Description

Eye state detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of deep learning technologies, and in particular, to an eye state detection method, an eye state detection apparatus, an electronic device, and a computer-readable storage medium.
Background
Eye state detection has wide application. For example, an automatic driving-fatigue alarm system determines whether a driver is fatigued by detecting the driver's eye state and calculating the duration of eye closure or the blinking frequency; eye state detection can also provide rich information for facial expression analysis, human-computer interaction and the like. Therefore, eye state detection is of great significance.
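For illustration only, the sketch below shows how per-frame eye states might be aggregated into an eye-closure ratio and a blink frequency for such a fatigue alarm system; the frame rate, window length and threshold are assumptions made for the example and are not specified by this application.

```python
from collections import deque

# Illustrative aggregation of per-frame eye states into fatigue indicators.
# FPS, window length and the closed-ratio threshold are assumptions for this
# sketch; they are not values taken from the application.
FPS = 30
WINDOW_SECONDS = 60

class FatigueMonitor:
    def __init__(self):
        # True means "eyes closed" in this frame.
        self.states = deque(maxlen=FPS * WINDOW_SECONDS)

    def update(self, eyes_closed: bool) -> dict:
        self.states.append(eyes_closed)
        frames = len(self.states)
        closed_ratio = sum(self.states) / frames
        # Count a blink on every open -> closed transition inside the window.
        seq = list(self.states)
        blinks = sum(1 for prev, cur in zip(seq, seq[1:]) if not prev and cur)
        blinks_per_minute = blinks * FPS * 60.0 / frames
        return {
            "closed_ratio": closed_ratio,
            "blinks_per_minute": blinks_per_minute,
            "fatigued": closed_ratio > 0.15,  # illustrative threshold
        }
```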
In the related art, there are many schemes for detecting the eye state, but the false alarm rate of the eye state detection in the related art is high.
Disclosure of Invention
The present invention is directed to solving, to some extent, the technical problems in the related art.
Therefore, a first object of the present invention is to provide an eye state detection method that improves the accuracy of eye detection by fusing each eye reference state with each eye opening degree to obtain the final eye recognition result.
A second object of the present invention is to provide an eye state detection device.
A third object of the invention is to propose an electronic device.
A fourth object of the invention is to propose a computer-readable storage medium.
A fifth object of the invention is to propose a computer program product.
In order to achieve the above object, a first aspect of the present invention provides an eye state detection method, including the following steps: determining an image to be processed and an eye region image in the image to be processed; identifying the eye region image, and acquiring an identification result of the eye region image, wherein the identification result comprises: each eye reference state and at least one eye key point of each eye in the eye region image, wherein each eye reference state is an eye opening state or an eye closing state; determining the corresponding opening degree of each eye according to the at least one eye key point of each eye; and determining the detection result of the eyes in the image to be processed according to each eye reference state and each eye opening degree.
According to the eye state detection method provided by the embodiment of the invention, an image to be processed and an eye region image in the image to be processed are determined; identifying the eye region image, and acquiring an identification result of the eye region image, wherein the identification result comprises: each eye reference state and at least one eye key point of each eye in the eye region image, wherein each eye reference state is an eye opening state or an eye closing state; determining the corresponding opening degree of each eye according to at least one eye key point of each eye; and determining the detection result of the eye in the image to be processed according to the reference state of each eye and the opening degree of each eye. Therefore, the method obtains the final eye detection result by fusing the reference state of each eye and the opening degree of each eye, so that the accuracy of eye detection can be improved.
In addition, the eye state detection method provided by the embodiment of the first aspect of the present invention may further have the following additional technical features:
according to an embodiment of the present invention, the determining the image to be processed and the eye region image in the image to be processed includes: determining an image to be processed; carrying out face detection on the image to be processed to obtain a face area image in the image to be processed; performing face key point detection on the face region image to acquire at least one face key point in the face region image; and determining an eye region image in the image to be processed according to each eye key point in the at least one face key point and the image to be processed.
According to an embodiment of the present invention, the determining an eye region image in the image to be processed according to each eye key point in the at least one face key point and the image to be processed includes: determining the position information of an eye region in the image to be processed according to each eye key point in the at least one face key point; and cutting the image to be processed according to the position information to obtain the eye region image.
According to an embodiment of the present invention, the determining a detection result of an eye of the image to be processed according to the eye reference state and the eye opening/closing degree includes: when the openness of at least one eye is greater than or equal to a set threshold value, or the reference state of at least one eye is an eye opening state, determining that the detection result is that the eye opening behavior exists in the human face in the image to be processed; and when the eye opening and closing degrees are all smaller than the set threshold value and the eye reference states are all eye closing states, determining that the eye detection result is the eye closing behavior of the human face in the image to be processed.
In order to achieve the above object, a second aspect of the present invention provides an eye state detection device, including: a first determining module, configured to determine an image to be processed and an eye region image in the image to be processed; an obtaining module, configured to identify the eye region image and obtain an identification result of the eye region image, wherein the identification result comprises: each eye reference state and at least one eye key point of each eye in the eye region image, each eye reference state being an eye opening state or an eye closing state; a second determining module, configured to determine the corresponding opening degree of each eye according to the at least one eye key point of each eye; and a third determining module, configured to determine the detection result of the eyes in the image to be processed according to each eye reference state and each eye opening degree.
According to the eye state detection device of the embodiment of the invention, the first determining module determines the image to be processed and the eye region image in the image to be processed, and the obtaining module identifies the eye region image and obtains the identification result of the eye region image, wherein the identification result comprises: each eye reference state and at least one eye key point of each eye in the eye region image, each eye reference state being an eye opening state or an eye closing state; the second determining module determines the corresponding opening degree of each eye according to the at least one eye key point of each eye, and the third determining module determines the detection result of the eyes in the image to be processed according to each eye reference state and each eye opening degree. Therefore, the device obtains the final recognition result of the eyes by fusing each eye reference state and each eye opening degree, and the accuracy of eye detection can be improved.
In addition, the eye state detection device provided by the embodiment of the second aspect of the present invention may further have the following additional technical features:
according to one embodiment of the invention, the first determining module comprises: a first determination unit configured to determine an image to be processed; the first acquisition unit is used for carrying out face detection on the image to be processed so as to acquire a face region image in the image to be processed; the second acquisition unit is used for detecting face key points of the face region image so as to acquire at least one face key point in the face region image; and the second determining unit is used for determining an eye region image in the image to be processed according to each eye key point in the at least one face key point and the image to be processed.
According to an embodiment of the present invention, the second determining unit is specifically configured to determine, according to each eye key point of the at least one face key point, position information of an eye region in the image to be processed; and cutting the image to be processed according to the position information to obtain the eye region image.
According to one embodiment of the invention, the third determining module comprises: a third determining unit, configured to determine that the detection result is that the human face in the image to be processed has eye opening behavior when the openness of at least one eye is greater than or equal to a set threshold value or the reference state of at least one eye is the eye opening state; and a fourth determining unit, configured to determine that the detection result of the eyes is that the human face in the image to be processed has eye closing behavior when the openness of each eye is smaller than the set threshold value and each eye reference state is the eye closing state.
To achieve the above object, a third aspect of the present invention provides an electronic device, including: a processor and a memory; the processor reads the executable program code stored in the memory to run a program corresponding to the executable program code, so as to implement the eye state detection method of the embodiment of the first aspect.
The electronic device of the embodiment of the invention can improve the accuracy of eye detection by executing the eye state detection method.
In order to achieve the above object, a fourth aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the eye state detection method of the embodiment of the first aspect.
The computer-readable storage medium of the embodiment of the invention can improve the accuracy of eye detection by executing the eye state detection method.
To achieve the above object, a fifth aspect of the present invention provides a computer program product; when the instructions in the computer program product are executed by a processor, the eye state detection method of the embodiment of the first aspect is performed.
The computer program product of the embodiment of the invention can improve the accuracy of eye detection by executing the eye state detection method.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
Fig. 1 is a flowchart of an eye state detection method according to an embodiment of the present invention;
Fig. 2 is a flowchart of an eye state detection method according to an embodiment of the present invention;
Fig. 3 is a flowchart of an eye state detection method according to another embodiment of the present invention;
Fig. 4 is a schematic diagram of an eye state detection apparatus according to an embodiment of the present invention;
Fig. 5 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative and intended to explain the present invention and should not be construed as limiting the present invention.
An eye state detection method, an eye state detection apparatus, an electronic device, and a computer-readable storage medium according to embodiments of the present invention are described below with reference to the accompanying drawings.
In the related art, there are two mainstream eye state detection schemes, which are as follows:
the first scheme is as follows: after the face image is obtained, the face image is detected to obtain face key points, and then the eye state is detected through the positioning signals of the face key points, for example, whether the height information of the key points of the eyes of the person is smaller than a set threshold value of the eye closing state is judged, and if the height information of the key points of the eyes of the person is smaller than the threshold value, the eye closing behavior is judged. However, if the method is executed, since the method performs full-face positioning, the positioning result has low accuracy, the shaking is large, and the determination of the eye state is greatly influenced by the change of the face posture.
The second scheme is as follows: the eye state is judged directly by an eye classification algorithm. However, the classification result is not necessarily reliable, and the eye opening degree cannot be obtained, so judging the eye state by the eye classification algorithm alone also carries a high risk.
Therefore, the invention provides an eye state detection method capable of improving eye state detection accuracy.
Fig. 1 is a flowchart of an eye state detection method according to an embodiment of the present invention.
It should be noted that the executing subject of the embodiment of the present invention is an eye state detecting device, which can be configured in an electronic apparatus, so that the electronic apparatus can execute the eye state detecting function.
The electronic device may be any device having a computing capability, for example, a Personal Computer (PC), a mobile terminal, a server, and the like, and the mobile terminal may be a hardware device having various operating systems, touch screens, and/or display screens, such as an in-vehicle device, a mobile phone, a tablet computer, a personal digital assistant, and a wearable device.
As shown in fig. 1, the eye state detection method according to the embodiment of the present invention includes the following steps:
s101, determining an image to be processed and an eye region image in the image to be processed.
For example, the image to be processed may be acquired by a camera. Face key point detection (for example, 68 face key points) may be performed on the face region image in the image to be processed; the position information of the eye region is then determined from the eye key points among the face key points, and the image to be processed is cropped according to that position information to obtain the eye region image (including the left eye region image and/or the right eye region image). This narrows the range and makes it easier to re-locate the eye key points.
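As a minimal sketch of the cropping described above (the key-point layout, padding ratio and function name are illustrative assumptions, not part of the disclosure):

```python
import numpy as np

def crop_eye_region(image: np.ndarray, eye_points: np.ndarray, pad: float = 0.4) -> np.ndarray:
    """Crop one eye region from the image to be processed.

    eye_points: (N, 2) array of (x, y) eye key points for one eye, e.g. the
    six eye points of a 68-point face landmark scheme (an assumption here).
    pad: extra margin around the tight bounding box, as a fraction of its size.
    """
    x_min, y_min = eye_points.min(axis=0)
    x_max, y_max = eye_points.max(axis=0)
    w, h = x_max - x_min, y_max - y_min
    x0 = max(int(x_min - pad * w), 0)
    y0 = max(int(y_min - pad * h), 0)
    x1 = min(int(x_max + pad * w), image.shape[1])
    y1 = min(int(y_max + pad * h), image.shape[0])
    return image[y0:y1, x0:x1]
```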
S102, identifying the eye region image, and acquiring an identification result of the eye region image, wherein the identification result comprises: each eye reference state and at least one eye key point of each eye in the eye region image, wherein each eye reference state is an open eye state or a closed eye state.
For example, the left eye region image and the right eye region image are respectively identified by using the multitask convolutional neural network model, and a plurality of identification results output by the model, such as a left eye reference state, a right eye reference state, coordinate information of at least one eye key point of the left eye in the eye region image and coordinate information of at least one eye key point of the right eye in the eye region image, are correspondingly obtained. Wherein the at least one eye keypoint may comprise an upper eyelid keypoint and a lower eyelid keypoint.
It can be understood that the multitask convolutional neural network model includes a selection network (P-Net), an optimization network (R-Net) and an output network (O-Net). Before these networks are used, the left eye region image and the right eye region image are each preprocessed: the pictures are scaled to different sizes to form an "image pyramid", and each size is processed so that the left eye region image and the right eye region image can be detected at different scales. The left eye region images and right eye region images of different sizes are then input into the selection network to obtain candidate left eye region images and candidate right eye region images; these candidates are input into the optimization network to obtain refined candidate left eye region images and candidate right eye region images; finally, the refined candidates are input into the output network, which classifies the eye state and locates the eye key points, outputting the left eye reference state and at least one eye key point of the left eye in the corresponding candidate left eye region image, and the right eye reference state and at least one eye key point of the right eye in the corresponding candidate right eye region image.
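The application does not give the layer structure of this model; purely as an illustration, the sketch below shows a multitask output stage of the kind described, with one branch for the eye reference state and one for the eye key points (the framework choice, feature size and key-point count are assumptions):

```python
import torch
import torch.nn as nn

class EyeMultiTaskHead(nn.Module):
    """Illustrative multitask output stage: one branch classifies the eye
    reference state (open vs. closed), the other regresses eye key points.
    The feature dimension and key-point count are assumptions, not values
    from the application."""

    def __init__(self, feat_dim: int = 128, num_keypoints: int = 6):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.state_branch = nn.Linear(feat_dim, 2)                     # open/closed logits
        self.keypoint_branch = nn.Linear(feat_dim, num_keypoints * 2)  # (x, y) per key point

    def forward(self, features: torch.Tensor):
        state_logits = self.state_branch(features)
        keypoints = self.keypoint_branch(features).view(-1, self.num_keypoints, 2)
        return state_logits, keypoints

# Example: features from an upstream backbone for a batch of 4 eye crops.
features = torch.randn(4, 128)
state_logits, keypoints = EyeMultiTaskHead()(features)  # shapes: (4, 2) and (4, 6, 2)
```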
And S103, determining the corresponding opening degree of each eye according to at least one eye key point of each eye.
In this step, the left eye opening degree is calculated as the opening height of the left eye from the coordinate information of the upper eyelid key point and the lower eyelid key point of the left eye; the right eye opening degree is calculated as the opening height of the right eye from the coordinate information of the upper eyelid key point and the lower eyelid key point of the right eye.
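A minimal sketch of this calculation, assuming each eye contributes one upper-eyelid and one lower-eyelid coordinate (the example coordinates are made up purely for illustration):

```python
def eye_openness(upper_eyelid_xy, lower_eyelid_xy) -> float:
    """Opening height of one eye from its eyelid key points.

    Only the vertical (y) distance is used here; normalising by eye width
    would be an equally plausible variant, but is not assumed by this sketch.
    """
    return abs(float(lower_eyelid_xy[1]) - float(upper_eyelid_xy[1]))

# Purely illustrative pixel coordinates:
left_opening = eye_openness(upper_eyelid_xy=(52, 40), lower_eyelid_xy=(52, 49))   # 9.0
right_opening = eye_openness(upper_eyelid_xy=(98, 41), lower_eyelid_xy=(98, 43))  # 2.0
```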
And S104, determining the detection result of the eye in the image to be processed according to the reference state of each eye and the opening degree of each eye.
In this step, the detection result of the eye may include a recognition result of the left eye and a recognition result of the right eye.
The identification result of the left eye in the image to be processed is determined according to the left eye reference state and the left eye opening degree: if the left eye opening degree is greater than or equal to the set threshold value, or the left eye reference state is the eye opening state, it is determined that the identification result of the left eye is that the left eye of the face in the image to be processed has eye opening behavior; if the left eye opening degree is smaller than the set threshold value and the left eye reference state is the eye closing state, it is determined that the identification result of the left eye is that the left eye of the face in the image to be processed has eye closing behavior.
The identification result of the right eye in the image to be processed is determined according to the right eye reference state and the right eye opening degree: if the right eye opening degree is greater than or equal to the set threshold value, or the right eye reference state is the eye opening state, it is determined that the identification result of the right eye is that the right eye of the face in the image to be processed has eye opening behavior; if the right eye opening degree is smaller than the set threshold value and the right eye reference state is the eye closing state, it is determined that the identification result of the right eye is that the right eye of the face in the image to be processed has eye closing behavior.
The detection result of the eyes in the image to be processed is then determined from the behaviors of the left eye and the right eye: if the left eye has eye opening behavior and/or the right eye has eye opening behavior, it is determined that the eyes of the human face in the image to be processed have eye opening behavior; if the left eye has eye closing behavior and the right eye has eye closing behavior, it is determined that the eyes of the human face in the image to be processed have eye closing behavior.
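The per-eye and combined rules above can be written compactly as the following sketch; the threshold value is an illustrative assumption in the same units as the opening degree.

```python
OPEN, CLOSED = "open", "closed"

def eye_has_opening_behavior(reference_state: str, opening_degree: float,
                             threshold: float = 5.0) -> bool:
    """Per-eye fusion: the eye counts as open if either the classifier says
    open or the geometric opening degree reaches the threshold (threshold
    value chosen purely for illustration)."""
    return reference_state == OPEN or opening_degree >= threshold

def face_eye_state(left_state: str, left_opening: float,
                   right_state: str, right_opening: float) -> str:
    """Combined rule: eye opening behavior if either eye is open; eye closing
    behavior only if both eyes are closed."""
    if eye_has_opening_behavior(left_state, left_opening) or \
       eye_has_opening_behavior(right_state, right_opening):
        return "eye_opening_behavior"
    return "eye_closing_behavior"
```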
Therefore, the eye state detection method of the embodiment of the invention determines the image to be processed and the eye region image in the image to be processed; identifying the eye region image, and acquiring an identification result of the eye region image, wherein the identification result comprises: each eye reference state and at least one eye key point of each eye in the eye region image, wherein each eye reference state is an eye opening state or an eye closing state; determining the corresponding opening degree of each eye according to at least one eye key point of each eye; and determining the detection result of the eye in the image to be processed according to the reference state of each eye and the opening degree of each eye. Therefore, the method obtains the final eye detection result by fusing the reference state of each eye and the opening degree of each eye, so that the accuracy of eye detection can be improved.
Fig. 2 is a flowchart of an eye state detection method according to an embodiment of the present invention.
As shown in fig. 2, the eye state detection method according to the embodiment of the present invention includes the following steps:
s201, determining an image to be processed.
The image to be processed can be obtained from a camera, and it may be an image that includes a human face or an image that does not include a human face.
S202, carrying out face detection on the image to be processed to acquire a face region image in the image to be processed.
In this step, the image to be processed is input into a face detection model, which locates the position and size of the face, and the face region image output by the face detection model is obtained.
If face detection is performed on the image to be processed but no face region image is obtained, subsequent processing of the image to be processed is stopped.
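The application only refers to "a face detection model" without naming one; as a concrete stand-in for this step, the sketch below uses OpenCV's bundled Haar cascade detector (an illustrative choice, not part of the disclosure):

```python
import cv2

# Stand-in face detector; the cascade file ships with opencv-python.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_region(image_bgr):
    """Return the largest detected face region, or None if no face is found
    (in which case subsequent processing is skipped, as described above)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    return image_bgr[y:y + h, x:x + w]
```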
And S203, performing face key point detection on the face region image to acquire at least one face key point in the face region image.
In this step, face key points, such as the points of the eyes, nose tip, mouth corners, eyebrows and the contour of each part of the face, are detected from the face region image, and a feature point set of the face is obtained.
And S204, determining an eye region image in the image to be processed according to each eye key point in the at least one face key point and the image to be processed.
In this step, the position information of the left eye region and the position information of the right eye region in the image to be processed are determined according to the left eye key points and the right eye key points among the at least one face key point, and the image to be processed is cropped according to the corresponding position information to obtain the left eye region image and the right eye region image respectively.
S205, identifying the eye region image, and acquiring an identification result of the eye region image, wherein the identification result comprises: each eye reference state and at least one eye key point of each eye in the eye region image, wherein each eye reference state is an open eye state or a closed eye state.
And S206, determining the detection result of the eye in the image to be processed according to the reference state of each eye and the opening degree of each eye.
It should be noted that, for the processes of executing step S205 and step S206, reference may be made to steps S102 to S104 above, and a detailed description is omitted here.
Therefore, the eye state detection method of the embodiment of the present invention first outputs the results of a plurality of tasks (including each eye reference state and each eye opening degree) and then determines the detection result of the eyes from each eye reference state and each eye opening degree, which reduces the influence of changes in the face posture on eye state discrimination, lowers the false alarm rate and improves the accuracy of the eye detection result.
Fig. 3 is a flowchart of another eye state detection method according to the present invention.
As shown in fig. 3, the eye state detection method according to the embodiment of the present invention includes:
s301, determining an image to be processed and an eye region image in the image to be processed.
The image to be processed may be an image that includes a human face or an image that does not include a human face; images not including a human face are, for example, landscape images, road condition images and the like.
S302, identifying the eye region image, and acquiring an identification result of the eye region image, wherein the identification result comprises: each eye reference state and at least one eye key point of each eye in the eye region image, wherein each eye reference state is an open eye state or a closed eye state.
In this step, each eye region image (including the left eye region image and the right eye region image) may be input into a convolutional neural network to obtain the identification result output by the convolutional neural network. The identification result includes the left eye reference state, the right eye reference state, at least one eye key point of the left eye in the left eye region image and at least one eye key point of the right eye in the right eye region image, where the left eye reference state and the right eye reference state may each be an eye opening state or an eye closing state, and the left eye and the right eye each have six eye key points. The opening degree of the left eye is then obtained by geometric analysis of the eye key points of the left eye, and the opening degree of the right eye is obtained by geometric analysis of the eye key points of the right eye.
And S303, determining the corresponding opening degree of each eye according to at least one eye key point of each eye.
For example, the left eye opening degree may be calculated from the coordinate information of the highest point of the upper eyelid and the lowest point of the lower eyelid of the left eye, and the right eye opening degree may be calculated from the coordinate information of the highest point of the upper eyelid and the lowest point of the lower eyelid of the right eye.
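Since each eye has six key points in this embodiment, one common geometric analysis is the eye aspect ratio (EAR); the sketch below uses it as an illustrative stand-in, and the key-point ordering is an assumption rather than something fixed by the application.

```python
import numpy as np

def eye_aspect_ratio(pts: np.ndarray) -> float:
    """Eye aspect ratio from six eye key points, assumed ordered as
    [outer corner, upper 1, upper 2, inner corner, lower 2, lower 1]
    (the usual 68-landmark ordering; an assumption for this sketch).

    EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)

    Larger values mean a more open eye, and the ratio is scale-invariant."""
    p1, p2, p3, p4, p5, p6 = np.asarray(pts, dtype=float)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return float(vertical / (2.0 * horizontal))
```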
S304, when the openness of at least one eye is larger than or equal to a set threshold value, or when the reference state of at least one eye is the eye opening state, determining that the detection result is that the human face in the image to be processed has the eye opening behavior.
S305, when the opening degrees of the eyes are all smaller than the set threshold value and the eye reference states are all eye closing states, determining that the detection result of the eyes is that the human face in the image to be processed has eye closing behavior.
That is, the recognition result of the eyes in the image to be processed is determined from the left eye reference state, the left eye opening degree, the right eye reference state and the right eye opening degree. If the opening degree of the left eye and/or the right eye is greater than or equal to the set threshold value, it is determined that the eyes of the human face in the image to be processed have eye opening behavior; if the reference state of the left eye and/or the right eye is the eye opening state, it is determined that the eyes of the human face in the image to be processed have eye opening behavior; and if the left eye opening degree is smaller than the set threshold value, the left eye reference state is the eye closing state, the right eye opening degree is smaller than the set threshold value and the right eye reference state is the eye closing state, it is determined that the eyes of the human face in the image to be processed have eye closing behavior.
In summary, according to the eye state detection method of the embodiment of the present invention, the image to be processed and the eye region image in the image to be processed are determined; identifying the eye region image, and acquiring an identification result of the eye region image, wherein the identification result comprises: each eye reference state and at least one eye key point of each eye in the eye region image, wherein each eye reference state is an eye opening state or an eye closing state; determining the corresponding opening degree of each eye according to at least one eye key point of each eye; and determining the detection result of the eye in the image to be processed according to the reference state of each eye and the opening degree of each eye. Therefore, the method obtains the final eye detection result by fusing the reference state of each eye and the opening degree of each eye, so that the accuracy of eye detection can be improved.
Fig. 4 is a schematic diagram of an eye state detection apparatus according to an embodiment of the present invention.
As shown in fig. 4, the eye state detection apparatus 400 according to the embodiment of the present invention includes: a first determining module 401, an obtaining module 402, a second determining module 403 and a third determining module 404.
The first determining module 401 is configured to determine an image to be processed and an eye region image in the image to be processed. The obtaining module 402 is configured to identify an eye region image, and obtain a recognition result of the eye region image, where the recognition result includes: each eye reference state and at least one eye key point of each eye in the eye region image, wherein each eye reference state is an open eye state or a closed eye state. The second determining module 403 is configured to determine a corresponding opening degree of each eye according to at least one eye key point of each eye. The third determining module 404 is configured to determine a detection result of an eye in the image to be processed according to the eye reference state and the eye opening degree.
According to one embodiment of the invention, the first determining module 401 comprises: a first determination unit configured to determine an image to be processed; the first acquisition unit is used for carrying out face detection on the image to be processed so as to acquire a face region image in the image to be processed; the second acquisition unit is used for detecting face key points of the face region image so as to acquire at least one face key point in the face region image; and the second determining unit is used for determining an eye region image in the image to be processed according to each eye key point in the at least one face key point and the image to be processed.
According to an embodiment of the present invention, the second determining unit is specifically configured to determine, according to each eye key point of the at least one face key point, position information of an eye region in the image to be processed; and cutting the image to be processed according to the position information to obtain the eye region image.
According to one embodiment of the invention, the third determining module 404 includes: a third determining unit, configured to determine that the detection result is that the human face in the image to be processed has eye opening behavior when the openness of at least one eye is greater than or equal to a set threshold value or the reference state of at least one eye is the eye opening state; and a fourth determining unit, configured to determine that the detection result of the eyes is that the human face in the image to be processed has eye closing behavior when the openness of each eye is smaller than the set threshold value and each eye reference state is the eye closing state.
It should be noted that details not disclosed in the eye state detection device of the embodiment of the present invention refer to details disclosed in the eye state detection method of the present invention, and are not repeated herein.
According to the eye state detection device of the embodiment of the invention, the first determining module determines the image to be processed and the eye region image in the image to be processed, and the obtaining module identifies the eye region image and obtains the identification result of the eye region image, wherein the identification result comprises: each eye reference state and at least one eye key point of each eye in the eye region image, each eye reference state being an eye opening state or an eye closing state; the second determining module determines the corresponding opening degree of each eye according to the at least one eye key point of each eye, and the third determining module determines the detection result of the eyes in the image to be processed according to each eye reference state and each eye opening degree. Therefore, the device obtains the final recognition result of the eyes by fusing each eye reference state and each eye opening degree, and the accuracy of eye detection can be improved.
Based on the above embodiment, the invention further provides an electronic device.
The electronic device of the embodiment of the invention comprises: a processor and a memory; the processor reads the executable program codes stored in the memory to run programs corresponding to the executable program codes, so as to realize the eye state detection method.
The electronic device of the embodiment of the invention can improve the accuracy of eye detection by executing the eye state detection method.
Based on the above embodiment, the present invention further provides a computer-readable storage medium.
A computer-readable storage medium of an embodiment of the present invention has stored thereon a computer program that, when executed by a processor, implements the above-described eye state detection method.
The computer-readable storage medium of the embodiment of the invention can improve the accuracy of eye detection by executing the eye state detection method.
Based on the above embodiment, the present invention further provides a computer program product.
The computer program product of the embodiment of the invention executes the above-mentioned eye state detection method when the instructions in the computer program product are executed by the processor.
The computer program product of the embodiment of the invention can improve the accuracy of eye detection by executing the eye state detection method.
Fig. 5 is a block diagram of an electronic device according to an embodiment of the present invention. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 5, the electronic device 10 includes a processor 11, which can perform various suitable actions and processes according to a program stored in a Read Only Memory (ROM) 12 or a program loaded from a Memory 16 into a Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 are also stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An Input/Output (I/O) interface 15 is also connected to the bus 14.
The following components are connected to the I/O interface 15: a memory 16 including a hard disk and the like, and a communication section 17 including a network interface card such as a LAN (Local Area Network) card, a modem, or the like, the communication section 17 performing communication processing via a network such as the Internet. A drive 18 is also connected to the I/O interface 15 as necessary.
In particular, the processes described above with reference to the flowcharts may be implemented as a computer software program according to an embodiment of the present invention. For example, embodiments of the invention include a computer program embodied on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 17. When executed by the processor 11, the computer program performs the above-mentioned functions defined in the method of the present invention.
In an exemplary embodiment, there is also provided a storage medium comprising instructions, such as the memory 16 comprising instructions, executable by the processor 11 of the electronic device 10 to perform the above-described method. Alternatively, the storage medium may be a computer-readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Further, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. An eye state detection method characterized by comprising the steps of:
determining an image to be processed and an eye region image in the image to be processed;
identifying the eye region image, and acquiring an identification result of the eye region image, wherein the identification result comprises: each eye reference state and at least one eye key point of each eye in the eye region image, wherein each eye reference state is an eye opening state or an eye closing state;
determining the corresponding opening degree of each eye according to at least one eye key point of each eye;
and determining the detection result of the eyes in the image to be processed according to the reference state of each eye and the opening and closing degree of each eye.
2. The method according to claim 1, wherein the determining the image to be processed and the eye region image in the image to be processed comprises:
determining an image to be processed;
carrying out face detection on the image to be processed to obtain a face area image in the image to be processed;
performing face key point detection on the face region image to acquire at least one face key point in the face region image;
and determining an eye region image in the image to be processed according to each eye key point in the at least one face key point and the image to be processed.
3. The method according to claim 2, wherein the determining the eye region image in the image to be processed according to each eye key point in the at least one face key point and the image to be processed comprises:
determining the position information of an eye region in the image to be processed according to each eye key point in the at least one face key point;
and cutting the image to be processed according to the position information to obtain the eye region image.
4. The method according to claim 1, wherein determining the detection result of the eye of the image to be processed according to the eye reference state and the eye opening/closing degree comprises:
when the openness of at least one eye is greater than or equal to a set threshold value, or the reference state of at least one eye is an eye opening state, determining that the detection result is that the eye opening behavior exists in the human face in the image to be processed;
and when the eye opening and closing degrees are all smaller than the set threshold value and the eye reference states are all eye closing states, determining that the eye detection result is the eye closing behavior of the human face in the image to be processed.
5. An eye state detection device, comprising:
a first determining module, configured to determine an image to be processed and an eye region image in the image to be processed;
an obtaining module, configured to identify the eye region image and obtain an identification result of the eye region image, where the identification result includes: each eye reference state and at least one eye key point of each eye in the eye region image, wherein each eye reference state is an eye opening state or an eye closing state;
the second determining module is used for determining the corresponding opening degree of each eye according to at least one eye key point of each eye;
and the third determining module is used for determining the detection result of the eye in the image to be processed according to the reference state of each eye and the opening and closing degree of each eye.
6. The apparatus of claim 5, wherein the first determining module comprises:
a first determination unit configured to determine an image to be processed;
the first acquisition unit is used for carrying out face detection on the image to be processed so as to acquire a face region image in the image to be processed;
the second acquisition unit is used for detecting face key points of the face region image so as to acquire at least one face key point in the face region image;
and the second determining unit is used for determining an eye region image in the image to be processed according to each eye key point in the at least one face key point and the image to be processed.
7. The apparatus according to claim 6, characterized in that the second determination unit is specifically configured to,
determining the position information of an eye region in the image to be processed according to each eye key point in the at least one face key point;
and cutting the image to be processed according to the position information to obtain the eye region image.
8. The apparatus of claim 5, wherein the third determining module comprises:
the third determining unit is used for determining that the detection result is that the human face in the image to be processed has eye opening behavior when the openness of at least one eye is greater than or equal to a set threshold value or the reference state of at least one eye is the eye opening state;
and a fourth determining unit, configured to determine that the detection result of the eyes is that the face in the image to be processed has eye closing behavior when the eye opening and closing degrees are all smaller than the set threshold and the eye reference states are all eye closing states.
9. An electronic device, comprising:
a processor and a memory;
wherein the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory for implementing the eye state detection method according to any one of claims 1 to 4.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the eye state detection method according to any one of claims 1 to 4.
CN202210326117.3A 2022-03-29 2022-03-29 Eye state detection method and device, electronic equipment and storage medium Pending CN115471824A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210326117.3A CN115471824A (en) 2022-03-29 2022-03-29 Eye state detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210326117.3A CN115471824A (en) 2022-03-29 2022-03-29 Eye state detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115471824A (en) 2022-12-13

Family

ID=84365136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210326117.3A Pending CN115471824A (en) 2022-03-29 2022-03-29 Eye state detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115471824A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690892A (en) * 2023-01-03 2023-02-03 京东方艺云(杭州)科技有限公司 Squinting recognition method and device, electronic equipment and storage medium
CN116152723A (en) * 2023-04-19 2023-05-23 深圳国辰智能系统有限公司 Intelligent video monitoring method and system based on big data
CN116152723B (en) * 2023-04-19 2023-06-27 深圳国辰智能系统有限公司 Intelligent video monitoring method and system based on big data

Similar Documents

Publication Publication Date Title
CN107358149B (en) Human body posture detection method and device
CN115471824A (en) Eye state detection method and device, electronic equipment and storage medium
CN106897658A (en) The discrimination method and device of face live body
CN111597884A (en) Facial action unit identification method and device, electronic equipment and storage medium
US9990710B2 (en) Apparatus and method for supporting computer aided diagnosis
CN111553266A (en) Identification verification method and device and electronic equipment
EP2639743A2 (en) Image processing device, image processing program, and image processing method
CN106169075A (en) Auth method and device
KR20140134803A (en) Apparatus and method for gesture recognition using multiclass Support Vector Machine and tree classification
KR20070083952A (en) Object detection utilizing a rotated version of an image
CN110826372A (en) Method and device for detecting human face characteristic points
WO2018103024A1 (en) Intelligent guidance method and apparatus for visually handicapped person
CN108520263B (en) Panoramic image identification method and system and computer storage medium
CN112560584A (en) Face detection method and device, storage medium and terminal
JP2015103188A (en) Image analysis device, image analysis method, and image analysis program
US9053383B2 (en) Recognizing apparatus and method, program, and recording medium
KR101961462B1 (en) Object recognition method and the device thereof
JP2014199506A (en) Object detection device, object method of detection, and program
CN114092987A (en) Apparatus and method for providing vehicle service based on individual emotion recognition
CN117058421A (en) Multi-head model-based image detection key point method, system, platform and medium
CN115937991A (en) Human body tumbling identification method and device, computer equipment and storage medium
CN115641570A (en) Driving behavior determination method and device, electronic equipment and storage medium
CN115565103A (en) Dynamic target detection method and device, computer equipment and storage medium
CN115719428A (en) Face image clustering method, device, equipment and medium based on classification model
CN115311723A (en) Living body detection method, living body detection device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination