US11386710B2 - Eye state detection method, electronic device, detecting apparatus and computer readable storage medium - Google Patents


Info

Publication number
US11386710B2
US11386710B2 (application US16/473,491 / US201816473491A)
Authority
US
United States
Prior art keywords
feature points
eye
eye feature
position coordinates
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/473,491
Other versions
US20210357617A1 (en)
Inventor
Chu Xu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing BOE Technology Development Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Assigned to BOE TECHNOLOGY GROUP CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: XU, CHU
Publication of US20210357617A1
Application granted
Publication of US11386710B2
Assigned to Beijing Boe Technology Development Co., Ltd. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BOE TECHNOLOGY GROUP CO., LTD.
Legal status: Active
Adjusted expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06K9/6256
    • G06K9/6262
    • G06K9/6298
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/77 Determining position or orientation of objects or cameras using statistical methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Definitions

  • Embodiments of the present disclosure relate to an eye state detection method, an electronic device, a detecting apparatus, and a computer readable storage medium.
  • Eyes are among the most important features of a human face, and play an extremely important role in computer vision research and applications.
  • Eye state detection has therefore long been a focus of researchers.
  • the detection of an eye state may help various smart devices recognize the state of human eyes, and has broad application prospects in the fields of fatigue detection and visual interaction, for example, driver fatigue detection and filtering of invalid photos.
  • an eye state detection method including: acquiring a target image; positioning a plurality of eye feature points in the target image to determine position coordinates of the plurality of eye feature points; normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data; and determining an eye state in the target image based on the position feature data.
  • the eye feature points include left eye feature points and right eye feature points.
  • Normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data includes: determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points; and normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data.
  • determining an eye state in the target image based on the position feature data includes: classifying the position feature data; and determining an eye state in the target image based on the classification result.
  • determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points includes: determining a mean value of position coordinates
  • normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data includes: normalizing the position coordinates of the plurality of eye feature points according to the formulas
  • N=12.
  • classifying the position feature data includes: classifying the position feature data with a classifier.
  • classifying the position feature data further includes: training the classifier with sample images to obtain a classifier parameter for the eye state detection.
  • training the classifier with sample images to obtain a classifier parameter for the eye state detection including: acquiring positive and negative sample images from a picture library, wherein the eye state in the positive sample images is an opened eye state, and the eye state in the negative sample images is a closed eye state; positioning the eye feature points in the positive and negative sample images, to obtain position coordinates of the plurality of eye feature points in the positive and negative sample images; normalizing the position coordinates of the plurality of eye feature points, to obtain normalized position feature data; and training the classifier with the position feature data to obtain a classifier parameter for detecting the eye state.
  • positioning a plurality of eye feature points in the target image includes: detecting whether a human face is included in the target image; and when detecting that the target image includes a human face, positioning the plurality of eye feature points in the target image.
  • an electronic device including at least one processor, at least one memory, and computer program instructions stored in the memory, when the computer program instructions are executed by the processor, the processor is caused to perform: acquiring a target image; positioning a plurality of eye feature points in the target image to determine position coordinates of the plurality of eye feature points; normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data; and determining an eye state in the target image based on the position feature data.
  • the eye feature points include left eye feature points and right eye feature points; and normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data includes: determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points; and normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data.
  • determining an eye state in the target image based on the position feature data includes: classifying the position feature data; and determining an eye state in the target image based on the classification result.
  • determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points includes: determining a mean value of position coordinates
  • normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data includes: normalizing the position coordinates of the plurality of eye feature points according to the formulas
  • classifying the position feature data includes: classifying the position feature data with a classifier.
  • classifying the position feature data further includes: training the classifier with sample images to obtain a classifier parameter for the eye state detection.
  • training the classifier with sample images to obtain a classifier parameter for eye state detection including: acquiring positive and negative sample images from a picture library, wherein the eye state in the positive sample images is an opened eye state, and the eye state in the negative sample images is a closed eye state; positioning the eye feature points in the positive and negative sample images, to obtain position coordinates of the plurality of eye feature points in the positive and negative sample images; normalizing the position coordinates of the plurality of eye feature points, to obtain normalized position feature data; and training the classifier with the position feature data to obtain a classifier parameter for detecting the eye state.
  • an eye state detection apparatus including: an acquiring sub-circuit configured to acquire a target image; a positioning sub-circuit configured to position a plurality of eye feature points in the target image to determine position coordinates of the plurality of eye feature points; a normalization processing sub-circuit configured to normalize the position coordinates of the plurality of eye feature points to obtain the normalized position feature data; and a determining sub-circuit configured to determine an eye state in the target image based on the position feature data.
  • a computer readable storage medium having stored thereon computer program instructions, when the computer program instructions are executed by a processor, the processor performs the method of the above embodiments.
  • FIG. 1 is a schematic flowchart diagram of an eye state detection method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a training process of a classifier according to an embodiment of the present disclosure
  • FIG. 3 is a detailed flowchart of an eye state detection method according to an embodiment of the present disclosure
  • FIG. 4 is a block diagram of an eye state detection apparatus according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of a computer device suitable for implementing an embodiment of the present disclosure.
  • Embodiments of the present disclosure provide an eye state detection method.
  • An eye state may include an opened eye state and a closed eye state.
  • the eye state may also include a semi-opened and semi-closed state, a fatigue state, a squint state, and so on.
  • the present disclosure describes the eye state detection only by taking the opened eye state and the closed eye state as an example.
  • a schematic flowchart of an eye state detection method according to an embodiment of the present disclosure includes the following steps 101 - 104 .
  • the order of the above steps as described is only an example of the embodiments of the present disclosure and is not the only order; other possible execution orders are also conceivable by those skilled in the art in light of the present disclosure.
  • Step 101 a target image is acquired.
  • the target image may be acquired by a camera or may be received from other devices other than the camera.
  • Step 102 a plurality of eye feature points in the target image are positioned to determine position coordinates of the plurality of eye feature points.
  • before determining the position coordinates of the plurality of eye feature points in the target image, it may first be detected whether the target image includes a face, for example, with a face detection technique.
  • the face detection technique refers to a method of searching a given image using a certain strategy to determine whether it contains a face. For example, a linear subspace method, a neural network method, or the like may be used to detect whether a face is included in the target image.
  • the face region in the target image can be determined, and then the plurality of eye feature points in the face region can be positioned, and thereby the position coordinates of the plurality of eye feature points can be determined.
  • eye feature points can be positioned by an eye feature learning machine. For example, first, positive and negative samples of a plurality of eye feature points are acquired.
  • an image recognition algorithm can be used to detect a plurality of images that may include eye features, to obtain positive and negative samples of a plurality of eye feature points.
  • a positive sample is a sample that includes eye features
  • a negative sample is a sample that is similar to an eye feature but is not an eye feature.
  • the eye feature learning machine is trained with a large number of positive and negative samples.
  • the face region in the target image or the target image is input into the trained learning machine, and the trained learning machine can automatically position the eye feature points in the input face region and determine position coordinates of the positioned positions in the target image.
  • the eye feature points include left eye feature points and right eye feature points, which may be, but not limited to, edge points of the eye corners and upper and lower eyelids.
  • the number of feature points of the left and right eyes can be set as needed; based on the symmetry of the human eyes in the face, the numbers of feature points for the left and right eyes can be the same.
  • the algorithm used for positioning the left and right eye feature points sets the total number of eye feature points to 12, that is, 6 left eye feature points and 6 right eye feature points, for example, a left eye corner point, a right eye corner point, two edge points of the upper eyelid, and two edge points of the lower eyelid for each eye.
  • the position coordinates of the eye feature points can be, but are not limited to being, expressed in an XY-axis coordinate system.
  • the coordinate system can take the upper left corner of the target image as the origin, the horizontal direction as the horizontal axis, that is, the X axis, and the vertical direction as the vertical axis, that is, the Y axis.
  • Step 103 the position coordinates of the plurality of eye feature points are normalized to obtain normalized position feature data.
  • a Euclidean distance between a mean value of position coordinates of the left eye feature points and a mean value of position coordinates of the right eye feature points is used as a standard unit for normalizing the position coordinates of the plurality of eye feature points.
  • the horizontal axis coordinates of 12 eye feature points are X (X1, X2, . . . , X11, X12), and the vertical axis coordinates of 12 eye feature points are Y (Y1, Y2, . . . , Y11, Y12).
  • X i is the horizontal axis coordinate of the i-th eye feature point
  • Y i is the vertical axis coordinate of the i-th eye feature point
  • the value of i ranges from 1 to N
  • the first to the (0.5N)th eye feature points are left eye feature points
  • the (0.5N+1)th to the N-th eye feature points are right eye feature points
  • N is an even number.
  • the Euclidean distance Ed between El (the mean position of the left eye feature points) and Er (the mean position of the right eye feature points) is calculated.
  • the obtained new position coordinates of the eye feature points are the position feature data obtained after the normalization process.
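The normalization formulas themselves appear only as figures in the original text. The numpy sketch below follows the written description: the mean left-eye position El and mean right-eye position Er are computed, their Euclidean distance Ed serves as the standard unit, and all coordinates are rescaled by Ed. Subtracting the midpoint of El and Er is an added assumption (not stated in the text) that also removes dependence on the eye region's position:

```python
import numpy as np


def normalize_eye_points(X, Y, n_left=6):
    """Normalize eye feature point coordinates (N=12 by default: the
    first 6 points belong to the left eye, the last 6 to the right eye).
    Returns an (N, 2) array of normalized position feature data."""
    pts = np.stack([np.asarray(X, float), np.asarray(Y, float)], axis=1)
    el = pts[:n_left].mean(axis=0)   # mean position of left eye points (El)
    er = pts[n_left:].mean(axis=0)   # mean position of right eye points (Er)
    ed = np.linalg.norm(el - er)     # Euclidean distance Ed (standard unit)
    center = (el + er) / 2.0         # assumed reference point (midpoint)
    return (pts - center) / ed       # new position coordinates
```

With this form, uniformly scaling or translating the face in the image leaves the feature data unchanged, which matches the robustness property claimed later in the text.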
  • Step 104 an eye state in the target image is determined based on the position feature data.
  • the position feature data can be classified to determine the eye state in the target image.
  • the obtained position feature data can be classified with a classifier.
  • the embodiment of the present disclosure can further include: training a classifier with sample images to obtain a classifier parameter for detecting the eye state.
  • the training process of the classifier can be as shown in FIG. 2 .
  • the positive and negative sample images are acquired from a library of sample images of opened eye states and closed eye states.
  • the eye state in the positive sample image is the opened eye state
  • the eye state in the negative sample image is the closed eye state.
  • face detection and eye feature point positioning are performed on the positive and negative sample images to determine position coordinates of the plurality of eye feature points in the positive and negative sample images.
  • based on the normalization processing principle in the above Step 103, the position coordinates of the plurality of eye feature points are normalized to obtain the normalized position feature data.
  • the classifier is trained to obtain a classifier parameter for detecting the eye state.
  • the classifier can classify the opened eye state or the closed eye state. For example, if the position coordinates of the eye feature points are input into the classifier, the classifier can determine whether the feature at the position coordinates corresponds to an opened eye state or a closed eye state.
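The patent leaves the classifier type open. As a dependency-light stand-in, the sketch below trains a plain logistic-regression classifier on flattened position feature vectors; the function names and the 24-dimensional layout (12 points × 2 coordinates) are illustrative assumptions, and an SVM or neural network trained on the same position feature data would serve equally well:

```python
import numpy as np


def train_eye_state_classifier(features, labels, lr=0.5, epochs=500):
    """Train a minimal logistic-regression classifier on normalized
    position feature data.

    features: (n_samples, 24) array, flattened normalized coordinates.
    labels:   1 = opened eye (positive sample), 0 = closed eye (negative).
    Returns the learned classifier parameters (w, b)."""
    X = np.asarray(features, float)
    y = np.asarray(labels, float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        g = p - y                               # gradient of the log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b


def predict_eye_state(w, b, features):
    """Return 1 for an opened eye state, 0 for a closed eye state."""
    p = 1.0 / (1.0 + np.exp(-(np.asarray(features, float) @ w + b)))
    return (p >= 0.5).astype(int)
```

After training on positive (opened-eye) and negative (closed-eye) sample images, the returned `(w, b)` plays the role of the "classifier parameter for detecting the eye state" described above.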
  • FIG. 3 is a schematic flowchart diagram of an eye state detection method according to an embodiment of the present disclosure.
  • the method for detecting the eye may include the following steps 301 - 305 .
  • the order of the above steps as described is only an example of the embodiments of the present disclosure and is not the only order; other possible execution orders are also conceivable by those skilled in the art in light of the present disclosure.
  • Step 301 a target image is acquired.
  • Step 302 it is detected whether a face is included in the target image.
  • if a face is detected, Step 303 is performed.
  • otherwise, the process ends and returns to Step 301.
  • Step 303 twelve (12) eye feature points in the target image are positioned to determine position coordinates of the twelve (12) eye feature points.
  • the horizontal axis coordinates of the 12 eye feature points are X (X1, X2, . . . , X11, X12), and the vertical axis coordinates of the 12 eye feature points are Y (Y1, Y2, . . . , Y11, Y12).
  • Step 304 the position coordinates of the 12 eye feature points are normalized by a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points, to obtain the normalized position feature data.
  • X i is the horizontal axis coordinate of the i-th eye feature point
  • Y i is the vertical axis coordinate of the i-th eye feature point
  • Step 305 the obtained position feature data is classified by a classifier to determine the eye state in the target image.
  • the eye state is determined by: acquiring a target image; determining position coordinates of the plurality of eye feature points in the target image; normalizing the position coordinates of the plurality of eye feature points by a Euclidean distance between a mean value of position coordinates of the left eye feature points and a mean value of position coordinates of the right eye feature points as a standard unit, to obtain normalized position feature data; and classifying the position feature data to determine the eye state in the target image.
  • the present method can accurately detect the eye state in the target image, and owing to the normalization process, it is not affected by the size and position of the eye region in the target image and has excellent robustness.
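Putting Steps 301-305 together, the control flow can be wired as below. Every stage is passed in as a callable because the patent does not fix any particular face detector, landmark locator, normalizer, or classifier; the function and parameter names are illustrative only:

```python
def detect_eye_state(image, detect_face, locate_eye_points, normalize, classify):
    """Sketch of the flow of FIG. 3 (Steps 301-305).

    Returns 'opened', 'closed', or None when no face is found
    (in which case processing would return to image acquisition)."""
    if not detect_face(image):              # Step 302: face detection
        return None                         # no face: back to Step 301
    X, Y = locate_eye_points(image)         # Step 303: 12 eye feature points
    features = normalize(X, Y)              # Step 304: Ed-based normalization
    return 'opened' if classify(features) else 'closed'   # Step 305
```

Because each stage is injected, the Haar-cascade detector, Ed-based normalizer, and trained classifier can each be swapped out independently without changing the pipeline wiring.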
  • FIG. 4 is a structural block diagram of an eye state detection apparatus according to an embodiment of the present disclosure, the apparatus includes: an acquiring sub-circuit 41 , a positioning sub-circuit 42 , a normalization processing sub-circuit 43 and a determining sub-circuit 44 .
  • the acquiring sub-circuit 41 is configured to acquire a target image; the acquiring sub-circuit is, for example, a camera, a video camera, or the like, or can be a program instruction that retrieves a target image.
  • the positioning sub-circuit 42 is configured to position a plurality of eye feature points in the target image to determine position coordinates of the plurality of eye feature points, wherein, for example, eye feature points can include left eye feature points and right eye feature points.
  • the normalization processing sub-circuit 43 is configured to normalize the position coordinates of the plurality of eye feature points to obtain the normalized position feature data; for example, the position coordinates of the plurality of eye feature points can be normalized by using a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points as a standard unit, to obtain the normalized position feature data.
  • the determining sub-circuit 44 is configured to determine an eye state in the target image based on the position feature data.
  • the positioning sub-circuit 42 , the normalization processing sub-circuit 43 , and the determining sub-circuit 44 can be implemented by software, or can be implemented by hardware or firmware.
  • they can be implemented by a general purpose processor, programmable logic circuits, or integrated circuits.
  • the positioning sub-circuit 42 is configured, for example, to detect whether a face is included in the target image, and when detecting that the target image includes a human face, position a plurality of eye feature points in the target image.
  • the normalization processing sub-circuit 43 is configured to: determine a mean value of position coordinates
  • N=12.
  • the determining sub-circuit 44 is configured to classify the position feature data with a classifier.
  • the apparatus further includes: a classifier training sub-circuit 45 configured to train the classifier with sample images to obtain a classifier parameter for eye state detection.
  • embodiments of the present disclosure also provide a computer device suitable for implementing the embodiments of the present disclosure and implementing the method of the foregoing embodiments.
  • the computer device includes a memory and a processor, the memory storing computer program instructions, and when the processor executes the program instructions, the processor performs: acquiring a target image; positioning a plurality of eye feature points in the target image to determine position coordinates of the plurality of eye feature points; normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data; and determining an eye state in the target image based on the position feature data.
  • the eye feature points include left eye feature points and right eye feature points.
  • Normalizing the position coordinates of the plurality of eye feature points, to obtain the normalized position feature data includes: determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points; and normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data.
  • determining the eye state in the target image based on the position feature data includes: classifying the position feature data; and determining an eye state in the target image based on the classification result.
  • determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points includes: determining a mean value of position coordinates of left eye feature points and a mean value of position
  • normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data includes: normalizing the position coordinates of the plurality of eye feature points according to the formulas
  • N=12.
  • classifying the position feature data includes: classifying the position feature data with a classifier.
  • classifying the position feature data further includes: training the classifier with sample images to obtain a classifier parameter for eye state detection.
  • training the classifier with sample images to obtain a classifier parameter for eye state detection includes: acquiring positive and negative sample images from a picture library, wherein the eye state in the positive sample image is the opened eye state, and the eye state in the negative sample image is the closed eye state; positioning the eye feature points in the positive and negative sample images, to obtain position coordinates of the plurality of eye feature points in the positive and negative sample images; normalizing the position coordinates of the plurality of eye feature points, to obtain normalized position feature data; and training the classifier with the position feature data to obtain a classifier parameter for detecting the eye state.
  • positioning the plurality of eye feature points in the target image includes: detecting whether the target image includes a human face; and when detecting that the target image includes a human face, positioning the plurality of eye feature points in the image.
  • FIG. 5 is a schematic structural diagram of a computer device applicable for implementing the embodiments of the present disclosure.
  • the computer system includes a central processing unit (CPU) 501 , which can perform desired actions and processes according to a program stored in a read only memory (ROM) 502 or a program loaded to a random access memory (RAM) 503 from a memory portion 508 .
  • in the RAM 503, various programs and data required for the operation of the system 500 are also stored.
  • the CPU 501 , the ROM 502 , and the RAM 503 are connected to each other via a bus 504 .
  • An input/output (I/O) interface 505 is also connected to the bus 504 .
  • the following components are connected to the I/O interface 505 : an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker and the like; a storage portion 508 including a hard disk or the like; and a communication portion 509 including a network interface card such as a LAN card, a modem, or the like.
  • the communication portion 509 performs communication processing via a network such as the Internet.
  • a drive 510 is also connected to the I/O interface 505 as needed.
  • a removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like is mounted on the drive 510 as needed so that a computer program read therefrom can be installed into the storage portion 508 as needed.
  • an embodiment of the present disclosure includes a computer program product including a computer program tangibly embodied on a machine readable medium.
  • the computer program includes program codes for performing the methods of FIGS. 1-3 .
  • the computer program can be downloaded and installed from the network via the communication portion 509 , and/or installed from the removable medium 511 .
  • each block of the flowcharts or the block diagrams can represent a module, a program segment, or a portion of codes that includes one or more executable instructions for implementing the specified logic functions.
  • the functions noted in the blocks can also occur in a different order than that illustrated in the drawings. For example, two successively represented blocks can be actually executed substantially in parallel, and they can sometimes be executed in the reverse order, depending upon the function involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts can be implemented in a dedicated hardware-based system that performs the specified function or operation. Alternatively, it can be implemented by a combination of dedicated hardware and computer instructions.
  • the sub-circuits or modules described in the embodiments of the present disclosure can be implemented by software or by hardware.
  • the described sub-circuits or modules can also be provided in the processor.
  • the names of these sub-circuits or modules do not in any way constitute a limitation on the sub-circuit or module itself.
  • the present disclosure further provides a computer readable storage medium, which can be a computer readable storage medium included in the apparatus described in the foregoing embodiments, or can exist separately, as a computer readable storage medium that is not assembled into the device.
  • the computer readable storage medium stores one or more programs that are used by one or more processors to perform the eye state detection methods described in this disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The present invention is directed to positioning a plurality of eye feature points in a target image to determine position coordinates of the plurality of eye feature points; normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data; and determining an eye state in the target image based on the position feature data.

Description

The present disclosure is based on PCT/CN2018/118374, filed on Nov. 30, 2018, which claims the priority of the Chinese Patent Application No. 201810394919.1 filed on Apr. 27, 2018, the entire disclosure of which is hereby incorporated by reference.
TECHNICAL FIELD
Embodiments of the present disclosure relate to an eye state detection method, an electronic device, a detecting apparatus, and a computer readable storage medium.
BACKGROUND
Eyes are among the most important features of a human face, and play an extremely important role in computer vision research and applications. Eye state detection has long been a focus of researchers. On the basis of face recognition, detecting the eye state can help various smart devices recognize the state of human eyes, and has broad application prospects in the fields of fatigue detection and visual interaction, for example, driver fatigue detection and filtering of invalid photos.
SUMMARY
According to at least one embodiment of the present disclosure, there is provided an eye state detection method, including: acquiring a target image; positioning a plurality of eye feature points in the target image to determine position coordinates of the plurality of eye feature points; normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data; and determining an eye state in the target image based on the position feature data.
For example, the eye feature points include left eye feature points and right eye feature points. Normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data includes: determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points; and normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data.
For example, determining an eye state in the target image based on the position feature data includes: classifying the position feature data; and determining an eye state in the target image based on the classification result.
For example, determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points includes: determining a mean value of position coordinates

$$El\left(\frac{\sum_{i=1}^{0.5N} X_i}{0.5N}, \frac{\sum_{i=1}^{0.5N} Y_i}{0.5N}\right)$$

of left eye feature points and a mean value of position coordinates

$$Er\left(\frac{\sum_{i=0.5N+1}^{N} X_i}{0.5N}, \frac{\sum_{i=0.5N+1}^{N} Y_i}{0.5N}\right)$$

of right eye feature points, where Xi is the horizontal axis coordinate of the i-th eye feature point, Yi is the vertical axis coordinate of the i-th eye feature point; the value of i ranges from 1 to N, the first to the (0.5N)th eye feature points are left eye feature points, the (0.5N+1)th to the N-th eye feature points are right eye feature points, and N is an even number; and determining the Euclidean distance Ed between El and Er based on El and Er.
For example, normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data includes: normalizing the position coordinates of the plurality of eye feature points according to the formulas

$$X_{inew} = \frac{X_i - \frac{\sum_{i=1}^{N} X_i}{N}}{Ed} \quad \text{and} \quad Y_{inew} = \frac{Y_i - \frac{\sum_{i=1}^{N} Y_i}{N}}{Ed},$$

to obtain new position coordinates of the plurality of eye feature points, as the normalized position feature data, where Xinew is the new coordinate of the horizontal axis of the i-th eye feature point, and Yinew is the new coordinate of the vertical axis of the i-th eye feature point.
For example, N=12.
For example, classifying the position feature data includes: classifying the position feature data with a classifier.
For example, classifying the position feature data further includes: training the classifier with sample images to obtain a classifier parameter for the eye state detection.
For example, training the classifier with sample images to obtain a classifier parameter for the eye state detection includes: acquiring positive and negative sample images from a picture library, wherein the eye state in the positive sample images is an opened eye state, and the eye state in the negative sample images is a closed eye state; positioning the eye feature points in the positive and negative sample images, to obtain position coordinates of the plurality of eye feature points in the positive and negative sample images; normalizing the position coordinates of the plurality of eye feature points, to obtain normalized position feature data; and training the classifier with the position feature data to obtain a classifier parameter for detecting the eye state.
For example, positioning a plurality of eye feature points in the target image includes: detecting whether a human face is included in the target image; and when detecting that the target image includes a human face, positioning the plurality of eye feature points in the target image.
According to at least one embodiment of the present disclosure, there is provided an electronic device including at least one processor, at least one memory, and computer program instructions stored in the memory, when the computer program instructions are executed by the processor, the processor is caused to perform: acquiring a target image; positioning a plurality of eye feature points in the target image to determine position coordinates of the plurality of eye feature points; normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data; and determining an eye state in the target image based on the position feature data.
For example, the eye feature points include left eye feature points and right eye feature points; and normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data includes: determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points; and normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data.
For example, determining an eye state in the target image based on the position feature data includes: classifying the position feature data; and determining an eye state in the target image based on the classification result.
For example, determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points includes: determining a mean value of position coordinates

$$El\left(\frac{\sum_{i=1}^{0.5N} X_i}{0.5N}, \frac{\sum_{i=1}^{0.5N} Y_i}{0.5N}\right)$$

of left eye feature points and a mean value of position coordinates

$$Er\left(\frac{\sum_{i=0.5N+1}^{N} X_i}{0.5N}, \frac{\sum_{i=0.5N+1}^{N} Y_i}{0.5N}\right)$$

of right eye feature points, where Xi is the horizontal axis coordinate of the i-th eye feature point, Yi is the vertical axis coordinate of the i-th eye feature point; the value of i ranges from 1 to N, the first to the (0.5N)th eye feature points are left eye feature points, the (0.5N+1)th to the N-th eye feature points are right eye feature points, and N is an even number; and determining the Euclidean distance Ed between El and Er based on El and Er.
For example, normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data includes: normalizing the position coordinates of the plurality of eye feature points according to the formulas

$$X_{inew} = \frac{X_i - \frac{\sum_{i=1}^{N} X_i}{N}}{Ed} \quad \text{and} \quad Y_{inew} = \frac{Y_i - \frac{\sum_{i=1}^{N} Y_i}{N}}{Ed},$$

to obtain new position coordinates of the plurality of eye feature points, as the normalized position feature data, where Xinew is the new coordinate of the horizontal axis of the i-th eye feature point, and Yinew is the new coordinate of the vertical axis of the i-th eye feature point.
For example, classifying the position feature data includes: classifying the position feature data with a classifier.
For example, classifying the position feature data further includes: training the classifier with sample images to obtain a classifier parameter for the eye state detection.
For example, training the classifier with sample images to obtain a classifier parameter for eye state detection includes: acquiring positive and negative sample images from a picture library, wherein the eye state in the positive sample images is an opened eye state, and the eye state in the negative sample images is a closed eye state; positioning the eye feature points in the positive and negative sample images, to obtain position coordinates of the plurality of eye feature points in the positive and negative sample images; normalizing the position coordinates of the plurality of eye feature points, to obtain normalized position feature data; and training the classifier with the position feature data to obtain a classifier parameter for detecting the eye state.
According to at least one embodiment of the present disclosure, there is provided an eye state detection apparatus, including: an acquiring sub-circuit configured to acquire a target image; a positioning sub-circuit configured to position a plurality of eye feature points in the target image to determine position coordinates of the plurality of eye feature points; a normalization processing sub-circuit configured to normalize the position coordinates of the plurality of eye feature points to obtain the normalized position feature data; and a determining sub-circuit configured to determine an eye state in the target image based on the position feature data.
According to at least one embodiment of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions, when the computer program instructions are executed by a processor, the processor performs the method of the above embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
Other features, objects, and advantages of the present disclosure will become more apparent from detailed description of non-limiting embodiments with reference to the following accompanying drawings:
FIG. 1 is a schematic flowchart diagram of an eye state detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a training process of a classifier according to an embodiment of the present disclosure;
FIG. 3 is a detailed flowchart of an eye state detection method according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of an eye state detection apparatus according to an embodiment of the present disclosure; and
FIG. 5 is a schematic structural diagram of a computer device suitable for implementing an embodiment of the present disclosure according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
The present disclosure will be described in detail in one example with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present disclosure, rather than limitation of the present disclosure. It should also be noted that, for the convenience of description, only parts related to the present disclosure are shown in the drawings.
It should be noted that the embodiments in the present disclosure and the features in the embodiments may be combined with each other without conflict.
The present disclosure will be described in detail below with reference to the drawings and embodiments.
Embodiments of the present disclosure provide an eye state detection method. An eye state may include an opened eye state and a closed eye state. Of course, the eye state may also include a semi-opened and semi-closed state, a fatigue state, a squint state, and so on. The present disclosure describes the eye state detection only by taking the opened eye state and the closed eye state as an example.
As shown in FIG. 1, a schematic flowchart of an eye state detection method according to an embodiment of the present disclosure includes the following steps 101-104. The order of the above steps as described is only an example of the embodiments of the present disclosure, and is not the only order, and other possible execution order is also conceivable by those skilled in the art according to the present disclosure.
In Step 101, a target image is acquired.
The target image may be acquired by a camera, or may be received from a device other than the camera.
In Step 102, a plurality of eye feature points in the target image are positioned to determine position coordinates of the plurality of eye feature points.
In the embodiment of the present disclosure, before determining the position coordinates of the plurality of eye feature points in the target image, it may first be detected whether the target image includes a face, for example, with a face detection technique. A face detection technique is a method of searching a given image using a certain strategy to determine whether a face is contained in it. For example, a linear subspace method, a neural network method, or the like may be used to detect whether a face is included in the target image.
When it is detected that the target image includes a human face, the face region in the target image can be determined, then the plurality of eye feature points in the face region can be positioned, and thereby the position coordinates of the plurality of eye feature points can be determined. For example, the eye feature points can be positioned by an eye feature learning machine. First, positive and negative samples of a plurality of eye feature points are acquired; for example, an image recognition algorithm can be used to detect a plurality of images that may include eye features, to obtain positive and negative samples of a plurality of eye feature points. A positive sample is a sample that includes eye features, and a negative sample is a sample that is similar to an eye feature but is not an eye feature. Second, the eye feature learning machine is trained with a large number of positive and negative samples. Next, the target image, or the face region in the target image, is input into the trained learning machine, which can automatically position the eye feature points in the input face region and determine the position coordinates of the positioned points in the target image.
In the embodiment, the eye feature points include left eye feature points and right eye feature points, which may be, but are not limited to, the eye corner points and edge points of the upper and lower eyelids. The number of feature points for each eye can be set as needed; based on the symmetry of the human eyes in the face, the numbers of left and right eye feature points can be the same. In the embodiment of the present disclosure, the algorithm used for positioning the left and right eye feature points sets the number of eye feature points as 12, that is, 6 left eye feature points and 6 right eye feature points, for example, for each eye, a left eye corner point, a right eye corner point, two edge points of the upper eyelid and two edge points of the lower eyelid. Of course, those skilled in the art understand that the number of left and right eye feature points may vary depending on the required detection accuracy: a large number increases the calculation cost, while a small number may suffer from low positioning accuracy. Therefore, 12 feature points are selected in this example to balance accuracy and calculation amount.
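As a concrete sketch of how left and right eye feature points might be indexed, the snippet below assumes the widely used 68-point facial landmark layout (the dlib/iBUG convention, in which points 36-47 outline the two eyes); this layout, the function name, and the (68, 2) input shape are illustrative assumptions, not part of this disclosure.

```python
import numpy as np

# Indices of the eye landmarks in the common 68-point face layout
# (an assumption borrowed from the dlib/iBUG convention, not from this
# disclosure): points 36-41 outline one eye and points 42-47 the other,
# i.e. 6 points per eye, matching the N = 12 used in this embodiment.
LEFT_EYE = slice(36, 42)
RIGHT_EYE = slice(42, 48)

def extract_eye_points(landmarks):
    """Given a (68, 2) array of (x, y) face landmarks, return a (12, 2)
    array: the 6 left-eye points followed by the 6 right-eye points."""
    landmarks = np.asarray(landmarks, dtype=float)
    assert landmarks.shape == (68, 2), "expects a 68-point landmark array"
    return np.vstack([landmarks[LEFT_EYE], landmarks[RIGHT_EYE]])
```

Any landmark detector that produces per-point (x, y) coordinates in the image coordinate system described below could be substituted for the learning machine.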
In addition, the position coordinates of the eye feature points can be, but not limited to, positioned in a XY axis coordinate system. The coordinate system can take the upper left corner of the target image as the origin, the horizontal direction as the horizontal axis, that is, the X axis, and the vertical direction as the vertical axis, that is, the Y axis.
In Step 103, the position coordinates of the plurality of eye feature points are normalized to obtain normalized position feature data. For example, a Euclidean distance between a mean value of position coordinates of the left eye feature points and a mean value of position coordinates of the right eye feature points is used as a standard unit for normalizing the position coordinates of the plurality of eye feature points.
Taking N as 12 as an example, the horizontal axis coordinates of 12 eye feature points are X (X1, X2, . . . , X11, X12), and the vertical axis coordinates of 12 eye feature points are Y (Y1, Y2, . . . , Y11, Y12).
When Step 103 is performed, a mean value of position coordinates

$$Ea\left(\frac{\sum_{i=1}^{N} X_i}{N}, \frac{\sum_{i=1}^{N} Y_i}{N}\right)$$

of all the eye feature points, a mean value of position coordinates

$$El\left(\frac{\sum_{i=1}^{0.5N} X_i}{0.5N}, \frac{\sum_{i=1}^{0.5N} Y_i}{0.5N}\right)$$

of the left eye feature points and a mean value of position coordinates

$$Er\left(\frac{\sum_{i=0.5N+1}^{N} X_i}{0.5N}, \frac{\sum_{i=0.5N+1}^{N} Y_i}{0.5N}\right)$$

of the right eye feature points can first be determined, where Xi is the horizontal axis coordinate of the i-th eye feature point, Yi is the vertical axis coordinate of the i-th eye feature point, the value of i ranges from 1 to N, the first to the (0.5N)th eye feature points are left eye feature points, the (0.5N+1)th to the N-th eye feature points are right eye feature points, and N is an even number. Then the Euclidean distance Ed between El and Er is calculated. The Euclidean distance can be calculated, for example, through the following formula:
$$\rho = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$$
where ρ is the Euclidean distance between points (x2,y2) and (x1,y1). Then the Euclidean distance between El and Er is:
$$Ed = \sqrt{\left(\frac{\sum_{i=0.5N+1}^{N} X_i}{0.5N} - \frac{\sum_{i=1}^{0.5N} X_i}{0.5N}\right)^2 + \left(\frac{\sum_{i=0.5N+1}^{N} Y_i}{0.5N} - \frac{\sum_{i=1}^{0.5N} Y_i}{0.5N}\right)^2}$$
Finally, according to the formulas

$$X_{inew} = \frac{X_i - \frac{\sum_{i=1}^{N} X_i}{N}}{Ed} \quad \text{and} \quad Y_{inew} = \frac{Y_i - \frac{\sum_{i=1}^{N} Y_i}{N}}{Ed},$$

the position coordinates of the plurality of eye feature points are normalized to obtain new position coordinates of the plurality of eye feature points, where Xinew is the new coordinate of the horizontal axis of the i-th eye feature point, and Yinew is the new coordinate of the vertical axis of the i-th eye feature point.
The obtained new position coordinates of the eye feature points are the position feature data obtained after the normalization process.
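The normalization of Step 103 can be sketched in a few lines of NumPy. The function name and the (N, 2) input layout (left eye points first, then right eye points) are assumptions made for illustration:

```python
import numpy as np

def normalize_eye_points(points):
    """Normalize eye feature point coordinates as in Step 103.

    points: (N, 2) array of (x, y) coordinates, the first N/2 rows being
    left eye points and the last N/2 rows right eye points (N even).
    Returns the (N, 2) array of normalized position feature data."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    el = points[: n // 2].mean(axis=0)   # mean of left eye coordinates (El)
    er = points[n // 2 :].mean(axis=0)   # mean of right eye coordinates (Er)
    ed = np.linalg.norm(el - er)         # Euclidean distance Ed, the standard unit
    ea = points.mean(axis=0)             # mean of all eye coordinates (Ea)
    return (points - ea) / ed            # Xinew, Yinew for every feature point
```

Centering at the overall mean Ea and dividing by the inter-eye distance Ed reproduces the Xinew and Yinew formulas above for every feature point at once.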
In Step 104, an eye state in the target image is determined based on the position feature data. For example, the position feature data can be classified to determine the eye state in the target image.
The obtained position feature data can be classified with a classifier.
For example, the embodiment of the present disclosure can further include: training a classifier with sample images to obtain a classifier parameter for detecting the eye state.
In the embodiment of the present disclosure, the training process of the classifier can be as shown in FIG. 2.
The order of the steps in FIG. 2 as described is merely an example of the embodiments of the present disclosure, and is not the only order, and other possible execution order is also conceivable by those skilled in the art based on the present disclosure.
First, the positive and negative sample images are acquired from a library of sample images of opened eye states and closed eye states. The eye state in the positive sample image is the opened eye state, and the eye state in the negative sample image is the closed eye state.
Then, face detection and eye feature point positioning are performed on the positive and negative sample images to determine position coordinates of the plurality of eye feature points in the positive and negative sample images.
Next, based on the normalization processing principle in the above Step 103, the position coordinates of the plurality of eye feature points are normalized to obtain the normalized position feature data.
Finally, with the obtained position feature data, the classifier is trained to obtain a classifier parameter for detecting the eye state. The classifier can classify the opened eye state or the closed eye state. For example, if the position coordinates of the eye feature points are input into the classifier, the classifier can determine whether the feature at the position coordinates corresponds to an opened eye state or a closed eye state.
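The disclosure does not mandate a particular classifier, so the sketch below uses a minimal nearest-centroid classifier as a stand-in; the function names and the flattened feature layout are illustrative assumptions, and in practice an SVM or a small neural network could be trained on the same normalized position feature data.

```python
import numpy as np

def train_eye_state_classifier(open_feats, closed_feats):
    """Train a minimal nearest-centroid classifier.

    open_feats / closed_feats: arrays of shape (num_samples, 2N) holding
    the normalized position feature data of positive (opened-eye) and
    negative (closed-eye) sample images. The returned classifier
    parameters are simply the two class centroids."""
    return {
        "open": np.asarray(open_feats, dtype=float).mean(axis=0),
        "closed": np.asarray(closed_feats, dtype=float).mean(axis=0),
    }

def predict_eye_state(params, feat):
    """Label a normalized feature vector as 'open' or 'closed' by
    whichever class centroid is closer in Euclidean distance."""
    d_open = np.linalg.norm(feat - params["open"])
    d_closed = np.linalg.norm(feat - params["closed"])
    return "open" if d_open <= d_closed else "closed"
```

Because the features are already normalized, the same trained parameters apply to eyes of any size or position in the target image.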
The present disclosure is further described below in conjunction with specific embodiments, but the present disclosure is not limited to the following embodiments.
FIG. 3 is a schematic flowchart diagram of an eye state detection method according to an embodiment of the present disclosure. The method for detecting the eye may include the following steps 301-305. The order of the above steps as described is only an example of the embodiments of the present disclosure, and is not the only order. Other possible execution order is also conceivable by those skilled in the art according to the present disclosure.
In Step 301, a target image is acquired.
In Step 302, it is detected whether a face is included in the target image.
When it is detected that the target image includes a human face, Step 303 is performed. When it is detected that the target image does not include a human face, the process ends, and the process returns to Step 301.
In Step 303, twelve (12) eye feature points in the target image are positioned to determine position coordinates of the twelve (12) eye feature points.
The horizontal axis coordinates of the 12 eye feature points are X (X1, X2, . . . , X11, X12), and the vertical axis coordinates of the 12 eye feature points are Y (Y1, Y2, . . . , Y11, Y12).
In Step 304, the position coordinates of the 12 eye feature points are normalized by a Euclidean distance between a mean value of position coordinates of the left eye feature points and a mean value of position coordinates of the right eye feature points, to obtain the normalized position feature data.
First, a mean value of position coordinates

$$Ea\left(\frac{\sum_{i=1}^{12} X_i}{12}, \frac{\sum_{i=1}^{12} Y_i}{12}\right)$$

of all the eye feature points, a mean value of position coordinates

$$El\left(\frac{\sum_{i=1}^{6} X_i}{6}, \frac{\sum_{i=1}^{6} Y_i}{6}\right)$$

of the left eye feature points and a mean value of position coordinates

$$Er\left(\frac{\sum_{i=7}^{12} X_i}{6}, \frac{\sum_{i=7}^{12} Y_i}{6}\right)$$

of the right eye feature points are determined, where Xi is the horizontal axis coordinate of the i-th eye feature point, and Yi is the vertical axis coordinate of the i-th eye feature point.

Then, the Euclidean distance Ed between El and Er is calculated.
Finally, according to the formulas

$$X_{inew} = \frac{X_i - \frac{\sum_{i=1}^{12} X_i}{12}}{Ed} \quad \text{and} \quad Y_{inew} = \frac{Y_i - \frac{\sum_{i=1}^{12} Y_i}{12}}{Ed},$$

the position coordinates of the twelve (12) eye feature points are normalized to obtain the new position coordinates of the 12 eye feature points, where Xinew is the new coordinate of the horizontal axis of the i-th eye feature point, and Yinew is the new coordinate of the vertical axis of the i-th eye feature point.
Taking (X1, Y1) as an example:

$$X_{1new} = \frac{X_1 - \frac{\sum_{i=1}^{12} X_i}{12}}{Ed}, \quad Y_{1new} = \frac{Y_1 - \frac{\sum_{i=1}^{12} Y_i}{12}}{Ed}.$$

Calculations are performed similarly for the other eye feature points, to obtain the new position coordinates of the twelve (12) eye feature points: (X1new, X2new . . . X11new, X12new, Y1new, Y2new . . . Y11new, Y12new), that is, the position feature data obtained after the normalization process.
In Step 305, the obtained position feature data is classified by a classifier to determine the eye state in the target image.
In the eye state detection method provided by the embodiment of the present disclosure, the eye state is determined by acquiring a target image; determining position coordinates of the plurality of eye feature points in the target image; normalizing the position coordinates of the plurality of eye feature points, with a Euclidean distance between a mean value of position coordinates of the left eye feature points and a mean value of position coordinates of the right eye feature points as a standard unit, to obtain normalized position feature data; and classifying the position feature data to determine the eye state in the target image. The method can accurately detect the eye state in the target image, and, owing to the normalization process, it is not affected by the size and position of the eye region in the target image and has excellent robustness.
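The robustness claim can be checked directly: because the features are centered at the overall mean and scaled by the inter-eye distance, translating or uniformly scaling the eye region leaves them unchanged. A self-contained check (re-deriving the Step 103 normalization inline, with made-up point coordinates) might look like:

```python
import numpy as np

def normalize(points):
    """Step 103 normalization: center at the overall mean, scale by the
    Euclidean distance between the left-eye and right-eye mean points."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    ed = np.linalg.norm(points[: n // 2].mean(0) - points[n // 2 :].mean(0))
    return (points - points.mean(0)) / ed

# 12 illustrative eye feature points (6 left, 6 right) -- made-up values
pts = np.array([[0, 0], [2, 0], [0, 2], [2, 2], [1, 0], [1, 2],
                [10, 0], [12, 0], [10, 2], [12, 2], [11, 0], [11, 2]], dtype=float)

shifted = pts + np.array([37.0, -5.0])  # eye region moved within the image
scaled = pts * 3.0                       # a larger face / closer camera

# The normalized features are identical in all three cases.
assert np.allclose(normalize(pts), normalize(shifted))
assert np.allclose(normalize(pts), normalize(scaled))
```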
It should be noted that although the operations of the disclosed methods are described in a particular order in the figures, this does not require or imply that the operations must be performed in that particular order, or that all of the operations shown must be performed to achieve the desired results. Instead, the steps depicted in the flowcharts can be executed in a different order. Additionally or alternatively, certain steps can be omitted, multiple steps can be combined into one step, and/or one step can be broken down into multiple steps.
Based on the same inventive concept as the foregoing method, the embodiment of the present disclosure further provides an eye state detection apparatus. For the sake of brevity of the description, the following is only a brief description. FIG. 4 is a structural block diagram of an eye state detection apparatus according to an embodiment of the present disclosure, the apparatus includes: an acquiring sub-circuit 41, a positioning sub-circuit 42, a normalization processing sub-circuit 43 and a determining sub-circuit 44.
The acquiring sub-circuit 41 is configured to acquire a target image; the acquiring sub-circuit is, for example, a camera, a video camera, or the like, or can be a program instruction that retrieves a target image.
The positioning sub-circuit 42 is configured to position a plurality of eye feature points in the target image to determine position coordinates of the plurality of eye feature points, wherein, for example, eye feature points can include left eye feature points and right eye feature points.
The normalization processing sub-circuit 43 is configured to normalize the position coordinates of the plurality of eye feature points to obtain the normalized position feature data; for example, the position coordinates of the plurality of eye feature points can be normalized with a Euclidean distance between a mean value of position coordinates of the left eye feature points and a mean value of position coordinates of the right eye feature points as a standard unit, to obtain the normalized position feature data.
The determining sub-circuit 44 is configured to determine an eye state in the target image based on the position feature data.
For example, the positioning sub-circuit 42, the normalization processing sub-circuit 43, and the determining sub-circuit 44 can be implemented by software, or can be implemented by hardware or firmware. For example, they can be implemented by a general purpose processor, programmable logic circuits, or integrated circuits.
For example, the positioning sub-circuit 42 is configured, for example, to detect whether a face is included in the target image, and when detecting that the target image includes a human face, position a plurality of eye feature points in the target image.
For example, the normalization processing sub-circuit 43 is configured to: determine a mean value of position coordinates

$$El\left(\frac{\sum_{i=1}^{0.5N} X_i}{0.5N}, \frac{\sum_{i=1}^{0.5N} Y_i}{0.5N}\right)$$

of left eye feature points and a mean value of position coordinates

$$Er\left(\frac{\sum_{i=0.5N+1}^{N} X_i}{0.5N}, \frac{\sum_{i=0.5N+1}^{N} Y_i}{0.5N}\right)$$

of right eye feature points, where Xi is the horizontal axis coordinate of the i-th eye feature point, Yi is the vertical axis coordinate of the i-th eye feature point, the value of i ranges from 1 to N, the first to the (0.5N)th eye feature points are left eye feature points, the (0.5N+1)th to the N-th eye feature points are right eye feature points, and N is an even number; calculate the Euclidean distance Ed between El and Er; and normalize the position coordinates of the plurality of eye feature points according to the formulas

$$X_{inew} = \frac{X_i - \frac{\sum_{i=1}^{N} X_i}{N}}{Ed} \quad \text{and} \quad Y_{inew} = \frac{Y_i - \frac{\sum_{i=1}^{N} Y_i}{N}}{Ed},$$

to obtain new position coordinates of the plurality of eye feature points, as the normalized position feature data, where Xinew is the new coordinate of the horizontal axis of the i-th eye feature point, and Yinew is the new coordinate of the vertical axis of the i-th eye feature point.
For example, in the embodiment, N=12.
For example, the determining sub-circuit 44 is configured to classify the position feature data with a classifier.
In one example, the apparatus further includes: a classifier training sub-circuit 45 configured to train the classifier with sample images to obtain a classifier parameter for eye state detection.
It should be understood that the sub-systems or sub-circuits described in relation to the above-described apparatus for detecting the opened-eye or closed-eye state correspond to the respective steps in the method described with reference to FIGS. 1-3. Thus, the operations and features described above for the method are also applicable to the eye state detection apparatus and the sub-circuits included therein, details of which will not be repeated herein.
Based on the same inventive concept, embodiments of the present disclosure also provide a computer device suitable for implementing the embodiments of the present disclosure and implementing the method of the foregoing embodiments.
For example, the computer device includes a memory and a processor, the memory storing computer program instructions; when the processor executes the program instructions, the processor performs: acquiring a target image; positioning a plurality of eye feature points in the target image to determine position coordinates of the plurality of eye feature points; normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data; and determining an eye state in the target image based on the position feature data.
For example, the eye feature points include left eye feature points and right eye feature points. Normalizing the position coordinates of the plurality of eye feature points, to obtain the normalized position feature data, includes: determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points; and normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data.
For example, determining the eye state in the target image based on the position feature data includes: classifying the position feature data; and determining an eye state in the target image based on the classification result.
For example, determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points includes: determining a mean value of position coordinates
El( (Σ_{i=1}^{0.5N} Xi)/(0.5N), (Σ_{i=1}^{0.5N} Yi)/(0.5N) )
of left eye feature points and a mean value of position coordinates
Er( (Σ_{i=0.5N+1}^{N} Xi)/(0.5N), (Σ_{i=0.5N+1}^{N} Yi)/(0.5N) )
of right eye feature points, where Xi is the horizontal axis coordinate of the i-th eye feature point, Yi is the vertical axis coordinate of the i-th eye feature point; the value of i ranges from 1 to N, the first to the (0.5N)th eye feature points are left eye feature points, the (0.5N+1)th to the N-th eye feature points are right eye feature points, and N is an even number; and determining the Euclidean distance Ed between El and Er based on El and Er.
For example, normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data includes: normalizing the position coordinates of the plurality of eye feature points according to the formulas
Xinew = (Xi − (Σ_{i=1}^{N} Xi)/N) / Ed and Yinew = (Yi − (Σ_{i=1}^{N} Yi)/N) / Ed,
to obtain new position coordinates of the plurality of eye feature points, as the normalized position feature data; where Xinew is the new coordinate of the horizontal axis of the i-th eye feature point, and Yinew is the new coordinate of the vertical axis of the i-th eye feature point.
For example, N=12.
For example, classifying the position feature data includes: classifying the position feature data with a classifier.
For example, classifying the position feature data further includes: training the classifier with sample images to obtain a classifier parameter for eye state detection.
For example, training the classifier with sample images to obtain a classifier parameter for eye state detection includes: acquiring positive and negative sample images from a picture library, wherein the eye state in the positive sample image is the opened eye state, and the eye state in the negative sample image is the closed eye state; positioning the eye feature points in the positive and negative sample images, to obtain position coordinates of the plurality of eye feature points in the positive and negative sample images; normalizing the position coordinates of the plurality of eye feature points, to obtain normalized position feature data; and training the classifier with the position feature data to obtain a classifier parameter for detecting the eye state.
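The disclosure leaves the classifier type open, so as a rough, non-authoritative illustration of this training-then-classifying loop, the sketch below fits a minimal nearest-centroid classifier in NumPy. The "samples" are synthetic stand-ins for normalized landmark feature vectors (opened-eye samples are given a wider spread around the eye center than closed-eye ones); every name and number here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_samples(opened, count):
    """Synthetic stand-ins for normalized eye-landmark feature vectors.

    Opened-eye samples get a larger spread around zero than closed-eye ones,
    mimicking eyelid landmarks spreading apart when the eye opens.
    """
    spread = 0.3 if opened else 0.05
    return rng.normal(0.0, spread, size=(count, 24))  # 12 points -> 24 values

def train(positive, negative):
    """The 'classifier parameter' here is simply the two class centroids of |features|."""
    return {"opened": np.abs(positive).mean(axis=0),
            "closed": np.abs(negative).mean(axis=0)}

def classify(params, features):
    """Assign the label whose centroid is nearest to |features|."""
    dists = {label: np.linalg.norm(np.abs(features) - centroid)
             for label, centroid in params.items()}
    return min(dists, key=dists.get)

params = train(make_samples(True, 200), make_samples(False, 200))
pred_open = classify(params, np.full(24, 0.30))    # wide-spread probe vector
pred_closed = classify(params, np.full(24, 0.02))  # narrow-spread probe vector
```

A real implementation would more likely use an SVM or similar classifier on the normalized position feature data; the nearest-centroid rule is used here only to keep the positive/negative training flow self-contained.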
For example, positioning the plurality of eye feature points in the target image includes: detecting whether the target image includes a human face; and when detecting that the target image includes a human face, positioning the plurality of eye feature points in the image.
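The guard described here, positioning eye feature points only after a face has actually been detected, is plain control flow. In the sketch below, `detect_face` and `locate_eye_points` are hypothetical stand-ins for whatever face detector and landmark model an implementation uses, and the dictionaries stand in for decoded image data:

```python
def detect_face(image):
    """Hypothetical detector stub: report whether the image contains a face.

    A real implementation would run e.g. a cascade or CNN face detector here.
    """
    return image.get("has_face", False)

def locate_eye_points(image):
    """Hypothetical landmark stub: return the eye feature point coordinates."""
    return image.get("eye_points", [])

def position_eye_feature_points(image):
    """Position eye feature points only when a face is detected in the image."""
    if not detect_face(image):
        return None  # no face found: skip landmark positioning entirely
    return locate_eye_points(image)

with_face = {"has_face": True, "eye_points": [(100, 200), (160, 200)]}
without_face = {"has_face": False}
```

Skipping landmark positioning when no face is present avoids spending the more expensive landmark step on frames that cannot contain eyes.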
As shown in FIG. 5, a schematic structural diagram of a computer device suitable for implementing an embodiment of the present disclosure is provided.
As shown in FIG. 5, the computer system includes a central processing unit (CPU) 501, which can perform desired actions and processes according to a program stored in a read only memory (ROM) 502 or a program loaded into a random access memory (RAM) 503 from a storage portion 508. In the RAM 503, various programs and data required for the operation of the system 500 are also stored. The CPU 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker and the like; a storage portion 508 including a hard disk or the like; and a communication portion 509 including a network interface card such as a LAN card, a modem, or the like. The communication portion 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like is mounted on the drive 510 as needed so that a computer program read therefrom can be installed into the storage portion 508 as needed.
In particular, the processes described above with reference to FIGS. 1-3 can be implemented as a computer software program in accordance with embodiments of the present disclosure. For example, an embodiment of the present disclosure includes a computer program product including a computer program tangibly embodied on a machine readable medium. The computer program includes program codes for performing the methods of FIGS. 1-3. In such an embodiment, the computer program can be downloaded and installed from the network via the communication portion 509, and/or installed from the removable medium 511.
The flowchart and block diagrams in the drawings illustrate the architecture, function, and operation of possible implementations of the system, the method, and the computer program product in accordance with various embodiments of the present disclosure. In this regard, each block of the flowcharts or the block diagrams can represent a module, a program segment, or a portion of codes that includes one or more executable instructions for implementing the specified logic functions. It should also be noted that in some alternative implementations, the functions noted in the blocks can also occur in a different order than that illustrated in the drawings. For example, two successively represented blocks can be actually executed substantially in parallel, and they can sometimes be executed in the reverse order, depending upon the function involved. It is also noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented in a dedicated hardware-based system that performs the specified function or operation. Alternatively, it can be implemented by a combination of dedicated hardware and computer instructions.
The sub-circuits or modules described in the embodiments of the present disclosure can be implemented by software or by hardware. The described sub-circuits or modules can also be provided in the processor. The names of these sub-circuits or modules do not in any way constitute a limitation on the sub-circuit or module itself.
In another aspect, the present disclosure further provides a computer readable storage medium, which can be a computer readable storage medium included in the apparatus described in the foregoing embodiments, or can exist separately, as a computer readable storage medium that is not assembled into the device. The computer readable storage medium stores one or more programs that are used by one or more processors to perform the eye state detection methods described in this disclosure.
The above description covers only preferred embodiments of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the present disclosure is not limited to the specific combination of the above technical features; it also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with, but not limited to, technical features having similar functions as disclosed in the present disclosure.

Claims (20)

What is claimed is:
1. An eye state detection method, comprising:
acquiring a target image;
positioning a plurality of eye feature points in the target image to determine position coordinates of the plurality of eye feature points;
normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data; and
determining an eye state in the target image based on the position feature data.
2. The method according to claim 1, wherein the eye feature points comprise left eye feature points and right eye feature points; and normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data comprises:
determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points; and
normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data.
3. The method according to claim 2, wherein determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points comprises:
determining a mean value of position coordinates
El( (Σ_{i=1}^{0.5N} Xi)/(0.5N), (Σ_{i=1}^{0.5N} Yi)/(0.5N) )
of left eye feature points and a mean value of position coordinates
Er( (Σ_{i=0.5N+1}^{N} Xi)/(0.5N), (Σ_{i=0.5N+1}^{N} Yi)/(0.5N) )
of right eye feature points, where Xi is the horizontal axis coordinate of the i-th eye feature point, Yi is the vertical axis coordinate of the i-th eye feature point; the value of i ranges from 1 to N, the first to the (0.5N)th eye feature points are left eye feature points, the (0.5N+1)th to the N-th eye feature points are right eye feature points, and N is an even number; and determining the Euclidean distance Ed between El and Er based on El and Er.
4. The method according to claim 3, wherein normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data comprises: normalizing the position coordinates of the plurality of eye feature points according to the formulas
Xinew = (Xi − (Σ_{i=1}^{N} Xi)/N) / Ed and Yinew = (Yi − (Σ_{i=1}^{N} Yi)/N) / Ed,
to obtain new position coordinates of the plurality of eye feature points, as the normalized position feature data; where Xinew is the new coordinate of the horizontal axis of the i-th eye feature point, and Yinew is the new coordinate of the vertical axis of the i-th eye feature point.
5. The method according to claim 3, wherein N=12.
6. The method according to claim 1, wherein determining an eye state in the target image based on the position feature data comprises: classifying the position feature data; and determining an eye state in the target image based on the classification result.
7. The method according to claim 6, wherein classifying the position feature data comprises: classifying the position feature data with a classifier.
8. The method according to claim 7, wherein classifying the position feature data further comprises: training the classifier with sample images to obtain a classifier parameter for eye state detection.
9. The method according to claim 8, wherein training the classifier with sample images to obtain a classifier parameter for eye state detection comprises:
acquiring positive sample images and negative sample images from a picture library, wherein the eye state in the positive sample images is an opened eye state, and the eye state in the negative sample images is a closed eye state;
positioning the eye feature points in the positive and negative sample images, to obtain position coordinates of the plurality of eye feature points in the positive and negative sample images;
normalizing the position coordinates of the plurality of eye feature points, to obtain normalized position feature data;
training the classifier with the position feature data to obtain a classifier parameter for detecting the eye state.
10. The method according to claim 1, wherein positioning a plurality of eye feature points in the target image comprises:
detecting whether a human face is comprised in the target image; and when detecting that the target image comprises a human face, positioning the plurality of eye feature points in the target image.
11. A computer readable storage medium having stored thereon computer program instructions, when the computer program instructions are executed by a processor, the processor is caused to perform the method according to claim 1.
12. An electronic device comprising at least one processor, at least one memory, and computer program instructions stored in the memory, when the computer program instructions are executed by the processor, the processor is configured to perform:
acquiring a target image;
positioning a plurality of eye feature points in the target image to determine position coordinates of the plurality of eye feature points;
normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data; and
determining an eye state in the target image based on the position feature data.
13. The electronic device according to claim 12, wherein the eye feature points comprise left eye feature points and right eye feature points; and
normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data comprises:
determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points, and
normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data.
14. The electronic device according to claim 13, wherein determining a Euclidean distance between a mean value of position coordinates of left eye feature points and a mean value of position coordinates of right eye feature points comprises:
determining a mean value of position coordinates
El( (Σ_{i=1}^{0.5N} Xi)/(0.5N), (Σ_{i=1}^{0.5N} Yi)/(0.5N) )
of left eye feature points and a mean value of position coordinates
Er( (Σ_{i=0.5N+1}^{N} Xi)/(0.5N), (Σ_{i=0.5N+1}^{N} Yi)/(0.5N) )
of right eye feature points, where Xi is the horizontal axis coordinate of the i-th eye feature point, Yi is the vertical axis coordinate of the i-th eye feature point; the value of i ranges from 1 to N, the first to the (0.5N)th eye feature points are left eye feature points, the (0.5N+1)th to the N-th eye feature points are right eye feature points, and N is an even number; and
determining the Euclidean distance Ed between El and Er based on El and Er.
15. The electronic device according to claim 14, wherein normalizing the position coordinates of the plurality of eye feature points by the Euclidean distance as a standard unit, to obtain normalized position feature data comprises:
normalizing the position coordinates of the plurality of eye feature points according to the formulas
Xinew = (Xi − (Σ_{i=1}^{N} Xi)/N) / Ed and Yinew = (Yi − (Σ_{i=1}^{N} Yi)/N) / Ed,
to obtain new position coordinates of the plurality of eye feature points, as the normalized position feature data; where Xinew is the new coordinate of the horizontal axis of the i-th eye feature point, and Yinew is the new coordinate of the vertical axis of the i-th eye feature point.
16. The electronic device according to claim 12, wherein determining an eye state in the target image based on the position feature data comprises:
classifying the position feature data; and
determining an eye state in the target image based on the classification result.
17. The electronic device according to claim 16, wherein classifying the position feature data comprises:
classifying the position feature data with a classifier.
18. The electronic device according to claim 17, wherein the classifying the position feature data further comprises:
training the classifier with sample images to obtain a classifier parameter for eye state detection.
19. The electronic device according to claim 18, wherein training the classifier with sample images to obtain a classifier parameter for eye state detection comprises:
acquiring positive sample images and negative sample images from a picture library, wherein the eye state in the positive sample images is an opened eye state, and the eye state in the negative sample images is a closed eye state;
positioning the eye feature points in the positive and negative sample images, to obtain position coordinates of the plurality of eye feature points in the positive and negative sample images;
normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data; and
training the classifier with the position feature data to obtain a classifier parameter for detecting the eye state.
20. An eye state detection apparatus, comprising:
an acquiring sub-circuit configured to acquire a target image;
a positioning sub-circuit configured to position a plurality of eye feature points in the target image to determine position coordinates of the plurality of eye feature points;
a normalization processing sub-circuit configured to normalize the position coordinates of the plurality of eye feature points to obtain the normalized position feature data; and
a determining sub-circuit configured to determine an eye state in the target image based on the position feature data.
US16/473,491 2018-04-27 2018-11-30 Eye state detection method, electronic device, detecting apparatus and computer readable storage medium Active 2040-02-19 US11386710B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201810394919.1 2018-04-27
CN201810394919.1A CN108615014B (en) 2018-04-27 2018-04-27 Eye state detection method, device, equipment and medium
PCT/CN2018/118374 WO2019205633A1 (en) 2018-04-27 2018-11-30 Eye state detection method and detection apparatus, electronic device, and computer readable storage medium

Publications (2)

Publication Number Publication Date
US20210357617A1 US20210357617A1 (en) 2021-11-18
US11386710B2 true US11386710B2 (en) 2022-07-12

Family

ID=63661343

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/473,491 Active 2040-02-19 US11386710B2 (en) 2018-04-27 2018-11-30 Eye state detection method, electronic device, detecting apparatus and computer readable storage medium

Country Status (3)

Country Link
US (1) US11386710B2 (en)
CN (1) CN108615014B (en)
WO (1) WO2019205633A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615014B (en) * 2018-04-27 2022-06-21 京东方科技集团股份有限公司 Eye state detection method, device, equipment and medium
CN110555426A (en) * 2019-09-11 2019-12-10 北京儒博科技有限公司 Sight line detection method, device, equipment and storage medium
CN110941333A (en) * 2019-11-12 2020-03-31 北京字节跳动网络技术有限公司 Interaction method, device, medium and electronic equipment based on eye movement
CN110866508B (en) * 2019-11-20 2023-06-27 Oppo广东移动通信有限公司 Method, device, terminal and storage medium for identifying the form of a target object
CN112989890B (en) * 2019-12-17 2024-08-02 腾讯科技(深圳)有限公司 Image detection method, device and storage medium
CN111178278B (en) * 2019-12-30 2022-04-08 上海商汤临港智能科技有限公司 Line-of-sight direction determination method, device, electronic device and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101030316A (en) 2007-04-17 2007-09-05 北京中星微电子有限公司 Safety driving monitoring system and method for vehicle
US20080181508A1 (en) * 2007-01-30 2008-07-31 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and program
US20080317385A1 (en) * 2007-06-22 2008-12-25 Nintendo Co., Ltd. Storage medium storing an information processing program, information processing apparatus and information processing method
US20100160049A1 (en) * 2008-12-22 2010-06-24 Nintendo Co., Ltd. Storage medium storing a game program, game apparatus and game controlling method
US20120154550A1 (en) * 2010-12-20 2012-06-21 Sony Corporation Correction value calculation apparatus, compound eye imaging apparatus, and method of controlling correction value calculation apparatus
US20130093847A1 (en) * 2010-06-28 2013-04-18 Fujifilm Corporation Stereoscopic image capture device and control method of the same
CN106228293A (en) 2016-07-18 2016-12-14 重庆中科云丛科技有限公司 teaching evaluation method and system
CN106485191A (en) 2015-09-02 2017-03-08 腾讯科技(深圳)有限公司 A kind of method for detecting fatigue state of driver and system
US20170140210A1 (en) * 2015-11-16 2017-05-18 Canon Kabushiki Kaisha Image processing apparatus and image processing method
CN107704805A (en) 2017-09-01 2018-02-16 深圳市爱培科技术股份有限公司 method for detecting fatigue driving, drive recorder and storage device
CN108615014A (en) 2018-04-27 2018-10-02 京东方科技集团股份有限公司 A kind of detection method of eye state, device, equipment and medium
US20200401879A1 (en) * 2019-06-19 2020-12-24 LegInsight, LLC Systems and methods for predicting whether experimental legislation will become enacted into law
US20210209851A1 (en) * 2019-05-15 2021-07-08 Beijing Sensetime Technology Development Co., Ltd. Face model creation
US20210271865A1 (en) * 2018-12-12 2021-09-02 Mitsubishi Electric Corporation State determination device, state determination method, and recording medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100438841B1 (en) * 2002-04-23 2004-07-05 삼성전자주식회사 Method for verifying users and updating the data base, and face verification system using thereof
CN100389388C (en) * 2006-06-15 2008-05-21 北京中星微电子有限公司 Screen protection method and device based on face authentication
CN101339607B (en) * 2008-08-15 2012-08-01 北京中星微电子有限公司 Human face recognition method and system, human face recognition model training method and system
CN102663361B (en) * 2012-04-01 2014-01-01 北京工业大学 Face image reversible geometric normalization method facing overall characteristics analysis
CN103793720B (en) * 2014-02-12 2017-05-31 北京海鑫科金高科技股份有限公司 A kind of eye locating method and system
CN106529409B (en) * 2016-10-10 2019-08-09 中山大学 A Method for Measuring Eye Gaze Angle Based on Head Posture


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
International Search Report and Written Opinion dated Mar. 1, 2019 from State Intellectual Property Office of the P.R. China.

Also Published As

Publication number Publication date
US20210357617A1 (en) 2021-11-18
CN108615014B (en) 2022-06-21
CN108615014A (en) 2018-10-02
WO2019205633A1 (en) 2019-10-31

Similar Documents

Publication Publication Date Title
US11386710B2 (en) Eye state detection method, electronic device, detecting apparatus and computer readable storage medium
US11182592B2 (en) Target object recognition method and apparatus, storage medium, and electronic device
US20230030267A1 (en) Method and apparatus for selecting face image, device, and storage medium
CN110020592B (en) Object detection model training method, device, computer equipment and storage medium
US20210103763A1 (en) Method and apparatus for processing laser radar based sparse depth map, device and medium
US20210124928A1 (en) Object tracking methods and apparatuses, electronic devices and storage media
CN111310826B (en) Annotation anomaly detection method, device and electronic equipment for sample set
US8792722B2 (en) Hand gesture detection
US8520956B2 (en) Optimized correlation filters for signal processing
US20180300589A1 (en) System and method using machine learning for iris tracking, measurement, and simulation
US7983480B2 (en) Two-level scanning for memory saving in image detection systems
US20230334235A1 (en) Detecting occlusion of digital ink
CN110909568A (en) Image detection method, apparatus, electronic device, and medium for face recognition
US12112522B2 (en) Defect detecting method based on dimensionality reduction of data, electronic device, and storage medium
CN111079638A (en) Target detection model training method, device and medium based on convolutional neural network
WO2017092679A1 (en) Eyeball tracking method and apparatus, and device
US11694331B2 (en) Capture and storage of magnified images
CN112989768B (en) Method, device, electronic device and storage medium for correcting multiple-line questions
US20080304699A1 (en) Face feature point detection apparatus and method of the same
US20150131873A1 (en) Exemplar-based feature weighting
CN114255339A (en) A method, device and storage medium for identifying breakpoints of power transmission wires
CN110910445B (en) Object size detection method, device, detection equipment and storage medium
CN115908409A (en) Photovoltaic sheet defect detection method, detection device, computer equipment and medium
Devadethan et al. Face detection and facial feature extraction based on a fusion of knowledge based method and morphological image processing
CN113971671B (en) Instance segmentation method, device, electronic device and storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: BOE TECHNOLOGY GROUP CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:XU, CHU;REEL/FRAME:049587/0226

Effective date: 20190606

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: BEIJING BOE TECHNOLOGY DEVELOPMENT CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BOE TECHNOLOGY GROUP CO., LTD.;REEL/FRAME:064397/0480

Effective date: 20230726

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4