CN109213325B - Eye gesture feature acquisition method and eye gesture recognition system - Google Patents


Info

Publication number
CN109213325B
CN109213325B (application CN201811060286.7A)
Authority
CN
China
Prior art keywords
eye
feature data
image
looking
posture state
Prior art date
Legal status
Expired - Fee Related
Application number
CN201811060286.7A
Other languages
Chinese (zh)
Other versions
CN109213325A (en)
Inventor
李宗宏
李威寰
Current Assignee
Qisda Optronics Suzhou Co Ltd
Qisda Corp
Original Assignee
Qisda Optronics Suzhou Co Ltd
Qisda Corp
Priority date
Filing date
Publication date
Application filed by Qisda Optronics Suzhou Co Ltd, Qisda Corp
Priority to CN201811060286.7A
Publication of CN109213325A
Application granted
Publication of CN109213325B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/193: Preprocessing; Feature extraction

Abstract

The invention provides an eye gesture feature acquisition method and an eye gesture recognition system. The eye gesture feature acquisition method comprises the following steps: a1) fixing the distance between an image capturing device and the eye, and determining the positions of the first canthus and the second canthus with the image capturing device; a2) locating the positions of the eyeball, the eye white and the eyelid with concentric circles; a3) capturing, with the image capturing device, an image of the region where the eye is located, and calculating a first distance between the center of the second circle image and the first canthus image, a second distance between the center of the second circle image and the second canthus image, the number of first skin color pixels in the first circle image, the number of second skin color pixels in the second circle image, the total number of eye white pixels, and the first to fourth numbers of eye white pixels. The method can acquire reference feature data for known eye gesture states and first feature data for an eye gesture state to be recognized, and compare the first feature data with the reference feature data to obtain the eye gesture state corresponding to the first feature data, making eye gesture recognition accurate.

Description

Eye gesture feature acquisition method and eye gesture recognition system
Technical Field
The invention relates to the field of eye gesture recognition, and in particular to an eye gesture feature acquisition method and an eye gesture recognition system.
Background
At present, electronic devices such as displays, mobile phones, computers, televisions, VR (virtual reality) devices and AR (augmented reality) devices are operated through human-machine interfaces. As technology has advanced, the ways in which users communicate with devices have diversified: the user input received by a human-machine interface is no longer limited to commands generated by keyboards, mice and touch screens, and many interfaces also accept commands generated by voice, handwriting, gestures and the like. However, conventional VR and AR devices usually display their images inside a goggle-style head-mounted enclosure, where receiving commands through the interfaces above is inconvenient, so a new approach is needed to overcome this problem.
Disclosure of Invention
In view of the problems in the prior art, an object of the present invention is to provide an eye gesture feature acquisition method and an eye gesture recognition system that can recognize the eye gesture state of a user and execute the corresponding operation command according to that state.
In order to achieve the above object, the present invention provides an eye gesture feature acquisition method for acquiring feature information of an eye in different eye gesture states, the eye including an eyeball, an eye white, an eyelid, a first canthus and a second canthus, and the eye gesture states including looking left, looking right, looking up, looking down, looking centered, blinking and closing the eyes, the eye gesture feature acquisition method comprising the following steps:
a1) fixing the distance between an image capturing device and the eye, the image capturing device determining a first position of the first canthus and a second position of the second canthus by color comparison;
a2) the image capturing device generating movable concentric circles over the eye and locating the positions of the eyeball, the eye white and the eyelid with the concentric circles, wherein the concentric circles include a first circle and a second circle, and the diameter of the first circle is smaller than that of the second circle;
a3) judging whether the concentric circles can be positioned on the eyeball, the eye white and the eyelid; if not, the eye gesture state is blinking or closed eyes, and the image capturing device adjusts the position of the concentric circles so that the first position and the second position lie on the circumference of the second circle; if so, the eye gesture state is looking left, looking right, looking up, looking down or looking centered, and the image capturing device adjusts the position of the concentric circles so that the eyeball and part of the eyelid lie within the first circle and the eye white lies outside the first circle;
a4) the image capturing device capturing an image of the region where the eye is located; when the eye gesture state is blinking or closed eyes, the duration for which the eye is closed is also recorded, and the image includes an eyelid image, a first position image, a second position image and a concentric circle image; when the eye gesture state is looking left, looking right, looking up, looking down or looking centered, the image includes an eyeball image, an eye white image, an eyelid image, a first position image, a second position image and a concentric circle image; the concentric circle image includes a first circle image and a second circle image, and the second circle image includes a first region, a second region, a third region and a fourth region;
a5) respectively calculating a first distance between the center of the second circle image and the first position image, a second distance between the center of the second circle image and the second position image, a first number of skin color pixels of the eyelid image within the first circle image, a second number of skin color pixels of the eyelid image within the second circle image, a total number of eye white pixels of the eye white image within the second circle image, a first number of eye white pixels within the first region, a second number of eye white pixels within the second region, a third number of eye white pixels within the third region and a fourth number of eye white pixels within the fourth region, to obtain the feature data of the image.
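The nine quantities calculated in step a5), together with the eye-closure duration recorded in step a4), constitute one feature record. As an illustrative aid only, a minimal sketch of such a record in Python, using the symbols An, Bn, En, Dn, Csn and C1n to C4n introduced in the detailed description (the class and field names are assumptions made for illustration, not part of the patent):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EyeGestureFeatures:
    """One feature record obtained from a single captured image (step a5)."""
    A: float   # first distance: second-circle center to first canthus (An)
    B: float   # second distance: second-circle center to second canthus (Bn)
    E: int     # skin color pixels of the eyelid image inside the first circle (En)
    D: int     # skin color pixels of the eyelid image inside the second circle (Dn)
    Cs: int    # total eye white pixels inside the second circle (Csn)
    C1: int    # eye white pixels in the first region of the second circle (C1n)
    C2: int    # eye white pixels in the second region (C2n)
    C3: int    # eye white pixels in the third region (C3n)
    C4: int    # eye white pixels in the fourth region (C4n)
    closed_duration: Optional[float] = None  # seconds; recorded only for blinking/closed eyes
```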
Optionally, in step a4), when capturing the image of the region where the eye is located, the image capturing device filters out images of unintentional spontaneous blinks, where a spontaneous blink includes a spontaneous eye closing and a spontaneous eye opening, the duration of the spontaneous eye closing has a first time range, and the duration of the spontaneous eye opening has a second time range; the blinking eye gesture state includes an active eye closing and an active eye opening, the duration of the active eye closing has a third time range, and the duration of the active eye opening has a fourth time range; and the duration of the closed-eyes state has a fifth time range.
Optionally, the first region, the second region, the third region and the fourth region are sectors of equal area sharing the same center, arranged sequentially in the counterclockwise direction.
Optionally, the sum of the first number of eye white pixels, the second number of eye white pixels, the third number of eye white pixels and the fourth number of eye white pixels equals the total number of eye white pixels.
Optionally, when the eye gesture feature acquisition method is used to acquire feature data from an image of the eye in a known eye gesture state, the feature data is reference feature data; when the method is used to acquire feature data from an image of the eye in an eye gesture state to be recognized, the feature data is first feature data.
Optionally, when the method is used to acquire the reference feature data, the eye performs only the looking-centered eye gesture state, and the image capturing device acquires and stores the reference feature data of the image of the eye in that state.
Optionally, when the method is used to acquire the reference feature data, the eye first performs the looking-centered eye gesture state; after the image capturing device acquires and stores the reference feature data of the image of the eye in that state, the eye performs the other eye gesture states in sequence, and the image capturing device acquires and stores the reference feature data of the image of the eye in each of them.
Optionally, the first circle has a first fixed radius, determined as follows: when the method is used to acquire the reference feature data of the image of the eye in the looking-centered state, in step a3) the image capturing device first adjusts the position of the concentric circles and the radius of the first circle so that the eyeball, part of the eye white and part of the eyelid lie within the first circle, then gradually reduces the radius of the first circle while adjusting its position until the whole eyeball lies within the first circle and no eye white lies within it; the radius of the first circle at that point equals the first fixed radius.
Optionally, the second circle has a second fixed radius, determined as follows: when the method is used to acquire the reference feature data of the image of the eye in the looking-centered state, in step a3) the image capturing device first adjusts the position of the concentric circles so that the eyeball lies entirely within the first circle at the first fixed radius, and then adjusts the radius of the second circle so that the first position (or, alternatively, the second position) lies on the circumference of the second circle; the distance between the center of the first circle and that position equals the second fixed radius.
The present invention also provides an eye gesture recognition system, comprising:
a correction and feature value recording module, including the image capturing device applying the above eye gesture feature acquisition method, for acquiring reference feature data of images of the eye in known eye gesture states and first feature data of an image of the eye in an eye gesture state to be recognized, where both the reference feature data and the first feature data include the first distance, the second distance, the first number of skin color pixels, the second number of skin color pixels, the total number of eye white pixels, the first number of eye white pixels, the second number of eye white pixels, the third number of eye white pixels and the fourth number of eye white pixels;
an eye gesture comparison module, electrically connected to the correction and feature value recording module, for retrieving the reference feature data and the first feature data and comparing the first feature data with the reference feature data to obtain the eye gesture state corresponding to the first feature data;
and an execution module, electrically connected to the eye gesture comparison module, for obtaining the eye gesture state and executing the corresponding operation command according to it.
Optionally, when the image capturing device acquires reference feature data for images of the eye in all of the eye gesture states, each eye gesture state has one group of reference feature data, so that there are multiple groups; the eye gesture state corresponding to the first feature data is obtained by comparing the first feature data with each group of reference feature data in turn.
Optionally, when the image capturing device acquires only the reference feature data of the image of the eye in the looking-centered state, the eye gesture comparison module determines whether the eye gesture state corresponding to the first feature data is open eyes, closed eyes or blinking, where open eyes includes looking left, looking right, looking up, looking down and looking centered:
b1) comparing the second number of skin color pixels of the first feature data with the second number of skin color pixels of the reference feature data to judge whether the eye gesture state corresponding to the first feature data is closed eyes; if so, going to step b2); if not, going to step b4);
b2) comparing the first number of skin color pixels of the first feature data with the first number of skin color pixels of the reference feature data to judge whether the eye gesture state corresponding to the first feature data is closed eyes; if so, going to step b3); if not, going to step b4);
b3) judging whether the duration of the eye gesture state corresponding to the first feature data falls within the fifth time range; if so, confirming that the state is closed eyes and ending the confirmation process; if not, going to step b4);
b4) judging whether the eye gesture state corresponding to the first feature data includes both an active eye closing and an active eye opening within its duration; if so, going to step b5); if not, going to step b6);
b5) judging whether the duration of the active eye closing falls within the third time range and the duration of the active eye opening falls within the fourth time range; if both do, confirming that the eye gesture state corresponding to the first feature data is blinking and ending the confirmation process; if not, going to step b6);
b6) determining the eye gesture state to be open eyes;
b7) comparing the first distance in the first feature data with the first distance in the reference feature data to preliminarily judge whether the eye gesture state corresponding to the first feature data is looking left or looking right;
b8) if the state is preliminarily judged to be looking left, comparing the second distance of the first feature data with the second distance of the reference feature data to judge whether the state is looking left; if so, comparing the fourth number of eye white pixels of the first feature data with that of the reference feature data to judge whether the state is looking left; if both judgments hold, confirming that the eye gesture state corresponding to the first feature data is looking left and ending the confirmation process; if either does not, re-acquiring the first feature data and going to step b1);
if the state is preliminarily judged to be looking right, comparing the second distance of the first feature data with the second distance of the reference feature data to judge whether the state is looking right; if so, comparing the third number of eye white pixels of the first feature data with that of the reference feature data to judge whether the state is looking right; if both judgments hold, confirming that the eye gesture state corresponding to the first feature data is looking right and ending the confirmation process; if not, re-acquiring the first feature data and going to step b1);
if the state is preliminarily judged to be neither looking left nor looking right, going to step b9);
b9) comparing the second distance in the first feature data with the second distance of the reference feature data to judge whether the eye gesture state corresponding to the first feature data is looking centered or looking up;
b10) if the state is judged to be looking centered, comparing the total number of eye white pixels and the first, second, third and fourth numbers of eye white pixels of the first feature data with the corresponding values of the reference feature data to judge whether the state is looking centered; if so, confirming that the eye gesture state corresponding to the first feature data is looking centered and ending the confirmation process; if not, re-acquiring the first feature data and going to step b1);
if the state is judged to be looking up, comparing the total number, third number and fourth number of eye white pixels of the first feature data with the corresponding values of the reference feature data to judge whether the state is looking up; if so, comparing the second number of skin color pixels of the first feature data with that of the reference feature data to judge whether the state is looking up; if both judgments hold, confirming that the eye gesture state corresponding to the first feature data is looking up and ending the confirmation process; if not, re-acquiring the first feature data and going to step b1);
if the state is judged to be neither looking centered nor looking up, going to step b11);
b11) comparing the second number of skin color pixels of the first feature data with that of the reference feature data to judge whether the eye gesture state corresponding to the first feature data is looking down; if so, comparing the first number of skin color pixels likewise to judge whether the state is looking down; if both judgments hold, confirming that the eye gesture state corresponding to the first feature data is looking down and ending the confirmation process; otherwise re-acquiring the first feature data and going to step b1).
Compared with the prior art, the eye gesture feature acquisition method and eye gesture recognition system of the present invention can accurately recognize the eye gesture state of the user and execute the corresponding operation command according to that state.
The invention is described in detail below with reference to the drawings and specific examples, but the invention is not limited thereto.
Drawings
FIG. 1 is a flow chart of the eye gesture feature acquisition method of the present invention;
FIG. 2 is a schematic diagram of the image and the positions of the concentric circles when the eye is in the looking-centered eye gesture state;
FIG. 3 is a schematic diagram of the image and the positions of the concentric circles when the eye is in the looking-left eye gesture state;
FIG. 4 is a schematic diagram of the image and the positions of the concentric circles when the eye is in the looking-right eye gesture state;
FIG. 5 is a schematic diagram of the image and the positions of the concentric circles when the eye is in the looking-up eye gesture state;
FIG. 6 is a schematic diagram of the image and the positions of the concentric circles when the eye is in the looking-down eye gesture state;
FIG. 7 is a schematic diagram of the image and the positions of the concentric circles when the eye is in the closed-eyes state.
Detailed Description
In order to further understand the objects, structures, features and functions of the present invention, the following embodiments are described in detail.
The invention provides an eye gesture feature acquisition method for acquiring feature information of the eye in different eye gesture states, the eye gesture states including looking left, looking right, looking up, looking down, looking centered, blinking and closing the eyes. The blinking state includes an active eye closing, whose duration falls within a third time range, and an active eye opening, whose duration falls within a fourth time range; the duration of the closed-eyes state falls within a fifth time range. In practice the eye also produces unintentional spontaneous blinks that interfere with eye gesture recognition; a spontaneous blink includes a spontaneous eye closing, whose duration falls within a first time range, and a spontaneous eye opening, whose duration falls within a second time range.
The eye comprises an eyeball 1, an eye white 2, an eyelid, a first canthus 7 and a second canthus 8. The eye white 2 surrounds the eyeball 1, the eyelid includes an eyelid margin 3, and the first canthus 7 and the second canthus 8 are located at the two ends of the eyelid margin 3. When the eye is in the closed-eyes, active-eye-closing or spontaneous-eye-closing state, the eyelid covers the eyeball 1 and the eye white 2; when the eye is looking left, right, up, down or centered, the eyelid surrounds the eyeball 1 and the eye white 2.
In the present invention, as shown in FIG. 1 (a flow chart of the eye gesture feature acquisition method) and FIG. 2 (the image and the positions of the concentric circles when the eye is in the looking-centered state), the eye gesture feature acquisition method comprises the following steps:
a1) fixing the distance between the image capturing device and the eye, the image capturing device determining a first position of the first canthus 7 and a second position of the second canthus 8 by color comparison;
a2) the image capturing device generating movable concentric circles over the eye and locating the positions of the eyeball 1, the eye white 2 and the eyelid with the concentric circles, wherein the concentric circles include a first circle 4 and a second circle 5, and the diameter of the first circle 4 is smaller than that of the second circle 5;
a3) judging whether the concentric circles can be positioned on the eyeball 1, the eye white 2 and the eyelid; if not, the eye gesture state is blinking or closed eyes, and the image capturing device adjusts the position of the concentric circles so that the first position and the second position lie on the circumference of the second circle 5; if so, the eye gesture state is looking left, looking right, looking up, looking down or looking centered, and the image capturing device adjusts the position of the concentric circles so that the eyeball 1 and part of the eyelid lie within the first circle 4 and the eye white 2 lies outside the first circle 4;
a4) the image capturing device capturing an image of the region where the eye is located; when the eye gesture state is blinking or closed eyes, the duration for which the eye is closed is also recorded, and the image includes an eyelid image, a first position image, a second position image and a concentric circle image; when the eye gesture state is looking left, looking right, looking up, looking down or looking centered, the image includes an eyeball image, an eye white image, an eyelid image, a first position image, a second position image and a concentric circle image; the concentric circle image includes a first circle image and a second circle image, and the second circle image includes a first region, a second region, a third region and a fourth region;
a5) respectively calculating a first distance An between the center of the second circle image and the first position image, a second distance Bn between the center of the second circle image and the second position image, a first number of skin color pixels En of the eyelid image within the first circle image, a second number of skin color pixels Dn of the eyelid image within the second circle image, a total number of eye white pixels Csn of the eye white image within the second circle image, a first number of eye white pixels C1n within the first region, a second number of eye white pixels C2n within the second region, a third number of eye white pixels C3n within the third region and a fourth number of eye white pixels C4n within the fourth region, to obtain the feature data of the image.
In step a4), when the image capturing device captures the image of the region where the eye is located and the eye gesture state is judged to be blinking or closed eyes, the device records the duration for which the eye is closed; if that duration falls within the first time range, the image is judged to be an unintentional spontaneous blink, filtered out, and a new image is acquired, which reduces error in the feature data. In step a5), the sum of the first number C1n, the second number C2n, the third number C3n and the fourth number C4n of eye white pixels equals the total number of eye white pixels Csn. When the eye gesture state is closed eyes, Csn, C1n, C2n, C3n and C4n are all 0. A code sketch of this computation follows.
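To make steps a4) and a5) concrete, the following sketch computes the nine feature values from two binary masks (skin color and eye white) of the captured image, given the circle center, the two fixed radii and the two canthus positions. The quadrant split follows the cursor 6 described below: two perpendicular lines through the center of the second circle 5, with the four regions taken counterclockwise. The function and variable names are illustrative assumptions; the patent does not prescribe an implementation.

```python
import numpy as np

def compute_features(skin_mask, white_mask, center, r1, r2, p1, p2):
    """skin_mask/white_mask: HxW boolean arrays; center: (x, y) of the
    concentric circles; r1/r2: first/second fixed radii; p1/p2: (x, y) of the
    first and second canthus positions. Returns the feature record of step a5)."""
    h, w = skin_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    in_c1 = d2 <= r1 ** 2                      # inside the first circle 4
    in_c2 = d2 <= r2 ** 2                      # inside the second circle 5

    A = float(np.hypot(p1[0] - center[0], p1[1] - center[1]))  # An
    B = float(np.hypot(p2[0] - center[0], p2[1] - center[1]))  # Bn
    E = int(np.count_nonzero(skin_mask & in_c1))               # En
    D = int(np.count_nonzero(skin_mask & in_c2))               # Dn
    Cs = int(np.count_nonzero(white_mask & in_c2))             # Csn

    # Quadrants of the second circle, counterclockwise (first..fourth regions).
    dx, dy = xs - center[0], center[1] - ys    # image y axis points down
    quads = [
        (dx >= 0) & (dy >= 0), (dx < 0) & (dy >= 0),
        (dx < 0) & (dy < 0), (dx >= 0) & (dy < 0),
    ]
    C1, C2, C3, C4 = (int(np.count_nonzero(white_mask & in_c2 & q)) for q in quads)
    assert C1 + C2 + C3 + C4 == Cs             # invariant noted for step a5)
    return A, B, E, D, Cs, C1, C2, C3, C4
```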
The eye gesture feature acquisition method can acquire feature data from images of the eye in known eye gesture states and from an image of the eye in an eye gesture state to be recognized. In the former case the feature data is named reference feature data; in the latter case, first feature data. The reference feature data serves as the comparison standard for the first feature data.
In addition, because the shape and size of every user's eyes differ slightly, when the method is used to acquire feature data for a new user, that user's reference feature data must also be acquired, which calibrates the reference feature data to the user. For example, if reference feature data has already been acquired for a first user (recorded as first reference feature data), it need not be acquired again when the first user's eye gesture states are to be recognized; if no reference feature data has been acquired for a second user, the second user's reference feature data must first be acquired (forming second reference feature data) before the second user's eye gesture states can be judged with the method.
In the present invention, the reference feature data can be acquired in two ways. In the first way, the user's eye performs only the looking-centered eye gesture state, and the image capturing device acquires and stores the reference feature data of the image of the eye in that state. In the second way, the eye first performs the looking-centered state; after the image capturing device acquires and stores its reference feature data, the eye performs the other eye gesture states in sequence, and the device acquires and stores the reference feature data of each.
In the present invention, the first circle 4 of the concentric circles may have a first fixed radius, and the second circle 5 of the concentric circles may have a second fixed radius.
The first fixed radius is determined as follows: when the eye gesture feature acquisition method is used to acquire the reference feature data of the image of the eye in the looking-centered state, in step a3) the image capturing device first adjusts the position of the concentric circles and the radius of the first circle 4 so that the eyeball 1, part of the eye white 2 and part of the eyelid lie within the first circle 4, then gradually reduces the radius of the first circle 4 while adjusting its position until the eyeball 1 lies entirely within the first circle 4 and no eye white 2 lies within it; the radius of the first circle 4 at that point equals the first fixed radius.
The second fixed radius is determined as follows: when the eye gesture feature acquisition method is used to acquire the reference feature data of the image of the eye in the looking-centered state, in step a3) the image capturing device first adjusts the position of the concentric circles so that the eyeball 1 lies entirely within the first circle 4 at the first fixed radius, and then adjusts the radius of the second circle 5 so that the first position (or the second position) lies on the circumference of the second circle 5; the distance between the center of the first circle 4 and the first position (or the second position) equals the second fixed radius.
Therefore, the reference feature data is measured first and the first feature data afterwards. While the reference feature data of the looking-centered state is being measured, the radii of the first circle 4 and the second circle 5 vary at first and are fixed at the end; only once the first circle 4 has the first fixed radius and the second circle 5 has the second fixed radius does the image capturing device acquire the reference feature data, so the reference feature data is accurate. When the invention measures reference feature data for the other eye gesture states, or first feature data for a state to be recognized, both radii remain fixed at those values, so every parameter of the first feature data and of the reference feature data is obtained from first and second circle images of the same size, and the reference feature data can serve as the comparison standard for the first feature data, improving the accuracy of recognizing the eye gesture state corresponding to the first feature data. A sketch of this calibration follows.
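One plausible reading of this calibration as code, sketched under the assumption that binary masks of the eye white and the eyeball are available from the centered-gaze image (the helper names and the step size are assumptions, and the simultaneous position adjustment described above is omitted for brevity):

```python
import numpy as np

def calibrate_radii(center, r_start, white_mask, eyeball_mask, p1, step=1):
    """Find the first fixed radius (smallest circle containing the whole
    eyeball 1 and no eye white 2) and the second fixed radius (distance from
    the circle center to the first canthus), per step a3) on a centered gaze."""
    h, w = white_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2

    r1 = r_start  # initial radius enclosing eyeball, some eye white, some eyelid
    while r1 > step:
        inside = d2 <= (r1 - step) ** 2
        if not np.all(inside[eyeball_mask]):   # shrinking further would cut the eyeball
            break
        r1 -= step                             # gradually reduce the radius
        if not np.any(white_mask & inside):    # eyeball in, eye white out: done
            break

    # Second fixed radius: center of the first circle to the first position.
    r2 = float(np.hypot(p1[0] - center[0], p1[1] - center[1]))
    return r1, r2
```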
In the present invention, the concentric circles may further include a third circle whose radius is larger than that of the second circle 5 and which is used to detect the eyelid margin 3. In step a3), if the concentric circles cannot be positioned on the eyeball 1 and the eye white 2, the concentric circles may be moved so that the eyelid margin 3 lies within the third circle, after which the first position and the second position are placed on the circumference of the second circle 5. The third circle narrows the positioning range, making the further positioning of the second circle 5 easier.
As shown in FIGS. 2 to 7, which are schematic diagrams of the image and the positions of the concentric circles in the different eye gesture states, the cursor 6 is marked only in FIG. 2 and omitted from FIGS. 3 to 7 so that the eye gesture states remain visible; in actual use, however, the cursor 6 is present in FIGS. 3 to 7 as well, and its position relative to the second circle 5 does not change. In this embodiment the cursor 6 consists of a first straight line and a second straight line, both passing through the center of the second circle 5 and perpendicular to each other; they divide the second circle 5 into four regions of equal area, that is, the first, second, third and fourth regions are sectors of equal area sharing the same center, arranged sequentially in the counterclockwise direction.
The invention also provides an eye gesture recognition system applying the above eye gesture feature acquisition method, comprising: a correction and feature value recording module, which includes the image capturing device applying the eye gesture feature acquisition method and acquires reference feature data of images of the eye in known eye gesture states and first feature data of an image of the eye in an eye gesture state to be recognized, where both the reference feature data and the first feature data include the first distance An, the second distance Bn, the first number of skin color pixels En, the second number of skin color pixels Dn, the total number of eye white pixels Csn, the first number of eye white pixels C1n, the second number of eye white pixels C2n, the third number of eye white pixels C3n and the fourth number of eye white pixels C4n;
an eye gesture comparison module, electrically connected to the correction and feature value recording module, which retrieves the reference feature data and the first feature data and compares the first feature data with the reference feature data to obtain the eye gesture state corresponding to the first feature data;
and an execution module, electrically connected to the eye gesture comparison module, which obtains the eye gesture state and executes the corresponding operation command according to it.
When the eye gesture recognition system of the present invention acquires the reference feature data in the second way described above, that is, when the image capturing device acquires reference feature data for images of the eye in every eye gesture state, each state has one group of reference feature data, so the device holds multiple groups. The first feature data acquired by the image capturing device is compared with each group in turn; if every parameter value in the first feature data is equal or approximately equal to the corresponding parameter value in one group, the eye gesture state corresponding to the first feature data is that group's state. For example, suppose the groups include group A (the reference feature data of the looking-centered state), group B (looking left), group C (looking right) and so on. If the first distance An, the second distance Bn, the first number of skin color pixels En, the second number of skin color pixels Dn, the total number of eye white pixels Csn and the regional numbers C1n, C2n, C3n and C4n of the first feature data are each equal or approximately equal to the corresponding values in group B, then the eye gesture state corresponding to the first feature data is looking left.
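A minimal sketch of this group-matching comparison, assuming a relative tolerance stands in for "equal or approximately equal" (the 10% tolerance and the helper names are illustrative assumptions):

```python
def approx_equal(x, y, rel_tol=0.1):
    """'Approximately equal' with an assumed 10% relative tolerance."""
    return abs(x - y) <= rel_tol * max(abs(x), abs(y), 1)

def match_group(first, groups):
    """first: feature tuple (A, B, E, D, Cs, C1, C2, C3, C4) to be recognized;
    groups: dict mapping an eye gesture state name to its reference tuple.
    Returns the matching state, or None if no group matches."""
    for state, ref in groups.items():
        if all(approx_equal(f, r) for f, r in zip(first, ref)):
            return state
    return None

# Usage: groups = {"looking centered": ref_a, "looking left": ref_b, ...}
# state = match_group(first_features, groups)
```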
When the eye gesture recognition system of the present invention acquires the reference feature data in the first way described above, that is, when the image capturing device acquires only the reference feature data of the image of the eye in the looking-centered state, the device holds a single group of reference feature data. In one embodiment of the present invention, the comparison relationships between the first feature data and this single group of reference feature data are shown in Table 1 below:
table 1:
(Table 1 is reproduced as an image in the original publication; its entries are described in the following paragraph.)
In Table 1, some parameter values of the first feature data for certain eye gesture states are close in size to the corresponding parameter values of the reference feature data, or their size relationship cannot be determined, so the corresponding cells of Table 1 are left blank. In Table 1, A0, B0, C10, C20, C30, C40, Cs0, D0 and E0 are the reference feature data of the looking-centered state, E5 is the reference value for the closed-eyes state, An, Bn, C1n, C2n, C3n, C4n, Csn, Dn and En are the first feature data, and tn is the duration of eye closure; tn > 2 s is the fifth time range and 0.4 s ≤ tn ≤ 2 s is the third time range (these ranges may be set as needed in practical applications and are not limited to the values given). Since images of spontaneous blinks are filtered out before the feature data is acquired, spontaneous blinks need not be considered.
In specific implementation, while acquiring the reference feature data or the first feature data with the eye gesture feature acquisition method of the present invention, if the eye-closure duration recorded in step a4) falls within the third time range, the corresponding eye gesture state is a blink. The invention may then also record, in step a4), the interval Tn between the current blink and the next eye gesture state. If the next state is also a blink, it is judged whether Tn falls within a sixth time range: if it does, the two states are continuous, that is, a double blink, and one command is executed accordingly; if it does not, the two states are separate single blinks, and two commands are executed accordingly. In this embodiment the sixth time range is Tn < 0.5 s, but it is not limited thereto.
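The timing rules above can be summarized in code. The sketch below assumes closure durations tn and blink onset intervals Tn are available in seconds; the range constants mirror this embodiment's values (tn of 0.4 s to 2 s for a blink, tn > 2 s for closed eyes, Tn < 0.5 s for a double blink), while the bound on spontaneous closures is an assumption:

```python
SPONTANEOUS_MAX = 0.4   # assumed upper bound of the first time range (filtered out)
BLINK_RANGE = (0.4, 2.0)  # third time range: 0.4 s <= tn <= 2 s is an active closing
CLOSED_MIN = 2.0          # fifth time range: tn > 2 s is the closed-eyes state
DOUBLE_BLINK_MAX = 0.5    # sixth time range: Tn < 0.5 s joins two blinks

def classify_closure(tn):
    """Classify one recorded eye-closure duration tn (seconds)."""
    if tn < SPONTANEOUS_MAX:
        return "spontaneous blink (image filtered out)"
    if BLINK_RANGE[0] <= tn <= BLINK_RANGE[1]:
        return "blink"
    if tn > CLOSED_MIN:
        return "closed eyes"
    return "unclassified"

def group_blinks(blink_times):
    """blink_times: sorted onset times of consecutive blink states. Two blinks
    whose interval Tn is within the sixth time range count as one double blink
    (one command); otherwise they are separate single blinks (two commands)."""
    events, i = [], 0
    while i < len(blink_times):
        if i + 1 < len(blink_times) and blink_times[i + 1] - blink_times[i] < DOUBLE_BLINK_MAX:
            events.append("double blink")
            i += 2
        else:
            events.append("single blink")
            i += 1
    return events
```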
By comparing the first feature data with the reference feature data according to Table 1, the eye gesture state corresponding to the first feature data can be obtained intuitively. In practical applications, several comparison procedures can be designed from Table 1; a procedure can also be implemented in a programming language and embedded in the image capturing device to improve recognition efficiency. In one embodiment of the present invention, the eye gesture comparison module uses the following procedure to determine whether the eye gesture state corresponding to the first feature data is open eyes, closed eyes or blinking, where open eyes includes looking left, looking right, looking up, looking down and looking centered (a condensed code sketch follows the steps below):
b1) comparing the second number of skin color pixels Dn of the first feature data with the second number of skin color pixels of the reference feature data to judge whether the eye gesture state corresponding to the first feature data is closed eyes; if so, going to step b2); if not, going to step b4);
b2) comparing the first number of skin color pixels En of the first feature data with the first number of skin color pixels of the reference feature data to judge whether the eye gesture state corresponding to the first feature data is closed eyes; if so, going to step b3); if not, going to step b4);
b3) judging whether the duration of the eye gesture state corresponding to the first feature data falls within the fifth time range; if so, confirming that the state is closed eyes and ending the confirmation process; if not, going to step b4);
b4) judging whether the eye gesture state corresponding to the first feature data includes both an active eye closing and an active eye opening within its duration; if so, going to step b5); if not, going to step b6);
b5) judging whether the duration of the active eye closing falls within the third time range and the duration of the active eye opening falls within the fourth time range; if both do, confirming that the eye gesture state corresponding to the first feature data is blinking and ending the confirmation process; if not, going to step b6);
b6) determining the eye gesture state to be open eyes;
b7) comparing the first distance An in the first feature data with the first distance in the reference feature data to preliminarily judge whether the eye gesture state corresponding to the first feature data is looking left or looking right;
b8) if the state is preliminarily judged to be looking left, comparing the second distance Bn of the first feature data with the second distance of the reference feature data to judge whether the state is looking left; if so, comparing the fourth number of eye white pixels C4n of the first feature data with that of the reference feature data to judge whether the state is looking left; if both judgments hold, confirming that the eye gesture state corresponding to the first feature data is looking left and ending the confirmation process; if either does not, re-acquiring the first feature data and going to step b1);
if the state is preliminarily judged to be looking right, comparing the second distance Bn of the first feature data with the second distance of the reference feature data to judge whether the state is looking right; if so, comparing the third number of eye white pixels C3n of the first feature data with that of the reference feature data to judge whether the state is looking right; if both judgments hold, confirming that the eye gesture state corresponding to the first feature data is looking right and ending the confirmation process; if not, re-acquiring the first feature data and going to step b1);
if the state is preliminarily judged to be neither looking left nor looking right, going to step b9);
b9) comparing the second distance Bn in the first feature data with the second distance of the reference feature data to judge whether the eye gesture state corresponding to the first feature data is looking centered or looking up;
b10) if the state is judged to be looking centered, comparing the total number of eye white pixels Csn and the first, second, third and fourth numbers of eye white pixels C1n, C2n, C3n and C4n of the first feature data with the corresponding values of the reference feature data to judge whether the state is looking centered; if so, confirming that the eye gesture state corresponding to the first feature data is looking centered and ending the confirmation process; if not, re-acquiring the first feature data and going to step b1);
if the state is judged to be looking up, comparing the total number Csn, the third number C3n and the fourth number C4n of eye white pixels of the first feature data with the corresponding values of the reference feature data to judge whether the state is looking up; if so, comparing the second number of skin color pixels Dn of the first feature data with that of the reference feature data to judge whether the state is looking up; if both judgments hold, confirming that the eye gesture state corresponding to the first feature data is looking up and ending the confirmation process; if not, re-acquiring the first feature data and going to step b1);
if the state is judged to be neither looking centered nor looking up, going to step b11);
b11) comparing the second number of skin color pixels Dn of the first feature data with that of the reference feature data to judge whether the eye gesture state corresponding to the first feature data is looking down; if so, comparing the first number of skin color pixels En likewise to judge whether the state is looking down; if both judgments hold, confirming that the eye gesture state corresponding to the first feature data is looking down and ending the confirmation process; otherwise re-acquiring the first feature data and going to step b1).
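A condensed sketch of steps b1) to b11) as a single decision routine. Because Table 1 survives only as an image, the per-parameter threshold tests are passed in as predicates rather than hard-coded inequalities; the tests dictionary, the timing flags and the reacquire callback are assumptions about how the comparison module could be realized, while the control flow, including re-acquisition after a failed confirmation, follows the steps above:

```python
def confirm_state(first, ref, timing, tests, reacquire):
    """first/ref: feature dicts with keys A, B, E, D, Cs, C1..C4 (first feature
    data and the looking-centered reference group); timing: dict with tn and the
    active_close/active_open flags of step b4); tests: dict of boolean predicates
    tests[(param, state)](first, ref) encoding the Table 1 relations; reacquire:
    callable returning fresh (first, timing). Hypothetical structure only."""
    while True:
        # b1)-b3): closed eyes via D, then E, then the fifth time range (tn > 2 s)
        if (tests[("D", "closed")](first, ref)
                and tests[("E", "closed")](first, ref)
                and timing["tn"] > 2.0):
            return "closed eyes"
        # b4)-b5): an active closing and opening, both within range, is a blink
        if timing.get("active_close") and timing.get("active_open"):
            if 0.4 <= timing["tn"] <= 2.0:
                return "blink"
        # b6)-b8): eyes are open; horizontal gaze from the canthus distances A, B
        if tests[("A", "left")](first, ref):
            if tests[("B", "left")](first, ref) and tests[("C4", "left")](first, ref):
                return "looking left"
        elif tests[("A", "right")](first, ref):
            if tests[("B", "right")](first, ref) and tests[("C3", "right")](first, ref):
                return "looking right"
        # b9)-b10): vertical gaze from B, then the eye white distribution
        elif tests[("B", "centered")](first, ref):
            if all(tests[(k, "centered")](first, ref) for k in ("Cs", "C1", "C2", "C3", "C4")):
                return "looking centered"
        elif tests[("B", "up")](first, ref):
            if (all(tests[(k, "up")](first, ref) for k in ("Cs", "C3", "C4"))
                    and tests[("D", "up")](first, ref)):
                return "looking up"
        # b11): downward gaze from the skin color counts D, then E
        elif tests[("D", "down")](first, ref) and tests[("E", "down")](first, ref):
            return "looking down"
        first, timing = reacquire()  # failed confirmation: acquire again, back to b1)
```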
In addition, different eye gesture states may be assigned different operation commands. In this embodiment, the eye gesture states and their corresponding operation commands are shown in Table 2 below, but are not limited thereto.
Table 2:
(Table 2 is reproduced as images in the original publication.)
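Because Table 2 likewise survives only as an image, the mapping below is purely hypothetical: it shows only the shape such a state-to-command table could take in code, and none of the command assignments are taken from the patent:

```python
# Hypothetical state-to-command mapping; the actual assignments in Table 2
# are an image in the original publication and are not reproduced here.
COMMANDS = {
    "looking left": "previous page",
    "looking right": "next page",
    "looking up": "scroll up",
    "looking down": "scroll down",
    "single blink": "select",
    "double blink": "confirm",
    "closed eyes": "cancel",
}

def execute(state, commands=COMMANDS):
    """Execution module: look up and run the command for a recognized state."""
    command = commands.get(state)
    if command is not None:
        print(f"executing: {command}")  # stand-in for the real device action
```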
In practical applications, the execution module may further be provided with a display unit, a sound unit and a recording unit. When the execution module obtains the eye gesture state, it can display the command to be executed, or announce it by sound, to inform the user; the user can then confirm with a closed-eyes gesture, or answer by voice that the command is wrong or correct.
When the execution module is worn on the user's head, a gravity sensor and a gyroscope may be provided on it, so that the user can reject a wrong command by shaking the head and confirm it by nodding.
In summary, the eye gesture feature acquisition method and the eye gesture recognition system of the present invention can acquire reference feature data for known eye gesture states and first feature data for an eye gesture state to be recognized, and compare the first feature data with the reference feature data to obtain the eye gesture state corresponding to the first feature data, making eye gesture recognition accurate.
The above detailed description of the preferred embodiments is intended to illustrate the features and spirit of the present invention more clearly, and not to limit its scope to the preferred embodiments disclosed above. On the contrary, the invention is intended to cover the various modifications and equivalent arrangements that fall within the scope of the appended claims, which should therefore be accorded the broadest interpretation so as to encompass all such modifications and equivalent arrangements.

Claims (12)

1. An eye posture feature collection method for collecting feature information of an eye in different eye posture states, wherein the eye comprises an eyeball, white, eyelid, a first canthus and a second canthus, and the eye posture states comprise left-looking, right-looking, up-looking, down-looking, central-looking, blinking and eye closing, and the eye posture feature collection method comprises the following steps:
a1) fixing the distance between an image capturing device and the eye, and determining, by the image capturing device, a first position of the first canthus and a second position of the second canthus according to color comparison;
a2) generating, by the image capturing device, movable concentric circles on the eye, and locating the positions of the eyeball, the eye white and the eyelid by means of the concentric circles, wherein the concentric circles comprise a first circle and a second circle, and the diameter of the first circle is smaller than that of the second circle;
a3) judging whether the concentric circles can be positioned at the positions of the eyeball, the eye white and the eyelid; if not, the eye posture state is blinking or eye closing, and the image capturing device adjusts the positions of the concentric circles so that the first position and the second position are located on the circumference of the second circle; if so, the eye posture state is left-looking, right-looking, up-looking, down-looking or centered-looking, and the image capturing device adjusts the positions of the concentric circles so that the eyeball and part of the eyelid are located within the first circle and the eye white is located outside the first circle;
a4) capturing, by the image capturing device, an image of the area where the eye is located; when the eye posture state is blinking or eye closing, the duration of the eye closure is also recorded, and the image comprises an eyelid image, a first position image, a second position image and a concentric circle image; when the eye posture state is left-looking, right-looking, up-looking, down-looking or centered-looking, the image comprises an eyeball image, an eye white image, an eyelid image, a first position image, a second position image and a concentric circle image; the concentric circle image comprises a first circle image and a second circle image, and the second circle image comprises a first area, a second area, a third area and a fourth area;
a5) respectively calculating a first distance between the center of the second circle image and the first position image, a second distance between the center of the second circle image and the second position image, the number of first skin color pixels of the eyelid image within the first circle image, the number of second skin color pixels of the eyelid image within the second circle image, the total number of eye white pixels of the eye white image within the second circle image, the number of first eye white pixels of the eye white image in the first area, the number of second eye white pixels of the eye white image in the second area, the number of third eye white pixels of the eye white image in the third area, and the number of fourth eye white pixels of the eye white image in the fourth area, to obtain the feature data of the image.
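As a reading aid, the nine quantities computed in step a5) can be collected into a single record. The following Python sketch shows one possible layout; the field names and the dataclass representation are assumptions, and the consistency check reflects claim 4 below.

```python
# A sketch of one way to store the feature data produced in step a5).
# The nine quantities come from the claim; field names are illustrative.
from dataclasses import dataclass

@dataclass
class EyeFeatureData:
    first_distance: float    # center of second circle image -> first position image
    second_distance: float   # center of second circle image -> second position image
    first_skin_pixels: int   # eyelid skin-color pixels inside the first circle image
    second_skin_pixels: int  # eyelid skin-color pixels inside the second circle image
    total_white_pixels: int  # eye-white pixels inside the second circle image
    white_area1: int         # eye-white pixels in the first area
    white_area2: int         # eye-white pixels in the second area
    white_area3: int         # eye-white pixels in the third area
    white_area4: int         # eye-white pixels in the fourth area

    def is_consistent(self) -> bool:
        # Per claim 4, the four area counts partition the total.
        return (self.white_area1 + self.white_area2 +
                self.white_area3 + self.white_area4) == self.total_white_pixels
```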
2. The eye posture feature collection method of claim 1, wherein in step a4), the image capturing device filters out images of unconscious, spontaneous blinking when capturing the image of the area where the eye is located, wherein spontaneous blinking includes spontaneous eye closing and spontaneous eye opening, the duration of the spontaneous eye closing has a first time range, and the duration of the spontaneous eye opening has a second time range; the blinking in the eye posture states comprises active eye closing and active eye opening, the duration of the active eye closing has a third time range, and the duration of the active eye opening has a fourth time range; and the duration of the eye closing in the eye posture states has a fifth time range.
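The duration-based filtering of claim 2 can be sketched as follows. The numeric time ranges are assumptions for illustration; the claim only requires that the spontaneous and active ranges be defined and distinguishable.

```python
# A sketch of the duration-based blink filtering in claim 2.
# The concrete time ranges below are assumptions, in seconds.
SPONTANEOUS_CLOSE = (0.05, 0.30)  # first time range: involuntary closing
ACTIVE_CLOSE = (0.50, 1.00)       # third time range: deliberate closing

def within(duration, time_range):
    low, high = time_range
    return low <= duration <= high

def is_spontaneous_blink(close_duration):
    """True if the eye closure looks involuntary, so the image capturing
    device should discard it in step a4)."""
    return within(close_duration, SPONTANEOUS_CLOSE)

def is_active_close(close_duration):
    """True if the closure duration falls in the deliberate (third) range."""
    return within(close_duration, ACTIVE_CLOSE)
```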
3. The eye posture feature collection method of claim 1, wherein the first, second, third and fourth areas are equal in size, are all sectors sharing the same center, and are disposed sequentially in a counterclockwise direction.
4. The eye posture feature collection method of claim 1, wherein the sum of the first eye white pixel number, the second eye white pixel number, the third eye white pixel number and the fourth eye white pixel number is equal to the total eye white pixel number.
5. The eye posture feature collection method of claim 1, wherein when the eye posture feature collection method is used to collect feature data of an image of the eye in a known eye posture state, the feature data is reference feature data; and when the eye posture feature collection method is used to collect feature data of an image of the eye in an eye posture state to be identified, the feature data is first feature data.
6. The eye posture feature collection method of claim 5, wherein when the eye posture feature collection method is used to collect the reference feature data, the eye assumes only the centered-looking eye posture state, and the image capturing device collects and stores the reference feature data of the image of the eye in the centered-looking eye posture state.
7. The eye posture feature collection method of claim 5, wherein when the eye posture feature collection method is used to collect the reference feature data, the eye first assumes the centered-looking eye posture state and the image capturing device collects and stores the reference feature data of the image of the eye in the centered-looking eye posture state; the eye then assumes the other eye posture states in turn, and the image capturing device respectively collects and stores the reference feature data of the images of the eye in the other eye posture states.
8. The eye posture feature collection method of claim 6 or 7, wherein the first circle has a first fixed radius determined as follows: when the eye posture feature collection method is used to collect the reference feature data of the image of the eye in the centered-looking eye posture state, in step a3), the image capturing device first adjusts the position of the concentric circles and the radius of the first circle so that the eyeball, part of the eye white and part of the eyelid are located within the first circle, then gradually reduces the radius of the first circle and adjusts the position of the first circle so that the eyeball is located within the first circle and the eye white is located outside it; at this point, the radius of the first circle is equal to the first fixed radius.
9. The eye posture feature collection method of claim 8, wherein the second circle has a second fixed radius determined as follows: when the eye posture feature collection method is used to collect the reference feature data of the image of the eye in the centered-looking eye posture state, in step a3), the image capturing device first adjusts the position of the concentric circles so that the eyeball is located entirely within the first circle with the first fixed radius, and then adjusts the radius of the second circle so that the first position is located on the circumference of the second circle, the distance between the center of the first circle and the first position being equal to the second fixed radius; or the image capturing device adjusts the radius of the second circle so that the second position is located on the circumference of the second circle, the distance between the center of the first circle and the second position being equal to the second fixed radius.
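The calibration procedure of claims 8 and 9 can be sketched as follows. The predicates `eyeball_fits` and `white_inside` are hypothetical image tests standing in for the image capturing device's positioning logic, and the shrink step size is an assumption.

```python
# A sketch of the radius calibration in claims 8 and 9: shrink the first
# circle until it encloses the eyeball with no eye white inside, then size
# the second circle so a canthus lies on its circumference.
import math

def calibrate_radii(center, canthus, r_start, eyeball_fits, white_inside,
                    step=1.0):
    r1 = r_start
    # Gradually reduce the first radius while eye white remains inside the
    # circle and the eyeball would still fit at the smaller radius.
    while r1 - step > 0 and white_inside(center, r1) and \
            eyeball_fits(center, r1 - step):
        r1 -= step
    # The second fixed radius places the chosen canthus (first or second
    # position) on the circumference of the second circle.
    r2 = math.dist(center, canthus)
    return r1, r2
```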
10. An eye posture identification system, comprising:
a correction and feature value recording module, comprising an image capturing device, wherein the image capturing device employs the eye posture feature collection method according to any one of claims 1 to 9 to collect reference feature data of an image of the eye in a known eye posture state and first feature data of an image of the eye in an eye posture state to be identified, the reference feature data and the first feature data each including the first distance, the second distance, the first skin color pixel number, the second skin color pixel number, the total eye white pixel number, the first eye white pixel number, the second eye white pixel number, the third eye white pixel number and the fourth eye white pixel number;
an eye posture comparison module, electrically connected to the correction and feature value recording module, for retrieving the reference feature data and the first feature data and comparing the first feature data with the reference feature data to obtain the eye posture state corresponding to the first feature data; and
an execution module, electrically connected to the eye posture comparison module, for acquiring the eye posture state and executing the corresponding operation command according to the eye posture state.
11. The system of claim 10, wherein when the image capturing device collects the reference feature data of images of the eye in all of the eye posture states, each eye posture state has a set of reference feature data, so that there are multiple sets of reference feature data, and the eye posture state corresponding to the first feature data can be obtained by sequentially comparing the first feature data with each set of reference feature data.
12. The system of claim 10, wherein when the image capturing device collects only the reference feature data of the image of the eye in the centered-looking eye posture state, the eye posture comparison module determines whether the eye posture state corresponding to the first feature data is open eyes, closed eyes or blinking, where open eyes includes left-looking, right-looking, up-looking, down-looking and centered-looking, through the following steps:
b1) comparing the second skin color pixel number of the first feature data with the second skin color pixel number of the reference feature data to judge whether the eye posture state corresponding to the second skin color pixel number of the first feature data is eye closing; if so, entering step b2); if not, entering step b4);
b2) comparing the first skin color pixel number of the first feature data with the first skin color pixel number of the reference feature data to judge whether the eye posture state corresponding to the first skin color pixel number of the first feature data is eye closing; if so, entering step b3); if not, entering step b4);
b3) judging whether the duration of the eye posture state corresponding to the first feature data is within the fifth time range; if so, confirming that the eye posture state corresponding to the first feature data is eye closing and ending the eye posture state confirmation process; if not, entering step b4);
b4) judging whether the eye posture state corresponding to the first feature data includes both active eye closing and active eye opening within its duration; if so, entering step b5); if not, entering step b6);
b5) judging whether the duration of the active eye closing of the eye posture state is within the third time range and whether the duration of the active eye opening of the eye posture state is within the fourth time range; if both, confirming that the eye posture state corresponding to the first feature data is blinking and ending the eye posture state confirmation process; if not, entering step b6);
b6) determining that the eye posture state is open eyes;
b7) comparing the first distance of the first feature data with the first distance of the reference feature data, thereby preliminarily judging whether the eye posture state corresponding to the first distance of the first feature data is left-looking or right-looking;
b8) if the eye posture state corresponding to the first distance of the first feature data is preliminarily judged to be left-looking, comparing the second distance of the first feature data with the second distance of the reference feature data to judge whether the eye posture state corresponding to the second distance of the first feature data is left-looking; if so, comparing the fourth eye white pixel number of the first feature data with the fourth eye white pixel number of the reference feature data to judge whether the eye posture state corresponding to the fourth eye white pixel number of the first feature data is left-looking; if so, confirming that the eye posture state corresponding to the first feature data is left-looking and ending the eye posture state confirmation process; if either judgment fails, re-collecting the first feature data and entering step b1);
if the eye posture state corresponding to the first distance of the first feature data is preliminarily judged to be right-looking, comparing the second distance of the first feature data with the second distance of the reference feature data to judge whether the eye posture state corresponding to the second distance of the first feature data is right-looking; if so, comparing the third eye white pixel number of the first feature data with the third eye white pixel number of the reference feature data to judge whether the eye posture state corresponding to the third eye white pixel number of the first feature data is right-looking; if so, confirming that the eye posture state corresponding to the first feature data is right-looking and ending the eye posture state confirmation process; if not, re-collecting the first feature data and entering step b1);
if the eye posture state corresponding to the first distance of the first feature data is preliminarily judged to be neither left-looking nor right-looking, entering step b9);
b9) comparing the second distance of the first feature data with the second distance of the reference feature data to judge whether the eye posture state corresponding to the second distance of the first feature data is centered-looking or upward-looking;
b10) if the eye posture state corresponding to the second distance of the first feature data is centered-looking, comparing the total eye white pixel number, the first eye white pixel number, the second eye white pixel number, the third eye white pixel number and the fourth eye white pixel number of the first feature data with those of the reference feature data to judge whether the eye posture state corresponding to the first feature data is centered-looking; if so, confirming that the eye posture state corresponding to the first feature data is centered-looking and ending the eye posture state confirmation process; if not, re-collecting the first feature data and entering step b1);
if the eye posture state corresponding to the second distance of the first feature data is upward-looking, comparing the total eye white pixel number, the third eye white pixel number and the fourth eye white pixel number of the first feature data with those of the reference feature data to judge whether the eye posture state corresponding to these eye white pixel numbers of the first feature data is upward-looking; if so, comparing the second skin color pixel number of the first feature data with the second skin color pixel number of the reference feature data to judge whether the eye posture state corresponding to the second skin color pixel number of the first feature data is upward-looking; if so, confirming that the eye posture state corresponding to the first feature data is upward-looking and ending the eye posture state confirmation process; if not, re-collecting the first feature data and entering step b1);
if the eye posture state corresponding to the second distance of the first feature data is neither centered-looking nor upward-looking, entering step b11);
b11) comparing the second skin color pixel number of the first feature data with the second skin color pixel number of the reference feature data to judge whether the eye posture state corresponding to the second skin color pixel number of the first feature data is downward-looking; if so, comparing the first skin color pixel number of the first feature data with the first skin color pixel number of the reference feature data to judge whether the eye posture state corresponding to the first skin color pixel number of the first feature data is downward-looking; if so, confirming that the eye posture state corresponding to the first feature data is downward-looking and ending the eye posture state confirmation process; if not, re-collecting the first feature data and entering step b1).
CN201811060286.7A 2018-09-12 2018-09-12 Eye potential feature acquisition method and eye potential identification system Expired - Fee Related CN109213325B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811060286.7A CN109213325B (en) 2018-09-12 2018-09-12 Eye potential feature acquisition method and eye potential identification system

Publications (2)

Publication Number Publication Date
CN109213325A CN109213325A (en) 2019-01-15
CN109213325B true CN109213325B (en) 2021-04-20

Family

ID=64983795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811060286.7A Expired - Fee Related CN109213325B (en) 2018-09-12 2018-09-12 Eye potential feature acquisition method and eye potential identification system

Country Status (1)

Country Link
CN (1) CN109213325B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1422596A (en) * 2000-08-09 2003-06-11 松下电器产业株式会社 Eye position detection method and apparatus thereof
CN102662470A (en) * 2012-04-01 2012-09-12 西华大学 Method and system for implementation of eye operation
CN106339087A (en) * 2016-08-29 2017-01-18 上海青研科技有限公司 Eyeball tracking method based on multidimensional coordinate and device thereof
CN107944408A (en) * 2017-11-30 2018-04-20 西安科锐盛创新科技有限公司 Method based on canthus angle-determining eye state

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050025927A (en) * 2003-09-08 2005-03-14 유웅덕 The pupil detection method and shape descriptor extraction method for a iris recognition, iris feature extraction apparatus and method, and iris recognition system and method using its

Also Published As

Publication number Publication date
CN109213325A (en) 2019-01-15

Similar Documents

Publication Publication Date Title
CN113646732A (en) System and method for obtaining control schemes based on neuromuscular data
JP2019527377A (en) Image capturing system, device and method for automatic focusing based on eye tracking
EP2940555A1 (en) Automatic gaze calibration
EP2905680B1 (en) Information processing apparatus, information processing method, and program
US20180063397A1 (en) Wearable device, control method and non-transitory storage medium
EP4095744A1 (en) Automatic iris capturing method and apparatus, computer-readable storage medium, and computer device
CN109976528B (en) Method for adjusting watching area based on head movement and terminal equipment
CN110968190B (en) IMU for touch detection
WO2019214329A1 (en) Method and apparatus for controlling terminal, and terminal
CN110427108A (en) Photographic method and Related product based on eyeball tracking
CN111179560A (en) Intelligent glasses, control device and control method
US10444831B2 (en) User-input apparatus, method and program for user-input
CN114092985A (en) Terminal control method, device, terminal and storage medium
CN111915667A (en) Sight line identification method, sight line identification device, terminal equipment and readable storage medium
CN109144262B (en) Human-computer interaction method, device, equipment and storage medium based on eye movement
CN109213325B (en) Eye potential feature acquisition method and eye potential identification system
CN109960412B (en) Method for adjusting gazing area based on touch control and terminal equipment
WO2018076609A1 (en) Terminal and method for operating terminal
CN109917923B (en) Method for adjusting gazing area based on free motion and terminal equipment
EP3851939A1 (en) Positioning a user-controlled spatial selector based on extremity tracking information and eye tracking information
CN112596605A (en) AR (augmented reality) glasses control method and device, AR glasses and storage medium
US20190324548A1 (en) Gesture-based designation of regions of interest in images
CN111566597A (en) Information processing apparatus, information processing method, and program
JP2015045943A (en) Blink detecting device
WO2022196093A1 (en) Information processing device, line-of-sight detection method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210420
