US20170238800A1 - Method of identifying iris - Google Patents

Method of identifying iris

Info

Publication number
US20170238800A1
US20170238800A1
Authority
US
United States
Prior art keywords
glint
eye
measuring
image
reference point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/588,473
Inventor
Yu-Hao Huang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pixart Imaging Inc
Original Assignee
Pixart Imaging Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pixart Imaging Inc filed Critical Pixart Imaging Inc
Priority to US15/588,473
Assigned to PIXART IMAGING INC. reassignment PIXART IMAGING INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HUANG, YU-HAO
Publication of US20170238800A1
Priority to US16/679,421
Priority to US18/319,436
Legal status: Abandoned

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/0016: Operational features thereof
    • A61B 3/0025: Operational features thereof characterised by electronic signal processing, e.g. eye models
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/113: Objective types for determining or recording eye movement
    • A61B 3/12: Objective types for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/1216: Objective types for looking at the eye fundus, for diagnostics of the iris
    • A61B 3/14: Arrangements specially adapted for eye photography

Definitions

  • The present disclosure relates to an eye identifying method, and in particular to a method of identifying an iris.
  • An eye detecting device can be used to detect gaze direction or to identify the iris boundary.
  • Most eye detecting devices detect the eye gaze direction by using the characteristic that the position of the pupil changes with the gaze direction.
  • A conventional eye detecting device detects the eye gaze direction by using the glint formed when incident light enters the eye; the glint serves as a reference point for locating the eye.
  • The conventional eye detecting device identifies the pupil and the glint from the whole cornea image.
  • That is, the conventional eye detecting device scans the whole eye image and analyzes the gray scale value distribution of the whole eye image to identify the pupil and the glint.
  • The conventional eye detecting device can thereby obtain the relative position of the pupil and the glint, and then determines the gaze direction according to that relative position.
  • An exemplary embodiment of the present disclosure illustrates an eye detecting device which determines the position of the pupil according to at least one glint.
  • An exemplary embodiment of the present disclosure illustrates a method of identifying an iris comprising: providing a plurality of incident lights entering an eye, the eye being located at a reference position; setting a first reference point, a second reference point, and a third reference point as a mark for locating the eye at the reference position; forming a first measuring glint, a second measuring glint, and a third measuring glint near a pupil of the eye by the incident lights after the eye moves from the reference position to a measuring position; capturing an eye image of the eye including a first measuring glint image, a second measuring glint image, a third measuring glint image, and an iris image; analyzing a gray scale value of the eye image to obtain the positions of the first measuring glint, the second measuring glint, and the third measuring glint; and calculating a variation of the distance between the first measuring glint and the second measuring glint with respect to the distance between the first reference point and the second reference point.
  • The positions of the first reference point, the second reference point, and the third reference point correspond to the emission positions of the incident lights.
  • The positions of the first measuring glint, the second measuring glint, and the third measuring glint correspond to the positions of the first reference point, the second reference point, and the third reference point, respectively.
  • the present disclosure provides a method of identifying iris.
  • Accordingly, the arithmetic unit can calculate the first, second, and third variations.
  • With these variations, the boundary of the pupil can be estimated, so that the boundary of the pupil can be found quickly.
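The variation calculation summarized above can be sketched as follows. The function name, data layout, and the pairing of measuring glints to reference points are illustrative assumptions; the claims do not fix a particular representation.

```python
import math

def glint_distance_variations(reference_points, measured_glints):
    """Compute how each pairwise glint distance changes relative to the
    corresponding reference-point distance. Both arguments are lists of
    three (x, y) tuples, index-aligned so that measured glint i was
    produced by the incident light associated with reference point i."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # First-second, second-third, and first-third variations.
    pairs = [(0, 1), (1, 2), (0, 2)]
    return [dist(measured_glints[i], measured_glints[j])
            - dist(reference_points[i], reference_points[j])
            for i, j in pairs]

# Example: the eye moved, and every glint distance shrank by half.
refs = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
meas = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.5)]
print(glint_distance_variations(refs, meas))  # [-2.0, -2.5, -1.5]
```

A negative variation means the glints moved closer together than at the reference position, which is the kind of cue the disclosure uses to estimate the pupil boundary quickly.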
  • FIG. 1A depicts a side view of the eye detecting device in accordance with the first embodiment of the present invention.
  • FIG. 1B is a front view of the eye detecting device shown in FIG. 1A .
  • FIG. 1C is a function block diagram of the eye detecting device in accordance with the first embodiment of the present invention.
  • FIG. 1D depicts a flow diagram of a method of detecting pupil in accordance with the first exemplary embodiment of the present disclosure.
  • FIG. 2A depicts a side view of the eye detecting device in accordance with the second embodiment of the present invention.
  • FIG. 2B is a function block diagram of the eye detecting device in accordance with the second embodiment of the present invention.
  • FIG. 2C depicts a flow diagram of a method of detecting pupil in accordance with the second exemplary embodiment of the present disclosure.
  • FIG. 3B depicts a flow diagram of a method of detecting pupil in accordance with the third exemplary embodiment of the present disclosure.
  • FIG. 4 depicts a flow diagram of a method of identifying iris in accordance with the third exemplary embodiment of the present disclosure.
  • FIG. 5 depicts a flow diagram of a method of identifying iris in accordance with the fourth exemplary embodiment of the present disclosure.
  • FIG. 1A is a side view of the eye detecting device in accordance with the first embodiment of the present invention.
  • FIG. 1B is a front view of the eye detecting device shown in FIG. 1A .
  • FIG. 1C is a function block diagram of the eye detecting device in accordance with the first embodiment of the present invention.
  • the eye detecting device 100 includes an optical assembly 110 , an image sensor 120 , and an arithmetic unit 130 .
  • the optical assembly 110 provides at least one incident light L 1 to form at least one glint G 1 located near a pupil P 1 of the eye E 1 .
  • the eye E 1 has the pupil P 1 and a periphery surrounding the pupil P 1 , and the glint G 1 is formed on the periphery.
  • the periphery includes an iris I 1 and a sclera.
  • the image sensor 120 is used to capture an eye image, and the eye E 1 image includes the glint G 1 image.
  • the arithmetic unit 130 analyzes a gray scale value of the eye E 1 image and obtains at least one position of the glint G 1 according to the gray scale value. Hence, the arithmetic unit 130 can determine the position of the pupil P 1 of eye E 1 according to the position of the glint G 1 .
  • The eye detecting device 100 can be disposed on an eyeglasses frame, and it can also be disposed on a laptop or on the screen of a smartphone. In this embodiment, the eye detecting device 100 may be wearable, like eyeglasses.
  • The optical assembly 110 and the image sensor 120 are disposed on the supporting frame 150. The user can wear the supporting frame 150, with the optical assembly 110 and the image sensor 120 in front of the user.
  • Alternatively, the eye detecting device 100 can be disposed on a mobile device, for example a laptop, or near the front camera lens or the screen of a smartphone.
  • the present disclosure does not limit the disposition of the eye detecting device 100 .
  • the supporting frame 150 can be an eyeglasses frame.
  • The supporting frame 150 includes two rims 152 and two temples 154 connected to the rims 152 respectively. The user can rest the temples 154 on the ears, so that the rims 152 are in front of the eye E 1 .
  • the present disclosure does not limit the supporting frame 150 .
  • the optical assembly 110 can emit at least one incident light L 1 entering the eye E 1 .
  • the incident light L 1 falls on the eye E 1 to form at least one glint by reflecting at the iris I 1 of the eye E 1 .
  • the glint is located near a pupil P 1 of the eye E 1 .
  • the glint may be formed on the periphery surrounding the pupil P 1 , namely iris I 1 or sclera.
  • In this embodiment, one incident light L 1 is provided to enter the eye E 1 , so that one glint is formed.
  • The incident light L 1 is invisible light, such as infrared light or near infrared light.
  • Because the cornea covering the iris I 1 has a smooth surface, incident light L 1 emitted from many directions can form the glint G 1 along the path between the cornea and the image sensor 120 .
  • the optical assembly 110 includes at least one light source 112 and at least one dispersing component 114 so that the optical assembly 110 provides at least one incident light L 1 .
  • the light source 112 can be light emitting diode (LED), and the dispersing component 114 can guide light and has a plurality of optical microstructures.
  • The optical microstructures can be, for example, trenches or ribs; the trenches may be V-cut grooves. When the light provided by the light source 112 enters the dispersing component 114 , the light can be reflected, refracted, or scattered by the optical microstructures so as to be transmitted from an outgoing surface of the dispersing component 114 .
  • The image sensor 120 is used to capture the eye E 1 image. It is worth mentioning that the wavelength range of the light captured by the image sensor 120 covers the wavelength range of the incident light L 1 .
  • The eye E 1 image shows the eye region of the user, for example the eye white area (not shown), the iris I 1 area, and the pupil P 1 area. Besides, the eye E 1 image shows the glint G 1 image.
  • the image sensor 120 senses the incident light L 1 through photo-sensitive elements.
  • the photo-sensitive elements can be complementary metal-oxide-semiconductor sensors (CMOS) or charge-coupled devices (CCD).
  • the arithmetic unit 130 can be a digital signal processor (DSP) or a central processing unit (CPU).
  • the arithmetic unit 130 analyzes a gray scale value of the eye image and obtains the distribution of the glint G 1 through the gray scale value.
  • the arithmetic unit 130 determines the position of the pupil P 1 of eye E 1 according to at least one distribution of the glint G 1 .
  • FIG. 1D depicts a flow diagram of a method of detecting pupil in accordance with the first exemplary embodiment of the present disclosure. Please refer to FIG. 1B , FIG. 1C and FIG. 1D .
  • The optical assembly 110 provides one incident light L 1 entering the eye E 1 .
  • The incident light L 1 falls on the eye E 1 and is reflected to form one glint G 1 near the pupil P 1 , for example on the iris I 1 .
  • The position where the incident light L 1 enters the iris I 1 near the pupil P 1 can be adjusted by changing the arrangement of the light source 112 or the disposition of the light source 112 and the dispersing component 114 .
  • In other words, the position of the glint G 1 depends on the emission position of the incident light L 1 and can be changed by changing that emission position.
  • the image sensor 120 captures a first eye image by photographing the eye E 1 .
  • the first eye image photographed by image sensor 120 shows the image of the eye E 1 region and the image of the said glint G 1 .
  • the image sensor 120 transmits the data of the first eye image to the arithmetic unit 130 .
  • the arithmetic unit 130 analyzes a gray scale value of the first eye image to obtain the distributions of the glint G 1 .
  • An 8-bit image, namely a 256-level grayscale image, is used as an example.
  • The grayscale value is quantized into 256 levels from pure black, through gray, to white, and ranges from 0 to 255. It is worth noticing that the gray scale value of the glint G 1 is near to or equal to 255, whereas the gray scale value of the pupil P 1 is near 0.
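The grayscale separation described above can be illustrated with a toy 8-bit image. The array values and the two thresholds here are illustrative assumptions, not taken from the patent:

```python
import numpy as np

# Toy 9x9 eye image: iris background ~120, a dark pupil block ~5,
# and a single saturated glint pixel near the pupil.
eye = np.full((9, 9), 120, dtype=np.uint8)
eye[3:6, 3:6] = 5      # pupil: gray scale value near 0
eye[2, 6] = 255        # glint: gray scale value near/equal to 255

# Glint pixels sit at the top of the 0-255 range and pupil pixels at
# the bottom, so two simple thresholds separate them from the iris.
glint_mask = eye >= 250
pupil_mask = eye <= 20

print(np.argwhere(glint_mask))  # [[2 6]] -- the glint position
print(pupil_mask.sum())         # 9 pupil pixels
```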
  • The arithmetic unit 130 can obtain the arrangement, shape, and range of the pixels that are close to the maximum gray scale value among all pixels through the gray scale value distribution of the first eye image. Further, the arithmetic unit 130 infers that the arrangement of these pixels corresponds to the arrangement of the glint G 1 in the first image.
  • the arithmetic unit 130 determines the position of the pupil P 1 according to the position of the glint G 1 . Specifically, the arithmetic unit 130 selects an appropriate threshold gray scale value first. The gray scale value of the pupil P 1 is less than the said threshold gray scale value, whereas the gray scale value of the glint G 1 in the first eye image is greater than the said threshold gray scale value.
  • the arithmetic unit 130 scans the survey area M 1 near the arrangement of the glint G 1 (shown in FIG. 1B ), and analyzes the gray scale value distribution of the survey area M 1 .
  • the arithmetic unit 130 determines the part of the survey area M 1 , whose gray scale value is less than the threshold gray scale value.
  • the survey area M 1 can be defined by at least one glint G 1 .
  • The positions of the glint G 1 and the pupil P 1 are in the survey area M 1 . It is worth mentioning that the position of the glint G 1 can be at the boundary of the survey area M 1 or inside the survey area M 1 .
  • The user can set the range of the survey area M 1 according to the pupil P 1 size through the arithmetic unit 130 .
  • the present disclosure does not limit the range of the survey area M 1 .
  • The arithmetic unit 130 determines an area from the survey area M 1 to be a specific area, where the gray scale value of the specific area is less than the threshold gray scale value. Further, the arithmetic unit 130 determines whether the shape of the specific area matches the shape of the pupil P 1 , to reduce the possibility of misjudging the pupil P 1 position. For instance, suppose the arithmetic unit 130 selects two specific areas whose gray scale values are less than the threshold gray scale value. When one specific area is rectangular and the other is circular, the arithmetic unit 130 determines that the circular specific area matches the shape of the pupil P 1 .
  • The arithmetic unit 130 also determines whether the proportion of the specific area is within the range of the pupil P 1 , to further reduce the possibility of misjudging the pupil P 1 position.
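One simple way to sketch the shape check above is a fill-ratio heuristic: a disc fills about pi/4 of its bounding box, a rectangle fills all of it. This particular test and its tolerance are illustrative assumptions; the patent does not specify a shape test.

```python
import numpy as np

def is_roughly_circular(mask, tol=0.15):
    """Heuristic shape test for a candidate dark area: a filled disc
    occupies about pi/4 (~0.785) of its bounding box, while a filled
    rectangle occupies ~1.0, so the fill ratio separates the two."""
    rows, cols = np.nonzero(mask)
    h = rows.max() - rows.min() + 1
    w = cols.max() - cols.min() + 1
    fill = len(rows) / (h * w)
    return abs(fill - np.pi / 4) < tol

# A disc-shaped candidate passes; a square candidate is rejected.
yy, xx = np.mgrid[0:13, 0:13]
disc = (yy - 6) ** 2 + (xx - 6) ** 2 <= 25
square = np.zeros((13, 13), dtype=bool)
square[4:9, 4:9] = True
print(is_roughly_circular(disc), is_roughly_circular(square))
```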
  • The arithmetic unit 130 can analyze the gray scale value distribution of the survey area M 1 near the glint G 1 in the first eye image so as to reduce the searching scope of the pupil P 1 . Hence, the position of the pupil P 1 can be found quickly. Therefore, unlike the conventional technology, the arithmetic unit 130 does not analyze the gray scale value distribution of the whole first eye image to search for the pupil P 1 .
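The reduced search scope can be sketched as follows; the helper name, window size, and threshold are illustrative assumptions:

```python
import numpy as np

def find_pupil_near_glint(image, glint_pos, half_size, threshold):
    """Scan only a square survey area centered on the glint, instead of
    the whole eye image, and return the coordinates of below-threshold
    (dark) pixels in whole-image coordinates."""
    r, c = glint_pos
    r0, c0 = max(r - half_size, 0), max(c - half_size, 0)
    r1 = min(r + half_size + 1, image.shape[0])
    c1 = min(c + half_size + 1, image.shape[1])
    rows, cols = np.nonzero(image[r0:r1, c0:c1] < threshold)
    return rows + r0, cols + c0

# Toy image: uniform iris, a 3x3 dark pupil block, and a glint beside it.
eye = np.full((20, 20), 120, dtype=np.uint8)
eye[8:11, 8:11] = 5
rows, cols = find_pupil_near_glint(eye, glint_pos=(7, 12),
                                   half_size=6, threshold=20)
print(len(rows))  # 9 -- all pupil pixels found without a full scan
```

Only the 13x13 window around the glint is scanned here, rather than all 400 pixels, which is the speed-up the embodiment claims over scanning the whole eye image.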
  • FIG. 2A is a side view of the eye detecting device in accordance with the second embodiment of the present invention.
  • FIG. 2B is a function block diagram of the eye detecting device in accordance with the second embodiment of the present invention. Please refer to FIGS. 2A and 2B .
  • The structure of an eye detecting device 200 in accordance with the second exemplary embodiment is similar to that of the eye detecting device 100 in accordance with the first exemplary embodiment.
  • The eye detecting devices 100 and 200 both include the image sensor 120 .
  • However, there are some differences between the eye detecting devices 100 and 200 . The following detailed description explains those differences, and the same features are basically not described again.
  • the eye detecting device 200 in accordance with the second embodiment includes an optical assembly 210 , an image sensor 120 , and an arithmetic unit 130 .
  • the optical assembly 210 provides a plurality of incident lights L 1 to form a plurality of glints G 1 located near a pupil P 1 of the eye E 1 .
  • the image sensor 120 is used to capture an eye image, and the eye image includes these glints G 1 image.
  • the arithmetic unit 130 analyzes a gray scale value of the eye E 1 image and obtains distribution of the glints G 1 according to the gray scale value. Hence, the arithmetic unit 130 determines the position of the pupil P 1 of eye E 1 according to the distribution of the glints G 1 .
  • The optical assembly 210 can emit a plurality of incident lights L 1 entering the eye E 1 .
  • The incident lights L 1 fall on the eye E 1 to form a plurality of glints by reflection at an iris I 1 of the eye E 1 , and at least some of the glints are located near a pupil P 1 of the eye E 1 .
  • In this embodiment, the optical assembly 210 includes only one light source 212 and a dispersing component 214 .
  • The incident lights can be formed by dividing at least one light through the optical assembly 210 .
  • Alternatively, the optical assembly 210 may include a plurality of light sources 212 and exclude the dispersing component 214 .
  • the present disclosure does not limit the number of the light source 212 and the structure of dispersing component 214 .
  • FIG. 2C depicts a flow diagram of a method of detecting pupil in accordance with the second exemplary embodiment of the present disclosure. Please refer to FIG. 2A , FIG. 2B and FIG. 2C .
  • The optical assembly 210 provides a plurality of incident lights L 1 entering the eye E 1 .
  • The incident lights L 1 are reflected to form a plurality of glints G 1 near the pupil P 1 , for example on the iris I 1 .
  • The positions of the glints G 1 can be changed with the emission positions of the incident lights L 1 .
  • In this embodiment, there are four emission positions of the incident lights L 1 , approximately arranged in a rectangle with an aspect ratio of 2:1.
  • Accordingly, four glints G 1 are formed and arranged in a rectangle with an aspect ratio of 2:1.
  • the image sensor 120 captures a first eye image by photographing the eye E 1 .
  • the first eye image photographed by image sensor 120 shows the image of the eye E 1 region and the image of the said glints G 1 .
  • the image sensor 120 transmits the data of the first eye image to the arithmetic unit 130 .
  • The arithmetic unit 130 can obtain the arrangement, shape, and range of the pixels whose gray scale values are close to the maximum through the gray scale value distribution. Further, the arithmetic unit 130 infers that the arrangement of these pixels corresponds to the arrangement of the glints G 1 in the first image.
  • The arithmetic unit 130 determines the position of the pupil P 1 according to the distributions of the glints G 1 . Specifically, the arithmetic unit 130 first selects an appropriate threshold gray scale value. The gray scale values of the glints G 1 in the first eye image are greater than the said threshold gray scale value. After confirming the arrangement of the glints G 1 , the arithmetic unit 130 scans the survey area M 1 near the arrangement of the glints G 1 (shown in FIG. 2A ) and analyzes the gray scale value distribution of the survey area M 1 .
  • the survey area M 1 can be defined by those glints G 1 .
  • the survey area M 1 contains the arrangement of the glints G 1 and the pupil P 1 , and can be equal to or slightly larger than the area surrounded by the glints G 1 .
  • The arithmetic unit 130 determines whether the shape of the specific area matches the shape of the pupil P 1 , and whether the proportion of the specific area is within the range of the pupil P 1 .
  • The arithmetic unit 130 can define the shape or range of the survey area M 1 through the distribution of the glints G 1 so as to reduce the searching range. Hence, the position of the pupil P 1 can be found quickly.
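Defining the survey area from the glint distribution might look like the sketch below. The bounding-box-plus-margin construction is one illustrative reading of "equal to or slightly larger than the area surrounded by the glints"; the helper name and margin are assumptions.

```python
def survey_area_from_glints(glints, margin=1):
    """Return (row_min, row_max, col_min, col_max) of a rectangular
    survey area: the bounding box of the glint positions, enlarged by
    a small margin so that glints lie on or inside its boundary."""
    rows = [r for r, _ in glints]
    cols = [c for _, c in glints]
    return (min(rows) - margin, max(rows) + margin,
            min(cols) - margin, max(cols) + margin)

# Four glints arranged in a roughly 2:1 rectangle, as in this embodiment.
glints = [(2, 3), (2, 11), (6, 3), (6, 11)]
print(survey_area_from_glints(glints))  # (1, 7, 2, 12)
```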
  • FIG. 3A is a function block diagram of the eye detecting device in accordance with the third embodiment of the present invention.
  • The structure of an eye detecting device 300 in accordance with the third exemplary embodiment is similar to that of the eye detecting device 200 in accordance with the second exemplary embodiment.
  • The eye detecting devices 300 and 200 each include the optical assembly 210 and the image sensor 120 .
  • However, there are some differences between the eye detecting devices 300 and 200 .
  • The following detailed description explains those differences, and the same features are basically not described again.
  • the eye detecting device 300 in accordance with the third embodiment includes an optical assembly 210 , an image sensor 120 , an arithmetic unit 230 , and the control unit 340 .
  • the optical assembly 210 provides a plurality of incident lights L 1 to form a plurality of glints G 1 located near a pupil P 1 of the eye E 1 .
  • The control unit 340 controls the timing at which the incident lights are emitted into the eye; namely, the control unit 340 can control the optical assembly 210 to provide different incident lights L 1 into the eye E 1 at different timings separately.
  • the image sensor 120 captures the eye images at different timing, and the eye images include the glint G 1 a and glint G 1 b image.
  • the glint G 1 a and glint G 1 b image appear in each eye images captured at different timing.
  • the arithmetic unit 230 analyzes a gray scale value of the eye images captured at different timing and obtains positions of glint G 1 a and glint G 1 b according to the gray scale value. Hence, the arithmetic unit 230 determines the position of the pupil P 1 of eye E 1 according to the position of glint G 1 a and glint G 1 b.
  • The image sensor 120 is used to capture the eye images at different timings, and each eye image shows the glint G 1 a and glint G 1 b images.
  • The arithmetic unit 230 analyzes the gray scale values of the eye images captured at different timings. In addition, the arithmetic unit 230 commands the control unit 340 , so that the control unit 340 controls the timing at which the optical assembly 210 provides the incident lights L 1 .
  • FIG. 3B depicts a flow diagram of a method of detecting pupil in accordance with the third exemplary embodiment of the present disclosure. Please refer to FIG. 3A and FIG. 3B .
  • The control unit 340 controls the optical assembly 210 to provide a plurality of incident lights L 1 at a first timing.
  • The incident lights L 1 enter the iris I 1 area near the pupil P 1 and are then reflected to form a plurality of first glints G 1 a .
  • The arrangement of the first glints G 1 a corresponds to the emission arrangement of the incident lights L 1 .
  • the optical assembly 210 includes a plurality of light sources without including any dispersing component.
  • the image sensor 120 captures a first eye image by photographing the eye E 1 at the first timing.
  • the first eye image photographed by image sensor 120 at the first timing shows the image of the eye E 1 region and the image of the said first glints G 1 a .
  • the image sensor 120 transmits the data of the first eye image to the arithmetic unit 230 .
  • The control unit 340 controls the optical assembly 210 to provide a plurality of incident lights L 1 at a second timing.
  • The incident lights L 1 enter the iris I 1 area near the pupil P 1 and are reflected to form a plurality of second glints G 1 b .
  • The arrangement of the second glints G 1 b corresponds to the emission arrangement of the incident lights L 1 .
  • The first timing is different from the second timing, and the arrangement of the first glints G 1 a formed at the first timing is different from the arrangement of the second glints G 1 b formed at the second timing.
  • In other words, some of the light sources 112 provide incident lights L 1 at the first timing,
  • and the other light sources 112 provide incident lights L 1 at the second timing.
  • In this embodiment, the number of light sources 112 is four, and the light sources 112 are arranged approximately in a rectangular array.
  • The aspect ratio of the said rectangular array is 2:1.
  • The control unit 340 controls the optical assembly 210 to light the two light sources 112 arranged at diagonally opposite corners of the rectangular array at the first timing, and then the control unit 340 controls the optical assembly 210 to light the other two light sources 112 arranged at the other diagonally opposite corners at the second timing.
  • the present disclosure does not limit the number and arrangement of the light sources 112 provided by the optical assembly 210 at different timing.
  • the present disclosure does not limit the emission sequence of the light sources 112 .
  • the image sensor 120 captures a second eye image by photographing the eye E 1 at the second timing.
  • The second eye image photographed by the image sensor 120 at the second timing shows the image of the eye E 1 region and the image of the said second glints G 1 b .
  • the image sensor 120 transmits the data of the second eye image to the arithmetic unit 230 .
  • The aforementioned first timing may be, for example, the timing at which the user starts using the eye detecting device 300 ,
  • and the second timing is another timing different from the first timing.
  • the arithmetic unit 230 analyzes a gray scale value distribution of the first eye image and the second eye image to obtain the distributions of the first glints G 1 a and the second glints G 1 b .
  • The arithmetic unit 230 can obtain the arrangement, shape, and range of the pixels that are close to the maximum gray scale value among all pixels through the gray scale value distributions of the first eye image and the second eye image.
  • The arithmetic unit 230 infers that the arrangement of these pixels corresponds to the arrangement of the first glints G 1 a in the first image and the second glints G 1 b in the second image.
  • a difference image between the first image and the second image is produced by image subtraction.
  • the amount of the light sources 112 is four, and the first glints G 1 a in the first image are provided by two light sources 112 arranged in diagonally opposite corners of the rectangular array, and the second glints G 1 b in the second image are provided by the other light sources 112 arranged in other diagonally opposite corners of the rectangular array.
  • The difference image is generated by subtracting the second image from the first image, and the difference gray scale values of the difference image range from −255 to 255.
  • The gray scale values corresponding to the arrangements of the first glints G 1 a and the second glints G 1 b in the difference image between the first image and the second image are proximate to the extreme values.
  • the gray scale value corresponding to the arrangement of the first glint G 1 a is proximate to a maximum value
  • the gray scale value corresponding to the arrangement of the second glints G 1 b is proximate to a minimum value (negative gray scale value).
  • the gray scale value corresponding to the arrangement of the first glint G 1 a and the second glint G 1 b show a special pattern.
  • the special pattern is defined by two brightest spots and two darkest spots.
  • Alternatively, the difference image can be generated by subtracting the first image from the second image.
  • In that case, the gray scale value corresponding to the arrangement of the second glints G 1 b is proximate to a maximum value,
  • whereas the gray scale value corresponding to the arrangement of the first glints G 1 a is proximate to a minimum value; the present disclosure is not limited to the examples provided herein.
  • the arrangement of the first glints G 1 a and the second glint G 1 b can be further determined through the arithmetic unit 230 .
  • The arithmetic unit 230 analyzes the arrangement, shape, and range of the pixels that are close to the maximum (255) and minimum (−255) gray scale values among all pixels to infer a possible arrangement of the first glints G 1 a and the second glints G 1 b . Then, the arithmetic unit 230 determines whether the possible arrangement of the first glints G 1 a and the second glints G 1 b corresponds to the above-mentioned special pattern.
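The subtraction step and its special pattern can be sketched with toy images. The pixel values and the ±100 cutoffs are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Two toy eye images: glints G1a are lit at the first timing and glints
# G1b at the second; everything else in the scene stays the same.
first = np.full((7, 7), 100, dtype=np.uint8)
second = first.copy()
first[1, 1] = first[5, 5] = 255     # G1a on one diagonal
second[1, 5] = second[5, 1] = 255   # G1b on the other diagonal

# Signed subtraction: G1a pixels come out strongly positive, G1b pixels
# strongly negative, and the static background cancels to zero.
diff = first.astype(np.int16) - second.astype(np.int16)

g1a = np.argwhere(diff > 100)   # the two brightest spots
g1b = np.argwhere(diff < -100)  # the two darkest spots
print(g1a.tolist(), g1b.tolist())  # [[1, 1], [5, 5]] [[1, 5], [5, 1]]
```

The two strongly positive and two strongly negative spots are the "special pattern" the arithmetic unit looks for, which is what makes the glint arrangement hard to confuse with other bright features.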
  • Because the control unit 340 controls the different incident lights L 1 to be emitted into different positions of the eye E 1 at different timings,
  • the arrangements of the first glints G 1 a and the second glints G 1 b at the different timings can be made different.
  • The arrangement of the first glints G 1 a and the second glints G 1 b can be further confirmed through the gray scale values and the above-mentioned special pattern after image subtraction. Hence, the possibility of misjudging the positions of the glints G 1 can be further reduced.
  • the arithmetic unit 230 determines the position of the pupil P 1 according to the arrangement of the first glints G 1 a and the second glints G 1 b . Specifically, the arithmetic unit 230 selects an appropriate threshold gray scale value first. The gray scale value of the pupil P 1 is less than the said threshold gray scale value, whereas the gray scale values of the first glints G 1 a and the second glint G 1 b in the difference image are greater than the said threshold gray scale value. The arithmetic unit 230 confirms the arrangement of the first glints G 1 a and the second glint G 1 b through the gray scale value.
  • After confirming the arrangement of the first glints G 1 a and the second glints G 1 b , the arithmetic unit 230 scans the survey area M 1 near the arrangement of the first glints G 1 a or the second glints G 1 b , and analyzes the gray scale value distribution of the survey area M 1 . The arithmetic unit 230 determines the part of the survey area M 1 whose gray scale value is less than the threshold gray scale value.
  • the gray scale values corresponding to the arrangement of the first glint G1a and the second glint G1b in the difference image are proximate to the critical values: the gray scale value corresponding to the first glint G1a is proximate to the maximum value, whereas the gray scale value corresponding to the second glint G1b is proximate to the minimum value.
  • the arithmetic unit 230 confirms the arrangement of the first glints G1a through the gray scale values, and then scans the survey area M1 near the arrangement of the first glints G1a to determine the position of the pupil P1.
  • the survey area M 1 can be defined by these first glints G 1 a and/or second glints G 1 b .
  • the range of the survey area M 1 contains the arrangement of the first glints G 1 a and/or second glints G 1 b , and can be equal to or slightly larger than the area surrounded by the arrangement of the first glints G 1 a and/or second glints G 1 b .
  • the position of the first glints G 1 a and/or second glints G 1 b can be at the boundary of the survey area M 1 or in the survey area M 1 .
  • The user can set the range of the survey area M1 according to the pupil P1 size through the arithmetic unit 230.
  • the present disclosure does not limit the range of the survey area M1.
  • the arithmetic unit 230 selects a specific area from the survey area whose gray scale value is less than the threshold gray scale value, and then determines whether the shape and proportion of the specific area match the pupil P1 in the difference image, so as to reduce the possibility of misjudging the pupil P1 position.
  • the arithmetic unit 230 can analyze the gray scale value distribution of the survey area M1 near the arrangement of the first glints G1a and/or second glints G1b in the difference image so as to locate the pupil P1 quickly. Therefore, unlike the conventional technology, the arithmetic unit 230 does not need to analyze the gray scale value distribution of the whole first or second eye image to search for the pupil P1.
  • FIG. 4 depicts a flow diagram of a method of identifying iris in accordance with the third exemplary embodiment of the present disclosure.
  • the method of identifying iris in accordance with the third exemplary embodiment can be implemented through eye detecting device 200 (shown in FIG. 2A ). Please refer to FIG. 2A and FIG. 4 .
  • the optical assembly 210 provides a plurality of incident lights L 1 entering the eye E 1 .
  • the incident lights L1 are reflected to form a plurality of glints G1 near the pupil P1, and the arrangement of those glints G1 is defined as a first reference point, a second reference point and a third reference point.
  • the incident lights L1 can be provided by the light source 212 and the dispersing component 214, so that the emission positions of the incident lights L1 are the illuminated positions of the dispersing component 214.
  • the incident lights L1 can be provided by at least three light sources 212 without any dispersing component 214, so that the emission positions of the incident lights L1 are the positions where the light sources 212 are placed.
  • the positions at which the incident lights L1 enter the iris I1 area near the pupil P1 can be adjusted by changing the arrangement of the light sources 212 or the disposition of the light sources 212 and the dispersing component 214.
  • the first reference point, the second reference point and the third reference point are located near a pupil P 1 of the eye E 1 as a mark for locating the eye E 1 at the reference position.
  • the positions of the first reference point, the second reference point and the third reference point correspond to the emission positions of the incident lights L1.
  • when the user looks straight ahead, namely, when the eye gazes straight ahead, the user presets those glint positions corresponding to the emission arrangement of the incident lights L1 as the positions of the first reference point, the second reference point and the third reference point.
  • a first reference axis is formed between the first reference point and the second reference point.
  • a second reference axis is formed between the second reference point and the third reference point.
  • a reference angle is formed between the first reference axis and the second reference axis.
  • the method of identifying iris can further include presetting a fourth reference point or more reference points, and is not limited to the examples provided herein.
  • three emission positions of the incident lights L1 are provided and are arranged approximately as a right-angled triangle.
  • the ratio of two sides of said right-angled triangle is 2:1.
  • those glints are located near the pupil P1 of the eye E1 and are arranged approximately as a right-angled triangle.
  • the ratio between the first reference axis and the second reference axis is 2:1, and the reference angle is approximately 90 degrees.
  • the incident lights L 1 form a first measuring glint, a second measuring glint, and a third measuring glint near a pupil P 1 of the eye E 1 .
  • a first axis is formed between the first measuring glint and the second measuring glint.
  • a second axis is formed between the second measuring glint and the third measuring glint.
  • An angle is formed between the first axis and the second axis.
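The two axes and the included angle can be computed directly from the three glint positions. The following sketch assumes (y, x) pixel coordinates and an illustrative function name; it returns the axis lengths and the angle at the second glint.

```python
import math

def axes_and_angle(g1, g2, g3):
    """g1, g2, g3: (y, x) positions of the three glints.  Returns the
    length of the first axis g1-g2, the length of the second axis
    g2-g3, and the angle between them at g2, in degrees."""
    v1 = (g1[0] - g2[0], g1[1] - g2[1])
    v2 = (g3[0] - g2[0], g3[1] - g2[1])
    l1 = math.hypot(*v1)
    l2 = math.hypot(*v2)
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (l1 * l2)
    return l1, l2, math.degrees(math.acos(cos_a))
```

With the reference arrangement of the disclosure, the returned lengths stand in a 2:1 ratio and the angle is approximately 90 degrees.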
  • the eye E 1 is substantially spherical, and the iris I 1 is the portion rising slightly above the surface of the sphere.
  • the arrangement of those glints G1 changes as the eye E1 moves relative to the reference position, where the glints G1 initially coincide with the first reference point, the second reference point and the third reference point. Namely, when the gaze direction of the eye E1 moves from the front direction to a lateral direction, the arrangement of those glints G1 changes from the first reference point, the second reference point and the third reference point to the first measuring glint, the second measuring glint, and the third measuring glint.
  • the image sensor 120 captures an eye image by photographing the eye E 1 .
  • the eye image photographed by the image sensor 120 shows the image of the eye E1 region and the images of the said first measuring glint, second measuring glint, and third measuring glint. Then, the image sensor 120 transmits the data of the eye image to the arithmetic unit 130 or 230.
  • the arithmetic unit 130 or 230 analyzes the gray scale value distribution of the eye image to obtain the arrangement of the first measuring glint, the second measuring glint, and the third measuring glint. Specifically, the arithmetic unit 130 or 230 can obtain the arrangement, shape and range of the pixels which are close to the maximum gray scale value (255) among all pixels through the gray scale value distribution of the eye image. Further, the arithmetic unit 130 or 230 infers that the arrangement of these pixels corresponds to the arrangement of the first measuring glint, the second measuring glint, and the third measuring glint in the image.
  • the displacement amounts of the first measuring glint, the second measuring glint, and the third measuring glint relative to the first reference point, the second reference point and the third reference point are calculated respectively. Therefore, a deformation amount of the iris image of the eye at the measuring position relative to the iris image of the eye at the reference position is obtained.
  • the arithmetic unit 130 or 230 calculates the first variation, which is a length and angular variation of the first axis relative to the first reference axis.
  • the arithmetic unit 130 or 230 calculates the second variation, which is a length and angular variation of the second axis relative to the second reference axis.
  • the third variation, which is an angular variation of the angle relative to the reference angle, is calculated.
  • the arithmetic unit 130 or 230 calculates the iris image deformation amount according to the first variation, the second variation, and the third variation.
  • the proportion of the iris image deformation amount can be estimated according to the relative proportion of the first axis to the first reference axis and the relative proportion of the second axis to the second reference axis.
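The first and second variations can be sketched as a length ratio plus an angular change of each measuring axis relative to its reference axis. The endpoint coordinates and the helper name below are assumptions for illustration.

```python
import math

def axis_variation(ref_a, ref_b, mea_a, mea_b):
    """Length ratio and angular change (degrees) of the measuring axis
    mea_a-mea_b relative to the reference axis ref_a-ref_b."""
    rv = (ref_b[0] - ref_a[0], ref_b[1] - ref_a[1])
    mv = (mea_b[0] - mea_a[0], mea_b[1] - mea_a[1])
    length_ratio = math.hypot(*mv) / math.hypot(*rv)
    angular_change = math.degrees(
        math.atan2(mv[1], mv[0]) - math.atan2(rv[1], rv[0]))
    return length_ratio, angular_change
```

Applying this to both axes, together with the change of the included angle, gives the deformation amount of the iris image at the measuring position.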
  • the distance between the image sensor 120 and eye E 1 can be estimated according to the length of the first axis and the second axis.
  • the size of the pupil P 1 can be estimated so that the position of the pupil P 1 can be searched quickly.
  • the shape of the pupil P1 image photographed by the image sensor 120 is similar to a circle. When the measuring position is equal to the reference position, namely, when the user keeps looking straight ahead, the shape of the pupil P1 image photographed by the image sensor 120 remains similar to a circle. When the measuring position differs from the reference position, namely, when the gaze direction of the eye E1 moves from the front direction to a lateral direction, the shape of the pupil P1 image photographed by the image sensor 120 is similar to an ellipse.
  • the arithmetic unit 130 or 230 can calculate the major axis and minor axis of the said ellipse according to the first variation, the second variation, and the third variation. Hence, the boundary of the pupil P 1 can be estimated so that the boundary of the pupil P 1 can be searched quickly.
  • FIG. 5 depicts a flow diagram of a method of identifying iris in accordance with the fourth exemplary embodiment of the present disclosure. Please refer to FIG. 5 and FIG. 2A .
  • the method of identifying iris shown in FIG. 5 is similar to the method of identifying iris shown in FIG. 4 .
  • the differences between these methods of identifying iris are disclosed as follows.
  • the optical assembly 210 provides a plurality of incident lights L 1 entering the eye E 1 .
  • the incident lights L1 are reflected to form a plurality of glints G1 near the pupil P1.
  • the incident lights L1 can be provided by the light source 212 and the dispersing component 214, so that the emission positions of the incident lights L1 are the illuminated positions of the dispersing component 214.
  • the incident lights L1 can be provided by at least three light sources 212 without any dispersing component 214, so that the emission positions of the incident lights L1 are the positions where the light sources 212 are placed.
  • the positions at which the incident lights L1 enter the iris I1 area near the pupil P1 can be adjusted by changing the arrangement of the light sources 212 or the disposition of the light sources 212 and the dispersing component 214.
  • the user sets a first reference point, a second reference point and a third reference point as a mark for locating the eye E 1 at the reference position.
  • the positions of the first reference point, the second reference point and the third reference point correspond to the emission positions of the incident lights L1.
  • the user presets those glint positions corresponding to the emission arrangement of the incident lights L1 as the positions of the first reference point, the second reference point and the third reference point.
  • the reference position need not be directly in front of the eye E1; the gaze direction of the eye E1 may deviate from the front direction, and the reference position is not limited to the examples provided herein.
  • a first reference axis is formed between the first reference point and the second reference point.
  • a second reference axis is formed between the second reference point and the third reference point.
  • a reference angle is formed between the first reference axis and the second reference axis.
  • the method of identifying iris can further include presetting a fourth reference point or more reference points, and is not limited to the examples provided herein.
  • three emission positions of the incident lights L1 are provided and are arranged approximately as a right-angled triangle.
  • the ratio of two sides of said right-angled triangle is 2:1.
  • those glints are located near the pupil P1 of the eye E1 and are arranged approximately as a right-angled triangle.
  • the ratio between the first reference axis and the second reference axis is 2:1, and the reference angle is approximately 90 degrees.
  • the user presets those glint positions corresponding to the emission arrangement of the incident lights L1 as the positions of the first reference point, the second reference point and the third reference point while a reference distance exists between the optical assembly 210 and the eye E1.
  • the incident lights L 1 form a first measuring glint, a second measuring glint, and a third measuring glint near a pupil P 1 of the eye E 1 .
  • the positions of the first measuring glint, the second measuring glint, and the third measuring glint correspond to the positions of the first reference point, the second reference point and the third reference point.
  • a first axis is formed between the first measuring glint and the second measuring glint.
  • a second axis is formed between the second measuring glint and the third measuring glint.
  • An angle is formed between the first axis and the second axis.
  • the glint G1 positions formed by emitting the incident lights L1 into the eye E1 can change to the first, second and third measuring glints. That is, when the eye gaze direction remains unmoved, the glint G1 positions formed by emitting the incident lights L1 into the eye E1 can change proportionally from the aforementioned first, second and third reference points to the first, second and third measuring glints. The angle is equal to the reference angle.
  • since the ratio between the first and second reference axes is 2:1, the ratio between the first and second axes is also 2:1; since the reference angle is approximately 90 degrees, the angle is approximately 90 degrees.
  • the image sensor 120 captures an eye image by photographing the eye E 1 .
  • the eye image photographed by the image sensor 120 shows the image of the eye E1 region, the images of the said first, second and third measuring glints, and an iris I1 image.
  • the image sensor 120 transmits the data of the eye image to the arithmetic unit 130 or 230 .
  • the arithmetic unit 130 or 230 analyzes the gray scale value distribution of the eye image to obtain the arrangement of the first, second, and third measuring glints. Specifically, the arithmetic unit 130 or 230 can obtain the arrangement, shape and range of the pixels whose gray scale values are close to the maximum gray scale value (255) through the gray scale value distribution of the eye image. Further, the arithmetic unit 130 or 230 infers that the arrangement of these pixels corresponds to the arrangement of the first, second, and third measuring glints in the image.
  • the variation of the distance between the first measuring glint and the second measuring glint with respect to the distance between the first reference point and the second reference point is calculated.
  • the variation of the distance between the second measuring glint and the third measuring glint with respect to the distance between the second reference point and the third reference point is calculated. Accordingly, the resolution variation of an iris image when the eye is located at the measuring position is obtained.
  • the arithmetic unit 130 or 230 calculates the first variation, which is a length variation of the first axis relative to the first reference axis.
  • the arithmetic unit 130 or 230 calculates the second variation, which is a length variation of the second axis relative to the second reference axis.
  • the arithmetic unit 130 or 230 calculates the resolution variation of an iris image according to the first and second variation.
  • the first reference axis has 20 pixels, and the second reference axis has 10 pixels.
  • the ratio between the pixel counts of the first and second reference axes is 2:1.
  • the arithmetic unit 130 or 230 calculates that the first axis has 10 pixels and the second axis has 5 pixels.
  • the arithmetic unit 130 or 230 calculates that the first axis is reduced by a factor of two compared to the first reference axis, and the second axis is reduced by a factor of two compared to the second reference axis.
  • both the first variation and the second variation are two.
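The numeric example above can be checked with a short sketch; the function name is an assumption, not part of the disclosure.

```python
def resolution_variation(ref_len1, ref_len2, mea_len1, mea_len2):
    """How many times each measuring axis has shrunk relative to its
    reference axis; when both axes shrink equally, that common factor
    is the resolution variation of the iris image."""
    return ref_len1 / mea_len1, ref_len2 / mea_len2
```

For the 20- and 10-pixel reference axes and the 10- and 5-pixel measuring axes, both variations are two.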
  • the boundary of the pupil P 1 can be estimated so that the boundary of the pupil P 1 can be searched quickly.
  • the present disclosure provides an eye detecting device and methods of detecting the pupil and identifying the iris.
  • the eye detecting device includes an optical assembly, an image sensor, and an arithmetic unit.
  • the arithmetic unit can analyze the gray scale value distribution of the survey area near the arrangement of the glint in the first eye image so as to reduce the searching scope of the pupil. Hence, the position of the pupil can be found quickly. Therefore, unlike the conventional technology, the arithmetic unit does not need to analyze the gray scale value distribution of the whole first eye image to search for the pupil.
  • the present disclosure provides an eye detecting device and a method of detecting the pupil.
  • the eye detecting device includes an optical assembly, an image sensor, an arithmetic unit, and a control unit. Since the control unit controls the different incident lights to be emitted to different positions of the eye at different timings, the arrangement of the first and second glints at the different timings can be determined. The arrangement of the first and second glints can be further confirmed through the gray scale values and the special pattern after image subtraction. Hence, the possibility of misjudging the glint positions can be further reduced.
  • the arithmetic unit can analyze the gray scale value distribution of the survey area near the arrangement of the first glints and/or second glints in the difference image so as to locate the pupil quickly. Therefore, unlike the conventional technology, the arithmetic unit does not need to analyze the gray scale value distribution of the whole first or second eye image to search for the pupil.
  • the arithmetic unit can calculate the major axis and minor axis of the said ellipse according to the first variation, the second variation, and the third variation. Hence, the boundary of the pupil P 1 can be estimated so that the boundary of the pupil P 1 can be searched quickly.
  • the present disclosure provides methods of identifying iris.
  • the arithmetic unit can calculate the major axis and minor axis of the said ellipse according to the first, second, and third variation. Hence, the boundary of the pupil can be estimated so that the boundary of the pupil can be searched quickly.
  • the present disclosure provides methods of identifying iris.
  • the arithmetic unit can calculate the first, second, and third variation. Hence, the boundary of the pupil can be estimated so that the boundary of the pupil can be searched quickly.

Abstract

A method of identifying iris is provided. The method includes providing a plurality of incident lights entering an eye located at a reference position; setting first, second and third reference points; forming first, second and third measuring glints by the incident lights after the eye moves to a measuring position; capturing an eye image of the eye including first, second and third measuring glint images and an iris image; analyzing a gray scale value of the eye image to obtain the positions of the first, second and third measuring glints; and calculating a variation of the distance between the first measuring glint and the second measuring glint and a variation of the distance between the second measuring glint and the third measuring glint to obtain a resolution variation of the iris image when the eye is located at the measuring position.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • This application is a continuation application of U.S. application Ser. No. 14/478,517, filed on Sep. 5, 2014, and entitled “EYE DETECTING DEVICE AND METHODS OF DETECTING PUPIL”, the entire contents of which are hereby incorporated by reference.
  • BACKGROUND
  • 1. Technical Field
  • The present disclosure relates to an eye identifying method, and in particular to a method of identifying iris.
  • 2. Description of Related Art
  • Currently, eye detecting devices can be used to detect the gaze direction or to identify the iris boundary. Most eye detecting devices detect the eye gaze direction by using the characteristic that the position of the pupil changes with the gaze direction.
  • Generally, a conventional eye detecting device detects the eye gaze direction by using the glint formed by emitting incident light into the eye, and the glint serves as a reference point for locating the eye.
  • Specifically, after capturing the eye image, the conventional eye detecting device identifies the pupil and the glint from the whole cornea image. In the process of identifying the pupil, the conventional eye detecting device scans the whole eye image and analyzes the gray scale value distribution of the whole eye image to identify the pupil and the glint. The conventional eye detecting device can obtain the relative positions of the pupil and the glint, and then determines the gaze direction according to those relative positions.
  • SUMMARY
  • An exemplary embodiment of the present disclosure illustrates an eye detecting device which determines the position of the pupil according to at least one glint.
  • An exemplary embodiment of the present disclosure illustrates a method of identifying iris comprising: providing a plurality of incident lights entering an eye located at a reference position; setting a first reference point, a second reference point and a third reference point as a mark for locating the eye at the reference position; forming a first measuring glint, a second measuring glint, and a third measuring glint near a pupil of the eye by the incident lights after the eye moves from the reference position to a measuring position; capturing an eye image of the eye including a first measuring glint image, a second measuring glint image, a third measuring glint image, and an iris image; analyzing a gray scale value of the eye image to obtain the positions of the first measuring glint, the second measuring glint, and the third measuring glint; and calculating a variation of the distance between the first measuring glint and the second measuring glint with respect to the distance between the first reference point and the second reference point, and a variation of the distance between the second measuring glint and the third measuring glint with respect to the distance between the second reference point and the third reference point, so as to obtain a resolution variation of the iris image when the eye is located at the measuring position. The positions of the first reference point, the second reference point and the third reference point correspond to the emission positions of the incident lights. The positions of the first measuring glint, the second measuring glint, and the third measuring glint correspond to the positions of the first reference point, the second reference point and the third reference point.
  • In summary, the present disclosure provides a method of identifying iris. The arithmetic unit can calculate the first, second, and third variation. Hence, the boundary of the pupil can be estimated so that the boundary of the pupil can be searched quickly.
  • In order to further understand the techniques, means and effects of the present disclosure, the following detailed descriptions and appended drawings are hereby referred, such that through which, the purposes, features and aspects of the present disclosure can be thoroughly and concretely appreciated; however, the appended drawings are merely provided for reference and illustration, without any intention to be used for limiting the present disclosure.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings are included to provide a further understanding of the present disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
  • FIG. 1A depicts a side view of the eye detecting device in accordance with the first embodiment of the present invention.
  • FIG. 1B is a front view of the eye detecting device shown in FIG. 1A.
  • FIG. 1C is a function block diagram of the eye detecting device in accordance with the first embodiment of the present invention.
  • FIG. 1D depicts a flow diagram of a method of detecting pupil in accordance with the first exemplary embodiment of the present disclosure.
  • FIG. 2A depicts a side view of the eye detecting device in accordance with the second embodiment of the present invention.
  • FIG. 2B is a function block diagram of the eye detecting device in accordance with the second embodiment of the present invention.
  • FIG. 2C depicts a flow diagram of a method of detecting pupil in accordance with the second exemplary embodiment of the present disclosure.
  • FIG. 3A depicts a function block diagram of the eye detecting device in accordance with the third embodiment of the present invention.
  • FIG. 3B depicts a flow diagram of a method of detecting pupil in accordance with the second exemplary embodiment of the present disclosure.
  • FIG. 4 depicts a flow diagram of a method of identifying iris in accordance with the third exemplary embodiment of the present disclosure.
  • FIG. 5 depicts a flow diagram of a method of identifying iris in accordance with the fourth exemplary embodiment of the present disclosure.
  • DESCRIPTION OF THE EXEMPLARY EMBODIMENTS
  • FIG. 1A is a side view of the eye detecting device in accordance with the first embodiment of the present invention. FIG. 1B is a front view of the eye detecting device shown in FIG. 1A. FIG. 1C is a function block diagram of the eye detecting device in accordance with the first embodiment of the present invention. Please refer to FIG. 1A to FIG. 1C. The eye detecting device 100 includes an optical assembly 110, an image sensor 120, and an arithmetic unit 130. The optical assembly 110 provides at least one incident light L1 to form at least one glint G1 located near a pupil P1 of the eye E1. Specifically, the eye E1 has the pupil P1 and a periphery surrounding the pupil P1, and the glint G1 is formed on the periphery. The periphery includes an iris I1 and a sclera. The image sensor 120 is used to capture an eye image, and the eye E1 image includes the glint G1 image. The arithmetic unit 130 analyzes a gray scale value of the eye E1 image and obtains at least one position of the glint G1 according to the gray scale value. Hence, the arithmetic unit 130 can determine the position of the pupil P1 of the eye E1 according to the position of the glint G1.
  • The eye detecting device 100 can be disposed on an eyeglasses frame, or on a mobile device such as a laptop or the screen of a smartphone. In this embodiment, the eye detecting device 100 is wearable, like eyeglasses. The optical assembly 110 and the image sensor 120 are disposed on the supporting frame 150; the user can wear the supporting frame 150 so that the optical assembly 110 and the image sensor 120 are in front of the user. In other embodiments, the eye detecting device 100 can be disposed on a mobile device, for example a laptop, or near the front camera lens or the screen of a smartphone. However, the present disclosure does not limit the disposition of the eye detecting device 100.
  • Practically, the supporting frame 150 can be an eyeglasses frame. The supporting frame 150 includes two rims 152 and two temples 154 connected to the rims 152 respectively. The user can put the temples 154 on the ears so that the rims 152 are in front of the eye E1. However, the present disclosure does not limit the supporting frame 150.
  • The optical assembly 110 can emit at least one incident light L1 entering the eye E1. The incident light L1 falls on the eye E1 to form at least one glint by reflection at the iris I1 of the eye E1. The glint is located near the pupil P1 of the eye E1. Specifically, the glint may be formed on the periphery surrounding the pupil P1, namely the iris I1 or the sclera. In this embodiment, one incident light L1 is provided to enter the eye E1, so that the number of glints is one. It is worth mentioning that the incident light L1 is invisible light, such as infrared or near-infrared light. The cornea covering the iris I1 has a smooth surface, so that the incident light L1 emitted in many directions can form the glint G1 along the path between the cornea and the image sensor 120.
  • Specifically, the optical assembly 110 includes at least one light source 112 and at least one dispersing component 114, so that the optical assembly 110 provides at least one incident light L1. Practically, the light source 112 can be a light emitting diode (LED), and the dispersing component 114 can guide light and has a plurality of optical microstructures. The optical microstructures can be trenches or ribs, and the trenches may be V-cut grooves. When the light provided by the light source 112 is emitted into the dispersing component 114, the light can be reflected, refracted, or scattered by the optical microstructures so as to be transmitted from an outgoing surface of the dispersing component 114.
  • The image sensor 120 is used to capture the eye E1 image. It is worth mentioning that the wavelength range of the light captured by the image sensor 120 covers the wavelength range of the incident light L1. The eye E1 image shows the eye region of the user, for example, the eye white area (not shown), the iris I1 area, and the pupil P1 area. Besides, the eye E1 image shows the glint G1 image. Specifically, the image sensor 120 senses the incident light L1 through photo-sensitive elements. The photo-sensitive elements can be complementary metal-oxide-semiconductor (CMOS) sensors or charge-coupled devices (CCD).
  • The arithmetic unit 130 can be a digital signal processor (DSP) or a central processing unit (CPU). The arithmetic unit 130 analyzes a gray scale value of the eye image and obtains the distribution of the glint G1 through the gray scale value. The arithmetic unit 130 determines the position of the pupil P1 of eye E1 according to at least one distribution of the glint G1.
  • FIG. 1D depicts a flow diagram of a method of detecting pupil in accordance with the first exemplary embodiment of the present disclosure. Please refer to FIG. 1B, FIG. 1C and FIG. 1D.
  • Implementing the step S101, when the user uses the eye detecting device 100, for example by wearing the supporting frame 150 of the eye detecting device 100, the optical assembly 110 provides one incident light L1 entering the eye E1. The incident light L1 falls on the eye E1 and is reflected to form one glint G1 near the pupil P1, for example on the iris I1.
  • It is worth noting that the position where the incident light L1 enters the iris I1 near the pupil P1 can be adjusted by changing the arrangement of the light source 112 or the disposition of the light source 112 and the dispersing component 114. Namely, the position of the glint G1 can be changed by the emission position of the incident light L1. Hence, the position of the glint G1 depends on the emission position of the incident light L1.
  • In step S102, the image sensor 120 captures a first eye image by photographing the eye E1. The first eye image photographed by the image sensor 120 shows the image of the eye E1 region and the image of the glint G1. Then, the image sensor 120 transmits the data of the first eye image to the arithmetic unit 130.
  • In step S103, the arithmetic unit 130 analyzes the gray scale values of the first eye image to obtain the distribution of the glint G1. An 8-bit image, namely a 256-level grayscale image, is used as an example. The gray scale is quantized into 256 levels from pure black through gray to pure white, so the gray scale value ranges from 0 to 255. It is worth noting that the gray scale value of the glint G1 is near or equal to 255, whereas the gray scale value of the pupil P1 is near 0. Through the gray scale value distribution of the first eye image, the arithmetic unit 130 can obtain the arrangement, shape and range of the pixels whose values are close to the maximum gray scale value among all pixels. Further, the arithmetic unit 130 infers that the arrangement of these pixels corresponds to the arrangement of the glint G1 in the first eye image.
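The glint-locating idea above can be sketched briefly in code. The following is a minimal illustration only, not the patent's implementation; the function name, the 5-level tolerance, and the toy 5×5 image are assumptions made for the example.

```python
# Hypothetical sketch of step S103: locate candidate glint pixels in an
# 8-bit grayscale eye image by keeping pixels whose values are close to
# the brightest level present.

def find_glint_pixels(image, tolerance=5):
    """Return (row, col) coordinates of pixels near the maximum gray level."""
    peak = max(max(row) for row in image)          # brightest level present
    return [(r, c)
            for r, row in enumerate(image)
            for c, value in enumerate(row)
            if value >= peak - tolerance]

# A tiny 5x5 "eye image": a dark pupil region (0s) with one glint (255).
eye_image = [
    [80, 80, 80, 80, 80],
    [80,  0,  0,  0, 80],
    [80,  0,  0, 255, 80],
    [80,  0,  0,  0, 80],
    [80, 80, 80, 80, 80],
]
print(find_glint_pixels(eye_image))  # -> [(2, 3)]
```

In a real eye image several pixels would clear the tolerance, and their arrangement, shape and range would then be compared against the expected glint arrangement as the paragraph above describes.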
  • In step S104, the arithmetic unit 130 determines the position of the pupil P1 according to the position of the glint G1. Specifically, the arithmetic unit 130 first selects an appropriate threshold gray scale value. The gray scale value of the pupil P1 is less than the threshold gray scale value, whereas the gray scale value of the glint G1 in the first eye image is greater than the threshold gray scale value.
  • After confirming the position of the glint G1, the arithmetic unit 130 scans the survey area M1 near the glint G1 (shown in FIG. 1B) and analyzes the gray scale value distribution of the survey area M1. The arithmetic unit 130 determines the part of the survey area M1 whose gray scale values are less than the threshold gray scale value. The survey area M1 can be defined by at least one glint G1, and the positions of the glint G1 and the pupil P1 are within the survey area M1. It is worth mentioning that the glint G1 can be at the boundary of the survey area M1 or inside the survey area M1. The user can set the range of the survey area M1 according to the size of the pupil P1 through the arithmetic unit 130. The present disclosure does not limit the range of the survey area M1.
  • The arithmetic unit 130 selects an area from the survey area M1 as a specific area, and the gray scale values of the specific area are less than the threshold gray scale value. Further, the arithmetic unit 130 determines whether the shape of the specific area matches the shape of the pupil P1, so as to reduce the possibility of misjudging the pupil P1 position. For instance, suppose the arithmetic unit 130 selects two specific areas satisfying the condition that their gray scale values are less than the threshold gray scale value. When one specific area is rectangular and the other is circular, the arithmetic unit 130 determines that the circular specific area matches the shape of the pupil P1. Besides, to further reduce the possibility of misjudging the pupil P1 position, the user can set the expected range of the pupil P1 area in the first eye image, and the arithmetic unit 130 determines whether the proportion of the specific area is within that range.
  • It is worth mentioning that the arithmetic unit 130 can analyze the gray scale value distribution of the survey area M1 near the glint G1 in the first eye image so as to reduce the search scope for the pupil P1. Hence, the position of the pupil P1 can be found quickly. In other words, compared with conventional technology, the arithmetic unit 130 does not need to analyze the gray scale value distribution of the whole first eye image to search for the pupil P1.
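The survey-area search of steps S103–S104 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the square survey window, the threshold of 50, and all names are hypothetical choices for the example.

```python
# Hypothetical sketch of step S104: search for the pupil only inside a
# survey area around the glint instead of scanning the whole image.
# Assumes the survey area actually contains dark pupil pixels.

def find_pupil_in_survey_area(image, glint, half_size=2, threshold=50):
    """Return dark (below-threshold) pixels inside a square survey area
    centered on the glint, together with their centroid."""
    gr, gc = glint
    rows, cols = len(image), len(image[0])
    dark = []
    for r in range(max(0, gr - half_size), min(rows, gr + half_size + 1)):
        for c in range(max(0, gc - half_size), min(cols, gc + half_size + 1)):
            if image[r][c] < threshold:
                dark.append((r, c))
    centroid = (sum(r for r, _ in dark) / len(dark),
                sum(c for _, c in dark) / len(dark))
    return dark, centroid

eye_image = [
    [80, 80, 80, 80, 80],
    [80,  0,  0,  0, 80],
    [80,  0,  0, 255, 80],
    [80,  0,  0,  0, 80],
    [80, 80, 80, 80, 80],
]
dark, center = find_pupil_in_survey_area(eye_image, glint=(2, 3))
print(center)  # -> (2.0, 1.875), the centroid of the dark pupil pixels
```

The point of the sketch is only that the double loop runs over the small survey window, not the full image, which is the speed-up the paragraph above claims over conventional whole-image scanning.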
  • FIG. 2A is a side view of the eye detecting device in accordance with the second embodiment of the present invention. FIG. 2B is a function block diagram of the eye detecting device in accordance with the second embodiment of the present invention. Please refer to FIGS. 2A and 2B. The structure of an eye detecting device 200 in accordance with the second exemplary embodiment is similar to that of the eye detecting device 100 in accordance with the first exemplary embodiment. For example, the eye detecting devices 100 and 200 both include the image sensor 120. However, there are some differences between the eye detecting devices 100 and 200. The following detailed description explains these differences, and the same features are basically not described again.
  • The eye detecting device 200 in accordance with the second embodiment includes an optical assembly 210, an image sensor 120, and an arithmetic unit 130. The optical assembly 210 provides a plurality of incident lights L1 to form a plurality of glints G1 located near a pupil P1 of the eye E1. The image sensor 120 is used to capture an eye image, and the eye image includes the images of these glints G1. The arithmetic unit 130 analyzes the gray scale values of the eye E1 image and obtains the distribution of the glints G1 according to the gray scale values. Hence, the arithmetic unit 130 determines the position of the pupil P1 of the eye E1 according to the distribution of the glints G1.
  • The optical assembly 210 can emit a plurality of incident lights L1 into the eye E1. The incident lights L1 fall on the eye E1 and are reflected at an iris I1 of the eye E1 to form a plurality of glints, at least some of which are located near a pupil P1 of the eye E1.
  • In this embodiment, the optical assembly 210 includes only one light source 212 and a dispersing component 214. The incident lights can be formed by dividing at least one light beam through the optical assembly 210. In another embodiment, the optical assembly 210 may include a plurality of light sources 212 without any dispersing component 214. The present disclosure does not limit the number of light sources 212 or the structure of the dispersing component 214.
  • FIG. 2C depicts a flow diagram of a method of detecting a pupil in accordance with the second exemplary embodiment of the present disclosure. Please refer to FIG. 2A, FIG. 2B and FIG. 2C.
  • In step S201, when the user uses the eye detecting device 200, the optical assembly 210 provides a plurality of incident lights L1 entering the eye E1. The incident lights L1 are reflected to form a plurality of glints G1 near the pupil P1, for example, on the iris I1.
  • It is worth noting that the positions of the glints G1 change with the emission positions of the incident lights L1. For instance, if there are four emission positions of the incident lights L1 arranged approximately in a rectangle with an aspect ratio of 2:1, then four glints G1 are formed and arranged in a rectangle with an aspect ratio of 2:1.
  • In step S202, the image sensor 120 captures a first eye image by photographing the eye E1. The first eye image photographed by the image sensor 120 shows the image of the eye E1 region and the images of the glints G1. Then, the image sensor 120 transmits the data of the first eye image to the arithmetic unit 130.
  • In step S203, the arithmetic unit 130 can obtain the arrangement, shape and range of the pixels whose gray scale values are close to the maximum through the gray scale value distribution. Further, the arithmetic unit 130 infers that the arrangement of these pixels corresponds to the arrangement of the glints G1 in the first eye image.
  • In step S204, the arithmetic unit 130 determines the position of the pupil P1 according to the distribution of the glints G1. Specifically, the arithmetic unit 130 first selects an appropriate threshold gray scale value. The gray scale values of the glints G1 in the first eye image are greater than the threshold gray scale value. After confirming the arrangement of the glints G1, the arithmetic unit 130 scans the survey area M1 near the arrangement of the glints G1 (shown in FIG. 2A) and analyzes the gray scale value distribution of the survey area M1.
  • It is worth mentioning that the survey area M1 can be defined by those glints G1. The survey area M1 contains the arrangement of the glints G1 and the pupil P1, and can be equal to or slightly larger than the area surrounded by the glints G1.
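Deriving such a survey area from the detected glints can be sketched as a bounding-box computation. This is an illustrative assumption about how "slightly larger than the area surrounded by the glints" might be realized; the margin value and function name are not from the patent.

```python
# Hypothetical sketch: define the survey area M1 as the bounding box of
# the detected glints, enlarged by a margin so it is slightly larger than
# the area surrounded by the glints, as described above.

def survey_area_from_glints(glints, margin=1):
    """Return (top, left, bottom, right) of a box that contains all glints,
    enlarged by `margin` pixels on every side."""
    rows = [r for r, _ in glints]
    cols = [c for _, c in glints]
    return (min(rows) - margin, min(cols) - margin,
            max(rows) + margin, max(cols) + margin)

# Four glints arranged roughly in a rectangle around the pupil.
glints = [(2, 2), (2, 8), (6, 2), (6, 8)]
print(survey_area_from_glints(glints))  # -> (1, 1, 7, 9)
```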
  • In the same way, in order to reduce the possibility of misjudging the pupil P1 position, after the arithmetic unit 130 determines the specific area whose gray scale values are less than the threshold gray scale value, the arithmetic unit 130 determines whether the shape of the specific area matches the shape of the pupil P1 and whether the proportion of the specific area is within the range of the pupil P1.
  • It is worth mentioning that the arithmetic unit 130 can define the shape or range of the survey area M1 through the distribution of the glints G1 so as to reduce the search range. Hence, the position of the pupil P1 can be found quickly.
  • FIG. 3A is a function block diagram of the eye detecting device in accordance with the third embodiment of the present invention. The structure of an eye detecting device 300 in accordance with the third exemplary embodiment is similar to that of the eye detecting device 200 in accordance with the second exemplary embodiment. For example, the eye detecting devices 300 and 200 each include the optical assembly 210 and the image sensor 120. However, there are some differences between the eye detecting devices 300 and 200. The following detailed description explains these differences, and the same features are basically not described again.
  • The eye detecting device 300 in accordance with the third embodiment includes an optical assembly 210, an image sensor 120, an arithmetic unit 230, and a control unit 340. The optical assembly 210 provides a plurality of incident lights L1 to form a plurality of glints G1 located near a pupil P1 of the eye E1. The control unit 340 controls the timing at which the incident lights are emitted into the eye; namely, the control unit 340 can control the optical assembly 210 to provide different incident lights L1 into the eye E1 at different timings. The image sensor 120 captures eye images at different timings, and the eye images include the images of the glint G1a and the glint G1b. Namely, the images of the glint G1a and the glint G1b appear in the eye images captured at different timings. The arithmetic unit 230 analyzes the gray scale values of the eye images captured at different timings and obtains the positions of the glint G1a and the glint G1b according to the gray scale values. Hence, the arithmetic unit 230 determines the position of the pupil P1 of the eye E1 according to the positions of the glint G1a and the glint G1b.
  • Specifically, the image sensor 120 is used to capture eye images at different timings, and the eye images show the images of the glint G1a and the glint G1b. The arithmetic unit 230 analyzes the gray scale values of the eye images captured at different timings. In addition, the arithmetic unit 230 commands the control unit 340 so that the control unit 340 controls the timing at which the optical assembly 210 provides the incident lights L1.
  • FIG. 3B depicts a flow diagram of a method of detecting a pupil in accordance with the third exemplary embodiment of the present disclosure. Please refer to FIG. 3A and FIG. 3B.
  • In step S301, the control unit 340 controls the optical assembly 210 to provide a plurality of incident lights L1 at a first timing. The incident lights L1 enter the iris I1 area near the pupil P1 and are then reflected to form a plurality of first glints G1a. The arrangement of the first glints G1a corresponds to the emission arrangement of the incident lights L1. It is worth noting that, in this embodiment, the optical assembly 210 includes a plurality of light sources without any dispersing component.
  • In step S302, the image sensor 120 captures a first eye image by photographing the eye E1 at the first timing. The first eye image is photographed by the image sensor 120 at the first timing and shows the image of the eye E1 region and the images of the first glints G1a. Then, the image sensor 120 transmits the data of the first eye image to the arithmetic unit 230.
  • In step S303, the control unit 340 controls the optical assembly 210 to provide a plurality of incident lights L1 at a second timing. The incident lights L1 enter the iris I1 area near the pupil P1 and are reflected to form a plurality of second glints G1b. The arrangement of the second glints G1b corresponds to the emission arrangement of the incident lights L1. It is worth noting that the first timing is not equal to the second timing, and the arrangement of the first glints G1a formed at the first timing is not equal to the arrangement of the second glints G1b formed at the second timing. Specifically, some of the light sources 112 provide incident lights L1 at the first timing, while the other light sources 112 provide incident lights L1 at the second timing.
  • For example, the number of light sources 112 is four, and the light sources 112 are arranged approximately in a rectangular array whose aspect ratio is 2:1. The control unit 340 controls the optical assembly 210 to turn on the two light sources 112 arranged at diagonally opposite corners of the rectangular array at the first timing, and then controls the optical assembly 210 to turn on the other two light sources 112 arranged at the other diagonally opposite corners of the rectangular array at the second timing. The present disclosure does not limit the number and arrangement of the light sources 112 provided by the optical assembly 210 at different timings, nor the emission sequence of the light sources 112.
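The alternating diagonal-pair scheme above can be sketched as a small selection function. The corner coordinates and the function name are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: four light sources at the corners of a 2:1
# rectangular array; one diagonal pair is driven at the first timing and
# the other diagonal pair at the second timing, as described above.

LIGHT_POSITIONS = [(0, 0), (0, 4), (2, 0), (2, 4)]  # 2:1 rectangular array

def sources_for_timing(timing):
    """Timing 1 -> one diagonal pair; timing 2 -> the other diagonal pair."""
    if timing == 1:
        return [LIGHT_POSITIONS[0], LIGHT_POSITIONS[3]]  # one diagonal
    return [LIGHT_POSITIONS[1], LIGHT_POSITIONS[2]]      # the other diagonal

print(sources_for_timing(1))  # -> [(0, 0), (2, 4)]
print(sources_for_timing(2))  # -> [(0, 4), (2, 0)]
```

Because the two pairs never overlap spatially, the glints they produce occupy disjoint positions in the two captured images, which is what makes the image subtraction in the later steps informative.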
  • In step S304, the image sensor 120 captures a second eye image by photographing the eye E1 at the second timing. The second eye image is photographed by the image sensor 120 at the second timing and shows the image of the eye E1 region and the images of the second glints G1b. Then, the image sensor 120 transmits the data of the second eye image to the arithmetic unit 230.
  • It is worth noting that the aforementioned first timing is, for example, the timing at which the user starts using the eye detecting device 300, and the second timing is another timing different from the first timing. The first eye image is photographed by the image sensor 120 at the first timing, and the second eye image is photographed by the image sensor 120 at the second timing.
  • In step S305, the arithmetic unit 230 analyzes the gray scale value distributions of the first eye image and the second eye image to obtain the distributions of the first glints G1a and the second glints G1b. Specifically, the arithmetic unit 230 can obtain the arrangement, shape and range of the pixels whose values are close to the maximum gray scale value among all pixels through the gray scale value distributions of the first eye image and the second eye image. Further, the arithmetic unit 230 infers that the arrangements of these pixels correspond to the arrangement of the first glints G1a in the first eye image and the second glints G1b in the second eye image.
  • In step S306, a difference image between the first image and the second image is produced by image subtraction. In this embodiment, the number of light sources 112 is four; the first glints G1a in the first image are produced by the two light sources 112 arranged at diagonally opposite corners of the rectangular array, and the second glints G1b in the second image are produced by the other two light sources 112 arranged at the other diagonally opposite corners of the rectangular array. The difference image is generated by subtracting the second image from the first image, and the difference gray scale values of the difference image range from −255 to 255.
  • Since the arrangement of the first glints G1a and the arrangement of the second glints G1b do not overlap, the gray scale values corresponding to the arrangements of the first glints G1a and the second glints G1b in the difference image are proximate to the extreme values. For example, in the difference image, the gray scale values corresponding to the arrangement of the first glints G1a are proximate to the maximum value, whereas the gray scale values corresponding to the arrangement of the second glints G1b are proximate to the minimum value (a negative gray scale value).
  • Thus, in the difference image, the gray scale values corresponding to the arrangements of the first glints G1a and the second glints G1b show a special pattern. In this embodiment, the special pattern is defined by two brightest spots and two darkest spots. Alternatively, the difference image can be generated by subtracting the first image from the second image; in that case, the gray scale values corresponding to the arrangement of the second glints G1b are proximate to the maximum value, whereas the gray scale values corresponding to the arrangement of the first glints G1a are proximate to the minimum value. The present disclosure is not limited to the examples provided herein.
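The subtraction and the resulting special pattern can be sketched on a toy example. The 3×3 images, the tolerance, and the function names are illustrative assumptions; a real implementation would operate on full-resolution sensor frames.

```python
# Hypothetical sketch of step S306: subtract the second eye image from the
# first. First-timing glints G1a land near +255 in the difference image,
# second-timing glints G1b near -255, forming the special pattern of two
# brightest and two darkest spots described above.

def difference_image(first, second):
    """Pixel-wise signed difference; values range from -255 to 255."""
    return [[a - b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(first, second)]

def classify_glints(diff, tolerance=5):
    """Split extreme pixels into first-timing (near +255) and
    second-timing (near -255) glint positions."""
    first_glints, second_glints = [], []
    for r, row in enumerate(diff):
        for c, value in enumerate(row):
            if value >= 255 - tolerance:
                first_glints.append((r, c))
            elif value <= -255 + tolerance:
                second_glints.append((r, c))
    return first_glints, second_glints

# 3x3 toy images: G1a at two diagonal corners at the first timing,
# G1b at the other two diagonal corners at the second timing.
first  = [[255, 0, 0], [0, 0, 0], [0, 0, 255]]
second = [[0, 0, 255], [0, 0, 0], [255, 0, 0]]
diff = difference_image(first, second)
print(classify_glints(diff))  # -> ([(0, 0), (2, 2)], [(0, 2), (2, 0)])
```

Note how static background pixels cancel to values near zero, so only the alternating glints survive at the extremes, which is why the pattern is robust against misjudging stray bright spots.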
  • In addition, the arrangements of the first glints G1a and the second glints G1b can be further confirmed through the arithmetic unit 230. Specifically, in this process, the arithmetic unit 230 analyzes the arrangement, shape and range of the pixels whose values are close to the maximum (255) and minimum (−255) gray scale values among all pixels to deduce a possible arrangement of the first glints G1a and the second glints G1b. Then, the arithmetic unit 230 determines whether the possible arrangement of the first glints G1a and the second glints G1b corresponds to the above-mentioned special pattern.
  • Since the control unit 340 controls the different incident lights L1 to be emitted to different positions of the eye E1 at different timings, the arrangements of the first glints G1a and the second glints G1b at the different timings can be prearranged. The arrangements of the first glints G1a and the second glints G1b can be further confirmed through the gray scale values and the above-mentioned special pattern after image subtraction. Hence, the possibility of misjudging the positions of the glints G1 can be further reduced.
  • In step S307, the arithmetic unit 230 determines the position of the pupil P1 according to the arrangements of the first glints G1a and the second glints G1b. Specifically, the arithmetic unit 230 first selects an appropriate threshold gray scale value. The gray scale value of the pupil P1 is less than the threshold gray scale value, whereas the gray scale values of the first glints G1a and the second glints G1b in the difference image are greater than the threshold gray scale value. The arithmetic unit 230 confirms the arrangements of the first glints G1a and the second glints G1b through the gray scale values. After confirming the arrangements, the arithmetic unit 230 scans the survey area M1 near the arrangement of the first glints G1a or the second glints G1b, analyzes the gray scale value distribution of the survey area M1, and determines the part of the survey area M1 whose gray scale values are less than the threshold gray scale value.
  • For example, the gray scale values corresponding to the arrangements of the first glints G1a and the second glints G1b in the difference image are proximate to the extreme values: the gray scale values corresponding to the arrangement of the first glints G1a are proximate to the maximum value, whereas the gray scale values corresponding to the arrangement of the second glints G1b are proximate to the minimum value. Thus, the arithmetic unit 230 confirms the arrangement of the first glints G1a through the gray scale values, and then scans the survey area M1 near the arrangement of the first glints G1a to determine the position of the pupil P1.
  • In particular, the survey area M1 can be defined by the first glints G1a and/or the second glints G1b. The range of the survey area M1 contains the arrangement of the first glints G1a and/or the second glints G1b, and can be equal to or slightly larger than the area surrounded by that arrangement. Specifically, the first glints G1a and/or the second glints G1b can be at the boundary of the survey area M1 or inside the survey area M1. The user can set the range of the survey area M1 according to the size of the pupil P1 through the arithmetic unit 230. The present disclosure does not limit the range of the survey area M1.
  • The arithmetic unit 230 selects the specific area, whose gray scale values are less than the threshold gray scale value, from the survey area, and then determines whether the shape and proportion of the specific area match the pupil P1 in the difference image, so as to reduce the possibility of misjudging the pupil P1 position.
  • The arithmetic unit 230 can analyze the gray scale value distribution of the survey area M1 near the arrangement of the first glints G1a and/or the second glints G1b in the difference image so as to find the position of the pupil P1 quickly. Therefore, compared with conventional technology, the arithmetic unit 230 does not need to analyze the gray scale value distribution of the whole first or second eye image to search for the pupil P1.
  • FIG. 4 depicts a flow diagram of a method of identifying an iris in accordance with the third exemplary embodiment of the present disclosure. The method of identifying an iris in accordance with the third exemplary embodiment can be implemented through the eye detecting device 200 (shown in FIG. 2A). Please refer to FIG. 2A and FIG. 4.
  • In step S401, the eye E1 is located at a reference position, which in this embodiment corresponds to a position where the eye gazes straight ahead. The optical assembly 210 provides a plurality of incident lights L1 entering the eye E1. The incident lights L1 are reflected to form a plurality of glints G1 near the pupil P1, and the arrangement of those glints G1 defines a first reference point, a second reference point and a third reference point.
  • Specifically, the incident lights L1 can be provided by the light source 212 and the dispersing component 214, so that the emission positions of the incident lights L1 are the illuminated positions of the dispersing component 214. Alternatively, the incident lights L1 can be provided by at least three light sources 212 without any dispersing component 214, so that the emission positions of the incident lights L1 are the positions where the light sources 212 are placed. The positions at which the incident lights L1 enter the iris I1 area near the pupil P1 can be adjusted by changing the arrangement of the light source 212 or the disposition of the light sources 212 and the dispersing component 214.
  • The first reference point, the second reference point and the third reference point are located near a pupil P1 of the eye E1 as marks for locating the eye E1 at the reference position. The positions of the first reference point, the second reference point and the third reference point correspond to the emission positions of the incident lights L1. In this embodiment, when the user looks straight ahead, namely, when the eye gazes straight ahead, the glint positions corresponding to the emission arrangement of the incident lights L1 are preset to be regarded as the positions of the first reference point, the second reference point and the third reference point. Specifically, a first reference axis is formed between the first reference point and the second reference point. A second reference axis is formed between the second reference point and the third reference point. A reference angle is formed between the first reference axis and the second reference axis. Besides, in order to mark the reference position more clearly, the method of identifying an iris can further include presetting a fourth reference point or more reference points, but is not limited to the examples provided herein.
  • In this embodiment, three emission positions of the incident lights L1 are provided and arranged approximately as a right-angled triangle, and the ratio of the two legs of said right-angled triangle is 2:1. Hence, those glints are located near a pupil P1 of the eye E1 and arranged approximately as a right-angled triangle. Namely, the ratio between the first reference axis and the second reference axis is 2:1, and the reference angle is approximately 90 degrees.
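The reference geometry above (two axes with a 2:1 length ratio meeting at roughly 90 degrees) can be checked from three points with elementary vector math. The coordinates and function name below are illustrative assumptions.

```python
# Hypothetical sketch: compute the first/second reference axes and the
# reference angle from three reference points arranged as a right-angled
# triangle with a 2:1 leg ratio, as described above.
import math

def reference_geometry(p1, p2, p3):
    """Return (length of first axis p1-p2, length of second axis p2-p3,
    angle between the two axes in degrees)."""
    ax, ay = p1[0] - p2[0], p1[1] - p2[1]   # vector along the first axis
    bx, by = p3[0] - p2[0], p3[1] - p2[1]   # vector along the second axis
    len_a = math.hypot(ax, ay)
    len_b = math.hypot(bx, by)
    cos_angle = (ax * bx + ay * by) / (len_a * len_b)
    return len_a, len_b, math.degrees(math.acos(cos_angle))

# Right-angled triangle with leg ratio 2:1 (units are arbitrary pixels).
len_a, len_b, angle = reference_geometry((0, 0), (4, 0), (4, 2))
print(len_a, len_b, angle)  # axis lengths 4.0 and 2.0, angle close to 90
```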
  • In step S402, when the eye E1 moves from the reference position to a measuring position, the incident lights L1 form a first measuring glint, a second measuring glint, and a third measuring glint near the pupil P1 of the eye E1. A first axis is formed between the first measuring glint and the second measuring glint. A second axis is formed between the second measuring glint and the third measuring glint. An angle is formed between the first axis and the second axis.
  • Specifically, the eye E1 is substantially spherical, and the iris I1 is the portion rising slightly above the surface of the sphere. The arrangement of those glints G1 changes as the eye E1 moves relative to the reference position, at which the glints G1 coincide with the first reference point, the second reference point and the third reference point. Namely, when the gaze direction of the eye E1 moves from the front direction to a lateral direction, the arrangement of those glints G1 changes from the first reference point, the second reference point and the third reference point to the first measuring glint, the second measuring glint, and the third measuring glint.
  • In step S403, the image sensor 120 captures an eye image by photographing the eye E1. The eye image photographed by the image sensor 120 shows the image of the eye E1 region and the images of the first measuring glint, the second measuring glint, and the third measuring glint. Then, the image sensor 120 transmits the data of the eye image to the arithmetic unit 130 or 230.
  • In step S404, the arithmetic unit 130 or 230 analyzes the gray scale value distribution of the eye image to obtain the arrangement of the first measuring glint, the second measuring glint, and the third measuring glint. Specifically, the arithmetic unit 130 or 230 can obtain the arrangement, shape and range of the pixels whose values are close to the maximum gray scale value (255) among all pixels through the gray scale value distribution of the eye image. Further, the arithmetic unit 130 or 230 infers that the arrangement of these pixels corresponds to the arrangement of the first measuring glint, the second measuring glint, and the third measuring glint in the image.
  • In step S405, the displacement amounts of the first measuring glint, the second measuring glint, and the third measuring glint relative to the first reference point, the second reference point and the third reference point are calculated respectively. Thereby, a deformation amount of the iris image of the eye at the measuring position relative to the iris image of the eye at the reference position is obtained. Specifically, the arithmetic unit 130 or 230 calculates a first variation, which is the length and angular variation of the first axis relative to the first reference axis, and a second variation, which is the length and angular variation of the second axis relative to the second reference axis. Likewise, a third variation, which is the angular variation of the angle relative to the reference angle, is calculated. Hence, the arithmetic unit 130 or 230 calculates the iris image deformation amount according to the first variation, the second variation, and the third variation. Furthermore, the proportion of the iris image deformation amount can be estimated according to the relative proportion of the first axis to the first reference axis and the relative proportion of the second axis to the second reference axis. Besides, the distance between the image sensor 120 and the eye E1 can be estimated according to the lengths of the first axis and the second axis. Hence, the size of the pupil P1 can be estimated so that the position of the pupil P1 can be found quickly.
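The first and second variations of step S405 can be sketched as length ratios and angle differences between the reference triangle and the measured triangle. This is a simplified sketch under stated assumptions: all point coordinates and names are hypothetical, and a real system would also compute the third variation and fold the results into a deformation model.

```python
# Hypothetical sketch of step S405: compare the measured glints against the
# reference points to obtain the per-axis length/angle variations used to
# estimate the iris-image deformation.
import math

def axis(p, q):
    """Length and orientation (degrees) of the axis between two points."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

def variations(refs, measured):
    """Return the first and second variations: (length ratio, angle change)
    of each measured axis relative to the corresponding reference axis."""
    r1, r2, r3 = refs
    m1, m2, m3 = measured
    ref_len1, ref_ang1 = axis(r1, r2)
    ref_len2, ref_ang2 = axis(r2, r3)
    m_len1, m_ang1 = axis(m1, m2)
    m_len2, m_ang2 = axis(m2, m3)
    return ((m_len1 / ref_len1, m_ang1 - ref_ang1),
            (m_len2 / ref_len2, m_ang2 - ref_ang2))

refs = [(0, 0), (4, 0), (4, 2)]
# Measured glints after the eye turns: the first axis appears foreshortened.
measured = [(0, 0), (3, 0), (3, 2)]
print(variations(refs, measured))  # -> ((0.75, 0.0), (1.0, 0.0))
```

A length ratio below 1 with an unchanged angle, as in this toy case, indicates foreshortening along one axis only, which is the kind of cue the paragraph above uses to estimate how the iris image has deformed.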
  • It is worth noting that when the user looks straight ahead, the shape of the pupil P1 image photographed by the image sensor 120 is similar to a circle. When the measuring position is equal to the reference position, namely, when the user keeps looking straight ahead, the shape of the pupil P1 image photographed by the image sensor 120 remains similar to a circle. When the measuring position is not equal to the reference position, namely, when the gaze direction of the eye E1 moves from the front direction to a lateral direction, the shape of the pupil P1 image photographed by the image sensor 120 is similar to an ellipse.
  • The arithmetic unit 130 or 230 can calculate the major axis and the minor axis of the said ellipse according to the first variation, the second variation, and the third variation. Hence, the boundary of the pupil P1 can be estimated and found quickly.
  • FIG. 5 depicts a flow diagram of a method of identifying an iris in accordance with the fourth exemplary embodiment of the present disclosure. Please refer to FIG. 5 and FIG. 2A. The method of identifying an iris shown in FIG. 5 is similar to the method of identifying an iris shown in FIG. 4. The differences between these methods are disclosed as follows.
  • In step S501, in this embodiment, the optical assembly 210 provides a plurality of incident lights L1 entering the eye E1. The incident lights L1 are reflected to form a plurality of glints G1 near the pupil P1. Specifically, the incident lights L1 can be provided by the light source 212 and the dispersing component 214, so that the emission positions of the incident lights L1 are the illuminated positions of the dispersing component 214. Alternatively, the incident lights L1 can be provided by at least three light sources 212 without any dispersing component 214, so that the emission positions of the incident lights L1 are the positions where the light sources 212 are placed. The positions at which the incident lights L1 enter the iris I1 area near the pupil P1 can be adjusted by changing the arrangement of the light source 212 or the disposition of the light sources 212 and the dispersing component 214.
  • In step S502, the user sets a first reference point, a second reference point and a third reference point as marks for locating the eye E1 at the reference position. The positions of the first reference point, the second reference point and the third reference point correspond to the emission positions of the incident lights L1. In this embodiment, when the user looks straight ahead and there is a reference distance between the optical assembly 210 and the eye E1, the glint positions corresponding to the emission arrangement of the incident lights L1 are preset to be regarded as the positions of the first reference point, the second reference point and the third reference point. However, the reference position is not necessarily the position where the eye E1 gazes straight ahead; the gaze direction of the eye E1 may deviate from the front direction of the eye E1, and the reference position is not limited to the examples provided herein.
  • Specifically, a first reference axis is formed between the first reference point and the second reference point. A second reference axis is formed between the second reference point and the third reference point. A reference angle is formed between the first reference axis and the second reference axis. In addition, in order to mark the reference position clearly, the method of identifying iris can further include presetting a fourth reference point or more reference points, but is not limited to the examples provided herein.
  • In this embodiment, three emission positions of the incident lights L1 are provided and are arranged approximately as a right-angled triangle. The ratio of the two legs of said right-angled triangle is 2:1. Hence, the glints are located near the pupil P1 of the eye E1 and are arranged approximately as a right-angled triangle. Namely, the ratio between the first reference axis and the second reference axis is 2:1, and the reference angle is approximately 90 degrees. It is worth noting that the user presets the glint positions corresponding to the emission arrangement of the incident lights L1 as the positions of the first reference point, the second reference point, and the third reference point while the reference distance exists between the optical assembly 210 and the eye E1.
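  • The reference geometry described above can be sketched in code. The following is a minimal illustration with hypothetical pixel coordinates (the patent does not specify an implementation): given three glint positions, it computes the two axis lengths and the included angle, reproducing the 2:1 ratio and the approximately 90-degree reference angle.

```python
import math

def glint_geometry(p1, p2, p3):
    """Compute the two axis lengths and the included angle (in degrees)
    for three glint positions given as (x, y) pixel coordinates.
    The angle is measured at p2, between the axis p2->p1 and the axis p2->p3."""
    ax1 = math.dist(p1, p2)  # first axis: first glint to second glint
    ax2 = math.dist(p2, p3)  # second axis: second glint to third glint
    # Angle at p2 via the dot product of the two axis vectors.
    v1 = (p1[0] - p2[0], p1[1] - p2[1])
    v2 = (p3[0] - p2[0], p3[1] - p2[1])
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (ax1 * ax2)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return ax1, ax2, angle

# Hypothetical reference arrangement: right-angled triangle with a 2:1 leg ratio.
ax1, ax2, angle = glint_geometry((0, 0), (20, 0), (20, 10))
# ax1 = 20.0, ax2 = 10.0, angle = 90.0
```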
  • Implementing the step S503, when the eye E1 is located at a measuring position, there is a measuring distance between the optical assembly 210 and the eye E1. The incident lights L1 form a first measuring glint, a second measuring glint, and a third measuring glint near the pupil P1 of the eye E1. The positions of the first measuring glint, the second measuring glint, and the third measuring glint correspond to the positions of the first reference point, the second reference point, and the third reference point. A first axis is formed between the first measuring glint and the second measuring glint. A second axis is formed between the second measuring glint and the third measuring glint. An angle is formed between the first axis and the second axis.
  • Specifically, since different users have different face shapes and nose heights, the distance between the optical assembly 210 and the eye E1 differs when different users wear the eye detecting device 200 or 300. Hence, the glint G1 positions formed by emitting the incident lights L1 into the eye E1 can change to the first, second, and third measuring glints. That is, when the eye gaze direction remains unchanged, the glint G1 positions formed by emitting the incident lights L1 into the eye E1 change proportionally from the aforementioned first, second, and third reference points to the first, second, and third measuring glints. The angle is equal to the reference angle.
  • In this embodiment, since the ratio between the first and second reference axes is 2:1, the ratio between the first and second axes is also 2:1. Since the reference angle is approximately 90 degrees, the angle is also approximately 90 degrees.
  • Implementing the step S504, the image sensor 120 captures an eye image by photographing the eye E1. The eye image photographed by the image sensor 120 shows the image of the eye E1 region, the images of said first, second, and third measuring glints, and an iris I1 image. Then, the image sensor 120 transmits the data of the eye image to the arithmetic unit 130 or 230.
  • Implementing the step S505, the arithmetic unit 130 or 230 analyzes a gray scale value distribution of the eye image to obtain the arrangement of the first, second, and third measuring glints. Specifically, through the gray scale value distribution of the eye image, the arithmetic unit 130 or 230 can obtain the arrangement, shape, and range of the pixels whose gray scale values are close to the maximum gray scale value (255). Further, the arithmetic unit 130 or 230 infers that the arrangement of these pixels corresponds to the arrangement of the first, second, and third measuring glints in the image.
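  • One way to realize this gray-scale analysis is to threshold the image near the maximum value and group the bright pixels into connected regions, one per glint. The following sketch (an assumption; the patent does not name a specific algorithm, and the threshold value is illustrative) finds the centroid of each bright blob with a simple flood fill:

```python
import numpy as np

def find_glints(eye_image, threshold=250):
    """Locate glint candidates in a 2-D uint8 gray scale image.
    Returns a list of (row, col) centroids of connected bright regions,
    i.e. groups of pixels whose gray scale values are close to 255."""
    bright = eye_image >= threshold
    visited = np.zeros_like(bright, dtype=bool)
    centroids = []
    rows, cols = bright.shape
    for r in range(rows):
        for c in range(cols):
            if bright[r, c] and not visited[r, c]:
                # Flood-fill one connected bright region (4-connectivity).
                stack, pixels = [(r, c)], []
                visited[r, c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and bright[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids
```

The centroids returned here would then feed the axis and angle computation of the following steps.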
  • Implementing the step S506, the variation of the distance between the first measuring glint and the second measuring glint with respect to the distance between the first reference point and the second reference point is calculated. The variation of the distance between the second measuring glint and the third measuring glint with respect to the distance between the second reference point and the third reference point is calculated. Accordingly, the resolution variation of an iris image when the eye is located at the measuring position is obtained. Specifically, the arithmetic unit 130 or 230 calculates a first variation, which is a length variation of the first axis relative to the first reference axis. The arithmetic unit 130 or 230 calculates a second variation, which is a length variation of the second axis relative to the second reference axis. Hence, the arithmetic unit 130 or 230 calculates the resolution variation of an iris image according to the first and second variations.
  • For example, the first reference axis spans 20 pixels, whereas the second reference axis spans 10 pixels; the pixel ratio between the first and second reference axes is 2:1. The arithmetic unit 130 or 230 calculates that the first axis spans 10 pixels and the second axis spans 5 pixels. Hence, the arithmetic unit 130 or 230 determines that the first axis shrinks by a factor of two compared to the first reference axis, and the second axis shrinks by a factor of two compared to the second reference axis. Namely, the first and second variations are both two. Hence, the boundary of the pupil P1 can be estimated so that the boundary of the pupil P1 can be found quickly.
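  • The arithmetic of this worked example can be written out as follows. Each per-axis variation comes directly from the pixel counts above; combining the two variations into a single resolution figure by averaging is an assumption, since the patent does not specify a combination rule.

```python
def axis_variation(ref_len, measured_len):
    """Length variation of a measuring axis relative to its reference axis,
    expressed as the shrink factor (2.0 means half the reference length)."""
    return ref_len / measured_len

# Pixel counts from the example: reference axes of 20 and 10 pixels,
# measuring axes of 10 and 5 pixels.
first_variation = axis_variation(20, 10)   # 2.0
second_variation = axis_variation(10, 5)   # 2.0

# Both axes shrink by the same factor, so the iris image at the measuring
# position has half the reference resolution (averaging is an assumption).
resolution_variation = (first_variation + second_variation) / 2
```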
  • In summary, the present disclosure provides an eye detecting device and methods of detecting a pupil and identifying an iris. The eye detecting device includes an optical assembly, an image sensor, and an arithmetic unit. The arithmetic unit can analyze the gray scale value distribution of the survey area near the arrangement of the glints in the first eye image so as to reduce the searching scope for the pupil. Hence, the position of the pupil can be found quickly. Therefore, compared with conventional technology, the arithmetic unit does not need to analyze the gray scale value distribution of the whole first eye image to search for the pupil.
  • The present disclosure also provides an eye detecting device and methods of detecting a pupil. The eye detecting device includes an optical assembly, an image sensor, an arithmetic unit, and a control unit. Since the control unit controls the different incident lights to emit into different positions of the eye at different timings, the arrangement of the first and second glints at the different timings can be determined. The arrangement of the first and second glints can be further confirmed through the gray scale values and the special pattern obtained after image subtraction. Hence, the possibility of misjudging the glint positions can be further reduced. The arithmetic unit can analyze the gray scale value distribution of the survey area near the arrangement of the first glints and/or second glints in the difference image so as to find the position of the pupil quickly. Therefore, compared with conventional technology, the arithmetic unit does not need to analyze the gray scale value distribution of the whole first or second eye image to search for the pupil.
  • The arithmetic unit can calculate the major axis and minor axis of said ellipse according to the first variation, the second variation, and the third variation. Hence, the boundary of the pupil P1 can be estimated so that the boundary of the pupil P1 can be found quickly.
  • The present disclosure provides methods of identifying iris. The arithmetic unit can calculate the major axis and minor axis of said ellipse according to the first, second, and third variations. Hence, the boundary of the pupil can be estimated so that the boundary of the pupil can be found quickly.
  • The present disclosure provides methods of identifying iris. The arithmetic unit can calculate the first, second, and third variations. Hence, the boundary of the pupil can be estimated so that the boundary of the pupil can be found quickly.
  • The above-mentioned descriptions represent merely the exemplary embodiments of the present disclosure, without any intention to limit the scope of the present disclosure thereto. Various equivalent changes, alterations, or modifications based on the claims of the present disclosure are all consequently viewed as being embraced by the scope of the present disclosure.

Claims (10)

What is claimed is:
1. A method of identifying iris comprising:
providing a plurality of incident lights entering an eye, the eye locating at a reference position;
setting a first reference point, a second reference point and a third reference point as a mark for locating the eye at the reference position, wherein the positions of the first reference point, the second reference point and the third reference point are corresponding to the emission position of the incident lights;
forming a first measuring glint, a second measuring glint, and a third measuring glint near a pupil of the eye by the incident lights after the eye moves from the reference position to a measuring position, and a plurality of positions of the first measuring glint, the second measuring glint, and the third measuring glint are corresponding to those positions of the first reference point, the second reference point and the third reference point;
capturing an eye image of the eye including a first measuring glint image, a second measuring glint image, a third measuring glint image, and an iris image;
analyzing a gray scale value of the eye image to obtain the positions of the first measuring glint, the second measuring glint, and the third measuring glint; and
calculating a variation of the distance between the first measuring glint and the second measuring glint with respect to the distance between the first reference point and the second reference point, and calculating a variation of the distance between the second measuring glint and the third measuring glint with respect to the distance between the second reference point and the third reference point, so as to obtain a resolution variation of the iris image when the eye is located at the measuring position.
2. The method of identifying iris according to claim 1, wherein a first reference axis is formed between the first reference point and the second reference point, and a second reference axis is formed between the second reference point and the third reference point.
3. The method of identifying iris according to claim 2, wherein a first axis is formed between the first glint and the second glint, and a second axis is formed between the second glint and the third glint.
4. The method of identifying iris according to claim 3, wherein the step of obtaining the resolution variation of the iris image comprising:
calculating a first variation, wherein the first variation is a length variation of the first axis relatives to the first reference axis;
calculating a second variation, wherein the second variation is a length variation of the second axis relatives to the second reference axis; and
calculating the resolution variation of the iris image according to the first variation, and the second variation.
5. The method of identifying iris according to claim 4, wherein a reference angle is formed between the first reference axis and the second reference axis and an angle is formed between the first axis and the second axis, the step of obtaining the resolution variation of the iris image further comprising:
calculating a third variation, wherein the third variation is an angular variation of the angle relatives to the reference angle is calculated.
6. The method of identifying iris according to claim 1, wherein the reference position is corresponding to a position where the eye gazes straight ahead.
7. The method of identifying iris according to claim 1, wherein the incident lights are infrared lights.
8. The method of identifying iris according to claim 1, wherein the incident lights are provided by an optical assembly.
9. The method of identifying iris according to claim 1, wherein the eye image is captured by an image sensor.
10. The method of identifying iris according to claim 1, wherein the incident lights are provided by at least one light source and at least one dispersing component, the light source providing lights, lights passing through the dispersing component for forming the incident lights.
US15/588,473 2013-11-14 2017-05-05 Method of identifying iris Abandoned US20170238800A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/588,473 US20170238800A1 (en) 2013-11-14 2017-05-05 Method of identifying iris
US16/679,421 US20200093368A1 (en) 2013-11-14 2019-11-11 Method of identifying iris
US18/319,436 US20230293007A1 (en) 2013-11-14 2023-05-17 Method of identifying iris

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
TW102141440 2013-11-14
TW102141440A TWI533224B (en) 2013-11-14 2013-11-14 Eye detecting device and methodes of detecting pupil and identifying iris
US14/478,517 US20150131051A1 (en) 2013-11-14 2014-09-05 Eye detecting device and methods of detecting pupil
US15/588,473 US20170238800A1 (en) 2013-11-14 2017-05-05 Method of identifying iris

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/478,517 Continuation US20150131051A1 (en) 2013-11-14 2014-09-05 Eye detecting device and methods of detecting pupil

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/679,421 Continuation US20200093368A1 (en) 2013-11-14 2019-11-11 Method of identifying iris

Publications (1)

Publication Number Publication Date
US20170238800A1 true US20170238800A1 (en) 2017-08-24

Family

ID=53043552

Family Applications (4)

Application Number Title Priority Date Filing Date
US14/478,517 Abandoned US20150131051A1 (en) 2013-11-14 2014-09-05 Eye detecting device and methods of detecting pupil
US15/588,473 Abandoned US20170238800A1 (en) 2013-11-14 2017-05-05 Method of identifying iris
US16/679,421 Abandoned US20200093368A1 (en) 2013-11-14 2019-11-11 Method of identifying iris
US18/319,436 Pending US20230293007A1 (en) 2013-11-14 2023-05-17 Method of identifying iris

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/478,517 Abandoned US20150131051A1 (en) 2013-11-14 2014-09-05 Eye detecting device and methods of detecting pupil

Family Applications After (2)

Application Number Title Priority Date Filing Date
US16/679,421 Abandoned US20200093368A1 (en) 2013-11-14 2019-11-11 Method of identifying iris
US18/319,436 Pending US20230293007A1 (en) 2013-11-14 2023-05-17 Method of identifying iris

Country Status (2)

Country Link
US (4) US20150131051A1 (en)
TW (1) TWI533224B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10572008B2 (en) * 2014-02-21 2020-02-25 Tobii Ab Apparatus and method for robust eye/gaze tracking

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2511868B (en) * 2013-03-15 2020-07-15 Tobii Ab Eye/gaze tracker and method of tracking the position of an eye and/or a gaze point of a subject
CN104933419B (en) * 2015-06-30 2019-05-21 小米科技有限责任公司 The method, apparatus and red film for obtaining iris image identify equipment
FR3041230B1 (en) 2015-09-18 2022-04-15 Suricog METHOD FOR DETERMINING ANATOMICAL PARAMETERS
CN105528577B (en) * 2015-12-04 2019-02-12 深圳大学 Recognition methods based on intelligent glasses
CN105929963B (en) * 2016-05-11 2019-04-30 北京蚁视科技有限公司 It is a kind of for tracking the method and detection device of eyeball position
JP2017211891A (en) * 2016-05-27 2017-11-30 ソニー株式会社 Information processing device, information processing method, and recording medium
JP6751324B2 (en) * 2016-09-14 2020-09-02 株式会社デンソーアイティーラボラトリ Iris detection device, iris detection method, and program
US10303248B2 (en) 2017-04-28 2019-05-28 Microsoft Technology Licensing, Llc Eye tracking using scanned beam and multiple detectors
US10489648B2 (en) * 2017-08-04 2019-11-26 Facebook Technologies, Llc Eye tracking using time multiplexing
CN110850594B (en) * 2018-08-20 2022-05-17 余姚舜宇智能光学技术有限公司 Head-mounted visual equipment and eyeball tracking system for same
WO2020194529A1 (en) * 2019-03-26 2020-10-01 日本電気株式会社 Interest determination device, interest determination system, interest determination method, and non-transitory computer-readable medium having program stored therein
TWI754806B (en) * 2019-04-09 2022-02-11 栗永徽 System and method for locating iris using deep learning
CN110929570B (en) * 2019-10-17 2024-03-29 珠海虹迈智能科技有限公司 Iris rapid positioning device and positioning method thereof
CN112949370A (en) * 2019-12-10 2021-06-11 托比股份公司 Eye event detection
CN111781722A (en) * 2020-07-01 2020-10-16 业成科技(成都)有限公司 Eyeball tracking structure, electronic device and intelligent glasses
CN114136209B (en) * 2021-11-24 2023-11-24 京东方科技集团股份有限公司 Eyeball position positioning circuit, eyeball position positioning method, substrate and virtual reality wearable device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110109880A1 (en) * 2006-01-26 2011-05-12 Ville Nummela Eye Tracker Device
US20130100025A1 (en) * 2011-10-21 2013-04-25 Matthew T. Vernacchia Systems and methods for obtaining user command from gaze direction
US8913789B1 (en) * 2012-01-06 2014-12-16 Google Inc. Input methods and systems for eye positioning using plural glints


Also Published As

Publication number Publication date
US20200093368A1 (en) 2020-03-26
US20230293007A1 (en) 2023-09-21
US20150131051A1 (en) 2015-05-14
TWI533224B (en) 2016-05-11
TW201519103A (en) 2015-05-16

Similar Documents

Publication Publication Date Title
US20230293007A1 (en) Method of identifying iris
US11308711B2 (en) Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9285893B2 (en) Object detection and tracking with variable-field illumination devices
TWI565323B (en) Imaging device for distinguishing foreground and operating method thereof, and image sensor
US20230195220A1 (en) Eye tracking system with off-axis light sources
US11375133B2 (en) Automatic exposure module for an image acquisition system
US11315483B2 (en) Systems, devices, and methods for an infrared emitting display
US10694110B2 (en) Image processing device, method
US10485420B2 (en) Eye gaze tracking
US10089731B2 (en) Image processing device to reduce an influence of reflected light for capturing and processing images
CN104657702B (en) Eyeball arrangement for detecting, pupil method for detecting and iris discrimination method
KR101961266B1 (en) Gaze Tracking Apparatus and Method
EP4071578A1 (en) Light source control method for vision machine, and vision machine
JP2020515100A (en) Vignetting compensation
JP7318793B2 (en) Biometric authentication device, biometric authentication method, and its program
US20230092593A1 (en) Detection device detecting gaze point of user, control method therefor, and storage medium storing control program therefor
US20210118171A1 (en) Object characteristic locating device and laser and imaging integration system
KR101419676B1 (en) Object recognition apparatus

Legal Events

Date Code Title Description
AS Assignment

Owner name: PIXART IMAGING INC., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HUANG, YU-HAO;REEL/FRAME:042260/0700

Effective date: 20140903

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION