TW201533609A - Method for pupil localization based on a corresponding position of auxiliary light, system and computer product thereof - Google Patents

Method for pupil localization based on a corresponding position of auxiliary light, system and computer product thereof

Info

Publication number
TW201533609A
TW201533609A TW103105624A
Authority
TW
Taiwan
Prior art keywords
pupil
unit
output unit
eye
light sources
Prior art date
Application number
TW103105624A
Other languages
Chinese (zh)
Inventor
Chia-Chun Tsou
Po-Tsung Lin
Yun-Yang Lai
Original Assignee
Utechzone Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Utechzone Co Ltd filed Critical Utechzone Co Ltd
Priority to TW103105624A priority Critical patent/TW201533609A/en
Publication of TW201533609A publication Critical patent/TW201533609A/en

Links

Landscapes

  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A method for pupil localization based on the corresponding positions of auxiliary lights, and a system and computer program product thereof, are provided. The method comprises: irradiating an eyeball with a plurality of light sources to form a plurality of reflective points on the eyeball; defining a plurality of distance values from the center of the pupil to each of the reflective points; defining a plurality of angles between a baseline through the pupil center and each of the reflective-point vectors, wherein each reflective-point vector runs from the center of the pupil to one of the reflective points; and transforming the resulting data into a first position on an output unit.

Description

Pupil positioning method based on the relative positions of auxiliary lights, system and computer program product thereof

The present invention relates to a pupil positioning method, a system thereof and a computer program product thereof, and more particularly to a pupil positioning method based on the relative positions of auxiliary lights, together with its system and its computer program product.

Eye tracking technology is commonly used to control computers. It can be developed into an assistive device that lets people with amyotrophic lateral sclerosis or other physical disabilities communicate with the outside world through a computer, or into a tool for psychological research. Eye tracking is also widely applied in many other fields, such as neuroscience, psychology, industrial engineering, human factors engineering, marketing and advertising, and computer science.

The technique tracks the movement of the eyeball to obtain the eye position coordinates or the movement trajectory, and generates predefined control commands for a computer accordingly. Such a technique must therefore first detect eye movement precisely, and must then accurately convert it into the data the computer needs to generate control commands, for example mapping the eye position onto the cursor position on the computer display; otherwise incorrect control commands will be issued.

Current eye tracking technology is divided into contact and non-contact methods according to whether it touches the human eye. Contact eye tracking can be divided into the search-coil method and the electro-oculogram method. Non-contact eye tracking is mainly vision based and can be divided into head-mounted and free-head approaches.

In contact eye tracking, the search-coil method has the user wear a soft contact lens with an embedded induction coil. When the user rotates the eyeball and thereby moves the lens, the induction coil generates an induced electromotive force due to the change in magnetic flux, and the magnitude of this electromotive force represents the deflection angle of the eyeball. The drawbacks of this method are that it is easily affected by the condition of the user's eye, such as eye secretions, and that the soft lens has a two-layer structure that impairs the user's vision. In the electro-oculogram (EOG) method, several electrodes are attached around the eye and the voltage differences produced by eye rotation are used to determine the up, down, left and right angles. Its drawbacks are that the skin resistance where the electrodes are attached is easily destabilized by skin secretions, making the acquired electrical signal unstable, and that only large eye turns can be recorded, not small angular changes.

In head-mounted eye tracking, the user must wear glasses fitted with a small camera. Because the relative distance between the eye and the camera is fixed, the estimate is not corrupted by face offsets or changes in the distance to the eye; however, the glasses must be fixed to the head during use so that the relative position of the small camera and the eye stays constant, which is neither convenient nor comfortable for the user.

For free-head eye tracking, eye trackers that combine a screen with two CCD cameras exist abroad, and well-known domestic work includes the related research of Lin Chen-Sheng and others. However, the known free-head techniques rely on relatively complex computation and must overcome the errors caused by movement of the user's head. Moreover, although an eye tracker with two CCD cameras can position the pointer precisely, it is very expensive and requires two CCD cameras. There is also an approach in which a single light source illuminates the user's eyeball to form a light spot; from the relative position of the spot and the pupil, compared against a pre-built database, the gaze point of the user's pupil on the screen can be computed.

Regardless of which eye tracking method is used, every user's eyes are different, so calibration is generally required the first time an eye tracker is used. A common calibration procedure asks the user to look at the four corners of the screen or at its top, bottom, left and right edges; the detected eye coordinates are taken as calibration points, treated as correct and mapped onto the screen, so that the displacement ratio between the pupil center and the gaze point can be obtained precisely. This calibration procedure, however, can be inconvenient for the user.

To meet the above needs, the present disclosure provides the following embodiments.

One aspect of the present invention provides a pupil positioning method based on the relative positions of a plurality of light sources, for locating the position on an output unit at which a user's eyeball is gazing, comprising: (a) directing light from a plurality of light sources onto the eyeball so as to form a plurality of reflective points on the eyeball; (b) using a camera unit to obtain an eye image that includes the pupil of the eyeball and the plurality of reflective points; (c) locating, by means of a computing unit, the pupil center point and the positions of the plurality of reflective points in the eye image; (d) defining, from the distances between the pupil center point and the reflective points, a first group of a plurality of spot vectors; and (e) converting the first group into a first position on the output unit.

Further, the plurality of light sources used in step (a) are two light sources disposed at diagonal corners of the output unit; the two light sources direct light onto the eyeball so as to form two corresponding reflective points.

Further, the plurality of light sources used in step (a) are four light sources disposed at the four corners of the output unit or at the midpoints of its four sides, so that the four light sources direct light onto the eyeball to form four corresponding reflective points.

Further, in step (b) the camera unit locates the two nostril center points by searching a facial image, computes the spacing between the two nostril center points and determines a starting-point coordinate, computes a reference-point coordinate from the spacing and the starting-point coordinate, defines a rectangular frame from the reference-point coordinate, and extracts the eye image from the facial image along the rectangular frame.

Further, step (d) includes taking the horizontal or vertical axis of the pupil as a baseline segment and, together with each spot vector, defining an angle value.

Further, in step (e) the computing unit is connected to a storage unit, and the storage unit contains a number of angle-distance distribution maps equal to the number of light sources, so that the first group can be fed into the angle-distance distribution maps and converted into the first position.

Further, the plurality of angle-distance distribution maps are produced by a training module in the computing unit, which divides the output unit in advance into several groups of regions and displays only one group of regions at a time; each time a group of regions is displayed for the user to view, the training module controls the image analysis module to perform localization and controls the vector processing module to obtain the distance-angle data of that group of regions.

Further, step (c) is performed by an image analysis module in the computing unit, step (d) is performed by a vector processing module in the computing unit, and step (e) is performed by a coordinate conversion module in the computing unit.

Another aspect of the present invention provides a pupil positioning system based on the relative positions of a plurality of light sources, comprising: a plurality of light sources that emit beams onto a user's eye to form a corresponding plurality of reflective points; a camera unit that captures an image of the user's eye; and a computing unit that locates a pupil center point and the positions of the plurality of reflective points in the eye image, wherein the computing unit defines a plurality of spot vectors from the distances between the pupil center point and the reflective points and converts the plurality of spot vectors into a first position on an output unit.

Further, the computing unit takes the horizontal or vertical axis through the pupil center point located by the image analysis module as a baseline segment and, together with the plurality of spot vectors, defines a plurality of angle values.

Further, the pupil positioning system further comprises a storage unit connected to the computing unit, the storage unit containing a number of angle-distance distribution maps equal to the number of light sources.

Further, the pupil positioning system further comprises an output unit adjacent to the camera unit, the output unit displaying a pointer corresponding to the pupil's gaze position.

Further, the plurality of light sources are disposed at diagonal corners of the output unit to direct light onto the eye so as to form two corresponding reflective points.

Further, the plurality of light sources are four light sources disposed at the four corners of the output unit or at the midpoints of its four sides, to direct light onto the eye so as to form four corresponding reflective points.

A further aspect of the present invention provides a non-transitory computer program product for pupil positioning; when the computer program product is loaded into a computer and executed, the pupil positioning method based on the relative positions of a plurality of light sources described above can be carried out.

In summary, the pupil positioning method based on the relative positions of auxiliary lights, its system and its computer program product can use the relative positional relationship between the pupil and the plurality of reflective points to compute the position of the pupil. Embodiments of the invention can therefore detect the pupil position precisely, achieve accurate eye tracking, and support a wide range of applications. Moreover, in several embodiments, by using the database, calibration can be made unnecessary.

5‧‧‧coordinate point
6‧‧‧original image
10‧‧‧pupil positioning system
40‧‧‧user
41‧‧‧pupil center point
42a‧‧‧reflective point
42b‧‧‧reflective point
42c‧‧‧reflective point
42d‧‧‧reflective point
42e‧‧‧reflective point
42f‧‧‧reflective point
43a‧‧‧spot vector
43b‧‧‧spot vector
43c‧‧‧spot vector
43d‧‧‧spot vector
43e‧‧‧spot vector
43f‧‧‧spot vector
61‧‧‧facial image
62‧‧‧nostril center point
100‧‧‧plurality of illumination units
200‧‧‧computing unit
202‧‧‧training module
204‧‧‧image analysis module
206‧‧‧vector processing module
208‧‧‧coordinate conversion module
210‧‧‧eye search module
212‧‧‧eye state determination module
300‧‧‧camera unit
400‧‧‧storage unit
401‧‧‧eye image
420‧‧‧eyeball
450‧‧‧white of the eye
500‧‧‧output unit
501‧‧‧pointer
a~g‧‧‧steps
D‧‧‧spacing

FIG. 1 is a block diagram of an embodiment of the pupil positioning system of the present invention based on the relative positions of two auxiliary lights.

FIG. 2 is a schematic view of the embodiment of FIG. 1.

FIG. 3 is a flowchart of how the computing unit obtains the eye image from the image captured by the camera unit.

FIG. 4 is a schematic view of an image captured by the camera unit in the embodiment of the pupil positioning system based on the relative positions of two auxiliary lights.

FIG. 5a is a schematic view of the eye image extracted from the captured image, and FIG. 5b is a schematic view of the vector module computing the eye parameters.

FIG. 6 is a schematic view of the coordinates of an angle-distance distribution map in an embodiment of the present invention.

FIG. 7 is a schematic view of an embodiment of a pupil positioning system based on the relative positions of four auxiliary lights.

FIG. 8a is a schematic view of the eye image extracted in the embodiment of the pupil positioning system based on the relative positions of four auxiliary lights, and FIG. 8b is a schematic view of the vector module computing the eye parameters.

The foregoing and other technical content, features and effects of the present invention will become clear from the following detailed description of preferred embodiments with reference to the drawings. Before the present invention is described in detail, it should be noted that in the following description similar elements are denoted by the same reference numerals.

Please refer to FIG. 1 and FIG. 2 together. FIG. 1 is a block diagram showing a pupil positioning system 10 based on the relative positions of a plurality of light sources according to an embodiment of the present invention, which can be used to convert a user's line of sight into a pointer position on a screen; FIG. 2 is a schematic view of the embodiment of FIG. 1. As shown in FIG. 1 and FIG. 2, the pupil positioning system 10 may comprise a plurality of illumination units 100, a computing unit 200, a camera unit 300, a storage unit 400 and an output unit 500. The computing unit 200 comprises a training module 202, an image analysis module 204, a vector processing module 206 and a coordinate conversion module 208, and may further comprise an eye search module 210 and an eye state determination module 212.

Each of the plurality of illumination units 100 may be an infrared light source, a visible light source, or any source whose light can be projected onto the user's eyeball so that the eyeball reflects it and a reflective point appears on the eyeball. Preferably, the illumination unit 100 is an infrared light source, so as to avoid discomfort during operation. In this document, embodiments with a plurality of illumination units 100 are represented by the specific examples of two illumination units 100 and four illumination units 100, described in detail below.

In this embodiment, the computing unit 200 and the storage unit 400 may together constitute a computer or processor, for example a personal computer, a workstation, a mainframe computer or another type of computer or processor; the type is not limited here. In this embodiment, the computing unit 200 is coupled to the storage unit 400.

The camera unit 300 captures the user's face to produce multiple consecutive images. It may be any camera with a charge-coupled device (CCD) lens, a complementary metal-oxide-semiconductor (CMOS) lens or an infrared lens, or an image capture device that can obtain depth information, such as a depth camera or a stereo camera. The camera unit 300 may have a lens whose direction and angle are rotatably adjustable, so that the lens can be tilted to look up at the user's face, for example at an elevation angle of 30 degrees toward the face. In this way the nostrils are clearly visible in every image captured by the camera unit 300, which greatly improves the nostril detectability of every facial image and helps the execution of the nostril search routine described later. In addition, the camera unit 300 may further include an illumination element, and its built-in detector supplements the lighting when it determines that the light is insufficient, so as to ensure the sharpness of the captured facial images. In other embodiments, the camera unit 300 transmits signals to the computing unit 200 and the storage unit 400 through a physical connection such as a Universal Serial Bus (USB), through a wired network, or through a wireless interface such as Bluetooth or Wireless Fidelity (WiFi). Embodiments of the present invention do not limit the type of the camera unit 300.

The storage unit 400 may be any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory or similar element, or a combination of the above. The storage unit 400 may also be composed of one or more accessible non-volatile memory components; specifically, it may be a hard disk, a memory card, an integrated circuit or firmware. In one or more embodiments, the storage unit 400 may be used to record the images including the pupil and the feature points obtained by the camera unit 300, together with statistical information.

The output unit 500 may be a display, a screen, a speaker, or any device that converts commands into a form of information that a person can normally perceive. In a preferred embodiment, the output unit 500 is a screen, so that the pointer on the display can be matched to the user's pupil gaze position.

Pupil positioning system based on the relative positions of two auxiliary lights

Please continue to refer to FIG. 2. As can be seen in FIG. 2, the illumination units 100 are disposed at diagonal corners of the output unit 500. When the eyes of the user 40 gaze at the output unit 500, the two illumination units 100 at its diagonal corners emit light onto the user's eyes, forming two reflective points on the eyeball, and the camera unit 300 captures the user's face to produce multiple consecutive images and obtain an eye image containing the pupil of the eyeball and the two reflective points.

The way in which the camera unit 300 is coupled to the computing unit 200 to obtain the eye image is further explained below. Refer also to FIG. 3, a flowchart of how the computing unit 200 obtains the eye image from the image captured by the camera unit 300, and FIG. 4, a schematic view of the image captured by the camera unit 300. When the computing unit 200 activates the eye search module 210, the eye search module 210 performs the following steps a to g.

Step a: receive the original image 6. Step b: extract the facial image 61 from the original image 6. Step c: find the two nostril center points 62 in the facial image 61. Step d: compute the spacing D between the two nostril center points and determine the starting-point coordinate A(x1, y1). Step e: compute the reference-point coordinate B(x2, y2) from the spacing D and the starting-point coordinate A(x1, y1). Step f: define a rectangular frame R1 from the reference-point coordinate B(x2, y2). Step g: extract the eye image 401 from the facial image along the rectangular frame R1, where x2 = x1 + k1 × D, y2 = y1 + k2 × D, k1 = 1.6~1.8 and k2 = 1.6~1.8. The resulting eye image 401 is shown in FIG. 5a.
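
As a rough illustration of steps d to g, the Python sketch below (not part of the patent) turns two nostril center points into an eye-region rectangle. Only the offset formula with k1 and k2 comes from the description; which nostril serves as starting point A and the rectangle dimensions are assumptions made for illustration.

```python
import numpy as np

def eye_region_from_nostrils(nostril_a, nostril_b, k1=1.7, k2=1.7,
                             box_w_factor=2.0, box_h_factor=1.0):
    """Derive the eye-region rectangle R1 from the two nostril centre points.
    Only x2 = x1 + k1*D and y2 = y1 + k2*D (k1, k2 in 1.6-1.8) come from the
    description; the choice of starting point A and the rectangle size
    (box_*_factor, in multiples of D) are assumptions.
    """
    nostril_a = np.asarray(nostril_a, dtype=float)
    nostril_b = np.asarray(nostril_b, dtype=float)

    # step d: spacing D and starting-point coordinate A(x1, y1)
    d = float(np.linalg.norm(nostril_a - nostril_b))
    x1, y1 = nostril_a                    # assumed: one nostril centre is A

    # step e: reference-point coordinate B(x2, y2)
    x2, y2 = x1 + k1 * d, y1 + k2 * d

    # step f: rectangle R1 anchored at B; its dimensions are not given
    return x2, y2, box_w_factor * d, box_h_factor * d   # (x, y, width, height)

# step g (usage): crop the eye image along R1
# x, y, w, h = eye_region_from_nostrils((210.0, 340.0), (260.0, 342.0))
# eye_image = face_image[int(y):int(y + h), int(x):int(x + w)]
```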

Further, the eye state determination module 212 in the computing unit 200 may perform a step of determining the eye state, in which a certain part of the eye, for example the upper or lower eyelid or the pupil, is searched for in the eye image 401, and eye-state data are generated from the detection result of that part, for example "0" for an open eye and "1" for a closed eye. In the present invention, if the eye-state data are generated from the degree of curvature of the upper eyelid, one can exploit the fact that in the open-eye state the upper eyelid forms an arc extending in the horizontal direction, whereas in the closed-eye state it is roughly a straight horizontal line. The focal length of a parabola can therefore be computed from the curve obtained along the upper eyelid; different focal lengths correspond to different degrees of curvature of the upper eyelid, and the eye state is obtained accordingly.
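
A minimal sketch of how such an eye-state check might look, assuming the upper-eyelid points have already been extracted and NumPy is available; the least-squares fit, the focal-length formula 1/(4|a|) and the threshold value are assumptions, since the description only states the focal-length idea.

```python
import numpy as np

def eye_state_from_upper_eyelid(eyelid_points, focal_threshold=200.0):
    """Fit a parabola y = a*x^2 + b*x + c along the upper-eyelid points and
    use its focal length 1/(4|a|) as the curvature measure ('0' = open,
    '1' = closed). Fit, focal-length formula and threshold are assumptions.
    """
    pts = np.asarray(eyelid_points, dtype=float)
    a, _b, _c = np.polyfit(pts[:, 0], pts[:, 1], 2)   # quadratic fit

    if abs(a) < 1e-9:                  # degenerate case: a straight eyelid
        return 1                       # closed
    focal_length = 1.0 / (4.0 * abs(a))
    return 0 if focal_length < focal_threshold else 1  # small focal length = arc
```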

Next, after the eye image 401 has been obtained and its state optionally determined, the relative positional relationship between the pupil center point 41, located at the center of the eyeball 420 (here the iris of the human eye, i.e. the black, blue, brown or other non-white region as distinct from the white of the eye 450; referred to below as the eyeball 420), and the two reflective points 42a and 42b can be used, by means of vector processing and coordinate conversion, to compute the position of the pupil.

In detail, the image analysis module 204 in the computing unit 200 first binarizes (thresholds) the eye image 401, setting the parts of the image whose gray level is above a preset gray value to black and the parts whose gray level is below the preset gray value to white, so that the pupil center point 41 can be clearly located within the eyeball 420; likewise, by adjusting the preset gray value, the two reflective points 42a and 42b can be found.
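
The double-threshold idea could be sketched as follows, assuming NumPy and SciPy are available; the threshold values and the use of connected-component labelling to separate the individual reflective points are assumptions not spelled out in the description.

```python
import numpy as np
from scipy import ndimage

def locate_pupil_and_glints(eye_gray, pupil_thresh=40, glint_thresh=220):
    """Two-threshold sketch: the pupil centre is the centroid of the dark
    pixels, each reflective point the centroid of one bright connected blob.
    Threshold values and blob labelling are assumptions.
    """
    eye_gray = np.asarray(eye_gray)

    # pupil: dark region below the pupil threshold
    pupil_mask = eye_gray < pupil_thresh
    pupil_center = ndimage.center_of_mass(pupil_mask)          # (row, col)

    # reflective points: bright blobs above the glint threshold
    glint_mask = eye_gray > glint_thresh
    labels, n = ndimage.label(glint_mask)
    glints = ndimage.center_of_mass(glint_mask, labels, list(range(1, n + 1)))

    return pupil_center, glints
```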

Next, the vector processing module 206 in the computing unit 200 computes two distance values and two included angles from the pupil center point 41 and the two reflective points 42a and 42b. In detail, referring to FIG. 5b, the vector processing module 206 defines a first distance value and a second distance value as the distance from the pupil center point 41 to the reflective point 42a and the distance from the pupil center point 41 to the reflective point 42b, respectively, and defines a baseline through the pupil center point 41 (the vertical axis in this embodiment) which, together with the two spot vectors 43a and 43b, defines two included angles θ1 and θ2, where the spot vectors 43a and 43b are the vectors from the pupil center point 41 to the reflective points 42a and 42b, respectively. These form a first group consisting of the two distance values (the first distance value and the second distance value) and the two included angles (θ1 and θ2).
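
A small sketch of this computation for a single reflective point, assuming (row, column) image coordinates and an atan2-based angle convention with 0 degrees pointing straight up; the patent does not fix these conventions.

```python
import math

def spot_vector_features(pupil_center, glint):
    """Length of the spot vector from the pupil centre to one reflective
    point, and its signed angle to the vertical baseline through the pupil
    centre. Points are (row, col) image coordinates -- an assumed convention.
    """
    dy = glint[0] - pupil_center[0]            # image rows grow downwards
    dx = glint[1] - pupil_center[1]
    distance = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dx, -dy))  # 0 deg = straight up, signed
    return distance, angle

# first group for the two-light embodiment (reflective points 42a and 42b):
# d1, theta1 = spot_vector_features(pupil, glint_a)
# d2, theta2 = spot_vector_features(pupil, glint_b)
```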

It should be noted that when an angle value θ defined in the present invention is negative, it is converted into an angle between 0 and 180 degrees using the rule: if Angle < 0 then Angle = 180 + Angle. In addition, when computing the distance values, error correction for the case in which the user's head moves back and forth relative to the camera unit 300 can be considered: when the user's head is closer to the camera unit 300, the distance between the two reflective points 42a and 42b in the eye image is larger, and as the head moves away from the camera unit 300 that distance becomes smaller. To eliminate this error, the lengths of the vectors detected from the pupil center 41 to the reflective points 42a and 42b can each be divided by a normalization factor.
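
Both corrections can be expressed in a few lines; how the normalization factor is chosen (for example, the current spacing between the two reflective points) is an assumption, since the text only states that such a divisor is applied.

```python
import math

def normalize_features(distance, angle_deg, norm_factor):
    """Fold a negative angle into 0-180 degrees (if Angle < 0 then
    Angle = 180 + Angle) and divide the vector length by a normalization
    factor to compensate for the head-to-camera distance.
    """
    if angle_deg < 0:
        angle_deg = 180 + angle_deg
    return distance / norm_factor, angle_deg

# assumed choice of normalization factor: the current glint-to-glint spacing
# norm = math.hypot(glint_a[0] - glint_b[0], glint_a[1] - glint_b[1])
# d1n, theta1n = normalize_features(d1, theta1, norm)
```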

Finally, the coordinate conversion module 208 in the computing unit 200 uses a coordinate-system conversion method to convert the first group obtained by the vector processing module 206 into the first position on the output unit 500. In this embodiment, the coordinate conversion module 208 may use an affine transformation to convert the first group, consisting of the two distance values (the first distance value and the second distance value) and the two included angles (θ1 and θ2). The coordinate conversion may be carried out by feeding the first group into the angle-distance distribution maps in the storage unit 400. The way in which the training module 202 builds the angle-distance distribution maps is described in detail below.

For example, taking the case in which the output unit 500 is divided into 4×4 = 16 target regions, the training module 202 of the pupil positioning system 10 displays a first target region on the output unit 500 for the user 40 to view, then displays a second target region for the user to view, and so on until all 16 target regions have been displayed. As described above, the image analysis module 204 obtains 16 eye images 401 of the user 40 from these 16 fixations, so the vector processing module 206 can obtain the first to sixteenth training distance values defined by the distance from the pupil center point 41 to the reflective point 42a, together with the first to sixteenth included angles of the corresponding spot vector, thereby producing a first angle-distance distribution map (as shown in FIG. 6). Likewise, the vector processing module 206 can obtain the first to sixteenth training distance values defined by the distance from the pupil center point 41 to the reflective point 42b, together with the first to sixteenth included angles of the corresponding spot vector, thereby producing a second angle-distance distribution map (not shown).
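
The training result can be thought of as one lookup table per light source; the sketch below assumes the per-region measurements have already been collected into a dictionary, which is an implementation choice rather than something the patent prescribes.

```python
def build_angle_distance_maps(samples, light_count):
    """Collect the training measurements into one angle-distance map per
    light source. `samples` maps a region index (0..15 for the 4x4 grid) to
    the list of (distance, angle) pairs measured for that region, one pair
    per light source -- this dict-of-lists layout is an assumption.
    """
    maps = [dict() for _ in range(light_count)]
    for region_index, pairs in samples.items():
        for light_index in range(light_count):
            maps[light_index][region_index] = pairs[light_index]
    return maps

# two-light case: maps[0] holds the 16 entries for reflective point 42a,
# maps[1] the 16 entries for reflective point 42b.
```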

Accordingly, the coordinate conversion module 208 can use, for example but not limited to, a grouping-correspondence method or an interpolation-correspondence method: for each different gaze point of the user on the output unit 500, the two distance values (the first distance value and the second distance value) and the two included angles (θ1 and θ2) obtained by the vector processing module 206 are fed into the preset first and second angle-distance distribution maps to obtain the corresponding coordinate point 5, which yields the corresponding position on the output unit 500.
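
A simple grouping-correspondence lookup might be sketched as follows; scoring candidates by squared Euclidean distance in (distance, angle) space and omitting the interpolation variant are simplifications of what the description leaves open.

```python
def lookup_screen_region(measured, maps):
    """Grouping-correspondence lookup: return the trained region whose stored
    (distance, angle) entries are closest to the measured group. The squared
    Euclidean scoring is an assumption; interpolation is not shown.
    """
    best_region, best_score = None, float("inf")
    for region_index in maps[0]:
        score = 0.0
        for light_index, (d, a) in enumerate(measured):
            d_ref, a_ref = maps[light_index][region_index]
            score += (d - d_ref) ** 2 + (a - a_ref) ** 2
        if score < best_score:
            best_region, best_score = region_index, score
    return best_region

# e.g. region = lookup_screen_region([(d1n, theta1n), (d2n, theta2n)], maps)
```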

Furthermore, since this embodiment uses two illumination units 100 located on the diagonal of the output unit 500, and referring again to FIG. 5a, the coordinate conversion module 208 can convert the positions of the two reflective points 42a and 42b in the eye image into the positions of the corresponding illumination units 100 on the diagonal of the output unit 500, after which the pupil center point 41 is passed through the coordinate conversion module 208 to obtain the corresponding first position on the output unit 500.

Pupil positioning system based on the relative positions of four auxiliary lights

Please refer to FIG. 7, a schematic view of a pupil positioning system of the present invention based on the relative positions of four auxiliary lights. As can be seen from the figure, the illumination units 100 are disposed at the midpoints of the four sides of the output unit 500. When the eyes of the user 40 gaze at the output unit 500, the four illumination units 100 located at the midpoints of the four sides of the output unit 500 emit light onto the user's eyes, forming four reflective points on the eyeball, and the camera unit 300 captures the user's face to produce multiple consecutive images and obtain an eye image containing the pupil of the eyeball and the four reflective points.

The way in which the computing unit 200 obtains the eye image is the same as in the previous embodiment and is not repeated here; the resulting eye image is shown in FIG. 8a. Next, in the same manner as in the previous embodiment, the vector processing module 206 computes four distance values from the pupil center point 41 and the four reflective points 42c, 42d, 42e and 42f, as in FIG. 8b, and the four spot vectors 43c, 43d, 43e and 43f, together with the vertical axis through the pupil center point 41 used as the baseline, give four included angles (θ3, θ4, θ5 and θ6), forming a second group consisting of the four distance values and the four included angles.
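
Forming the second group simply repeats the two-light computation for four reflective points; a self-contained sketch under the same assumed coordinate and angle conventions as before:

```python
import math

def spot_vector_group(pupil_center, glints):
    """One (distance, angle-to-vertical-baseline) pair per reflective point,
    with negative angles folded into 0-180 degrees as described earlier.
    Coordinate and angle conventions are the same assumptions as before.
    """
    group = []
    for gy, gx in glints:
        dy, dx = gy - pupil_center[0], gx - pupil_center[1]
        angle = math.degrees(math.atan2(dx, -dy))
        if angle < 0:
            angle = 180 + angle
        group.append((math.hypot(dx, dy), angle))
    return group

# four reflective points 42c-42f give the four distances and theta3..theta6:
# group_2 = spot_vector_group(pupil, [glint_c, glint_d, glint_e, glint_f])
```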

Finally, the coordinate conversion module 208 in the computing unit 200 uses the coordinate-system conversion method to convert the second group obtained by the vector processing module 206 into a second position on the output unit 500. The conversion is the same as described in the previous embodiment and is not repeated here. It should be noted that because this embodiment uses four auxiliary lights as light sources, their reflections on the user's eye produce four reflective points; the pupil positioning system 10 can therefore use the training module 202, following the training procedure of the previous embodiment, to obtain four angle-distance distribution maps from the positions of the pupil and the four reflective points. In this embodiment the second group can thus be compared against the database formed by the four angle-distance distribution maps, allowing more precise positioning.

Furthermore, since this embodiment uses four illumination units 100 located at the midpoints of the four sides of the output unit 500, the coordinate conversion module 208 can use the four reflective points 42c, 42d, 42e and 42f in the eye image to convert the pupil center point 41 and the positions of the four reflective points into the corresponding second position on the output unit 500. With the reference positions of the four reflective points 42c-42f, the accuracy with which the pupil center point 41 is mapped to the second position on the output unit 500 can be effectively improved, and the calibration procedure at the user's first use can be omitted.

With the pupil positioning method, system and computer program product proposed in the embodiments of the present invention, precise pupil positioning can be provided and fast, accurate eye tracking can be achieved, enabling a wide variety of applications. For example, the pupil positioning of the embodiments can be applied to security products. Consider an input keypad composed of letters or digits in an access-control system: when the user looks at the keypad, the user's pupil is located by means of the embodiments of the present invention, and together with the depth values of the user captured by a depth-capture device, or by other means, the user's gaze direction can be deduced, so that the character the user wishes to enter is identified from the corresponding key on the keypad. The user can thus operate a lock in the access-control system, such as an eye-movement lock, simply by looking at the character to be entered on the keypad.

In summary, the pupil positioning method, system and computer program product of the present invention can use the relative positional relationship between the pupil and the plurality of reflective points to compute the position of the pupil. Embodiments of the invention can therefore detect the pupil position precisely, achieve accurate eye tracking, and support a wide variety of applications. Moreover, in several embodiments of the invention, by using the database, calibration can be made unnecessary.

The above are merely preferred embodiments of the present invention and are not intended to limit the scope of the invention; all simple equivalent changes and modifications made in accordance with the claims and the description of the invention remain within the scope of the present patent.

10‧‧‧pupil positioning system
40‧‧‧user
100‧‧‧plurality of illumination units
200‧‧‧computing unit
202‧‧‧training module
204‧‧‧image analysis module
206‧‧‧vector processing module
208‧‧‧coordinate conversion module
210‧‧‧eye search module
212‧‧‧eye state determination module
300‧‧‧camera unit
500‧‧‧output unit
501‧‧‧pointer

Claims (15)

1. A pupil positioning method based on the relative positions of a plurality of light sources, for locating the position on an output unit at which a user's eyeball is gazing, comprising: (a) directing light from a plurality of light sources onto the eyeball so as to form a plurality of reflective points on the eyeball; (b) using a camera unit to obtain an eye image that includes the pupil of the eyeball and the plurality of reflective points; (c) locating, by means of a computing unit, the pupil center point and the positions of the plurality of reflective points in the eye image; (d) defining, from the distances between the pupil center point and the reflective points, a first group of a plurality of spot vectors; and (e) converting the first group into a first position on the output unit.

2. The pupil positioning method of claim 1, wherein the plurality of light sources used in step (a) are two light sources disposed at diagonal corners of the output unit, the two light sources directing light onto the eyeball so as to form two corresponding reflective points.

3. The pupil positioning method of claim 1, wherein the plurality of light sources used in step (a) are four light sources disposed at the four corners of the output unit or at the midpoints of its four sides, so that the four light sources direct light onto the eyeball to form four corresponding reflective points.

4. The pupil positioning method of any one of claims 1 to 3, wherein in step (b) the camera unit locates the two nostril center points by searching a facial image, computes the spacing between the two nostril center points and determines a starting-point coordinate, computes a reference-point coordinate from the spacing and the starting-point coordinate, defines a rectangular frame from the reference-point coordinate, and extracts the eye image from the facial image along the rectangular frame.

5. The pupil positioning method of any one of claims 1 to 3, wherein step (d) includes taking the horizontal or vertical axis of the pupil as a baseline segment and, together with each spot vector, defining an angle value.

6. The pupil positioning method of any one of claims 1 to 3, wherein in step (e) the computing unit is connected to a storage unit, and the storage unit contains a number of angle-distance distribution maps equal to the number of light sources, so that the first group is fed into the angle-distance distribution maps and converted into the first position.

7. The pupil positioning method of claim 6, wherein the plurality of angle-distance distribution maps are produced by a training module in the computing unit, which divides the output unit in advance into several groups of regions and displays only one group of regions at a time, the training module performing the following steps: each time a group of regions is displayed for the user to view, controlling the image analysis module to perform localization and controlling the vector processing module to obtain the distance-angle data of each group of regions.

8. The pupil positioning method of claim 7, wherein step (c) is performed by an image analysis module in the computing unit, step (d) is performed by a vector processing module in the computing unit, and step (e) is performed by a coordinate conversion module in the computing unit.

9. A pupil positioning system based on the relative positions of a plurality of light sources, comprising: a plurality of light sources emitting beams onto a user's eye to form a corresponding plurality of reflective points; a camera unit capturing an image of the user's eye; and a computing unit locating a pupil center point and the positions of the plurality of reflective points in the eye image, wherein the computing unit defines a plurality of spot vectors from the distances between the pupil center point and the reflective points and converts the plurality of spot vectors into a first position on an output unit.

10. The pupil positioning system of claim 8, wherein the computing unit takes the horizontal or vertical axis through the pupil center point located by the image analysis module as a baseline segment and, together with the plurality of spot vectors, defines a plurality of angle values.

11. The pupil positioning system of claim 8, further comprising a storage unit connected to the computing unit, the storage unit containing a number of angle-distance distribution maps equal to the number of light sources.

12. The pupil positioning system of claim 9 or 10, further comprising an output unit adjacent to the camera unit, the output unit displaying a pointer corresponding to the pupil gaze position.

13. The pupil positioning system of claim 11, wherein the plurality of light sources are disposed at diagonal corners of the output unit to direct light onto the eye so as to form two corresponding reflective points.

14. The pupil positioning system of claim 11, wherein the plurality of light sources are four light sources disposed at the four corners of the output unit or at the midpoints of its four sides, to direct light onto the eye so as to form four corresponding reflective points.

15. A non-transitory computer program product for pupil positioning which, when loaded into a computer and executed, carries out the method of any one of claims 1 to 8.
TW103105624A 2014-02-20 2014-02-20 Method for pupil localization based on a corresponding position of auxiliary light, system and computer product thereof TW201533609A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW103105624A TW201533609A (en) 2014-02-20 2014-02-20 Method for pupil localization based on a corresponding position of auxiliary light, system and computer product thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW103105624A TW201533609A (en) 2014-02-20 2014-02-20 Method for pupil localization based on a corresponding position of auxiliary light, system and computer product thereof

Publications (1)

Publication Number Publication Date
TW201533609A true TW201533609A (en) 2015-09-01

Family

ID=54694794

Family Applications (1)

Application Number Title Priority Date Filing Date
TW103105624A TW201533609A (en) 2014-02-20 2014-02-20 Method for pupil localization based on a corresponding position of auxiliary light, system and computer product thereof

Country Status (1)

Country Link
TW (1) TW201533609A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI557601B (en) * 2015-10-30 2016-11-11 由田新技股份有限公司 A puppil positioning system, method, computer program product and computer readable recording medium
TWI692729B (en) * 2017-12-27 2020-05-01 大陸商北京七鑫易維信息技術有限公司 Method and device for determining pupil position
TWI739149B (en) * 2019-08-26 2021-09-11 大陸商業成科技(成都)有限公司 Device for tracking an electronic whiteboard through an eyeball

Similar Documents

Publication Publication Date Title
CN107004275B (en) Method and system for determining spatial coordinates of a 3D reconstruction of at least a part of a physical object
US11829523B2 (en) Systems and methods for anatomy-constrained gaze estimation
TWI704501B (en) Electronic apparatus operated by head movement and operation method thereof
US11294455B2 (en) Method and device for determining gaze placement, computer readable storage medium
Funes-Mora et al. Gaze estimation in the 3D space using RGB-D sensors: towards head-pose and user invariance
US20150029322A1 (en) Method and computations for calculating an optical axis vector of an imaged eye
JP6631951B2 (en) Eye gaze detection device and eye gaze detection method
JP2016173313A (en) Visual line direction estimation system, visual line direction estimation method and visual line direction estimation program
JP7030317B2 (en) Pupil detection device and pupil detection method
  • KR20140126630A (en) Method for tracking user's gaze position using mobile terminal and apparatus thereof
Lu et al. Estimating 3D gaze directions using unlabeled eye images via synthetic iris appearance fitting
JP6870474B2 (en) Gaze detection computer program, gaze detection device and gaze detection method
US20200364441A1 (en) Image acquisition system for off-axis eye images
JP2017194301A (en) Face shape measuring device and method
TWI557601B (en) A puppil positioning system, method, computer program product and computer readable recording medium
JP2018205819A (en) Gazing position detection computer program, gazing position detection device, and gazing position detection method
CN110313006A (en) A kind of facial image detection method and terminal device
JP2021077265A (en) Line-of-sight detection method, line-of-sight detection device, and control program
Lander et al. hEYEbrid: A hybrid approach for mobile calibration-free gaze estimation
US11435820B1 (en) Gaze detection pipeline in an artificial reality system
TW201533609A (en) Method for pupil localization based on a corresponding position of auxiliary light, system and computer product thereof
JP6288770B2 (en) Face detection method, face detection system, and face detection program
TWI761930B (en) Head mounted display apparatus and distance measurement device thereof
Nitschke Image-based eye pose and reflection analysis for advanced interaction techniques and scene understanding
JP2016111612A (en) Content display device